Orienting to sensory stimuli in real-world situations

If you look at sensory research, you will see that most lab-based experiments study modality-specific stimuli (e.g., visual, auditory, tactile…). However, if we examine real-world situations from a phenomenological point of view, the world does not seem to be divided into modalities such as audition, vision, touch, or smell; we experience it as a unit.

Imagine you’re having a conversation with your friends in your living room. Each of them occupies a position in the physical world (on your sofa), and each friend’s voice and facial movements seem to occur together as a single whole. Phenomenologically, the world appears not as separate streams of audition, vision, touch, and smell, but as discrete objects, each occupying its own position in the world, which we perceive as being outside of us.

Nonetheless, sensory physiology tells us that visual input is processed by different systems, and in different places in the brain, than auditory input. In other words, my brain processes my friends’ faces (visual) differently from their voices (auditory), even though both types of stimuli come from the same source. So, when information arrives through different sensory modalities (i.e., when it is multisensory), it travels along different neural routes in my brain and must then be integrated across modalities. That is how I am able to perceive each friend as a unit. Isn’t that incredible? When I talk with my friends, I recognize their faces, movements, and voices, and as I see a face moving and speaking, I hear a voice and attribute it to the friend who is speaking, because the audio and the video are coherent and match (I will go deeper into this topic in a near-future post).

I would like to finish by noting that in some multisensory studies, people respond faster when they know in advance which sensory modality the stimulus will arrive through (visual, auditory, tactile…) than when they know where in space it will occur (right, left…). So, let’s say my friends and I start playing Pictionary. Imagine the opposite team lines up in front of us and we don’t know which of them will play in each round. Given these findings, I should be faster at spotting and looking at the friend who is performing if I know beforehand whether that person will be singing or miming, rather than knowing in advance who (and therefore where in space) that person will be. We could all try testing this by running the experiment at home with our own people under the COVID-19 situation.
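For the curious, here is a toy sketch of how one might approximate the modality-cue versus location-cue comparison as a console script. Everything in it is my own illustrative assumption, not code from any published study: words versus numbers stand in for two “modalities”, left versus right text padding stands in for spatial location, and reaction time is just the time to press Enter.

```python
import random
import time

# Toy cueing experiment: on each trial the participant is cued either to the
# MODALITY of the upcoming stimulus (word vs. number, standing in for two
# sensory modalities) or to its LOCATION (left vs. right side of the screen).
# Reaction time to press Enter is recorded per cue type.

TRIALS = 10

def run_trial(cue_type):
    modality = random.choice(["word", "number"])
    location = random.choice(["left", "right"])
    cue = modality if cue_type == "modality" else location
    print(f"\nCue ({cue_type}): {cue}")
    time.sleep(1.0)  # fixed cue-to-target interval
    stimulus = "HELLO" if modality == "word" else "42"
    pad = "" if location == "left" else " " * 40  # crude spatial placement
    start = time.monotonic()
    input(f"{pad}{stimulus}   <press Enter as fast as you can>")
    return time.monotonic() - start

results = {"modality": [], "location": []}
for _ in range(TRIALS):
    cue_type = random.choice(["modality", "location"])
    results[cue_type].append(run_trial(cue_type))

for cue_type, rts in results.items():
    if rts:
        print(f"{cue_type} cue: mean RT = {sum(rts)/len(rts):.3f} s "
              f"over {len(rts)} trials")
```

If the cueing findings generalize to this crude setup, mean reaction times should tend to be shorter on modality-cued trials than on location-cued trials, though a console script like this is far too noisy to be conclusive.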

Sources

Feature image from Pexels – CC0 license.
