It goes on like this, the visual-processing areas spreading out further into the brain, and the further they spread from the primary visual cortex, the more specific they get about what they process. The processing even crosses over into other lobes: the parietal lobe contains areas that handle spatial awareness, while the inferior temporal lobe handles recognition of specific objects and (going back to the start) faces. We have parts of the brain dedicated to recognizing faces, so we see them everywhere. Even if they're not there, because it's just a piece of toast.
These are just some of the impressive facets of the visual system. But perhaps the one that is most fundamental is the fact that we can see in three dimensions, or “3D” as the kids are calling it. It's a big ask, because the brain has to create a rich 3D impression of the environment from a patchy 2D image. The retina itself is technically a “flat” surface, so it can't support 3D images any more than a blackboard can. Luckily, the brain has a few tricks to get around this.
Firstly, having two eyes helps. They may be close together on the face, but they're far enough apart to supply subtly different images to the brain, and the brain uses this difference to work out depth and distance in the final image we end up perceiving.
The brain doesn't just rely on the parallax resulting from ocular disparity (that's the technical way of saying what I just said), though, as this requires two eyes working in unison. Yet when you close or cover one eye, the world doesn't instantly convert to a flat image. This is because the brain can also use aspects of the image delivered by a single retina to work out depth and distance: things like occlusion (objects covering other objects), texture (fine details in a surface are visible if it's close but not if it's far away), convergence (parallel lines appear to draw together with distance; imagine a long road receding to a single point) and more. While having two eyes is the most effective way to work out depth, the brain can get by fine with just one, and can even keep performing tasks that involve fine manipulation. I once knew a successful dentist who could see out of only one eye; if you can't manage depth perception, you don't last long in that job.
These visual-system methods of recognizing depth are exploited by 3D films. When you look at a movie screen you can see a degree of depth, because all the cues discussed above are present. But to a certain extent you're still aware that you're looking at images on a flat screen, because that is the case. A 3D film, however, is essentially two slightly different streams of images projected on top of each other. The lenses of 3D glasses each filter out one of these streams, a different one for each eye, so each eye receives a subtly different image. The brain recognizes this difference as depth, and suddenly images on the screen leap out at us and we have to pay double the price for a ticket.
Such is the complexity and density of visual-system processing that there are many ways it can be fooled. The Jesus-in-a-piece-of-toast phenomenon occurs because a temporal-cortex region of the visual system is responsible for recognizing and processing faces, so anything that looks even a bit like a face will be perceived as one. The memory system can chip in and say whether it's a familiar face or not, too. Another common illusion makes two things that are exactly the same color look different when placed on different backgrounds. This one can be traced to the secondary visual cortex getting confused.
Other visual illusions are more subtle. The classic “is it two faces looking at each other, or actually a candlestick?” image is possibly the most familiar. It presents two possible interpretations; both are “correct,” but they're mutually exclusive. The brain really doesn't handle ambiguity well, so it effectively imposes order on what it's receiving by picking one interpretation. But it can change its mind, too, as there are two solutions.
All this barely scratches the surface. It's not really possible to convey the true complexity and sophistication of the visual-processing system in a few pages, but I felt it worth the attempt, because vision is a hugely complex neurological process that underpins so much of our lives, and most people think nothing of it until it starts going awry. Consider this section just the tip of the iceberg of the brain's visual system; there's a vast amount more in the depths below. And you can perceive such depths only because the visual system is as complex as it is.
Why your ears are burning
(Strengths and weaknesses of human attention, and why you can't help eavesdropping)
Our senses provide copious information but the brain, despite its best efforts, cannot deal with all of it. And why should it? How much is actually relevant? The brain is an incredibly demanding organ in terms of resources, and using it to focus intently on a patch of drying paint would just squander them. The brain has to pick and choose what gets noticed. As such, the brain is able to direct perception and conscious processing to things of potential interest. This is attention, and how we use it plays a big role in what we observe of the world around us. Or, often more importantly, what we fail to observe.
For the study of attention, there are two important questions. One is, what's the brain's capacity for attention? How much can it realistically take in before it gets overwhelmed?
The other is, what determines where attention is directed? If the brain is constantly being bombarded with sensory information, what is it about certain stimuli or inputs that prioritizes them over others?
Let's start with capacity. Most people have noticed attention has a limited capacity. You've probably experienced a group of people all trying to talk to you at once, “clamoring for attention.” This is frustrating, usually resulting in loss of patience and shouts of, “One at a time!”
Early experiments, such as those by Colin Cherry in 1953,¹⁰ suggested attention capacity was alarmingly limited, demonstrated via a technique called “dichotic listening.” Subjects wear headphones and receive a different audio stream (typically, a sequence of words) in each ear. They are told to repeat the words received in one ear, and are then asked what they can recall from the other ear. Most can identify whether the voice was male or female, but that's it; not even what language was spoken. The conclusion was that attention has such a limited capacity, it can't be stretched beyond a single audio stream.
These and similar findings resulted in “bottleneck” models of attention, which argued that all the sensory information that is presented to the brain is filtered through the narrow space offered by attention. Think of a telescope: it provides a very detailed image of a small part of the landscape or sky. But, beyond that, there's nothing.
Later experiments changed things. Von Wright and his colleagues in 1975 conditioned subjects to expect a shock when they heard certain words, then had them do the dichotic-listening task. The stream in the other ear, not the focus of attention, featured the shock-provoking words. Subjects still showed a measurable fear reaction when the words were heard, revealing that the brain was clearly paying attention to the “other” stream. But the words didn't reach the level of conscious processing, so the subjects weren't aware of them. Bottleneck models break down in the face of data like this, which shows people can still recognize and process things “outside” the supposed boundaries of attention.
This can be demonstrated in less clinical surroundings. The title of this section refers to the saying that someone's “ears are burning,” usually meaning they've overheard others talking about them. It happens often, particularly at social occasions such as wedding receptions, farewell parties or sporting events, where a lot of people are gathered in various groups, all talking at once. At some point, you'll be having a perfectly enjoyable conversation about your mutual interests (football, baking, celery, whatever), when someone within earshot says your name. They aren't part of your current group; maybe you didn't even know they were there. But they said your name, perhaps followed by the words, “is a tremendous waste of skin,” and suddenly you're paying attention to their conversation rather than the one you're having, wondering why you ever asked that person to be your best man.
If attention was as limited as the bottleneck models suggest, then this should be impossible. But, clearly, it isn't. This occurrence is known as “the cocktail-party effect,” because professional psychologists are a refined bunch.
The limitations of the bottleneck model led to the formation of the capacity model, typically attributed to work by Daniel Kahneman in 1973,¹¹ but expounded on by many since. Whereas bottleneck models argued that there is one “stream” of attention that hops about like a spotlight depending on where it's needed, the capacity model argues that attention is more like a finite resource that can be divided between multiple streams (focuses of attention) so long as those resources are not exhausted.
Both models explain why multitasking is so difficult. With bottleneck models, you have one single stream of attention that keeps leaping between different tasks, making it very difficult to keep track of any of them. The capacity model would allow you to pay attention to more than one thing at a time, but only as far as you have the resources to process them effectively; as soon as you exceed your capacity, you lose the ability to keep track of what's going on. And in many scenarios the resources are limited enough to make it look as if a “single” stream is all we've got.
But why this limited capacity? One explanation is that attention is strongly associated with working memory, what we use to store the information we're consciously processing. Attention provides the information to be processed, so if working memory is already “full,” adding more information is going to be difficult, if not impossible. And we know working (short-term) memory has a limited capacity.
This is often sufficient for your typical human, but context is crucial. Many studies focus on how attention is used while driving, where a lapse in attention can have serious consequences. In many states, driving while physically using a phone is not allowed; you have to use a hands-free set-up and keep both hands on the wheel. But a study from the University of Utah in 2013 revealed that, in terms of how it affects performance, using a hands-free set-up is just as bad as using the phone with your hands, because both require a similar amount of attention.¹²
Having two hands on the wheel rather than one may provide some advantage, but the study measured overall speed of response, scanning of the environment, noticing of important cues, and more; all of these were reduced to a similarly worrying extent whether the phone was hands-free or not, because they require similar levels of attention. You may well be keeping your eyes on the road, but that's irrelevant if you're ignoring what your eyes are showing you.
Even more worrying, the data suggests it's not just the phone: changing the radio or carrying on a conversation with a passenger can be just as distracting. With the increasing amount of technology found in cars and on phones (it's technically not illegal at present to check your emails while driving), the options for distraction are bound to increase.
With all this, you may wonder how anyone can drive for more than ten minutes straight without ending up in a disastrous wreck. It's because we're talking about conscious attention, which is where the capacity is limited. As we've discussed, do something often enough and the brain adapts to it, handing the task over to procedural memory, described in Chapter 2. People say they can do something “without thinking,” and that's quite accurate here. Driving can be an anxiety-inducing, overwhelming experience for beginners, but eventually it becomes so familiar that the unconscious systems take over, and conscious attention can be applied elsewhere. However, driving is not something that can be done entirely without thinking; taking account of all the other road users and hazards needs conscious awareness, as these are different every time.
Neurologically, attention is supported by many regions, one of which is that repeat offender the prefrontal cortex, which makes sense, as that's where working memory is processed. Also implicated is the anterior cingulate gyrus, part of a large and complex region deep in the brain that extends back toward the parietal lobe, where a lot of sensory information is processed and linked to higher functions such as consciousness.
But the attention-controlling systems are quite diffuse, and this has consequences. In Chapter 1, we saw how the more advanced conscious parts of the brain and the more primitive “reptile” elements often end up getting in each other's way. The attention-controlling systems are similar: better organized, perhaps, but still a familiar combination (and conflict) of conscious and subconscious processing.
For example, attention is directed by exogenous and endogenous cues. Or, in plain English, it has both bottom-up and top-down control systems. Or, even more simply, our attention responds to stuff that happens either outside our head or inside it. Both of these are demonstrated by the cocktail-party effect, where we direct our attention to specific sounds, also known as “selective listening.” The sound of your name suddenly causes your attention to shift to it. You didn't know it was coming; you weren't consciously aware of it until it had happened. But, once aware of it, you direct your attention to the source, excluding everything else. An external sound diverted your attention, demonstrating a bottom-up process; your conscious desire to hear more then keeps it there, demonstrating a top-down process originating in the conscious brain.
However, most attention research focuses on the visual system. We can and do physically point our eyes at the subject of attention, and the brain relies mostly on visual data. It's an obvious target for research, and this research has produced a lot of information about how attention works.