Scientists measure this sensitivity by simply prodding someone with a two-pronged instrument and seeing how close together these prongs can be and still be recognized as separate pressure points.6
The fingertips are especially sensitive, which is why braille was developed. However, there are some limitations: braille is a series of separate specific bumps because the fingertips aren't sensitive enough to recognize the letters of the alphabet when they're text-sized.7
Like hearing, the sense of touch can also be “fooled.” Part of our ability to identify things with touch comes from the brain being aware of the arrangement of our fingers, so if you touch something small (for instance, a marble) with your index and middle finger, you'll feel just the one object. But if you cross your fingers and close your eyes, it feels more like two separate objects. There's been no direct communication between the touch-processing somatosensory cortex and the finger-moving motor cortex to flag up this point, and the eyes are closed so aren't able to provide any information to override the inaccurate conclusion of the brain. This is the Aristotle illusion.
So there are more overlaps between touch and hearing than is immediately apparent, and recent studies have found evidence that the link between the two may be far more fundamental than previously thought. While we've always understood that certain genes were strongly linked to hearing abilities and increased risk of deafness, a 2012 study by Henning Frenzel and his team8
discovered that genes also influence touch sensitivity and, interestingly, that those with very sensitive hearing showed a finer sense of touch too. Similarly, those with genes that resulted in poor hearing also had a much higher likelihood of showing poor touch sensitivity. A mutated gene was also discovered that resulted in both impaired hearing and impaired touch.
While there is still more work to be done in this area, this does strongly suggest that the human brain uses similar mechanisms to process both hearing and touch, so deep-seated issues that affect one can end up affecting the other. This is perhaps not the most logical arrangement, but it's reasonably consistent with the taste-smell interaction we saw in the previous section. The brain does tend to group our senses together more often than seems practical. But on the other hand, it does suggest people can “feel the rhythm” in a more literal manner than is generally assumed.
Jesus has returned . . . as a piece of toast?
(What you didn't know about the visual system)
What do toast, tacos, pizza, ice-cream, jars of spread, bananas, pretzels, potato chips and nachos have in common? The image of Jesus has been found in all of them (seriously, look it up). It's not always food though; Jesus often pops up in varnished wooden items. And it's not always Jesus; sometimes it's the Virgin Mary. Or Elvis Presley.
What's actually happening is that there are uncountable billions of objects in the world that have random patterns of color or patches that are either light or dark, and by sheer chance these patterns sometimes resemble a well-known image or face. And if the face is that of a celebrated figure with metaphysical properties (Elvis falls into this category for many)
then the image will have more resonance and get a lot of attention.
The weird part (scientifically speaking) is that even those who are aware that it's just a grilled snack and not the bread-based rebirth of the Messiah can still see
it. Everyone can still recognize what is said to be there, even if they dispute the origins of it.
The human brain prioritizes vision over all other senses, and the visual system boasts an impressive array of oddities. As with the other senses, the idea that the eyes capture everything about our outside world and relay this information intact to the brain like two worryingly squishy video cameras is a far cry from how things really work.‡
Many neuroscientists argue that the retina is
part of the brain, as it develops from the same tissue and is directly linked to it. The eyes take in light through the pupils and lenses at the front, which lands on the retina at the back. The retina is a complex layer of photoreceptors, specialized neurons for detecting light, some of which can be activated by as little as half-a-dozen photons (the individual “bits” of light). This is very impressive sensitivity, like a bank security system being triggered because someone had a thought about robbing the place. The photoreceptors that demonstrate such sensitivity are used primarily for seeing contrasts, light and dark, and are known as rods. These work in low-light conditions, such as at night. Bright daylight
actually oversaturates them, rendering them useless; it's like trying to pour a gallon of water into an egg cup. The other (daylight-friendly) photoreceptors detect photons of certain wavelengths, which is how we perceive color. These are known as cones, and they give us a far more detailed view of the environment, but they require a lot more light to be activated, which is why we don't see colors at low light levels.
Photoreceptors aren't spread uniformly across the retina. Some areas have different concentrations from others. We have one area in the center of the retina that recognizes fine detail, while much of the periphery gives only blurry outlines. This is due to the concentrations and connections of the photoreceptor types in these areas. Each photoreceptor is connected to other cells (a bipolar cell and a ganglion cell usually), which transmit the information from the photoreceptors to the brain. Each photoreceptor is part of a receptive field (which is made up of all the receptors connected to the same transmission cells) that covers a specific part of the retina. Think of it like a cell-phone tower, which receives all the different information relayed from the phones within its coverage range and processes them. The bipolar and ganglion cells are the tower, the receptors are the phones; thus there is a specific receptive field. If light hits this field it will activate a specific bipolar or ganglion cell via the photoreceptors attached to it, and the brain recognizes this.
In the periphery of the retina, the receptive fields can be quite big, like a golf umbrella canvas around the central shaft. But this means precision suffers: it's difficult to work out where a raindrop is falling on a golf umbrella; you just know it's there. Luckily, towards the center of the retina, the receptive fields are small and dense enough to provide sharp
and precise images, enough for us to be able to see very fine details like small print.
Bizarrely, only one part of the retina is able to recognize this fine detail. It is named the fovea, in the dead center of the retina, and it makes up less than 1 percent of the total retina. If the retina were a widescreen TV, the fovea would be a thumbprint in the middle. The rest of the eye gives us more blurry outlines, vague shapes and colors.
You may think this makes no sense, because surely people see the world crisp and clear, give or take the odd cataract? The arrangement described here sounds more like looking through the wrong end of a telescope made of Vaseline. But, worryingly, that is what we “see,” in the purest sense. It's just that the brain does a sterling job of cleaning this image up before we consciously perceive it. The most convincing Photoshopped image is little more than a crude sketch in yellow crayon compared to the polishing the brain does with our visual information. But how does it do this?
The eyes move around a lot, and much of this is due to the fovea being pointed at various things in our environment that we need to look at. In the old days, experiments tracking eyeball movements used specialized metal contact lenses. Just let that sink in, and appreciate how committed some people are to science.§
Essentially, whatever we're looking at, the fovea scans as much of it as possible, as quickly as possible. Think of a spotlight aimed at a football field operated by someone in the
middle of a near-lethal caffeine overdose, and you're sort of there. The visual information obtained via this process, coupled with the less-detailed but still-usable image of the rest of the retina, is enough for the brain to do some serious polishing and make a few “educated guesses” about what things look like, and we see what we see.
This seems a very inefficient system, relying on such a small area of retina to do so much. But considering how much of the brain is required to process this much visual information, even doubling the size of the fovea so it's more than 1 percent of the retina would require an increase in brain matter for visual processing to the point where our brains could end up the size of basketballs.
But what of this processing? How does the brain render such detailed perception from such crude information? Well, photoreceptors convert light information to neuronal signals which are sent to the brain along the optic nerves (one from each eye).¶
The optic nerve relays visual information to several parts of the brain. Initially, the visual information is sent to the thalamus, the old central station of the brain, and from there it's spread far and wide. Some of it ends up in the brainstem, either in a spot called the pretectum, which dilates or constricts the pupils in response to light intensity, or in the superior colliculus, which controls movement of the eyes in short jumps called saccades.
If you concentrate on how your eyes move when you look from right to left or vice versa, you will notice that they don't move in one smooth sweep but a series of short jerks (do it slowly to appreciate this properly). These movements are saccades, and they allow the brain to perceive a continuous image by piecing together a rapid series of “still” images, which is what appears on the retina between each jerk. Technically, we don't actually “see” much of what's happening between each jerk, but it's so quick we don't really notice, like the gap between the frames of an animation. (The saccade is one of the quickest movements the human body can make, along with blinking and closing a laptop as your mother walks into your bedroom unexpectedly.)
We experience the jerky saccades whenever we move our eyes from one object to another, but if we're visually following something in motion our eye movement is as smooth as a waxed bowling ball. This makes evolutionary sense; if you're tracking a moving object in nature it's usually prey or a threat, so you'd need to keep focused on it constantly. But we can do it only when there's something moving that we can track. Once this object leaves our field of vision, our eyes jerk right back to where they were via saccades, a process termed the optokinetic reflex. Overall, it means the brain can move our eyes smoothly; it just often doesn't.
But why when we move our eyes do we not perceive the world around us as moving? After all, it all looks the same as far as images on the retina are concerned. Luckily, the brain has a quite ingenious system for dealing with this issue. The eye muscles receive regular inputs from the balance and motion systems in our ears, and use these to differentiate between eye motion and motion in or of the world around
us. It means we can also maintain focus on an object when we're in motion. It's a system that can be confused though, as the motion-detection systems can sometimes end up sending signals to the eyes when we're not moving, resulting in involuntary eye movements called nystagmus. Health professionals look out for these when assessing the health of the visual system, because when your eyes are twitching for no reason, that's not great. It's suggestive of something having gone awry in the fundamental systems that control your eyes. Nystagmus is to doctors and optometrists what a rattling in the engine is to a mechanic; it might be something fairly harmless, or it might not, but either way it's not meant to be happening.
This is what your brain does just to work out where to point the eyes. We haven't even started on how the visual information is processed.
Visual information is mostly relayed to the visual cortex in the occipital lobe, at the back of the brain. Have you ever experienced the phenomenon of hitting your head and “seeing stars”? One explanation for this is that impact causes your brain to rattle around in your skull like a hideous bluebottle trapped in an egg cup, so the back of your brain bounces off your skull. This causes pressure and trauma to the visual processing areas, briefly scrambling them, and as a result we see sudden weird colors and images resembling stars, for want of a better description.
The visual cortex itself is divided into several different layers, which are themselves often subdivided into further layers.
The primary visual cortex, the first place the information from the eyes arrives in, is arranged in neat “columns,” like sliced bread. These columns are very sensitive to orientation, meaning they respond only to the sight of lines of a certain orientation. In practical terms, this means we recognize edges. The importance of this can't be overstressed: edges mean boundaries, which means we can recognize individual objects and focus on them, rather than on the uniform surface that makes up much of their form. And it means we can track their movements as different columns fire in response to changes. We can recognize individual objects and their movement, and dodge an oncoming soccer ball, rather than just wonder why the white blob is getting bigger. The discovery of this orientation sensitivity is so integral that David Hubel and Torsten Wiesel, who discovered it, were awarded a Nobel Prize in 1981.9
The secondary visual cortex is responsible for recognizing colors, and is extra impressive because it can work out color constancy. A red object in bright light will look, on the retina, very different from a red object in dim light, but the secondary visual cortex can seemingly take the amount of light into account and work out what color the object is “meant” to be. This is great, but it's not 100 percent reliable. If you've ever argued with someone over what color something is (such as whether a car is dark blue or black), you've experienced firsthand what happens when the secondary visual cortex gets confused.