However, these global and local changes are to some degree contradictory. To see this, consider a simple model of the spread of electrical and chemical signals within neurons: cable theory. In class we studied electrical cable theory, which describes the combined effect of membrane capacitance, membrane resistance, and cytoplasmic resistance on the spread of voltage. As long as a synapse is reasonably close to the cell body (within about one space constant, λ_V ~ 1 mm), it can influence firing, albeit with a delay.
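For reference, here is the passive cable equation in standard textbook form, with the space constant written in terms of the underlying resistances (the particular numbers below are illustrative textbook values, not taken from the post):

\[
\tau_m \frac{\partial V}{\partial t} = \lambda_V^2 \frac{\partial^2 V}{\partial x^2} - V,
\qquad
\lambda_V = \sqrt{\frac{r_m}{r_i}} = \sqrt{\frac{R_m d}{4 R_i}},
\qquad
\tau_m = R_m C_m
\]

Here r_m and r_i are the membrane and axial resistances per unit length of cable, R_m is the specific membrane resistance, R_i the cytoplasmic resistivity, C_m the specific membrane capacitance, and d the dendrite diameter. With, say, R_m ≈ 20,000 Ω·cm², R_i ≈ 200 Ω·cm, and d ≈ 4 µm, λ_V = √(20,000 × 4×10⁻⁴ / 800) cm = 0.1 cm ≈ 1 mm. Note that capacitance affects only the delay (through τ_m), not the spatial reach.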
But one can also write a cable equation for chemical spread. Here membrane capacitance is replaced by the ability of intracellular calcium-binding molecules to "buffer" rapid calcium changes. Membrane resistance corresponds to calcium pumps (e.g., in the spine neck), which extrude or degrade calcium and other chemicals. Cytoplasmic resistance corresponds to intracellular diffusion, typically ~1 µm²/ms.
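Written out, the chemical analogue is a reaction-diffusion equation (my sketch of the analogy, with τ_pump denoting the effective clearance time set by the pumps):

\[
\frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2} - \frac{c}{\tau_{\mathrm{pump}}},
\qquad
\lambda_C = \sqrt{D\,\tau_{\mathrm{pump}}}
\]

where c is the free calcium concentration and D its diffusion coefficient. As in the electrical case, the "capacitive" term (buffering) rescales the time course but drops out of the steady-state length scale: a fast buffer divides the effective diffusion coefficient and multiplies the effective clearance time by the same factor, leaving λ_C unchanged.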
Putting in reasonable numbers for these parameters gives a chemical space constant λ_C ~ 1 µm. However, the distance from synapses to the cell body is ~1 mm, and the distance between neighboring spine heads is ~1 µm. So it looks as though the brain cannot work well: electrical integration requires synapse-to-soma distances no larger than λ_V, while synapse-specific chemical isolation requires spine spacings no smaller than λ_C, and both ratios sit right at the margin, since 1 mm/1 mm ~ 1 µm/1 µm ~ 1.
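A quick numerical check of both length scales (all parameter values here are my assumptions, chosen to be typical and to reproduce the estimates above):

```python
import math

# Electrical space constant: lambda_V = sqrt(R_m * d / (4 * R_i))
R_m = 20e3    # specific membrane resistance, ohm*cm^2 (assumed)
R_i = 200.0   # cytoplasmic resistivity, ohm*cm (assumed)
d = 4e-4      # dendrite diameter, cm (4 um, assumed)
lambda_V_um = math.sqrt(R_m * d / (4 * R_i)) * 1e4  # cm -> um
print(f"lambda_V ~ {lambda_V_um:.0f} um")           # ~1000 um = 1 mm

# Chemical space constant: lambda_C = sqrt(D * tau_pump)
D = 1.0       # diffusion coefficient, um^2/ms (from the post)
tau = 1.0     # effective clearance time, ms (assumed)
lambda_C_um = math.sqrt(D * tau)
print(f"lambda_C ~ {lambda_C_um:.0f} um")           # ~1 um

# The dilemma: both critical ratios sit at ~1
synapse_to_soma_um = 1000.0  # typical synapse-to-soma distance
spine_spacing_um = 1.0       # typical spacing between spine heads
print(synapse_to_soma_um / lambda_V_um)  # ~1: electrical reach is marginal
print(spine_spacing_um / lambda_C_um)    # ~1: chemical isolation is marginal
```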
Of course one can try to squeeze synapses closer to the initial segment (IS), but this just decreases the spacing between them, worsening the chemical isolation on which learning (or self-programming) hinges.
The only real way this can work is to decrease the number of inputs, which of course greatly lowers computational power.
One can fiddle at the margins of this dilemma, but I suspect it means that most animals have to rely on instinct, i.e., on the computational power of Darwinian evolution. Since humans have language, we can "program" each other (though most programs still have to be discovered by individuals). From this perspective, language (and limitless symbol use generally), rather than big brains, would be the key to human success.