Thursday, December 17, 2015

Wrap-Up

A student asked the following question, which raises the most fundamental idea we've studied in the course, that of a Hebb synapse:

"I was going over the material and I am confused about one thing: High frequency AP's cause significant amount of glutamate release, binding to AMPA --> significant Na+ entry --> NMDAR release of Mg2+ --> --> --> LTP ... but further studies find "silent synapses" deficient in AMPA receptors do not respond to pre-synaptic activity (release of glutamate), and LTP is necessary for appearance of AMPARs. If entry of sodium via AMPARs cause LTP by causing changes in NMDAR, allowing Ca2+ entry, how can LTP occur in silent synapses? Are all synapses initially silent? or is absence of AMPA a property of only a specific subset of neurons? Does this have anything to do with backpropagating spikes?"

I answered as follows:

"Yes it does have to do with backAPs. The key point is that ltp is triggered not by local depolarisation (at the synapse itself, eg caused by locally released glu acting on  ampaRs) but by the occurrence of a spike (starting at the beginning of the postsynaptic axon, the initial segment) which backpropagates into the dendrites and thus reaches all the synapses.
This answers your dilemma: at a silent synapse (most synapses start silent) the release of glu does not cause significant local depolarization, but nevertheless the neuron might fire a spike (as a result of near-simultaneous glu release at other, non-silent, synapses). When that spike backprops and reaches the silent synapse in question, it will trigger unblocking of the NMDARs at that synapse, and Ca entry, which can trigger strengthening (by adding AMPARs).
The whole point here that the decision to strengthen a particular synapse (or not) should depend not just on what is happening at that synapse (i.e. the arrival of a presynaptic spike) but the collective decision of the whole postsynaptic neuron (does it fire a spike too?).
Similarly at a particular nonsilent synapse: this synapse will depolarise (because it already has some ampaRs) if the input axon fires, but we don't want that synapse to locally depolarise enough to unblock the nmdars because then it would strengthen regardless of what the whole neuron is doing - we only want the synapse to strengthen if the postsynaptic neuron fires (mostly as a result of the other active synapses, but in small part because of the synapse in question). So we can adjust each synapse individually based on what the relevant pre- and postsynaptic axon are doing: if they both fire (the pre slightly before the post)  then strengthen it! This mechanism implements the Hebb rule, fire together wire together.
We can capture the behavior of a neuron with 2 simple equations: dw/dt = xy and y = f(x.w). In the first equation w refers to the strength of a particular synapse (it should have a subscript i since we are referring to the ith synapse) and y is the firing rate of the neuron. The second, "dot-product", equation says y depends on how well the current input pattern (vector x) matches the current strength pattern of the whole set of synapses (the vector w). The 2 equations together mean that gradually over time the relative strength of all the synapses will come to reflect regularities (correlations) in the entire set of input pattens the neuron sees over its lifetime (which is essentially the lifetime of the animal). Since detecting regularities is what we mean by "understanding", this implies  the brain (just a collection of neurons!) can "understand" - i.e. have a mind (a not uninteresting conclusion).

Of course the devil is in the details. The whole thing would be undermined if (a) the local depolarisation due to AMPARs were big enough to unblock the NMDARs, or (b) the Ca signal at one synapse could influence, even to a tiny degree, what happens at other synapses. Both these points are controversial.

In the case where all the synapses are initially silent (e.g. in the early fetus), it would appear that there's no way to get the ball rolling: no AMPARs means no firing! However, it turns out that very early on GABA acts as an excitatory transmitter (chloride pumps have not yet matured and E_Cl is positive to threshold)! Probably initially random firing leads to some random unsilencing, and only later do the synapses get further adjusted in an experience-dependent way."
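To make the two equations at the end of my answer concrete, here is a minimal numerical sketch of them. The input statistics, the learning rate and the weight normalization (needed to keep pure Hebbian growth bounded) are illustrative assumptions, not course material:

```python
import numpy as np

# Toy version of the two equations: y = f(w . x) and dw_i/dt = x_i * y.
rng = np.random.default_rng(0)
n_syn = 20
w = rng.uniform(0, 0.1, n_syn)                    # initial synaptic strengths
f = lambda u: max(u, 0.0)                         # firing rates cannot be negative
eta = 0.01

for t in range(2000):
    common = rng.random()                         # a hidden "regularity" shared by all inputs
    x = 0.5 * common + 0.5 * rng.random(n_syn)    # correlated presynaptic firing rates
    y = f(w @ x)                                  # dot-product equation: output firing rate
    w += eta * x * y                              # Hebb rule: co-activity strengthens
    w /= np.linalg.norm(w)                        # keep total strength bounded

print(np.round(w, 2))   # weights end up roughly equal: they point along the shared correlation
```

After many input patterns the weight vector comes to point along the dominant correlation in the inputs, which is the sense in which the neuron "detects regularities".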


If you can understand both the question and my answer, the course will have been a success, no matter what your final grade.


 






Thursday, November 19, 2015

Visual Cortex; the vanishing cow

Here's a nice overview of visual cortex, with a few cool experiments you can try on yourself: http://www.tutis.ca/Senses/L2VisualCortex/L2V1.pdf
When testing your blind spot, I suggest moving the test screen back and forth a bit while fixating the X target: you will see that at one particular distance the center of the blind spot becomes invisible - or, better, gets "filled in" with the rest of the pattern. It's at this distance that the central patch falls exactly on the blind spot (use 1 eye only, and don't cheat: you must fixate the X.) Here's another example: fixate the + and find the distance from the screen where the cow vanishes. Why and how does the cow vanish?
from http://www.psy.ritsumei.ac.jp/~akitaoka/moten01.jpg





Wednesday, November 18, 2015

The brain dilemma: Electrical and Chemical Spread

Synapses have 2 main functions. First, they transmit information (about input firing and synapse strength) to the axon initial segment, where spikes are initiated and sent to other neurons. Second, they can change their strength, primarily in response to input/output co-activity. These 2 processes interact so that the outputs the inputs cause become more useful (e.g. to survival and reproduction). The first is largely a fast, rather global, electrical process (individual synaptic currents spread down dendrites and combine to trigger spikes); the second is largely a slower, sharply localized chemical process confined to individual synapses. The first process is the "computation" and the second the "programming" (which has to be mostly self-programming).


However, the requirements of these global and local processes are to some degree contradictory. To see this we can consider a simple model of the spread of electrical and chemical signals within neurons: cable theory. We have studied electrical cable theory in class; it looks at the combined effect of membrane capacitance and resistance, and cytoplasmic resistance, on electrical spread. As long as a synapse is reasonably close to the cell body (1 space constant or less, lambda_V ~ 1 mm) it can influence firing, albeit with a delay.

But one can also write a cable equation for chemical spread. Here we replace capacitance by the ability of intracellular calcium-binding molecules to "buffer" rapid calcium changes. Membrane resistance corresponds to calcium pumps (e.g. in the spine neck), which extrude or degrade calcium and other chemicals. Cytoplasmic resistance corresponds to intracellular diffusion, typically at ~ 1 um^2/msec.

The black lines below represent a cable - either the dendrites or the spine neck. The colored lines represent voltage or chemical signals. In the dendrites one wants good voltage spread, so the neuron can "integrate" its synaptic inputs (the basic computation). Between spines one wants no spread (so changes in synapses do not affect each other).



Putting in reasonable numbers for these parameters one gets lambda_C ~ 1 um for the chemical space constant. However, the distance from synapses to cell bodies is ~ 1 mm and between spine heads ~ 1 um. So it looks as though the brain cannot work well: the electrical signal barely reaches the soma (1 mm / lambda_V ~ 1) and the chemical signal barely stays confined to its spine (1 um / lambda_C ~ 1).
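Here is a back-of-the-envelope check of those two space constants. The particular parameter values, including the effective calcium removal rate, are assumptions chosen to give round numbers:

```python
import numpy as np

# Electrical space constant: lambda_V = sqrt(R_m * a / (2 * R_i)) for a cylinder of radius a
R_m = 20000.0      # ohm * cm^2, membrane resistivity (textbook-style value)
R_i = 100.0        # ohm * cm, cytoplasmic (axial) resistivity
a   = 1e-4         # cm (a 1 um dendrite radius)
lambda_V_cm = np.sqrt(R_m * a / (2 * R_i))
print(f"lambda_V ~ {lambda_V_cm * 1e4:.0f} um (about 1 mm)")

# Chemical space constant: lambda_C = sqrt(D / k), diffusion versus removal by pumps/buffers
D = 1.0            # um^2 / ms, cytoplasmic diffusion (as in the text)
k = 1.0            # 1 / ms, effective removal rate (pumps plus buffering; an assumption)
lambda_C = np.sqrt(D / k)
print(f"lambda_C ~ {lambda_C:.1f} um")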

Of course one can try to squeeze synapses closer to the IS, but this just decreases the distance between them, worsening chemical isolation, on which learning (or self-programming) hinges.

The only real way this can work is to decrease the number of inputs, which of course greatly lowers computational power.

One can fiddle at the margins with this dilemma, but I suspect that it means that most animals have to rely on instinct, i.e., the computational power of Darwinian evolution. Since humans have language we can "program" each other (but still most programs have to be discovered by individuals). From this perspective, language (and limitless symbols generally), rather than big brains, would be the key to human success.



Monday, November 16, 2015

Are synaptic strengths set by co-activity?

Although there is good evidence that Hebbian (co-activity-dependent) synapses do exist, and in some cases the machinery is understood (NMDARs, back-APs etc), and there is also evidence that strength changes underlie learning, only very recently has evidence been obtained that synaptic strength is set by the history of co-activity at that synapse (http://biorxiv.org/content/biorxiv/early/2015/03/11/016329.full.pdf)

The authors used detailed 3D reconstruction from serial EM, and studied pairs of synapses in the hippocampus that were made either by the same axon on the same dendrite, or on different dendrites or by different axons (where it's unlikely the axons or dendrites belonged to the same neurons). The idea is that in the former case the 2 synapses would have identical histories of co-activity, and in the latter case different histories. The above figure shows (A) one example, in which an axon makes 2 synapses on a dendrite (arrows point to the 2 postsynaptic densities, in red), (B) many pairs of examples, and (C) the relation between the 2 spine head volumes (top) or PSD areas (next) for the pairs. Spine head volume and PSD area should be good measures of synapse strength, and they are tightly correlated for the 2 members of each pair, though they vary over a large range. This suggests that co-activity history determines synapse strength. When the 2 synapses are made by different axons or on different dendrites, the strengths do not correlate well:


A related 3D reconstruction study in the neocortex shows that if an axon makes one synapse on a dendrite, it tends to make others (again consistent with shared co-activity history), though in this case these synapses do not seem to have similar size.


The authors (Kasthuri et al., 2015, Cell 162, 648–661) conclude: "Thus axon-dendrite adjacency, while of course necessary for synapses to form, is insufficient to explain why some axons establish multiple synapses on some dendrites and not others. This is an explicit refutation of Peters' rule. Rather this result argues that there are different probabilities for synapses between particular dendrites and particular excitatory axons." Of course a shared history of co-activity and Hebbian plasticity could account for these probabilities.
Peters' rule refers to the hypothesis that synapses are made solely on the basis of physical axon and dendrite proximity, without regard to their past history of co-activity. Connections in the brain would then be determined solely by the chance close encounters of axons and dendrites. These in turn would reflect the general geometry of axodendritic overlap, which might reflect genetic specification of axonal arborization and dendritic branching patterns. Several recent papers show that this idea is not generally valid, but do not directly suggest a role for co-activity and Hebbian learning. However, there may be situations in which connections do follow Peters' rule. One example would be the bipolar-starburst amacrine synapses I discussed in a recent post.

I favor the extreme opposite view - that connection probabilities and strengths are determined by co-activity, and thus indirectly by input correlations - especially by subtle higher-order correlations. This would probably require extraordinary degrees of synapse isolation and independence, and could endow neurons with powerful computational abilities, perhaps transcending what is achievable with current silicon technology. However, strong experimental evidence on this point is currently lacking.

Saturday, November 14, 2015

what does the retina do? - and how? The EyeWire Video Game


EyeWire is an online video-game that is fun to play and useful to neuroscience!

The retina is perhaps the most accessible part of the brain, and in some ways the simplest (though it's still amazingly complicated) - see the comparison below of the retina and the neocortex:


The retina has at least 3 jobs.

1. It converts the current pattern of light intensity and color falling on the retina (in particular, on the outer segments of the photoreceptors) into an electrical pattern (in particular, the potentials inside the photoreceptors, which act as 100 million "image pixels"). 

2. This electrical pattern is then compressed 100-fold  into a pattern of spikes on the axons of the ganglion cells and sent to the rest of the brain (especially the visual thalamus and superior colliculus) for interpretation.

3. Some specialized ganglion cells can already "interpret" aspects of the visual image. For example, some ganglion cells respond to directed motion at particular locations. 

1 is accomplished by the phototransduction machinery. Light causes a structural change in the photopigment (one of four types) expressed by the photoreceptor, which activates an enzyme that breaks down cGMP. The fall in cGMP in turn closes some of the CNG channels that normally allow a background influx of sodium ions into the photoreceptor, thus hyperpolarizing the photoreceptor and reducing its ongoing glutamate release.

2. Compression is achieved by the center-surround receptive field organization. In a nutshell, since neighboring points in retinal images tend to show similar light intensities, it's usually more informative (more "surprising")  to send to the brain information about differences in local intensities. Thus an "on"  ganglion cell could be caused to fire by light hitting a "central" photoreceptor that provides (via a bipolar) depolarizing input, especially when the illumination of immediately neighboring photoreceptors decreases. These neighbors provide input to GABAergic "horizontal cells", which reduce release from the central photoreceptor, as well as inhibiting the corresponding bipolar. Vice-versa for "off" ganglion cells. In statistics terms this corresponds to a "local" version of PCA called ZCA, as discussed in class. The comparison with neighbors implies sensitivity to pairwise statistics, also a feature of PCA. However, straight PCA is not practical in the retina, because it involves global (and long-distance) connections, which would enormously thicken the retina. Because the pairwise image correlations (mostly caused by optical imperfections of the eye itself) are usually highly local, the local wiring needed for ZCA is much more practical, and equally efficient. Because of inevitable off-axis chromatic aberrations in the lens, local green/blue or red/green center-surround comparisons may also do good compression.  
Note that the goal of PCA is to find directions in multidimensional pixel space along which image projections vary maximally. 
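Here is a small numerical illustration of that last idea: ZCA whitening of correlated "images" produces local, center-surround-like filters. The toy 1-D images, the smoothing kernel that correlates neighboring pixels, and the patch count are all invented for illustration:

```python
import numpy as np

# Toy 1-D "retina": patches with strong neighbor correlations, whitened by
# ZCA (C^(-1/2)); the rows of the whitening matrix look like center-surround filters.
rng = np.random.default_rng(0)
n_pix, n_patches = 16, 5000

# correlated "images": smooth random signals (neighboring pixels are similar)
raw = rng.standard_normal((n_patches, n_pix))
kernel = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
images = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), 1, raw)
images -= images.mean(axis=0)

C = images.T @ images / n_patches                                 # pixel covariance
eigval, eigvec = np.linalg.eigh(C)
W_zca = eigvec @ np.diag(1 / np.sqrt(eigval + 1e-6)) @ eigvec.T   # C^(-1/2)

# each row of W_zca is the "receptive field" of one output unit:
# a positive center flanked by a negative surround, i.e. a local difference filter
print(np.round(W_zca[n_pix // 2], 2))
```

Unlike straight PCA, the ZCA filters stay local, which is why the analogous retinal wiring (photoreceptor, horizontal cell, bipolar) can be short-range.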
In the foveola, which aligns with the optical axis of the eye, image blur is minimal, and here each ganglion cell has input from just 1 cone, so there is no image compression: the brain receives the full, detailed,  RAW image. Of course it has to interpret it - which is why the visual cortex is so complex (see top image). Note that though the fovea is only a small part of the retina, a large part of the visual cortex is devoted to its analysis - this "magnification factor" is at least 100 fold. 

3. The mechanism of the direction selectivity of ganglion cells has recently been worked out (e.g. http://www.nature.com.proxy.library.stonybrook.edu/nature/journal/v471/n7337/full/nature09818.html). These ganglion cells come in on and off types, reflecting the sign of the bipolar to which they are connected and the inner plexiform sublayer where their axons/dendrites meet. But in addition they respond selectively when the light or dark spot to which they are tuned moves in a particular direction. This directionality arises from additional inhibitory input from GABAergic "starburst amacrine cells" (SBAs), which inhibits firing when the spot moves in one particular direction but not the other. Amacrine cells do not have axons. This type is aptly named, because its dendrites spread out in all directions in either the on or the off sublayer, making synapses on ganglion cell dendrites in that layer. The starburst dendrites also get input from bipolars. Each spreading SBA dendrite branches out in a sector, and within that sector it gets excitatory input from overlying bipolars. However, these bipolar-to-SBA epsps arrive at different times because of cable properties. In particular, if a spot of light moves away from the SBA cell body, it first depolarises the proximal SBA dendrite, then the distal. The distally and proximally generated epsps will thus peak at about the same time in the distal dendrite, and the dendrodendritic SBA-GC synapses in the distal dendrites will be strongly activated. Notice that this arrangement makes each separate dendrite respond to centrifugal motion in a particular direction (e.g. north, south, east or west). Now it turns out that a northward GC receives inhibitory synapses from a northward-tuned SBA dendrite, and so forth, and therefore inherits its directional tuning. Note that individual SBA cells are NOT tuned to individual directions, though their dendrites are. Here we have an example of local dendritic computation, a principle which some neuroscientists are vainly trying to extend to excitatory neurons with axons (e.g. pyramidal cells - see Hebbery Notes). A toy version of this delay-line logic is sketched below, after the figure caption.


The SBA is in black, and the synapses it makes on 4 (N, S, E, W; different colors) directional GCs are shown as colored balls. One of these synapses is shown in detail. All the other neuron processes are shown in gray. Seung and Denk, Nature 514, 394 (16 October 2014), doi:10.1038/nature13877
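Here is the toy delay-line sketch promised above. The EPSP shape, the 5 ms cable delay and the stimulus timing are invented round numbers, chosen only to make the coincidence logic visible:

```python
import numpy as np

# Toy delay-line model of one starburst amacrine (SBA) dendrite.
dt = 0.1                       # ms
t = np.arange(0, 50, dt)
epsp = lambda t0: np.where(t > t0, (t - t0) * np.exp(-(t - t0) / 5), 0)  # alpha-like EPSP

cable_delay = 5.0              # ms for a proximal EPSP to spread to the distal tip (assumption)
stim_delay  = 5.0              # ms between the spot crossing the proximal and distal bipolars

def distal_voltage(outward):
    # outward motion: the proximal bipolar fires first, then the distal one
    t_prox, t_dist = (0.0, stim_delay) if outward else (stim_delay, 0.0)
    return epsp(t_prox + cable_delay) + epsp(t_dist)   # both measured at the distal tip

print("peak (outward motion):", round(float(distal_voltage(True).max()), 2))   # delays coincide -> big peak
print("peak (inward motion) :", round(float(distal_voltage(False).max()), 2))  # delays disperse -> smaller peak
```

The larger distal peak for outward (centrifugal) motion is what drives the dendrodendritic SBA-to-GC inhibitory synapses preferentially for that direction.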




Friday, November 6, 2015

Toyota Invests $1 Billion in Machine Learning; Will we become slaves to clever machines?


"Machine Learning" is a very hot new field that is taking over from the older field of Artificial Intelligence, and is central to increasingly ubiquitous technologies such as Siri, Watson and self-driving cars. It also has increasingly strong links to neuroscience, and draws on applied math, statistics, physics and computer science. It's sometimes referred to as the "New AI".
ML is essentially the science of learning by machines (especially computers). Since the central assumption underlying neuroscience is that the brain is a machine, and since neural plasticity and learning are fundamental to brain function, especially in mammals, the 2 sciences are natural allies.
In today's New York Times a front page article (http://www.nytimes.com/2015/11/06/technology/toyota-silicon-valley-artificial-intelligence-research-center.html?emc=eta1) reveals that Toyota is investing $1B in ML in Silicon Valley, already the epicenter of ML. The same page features a banner ad by IBM touting Watson and the "Cognitive Era".

Why has ML moved to the fore? First, it's increasingly realized that learning is the key to intelligence. Indeed, one could almost define intelligence as the ability to learn how to solve problems - any problem, but especially new problems. Second, there's an increasing focus on using rigorous, quantitative approaches, often based on statistics - in particular, so-called "Bayesian statistics", a systematic approach to improving one's hypotheses as new information becomes available. Third, rapid (though somewhat decelerating) advances in computing power allow the heavy number crunching required by ML techniques. Fourth, some of the most powerful ML approaches are partly inspired by neuroscience, so advances in both fields are synergistic.

In the course we already touched on one of the simplest and oldest examples of ML when we considered motor learning in the cerebellum. We saw that parallel fibers make synapses on Purkinje neurons, and these can automatically change their strength based on 2 coincident factors, the parallel fiber firing (signalled by glutamate release) and an error signal (conveyed by climbing fiber firing). We formulated this as "weight decrease at synapse number i is proportional to PF number i firing rate times CF firing rate" - sometimes known as the "delta rule".
Clearly once the movement error goes to zero under this rule the PF strengths will stop changing, suggesting that a Purkinje cell might learn to fire in the way needed for accurate movements (by inhibiting deep cerebellar neurons that influence movement details). However, we did not actually prove that this delta rule always improves things (which requires the implicit assumption that there ARE PF synapse strengths that allow perfect movements).
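Here is a minimal simulation of that rule. Treating the climbing-fiber signal as a signed error, and all of the numbers below, are illustrative assumptions; the point is only that the rule drives the error toward zero:

```python
import numpy as np

# Sketch of the cerebellar "delta rule": parallel-fiber synapse i changes in
# proportion to (PF_i firing) x (error conveyed by the climbing fiber).
rng = np.random.default_rng(1)
n_pf = 50
w_true = rng.uniform(0, 1, n_pf)       # hypothetical PF weights that give perfect movement
w = np.zeros(n_pf)                     # initial weights
eta = 0.002                            # learning rate

for trial in range(2000):
    pf = rng.poisson(2.0, n_pf).astype(float)   # parallel-fiber firing on this trial
    error = pf @ w - pf @ w_true                 # movement error, signaled by the CF
    w -= eta * error * pf                        # depression when PF activity coincides with CF error

print("mean |w - w_true| after training:", round(float(np.abs(w - w_true).mean()), 3))
```

Note that the sketch builds in exactly the implicit assumption mentioned above: a set of PF strengths (w_true) that produces error-free output does exist.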

Clearly this delta rule has a "Hebbian" flavor (see my last post) - synapse strength change depends on both input and output firing. Related rules underlie much of the most sophisticated new ML techniques.

Will ML succeed, and if so would machines take over our jobs, condemning almost all of us to abject poverty? If ML is to succeed it requires that our machines (e.g. computers) can do the required number crunching, and this tends to become prohibitively expensive as the numbers increase. So far Moore's Law has allowed  hardware to keep up with software, but this is now slowing, and researchers are exploring "neuromorphic" (brainlike) strategies. But it's not yet clear that implementing Hebbian synapses at extremely high density is straightforward either for the brain or for machines (see my last post).

 This is really not just a scientific question, but also one about politics and morality: should the owners of these technologies become the new economic aristocrats that the USA was founded to eliminate? In the meantime it's an exciting period in neuroscience and AI.

Thursday, November 5, 2015

The Hebbian excitatory synapse

This diagram shows several key features of a typical Hebbian synapse. The transmitter glutamate is released from vesicles (orange) when a presynaptic spike arrives. This stimulates both AMPA-type receptors (green) and NMDA-type receptors (blue). The NMDARs generate very little current, because when they open they are immediately blocked by extracellular magnesium ions. The AMPARs generate an inward sodium current which depolarizes the spine head, and, less strongly and with a slight delay, the cell body and the axon initial segment. If a lot of other synapses fire at roughly the same time, these small somatic depolarizations can add up and trigger a spike, which travels down the axon to the neuron's output synapses made on other neurons. But this spike also travels back along the dendrites, and reaches the synapse shown here (and the others that helped trigger the spike). This pops out the Mg from the open NMDARs, which allows calcium to enter (shown as a red cloud; a few calcium ions are already around at rest). This calcium signal can then trigger, via CaM kinase, strengthening of the synapse by addition of more AMPARs (either from perisynaptic membrane and/or an intracellular source).

IMPORTANT POINTS                                                                                       
(1) the briefly open AMPARs do not permit Ca entry (Q/R switch)
(2) the unplugged open NMDARs do; this Ca signal triggers LTP (or perhaps LTD)
(3) the occurrence of a back-propagating spike reflects the cooperative action of the firing of many individually weak synapses, each resembling that shown here, but varying in "strength" (= numbers of AMPARs).
(4) some synapses lack AMPARs - they are "silent". However, they can be unsilenced in exactly the same way as shown here.
(5) If a back-propagating spike arrives prematurely, before a presynaptic spike releases glutamate, or does not arrive at all, there is a much smaller calcium signal in the spine head (and therefore no LTP); but the early bAP can cause e.g. endocannabinoid release, which, combined with subsequent stimulation of another type of presynaptic NMDAR (not shown), can cause less transmitter release in the future, weakening the synapse ("LTD"). (See the timing sketch after this list.)
(6) the calcium signal (and other second messengers underlying LTP/LTD) does not significantly spread to neighboring synapses, despite their extremely close packing (~ 1 um apart or less) and the rapid diffusion of these messengers (~ 1 um^2/msec). Of course the devil is in that "significantly". My own research focuses on this rather neglected but crucial issue - more anon.
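Here is the minimal timing sketch of points (2)-(5) referred to above. The exponential form, the amplitudes and the 20 ms time constant are illustrative guesses, not measured values:

```python
import math

# Hypothetical weight update depending only on the interval between the
# presynaptic spike and the back-propagating AP (all constants are illustrative).
def weight_change(t_pre, t_post, a_ltp=1.0, a_ltd=0.5, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:    # pre before post: glutamate is bound when the bAP unblocks the NMDARs -> LTP
        return a_ltp * math.exp(-dt / tau)
    else:         # bAP arrives first (or alone): small spine Ca, endocannabinoid route -> LTD
        return -a_ltd * math.exp(dt / tau)

print(weight_change(t_pre=10.0, t_post=15.0))   # pre 5 ms before post -> strengthen (about +0.78)
print(weight_change(t_pre=15.0, t_post=10.0))   # post 5 ms before pre -> weaken (about -0.39)
```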






New Kasai Paper on Synaptic Basis of Learning

Haruo Kasai is a Japanese scientist who has made important contributions to our understanding of the synaptic machinery of learning and memory.


From: http://www.bio.brandeis.edu/lismanlab/index.html

For example, in an already classic paper (Matsuzaki, M., Honkura, N., Ellis-Davies, G. & Kasai, H. Structural basis of long-term potentiation in single dendritic spines. Nature 429, 761–766 (2004)), he showed that induction of LTP at single synapses (by pairing punctate glutamate application with postsynaptic depolarization) triggers changes in strength and spine head volume that are confined to that synapse and do not spread to neighboring synapses:


Top graph shows changes in strength and volume at a hippocampal synapse where ltp is induced, with no change at neighbors. Ignore bottom graph.

He concluded: "Our results thus indicate that spines individually follow Hebb's postulate for learning."

In the recent study (Nature Volume 525, Issue 7569, 17 September 2015, Pages 333-338), he looked at synapses in the motor neocortex during and after learning of a motor task (the ability to cling to a rotating rod). Using special molecular markers he was able to identify a set of synapses on the dendrites of a pyramidal cell that became strengthened following task learning. Then, crucially, he was able to shrink those synapses back to their original size, and show that this selectively erased learning of this task, but not others. This strongly suggests that strengthening of these synapses (presumably as a result of Hebbian ltp occurring during learning) underlies the relevant learning.
Of course we do not know which circuits these synapses belong to, or exactly what role they play in learning the task. 

Wednesday, October 7, 2015

Today's Chemistry Nobel

Paul Modrich elucidated mismatch repair, the final step in ensuring accurate DNA replication. Recently we have come to recognize (almost) miraculous replication accuracy, rather than the in any case inevitable mistakes (i.e. mutation), as the driver of evolution. As we discussed, the main step underlying accuracy is however "proofreading", a concept proposed independently and simultaneously by the physicist John Hopfield and the now almost unknown Jacques Ninio. Similar extreme accuracy in the strengthening of synapses could underlie learning, especially in the neocortex, where a type of neural proofreading might occur, generating what we loosely call "mind" (see syndar.org). We will not be discussing this rather speculative "Hebbian proofreading" concept in BIO 338, but if anyone is interested please contact me. However, we will be discussing related aspects of synapses and neocortical operation. Congratulations to all 3 winners!

There are very surprising and interesting implications of proofreading for the way DNA is replicated. As summarized in the Alberts textbook:

"The need for accuracy probably explains why  replication occurs only in the 5′-to-3′ direction. If there were a  that added deoxyribonucleoside triphosphates in the 3′-to-5′ direction, the growing 5′-chain end, rather than the incoming mononucleotide, would carry the activating triphosphate. In this case, the mistakes in polymerization could not be simply hydrolyzed away, because the bare 5′-chain end thus created would immediately terminate DNA synthesis (Figure 5-11). It is therefore much easier to correct a mismatched that has just been added to the 3′ end than one that has just been added to the 5′ end of a DNA chain. Although the mechanism for DNA replication (see Figure 5-8) seems at first sight much more  than the incorrect mechanism depicted earlier in Figure 5-7, it is much more accurate because all DNA synthesis occurs in the 5′-to-3′ direction.


Figure 5-11. An explanation for the 5′-to-3′ direction of DNA chain growth.


Growth in the 5′-to-3′ direction, shown on the right, allows the chain to continue to be elongated when a mistake in polymerization has been removed by exonucleolytic proofreading (see Figure 5-9). In contrast, exonucleolytic proofreading in the hypothetical 3′-to-5′ polymerization scheme, shown on the left, would block further chain elongation. For convenience, only the primer strand of the DNA double helix is shown.
Molecular Biology of the Cell. 4th edition.
Alberts B, Johnson A, Lewis J, et al.
New York: Garland Science; 2002.

Thus proofreading requires unidirectional strand copying, which in turn explains why replication of the lagging strand is done using an otherwise complex and cumbersome "replication fork" machinery. Of course evolving this machinery was an unlikely, difficult and individual-fitness-lowering process (rather like sex), but it was necessary for all life as we know it to emerge billions of years ago.


Friday, October 2, 2015

New "Nature" paper on molecular mechanism of transmitter release

I mentioned in class that transmitter release is mediated by 2 key molecules (or more exactly, types of molecule), the SNAREs and synaptotagmin. SNAREs are proteins that bring the vesicle membrane and the presynaptic plasma membrane together at the active zone (where release occurs). The new paper (Nature 525, 62–67, 03 September 2015) describes an X-ray crystallographic study of the interface between the calcium-detecting molecule synaptotagmin (on the vesicle membrane) and the SNAREs, which are anchored in both vesicle and plasma membranes, and come together in the narrow space between them to form a "helix bundle". When calcium binds to synaptotagmin, it triggers, via its interface with the SNAREs, a contraction of the helix bundle, pulling the 2 membranes together and then dragging them so they actually fuse. Here's the key diagram from the paper (which the Nature robot might censor, but you can also look at it online at the University library); look particularly at parts c, d and e.


Of course one always knew, since Katz's seminal discovery of the role of calcium in transmitter release, that the underlying molecular machinery would eventually be mapped out, but it's still very satisfying to see all the details falling into place. Sudhof, one of the authors of the new paper, had already received the 2013 Nobel Prize (with Rothman and Schekman) for his work on the molecular mechanism of exocytosis.

Wednesday, September 23, 2015

Interesting interview with Cori Bargmann

Cori Bargmann is the Chair of the  advisory commission for the Brain Initiative.

She studies worms.


Here's the comment I just posted on my Facebook page (which should not be directly accessible to you):

She's a good scientist, but misleads when she says that studying the worm brain might help understand the human brain. It might, but already it's clear that it probably won't answer any of the important questions, like how does the cortex work? Essentially she's looking for her keys where the light is, not in the dark place she dropped them. Of course there may be interesting objects under the lamp-post - even a million dollars, which would buy a good locksmith - but it's rather unlikely. To understand the human brain you have to study the cortex directly: try to find out what the circuits are and what they do, and above all whether there are general principles that underlie the observed variations and that specifically explain the features we are most interested in (e.g. intelligence). Of course evolution does manifest a type of intelligence, and there may be interesting deep connections with neural intelligence.

Sunday, September 13, 2015

Videos and Discussion of Potassium Channel Permeation and Selectivity



The above videos should be helpful in understanding the structure and function of the basic type of potassium channel, the "inward rectifier". My lecture and Notes describe the basic features of selective permeation, especially the narrowest part of the open pore, the "selectivity filter". This is lined with 20 (5x4) carbonyl groups, all pointing toward the interior. These are part of the backbone of the polypeptides, and the corresponding amino acid side chains point outward and interact with the rest of the protein, so the carbonyls are held rigidly and cannot move inward to better contact sodium ions. Therefore these ions cannot lower their energy enough to compensate for the energy cost of losing their H2O "hydration" molecules as they squeeze into the narrow pore. But K+ ions can lower their energy and enter the narrow part of the pore, and then easily move from site to site (e.g. s1 to s2, or s3 to s2), depending on whether adjacent sites are already occupied by a K+ ion. Because the narrow part of the pore (the "filter") always holds 2 ions (otherwise the complex structure collapses, e.g. at very low K+ concentration), the only movements we need consider are between the 2 possible occupancy states (1,3) and (2,4). Both can occur, but their relative numbers depend on factors such as the K+ concentrations on both sides of the membrane, Vm, and the chemical energies of the ions in the different states. In class I drew a simplified "chemical energy" diagram, showing only equally low energy wells with the ion pair located at the lowest-energy positions; of course the ions are constantly moving, and the total chemical energy at any time depends on both state and position. It is the complete energy profile over all possible combinations of state and position (though to a good approximation only movements between the (1,3) and (2,4) states are important) that actually determines the movements of ions, and thus the unidirectional and net potassium currents (more about this in a later lecture).
Crucially, the membrane potential affects the energies at all positions. A reasonable approximation is to assume the field is constant at all positions (this "constant field" assumption is used in deriving the GHK membrane equation referred to in the lecture on membrane potential), i.e. that the voltage changes linearly with position (though perhaps most of the potential drop occurs along the narrow filter). We would first consider the situation at zero Vm, then at other Vms (I will post diagrams for this).

Notice one crucial point: whatever the currents at various Vms and external/internal K+ concentrations, we know that the net K+ current at the K+ Nernst potential must be zero, and our quantitative analysis must yield this result.
The main point of this discussion is that incorporating our recent understanding of molecular structural details can take us far beyond the simple pictures we used before (either Ohm's law or electrodiffusion across a uniform membrane, as in the GHK picture). However, it can get complicated, and these more detailed models don't really change the basic conclusion we reached previously: qualitatively, IK = GK(Vm - EK). They do offer a more complete picture of ion channel function, and you should know the gist of the ideas involved.

SUMMARY: the movement of K+ ions through the selectivity filter can be modeled as cycling between the (1,3) and (2,4) states. Ionic current occurs when there are more cycles in one direction than the other. When the pore moves from the first state to the second, an ion has moved inward, and vice versa. If the concentration difference across the membrane is zero, and Vm is too, then clearly there cannot be net cycling either way. If Kout/Kin > 1, with Vm = 0, there will be net inward cycling; if Kout/Kin = 1 but Vm < 0, there will also be net inward cycling. If Vm = EK (the Nernst potential) there is no net cycling, even though neither Vm = 0 nor Kout/Kin = 1. In general, knowing the energy profile for the various pore occupancy states allows one to calculate IK under any conditions.
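Here is a minimal numerical sketch of this two-state cycling picture. The rate prefactors, the 50/50 split of the membrane field between the two steps, and the assignment of which step loads K+ from which side are all modeling assumptions; its only real job is to show that the net flux vanishes exactly at the Nernst potential:

```python
import numpy as np

# Two-state cycling model of the selectivity filter (states "1,3" and "2,4").
# One full forward cycle moves one K+ from outside to inside across the whole field.
VT = 25.7          # mV, RT/F near room temperature
a, b = 1.0, 1.0    # arbitrary kinetic prefactors (assumptions)

def net_inward_flux(V, Ko, Ki):
    """Steady-state cycle flux (arbitrary units) for a 2-state cycle."""
    f1 = a * Ko * np.exp(-V / (4 * VT))   # step 1 forward: K+ loads from outside
    b1 = a * np.exp(+V / (4 * VT))        # step 1 reverse
    f2 = b * np.exp(-V / (4 * VT))        # step 2 forward: K+ unloads to inside
    b2 = b * Ki * np.exp(+V / (4 * VT))   # step 2 reverse: K+ loads from inside
    return (f1 * f2 - b1 * b2) / (f1 + f2 + b1 + b2)

Ko, Ki = 5.0, 140.0                       # mM, typical mammalian values
E_K = VT * np.log(Ko / Ki)                # Nernst potential, about -86 mV
print("E_K =", round(float(E_K), 1), "mV")
print("flux at E_K    :", round(float(net_inward_flux(E_K, Ko, Ki)), 6))    # ~0, as it must be
print("flux at -120 mV:", round(float(net_inward_flux(-120.0, Ko, Ki)), 3)) # net inward
print("flux at 0 mV   :", round(float(net_inward_flux(0.0, Ko, Ki)), 3))    # net outward
```

The zero crossing at E_K falls out of detailed balance, not of any particular choice of prefactors, which is the point made in the summary above.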

Tuesday, September 8, 2015

Breaking News : Lasker Prize just awarded for research on DNA repair: the "sloppier copier"

Evelyn Witkin just won the Lasker Prize (often a prelude to the Nobel) for her pioneering work on DNA repair. As I explained in my lecture, the polymerase that quickly repairs damaged DNA (e.g. damage caused by UV light) is a "sloppier copier" - it has to work very quickly to copy the short error-free (parental) strand, after removing the stretch that contains the damage. To do this fast, it dispenses with proofreading, which of course greatly increases the mutation rate and can lead to cancer - a disease that's closely related to the Eigen "error catastrophe".

see http://bcove.me/a7l91kqj for a nice video on mismatch repair (which is repair of replicative errors).

Response to a student's question re Eigen's model

The population of the master sequence as a fraction of the total population (n) as a function of overall mutation rate (1-Q). The total number of digits per sequence is L=100, and the master sequence has a selective advantage of a=1.05. The "phase transition" is seen to occur at roughly 1-Q=0.05. From https://en.wikipedia.org/wiki/Error_threshold_(evolution) 

NOTE: the ordinate (Y-axis) is on a log scale, so the almost all-or-none change in the concentration of the master sequence is by a factor of around 10^28 (ten to the twenty-eighth power, tens of thousands of times greater than Avogadro's number).


Here is part of the student's interpretation of my lecture on Eigen's model, and my response, in bold. More to follow....




"The Eigen model of molecular evolution allows us to make a connection between darwinian evolution and the origins of DNA/ RNA replication. Eigen's model works on the basis that in our universes early history, RNA were able to fold and function as catalysts for Polypeptide replication before DNA came into the picture. When DNA developed, the process changed slightly, including the creation of complimentary strands before the identical strand is produced. "


Not quite right. Eigen's model does not explicitly aim to model conditions on the early earth, nor to describe the special case of RNA replication. Indeed, at the time the model was first developed (1970), the catalytic abilities of RNA were not yet recognized. Eigen's model had a more general aim: to describe quantitatively a simplified and generalized model of polynucleotide (or indeed heteropolymer) replication. The first core feature is sequence copying - that the precise linear sequence of n monomers in a polymer made up of at least 2 different monomers (for convenience represented as 0 or 1) depends in a 1-to-1, 0-to-0 fashion on the precise linear sequence of another "template" sequence. All other specifics are left out. As I explained, we must consider the relative concentrations of all 2^n possible sequences when each sequence is growing exponentially at its own rate phi but is also "killed" (e.g. by simple dilution) with equal probability (because every so often half the solution is thrown away, so as to keep the total number of molecules constant despite the ongoing replication).

So 2 key features of the model are sequence-specific replication and competition for resources (i.e. hi-energy nucleotides). The third crucial feature is the possibility of mistakes in copying individual monomers (e.g. bases) despite the high specificity of, for example, Crick-Watson base pairing. This is a minimal model of Darwinian evolution at the molecular (not organismal) level. Eigen suspected, and was able to prove (both mathematically and experimentally - a nice combination), that if the error rate e exceeded a critical value approximately equal to 1/n, Darwinian natural selection would stop. Or, applied to the Origin of Life problem: since on the early earth replication (i.e. template-dependent copying) was probably rather inaccurate, Darwinian evolution could not start until the relevant catalyst (whatever it was) became accurate enough that e < 1/n. The whole point is that, at least in the model, there is a sharp dividing line (analogous to a phase transition) between purely chemical processes (e.g. low-accuracy copying) and Darwinian evolution (= Life) at a critical error rate ~ 1/n.

Note that while the model explicitly considers only sequences of fixed length n, exactly the same outcome would be observed in a model with variable length (for example, in the likely case that shorter sequences were more likely than longer ones). The model throws out all the interesting but basically irrelevant details to focus on the essence of the problem: point mutation.
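Here is a minimal numerical sketch of the error threshold shown in the figure above. It uses the standard simplification of neglecting back-mutation to the master sequence; L and a are taken from the figure caption:

```python
# Error-threshold sketch: the master sequence replicates a times faster than
# everything else, and a copy of it is error-free with probability Q = (1 - e)**L,
# where e is the per-digit error rate.
L, a = 100, 1.05                 # values from the figure caption above

def master_fraction(e):
    Q = (1 - e) ** L                         # probability a whole copy is perfect
    x = (a * Q - 1) / (a - 1)                # steady-state frequency of the master
    return max(x, 0.0)                       # below threshold the master dies out

# threshold: a*Q = 1, i.e. Q ~ 0.95, i.e. 1 - Q ~ 0.05 (as in the figure),
# corresponding to a per-digit error rate e ~ ln(a)/L ~ 0.0005
for e in [0.0001, 0.0003, 0.0004, 0.00048, 0.0005, 0.001]:
    print(f"per-digit error {e:.5f} -> master fraction {master_fraction(e):.3f}")
```

For larger selective advantages (ln a of order 1) the same threshold condition reduces to the e ~ 1/n rule quoted above.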


Thursday, September 3, 2015

Order and Disorder: who to marry

In yesterday's lecture I briefly discussed the connection between 2 apparently different types of disordering process: thermal agitation (= temperature) in phase transitions such as the melting of ice, the boiling of water and the loss of magnetism, and mutation in molecular evolution. The main type of mutation is "point mutation", the occurrence of incorrect Crick-Watson base-pairing (i.e. other than A-T or G-C). The latter results from the fact that the difference in the free energy change (in water) that accompanies correct versus incorrect pairing ("delta E") is not infinite (in fact it's only around 2 kcal/mole*) - hydrogen bonding can occur between incorrect pairs (e.g. A-A or A-C), though not as snugly as for correct pairing.

The Boltzmann equation states:

P_hi / P_lo = exp(-deltaE / kT)

where P_hi and P_lo are the (mutually exclusive) probabilities of being in the higher-energy versus the lower-energy state, and kT is the thermal energy (0.6 kcal/mole). Here we can interpret the hi-energy state as incorrect pairing. Inserting the above energy values we get error rate = exp(-2/0.6) = exp(-3.3) ~ 0.03-0.04, i.e. around 3%!
As discussed by Kunkel, DNA polymerases can achieve an error rate approaching 10^-10 using 3 combined strategies: active-site geometry (e.g. exclusion of water), proofreading and mismatch repair.
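A quick numerical version of this argument (the improvement factors assigned to each stage below are rough orders of magnitude for illustration, not Kunkel's exact figures):

```python
import math

# Boltzmann estimate of the raw base-pairing error rate, followed by the
# successive error-reducing stages discussed by Kunkel.
kT = 0.6          # kcal/mol (thermal energy near body temperature)
dE = 2.0          # kcal/mol penalty for a mispair from hydrogen bonding alone

raw_error = math.exp(-dE / kT)
print(f"raw (hydrogen-bonding only) error rate ~ {raw_error:.3f}")   # a few percent

base_selection = 1e-5    # active-site geometry / water exclusion: error left per base (assumption)
proofreading   = 1e-2    # 3'->5' exonuclease: roughly another 100-fold improvement
mismatch_rep   = 1e-3    # post-replicative mismatch repair: roughly another 1000-fold
print(f"combined error rate ~ {base_selection * proofreading * mismatch_rep:.0e}")  # ~1e-10
```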

Obviously thermal agitation (i.e. T)  plays a crucial role in mutation. That's why a man's testicles hang down from the body: it's cooler, though more vulnerable and less elegant than the female arrangement. Note that while I argued (in the Notes and the Lecture) that the high human intergeneration mutation rate favored rapid evolution (broadening of the quasi-species), at the individual selection level (mate choice) it always pays to choose the younger man, who has the lowest mutation rate (other things being equal, which they rarely are).

After the lecture a student asked if this idea (the battle between order and disorder) would be the major theme of the course. I replied, somewhat hastily, that once we got into the brain, this would not be a major theme - we will usually assume synapses, neurons and the brain work perfectly. But this was not quite right: we will see that thermal agitation underlies diffusion which powers the brain's batteries, and that non-thermal phenomena can cause disorder, or at least total confusion! More to the point, the conceptual/quantitative approach I introduce in the early part of the course will often crop up again as we become neural (and some of these ideas will have direct applications, e.g. in neural networks).
................................................................................................................................................................
*I multiplied energies per molecule by N, Avogadro's number (6x10^23, meaning 6 times ten to the power twenty-three), converting kT to RT. In other words, one can use either energies per molecule or energies per mole in the Boltzmann formula, but one must be consistent, so that N cancels on top and bottom.

Monday, August 31, 2015

Ferromagnetism and the Brain

What does the fact that a piece of iron loses its magnetism above the Curie temperature (1043 K) have to do with the brain? The electrical activity of neurons does create extremely weak magnetic fields, and that activity can be affected by extremely strong magnets. The latter effect is sometimes used by experimenters to modify ongoing brain activity, but the reason why we study the ferromagnetic phase transition is that it's a simple example of self-organization. The human mind is an almost (but not quite!) magical outcome of the interaction of billions of neurons, which is a rather poorly understood example of a phase transition (loss or gain of self-organization) - neural matter leading to mind, rather than watery matter transforming from liquid to solid as it freezes. Even freezing is quite a complicated process, so we first looked at an even "easier" example: the ferromagnetic phase transition.

The 2D Ising Model Monte Carlo Simulation Using the Metropolis Algorithm, from the Wolfram Demonstrations Project (by Darya Aleinikava): http://demonstrations.wolfram.com/The2DIsingModelMonteCarloSimulationUsingTheMetropolisAlgorit/
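If you'd like to play with this offline, here is a minimal version of the same Metropolis simulation. The lattice size, sweep counts and the choice to start from the fully magnetized state are arbitrary choices made to keep it small and fast:

```python
import numpy as np

# Minimal 2-D Ising model with Metropolis updates: below the critical temperature
# the spins stay self-organized (magnetized); above it thermal agitation wins.
rng = np.random.default_rng(0)
N = 24                                    # N x N lattice with periodic boundaries

def magnetization(T, sweeps=200):
    s = np.ones((N, N), dtype=int)        # start fully magnetized (avoids stuck domains)
    for _ in range(sweeps * N * N):
        i, j = rng.integers(N, size=2)
        nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] + s[i, (j + 1) % N] + s[i, (j - 1) % N]
        dE = 2 * s[i, j] * nb             # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
            s[i, j] *= -1
    return abs(s.mean())

for T in [1.5, 2.27, 3.5]:                # the critical temperature is ~2.27 in these units
    print(f"T = {T}: |magnetization| ~ {magnetization(T):.2f}")
```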

More to follow, including links to videos.

Sunday, August 30, 2015

BIO 338; “L’esprit de l’escalier”









me lecturing (on hummingbirds not the brain).

I teach an advanced undergrad class in neuroscience at Stony Brook University (BIO 338, “From Synapse to Circuit: Self-Organization of the Brain”), and I’ll be using this blog to provide a running commentary on the course, explain background, and highlight interesting student questions and my answers to those questions. Often when teaching I find I come up with better ways to explain things after class rather than during class itself (when one is under time pressure), and this way I can share those thoughts with students. I don't expect students to learn the additional material I present here, though if I present the same idea in both the class and the blog, you should learn it. However, when you write your essays, you might find the relevant blog posts helpful.
So let’s start with a prospective student’s interesting question: “How exactly can this course quantify the mind?”. Here is my initial answer:
“The course does not aim to “quantify the mind” but to try to understand it (i.e. understand understanding itself). In science one does this by constructing “models” i.e. mental representations of how aspects of the world operate. Since these models try to link high-level concepts like “memory”, “thinking”, “understanding” etc to low-level processes such as the firing of millions of neurons, they must be quantitative – one cannot easily reason about the behavior of highly nonlinear complex systems (eg the weather) without quantitative methods (eg computer simulation, math etc). Of course one cannot yet completely understand the brain, or the weather/climate, but we are making progress and the course will highlight some aspects of that progress. Of course a clinical doctor does not need to understand how the brain works, any more than you need to understand how weather predictions are generated. But a neuroscientist does.”
This is why we start with the example of ferromagnetism, which is a simple model of a complex system (a collection of interacting iron atoms). Using semi-quantitative tools we can actually see why spontaneous magnetism (a large-scale result of small-scale behavior) emerges.
The French in my title refers to the fact that one’s best joke or retort often arises as one is leaving the party (i.e. going down the stairs, from the “piano nobile”, or principal floor, of a mansion).