Wednesday, 15 August 2012

In the news: clever coding gets the most out of retinal prosthetics

This is something of an update to a previous post, but I thought it interesting enough for its own blog entry. Just out in PNAS, Nirenberg and Pandarinath describe how they mimic the retina’s neural code to improve the effective resolution of an optogenetic prosthetic device (for a good overview, see Nature News).

As we have described previously, retinal degeneration affects the photoreceptors (i.e., rod and cone cells), but often spares the ganglion cells that would otherwise carry the visual information to the optic nerve (see retina diagram below). By stimulating these intact output cells, visual information can bypass the damaged retinal circuitry to reach the brain. Although the results from recent clinical trials are promising, restored vision is still fairly modest at best. To put it in perspective, Nirenberg and Pandarinath write:
[current devices enable] "discrimination of objects or letters if they span ∼7° of visual angle; this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States"
Obviously, this poor resolution must be improved upon. Typically, the problem is framed as a limit in the resolution of the stimulating hardware, but Nirenberg and Pandarinath show that software matters too. In fact, they demonstrate that software matters a great deal.

This research focuses on a specific implementation of retinal prosthesis based on optogenetics (for more on the approach, check out this Guardian article; for an early empirical demonstration, see Bi et al., 2006, below). Basically, a genetically engineered virus is injected into the eye, where it causes intact retinal ganglion cells to produce a light-sensitive protein. These modified cells will now respond to light coming into the eye, just as the rods and cones do in the healthy retina. This approach, although still being developed in mouse models, promises a more powerful and less invasive alternative to the electrode arrays previously trialled in humans. But it is not the hardware that is the focus of this research. Rather, Nirenberg and Pandarinath show how the efficacy of these prosthetic devices critically depends on the type of signal used to activate the ganglion cells. As schematised below, they developed a special type of encoder to convert natural images into a format that more closely matches the neural code expected by the brain.

The steps from visual input to retinal output proceed as follows: Images enter a device that contains the encoder and a stimulator [a modified mini digital light projector (mini-DLP)]. The encoder converts the images into streams of electrical pulses, analogous to the streams of action potentials that would be produced by the normal retina in response to the same images. The electrical pulses are then converted into light pulses (via the mini-DLP) to drive the ChR2 (channelrhodopsin-2), which is expressed in the ganglion cells.
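At its core, this conversion step resembles the linear-nonlinear (LN) cascade models that are standard in retinal research: filter the incoming image, pass the result through a nonlinearity to get a firing rate, and generate spikes stochastically. A minimal Python sketch of that general idea (the filter, nonlinearity, and parameters here are illustrative placeholders, not the fitted encoder from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def ln_encoder(frames, spatial_filter, nonlinearity, dt=0.005):
    """Convert a movie (time x pixels) into a spike train for one model
    ganglion cell: linear filter -> nonlinearity -> Poisson spikes."""
    drive = frames @ spatial_filter          # linear stage
    rate = nonlinearity(drive)               # firing rate (spikes/s)
    spikes = rng.poisson(rate * dt)          # stochastic spike generation
    return spikes

# Illustrative placeholders (not the paper's fitted parameters):
n_pixels, n_frames = 100, 200
frames = rng.standard_normal((n_frames, n_pixels))     # stimulus movie
spatial_filter = rng.standard_normal(n_pixels) / n_pixels
softplus = lambda x: 20 * np.log1p(np.exp(x))          # rate nonlinearity

spike_train = ln_encoder(frames, spatial_filter, softplus)
# spike_train is the pulse stream that would then drive the
# mini-DLP / ChR2 stage in the real device
```

The key point is that the pulse stream sent to the ganglion cells is a model-based prediction of what the healthy retina would have said, not a raw copy of the image.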
This neural code is illustrated in the image below: 

The key result of this research paper is a dramatic increase in the amount of information that is transduced to the retinal output cells. They used a neural decoding procedure to quantify the information content in the activity patterns elicited during visual stimulation of a healthy retina, compared to optogenetic activation of ganglion cells in the degenerated retina via encoded or unencoded stimulation. Sure enough, the encoded signals were able to reinstate activity patterns that contained much more information than the raw signals. In a more dramatic, and illustrative, demonstration of this improvement, they used an image reconstruction method to show how the original image (baby's face in panel A) is first encoded by the device (reconstructed in panel B) to activate a pattern of ganglion cells (image-reconstructed in panel C). Clearly, the details are well-preserved, especially in comparison to the image-reconstruction of a non-encoded transduction (in panel D). In a final demonstration, they also found that the experimental mice could track a moving stimulus using the coded signal, but not the raw unprocessed input.
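The image-reconstruction logic can be illustrated with a toy decoder: learn a mapping from population responses back to pixel intensities, then apply it to held-out responses and ask how much of the original image survives. The linear least-squares decoder below is an illustrative stand-in under simulated data, not the reconstruction method used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy population: 50 model cells responding to 20-pixel "images"
n_cells, n_pixels, n_train = 50, 20, 500

images = rng.standard_normal((n_train, n_pixels))
weights = rng.standard_normal((n_pixels, n_cells)) * 0.5   # encoding map
responses = images @ weights + 0.1 * rng.standard_normal((n_train, n_cells))

# Least-squares decoder: map responses back to pixel intensities
decoder, *_ = np.linalg.lstsq(responses, images, rcond=None)

# Reconstruct a held-out image from its (noisy) population response
test_image = rng.standard_normal(n_pixels)
test_response = test_image @ weights + 0.1 * rng.standard_normal(n_cells)
reconstruction = test_response @ decoder

# Correlation between original and reconstruction indexes preserved detail
r = np.corrcoef(test_image, reconstruction)[0, 1]
```

In this spirit, the paper's panels B–D compare how much image detail can be decoded back out of encoded versus unencoded ganglion cell activity.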

According to James Weiland, an ophthalmologist at the University of Southern California (quoted by Geoff Brumfiel in Nature News), there has been considerable debate over whether it is more important to try to mimic the neural code, or simply to allow the system to adapt to an unprocessed signal. Nirenberg and Pandarinath argue that clever pre-processing will be particularly important for retinal prosthetics, as there appears to be less plasticity in the visual system than in, say, the auditory system. Therefore, it is essential that researchers crack the neural code of the retina rather than hope the visual system will learn to adapt to an artificial input. The team are optimistic:
"the combined effect of using the code and high-resolution stimulation is able to bring prosthetic capabilities into the realm of normal image representation"
But only time, and clinical trials, will tell.


Bi A, et al. (2006) Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron 50(1):23–33.

Nirenberg S, Pandarinath C (2012) Retinal prosthetic strategy with the capacity to restore normal vision. PNAS.

Monday, 13 August 2012

Research Briefing: Lacking Control over the Trade-off between Quality and Quantity in Visual Short-Term Memory

This paper, just out in PLoS One, describes research led by Alexandra Murray during her doctoral studies with Kia Nobre and myself. The series of behavioural experiments began with a relatively simple question: how do people prepare for encoding into visual short-term memory (VSTM)?

VSTM is capacity limited. To some extent, increasing the number of items in memory reduces the quality of each representation. However, this trade-off does not seem to continue ad infinitum. If there are too many items to encode, people tend to remember only a subset of the possible items with reasonable precision, rather than retaining a vaguer recollection of all of them.
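This pattern is often captured with mixture-style models of VSTM recall (in the spirit of Zhang & Luck, cited below): each report is either a noisy read-out of a stored item or a random guess. A toy simulation, with purely illustrative parameter values, shows how average report error grows with set size once a fixed capacity is exceeded:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_recall(set_size, capacity=3, sd_stored=15.0, n_trials=10000):
    """Simulate recall error (degrees on a feature circle) under a
    slots-style account: up to `capacity` items are stored with Gaussian
    error `sd_stored`; unstored items are pure guesses (uniform over
    360 degrees). Parameters are illustrative, not fitted to any data."""
    p_stored = min(1.0, capacity / set_size)
    stored = rng.random(n_trials) < p_stored
    error = np.where(
        stored,
        rng.normal(0.0, sd_stored, n_trials),      # precise report
        rng.uniform(-180.0, 180.0, n_trials),      # random guess
    )
    return error

# Mean absolute error rises with set size once capacity is exceeded:
for n in (2, 4, 8):
    print(n, round(float(np.abs(simulate_recall(n)).mean()), 1))
```

On this account, larger displays mainly add guesses rather than degrading every stored item without limit, which is the boundary on the quality-quantity trade-off described above.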

Previously, we and others had shown that directing participants to encode only a subset of items from a larger set of possible memory items increases the likelihood that the cued items will be recalled after a memory delay. Using electroencephalography (EEG), we further showed that the brain mechanisms associated with preparation for selective VSTM encoding were similar to those previously associated with selective attention.

To follow up on this previous research, Murray asked whether people can strategically fine-tune the trade-off between the number and quality of items in VSTM. Given foreknowledge of the likely demands (i.e., many or few memory items, a difficult or easy memory test), can people engage an encoding strategy that favours quality over quantity, or vice versa?

From the outset, we were pretty confident that people would be able to fine-tune their encoding strategy according to such foreknowledge. Extensive previous evidence, including our own mentioned above, had revealed a variety of control mechanisms that optimise VSTM encoding according to expected task demands. Our first goal was simply to develop a nice behavioural task that would allow us to explore, in future brain imaging experiments, the neural principles underlying preparation for encoding strategy, relative to other forms of preparatory control. But this particular line of enquiry never got that far! Instead, we encountered a stubborn failure of our manipulations to influence encoding strategy. We started with quite an optimistic design in the first experiment, but progressively increased the power of our experiments to detect any influence of foreknowledge about expected memory demands - and still nothing at all! The figure on the right summarises the final experiment in the series. The red squares in the data plot (i.e., panel b) highlight the two conditions that should differ if our hypothesis were correct.

By this stage it was clear that we would have to rethink our plans for subsequent brain imaging experiments. But in the interim, we had also potentially uncovered an important limit to VSTM encoding flexibility that we had not expected. The data just kept on telling us: people seem to encode as many task-relevant items as possible, irrespective of how many items they expect, or how difficult the memory test at the end of the trial is expected to be. In other words, this null effect had revealed an important boundary condition for encoding flexibility in VSTM. Rather than condemn these data to the file drawer, shelved as a dead-end line of enquiry, we decided that we should definitely try to publish this important, and somewhat surprising, null effect. We decided PLoS One would be the perfect home for this kind of robust null effect. The experimental designs were sensible, with a logical progression of manipulations, the experiments were well-conducted, and the data were otherwise clean. There was just no evidence that our key manipulations influenced short-term memory performance.

As we were preparing our manuscript for submission, a highly relevant paper by Zhang and Luck came out in Psychological Science (see here). Like us, they found no evidence that people can strategically alter the trade-off between remembering many items poorly and remembering few items well. If it is possible to be scooped on a null effect, then I guess we were scooped! But in a way, the precedent only increased our confidence that our null effect was real and interesting, and definitely worth publishing. Further, PLoS One is also a great place for replication studies, so surely a replication of a null effect makes it doubly ideal!

For further details, see:

Murray, Nobre & Stokes (2012) Lacking control over the trade-off between quality and quantity in VSTM. PLoS One

Murray, Nobre & Stokes (2011). Markers of preparatory attention predict visual short-term memory. Neuropsychologia, 49:1458-1465.

Zhang W, Luck SJ (2011) The number and quality of representations in working memory. Psychol Sci, 22:1434-1441.