Tuesday, 23 April 2013

In the news: Decoding dreams with fMRI

Recently, Horikawa and colleagues from ATR Computational Neuroscience Laboratories in Kyoto, Japan, caused a media sensation with the publication of a study in Science that provides the first proof of principle that non-invasive brain scanning (fMRI) can be used to decode dreams. Rumblings were already heard in various media circles after Yuki Kamitani presented their initial findings at the annual meeting of the Society for Neuroscience in New Orleans last year [see Mo Costandi's report]. But now that the peer-reviewed paper has been officially published, the press releases have gone out and the journal embargo has been lifted, there has been a media frenzy [e.g., here, here and here]. The idea of reading people's dreams was always bound to attract a lot of media attention.

OK, so this study is cool. OK, very cool - what could be cooler than reading people's dreams while they sleep!? But is this just a clever parlour trick, using expensive brain imaging equipment? What does it tell us about the brain, and how it works?

First, to get beyond the hype, we need to understand exactly what they have, and have not, achieved in this study. Research participants were put into the narrow bore of an fMRI scanner for a series of mid-afternoon naps (up to 10 sessions in total). With the aid of simultaneous EEG recordings, the researchers were able to detect when their volunteers had slipped off into the earliest stages of sleep (stage 1 or 2). At this point, they were woken and questioned about any dream they could remember, before being allowed to go back to sleep. As soon as the EEG registered evidence of early-stage sleep again, they were once more awoken, questioned, and allowed back to sleep. So on and so forth, until at least 200 distinct awakenings had been recorded.

After all the sleep data were collected, the experimenters analysed the verbal dream reports using a semantic network analysis (WordNet) to help organise the contents of the dreams their participants had experienced during the brain scans. The results of this analysis could then be used to systematically label the dream content associated with the sleep-related brain activity recorded earlier.
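The core idea behind this kind of semantic analysis is to collapse the many specific words in a dream report into a small set of broader categories by climbing a hypernym ("is-a") hierarchy. The sketch below illustrates the principle with a tiny hand-rolled hypernym table; the actual study used the real WordNet database, and these entries and category names are purely illustrative.

```python
# Toy hypernym table standing in for WordNet (illustrative only; the
# study used the real WordNet lexical database).
HYPERNYMS = {
    "woman": "person", "man": "person", "teacher": "person",
    "street": "scene", "building": "scene",
    "car": "object", "key": "object",
    "person": None, "scene": None, "object": None,
}

def base_category(word):
    """Climb the hypernym chain until reaching a top-level category."""
    while HYPERNYMS.get(word) is not None:
        word = HYPERNYMS[word]
    return word

# A hypothetical verbal dream report, reduced to coarse content labels
report = ["woman", "street", "car", "teacher"]
labels = sorted({base_category(w) for w in report})
print(labels)  # ['object', 'person', 'scene']
```

Grouping reports this way is what makes the decoding problem tractable: instead of one label per unique word, brain activity only has to be mapped to a manageable number of coarse content categories.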

Having identified the kind of things their participants had been dreaming about in the scanner, the researchers then searched for actual visual images that best matched the reported content of dreams. Scouring the internet, the researchers built up a vast database of images that more or less corresponded to the contents of the reported dreams. In a second phase of the experiment, the same participants were scanned again, but this time they were fully awake and asked to view the collection of images that were chosen to match their previous dream content. These scans provided the research team with individualised measures of brain activity associated with specific visual scenes. Once these patterns had been mapped, the experimenters returned to the sleep data, using the normal waking perception data as a reference map.

If it looks like a duck...

In the simplest possible terms, if the pattern of activity measured during one dream looks more like the activity associated with viewing a person than the activity associated with seeing an empty street scene, then, forced to guess, you should say that the dream probably contains a person. This is the essence of their decoding algorithm. They use sophisticated ways to characterise patterns in fMRI activity (support vector machines), but essentially the idea is simply to match up, as best they can, the brain patterns observed during sleep with those measured during wakeful viewing of corresponding images. Their published result is shown on the right for different areas of the brain's visual system. Lower visual cortex (LVC) includes primary visual cortex (V1) and areas V2 and V3; higher visual cortex (HVC) includes lateral occipital complex (LOC), fusiform face area (FFA) and parahippocampal place area (PPA).
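The matching logic above can be sketched in a few lines. This is a deliberately stripped-down stand-in for the paper's support vector machine approach: synthetic random vectors play the role of voxel patterns, and a simple correlation-based nearest-match rule replaces the trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Synthetic "waking" reference patterns: one mean voxel pattern per
# content category (in the real study, measured while participants
# viewed images matched to their reported dream content).
reference = {
    "person": rng.normal(0.0, 1.0, n_voxels),
    "street": rng.normal(0.0, 1.0, n_voxels),
}

def decode(sleep_pattern, reference):
    """Guess dream content by correlating a sleep-period pattern with
    each waking reference pattern and picking the best match."""
    scores = {label: np.corrcoef(sleep_pattern, ref)[0, 1]
              for label, ref in reference.items()}
    return max(scores, key=scores.get)

# Simulate a dream containing a person: person pattern plus noise
dream = reference["person"] + rng.normal(0.0, 0.5, n_voxels)
print(decode(dream, reference))  # should print "person"
```

The real pipeline differs in many details (feature selection, multi-label SVM decoders, time windows relative to awakening), but the underlying logic is this kind of pattern matching between sleep and waking data.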

Below is a more creative reconstruction of this result. The researchers have put together a movie based on one set of sleep data taken before waking. Each frame represents the visual image from their database that best matches the current pattern of brain activity. Note that the image gets clearer towards the end of the movie because that brain activity is nearer to the time point at which the participant was woken, and therefore its content was more likely to be described at waking. If the content at other times did not make it into the verbal report, then the dream activity would be difficult to classify, because the corresponding waking data would not have been entered into the image database. This highlights how the approach only really works for content that has been characterised using the waking visual perception data.


[Embedded video: frame-by-frame dream reconstruction]

OK, so these scientists have decoded dreams. The accuracy is hardly perfect, but still, the results are significantly above chance, and that's no mean feat. In fact, it has never been done before. But some might still say: so what? Have we learned anything new about the brain? Or is this just a lot of neurohype?

Well, beyond the tour de force technical achievement of actually collecting this kind of multi-session simultaneous fMRI/EEG sleep data, these results also provide valuable insights into how dreams are represented in the brain. As in many neural decoding studies, the true purpose of the classifier is not really to make perfectly accurate predictions, but rather to work out how the brain represents information by studying how patterns of brain activity differ between conditions [see previous post]. For example, are there different patterns of visual activity during different types of dreams? Technically, this could be tested by just looking for any difference in activity patterns associated with different dream content. In machine-learning language, this could be done using a cross-validated classification algorithm. If a classifier trained to discriminate activity patterns associated with known dream states can then make accurate predictions about new dreams, then it is safe to assume that there are reliable differences in activity patterns between the conditions. However, this only tells you that activity in a specific brain area differs between conditions. In this study, they go one step further.
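To make the cross-validation idea concrete, here is a minimal sketch on synthetic data: a nearest-centroid classifier evaluated with leave-one-out cross-validation. If held-out patterns are classified above chance, the two conditions must differ reliably. The data, classifier, and numbers are all invented for illustration; the study's actual analyses were far richer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_per_class = 30, 20

# Synthetic activity patterns for two dream conditions, built around
# two different mean patterns (i.e., a reliable condition difference).
mean_a = rng.normal(0, 1, n_voxels)
mean_b = mean_a + rng.normal(0, 0.8, n_voxels)
X = np.vstack([mean_a + rng.normal(0, 1, (n_per_class, n_voxels)),
               mean_b + rng.normal(0, 1, (n_per_class, n_voxels))])
y = np.array([0] * n_per_class + [1] * n_per_class)

def loo_nearest_centroid(X, y):
    """Leave-one-out cross-validation with a nearest-centroid classifier:
    hold out one pattern, fit centroids on the rest, predict the held-out
    pattern, and repeat for every sample."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        centroids = [X[mask & (y == c)].mean(axis=0) for c in (0, 1)]
        pred = int(np.argmin([np.linalg.norm(X[i] - c) for c in centroids]))
        correct += (pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(f"cross-validated accuracy: {acc:.2f}")  # well above 0.5 chance here
```

The point of the cross-validation is purely inferential: above-chance accuracy on held-out data is evidence that the activity patterns genuinely differ between conditions, not just by chance overfitting.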

By training the dream decoder using only patterns of activity associated with the visual perception of actual images, they can also test whether there is a systematic relationship between the way dreams are represented and how actual everyday perception is represented in the brain. This cross-generalisation approach helps isolate the shared features between the two phenomenological states. In my own research, we have used this approach to show that visual imagery during normal waking selectively activates patterns in high-level visual areas (lateral occipital complex: LOC) that are very similar to the patterns associated with directly viewing the same stimulus (Stokes et al., 2009, J Neurosci). The same approach can be used to test for other coding principles, including higher-order properties such as position invariance (Stokes et al., 2011, NeuroImage), or the pictorial nature of dreams, as studied here. As in our previous findings during waking imagery, Horikawa et al show that the visual content of dreams shares similar coding principles with direct perception in higher visual brain areas. Further research, using a broader base of comparisons, will provide deeper insights into the representational structure of these inherently subjective and private experiences.
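Cross-generalisation differs from ordinary cross-validation in that training and test data come from different cognitive states. A minimal sketch of the logic, again on invented data: each category has a shared underlying pattern plus a state-specific offset (one for "perception", one for "dreaming"); a decoder fit only on perception data is then tested on dream data. Above-chance transfer implies the two states share a representational code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 40

# Shared category code, plus a state-specific component that differs
# between perception and dreaming (all synthetic, for illustration).
shared = {c: rng.normal(0, 1, n_voxels) for c in (0, 1)}
state_shift = {"perception": rng.normal(0, 0.5, n_voxels),
               "dream": rng.normal(0, 0.5, n_voxels)}

def samples(state, c, n):
    """Noisy patterns for category c measured in a given state."""
    return shared[c] + state_shift[state] + rng.normal(0, 1, (n, n_voxels))

# Train a nearest-centroid decoder on perception data only
centroids = {c: samples("perception", c, 20).mean(axis=0) for c in (0, 1)}

# Test on dream data: transfer is only possible via the shared code
test_X = np.vstack([samples("dream", 0, 20), samples("dream", 1, 20)])
test_y = np.array([0] * 20 + [1] * 20)
pred = [min((np.linalg.norm(x - centroids[c]), c) for c in (0, 1))[1]
        for x in test_X]
acc = np.mean(np.array(pred) == test_y)
print(f"cross-decoding accuracy: {acc:.2f}")
```

If the two states used unrelated codes (no `shared` component), this train-on-perception, test-on-dreams accuracy would sit at chance even when each state was perfectly decodable on its own.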

Many barriers remain for an all-purpose dream decoder

When the media first picked up this story, the main question I was asked went something like: are scientists going to be able to build dream decoders? In principle, yes: this result shows that a well-trained algorithm, given good brain data, is able to decode some of the content of dreams. But as always, there are plenty of caveats and qualifiers.

Firstly, the idea of downloading people's dreams while they sleep is still a very long way off. This study shows that, in principle, it is possible to use patterns of brain activity to infer the contents of people's dreams, but only at a relatively coarse resolution. For example, it might be possible to distinguish between patterns of activity associated with a dream containing people or an empty street, but it is another thing entirely to decode which person, or which street, not to mention all the other nuances that make dreams so interesting.

To boost the 'dream resolution' of any viable decoding machine, the engineer would need to scan participants for much, MUCH longer, using many more visual exemplars to build up an enormous database of brain scans to use as a reference for interpreting more subtle dream patterns. In this study, the researchers took advantage of prior knowledge of specific dream content to limit their database to a manageable size. By verbally assessing the content of dreams first, they were able to focus on just a relatively small subset of all the possible dream content one could imagine. If you wanted to build an all-purpose dream decoder, you would need an effectively infinite database, unless you could discover a clever way to generalise from a finite set of exemplars to reconstruct infinitely novel content. This is an exciting area of active research (e.g., see here).

Another major barrier to a commercially available model is that you would also need to characterise this data for each individual person. Everyone's brain is different, unique at birth and further shaped by individual experiences. There is no reason to believe that we could build a reliable machine to read dreams without taking this kind of individual variability into account. Each dream machine would have to be tuned to each person's brain.


Finally, it is also worth noting that the method used in this experiment requires some pretty expensive and unwieldy machinery. Even if all the challenges set out above were solved, it is unlikely that dream readers for the home will be hitting the shelves any time soon. Other cheaper and more portable methods for measuring brain activity, such as EEG, can only really be used to identify different sleep stages, not what goes on inside them. Electrodes placed directly into the brain could be more effective, but at the cost of invasive brain surgery.


For the moment, it is probably better just to keep a dream journal.

Reference:


Horikawa T, Tamaki M, Miyawaki Y & Kamitani Y (2013). Neural decoding of visual imagery during sleep. Science, 340, 639-642 [here]

Comments:

  1. Can it really be called "dreaming" though?
    The scans and verbal reports they get, all come from non-REM sleep. From some reading of Michel Jouvet's work I vaguely remember that the content of REM and non-REM "dreaming" is qualitatively different. I am not sure how that limits the interpretation of their work though.

  2. Disclaimers: Haven't read the paper, no knowledge of sleep/dreaming, only vaguely familiar with the technical aspects of decoders. Consider this a lay question!

    How does the temporal aspect of dream decoding enter into the equation? Clearly, the way that Kamitani has approached the problem there is a memory aspect, and the recall is necessarily imparted in real (experienced) time. But when I fall asleep for just a few seconds I sometimes have the sensation of having had vivid, extensive dreams that may have seemed to last for minutes or longer. Do we know much about the time scale of dreams as compared to our experienced, awake conscious experience? Or, to put it in terms that my brain can understand, do we know whether dreams play out "real time," like watching a movie, or at a different (faster?) time? If dreams dart about and occur faster than for movie watching, then decoding dreams from the results of (slow) classification to movie watching might be a poor way to decode dreams. Or, perhaps it's the ideal way, like a slow-motion, information-rich basis. I'm curious because if the time scale shifts then the BOLD information content would be predicted to be intrinsically different for movie watching versus dreaming, given the well-known low-pass temporal filtering in the hemodynamics.

    Why pose the question? I'm just curious whether there might be better ways to build model classifiers. Perhaps, for example, a better basis would be generated by watching movies at x10 acceleration. A visual bombardment, if you will. Or, perhaps slowing the movies down below real time would help the final decoding. I clearly need to give this more thought, just wondering if anyone else is thinking about the temporal aspects of dreams (and movie watching) as they pertain to BOLD responses.
