Publication date: 15 May 2017
Source: NeuroImage, Volume 152
Author(s): Michael Eickenberg, Alexandre Gramfort, Gaël Varoquaux, Bertrand Thirion
Convolutional networks used for computer vision represent candidate models for the computations performed in mammalian visual systems. We use them as a detailed model of human brain activity during the viewing of natural images by constructing predictive models based on their different layers and BOLD fMRI activations. Analyzing the predictive performance across layers yields characteristic fingerprints for each visual brain region: early visual areas are better described by lower level convolutional net layers and later visual areas by higher level net layers, exhibiting a progression across ventral and dorsal streams. Our predictive model generalizes beyond brain responses to natural images. We illustrate this on two experiments, namely retinotopy and face-place oppositions, by synthesizing brain activity and performing classical brain mapping upon it. The synthesis recovers the activations observed in the corresponding fMRI studies, showing that this deep encoding model captures representations of brain function that are universal across experimental paradigms.
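Below is a minimal sketch, not the authors' code, of the kind of layer-wise encoding analysis the abstract describes: activations from several CNN layers are regressed onto voxel-wise BOLD responses with ridge regression, and held-out predictive accuracy per layer gives each voxel a "layer fingerprint". The layer names, array shapes, and random data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_voxels = 200, 500
# Hypothetical pre-extracted CNN activations, one feature matrix per layer
layer_features = {
    "conv1": rng.standard_normal((n_images, 96)),
    "conv3": rng.standard_normal((n_images, 384)),
    "fc7": rng.standard_normal((n_images, 4096)),
}
bold = rng.standard_normal((n_images, n_voxels))  # voxel responses per image

scores = {}
for layer, X in layer_features.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, bold, test_size=0.25, random_state=0)
    # Ridge regression from layer activations to all voxels at once,
    # with the regularization strength chosen by cross-validation
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Per-voxel predictive R^2 (fraction of held-out variance explained)
    ss_res = ((y_te - pred) ** 2).sum(axis=0)
    ss_tot = ((y_te - y_te.mean(axis=0)) ** 2).sum(axis=0)
    scores[layer] = 1.0 - ss_res / ss_tot

# Each voxel's "preferred" layer: the layer with the highest held-out R^2,
# giving the characteristic fingerprint discussed in the abstract
layer_names = list(scores)
score_matrix = np.stack([scores[l] for l in layer_names])  # (n_layers, n_voxels)
preferred_layer = np.array(layer_names)[score_matrix.argmax(axis=0)]
```

On real data the feature matrices would come from a pretrained convolutional network applied to the stimulus images, and the fitted per-layer weights could then be used to synthesize predicted responses for new stimuli, as in the retinotopy and face-place experiments mentioned above.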
http://ift.tt/2m2ffaN