Oftentimes I feel restrained by the unfortunate, inherent shortcomings of medical and functional imaging data. The entire process is subject to a vast array of variability, assumptions, and “warpings” in many senses of the word. However minute the calculations and interpolations may be, they carry an overarching sense of being haphazard. Yet differences between sets of such arrays can account for, and point to, “effects”. Enter functional imaging.
Neural activation can be relayed and inferred when one image of the brain varies from another, as due to some cause. That’s the bare bones of what is called “univariate analysis”. Data collected at one time, say when a subject is resting, can be shown to differ significantly from data collected while they are engaged in a task. Fitting the brain and all the data into a matrix allows comparisons and differences to be spatially represented. It’s very similar to “Photo Hunt”-style games, where players compare two side-by-side, macroscopically identical pictures and circle the regions which contain some sort of shift (i.e. in one picture a person may have a ring on their finger, but in the other they do not [For those deprived of the bar-bound touch screens which frequently feature the pastime, here is an online version]). Comparing differences in brain data allows scientists to see which regions of the brain can be related to which causes, as determined by the design of their study. Until recently, this type of straightforward comparison has been the go-to method for fMRI. The regions showing the most change between two states are considered the most correlated and thus, to please the physicalists, the most responsible for the perception, memory, or any other study-defined cognitive function in question.
Thanks to exponential progress in the field of machine learning, we can now see that there is more than initially meets the eye as to which regions of the brain can be deemed “responsible” for correlating brain activity with performance, task, memory, etc. It is no longer just the difference, space by space, between images, but the difference in the patterns of activation between images. The previous “Photo Hunt” analogy would have to be transposed as such: instead of only detecting that one picture has a person with a ring and the other does not, you could also extract which material that ring is made of, to the point where even if both photos had the ring, a pattern assessment of the photo would reveal the differences between the two based on the makeup of the ring. This hypothetical situation is meant to be purely conceptual; with just a photo, it is not feasible to infer the makeup of an object when all other variables are constant. However, the idea of seemingly identical presentations carrying different content is key to understanding exactly what pattern-based differentiation between images is doing.
Take a simple matrix below:
It’s easy to see that there is nothing in this matrix. Let’s call this the Matrix of Rest.
In this matrix, it’s immediately apparent that six blue squares are now colored in. Let’s say that each of these is akin to a ring, to stay consistent with the Photo Hunt example.
Therefore, it’s easy to say that the first image differs from the second: within this region, there are six rings where the first had none. The first is significantly different from the second.
Easy enough. To carry this over to brain images, imagine each cell of the matrix as a small region of the brain (a voxel, a cubic region of brain space). In the first image, these voxels are operating at baseline, but in the second, six voxels are operating at a level significantly higher than baseline. Therefore, we could say that this particular 4×6 region of space is more active in the second image. If the second image is taken at a time when a particular, measurable task is going on that wasn’t going on during the first image, then you have yourself a univariate, spatial finding of a supposed causal relationship between neuronal processes and task-related behavior. Granted, there is a lot being inferred here, mainly due to the ad-hoc assumptions behind BOLD signals, but that’s a debate for another time.
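To make the comparison concrete, here’s a minimal NumPy sketch of the two matrices above. The 4×6 shape and the six activated cells are stand-ins for the figures, not real data, and the choice of which cells light up is arbitrary:

```python
import numpy as np

# Hypothetical 4x6 "brain region": each cell is one voxel's activation level.
rest = np.zeros((4, 6))  # Matrix of Rest: every voxel at baseline

task = np.zeros((4, 6))  # the same region during a task
task[[0, 1, 2, 2, 3, 3], [1, 3, 0, 5, 2, 4]] = 1.0  # six voxels above baseline

# Univariate view: compare overall activation between the two states.
difference = task.sum() - rest.sum()
print(difference)  # 6.0 -> the region is "more active" during the task
```

This is the whole univariate story in miniature: a single summary number per region, with the spatial arrangement of the active voxels thrown away.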
Now, presenting the next image may make the explanation that follows seem somewhat intuitive, but that’s only due to its inherent ingenuity. This is a spectacular and fascinating concept that has only been applied in neuroscience for the last ten or so years.
This matrix differs from the first one in exactly the same way that the second one does. It has six “rings” in it, where the first one had none. Just as we said of the second image, this one could be correlated with any task-related change in brain activity. Essentially, this 4×6 region of brain space could be deemed “responsible” for the task at hand.
However, the pattern of which positions in the matrix these six rings occupy differs between the second and third matrices. This is where there is even more information. These arrangement patterns can, essentially, further differentiate between two sets of images. For example, we may know that there are six rings, but upon assessing the arrangement of the patterns, we can then know whether the rings in the images are made of gold or of silver. Translated into actual cognitive-neuroscience applications, this would be like identifying this region of brain space as responsible for perceiving objects. However, by assessing more than just the summed activity in the region and instead differentiating the patterns, one could theoretically differentiate between different types of objects. For example, an umbrella might trigger six voxels of activation in this region, and a bookshelf may recruit six voxels as well. It would therefore be safe to say that this particular region recruits six voxels when the subject is viewing objects. Additionally, since the six voxels may arrange themselves differently, it would be possible to identify, through the patterns alone, which type of object the subject is viewing, rather than just the fact that they are viewing an object instead of nothing.
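The umbrella/bookshelf example can be sketched directly. In this hypothetical, both conditions activate exactly six voxels in the same 4×6 region, so the univariate summary cannot tell them apart, while the patterns themselves can; the specific voxel positions are invented for illustration:

```python
import numpy as np

# Two hypothetical conditions, same region, six active voxels each.
umbrella = np.zeros((4, 6))
umbrella[[0, 0, 1, 2, 3, 3], [0, 5, 4, 3, 1, 4]] = 1.0

bookshelf = np.zeros((4, 6))
bookshelf[[1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2]] = 1.0

# Univariate summary: identical -- both conditions "recruit six voxels".
print(umbrella.sum() == bookshelf.sum())    # True

# Multivariate view: the spatial arrangements are different.
print(np.array_equal(umbrella, bookshelf))  # False
```

Same total activation, different pattern: exactly the “same six rings, different metal” situation described above.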
A favorite scene of mine comes to mind when trying to conceptualize this idea. It’s from I ❤ Huckabees, when the “detective” is trying to explain existentialism. Here’s a clip of it below.
Think of the flat sheet as a resting-state brain rendered as a flat image; a single slice of the brain. Each object placed under the sheet can then be thought of as a task the subject is doing, which causes a spike in activation in that region, or a rise in the sheet as seen physically in the demonstration.
Putting aside the utter beauty of the realization that “everything is connected”, brain imaging up until this point was like having this sole sheet, which could identify temporal and spatial rises in sections of the sheet and correlate and compare them with different brain functioning. However, by seeing the patterns which give rise to seemingly identical spikes, there is a much deeper realization to be had about the actual causes of each spike, allowing for more distinct classifications of decoded brain activity and their relevant functional implications. Now, it’s like having a very large number of sheets which can each lie over a separate part of the formerly whole object, allowing the topography of the object to emerge. Think of one giant sheet over the Statue of Liberty. Covered in such a manner, it would appear similar to just a cylinder rising into the sky, comparable to many other structures of its height. But with one sheet for each spike of the crown, one sheet for the torch, one for the arm, and so on, until each area of the statue is covered to the same extent that the single sheet managed, it would be much easier to distinguish the Statue of Liberty from another structure “sheeted” in the same manner.
Therefore, it is the patterns of activation which carry the greatest amount of information under the current limitations of fMRI. We find these patterns through MVPA (multivariate pattern analysis).
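One of the simplest flavors of MVPA is template matching: label a new activation pattern by whichever known condition pattern it correlates with most strongly. The sketch below continues the hypothetical umbrella/bookshelf templates and uses simulated noise in place of real trial data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template patterns: six active voxels each, different arrangements.
umbrella = np.zeros((4, 6))
umbrella[[0, 0, 1, 2, 3, 3], [0, 5, 4, 3, 1, 4]] = 1.0

bookshelf = np.zeros((4, 6))
bookshelf[[1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2]] = 1.0

def classify(pattern):
    """Correlation-based MVPA: pick the template the pattern matches best."""
    r_umbrella = np.corrcoef(pattern.ravel(), umbrella.ravel())[0, 1]
    r_bookshelf = np.corrcoef(pattern.ravel(), bookshelf.ravel())[0, 1]
    return "umbrella" if r_umbrella > r_bookshelf else "bookshelf"

# A noisy "new trial" that resembles the umbrella template.
trial = umbrella + rng.normal(0.0, 0.2, size=umbrella.shape)
print(classify(trial))
```

Real MVPA pipelines train proper classifiers on many trials and cross-validate, but the core move is the same: decode the condition from the arrangement of activity, not from its total.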
The Details… (Coming Soon).