Reading Response Archive

The following list of responses has three levels of organization. The top level is a section header, indicating the reading (one or two papers, indicated by author and year) to which the responses in that section relate. The next level is numbered items; each one of these is a question or comment made by one of the reading groups in response to that particular reading.

The deepest level of the list is responses to those original reading responses, and it is hidden by default. You can click any section header or numbered response to show or hide these meta-responses. You may also click the links in this sentence to Show All Meta-Responses or Hide All Meta-Responses.

Note that when meta-responses are “hidden” they're actually just tiny and invisible; the text is still there, so you can use the Find feature of your browser to look for particular keywords. (Try it yourself: select several responses below. See the little white lines between responses? That's the tiny hidden text of the meta-responses.) When un-hidden, each meta-response is in a colored box depending on its authors: Jadrian, Group 0, Group 1, and Group 2. (You can click the author names in the last sentence to show or hide them, too.)

Beaulieu 2002

  1. What, exactly, are attenuation curves? Why would diffusion be slowing down or decreasing in intensity over time? Are these the same thing as “diffusion decay curves?”

    The name “attenuation curve” is not a jargon term with a specific meaning. Instead, it means simply what it says: a curve that indicates the attenuation of some value. In this context, Beaulieu is referring to plots of experimentally measured MRI signal vs. some other variable, with other variables kept fixed. For example, the b-value is often the independent variable for such a plot. The Stejskal-Tanner signal equation predicts exponential decay of the signal value as b increases (monoexponential behavior), which would lead to straight-line plots if the Y (signal) axis were log-scaled. However, in some tissues we observe nonmonoexponential behavior.

    “Diffusion decay curves” is a similar phrase; it is not a technical term either, but rather refers to some of the plots in the paper: the same sort of plots of MRI signal vs. another variable, in particular for diffusion MRI and the variables specifically related to it (b-value or diffusion coefficient).
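
    To make the shape of these curves concrete, here is a minimal sketch (ours, not from the paper; the diffusivities and volume fractions are invented) that plots a monoexponential attenuation curve next to a biexponential one. On a log-scaled signal axis, the first is a straight line and the second visibly curves.

        import numpy as np
        import matplotlib.pyplot as plt

        b = np.linspace(0, 3000, 100)   # b-values in s/mm^2
        D = 0.7e-3                      # a typical tissue ADC in mm^2/s

        mono = np.exp(-b * D)           # Stejskal-Tanner: S/S0 = exp(-b*D)
        # Hypothetical two-compartment signal: fast and slow diffusing pools
        bi = 0.7 * np.exp(-b * 1.0e-3) + 0.3 * np.exp(-b * 0.2e-3)

        plt.semilogy(b, mono, label="monoexponential (straight on log axis)")
        plt.semilogy(b, bi, label="biexponential (curved on log axis)")
        plt.xlabel("b-value (s/mm^2)")
        plt.ylabel("normalized signal S/S0")
        plt.legend()
        plt.show()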

  2. What is the diffusion-sensitizing gradient factor? How does this help in the understanding of the graph labeled figure ?

    The diffusion-sensitizing gradient factor is the formal name for what we often otherwise call the b-value. It is a property of the chosen MRI acquisition pulse sequence. For simple pulse sequences, it may be computed explicitly, but more often it is determined empirically by taking many measurements of the MRI signal using one or more calibration samples with known diffusivity.

    The diffusion-sensitizing gradient factor is what we also call the b-value. When we acquire an MRI signal, the b-value is the part of the scan settings that we can change. By selecting multiple b-values in the Stejskal-Tanner equation, we can obtain different signal amplitudes (and compute the diffusion coefficient as seen in the Mori and Zhang paper). The graphs in the Beaulieu paper show how increasing the diffusion-sensitizing gradient factor affects the signal acquired in various cells.

  3. For any given point, is there only one value of ADC? Is the ADC at a given point a function of direction?

    The fundamental thesis of this paper is that the ADC (apparent diffusion coefficient) at a given point varies with direction, when observed in white matter or in peripheral nerves. Specifically, ADC is higher when measured in the direction along (parallel to) the nerve fibers than when measured across (perpendicular to) them.

    ADC is the D variable in the Stejskal-Tanner diffusion signal equation. It may therefore be computed as the slope of a well-fitting line to a monoexponential portion of a plot of log(signal) vs. b-value. When this plot does not appear linear across the whole range of b-values, we have non-monoexponential behavior: a dependence of ADC not only on direction but also on b-value.

    For any given voxel from a particular scan there will be a resulting apparent diffusion coefficient that represents diffusion along (parallel to) the chosen gradient. This ADC (apparent diffusion coefficient) is the D variable in the Stejskal-Tanner diffusion signal equation and can be read as the slope of a well-fit line to a monoexponential portion of a plot of log(signal) vs. b-value. Since the measured diffusivity of a single scan is directional (gradient-dependent), a more meaningful analysis of a voxel requires multiple ADC values to be calculated and then averaged; in this sense, some ADC values are less directional than others.
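
    To make the “slope of a well-fit line” idea concrete, here is a minimal sketch (ours, not from the paper; the signal values are synthetic) that recovers D by least-squares fitting of log(signal) vs. b-value:

        import numpy as np

        # Synthetic measurements from S = S0 * exp(-b*D), with invented values
        rng = np.random.default_rng(0)
        b = np.array([0., 250., 500., 750., 1000.])    # s/mm^2
        S0_true, D_true = 1000.0, 0.8e-3               # D in mm^2/s (hypothetical)
        S = S0_true * np.exp(-b * D_true) * (1 + 0.01 * rng.standard_normal(b.size))

        # log(S) = log(S0) - b*D, so a straight-line fit gives -D as the slope
        slope, intercept = np.polyfit(b, np.log(S), 1)
        print("estimated ADC:", -slope, "mm^2/s")      # close to 0.8e-3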

  4. How does NMR work?

    That's a big question! We'll answer it (partially) in later readings. Specifically, see Mori & Zhang 2006, Mori & Barker 1999, and Hobbie 1997.

  5. While we know what T2 stands for, we still don’t know what it is. What exactly is the transverse relaxation time and how does it relate to anisotropic diffusion?

    For our purposes, T2 is just a different MRI “modality” — a different way of using MRI to examine tissues, which measures different properties of that tissue and therefore gives us different information than diffusion MRI does. In particular, at the time of the writing of this paper, T2-weighted imaging was a much more mature technology than diffusion MRI. Even now it provides a much higher-resolution, lower-noise, and lower-distortion image of the brain than diffusion MRI can create, though (again) it tells us different information about the tissue. Importantly, T2-weighted images show basically no meaningful detail in the white matter.

    T2 is explained in more detail in Hobbie 1997. See also the Wikipedia articles on T2 relaxation, the spin echo, and NMR relaxation in general.

    T2 is essentially an alternative way to examine tissues that is not based on the diffusion of water molecules. More specifically, T2 refers to the transverse relaxation time of a molecule (or the behavior of a molecule's signal decay in an MRI scan -- see the Hobbie paper, as well as the Wikipedia articles mentioned above). T2-weighted imaging has been in use for a much longer time than diffusion-weighted imaging and, as a result, provides us with higher-resolution and lower-noise images, but the properties it measures are more useful in some tissues than in others. The advantage that diffusion-weighted imaging provides is a much more detailed image of highly structured tissue, such as white matter, based on how dependent diffusion is on that structure. The Mori and Barker paper also has a helpful explanation of T2 relaxation in its “Conventional MRI” section.

  6. The equation which relates the b-value to signal intensity (I = I0·e^(-bD)) doesn’t really mean anything to us.

    You'll notice that a lot of things in this class don't mean anything to you at first. But let's pick it apart, at least. We can see from this equation that we would expect the signal (I) to decrease (or “attenuate”) according to exponential decay with either increasing b-value or increasing D-value. The absolute scale of I depends on the “unweighted” signal value, I0.

    This equation, derived theoretically from the first principles of NMR (nuclear magnetic resonance) in 1965 by Stejskal and Tanner, allows us to empirically measure values related to the apparent diffusion coefficient of the sample, remotely and non-invasively. This article describes several different aspects of neural microarchitecture that have been investigated using this principle.

    Keep in mind that, even after a couple decades of research, diffusion MRI is in its infancy. It's like when X-rays were first discovered: it was immediately apparent that they were useful, but further investigation revealed all kinds of unexpected uses for the technology. Diffusion MRI provides a unique perspective on the microstructure of biologically important tissues. This paper is a survey of some of the big questions being asked about the relationship between the measurements we can make with it and the actual underlying biological phenomena, and describes various efforts at better understanding those relationships.

    The Stejskal and Tanner equation I = I0·e^(-bD) (we also use S and S0 in place of I and I0, respectively) describes the relationship between diffusion, b-values, and signal strength. Specifically, I (the signal) decays exponentially as the b-value or D increases. This equation forms the basis of diffusion MRI, since it allows for the non-invasive measurement of anisotropy within the brain. I0 (the base signal) can be found by setting the b-value to zero. By using I0 and multiple b-values, the ADC (apparent diffusion coefficient) can be calculated.

  7. What exactly does “susceptibility” mean in this context?

    Let's see if Wikipedia knows. Magnetic susceptibility is a proportionality constant, different for different materials, that relates to how susceptible a material is to being magnetized in the presence of a magnetic field. If the susceptibility of a particular material is incorrectly estimated, this may cause errors in the interpretation of MRI signals.

    The paper briefly explores the hypothesis that anisotropy in ADC is actually an artifact of susceptibility anisotropy in the neural tissue; that is, the hypothesis that the different ADC values computed in different directions are in fact erroneous. The paper concludes that the answer to this hypothesis is “no”.

    Susceptibility continues to be a concern in diffusion MRI analysis because of the possibility of “susceptibility artifacts”. Susceptibility acts like a sort of refractive index for magnetic fields, distorting them near junctures between materials of differing susceptibility. Imaging techniques exist to correct for this phenomenon, but left uncorrected, a heavily distorted image can result.

  8. The article does not describe how scientists are able to learn so much about water diffusion in vivo or in vitro, and we would like to know more about how the technology works.

    I disagree; it explains a lot about how scientists attempt to draw conclusions from their observations based on the theory of what those observations mean. I think what this comment is talking about is that the paper doesn't describe where that theory comes from, and the actual process by which these observations are conducted. That is covered later in the physics-related papers mentioned above.

    The actual process by which water diffusion is observed is described in more detail in the Hobbie paper. With regards to comparisons of in vivo or in vitro scans, the Mori and Zhang paper discusses how diffusion-weighted imaging is affected based on whether you’re scanning in vivo, in vitro, or ex vivo samples -- scanning time is less of an issue in an ex vivo study (since live subjects can’t sit in a scanner for as long), for example, which means we can often obtain a better image than in vivo. The paper also provides more detail about the process of excitation and dephasing which is used to learn about water diffusion in the brain, so we recommend this paper in addition to Hobbie.

  9. It would also be interesting to know why anisotropic diffusion is important to the function of a healthy nervous system, or whether it’s simply a coincidence of how the nervous system is built. Further research could focus on this, if it is not a topic that has already been explored. One possible direction for such research would be the effectiveness of neural transmission with respect to changing ADC ratios.

    It's not so much that anisotropic diffusion itself is important for healthy nervous function, but instead that anisotropic diffusion is an indicator of normal tissue in the white matter. So yes, in the healthy adult brain, anisotropy of ADC in white matter is a consequence of the well-organized microstructure of the tissue. We see disturbances of anisotropy in cases such as injury, disease, or toxicity. The article goes into various potential explanations of what micro-scale phenomena (cell death, swelling, breakdown of cellular components, etc.) might cause the various effects on diffusivity that we seem to observe.

    It's important to note that, in terms of the relationship between ADC, biological microstructure, and neural functioning, ADC is a dependent variable. It's not something we can control; it's something that we observe in relation to other factors.

    While there is a causal relationship between anisotropic diffusion and tissue health, it turns out that tissue health impacts anisotropic diffusion and not the other way around. Essentially, tissue health is what determines the diffusivity of water in the tissue. If the tissue is damaged, water flows more freely within it than in healthy tissue (damaged microstructures don't constrain water diffusivity as much as healthy ones), and thus the ADC is higher in damaged tissues. As a result, the ADC can be used to detect the presence of tissue damage, which can be attributed to a variety of possible factors. A healthy nervous system needs to reliably transport compounds around, and because of this it has very controlled water diffusivity.

  10. Are there any other body systems, such as the muscles, where anisotropy is or might be important, or could serve as a diagnostic tool?

    Yes. Tracking anisotropy helps us find problems in muscles in general, and is very useful in diagnosing heart problems. For reference, see Vilanova et al, section 4.4, page 25.

  11. Theoretically, what would happen if all perpendicular diffusion stopped? Can we examine this with computer models?

    As we discussed in class, we would never expect water molecules to cease perpendicular diffusion completely (unless they were passing through an infinitely small structure which forced them to diffuse in one direction only). Recall that a water molecule will tend to diffuse in all directions if unhindered (see the discussions of Brownian motion and isotropic diffusion in the Beaulieu paper). We could presumably simulate this, and it might be interesting, but we’d be altering the laws of physics. On the other hand, consider the implications if D = 0 in the equation S = S0·e^(-bD). No matter what the value of b, e^(-bD) = 1, a constant; in other words, S = S0 is unaffected by b. We see this behavior in background gradients in our MRI scans, as well as in areas of the brain that pretty much always show up dark in diffusion-weighted imaging, such as gray matter (which is one reason we don’t often use DWI to study them).

  12. On pages 441 and 442 Beaulieu discusses the role of myelin, axonal membranes, and neurofibrils in anisotropy. In the next section, "Restricted Diffusion and Compartmental Issues Related to Anisotropy", it seems as though the paper changes gears and looks at separate causes for anisotropy, but it is unclear how these two sections really differ. Aren't they discussing the same phenomena?

    Both sections discuss the phenomenon that water diffusivity in brain tissue is anisotropic. The sections Postulated Sources and Investigation of Diffusion Anisotropy and Restricted Diffusion and Compartmental Issues Related to Anisotropy differ in their focus: the former focuses primarily on demonstrating that myelin is not a requirement for ordered arrangements of fibers to affect the apparent diffusion coefficient, whereas the latter focuses on the effects of diffusion time and compartmental structuring on ADC measurements. Here, we talk a little bit more about the two parts (restricted diffusion and compartmental issues) of the second section.

    With respect to time, Beaulieu describes how interactions with membranes become more significant when scan time is increased. At very short scan times, ADCs appear almost isotropic, because many water molecules have simply not encountered any diffusion barriers. As diffusion time increases and more barriers to diffusion are encountered, ADC measurements better reflect underlying tissue structure. Beaulieu implies that diffusion time should be a major consideration when determining scan parameters. If too short a diffusion time is used, the water molecules will not interact with neural fibers, and the ADC measurements won’t reflect brain structures. If too long a diffusion time is used, signal will fall off considerably, and the effects of noise will become increasingly apparent.

    In the second half of this section, compartmental issues are discussed. Brain “compartments” can include intracellular, extracellular, neural, glial cellular, and axonal compartments, according to the examples given on pages 436 and 444. These represent distinct regions where water can diffuse (nearly) independently of other regions. Some issues regarding the nontrivial task of characterizing diffusion within individual compartments are addressed.

  13. Axonal transport is mentioned as a possible cause for anisotropy. My previous knowledge of axons made me assume that this meant the transfer of electrical pulses, but I was surprised to read that this meant the transport of cellular organelles, and I would like to read more on this.

    Since none of the other papers touch on this, here’s the wikipedia article:

    http://en.wikipedia.org/wiki/Axoplasmic_transport

    Essentially, it says that cell parts like mitochondria, proteins, and lipids move through the axons, with different materials moving at different rates.

  14. One other confusion was the mention of susceptibility in regards to "the static magnetic field". Is there some sort of omnipresent static magnetic field? Does this magnetic field or these gradients ever play a role in the diffusion of water?

    The static magnetic field, B0, is a magnetic field that is always present in an MRI scan. When we talk about applying gradients (which determine the b-value) in varying directions and orientations, the gradients are relative to the magnetic field that is already there. Think of it this way: we can excite water molecules and analyze differences in D based on a sensitivity to diffusion in different directions, but when b = 0, we are still picking up a signal from our scans (just one that is not sensitive to a particular direction of diffusion). This is only true because there is a magnetic field there in the first place.

  15. One critique of the paper we did have was how the authors glazed over the distinction between ADC(||) and ADC(⊥). We had assumed ADC(||) meant the ADC measured in the direction parallel to the cell structure and ADC(⊥) meant perpendicular to the cell structure; but we also reasoned that as long as the relationship between parallel and perpendicular held for any pair of directions, it could be used independently of the cell structure.

    Your first assumption is correct! ADC(||)/ADC(⊥) means the ratio of the apparent diffusion in the direction parallel to the fiber direction to the apparent diffusion in the direction perpendicular to it. Many of the experiments cited by Beaulieu involve pre-scan alignment of the sample fiber structure with the laboratory frame, where the researchers knew a priori in which direction to align their samples. The author notes that aligning the fiber frame and the laboratory frame “obviated the need for acquiring the full [diffusion] tensor” in many early studies. For instance, on page 437, at the beginning of the Early Observations section, Beaulieu cites a study which claims that “water diffusion was greater parallel (||) to the length of the fibers than perpendicular (⊥) [ADC(||)/ADC(⊥) = 1.4]”. When the author refers to ADC(||), he is referring to the results from a scan whose gradient was oriented parallel to the fibers. In more complex MRI scans (i.e., of brains) we don’t know beforehand how the fiber structures are oriented. Notably, ADC(||) and ADC(⊥) do not, in general, refer to diffusion that is parallel or perpendicular to the laboratory frame. Because diffusion can only be averaged per voxel, we have no way of distinguishing parallel flow from perpendicular flow within a voxel in more complex scenarios.

  16. The paper briefly mentioned the effects of neurotoxins in the brain. Generally speaking, we know neurotoxins as poison for the brain. It seems from the results that different neurotoxins can have radically different effects on the brain. However, the paper never mentioned what the neurotoxins were doing to the substructures, just that the introduction of the toxin changed the ADC ratio.

    The paper itself concedes that the “cellular and molecular mechanisms [of neurotoxins] are not fully understood” and our cursory research does not reveal any additional information. Although researchers work with these compounds and can predict the symptoms that go with their consumption, their exact workings are still a mystery. It is still unknown even how much methylmercury you need to ingest before you’re in danger of developing symptoms.

  17. The paper never mentioned how images of the brain, or information about the brain, are extrapolated using this phenomenon. We’re not sure if anisotropy is used as an imaging technique, or is merely a phenomenon that will appear differently through different images of the brain, or even how anisotropy is determined.

    We think that this question may have been resolved after our increased exposure to the topic. Anisotropic diffusion is a quality of water, and it is a quality that reliably corresponds to tissue structure. By creating images based on anisotropy we can reliably depict structures in the brain. Anisotropy is determined by measuring how water molecules diffuse, and if they diffuse more in certain directions. We store information about anisotropy in tensors because diffusion is 3-dimensional, and tensors clearly describe variations in a field.

Mori & Zhang 2006

  1. What does basic MRI signal strength exactly measure?

    MRIs measure signal output at different locations within a sample. These locations and intensities combine to form some sort of image. While different types of scans result in different types of signals and images, all MRIs measure the resonance (the re-emission of absorbed energy) of protons due to their intrinsic angular momentum, or spin. When a scan is executed, first, a very strong magnetic field aligns the atomic spins of atoms within the sample. Next, an additional applied magnetic field alters the orientation of these atomic spins, exciting the atoms into a higher energy state (imagine turning a bar magnet in a magnetic field away from its preferred orientation; this would surely take energy to do, and your magnet would release energy if you were to let it go). The specific excitations depend on what type of scan you’re attempting.

    The type of scan we are most interested in attempts to produce a diffusion-weighted image, which results from applying a Pulsed-Gradient Spin Echo sequence (or a variant; this sequence is simply the easiest to consider). In this case, after the protons are excited, the signal attenuates as the phases of the protons are randomized due to the non-homogeneity of the magnetic field. In short, for this type of image, high signal corresponds to low diffusion in a given direction, whereas low signal corresponds to high diffusion in a given direction.
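
    For the PGSE sequence, the b-value can be computed from the pulse parameters via the Stejskal-Tanner expression b = γ²G²δ²(Δ - δ/3). Here is a minimal sketch (the scanner parameters below are invented but plausible):

        GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

        def b_value(G, delta, Delta):
            """Stejskal-Tanner b-factor for rectangular gradient pulses.
            G: gradient amplitude (T/m), delta: pulse duration (s),
            Delta: time between pulse onsets (s). Returns b in s/mm^2."""
            b_si = GAMMA**2 * G**2 * delta**2 * (Delta - delta / 3.0)  # s/m^2
            return b_si * 1e-6                                         # s/mm^2

        # 40 mT/m gradients, 20 ms pulses, 40 ms apart
        print(b_value(0.04, 0.020, 0.040))  # roughly 1.5e3 s/mm^2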

  2. How are tractography maps used?

    There are potentially many different ways tractography maps are used. The method described by Mori & Zhang reconstructs structure from the tensor field by propagating streamlines (integral curves of the principal diffusion direction) from seed points. A streamline is terminated when it reaches a region of sufficiently low anisotropy, so that the major white matter tracts in the region of interest can be reconstructed by tracing the streamlines in the tractography map. Once finished, tractography maps are used for prognosis, diagnosis, and research, among other things.
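
    A minimal sketch of the streamline idea (ours, not Mori & Zhang’s exact algorithm; the thresholds are invented): fixed-step Euler integration along the principal eigenvector, terminating in low-anisotropy regions.

        import numpy as np

        def track(tensors, fa, seed, step=0.5, fa_min=0.2, max_steps=1000):
            """Trace one streamline through a diffusion tensor field.
            tensors: (X, Y, Z, 3, 3) array of tensors, fa: (X, Y, Z) array of
            fractional anisotropy, seed: starting point in voxel coordinates."""
            pos = np.asarray(seed, float)
            line, prev_dir = [pos.copy()], None
            for _ in range(max_steps):
                i, j, k = np.round(pos).astype(int)
                if not (0 <= i < fa.shape[0] and 0 <= j < fa.shape[1]
                        and 0 <= k < fa.shape[2]):
                    break                          # left the volume
                if fa[i, j, k] < fa_min:
                    break                          # low-anisotropy termination
                vals, vecs = np.linalg.eigh(tensors[i, j, k])
                d = vecs[:, -1]                    # principal eigenvector
                if prev_dir is not None and d @ prev_dir < 0:
                    d = -d                         # keep a consistent heading
                pos = pos + step * d               # Euler step
                line.append(pos.copy())
                prev_dir = d
            return np.array(line)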

  3. What are tensors, in general?

    Tensors describe relationships between scalars, vectors and other tensors. Scalars, vectors and matrices are all tensors. Tensors can be represented as multidimensional arrays. Tensors are independent of the choice of coordinate systems. DTI uses second order tensors (matrices) to quantitatively describe diffusion in a specific region. The Wikipedia page on tensors is great and can probably answer any questions you have about “tensors in general.”

  4. Further elaboration regarding the physical principles underlying MRI would be useful (pg 530 in particular).

    This question is partially addressed by the in-class exercise we did where we pretended to be dancing protons. A magnetic field oscillating at the Larmor frequency is first applied; the torque from this field pulls the water protons approximately 90 degrees out of alignment, after which they precess at slightly different frequencies. An opposite pulse is then applied to rephase the molecules and get them back to their starting positions; however, this realignment isn’t perfect, as there are lots of interactions with neighboring molecules. The inhomogeneity of the neighborhoods we examine affects the realignment and can be measured from a signal released during rephasing.

  5. How does 4D imaging work?

    We’re not entirely sure how this is being used in the paper — the “4D anatomical domain” is mentioned on page 537, but that’s all. We think probably it’s just referencing the fact that our data is 4 dimensional, since we have three physical directions and a data value to keep track of.

  6. How are the seeds (pixels of interest) in tractography selected? Randomly? Based on the image itself? Knowledge that we already have about brain structure?

    The selection of seeds in part depends on what kind of study you’re performing. If you have prior knowledge of where in particular you want to look, you identify regions/pixels of interest based on knowledge of the brain structure and the focus of the study (selected by researchers who ideally are blind to the characteristics of the subject). Once you select regions of interest, you can then decide within those regions how to seed (one seed per region, several, randomized, etc.), as in the sketch below. If you don’t have much prior knowledge, placing seeds throughout the brain and then filtering them afterwards based on what gives you more information is another seeding process employed. Subjectivity in seed point selection is one of the problems with tractography that Vilanova et al mention.
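
    As a sketch of the “several seeds per region, randomized” option (the names and thresholds here are invented for illustration): given a boolean ROI mask, one can scatter seed points uniformly within the selected voxels.

        import numpy as np

        def random_seeds(roi_mask, n_seeds, rng=None):
            """Sample n_seeds points uniformly inside a boolean 3-D ROI mask.
            Each seed is a voxel index plus a random offset within the voxel."""
            rng = rng or np.random.default_rng()
            voxels = np.argwhere(roi_mask)               # coordinates of ROI voxels
            picks = voxels[rng.integers(len(voxels), size=n_seeds)]
            return picks + rng.random((n_seeds, 3))      # jitter within each voxel

        # Hypothetical usage: seed densely wherever anisotropy is high
        # mask = fa > 0.3
        # seeds = random_seeds(mask, 100)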

  7. How are the eigenvalues and eigenvectors used to create a tensor? We’re a little confused by this (a LinAl review is possibly in order).

    LinAl is always good to review! The tensor can be thought of, most generally, as a linear transformation. A linear transformation is a function that satisfies the following two conditions:

    1. f(x + y) = f(x) + f(y)

    2. f(ax) = a f(x) for any scalar a.

    An eigenvector of a tensor is a vector which simply scales (instead of changing orientation, for instance) when you apply the transformation represented by the tensor. The eigenvalue corresponding to that eigenvector is the factor by which the vector scales.

    An ellipsoid is a graphical representation of the tensor function. Imagine applying the tensor to a sphere; the geometrical result of that operation is the familiar ellipsoid.
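
    A small sketch tying these ideas together (the tensor values are invented): decompose a diffusion tensor into eigenvalues and eigenvectors, check the scaling property, and map points on the unit sphere through the tensor to get the ellipsoid.

        import numpy as np

        # A hypothetical diffusion tensor (symmetric; units of 1e-3 mm^2/s)
        D = np.array([[1.5, 0.2, 0.0],
                      [0.2, 0.4, 0.1],
                      [0.0, 0.1, 0.3]])

        vals, vecs = np.linalg.eigh(D)   # eigenvalues ascending; eigenvectors as columns
        v = vecs[:, -1]                  # principal eigenvector
        print(np.allclose(D @ v, vals[-1] * v))   # True: D merely scales its eigenvector

        # The ellipsoid: apply the tensor to points on the unit sphere
        theta, phi = np.meshgrid(np.linspace(0, np.pi, 20),
                                 np.linspace(0, 2 * np.pi, 40))
        sphere = np.stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)], axis=-1)
        ellipsoid = sphere @ D.T         # each point x maps to D @ x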

  8. Is histological brain imaging still practiced?

    Yes. While MR images have the advantage of being non-invasive and more easily acquirable, they can’t provide the level of detail or assurance that a histology can provide. In other words, without histology, we can’t necessarily confirm what we think we see on an MR image. Histology is especially useful in examining diseased tissue (otherwise known as histopathology; see http://en.wikipedia.org/wiki/Histology) to study the effect particular diseases, such as MS, have on the brain. Additionally, it seems that often scans and histology are used in tandem. For example, on page 534 of this paper, Mori and Zhang mention using MRI to guide subsequent histological analysis of white matter lesions and then correlating the histological findings with the MR images.

  9. Why is it called a spin-echo image?

    It is called a spin-echo image because the spin-echo pulse sequence is used as the signal detection mechanism when creating a gray-scaled image.

    The spin-echo sequence was the sequence we collectively demonstrated in class last Friday. To recap, water molecules are excited by a magnetic pulse with a given gradient, and the water molecules begin to precess (spin). The spinning molecules give off their own magnetic radiation, and the more molecules spinning at the same frequency, the more aligned the radiation signal becomes, resulting in a stronger signal to detect. Due to inhomogeneous effects in the region, however, the water molecules spin at different frequencies and so the signal becomes muddled. To fix this, we apply a 180-degree inverting pulse at TE/2 to flip the molecules, and through some physics the molecules realign after the same amount of time (another TE/2). At TE we begin our signal detection with a strong signal that exponentially decays as the molecules relax and eventually come to rest.

    TE stands for Echo Time.
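
    The rephasing trick is easy to see numerically. A minimal sketch (the frequency spread is invented): each spin accumulates phase at its own rate, and negating all the phases at TE/2 makes every spin return to zero phase at TE.

        import numpy as np

        rng = np.random.default_rng(1)
        omega = rng.normal(0.0, 5.0, size=1000)   # each spin's frequency offset (rad/s)
        TE = 0.1                                  # echo time, in seconds

        phase = omega * (TE / 2)                  # dephasing during the first half
        phase = -phase                            # the 180-degree pulse negates phases
        phase += omega * (TE / 2)                 # same rates during the second half

        print(np.allclose(phase, 0.0))            # True: all spins realign at TE
        print(abs(np.exp(1j * phase).mean()))     # 1.0: maximum signal at the echo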

  10. To clarify, to what degree do we understand the macromotion of the water? Do we happen to just know the answer for certain sections of the brain? Does that knowledge come from the discussed type of imaging or externally?

    The water in the brain is transported through protein water channels. The three most important of these are AQP1, AQP4, and AQP9; AQP is short for aquaporin. The PDF at this link gives a lot of explanation of AQPs: http://www.sciencedirect.com/science/article/pii/S0166223607002986 . The water channels are mostly located in the regions that connect the blood tissue, blood vessels, and the surface of the brain. Water is transported to the brain by blood. More information can be found at this link: http://www.med.uio.no/imb/english/research/news-and-events/news/2012/water-channels-cleanse-brain.html

    Adult brains have an intracranial cavity, which has some blood in it. The brain also has intracellular and interstitial spaces. Water moves between these regions as a result of osmotic and hydrostatic forces. Clearer explanations are provided at this link: http://www.sciencedirect.com/science/article/pii/S0166223607002986#

  11. Why are these imaging techniques better or worse when it comes to in vitro vs ex vivo?

    First, it is important to note that in vitro and ex vivo techniques are not entirely distinct methods. It seems like they can almost be used synonymously: in vitro translates to “in the glass” and refers to isolating a small amount of an organism’s tissue or cellular structure, while ex vivo translates to “out of the living” and refers to studying isolated parts of a possibly deceased organism. So, for example, removing a sliver of mouse brain and studying it intensely could be interpreted as both in vitro and ex vivo. That being said, a sample could be ex vivo without being in vitro (for instance, a full cadaver), whereas an in vitro subject can be studied more closely by smaller, more powerful machines, yielding an image with higher resolution. A more important distinction arises between in vivo and ex vivo, in which imaging techniques differ greatly: a longer scan time on ex vivo material (possible due to the lack of movement) lends itself to greater resolution, whereas in vivo subjects cannot practically or ethically be restrained for nearly the same amount of scan time.

  12. What possibilities exist to better analyze MRI data with even less information loss, and how much can we expect DTI to improve?

    Vilanova et al address this question in their discussion of the pros and cons of each visualization method. Some methods, such as volume rendering and tractography, are useful for determining structures in the brain. Other methods display a specific property of each voxel, such as fractional anisotropy or mean diffusivity. A fundamental issue with diffusion MRI is that examining structure forces us to discard local information, while examining local properties makes it hard to look at structures. Tensor glyphs try to strike a balance between the two extremes.

    Mori et al describe the physical limitations on MRI. On page 527, they write that the absolute limit on MRI resolution is 10 micrometers. The practical limit on image resolution is about 1mm. Even if we could construct images of the human brain at that resolution, it would be impossible to analyze them thoroughly. Time of acquisition is also a limiting factor. In conclusion, there are a number of hard physical limitations on DTI, but there is a lot of room for improvement for both data acquisition and visualization.

  13. How much progress has been made in this field since the paper’s publication in 2006?

    A lot of the progress in this field comes from people having had more time to do research and to develop and refine models. Google searching revealed a lot of clinical advances being made with DTI, such as distinguishing Alzheimer’s disease from other diseases, imaging muscle fibers in sports medicine, assessing kidney disease, and detecting multiple sclerosis.

    Other foreseeable advances include stronger and more accurate imaging machines! We found an article online describing a newly designed 11.75 Tesla magnet, capable of fitting a full-grown human, which would provide “unprecedented resolution” images. Faster processors and developments in parallel processing also allow these higher-resolution images to be processed at almost interactive speeds.

  14. Besides making discoveries about micro structure of the brain, what other uses does DTI have? Is it possible that DTI will be used to learn about human psychology, or are its uses limited to axonal-scale structural discoveries?

    Many applications of DTI are noted in the Vilanova paper on the visualization of DTI. Most of these uses begin from reconstructing axonal-scale maps, but beyond axonal structure, DTI can be extended to estimate underlying myelination and white matter development. Also, since the principal eigenvector measured by DT-MRI is confirmed to align with the myofiber orientation of the myocardium, the part of the heart responsible for pumping blood to the body, studying DT-MRI helps scientists understand the structure of this complex muscle. Regarding human psychology, it seems that MRI provides more anatomical insight than psychological understanding, especially in comparison to an EEG (which can monitor brain activity based on electrical signals).

  15. To add to the question Tony's group asked about why they're called spin-echo images, I was wondering if we might at some point go over the difference between spin-echo imaging and echo-planar imaging. [Answered as: "What is the difference between spin-echo imaging and echo-planar imaging?"]

    Echo-planar imaging is an imaging technique similar to spin-echo imaging in that it uses spin-echo sequences to excite molecules and create frequency signals. Echo-planar imaging differs from SE imaging in that it collects data much more quickly. To start, let’s talk about k-space.

    K-space is a graphic matrix of digitized MR imaging data that represents the image prior to Fourier transform analysis. All points in k space contain data from all locations within an MR image. The Fourier transform of k space is the image. (Wikipedia)

    In an SE pulse sequence, one line of imaging data (one line in k space or one phase-encoding step) is collected within each repetition time (TR) period. The pulse sequence is then repeated for multiple TR periods until all phase-encoding steps are collected and k space is filled. Therefore, the imaging time is equal to the product of the TR and the number of phase-encoding steps. For example, if the TR is 2 seconds and the number of phase-encoding steps is 256, the imaging time is 512 seconds or about 8.5 minutes. (Poustchi-Amin et. al)

    In echo-planar imaging, multiple lines of imaging data are acquired after a single radio frequency (RF) excitation. Like a conventional SE sequence, an SE echo-planar imaging sequence begins with 90° and 180° RF pulses. However, after the 180° RF pulse, the frequency-encoding gradient oscillates rapidly from a positive to a negative amplitude, forming a train of gradient echoes. Each echo is phase encoded differently by phase-encoding blips on the phase-encoding axis. Each oscillation of the frequency-encoding gradient corresponds to one line of imaging data in k space. (Poustchi-Amin et al.) So the technique is able to collect more data in less time, though the quality of the resulting data is a separate question.

    Poustchi-Amin et al. Principles and Applications of Echo-planar Imaging: A Review for the General Radiologist. RadioGraphics May 2001.

  16. Disregard the question in my group's post about histological practices. I goofed with that one. In fact, upon rereading the last few sections, I felt our summary was a little inaccurate about them. We presented those sections as, essentially, an argument for how DTI is better than MRIs and histological studies in various applications, when in truth the gist seems to have been that no imaging technique is inherently "better," and that by combining them for their various advantages, we get a much richer understanding of what's going on.

    No worries, we’re all learning. I’m sure some things my groups have claimed in previous summaries have been incorrect or, at the very least, misleading.

    I think there is some confusion about MRI vs. DTI in this question. Jadrian talked a lot about this in class, but I’ll reiterate some of the points he made here. “Magnetic resonance imaging” is a technique that takes advantage of the intrinsic spin of protons in order to produce some sort of image. An MRI machine is used to create images, and, based on different properties of the scan, different types of images can be produced (T1-weighted, T2-weighted, diffusion-weighted, proton density, etc.). Using images taken with different scan parameters, one can model the human brain in a lot of different ways. The scans we are most interested in are diffusion scans. A popular way to model a brain, given some diffusion-weighted images, is to approximate the diffusion at each voxel using a tensor. Diffusion tensors can be represented by the ellipsoids we are all (somewhat) familiar with. The use of tensors is simply a method of combining MRI measurements to make additional inferences about fiber structures. Directly using MRI images without additional processing might be fine in some cases, while more complex models might be more appropriate in others.

  17. Like Alex's group, I'm also interested in talking about what possibilities there are for improving DTI. I know we talked a bit about resolution difficulties in class. It seems there are some issues that are basically impossible to resolve simply by virtue of how we're generating these images. (e.g. we can't distinguish crossing and kissing axon arrangements in a pixel if we're using the tensor/ellipse process). Is this accurate?

    Some of the other papers do discuss techniques for estimating where fibers kiss or cross, although it does seem like they are at the moment imperfect. For example, on pages 16 and 17 the Vilanova paper explains that planar anisotropy doesn’t occur in the brain, so when we see it in DTI, it means fibers are kissing, crossing, converging, or diverging. When researchers see planar anisotropy, they track possible paths from it to figure out which of these structures is there.

  18. I know we can't exactly expect to achieve a better resolution with in vivo imaging, but I was wondering about in vitro imaging and had a question that is possibly kind of silly. Is there a reason why we can't examine parts of a human brain in vitro? More specifically, for a human brain sample to be small enough to examine in vitro, would it have to be so small that it would, er, lose structural integrity, or something? Yeah, I dunno.

    We can examine the human brain in vitro. By at least 1996 researchers were creating MRI images of formaldehyde-fixed human brain stem (http://www.ncbi.nlm.nih.gov/pubmed/8741190). Any structural integrity would be preserved due to the formaldehyde. Now, however, there is no reason to perform in vitro scans (unless, of course, your subject is dead). In 2008 Cho et al. published regarding a new combination of coil design and pulse sequence that produced resolution comparable to in vitro scans of human subjects (http://www-stat.wharton.upenn.edu/~shepp/publications/172.pdf).

Jones et al 2006

  1. Why was sample size so small? We’re assuming it was difficult to find subjects that fit the necessary criteria, but was there anything else?

    There are lots of reasons for researchers to “choose” small sample sizes beyond the limited availability of test subjects. Seeing that this paper was published in 2006, Jadrian suggested that it could simply be one of the preliminary studies published on this subject, and that the authors published their research knowing their results would be deemed inconclusive because of sample size. However, the point of publishing research like this isn’t always to conclusively prove something; if it were, many papers would simply never be published. While a small sample size leads to a weak conclusion, having a conclusive sample size is secondary to publishing potentially revolutionary findings (as long as you end up being right). That said, there are also practical reasons for choosing a small sample size, including the complexity of the study and funding.

  2. What is a plethysmograph?

    A plethysmograph is a device used to measure changes in volume in an organ or body. Fluctuations are due to changes in air (as in the lungs) or blood. The one sentence which brings up plethysmographs is in the Data Acquisition section on page 231: “The acquisition was peripherally gated to the cardiac cycle using a plethysmograph on the subjects’ forefingers.” This basically means the scan was synchronized with pulse as detected from the forefinger of a subject so as to collect data only during certain intervals (like in between heartbeats). See https://www.med-ed.virginia.edu/courses/rad/cardiacmr/Techniques/Gating.html.

  3. Have any subsequent studies answered the questions posed by this paper or contradicted its findings?

    Certainly. A search of google scholar with the query “schizophrenia diffusion tensor brain imaging” returns over 7,000 results since 2010. This includes one paper entitled “Diffusion tensor imaging reliably differentiates patients with schizophrenia from healthy volunteers.” So, it’s safe to say that more conclusions have been made regarding the effects of schizophrenia on diffusion in the brain. We believe age will always be an important factor to consider when studying the brain since it has such dramatic effects on the brain.

  4. Do other studies consider the anti-psychotics the schizophrenic subjects were taking? Do different drugs have different effects? What exactly was meant by “atypical drugs?”

    I would assume that other studies do consider the anti-psychotics taken by subjects. However, a quick Google search showed that most anti-psychotic drugs have the same mechanism of action: blocking dopamine receptors. Although different drugs do have different effects on people, especially in terms of side effects, we cannot exclude the fact that all humans are different, and that may be the major reason for different effects of even the same drug. “Atypical drugs” are known as second-generation anti-psychotics (SGAs). These drugs have been developed much more recently, and although they are prescribed for the same diseases, they are viewed as safer than “typical” anti-psychotics. They work by blocking the dopamine receptors as well as acting on the serotonin receptors. The Hobbie chapter discussed how introducing compounds into the bloodstream could affect measured T2 values because of neighboring nuclear magnetic moment inhomogeneity, so it would make sense that blood composition should be taken into consideration when performing these studies.

  5. How exactly are seed points selected, and why were the authors surprised to find the same number of seed points in patients and the control group if they themselves selected them?

    The authors fail to mention exactly how they selected their seed points. In class we learned that seed points can be generated a number of ways (for example, one seed point in the middle of each voxel or seed points scattered randomly throughout a certain constraint). As for the authors being surprised, it seems the passage in question reads as follows:

    “The number of seed points used to reconstruct a trajectory and the total number of steps taken to reconstruct a trajectory might be expected to depend on the tract volume. However, for each fasciculus (...), there were no significant differences in the number of seed points nor in the total number of steps between patients and comparison subjects.”

    This is under the heading of “Statistical Analysis” and it appears as though the authors intentionally used the same number of seed points as a controlled variable between patients and comparison subjects. Therefore it seems they are intentional about finding the same number of seed points, not surprised.

  6. How much does the severity of schizophrenia affect these physical metrics (FA and MD)?

    The severity of schizophrenia in these patients is unclear. The only hint as to the severity is that these patients meet the DSM-IV criteria for schizophrenia.

  7. Specifically what were the abnormalities expected by the research group and are there abnormalities that are unrelated to diffusivity?

    The question is hard to understand, but the diffusion/diffusivity in tissue is determined by the structure of the tissue. Studies have shown that the diffusivity of water in tissue is always greater in the direction of the axons. In healthy white matter, we expect to observe large anisotropy, with diffusion greatest along the axons. However, if the tissue is damaged or has abnormalities, we would expect increased diffusivity perpendicular to the axons, and thus reduced anisotropy. Possible causes include injury, demyelination of axons, multiple sclerosis, and schizophrenia.

    It has been shown that abnormalities of functional connectivity accompany schizophrenia. The researchers hypothesized that measures of diffusivity and diffusion anisotropy made within major white matter tracts that form connections to the frontal cortex would be abnormal in people with schizophrenia. I believe the abnormalities they would have been looking for, in order to exclude people from the study, were abnormalities in fractional anisotropy; people with reduced fractional anisotropy would have been left out of the study.

  8. The fact that they found test subjects based on sex got me curious about how schizophrenia might manifest itself differently in a female brain. With some cursory searches on PubMed, I found that women with schizophrenia apparently have a later age of onset and premorbid functioning (functionality pre-diagnosis).1 They also have "different structural brain abnormalities and cognitive deficits," though I couldn't investigate further without accessing the paper itself and not just the abstract (which I currently can't do). Anyway, I thought this was interesting (even if gender and sex were conflated in the articles I found, which is probably an interesting discussion as it relates to differences in cognitive function in and of itself).
    1 Canuso CM, Pandina G. “Gender and schizophrenia.” Psychopharmacol Bull. 2007;40(4):178-90. Review. PubMed PMID: 18227787.

    That’s really interesting! I submitted a similar question in the moodle response after I read this paper, but I didn’t have the initiative to search for answers on PubMed or Google. Props to you for doing so :).

  9. What does “X” mean? I.e. what does it mean when one says there’s “an interaction of Group x Age”?

    An interaction between two variables refers to the way they contribute to a third, dependent variable. If the relationship between two variables a and b is additive, then

    f(a + b) = f(a) + f(b).

    If a and b do not (approximately) satisfy this relationship, then we say that there is an interaction between a and b. In this paper, age and group both played a role in fractional anisotropy, but it was not additive. The figures on p. 234 do a good job demonstrating this. In the comparison group, FA decreased with age. In the patient group, FA increased with age. If the subjects were not split into groups, age would not appear to be linked to FA.
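
    A numerical sketch of the non-additive case described above (all numbers invented): FA falls with age in one group and rises in the other, so pooling the groups hides the age effect.

        import numpy as np

        rng = np.random.default_rng(2)
        age = rng.uniform(20, 60, size=200)
        group = rng.integers(0, 2, size=200)      # 0 = comparison, 1 = patient
        # Opposite slopes per group: a Group x Age interaction
        fa = 0.5 - 0.002*age + group*(0.004*age - 0.12) + rng.normal(0, 0.01, 200)

        for g in (0, 1):
            slope = np.polyfit(age[group == g], fa[group == g], 1)[0]
            print("group", g, "slope:", slope)    # about -0.002 vs. +0.002
        print("pooled:", np.polyfit(age, fa, 1)[0])  # near zero: effect hidden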

  10. Figure 3 seems to display a positive correlation between FA and age, whereas page 233 says that “age was significantly and negatively correlated with mean FA.” That doesn’t make sense to me.

    Figure 3 displays a positive correlation between FA and age for patients (solid lines trending upwards), but a negative correlation for comparison subjects (dotted lines slanted downward). The phrase that you mention on page 233 says exactly that, except they say the positive correlation that we see was actually not significant (meaning that it didn’t meet a statistical criterion for being classified as significant).

  11. At one point they state that “(for all eight tracts) [FA measures] were lower among patients,” quickly followed by “only differences in FA measures in the left superior longitudinal fasciculus (SLF) achieved statistical significance.” I believe that the first should only have been claimed if in fact “all tracts” showed statistical significance.

    We think this section is really badly phrased. The full quote without numbers is “When the effect of subject group was considered, FA measures (for all eight tracts) were lower among patients than comparison subjects. When individual tracts were examined, only differences in FA measures in the left superior longitudinal fasciculus achieved statistical significance with values that were lower in patients than controls.” We think this means that when the researchers looked at all tracts at once, they did see a statistically significant difference in FA measure, but when they examined tracts individually, the only one that showed significant difference between the two groups was the left superior longitudinal fasciculus. So “all tracts” did show significant difference, but only collectively.

  12. What are surrogate markers and seedpoints and why are they volume independent?

    Seed points are the starting points for a tractography algorithm. Once a seed point is selected, the tracking algorithm takes a step in the direction of the fibers (determined by the diffusion tensor). I am pretty sure that seed points are points and don’t have volume. If seed points did have volume, there would only be issues if the seed points spanned multiple voxels.

    In clinical trials, a surrogate endpoint (or marker) is a measure of effect of a certain process that may correlate with a real clinical endpoint but does not necessarily have a guaranteed relationship. The National Institutes of Health (USA) define a surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint" (Wikipedia).

    In the case of the Jones et al. paper, surrogate markers were used to correlate the tract volume of the multiple fasciculi with the effects of schizophrenia. The desired relationship was that FA and MD in these fasciculi would decrease due to the disease.

  13. The paper mentions picking only right-handed males with a specific IQ range. We are curious how or why the authors chose these specific traits. Are there other variables that weren’t considered but may have influenced their findings?

    Presumably, these traits were chosen to maximize consistency in the brains among the subjects. It could reduce the propensity to select a person with unknown brain damage (presumably lower IQ) and also account for the possibility that both gender and handedness affect brain development (causality may be reversed). There are other possible confounding variables that could be and were targeted, such as age, family history, and medication usage (age was kept consistent, no family history of mental disorders and no medications), as well as other variables that aren't mentioned, such as occupation (i.e. working in a nuclear power plant).

  14. How exactly were the seeds where the tractography algorithms began chosen? Are these seed points different from ROI’s? Are they the same or are seed points within some enclosed ROI?

    As we learned in class, ROIs are not the same as seed points. As the name suggests, “regions of interest” are regions which are somehow picked as interesting and within which seed points are defined. In this paper, one of the authors picked ROIs based on an FA image. However, the authors fail to mention how they actually chose seed points within the region, simply saying “one of the authors...defined 3-D regions of interest (ROIs)... At each seedpoint fiber orientation was determined…” Figure 12a in the paper by Vilanova et al. is an informative image showing tracts in 3-D along with the original ROIs, and outlines the seed-point selection as follows: “Interior of the ROIs are sampled and the samples are used as seed points.” In class, Jadrian suggested a few methods for picking seed points, including choosing a single point in the ROI and using a random distribution of points within the ROI.

  15. When overcoming the problem of closely running fasciculi, the authors glaze over the method for finding another ROI that has the fasciculus of interest. How do they do this?

    To overcome this problem, the authors identify the fasciculi they want to track in two locations. Both selections may have fibers passing through that are undesired, but both contain a significant number of fibers that belong to the fasciculi of interest. By choosing fibers that go through both locations we can filter out most of the undesired fibers, which lead to other locations and most likely will only pass through one of the selected regions of interest.


  16. Is fractional anisotropy different from regular anisotropy?

    There is a difference (presuming that “regular” anisotropy refers to the directional ADC measurements themselves). The ADC from a single scan with a particular gradient represents diffusion along that gradient only, which has been a discussed limitation of older scanning methods. To get a more general sense of what’s happening at a location, we take multiple scans with different gradient directions and combine them. Fractional anisotropy is a single summary value computed from that combined information: it lies between zero and one, with zero indicating isotropic diffusion and larger numbers indicating greater degrees of anisotropy.
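
    For reference, fractional anisotropy is computed from the three eigenvalues of the diffusion tensor. A minimal sketch of the standard formula:

        import numpy as np

        def fractional_anisotropy(evals):
            """FA from the three eigenvalues of a diffusion tensor: 0 for
            isotropic diffusion, approaching 1 as diffusion becomes confined
            to a single direction."""
            l1, l2, l3 = evals
            mean = (l1 + l2 + l3) / 3.0
            num = (l1 - mean)**2 + (l2 - mean)**2 + (l3 - mean)**2
            den = l1**2 + l2**2 + l3**2
            return np.sqrt(1.5 * num / den)

        print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0 (isotropic)
        print(fractional_anisotropy([1.7, 0.3, 0.2]))   # about 0.84 (anisotropic)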

Mori & Barker 1999; Hobbie 1997

  1. Are we essentially measuring something like the difference from perfect gyroscope spinning and that is the anisotropy? Or is it more like total amount spun when influenced by a gradient field? What exactly is the “signal” measured?

    With the MRI techniques described by this paper, water protons are excited by a magnetic field. As the water protons relax, this energy produces signals in the form of measurable electric currents. The signal we measure is a combination of the signals from the individual atoms, and it’s stronger when they are precessing at the same rate. Using the gyroscope metaphor, we get a stronger signal when all the gyroscopes are in sync. This doesn’t tell us anything about anisotropy by itself.

  2. In the paragraph in the top-right corner of page 106 it says “..and no difference between gray and white matter remains” then references figure 9a. We seem to be able to see a difference between gray and white matter here, though we realize we might be misreading the image because we don’t know much about neural structure. Is this quote referring to a visual difference between grey and white matter in 9a? If so, why? If not, what is it talking about?

    http://medicalimages.allrefer.com/large/gray-and-white-matter-of-the-brain.jpg

    If we look at any of the images (9b, c, d) that have added contrast due to the anisotropy effect, we can see differing shades of grey within the image. I believe the grey matter appears in the darker patches, while the lighter patches are mostly white matter. In 9a, however, the brain tissue is nearly all the same shade: the white outline that we see is cerebrospinal fluid, while the uniform grey area depicts both grey and white matter.

  3. What are some applications of the various pulse sequences described? Are some used for specific tasks?

    http://www.bioc.aecom.yu.edu/labs/girvlab/nmr/course/COURSE_2012/chapter5.pdf

    This link has information on a wide range of pulse sequences, and in it we read that “the MRI sequence parameters are chosen to best suit the particular clinical application,” which suggests that for any given scanning task there is a preferred sequence. It also describes how there is an optimum flip angle choice (the Ernst angle) that maximizes contrast, depending on the qualities of the tissue being scanned.

  4. Which sequence is your favorite sequence?!?!?!?!

    Pulsed-Gradient Spin Echo (PGSE), for sure. It's the simple modification of the Spin-Echo sequence that allows us to get diffusion information out of MRI. Lots of other sequences exist to get far higher-quality diffusion-weighted signals out of MRI, but PGSE was the first and simplest; for elegance and comprehensibility it can't be beat. Of course, if I were actually picking a sequence to use in a study, I'd defer to the expertise of a medical physicist who understood the more sophisticated sequences better than I do.

  5. How can we incorporate T1-weighted imaging into the gyroscope analogy, and what are the advantages of T1-weighted imaging?

    Our MRI machine has a magnetic field oriented along the bore: the main (B0) field. This makes our ‘gyroscopes’ spin in line with the bore. In T1-weighted imaging, a pulse knocks the gyroscopes halfway over (a 90-degree rotation) so that they are perpendicular to the main field and the bore, and we measure the time until the gyroscopes reorient along the field.

  6. Are there any health effects related to synchronizing every proton in the human body?

    According to this 2009 review article (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705217/), the health effects of prolonged exposure to high magnetic fields are not entirely known. The magnetic field produced by an MRI machine is not strong enough to knock electrons off of atoms, i.e., to ionize them. While exposure to high amounts of ionizing radiation carries significant health risks, the results are less decisive for non-ionizing radiation. Addressing your question directly: because this article doesn’t even mention spin, my guess is that the only macro-effect of aligning proton spins would be a very small amount of angular momentum being imparted upon a patient’s body.

    Notably, being in the presence of a 4 T electromagnet carries some other, more apparent health risks. If a patient has any sort of magnetic object within their body, bad things can happen; MRI magnets are quite strong. Here is a YouTube video of some people throwing a stapler (and other things) into an MRI machine.

    http://www.youtube.com/watch?v=6BBx8BwLhqg

  7. Since 1999 have we developed better techniques for detecting strokes than simply removing directionality from Diffusion MRI images?

    DTI is still a favored tool for diagnosing stroke, since it’s noninvasive and gives accurate results. The other papers we’ve read haven’t discussed other specific techniques, and neither did the resources we were able to find online, so it sounds like the basic technique has stayed the same. This recent article in a medical database discusses looking at DWI and ADC for stroke diagnosis:

    http://emedicine.medscape.com/article/345561-overview

  8. What are the relative advantages of each of the pulse sequences?

    I think I am right in saying that they all allow us to measure different things and thus serve different purposes. Hobbie mentions that the Inversion-Recovery sequence lets us measure T1. However, the sign of the signal can be ambiguous, and we need ‘special detector circuits’ to fix that. He continues that this sequence can take a long time and can be confusing to read because of the sign issues.

    The Spin-Echo (SE) sequence allows us to measure T2 instead of T2*. A T2* measurement is affected by inhomogeneities in the magnetic field: the applied field is never perfectly uniform, so T2* mixes true relaxation with these instrumental effects. T2 is closely related to T2* (see equation 17.35, Hobbie page 501), but the relation includes a term for the ‘spread in Larmor frequency’ that accounts for these inhomogeneities. T2 is one step better than T2* because it compensates for known imperfections in the applied field. (A small numerical sketch at the end of this response illustrates the difference.)

    The Carr-Purcell sequence has a couple of advantages. Hobbie writes that this sequence lets us measure points all along the decay curve quickly, which I take to mean we get better data, faster. Additionally, we know that rephasing is imperfect because molecules diffuse into different regions: a molecule may be excited at one frequency, diffuse to a distinct region, and be rephased (incorrectly) at the frequency determined for that region. The Carr-Purcell sequence mitigates this issue by keeping the time between refocusing pulses short, limiting how far molecules can diffuse in between. However, Hobbie continues that the π pulse has to be extremely accurate, since otherwise we get compounding error over the train of pulses.

    The Carr-Purcell-Meiboom-Gill (CPMG) sequence is similar to the Carr-Purcell sequence, but it overcomes the problem of error buildup by shifting the phase of the refocusing pulses by 90 degrees relative to the initial pulse, so that pulse-angle errors cancel on alternate echoes rather than accumulating.
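
    To make the T2 versus T2* distinction concrete, here is a small numerical sketch (the relaxation times are made-up illustrative values, and the combined-rate relation 1/T2* = 1/T2 + 1/T2′ is the commonly used shorthand for adding the inhomogeneity contribution):

    ```python
    import numpy as np

    # Illustrative (made-up) relaxation times, in milliseconds.
    T2 = 80.0         # "true" transverse relaxation
    T2_prime = 50.0   # extra decay caused by static field inhomogeneity
    T2_star = 1.0 / (1.0 / T2 + 1.0 / T2_prime)  # rates add: 1/T2* = 1/T2 + 1/T2'

    t = np.linspace(0.0, 100.0, 6)      # sample times in ms
    fid = np.exp(-t / T2_star)          # free-induction decay envelope (raw signal)
    echoes = np.exp(-t / T2)            # echo peaks in a spin-echo / CPMG train

    for ti, f, e in zip(t, fid, echoes):
        print(f"t = {ti:5.1f} ms   raw envelope ~ {f:.3f}   echo peak ~ {e:.3f}")
    ```

    The raw signal envelope decays at the faster T2* rate, while the spin-echo train recovers the slower, true T2 decay by refocusing the inhomogeneity effects.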

  9. How and why do T1 and T2 depend on the structure of their environment?

    In the Mori & Barker paper, on page 103, a discussion of T2-weighted images reads: “the exact mechanism that confers longer or shorter T2 relaxation is not completely understood.” T1 relaxation time is closely connected to the molecules’ thermodynamic equilibrium as well as the local fluidity of the measured system. Because of this, different tissues will exhibit different T1 times, since their biological structure impedes motion to varying degrees. Additionally, Hobbie indicates that paramagnetic atoms reduce T1 and T2 by generating a fluctuating magnetic field. For example, adding 20 ppm of Fe3+ to water reduces the T1 value to 20 ms. As a result, environments with different atomic composition have differing relaxation times.

  10. Why do relaxation times for T1 and T2 vary for the same environment (whole blood, fat, muscle)?

    We believe that the T1 and T2 values are properties of a material relating to local magnetic field inhomogeneities on the microscale. If two materials have identical microstructures, then their T1 and T2 times should be the same. However, different samples of the same material may have different microstructures and thus have different T1 and T2 times.

  11. In the Mori and Barker paper they mention “that the ADC of brain water drops drastically in the event of ischemia”. What is the mechanism behind this? Working with the model that higher ADC means faster diffusion and vice-versa, wouldn’t a drop in diffusion be very uncharacteristic of a sickly cell?

    The book "Biomedical imaging in experimental neuroscience" on page 82 states: "The progression of the cerebral ischemia results in cytotoxic edema, cell death, axon demyelination, cell lysis and eventual removal of cellular debris by glial cells and macrophages. These events will reduce barriers to diffusion and result in a reduction in anisotropy."

    In other words, ischemia swells and kills cells and demyelinates fibers. In the acute phase, cytotoxic edema (cell swelling) restricts water motion and decreases the ADC; as the damage progresses and barriers break down, the tissue also becomes more isotropic.

  12. In the Mori and Barker paper — “Tensor theory tells us if we measure the diffusion constant along six independent axes, we can calculate the complete shape of the diffusion ellipsoid”. Are these 6 linearly independent axes? How is this possible?

    Here the paper means pairwise linearly independent: no two of the six axes are scalar multiples of each other. (More precisely, the six directions must be chosen so that the six measurements place six independent constraints on the tensor’s six unknowns.)

    Consider Mori and Barker’s figure 10. In order to specify a diffusion tensor completely (rotation and scaling in arbitrary directions in 3-space) we need to specify six parameters. To determine six parameters, we need at least six independent observations, and it turns out that repeated measurements along the same direction don’t actually provide any new information. Take a look at the handout Jadrian made for more information about how diffusion tensors can be (incorrectly) fit. In practice, instead of using the minimum number of measurements (in fact seven, because the value of S0 generally has to be estimated along with the tensor), often many more (~70) are used, and the tensor chosen is the one that minimizes the error/residuals.
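
    As a concrete illustration of that fit, here is a minimal sketch (our own, not code from the paper) that solves the log-linearized Stejskal-Tanner model ln(S_i) = ln(S0) - b_i g_i^T D g_i for the seven unknowns by least squares:

    ```python
    import numpy as np

    def fit_tensor(signals, bvals, gradients):
        """Least-squares fit of ln(S_i) = ln(S0) - b_i * g_i^T D g_i.

        signals:   (N,) measured signal amplitudes
        bvals:     (N,) b-values
        gradients: (N, 3) unit gradient directions
        Needs N >= 7; with many more rows (~70), lstsq returns the
        tensor that minimizes the squared residuals.
        """
        g = np.asarray(gradients, dtype=float)
        b = np.asarray(bvals, dtype=float)
        gx, gy, gz = g[:, 0], g[:, 1], g[:, 2]
        # One column per unknown: [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
        A = np.column_stack([
            np.ones_like(b),
            -b * gx**2, -b * gy**2, -b * gz**2,
            -2 * b * gx * gy, -2 * b * gx * gz, -2 * b * gy * gz,
        ])
        x, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
        D = np.array([[x[1], x[4], x[5]],
                      [x[4], x[2], x[6]],
                      [x[5], x[6], x[3]]])
        return np.exp(x[0]), D
    ```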

  13. Is there any sequence that could be used more effectively than the Spin-Echo?

    As we might expect, it seems that different MRI sequencing techniques have various strengths and weaknesses. One source cites advantages of Spin-Echo as “High SNR, True T2 weighting, and minimises susceptibility effects” and disadvantages as “Long scan times and uses more RF power than a GE sequence.” I think the technicalities of these sequences are outside the scope of our course and best left to physicists, and I think we can cautiously assume that various researchers will have picked the best sequence method for any given experiment.

    (Source: http://www.bioc.aecom.yu.edu/labs/girvlab/nmr/course/COURSE_2012/chapter5.pdf )

  14. How would you realistically find a coordinate system that rotates at the Larmor frequency?

    As mentioned in class, a coordinate system is ‘an abstract mathematical construct.’ Since we don’t have to create or locate axes in the real world, we simply define our coordinate system to rotate at the Larmor frequency. We define equations dependent on time that allow us to move back and forth between using rotating and non-rotating axes.

  15. Why is contrast caused by anisotropy not desirable for stroke detection?

    Maps of apparent diffusion look very different depending on the axis along which the measurement was made. Anisotropic tissue aligned with the measurement axis shows up in a lighter shade, while tissue oriented perpendicular to that axis shows up darker. This means the choice of axis causes tissue oriented one way to be contrasted against tissue oriented another way, even when both are healthy; hence this contrast is undesirable for stroke detection.

    However, if the trace of the diffusion tensor is used to process the image, contrast based on tissue orientation is avoided, and the tissue affected by stroke appears as darker regions compared to the surrounding healthy tissue.

  16. Why would people prefer T2-weighted images over proton density images?

    Proton density techniques observe tissue via magnetic resonance, and the contrast of the generated images depends on the concentration of water. T2-weighted imaging, on the other hand, looks at the speed with which the signal diminishes, and makes numerous measurements to calibrate the signal. We believe that T2-weighted images are preferable to proton density images because T2-weighted imaging has adjustable parameters: it is possible to sensitize the image by altering factors such as the gradient pulse, which in turn affects the amount of diffusion-related signal loss. Proton density images, on the other hand, don’t give us a way to change the contrast of the image.

Basser et al 2000; Simon et al 2006

  1. Is there a reason why the sample size is usually relatively limited? The study involving schizophrenia patients has only 14 patients, whereas the fibers-at-risk experiment has only 3. Is it reasonable to draw conclusions based on such a small pool of patients?

    As discussed in class, many of these studies are pilot studies and serve the purpose of proving concepts. Additionally, putting patients through a scanner and processing the information can be expensive and time-consuming. I personally don’t think that the authors should be claiming any findings in these studies, other than the conclusion that DT-MRI will likely be used in the future to study tissue conditions, and also for diagnosis and prognosis.

  2. The In Vivo Fiber Tractography paper makes a point to emphasize generating continuous tracts from discrete data. Is this different than what we’ve seen before? Is the novel part of this paper the math it uses to actually follow a tract?

    Considering that studies like Jones et al. cite Basser et al., I think the “framework and methodology to obtain a continuous representation” of the tensor field presented here has become fairly standard. The main contribution of this paper is a methodology for transforming discrete tensor data into a continuous tensor field, plus numerical tract-tracing algorithms that use equation [1] to find paths in that field. Equation [1] is a Frenet equation (http://en.wikipedia.org/wiki/Frenet%E2%80%93Serret_formulas), which asserts that the derivative of a parameterized curve r(s) with respect to arc length (i.e., the direction the curve is heading at a given point) is the unit tangent to the curve there; tract-following sets this tangent equal to the principal eigenvector of the local diffusion tensor.
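
    Here is a minimal sketch of how such an equation gets integrated numerically (a generic Euler integrator of our own devising, not Basser et al.’s exact algorithm; `principal_dir` is a hypothetical callback standing in for their continuous tensor-field representation):

    ```python
    import numpy as np

    def trace_fiber(seed, principal_dir, step=0.5, n_steps=200):
        """Euler integration of dr/ds = e1(r(s)), where e1 is the unit
        principal eigenvector of the tensor field at position r.

        principal_dir: hypothetical callback, point -> unit vector.
        Real implementations also stop when anisotropy drops too low.
        """
        path = [np.asarray(seed, dtype=float)]
        heading = principal_dir(path[0])
        for _ in range(n_steps):
            v = principal_dir(path[-1])
            if np.dot(v, heading) < 0:  # eigenvectors have no inherent sign;
                v = -v                  # keep marching in a consistent direction
            heading = v
            path.append(path[-1] + step * v)
        return np.array(path)
    ```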

  3. How exactly are seeds generated in these types of algorithms? An earlier paper discussed regions of interest (ROIs), but how do they pick these regions in the first place?

    For the answer to the first question, see the answer to question 49. As Jadrian said, people working with these images know generally where the structures they want to look at are, and so they pick a region of interest in about the same place as the structure.

  4. How were Basser et al. able to do calculations on a length scale that was less than a voxel?

    On page 630, in the right column, Basser et al.’s discussion of voxel size reduction seems to be purely hypothetical. They argue that smaller voxels could help solve problems with identifying fiber orientation in regions of curving fibers, but would not be effective in areas with crossing fibers.

  5. Where do the equations for curvature and torsion come from?

    The equations come from differential geometry: they are built from derivatives of the parameterized tract curve, and in their most simplified forms both curvature and torsion depend only on derivatives of r(s). Unfortunately, the citation [6] isn’t available online or in the library, so it’s not entirely clear exactly where these formulas come from. For the purposes of this study, we can think of them as statistics the authors use to “help monitoring the tract-following process”: curvature “describes the propensity of r(s) to bend” and torsion “describes [the propensity of r(s)] to twist around a fiber axis.”
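
    For what it’s worth, the standard differential-geometry formulas (quite possibly what citation [6] contains, though we can’t check) are kappa = |r' × r''| / |r'|^3 and tau = (r' × r'') · r''' / |r' × r''|^2. A small sketch of our own, using a helix as a sanity check:

    ```python
    import numpy as np

    def curvature_torsion(rp, rpp, rppp):
        """Curvature and torsion from the first three derivatives of r(s).
        (If s is arc length, |r'| = 1 and the curvature formula simplifies.)"""
        cross = np.cross(rp, rpp)
        kappa = np.linalg.norm(cross) / np.linalg.norm(rp) ** 3
        tau = np.dot(cross, rppp) / np.linalg.norm(cross) ** 2
        return kappa, tau

    # A helix r(s) = (a cos s, a sin s, b s) bends and twists at constant
    # rates: kappa = a / (a^2 + b^2) and tau = b / (a^2 + b^2).
    a, b, s = 1.0, 0.5, 0.3
    rp   = np.array([-a * np.sin(s),  a * np.cos(s), b])
    rpp  = np.array([-a * np.cos(s), -a * np.sin(s), 0.0])
    rppp = np.array([ a * np.sin(s), -a * np.cos(s), 0.0])
    print(curvature_torsion(rp, rpp, rppp))  # (0.8, 0.4)
    ```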

  6. What is an Affine transform?

    An affine transformation is a function from one affine space to another; concretely, it is a linear transformation followed by a translation. Both times it appears in Simon et al., the authors describe the affine transformation as “registering” data to another space (B0 space in the first case and T2 space in the second), so they seem to be saying that they used an affine transformation to move the data into a common coordinate frame.
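
    A tiny illustration of the idea (the matrix and offset here are made-up; in actual registration they would be estimated by optimizing how well the transformed data lines up with the target space):

    ```python
    import numpy as np

    # An affine transform is "linear part, then translation": x -> A @ x + t.
    A = np.array([[1.1, 0.0, 0.0],    # slight scaling along x ...
                  [0.0, 1.0, 0.1],    # ... plus a small shear of y by z
                  [0.0, 0.0, 1.0]])
    t = np.array([2.0, -1.0, 0.5])    # shift into the target frame's origin

    def affine(points):
        """Apply x -> A x + t to an (N, 3) array of coordinates."""
        return points @ A.T + t

    print(affine(np.array([[0.0, 0.0, 0.0],
                           [1.0, 1.0, 1.0]])))
    # [[ 2.  -1.   0.5]
    #  [ 3.1  0.1  1.5]]
    ```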

  7. What is a b-spline? What are the differences between approximation and interpolation?

    [For b-splines, see question 80.]

    Both approximation and interpolation fit a curve to a data set. Interpolation constructs a function that passes through every point in the data set, which is not suitable if the data are subject to significant errors. Consider a set of perfectly linear data: an interpolating method would fit a linear function, but if one of the points were perturbed by noise, it would instead fit a higher-order curve to the data, which is undesirable. Interpolation is only viable if the noise in the data is very small.

    Approximation methods take noise into account by using a statistical criterion such as least squares. Curves generated by approximation do not pass through each point in the data set. For analyzing noisy MRI data, approximation is the superior method.
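
    A quick sketch of the contrast, on our own toy data (perfectly linear data plus noise):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 6)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.size)  # line + noise

    # Interpolation: a degree-5 polynomial threads through all 6 noisy
    # points, wiggling to chase the noise ...
    wiggly = np.polyfit(x, y, deg=x.size - 1)
    # ... while approximation fits the single line minimizing squared error.
    slope, intercept = np.polyfit(x, y, deg=1)

    print("interpolant degree:", len(wiggly) - 1)  # 5
    print(f"approximating line: y = {slope:.3f} x + {intercept:.3f}")  # ~2x + 1
    ```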

  8. Is there any way to resolve issues regarding “tract jumping”? Is it possible to identify a confidence level of our mapping for an individual tract?

    One way to deal with this is to be aware of its cause: noise can lead to poor estimates of the eigenvalues, and when the estimated eigenvalues are nearly equal, the computed principal direction can swing by up to 90 degrees, which allows the tracking to jump from tract to tract.

  9. If a larger study focused on magnetization transfer ratio (MTR) were conducted, what differences might we expect to see between fiber-at-risk and normal-appearing white matter with regard to this measure?

    Although the underpinnings of MTR are not entirely clear to us right now, it seems as though even a larger study would not find significant differences between fibers-at-risk (FAR) and normal-appearing white matter (NAWM). This study attempts to identify FAR by tracing tracts through areas that are known to be damaged; the FAR itself is not significantly different from NAWM (otherwise researchers could just use MTR or another value to identify FAR instead of using tractography). Their findings reinforce this theory: “MTR values appear abnormally low in the AAWM fraction, and relatively normal in the FAR and NAWM fractions.”

  10. The number of patients used was 3, which does not constitute a sufficient sample size to fully validate any methodology or hypothesis by today’s standards. Additionally, without a control group, many of the metrics are meaningless (e.g., what percentage of white matter appears normal?).

    See Question 68.

  11. On the bottom of 630 in Basser’s paper, they mention the “use of isotropic voxels” to mitigate susceptibility issues. Is this just a fancy way of scanning with multiple orientations?

    The use of isotropic voxels means using voxels whose dimensions are the same in all directions. This reduces susceptibility effects and mitigates the influence of fiber orientation (http://imaging.mrc-cbu.cam.ac.uk/imaging/AnalyzingDiffusion?action=AttachFile&do=get&target=Diffuson_voxel_size.doc): anisotropic voxel dimensions can bias the measured FA (i.e., the voxel may be more susceptible, and the same fiber could yield different FA values depending on the dimensions of the voxel). So it is a property of the acquisition geometry, not just a fancy way of scanning with multiple orientations.

  12. Also in Basser: How exactly was figure 5b rendered differently from 6a? (was it?) And from 7? Same rendering process, just different regions?

    I believe the only difference between these images is in the areas they represent, and that your assumption is correct. The images seem to follow the same rendering process, except that figure 5b maps the trajectories together with slices from the ROI, the body of the corpus callosum. I think the significance of this section is not to show how different renderings appear, but to discuss the process scientists follow in deciphering these images. For example, in the 3-D rendering of the corpus callosum in figure 7, the cingulum appears even though it was not included in the ROI, which means the tractography algorithm found a connection between callosal and cingular fibers.

  13. What is a B-spline?

    A b-spline is a tool from numerical analysis. It represents curves as piecewise polynomials of degree k. The points where the pieces meet are called knots; each degree-k basis function spans k+2 consecutive knots. B-splines are useful for fitting curves to data such as exponential decays. In many instances (almost always in physics), there is a theoretical basis for choosing the function fitted to a set of experimental data. When there is no such theoretical model (as in these studies), b-splines produce good fits.
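
    A small sketch of the approximating (smoothing) use of B-splines in SciPy; the data here are synthetic, shaped like an exponential signal decay:

    ```python
    import numpy as np
    from scipy.interpolate import splrep, splev

    # Noisy exponential decay, e.g. a diffusion signal vs. b-value.
    rng = np.random.default_rng(1)
    b = np.linspace(0.0, 3000.0, 30)
    signal = np.exp(-0.0007 * b) + rng.normal(scale=0.02, size=b.size)

    # s = 0 would interpolate every noisy sample; a positive smoothing
    # factor requests an approximating cubic (k = 3) B-spline instead.
    tck = splrep(b, signal, k=3, s=0.02)
    print(np.round(splev(b[:5], tck), 3))  # smoothed values at the first b's
    ```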

Vilanova et al 2006

  1. What is the relationship between volume rendering and vector field visualization? What are the differences?

    The relationship between these two methods is that they are both ways to visualize tensor fields. A lot of the differences are specified in 3.2 and 3.4 of Vilanova et al., but I’ll try to summarize here.

    Volume rendering maps from tensor properties to voxel colors/opacities using a so-called transfer function. The transfer function’s inputs can be any aspect of the tensor; the example presented uses fractional anisotropy (see table 1 of Vilanova), but one could, in principle, use any aspect of the tensor. In figure 5, a voxel is rendered as grey, with zero opacity (i.e., invisible) when the fractional anisotropy of the tensor within that voxel falls below a given threshold. Increasing this threshold allows researchers to filter out regions of low fractional anisotropy. Again, this is just one instance of volume rendering; lots of tensor statistics could be mapped to lots of voxel attributes.

    Vector field visualization, on the other hand, simplifies the tensor to just its principal eigenvector (the one with the largest corresponding eigenvalue). While this constitutes a loss of information, vector fields are, in some ways, easier to deal with than tensor fields. You can map a vector to an RGB color value easily, for instance (Vilanova fig. 11). Furthermore, tractography (the tracing of entire fibers) is based on the vector field underlying the tensor field.
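
    For instance, here is a sketch of the widely used direction-to-color convention (our own toy code; mapping |x|, |y|, |z| of the principal eigenvector to red, green, and blue, often weighted by something like FA, is a common scheme, though Vilanova’s figure may differ in details):

    ```python
    import numpy as np

    def direction_color(tensor, fa):
        """Map the principal eigenvector to RGB: |x| -> red, |y| -> green,
        |z| -> blue, scaled by FA so isotropic voxels go dark."""
        vals, vecs = np.linalg.eigh(tensor)  # eigenvalues in ascending order
        e1 = vecs[:, -1]                     # principal eigenvector
        return fa * np.abs(e1)               # eigenvector sign is arbitrary

    D = np.array([[1.7, 0.0, 0.0],
                  [0.0, 0.3, 0.0],
                  [0.0, 0.0, 0.2]])
    print(direction_color(D, fa=0.84))  # ~[0.84, 0, 0]: red, x-oriented fiber
    ```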

  2. On page 17 what is the ‘time of arrival’ the authors mention as a factor in tractography?

    In the paragraph where they mention time of arrival, they’re discussing models of possible neural paths in locations with planar anisotropy. These models are tested by running simulations of diffusion, and the time of arrival at a given point refers to when the simulated diffusion front reaches that point from a given start point. The papers Vilanova et al. are summarizing here are probably good sources for understanding this better; we can’t access them, but we suspect the summary does not do them justice.

  3. Do researchers often combine visualization techniques, and what are the common ways to do it?

    In several instances, we have seen scalar indices combined, usually fractional anisotropy and mean diffusivity. Scalar indices are also used in other techniques. In tractography, the stopping point for a streamline is determined by the fractional anisotropy of the voxel. Streamlines and stream surfaces are sometimes combined into one model.

  4. What is barycentric space?

    We can think of a barycentric space as a triangle with three different qualities or measures placed at the vertices; any location inside the triangle represents a weighted mixture of the three. Shapes and intermediate forms are then placed at locations inside the triangle according to their measures of anisotropy. These models are good for distinguishing between cases where interactions between the variables provide significant contrast.

  5. How are hyperstreamlines different from streamlines? Why are they useful?

    From our understanding, streamlines are one-dimensional curves through space that use only the first-order information in the tensor field (the principal eigenvector). Hyperstreamlines are essentially tubes through space constructed from the full second-order tensor field using all three eigenvectors: the tube follows the direction of the principal eigenvector, with a cross-section whose axes are defined by the second and third eigenvectors.

  6. How do the run times differ among visualization techniques? Is any method particularly advantageous or impractical?

    As Jadrian mentioned in class, run times are not a concern for researchers in the field right now. None of the methods described are fast. Methods using scalar indices are faster than more complex methods such as volume rendering. The complexity of these algorithms is also not a huge issue since the size of each data set (brain) is more or less constant.

  7. Why are all the visualization techniques based on tensors? Many studies transform the tensor into graphical objects, but why do we start from a tensor rather than raw data?

    While it might be advantageous to create a different type of model from the raw data, tensors have some nice properties and, as a result, have become very popular to use in this context. The main advantage of using a tensor is that it decreases the number of measurements one needs to make in order to completely describe the diffusion in each voxel. Measurements along 70+ different axes are fitted to a single tensor, and this tensor readily provides information about the diffusion in the tissue at a given voxel. Working with the tensor simplifies the task of understanding the tissue without having to take thousands of measurements, and is complex enough to be a viable model.

    To highlight the advantages/disadvantages of tensors, consider this simpler diffusion model: a linear interpolation scheme would require only 3 measurements from 3 orthogonal axes to produce a voxel-level diffusion model. While it does require fewer scans, problems with this simple model might arise because a majority of fibers are not along these chosen axes. The point is that diffusion models can have different acquisition costs (number of scans), computational costs (how long it takes to fit the model), and complexities (how well the diffusion is characterized).

    If you can think of a better model than a tensor that uses ~70 measurements though, try it! :)

  8. Other than “surgery planning” what other ways can imaging inform disease diagnosis, treatment, or prevention?

    Even in the few papers we have read, there is a host of other benefits of imaging. In Mori & Barker 1999, the authors show that DT-MRI is essential in detecting stroke. In Simon et al 2006, it is shown that fibers at risk of degeneration due to multiple sclerosis can be detected even before they begin to degenerate. Clearly, it is important to note that MRI is essentially the only noninvasive tool science currently has to map the brain, so the benefits are innumerable.

  9. When discussing volume rendering, the authors mention that “there are no implementations that can volume render...at interactive rates.” How has the progress in graphics hardware since 2006 affected this field?

    Well, GPUs are faster: according to Moore’s law, transistor counts double roughly every 18 months (1.5 years). Over the roughly 7.5 years since 2006, that is five doublings, so graphics hardware should be approximately 32 times better (have 32 times more transistors). Even so, volume rendering still does not occur at interactive rates despite this large growth in GPU capacity.

  10. We had been functioning with the understanding that nMR and MR were the same thing, but on page 3, it seems to distinguish between them as if to say that MR imaging is when the signal originates from the sample (perhaps with radioactive dye). What if any are the differences here?

    Page 3 describes two related concepts: nuclear magnetic resonance, a physical principle whereby nuclei absorb and re-emit energy, and magnetic resonance imaging, a process by which this physical principle is used to produce an image. In this instance, their description of “imaging” is a process by which one can tell exactly where in a sample (a brain, a heart, etc.) certain signal is being emitted from; these signals and their locations are combined to form a diffusion-weighted image of some sort. In short, nMR and MR are the same thing, as are nMRI and MRI.

  11. When the paper discusses imaging techniques that “contract the tensor to one scalar,” it seems that the downsides of contraction are offset by the fact that humans can take multiple visualizations as input and go from there.

    This seems like another context dependent problem. There may be some cases where a scalar image or a series of scalar images give you the information you want, but more complex imaging techniques, while harder to interpret, can give you more nuanced results.

  12. With rapidly improving computer architecture, what sorts of imaging techniques do you think will develop that were previously impossible or impractical?

    Some of the techniques discussed in this article are not suitable for clinical applications. Volume rendering is too slow. Tractography based methods are subject to human errors because people select seed points. These methods will likely improve with time. Maybe one day we will be able to use giant holograms to look at our brains (See Iron Man 3).

  13. What is a superquadric? How exactly do we visualize a 6-D tensor?

    According to Wikipedia (http://en.wikipedia.org/wiki/Superquadrics), superquadrics are a family of shapes described by the same formula parameterized in different ways. Fig. 10 of Vilanova presents some examples of superquadrics. According to the paper that originally proposed using superquadrics to visualize tensor fields (http://dl.acm.org/citation.cfm?id=2384248), they have some advantages over cuboids or ellipsoids: “cuboids and ellipsoids have problems of asymmetry and visual ambiguity. Cuboids can display misleading orientation for tensors with underlying rotational symmetry. Ellipsoids differing in shape can be confused, from certain viewpoints, because of similarities in profile and shading.”

    For our purposes, superquadrics are simply another type of tensor glyph. A tensor glyph is any sort of parameterized geometric object that describes a diffusion tensor, including our familiar diffusion ellipsoid. Tensor glyphs allow for visualizations of tensors: while we might not be able to visualize six-dimensional geometry directly, we can visualize geometry with six parameters. This link also has some useful explanations of tensor field visualization: http://www.cs.utah.edu/~gk/papers/vissym04/
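
    To make “the same formula parameterized in different ways” concrete, here is a sketch that samples a superellipsoid surface (Barr’s parameterization; this is our own illustration, and actual tensor glyphs additionally derive the exponents from anisotropy measures and scale the axes by the eigenvalues):

    ```python
    import numpy as np

    def spow(x, p):
        """Signed power, the basic ingredient of superquadric formulas."""
        return np.sign(x) * np.abs(x) ** p

    def superquadric(e1, e2, n=32):
        """Sample a superellipsoid: e1 = e2 = 1 gives a sphere, exponents
        near 0 approach a box, large exponents give pointier shapes."""
        eta, omega = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, n),
                                 np.linspace(-np.pi, np.pi, n))
        x = spow(np.cos(eta), e1) * spow(np.cos(omega), e2)
        y = spow(np.cos(eta), e1) * spow(np.sin(omega), e2)
        z = spow(np.sin(eta), e1)
        return x, y, z

    sphere = superquadric(1.0, 1.0)   # the familiar ellipsoid-style glyph
    boxish = superquadric(0.1, 0.1)   # nearly a cuboid
    ```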

Alexander 2005

  1. When and why do we use ADC?
  2. What is meant by a mixture of Gaussian densities (p. 93)?
  3. How confident can we be confirming the accuracy of techniques using models like the “phantom” mentioned on page 96?
  4. Equation 5.4 describes a way to compute D such that it “has eigenvalues alpha + beta, beta, and beta.” Is this an error, or are two of the eigenvalues actually the same?
  5. What do we gain by transforming our tensor fit into a linear optimization? It seems like there are many downsides to this (many having to do with Rician noise, etc). Is it really that much faster?
  6. If higher order tensor analysis and spherical harmonics cannot provide fiber orientation estimates, what purpose do they serve? Do they indicate when the Gaussian model breaks down?
  7. Several sections of the paper, such as 5.3.5, use wavenumbers in the model. What are wavenumbers?
  8. Section 5.3 discusses “other features of p.” Are these the sort of calculable scalar quantities we discussed/read about in Vilanova & Zhang?
  9. How would knowledge of, say, Fourier transforms help a programmer who works with MRI machines, or are there computer programs that already exist that make this sort of knowledge superfluous?
  10. Could a sort of hybrid algorithm optimize the trade-off between scan integrity and efficiency? I am imagining using “ADC modelling” to identify and isolate the non-Gaussian voxels, then using normal DT-MRI on the isotropic or Gaussian voxels (efficient) and using DSI on those non-Gaussian voxels (more accurate). I can’t tell whether this process is similar to the specific algorithms described at the end of 5.3 or not.
  11. The text mentions that the diffusion spectrum imaging model is not used due to its long acquisition times. What practical issues does this raise, provided that the in vivo subject consents to staying in the scanner for an extended period of time? Does long acquisition time correlate with high costs in this case?
  12. Many different types of reconstruction algorithms are described in this paper, and we believe we understand most of these in some ways but not in detail. Can we go through one of them in greater detail in class? This could possibly benefit our understanding of the other algorithms.