Analyzing Brain Connectivity & Goodness-of-Fit in Neuroimaging

Neuropsychology, Neuroscience, Brain Imaging, Cognitive Neuroscience

This document covers the application of Structural Equation Modeling (SEM) in neuroimaging research, focusing on creating connectivity models of brain regions, testing their goodness of fit, and balancing model complexity, anatomical accuracy, and interpretability. It also covers the estimation and interpretation of path coefficients, as well as the comparison of different models.

What you will learn

  • What is the role of Structural Equation Modeling (SEM) in neuroimaging research?
  • How do researchers create brain connectivity models using SEM?
  • How are path coefficients estimated and interpreted in SEM?
  • What is the significance of the goodness-of-fit criteria in SEM?
  • What are the challenges in applying SEM to neuroimaging data?

1. Structural Equation Modeling

Roughly speaking, SEM involves the creation of possible connectivity models involving brain regions that are active for a given task, followed by testing the goodness of fit of these models to see whether they can account for a significant amount of the experimental data. Here we use this technique to investigate possible connections between cortical regions that are active during processing of visual and audio-visual speech stimuli in both normal-hearing and congenitally deaf individuals.

SEM is a multivariate technique used to analyze the covariance of observations (McIntosh et al., 1996). When applying SEM techniques, one also has to find a compromise between model complexity, anatomical accuracy, and interpretability, since there are mathematical constraints that limit how complex the model can be. The first step in the analysis is to define an anatomical model (constraining model); the next step is to use the inter-regional covariances of activity to estimate the parameters of the model.

Figure 1 Example of a structural model (brain areas A, B, C, D with connections v, w, x, y, z)

Consider the simple example in Figure 1 (from McIntosh and Gonzalez-Lima, 1994). Here A, B, C, and D represent brain areas, and the arrows labeled v, w, x, y, and z represent the anatomical connections. Together these comprise the anatomical model for the structural equation modeling analyses. In most cases, the time series for each region A, B, C, and D are extracted from the imaging data (fMRI data) and normalized to zero mean and unit variance. The covariance matrices are then computed from these time series, i.e., from the observations obtained from these regions. The values of v, w, x, y, and z are calculated through a series of algebraic manipulations and are known as the path coefficients. These path coefficients (or connection strengths) are the parameters of the model and represent the estimates of effective connectivity. Essentially, the parameters of the model are estimated by minimizing the difference between the observed covariances and the covariances implied by the anatomical (structural) model. Mathematically, the model above can be written as the following set of structural equations:

A = ψ_A
B = vA + ψ_B
C = xA + wB + ψ_C
D = yB + zC + ψ_D

For these equations, A, B, C, and D are the known variables (measured covariances); v, w, x, y, and z are the unknown variables. For each region, a separate ψ variable is included; these represent the residual influences. Simply stated, this variable can be interpreted as the combined influences of areas outside the model and the influence of a brain region upon itself (McIntosh and Gonzalez-Lima, 1992).

The path coefficients are normally computed using software packages such as AMOS, LISREL, and MX32. The starting values of the estimates are initially obtained using two-stage least squares and are then iteratively modified using a maximum-likelihood fit function (Joreskog and Sorbom, 1989). Minimizing the differences between observed and implied covariances is usually done with steepest-descent iterations. The structural equation modeling technique thus differs from statistical approaches such as multiple regression or ANOVA, where the regression coefficients are obtained by minimizing the sum of squared differences between the predicted and observed dependent variables.
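The estimation step can be illustrated with a minimal sketch, assuming Python with NumPy and SciPy rather than the AMOS/LISREL/MX32 packages used in practice; function names such as implied_cov and ml_fit are ours. The sketch parameterizes the Figure 1 model, builds the model-implied covariance (I − B)^(-1) Ψ (I − B)^(-T), and minimizes the maximum-likelihood fit function over the path coefficients and residual variances.

```python
# Minimal sketch of SEM path-coefficient estimation for the Figure 1 example.
# This is an illustration, not the AMOS/LISREL routine described in the text.
import numpy as np
from scipy.optimize import minimize

regions = ["A", "B", "C", "D"]        # order of the observed variables
p = len(regions)

def implied_cov(params):
    """Model-implied covariance: Sigma = (I - B)^-1 Psi (I - B)^-T."""
    v, w, x, y, z = params[:5]        # path coefficients of Figure 1
    psi = np.exp(params[5:])          # residual variances, kept positive
    B = np.zeros((p, p))              # B[i, j] = direct effect of region j on region i
    B[1, 0] = v                       # A -> B
    B[2, 0] = x                       # A -> C
    B[2, 1] = w                       # B -> C
    B[3, 1] = y                       # B -> D
    B[3, 2] = z                       # C -> D
    inv_ib = np.linalg.inv(np.eye(p) - B)
    return inv_ib @ np.diag(psi) @ inv_ib.T

def ml_fit(params, S):
    """Maximum-likelihood discrepancy F_ML between S and the implied covariance."""
    sigma = implied_cov(params)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:
        return np.inf
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

# S would be the covariance of the normalized regional time series; toy data here.
rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((200, p)), rowvar=False)

start = np.zeros(5 + p)               # crude start values (the text uses two-stage least squares)
result = minimize(ml_fit, start, args=(S,), method="BFGS")
print(dict(zip("vwxyz", np.round(result.x[:5], 3))))
```

In practice the start values would come from two-stage least squares and the software would also report standard errors; a simple BFGS run on toy data is enough here to show the shape of the computation.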
In structural equation modeling, instead of considering individual observations (or variables) as in other common statistical approaches, the covariance structure is emphasized. In the context of neural systems, the covariance measure corresponds to how much the neural activities of two or more brain regions are related. Applying structural equation modeling to neuroimaging data has a particular advantage compared with applying it to economics, social-science, or psychology datasets, since the connections (or pathways) between the dependent variables (the activity of brain areas) can be determined from anatomical knowledge, and the activity can also be measured directly. In applications to other fields this is not always true: the models are sometimes hypothetical and cannot be measured directly.

Goodness-of-Fit Criteria

Typically in SEM, statistical inference is used to measure (1) the goodness of the overall fit of the model, i.e., how significantly different the observed covariance structure is from the covariance structure implied by the anatomical model, and (2) the difference between alternative models when modeling modulatory influence or experimental context, using the nested (or stacked) model approach. For the purpose of assessing the overall fit of the model, the χ2 value relative to the degrees of freedom is most widely calculated. This is often referred to as the chi-square test and is an absolute test of model fit. If the p-value associated with the χ2 value is below 0.05, the model is rejected in the absolute-fit sense. Because the χ2 goodness-of-fit criterion is very sensitive to sample size and to non-normality of the data, other descriptive measures of fit are often used in addition to the absolute χ2 test. When the number of samples is greater than a few hundred, the χ2 test has a strong tendency to show statistically significant results, leading to a rejected model, whereas other descriptive fit indices may still indicate an acceptable fit.
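As a rough illustration of these criteria (a sketch only, not the AMOS implementation used later in this document; chi_square_fit and ml_discrepancy are our own names): with N observations, the χ2 statistic is (N − 1) times the maximum-likelihood discrepancy between the observed covariance S and the model-implied covariance Σ, and RMSEA is one common descriptive index derived from it.

```python
# Illustrative chi-square test of absolute fit plus RMSEA, under the usual
# maximum-likelihood discrepancy; not the exact software implementation.
import numpy as np
from scipy.stats import chi2

def ml_discrepancy(S, Sigma):
    """F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p."""
    p = S.shape[0]
    return (np.linalg.slogdet(Sigma)[1]
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

def chi_square_fit(S, Sigma, n_obs, df):
    """Absolute chi-square statistic, its p-value, and RMSEA."""
    stat = (n_obs - 1) * ml_discrepancy(S, Sigma)
    p_value = chi2.sf(stat, df)
    rmsea = np.sqrt(max(stat - df, 0.0) / (df * (n_obs - 1)))
    return stat, p_value, rmsea

# Toy check: if the model reproduced the data perfectly, Sigma == S and chi2 == 0.
S = np.array([[1.0, 0.4], [0.4, 1.0]])
print(chi_square_fit(S, S.copy(), n_obs=300, df=1))
```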
2. Effective Connectivity Analyses Using SEM

To further investigate the cortical interactions involved in auditory-visual speech perception, we performed structural equation modeling (SEM) analyses of our fMRI data. The SEM analyses were conducted on both normally hearing and deaf subjects to identify pathways that underlie the processing of visual speech. SEM is a multivariate technique used to analyze the covariance of observations (McIntosh et al., 1996). When applying SEM techniques, one also has to find a compromise between model complexity, anatomical accuracy, and interpretability, since there are mathematical constraints that limit how complex the model can be.

Anatomical Models

Based on previous findings on the functional specializations of brain regions known to be associated with visual and auditory stimulus processing, along with known anatomical connections in primates (see Discussion), a number of cortical regions were identified and used to construct a plausible, yet relatively simple, anatomical model for our SEM analyses. Six cortical regions and their hypothesized connections comprised the structural model constructed for the effective connectivity analyses. Here the Collations of Connectivity data on the Macaque brain (CoCoMac; http://www.cocomac.org) database was used extensively to search for interconnectivity patterns reported in the literature.

We hypothesized that there are projections from higher-order visual cortex (V2) to the fusiform gyrus (FG), the angular gyrus (AG), and higher-order auditory cortex, more specifically the posterior superior temporal sulcus (STS). We further assumed projections from the FG and the AG to the STS. The opercular region of the inferior frontal gyrus (IFG, BA 44) and the lateral region of premotor cortex (PMC) – brain areas that are generally believed to play a role in auditory-visual speech perception and production – were also included in our anatomical model and were assumed to be connected with AG, FG, and STS. To define an anatomical model that would best account for the underlying neural circuitry during auditory-visual speech perception in both the Hearing and Deaf groups, we searched through a set of permissible functional connection patterns that included the conjectured connectivity described above. After sorting through global fit measures for this set of connectivity patterns, we identified the structural model depicted as a path diagram in Figure 1 as providing the best fit across all subjects and both conditions.

Figure 1 Anatomical model for SEM analyses (V2 = Secondary Visual Cortex, AG = Angular Gyrus, FG = Fusiform Gyrus, STS = Superior Temporal Sulcus, IFG = Inferior Frontal Gyrus, PMC = Premotor Cortex)

Data Extraction and Model Fitting

Activity in the cortical regions of interest (ROIs) of the SEM path models was extracted for all subjects for the CVCV Visual-Only and CVCV Audio-Visual conditions. For each subject, local maxima were identified within each region based on the functional maps for the CVCV Visual-Only condition. The mni2tal algorithm (http://www.mrc-cbu.cam.ac.uk/Imaging) was used to transform the MNI coordinates into Talairach coordinates, and the Talairach Daemon client (http://ric.uthscsa.edu/TDinfo/) was used to identify the corresponding atlas labels and Brodmann's areas. BOLD signals in each ROI were extracted separately from the right and left hemispheres for the experimental conditions using the MarsBaR toolbox (http://marsbar.sourceforge.net). For each ROI of a single subject, the average signal was extracted using the SPM scaling design and the mean-value option from a spherical region (r = 5 mm) centered at the peak activation coordinate. The extracted series from each temporal block were normalized for each subject, signal outliers were removed, and the first scan in each block (TR = 3 s) was discarded to account for the delay in the hemodynamic response. Finally, these values were concatenated across all subjects to create a single time series for each ROI and experimental condition, and the covariance matrix was calculated by treating these time series as the measurements of the observed variables.

The SEM analyses were conducted in the AMOS 5 software (http://www.spss.com/amos/index.htm). Maximum likelihood estimation was performed on the path coefficients between observed variables, giving a measure of causal influence. The statistical significance of these parameter estimates was also computed.
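The pooling of the extracted signals into a covariance matrix can be summarized in a short illustrative sketch (Python/NumPy; the block layout, helper names, and random stand-in data are assumptions, not the actual SPM/MarsBaR pipeline): the first scan of each block is dropped, each subject's series is normalized, the series are concatenated across subjects per ROI, and the ROI-by-ROI covariance is computed.

```python
# Illustrative pooling of extracted ROI signals into a covariance matrix.
import numpy as np

def subject_series(blocks):
    """blocks: list of 1-D arrays, one per temporal block for one ROI of one subject.
    Drops the first scan of each block (hemodynamic delay, TR = 3 s), concatenates,
    and z-scores the subject's series. Outlier removal is omitted for brevity."""
    trimmed = np.concatenate([np.asarray(b, dtype=float)[1:] for b in blocks])
    return (trimmed - trimmed.mean()) / trimmed.std()

def roi_covariance(data):
    """data: dict mapping ROI name -> list of per-subject block lists.
    Returns the ROI names and the covariance matrix of the pooled time series."""
    names = list(data)
    pooled = np.vstack([
        np.concatenate([subject_series(blocks) for blocks in data[name]])
        for name in names
    ])
    return names, np.cov(pooled)

# Toy usage with random numbers standing in for the extracted BOLD signals:
rng = np.random.default_rng(1)
rois = {name: [[rng.standard_normal(12) for _ in range(4)]   # 4 blocks per subject
               for _ in range(3)]                            # 3 subjects
        for name in ["V2", "FG", "AG", "STS", "IFG", "PMC"]}
names, cov = roi_covariance(rois)
print(names)
print(np.round(cov, 2))
```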
Results

The structural model in Figure 1 was analyzed separately for the Hearing and the Deaf groups, and also separately for the left and right hemispheres, resulting in four independent models: Hearing (left hemisphere), Hearing (right hemisphere), Deaf (left hemisphere), and Deaf (right hemisphere).

To investigate whether any of the path model's connection strengths change between the CVCV Visual-Only and the CVCV Audio-Visual conditions (i.e., to test for changes in the connection strengths when auditory speech information is absent versus available), multi-group analyses were conducted using the nested-models approach. The null (constrained) model's parameters were restricted to be equal across the two conditions, whereas the free (unconstrained) model's parameters were allowed to differ between the conditions. Several goodness-of-fit indices for the four nested models, as discussed in the Methods section, are listed in Table 11 along with the χ2 statistics for the model comparisons.

The goodness-of-fit indices indicate that the anatomical model (Figure 1) adequately fits the experimental data for both subject groups and for both hemispheres, especially when the models were unconstrained. This implies that our anatomical model suitably represents a network of cortical regions that may underlie audio-visual speech processing for both subject groups, while being sensitive to changes in the availability of auditory speech. The χ2 fit index for the Hearing (Right) model suggested that the absolute fit may not be acceptable (χ2(6) = 12.771, P = 0.047), as its p-value falls just below the cut-off of P = 0.05, but as stated in the Methods section, the other descriptive fit statistics (RMR = 0.013, GFI = 0.998, AGFI = 0.986, RMSEA = 0.024) reflect a good overall fit, so this model was not rejected in our analyses.

The stability index (Fox, 1980; Bentler and Freeman, 1983) was also calculated for each model, since our path model includes a nonrecursive subset of regions: AG, FG, STS, IFG, and PMC. As listed in Table 11, the estimates for both Hearing models and for the Deaf right-hemisphere model were found to be well below one and thus stable. However, the Deaf (Left) model's stability indices were all greater than one (STI = 2.387, 2.387, 1.131). If the stability index is greater than or equal to one for any of the nonrecursive subsets of a path model, the parameter estimates are known to yield an unstable system, producing results that are particularly difficult to interpret. We therefore decided not to present the parameter estimates from the Deaf (Left) model.

All nested models except the Hearing (Left) model (χ2diff = 23.995, df = 15, P = 0.065) showed statistically significant differences between the unconstrained and constrained models. Since the Hearing (Left) model did not reach the conventional level of significance (p < 0.05), its path coefficients should be interpreted with some caution.
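The nested-model comparisons above are χ2 difference tests: the constrained model's χ2 minus the unconstrained model's χ2, referred to a χ2 distribution with the difference in degrees of freedom. A minimal sketch (Python/SciPy; the helper name is ours, and only the quoted Hearing (Left) difference is used, since the individual model χ2 values are not reproduced here):

```python
# Nested-model (chi-square difference) comparison, as used for the multi-group tests.
from scipy.stats import chi2

def chi_square_difference_test(chi2_constrained, df_constrained,
                               chi2_unconstrained, df_unconstrained):
    """Difference in chi-square evaluated against the difference in df."""
    diff = chi2_constrained - chi2_unconstrained
    df_diff = df_constrained - df_unconstrained
    return diff, df_diff, chi2.sf(diff, df_diff)

# Checking the Hearing (Left) comparison quoted above: a difference of 23.995 on 15 df
# corresponds to p of about 0.065, i.e. not significant at the 0.05 level.
print(round(chi2.sf(23.995, 15), 3))
```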
Figure 2 Hearing (left): estimated path coefficients [black: CVCV Visual-Only, blue: CVCV Audio-Visual]

Figure 3 Hearing (right): estimated path coefficients [black: CVCV Visual-Only, blue: CVCV Audio-Visual]

Table 3 Deaf group (Deaf (Right) model): estimated path coefficients for the CVCV Visual-Only (VO) and CVCV Audio-Visual (AV) conditions, with pairwise comparisons [SE = standard error, CR = critical ratio; *** = p-value < 0.001; ** = p-value < 0.01; * = p-value < 0.05]

| Connection | VO coef. | VO SE | VO CR | VO p | AV coef. | AV SE | AV CR | AV p | CR for difference | p |
|---|---|---|---|---|---|---|---|---|---|---|
| V2 → AG | .392 | .038 | 10.332 | *** | .294 | .043 | 6.835 | *** | -2.310 | * |
| V2 → FG | .363 | .032 | 11.227 | *** | .316 | .036 | 8.911 | *** | -1.103 | |
| V2 → STS | -.101 | .082 | -1.223 | | .090 | .091 | .987 | | 2.425 | * |
| FG → AG | .377 | .052 | 7.239 | *** | .284 | .039 | 7.289 | *** | -2.101 | * |
| AG → FG | .377 | .052 | 7.239 | *** | .284 | .039 | 7.289 | *** | -2.101 | * |
| AG → STS | .594 | .120 | 4.951 | *** | .414 | .182 | 2.274 | * | -1.557 | |
| STS → AG | -.249 | .126 | -1.977 | * | -.067 | .156 | -.430 | | 2.308 | * |
| FG → STS | .417 | .103 | 4.060 | *** | .164 | .154 | 1.064 | | -1.847 | |
| STS → FG | -.154 | .094 | -1.629 | | .013 | .129 | .100 | | 1.585 | |
| STS → PMC | .375 | .144 | 2.607 | ** | .208 | .206 | 1.008 | | -.863 | |
| PMC → STS | -.093 | .200 | -.467 | | .104 | .298 | .350 | | .705 | |
| STS → IFG | .462 | .042 | 11.089 | *** | .579 | .069 | 8.374 | *** | 1.750 | |
| IFG → STS | -.203 | .073 | -2.775 | ** | -.328 | .130 | -2.515 | * | -.854 | |
| IFG → PMC | .302 | .021 | 14.085 | *** | .248 | .031 | 7.985 | *** | -1.742 | |
| PMC → IFG | .302 | .021 | 14.085 | *** | .248 | .031 | 7.985 | *** | -1.742 | |
| FG → PMC | .026 | .058 | .439 | | .046 | .048 | .960 | | .316 | |
| AG → PMC | -.023 | .076 | -.298 | | .168 | .090 | 1.881 | | 2.124 | * |

Figure 4 Deaf (right): estimated path coefficients [black: CVCV Visual-Only, blue: CVCV Audio-Visual]
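The critical ratios in Table 3 are computed by AMOS from the full covariance matrix of the parameter estimates; a hand computation of the difference statistic needs the covariance between the two estimates as well, so the simple independence formula below is only a rough cross-check, not a reproduction of the table values. The sketch (Python/SciPy; function names are ours) shows the form of both statistics.

```python
# Rough illustration of critical ratios; AMOS uses the full estimate covariance matrix.
import math
from scipy.stats import norm

def critical_ratio(estimate, se):
    """Critical ratio (z statistic) for a single path coefficient."""
    return estimate / se

def critical_ratio_for_difference(b1, se1, b2, se2, cov=0.0):
    """z statistic for the difference b2 - b1 between two estimates.
    Set cov to the covariance of the two estimates when it is available."""
    return (b2 - b1) / math.sqrt(se1**2 + se2**2 - 2.0 * cov)

def two_tailed_p(z):
    return 2.0 * norm.sf(abs(z))

# V2 -> AG path from Table 3: .392/.038 is about 10.3, matching the tabled CR up to rounding.
print(round(critical_ratio(0.392, 0.038), 3))
# With cov unknown (set to 0), the difference statistic only approximates the tabled -2.310.
z = critical_ratio_for_difference(0.392, 0.038, 0.294, 0.043)
print(round(z, 3), round(two_tailed_p(z), 3))
```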
Tables 14 and 15 were also constructed to list the direct, indirect, and combined total effects that each node in the path diagram had on the other nodes. The estimated connection strengths listed in Tables 12 and 13 are equivalent to the direct effects listed in Tables 14 and 15.

Table 4 Hearing group: estimated parameters for direct (d), indirect (i), and total (t) effects. Rows give the effects on each region; columns give the region exerting the effect.

CVCV Visual-Only

| Hemisphere | Effect on | Type | V2 | FG | IFG | PMC | AG | STS |
|---|---|---|---|---|---|---|---|---|
| Left | FG | d | .493 | .000 | .000 | .000 | .136 | .133 |
| Left | FG | i | .123 | .020 | -.086 | -.014 | .058 | -.059 |
| Left | FG | t | .615 | .020 | -.086 | -.014 | .194 | .074 |
| Left | IFG | d | .000 | .000 | .000 | .161 | .000 | .969 |
| Left | IFG | i | .430 | .021 | -.560 | -.094 | .391 | -.466 |
| Left | IFG | t | .430 | .021 | -.560 | .066 | .391 | .503 |
| Left | PMC | d | .000 | .070 | .161 | .000 | .083 | .322 |
| Left | PMC | i | .283 | .018 | -.283 | -.022 | .198 | -.076 |
| Left | PMC | t | .283 | .088 | -.122 | -.022 | .281 | .246 |
| Left | AG | d | .377 | .136 | .000 | .000 | .000 | .134 |
| Left | AG | i | .137 | .004 | -.086 | -.015 | .074 | -.060 |
| Left | AG | t | .514 | .140 | -.086 | -.015 | .074 | .074 |
| Left | STS | d | .542 | -.074 | -1.135 | -.009 | .761 | .000 |
| Left | STS | i | -.145 | .081 | .577 | -.085 | -.404 | -.522 |
| Left | STS | t | .397 | .007 | -.558 | -.094 | .357 | -.522 |
| Right | FG | d | .364 | .000 | .000 | .000 | .220 | .145 |
| Right | FG | i | .171 | .078 | -.037 | -.044 | .085 | -.012 |
| Right | FG | t | .536 | .078 | -.037 | -.044 | .305 | .133 |
| Right | IFG | d | .000 | .000 | .000 | .299 | .000 | .513 |
| Right | IFG | i | .309 | .142 | -.065 | -.161 | .295 | .068 |
| Right | IFG | t | .309 | .142 | -.065 | .138 | .295 | .581 |
| Right | PMC | d | .000 | .089 | .299 | .000 | -.097 | .469 |
| Right | PMC | i | .294 | .102 | -.127 | -.084 | .316 | .088 |
| Right | PMC | t | .294 | .191 | .172 | -.084 | .220 | .557 |
| Right | AG | d | .353 | .220 | .000 | .000 | .000 | .054 |
| Right | AG | i | .141 | .026 | -.020 | -.024 | .091 | .019 |
| Right | AG | t | .494 | .246 | -.020 | -.024 | .091 | .073 |
| Right | STS | d | .263 | .111 | -.182 | -.244 | .477 | .000 |
| Right | STS | i | .167 | .054 | -.044 | -.021 | -.030 | -.192 |
| Right | STS | t | .430 | .164 | -.226 | -.265 | .447 | -.192 |

CVCV Audio-Visual

| Hemisphere | Effect on | Type | V2 | FG | IFG | PMC | AG | STS |
|---|---|---|---|---|---|---|---|---|
| Left | FG | d | .581 | .000 | .000 | .000 | .175 | -.041 |
| Left | FG | i | .064 | .028 | .008 | .002 | .000 | .033 |
| Left | FG | t | .645 | .028 | .008 | .002 | .175 | -.008 |
| Left | IFG | d | .000 | .000 | .000 | .178 | .000 | .942 |
| Left | IFG | i | .340 | .235 | -.530 | -.106 | .372 | -.424 |
| Left | IFG | t | .340 | .235 | -.530 | .073 | .372 | .517 |
| Left | PMC | d | .000 | .053 | .178 | .000 | -.040 | .411 |
| Left | PMC | i | .209 | .124 | -.312 | -.030 | .220 | -.119 |
| Left | PMC | t | .209 | .177 | -.133 | -.030 | .180 | .292 |
| Left | AG | d | .280 | .175 | .000 | .000 | .000 | .145 |
| Left | AG | i | .160 | .036 | -.077 | -.015 | .083 | -.075 |
| Left | AG | t | .440 | .212 | -.077 | -.015 | .083 | .070 |
| Left | STS | d | .191 | .321 | -1.049 | -.022 | .644 | .000 |
| Left | STS | i | .130 | -.105 | .512 | -.085 | -.284 | -.506 |
| Left | STS | t | .321 | .216 | -.537 | -.106 | .361 | -.506 |
| Right | FG | d | .410 | .000 | .000 | .000 | .274 | -.008 |
| Right | FG | i | .124 | .091 | -.011 | .009 | .033 | .034 |
| Right | FG | t | .535 | .091 | -.011 | .009 | .307 | .026 |
| Right | IFG | d | .000 | .000 | .000 | .288 | .000 | .607 |
| Right | IFG | i | .239 | .267 | -.151 | .209 | .265 | -.047 |
| Right | IFG | t | .239 | .267 | -.151 | .497 | .265 | .560 |
| Right | PMC | d | .000 | .173 | .288 | .000 | .080 | .021 |
| Right | PMC | i | .205 | .126 | -.057 | .153 | .147 | .172 |
| Right | PMC | t | .205 | .299 | .231 | .153 | .227 | .193 |
| Right | AG | d | .275 | .274 | .000 | .000 | .000 | .136 |
| Right | AG | i | .187 | .065 | -.052 | .039 | .129 | -.016 |
| Right | AG | t | .462 | .340 | -.052 | .039 | .129 | .120 |
| Right | STS | d | .102 | .194 | -.525 | .451 | .271 | .000 |
| Right | STS | i | .167 | .054 | -.044 | -.021 | -.030 | -.192 |
| Right | STS | t | .196 | .104 | .167 | -.179 | .058 | -.169 |
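The relationship between the three effect types can be reproduced from the direct-effect (path coefficient) matrix alone: when that matrix has spectral radius below one (the stability condition discussed above), the total effects are the sum of the geometric series of direct effects, and the indirect effects are the remainder. The sketch below is illustrative (Python/NumPy); the few nonzero entries are taken from the Hearing (left) Visual-Only figure purely as an example, not as a reproduction of the full model.

```python
# Illustrative decomposition of total effects into direct and indirect effects.
import numpy as np

regions = ["V2", "FG", "IFG", "PMC", "AG", "STS"]
n = len(regions)

# D[i, j] = direct effect (path coefficient) of regions[j] on regions[i].
# Only a few example entries are filled in here.
D = np.zeros((n, n))
D[1, 0] = 0.493        # V2 -> FG
D[4, 0] = 0.377        # V2 -> AG
D[5, 0] = 0.542        # V2 -> STS
D[5, 1] = -0.074       # FG -> STS
D[5, 4] = 0.761        # AG -> STS

# Total effects: D + D^2 + D^3 + ... = (I - D)^-1 - I, valid when the
# spectral radius of D is below 1 (the stability-index condition).
assert np.max(np.abs(np.linalg.eigvals(D))) < 1.0
total = np.linalg.inv(np.eye(n) - D) - np.eye(n)
indirect = total - D   # indirect effects are total minus direct

print(np.round(total, 3))
print(np.round(indirect, 3))
```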