fMRI Playground: Simple summaries & simulations of neuroimaging methods
How would you like to begin?
Contact: scottibrain@gmail.com
Please note: this website is a work in progress. Please contact the authors for more information.
fmriplayground (or "fMRI Playground") was developed by Paul S. Scotti, Jiageng Chen, Xiaoli Zhang, and Julie D. Golomb to help researchers better understand currently available fMRI analysis techniques for exploring fMRI brain data and testing hypotheses about the brain.
More specifically, the goal of this tool is to review the most widely used post-processing fMRI methods, including both basic and advanced computational techniques, in an intuitive and hands-on manner. We provide a flowchart that guides the reader to the most appropriate analysis methods, and then we supply simulated data and example Python code to implement each method. Unlike other fMRI resources (we highly recommend Brainiak and fmri4newbies), this web tool focuses on the big picture: what are the different methods, when would you use each one, how do the methods relate to each other and to the underlying research question, and what are the basic steps involved in each method (i.e., breadth over depth).
There are many stages to running a complete fMRI experiment, including experimental design, data collection, data pre-processing (e.g., motion correction), data processing (e.g., general linear model estimation), and finally post-processing analysis (e.g., univariate analysis) to reach a conclusion about the brain. Because of the number of steps that come before post-processing analysis, teaching these methods can become confusing even though the steps themselves are relatively straightforward. For instance, functional connectivity is just the correlation of time series, and representational similarity analysis is just the correlation of conditions! Such intuitive understandings can be lost because, by the time a researcher finally reaches the post-processing stage, they may not have easy access to their data content (e.g., interpreting the contents of an SPM.mat file is a cruel fate) and may only be able to access it via package- and method-specific toolboxes that further obscure the basic steps performed under the hood.
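To make that point concrete, here is a minimal sketch using made-up random arrays (not the simulated datasets used elsewhere on this website) of how little code those two ideas boil down to:
import numpy as np
rng = np.random.default_rng(0)
ts_roi_a = rng.standard_normal(200)      # hypothetical time series from ROI A
ts_roi_b = rng.standard_normal(200)      # hypothetical time series from ROI B
# functional connectivity: correlate two regions' time series
fc = np.corrcoef(ts_roi_a, ts_roi_b)[0, 1]
pattern_cond1 = rng.standard_normal(50)  # hypothetical voxel pattern for condition 1
pattern_cond2 = rng.standard_normal(50)  # hypothetical voxel pattern for condition 2
# representational similarity: correlate two conditions' voxel patterns
rs = np.corrcoef(pattern_cond1, pattern_cond2)[0, 1]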
This tool can help readers to better understand the main ideas and core implementation of a variety of basic and advanced fMRI methods without worrying about the other fMRI stages such as pre-processing. We accomplish this through easy-to-follow flowcharts, high-level summaries of each method, and practical Python examples using simulated data (we only use standard Python packages in our examples [numpy, matplotlib, scipy, and scikit-learn] such that no custom datasets, packages, or toolboxes need to be installed).
Which of the following best describes your primary goal in terms of brain areas?
Which of the following best describes your primary goal in terms of level of understanding?
Which of the following best describes your primary goal in terms of participant populations?
Do you need assistance localizing/defining your brain regions before doing your main analyses?
Follow link to Jupyter Notebook:
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','vox_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/univariate/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
plt.imshow(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),cmap='PuBu')
plt.colorbar()
plt.title("Activations for comedy movies")
plt.yticks([]);plt.ylabel('Subject ID')
plt.xticks([]);plt.xlabel('Voxel ID')
plt.show()
plt.imshow(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),cmap='OrRd')
plt.colorbar()
plt.title("Activations for horror movies")
plt.yticks([]);plt.ylabel('Subject ID')
plt.xticks([]);plt.xlabel('Voxel ID')
plt.show()
plt.imshow(np.array([np.mean(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),axis=1)]).T,cmap='PuBu',clim=[0,1])
plt.title("Avg")
plt.yticks([]);plt.xticks([]);plt.colorbar()
plt.show()
plt.imshow(np.array([np.mean(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),axis=1)]).T,cmap='OrRd',clim=[0,1])
plt.title("Avg")
plt.yticks([]);plt.xticks([]);plt.colorbar()
plt.show()
diffs=np.array([np.mean(np.reshape(activations[cond_id==1],(num_subjects,num_voxels)),axis=1)]).T - np.array([np.mean(np.reshape(activations[cond_id==0],(num_subjects,num_voxels)),axis=1)]).T
plt.imshow(diffs,cmap='bwr',clim=[-1,1])
plt.title("Diff.")
plt.yticks([]);plt.xticks([])
plt.colorbar()
plt.show()
stat = sp.stats.ttest_rel(diffs.flatten(),np.zeros(num_subjects))
print("Horror vs. Comedy (paired t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.spatial  # makes sp.spatial available
# pull data from GitHub
import requests, io
for array in ['activations1','activations2','coord']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/searchlight/{}.npy'.format(array)).content))
# activations1 are activations for comedy movies
# activations2 are activations for horror movies
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(coord[:,0],
coord[:,1],
coord[:,2],
c=activations1,
marker='s',vmin=0,vmax=2,
cmap='PuBu')
fig.colorbar(p)
plt.title("Activations for comedy movies")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(coord[:,0],
coord[:,1],
coord[:,2],
c=activations2,
marker='s',vmin=0,vmax=2,
cmap='OrRd')
fig.colorbar(p)
plt.title("Activations for horror movies")
plt.show()
x0 = 12
y0 = 12
z0 = 12
center=np.array([x0,y0,z0])
cutoff=5
distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
points_in_sphere=coord[distance<cutoff]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations1[distance<cutoff],
marker='s',cmap='PuBu',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Comedy")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff],
marker='s',cmap='OrRd',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Horror")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff]-activations1[distance<cutoff],
marker='s',cmap='bwr',vmin=-3,vmax=3)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Difference")
plt.show()
num_centers = 20*20*20  # one searchlight sphere centered on every grid point
diffs=np.full(num_centers,np.nan)
coord_tracker=np.full((num_centers,3),np.nan)
cnt=0
for x0 in range(20):
    for y0 in range(20):
        for z0 in range(20):
            center=np.array([x0,y0,z0])
            distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
            diffs[cnt] = np.abs(np.mean(activations2[distance<cutoff]-activations1[distance<cutoff]))
            coord_tracker[cnt,:]=[x0,y0,z0]
            cnt+=1
print("Largest absolute difference found at x={} y={} z={}".format(coord_tracker[np.argmax(diffs),0],
coord_tracker[np.argmax(diffs),1],
coord_tracker[np.argmax(diffs),2]))
Largest absolute difference found at x=6.0 y=6.0 z=6.0
And now let's take a look at the sphere where there was the largest difference between conditions! In this way, we have explored where the brain was most responsive to task-specific information.
x0 = 6
y0 = 6
z0 = 6
center=np.array([x0,y0,z0])
distance=sp.spatial.distance.cdist(coord,center.reshape(1,-1)).ravel()
points_in_sphere=coord[distance<cutoff]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations1[distance<cutoff],
marker='s',cmap='PuBu',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Comedy")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff],
marker='s',cmap='OrRd',vmin=0,vmax=2)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Horror")
plt.show()
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
p = ax.scatter(points_in_sphere[:,0],
points_in_sphere[:,1],
points_in_sphere[:,2],
c=activations2[distance<cutoff]-activations1[distance<cutoff],
marker='s',cmap='bwr',vmin=-3,vmax=3)
fig.colorbar(p)
ax.set_xlim3d(0,20)
ax.set_ylim3d(0,20)
ax.set_zlim3d(0,20)
plt.title("Difference")
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
from sklearn import svm
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','vox_id','block_id','trial_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/mvpa/mvpc/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
num_blocks = len(np.unique(block_id)) #10
plt.hist(activations[cond_id==0],color='blue',bins=30,label='Comedy')
plt.hist(activations[cond_id==1],color='red',bins=30,alpha=.5,label='Horror')
plt.yticks([]);plt.ylabel('Frequency')
plt.xlabel("Activation strength")
plt.legend(bbox_to_anchor=(1, 1))
plt.show()
scores=[]
for s in range(num_subjects):
    model = svm.SVC(kernel='linear')
    # train on even blocks, test on the held-out odd blocks
    samples_train = activations[(sub_id==s) & (block_id%2==0)].reshape(-1, 1)
    classes_train = cond_id[(sub_id==s) & (block_id%2==0)].ravel()
    samples_test = activations[(sub_id==s) & (block_id%2==1)].reshape(-1, 1)
    classes_test = cond_id[(sub_id==s) & (block_id%2==1)].ravel()
    model.fit(samples_train,classes_train)
    scores = np.concatenate([scores,[model.score(samples_test,classes_test)]])
plt.scatter(np.arange(len(scores)),scores)
plt.ylim([.0,1.])
plt.ylabel("% correct")
plt.xlabel("Subject ID")
plt.axhline(.5,ls='--',c='k',label='chance')
plt.legend()
plt.show()
print("Avg correct predictions across subs:",np.mean(scores))
stat = sp.stats.ttest_rel(scores,np.ones(num_subjects)*.5)
print("Above-chance prediction accuracy? (t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
Avg correct predictions across subs: 0.8266
Above-chance prediction accuracy? (t-test): t=9.0744, p=0.0000
s=0
model = svm.SVC(kernel='linear')
samples_train = activations[(sub_id==s) & (block_id%2==0)].reshape(-1, 1)
classes_train = cond_id[(sub_id==s) & (block_id%2==0)].ravel()
samples_test = activations[(sub_id==s) & (block_id%2==1)].reshape(-1, 1)
classes_test = cond_id[(sub_id==s) & (block_id%2==1)].ravel()
model.fit(samples_train,classes_train)
plt.hist(activations[(sub_id==s) & (block_id%2==1) & (cond_id==0)],color='blue',bins=30,label='Comedy')
plt.hist(activations[(sub_id==s) & (block_id%2==1) & (cond_id==1)],color='red',bins=30,alpha=.5,label='Horror')
plt.yticks([]);plt.ylabel('Frequency')
plt.xlabel("Activation strength")
plt.legend(bbox_to_anchor=(1, 1))
xx = np.linspace(-5, 5, 100)
yy = np.linspace(0, 40, 100)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel()]).T
Z = model.decision_function(xy).reshape(XX.shape)
plt.title("Subject 1")
plt.contour(XX, YY, Z, colors='k', levels=[0],
linestyles=['--', '-', '--'])
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','block_id','vox_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/mvpa/mvpcorrelation/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
num_blocks = len(np.unique(block_id)) #40
plt.imshow(activations[(sub_id==0)&(cond_id==0)].reshape(num_blocks//2,num_voxels),cmap='PuBu')
plt.title("Comedy (Subject 1)")
plt.ylabel("Block ID")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)].reshape(num_blocks//2,num_voxels),cmap='OrRd')
plt.title("Horror (Subject 1)")
plt.ylabel("Block ID")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==0)&(block_id%2==1)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Comedy (Sub 1, odd blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==0)&(block_id%2==0)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Comedy (Sub 1, even blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)&(block_id%2==1)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Horror (Sub 1, odd blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
plt.imshow(activations[(sub_id==0)&(cond_id==1)&(block_id%2==0)].reshape(num_blocks//4,num_voxels),cmap='gray')
plt.title("Horror (Sub 1, even blocks)")
plt.ylabel("Block")
plt.xlabel("Voxel ID")
plt.colorbar()
plt.show()
# for each subject, correlate odd/even blocks for each condition pairing
corr = np.full((num_subjects,num_conditions*2),np.nan)
for s in range(num_subjects):
    for c in range(num_conditions*2):
        if c == 0: # Comedy (odd) -> Comedy (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==0)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==0)&(block_id%2==0)])
        elif c == 1: # Horror (odd) -> Horror (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==1)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==1)&(block_id%2==0)])
        elif c == 2: # Comedy (odd) -> Horror (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==0)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==1)&(block_id%2==0)])
        elif c == 3: # Horror (odd) -> Comedy (even)
            corr[s,c], _ = sp.stats.pearsonr(activations[(sub_id==s)&(cond_id==1)&(block_id%2==1)],
                                             activations[(sub_id==s)&(cond_id==0)&(block_id%2==0)])
with plt.rc_context(rc={'font.size': 20, 'figure.figsize': (5,2)}):
    plt.imshow(corr, cmap='gray', aspect='auto')
    plt.ylabel('Subject ID')
    plt.yticks([]);
    plt.xticks(np.arange(4),["Comedy\n(Odd)\nComedy\n(Even)","Horror\n(Odd)\nHorror\n(Even)",
                             "Comedy\n(Odd)\nHorror\n(Even)","Horror\n(Odd)\nComedy\n(Even)"],
               fontsize=14, rotation=0, ha="center")
    plt.colorbar()
    plt.title("Correlation coefficients")
    plt.show()
print('From left to right:')
print('° Comedy (odd) & Comedy (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,0])))
print('° Horror (odd) & Horror (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,1])))
print('° Comedy (odd) & Horror (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,2])))
print('° Horror (odd) & Comedy (even) Avg. across subjects: r={:.4f}'.format(np.mean(corr[:,3])))
stat = sp.stats.ttest_rel(corr[:,:2].flatten(),corr[:,2:].flatten())
print("\nSignificantly different correlations betw first two columns and last two columns? (paired t-test):\nt={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
sub=0 # for first subject
# Calculate correlations between even- and odd-block conditions
comedy_even = activations[(sub_id==sub)&(cond_id==0)&(block_id%2==0)]
comedy_odd = activations[(sub_id==sub)&(cond_id==0)&(block_id%2==1)]
horror_even = activations[(sub_id==sub)&(cond_id==1)&(block_id%2==0)]
horror_odd = activations[(sub_id==sub)&(cond_id==1)&(block_id%2==1)]
# Plot correlations between even- and odd-block conditions
correlation_table = []
# first row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_even,comedy_odd)[0],
                          sp.stats.pearsonr(horror_even,comedy_odd)[0]])
# second row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_even,horror_odd)[0],
                          sp.stats.pearsonr(horror_even,horror_odd)[0]])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(correlation_table,cmap='hot')
plt.title("Representational Similarity Matrix\n(Subject {})".format(sub+1),fontsize=18)
plt.yticks(np.arange(2),["Comedy","Horror"],fontsize=16)
plt.ylabel("Odd")
plt.xticks(np.arange(2),["Comedy","Horror"],fontsize=16)
plt.xlabel("Even")
cbar = plt.colorbar()
cbar.set_label('Pearson R', rotation=270)
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond','block_id','actor']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/mvpa/rsa/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
sub=0 # for first subject
# Calculate correlations between even- and odd-block conditions
comedy_aniston_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Aniston")]
comedy_aniston_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Aniston")]
comedy_depp_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Depp")]
comedy_depp_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Depp")]
horror_aniston_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Aniston")]
horror_aniston_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Aniston")]
horror_depp_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Depp")]
horror_depp_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Depp")]
# Plot correlations between even- and odd-block conditions
correlation_table = []
# first row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,comedy_aniston_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,comedy_aniston_odd)[0]])
# second row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,comedy_depp_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,comedy_depp_odd)[0]])
# third row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,horror_aniston_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,horror_aniston_odd)[0]])
# fourth row of correlation matrix:
correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(comedy_depp_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(horror_aniston_even,horror_depp_odd)[0],
                          sp.stats.pearsonr(horror_depp_even,horror_depp_odd)[0]])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(correlation_table,cmap='hot')
plt.title("Representational Similarity Matrix\n(Subject {})".format(sub+1))
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror-Odd Comedy-Odd")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy-Even Horror-Even")
cbar = plt.colorbar()
cbar.set_label('Pearson R', rotation=270)
plt.show()
ComedyVsHorror= []
ComedyVsHorror.append([1,1,0,0])
ComedyVsHorror.append([1,1,0,0])
ComedyVsHorror.append([0,0,1,1])
ComedyVsHorror.append([0,0,1,1])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(ComedyVsHorror,cmap='RdYlBu')
plt.title("Genre RSM")
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror Comedy")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy Horror")
plt.show()
AnistonVsDepp = []
AnistonVsDepp.append([1,0,1,0])
AnistonVsDepp.append([0,1,0,1])
AnistonVsDepp.append([1,0,1,0])
AnistonVsDepp.append([0,1,0,1])
# plot correlation matrix
plt.rcParams.update({'font.size': 20, 'figure.figsize': (5,5)}) # change default plotting
plt.imshow(AnistonVsDepp,cmap='RdYlBu')
plt.title("Actor RSM")
plt.yticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.ylabel("Horror Comedy")
plt.xticks(np.arange(4),["Aniston","Depp","Aniston","Depp"],fontsize=16)
plt.xlabel("Comedy Horror")
plt.show()
# Calculate correlations between even- and odd-block conditions for every subject
same_genre_allsub = np.full((num_subjects),np.nan)
diff_genre_allsub = np.full((num_subjects),np.nan)
same_actor_allsub = np.full((num_subjects),np.nan)
diff_actor_allsub = np.full((num_subjects),np.nan)
for sub in range(num_subjects):
    correlation_table=[]
    comedy_aniston_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Aniston")]
    comedy_aniston_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Aniston")]
    comedy_depp_even = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==0)&(actor=="Depp")]
    comedy_depp_odd = activations[(sub_id==sub)&(cond=='comedy')&(block_id%2==1)&(actor=="Depp")]
    horror_aniston_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Aniston")]
    horror_aniston_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Aniston")]
    horror_depp_even = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==0)&(actor=="Depp")]
    horror_depp_odd = activations[(sub_id==sub)&(cond=='horror')&(block_id%2==1)&(actor=="Depp")]
    # first row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,comedy_aniston_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,comedy_aniston_odd)[0]])
    # second row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,comedy_depp_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,comedy_depp_odd)[0]])
    # third row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,horror_aniston_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,horror_aniston_odd)[0]])
    # fourth row of correlation matrix:
    correlation_table.append([sp.stats.pearsonr(comedy_aniston_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(comedy_depp_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(horror_aniston_even,horror_depp_odd)[0],
                              sp.stats.pearsonr(horror_depp_even,horror_depp_odd)[0]])
    # average the cells predicted to be similar (1) vs. dissimilar (0) by each model RSM
    same_genre_allsub[sub] = np.mean(np.array(correlation_table)[np.array(ComedyVsHorror)==1])
    diff_genre_allsub[sub] = np.mean(np.array(correlation_table)[np.array(ComedyVsHorror)==0])
    same_actor_allsub[sub] = np.mean(np.array(correlation_table)[np.array(AnistonVsDepp)==1])
    diff_actor_allsub[sub] = np.mean(np.array(correlation_table)[np.array(AnistonVsDepp)==0])
plt.bar(0,np.mean(same_genre_allsub),color='blue')
plt.bar(1,np.mean(diff_genre_allsub),color='darkred')
plt.scatter(np.zeros(num_subjects),same_genre_allsub,color='k',zorder=2)
plt.scatter(np.ones(num_subjects),diff_genre_allsub,color='k',zorder=2)
plt.axhline(0,c='k')
plt.title("Same-genre Vs Different-genre")
plt.xticks(np.arange(2),["Same","Diff."])
plt.ylabel("Avg. correlation")
plt.ylim(0,.1)
plt.show()
stat = sp.stats.ttest_rel(same_genre_allsub,diff_genre_allsub)
print("\nPaired t-test: t={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
plt.bar(0,np.mean(same_actor_allsub),color='blue')
plt.bar(1,np.mean(diff_actor_allsub),color='darkred')
plt.scatter(np.zeros(num_subjects),same_actor_allsub,color='k',zorder=2)
plt.scatter(np.ones(num_subjects),diff_actor_allsub,color='k',zorder=2)
plt.axhline(0,c='k')
plt.title("Same-actor Vs Different-actor")
plt.xticks(np.arange(2),["Same","Diff."])
plt.ylabel("Avg. correlation")
plt.ylim(0,.1)
plt.show()
stat = sp.stats.ttest_rel(same_actor_allsub,diff_actor_allsub)
print("\nPaired t-test: t={:.4f}, p={:.4f}".format(stat.statistic,stat.pvalue))
Paired t-test: t=5.5199, p=0.0000
Paired t-test: t=-0.0462, p=0.9636
# general packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# change default plotting
plt.rcParams.update({'font.size': 20, 'figure.figsize': (4,4)})
# create simulated data w/ rng seed so results are replicable
np.random.seed(3)
activations = np.random.randn(100, 2) @ np.random.rand(2, 2)
# Normalize data
activations = (activations - np.mean(activations,axis=0)) / np.std(activations,axis=0)
# plot voxel 1
plt.scatter(np.arange(len(activations[:,0])),activations[:,0],c='k')
plt.xlabel('Trial number')
plt.ylabel('Activation')
plt.ylim([-3,3])
plt.title("Voxel 1")
plt.show()
# plot voxel 2
plt.scatter(np.arange(len(activations[:,1])),activations[:,1],c='k')
plt.xlabel('Trial number')
plt.ylabel('Activation')
plt.ylim([-3,3])
plt.title("Voxel 2")
plt.show()
# plot voxels 1 and 2
plt.scatter(activations[:,0],activations[:,1],c='k')
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.xlabel('Voxel 1 activation')
plt.ylabel('Voxel 2 activation')
plt.show()
# perform PCA
n_components = 1
model = PCA(n_components = n_components)
model.fit(activations)
components = model.components_.T
# plot first component
plt.scatter(activations[:,0],activations[:,1],c='k')
plt.xlim([-3,3])
plt.ylim([-3,3])
plt.xlabel('Voxel 1 activation')
plt.ylabel('Voxel 2 activation')
plt.annotate('', [0,0], components[:,0], arrowprops=dict(lw=2,color='r'))
plt.show()
# project voxel 1 and 2 activations (2 dims) onto the first component (1 dim)
projected = model.transform(activations)
plt.scatter(np.arange(len(projected[:,0])),projected[:,0],c='k')
plt.xlabel('Trial number')
plt.ylabel('Component magnitude')
plt.title("PCA component 1")
plt.show()
# plot averaging voxel 1 and 2
plt.scatter(np.arange(len(activations[:,0])),np.mean(activations,axis=1),c='k')
plt.ylim([-3,3])
plt.xlabel('Trial number')
plt.ylabel('Activation')
plt.title("Averaged voxel 1 and 2")
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.stats  # makes sp.stats available
# pull data from GitHub
import requests, io
for array in ['enjoyment','time_series1','time_series2']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/univariate/{}.npy'.format(array)).content))
# overview of the data
num_subjects = time_series1.shape[0]
num_TRs = time_series1.shape[1]
num_conditions = 2
plt.plot(time_series1[0,:],c='green')
plt.title("Subject 1, ROI A")
plt.ylabel("BOLD signal")
plt.xlabel("TR")
plt.show()
plt.plot(time_series2[0,:],c='purple')
plt.title("Subject 1, ROI B")
plt.ylabel("BOLD signal")
plt.xlabel("TR")
plt.show()
print("Correlation coefficient between the two time series = {}".format(
sp.stats.pearsonr(time_series1[0,:],time_series2[0,:])[0]))
print("Movie enjoyment rating = {} (1-5 scale)".format(enjoyment[0]))
corrs=[]
for sub in range(num_subjects):
    coefficient=sp.stats.pearsonr(time_series1[sub,:],time_series2[sub,:])[0]
    plt.scatter(sub,coefficient,color='brown')
    corrs=np.append(corrs,coefficient)
plt.title("Time-series correlation coefficient")
plt.xlabel("Subject")
plt.show()
for sub in range(num_subjects):
    plt.scatter(sub,enjoyment[sub],color='black')
plt.title("Movie enjoyment rating")
plt.yticks(np.arange(5)+1,fontsize=14)
plt.xlabel("Subject")
plt.show()
# test whether connectivity strength is associated with enjoyment ratings
# (a paired t-test compares means; testing an association calls for a correlation)
stat = sp.stats.pearsonr(corrs,enjoyment)
print("Correlation coefficients sig. associated with enjoyment ratings? (Pearson correlation):\nr={:.4f}, p={:.4f}".format(stat[0],stat[1]))
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy import signal
# allows us to create tuning functions
def make_gaussian_iter(mu,sd):
    return np.array([np.roll(signal.windows.gaussian(180, std=s),m-90) for m,s in zip(mu,sd)]).T
# allows us to calculate circular error
def circ_diff(a,b,r=180):
    '''
    Returns circular differences between a and b, where r defines the period of the
    number space (differences are wrapped to fall between -r/2 and r/2).
    Examples
    ----------
    >>> circ_diff(-10,350,r=360)
    array([0.])
    >>> circ_diff([0,10,180],[20,70,120],r=180)
    array([ 20.,  60., -60.])
    '''
    if np.isscalar(a):
        a = np.array([a])
    if np.isscalar(b):
        b = np.array([b])
    if len(a)!=len(b):
        raise Exception("a and b must be same length")
    diff = np.full(len(a),np.nan)
    for k in np.arange(len(a)):
        diff[k] = b[k] - a[k]
        if diff[k] < -r//2:
            diff[k] = b[k] - a[k] + r
        elif diff[k] > r//2:
            diff[k] = b[k] - a[k] - r
    return diff
# pull data from GitHub
import requests, io
for array in ['trn','tst','trnf','tstf','ciecolors']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/encoding_decoding/{}.npy'.format(array)).content))
# overview of the data
num_subjects = 1
num_voxels = trn.shape[1] #20
num_trials = trn.shape[0] + tst.shape[0] #4 training trials + 4 test trials
# change default plotting
plt.rcParams.update({'font.size': 18, 'figure.figsize': (7,2)})
plt.imshow(np.array([trnf]),cmap='gray')
for i,t in enumerate(trnf):
    if t>90:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="k")
    else:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="w")
plt.yticks([])
plt.xticks(np.arange(num_trials//2),np.arange(num_trials//2)+1)
plt.xlabel("Trial")
plt.title("Stimuli (Training)")
plt.tight_layout();
plt.show()
plt.imshow(np.array([tstf]),cmap='gray')
for i,t in enumerate(tstf):
    if t>90:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="k")
    else:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="w")
plt.yticks([])
plt.xticks(np.arange(num_trials//2),np.arange(num_trials//2)+5)
plt.xlabel("Trial")
plt.title("Stimuli (Testing)")
plt.tight_layout();
plt.show()
plt.rcParams.update({'font.size': 18, 'figure.figsize': (10,4)})
plt.imshow(np.concatenate([trn,tst]),cmap="coolwarm",clim=[0,np.max(np.concatenate([trn,tst]))])
plt.xticks(np.arange(num_voxels),np.arange(num_voxels)+1,fontsize=14)
plt.yticks(np.arange(num_trials),np.arange(num_trials)+1)
plt.ylabel("Trial")
plt.xlabel("Voxel")
plt.title("Brain activations")
plt.colorbar()
plt.tight_layout();
plt.show()
num_channels = 9
basis_points = np.linspace(0,180-(180//num_channels),num_channels).astype(int)
basis_set = make_gaussian_iter(basis_points, np.ones(num_channels)*20)
plt.plot(basis_set,lw=2)
plt.ylabel("Channel response")
plt.xlabel("Orientation")
plt.xticks([0,45,90,135,180])
plt.title("Basis set")
plt.show()
plt.plot(basis_set,lw=2)
plt.ylabel("Channel response")
plt.xlabel("Orientation")
plt.xticks([0,45,90,135,180])
plt.title("Basis set")
for t in trnf:
    plt.axvline(t,ls='--',c='k')
plt.show()
plt.imshow(basis_set[trnf,:],cmap='gray')
plt.yticks(np.arange(len(trnf)),np.arange(len(trnf))+1)
plt.xticks(np.arange(num_channels),np.arange(num_channels)+1)
plt.xlabel("Channel")
plt.ylabel("Trial")
plt.title("Channel Responses")
plt.colorbar()
plt.tight_layout();
plt.show()
# estimate weights mapping channel responses to voxel activations (least squares)
cr = np.linalg.lstsq(basis_set[trnf,:], trn, rcond=None)[0]
plt.imshow(cr,cmap='bone')
plt.xticks(np.arange(trn.shape[1]),np.arange(trn.shape[1])+1)
plt.yticks(np.arange(num_channels),np.arange(num_channels)+1)
plt.xlabel("Voxel")
plt.ylabel("Channel")
plt.title("Trained Weights")
plt.colorbar()
plt.tight_layout();
plt.show()
# invert the trained weights to estimate channel responses for the test trials
cw = np.linalg.lstsq(cr.T, tst.T, rcond=None)[0].T
plt.imshow(cw,cmap='gray')
plt.yticks(np.arange(len(trnf)),np.arange(len(trnf))+5)
plt.xticks(np.arange(num_channels),np.arange(num_channels)+1)
plt.xlabel("Channel")
plt.ylabel("Trial")
plt.title("Estimated channel responses")
plt.colorbar()
plt.tight_layout();
plt.show()
predictions=[]
for i in range(len(trnf)):
    plt.plot(cw[i,:],c='k')
    plt.scatter(np.arange(num_channels),cw[i,:],c='k',s=10)
    plt.xticks(np.arange(num_channels),basis_points)
    plt.ylabel("Channel\nresponse")
    plt.xlabel("Orientation")
    plt.yticks([-1,0,1,2])
    plt.ylim([-1,2])
    plt.title("Trial {} Reconstruction".format(i+5))
    plt.axvline(np.argmax(cw[i,:]),ls='--')
    predictions = np.append(predictions,basis_points[np.argmax(cw[i,:])])
    plt.tight_layout();
    plt.show()
plt.rcParams.update({'font.size': 18, 'figure.figsize': (7,2)})
predictions = predictions.astype(int)
plt.imshow(np.array([tstf]),cmap='gray')
for i,t in enumerate(tstf):
    if t>90:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="k")
    else:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="w")
plt.yticks([])
plt.xticks(np.arange(num_trials//2),np.arange(num_trials//2)+5)
plt.xlabel("Trial")
plt.title("Stimuli (Testing)")
plt.tight_layout();
plt.show()
plt.imshow(np.array([predictions]),cmap='gray')
for i,t in enumerate(predictions):
    if t>90:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="k")
    else:
        plt.text(i, 0, str(t)+"°", ha="center", va="center", color="w")
plt.yticks([])
plt.xticks(np.arange(num_trials//2),np.arange(num_trials//2)+5)
plt.xlabel("Trial")
plt.title("Predictions")
plt.tight_layout();
plt.show()
errors = circ_diff(predictions,tstf).astype(int)
plt.imshow([errors],cmap='gray',clim=[-90,90])
for i,t in enumerate(errors):
    plt.text(i, 0, str(t)+"°", ha="center", va="center", color="w")
plt.yticks([])
plt.xticks(np.arange(num_trials//2),np.arange(num_trials//2)+5)
plt.xlabel("Trial")
plt.title("Prediction error")
plt.tight_layout();
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy import signal
from scipy.stats import gamma
# hemodynamic response function with max value at 1
hrf_function = (gamma.pdf(np.arange(20),5) - 0.2 * gamma.pdf(np.arange(20),12)) / .2
# pull data from GitHub
import requests, io
for array in ['voxel_by_timeseries','receptive_fields']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/retinotopy/{}.npy'.format(array)).content))
# overview of the data
num_subjects = 1
num_voxels = voxel_by_timeseries.shape[0] #100 voxels
num_TRs = voxel_by_timeseries.shape[1] #180 TRs
plt.plot(voxel_by_timeseries[0,:])
plt.xticks(np.arange(0,num_TRs+1,30))
plt.xlabel("TR")
plt.ylabel("BOLD signal")
plt.title("Voxel 1")
plt.show()
plt.plot(voxel_by_timeseries[0,:])
plt.xticks(np.arange(0,num_TRs+1,30))
for p in np.arange(0,num_TRs,15):
    plt.axvline(p,c='k',ls='--')
plt.plot(np.cos(np.linspace(0, np.pi*20, num_TRs)), c='k', alpha=.3)
plt.xlabel("TR")
plt.ylabel("BOLD signal")
plt.title("Voxel 1")
plt.show()
print("Correlation coefficient: {:.3f}".format(np.corrcoef(voxel_by_timeseries[0,:],np.cos(np.linspace(0, np.pi*20, num_TRs)))[0][1]))
plt.plot(voxel_by_timeseries[0,:])
plt.xticks(np.arange(0,num_TRs+1,30))
for p in np.arange(0,num_TRs,15):
    plt.axvline(p,c='k',ls='--')
plt.plot(np.roll(np.cos(np.linspace(0, np.pi*20, num_TRs)),7), c='k', alpha=.3)
plt.xlabel("TR")
plt.ylabel("BOLD signal")
plt.title("Voxel 1")
plt.show()
print("Correlation coefficient: {:.3f}".format(np.corrcoef(voxel_by_timeseries[0,:],np.roll(np.cos(np.linspace(0, np.pi*20, num_TRs)),7))[0][1]))
predicted_eccentricity = []
for v in range(num_voxels):
    rs = []
    # correlate each voxel's time series with phase-shifted copies of the stimulus cycle
    for t in range(15):
        rs = np.append(rs, np.corrcoef(voxel_by_timeseries[v,:],np.roll(np.cos(np.linspace(0, np.pi*20, num_TRs)),t))[0][1])
    predicted_eccentricity = np.append(predicted_eccentricity, np.argmax(rs))
plt.plot(predicted_eccentricity, label='prediction')
plt.plot(receptive_fields, label='actual')
plt.xlabel("Voxel (posterior to anterior)")
plt.ylabel("Eccentricity (°)")
plt.yticks(np.arange(0,16,5))
plt.legend()
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy import signal
from scipy.stats import gamma
# hemodynamic response function with max value at 1
hrf_function = (gamma.pdf(np.arange(20),5) - 0.2 * gamma.pdf(np.arange(20),12))
hrf_function /= np.max(hrf_function)
# pull data from GitHub
import requests, io
for array in ['voxel_by_timeseries','stimulus','receptive_fields']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/prf/{}.npy'.format(array)).content))
# overview of the data
num_subjects = 1
num_voxels = voxel_by_timeseries.shape[0] #50 voxels
num_TRs = voxel_by_timeseries.shape[1] #600 TRs
# unpack stimulus time series (assumes rows are horizontal position, vertical position, and size)
x_ts, y_ts, s_ts = stimulus
# change default plotting
plt.rcParams.update({'font.size': 18, 'figure.figsize': (10,1)})
plt.plot(x_ts)
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.ylabel("visual degrees\n(x-axis)")
plt.title("Horizontal position of stimulus")
plt.show()
plt.plot(y_ts)
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.ylabel("visual degrees\n(y-axis)")
plt.title("Vertical position of stimulus")
plt.show()
plt.plot(s_ts)
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.ylabel("visual\nangle")
plt.title("Size of stimulus")
plt.show()
plt.plot(voxel_by_timeseries[0,:])
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.ylabel("BOLD\nsignal")
plt.title("Voxel 1")
plt.show()
x=0
y=5
s=2
spikes = np.zeros(num_TRs)
spikes[np.intersect1d((np.where((x_ts>x-s) & (x_ts<x+s))[0]),
(np.where((y_ts>y-s) & (y_ts<y+s))[0]))] = 1
convolved = signal.convolve(hrf_function,spikes)[:num_TRs]
plt.plot(spikes)
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.title("Spike plot")
plt.show()
plt.plot(convolved)
plt.xticks(np.arange(0,num_TRs+1,100))
plt.xlabel("TR")
plt.title("Convolved with HRF")
plt.show()
# build a predicted BOLD time course for every candidate (x, y, size) receptive field
predicted_timecourse = np.full((num_TRs,21,21,6),np.nan)
for xindex,x in enumerate(range(-10,11)):
    for yindex,y in enumerate(range(-10,11)):
        for sindex,s in enumerate(range(1,7)):
            spikes = np.zeros(num_TRs)
            spikes[np.intersect1d((np.where((x_ts>x-s) & (x_ts<x+s))[0]),
                                  (np.where((y_ts>y-s) & (y_ts<y+s))[0]))] = 1
            predicted_timecourse[:,xindex,yindex,sindex] = signal.convolve(hrf_function,spikes)[:num_TRs]
predicted_rf = np.full((num_voxels,4),np.nan) # x, y, size, corrcoef
for v in range(num_voxels):
    correlation_to_actual = np.full((21,21,6),np.nan)
    for xindex,x in enumerate(range(-10,11)):
        for yindex,y in enumerate(range(-10,11)):
            for sindex,s in enumerate(range(1,7)):
                correlation_to_actual[xindex,yindex,sindex] = np.corrcoef(predicted_timecourse[:,xindex,yindex,sindex],voxel_by_timeseries[v,:])[0][1]
    # best-fitting candidate, converting indices back to x, y, size values
    best = np.unravel_index(np.nanargmax(correlation_to_actual), correlation_to_actual.shape)
    predicted_rf[v,:3] = np.array(best) - [10, 10, -1]
    predicted_rf[v,3] = np.nanmax(correlation_to_actual)
# change default plotting
plt.rcParams.update({'font.size': 18, 'figure.figsize': (10,2)})
plt.plot(predicted_rf[:,0], label='prediction')
plt.plot(receptive_fields[:,0], label='actual')
plt.xlabel("Voxel")
plt.ylabel("Visual degrees\nfrom center")
plt.title("Horizontal position")
plt.legend(loc='upper right')
plt.show()
plt.plot(predicted_rf[:,1], label='prediction')
plt.plot(receptive_fields[:,1], label='actual')
plt.xlabel("Voxel")
plt.ylabel("Visual degrees\nfrom center")
plt.title("Vertical position")
plt.legend(loc='upper right')
plt.show()
plt.plot(predicted_rf[:,2], label='prediction')
plt.plot(receptive_fields[:,2], label='actual')
plt.xlabel("Voxel")
plt.ylabel("Radius\n(visual degrees)")
plt.title("Receptive field size")
plt.legend(loc='upper right')
plt.show()
# general packages
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from sklearn.linear_model import Perceptron
# pull data from GitHub
import requests, io
for array in ['activations','sub_id','cond_id','block_id','vox_id']:
    globals()[array] = np.load(io.BytesIO(requests.get(
        'http://paulscotti.github.io/fmriplayground/methods/mvpa/mvpcorrelation/{}.npy'.format(array)).content))
# overview of the data
num_subjects = len(np.unique(sub_id)) #20
num_conditions = len(np.unique(cond_id)) #2
num_voxels = len(np.unique(vox_id)) #50
num_blocks = len(np.unique(block_id)) #40
plt.hist(activations[cond_id==0,:].flatten(),color='blue',bins=30,label='Comedy')
plt.hist(activations[cond_id==1,:].flatten(),color='red',bins=30,alpha=.5,label='Horror')
plt.yticks([]);plt.ylabel('Frequency')
plt.xlabel("Activation strength")
plt.legend(bbox_to_anchor=(1, 1))
plt.show()
# initialize model
clf = Perceptron()
# training on first 100 trials
clf.fit(activations[:100,:], cond_id[:100])
# predict conditions of held out trials (the last 50 trials)
print("Predicted labels: {}".format(clf.predict(activations[-50:,:])))
# compare predicted to actual conditions
print("Accuracy: {}%".format(clf.score(activations[-50:,:], cond_id[-50:])*100))
Regarding general linear models: general refers to the different types of analyses that can be run within the framework (e.g., correlation, one-sample t-test, ANOVA, etc.), and linear refers to the model's assumption of a linear relationship between parameters (often preferred because nonlinear models are prone to overfitting, where a model appears to fit the data well but does not generalize to new data). We consider the general linear model to be a processing step (rather than a post-processing fMRI method, which is what this website focuses on) and thus will not discuss it in further detail, but it is essentially one way to obtain an estimate of brain activation (in arbitrary units) for every voxel for each condition, after accounting for things like the hemodynamic response function (the time course of blood flow) and body motion in the scanner.
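For the curious, here is a minimal sketch of the core idea using made-up numbers (the predictor, beta, and noise below are invented for illustration, not taken from this website's datasets): the measured voxel time series is modeled as a weighted sum of predicted responses, and the fitted weights (betas) serve as the activation estimates.
import numpy as np
rng = np.random.default_rng(0)
num_TRs = 100
# hypothetical predictor for one condition (in a real analysis this would be convolved with the HRF)
predictor = np.zeros(num_TRs)
predictor[10:20] = 1
predictor[50:60] = 1
# design matrix: one column per condition plus an intercept
X = np.column_stack([predictor, np.ones(num_TRs)])
# simulated voxel time series: a true beta of 2 plus noise
y = 2 * predictor + rng.standard_normal(num_TRs)
# least-squares estimate of the betas (the activation estimates)
betas = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated activation (beta):", betas[0])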
In practice, you should use cross-validation so that all data can be predicted without succumbing to circularity/double-dipping.
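As a rough sketch of what that could look like with scikit-learn (assuming a trials-by-voxels activation matrix and one condition label per trial, as in the classification example above):
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
# 5-fold cross-validation: every trial is predicted exactly once,
# always by a model that never saw that trial during training
scores = cross_val_score(Perceptron(), activations, cond_id, cv=5)
print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())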
Unlike fmriplayground, Brainiak involves installing a custom Python package and is built to be a helpful tool for use on real data. fmriplayground is meant purely for teaching purposes with no intention for the code contained on the website to be directly implemented in real fMRI projects. Brainiak's Python tutorial examples use real fMRI datasets and go into more depth than fmriplayground. fmriplayground covers more methods than Brainiak and uses simpler, simulated datasets. Our example walkthroughs are much shorter and do not dwell on technical details that may be important to consider when actually implementing techniques on real data.
fmriplayground only covers post-processing fMRI methods while fmri4newbies covers all stages of the fMRI pipeline including pre-processing, processing, and post-processing (in addition to information on MRI physics and anatomical & functional brain regions). Unlike fmriplayground, fmri4newbies supplies the reader with Powerpoint lecture slides (no example datasets or code).
"Activations" can be raw BOLD signal or beta weights obtained from general linear model estimation.
Please just pretend that Jennifer Aniston has starred in horror movies; we are not movie buffs :). Note: Jennifer Aniston actually has a reputation for being used as an example in neuroimaging experiments because of a study investigating "grandmother cells", the idea that a single neuron may be so selective as to specifically encode your grandmother. The study involved one participant who seemed to have a neuron with Jennifer Aniston-specific firing.
Correlation refers to whether a change in one thing is associated with a change in another thing. If two things are independent, then having information about one is entirely uninformative for predicting the other. Independent things are always uncorrelated, but not all uncorrelated things are independent.
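A quick sketch of that last point with made-up data: below, y is fully determined by x, yet their linear correlation is near zero.
import numpy as np
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10000)
y = x**2  # y depends entirely on x, but not linearly
print("correlation:", np.corrcoef(x, y)[0, 1])  # approximately 0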