In the MNE API, adjacency for permutation_cluster_test is defined as:
Defines adjacency between locations in the data, where “locations” can be spatial vertices, frequency bins, etc. If False, assumes no adjacency (each location is treated as independent and unconnected). If None, a regular lattice adjacency is assumed, connecting each location to its neighbor(s) along the last dimension of each group X[k] (or the last two dimensions if X[k] is 2D). If adjacency is a matrix, it is assumed to be symmetric (only the upper triangular half is used) and must be square with dimension equal to X[k].shape[1] or X[k].shape[1] * X[k].shape[2].
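
To check my reading of that last sentence, here is what I think a “regular lattice” over 5 locations would look like as a matrix (my own sketch with NumPy, not MNE source code):

```python
import numpy as np

def lattice_adjacency(n):
    """Regular 1D lattice: location i is adjacent to i - 1 and i + 1."""
    adj = np.zeros((n, n), dtype=int)
    idx = np.arange(n - 1)
    adj[idx, idx + 1] = 1  # upper triangular half (the part MNE says it uses)
    adj[idx + 1, idx] = 1  # mirrored lower half, so the matrix is symmetric
    return adj

print(lattice_adjacency(5))
# [[0 1 0 0 0]
#  [1 0 1 0 0]
#  [0 1 0 1 0]
#  [0 0 1 0 1]
#  [0 0 0 1 0]]
```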
I tried to look into the source code, but it got a little too complex for me to follow. Could anyone explain, in simple terms, how the default adjacency is defined here?
I assume adjacency is the variable that determines which points are allowed to form clusters, which then get passed on for statistical thresholding. Assume the input to the permutation cluster test is a tfr_morlet output, so we have both frequency and time dimensions here:

Assuming my X has shape (1, 10, 501), with 1 being the number of observations, 10 the number of frequency bands, and 501 the number of time points (10 ms bins, totaling 5 s), is only time used as the adjacency factor, as opposed to time × frequency (2D)?
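
To make the time × frequency case concrete, this is how I imagine a combined 2D lattice being built from two 1D lattices (my own scipy sketch; I believe mne.stats.combine_adjacency does something along these lines, but that is an assumption on my part):

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def lattice_1d(n):
    """Regular 1D lattice: point i is adjacent to points i - 1 and i + 1."""
    return diags([1, 1], offsets=[-1, 1], shape=(n, n), format="csr")

n_freqs, n_times = 10, 501  # matching my (1, 10, 501) data
# Combined freq x time lattice (a Kronecker sum): two points are neighbors
# if they differ by one step in exactly one of the two dimensions.
adjacency = (kron(identity(n_freqs), lattice_1d(n_times))
             + kron(lattice_1d(n_freqs), identity(n_times))).tocsr()
print(adjacency.shape)  # (5010, 5010) == X[k].shape[1] * X[k].shape[2]
```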

How is “regular lattice adjacency” defined for time (or frequency)? Does it connect each bin to the adjacent left and right bins along the time and frequency axis? If my time bins are 10 ms wide, does adjacency consider the −10 ms and +10 ms time points?
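
In other words, is the following my correct reading of the neighbor relation along time (a small sketch with my own assumed time axis, not MNE code)?

```python
import numpy as np

times = np.linspace(0, 5, 501)  # 501 bins spaced 10 ms apart, spanning 0-5 s
i = 250
# Under a 1D lattice along time, bin i's only neighbors are bins i - 1 and
# i + 1, i.e. the samples 10 ms before and 10 ms after it.
print(times[i], times[i - 1], times[i + 1])  # 2.5 s, ~2.49 s, ~2.51 s
```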

If the adjacency variable is set to False, is this equivalent to voxelwise correction (every point tested independently)?
Would greatly appreciate any help!