dyconnmap.graphs package

Submodules

dyconnmap.graphs.e2e module

Edge-to-Edge Network

dyconnmap.graphs.e2e.edge_to_edge(dfcgs: numpy.ndarray) → numpy.ndarray[source]

Edge-To-Edge

Parameters

dfcgs (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

net

Return type

array-like

dyconnmap.graphs.gdd module

Graph Diffusion Distance

The Graph Diffusion Distance (GDD) metric (Hammond2013) is a measure of distance between two (positive) weighted graphs based on the Laplacian exponential diffusion kernel. The notion backing this metric is that two graphs are similar if they emit comparable patterns of information transmission.

This distance is computed by searching for a diffusion time \(t\) that maximizes the value of the Frobenius norm between the two diffusion kernels. The Laplacian operator is defined as \(L = D - A\), where \(A\) is the positive symmetric data matrix and \(D\) is the diagonal degree matrix of the adjacency matrix \(A\). The diffusion process (per vertex) on the adjacency matrix \(A\) is governed by a time-varying vector \(u(t) \in R^N\). Thus, between each given pair of vertices \(i\) and \(j\), the flux is quantified by \(a_{ij}(u_i(t) - u_j(t))\). The grand sum of these interactions is given by \(\dot{u}_j(t)=\sum_i{a_{ij}(u_i(t) - u_j(t))}\), i.e., \(\dot{u}(t) = -Lu(t)\). Given the initial condition \(u^0, t=0\), this system has the analytic solution \(u(t)=\exp(-tL)u^0\). The resulting matrix is known as the Laplacian exponential diffusion kernel. Letting the diffusion process run for time \(t\), we compute and store the diffusion patterns in each column. Finally, the actual distance measure between two adjacency matrices \(A_1\) and \(A_2\) at diffusion time \(t\) is given by:

\[\xi(A_1, A_2; t) = \| \exp(-tL_1) - \exp(-tL_2) \|_F^2\]

where \(\| \cdot \|_F\) is the Frobenius norm.

Notes

Based on the code accompanying the original paper. Available at https://www.researchgate.net/publication/259621918_A_Matlab_code_for_computing_the_GDD_presented_in_the_paper



Hammond2013

Hammond, D. K., Gur, Y., & Johnson, C. R. (2013, December). Graph diffusion distance: A difference measure for weighted graphs based on the graph Laplacian exponential kernel. In Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE (pp. 419-422). IEEE.

dyconnmap.graphs.gdd.graph_diffusion_distance(a: numpy.ndarray, b: numpy.ndarray, threshold: Optional[float] = 1e-14) → Tuple[numpy.float32, numpy.float32][source]

Graph Diffusion Distance

Parameters
  • a (array-like, shape(N, N)) – Weighted matrix.

  • b (array-like, shape(N, N)) – Weighted matrix.

  • threshold (float) – A threshold to filter out small eigenvalues. If you get NaNs or Infs, try lowering this threshold.

Returns

  • gdd (float) – The estimated graph diffusion distance.

  • xopt (float) – The parameter (over the given interval) which minimizes the objective function (see scipy.optimize.fminbound).
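As a minimal illustration of the definition above, the diffusion distance can be sketched directly with NumPy/SciPy. The helper name and the search interval [0, 10] are illustrative assumptions; the library's own implementation may differ (e.g. it may diagonalize the Laplacians instead of calling expm).

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fminbound

def gdd_sketch(a, b, t_max=10.0):
    """xi(A1, A2; t) = ||exp(-t*L1) - exp(-t*L2)||_F^2, maximized over t."""
    l1 = np.diag(a.sum(axis=0)) - a      # Laplacian L = D - A
    l2 = np.diag(b.sum(axis=0)) - b

    def neg_xi(t):
        # negate so that fminbound's minimization maximizes xi
        return -np.linalg.norm(expm(-t * l1) - expm(-t * l2), "fro") ** 2

    t_opt = fminbound(neg_xi, 0.0, t_max)
    return -neg_xi(t_opt), t_opt

rng = np.random.RandomState(0)
w = np.triu(rng.rand(8, 8), k=1)
a = w + w.T                              # symmetric, positive weights
b = a.copy()
b[0, 1] = b[1, 0] = b[0, 1] + 1.0        # perturb one edge
gdd, t_opt = gdd_sketch(a, b)
```

Identical graphs yield a distance of zero at every \(t\); perturbing a single edge produces a strictly positive distance.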

dyconnmap.graphs.imd module

Ipsen-Mikhailov Distance

Given two graphs, this method quantifies their difference by comparing their spectral densities. The spectral density of a graph is computed as a sum of Lorentz distributions \(\rho(\omega)\):

\[\rho(\omega) = K \sum_{i=1}^{N-1} \frac{\gamma}{ (\omega - \omega_i)^2 + \gamma^2 }\]

Where \(\gamma\) is the bandwidth, and \(K\) a normalization constant such that \(\int_{0}^{\infty}\rho(\omega)d\omega=1\). The spectral distance between two graphs \(G\) and \(H\) with densities \(\rho_G(\omega)\) and \(\rho_H(\omega)\) respectively, is defined as:

\[\epsilon = \sqrt{ \int_{0}^{\infty}{[\rho_G(\omega) - \rho_H(\omega) ]^2 d(\omega)} }\]


Ipsen2004

Ipsen, M. (2004). Evolutionary reconstruction of networks. In Function and Regulation of Cellular Systems (pp. 241-249). Birkhäuser, Basel.

Donnat2018

Donnat, C., & Holmes, S. (2018). Tracking Network Dynamics: a review of distances and similarity metrics. arXiv preprint arXiv:1801.07351.

dyconnmap.graphs.imd.im_distance(X: numpy.ndarray, Y: numpy.ndarray, bandwidth: Optional[float] = 1.0) → float[source]
Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

  • bandwidth (float) – Bandwidth of the kernel. Default 1.0.

Returns

distance – The estimated Ipsen-Mikhailov distance.

Return type

float
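The two formulas above can be sketched from scratch with NumPy and scipy.integrate.quad. This is an illustrative sketch, not the library's implementation: it assumes \(\omega_i = \sqrt{\lambda_i}\) over the \(N-1\) non-trivial Laplacian eigenvalues and normalizes \(K\) numerically.

```python
import numpy as np
from scipy.integrate import quad

def im_distance_sketch(x, y, bandwidth=1.0):
    """Ipsen-Mikhailov distance: L2 difference of two spectral densities."""
    def density(adj):
        lap = np.diag(adj.sum(axis=0)) - adj
        # vibrational frequencies omega_i = sqrt(lambda_i), dropping lambda_0 = 0
        w_i = np.sqrt(np.abs(np.sort(np.linalg.eigvalsh(lap))[1:]))
        unnorm = lambda w: np.sum(bandwidth / ((w - w_i) ** 2 + bandwidth ** 2))
        norm, _ = quad(unnorm, 0.0, np.inf)   # K such that the density integrates to 1
        return lambda w: unnorm(w) / norm

    rho_g, rho_h = density(x), density(y)
    eps_sq, _ = quad(lambda w: (rho_g(w) - rho_h(w)) ** 2, 0.0, np.inf)
    return np.sqrt(eps_sq)

k3 = np.ones((3, 3)) - np.eye(3)                      # complete graph on 3 nodes
path3 = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])  # 3-node path graph
```

A graph compared against itself gives a distance of zero, while structurally different graphs give a positive distance.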

dyconnmap.graphs.laplacian_energy module

Laplacian Energy

The Laplacian energy (LE) for a graph \(G\) is computed as

\[LE(G) = \sum_{i=1}^n \left| \mu_{i} - \frac{2m}{n} \right|\]

Where \(\mu_i\) denotes the \(i\)-th eigenvalue of the Laplacian matrix of \(G\) (the Laplacian spectrum), \(n\) the number of vertices, \(m\) the number of edges, and \(\frac{2m}{n}\) the average vertex degree.

For details, please refer to the original work (Gutman2006).


dyconnmap.graphs.laplacian_energy.laplacian_energy(mtx: numpy.ndarray) → float[source]

Laplacian Energy

Parameters

mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

Returns

le – The Laplacian Energy.

Return type

float
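A minimal NumPy sketch of the formula above. It uses the fact that the average degree \(2m/n\) equals the mean of the node degrees (\(\mathrm{tr}(L)/n\)); the helper name is illustrative, not the library's code.

```python
import numpy as np

def laplacian_energy_sketch(mtx):
    """LE(G) = sum_i |mu_i - 2m/n| over the Laplacian spectrum."""
    degrees = mtx.sum(axis=0)
    lap = np.diag(degrees) - mtx            # Laplacian L = D - A
    mu = np.linalg.eigvalsh(lap)            # Laplacian spectrum
    avg_degree = degrees.mean()             # 2m/n for an unweighted graph
    return float(np.sum(np.abs(mu - avg_degree)))

# Complete graph K4: spectrum {0, 4, 4, 4}, average degree 3 -> LE = 6
k4 = np.ones((4, 4)) - np.eye(4)
```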

dyconnmap.graphs.mi module

Mutual Information

Normalized Mutual Information (NMI) was proposed by [Strehl2002] as an extension to Mutual Information [cover] to enable interpretations and comparisons between two partitions. Given the entropies \(H(P^a)=-\sum_{i=1}^{k_a}{\frac{n_i^a}{n}\log(\frac{n_i^a}{n})}\), where \(n_i^a\) represents the number of patterns in group \(C_i^a \in P^a\) (with \(H(P^b)\) computed accordingly), the initial matching of the two groups \(P^a\) and \(P^b\) in terms of mutual information is [Fred2005, Strehl2002]:

\[I(P^a, P^b) = \sum_{i=1}^{k_a} \sum_{j=1}^{k_b} {\frac{n_{ij}^{ab}}{n}} \log \left(\frac{ \frac{n_{ij}^{ab}}{n} }{ \frac{n_i^a}{n} \frac{n_j^b}{n} } \right)\]

Where \(n_{ij}^{ab}\) denotes the number of shared patterns between the clusters \(C_i^a\) and \(C_j^b\). By exploiting the definition of mutual information, the following property holds true: \(I(P^a,P^b) \leq \frac{H(P^a)+H(P^b)}{2}\). This leads to the definition of NMI as:

\[NMI(A, B) = \frac{2I(P^a, P^b)}{ H(P^a) + H(P^b)} = \frac{ -2\sum_{i=1}^{k_a} \sum_{j=1}^{k_b} {n_{ij}^{ab}} \log \left( \frac{ n_{ij}^{ab} n }{ n_i^a n_j^b } \right) }{ \sum_{i=1}^{k_a} n_i^a \log \left( \frac{n_i^a}{n} \right) + \sum_{j=1}^{k_b} n_j^b \log \left( \frac{n_j^b}{n} \right) }\]


Fred2005

Fred, A. L., & Jain, A. K. (2005). Combining multiple clusterings using evidence accumulation. IEEE transactions on pattern analysis and machine intelligence, 27(6), 835-850.

Strehl2002

Strehl, A., & Ghosh, J. (2002). Cluster ensembles—a knowledge reuse framework for combining multiple partitions. Journal of machine learning research, 3(Dec), 583-617.

dyconnmap.graphs.mi.mutual_information(indices_a: numpy.ndarray, indices_b: numpy.ndarray) → Tuple[float, float][source]

Mutual Information

Parameters
  • indices_a (array-like, shape(n_samples)) – Symbolic time series.

  • indices_b (array-like, shape(n_samples)) – Symbolic time series.

Returns

  • MI (float) – Mutual information.

  • NMI (float) – Normalized mutual information.
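The definitions above can be sketched directly from a contingency table of the two label sequences. This is an illustrative from-scratch version (natural logarithms; the base cancels in NMI), not the library's implementation.

```python
import numpy as np

def mutual_information_sketch(a, b):
    """MI and NMI between two partitions given as symbolic label sequences."""
    a, b = np.asarray(a), np.asarray(b)
    n = float(len(a))
    # contingency table n_ij: shared patterns between clusters C_i^a and C_j^b
    nij = np.array([[np.sum((a == i) & (b == j)) for j in np.unique(b)]
                    for i in np.unique(a)], dtype=float)
    ni, nj = nij.sum(axis=1), nij.sum(axis=0)    # cluster sizes n_i^a, n_j^b
    mask = nij > 0
    mi = np.sum(nij[mask] / n * np.log(nij[mask] * n / np.outer(ni, nj)[mask]))
    entropy = lambda p: -np.sum(p / n * np.log(p / n))
    nmi = 2.0 * mi / (entropy(ni) + entropy(nj))
    return mi, nmi
```

Identical partitions give NMI = 1; partially overlapping partitions give a value strictly between 0 and 1.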

dyconnmap.graphs.mpc module

Multilayer Participation Coefficient



Guillon2016

Guillon, J., Attal, Y., Colliot, O., La Corte, V., Dubois, B., Schwartz, D., … & Fallani, F. D. V. (2017). Loss of brain inter-frequency hubs in Alzheimer’s disease. Scientific reports, 7(1), 10879.

dyconnmap.graphs.mpc.multilayer_pc_degree(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient (Degree)

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

mpc – Participation coefficient based on the degree of the layers’ nodes.

Return type

array-like

dyconnmap.graphs.mpc.multilayer_pc_gamma(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient method from Guillon et al.

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer graph.

Returns

gamma – The original multilayer graph, flattened, with the off-diagonal blocks containing the estimated interlayer multilayer participation coefficients.

Return type

array-like, shape(n_layers*n_rois, n_layers*n_rois)

dyconnmap.graphs.mpc.multilayer_pc_strength(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient (Strength)

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

mpc – Participation coefficient based on the strength of the layers’ nodes.

Return type

array-like

dyconnmap.graphs.nodal module

Nodal network features

dyconnmap.graphs.nodal.nodal_global_efficiency(mtx: numpy.ndarray) → numpy.ndarray[source]

Nodal Global Efficiency

Parameters

mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

Returns

nodal_ge – The computed nodal global efficiency.

Return type

array-like, shape(N, 1)
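One common definition of nodal global efficiency is the mean inverse shortest-path length from each node to all others. The sketch below assumes weights are converted to distances as \(1/w\) (a usual convention for connectivity matrices, but an assumption here; the library's treatment may differ).

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

def nodal_global_efficiency_sketch(mtx):
    """Per-node efficiency: mean inverse shortest-path length to every
    other node, with weights converted to distances as 1/w (assumed)."""
    n = mtx.shape[0]
    dist = np.zeros_like(mtx, dtype=float)
    nz = mtx > 0
    dist[nz] = 1.0 / mtx[nz]          # strong connections = short distances
    sp = dijkstra(dist, directed=False)
    with np.errstate(divide="ignore"):
        inv = 1.0 / sp                # 1/inf -> 0 for disconnected pairs
    np.fill_diagonal(inv, 0.0)
    return inv.sum(axis=1) / (n - 1)
```

On a complete graph with unit weights, every node's efficiency is exactly 1.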

dyconnmap.graphs.spectral_euclidean_distance module

Spectral Euclidean Distance

The spectral distance between graphs is simply the Euclidean distance between the spectra.

\[d(G, H) = \sqrt{ \sum_i{ (g_i - h_i)^2 } }\]

Notes

  • The input graphs can be standard adjacency matrices, or a variant of the Laplacian.



Wilson2008

Wilson, R. C., & Zhu, P. (2008). A study of graph spectra for comparing graphs and trees. Pattern Recognition, 41(9), 2833-2841.

dyconnmap.graphs.spectral_euclidean_distance.spectral_euclidean_distance(X: numpy.ndarray, Y: numpy.ndarray) → float[source]
Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

Returns

distance – The Euclidean distance between the two spectra.

Return type

float
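Since the distance is just the Euclidean norm of the difference of sorted spectra, a sketch is only a few lines (the helper name is illustrative):

```python
import numpy as np

def spectral_euclidean_sketch(x, y):
    """Euclidean distance between the spectra of two symmetric matrices."""
    g = np.linalg.eigvalsh(x)   # eigvalsh returns eigenvalues in ascending order
    h = np.linalg.eigvalsh(y)
    return float(np.sqrt(np.sum((g - h) ** 2)))
```

For example, two diagonal matrices diag(1, 2) and diag(1, 3) have spectra differing in one eigenvalue by 1, so their spectral distance is 1.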

dyconnmap.graphs.spectral_k_distance module

Spectral-K Distance

Given two graphs \(G\) and \(H\), we can use the \(k\) largest positive eigenvalues of their Laplacian matrices to compute their distance.

\[\begin{split}d(G, H) = \left\{\begin{matrix} \sqrt{\frac{ \sum_{i=1}^k{(g_i - h_i)^2} }{ \sum_{i=1}^l{g_i^2} }} & ,\sum_{i=1}^l{g_i^2} \leq \sum_{j=1}^l{h_j^2} \\ \sqrt{\frac{ \sum_{i=1}^k{(g_i - h_i)^2} }{ \sum_{j=1}^l{h_j^2} }} & , \sum_{i=1}^l{g_i^2} > \sum_{j=1}^l{h_j^2} \end{matrix}\right.\end{split}\]

Where \(g\) and \(h\) denote the spectrums of the Laplacian matrices.

This measure is non-negative, separated, symmetric and it satisfies the triangle inequality.



Jakobson2000

Jakobson, D., & Rivin, I. (2000). Extremal metrics on graphs I. arXiv preprint math/0001169.

Pincombe2007

Pincombe, B. (2007). Detecting changes in time series of network graphs using minimum mean squared error and cumulative summation. ANZIAM Journal, 48, 450-473.

dyconnmap.graphs.spectral_k_distance.spectral_k_distance(X: numpy.ndarray, Y: numpy.ndarray, k: int) → float[source]

Spectral-K Distance

Use the largest \(k\) eigenvalues of the given graphs to compute the distance between them.

Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

  • k (int) – Largest k eigenvalues to use.

Returns

distance – Estimated distance based on selected largest eigenvalues.

Return type

float
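The piecewise formula amounts to normalizing the squared difference of the k largest Laplacian eigenvalues by the smaller of the two sums of squares. A from-scratch sketch under that reading (not the library's code):

```python
import numpy as np

def spectral_k_distance_sketch(x, y, k):
    """Distance from the k largest Laplacian eigenvalues, normalized by
    the smaller sum of squared eigenvalues (keeps the measure symmetric)."""
    def top_k(adj):
        lap = np.diag(adj.sum(axis=0)) - adj
        return np.sort(np.linalg.eigvalsh(lap))[::-1][:k]   # k largest, descending
    g, h = top_k(x), top_k(y)
    denom = min(np.sum(g ** 2), np.sum(h ** 2))
    return float(np.sqrt(np.sum((g - h) ** 2) / denom))
```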

dyconnmap.graphs.threshold module

Thresholding schemes


dyconnmap.graphs.threshold.k_core_decomposition(mtx: numpy.ndarray, threshold: float) → numpy.ndarray[source]

Threshold a binary graph based on the detected k-cores.

Alvarez2006

Alvarez-Hamelin, J. I., Dall’Asta, L., Barrat, A., & Vespignani, A. (2006). Large scale networks fingerprinting and visualization using the k-core decomposition. In Advances in neural information processing systems (pp. 41-50).

Hagman2008

Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS biology, 6(7), e159.

Parameters
  • mtx (array-like, shape(N, N)) – Binary matrix.

  • threshold (int) – Degree threshold.

Returns

k_cores – A binary matrix of the decomposed cores.

Return type

array-like, shape(N, 1)
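A k-core is commonly obtained by repeatedly pruning nodes whose degree falls below k until none remain. The sketch below follows that standard procedure; it is illustrative and not necessarily how the library implements it.

```python
import numpy as np

def k_core_sketch(bin_mtx, k):
    """Iteratively remove nodes whose degree is below k from a binary graph."""
    m = bin_mtx.astype(int).copy()
    np.fill_diagonal(m, 0)
    while True:
        degree = m.sum(axis=0)
        weak = (degree > 0) & (degree < k)   # still-connected nodes below k
        if not weak.any():
            break
        m[weak, :] = 0                       # prune and recompute degrees
        m[:, weak] = 0
    return m
```

For instance, a complete graph on 4 nodes with one extra pendant node: the 3-core removes the pendant node and keeps the clique intact.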

dyconnmap.graphs.threshold.threshold_eco(mtx)[source]
dyconnmap.graphs.threshold.threshold_global_cost_efficiency(mtx: numpy.ndarray, iterations: int) → Tuple[numpy.ndarray, float, float, float][source]

Threshold a graph based on the Global Efficiency - Cost formula.

Basset2009

Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences, 106(28), 11747-11752.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • iterations (int) – Number of steps; used as the resolution when searching for the optimum.

Returns

  • binary_mtx (array-like, shape(N, N)) – A binary mask matrix.

  • threshold (float) – The threshold that maximizes the global cost efficiency.

  • global_cost_eff_max (float) – Global cost efficiency.

  • efficiency (float) – Global efficiency.

  • cost_max (float) – Cost of the network at the maximum global cost efficiency

dyconnmap.graphs.threshold.threshold_mean_degree(mtx: numpy.ndarray, threshold_mean_degree: int) → numpy.ndarray[source]

Threshold a graph based on the mean degree.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • threshold_mean_degree (int) – Mean degree threshold.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)
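One straightforward way to realize such a scheme is to scan candidate thresholds over the observed edge weights and keep the binary mask whose mean degree is closest to the target. This brute-force sketch is an assumption about the approach, not the library's implementation:

```python
import numpy as np

def threshold_mean_degree_sketch(mtx, target_degree):
    """Pick the edge-weight threshold whose binary mask has a mean degree
    closest to the requested value (brute-force scan, illustrative)."""
    candidates = np.unique(mtx[np.triu_indices_from(mtx, k=1)])
    best, best_err = None, np.inf
    for thr in candidates:
        bin_mtx = (mtx >= thr).astype(int)
        np.fill_diagonal(bin_mtx, 0)
        err = abs(bin_mtx.sum(axis=0).mean() - target_degree)
        if err < best_err:
            best, best_err = bin_mtx, err
    return best
```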

dyconnmap.graphs.threshold.threshold_mst_mean_degree(mtx: numpy.ndarray, avg_degree: float) → numpy.ndarray[source]

Threshold a graph to a given mean degree using minimum spanning trees.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • avg_degree (float) – Mean degree threshold.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)

dyconnmap.graphs.threshold.threshold_omst_global_cost_efficiency(mtx: numpy.ndarray, n_msts: Optional[int] = None) → Tuple[numpy.ndarray, numpy.ndarray, float, float, float, float][source]

Threshold a graph by optimizing the formula GE-C via orthogonal MSTs.

Dimitriadis2017a

Dimitriadis, S. I., Salis, C., Tarnanas, I., & Linden, D. E. (2017). Topological Filtering of Dynamic Functional Brain Networks Unfolds Informative Chronnectomics: A Novel Data-Driven Thresholding Scheme Based on Orthogonal Minimal Spanning Trees (OMSTs). Frontiers in neuroinformatics, 11.

Dimitriadis2017n

Dimitriadis, S. I., Antonakakis, M., Simos, P., Fletcher, J. M., & Papanicolaou, A. C. (2017). Data-driven Topological Filtering based on Orthogonal Minimal Spanning Trees: Application to Multi-Group MEG Resting-State Connectivity. Brain Connectivity, (ja).

Basset2009

Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences, 106(28), 11747-11752.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • n_msts (int or None) – Maximum number of OMSTs to compute. Default None; an exhaustive computation will be performed.

Returns

  • nCIJtree (array-like, shape(n_msts, N, N)) – A matrix containing all the orthogonal MSTs.

  • CIJtree (array-like, shape(N, N)) – Resulting graph.

  • degree (float) – The mean degree of the resulting graph.

  • global_eff (float) – Global efficiency of the resulting graph.

  • global_cost_eff_max (float) – The value where global efficiency - cost is maximized.

  • cost_max (float) – Cost of the network at the maximum global cost efficiency.

dyconnmap.graphs.threshold.threshold_shortest_paths(mtx: numpy.ndarray, treatment: Optional[bool] = False) → numpy.ndarray[source]

Threshold a graph via shortest-path identification using Dijkstra’s algorithm.

Dimitriadis2010

Dimitriadis, S. I., Laskaris, N. A., Tsirka, V., Vourkas, M., Micheloyannis, S., & Fotopoulos, S. (2010). Tracking brain dynamics via time-dependent network analysis. Journal of neuroscience methods, 193(1), 145-155.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • treatment (boolean) – Convert the weights to distances by taking the reciprocal of each matrix element, and fill the diagonal with zeroes. Default False.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)

dyconnmap.graphs.vi module

Variation of Information

Variation of Information (VI) [Meilla2007] is an information theoretic criterion for comparing two partitions. It is based on the classic notions of entropy and mutual information. In a nutshell, VI measures the amount of information that is lost or gained in changing from clustering \(A\) to clustering \(B\). VI is a true metric, is always non-negative and symmetric. The following formula is used to compute the VI between two groups:

\[VI(A, B) = [H(A) - I(A, B)] + [H(B) - I(A, B)]\]

Where \(H\) denotes the entropy computed for each partition separately, and \(I\) the mutual information between clusterings \(A\) and \(B\).

The resulting distance score can be adjusted to bound it between \([0, 1]\) as follows:

\[VI^{*}(A,B) = \frac{1}{\log{n}}VI(A, B)\]


Meilla2007

Meilă, M. (2007). Comparing clusterings—an information based distance. Journal of multivariate analysis, 98(5), 873-895.

Dimitriadis2009

Dimitriadis, S. I., Laskaris, N. A., Del Rio-Portilla, Y., & Koudounis, G. C. (2009). Characterizing dynamic functional connectivity across sleep stages from EEG. Brain topography, 22(2), 119-133.

Dimitriadis2012

Dimitriadis, S. I., Laskaris, N. A., Michael Vourkas, V. T., & Micheloyannis, S. (2012). An EEG study of brain connectivity dynamics at the resting state. Nonlinear Dynamics-Psychology and Life Sciences, 16(1), 5.

dyconnmap.graphs.vi.variation_information(indices_a: numpy.ndarray, indices_b: numpy.ndarray) → float[source]

Variation of Information

Parameters
  • indices_a (array-like, shape(n_samples)) – Symbolic time series.

  • indices_b (array-like, shape(n_samples)) – Symbolic time series.

Returns

vi – Variation of information.

Return type

float
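Since \(VI(A,B) = H(A) + H(B) - 2I(A,B)\), the formula above can be sketched with the same contingency-table machinery used for mutual information. This is an illustrative from-scratch version using natural logarithms, not the library's implementation:

```python
import numpy as np

def variation_information_sketch(a, b):
    """VI(A, B) = [H(A) - I(A, B)] + [H(B) - I(A, B)]."""
    a, b = np.asarray(a), np.asarray(b)
    n = float(len(a))
    # contingency table of co-occurring cluster labels
    nij = np.array([[np.sum((a == i) & (b == j)) for j in np.unique(b)]
                    for i in np.unique(a)], dtype=float)
    ni, nj = nij.sum(axis=1), nij.sum(axis=0)
    entropy = lambda p: -np.sum(p / n * np.log(p / n))
    mask = nij > 0
    mi = np.sum(nij[mask] / n * np.log(nij[mask] * n / np.outer(ni, nj)[mask]))
    return entropy(ni) + entropy(nj) - 2.0 * mi
```

Identical partitions yield VI = 0 (a true metric property), while independent partitions of [0, 0, 1, 1] and [0, 1, 0, 1] carry zero mutual information, so VI equals the sum of the two entropies, \(2\log 2\).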

Module contents

dyconnmap.graphs.edge_to_edge(dfcgs: numpy.ndarray) → numpy.ndarray[source]

Edge-To-Edge

Parameters

dfcgs (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

net

Return type

array-like

dyconnmap.graphs.graph_diffusion_distance(a: numpy.ndarray, b: numpy.ndarray, threshold: Optional[float] = 1e-14) → Tuple[numpy.float32, numpy.float32][source]

Graph Diffusion Distance

Parameters
  • a (array-like, shape(N, N)) – Weighted matrix.

  • b (array-like, shape(N, N)) – Weighted matrix.

  • threshold (float) – A threshold to filter out small eigenvalues. If you get NaNs or Infs, try lowering this threshold.

Returns

  • gdd (float) – The estimated graph diffusion distance.

  • xopt (float) – The parameter (over the given interval) which minimizes the objective function (see scipy.optimize.fminbound).

dyconnmap.graphs.im_distance(X: numpy.ndarray, Y: numpy.ndarray, bandwidth: Optional[float] = 1.0) → float[source]
Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

  • bandwidth (float) – Bandwidth of the kernel. Default 1.0.

Returns

distance – The estimated Ipsen-Mikhailov distance.

Return type

float

dyconnmap.graphs.k_core_decomposition(mtx: numpy.ndarray, threshold: float) → numpy.ndarray[source]

Threshold a binary graph based on the detected k-cores.

Alvarez2006

Alvarez-Hamelin, J. I., Dall’Asta, L., Barrat, A., & Vespignani, A. (2006). Large scale networks fingerprinting and visualization using the k-core decomposition. In Advances in neural information processing systems (pp. 41-50).

Hagman2008

Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS biology, 6(7), e159.

Parameters
  • mtx (array-like, shape(N, N)) – Binary matrix.

  • threshold (int) – Degree threshold.

Returns

k_cores – A binary matrix of the decomposed cores.

Return type

array-like, shape(N, 1)

dyconnmap.graphs.laplacian_energy(mtx: numpy.ndarray) → float[source]

Laplacian Energy

Parameters

mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

Returns

le – The Laplacian Energy.

Return type

float

dyconnmap.graphs.multilayer_pc_degree(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient (Degree)

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

mpc – Participation coefficient based on the degree of the layers’ nodes.

Return type

array-like

dyconnmap.graphs.multilayer_pc_gamma(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient method from Guillon et al.

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer graph.

Returns

gamma – The original multilayer graph, flattened, with the off-diagonal blocks containing the estimated interlayer multilayer participation coefficients.

Return type

array-like, shape(n_layers*n_rois, n_layers*n_rois)

dyconnmap.graphs.multilayer_pc_strength(mlgraph: numpy.ndarray) → numpy.ndarray[source]

Multilayer Participation Coefficient (Strength)

Parameters

mlgraph (array-like, shape(n_layers, n_rois, n_rois)) – A multilayer (undirected) graph. Each layer consists of a graph.

Returns

mpc – Participation coefficient based on the strength of the layers’ nodes.

Return type

array-like

dyconnmap.graphs.mutual_information(indices_a: numpy.ndarray, indices_b: numpy.ndarray) → Tuple[float, float][source]

Mutual Information

Parameters
  • indices_a (array-like, shape(n_samples)) – Symbolic time series.

  • indices_b (array-like, shape(n_samples)) – Symbolic time series.

Returns

  • MI (float) – Mutual information.

  • NMI (float) – Normalized mutual information.

dyconnmap.graphs.nodal_global_efficiency(mtx: numpy.ndarray) → numpy.ndarray[source]

Nodal Global Efficiency

Parameters

mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

Returns

nodal_ge – The computed nodal global efficiency.

Return type

array-like, shape(N, 1)

dyconnmap.graphs.spectral_euclidean_distance(X: numpy.ndarray, Y: numpy.ndarray) → float[source]
Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

Returns

distance – The Euclidean distance between the two spectra.

Return type

float

dyconnmap.graphs.spectral_k_distance(X: numpy.ndarray, Y: numpy.ndarray, k: int) → float[source]

Spectral-K Distance

Use the largest \(k\) eigenvalues of the given graphs to compute the distance between them.

Parameters
  • X (array-like, shape(N, N)) – A weighted matrix.

  • Y (array-like, shape(N, N)) – A weighted matrix.

  • k (int) – Largest k eigenvalues to use.

Returns

distance – Estimated distance based on selected largest eigenvalues.

Return type

float

dyconnmap.graphs.threshold_eco(mtx)[source]
dyconnmap.graphs.threshold_global_cost_efficiency(mtx: numpy.ndarray, iterations: int) → Tuple[numpy.ndarray, float, float, float][source]

Threshold a graph based on the Global Efficiency - Cost formula.

Basset2009

Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences, 106(28), 11747-11752.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • iterations (int) – Number of steps; used as the resolution when searching for the optimum.

Returns

  • binary_mtx (array-like, shape(N, N)) – A binary mask matrix.

  • threshold (float) – The threshold that maximizes the global cost efficiency.

  • global_cost_eff_max (float) – Global cost efficiency.

  • efficiency (float) – Global efficiency.

  • cost_max (float) – Cost of the network at the maximum global cost efficiency

dyconnmap.graphs.threshold_mean_degree(mtx: numpy.ndarray, threshold_mean_degree: int) → numpy.ndarray[source]

Threshold a graph based on the mean degree.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • threshold_mean_degree (int) – Mean degree threshold.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)

dyconnmap.graphs.threshold_mst_mean_degree(mtx: numpy.ndarray, avg_degree: float) → numpy.ndarray[source]

Threshold a graph to a given mean degree using minimum spanning trees.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • avg_degree (float) – Mean degree threshold.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)

dyconnmap.graphs.threshold_omst_global_cost_efficiency(mtx: numpy.ndarray, n_msts: Optional[int] = None) → Tuple[numpy.ndarray, numpy.ndarray, float, float, float, float][source]

Threshold a graph by optimizing the formula GE-C via orthogonal MSTs.

Dimitriadis2017a

Dimitriadis, S. I., Salis, C., Tarnanas, I., & Linden, D. E. (2017). Topological Filtering of Dynamic Functional Brain Networks Unfolds Informative Chronnectomics: A Novel Data-Driven Thresholding Scheme Based on Orthogonal Minimal Spanning Trees (OMSTs). Frontiers in neuroinformatics, 11.

Dimitriadis2017n

Dimitriadis, S. I., Antonakakis, M., Simos, P., Fletcher, J. M., & Papanicolaou, A. C. (2017). Data-driven Topological Filtering based on Orthogonal Minimal Spanning Trees: Application to Multi-Group MEG Resting-State Connectivity. Brain Connectivity, (ja).

Basset2009

Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., & Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proceedings of the National Academy of Sciences, 106(28), 11747-11752.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • n_msts (int or None) – Maximum number of OMSTs to compute. Default None; an exhaustive computation will be performed.

Returns

  • nCIJtree (array-like, shape(n_msts, N, N)) – A matrix containing all the orthogonal MSTs.

  • CIJtree (array-like, shape(N, N)) – Resulting graph.

  • degree (float) – The mean degree of the resulting graph.

  • global_eff (float) – Global efficiency of the resulting graph.

  • global_cost_eff_max (float) – The value where global efficiency - cost is maximized.

  • cost_max (float) – Cost of the network at the maximum global cost efficiency.

dyconnmap.graphs.threshold_shortest_paths(mtx: numpy.ndarray, treatment: Optional[bool] = False) → numpy.ndarray[source]

Threshold a graph via shortest-path identification using Dijkstra’s algorithm.

Dimitriadis2010

Dimitriadis, S. I., Laskaris, N. A., Tsirka, V., Vourkas, M., Micheloyannis, S., & Fotopoulos, S. (2010). Tracking brain dynamics via time-dependent network analysis. Journal of neuroscience methods, 193(1), 145-155.

Parameters
  • mtx (array-like, shape(N, N)) – Symmetric, weighted and undirected connectivity matrix.

  • treatment (boolean) – Convert the weights to distances by taking the reciprocal of each matrix element, and fill the diagonal with zeroes. Default False.

Returns

binary_mtx – A binary mask matrix.

Return type

array-like, shape(N, N)

dyconnmap.graphs.variation_information(indices_a: numpy.ndarray, indices_b: numpy.ndarray) → float[source]

Variation of Information

Parameters
  • indices_a (array-like, shape(n_samples)) – Symbolic time series.

  • indices_b (array-like, shape(n_samples)) – Symbolic time series.

Returns

vi – Variation of information.

Return type

float