From cf3160f705ea56d49d87b11d6aa240607692933b Mon Sep 17 00:00:00 2001
From: stefpeschel
Date: Thu, 7 Nov 2024 22:01:58 +0000
Subject: [PATCH] Deploying to gh-pages from @ stefpeschel/NetCoMi@445e90613db1b094b33516c07fb08198aee5204b 🚀
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 index.html                  |   15 +-
 pkgdown.yml                 |    2 +-
 readme.html                 | 1391 +----------------------------------
 reference/netConstruct.html |    2 +-
 search.json                 |    2 +-
 5 files changed, 37 insertions(+), 1375 deletions(-)

diff --git a/index.html b/index.html
index f850723..d17b48f 100644
--- a/index.html
+++ b/index.html
@@ -86,7 +86,7 @@

NetCoMi (Network Construction and Comparison for Microbiome Data) is an R package designed to facilitate the construction, analysis, and comparison of networks tailored to microbial compositional data. It implements a comprehensive workflow introduced in Peschel et al. (2020), which guides users through each step of network generation and analysis with a strong emphasis on reproducibility and computational efficiency.

With NetCoMi, users can construct microbial association or dissimilarity networks directly from sequencing data, typically provided as a read count matrix. The package includes a broad selection of methods for handling zeros, normalizing data, computing associations between microbial taxa, and sparsifying the resulting matrices. By offering these components in a modular format, NetCoMi allows users to tailor the workflow to their specific research needs, creating highly customizable microbial networks.

The package supports both the construction, analysis, and visualization of a single network and the comparison of two networks through graphical and quantitative approaches, including statistical testing. Additionally, NetCoMi offers the capability of constructing differential networks, where only differentially associated taxa are connected.

-

+

Exemplary network comparison using soil microbiome data (‘soilrep’ data from the phyloseq package). Microbial associations are compared between the two experimental settings ‘warming’ and ‘non-warming’ using the same layout in both groups.

@@ -119,7 +119,7 @@

Methods included in NetCoMi
  • Kullback-Leibler divergence (KLD) (KLD() from LaplacesDemon package)
  • Jeffrey divergence (own code using KLD() from LaplacesDemon package)
  • Jensen-Shannon divergence (own code using KLD() from LaplacesDemon package)
  • -
  • Compositional KLD (own implementation following Martin-Fernández et al. (1999))
  • +
  • Compositional KLD (own implementation following Martin-Fernández et al. (1999))
  • Aitchison distance (vegdist() and clr() from SpiecEasi package)
  • Methods for zero replacement:

    @@ -139,7 +139,7 @@

Methods included in NetCoMi
  • Variance Stabilizing Transformation (VST) (varianceStabilizingTransformation from DESeq2 package)
  • Centered log-ratio (clr) transformation (clr() from SpiecEasi package)
  • -

    TSS, CSS, COM, VST, and the clr transformation are described in Badri et al. (2020).

    +

    TSS, CSS, COM, VST, and the clr transformation are described in Badri et al. (2020).
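To make two of the normalization methods above concrete, here is a small, language-neutral sketch using the standard textbook definitions of TSS (relative abundances) and the clr transformation. NetCoMi's own implementations (e.g. clr() from the SpiecEasi package) may differ in details, and zeros must be replaced before taking logs; the counts below are made up for illustration.

```python
import math

# Hypothetical count table: rows = samples, columns = taxa (no zeros here;
# in practice zeros must be replaced first, see the zero-replacement methods above)
counts = [[10.0, 90.0, 100.0],
          [5.0, 45.0, 50.0]]

# Total Sum Scaling (TSS): divide each count by its sample's total
tss = [[c / sum(row) for c in row] for row in counts]

# Centered log-ratio (clr): log of each value minus the mean log of the sample
def clr(row):
    logs = [math.log(x) for x in row]
    mean_log = sum(logs) / len(logs)
    return [l - mean_log for l in logs]

clr_rows = [clr(row) for row in counts]
```

Each TSS row sums to 1 and each clr row sums to 0; note also that the two samples above have identical compositions, so their TSS and clr values coincide even though the raw totals differ.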

    Installation @@ -149,14 +149,13 @@

Installation

install.packages("devtools")
install.packages("BiocManager")

-# Since two of NetCoMi's dependencies are only available on GitHub, it is
-# recommended to install them first:
+# Since two of NetCoMi's dependencies are only available on GitHub,
+# it is recommended to install them first:
devtools::install_github("zdk123/SpiecEasi")
devtools::install_github("GraceYoon/SPRING")

# Install NetCoMi
devtools::install_github("stefpeschel/NetCoMi",
-                         dependencies = c("Depends", "Imports", "LinkingTo"),
                          repos = c("https://cloud.r-project.org/",
                                    BiocManager::repositories()))

    If there are any errors during installation, please install the missing dependencies manually.

    @@ -181,7 +180,6 @@

    Development version
     devtools::install_github("stefpeschel/NetCoMi", 
                              ref = "develop",
    -                         dependencies = c("Depends", "Imports", "LinkingTo"),
                              repos = c("https://cloud.r-project.org/",
                                        BiocManager::repositories()))

    Please check the NEWS document for features implemented on develop branch.

    @@ -196,6 +194,9 @@

References

Martin-Fernández, Josep A, M Bren, Carles Barceló-Vidal, and Vera Pawlowsky-Glahn. 1999. “A Measure of Difference for Compositional Data Based on Measures of Divergence.” In Proceedings of IAMG, 99:211–16.
+
+Peschel, Stefanie, Christian L Müller, Erika von Mutius, Anne-Laure Boulesteix, and Martin Depner. 2020. “NetCoMi: network construction and comparison for microbiome data in R.” Briefings in Bioinformatics 22 (4): bbaa290. https://doi.org/10.1093/bib/bbaa290.
+
diff --git a/pkgdown.yml b/pkgdown.yml
index 7e25ace..3a5b554 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -10,7 +10,7 @@ articles:
   net_comparison: net_comparison.html
   NetCoMi: NetCoMi.html
   soil_example: soil_example.html
-last_built: 2024-11-04T09:42Z
+last_built: 2024-11-07T21:51Z
 urls:
   reference: https://netcomi.de/reference
   article: https://netcomi.de/articles

diff --git a/readme.html b/readme.html
index af26682..bf3e9b1 100644
--- a/readme.html
+++ b/readme.html
@@ -52,70 +52,17 @@
    -

    DOI install with bioconda

    -

NetCoMi (Network Construction and Comparison for Microbiome Data) provides functionality for constructing, analyzing, and comparing networks suitable for application to microbial compositional data. The R package implements the workflow proposed in

    -

    Stefanie Peschel, Christian L Müller, Erika von Mutius, Anne-Laure Boulesteix, Martin Depner (2020). NetCoMi: network construction and comparison for microbiome data in R. Briefings in Bioinformatics, bbaa290. https://doi.org/10.1093/bib/bbaa290.

    -

    NetCoMi allows its users to construct, analyze, and compare microbial association or dissimilarity networks in a fast and reproducible manner. Starting with a read count matrix originating from a sequencing process, the pipeline includes a wide range of existing methods for treating zeros in the data, normalization, computing microbial associations or dissimilarities, and sparsifying the resulting association/ dissimilarity matrix. These methods can be combined in a modular fashion to generate microbial networks. NetCoMi can either be used for constructing, analyzing and visualizing a single network, or for comparing two networks in a graphical as well as a quantitative manner, including statistical tests. The package furthermore offers functionality for constructing differential networks, where only differentially associated taxa are connected.

    -

    +

    Lifecycle: stable DOI DOI paper install with bioconda

    +

    NetCoMi (Network Construction and Comparison for Microbiome Data) is an R package designed to facilitate the construction, analysis, and comparison of networks tailored to microbial compositional data. It implements a comprehensive workflow introduced in Peschel et al. (2020), which guides users through each step of network generation and analysis with a strong emphasis on reproducibility and computational efficiency.

    +

    With NetCoMi, users can construct microbial association or dissimilarity networks directly from sequencing data, typically provided as a read count matrix. The package includes a broad selection of methods for handling zeros, normalizing data, computing associations between microbial taxa, and sparsifying the resulting matrices. By offering these components in a modular format, NetCoMi allows users to tailor the workflow to their specific research needs, creating highly customizable microbial networks.

    +

    The package supports both the construction, analysis, and visualization of a single network and the comparison of two networks through graphical and quantitative approaches, including statistical testing. Additionally, NetCoMi offers the capability of constructing differential networks, where only differentially associated taxa are connected.

    +

    Exemplary network comparison using soil microbiome data (‘soilrep’ data from the phyloseq package). Microbial associations are compared between the two experimental settings ‘warming’ and ‘non-warming’ using the same layout in both groups.

    -
    -

    Methods included in NetCoMi

    -

    Here is an overview of methods available for network construction, together with some information on their implementation in R:

    -

    Association measures:

    -

    Dissimilarity measures:

    -
    • Euclidean distance (vegdist() from vegan package)
    • -
    • Bray-Curtis dissimilarity (vegdist() from vegan package)
    • -
    • Kullback-Leibler divergence (KLD) (KLD() from LaplacesDemon package)
    • -
    • Jeffrey divergence (own code using KLD() from LaplacesDemon package)
    • -
    • Jensen-Shannon divergence (own code using KLD() from LaplacesDemon package)
    • -
    • Compositional KLD (own implementation following Martin-Fernández et al. (1999))
    • -
    • Aitchison distance (vegdist() and clr() from SpiecEasi package)
    • -

    Methods for zero replacement:

    -
    • Add a predefined pseudo count to the count table
    • -
    • Replace only zeros in the count table by a predefined pseudo count (ratios between non-zero values are preserved)
    • -
    • Multiplicative replacement (multRepl from zCompositions package)
    • -
    • Modified EM alr-algorithm (lrEM from zCompositions package)
    • -
    • Bayesian-multiplicative replacement (cmultRepl from zCompositions package)
    • -

    Normalization methods:

    -
    • Total Sum Scaling (TSS) (own implementation)
    • -
    • Cumulative Sum Scaling (CSS) (cumNormMat from metagenomeSeq package)
    • -
    • Common Sum Scaling (COM) (own implementation)
    • -
    • Rarefying (rrarefy from vegan package)
    • -
    • Variance Stabilizing Transformation (VST) (varianceStabilizingTransformation from DESeq2 package)
    • -
• Centered log-ratio (clr) transformation (clr() from SpiecEasi package)
    • -

    TSS, CSS, COM, VST, and the clr transformation are described in Badri et al. (2020).

    +

    Website

    +

    Please visit netcomi.de for a complete reference.

    Installation

    @@ -124,1330 +71,44 @@

Installation

install.packages("devtools")
install.packages("BiocManager")
+# Since two of NetCoMi's dependencies are only available on GitHub,
+# it is recommended to install them first:
+devtools::install_github("zdk123/SpiecEasi")
+devtools::install_github("GraceYoon/SPRING")
+
# Install NetCoMi
devtools::install_github("stefpeschel/NetCoMi",
-                         dependencies = c("Depends", "Imports", "LinkingTo"),
                          repos = c("https://cloud.r-project.org/",
                                    BiocManager::repositories()))

    If there are any errors during installation, please install the missing dependencies manually.

    -

In particular, the automatic installation of SPRING and SpiecEasi (only available on GitHub) sometimes does not work. These packages can be installed as follows (the order is important because SPRING depends on SpiecEasi):

    -
    -devtools::install_github("zdk123/SpiecEasi")
    -devtools::install_github("GraceYoon/SPRING")

    Packages that are optionally required in certain settings are not installed together with NetCoMi. These can be installed automatically using:

    -
    -installNetCoMiPacks()
    -
    -# Please check:
    -?installNetCoMiPacks()
    +

    If not installed via installNetCoMiPacks(), the required package is installed by the respective NetCoMi function when needed.

    -
    -

    Bioconda

    -

Thanks to daydream-boost, NetCoMi can also be installed from the conda bioconda channel with

    -
# You can first create a dedicated environment with
    -# conda create -n NetCoMi
    -# conda activate NetCoMi
    -conda install -c bioconda -c conda-forge r-netcomi
    +
    +

    Bioconda

    +

Thanks to daydream-boost, NetCoMi can also be installed from the conda bioconda channel with

    +
# You can first create a dedicated environment with
    +# conda create -n NetCoMi
    +# conda activate NetCoMi
    +conda install -c bioconda -c conda-forge r-netcomi

    Development version

    Everyone who wants to use new features not included in any releases is invited to install NetCoMi’s development version:

    -
    +
     devtools::install_github("stefpeschel/NetCoMi", 
                              ref = "develop",
    -                         dependencies = c("Depends", "Imports", "LinkingTo"),
                              repos = c("https://cloud.r-project.org/",
                                        BiocManager::repositories()))

    Please check the NEWS document for features implemented on develop branch.

    -

    Usage

    -

We use the American Gut data from the SpiecEasi package to look at some examples of how NetCoMi is applied. NetCoMi’s main functions are netConstruct() for network construction, netAnalyze() for network analysis, and netCompare() for network comparison. As you will see in the following, these three functions must be executed in this order. A further function is diffnet() for constructing a differential association network. diffnet() must be applied to the object returned by netConstruct().
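As a rough illustration of what “differentially associated” means, a classical approach compares an association estimated in two groups via Fisher’s z-transformation. This is a standard statistical test, sketched here language-neutrally; treat any correspondence to diffnet()'s internals as an assumption and see ?diffnet for the methods the package actually offers.

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    # Fisher z-transform both correlations, then compare via a normal test
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return z, p

# Hypothetical correlations of one taxon pair in two groups of 150 samples each
z, p = fisher_z_test(0.6, 150, 0.1, 150)
```

A small p-value suggests the taxon pair is differentially associated between the groups; in practice, p-values must be adjusted for the many taxon pairs tested.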

    -

First of all, we load NetCoMi and the data from the American Gut Project (provided by SpiecEasi, which is automatically loaded together with NetCoMi).

    -
    -library(NetCoMi)
    -data("amgut1.filt")
    -data("amgut2.filt.phy")
    -
    -

    Network with SPRING as association measure

    -
    -

    Network construction and analysis

    -

We first construct a single association network using SPRING for estimating associations (conditional dependence) between OTUs.

    -

    The data are filtered within netConstruct() as follows:

    -
    • Only samples with a total number of reads of at least 1000 are included (argument filtSamp).
    • -
    • Only the 50 taxa with highest frequency are included (argument filtTax).
    • -

    measure defines the association or dissimilarity measure, which is "spring" in our case. Additional arguments are passed to SPRING() via measurePar. nlambda and rep.num are set to 10 for a decreased execution time, but should be higher for real data. Rmethod is set to “approx” to estimate the correlations using a hybrid multi-linear interpolation approach proposed by Yoon, Müller, and Gaynanova (2020). This method considerably reduces the runtime while controlling the approximation error.

    -

    Normalization as well as zero handling is performed internally in SPRING(). Hence, we set normMethod and zeroMethod to "none".

    -

    We furthermore set sparsMethod to "none" because SPRING returns a sparse network where no additional sparsification step is necessary.

    -

We use the “signed” method for transforming associations into dissimilarities (argument dissFunc). In doing so, strongly negatively associated taxa have a high dissimilarity and, in turn, a low similarity, which corresponds to a low edge weight in the network plot.

    -

The verbose argument controls how much information is printed. Here it is set to 2 so that progress messages generated by netConstruct() are shown; a value of 3 would additionally print messages of external functions.

    -
    -net_spring <- netConstruct(amgut1.filt,
    -                           filtTax = "highestFreq",
    -                           filtTaxPar = list(highestFreq = 50),
    -                           filtSamp = "totalReads",
    -                           filtSampPar = list(totalReads = 1000),
    -                           measure = "spring",
    -                           measurePar = list(nlambda=10, 
    -                                             rep.num=10,
    -                                             Rmethod = "approx"),
    -                           normMethod = "none", 
    -                           zeroMethod = "none",
    -                           sparsMethod = "none", 
    -                           dissFunc = "signed",
    -                           verbose = 2,
    -                           seed = 123456)
    -
    ## Checking input arguments ... Done.
    -## Data filtering ...
    -## 77 taxa removed.
    -## 50 taxa and 289 samples remaining.
    -## 
    -## Calculate 'spring' associations ... Registered S3 method overwritten by 'dendextend':
    -##   method     from 
    -##   rev.hclust vegan
    -## Registered S3 method overwritten by 'seriation':
    -##   method         from 
    -##   reorder.hclust vegan
    -## Done.
    -
    -
    -

    Analyzing the constructed network

    -

    NetCoMi’s netAnalyze() function is used for analyzing the constructed network(s).

    -

    Here, centrLCC is set to TRUE meaning that centralities are calculated only for nodes in the largest connected component (LCC).

    -

    Clusters are identified using greedy modularity optimization (by cluster_fast_greedy() from igraph package).

    -

    Hubs are nodes with an eigenvector centrality value above the empirical 95% quantile of all eigenvector centralities in the network (argument hubPar).

    -

-weightDeg and normDeg are set to FALSE so that the degree of a node is simply defined as the number of nodes adjacent to it.

    -

    By default, a heatmap of the Graphlet Correlation Matrix (GCM) is returned (with graphlet correlations in the upper triangle and significance codes resulting from Student’s t-test in the lower triangle). See ?calcGCM and ?testGCM for details.

    -
    -props_spring <- netAnalyze(net_spring, 
    -                           centrLCC = TRUE,
    -                           clustMethod = "cluster_fast_greedy",
    -                           hubPar = "eigenvector",
    -                           weightDeg = FALSE, normDeg = FALSE)
    -

    -
    -#?summary.microNetProps
    -summary(props_spring, numbNodes = 5L)
    -
    ## 
    -## Component sizes
    -## ```````````````          
    -## size: 48 1
    -##    #:  1 2
    -## ______________________________
    -## Global network properties
    -## `````````````````````````
    -## Largest connected component (LCC):
    -##                                  
    -## Relative LCC size         0.96000
    -## Clustering coefficient    0.33594
    -## Modularity                0.53407
    -## Positive edge percentage 88.34951
    -## Edge density              0.09131
    -## Natural connectivity      0.02855
    -## Vertex connectivity       1.00000
    -## Edge connectivity         1.00000
    -## Average dissimilarity*    0.97035
    -## Average path length**     2.36912
    -## 
    -## Whole network:
    -##                                  
    -## Number of components      3.00000
    -## Clustering coefficient    0.33594
    -## Modularity                0.53407
    -## Positive edge percentage 88.34951
    -## Edge density              0.08408
    -## Natural connectivity      0.02714
    -## -----
    -## *: Dissimilarity = 1 - edge weight
    -## **: Path length = Units with average dissimilarity
    -## 
    -## ______________________________
    -## Clusters
    -## - In the whole network
    -## - Algorithm: cluster_fast_greedy
    -## ```````````````````````````````` 
    -##                     
    -## name: 0  1  2  3 4 5
    -##    #: 2 12 17 10 5 4
    -## 
    -## ______________________________
    -## Hubs
    -## - In alphabetical/numerical order
    -## - Based on empirical quantiles of centralities
    -## ```````````````````````````````````````````````       
    -##  190597
    -##  288134
    -##  311477
    -## 
    -## ______________________________
    -## Centrality measures
    -## - In decreasing order
    -## - Centrality of disconnected components is zero
    -## ````````````````````````````````````````````````
    -## Degree (unnormalized):
    -##          
    -## 288134 10
    -## 190597  9
    -## 311477  9
    -## 188236  8
    -## 199487  8
    -## 
    -## Betweenness centrality (normalized):
    -##               
    -## 302160 0.31360
    -## 268332 0.24144
    -## 259569 0.23404
    -## 470973 0.21462
    -## 119010 0.19611
    -## 
    -## Closeness centrality (normalized):
    -##               
    -## 288134 0.68426
    -## 311477 0.68413
    -## 199487 0.68099
    -## 302160 0.67518
    -## 188236 0.66852
    -## 
    -## Eigenvector centrality (normalized):
    -##               
    -## 288134 1.00000
    -## 311477 0.94417
    -## 190597 0.90794
    -## 199487 0.85439
    -## 188236 0.72684
    -
    -
    -

    Plotting the GCM heatmap manually

    -
    -plotHeat(mat = props_spring$graphletLCC$gcm1,
    -         pmat = props_spring$graphletLCC$pAdjust1,
    -         type = "mixed",
    -         title = "GCM", 
    -         colorLim = c(-1, 1),
    -         mar = c(2, 0, 2, 0))
    -
    -# Add rectangles highlighting the four types of orbits
    -graphics::rect(xleft   = c( 0.5,  1.5, 4.5,  7.5),
    -               ybottom = c(11.5,  7.5, 4.5,  0.5),
    -               xright  = c( 1.5,  4.5, 7.5, 11.5),
    -               ytop    = c(10.5, 10.5, 7.5,  4.5),
    -               lwd = 2, xpd = NA)
    -
    -text(6, -0.2, xpd = NA, 
    -     "Significance codes:  ***: 0.001;  **: 0.01;  *: 0.05")
    -

    -
    -
    -

    Visualizing the network

    -

    We use the determined clusters as node colors and scale the node sizes according to the node’s eigenvector centrality.

    -
    -# help page
    -?plot.microNetProps
    -
    -p <- plot(props_spring, 
    -          nodeColor = "cluster", 
    -          nodeSize = "eigenvector",
    -          title1 = "Network on OTU level with SPRING associations", 
    -          showTitle = TRUE,
    -          cexTitle = 2.3)
    -
    -legend(0.7, 1.1, cex = 2.2, title = "estimated association:",
    -       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"), 
    -       bty = "n", horiz = TRUE)
    -

    -

Note that edge weights are (non-negative) similarities; however, edges belonging to negative estimated associations are colored red by default (negDiffCol = TRUE).

    -

    By default, a different transparency value is added to edges with an absolute weight below and above the cut value (arguments edgeTranspLow and edgeTranspHigh). The determined cut value can be read out as follows:

    -
    -p$q1$Arguments$cut
    -
    ##      75% 
    -## 0.337099
    -
    -
    -
    -

    Export to Gephi

    -

    Some users may be interested in how to export the network to Gephi. Here’s an example:

    -
    -# For Gephi, we have to generate an edge list with IDs.
    -# The corresponding labels (and also further node features) are stored as node list.
    -
    -# Create edge object from the edge list exported by netConstruct()
    -edges <- dplyr::select(net_spring$edgelist1, v1, v2)
    -
    -# Add Source and Target variables (as IDs)
    -edges$Source <- as.numeric(factor(edges$v1))
    -edges$Target <- as.numeric(factor(edges$v2))
    -edges$Type <- "Undirected"
    -edges$Weight <- net_spring$edgelist1$adja
    -
    -nodes <- unique(edges[,c('v1','Source')])
    -colnames(nodes) <- c("Label", "Id")
    -
    -# Add category with clusters (can be used as node colors in Gephi)
    -nodes$Category <- props_spring$clustering$clust1[nodes$Label]
    -
    -edges <- dplyr::select(edges, Source, Target, Type, Weight)
    -
    -write.csv(nodes, file = "nodes.csv", row.names = FALSE)
    -write.csv(edges, file = "edges.csv", row.names = FALSE)
    -

    The exported .csv files can then be imported into Gephi.

    -
    -
    -

    Network with Pearson correlation as association measure

    -

    Let’s construct another network using Pearson’s correlation coefficient as association measure. The input is now a phyloseq object.

    -

Since Pearson correlations are subject to compositional effects when applied to sequencing data, we use the clr transformation as normalization method. Zero treatment is necessary in this case.

    -

    A threshold of 0.3 is used as sparsification method, so that only OTUs with an absolute correlation greater than or equal to 0.3 are connected.
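The thresholding rule itself is straightforward; here is a minimal sketch with made-up correlation values (the OTU names and numbers are hypothetical, not taken from the data):

```python
# Keep only pairs whose absolute correlation is at least the threshold
corr = {("OTU1", "OTU2"): 0.45,
        ("OTU1", "OTU3"): -0.10,
        ("OTU2", "OTU3"): -0.35}

thresh = 0.3
edges = {pair: r for pair, r in corr.items() if abs(r) >= thresh}
```

Only the pairs (OTU1, OTU2) and (OTU2, OTU3) remain connected; the sign of a retained correlation still determines the edge color in the network plots.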

    -
    -net_pears <- netConstruct(amgut2.filt.phy,  
    -                          measure = "pearson",
    -                          normMethod = "clr",
    -                          zeroMethod = "multRepl",
    -                          sparsMethod = "threshold",
    -                          thresh = 0.3,
    -                          verbose = 3)
    -
    ## Checking input arguments ... Done.
    -## 2 rows with zero sum removed.
    -## 138 taxa and 294 samples remaining.
    -## 
    -## Zero treatment:
    -## Execute multRepl() ... Done.
    -## 
    -## Normalization:
    -## Execute clr(){SpiecEasi} ... Done.
    -## 
    -## Calculate 'pearson' associations ... Done.
    -## 
    -## Sparsify associations via 'threshold' ... Done.
    -

    Network analysis and plotting:

    -
    -props_pears <- netAnalyze(net_pears, 
    -                          clustMethod = "cluster_fast_greedy")
    -

    -
    -plot(props_pears, 
    -     nodeColor = "cluster", 
    -     nodeSize = "eigenvector",
    -     title1 = "Network on OTU level with Pearson correlations", 
    -     showTitle = TRUE,
    -     cexTitle = 2.3)
    -
    -legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:", 
    -       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"), 
    -       bty = "n", horiz = TRUE)
    -

    -

    Let’s improve the visualization by changing the following arguments:

    -
    • -repulsion = 0.8: Place the nodes further apart.
    • -
    • -rmSingles = TRUE: Single nodes are removed.
    • -
• -labelScale = FALSE and cexLabels = 1.6: All labels have equal size and are enlarged to improve the readability of small nodes’ labels.
    • -
    • -nodeSizeSpread = 3 (default is 4): Node sizes are more similar if the value is decreased. This argument (in combination with cexNodes) is useful to enlarge small nodes while keeping the size of big nodes.
    • -
    • -hubBorderCol = "darkgray": Change border color for a better readability of the node labels.
    • -
    -plot(props_pears, 
    -     nodeColor = "cluster", 
    -     nodeSize = "eigenvector",
    -     repulsion = 0.8,
    -     rmSingles = TRUE,
    -     labelScale = FALSE,
    -     cexLabels = 1.6,
    -     nodeSizeSpread = 3,
    -     cexNodes = 2,
    -     hubBorderCol = "darkgray",
    -     title1 = "Network on OTU level with Pearson correlations", 
    -     showTitle = TRUE,
    -     cexTitle = 2.3)
    -
    -legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
    -       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"),
    -       bty = "n", horiz = TRUE)
    -

    -
    -

    Edge filtering

    -

    The network can be sparsified further using the arguments edgeFilter (edges are filtered before the layout is computed) and edgeInvisFilter (edges are removed after the layout is computed and thus just made “invisible”).

    -
    -plot(props_pears,
    -     edgeInvisFilter = "threshold",
    -     edgeInvisPar = 0.4,
    -     nodeColor = "cluster", 
    -     nodeSize = "eigenvector",
    -     repulsion = 0.8,
    -     rmSingles = TRUE,
    -     labelScale = FALSE,
    -     cexLabels = 1.6,
    -     nodeSizeSpread = 3,
    -     cexNodes = 2,
    -     hubBorderCol = "darkgray",
    -     title1 = paste0("Network on OTU level with Pearson correlations",
    -                     "\n(edge filter: threshold = 0.4)"),
    -     showTitle = TRUE,
    -     cexTitle = 2.3)
    -
    -legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
    -       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"),
    -       bty = "n", horiz = TRUE)
    -

    -
    -
    -
    -

    Using the “unsigned” transformation

    -

In the above network, the “signed” transformation was used to transform the estimated associations into dissimilarities. This leads to a network where strongly positively correlated taxa have a high edge weight (1 if the correlation equals 1) and strongly negatively correlated taxa have a low edge weight (0 if the correlation equals -1).

    -

We now use the “unsigned” transformation so that the edge weight between strongly correlated taxa is high regardless of the sign. Hence, correlations of -1 and 1 both lead to an edge weight of 1.
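A numeric sketch of the two transformations. It assumes the definitions diss_signed(r) = sqrt((1 - r) / 2) and diss_unsigned(r) = sqrt(1 - r²), with edge weight = 1 - dissimilarity; these formulas are an assumption consistent with the behavior described here, so check ?netConstruct for the exact definitions used by the package.

```python
import math

def signed_weight(r):
    # dissimilarity sqrt((1 - r)/2): 0 for r = 1, 1 for r = -1 (assumed formula)
    return 1.0 - math.sqrt((1.0 - r) / 2.0)

def unsigned_weight(r):
    # dissimilarity sqrt(1 - r^2): 0 for |r| = 1, 1 for r = 0 (assumed formula)
    return 1.0 - math.sqrt(1.0 - r * r)

for r in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"r = {r:+.1f}  signed: {signed_weight(r):.3f}  unsigned: {unsigned_weight(r):.3f}")
```

With the “signed” transformation, r = -1 gives weight 0, while the “unsigned” transformation assigns it the maximal weight 1 — exactly the difference discussed above.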

    -
    -

    Network construction

    -

    We can pass the network object from before to netConstruct() to save runtime.

    -
    -net_pears_unsigned <- netConstruct(data = net_pears$assoEst1,
    -                                   dataType = "correlation", 
    -                                   sparsMethod = "threshold",
    -                                   thresh = 0.3,
    -                                   dissFunc = "unsigned",
    -                                   verbose = 3)
    -
    ## Checking input arguments ... Done.
    -## 
    -## Sparsify associations via 'threshold' ... Done.
    -
    -
    -

    Estimated correlations and adjacency values

    -

    The following histograms demonstrate how the estimated correlations are transformed into adjacencies (= sparsified similarities for weighted networks).

    -

    Sparsified estimated correlations:

    -
    -hist(net_pears$assoMat1, 100, xlim = c(-1, 1), ylim = c(0, 400),
    -     xlab = "Estimated correlation", 
    -     main = "Estimated correlations after sparsification")
    -

    -

    Adjacency values computed using the “signed” transformation (values different from 0 and 1 will be edges in the network):

    -
    -hist(net_pears$adjaMat1, 100, ylim = c(0, 400),
    -     xlab = "Adjacency values", 
    -     main = "Adjacencies (with \"signed\" transformation)")
    -

    -

    Adjacency values computed using the “unsigned” transformation:

    -
    -hist(net_pears_unsigned$adjaMat1, 100, ylim = c(0, 400),
    -     xlab = "Adjacency values", 
    -     main = "Adjacencies (with \"unsigned\" transformation)")
    -

    -
    -
    -

    Network analysis and plotting

    -
    -props_pears_unsigned <- netAnalyze(net_pears_unsigned, 
    -                                   clustMethod = "cluster_fast_greedy",
    -                                   gcmHeat = FALSE)
    -
    -plot(props_pears_unsigned, 
    -     nodeColor = "cluster", 
    -     nodeSize = "eigenvector",
    -     repulsion = 0.9,
    -     rmSingles = TRUE,
    -     labelScale = FALSE,
    -     cexLabels = 1.6,
    -     nodeSizeSpread = 3,
    -     cexNodes = 2,
    -     hubBorderCol = "darkgray",
    -     title1 = "Network with Pearson correlations and \"unsigned\" transformation", 
    -     showTitle = TRUE,
    -     cexTitle = 2.3)
    -
    -legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
    -       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"),
    -       bty = "n", horiz = TRUE)
    -

    -

While with the “signed” transformation positively correlated taxa are likely to belong to the same cluster, with the “unsigned” transformation clusters contain both strongly positively and strongly negatively correlated taxa.

    -
    -
    -
    -

    Network on genus level

    -

    We now construct a further network, where OTUs are agglomerated to genera.

library(phyloseq)
data("amgut2.filt.phy")

# Agglomerate to genus level
amgut_genus <- tax_glom(amgut2.filt.phy, taxrank = "Rank6")

# Taxonomic table
taxtab <- as(tax_table(amgut_genus), "matrix")

# Rename taxonomic table and make Rank6 (genus) unique
amgut_genus_renamed <- renameTaxa(amgut_genus, 
                                  pat = "<name>", 
                                  substPat = "<name>_<subst_name>(<subst_R>)",
                                  numDupli = "Rank6")

## Column 7 contains NAs only and is ignored.

# Network construction and analysis
net_genus <- netConstruct(amgut_genus_renamed,
                          taxRank = "Rank6",
                          measure = "pearson",
                          zeroMethod = "multRepl",
                          normMethod = "clr",
                          sparsMethod = "threshold",
                          thresh = 0.3,
                          verbose = 3)

## Checking input arguments ...
## Done.
## 2 rows with zero sum removed.
## 43 taxa and 294 samples remaining.
## 
## Zero treatment:
## Execute multRepl() ... Done.
## 
## Normalization:
## Execute clr(){SpiecEasi} ... Done.
## 
## Calculate 'pearson' associations ... Done.
## 
## Sparsify associations via 'threshold' ... Done.

props_genus <- netAnalyze(net_genus, clustMethod = "cluster_fast_greedy")

    Network plots


    Modifications:

• Fruchterman-Reingold layout algorithm from the igraph package is used (passed to plot as a matrix)
• Shortened labels (using the “intelligent” method, which avoids duplicates)
• Fixed node sizes, with hubs enlarged
• Node color is gray for all nodes (transparency is lower for hub nodes by default)
# Compute layout
graph3 <- igraph::graph_from_adjacency_matrix(net_genus$adjaMat1, 
                                              weighted = TRUE)
set.seed(123456)
lay_fr <- igraph::layout_with_fr(graph3)

# Row names of the layout matrix must match the node names
rownames(lay_fr) <- rownames(net_genus$adjaMat1)

plot(props_genus,
     layout = lay_fr,
     shortenLabels = "intelligent",
     labelLength = 10,
     labelPattern = c(5, "'", 3, "'", 3),
     nodeSize = "fix",
     nodeColor = "gray",
     cexNodes = 0.8,
     cexHubs = 1.1,
     cexLabels = 1.2,
     title1 = "Network on genus level with Pearson correlations", 
     showTitle = TRUE,
     cexTitle = 2.3)

legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"), 
       bty = "n", horiz = TRUE)

    Since the above visualization is obviously not optimal, we make further adjustments:

• This time, the Fruchterman-Reingold layout algorithm is computed within the plot function and thus applied to the “reduced” network without singletons
• Labels are not scaled to node sizes
• Single nodes are removed
• Node sizes are scaled to the column sums of clr-transformed data
• Node colors represent the determined clusters
• Border color of hub nodes is changed from black to darkgray
• Label size of hubs is enlarged
set.seed(123456)

plot(props_genus,
     layout = "layout_with_fr",
     shortenLabels = "intelligent",
     labelLength = 10,
     labelPattern = c(5, "'", 3, "'", 3),
     labelScale = FALSE,
     rmSingles = TRUE,
     nodeSize = "clr",
     nodeColor = "cluster",
     hubBorderCol = "darkgray",
     cexNodes = 2,
     cexLabels = 1.5,
     cexHubLabels = 2,
     title1 = "Network on genus level with Pearson correlations", 
     showTitle = TRUE,
     cexTitle = 2.3)

legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"), 
       bty = "n", horiz = TRUE)

Let’s check whether the largest nodes are actually those with the highest column sums in the matrix of normalized counts returned by netConstruct().

sort(colSums(net_genus$normCounts1), decreasing = TRUE)[1:10]

##             Bacteroides              Klebsiella        Faecalibacterium 
##               1200.7971               1137.4928                708.0877 
##      5_Clostridiales(O)    2_Ruminococcaceae(F)    3_Lachnospiraceae(F) 
##                549.2647                502.1889                493.7558 
## 6_Enterobacteriaceae(F)               Roseburia         Parabacteroides 
##                363.3841                333.8737                328.0495 
##             Coprococcus 
##                274.4082

    In order to further improve our plot, we use the following modifications:

• This time, we choose the “spring” layout as part of qgraph() (the function generally used for network plotting in NetCoMi)
• A repulsion value below 1 places the nodes further apart
• Labels are not shortened anymore
• Nodes (bacteria on genus level) are colored according to the respective phylum
• Edges representing positive associations are colored in blue, negative ones in orange (just to give an example of alternative edge coloring)
• Transparency is increased for edges with high weight to improve the readability of node labels
# Get phyla names
taxtab <- as(tax_table(amgut_genus_renamed), "matrix")
phyla <- as.factor(gsub("p__", "", taxtab[, "Rank2"]))
names(phyla) <- taxtab[, "Rank6"]
# table(phyla)

# Define phylum colors
phylcol <- c("cyan", "blue3", "red", "lawngreen", "yellow", "deeppink")

plot(props_genus,
     layout = "spring",
     repulsion = 0.84,
     shortenLabels = "none",
     charToRm = "g__",
     labelScale = FALSE,
     rmSingles = TRUE,
     nodeSize = "clr",
     nodeSizeSpread = 4,
     nodeColor = "feature", 
     featVecCol = phyla, 
     colorVec = phylcol,
     posCol = "darkturquoise", 
     negCol = "orange",
     edgeTranspLow = 0,
     edgeTranspHigh = 40,
     cexNodes = 2,
     cexLabels = 2,
     cexHubLabels = 2.5,
     title1 = "Network on genus level with Pearson correlations", 
     showTitle = TRUE,
     cexTitle = 2.3)

# Colors used in the legend should be equally transparent as in the plot
phylcol_transp <- colToTransp(phylcol, 60)

legend(-1.2, 1.2, cex = 2, pt.cex = 2.5, title = "Phylum:", 
       legend = levels(phyla), col = phylcol_transp, bty = "n", pch = 16) 

legend(0.7, 1.1, cex = 2.2, title = "estimated correlation:",
       legend = c("+","-"), lty = 1, lwd = 3, col = c("darkturquoise","orange"), 
       bty = "n", horiz = TRUE)


    Using an association matrix as input


The QMP data set provided by the SPRING package is used to demonstrate how NetCoMi can be applied to analyze a precomputed network (given as an association matrix).


    The data set contains quantitative count data (true absolute values), which SPRING can deal with. See ?QMP for details.


nlambda and rep.num are set to 10 to reduce execution time, but should be higher for real data.

library(SPRING)

# Load the QMP data set
data("QMP") 

# Run SPRING for association estimation
fit_spring <- SPRING(QMP, 
                     quantitative = TRUE, 
                     lambdaseq = "data-specific",
                     nlambda = 10, 
                     rep.num = 10,
                     seed = 123456, 
                     ncores = 1,
                     Rmethod = "approx",
                     verbose = FALSE)

# Optimal lambda
opt.K <- fit_spring$output$stars$opt.index
    
# Association matrix
assoMat <- as.matrix(SpiecEasi::symBeta(fit_spring$output$est$beta[[opt.K]],
                                        mode = "ave"))
rownames(assoMat) <- colnames(assoMat) <- colnames(QMP)

    The association matrix is now passed to netConstruct to start the usual NetCoMi workflow. Note that the dataType argument must be set appropriately.

# Network construction and analysis
net_asso <- netConstruct(data = assoMat,
                         dataType = "condDependence",
                         sparsMethod = "none",
                         verbose = 0)

props_asso <- netAnalyze(net_asso, clustMethod = "hierarchical")

plot(props_asso,
     layout = "spring",
     repulsion = 1.2,
     shortenLabels = "none",
     labelScale = TRUE,
     rmSingles = TRUE,
     nodeSize = "eigenvector",
     nodeSizeSpread = 2,
     nodeColor = "cluster",
     hubBorderCol = "gray60",
     cexNodes = 1.8,
     cexLabels = 2,
     cexHubLabels = 2.2,
     title1 = "Network for QMP data", 
     showTitle = TRUE,
     cexTitle = 2.3)

legend(0.7, 1.1, cex = 2.2, title = "estimated association:",
       legend = c("+","-"), lty = 1, lwd = 3, col = c("#009900","red"), 
       bty = "n", horiz = TRUE)


    Network comparison


Now let’s look at how NetCoMi can be used to compare two networks.


    Network construction


    The data set is split by "SEASONAL_ALLERGIES" leading to two subsets of samples (with and without seasonal allergies). We ignore the “None” group.

# Split the phyloseq object into two groups
amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, 
                                             SEASONAL_ALLERGIES == "yes")
amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, 
                                            SEASONAL_ALLERGIES == "no")

amgut_season_yes

## phyloseq-class experiment-level object
## otu_table()   OTU Table:         [ 138 taxa and 121 samples ]
## sample_data() Sample Data:       [ 121 samples by 166 sample variables ]
## tax_table()   Taxonomy Table:    [ 138 taxa by 7 taxonomic ranks ]

amgut_season_no

## phyloseq-class experiment-level object
## otu_table()   OTU Table:         [ 138 taxa and 163 samples ]
## sample_data() Sample Data:       [ 163 samples by 166 sample variables ]
## tax_table()   Taxonomy Table:    [ 138 taxa by 7 taxonomic ranks ]

To obtain smaller networks, the 50 taxa with the highest variance are selected for network construction.


    We filter the 121 samples (sample size of the smaller group) with highest frequency to make the sample sizes equal and thus ensure comparability.
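
The two filtering steps can be sketched in base R as follows (illustrative only; the toy matrix and variable names are made up, and netConstruct() performs the actual filtering internally):

```r
# Toy count matrix: 10 samples (rows) x 6 taxa (columns)
set.seed(1)
counts <- matrix(rpois(60, lambda = 5), nrow = 10,
                 dimnames = list(paste0("S", 1:10), paste0("T", 1:6)))

# filtTax = "highestVar": keep the taxa with the highest variance
taxa_keep <- names(sort(apply(counts, 2, var), decreasing = TRUE))[1:3]

# filtSamp = "highestFreq": keep the samples with the highest total read count
samp_keep <- names(sort(rowSums(counts), decreasing = TRUE))[1:5]

counts_filt <- counts[samp_keep, taxa_keep]
```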

n_yes <- phyloseq::nsamples(amgut_season_yes)

# Network construction
net_season <- netConstruct(data = amgut_season_no, 
                           data2 = amgut_season_yes,  
                           filtTax = "highestVar",
                           filtTaxPar = list(highestVar = 50),
                           filtSamp = "highestFreq",
                           filtSampPar = list(highestFreq = n_yes),
                           measure = "spring",
                           measurePar = list(nlambda = 10, 
                                             rep.num = 10,
                                             Rmethod = "approx"),
                           normMethod = "none", 
                           zeroMethod = "none",
                           sparsMethod = "none", 
                           dissFunc = "signed",
                           verbose = 2,
                           seed = 123456)

## Checking input arguments ... Done.
## Data filtering ...
## 42 samples removed in data set 1.
## 0 samples removed in data set 2.
## 96 taxa removed in each data set.
## 1 rows with zero sum removed in group 2.
## 42 taxa and 121 samples remaining in group 1.
## 42 taxa and 120 samples remaining in group 2.
## 
## Calculate 'spring' associations ... Done.
## 
## Calculate associations in group 2 ... Done.

    Alternatively, a group vector could be passed to group, according to which the data set is split into two groups:

# Get count table
countMat <- phyloseq::otu_table(amgut2.filt.phy)

# netConstruct() expects samples in rows
countMat <- t(as(countMat, "matrix"))

group_vec <- phyloseq::get_variable(amgut2.filt.phy, "SEASONAL_ALLERGIES")

# Select the two groups of interest (level "none" is excluded)
sel <- which(group_vec %in% c("no", "yes"))
group_vec <- group_vec[sel]
countMat <- countMat[sel, ]

net_season <- netConstruct(countMat, 
                           group = group_vec, 
                           filtTax = "highestVar",
                           filtTaxPar = list(highestVar = 50),
                           filtSamp = "highestFreq",
                           filtSampPar = list(highestFreq = n_yes),
                           measure = "spring",
                           measurePar = list(nlambda = 10, 
                                             rep.num = 10,
                                             Rmethod = "approx"),
                           normMethod = "none", 
                           zeroMethod = "none",
                           sparsMethod = "none", 
                           dissFunc = "signed",
                           verbose = 3,
                           seed = 123456)

    Network analysis


    The object returned by netConstruct() containing both networks is again passed to netAnalyze(). Network properties are computed for both networks simultaneously.


    To demonstrate further functionalities of netAnalyze(), we play around with the available arguments, even if the chosen setting might not be optimal.

    -
    • -centrLCC = FALSE: Centralities are calculated for all nodes (not only for the largest connected component).
    • -
    • -avDissIgnoreInf = TRUE: Nodes with an infinite dissimilarity are ignored when calculating the average dissimilarity.
    • -
    • -sPathNorm = FALSE: Shortest paths are not normalized by average dissimilarity.
    • -
    • -hubPar = c("degree", "eigenvector"): Hubs are nodes with highest degree and eigenvector centrality at the same time.
    • -
    • -lnormFit = TRUE and hubQuant = 0.9: A log-normal distribution is fitted to the centrality values to identify nodes with “highest” centrality values. Here, a node is identified as hub if for each of the three centrality measures, the node’s centrality value is above the 90% quantile of the fitted log-normal distribution.
    • -
    • The non-normalized centralities are used for all four measures.
    • -
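
The log-normal hub criterion can be illustrated with a short base-R sketch (an approximation of the idea, not NetCoMi’s exact implementation; the toy centrality values are made up):

```r
# Fit a log-normal distribution to toy centrality values and flag nodes
# above the 90% quantile of the fitted distribution as hub candidates
set.seed(123)
centr <- rlnorm(50, meanlog = 0, sdlog = 1)
names(centr) <- paste0("OTU", seq_along(centr))

meanlog_hat <- mean(log(centr))  # (approximate) ML estimates of the
sdlog_hat   <- sd(log(centr))    # log-normal parameters
thresh <- qlnorm(0.9, meanlog = meanlog_hat, sdlog = sdlog_hat)

hubs <- names(centr)[centr > thresh]
```

With hubPar = c("degree", "eigenvector"), this thresholding would be applied to each centrality measure and the resulting hub sets intersected.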

    Note! The arguments must be set carefully, depending on the research questions. NetCoMi’s default values are not generally preferable in all practical cases!

props_season <- netAnalyze(net_season, 
                           centrLCC = FALSE,
                           avDissIgnoreInf = TRUE,
                           sPathNorm = FALSE,
                           clustMethod = "cluster_fast_greedy",
                           hubPar = c("degree", "eigenvector"),
                           hubQuant = 0.9,
                           lnormFit = TRUE,
                           normDeg = FALSE,
                           normBetw = FALSE,
                           normClose = FALSE,
                           normEigen = FALSE)

summary(props_season)

## 
## Component sizes
## ```````````````
## group '1':           
## size: 28  1
##    #:  1 14
## group '2':            
## size: 31 8 1
##    #:  1 1 3
## ______________________________
## Global network properties
## `````````````````````````
## Largest connected component (LCC):
##                          group '1' group '2'
## Relative LCC size          0.66667   0.73810
## Clustering coefficient     0.15161   0.27111
## Modularity                 0.62611   0.45823
## Positive edge percentage  86.66667 100.00000
## Edge density               0.07937   0.12473
## Natural connectivity       0.04539   0.04362
## Vertex connectivity        1.00000   1.00000
## Edge connectivity          1.00000   1.00000
## Average dissimilarity*     0.67251   0.68178
## Average path length**      3.40008   1.86767
## 
## Whole network:
##                          group '1' group '2'
## Number of components      15.00000   5.00000
## Clustering coefficient     0.15161   0.29755
## Modularity                 0.62611   0.55684
## Positive edge percentage  86.66667 100.00000
## Edge density               0.03484   0.08130
## Natural connectivity       0.02826   0.03111
## -----
## *: Dissimilarity = 1 - edge weight
## **: Path length = Sum of dissimilarities along the path
## 
## ______________________________
## Clusters
## - In the whole network
## - Algorithm: cluster_fast_greedy
## ```````````````````````````````` 
## group '1':                  
## name:  0 1 2 3 4 5
##    #: 14 7 6 5 4 6
## 
## group '2':                  
## name: 0 1  2 3 4 5
##    #: 3 5 14 4 8 8
## 
## ______________________________
## Hubs
## - In alphabetical/numerical order
## - Based on log-normal quantiles of centralities
## ```````````````````````````````````````````````
##  group '1' group '2'
##     307981    322235
##               363302
## 
## ______________________________
## Centrality measures
## - In decreasing order
## - Computed for the complete network
## ````````````````````````````````````
## Degree (unnormalized):
##         group '1' group '2'
##  307981         5         2
##    9715         5         5
##  364563         4         4
##  259569         4         5
##  322235         3         9
##            ______    ______
##  322235         3         9
##  363302         3         9
##  158660         2         6
##  188236         3         5
##  259569         4         5
## 
## Betweenness centrality (unnormalized):
##         group '1' group '2'
##  307981       231         0
##  331820       170         9
##  158660       162        80
##  188236       161        85
##  322235       159       126
##            ______    ______
##  322235       159       126
##  363302        74        93
##  188236       161        85
##  158660       162        80
##  326792        17        58
## 
## Closeness centrality (unnormalized):
##         group '1' group '2'
##  307981  18.17276   7.80251
##    9715   15.8134   9.27254
##  188236   15.7949  23.24055
##  301645  15.30177   9.01509
##  364563  14.73566  21.21352
##            ______    ______
##  322235  13.50232  26.36749
##  363302  12.30297  24.19703
##  158660  13.07106  23.31577
##  188236   15.7949  23.24055
##  326792  14.61391  22.52157
## 
## Eigenvector centrality (unnormalized):
##         group '1' group '2'
##  307981   0.53313   0.06912
##    9715   0.44398   0.10788
##  301645   0.41878   0.08572
##  326792   0.27033   0.15727
##  188236   0.25824   0.21162
##            ______    ______
##  322235   0.01749   0.29705
##  363302   0.03526   0.28512
##  188236   0.25824   0.21162
##  194648   0.00366   0.19448
##  184983    0.0917    0.1854

    Visual network comparison


    First, the layout is computed separately in both groups (qgraph’s “spring” layout in this case).


Node sizes are scaled according to the mclr-transformed data since SPRING uses the mclr transformation as its normalization method.


    Node colors represent clusters. Note that by default, two clusters have the same color in both groups if they have at least two nodes in common (sameColThresh = 2). Set sameClustCol to FALSE to get different cluster colors.

plot(props_season, 
     sameLayout = FALSE, 
     nodeColor = "cluster",
     nodeSize = "mclr",
     labelScale = FALSE,
     cexNodes = 1.5, 
     cexLabels = 2.5,
     cexHubLabels = 3,
     cexTitle = 3.7,
     groupNames = c("No seasonal allergies", "Seasonal allergies"),
     hubBorderCol = "gray40")

legend("bottom", title = "estimated association:", legend = c("+","-"), 
       col = c("#009900","red"), inset = 0.02, cex = 4, lty = 1, lwd = 4, 
       bty = "n", horiz = TRUE)


Using different layouts leads to a “nice-looking” network plot for each group; however, it is difficult to identify group differences at first glance.


Thus, we now use the same layout in both groups. In the following, the layout is computed for group 1 (the left network) and adopted for group 2.


    rmSingles is set to "inboth" because only nodes that are unconnected in both groups can be removed if the same layout is used.

plot(props_season, 
     sameLayout = TRUE, 
     layoutGroup = 1,
     rmSingles = "inboth", 
     nodeSize = "mclr", 
     labelScale = FALSE,
     cexNodes = 1.5, 
     cexLabels = 2.5,
     cexHubLabels = 3,
     cexTitle = 3.8,
     groupNames = c("No seasonal allergies", "Seasonal allergies"),
     hubBorderCol = "gray40")

legend("bottom", title = "estimated association:", legend = c("+","-"), 
       col = c("#009900","red"), inset = 0.02, cex = 4, lty = 1, lwd = 4, 
       bty = "n", horiz = TRUE)


    In the above plot, we can see clear differences between the groups. The OTU “322235”, for instance, is more strongly connected in the “Seasonal allergies” group than in the group without seasonal allergies, which is why it is a hub on the right, but not on the left.


However, if the layout of one group is simply reused for the other, one of the networks (here the “seasonal allergies” group) usually looks less appealing due to the long edges. Therefore, NetCoMi (>= 1.0.2) offers a further option (layoutGroup = "union"), where a union of the two layouts is used in both groups. In doing so, the nodes are placed as optimally as possible for both networks.


The idea and R code for this functionality were provided by Christian L. Müller and Alice Sommer.

plot(props_season, 
     sameLayout = TRUE, 
     repulsion = 0.95,
     layoutGroup = "union",
     rmSingles = "inboth", 
     nodeSize = "mclr", 
     labelScale = FALSE,
     cexNodes = 1.5, 
     cexLabels = 2.5,
     cexHubLabels = 3,
     cexTitle = 3.8,
     groupNames = c("No seasonal allergies", "Seasonal allergies"),
     hubBorderCol = "gray40")

legend("bottom", title = "estimated association:", legend = c("+","-"), 
       col = c("#009900","red"), inset = 0.02, cex = 4, lty = 1, lwd = 4, 
       bty = "n", horiz = TRUE)


    Quantitative network comparison


    Since runtime is considerably increased if permutation tests are performed, we set the permTest parameter to FALSE. See the tutorial_createAssoPerm file for a network comparison including permutation tests.


    Since permutation tests are still conducted for the Adjusted Rand Index, a seed should be set for reproducibility.

comp_season <- netCompare(props_season, 
                          permTest = FALSE, 
                          verbose = FALSE,
                          seed = 123456)

summary(comp_season, 
        groupNames = c("No allergies", "Allergies"),
        showCentr = c("degree", "between", "closeness"), 
        numbNodes = 5)
## 
## Comparison of Network Properties
## ----------------------------------
## CALL: 
## netCompare(x = props_season, permTest = FALSE, verbose = FALSE, 
##     seed = 123456)
## 
## ______________________________
## Global network properties
## `````````````````````````
## Largest connected component (LCC):
##                          No allergies   Allergies    difference
## Relative LCC size               0.667       0.738         0.071
## Clustering coefficient          0.152       0.271         0.120
## Modularity                      0.626       0.458         0.168
## Positive edge percentage       86.667     100.000        13.333
## Edge density                    0.079       0.125         0.045
## Natural connectivity            0.045       0.044         0.002
## Vertex connectivity             1.000       1.000         0.000
## Edge connectivity               1.000       1.000         0.000
## Average dissimilarity*          0.673       0.682         0.009
## Average path length**           3.400       1.868         1.532
## 
## Whole network:
##                          No allergies   Allergies    difference
## Number of components           15.000       5.000        10.000
## Clustering coefficient          0.152       0.298         0.146
## Modularity                      0.626       0.557         0.069
## Positive edge percentage       86.667     100.000        13.333
## Edge density                    0.035       0.081         0.046
## Natural connectivity            0.028       0.031         0.003
## -----
##  *: Dissimilarity = 1 - edge weight
## **: Path length = Sum of dissimilarities along the path
## 
## ______________________________
## Jaccard index (similarity betw. sets of most central nodes)
## ```````````````````````````````````````````````````````````
##                     Jacc   P(<=Jacc)     P(>=Jacc)   
## degree             0.556    0.957578      0.144846   
## betweenness centr. 0.333    0.650307      0.622822   
## closeness centr.   0.231    0.322424      0.861268   
## eigenvec. centr.   0.100    0.017593 *    0.996692   
## hub taxa           0.000    0.296296      1.000000   
## -----
## Jaccard index in [0,1] (1 indicates perfect agreement)
## 
## ______________________________
## Adjusted Rand index (similarity betw. clusterings)
## ``````````````````````````````````````````````````
##         wholeNet       LCC
## ARI        0.232     0.355
## p-value    0.000     0.000
## -----
## ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings
##                    ARI=0: expected for two random clusterings
## p-value: permutation test (n=1000) with null hypothesis ARI=0
## 
## ______________________________
## Graphlet Correlation Distance
## `````````````````````````````
##     wholeNet       LCC
## GCD    1.577     1.863
## -----
## GCD >= 0 (GCD=0 indicates perfect agreement between GCMs)
## 
## ______________________________
## Centrality measures
## - In decreasing order
## - Computed for the whole network
## ````````````````````````````````````
## Degree (unnormalized):
##        No allergies Allergies abs.diff.
## 322235            3         9         6
## 363302            3         9         6
## 469709            0         4         4
## 158660            2         6         4
## 223059            0         4         4
## 
## Betweenness centrality (unnormalized):
##        No allergies Allergies abs.diff.
## 307981          231         0       231
## 331820          170         9       161
## 259569          137        34       103
## 158660          162        80        82
## 184983           92        12        80
## 
## Closeness centrality (unnormalized):
##        No allergies Allergies abs.diff.
## 469709            0    21.203    21.203
## 541301            0    20.942    20.942
## 181016            0    19.498    19.498
## 361496            0    19.349    19.349
## 223059            0    19.261    19.261
## 
## _________________________________________________________
## Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1

    Differential networks


    We now build a differential association network, where two nodes are connected if they are differentially associated between the two groups.


Due to their very short execution time, we use Pearson correlations for estimating associations between OTUs.


    Fisher’s z-test is applied for identifying differentially correlated OTUs. Multiple testing adjustment is done by controlling the local false discovery rate.
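
For intuition, Fisher’s z-test for comparing two correlation coefficients can be sketched in a few lines of base R (an illustrative helper, not the diffnet() implementation; the example correlations are made up):

```r
# Compare two correlations r1, r2 estimated from n1, n2 samples:
# Fisher's z-transformation makes the difference approximately normal
fisher_z_test <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  2 * pnorm(-abs((z1 - z2) / se))  # two-sided p-value
}

# Example with the group sizes from above (correlation values are made up)
fisher_z_test(r1 = 0.6, n1 = 162, r2 = -0.2, n2 = 120)
```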


    Note: sparsMethod is set to "none", just to be able to include all differential associations in the association network plot (see below). However, the differential network is always based on the estimated association matrices before sparsification (the assoEst1 and assoEst2 matrices returned by netConstruct()).

net_season_pears <- netConstruct(data = amgut_season_no, 
                                 data2 = amgut_season_yes, 
                                 filtTax = "highestVar",
                                 filtTaxPar = list(highestVar = 50),
                                 measure = "pearson", 
                                 normMethod = "clr",
                                 sparsMethod = "none", 
                                 thresh = 0.2,
                                 verbose = 3)

## Checking input arguments ... Done.
## Infos about changed arguments:
## Zero replacement needed for clr transformation. "multRepl" used.
## 
## Data filtering ...
## 95 taxa removed in each data set.
## 1 rows with zero sum removed in group 1.
## 1 rows with zero sum removed in group 2.
## 43 taxa and 162 samples remaining in group 1.
## 43 taxa and 120 samples remaining in group 2.
## 
## Zero treatment in group 1:
## Execute multRepl() ... Done.
## 
## Zero treatment in group 2:
## Execute multRepl() ... Done.
## 
## Normalization in group 1:
## Execute clr(){SpiecEasi} ... Done.
## 
## Normalization in group 2:
## Execute clr(){SpiecEasi} ... Done.
## 
## Calculate 'pearson' associations ... Done.
## 
## Calculate associations in group 2 ... Done.

# Differential network construction
diff_season <- diffnet(net_season_pears,
                       diffMethod = "fisherTest", 
                       adjust = "lfdr")

## Checking input arguments ... 
## Done.
## Adjust for multiple testing using 'lfdr' ... 
## Execute fdrtool() ...
## Step 1... determine cutoff point
## Step 2... estimate parameters of null distribution and eta0
## Step 3... compute p-values and estimate empirical PDF/CDF
## Step 4... compute q-values and local fdr
## Done.

# Differential network plot
plot(diff_season, 
     cexNodes = 0.8, 
     cexLegend = 3,
     cexTitle = 4,
     mar = c(2, 2, 8, 5),
     legendGroupnames = c("group 'no'", "group 'yes'"),
     legendPos = c(0.7, 1.6))

    In the differential network shown above, edge colors represent the direction of associations in the two groups. If, for instance, two OTUs are positively associated in group 1 and negatively associated in group 2 (such as ‘191541’ and ‘188236’), the respective edge is colored in cyan.
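    With diffMethod = "fisherTest", diffnet() compares the two correlation matrices entry-wise using Fisher's z-transformation. A minimal stdlib-Python sketch of that test (the function name is ours, not NetCoMi's API; the sample sizes 162 and 120 are the group sizes reported in the output above):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test for a difference between two Pearson correlations
    via Fisher's z-transformation (the idea behind diffMethod = "fisherTest")."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # variance-stabilizing transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal p
    return z, p

# A strongly positive vs. a negative correlation, as for OTUs '191541' and
# '188236', yields a clearly significant difference:
z, p = fisher_z_test(0.6, 162, -0.3, 120)
```

    In diffnet(), the resulting p-values are additionally adjusted for multiple testing (here via local FDR).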

    -

    We also take a look at the corresponding associations by constructing association networks that include only the differentially associated OTUs.

    -
    -props_season_pears <- netAnalyze(net_season_pears, 
    -                                 clustMethod = "cluster_fast_greedy",
    -                                 weightDeg = TRUE,
    -                                 normDeg = FALSE,
    -                                 gcmHeat = FALSE)
    -
    -# Identify the differentially associated OTUs
    -diffmat_sums <- rowSums(diff_season$diffAdjustMat)
    -diff_asso_names <- names(diffmat_sums[diffmat_sums > 0])
    -
    -plot(props_season_pears, 
    -     nodeFilter = "names",
    -     nodeFilterPar = diff_asso_names,
    -     nodeColor = "gray",
    -     highlightHubs = FALSE,
    -     sameLayout = TRUE, 
    -     layoutGroup = "union",
    -     rmSingles = FALSE, 
    -     nodeSize = "clr",
    -     edgeTranspHigh = 20,
    -     labelScale = FALSE,
    -     cexNodes = 1.5, 
    -     cexLabels = 3,
    -     cexTitle = 3.8,
    -     groupNames = c("No seasonal allergies", "Seasonal allergies"),
    -     hubBorderCol  = "gray40")
    -
    -legend(-0.15,-0.7, title = "estimated correlation:", legend = c("+","-"), 
    -       col = c("#009900","red"), inset = 0.05, cex = 4, lty = 1, lwd = 4, 
    -       bty = "n", horiz = TRUE)
    -

    -

    We can see that the correlation between the aforementioned OTUs ‘191541’ and ‘188236’ is strongly positive in the left group and negative in the right group.

    -
    -
    -

    Dissimilarity-based Networks

    -

    If a dissimilarity measure is used for network construction, nodes are subjects instead of OTUs. The estimated dissimilarities are transformed into similarities, which are used as edge weights so that subjects with a similar microbial composition are placed close together in the network plot.
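    A common way to obtain such edge weights is to rescale the dissimilarities to [0, 1] and subtract them from 1, so that small distances become large weights. This is a generic sketch of the idea, not necessarily NetCoMi's exact transformation:

```python
def diss_to_sim(diss_row):
    """Map dissimilarities to similarities in [0, 1]:
    the smaller the distance, the larger the edge weight."""
    dmax = max(diss_row)
    return [1 - d / dmax for d in diss_row]
```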

    -

    We construct a single network using the Aitchison distance, which is suitable for compositional data.

    -

    Since the Aitchison distance is based on the clr-transformation, zeros in the data need to be replaced.
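    Conceptually, the Aitchison distance is the Euclidean distance between clr-transformed compositions, and the clr transformation takes logarithms, which is why zeros must be replaced first. A stdlib-Python sketch (the simple pseudo-count replacement stands in for multRepl and is an illustrative assumption):

```python
import math

def clr(x):
    # centered log-ratio: log of each part minus the mean of the logs
    logs = [math.log(v) for v in x]
    m = sum(logs) / len(logs)
    return [l - m for l in logs]

def aitchison(x, y, pseudo=0.5):
    # zeros must be replaced before taking logs (NetCoMi uses multRepl;
    # a plain pseudo count is used here only for illustration)
    x = [v if v > 0 else pseudo for v in x]
    y = [v if v > 0 else pseudo for v in y]
    cx, cy = clr(x), clr(y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(cx, cy)))
```

    Note that the distance is scale-invariant: multiplying one sample's counts by a constant does not change it, which is exactly the property that makes it appropriate for compositions.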

    -

    The network is sparsified using the k-nearest neighbor (knn) algorithm.
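    The idea of knn sparsification is to keep an edge between two subjects only if at least one of them is among the other's k nearest neighbors. The sketch below illustrates this rule in stdlib Python; the exact neighbor rule NetCoMi applies internally may differ:

```python
def knn_sparsify(diss, k=3):
    """Keep the dissimilarity d(i, j) only if j is among i's k nearest
    neighbours or i is among j's (symmetric 'or' rule); all other
    entries are set to zero, i.e. the edge is removed."""
    n = len(diss)
    keep = [[False] * n for _ in range(n)]
    for i in range(n):
        order = sorted((j for j in range(n) if j != i), key=lambda j: diss[i][j])
        for j in order[:k]:
            keep[i][j] = keep[j][i] = True
    return [[diss[i][j] if keep[i][j] else 0.0 for j in range(n)]
            for i in range(n)]
```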

    -
    -net_diss <- netConstruct(amgut1.filt,
    -                         measure = "aitchison",
    -                         zeroMethod = "multRepl",
    -                         sparsMethod = "knn",
    -                         kNeighbor = 3,
    -                         verbose = 3)
    -
    ## Checking input arguments ... Done.
    -## Infos about changed arguments:
    -## Counts normalized to fractions for measure "aitchison".
    -## 
    -## 127 taxa and 289 samples remaining.
    -## 
    -## Zero treatment:
    -## Execute multRepl() ... Done.
    -## 
    -## Normalization:
    -## Counts normalized by total sum scaling.
    -## 
    -## Calculate 'aitchison' dissimilarities ... Done.
    -## 
    -## Sparsify dissimilarities via 'knn' ... Registered S3 methods overwritten by 'proxy':
    -##   method               from    
    -##   print.registry_field registry
    -##   print.registry_entry registry
    -## Done.
    -

    For cluster detection, we use hierarchical clustering with average linkage. Internally, k = 3 is passed to cutree() from the stats package so that the tree is cut into 3 clusters.
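    What hclust(method = "average") followed by cutree(k = 3) computes can be sketched naively: agglomerate the two closest clusters (by average pairwise dissimilarity) until three clusters remain. For a monotone linkage such as average linkage, stopping early is equivalent to cutting the full tree:

```python
def average_linkage(diss, k):
    """Naive agglomerative clustering with average linkage, stopping once
    k clusters remain (a sketch of hclust() + cutree(tree, k))."""
    clusters = [[i] for i in range(len(diss))]

    def d(a, b):
        # average pairwise dissimilarity between two clusters
        return sum(diss[i][j] for i in a for j in b) / (len(a) * len(b))

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: d(clusters[p[0]], clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```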

    -
    -props_diss <- netAnalyze(net_diss,
    -                         clustMethod = "hierarchical",
    -                         clustPar = list(method = "average", k = 3),
    -                         hubPar = "eigenvector")
    -

    -
    -plot(props_diss, 
    -     nodeColor = "cluster", 
    -     nodeSize = "eigenvector",
    -     hubTransp = 40,
    -     edgeTranspLow = 60,
    -     charToRm = "00000",
    -     shortenLabels = "simple",
    -     labelLength = 6,
    -     mar = c(1, 3, 3, 5))
    -
    -# get green color with 50% transparency
    -green2 <- colToTransp("#009900", 40)
    -
    -legend(0.4, 1.1,
    -       cex = 2.2,
    -       legend = c("high similarity (low Aitchison distance)",
    -                  "low similarity (high Aitchison distance)"), 
    -       lty = 1, 
    -       lwd = c(3, 1),
    -       col = c("darkgreen", green2),
    -       bty = "n")
    -

    -

    In this dissimilarity-based network, hubs are interpreted as samples with a microbial composition similar to that of many other samples in the data set.

    -
    -
    -

    Soil microbiome example

    -

    Here is the code for reproducing the network plot shown at the beginning.

    -
    -data("soilrep")
    -
    -soil_warm_yes <- phyloseq::subset_samples(soilrep, warmed == "yes")
    -soil_warm_no  <- phyloseq::subset_samples(soilrep, warmed == "no")
    -
    -net_seas_p <- netConstruct(soil_warm_yes, soil_warm_no,
    -                           filtTax = "highestVar",
    -                           filtTaxPar = list(highestVar = 500),
    -                           zeroMethod = "pseudo",
    -                           normMethod = "clr",
    -                           measure = "pearson",
    -                           verbose = 0)
    -
    -netprops1 <- netAnalyze(net_seas_p, clustMethod = "cluster_fast_greedy")
    -
    -nclust <- as.numeric(max(names(table(netprops1$clustering$clust1))))
    -
    -col <- c(topo.colors(nclust), rainbow(6))
    -
    -plot(netprops1, 
    -     sameLayout = TRUE, 
    -     layoutGroup = "union", 
    -     colorVec = col,
    -     borderCol = "gray40", 
    -     nodeSize = "degree", 
    -     cexNodes = 0.9, 
    -     nodeSizeSpread = 3, 
    -     edgeTranspLow = 80, 
    -     edgeTranspHigh = 50,
    -     groupNames = c("Warming", "Non-warming"), 
    -     showTitle = TRUE, 
    -     cexTitle = 2.8,
    -     mar = c(1,1,3,1), 
    -     repulsion = 0.9, 
    -     labels = FALSE, 
    -     rmSingles = "inboth",
    -     nodeFilter = "clustMin", 
    -     nodeFilterPar = 10, 
    -     nodeTransp = 50, 
    -     hubTransp = 30)
    -
    -
    -

    References

    -
    -
    -Badri, Michelle, Zachary D. Kurtz, Richard Bonneau, and Christian L. Müller. 2020. “Shrinkage Improves Estimation of Microbial Associations Under Different Normalization Methods.” NAR Genomics and Bioinformatics 2 (December). https://doi.org/10.1093/NARGAB/LQAA100. -
    -
    -Martin-Fernández, Josep A, M Bren, Carles Barceló-Vidal, and Vera Pawlowsky-Glahn. 1999. “A Measure of Difference for Compositional Data Based on Measures of Divergence.” In Proceedings of IAMG, 99:211–16. -
    -
    -Yoon, Grace, Christian L. Müller, and Irina Gaynanova. 2020. “Fast Computation of Latent Correlations.” Journal of Computational and Graphical Statistics, June. http://arxiv.org/abs/2006.13875. -
    +

    References

    +
    +
    +Peschel, Stefanie, Christian L Müller, Erika von Mutius, Anne-Laure Boulesteix, and Martin Depner. 2020. “NetCoMi: network construction and comparison for microbiome data in R.” Briefings in Bioinformatics 22 (4): bbaa290. https://doi.org/10.1093/bib/bbaa290.
    diff --git a/reference/netConstruct.html b/reference/netConstruct.html index 58a2935..cf0d366 100644 --- a/reference/netConstruct.html +++ b/reference/netConstruct.html @@ -597,7 +597,7 @@

    Details

    Argument      Method                                Function
    "pseudoZO"    A pseudo count (defined by pseudocount as an optional element of zeroPar) is added to zero counts only. A unit zero count is used by default.
    "multRepl"    Multiplicative simple replacement     multRepl
    "alrEM"       Modified EM alr-algorithm             lrEM
    "bayesMult"   Bayesian-multiplicative replacement   cmultRepl
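    The multiplicative simple replacement follows Martin-Fernández et al. (1999): zeros in each composition are set to a small value delta, and the non-zero parts are shrunk proportionally so the row still sums to one (preserving the ratios between non-zero parts). A minimal sketch for a single sample; delta and the normalization are illustrative choices, not zCompositions' exact defaults:

```python
def mult_repl(counts, delta=1e-5):
    """Multiplicative simple zero replacement on one sample:
    zeros become delta, non-zero fractions are shrunk so the total stays 1."""
    total = sum(counts)
    frac = [v / total for v in counts]          # counts -> composition
    n_zero = sum(1 for v in frac if v == 0)
    return [delta if v == 0 else v * (1 - n_zero * delta) for v in frac]
```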

    Normalization methods

    Argument    Method                                      Function
    "TSS"       Total sum scaling                           t(apply(countMat, 1, function(x) x/sum(x)))
    "CSS"       Cumulative sum scaling                      cumNormMat
    "COM"       Common sum scaling                          t(apply(countMat, 1, function(x) x * min(rowSums(countMat)) / sum(x)))
    "rarefy"    Rarefying                                   rrarefy
    "VST"       Variance stabilizing transformation         varianceStabilizingTransformation
    "clr"       Centered log-ratio transformation           clr
    "mclr"      Modified central log-ratio transformation   mclr

    These methods (except for rarefying) are described in Badri et al. (2020).

    Transformation methods
    Functions used for transforming associations into dissimilarities:

    Argument       Function
    "signed"       sqrt(0.5 * (1 - x))
    "unsigned"     sqrt(1 - x^2)
    "signedPos"    diss <- sqrt(0.5 * (1 - x)); diss[x < 0] <- 0
    "TOMdiss"      TOMdist

    diff --git a/search.json b/search.json index fe55d9a..8e39097 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-construction","dir":"Articles","previous_headings":"","what":"Network construction","title":"Get started","text":"use SPRING package estimating associations (conditional dependence) OTUs. data filtered within netConstruct() follows: samples total number reads least 1000 included (argument filtSamp). 50 taxa highest frequency included (argument filtTax). measure defines association dissimilarity measure, \"spring\" case. Additional arguments passed SPRING() via measurePar. nlambda rep.num set 10 decreased execution time, higher real data. Rmethod set “approx” estimate correlations using hybrid multi-linear interpolation approach proposed @yoon2020fast. method considerably reduces runtime controlling approximation error. Normalization well zero handling performed internally SPRING(). Hence, set normMethod zeroMethod \"none\". furthermore set sparsMethod \"none\" SPRING returns sparse network additional sparsification step necessary. use “signed” method transforming associations dissimilarities (argument dissFunc). , strongly negatively associated taxa high dissimilarity , turn, low similarity, corresponds edge weights network plot. verbose argument set 3 messages generated netConstruct() well messages external functions printed.","code":"net_spring <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), measure = \"spring\", measurePar = list(nlambda=10, rep.num=10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Calculate 'spring' associations ... 
Registered S3 method overwritten by 'dendextend': #> method from #> rev.hclust vegan #> Registered S3 method overwritten by 'seriation': #> method from #> reorder.hclust vegan #> Done."},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-analysis","dir":"Articles","previous_headings":"","what":"Network analysis","title":"Get started","text":"NetCoMi’s netAnalyze() function used analyzing constructed network(s). , centrLCC set TRUE meaning centralities calculated nodes largest connected component (LCC). Clusters identified using greedy modularity optimization (cluster_fast_greedy() igraph package). Hubs nodes eigenvector centrality value empirical 95% quantile eigenvector centralities network (argument hubPar). weightDeg normDeg set FALSE degree node simply defined number nodes adjacent node. default, heatmap Graphlet Correlation Matrix (GCM) returned (graphlet correlations upper triangle significance codes resulting Student’s t-test lower triangle). See ?calcGCM ?testGCM details.","code":"props_spring <- netAnalyze(net_spring, centrLCC = TRUE, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\", weightDeg = FALSE, normDeg = FALSE) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. 
#?summary.microNetProps summary(props_spring, numbNodes = 5L) #> #> Component sizes #> ``````````````` #> size: 48 1 #> #: 1 2 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.96000 #> Clustering coefficient 0.33594 #> Modularity 0.53407 #> Positive edge percentage 88.34951 #> Edge density 0.09131 #> Natural connectivity 0.02855 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.97035 #> Average path length** 2.36912 #> #> Whole network: #> #> Number of components 3.00000 #> Clustering coefficient 0.33594 #> Modularity 0.53407 #> Positive edge percentage 88.34951 #> Edge density 0.08408 #> Natural connectivity 0.02714 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 2 12 17 10 5 4 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 190597 #> 288134 #> 311477 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (unnormalized): #> #> 288134 10 #> 190597 9 #> 311477 9 #> 188236 8 #> 199487 8 #> #> Betweenness centrality (normalized): #> #> 302160 0.31360 #> 268332 0.24144 #> 259569 0.23404 #> 470973 0.21462 #> 119010 0.19611 #> #> Closeness centrality (normalized): #> #> 288134 0.68426 #> 311477 0.68413 #> 199487 0.68099 #> 302160 0.67518 #> 188236 0.66852 #> #> Eigenvector centrality (normalized): #> #> 288134 1.00000 #> 311477 0.94417 #> 190597 0.90794 #> 199487 0.85439 #> 188236 
0.72684"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"plotting-the-gcm-heatmap-manually","dir":"Articles","previous_headings":"Network analysis","what":"Plotting the GCM heatmap manually","title":"Get started","text":"","code":"plotHeat(mat = props_spring$graphletLCC$gcm1, pmat = props_spring$graphletLCC$pAdjust1, type = \"mixed\", title = \"GCM\", colorLim = c(-1, 1), mar = c(2, 0, 2, 0)) # Add rectangles highlighting the four types of orbits graphics::rect(xleft = c( 0.5, 1.5, 4.5, 7.5), ybottom = c(11.5, 7.5, 4.5, 0.5), xright = c( 1.5, 4.5, 7.5, 11.5), ytop = c(10.5, 10.5, 7.5, 4.5), lwd = 2, xpd = NA) text(6, -0.2, xpd = NA, \"Significance codes: ***: 0.001; **: 0.01; *: 0.05\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"visualizing-the-network","dir":"Articles","previous_headings":"","what":"Visualizing the network","title":"Get started","text":"use determined clusters node colors scale node sizes according node’s eigenvector centrality. Note edge weights (non-negative) similarities, however, edges belonging negative estimated associations colored red default (negDiffCol = TRUE). default, different transparency value added edges absolute weight cut value (arguments edgeTranspLow edgeTranspHigh). determined cut value can read follows:","code":"# help page ?plot.microNetProps p <- plot(props_spring, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with SPRING associations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated association:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) p$q1$Arguments$cut #> 75% #> 0.337099"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"export-to-gephi","dir":"Articles","previous_headings":"","what":"Export to Gephi","title":"Get started","text":"users may interested export network Gephi. 
’s example: exported .csv files can imported Gephi.","code":"# For Gephi, we have to generate an edge list with IDs. # The corresponding labels (and also further node features) are stored as node list. # Create edge object from the edge list exported by netConstruct() edges <- dplyr::select(net_spring$edgelist1, v1, v2) # Add Source and Target variables (as IDs) edges$Source <- as.numeric(factor(edges$v1)) edges$Target <- as.numeric(factor(edges$v2)) edges$Type <- \"Undirected\" edges$Weight <- net_spring$edgelist1$adja nodes <- unique(edges[,c('v1','Source')]) colnames(nodes) <- c(\"Label\", \"Id\") # Add category with clusters (can be used as node colors in Gephi) nodes$Category <- props_spring$clustering$clust1[nodes$Label] edges <- dplyr::select(edges, Source, Target, Type, Weight) write.csv(nodes, file = \"nodes.csv\", row.names = FALSE) write.csv(edges, file = \"edges.csv\", row.names = FALSE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-with-pearson-correlation-as-association-measure","dir":"Articles","previous_headings":"","what":"Network with Pearson correlation as association measure","title":"Get started","text":"Let’s construct another network using Pearson’s correlation coefficient association measure. input now phyloseq object. Since Pearson correlations may lead compositional effects applied sequencing data, use clr transformation normalization method. Zero treatment necessary case. threshold 0.3 used sparsification method, OTUs absolute correlation greater equal 0.3 connected. Network analysis plotting: Let’s improve visualization changing following arguments: repulsion = 0.8: Place nodes apart. rmSingles = TRUE: Single nodes removed. labelScale = FALSE cexLabels = 1.6: labels equal size enlarged improve readability small node’s labels. nodeSizeSpread = 3 (default 4): Node sizes similar value decreased. argument (combination cexNodes) useful enlarge small nodes keeping size big nodes. 
hubBorderCol = \"darkgray\": Change border color better readability node labels.","code":"net_pears <- netConstruct(amgut2.filt.phy, measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"multRepl\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... Done. #> 2 rows with zero sum removed. #> 138 taxa and 294 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Sparsify associations via 'threshold' ... Done. props_pears <- netAnalyze(net_pears, clustMethod = \"cluster_fast_greedy\") plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"edge-filtering","dir":"Articles","previous_headings":"Network with Pearson correlation as association measure","what":"Edge filtering","title":"Get started","text":"network can sparsified using arguments edgeFilter (edges filtered layout computed) edgeInvisFilter (edges removed layout computed thus just made “invisible”).","code":"plot(props_pears, edgeInvisFilter = \"threshold\", edgeInvisPar = 0.4, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, 
rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = paste0(\"Network on OTU level with Pearson correlations\", \"\\n(edge filter: threshold = 0.4)\"), showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"using-the-unsigned-transformation","dir":"Articles","previous_headings":"","what":"Using the “unsigned” transformation","title":"Get started","text":"network, “signed” transformation used transform estimated associations dissimilarities. leads network strongly positive correlated taxa high edge weight (1 correlation equals 1) strongly negative correlated taxa low edge weight (0 correlation equals -1). now use “unsigned” transformation edge weight strongly correlated taxa high, matter sign. Hence, correlation -1 1 lead edge weight 1.","code":""},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-construction-1","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Network construction","title":"Get started","text":"can pass network object netConstruct() save runtime.","code":"net_pears_unsigned <- netConstruct(data = net_pears$assoEst1, dataType = \"correlation\", sparsMethod = \"threshold\", thresh = 0.3, dissFunc = \"unsigned\", verbose = 3) #> Checking input arguments ... Done. #> #> Sparsify associations via 'threshold' ... Done."},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"estimated-correlations-and-adjacency-values","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Estimated correlations and adjacency values","title":"Get started","text":"following histograms demonstrate estimated correlations transformed adjacencies (= sparsified similarities weighted networks). 
Sparsified estimated correlations: Adjacency values computed using “signed” transformation (values different 0 1 edges network): Adjacency values computed using “unsigned” transformation:","code":"hist(net_pears$assoMat1, 100, xlim = c(-1, 1), ylim = c(0, 400), xlab = \"Estimated correlation\", main = \"Estimated correlations after sparsification\") hist(net_pears$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"signed\\\" transformation)\") hist(net_pears_unsigned$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"unsigned\\\" transformation)\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-analysis-and-plotting","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Network analysis and plotting","title":"Get started","text":"“signed” transformation, positive correlated taxa likely belong cluster, “unsigned” transformation clusters contain strongly positive negative correlated taxa.","code":"props_pears_unsigned <- netAnalyze(net_pears_unsigned, clustMethod = \"cluster_fast_greedy\", gcmHeat = FALSE) plot(props_pears_unsigned, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.9, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network with Pearson correlations and \\\"unsigned\\\" transformation\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-on-genus-level","dir":"Articles","previous_headings":"","what":"Network on genus level","title":"Get started","text":"now construct network, OTUs agglomerated genera.","code":"library(phyloseq) data(\"amgut2.filt.phy\") # Agglomerate to genus level amgut_genus <- 
tax_glom(amgut2.filt.phy, taxrank = \"Rank6\") # Taxonomic table taxtab <- as(tax_table(amgut_genus), \"matrix\") # Rename taxonomic table and make Rank6 (genus) unique amgut_genus_renamed <- renameTaxa(amgut_genus, pat = \"\", substPat = \"_()\", numDupli = \"Rank6\") #> Column 7 contains NAs only and is ignored. # Network construction and analysis net_genus <- netConstruct(amgut_genus_renamed, taxRank = \"Rank6\", measure = \"pearson\", zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... #> Done. #> 2 rows with zero sum removed. #> 43 taxa and 294 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Sparsify associations via 'threshold' ... Done. props_genus <- netAnalyze(net_genus, clustMethod = \"cluster_fast_greedy\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-plots","dir":"Articles","previous_headings":"Network on genus level","what":"Network plots","title":"Get started","text":"Modifications: Fruchterman-Reingold layout algorithm igraph package used (passed plot matrix) Shortened labels (using “intelligent” method, avoids duplicates) Fixed node sizes, hubs enlarged Node color gray nodes (transparancy lower hub nodes default) Since visualization obviously optimal, make adjustments: time, Fruchterman-Reingold layout algorithm computed within plot function thus applied “reduced” network without singletons Labels scaled node sizes Single nodes removed Node sizes scaled column sums clr-transformed data Node colors represent determined clusters Border color hub nodes changed black darkgray Label size hubs enlarged Let’s check whether largest nodes actually highest column sums matrix normalized counts returned netConstruct(). 
order improve plot, use following modifications: time, choose “spring” layout part qgraph() (function generally used network plotting NetCoMi) repulsion value 1 places nodes apart Labels shortened anymore Nodes (bacteria genus level) colored according respective phylum Edges representing positive associations colored blue, negative ones orange (just give example alternative edge coloring) Transparency increased edges high weight improve readability node labels","code":"# Compute layout graph3 <- igraph::graph_from_adjacency_matrix(net_genus$adjaMat1, weighted = TRUE) set.seed(123456) lay_fr <- igraph::layout_with_fr(graph3) # Row names of the layout matrix must match the node names rownames(lay_fr) <- rownames(net_genus$adjaMat1) plot(props_genus, layout = lay_fr, shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), nodeSize = \"fix\", nodeColor = \"gray\", cexNodes = 0.8, cexHubs = 1.1, cexLabels = 1.2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) set.seed(123456) plot(props_genus, layout = \"layout_with_fr\", shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeColor = \"cluster\", hubBorderCol = \"darkgray\", cexNodes = 2, cexLabels = 1.5, cexHubLabels = 2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) sort(colSums(net_genus$normCounts1), decreasing = TRUE)[1:10] #> Bacteroides Klebsiella Faecalibacterium #> 1200.7971 1137.4928 708.0877 #> 5_Clostridiales(O) 2_Ruminococcaceae(F) 3_Lachnospiraceae(F) #> 
549.2647 502.1889 493.7558 #> 6_Enterobacteriaceae(F) Roseburia Parabacteroides #> 363.3841 333.8737 328.0495 #> Coprococcus #> 274.4082 # Get phyla names taxtab <- as(tax_table(amgut_genus_renamed), \"matrix\") phyla <- as.factor(gsub(\"p__\", \"\", taxtab[, \"Rank2\"])) names(phyla) <- taxtab[, \"Rank6\"] #table(phyla) # Define phylum colors phylcol <- c(\"cyan\", \"blue3\", \"red\", \"lawngreen\", \"yellow\", \"deeppink\") plot(props_genus, layout = \"spring\", repulsion = 0.84, shortenLabels = \"none\", charToRm = \"g__\", labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeSizeSpread = 4, nodeColor = \"feature\", featVecCol = phyla, colorVec = phylcol, posCol = \"darkturquoise\", negCol = \"orange\", edgeTranspLow = 0, edgeTranspHigh = 40, cexNodes = 2, cexLabels = 2, cexHubLabels = 2.5, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) # Colors used in the legend should be equally transparent as in the plot phylcol_transp <- colToTransp(phylcol, 60) legend(-1.2, 1.2, cex = 2, pt.cex = 2.5, title = \"Phylum:\", legend=levels(phyla), col = phylcol_transp, bty = \"n\", pch = 16) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"darkturquoise\",\"orange\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-construction-and-analysis","dir":"Articles","previous_headings":"","what":"Network construction and analysis","title":"Generate permuted association matrices","text":"use data American Gut Project conduct network comparison subjects without lactose intolerance. demonstrate NetCoMi’s functionality matched data, build “fake” 1:2 matched data set, two samples LACTOSE = \"\" group assigned one sample LACTOSE = \"yes\" group. 
use subset 150 samples, leading 50 samples group “yes” 100 samples group “”.","code":"library(NetCoMi) library(phyloseq) set.seed(123456) # Load American Gut Data (from SpiecEasi package) data(\"amgut2.filt.phy\") #table(amgut2.filt.phy@sam_data@.Data[[which(amgut2.filt.phy@sam_data@names == \"LACTOSE\")]]) # Divide samples into two groups: with and without lactose intolerance lact_yes <- phyloseq::subset_samples(amgut2.filt.phy, LACTOSE == \"yes\") lact_no <- phyloseq::subset_samples(amgut2.filt.phy, LACTOSE == \"no\") # Extract count tables counts_yes <- t(as(phyloseq::otu_table(lact_yes), \"matrix\")) counts_no <- t(as(phyloseq::otu_table(lact_no), \"matrix\")) # Build the 1:2 matched data set counts_matched <- matrix(NA, nrow = 150, ncol = ncol(counts_yes)) colnames(counts_matched) <- colnames(counts_yes) rownames(counts_matched) <- 1:150 ind_yes <- ind_no <- 1 for (i in 1:150) { if ((i-1)%%3 == 0) { counts_matched[i, ] <- counts_yes[ind_yes, ] rownames(counts_matched)[i] <- rownames(counts_yes)[ind_yes] ind_yes <- ind_yes + 1 } else { counts_matched[i, ] <- counts_no[ind_no, ] rownames(counts_matched)[i] <- rownames(counts_no)[ind_no] ind_no <- ind_no + 1 } } # The corresponding group vector used for splitting the data into two subsets. group_vec <- rep(c(1,2,2), 50) # Note: group \"1\" belongs to \"yes\", group \"2\" belongs to \"no\" # Network construction net_amgut <- netConstruct(counts_matched, group = group_vec, matchDesign = c(1,2), filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", zeroMethod = \"pseudo\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.4, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 88 taxa removed. #> 50 taxa and 150 samples remaining. #> #> Zero treatment: #> Pseudo count of 1 added. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Calculate associations in group 2 ... Done. 
#> #> Sparsify associations via 'threshold' ... Done. #> #> Sparsify associations in group 2 ... Done. # Network analysis with default values props_amgut <- netAnalyze(net_amgut) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. #summary(props_amgut) # Network plot plot(props_amgut, sameLayout = TRUE, layoutGroup = \"union\", nodeSize = \"clr\", repulsion = 0.9, cexTitle = 3.7, cexNodes = 2, cexLabels = 2, groupNames = c(\"LACTOSE = yes\", \"LACTOSE = no\")) legend(\"bottom\", title = \"estimated correlation:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-comparison-via-the-classical-way","dir":"Articles","previous_headings":"","what":"Network comparison via the “classical way”","title":"Generate permuted association matrices","text":"We conduct a network comparison with permutation tests to examine whether the group differences are significant. In order to reduce the execution time, only 100 permutations are used here. For real data sets, the number of permutations should be at least 1000 to get reliable results. The matrices with the estimated associations for the permuted data are stored in an external file (in the current working directory) named \"assoPerm_comp\". The network comparison is then repeated, but this time, the stored permutation associations are loaded and passed to netCompare(). This option might be useful to rerun the function with an alternative multiple testing adjustment, without the need of re-estimating the associations. The stored permutation associations can also be passed to diffnet() to construct a differential network. As expected for only 100 permutations, there are no differential associations after multiple testing adjustment. 
Just to take a look at how the differential network would look like, we plot the differential network based on the non-adjusted p-values. Note that this approach is statistically not correct!","code":"# Network comparison comp_amgut_orig <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Files 'assoPerm_comp.bmat and assoPerm_comp.desc.txt created. #> Execute permutation tests ... #> |======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. summary(comp_amgut_orig) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = props_amgut, permTest = TRUE, nPerm = 100, seed = 123456, #> storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm_comp\", #> storeCountsPerm = FALSE) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff. 
p-value #> Relative LCC size 0.480 0.400 0.080 0.950495 #> Clustering coefficient 0.510 0.635 0.125 0.584158 #> Modularity 0.261 0.175 0.085 0.524752 #> Positive edge percentage 57.627 62.500 4.873 0.554455 #> Edge density 0.214 0.295 0.081 0.693069 #> Natural connectivity 0.080 0.109 0.029 0.742574 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.921 0.887 0.033 0.693069 #> Average path length** 1.786 1.459 0.327 0.643564 #> #> Whole network: #> group '1' group '2' abs.diff. p-value #> Number of components 21.000 27.000 6.000 0.861386 #> Clustering coefficient 0.463 0.635 0.172 0.247525 #> Modularity 0.332 0.252 0.080 0.504950 #> Positive edge percentage 58.462 63.333 4.872 0.564356 #> Edge density 0.053 0.049 0.004 0.871287 #> Natural connectivity 0.030 0.031 0.001 0.772277 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.615 0.991177 0.034655 * #> betweenness centr. 0.312 0.546936 0.660877 #> closeness centr. 0.529 0.972716 0.075475 . #> eigenvec. centr. 0.625 0.995960 0.015945 * #> hub taxa 0.500 0.888889 0.407407 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.383 0.281 #> p-value 0.000 0.000 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.914000 2.241000 #> p-value 0.633663 0.574257 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 364563 0.061 0.245 0.184 1 #> 190597 0.102 0.000 0.102 1 #> 369164 0.102 0.000 0.102 1 #> 184983 0.184 0.102 0.082 1 #> 194648 0.000 0.082 0.082 1 #> 353985 0.082 0.000 0.082 1 #> 242070 0.082 0.000 0.082 1 #> 307981 0.122 0.184 0.061 1 #> 188236 0.122 0.184 0.061 1 #> 363302 0.102 0.041 0.061 1 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 353985 0.320 0.000 0.320 0.494890 #> 364563 0.040 0.216 0.177 0.999678 #> 188236 0.024 0.187 0.163 0.999678 #> 307981 0.020 0.158 0.138 0.999678 #> 157547 0.103 0.000 0.103 0.999678 #> 71543 0.091 0.000 0.091 0.999678 #> 590083 0.087 0.000 0.087 0.742335 #> 190597 0.079 0.000 0.079 0.999678 #> 194648 0.000 0.070 0.070 0.999678 #> 288134 0.063 0.129 0.065 0.999678 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 190597 0.871 0.000 0.871 0.489307 #> 194648 0.000 0.868 0.868 0.768911 #> 242070 0.809 0.000 0.809 0.733961 #> 369164 0.788 0.000 0.788 0.733961 #> 302160 0.000 0.677 0.677 0.988400 #> 353985 0.639 0.000 0.639 0.733961 #> 516022 0.000 0.561 0.561 0.988400 #> 181095 0.515 0.000 0.515 0.733961 #> 590083 0.507 0.000 0.507 0.489307 #> 470239 0.349 0.000 0.349 0.988400 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 242070 0.469 0.000 0.469 0.9998 #> 307981 0.334 0.672 0.339 0.9998 #> 301645 0.332 0.661 0.329 0.9998 #> 364563 0.079 0.339 0.260 0.9998 #> 369164 0.201 0.000 0.201 0.9998 #> 326792 0.130 0.300 0.170 0.9998 #> 157547 0.632 0.801 0.169 0.9998 #> 190597 0.146 0.000 0.146 0.9998 #> 363302 0.153 0.012 0.141 0.9998 #> 71543 0.910 0.777 0.133 0.9998 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # Network comparison comp_amgut1 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... #> |======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Check whether the second comparison leads to equal results all.equal(comp_amgut_orig$properties, comp_amgut1$properties) #> [1] TRUE # Construct differential network diffnet_amgut <- diffnet(net_amgut, diffMethod = \"permute\", nPerm = 100, fileLoadAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE) #> Checking input arguments ... #> Done. #> Execute permutation tests ... 
#> |======================================================================| 100% #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> Done. #> No significant differential associations detected after multiple testing adjustment. plot(diffnet_amgut) #> Error in plot.diffnet(diffnet_amgut): There are no differential correlations to plot (after multiple testing adjustment). plot(diffnet_amgut, adjusted = FALSE, mar = c(2, 2, 5, 15), legendPos = c(1.2,1.2), legendArgs = list(bty = \"n\"), legendGroupnames = c(\"yes\", \"no\"), legendTitle = \"Correlations:\")"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-comparison-using-createassoperm","dir":"Articles","previous_headings":"","what":"Network comparison using createAssoPerm()","title":"Generate permuted association matrices","text":"This time, the permutation association matrices are generated using createAssoPerm() and then passed to netCompare(). The output is written to a variable because createAssoPerm() generally returns the matrix with the permuted group labels. Let’s take a look at the permuted group labels. To interpret the group labels correctly, it is important to know that, within netConstruct(), the data set is divided into two matrices belonging to the two groups. For the permutation tests, the two matrices are combined by rows, and in each permutation, the samples are reassigned to one of the two groups while keeping the matching design for matched data. 
In our case, permGroupMat is a matrix consisting of 100 rows (nPerm = 100) and 150 columns (the sample size). The first 50 columns belong to the first group (group “yes” in our case) and the columns 51 to 150 belong to the second group. Since each two samples of group 2 are matched to one sample of group 1, the group labels are distributed in the matrix accordingly. Now, we can see that the matching design is kept: Since sample 3 is assigned to group 1, samples 1 and 2 are assigned to group 2 (entries [1,1] and [1,51:52] of permGroupMat). Finally, the stored permutation association matrices are passed to netCompare(). Using the fm.open function, we also take a look at the stored matrices themselves.","code":"permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = TRUE, append = FALSE, seed = 123456) seq1 <- seq(1,150, by = 3) seq2 <- seq(1:150)[!seq(1:150)%in%seq1] colnames(permGroupMat) <- c(seq1, seq2) permGroupMat[1:5, 1:10] #> 1 4 7 10 13 16 19 22 25 28 #> [1,] 2 2 2 2 1 2 2 1 2 2 #> [2,] 2 2 2 1 1 2 2 2 2 2 #> [3,] 2 2 1 1 2 2 2 1 2 2 #> [4,] 2 2 1 2 1 2 2 2 1 1 #> [5,] 2 2 2 2 2 1 2 2 2 2 permGroupMat[1:5, 51:71] #> 2 3 5 6 8 9 11 12 14 15 17 18 20 21 23 24 26 27 29 30 32 #> [1,] 2 1 1 2 2 1 1 2 2 2 1 2 2 1 2 2 2 1 1 2 1 #> [2,] 2 1 2 1 2 1 2 2 2 2 1 2 1 2 2 1 2 1 2 1 2 #> [3,] 2 1 1 2 2 2 2 2 1 2 1 2 2 1 2 2 1 2 2 1 2 #> [4,] 1 2 1 2 2 2 1 2 2 2 1 2 2 1 1 2 2 2 2 2 1 #> [5,] 2 1 2 1 1 2 2 1 1 2 2 2 2 1 1 2 1 2 1 2 2 comp_amgut2 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\", seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... 
#> |======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal? all.equal(comp_amgut_orig$properties, comp_amgut2$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- filematrix::fm.open(filenamebase = \"assoPerm_comp\" , readonly = TRUE) assoPerm2 <- filematrix::fm.open(filenamebase = \"assoPerm\" , readonly = TRUE) identical(as.matrix(assoPerm1), as.matrix(assoPerm2)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm2)) #> [1] 5000 100 # Close files filematrix::close(assoPerm1) filematrix::close(assoPerm2)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"block-wise-execution","dir":"Articles","previous_headings":"","what":"Block-wise execution","title":"Generate permuted association matrices","text":"Due to limited resources, it might be meaningful to estimate the associations in blocks, that is, only for a subset of permutations instead of all permutations at once. We’ll now see how to perform such a block-wise network comparison using NetCoMi’s functions. Note that in this approach, the external file is extended in each iteration, which is why it is not parallelizable. In the first step, createAssoPerm is used to generate only the matrix with permuted group labels (for all permutations!). Hence, we set the computeAsso parameter to FALSE. We now compute the association matrices in blocks of 20 permutations within a loop (leading to 5 iterations). Note: The nPerm argument must be set to the block size. 
The external file (containing the association matrices) must be extended in each loop iteration, except for the first iteration, where the file is created. Thus, append is set to TRUE for i >= 2. The stored file, which now contains the associations of all 100 permutations, can then be passed to netCompare() as before.","code":"permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = FALSE, seed = 123456) #> Create matrix with permuted group labels ... Done. nPerm_all <- 100 blocksize <- 20 repetitions <- nPerm_all / blocksize for (i in 1:repetitions) { print(i) if (i == 1) { # Create a new file in the first run tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, append = FALSE) } else { tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, append = TRUE) } } #> [1] 1 #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Compute permutation associations ... 
#> Done. #> [1] 2 #> Compute permutation associations ... #> Done. #> [1] 3 #> Compute permutation associations ... #> Done. #> [1] 4 #> Compute permutation associations ... #> Done. #> [1] 5 #> Compute permutation associations ... #> Done. 
comp_amgut3 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, storeAssoPerm = TRUE, fileLoadAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal to the first comparison? all.equal(comp_amgut_orig$properties, comp_amgut3$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- fm.open(filenamebase = \"assoPerm_comp\", readonly = TRUE) assoPerm3 <- fm.open(filenamebase = \"assoPerm\", readonly = TRUE) all.equal(as.matrix(assoPerm1), as.matrix(assoPerm3)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm3)) #> [1] 5000 100 # Close files close(assoPerm1) close(assoPerm3)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"block-wise-execution-executable-in-parallel","dir":"Articles","previous_headings":"","what":"Block-wise execution (executable in parallel)","title":"Generate permuted association matrices","text":"If the blocks are computed in parallel, extending the \"assoPerm\" file in each iteration would not work. To be able to run the blocks in parallel, we create a separate file in each iteration and combine these files at the end. 
In the last step, the file containing the combined matrix is passed to netCompare().","code":"# Create the matrix with permuted group labels (as before) permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = FALSE, seed = 123456) #> Create matrix with permuted group labels ... Done. nPerm_all <- 100 blocksize <- 20 repetitions <- nPerm_all / blocksize # 5 repetitions # Execute as standard for-loop: for (i in 1:repetitions) { tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = paste0(\"assoPerm\", i), storeCountsPerm = FALSE, append = FALSE) } #> Files 'assoPerm1.bmat and assoPerm1.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm2.bmat and assoPerm2.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm3.bmat and assoPerm3.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm4.bmat and assoPerm4.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm5.bmat and assoPerm5.desc.txt created. #> Compute permutation associations ... #> Done. # OR execute in parallel: library(\"foreach\") cores <- 2 # Please choose an appropriate number of cores cl <- parallel::makeCluster(cores) doSNOW::registerDoSNOW(cl) # Create progress bar: pb <- utils::txtProgressBar(0, repetitions, style = 3) progress <- function(n) { utils::setTxtProgressBar(pb, n) } opts <- list(progress = progress) tmp <- foreach(i = 1:repetitions, .packages = c(\"NetCoMi\"), .options.snow = opts) %dopar% { progress(i) NetCoMi::createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = paste0(\"assoPerm\", i), storeCountsPerm = FALSE, append = FALSE) } #> |======================================================================| 100% # Close progress bar close(pb) # Stop cluster parallel::stopCluster(cl) # Combine the matrices and 
store them into a new file (because netCompare() needs an external file) assoPerm_all <- NULL for (i in 1:repetitions) { assoPerm_tmp <- fm.open(filenamebase = paste0(\"assoPerm\", i), readonly = TRUE) assoPerm_all <- rbind(assoPerm_all, as.matrix(assoPerm_tmp)) close(assoPerm_tmp) } dim(assoPerm_all) #> [1] 5000 100 # Store the combined permutation association matrix in an external file fm.create.from.matrix(filenamebase = \"assoPerm\", mat = assoPerm_all) #> 5000 x 100 filematrix with 8 byte \"double\" elements comp_amgut4 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal to those of the first comparison? 
all.equal(comp_amgut_orig$properties, comp_amgut4$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- fm.open(filenamebase = \"assoPerm_comp\", readonly = TRUE) assoPerm4 <- fm.open(filenamebase = \"assoPerm\", readonly = TRUE) identical(as.matrix(assoPerm1), as.matrix(assoPerm4)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm4)) #> [1] 5000 100 # Close files close(assoPerm1) close(assoPerm4)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"network-construction","dir":"Articles","previous_headings":"","what":"Network construction","title":"Network comparison","text":"The amgut data set is split by \"SEASONAL_ALLERGIES\", leading to two subsets of samples (with and without seasonal allergies). We ignore the “None” group here. The 50 nodes with highest variance are selected for network construction to get smaller networks. We filter the 121 samples (the sample size of the smaller group) with highest frequency to make the sample sizes equal and thus ensure comparability. Alternatively, a group vector can be passed to group, according to which the data set is split into two groups:","code":"data(\"amgut2.filt.phy\") # Split the phyloseq object into two groups amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") amgut_season_yes #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 121 samples ] #> sample_data() Sample Data: [ 121 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] amgut_season_no #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 163 samples ] #> sample_data() Sample Data: [ 163 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] n_yes <- phyloseq::nsamples(amgut_season_yes) # Network construction net_season <- netConstruct(data = amgut_season_no, data2 = amgut_season_yes, filtTax = \"highestVar\", filtTaxPar
= list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda = 10, rep.num = 10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 42 samples removed in data set 1. #> 0 samples removed in data set 2. #> 96 taxa removed in each data set. #> 1 rows with zero sum removed in group 2. #> 42 taxa and 121 samples remaining in group 1. #> 42 taxa and 120 samples remaining in group 2. #> #> Calculate 'spring' associations ... Registered S3 method overwritten by 'dendextend': #> method from #> rev.hclust vegan #> Registered S3 method overwritten by 'seriation': #> method from #> reorder.hclust vegan #> Done. #> #> Calculate associations in group 2 ... Done. # Get count table countMat <- phyloseq::otu_table(amgut2.filt.phy) # netConstruct() expects samples in rows countMat <- t(as(countMat, \"matrix\")) group_vec <- phyloseq::get_variable(amgut2.filt.phy, \"SEASONAL_ALLERGIES\") # Select the two groups of interest (level \"none\" is excluded) sel <- which(group_vec %in% c(\"no\", \"yes\")) group_vec <- group_vec[sel] countMat <- countMat[sel, ] net_season <- netConstruct(countMat, group = group_vec, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda = 10, rep.num = 10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 3, seed = 123456)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"network-analysis","dir":"Articles","previous_headings":"","what":"Network analysis","title":"Network comparison","text":"The object returned by netConstruct(), which contains both networks, is now passed to netAnalyze(). The network properties are computed for both networks simultaneously. 
To demonstrate the functionalities of netAnalyze(), we play around with the available arguments, even if the chosen setting might not be optimal. centrLCC = FALSE: Centralities are calculated for all nodes (not only for the largest connected component). avDissIgnoreInf = TRUE: Nodes with an infinite dissimilarity are ignored when calculating the average dissimilarity. sPathNorm = FALSE: Shortest paths are not normalized by the average dissimilarity. hubPar = c(\"degree\", \"eigenvector\"): Hubs are nodes with highest degree and eigenvector centrality at the same time. lnormFit = TRUE and hubQuant = 0.9: A log-normal distribution is fitted to the centrality values to identify nodes with the “highest” centrality values. Here, a node is identified as a hub if, for each of the chosen centrality measures, the node’s centrality value is above the 90% quantile of the fitted log-normal distribution. The non-normalized centralities are used for all four measures. Note! These arguments must be set carefully, depending on the research questions. NetCoMi’s default values are not generally preferable in all practical cases!","code":"props_season <- netAnalyze(net_season, centrLCC = FALSE, avDissIgnoreInf = TRUE, sPathNorm = FALSE, clustMethod = \"cluster_fast_greedy\", hubPar = c(\"degree\", \"eigenvector\"), hubQuant = 0.9, lnormFit = TRUE, normDeg = FALSE, normBetw = FALSE, normClose = FALSE, normEigen = FALSE) #> Warning: The `scale` argument of `eigen_centrality()` always behaves as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. 
summary(props_season) #> #> Component sizes #> ``````````````` #> group '1': #> size: 28 1 #> #: 1 14 #> group '2': #> size: 31 8 1 #> #: 1 1 3 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' #> Relative LCC size 0.66667 0.73810 #> Clustering coefficient 0.15161 0.27111 #> Modularity 0.62611 0.45823 #> Positive edge percentage 86.66667 100.00000 #> Edge density 0.07937 0.12473 #> Natural connectivity 0.04539 0.04362 #> Vertex connectivity 1.00000 1.00000 #> Edge connectivity 1.00000 1.00000 #> Average dissimilarity* 0.67251 0.68178 #> Average path length** 3.40008 1.86767 #> #> Whole network: #> group '1' group '2' #> Number of components 15.00000 5.00000 #> Clustering coefficient 0.15161 0.29755 #> Modularity 0.62611 0.55684 #> Positive edge percentage 86.66667 100.00000 #> Edge density 0.03484 0.08130 #> Natural connectivity 0.02826 0.03111 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Sum of dissimilarities along the path #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> group '1': #> name: 0 1 2 3 4 5 #> #: 14 7 6 5 4 6 #> #> group '2': #> name: 0 1 2 3 4 5 #> #: 3 5 14 4 8 8 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on log-normal quantiles of centralities #> ``````````````````````````````````````````````` #> group '1' group '2' #> 307981 322235 #> 363302 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the complete network #> ```````````````````````````````````` #> Degree (unnormalized): #> group '1' group '2' #> 307981 5 2 #> 9715 5 5 #> 364563 4 4 #> 259569 4 5 #> 322235 3 9 #> ______ ______ #> 322235 3 9 #> 363302 3 9 #> 158660 2 6 #> 188236 3 5 #> 259569 4 5 #> #> Betweenness centrality (unnormalized): #> group '1' group '2' #> 
307981 231 0 #> 331820 170 9 #> 158660 162 80 #> 188236 161 85 #> 322235 159 126 #> ______ ______ #> 322235 159 126 #> 363302 74 93 #> 188236 161 85 #> 158660 162 80 #> 326792 17 58 #> #> Closeness centrality (unnormalized): #> group '1' group '2' #> 307981 18.17276 7.80251 #> 9715 15.8134 9.27254 #> 188236 15.7949 23.24055 #> 301645 15.30177 9.01509 #> 364563 14.73566 21.21352 #> ______ ______ #> 322235 13.50232 26.36749 #> 363302 12.30297 24.19703 #> 158660 13.07106 23.31577 #> 188236 15.7949 23.24055 #> 326792 14.61391 22.52157 #> #> Eigenvector centrality (unnormalized): #> group '1' group '2' #> 307981 1 0.13142 #> 9715 0.83277 0.20513 #> 301645 0.78551 0.16298 #> 326792 0.50706 0.42082 #> 188236 0.48439 0.56626 #> ______ ______ #> 322235 0.03281 0.79487 #> 363302 0.06613 0.76293 #> 188236 0.48439 0.56626 #> 194648 0.00687 0.52039 #> 184983 0.172 0.49611"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"visual-network-comparison","dir":"Articles","previous_headings":"","what":"Visual network comparison","title":"Network comparison","text":"First, the layout is computed separately for both groups (qgraph’s “spring” layout in this case). Node sizes are scaled according to the mclr-transformed data since SPRING uses the mclr transformation as normalization method. Node colors represent the clusters. Note that by default, two clusters get the same color in both groups if they have at least two nodes in common (sameColThresh = 2). Set sameClustCol to FALSE to get different cluster colors. Using different layouts leads to a “nice-looking” network plot for each group; however, it is difficult to identify group differences at first glance. Thus, we now use the same layout in both groups. In the following, the layout is computed for group 1 (the left network) and taken over for group 2. rmSingles is set to \"inboth\" because only nodes that are unconnected in both groups can be removed if the same layout is used. In this plot, we can see clear differences between the groups. The OTU “322235”, for instance, is strongly connected in the “Seasonal allergies” group but not in the group without seasonal allergies, and is therefore a hub on the right, but not on the left. 
However, if the layout of one group is simply taken over to the other, one of the networks (here the “seasonal allergies” group) usually does not look nice due to long edges. Therefore, NetCoMi (>= 1.0.2) offers the option (layoutGroup = \"union\") to use a union of the two layouts in both groups. Thereby, the nodes are placed as equally as possible for both networks. The idea and the R code for this functionality were provided by Christian L. Müller and Alice Sommer","code":"plot(props_season, sameLayout = FALSE, nodeColor = \"cluster\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.7, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, layoutGroup = 1, rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, repulsion = 0.95, layoutGroup = \"union\", rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"quantitative-network-comparison","dir":"Articles","previous_headings":"","what":"Quantitative network comparison","title":"Network comparison","text":"Since the runtime is considerably increased when permutation tests are performed, we set the permTest parameter to FALSE. See the tutorial_createAssoPerm file for a network comparison including permutation tests. Since permutation tests are still conducted for the Adjusted Rand Index, a seed is set for reproducibility.","code":"comp_season <- netCompare(props_season, permTest = FALSE, verbose = FALSE, seed = 123456) summary(comp_season, groupNames = c(\"No allergies\", \"Allergies\"), showCentr = c(\"degree\", \"between\", \"closeness\"), numbNodes = 5) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = props_season, permTest = FALSE, verbose = FALSE, #> seed = 123456) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> No allergies Allergies difference #> Relative LCC size 0.667 0.738 0.071 #> Clustering coefficient 0.152 0.271 0.120 #> Modularity 0.626 0.458 0.168 #> Positive edge percentage 86.667 100.000 13.333 #> Edge density 0.079 0.125 0.045 #> Natural connectivity 0.045 0.044 0.002 #> Vertex connectivity 1.000 1.000 0.000 #> Edge connectivity 1.000 1.000 0.000 #> Average dissimilarity* 0.673 0.682 0.009 #> Average path length** 3.400 1.868 1.532 #> #> Whole network: #> No allergies Allergies difference #> Number of components 15.000 5.000 10.000 #> Clustering coefficient 0.152 0.298 0.146 #> Modularity 0.626 0.557 0.069 #> Positive edge percentage 86.667 100.000 13.333 #> Edge density 0.035 0.081 0.046 #> Natural connectivity 0.028 0.031 0.003 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Sum of dissimilarities along the path #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.556 0.957578 0.144846 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.231 0.322424 0.861268 #> eigenvec. centr. 
0.100 0.017593 * 0.996692 #> hub taxa 0.000 0.296296 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.232 0.355 #> p-value 0.000 0.000 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.577 1.863 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (unnormalized): #> No allergies Allergies abs.diff. #> 322235 3 9 6 #> 363302 3 9 6 #> 469709 0 4 4 #> 158660 2 6 4 #> 223059 0 4 4 #> #> Betweenness centrality (unnormalized): #> No allergies Allergies abs.diff. #> 307981 231 0 231 #> 331820 170 9 161 #> 259569 137 34 103 #> 158660 162 80 82 #> 184983 92 12 80 #> #> Closeness centrality (unnormalized): #> No allergies Allergies abs.diff. #> 469709 0 21.203 21.203 #> 541301 0 20.942 20.942 #> 181016 0 19.498 19.498 #> 361496 0 19.349 19.349 #> 223059 0 19.261 19.261 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1"},{"path":"https://netcomi.de/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Stefanie Peschel. Author, maintainer. Christian L. Müller. Contributor. Anne-Laure Boulesteix. Contributor. Erika von Mutius. Contributor. Martin Depner. 
Contributor.","code":""},{"path":"https://netcomi.de/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Stefanie Peschel, (2022). NetCoMi: Network Construction Comparison Microbiome Data. R package version 1.1.0. https://netcomi.de","code":"@Manual{, title = {{NetCoMi: Network Construction and Comparison for Microbiome Data}}, author = {Stefanie Peschel}, year = {2022}, note = {R package version 1.1.0}, url = {https://netcomi.de}, }"},{"path":"https://netcomi.de/index.html","id":"netcomi-","dir":"","previous_headings":"","what":"Network Construction and Comparison for Microbiome Data","title":"Network Construction and Comparison for Microbiome Data","text":"NetCoMi (Network Construction Comparison Microbiome Data) R package designed facilitate construction, analysis, comparison networks tailored microbial compositional data. implements comprehensive workflow introduced Peschel et al. (2020), guides users step network generation analysis strong emphasis reproducibility computational efficiency. NetCoMi, users can construct microbial association dissimilarity networks directly sequencing data, typically provided read count matrix. package includes broad selection methods handling zeros, normalizing data, computing associations microbial taxa, sparsifying resulting matrices. offering components modular format, NetCoMi allows users tailor workflow specific research needs, creating highly customizable microbial networks. package supports construction, analysis, visualization single network comparison two networks graphical quantitative approaches, including statistical testing. Additionally, NetCoMi offers capability constructing differential networks, differentially associated taxa connected. Exemplary network comparison using soil microbiome data (‘soilrep’ data phyloseq package). 
Microbial associations compared two experimental settings ‘warming’ ‘non-warming’ using layout groups.","code":""},{"path":[]},{"path":"https://netcomi.de/index.html","id":"methods-included-in-netcomi","dir":"","previous_headings":"","what":"Methods included in NetCoMi","title":"Network Construction and Comparison for Microbiome Data","text":"overview methods available network construction, together information implementation R: Association measures: Pearson coefficient (cor() stats package) Spearman coefficient (cor() stats package) Biweight Midcorrelation bicor() WGCNA package SparCC (sparcc() SpiecEasi package) CCLasso (R code GitHub) CCREPE (ccrepe package) SpiecEasi (SpiecEasi package) SPRING (SPRING package) gCoda (R code GitHub) propr (propr package) Dissimilarity measures: Euclidean distance (vegdist() vegan package) Bray-Curtis dissimilarity (vegdist() vegan package) Kullback-Leibler divergence (KLD) (KLD() LaplacesDemon package) Jeffrey divergence (code using KLD() LaplacesDemon package) Jensen-Shannon divergence (code using KLD() LaplacesDemon package) Compositional KLD (implementation following Martin-Fernández et al. (1999)) Aitchison distance (vegdist() clr() SpiecEasi package) Methods zero replacement: Add predefined pseudo count count table Replace zeros count table predefined pseudo count (ratios non-zero values preserved) Multiplicative replacement (multRepl zCompositions package) Modified EM alr-algorithm (lrEM zCompositions package) Bayesian-multiplicative replacement (cmultRepl zCompositions package) Normalization methods: Total Sum Scaling (TSS) (implementation) Cumulative Sum Scaling (CSS) (cumNormMat metagenomeSeq package) Common Sum Scaling (COM) (implementation) Rarefying (rrarefy vegan package) Variance Stabilizing Transformation (VST) (varianceStabilizingTransformation DESeq2 package) Centered log-ratio (clr) transformation (clr() SpiecEasi package) TSS, CSS, COM, VST, clr transformation described (Badri et al. 
2020).","code":""},{"path":"https://netcomi.de/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Network Construction and Comparison for Microbiome Data","text":"errors installation, please install missing dependencies manually. Packages optionally required certain settings installed together NetCoMi. can installed automatically using: installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":"# Required packages install.packages(\"devtools\") install.packages(\"BiocManager\") # Since two of NetCoMi's dependencies are only available on GitHub, it is # recommended to install them first: devtools::install_github(\"zdk123/SpiecEasi\") devtools::install_github(\"GraceYoon/SPRING\") # Install NetCoMi devtools::install_github(\"stefpeschel/NetCoMi\", dependencies = c(\"Depends\", \"Imports\", \"LinkingTo\"), repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories())) installNetCoMiPacks()"},{"path":"https://netcomi.de/index.html","id":"bioconda","dir":"","previous_headings":"","what":"Bioconda","title":"Network Construction and Comparison for Microbiome Data","text":"Thanks daydream-boost, NetCoMi can also installed conda bioconda channel ","code":"# You can install an individual environment firstly with # conda create -n NetCoMi # conda activate NetCoMi conda install -c bioconda -c conda-forge r-netcomi"},{"path":"https://netcomi.de/index.html","id":"development-version","dir":"","previous_headings":"","what":"Development version","title":"Network Construction and Comparison for Microbiome Data","text":"Everyone wants use new features included releases invited install NetCoMi’s development version: Please check NEWS document features implemented develop branch.","code":"devtools::install_github(\"stefpeschel/NetCoMi\", ref = \"develop\", dependencies = c(\"Depends\", \"Imports\", \"LinkingTo\"), repos = c(\"https://cloud.r-project.org/\", 
BiocManager::repositories()))"},{"path":[]},{"path":"https://netcomi.de/readme.html","id":null,"dir":"","previous_headings":"","what":"NetCoMi ","title":"NetCoMi ","text":"NetCoMi (Network Construction Comparison Microbiome Data) provides functionality constructing, analyzing, comparing networks suitable application microbial compositional data. R package implements workflow proposed Stefanie Peschel, Christian L Müller, Erika von Mutius, Anne-Laure Boulesteix, Martin Depner (2020). NetCoMi: network construction comparison microbiome data R. Briefings Bioinformatics, bbaa290. https://doi.org/10.1093/bib/bbaa290. NetCoMi allows users construct, analyze, compare microbial association dissimilarity networks fast reproducible manner. Starting read count matrix originating sequencing process, pipeline includes wide range existing methods treating zeros data, normalization, computing microbial associations dissimilarities, sparsifying resulting association/ dissimilarity matrix. methods can combined modular fashion generate microbial networks. NetCoMi can either used constructing, analyzing visualizing single network, comparing two networks graphical well quantitative manner, including statistical tests. package furthermore offers functionality constructing differential networks, differentially associated taxa connected. Exemplary network comparison using soil microbiome data (‘soilrep’ data phyloseq package). 
Microbial associations compared two experimental settings ‘warming’ ‘non-warming’ using layout groups.","code":""},{"path":"https://netcomi.de/readme.html","id":"table-of-contents","dir":"","previous_headings":"","what":"Table of Contents","title":"NetCoMi ","text":"Methods included NetCoMi Installation Development version Network SPRING association measure Export Gephi Network Pearson correlations “Unsigned” transformation Network genus level Association matrix input Network comparison Differential networks Dissimilarity-based Networks Soil microbiome example References","code":""},{"path":"https://netcomi.de/readme.html","id":"methods-included-in-netcomi","dir":"","previous_headings":"","what":"Methods included in NetCoMi","title":"NetCoMi ","text":"overview methods available network construction, together information implementation R: Association measures: Pearson coefficient (cor() stats package) Spearman coefficient (cor() stats package) Biweight Midcorrelation bicor() WGCNA package SparCC (sparcc() SpiecEasi package) CCLasso (R code GitHub) CCREPE (ccrepe package) SpiecEasi (SpiecEasi package) SPRING (SPRING package) gCoda (R code GitHub) propr (propr package) Dissimilarity measures: Euclidean distance (vegdist() vegan package) Bray-Curtis dissimilarity (vegdist() vegan package) Kullback-Leibler divergence (KLD) (KLD() LaplacesDemon package) Jeffrey divergence (code using KLD() LaplacesDemon package) Jensen-Shannon divergence (code using KLD() LaplacesDemon package) Compositional KLD (implementation following Martin-Fernández et al. 
(1999)) Aitchison distance (vegdist() clr() SpiecEasi package) Methods zero replacement: Add predefined pseudo count count table Replace zeros count table predefined pseudo count (ratios non-zero values preserved) Multiplicative replacement (multRepl zCompositions package) Modified EM alr-algorithm (lrEM zCompositions package) Bayesian-multiplicative replacement (cmultRepl zCompositions package) Normalization methods: Total Sum Scaling (TSS) (implementation) Cumulative Sum Scaling (CSS) (cumNormMat metagenomeSeq package) Common Sum Scaling (COM) (implementation) Rarefying (rrarefy vegan package) Variance Stabilizing Transformation (VST) (varianceStabilizingTransformation DESeq2 package) Centered log-ratio (clr) transformation (clr() SpiecEasi package) TSS, CSS, COM, VST, clr transformation described (Badri et al. 2020).","code":""},{"path":"https://netcomi.de/readme.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"NetCoMi ","text":"errors installation, please install missing dependencies manually. particular automatic installation SPRING SpiecEasi (available GitHub) sometimes work. packages can installed follows (order important SPRING depends SpiecEasi): Packages optionally required certain settings installed together NetCoMi. 
can installed automatically using: installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":"# Required packages install.packages(\"devtools\") install.packages(\"BiocManager\") # Install NetCoMi devtools::install_github(\"stefpeschel/NetCoMi\", dependencies = c(\"Depends\", \"Imports\", \"LinkingTo\"), repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories())) devtools::install_github(\"zdk123/SpiecEasi\") devtools::install_github(\"GraceYoon/SPRING\") installNetCoMiPacks() # Please check: ?installNetCoMiPacks()"},{"path":"https://netcomi.de/readme.html","id":"bioconda","dir":"","previous_headings":"Installation","what":"Bioconda","title":"NetCoMi ","text":"Thanks daydream-boost, NetCoMi can also installed conda bioconda channel ","code":"# You can install an individual environment firstly with # conda create -n NetCoMi # conda activate NetCoMi conda install -c bioconda -c conda-forge r-netcomi"},{"path":"https://netcomi.de/readme.html","id":"development-version","dir":"","previous_headings":"","what":"Development version","title":"NetCoMi ","text":"Everyone wants use new features included releases invited install NetCoMi’s development version: Please check NEWS document features implemented develop branch.","code":"devtools::install_github(\"stefpeschel/NetCoMi\", ref = \"develop\", dependencies = c(\"Depends\", \"Imports\", \"LinkingTo\"), repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories()))"},{"path":"https://netcomi.de/readme.html","id":"usage","dir":"","previous_headings":"","what":"Usage","title":"NetCoMi ","text":"use American Gut data SpiecEasi package look examples NetCoMi applied. NetCoMi’s main functions netConstruct() network construction, netAnalyze() network analysis, netCompare() network comparison. see following, three functions must executed aforementioned order. function diffnet() constructing differential association network. 
diffnet() must applied object returned netConstruct(). First , load NetCoMi data American Gut Project (provided SpiecEasi, automatically loaded together NetCoMi).","code":"library(NetCoMi) data(\"amgut1.filt\") data(\"amgut2.filt.phy\")"},{"path":[]},{"path":"https://netcomi.de/readme.html","id":"network-construction-and-analysis","dir":"","previous_headings":"Usage > Network with SPRING as association measure","what":"Network construction and analysis","title":"NetCoMi ","text":"firstly construct single association network using SPRING estimating associations (conditional dependence) OTUs. data filtered within netConstruct() follows: samples total number reads least 1000 included (argument filtSamp). 50 taxa highest frequency included (argument filtTax). measure defines association dissimilarity measure, \"spring\" case. Additional arguments passed SPRING() via measurePar. nlambda rep.num set 10 decreased execution time, higher real data. Rmethod set “approx” estimate correlations using hybrid multi-linear interpolation approach proposed Yoon, Müller, Gaynanova (2020). method considerably reduces runtime controlling approximation error. Normalization well zero handling performed internally SPRING(). Hence, set normMethod zeroMethod \"none\". furthermore set sparsMethod \"none\" SPRING returns sparse network additional sparsification step necessary. use “signed” method transforming associations dissimilarities (argument dissFunc). , strongly negatively associated taxa high dissimilarity , turn, low similarity, corresponds edge weights network plot. 
verbose argument set 3 messages generated netConstruct() well messages external functions printed.","code":"net_spring <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), measure = \"spring\", measurePar = list(nlambda=10, rep.num=10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) ## Checking input arguments ... Done. ## Data filtering ... ## 77 taxa removed. ## 50 taxa and 289 samples remaining. ## ## Calculate 'spring' associations ... Registered S3 method overwritten by 'dendextend': ## method from ## rev.hclust vegan ## Registered S3 method overwritten by 'seriation': ## method from ## reorder.hclust vegan ## Done."},{"path":"https://netcomi.de/readme.html","id":"analyzing-the-constructed-network","dir":"","previous_headings":"Usage > Network with SPRING as association measure","what":"Analyzing the constructed network","title":"NetCoMi ","text":"NetCoMi’s netAnalyze() function used analyzing constructed network(s). , centrLCC set TRUE meaning centralities calculated nodes largest connected component (LCC). Clusters identified using greedy modularity optimization (cluster_fast_greedy() igraph package). Hubs nodes eigenvector centrality value empirical 95% quantile eigenvector centralities network (argument hubPar). weightDeg normDeg set FALSE degree node simply defined number nodes adjacent node. default, heatmap Graphlet Correlation Matrix (GCM) returned (graphlet correlations upper triangle significance codes resulting Student’s t-test lower triangle). 
See ?calcGCM ?testGCM details.","code":"props_spring <- netAnalyze(net_spring, centrLCC = TRUE, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\", weightDeg = FALSE, normDeg = FALSE) #?summary.microNetProps summary(props_spring, numbNodes = 5L) ## ## Component sizes ## ``````````````` ## size: 48 1 ## #: 1 2 ## ______________________________ ## Global network properties ## ````````````````````````` ## Largest connected component (LCC): ## ## Relative LCC size 0.96000 ## Clustering coefficient 0.33594 ## Modularity 0.53407 ## Positive edge percentage 88.34951 ## Edge density 0.09131 ## Natural connectivity 0.02855 ## Vertex connectivity 1.00000 ## Edge connectivity 1.00000 ## Average dissimilarity* 0.97035 ## Average path length** 2.36912 ## ## Whole network: ## ## Number of components 3.00000 ## Clustering coefficient 0.33594 ## Modularity 0.53407 ## Positive edge percentage 88.34951 ## Edge density 0.08408 ## Natural connectivity 0.02714 ## ----- ## *: Dissimilarity = 1 - edge weight ## **: Path length = Units with average dissimilarity ## ## ______________________________ ## Clusters ## - In the whole network ## - Algorithm: cluster_fast_greedy ## ```````````````````````````````` ## ## name: 0 1 2 3 4 5 ## #: 2 12 17 10 5 4 ## ## ______________________________ ## Hubs ## - In alphabetical/numerical order ## - Based on empirical quantiles of centralities ## ``````````````````````````````````````````````` ## 190597 ## 288134 ## 311477 ## ## ______________________________ ## Centrality measures ## - In decreasing order ## - Centrality of disconnected components is zero ## ```````````````````````````````````````````````` ## Degree (unnormalized): ## ## 288134 10 ## 190597 9 ## 311477 9 ## 188236 8 ## 199487 8 ## ## Betweenness centrality (normalized): ## ## 302160 0.31360 ## 268332 0.24144 ## 259569 0.23404 ## 470973 0.21462 ## 119010 0.19611 ## ## Closeness centrality (normalized): ## ## 288134 0.68426 ## 311477 0.68413 ## 199487 0.68099 ## 302160 
0.67518 ## 188236 0.66852 ## ## Eigenvector centrality (normalized): ## ## 288134 1.00000 ## 311477 0.94417 ## 190597 0.90794 ## 199487 0.85439 ## 188236 0.72684"},{"path":"https://netcomi.de/readme.html","id":"plotting-the-gcm-heatmap-manually","dir":"","previous_headings":"Usage > Network with SPRING as association measure","what":"Plotting the GCM heatmap manually","title":"NetCoMi ","text":"","code":"plotHeat(mat = props_spring$graphletLCC$gcm1, pmat = props_spring$graphletLCC$pAdjust1, type = \"mixed\", title = \"GCM\", colorLim = c(-1, 1), mar = c(2, 0, 2, 0)) # Add rectangles highlighting the four types of orbits graphics::rect(xleft = c( 0.5, 1.5, 4.5, 7.5), ybottom = c(11.5, 7.5, 4.5, 0.5), xright = c( 1.5, 4.5, 7.5, 11.5), ytop = c(10.5, 10.5, 7.5, 4.5), lwd = 2, xpd = NA) text(6, -0.2, xpd = NA, \"Significance codes: ***: 0.001; **: 0.01; *: 0.05\")"},{"path":"https://netcomi.de/readme.html","id":"visualizing-the-network","dir":"","previous_headings":"Usage > Network with SPRING as association measure","what":"Visualizing the network","title":"NetCoMi ","text":"use determined clusters node colors scale node sizes according node’s eigenvector centrality. Note edge weights (non-negative) similarities, however, edges belonging negative estimated associations colored red default (negDiffCol = TRUE). default, different transparency value added edges absolute weight cut value (arguments edgeTranspLow edgeTranspHigh). 
determined cut value can read follows:","code":"# help page ?plot.microNetProps p <- plot(props_spring, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with SPRING associations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated association:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) p$q1$Arguments$cut ## 75% ## 0.337099"},{"path":"https://netcomi.de/readme.html","id":"export-to-gephi","dir":"","previous_headings":"Usage","what":"Export to Gephi","title":"NetCoMi ","text":"users may interested export network Gephi. ’s example: exported .csv files can imported Gephi.","code":"# For Gephi, we have to generate an edge list with IDs. # The corresponding labels (and also further node features) are stored as node list. # Create edge object from the edge list exported by netConstruct() edges <- dplyr::select(net_spring$edgelist1, v1, v2) # Add Source and Target variables (as IDs) edges$Source <- as.numeric(factor(edges$v1)) edges$Target <- as.numeric(factor(edges$v2)) edges$Type <- \"Undirected\" edges$Weight <- net_spring$edgelist1$adja nodes <- unique(edges[,c('v1','Source')]) colnames(nodes) <- c(\"Label\", \"Id\") # Add category with clusters (can be used as node colors in Gephi) nodes$Category <- props_spring$clustering$clust1[nodes$Label] edges <- dplyr::select(edges, Source, Target, Type, Weight) write.csv(nodes, file = \"nodes.csv\", row.names = FALSE) write.csv(edges, file = \"edges.csv\", row.names = FALSE)"},{"path":"https://netcomi.de/readme.html","id":"network-with-pearson-correlation-as-association-measure","dir":"","previous_headings":"Usage","what":"Network with Pearson correlation as association measure","title":"NetCoMi ","text":"Let’s construct another network using Pearson’s correlation coefficient association measure. input now phyloseq object. 
Since Pearson correlations may lead compositional effects applied sequencing data, use clr transformation normalization method. Zero treatment necessary case. threshold 0.3 used sparsification method, OTUs absolute correlation greater equal 0.3 connected. Network analysis plotting: Let’s improve visualization changing following arguments: repulsion = 0.8: Place nodes apart. rmSingles = TRUE: Single nodes removed. labelScale = FALSE cexLabels = 1.6: labels equal size enlarged improve readability small node’s labels. nodeSizeSpread = 3 (default 4): Node sizes similar value decreased. argument (combination cexNodes) useful enlarge small nodes keeping size big nodes. hubBorderCol = \"darkgray\": Change border color better readability node labels.","code":"net_pears <- netConstruct(amgut2.filt.phy, measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"multRepl\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) ## Checking input arguments ... Done. ## 2 rows with zero sum removed. ## 138 taxa and 294 samples remaining. ## ## Zero treatment: ## Execute multRepl() ... Done. ## ## Normalization: ## Execute clr(){SpiecEasi} ... Done. ## ## Calculate 'pearson' associations ... Done. ## ## Sparsify associations via 'threshold' ... Done. 
props_pears <- netAnalyze(net_pears, clustMethod = \"cluster_fast_greedy\") plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"edge-filtering","dir":"","previous_headings":"Usage > Network with Pearson correlation as association measure","what":"Edge filtering","title":"NetCoMi ","text":"network can sparsified using arguments edgeFilter (edges filtered layout computed) edgeInvisFilter (edges removed layout computed thus just made “invisible”).","code":"plot(props_pears, edgeInvisFilter = \"threshold\", edgeInvisPar = 0.4, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = paste0(\"Network on OTU level with Pearson correlations\", \"\\n(edge filter: threshold = 0.4)\"), showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"using-the-unsigned-transformation","dir":"","previous_headings":"Usage","what":"Using the “unsigned” transformation","title":"NetCoMi 
","text":"network, “signed” transformation used transform estimated associations dissimilarities. leads network strongly positive correlated taxa high edge weight (1 correlation equals 1) strongly negative correlated taxa low edge weight (0 correlation equals -1). now use “unsigned” transformation edge weight strongly correlated taxa high, matter sign. Hence, correlation -1 1 lead edge weight 1.","code":""},{"path":"https://netcomi.de/readme.html","id":"network-construction","dir":"","previous_headings":"Usage > Using the “unsigned” transformation","what":"Network construction","title":"NetCoMi ","text":"can pass network object netConstruct() save runtime.","code":"net_pears_unsigned <- netConstruct(data = net_pears$assoEst1, dataType = \"correlation\", sparsMethod = \"threshold\", thresh = 0.3, dissFunc = \"unsigned\", verbose = 3) ## Checking input arguments ... Done. ## ## Sparsify associations via 'threshold' ... Done."},{"path":"https://netcomi.de/readme.html","id":"estimated-correlations-and-adjacency-values","dir":"","previous_headings":"Usage > Using the “unsigned” transformation","what":"Estimated correlations and adjacency values","title":"NetCoMi ","text":"following histograms demonstrate estimated correlations transformed adjacencies (= sparsified similarities weighted networks). 
Sparsified estimated correlations: Adjacency values computed using “signed” transformation (values different 0 1 edges network): Adjacency values computed using “unsigned” transformation:","code":"hist(net_pears$assoMat1, 100, xlim = c(-1, 1), ylim = c(0, 400), xlab = \"Estimated correlation\", main = \"Estimated correlations after sparsification\") hist(net_pears$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"signed\\\" transformation)\") hist(net_pears_unsigned$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"unsigned\\\" transformation)\")"},{"path":"https://netcomi.de/readme.html","id":"network-analysis-and-plotting","dir":"","previous_headings":"Usage > Using the “unsigned” transformation","what":"Network analysis and plotting","title":"NetCoMi ","text":"“signed” transformation, positive correlated taxa likely belong cluster, “unsigned” transformation clusters contain strongly positive negative correlated taxa.","code":"props_pears_unsigned <- netAnalyze(net_pears_unsigned, clustMethod = \"cluster_fast_greedy\", gcmHeat = FALSE) plot(props_pears_unsigned, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.9, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network with Pearson correlations and \\\"unsigned\\\" transformation\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"network-on-genus-level","dir":"","previous_headings":"Usage","what":"Network on genus level","title":"NetCoMi ","text":"now construct network, OTUs agglomerated genera.","code":"library(phyloseq) data(\"amgut2.filt.phy\") # Agglomerate to genus level amgut_genus <- tax_glom(amgut2.filt.phy, taxrank = \"Rank6\") 
# Taxonomic table taxtab <- as(tax_table(amgut_genus), \"matrix\") # Rename taxonomic table and make Rank6 (genus) unique amgut_genus_renamed <- renameTaxa(amgut_genus, pat = \"\", substPat = \"_()\", numDupli = \"Rank6\") ## Column 7 contains NAs only and is ignored. # Network construction and analysis net_genus <- netConstruct(amgut_genus_renamed, taxRank = \"Rank6\", measure = \"pearson\", zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) ## Checking input arguments ... ## Done. ## 2 rows with zero sum removed. ## 43 taxa and 294 samples remaining. ## ## Zero treatment: ## Execute multRepl() ... Done. ## ## Normalization: ## Execute clr(){SpiecEasi} ... Done. ## ## Calculate 'pearson' associations ... Done. ## ## Sparsify associations via 'threshold' ... Done. props_genus <- netAnalyze(net_genus, clustMethod = \"cluster_fast_greedy\")"},{"path":"https://netcomi.de/readme.html","id":"network-plots","dir":"","previous_headings":"Usage > Network on genus level","what":"Network plots","title":"NetCoMi ","text":"Modifications: Fruchterman-Reingold layout algorithm igraph package used (passed plot matrix) Shortened labels (using “intelligent” method, avoids duplicates) Fixed node sizes, hubs enlarged Node color gray nodes (transparency lower hub nodes default) Since visualization obviously optimal, make adjustments: time, Fruchterman-Reingold layout algorithm computed within plot function thus applied “reduced” network without singletons Labels scaled node sizes Single nodes removed Node sizes scaled column sums clr-transformed data Node colors represent determined clusters Border color hub nodes changed black darkgray Label size hubs enlarged Let’s check whether largest nodes actually highest column sums matrix normalized counts returned netConstruct(). 
order improve plot, use following modifications: time, choose “spring” layout part qgraph() (function generally used network plotting NetCoMi) repulsion value 1 places nodes apart Labels shortened anymore Nodes (bacteria genus level) colored according respective phylum Edges representing positive associations colored blue, negative ones orange (just give example alternative edge coloring) Transparency increased edges high weight improve readability node labels","code":"# Compute layout graph3 <- igraph::graph_from_adjacency_matrix(net_genus$adjaMat1, weighted = TRUE) set.seed(123456) lay_fr <- igraph::layout_with_fr(graph3) # Row names of the layout matrix must match the node names rownames(lay_fr) <- rownames(net_genus$adjaMat1) plot(props_genus, layout = lay_fr, shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), nodeSize = \"fix\", nodeColor = \"gray\", cexNodes = 0.8, cexHubs = 1.1, cexLabels = 1.2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) set.seed(123456) plot(props_genus, layout = \"layout_with_fr\", shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeColor = \"cluster\", hubBorderCol = \"darkgray\", cexNodes = 2, cexLabels = 1.5, cexHubLabels = 2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) sort(colSums(net_genus$normCounts1), decreasing = TRUE)[1:10] ## Bacteroides Klebsiella Faecalibacterium ## 1200.7971 1137.4928 708.0877 ## 5_Clostridiales(O) 2_Ruminococcaceae(F) 3_Lachnospiraceae(F) ## 
549.2647 502.1889 493.7558 ## 6_Enterobacteriaceae(F) Roseburia Parabacteroides ## 363.3841 333.8737 328.0495 ## Coprococcus ## 274.4082 # Get phyla names taxtab <- as(tax_table(amgut_genus_renamed), \"matrix\") phyla <- as.factor(gsub(\"p__\", \"\", taxtab[, \"Rank2\"])) names(phyla) <- taxtab[, \"Rank6\"] #table(phyla) # Define phylum colors phylcol <- c(\"cyan\", \"blue3\", \"red\", \"lawngreen\", \"yellow\", \"deeppink\") plot(props_genus, layout = \"spring\", repulsion = 0.84, shortenLabels = \"none\", charToRm = \"g__\", labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeSizeSpread = 4, nodeColor = \"feature\", featVecCol = phyla, colorVec = phylcol, posCol = \"darkturquoise\", negCol = \"orange\", edgeTranspLow = 0, edgeTranspHigh = 40, cexNodes = 2, cexLabels = 2, cexHubLabels = 2.5, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) # Colors used in the legend should be equally transparent as in the plot phylcol_transp <- colToTransp(phylcol, 60) legend(-1.2, 1.2, cex = 2, pt.cex = 2.5, title = \"Phylum:\", legend=levels(phyla), col = phylcol_transp, bty = \"n\", pch = 16) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"darkturquoise\",\"orange\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"using-an-association-matrix-as-input","dir":"","previous_headings":"Usage","what":"Using an association matrix as input","title":"NetCoMi ","text":"QMP data set provided SPRING package used demonstrate NetCoMi used analyze precomputed network (given association matrix). data set contains quantitative count data (true absolute values), SPRING can deal . See ?QMP details. nlambda rep.num set 10 decreased execution time, higher real data. association matrix now passed netConstruct start usual NetCoMi workflow. 
Note dataType argument must set appropriately.","code":"library(SPRING) # Load the QMP data set data(\"QMP\") # Run SPRING for association estimation fit_spring <- SPRING(QMP, quantitative = TRUE, lambdaseq = \"data-specific\", nlambda = 10, rep.num = 10, seed = 123456, ncores = 1, Rmethod = \"approx\", verbose = FALSE) # Optimal lambda opt.K <- fit_spring$output$stars$opt.index # Association matrix assoMat <- as.matrix(SpiecEasi::symBeta(fit_spring$output$est$beta[[opt.K]], mode = \"ave\")) rownames(assoMat) <- colnames(assoMat) <- colnames(QMP) # Network construction and analysis net_asso <- netConstruct(data = assoMat, dataType = \"condDependence\", sparsMethod = \"none\", verbose = 0) props_asso <- netAnalyze(net_asso, clustMethod = \"hierarchical\") plot(props_asso, layout = \"spring\", repulsion = 1.2, shortenLabels = \"none\", labelScale = TRUE, rmSingles = TRUE, nodeSize = \"eigenvector\", nodeSizeSpread = 2, nodeColor = \"cluster\", hubBorderCol = \"gray60\", cexNodes = 1.8, cexLabels = 2, cexHubLabels = 2.2, title1 = \"Network for QMP data\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated association:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"network-comparison","dir":"","previous_headings":"Usage","what":"Network comparison","title":"NetCoMi ","text":"Now let’s look NetCoMi used compare two networks.","code":""},{"path":"https://netcomi.de/readme.html","id":"network-construction-1","dir":"","previous_headings":"Usage > Network comparison","what":"Network construction","title":"NetCoMi ","text":"data set split \"SEASONAL_ALLERGIES\" leading two subsets samples (with and without seasonal allergies). ignore “None” group. 50 nodes highest variance selected network construction get smaller networks. filter 121 samples (sample size smaller group) highest frequency make sample sizes equal thus ensure comparability. 
Alternatively, group vector passed group, according data set split two groups:","code":"# Split the phyloseq object into two groups amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") amgut_season_yes ## phyloseq-class experiment-level object ## otu_table() OTU Table: [ 138 taxa and 121 samples ] ## sample_data() Sample Data: [ 121 samples by 166 sample variables ] ## tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] amgut_season_no ## phyloseq-class experiment-level object ## otu_table() OTU Table: [ 138 taxa and 163 samples ] ## sample_data() Sample Data: [ 163 samples by 166 sample variables ] ## tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] n_yes <- phyloseq::nsamples(amgut_season_yes) # Network construction net_season <- netConstruct(data = amgut_season_no, data2 = amgut_season_yes, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda = 10, rep.num = 10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) ## Checking input arguments ... Done. ## Data filtering ... ## 42 samples removed in data set 1. ## 0 samples removed in data set 2. ## 96 taxa removed in each data set. ## 1 rows with zero sum removed in group 2. ## 42 taxa and 121 samples remaining in group 1. ## 42 taxa and 120 samples remaining in group 2. ## ## Calculate 'spring' associations ... Done. ## ## Calculate associations in group 2 ... Done. 
# Get count table countMat <- phyloseq::otu_table(amgut2.filt.phy) # netConstruct() expects samples in rows countMat <- t(as(countMat, \"matrix\")) group_vec <- phyloseq::get_variable(amgut2.filt.phy, \"SEASONAL_ALLERGIES\") # Select the two groups of interest (level \"none\" is excluded) sel <- which(group_vec %in% c(\"no\", \"yes\")) group_vec <- group_vec[sel] countMat <- countMat[sel, ] net_season <- netConstruct(countMat, group = group_vec, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda=10, rep.num=10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 3, seed = 123456)"},{"path":"https://netcomi.de/readme.html","id":"network-analysis","dir":"","previous_headings":"Usage > Network comparison","what":"Network analysis","title":"NetCoMi ","text":"object returned netConstruct() containing networks passed netAnalyze(). Network properties computed networks simultaneously. demonstrate functionalities netAnalyze(), play around available arguments, even chosen setting might optimal. centrLCC = FALSE: Centralities calculated nodes (largest connected component). avDissIgnoreInf = TRUE: Nodes infinite dissimilarity ignored calculating average dissimilarity. sPathNorm = FALSE: Shortest paths normalized average dissimilarity. hubPar = c(\"degree\", \"eigenvector\"): Hubs nodes highest degree eigenvector centrality time. lnormFit = TRUE hubQuant = 0.9: log-normal distribution fitted centrality values identify nodes “highest” centrality values. , node identified hub three centrality measures, node’s centrality value 90% quantile fitted log-normal distribution. non-normalized centralities used four measures. Note! arguments must set carefully, depending research questions. 
NetCoMi’s default values generally preferable practical cases!","code":"props_season <- netAnalyze(net_season, centrLCC = FALSE, avDissIgnoreInf = TRUE, sPathNorm = FALSE, clustMethod = \"cluster_fast_greedy\", hubPar = c(\"degree\", \"eigenvector\"), hubQuant = 0.9, lnormFit = TRUE, normDeg = FALSE, normBetw = FALSE, normClose = FALSE, normEigen = FALSE) summary(props_season) ## ## Component sizes ## ``````````````` ## group '1': ## size: 28 1 ## #: 1 14 ## group '2': ## size: 31 8 1 ## #: 1 1 3 ## ______________________________ ## Global network properties ## ````````````````````````` ## Largest connected component (LCC): ## group '1' group '2' ## Relative LCC size 0.66667 0.73810 ## Clustering coefficient 0.15161 0.27111 ## Modularity 0.62611 0.45823 ## Positive edge percentage 86.66667 100.00000 ## Edge density 0.07937 0.12473 ## Natural connectivity 0.04539 0.04362 ## Vertex connectivity 1.00000 1.00000 ## Edge connectivity 1.00000 1.00000 ## Average dissimilarity* 0.67251 0.68178 ## Average path length** 3.40008 1.86767 ## ## Whole network: ## group '1' group '2' ## Number of components 15.00000 5.00000 ## Clustering coefficient 0.15161 0.29755 ## Modularity 0.62611 0.55684 ## Positive edge percentage 86.66667 100.00000 ## Edge density 0.03484 0.08130 ## Natural connectivity 0.02826 0.03111 ## ----- ## *: Dissimilarity = 1 - edge weight ## **: Path length = Sum of dissimilarities along the path ## ## ______________________________ ## Clusters ## - In the whole network ## - Algorithm: cluster_fast_greedy ## ```````````````````````````````` ## group '1': ## name: 0 1 2 3 4 5 ## #: 14 7 6 5 4 6 ## ## group '2': ## name: 0 1 2 3 4 5 ## #: 3 5 14 4 8 8 ## ## ______________________________ ## Hubs ## - In alphabetical/numerical order ## - Based on log-normal quantiles of centralities ## ``````````````````````````````````````````````` ## group '1' group '2' ## 307981 322235 ## 363302 ## ## ______________________________ ## Centrality measures ## - In decreasing 
order ## - Computed for the complete network ## ```````````````````````````````````` ## Degree (unnormalized): ## group '1' group '2' ## 307981 5 2 ## 9715 5 5 ## 364563 4 4 ## 259569 4 5 ## 322235 3 9 ## ______ ______ ## 322235 3 9 ## 363302 3 9 ## 158660 2 6 ## 188236 3 5 ## 259569 4 5 ## ## Betweenness centrality (unnormalized): ## group '1' group '2' ## 307981 231 0 ## 331820 170 9 ## 158660 162 80 ## 188236 161 85 ## 322235 159 126 ## ______ ______ ## 322235 159 126 ## 363302 74 93 ## 188236 161 85 ## 158660 162 80 ## 326792 17 58 ## ## Closeness centrality (unnormalized): ## group '1' group '2' ## 307981 18.17276 7.80251 ## 9715 15.8134 9.27254 ## 188236 15.7949 23.24055 ## 301645 15.30177 9.01509 ## 364563 14.73566 21.21352 ## ______ ______ ## 322235 13.50232 26.36749 ## 363302 12.30297 24.19703 ## 158660 13.07106 23.31577 ## 188236 15.7949 23.24055 ## 326792 14.61391 22.52157 ## ## Eigenvector centrality (unnormalized): ## group '1' group '2' ## 307981 0.53313 0.06912 ## 9715 0.44398 0.10788 ## 301645 0.41878 0.08572 ## 326792 0.27033 0.15727 ## 188236 0.25824 0.21162 ## ______ ______ ## 322235 0.01749 0.29705 ## 363302 0.03526 0.28512 ## 188236 0.25824 0.21162 ## 194648 0.00366 0.19448 ## 184983 0.0917 0.1854"},{"path":"https://netcomi.de/readme.html","id":"visual-network-comparison","dir":"","previous_headings":"Usage > Network comparison","what":"Visual network comparison","title":"NetCoMi ","text":"First, layout computed separately groups (qgraph’s “spring” layout case). Node sizes scaled according mclr-transformed data since SPRING uses mclr transformation normalization method. Node colors represent clusters. Note default, two clusters color groups least two nodes common (sameColThresh = 2). Set sameClustCol FALSE get different cluster colors. Using different layouts leads “nice-looking” network plot group, however, difficult identify group differences first glance. Thus, now use layout groups. 
following, layout computed group 1 (left network) taken group 2. rmSingles set \"inboth\" nodes unconnected groups can removed layout used. plot, can see clear differences groups. OTU “322235”, instance, strongly connected “Seasonal allergies” group group without seasonal allergies, hub right, not left. However, layout one group simply taken over, one networks (“seasonal allergies” group) usually not nice-looking due long edges. Therefore, NetCoMi (>= 1.0.2) offers option (layoutGroup = \"union\"), union two layouts used groups. , nodes placed optimal possible equally networks. idea R code functionality provided Christian L. Müller Alice Sommer","code":"plot(props_season, sameLayout = FALSE, nodeColor = \"cluster\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.7, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, layoutGroup = 1, rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, repulsion = 0.95, layoutGroup = \"union\", rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 
1, lwd = 4, bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/readme.html","id":"quantitative-network-comparison","dir":"","previous_headings":"Usage > Network comparison","what":"Quantitative network comparison","title":"NetCoMi ","text":"Since runtime considerably increased permutation tests performed, set permTest parameter FALSE. See tutorial_createAssoPerm file network comparison including permutation tests. Since permutation tests still conducted Adjusted Rand Index, seed set reproducibility.","code":"comp_season <- netCompare(props_season, permTest = FALSE, verbose = FALSE, seed = 123456) summary(comp_season, groupNames = c(\"No allergies\", \"Allergies\"), showCentr = c(\"degree\", \"between\", \"closeness\"), numbNodes = 5) ## ## Comparison of Network Properties ## ---------------------------------- ## CALL: ## netCompare(x = props_season, permTest = FALSE, verbose = FALSE, ## seed = 123456) ## ## ______________________________ ## Global network properties ## ````````````````````````` ## Largest connected component (LCC): ## No allergies Allergies difference ## Relative LCC size 0.667 0.738 0.071 ## Clustering coefficient 0.152 0.271 0.120 ## Modularity 0.626 0.458 0.168 ## Positive edge percentage 86.667 100.000 13.333 ## Edge density 0.079 0.125 0.045 ## Natural connectivity 0.045 0.044 0.002 ## Vertex connectivity 1.000 1.000 0.000 ## Edge connectivity 1.000 1.000 0.000 ## Average dissimilarity* 0.673 0.682 0.009 ## Average path length** 3.400 1.868 1.532 ## ## Whole network: ## No allergies Allergies difference ## Number of components 15.000 5.000 10.000 ## Clustering coefficient 0.152 0.298 0.146 ## Modularity 0.626 0.557 0.069 ## Positive edge percentage 86.667 100.000 13.333 ## Edge density 0.035 0.081 0.046 ## Natural connectivity 0.028 0.031 0.003 ## ----- ## *: Dissimilarity = 1 - edge weight ## **: Path length = Sum of dissimilarities along the path ## ## ______________________________ ## Jaccard index (similarity betw. 
sets of most central nodes) ## ``````````````````````````````````````````````````````````` ## Jacc P(<=Jacc) P(>=Jacc) ## degree 0.556 0.957578 0.144846 ## betweenness centr. 0.333 0.650307 0.622822 ## closeness centr. 0.231 0.322424 0.861268 ## eigenvec. centr. 0.100 0.017593 * 0.996692 ## hub taxa 0.000 0.296296 1.000000 ## ----- ## Jaccard index in [0,1] (1 indicates perfect agreement) ## ## ______________________________ ## Adjusted Rand index (similarity betw. clusterings) ## `````````````````````````````````````````````````` ## wholeNet LCC ## ARI 0.232 0.355 ## p-value 0.000 0.000 ## ----- ## ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings ## ARI=0: expected for two random clusterings ## p-value: permutation test (n=1000) with null hypothesis ARI=0 ## ## ______________________________ ## Graphlet Correlation Distance ## ````````````````````````````` ## wholeNet LCC ## GCD 1.577 1.863 ## ----- ## GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) ## ## ______________________________ ## Centrality measures ## - In decreasing order ## - Computed for the whole network ## ```````````````````````````````````` ## Degree (unnormalized): ## No allergies Allergies abs.diff. ## 322235 3 9 6 ## 363302 3 9 6 ## 469709 0 4 4 ## 158660 2 6 4 ## 223059 0 4 4 ## ## Betweenness centrality (unnormalized): ## No allergies Allergies abs.diff. ## 307981 231 0 231 ## 331820 170 9 161 ## 259569 137 34 103 ## 158660 162 80 82 ## 184983 92 12 80 ## ## Closeness centrality (unnormalized): ## No allergies Allergies abs.diff. 
## 469709 0 21.203 21.203 ## 541301 0 20.942 20.942 ## 181016 0 19.498 19.498 ## 361496 0 19.349 19.349 ## 223059 0 19.261 19.261 ## ## _________________________________________________________ ## Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1"},{"path":"https://netcomi.de/readme.html","id":"differential-networks","dir":"","previous_headings":"Usage","what":"Differential networks","title":"NetCoMi ","text":"now build differential association network, two nodes connected differentially associated two groups. Due short execution time, use Pearson’s correlations estimating associations OTUs. Fisher’s z-test applied identifying differentially correlated OTUs. Multiple testing adjustment done controlling local false discovery rate. Note: sparsMethod set \"none\", just able include differential associations association network plot (see ). However, differential network always based estimated association matrices sparsification (assoEst1 assoEst2 matrices returned netConstruct()). differential network shown , edge colors represent direction associations two groups. , instance, two OTUs positively associated group 1 negatively associated group 2 (‘191541’ ‘188236’), respective edge colored cyan. also take look corresponding associations constructing association networks include differentially associated OTUs. can see correlation aforementioned OTUs ‘191541’ ‘188236’ strongly positive left group negative right group.","code":"net_season_pears <- netConstruct(data = amgut_season_no, data2 = amgut_season_yes, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), measure = \"pearson\", normMethod = \"clr\", sparsMethod = \"none\", thresh = 0.2, verbose = 3) ## Checking input arguments ... Done. ## Infos about changed arguments: ## Zero replacement needed for clr transformation. \"multRepl\" used. ## ## Data filtering ... ## 95 taxa removed in each data set. ## 1 rows with zero sum removed in group 1. ## 1 rows with zero sum removed in group 2. 
## 43 taxa and 162 samples remaining in group 1. ## 43 taxa and 120 samples remaining in group 2. ## ## Zero treatment in group 1: ## Execute multRepl() ... Done. ## ## Zero treatment in group 2: ## Execute multRepl() ... Done. ## ## Normalization in group 1: ## Execute clr(){SpiecEasi} ... Done. ## ## Normalization in group 2: ## Execute clr(){SpiecEasi} ... Done. ## ## Calculate 'pearson' associations ... Done. ## ## Calculate associations in group 2 ... Done. # Differential network construction diff_season <- diffnet(net_season_pears, diffMethod = \"fisherTest\", adjust = \"lfdr\") ## Checking input arguments ... ## Done. ## Adjust for multiple testing using 'lfdr' ... ## Execute fdrtool() ... ## Step 1... determine cutoff point ## Step 2... estimate parameters of null distribution and eta0 ## Step 3... compute p-values and estimate empirical PDF/CDF ## Step 4... compute q-values and local fdr ## Done. # Differential network plot plot(diff_season, cexNodes = 0.8, cexLegend = 3, cexTitle = 4, mar = c(2,2,8,5), legendGroupnames = c(\"group 'no'\", \"group 'yes'\"), legendPos = c(0.7,1.6)) props_season_pears <- netAnalyze(net_season_pears, clustMethod = \"cluster_fast_greedy\", weightDeg = TRUE, normDeg = FALSE, gcmHeat = FALSE) # Identify the differentially associated OTUs diffmat_sums <- rowSums(diff_season$diffAdjustMat) diff_asso_names <- names(diffmat_sums[diffmat_sums > 0]) plot(props_season_pears, nodeFilter = \"names\", nodeFilterPar = diff_asso_names, nodeColor = \"gray\", highlightHubs = FALSE, sameLayout = TRUE, layoutGroup = \"union\", rmSingles = FALSE, nodeSize = \"clr\", edgeTranspHigh = 20, labelScale = FALSE, cexNodes = 1.5, cexLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(-0.15,-0.7, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.05, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = 
TRUE)"},{"path":"https://netcomi.de/readme.html","id":"dissimilarity-based-networks","dir":"","previous_headings":"Usage","what":"Dissimilarity-based Networks","title":"NetCoMi ","text":"dissimilarity measure used network construction, nodes subjects instead OTUs. estimated dissimilarities transformed similarities, used edge weights subjects similar microbial composition placed close together network plot. construct single network using Aitchison’s distance suitable application compositional data. Since Aitchison distance based clr-transformation, zeros data need replaced. network sparsified using k-nearest neighbor (knn) algorithm. cluster detection, use hierarchical clustering average linkage. Internally, k=3 passed cutree() stats package tree cut 3 clusters. dissimilarity-based network, hubs interpreted samples microbial composition similar many samples data set.","code":"net_diss <- netConstruct(amgut1.filt, measure = \"aitchison\", zeroMethod = \"multRepl\", sparsMethod = \"knn\", kNeighbor = 3, verbose = 3) ## Checking input arguments ... Done. ## Infos about changed arguments: ## Counts normalized to fractions for measure \"aitchison\". ## ## 127 taxa and 289 samples remaining. ## ## Zero treatment: ## Execute multRepl() ... Done. ## ## Normalization: ## Counts normalized by total sum scaling. ## ## Calculate 'aitchison' dissimilarities ... Done. ## ## Sparsify dissimilarities via 'knn' ... Registered S3 methods overwritten by 'proxy': ## method from ## print.registry_field registry ## print.registry_entry registry ## Done. 
props_diss <- netAnalyze(net_diss, clustMethod = \"hierarchical\", clustPar = list(method = \"average\", k = 3), hubPar = \"eigenvector\") plot(props_diss, nodeColor = \"cluster\", nodeSize = \"eigenvector\", hubTransp = 40, edgeTranspLow = 60, charToRm = \"00000\", shortenLabels = \"simple\", labelLength = 6, mar = c(1, 3, 3, 5)) # get green color with 50% transparency green2 <- colToTransp(\"#009900\", 40) legend(0.4, 1.1, cex = 2.2, legend = c(\"high similarity (low Aitchison distance)\", \"low similarity (high Aitchison distance)\"), lty = 1, lwd = c(3, 1), col = c(\"darkgreen\", green2), bty = \"n\")"},{"path":"https://netcomi.de/readme.html","id":"soil-microbiome-example","dir":"","previous_headings":"Usage","what":"Soil microbiome example","title":"NetCoMi ","text":"code reproducing network plot shown beginning.","code":"data(\"soilrep\") soil_warm_yes <- phyloseq::subset_samples(soilrep, warmed == \"yes\") soil_warm_no <- phyloseq::subset_samples(soilrep, warmed == \"no\") net_seas_p <- netConstruct(soil_warm_yes, soil_warm_no, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 500), zeroMethod = \"pseudo\", normMethod = \"clr\", measure = \"pearson\", verbose = 0) netprops1 <- netAnalyze(net_seas_p, clustMethod = \"cluster_fast_greedy\") nclust <- as.numeric(max(names(table(netprops1$clustering$clust1)))) col <- c(topo.colors(nclust), rainbow(6)) plot(netprops1, sameLayout = TRUE, layoutGroup = \"union\", colorVec = col, borderCol = \"gray40\", nodeSize = \"degree\", cexNodes = 0.9, nodeSizeSpread = 3, edgeTranspLow = 80, edgeTranspHigh = 50, groupNames = c(\"Warming\", \"Non-warming\"), showTitle = TRUE, cexTitle = 2.8, mar = c(1,1,3,1), repulsion = 0.9, labels = FALSE, rmSingles = \"inboth\", nodeFilter = \"clustMin\", nodeFilterPar = 10, nodeTransp = 50, hubTransp = 30)"},{"path":[]},{"path":"https://netcomi.de/reference/NetCoMi-package.html","id":null,"dir":"Reference","previous_headings":"","what":"NetCoMi: Network Comparison for Microbial 
Compositional Data — NetCoMi-package","title":"NetCoMi: Network Comparison for Microbial Compositional Data — NetCoMi-package","text":"NetCoMi offers functions constructing, analyzing, comparing microbial association networks well dissimilarity-based networks compositional data. also includes function constructing differential association networks. main functions netConstruct, netAnalyze, netCompare, diffnet","code":""},{"path":[]},{"path":"https://netcomi.de/reference/NetCoMi-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"NetCoMi: Network Comparison for Microbial Compositional Data — NetCoMi-package","text":"Maintainer: Stefanie Peschel stefanie.peschel@mail.de Acknowledgments: Anne-Laure Boulesteix, Christian L. Müller, Martin Depner (inspiration theoretical background) Anastasiia Holovchak (package testing editing)","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":null,"dir":"Reference","previous_headings":"","what":"Graphlet Correlation Distance (GCD) — calcGCD","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"Computes Graphlet Correlation Distance (GCD) - graphlet-based distance measure - two networks. Following Yaveroglu et al. (2014), GCD defined Euclidean distance upper triangle values Graphlet Correlation Matrices (GCM) two networks, defined adjacency matrices. GCM network matrix Spearman's correlations network's node orbits (Hocevar Demsar, 2016). function considers orbits graphlets four nodes. Orbit counts determined using function count4 orca package. Unobserved orbits lead NAs correlation matrix, row pseudo counts 1 added orbit count matrices (ocount1 ocount2). 
function based R code provided Theresa Ullmann (https://orcid.org/0000-0003-1215-8561).","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"","code":"calcGCD(adja1, adja2, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1))"},{"path":"https://netcomi.de/reference/calcGCD.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"adja1, adja2 adjacency matrices (numeric) defining two networks GCD shall calculated. orbits numeric vector integers 0 14 defining graphlet orbits use GCD calculation. Minimum length 2. Defaults c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), thus excluding redundant orbits orbit o3. See details.","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"object class gcd containing following elements:","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"default, 11 non-redundant orbits used. 
grouped according role: orbit 0 represents degree, orbits (2, 5, 7) represent nodes within chain, orbits (8, 10, 11) represent nodes cycle, orbits (6, 9, 4, 1) represent terminal node.","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"Hocevar Demsar (2016). Computation of Graphlet Orbits for Nodes and Edges in Sparse Graphs. Yaveroglu et al. (2014). Revealing the Hidden Language of Complex Networks.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/calcGCD.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"","code":"library(phyloseq) # Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") # Make sample sizes equal to ensure comparability n_yes <- phyloseq::nsamples(amgut_season_yes) ids_yes <- phyloseq::get_variable(amgut_season_no, \"X.SampleID\")[1:n_yes] amgut_season_no <- phyloseq::subset_samples(amgut_season_no, X.SampleID %in% ids_yes) #> Error in h(simpleError(msg, call)): error in evaluating the argument 'table' in selecting a method for function '%in%': object 'ids_yes' not found # Network construction net <- netConstruct(amgut_season_yes, amgut_season_no, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"thresh\", thresh = 0.5) #> Checking input arguments ... #> Done. #> Data filtering ... #> 94 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 1 rows with zero sum removed in group 2. #> 44 taxa and 120 samples remaining in group 1. #> 44 taxa and 162 samples remaining in group 2. 
#> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. #> #> Sparsify associations in group 2 ... #> Done. # Get adjacency matrices adja1 <- net$adjaMat1 adja2 <- net$adjaMat2 # Network visualization props <- netAnalyze(net) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . plot(props, rmSingles = TRUE, cexLabels = 1.7) # Calculate the GCD gcd <- calcGCD(adja1, adja2) gcd #> GCD: 2.40613 # Orbit counts head(gcd$ocount1) #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 2 0 0 0 0 3 0 0 3 0 3 #> 71543 3 0 0 0 0 4 0 0 1 0 2 #> 331820 0 0 0 0 0 0 0 0 0 0 0 #> 322235 2 1 1 0 0 0 0 0 0 0 1 #> 469709 2 1 1 0 0 0 0 0 0 0 1 #> 73352 0 0 0 0 0 0 0 0 0 0 0 head(gcd$ocount2) #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 2 0 0 0 0 3 0 3 0 0 3 #> 71543 1 0 0 0 0 0 0 5 1 0 4 #> 331820 0 0 0 0 0 0 0 0 0 0 0 #> 322235 0 0 0 0 0 0 0 0 0 0 0 #> 469709 0 0 0 0 0 0 0 0 0 0 0 #> 73352 0 0 0 0 0 0 0 0 0 0 0 # GCMs gcd$gcm1 #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.45431964 0.3404901 0.1339970 0.1339970 0.60502382 0.3091818 #> O2 0.4543196 1.00000000 0.8341060 0.4704992 0.4704992 0.07685639 0.7072981 #> O5 0.3404901 0.83410601 1.0000000 0.5640761 0.5640761 0.12776023 0.3649530 #> O7 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O8 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O10 0.6050238 0.07685639 0.1277602 0.3341264 0.3341264 1.00000000 0.1905216 #> O11 0.3091818 
0.70729807 0.3649530 0.6825925 0.6825925 0.19052158 1.0000000 #> O6 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O9 0.5960912 0.09182761 0.1452644 0.3638144 0.3638144 0.99263054 0.2112551 #> O4 0.2375512 0.22242827 0.2857143 0.5640761 0.5640761 0.12776023 0.3649530 #> O1 0.7481584 0.31969180 0.4248120 0.2396263 0.2396263 0.79591010 0.1091601 #> O6 O9 O4 O1 #> O0 0.1339970 0.59609122 0.2375512 0.7481584 #> O2 0.4704992 0.09182761 0.2224283 0.3196918 #> O5 0.5640761 0.14526441 0.2857143 0.4248120 #> O7 1.0000000 0.36381438 0.5640761 0.2396263 #> O8 1.0000000 0.36381438 0.5640761 0.2396263 #> O10 0.3341264 0.99263054 0.1277602 0.7959101 #> O11 0.6825925 0.21125512 0.3649530 0.1091601 #> O6 1.0000000 0.36381438 0.5640761 0.2396263 #> O9 0.3638144 1.00000000 0.1452644 0.7991260 #> O4 0.5640761 0.14526441 1.0000000 0.4248120 #> O1 0.2396263 0.79912603 0.4248120 1.0000000 gcd$gcm2 #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.50195290 0.2172958 0.3929341 0.2172958 0.4845458 0.3929341 #> O2 0.5019529 1.00000000 0.5503546 0.8165505 0.5503546 0.2619803 0.8165505 #> O5 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O7 0.3929341 0.81655052 0.6825925 1.0000000 0.6825925 0.3459888 1.0000000 #> O8 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O10 0.4845458 0.26198027 0.5369313 0.3459888 0.5369313 1.0000000 0.3459888 #> O11 0.3929341 0.81655052 0.6825925 1.0000000 0.6825925 0.3459888 1.0000000 #> O6 0.6318618 0.12253339 0.3341264 0.1905216 0.3341264 0.6276289 0.1905216 #> O9 0.4502107 0.22249134 0.4826536 0.3030534 0.4826536 0.2155385 0.3030534 #> O4 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O1 0.7296858 0.07776854 0.2788342 0.1440064 0.2788342 0.5466671 0.1440064 #> O6 O9 O4 O1 #> O0 0.6318618 0.4502107 0.2172958 0.72968585 #> O2 0.1225334 0.2224913 0.5503546 0.07776854 #> O5 0.3341264 0.4826536 1.0000000 0.27883424 #> O7 0.1905216 0.3030534 0.6825925 0.14400642 #> 
O8 0.3341264 0.4826536 1.0000000 0.27883424 #> O10 0.6276289 0.2155385 0.5369313 0.54666706 #> O11 0.1905216 0.3030534 0.6825925 0.14400642 #> O6 1.0000000 0.8144348 0.3341264 0.87997622 #> O9 0.8144348 1.0000000 0.4826536 0.71311180 #> O4 0.3341264 0.4826536 1.0000000 0.27883424 #> O1 0.8799762 0.7131118 0.2788342 1.00000000 # Test Graphlet Correlations for significant differences gcmtest <- testGCM(gcd) #> Perform Student's t-test for GCM1 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.2 #> Done. #> #> Perform Student's t-test for GCM2 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.02 #> Done. #> #> Test GCM1 and GCM2 for differences ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.58 #> Done. ### Plot heatmaps # GCM 1 (with significance code in the lower triangle) plotHeat(gcmtest$gcm1, pmat = gcmtest$pAdjust1, type = \"mixed\") # GCM 2 (with significance code in the lower triangle) plotHeat(gcmtest$gcm2, pmat = gcmtest$pAdjust2, type = \"mixed\") # Difference GCM1-GCM2 (with p-values in the lower triangle) plotHeat(gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"mixed\", textLow = \"pmat\")"},{"path":"https://netcomi.de/reference/calcGCM.html","id":null,"dir":"Reference","previous_headings":"","what":"Graphlet Correlation Matrix (GCM) — calcGCM","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"Computes Graphlet Correlation Matrix (GCM) network, given adjacency matrix. GCM network matrix Spearman's correlations network's node orbits (Hocevar Demsar, 2016; Yaveroglu et al., 2014). function considers orbits graphlets four nodes. Orbit counts determined using function count4 orca package. Unobserved orbits lead NAs correlation matrix, row pseudo counts 1 added orbit count matrix (ocount). 
function based R code provided Theresa Ullmann (https://orcid.org/0000-0003-1215-8561).","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"","code":"calcGCM(adja, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1))"},{"path":"https://netcomi.de/reference/calcGCM.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"adja adjacency matrix (numeric) defining network GCM calculated. orbits numeric vector integers 0 14 defining graphlet orbits use GCM calculation. Minimum length 2. Defaults c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), thus excluding redundant orbits such as orbit o3.","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"list following elements:","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"default, 11 non-redundant orbits used. 
grouped according role: orbit 0 represents degree, orbits (2, 5, 7) represent nodes within chain, orbits (8, 10, 11) represent nodes cycle, orbits (6, 9, 4, 1) represent terminal node.","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"Hocevar and Demsar (2016); Yaveroglu et al. (2014)","code":""},{"path":[]},{"path":"https://netcomi.de/reference/calcGCM.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"","code":"# Load data set from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction net <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"thresh\", thresh = 0.5) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Get adjacency matrices adja <- net$adjaMat1 # Network visualization props <- netAnalyze(net) plot(props, rmSingles = TRUE, cexLabels = 1.7) # Calculate Graphlet Correlation Matrix (GCM) gcm <- calcGCM(adja) gcm #> $gcm #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.39267906 0.1582768 0.3242392 0.1582768 0.6766859 0.3242392 #> O2 0.3926791 1.00000000 0.5536742 0.8165382 0.5536742 0.1095796 0.8165382 #> O5 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O7 0.3242392 0.81653824 0.6855771 1.0000000 0.6855771 0.1730876 1.0000000 #> O8 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O10 0.6766859 0.10957964 0.3058042 0.1730876 0.3058042 1.0000000 0.1730876 #> O11 0.3242392 0.81653824 0.6855771 1.0000000 0.6855771 0.1730876 1.0000000 #> O6 0.6953513 0.08571015 0.2737340 0.1473592 0.2737340 0.9020241 0.1473592 #> O9 0.6953513 0.08571015 0.2737340 0.1473592 0.2737340 0.9020241 0.1473592 #> O4 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O1 0.7524380 0.05372207 0.2360482 0.1147146 0.2360482 0.8186652 0.1147146 #> O6 O9 O4 O1 #> O0 0.69535132 0.69535132 0.1582768 0.75243796 #> O2 0.08571015 0.08571015 0.5536742 0.05372207 #> O5 0.27373396 0.27373396 1.0000000 0.23604818 #> O7 0.14735917 0.14735917 0.6855771 0.11471459 #> O8 0.27373396 0.27373396 1.0000000 0.23604818 #> O10 0.90202408 0.90202408 0.3058042 0.81866523 #> O11 0.14735917 0.14735917 0.6855771 0.11471459 #> O6 1.00000000 1.00000000 0.2737340 0.90849770 #> O9 1.00000000 1.00000000 0.2737340 0.90849770 #> O4 0.27373396 0.27373396 1.0000000 0.23604818 #> O1 0.90849770 0.90849770 0.2360482 1.00000000 #> #> $ocount #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 3 0 0 0 0 8 0 3 3 0 4 #> 331820 1 0 0 0 0 0 0 0 0 0 1 #> 73352 0 0 0 0 0 0 0 0 0 0 0 #> 322235 1 0 0 0 0 0 0 0 0 0 0 #> 71543 3 0 0 0 0 8 0 3 3 0 4 #> 469709 1 0 0 0 0 0 0 0 0 0 1 #> 158660 1 0 0 0 0 0 0 0 0 0 0 #> 512309 1 0 0 0 0 0 0 9 6 0 6 #> 188236 0 0 0 0 0 0 0 0 0 0 0 #> 248140 
0 0 0 0 0 0 0 0 0 0 0 #> 364563 0 0 0 0 0 0 0 0 0 0 0 #> 278234 0 0 0 0 0 0 0 0 0 0 0 #> 353985 0 0 0 0 0 0 0 0 0 0 0 #> 301645 3 0 0 0 0 8 0 3 3 0 4 #> 361496 0 0 0 0 0 0 0 0 0 0 0 #> 90487 0 0 0 0 0 0 0 0 0 0 0 #> 190597 0 0 0 0 0 0 0 0 0 0 0 #> 259569 0 0 0 0 0 0 0 0 0 0 0 #> 326792 0 0 0 0 0 0 0 0 0 0 0 #> 541301 0 0 0 0 0 0 0 0 0 0 0 #> 305760 3 0 0 0 0 8 0 3 3 0 4 #> 184983 0 0 0 0 0 0 0 0 0 0 0 #> 549871 0 0 0 0 0 0 0 0 0 0 0 #> 127309 0 0 0 0 0 0 0 0 0 0 0 #> 326977 0 0 0 0 0 0 0 0 0 0 0 #> 181095 0 0 0 0 0 0 0 0 0 0 0 #> 130663 0 0 0 0 0 0 0 0 0 0 0 #> 244304 0 0 0 0 0 0 0 0 0 0 0 #> 311477 0 0 0 0 0 0 0 0 0 0 0 #> 516022 0 0 0 0 0 0 0 0 0 0 0 #> 274244 0 0 0 0 0 0 0 0 0 0 0 #> 590083 0 0 0 0 0 0 0 0 0 0 0 #> 191541 0 0 0 0 0 0 0 0 0 0 0 #> 181016 0 0 0 0 0 0 0 0 0 0 0 #> 9715 7 15 0 9 0 0 24 0 0 0 0 #> 9753 3 0 0 0 0 8 0 3 3 0 4 #> 190464 0 0 0 0 0 0 0 0 0 0 0 #> 195102 0 0 0 0 0 0 0 0 0 0 0 #> 268332 2 1 0 0 0 0 0 0 0 0 0 #> 361480 0 0 0 0 0 0 0 0 0 0 0 #> 470973 0 0 0 0 0 0 0 0 0 0 0 #> 223059 0 0 0 0 0 0 0 0 0 0 0 #> 334393 1 0 0 0 0 0 0 0 0 0 0 #> 288134 0 0 0 0 0 0 0 0 0 0 0 #> 119010 3 0 0 0 0 8 0 3 3 0 4 #> 194648 0 0 0 0 0 0 0 0 0 0 0 #> 302160 0 0 0 0 0 0 0 0 0 0 0 #> 199487 0 0 0 0 0 0 0 0 0 0 0 #> 175617 1 0 0 0 0 0 0 0 0 0 0 #> 312461 0 0 0 0 0 0 0 0 0 0 0 #> pseudo 1 1 1 1 1 1 1 1 1 1 1 #> #> attr(,\"class\") #> [1] \"GCM\" # Plot heatmap of the GCM plotHeat(gcm$gcm)"},{"path":"https://netcomi.de/reference/cclasso.html","id":null,"dir":"Reference","previous_headings":"","what":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"Implementation CCLasso approach (Fang et al., 2015), published GitHub (Fang, 2016). 
function extended progress message.","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"","code":"cclasso( x, counts = F, pseudo = 0.5, sig = NULL, lams = 10^(seq(0, -8, by = -0.01)), K = 3, kmax = 5000, verbose = TRUE )"},{"path":"https://netcomi.de/reference/cclasso.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"x numeric matrix (nxp) samples rows OTUs/taxa columns. counts logical indicating whether x contains counts fractions. Defaults FALSE meaning x contains fractions rows sum 1. pseudo numeric value giving pseudo count, added counts counts = TRUE. Default 0.5. sig numeric matrix giving initial covariance matrix. NULL (default), diag(rep(1, p)) used. lams numeric vector specifying tuning parameter sequences. Default 10^(seq(0, -8, by = -0.01)). K numeric value (integer) giving folds cross-validation. Defaults 3. kmax numeric value (integer) specifying maximum iteration augmented Lagrangian method. Default 5000. 
verbose logical indicating whether progress indicator shown (TRUE default).","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"list containing following elements:","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"Fang et al. (2015); Fang (2016)","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"Fang Huaying, Peking University (R code) Stefanie Peschel (documentation)","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":null,"dir":"Reference","previous_headings":"","what":"Adding transparency to a color — colToTransp","title":"Adding transparency to a color — colToTransp","text":"Adding transparency color","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adding transparency to a color — colToTransp","text":"","code":"colToTransp(col, percent = 50)"},{"path":"https://netcomi.de/reference/colToTransp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adding transparency to a color — colToTransp","text":"col color vector specified similar col argument col2rgb percent numeric 0 100 giving level transparency. 
Defaults 50.","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Adding transparency to a color — colToTransp","text":"","code":"# Accepts hexadecimal strings, written colors, or numbers as input colToTransp(\"#FF0000FF\", 50) #> [1] \"#FF00007F\" colToTransp(\"black\", 50) #> [1] \"#0000007F\" colToTransp(2) #> [1] \"#DF536B7F\" # Different shades of red r80 <- colToTransp(\"red\", 80) r50 <- colToTransp(\"red\", 50) r20 <- colToTransp(\"red\", 20) barplot(rep(5, 4), col = c(\"red\", r20, r50, r80), names.arg = 1:4) # Vector as input rain_transp <- colToTransp(rainbow(5), 50) barplot(rep(5, 5), col = rain_transp, names.arg = 1:5)"},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":null,"dir":"Reference","previous_headings":"","what":"Create and store association matrices for permuted data — createAssoPerm","title":"Create and store association matrices for permuted data — createAssoPerm","text":"function creates returns matrix permuted group labels saves association matrices computed permuted data external file.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create and store association matrices for permuted data — createAssoPerm","text":"","code":"createAssoPerm( x, computeAsso = TRUE, nPerm = 1000L, cores = 1L, seed = NULL, permGroupMat = NULL, fileStoreAssoPerm = \"assoPerm\", append = TRUE, storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), logFile = NULL, verbose = TRUE )"},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create and store association matrices for permuted data — createAssoPerm","text":"x object class \"microNet\" \"microNetProps\" (returned netConstruct netAnalyze). 
computeAsso logical indicating whether association matrices computed. FALSE, permuted group labels computed returned. nPerm integer indicating number permutations. cores integer indicating number CPU cores used permutation tests. cores > 1, tests performed parallel. limited number available CPU cores determined detectCores. Defaults 1L (parallelization). seed integer giving seed reproducibility results. permGroupMat optional matrix permuted group labels (nPerm rows n1+n2 columns). fileStoreAssoPerm character giving name file matrix associations/dissimilarities permuted data saved. Can also path. append logical indicating whether existing files (given fileStoreAssoPerm fileStoreCountsPerm) extended. TRUE, new file created file existing. FALSE, new file created case. storeCountsPerm logical indicating whether permuted count matrices saved external file. Defaults FALSE. Ignored fileLoadCountsPerm NULL. fileStoreCountsPerm character vector two elements giving names two files storing permuted count matrices belonging two groups. logFile character string naming log file current iteration number written. Defaults NULL log file generated. verbose logical. 
TRUE (default), status messages shown.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create and store association matrices for permuted data — createAssoPerm","text":"Invisible object: Matrix permuted group labels.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create and store association matrices for permuted data — createAssoPerm","text":"","code":"# \\donttest{ # Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Generate a random group vector set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Network construction: amgut_net <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 97 taxa removed. #> 30 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. 
# Network analysis: amgut_props <- netAnalyze(amgut_net, clustMethod = \"cluster_fast_greedy\") # Use 'createAssoPerm' to create \"permuted\" count and association matrices, # which can be reused by netCompare() and diffNet() # Note: # createAssoPerm() accepts objects 'amgut_net' and 'amgut_props' as input createAssoPerm(amgut_props, nPerm = 100L, computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), append = FALSE, seed = 123456) #> Create matrix with permuted group labels ... #> Done. #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Files 'countsPerm1.bmat, countsPerm1.desc.txt, countsPerm2.bmat, and countsPerm2.desc.txt created. #> Compute permutation associations ... #> | | | 0% #> Loading required package: dynamicTreeCut #> Loading required package: fastcluster #> #> Attaching package: ‘fastcluster’ #> The following object is masked from ‘package:stats’: #> #> hclust #> #> Attaching package: ‘WGCNA’ #> The following object is masked from ‘package:stats’: #> #> cor #> Loading required package: permute #> Loading required package: lattice #> This is vegan 2.6-8 #> #> Attaching package: ‘LaplacesDemon’ #> The following object is masked from ‘package:permute’: #> #> Blocks #> Loading required package: S4Vectors #> Loading required package: stats4 #> Loading required package: BiocGenerics #> #> Attaching package: ‘BiocGenerics’ #> The following objects are masked from ‘package:stats’: #> #> IQR, mad, sd, var, xtabs #> The following objects are masked from ‘package:base’: #> #> Filter, Find, Map, Position, Reduce, anyDuplicated, aperm, append, #> as.data.frame, basename, cbind, colnames, dirname, do.call, #> duplicated, eval, evalq, get, grep, grepl, intersect, is.unsorted, #> lapply, mapply, match, mget, order, paste, pmax, pmax.int, pmin, #> pmin.int, rank, rbind, rownames, sapply, saveRDS, setdiff, table, #> tapply, union, unique, unsplit, which.max, which.min #> #> 
Attaching package: ‘S4Vectors’ #> The following object is masked from ‘package:utils’: #> #> findMatches #> The following objects are masked from ‘package:base’: #> #> I, expand.grid, unname #> Loading required package: IRanges #> #> Attaching package: ‘IRanges’ #> The following object is masked from ‘package:phyloseq’: #> #> distance #> Loading required package: GenomicRanges #> Loading required package: GenomeInfoDb #> Loading required package: SummarizedExperiment #> Loading required package: MatrixGenerics #> Loading required package: matrixStats #> #> Attaching package: ‘MatrixGenerics’ #> The following objects are masked from ‘package:matrixStats’: #> #> colAlls, colAnyNAs, colAnys, colAvgsPerRowSet, colCollapse, #> colCounts, colCummaxs, colCummins, colCumprods, colCumsums, #> colDiffs, colIQRDiffs, colIQRs, colLogSumExps, colMadDiffs, #> colMads, colMaxs, colMeans2, colMedians, colMins, colOrderStats, #> colProds, colQuantiles, colRanges, colRanks, colSdDiffs, colSds, #> colSums2, colTabulates, colVarDiffs, colVars, colWeightedMads, #> colWeightedMeans, colWeightedMedians, colWeightedSds, #> colWeightedVars, rowAlls, rowAnyNAs, rowAnys, rowAvgsPerColSet, #> rowCollapse, rowCounts, rowCummaxs, rowCummins, rowCumprods, #> rowCumsums, rowDiffs, rowIQRDiffs, rowIQRs, rowLogSumExps, #> rowMadDiffs, rowMads, rowMaxs, rowMeans2, rowMedians, rowMins, #> rowOrderStats, rowProds, rowQuantiles, rowRanges, rowRanks, #> rowSdDiffs, rowSds, rowSums2, rowTabulates, rowVarDiffs, rowVars, #> rowWeightedMads, rowWeightedMeans, rowWeightedMedians, #> rowWeightedSds, rowWeightedVars #> Loading required package: Biobase #> Welcome to Bioconductor #> #> Vignettes contain introductory material; view with #> 'browseVignettes()'. To cite Bioconductor, see #> 'citation(\"Biobase\")', and for packages 'citation(\"pkgname\")'. 
#> #> Attaching package: ‘Biobase’ #> The following object is masked from ‘package:MatrixGenerics’: #> #> rowMedians #> The following objects are masked from ‘package:matrixStats’: #> #> anyMissing, rowMedians #> The following object is masked from ‘package:phyloseq’: #> #> sampleNames #> | |= | 1% | |= | 2% | |== | 3% | |=== | 4% | |==== | 5% | |==== | 6% | |===== | 7% | |====== | 8% | |====== | 9% | |======= | 10% | |======== | 11% | |======== | 12% | |========= | 13% | |========== | 14% | |========== | 15% | |=========== | 16% | |============ | 17% | |============= | 18% | |============= | 19% | |============== | 20% | |=============== | 21% | |=============== | 22% | |================ | 23% | |================= | 24% | |================== | 25% | |================== | 26% | |=================== | 27% | |==================== | 28% | |==================== | 29% | |===================== | 30% | |====================== | 31% | |====================== | 32% | |======================= | 33% | |======================== | 34% | |======================== | 35% | |========================= | 36% | |========================== | 37% | |=========================== | 38% | |=========================== | 39% | |============================ | 40% | |============================= | 41% | |============================= | 42% | |============================== | 43% | |=============================== | 44% | |================================ | 45% | |================================ | 46% | |================================= | 47% | |================================== | 48% | |================================== | 49% | |=================================== | 50% | |==================================== | 51% | |==================================== | 52% | |===================================== | 53% | |====================================== | 54% | |====================================== | 55% | |======================================= | 56% | 
|======================================== | 57% | |========================================= | 58% | |========================================= | 59% | |========================================== | 60% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 63% | |============================================= | 64% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 67% | |================================================ | 68% | |================================================ | 69% | |================================================= | 70% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 73% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 76% | |====================================================== | 77% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 80% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 83% | |=========================================================== | 84% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 87% | |============================================================== | 88% | 
|============================================================== | 89% | |=============================================================== | 90% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 93% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 97% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 100% #> Done. # Run netcompare using the stored permutation count matrices # (association matrices are still computed within netCompare): amgut_comp1 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... 
#> | | | 0% | |= | 1% | |= | 2% | |== | 3% | |=== | 4% | |==== | 5% | |==== | 6% | |===== | 7% | |====== | 8% | |====== | 9% | |======= | 10% | |======== | 11% | |======== | 12% | |========= | 13% | |========== | 14% | |========== | 15% | |=========== | 16% | |============ | 17% | |============= | 18% | |============= | 19% | |============== | 20% | |=============== | 21% | |=============== | 22% | |================ | 23% | |================= | 24% | |================== | 25% | |================== | 26% | |=================== | 27% | |==================== | 28% | |==================== | 29% | |===================== | 30% | |====================== | 31% | |====================== | 32% | |======================= | 33% | |======================== | 34% | |======================== | 35% | |========================= | 36% | |========================== | 37% | |=========================== | 38% | |=========================== | 39% | |============================ | 40% | |============================= | 41% | |============================= | 42% | |============================== | 43% | |=============================== | 44% | |================================ | 45% | |================================ | 46% | |================================= | 47% | |================================== | 48% | |================================== | 49% | |=================================== | 50% | |==================================== | 51% | |==================================== | 52% | |===================================== | 53% | |====================================== | 54% | |====================================== | 55% | |======================================= | 56% | |======================================== | 57% | |========================================= | 58% | |========================================= | 59% | |========================================== | 60% | |=========================================== | 61% | |=========================================== | 62% | 
|============================================ | 63% | |============================================= | 64% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 67% | |================================================ | 68% | |================================================ | 69% | |================================================= | 70% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 73% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 76% | |====================================================== | 77% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 80% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 83% | |=========================================================== | 84% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 87% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 90% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 
| [progress bar output from 93% to 100% omitted] #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. # Run netCompare() using the stored permutation association matrices: amgut_comp2 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, fileLoadAssoPerm = \"assoPerm\") #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... #> [progress bar output from 0% to 100% omitted] #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. 
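The permutation p-values reported in the summaries that follow are computed as described in the diffnet details: the proportion of permutation differences at least as extreme as the observed difference, with a pseudo-count added to numerator and denominator to avoid zero p-values. A minimal, language-agnostic sketch of that calculation (Python used here purely for illustration; `perm_pvalue` is a made-up name, not a NetCoMi function):

```python
def perm_pvalue(obs_diff, perm_diffs):
    """One-tailed permutation p-value for H0: difference = 0.
    A pseudo-count of 1 is added to numerator and denominator
    to avoid zero p-values, as in the procedure described here."""
    exceed = sum(1 for d in perm_diffs if abs(d) >= abs(obs_diff))
    return (exceed + 1) / (len(perm_diffs) + 1)

# Toy check: an observed difference of 0.5 against four permutation
# differences, two of which are at least as extreme:
p = perm_pvalue(0.5, [0.1, 0.2, 0.6, 0.7])  # (2 + 1) / (4 + 1) = 0.6
```

With only 100 permutations the smallest attainable p-value is 1/101, which is why the examples above warn that few permutations give limited resolution.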
summary(amgut_comp1) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, seed = 123456, #> fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\")) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff. p-value #> Relative LCC size 0.900 0.967 0.067 0.584158 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.584158 #> Positive edge percentage 43.023 32.432 10.591 0.039604 * #> Edge density 0.245 0.182 0.063 0.346535 #> Natural connectivity 0.062 0.052 0.010 0.198020 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.927 0.949 0.021 0.217822 #> Average path length** 1.624 1.819 0.195 0.435644 #> #> Whole network: #> group '1' group '2' abs.diff. p-value #> Number of components 4.000 2.000 2.000 0.544554 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.594059 #> Positive edge percentage 43.023 32.432 10.591 0.029703 * #> Edge density 0.198 0.170 0.028 0.752475 #> Natural connectivity 0.054 0.049 0.004 0.613861 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.333 0.631521 0.606925 #> betweenness centr. 0.333 0.631521 0.606925 #> closeness centr. 0.600 0.980338 0.076564 . #> eigenvec. centr. 0.778 0.999035 0.008281 ** #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. 
clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.095 0.095 #> p-value 0.047 0.064 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.421000 0.945000 #> p-value 0.990099 0.811881 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 181095 0.207 0.000 0.207 0.965347 #> 158660 0.414 0.241 0.172 0.965347 #> 301645 0.414 0.241 0.172 0.965347 #> 130663 0.069 0.207 0.138 0.965347 #> 331820 0.241 0.103 0.138 0.965347 #> 326977 0.207 0.069 0.138 0.965347 #> 364563 0.241 0.345 0.103 0.965347 #> 322235 0.310 0.207 0.103 0.965347 #> 353985 0.138 0.034 0.103 0.965347 #> 470973 0.138 0.034 0.103 0.965347 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 130663 0.000 0.169 0.169 0.878790 #> 512309 0.000 0.124 0.124 0.878790 #> 181095 0.105 0.000 0.105 0.292930 #> 326792 0.098 0.000 0.098 0.878790 #> 326977 0.086 0.000 0.086 0.894207 #> 331820 0.000 0.071 0.071 0.894207 #> 248140 0.095 0.161 0.066 0.894207 #> 188236 0.062 0.124 0.063 0.894207 #> 361496 0.000 0.058 0.058 0.894207 #> 9753 0.258 0.206 0.052 0.894207 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 181095 0.733 0.000 0.733 0.933522 #> 259569 0.000 0.573 0.573 0.933522 #> 127309 0.000 0.443 0.443 0.933522 #> 549871 0.000 0.394 0.394 0.933522 #> 470973 0.695 0.480 0.216 0.933522 #> 331820 0.764 0.568 0.197 0.933522 #> 353985 0.717 0.530 0.187 0.933522 #> 541301 0.621 0.448 0.173 0.933522 #> 158660 0.935 0.777 0.157 0.933522 #> 244304 0.631 0.486 0.145 0.933522 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 331820 0.504 0.101 0.402 0.866062 #> 301645 1.000 0.698 0.302 0.283710 #> 158660 0.639 0.352 0.287 0.866062 #> 181095 0.242 0.000 0.242 0.496493 #> 307981 0.993 0.762 0.231 0.496493 #> 364563 0.564 0.762 0.198 0.866062 #> 326977 0.354 0.159 0.196 0.866062 #> 353985 0.246 0.063 0.183 0.866062 #> 188236 0.621 0.763 0.142 0.866062 #> 259569 0.000 0.132 0.132 0.866062 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 summary(amgut_comp2) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\") #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff. p-value #> Relative LCC size 0.900 0.967 0.067 0.584158 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.584158 #> Positive edge percentage 43.023 32.432 10.591 0.039604 * #> Edge density 0.245 0.182 0.063 0.346535 #> Natural connectivity 0.062 0.052 0.010 0.198020 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.927 0.949 0.021 0.217822 #> Average path length** 1.624 1.819 0.195 0.435644 #> #> Whole network: #> group '1' group '2' abs.diff. 
p-value #> Number of components 4.000 2.000 2.000 0.544554 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.594059 #> Positive edge percentage 43.023 32.432 10.591 0.029703 * #> Edge density 0.198 0.170 0.028 0.752475 #> Natural connectivity 0.054 0.049 0.004 0.613861 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.333 0.631521 0.606925 #> betweenness centr. 0.333 0.631521 0.606925 #> closeness centr. 0.600 0.980338 0.076564 . #> eigenvec. centr. 0.778 0.999035 0.008281 ** #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.095 0.095 #> p-value 0.051 0.057 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.421000 0.945000 #> p-value 0.990099 0.811881 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 181095 0.207 0.000 0.207 0.965347 #> 158660 0.414 0.241 0.172 0.965347 #> 301645 0.414 0.241 0.172 0.965347 #> 130663 0.069 0.207 0.138 0.965347 #> 331820 0.241 0.103 0.138 0.965347 #> 326977 0.207 0.069 0.138 0.965347 #> 364563 0.241 0.345 0.103 0.965347 #> 322235 0.310 0.207 0.103 0.965347 #> 353985 0.138 0.034 0.103 0.965347 #> 470973 0.138 0.034 0.103 0.965347 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 130663 0.000 0.169 0.169 0.878790 #> 512309 0.000 0.124 0.124 0.878790 #> 181095 0.105 0.000 0.105 0.292930 #> 326792 0.098 0.000 0.098 0.878790 #> 326977 0.086 0.000 0.086 0.894207 #> 331820 0.000 0.071 0.071 0.894207 #> 248140 0.095 0.161 0.066 0.894207 #> 188236 0.062 0.124 0.063 0.894207 #> 361496 0.000 0.058 0.058 0.894207 #> 9753 0.258 0.206 0.052 0.894207 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 181095 0.733 0.000 0.733 0.933522 #> 259569 0.000 0.573 0.573 0.933522 #> 127309 0.000 0.443 0.443 0.933522 #> 549871 0.000 0.394 0.394 0.933522 #> 470973 0.695 0.480 0.216 0.933522 #> 331820 0.764 0.568 0.197 0.933522 #> 353985 0.717 0.530 0.187 0.933522 #> 541301 0.621 0.448 0.173 0.933522 #> 158660 0.935 0.777 0.157 0.933522 #> 244304 0.631 0.486 0.145 0.933522 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 331820 0.504 0.101 0.402 0.866062 #> 301645 1.000 0.698 0.302 0.283710 #> 158660 0.639 0.352 0.287 0.866062 #> 181095 0.242 0.000 0.242 0.496493 #> 307981 0.993 0.762 0.231 0.496493 #> 364563 0.564 0.762 0.198 0.866062 #> 326977 0.354 0.159 0.196 0.866062 #> 353985 0.246 0.063 0.183 0.866062 #> 188236 0.621 0.763 0.142 0.866062 #> 259569 0.000 0.132 0.132 0.866062 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 all.equal(amgut_comp1$properties, amgut_comp2$properties) #> [1] TRUE # Run diffnet() using the stored permutation count matrices diff1 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\")) #> Checking input arguments ... #> Done. #> Execute permutation tests ... #> [progress bar output from 0% to 100% omitted] #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. # Run diffnet() using the stored permutation association matrices diff2 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 100L, fileLoadAssoPerm = \"assoPerm\") #> Checking input arguments ... #> Done. 
#> Execute permutation tests ... #> [progress bar output from 0% to 100% omitted] #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. #plot(diff1) #plot(diff2) # Note: Networks are empty (no significantly different associations) # for only 100 permutations # }"},{"path":"https://netcomi.de/reference/diffnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Constructing Differential Networks for Microbiome Data — diffnet","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"Constructs differential network objects of class microNet. 
Three methods for identifying differentially associated taxa are provided: Fisher's z-test, permutation test, and the discordant method.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"","code":"diffnet( x, diffMethod = \"permute\", discordThresh = 0.8, n1 = NULL, n2 = NULL, fisherTrans = TRUE, nPerm = 1000L, permPvalsMethod = \"pseudo\", cores = 1L, verbose = TRUE, logFile = NULL, seed = NULL, alpha = 0.05, adjust = \"lfdr\", lfdrThresh = 0.2, trueNullMethod = \"convest\", pvalsVec = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), assoPerm = NULL )"},{"path":"https://netcomi.de/reference/diffnet.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"x an object of class microNet (returned by netConstruct). diffMethod character string indicating the method used for determining differential associations. Possible values are \"permute\" (default) for performing permutation tests according to Gill et al. (2010), \"discordant\", which calls discordantRun (Siska and Kechris, 2016), and \"fisherTest\" for Fisher's z-test (Fisher, 1992). discordThresh numeric value in [0,1]. Only used for the discordant method. Specifies the threshold for the posterior probability that a pair of taxa is differentially correlated between the groups. Taxa pairs with a posterior probability above the threshold are connected in the network. Defaults to 0.8. n1, n2 integers giving the sample sizes of the two data sets used for network construction. Needed for Fisher's z-test if association matrices instead of count matrices were used for network construction. fisherTrans logical. If TRUE (default), Fisher-transformed correlations are used for the permutation tests. nPerm integer giving the number of permutations for the permutation tests. Defaults to 1000L. 
permPvalsMethod character indicating the method used for determining the p-values of the permutation tests. Currently, \"pseudo\" is the only available option (see details). cores integer indicating the number of CPU cores used for the permutation tests. If cores > 1, the tests are performed in parallel. It is limited to the number of available CPU cores determined by detectCores. Defaults to 1L (no parallelization). verbose logical. If TRUE (default), progress messages are shown. logFile character string defining the name of a log file, which is created when permutation tests are conducted (the current iteration numbers are stored therein). Defaults to NULL, so that no file is created. seed integer giving a seed for reproducibility of the results. alpha numeric value between 0 and 1 giving the significance level. Significantly different correlations are connected in the network. Defaults to 0.05. adjust character indicating the method used for multiple testing adjustment of the tests for differentially correlated pairs of taxa. Possible values are \"lfdr\" (default) for local false discovery rate correction (via fdrtool), \"adaptBH\" for the adaptive Benjamini-Hochberg method (Benjamini and Hochberg, 2000), or one of the methods provided by p.adjust. lfdrThresh defines the threshold for the local fdr if \"lfdr\" is chosen as method for multiple testing correction. Defaults to 0.2, meaning that correlations with a corresponding local fdr less than or equal to 0.2 are identified as significant. trueNullMethod character indicating the method used for estimating the proportion of true null hypotheses from a vector of p-values. Used for the adaptive Benjamini-Hochberg method for multiple testing adjustment (if adjust = \"adaptBH\" is chosen). Accepts the provided options of the method argument of propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", and \"hist\". Can alternatively be \"farco\" for the iterative plug-in method proposed by Farcomeni (2007). pvalsVec vector of p-values used for the permutation tests. Can be used for performing another method of multiple testing adjustment without executing the complete permutation process again. See the example. fileLoadAssoPerm character giving the name (without extension) or path of the file storing the \"permuted\" association/dissimilarity matrices that have been exported by setting storeAssoPerm to TRUE. Only used for permutation tests. 
Set to NULL if no existing associations should be used. fileLoadCountsPerm character giving the name (without extension) or path of the file storing the \"permuted\" count matrices that have been exported by setting storeCountsPerm to TRUE. Only used for permutation tests, and only if fileLoadAssoPerm = NULL. Set to NULL if no existing count matrices should be used. storeAssoPerm logical indicating whether the association (dissimilarity) matrices for the permuted data should be stored in a file, whose name is given via fileStoreAssoPerm. If TRUE, the computed \"permutation\" association/dissimilarity matrices can be reused via fileLoadAssoPerm to save runtime. Defaults to FALSE. fileStoreAssoPerm character giving the name of the file for storing the matrix containing the associations/dissimilarities for the permuted data. Can also be a path. storeCountsPerm logical indicating whether the permuted count matrices should be stored in external files. Defaults to FALSE. fileStoreCountsPerm character vector with two elements giving the names of the two files storing the permuted count matrices belonging to the two groups. assoPerm only needed for output generated with NetCoMi v1.0.1! A list with two elements used for the permutation procedure. Each entry must contain association matrices for \"nPerm\" permutations. This can either be the \"assoPerm\" value as part of the output returned by diffnet or netCompare. See the example.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"The function returns an object of class diffnet. Depending on the performed test method, the output contains the following elements: Permutation tests: Discordant: Fisher's z-test:","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"Permutation procedure: The null hypothesis of these tests is defined as $$H_0: a_{ij}^{(1)} - a_{ij}^{(2)} = 0,$$ where \(a_{ij}^{(1)}\) and \(a_{ij}^{(2)}\) denote the association between taxa i and j in group 1 and 2, respectively. 
To generate a sampling distribution of the differences under \(H_0\), the group labels are randomly reassigned to the samples while the group sizes are kept fixed. The associations are then re-estimated for each permuted data set. The p-values are calculated as the proportion of \"permutation differences\" being larger than the observed difference. A pseudo-count is added to both the numerator and the denominator in order to avoid zero p-values. The p-values are adjusted for multiple testing.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"benjamini2000adaptiveNetCoMi discordant2016NetCoMi farcomeni2007someNetCoMi fisher1992statisticalNetCoMi gill2010statisticalNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/diffnet.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Generate a random group vector set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Network construction: amgut_net <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 97 taxa removed. #> 30 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. 
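The first example that follows uses diffMethod = \"fisherTest\". The underlying idea of comparing two independent Pearson correlations via Fisher's z-transformation can be sketched as follows (Python used purely for illustration; `fisher_z_test` is a made-up name and this is not NetCoMi's implementation):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test for equality of two independent Pearson
    correlations via Fisher's z-transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # variance-stabilizing transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    stat = (z1 - z2) / se
    # two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(stat) / math.sqrt(2.0))))
    return stat, p

# Clearly different correlations in two groups of sizes 150 and 140
stat, p = fisher_z_test(0.6, 150, 0.3, 140)
```

For equal correlations the statistic is 0 and the p-value is 1; as described in the arguments above, diffnet additionally adjusts the resulting p-values for multiple testing.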
#--------------------- # Differential network # Fisher's z-test amgut_diff1 <- diffnet(amgut_net, diffMethod = \"fisherTest\") #> Checking input arguments ... #> Done. #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. # Network contains no differentially correlated taxa: if (FALSE) { # \\dontrun{ plot(amgut_diff1) } # } # Without multiple testing correction (statistically not correct!) amgut_diff2 <- diffnet(amgut_net, diffMethod = \"fisherTest\", adjust = \"none\") #> Checking input arguments ... #> Done. plot(amgut_diff2) if (FALSE) { # \\dontrun{ # Permutation test (permutation matrices are stored) amgut_diff3 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, cores = 4L, adjust = \"lfdr\", storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm\", seed = 123456) # Use the p-values again (different adjustment method possible), but without # re-estimating the associations amgut_diff4 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", pvalsVec = amgut_diff3$pvalsVec) x11() plot(amgut_diff4) # Use the permutation associations again (same result as amgut_diff4) amgut_diff5 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", fileLoadAssoPerm = \"assoPerm\") x11() plot(amgut_diff5) # Use the permuted count matrices again (same result as amgut_diff4) amgut_diff6 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) x11() plot(amgut_diff6) } # 
}"},{"path":"https://netcomi.de/reference/dot-boottest.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"Statistical significance correlations pairs taxonomic units tested using bootstrap procedure proposed Friedman Alm (2012).","code":""},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"","code":".boottest( countMat, assoMat, nboot = 1000, measure, measurePar, cores = 4, logFile = NULL, verbose = TRUE, seed = NULL, assoBoot = NULL )"},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"countMat matrix containing microbiome data (read counts) correlations calculated (rows represent samples, columns represent taxa) assoMat matrix containing associations estimated countMat. nboot number bootstrap samples. measure character specifying method used computing associations taxa. measurePar list parameters passed function computing associations/dissimilarities. See details respective functions. cores number CPU cores used parallelization. logFile character defining log file, number iteration stored. NULL, log file created. wherein current iteration numbers stored. verbose logical; TRUE, iteration numbers printed R console seed optional seed reproducibility results. 
assoBoot list bootstrap association matrices.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"friedman2012inferringNetCoMi","code":""},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute associations between taxa — .calcAssociation","title":"Compute associations between taxa — .calcAssociation","text":"Computes associations taxa distances subjects given read count matrix","code":""},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute associations between taxa — .calcAssociation","text":"","code":".calcAssociation(countMat, measure, measurePar, verbose)"},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute associations between taxa — .calcAssociation","text":"countMat numeric read count matrix, rows samples columns OTUs/taxa. 
measure character giving measure used estimating associations dissimilarities measurePar optional list parameters passed function estimating associations/dissimilarities verbose TRUE, progress messages returned.","code":""},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate network properties — .calcProps","title":"Calculate network properties — .calcProps","text":"Calculates network properties given adjacency matrix","code":""},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate network properties — .calcProps","text":"","code":".calcProps( adjaMat, dissMat, assoMat, avDissIgnoreInf, sPathNorm, sPathAlgo, normNatConnect, weighted, isempty, clustMethod, clustPar, weightClustCoef, hubPar, hubQuant, lnormFit, connectivity, graphlet, orbits, weightDeg, normDeg, normBetw, normClose, normEigen, centrLCC, jaccard = FALSE, jaccQuant = NULL, verbose = 0 )"},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate network properties — .calcProps","text":"adjaMat adjacency matrix dissMat dissimilarity matrix assoMat association matrix avDissIgnoreInf logical indicating whether ignore infinities calculating average dissimilarity. FALSE (default), infinity values set 1. sPathNorm logical. TRUE (default), shortest paths normalized average dissimilarity (connected nodes considered), .e., path interpreted steps average dissimilarity. FALSE, shortest path minimum sum dissimilarities two nodes. sPathAlgo character indicating algorithm used computing shortest paths node pairs. distances (igraph) used shortest path calculation. Possible values : \"unweighted\", \"dijkstra\" (default), \"bellman-ford\", \"johnson\", \"automatic\" (fastest suitable algorithm used). 
shortest paths needed average (shortest) path length closeness centrality. normNatConnect logical. TRUE (default), normalized natural connectivity returned. weighted logical indicating whether network weighted. isempty logical indicating whether network empty. clustMethod character indicating clustering algorithm. Possible values \"hierarchical\" hierarchical algorithm based dissimilarity values, clustering methods provided igraph package (see communities possible methods). Defaults \"cluster_fast_greedy\" association-based networks \"hierarchical\" sample similarity networks. clustPar list parameters passed clustering functions. hierarchical clustering used, parameters passed hclust well cutree. weightClustCoef logical indicating whether (global) clustering coefficient weighted (TRUE, default) unweighted (FALSE). hubPar character vector one elements (centrality measures) used identifying hub nodes. Possible values degree, betweenness, closeness, eigenvector. multiple measures given, hubs nodes highest centrality selected measures. See details. hubQuant quantile used determining hub nodes. Defaults 0.95. lnormFit hubs nodes centrality value 95% quantile fitted log-normal distribution (lnormFit = TRUE) empirical distribution centrality values (lnormFit = FALSE; default). connectivity logical. TRUE (default), edge vertex connectivity calculated. Might disabled reduce execution time. graphlet logical. TRUE (default), graphlet-based network properties computed: orbit counts graphlets 2-4 nodes (ocount) Graphlet Correlation Matrix (gcm). orbits numeric vector integers 0 14 defining graphlet orbits. weightDeg logical. TRUE, weighted degree used (see strength). Default FALSE. automatically set TRUE fully connected (dense) network. normDeg, normBetw, normClose, normEigen logical. TRUE (default measures), normalized version respective centrality values returned. centrLCC logical indicating whether compute centralities largest connected component (LCC). 
TRUE (default), centrality values disconnected components zero. jaccard shall Jaccard index calculated? jaccQuant quantile Jaccard index verbose integer indicating level verbosity. Possible values: \"0\": messages, \"1\": important messages, \"2\"(default): progress messages shown. Can also logical.","code":""},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":null,"dir":"Reference","previous_headings":"","what":"Function for Generating Vector Names — .getVecNames","title":"Function for Generating Vector Names — .getVecNames","text":"function generates names vector contains elements lower triangle matrix. R code copied getNames (exported function discordant package) changed names generated columns (rows original function, produces implausible results).","code":""},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Function for Generating Vector Names — .getVecNames","text":"","code":".getVecNames(x)"},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Function for Generating Vector Names — .getVecNames","text":"x symmetric matrix column row names returned vector","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":null,"dir":"Reference","previous_headings":"","what":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"function implements procedures test whether pairs taxa differentially associated, whether taxon differentially associated taxa, whether two networks differentially associated two groups proposed Gill et al.(2010).","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Permutation Tests for 
Determining Differential Associations — .permTestDiffAsso","text":"","code":".permTestDiffAsso( countMat1, countMat2, countsJoint, normCounts1, normCounts2, assoMat1, assoMat2, paramsNetConstruct, method = c(\"connect.pairs\", \"connect.variables\", \"connect.network\"), fisherTrans = TRUE, pvalsMethod = \"pseudo\", adjust = \"lfdr\", adjust2 = \"holm\", trueNullMethod = \"convest\", alpha = 0.05, lfdrThresh = 0.2, nPerm = 1000, matchDesign = NULL, callNetConstr = NULL, cores = 4, verbose = TRUE, logFile = \"log.txt\", seed = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), assoPerm = NULL )"},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"countMat1, countMat2 matrices containing microbiome data (read counts) group 1 group 2 (rows represent samples columns taxonomic units, respectively). countsJoint joint count matrices preprocessing normCounts1, normCounts2 normalized count matrices. assoMat1, assoMat2 association matrices corresponding two count matrices. associations must estimated count matrices countMat1 countMat2. paramsNetConstruct parameters used network construction. method character vector indicating tests performed. Possible values \"connect.pairs\" (differentially correlated taxa pairs), \"connect.variables\" (one taxon ) \"connect.network\" (differentially connected networks). default, three tests conducted. fisherTrans logical indicating whether correlation values Fisher-transformed. pvalsMethod currently \"pseudo\" available, 1 added number permutations permutation test statistics extreme observed one order avoid zero p-values. 
adjust multiple testing adjustment tests differentially correlated pairs taxa; possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool) one methods provided p.adjust adjust2 multiple testing adjustment tests taxa pair differentially correlated taxa; possible methods provided p.adjust (hundred tests necessary local fdr correction) trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). alpha significance level lfdrThresh defines threshold local fdr \"lfdr\" chosen method multiple testing correction; defaults 0.2, means correlations corresponding local fdr less equal 0.2 identified significant nPerm number permutations matchDesign Numeric vector two elements specifying optional matched-group (.e. matched-pair) design, used permutation tests netCompare diffnet. c(1,1) corresponds matched-pair design. 1:2 matching, instance, defined c(1,2), means first sample group 1 matched first two samples group 2 . appropriate order samples must ensured. NULL, group memberships shuffled randomly group sizes identical original data set ensured. callNetConstr call inherited netConstruct(). cores number CPU cores (permutation tests executed parallel) verbose TRUE, status messages numbers SparCC iterations printed logFile character string naming log file within current iteration number stored seed optional seed reproducibility results fileLoadAssoPerm character giving name (without extension) path file storing \"permuted\" association/dissimilarity matrices exported setting storeAssoPerm TRUE. used permutation tests. Set NULL existing associations used. fileLoadCountsPerm character giving name (without extension) path file storing \"permuted\" count matrices exported setting storeCountsPerm TRUE. used permutation tests, fileLoadAssoPerm = NULL. Set NULL existing count matrices used. 
storeAssoPerm logical indicating whether association (dissimilarity) matrices permuted data stored file. filename given via fileStoreAssoPerm. TRUE, computed \"permutation\" association/dissimilarity matrices can reused via fileLoadAssoPerm save runtime. Defaults FALSE. fileStoreAssoPerm character giving file name store matrix containing matrix associations/dissimilarities permuted data. Can also path. storeCountsPerm logical indicating whether permuted count matrices stored external file. Defaults FALSE. fileStoreCountsPerm character vector two elements giving names two files storing permuted count matrices belonging two groups. assoPerm used anymore.","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"gill2010statisticalNetCoMi knijnenburg2009fewerNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/editLabels.html","id":null,"dir":"Reference","previous_headings":"","what":"Edit labels — editLabels","title":"Edit labels — editLabels","text":"Function editing node labels, .e., shortening certain length removing unwanted characters. function used NetCoMi's plot functions plot.microNetProps plot.diffnet.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Edit labels — editLabels","text":"","code":"editLabels( x, shortenLabels = c(\"intelligent\", \"simple\", \"none\"), labelLength = 6, labelPattern = NULL, addBrack = TRUE, charToRm = NULL, verbose = TRUE )"},{"path":"https://netcomi.de/reference/editLabels.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Edit labels — editLabels","text":"x character vector node labels. shortenLabels character indicating shorten labels. 
Available options : \"intelligent\" Elements charToRm removed, labels shortened length labelLength, duplicates removed using labelPattern. \"simple\" Elements charToRm removed labels shortened length labelLength. \"none\" Labels shortened. labelLength integer defining length labels shall shortened shortenLabels used. Defaults 6. labelPattern vector three five elements, used argument shortenLabels set \"intelligent\". cutting label length labelLength leads duplicates, label shortened according labelPattern, first entry gives length first part, second entry used separator, third entry length third part. labelPattern five elements shortened labels still unique, fourth element serves separator, fifth element gives length last label part. Defaults c(4, \"'\", 3, \"'\", 3). See details example. addBrack logical indicating whether add closing square bracket. TRUE, \"]\" added first part contains \"[\". charToRm character vector giving one patterns remove labels. verbose logical. TRUE, function allowed return messages.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Edit labels — editLabels","text":"Character vector edited labels.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Edit labels — editLabels","text":"Consider vector three bacteria names: \"Streptococcus1\", \"Streptococcus2\", \"Streptomyces\". shortenLabels = \"simple\" labelLength = 6 leads shortened labels: \"Strept\", \"Strept\", \"Strept\", distinguishable. shortenLabels = \"intelligent\" labelPattern = c(5, \"'\", 3) leads shortened labels: \"Strep'coc\", \"Strep'coc\", \"Strep'myc\", first two distinguishable. shortenLabels = \"intelligent\" labelPattern = c(5, \"'\", 3, \"'\", 3) leads shortened labels: \"Strep'coc'1\", \"Strep'coc'2\", \"Strep'myc\", original labels can inferred. 
intelligent approach follows: First, labels shortened defined length (argument labelLength). labelPattern applied duplicated labels. group duplicates, third label part starts letter two labels different first time. five-part pattern (given) applies group duplicates consists two labels shortened labels unique applying three-part pattern. , fifth part starts letter labels different first time. message printed returned labels unique.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Edit labels — editLabels","text":"","code":"labels <- c(\"Salmonella\", \"Clostridium\", \"Clostridiales(O)\", \"Ruminococcus\", \"Ruminococcaceae(F)\", \"Enterobacteriaceae\", \"Enterococcaceae\", \"[Bacillus] alkalinitrilicus\", \"[Bacillus] alkalisediminis\", \"[Bacillus] oceani\") # Use the \"simple\" method to shorten labels editLabels(labels, shortenLabels = \"simple\", labelLength = 6) #> [1] \"Salmon\" \"Clostr\" \"Clostr\" \"Rumino\" \"Rumino\" \"Entero\" \"Entero\" \"[Bacil\" #> [9] \"[Bacil\" \"[Bacil\" # -> Original labels cannot be inferred from shortened labels # Use the \"intelligent\" method to shorten labels with three-part pattern editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, labelPattern = c(6, \"'\", 4)) #> Shortened labels could not be made unique. 
#> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ales\" \"Rumino'us \" \"Rumino'acea\" #> [6] \"Entero'bact\" \"Entero'cocc\" \"[Bacil]'alka\" \"[Bacil]'alka\" \"[Bacil]'ocea\" # -> [Bacillus] alkalinitrilicus and [Bacillus] alkalisediminis not # distinguishable # Use the \"intelligent\" method to shorten labels with five-part pattern editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, labelPattern = c(6, \"'\", 3, \"'\", 3)) #> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ale\" \"Rumino'us \" #> [5] \"Rumino'ace\" \"Entero'bac\" \"Entero'coc\" \"[Bacil]'alk'nit\" #> [9] \"[Bacil]'alk'sed\" \"[Bacil]'oce\" # Same as before but no brackets are added editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, addBrack = FALSE, labelPattern = c(6, \"'\", 3, \"'\", 3)) #> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ale\" \"Rumino'us \" #> [5] \"Rumino'ace\" \"Entero'bac\" \"Entero'coc\" \"[Bacil'alk'nit\" #> [9] \"[Bacil'alk'sed\" \"[Bacil'oce\" # Remove character pattern(s) (can also be a vector with multiple patterns) labels <- c(\"g__Faecalibacterium\", \"g__Clostridium\", \"g__Eubacterium\", \"g__Bifidobacterium\", \"g__Bacteroides\") editLabels(labels, charToRm = \"g__\") #> [1] \"Faecal\" \"Clostr\" \"Eubact\" \"Bifido\" \"Bacter\""},{"path":"https://netcomi.de/reference/gcoda.html","id":null,"dir":"Reference","previous_headings":"","what":"gCoda: conditional dependence network inference for compositional data — gcoda","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"parallelized implementation gCoda approach (Fang et al., 2017), published GitHub (Fang, 2016).","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"","code":"gcoda( x, counts = F, pseudo = 0.5, lambda.min.ratio = 1e-04, nlambda = 15, ebic.gamma = 0.5, cores = 1L, 
verbose = TRUE )"},{"path":"https://netcomi.de/reference/gcoda.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"x numeric matrix (nxp) samples rows OTUs/taxa columns. counts logical indicating whether x constains counts fractions. Defaults FALSE meaning x contains fractions rows sum 1. pseudo numeric value giving pseudo count, added counts counts = TRUE. Default 0.5. lambda.min.ratio numeric value specifying lambda(max) / lambda(min). Defaults 1e-4. nlambda numberic value (integer) giving tuning parameters. Defaults 15. ebic.gamma numeric value specifying gamma value EBIC. Defaults 0.5. cores integer indicating number CPU cores used computation. Defaults 1L. cores > 1L, foreach used parallel execution. verbose logical indicating whether progress indicator shown (TRUE default).","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"list containing following elements:","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"fang2016gcodaGithubNetCoMi fang2017gcodaNetCoMi","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"Fang Huaying, Peking University (R-Code documentation) Stefanie Peschel (Parts documentation; Parallelization)","code":""},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":null,"dir":"Reference","previous_headings":"","what":"Install all packages used within NetCoMi — 
installNetCoMiPacks","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"function installs R packages used NetCoMi listed dependencies imports NetCoMi's description file. optional packages needed certain network construction settings. BiocManager::install used installation since installs updates Bioconductor well CRAN packages. Installed CRAN packages: cccd LaplacesDemon propr zCompositions Installed Bioconductor packages: ccrepe DESeq2 discordant limma metagenomeSeq installed via function, packages installed respective NetCoMi functions needed.","code":""},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"","code":"installNetCoMiPacks(onlyMissing = TRUE, lib = NULL, ...)"},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"onlyMissing logical. TRUE (default), installed.packages used read packages installed given library missing packages installed. FALSE, packages installed updated (already installed). lib character vector giving library directories install missing packages. NULL, first element .libPaths used. ... 
Additional arguments used install install.packages.","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":null,"dir":"Reference","previous_headings":"","what":"Multiple testing adjustment — multAdjust","title":"Multiple testing adjustment — multAdjust","text":"function adjusts vector p-values multiple testing","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Multiple testing adjustment — multAdjust","text":"","code":"multAdjust( pvals, adjust = \"adaptBH\", trueNullMethod = \"convest\", pTrueNull = NULL, verbose = FALSE )"},{"path":"https://netcomi.de/reference/multAdjust.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Multiple testing adjustment — multAdjust","text":"pvals numeric vector p-values adjust character specifying method used adjustment. Can \"lfdr\", \"adaptBH\", one methods provided p.adjust. trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). pTrueNull proportion true null hypothesis used adaptBH method. NULL, proportion computed using method defined via trueNullMethod. 
verbose TRUE, progress messages returned.","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Multiple testing adjustment — multAdjust","text":"farcomeni2007someNetCoMi","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":null,"dir":"Reference","previous_headings":"","what":"Microbiome Network Analysis — netAnalyze","title":"Microbiome Network Analysis — netAnalyze","text":"Determine network properties objects class microNet.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Microbiome Network Analysis — netAnalyze","text":"","code":"netAnalyze(net, # Centrality related: centrLCC = TRUE, weightDeg = FALSE, normDeg = TRUE, normBetw = TRUE, normClose = TRUE, normEigen = TRUE, # Cluster related: clustMethod = NULL, clustPar = NULL, clustPar2 = NULL, weightClustCoef = TRUE, # Hub related: hubPar = \"eigenvector\", hubQuant = 0.95, lnormFit = FALSE, # Graphlet related: graphlet = TRUE, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), gcmHeat = TRUE, gcmHeatLCC = TRUE, # Further arguments: avDissIgnoreInf = FALSE, sPathAlgo = \"dijkstra\", sPathNorm = TRUE, normNatConnect = TRUE, connectivity = TRUE, verbose = 1 )"},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Microbiome Network Analysis — netAnalyze","text":"net object class microNet (returned netConstruct). centrLCC logical indicating whether compute centralities largest connected component (LCC). TRUE (default), centrality values disconnected components zero. weightDeg logical. TRUE, weighted degree used (see strength). Default FALSE. automatically set TRUE fully connected (dense) network. normDeg, normBetw, normClose, normEigen logical. 
TRUE (default measures), normalized version respective centrality values returned. clustMethod character indicating clustering algorithm. Possible values \"hierarchical\" hierarchical algorithm based dissimilarity values, clustering methods provided igraph package (see communities possible methods). Defaults \"cluster_fast_greedy\" association-based networks \"hierarchical\" sample similarity networks. clustPar list parameters passed clustering functions. hierarchical clustering used, parameters passed hclust cutree (default list(method = \"average\", k = 3)). clustPar2 clustPar second network. NULL net contains two networks, clustPar used second network well. weightClustCoef logical indicating whether (global) clustering coefficient weighted (TRUE, default) unweighted (FALSE). hubPar character vector one elements (centrality measures) used identifying hub nodes. Possible values degree, betweenness, closeness, eigenvector. multiple measures given, hubs nodes highest centrality selected measures. See details. hubQuant quantile used determining hub nodes. Defaults 0.95. lnormFit hubs nodes centrality value 95% quantile fitted log-normal distribution (lnormFit = TRUE) empirical distribution centrality values (lnormFit = FALSE; default). graphlet logical. TRUE (default), graphlet-based network properties computed: orbit counts defined orbits corresponding Graphlet Correlation Matrix (gcm). orbits numeric vector integers 0 14 defining orbits used calculating GCM. Minimum length 2. Defaults c(0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11), thus excluding redundant orbits orbit o3. gcmHeat logical indicating heatmap GCM(s) plotted. Default TRUE. gcmHeatLCC logical. GCM heatmap plotted LCC TRUE (default) whole network FALSE. avDissIgnoreInf logical indicating whether ignore infinities calculating average dissimilarity. FALSE (default), infinity values set 1. sPathAlgo character indicating algorithm used computing shortest paths node pairs. 
distances (igraph) used for the shortest path calculation. Possible values are: \"unweighted\", \"dijkstra\" (default), \"bellman-ford\", \"johnson\", and \"automatic\" (the fastest suitable algorithm is used). Shortest paths are needed for the average (shortest) path length and for closeness centrality. sPathNorm logical. If TRUE (default), shortest paths are normalized by the average dissimilarity (only connected nodes are considered), i.e., a path is then interpreted as the number of steps with average dissimilarity. If FALSE, a shortest path is the minimum sum of dissimilarities between two nodes. normNatConnect logical. If TRUE (default), the normalized natural connectivity is returned. connectivity logical. If TRUE (default), edge and vertex connectivity are calculated. Might be disabled to reduce execution time. verbose integer indicating the level of verbosity. Possible values: \"0\": no messages, \"1\": only important messages, \"2\" (default): all progress messages are shown. Can also be logical.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Microbiome Network Analysis — netAnalyze","text":"An object of class microNetProps containing the following elements:","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Microbiome Network Analysis — netAnalyze","text":"Definitions: (Connected) Component: A subnetwork where any two nodes are connected by a path. Number of components: Number of connected components. Since a single node is connected to itself by the trivial path, each single node is a component. Largest connected component (LCC): The connected component with the highest number of nodes. Shortest paths: Computed using distances. The algorithm is defined via sPathAlgo. Normalized shortest paths (if sPathNorm is TRUE) are calculated by dividing the shortest paths by the average dissimilarity (see below). Global network properties: Relative LCC size = (# nodes in the LCC) / (# nodes in the complete network). Clustering coefficient: The weighted (global) clustering coefficient is the arithmetic mean of the local clustering coefficients as defined by Barrat et al. (computed by transitivity with type = \"barrat\"), where NAs are ignored. The unweighted (global) clustering coefficient is computed using transitivity with type = \"global\". Modularity: The modularity score for the determined clustering, computed using modularity.igraph. Positive edge percentage: Percentage of edges with positive estimated association out of the total number of edges. Edge density: Computed using edge_density. Natural connectivity: Computed using natural.connectivity. The \"norm\" parameter is defined by normNatConnect. Vertex / Edge connectivity: Computed using vertex_connectivity and edge_connectivity. Both are equal to zero for a disconnected network. Average dissimilarity: Computed as the mean of the dissimilarity values (lower triangle of dissMat). avDissIgnoreInf specifies whether to ignore infinite dissimilarities. The average dissimilarity of an empty network is 1. Average path length: Computed as the mean of the shortest paths (normalized or unnormalized). The av. path length of an empty network is 1. Clustering algorithms: Hierarchical clustering: Based on dissimilarity values. Computed using hclust and cutree. cluster_optimal: Modularity optimization. See cluster_optimal. cluster_fast_greedy: Fast greedy modularity optimization. See cluster_fast_greedy. cluster_louvain: Multilevel optimization of modularity. See cluster_louvain. cluster_edge_betweenness: Based on edge betweenness. Dissimilarity values are used. See cluster_edge_betweenness. cluster_leading_eigen: Based on the leading eigenvector of the community matrix. See cluster_leading_eigen. cluster_spinglass: Finds communities via a spin-glass model and simulated annealing. See cluster_spinglass. cluster_walktrap: Finds communities via short random walks. See cluster_walktrap. Hubs: Hubs are nodes with the highest centrality values for one or more centrality measures. The \"highest values\" regarding a centrality measure are defined as values lying above a certain quantile of either the empirical distribution of the centralities (lnormFit = FALSE) or of a fitted log-normal distribution (lnormFit = TRUE; fitdistr is used for fitting). The quantile is set using hubQuant. If hubPar contains multiple measures, the centrality values of a hub node must lie above the given quantile for all these measures at the same time. Centrality measures: Via centrLCC it is decided whether centralities are calculated for the whole network or only for the largest connected component. If centralities are computed only for the LCC, nodes outside the LCC get a centrality value of zero. Degree: The unweighted degree (normalized or unnormalized) is computed using degree, the weighted degree using strength. Betweenness centrality: The unnormalized and normalized betweenness centrality are computed using betweenness. Closeness centrality: Unnormalized: closeness = sum(1/shortest paths). Normalized: closeness_norm = closeness / (# nodes – 1). Eigenvector centrality: If centrLCC == FALSE and the network consists of more than one component, the eigenvector centrality (EVC) is computed for each component separately (using eigen_centrality) and scaled according to component size to overcome the fact that nodes in smaller components would otherwise have a higher EVC. If normEigen == TRUE, the EVC values are divided by the maximum EVC value. The EVC of single nodes is zero. Otherwise, the EVC is computed for the LCC only, using eigen_centrality (whose scale argument is set according to normEigen). Graphlet-based properties: Orbit counts: Counts of node orbits in graphlets with 2 to 4 nodes. See Hocevar and Demsar (2016) for details. The count4 function of the orca package is used for orbit counting. Graphlet Correlation Matrix (GCM): Matrix with Spearman's correlations between the network's (non-redundant) node orbits (Yaveroglu et al., 2014). By default, the 11 non-redundant orbits are used. These are grouped according to their role: orbit 0 represents the degree, orbits (2, 5, 7) represent nodes within a chain, orbits (8, 10, 11) represent nodes in a cycle, and orbits (6, 9, 4, 1) represent a terminal node.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Microbiome Network Analysis — netAnalyze","text":"Hocevar and Demsar (2016); Yaveroglu et al. (2014)","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Microbiome Network Analysis — netAnalyze","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction amgut_net1 <- netConstruct(amgut1.filt, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.4) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
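# (Illustrative sketch, not part of the original example.) The threshold
# sparsification above keeps only associations with |cor| >= 0.4. With base R
# and hypothetical 'toy_*' objects, the clr + Pearson + threshold steps can
# be mimicked like this:
toy_counts <- matrix(rpois(200, lambda = 10) + 1, nrow = 20)   # 20 samples x 10 taxa
toy_clr <- t(apply(toy_counts, 1, function(x) log(x) - mean(log(x))))   # clr per sample
toy_cor <- cor(toy_clr, method = \"pearson\")   # taxon-taxon associations
toy_adja <- toy_cor * (abs(toy_cor) >= 0.4)   # zero out weak associations
diag(toy_adja) <- 0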
# Network analysis # Using eigenvector centrality as hub score amgut_props1 <- netAnalyze(amgut_net1, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\") summary(amgut_props1, showCentr = \"eigenvector\", numbNodes = 15L, digits = 3L) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.240 #> Clustering coefficient 0.733 #> Modularity 0.338 #> Positive edge percentage 86.364 #> Edge density 0.333 #> Natural connectivity 0.190 #> Vertex connectivity 1.000 #> Edge connectivity 1.000 #> Average dissimilarity* 0.820 #> Average path length** 1.526 #> #> Whole network: #> #> Number of components 33.000 #> Clustering coefficient 0.523 #> Modularity 0.512 #> Positive edge percentage 89.286 #> Edge density 0.023 #> Natural connectivity 0.028 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 30 6 4 2 5 3 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 119010 #> 71543 #> 9715 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Eigenvector centrality (normalized): #> #> 9715 1.000 #> 119010 0.733 #> 71543 0.723 #> 9753 0.670 #> 307981 0.670 #> 301645 0.670 #> 305760 0.669 #> 512309 0.607 #> 188236 0.131 #> 364563 0.026 #> 326792 0.023 #> 311477 0.005 #> 73352 0.000 #> 331820 0.000 #> 248140 0.000 # Using degree, betweenness and closeness centrality as hub scores amgut_props2 <- 
netAnalyze(amgut_net1, clustMethod = \"cluster_fast_greedy\", hubPar = c(\"degree\", \"betweenness\", \"closeness\")) summary(amgut_props2, showCentr = \"all\", numbNodes = 5L, digits = 5L) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.24000 #> Clustering coefficient 0.73277 #> Modularity 0.33781 #> Positive edge percentage 86.36364 #> Edge density 0.33333 #> Natural connectivity 0.19028 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.82023 #> Average path length** 1.52564 #> #> Whole network: #> #> Number of components 33.00000 #> Clustering coefficient 0.52341 #> Modularity 0.51212 #> Positive edge percentage 89.28571 #> Edge density 0.02286 #> Natural connectivity 0.02791 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 30 6 4 2 5 3 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> No hubs detected. 
#> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> #> 9715 0.14286 #> 188236 0.10204 #> 307981 0.08163 #> 71543 0.08163 #> 512309 0.08163 #> #> Betweenness centrality (normalized): #> #> 9715 0.50909 #> 188236 0.47273 #> 307981 0.36364 #> 364563 0.18182 #> 73352 0.00000 #> #> Closeness centrality (normalized): #> #> 305760 2.17422 #> 301645 2.13487 #> 307981 2.12892 #> 119010 1.36913 #> 71543 1.33707 #> #> Eigenvector centrality (normalized): #> #> 9715 1.00000 #> 119010 0.73317 #> 71543 0.72255 #> 9753 0.67031 #> 307981 0.67026 # Calculate centralities only for the largest connected component amgut_props3 <- netAnalyze(amgut_net1, centrLCC = TRUE, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\") summary(amgut_props3, showCentr = \"none\", clusterLCC = TRUE) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.24000 #> Clustering coefficient 0.73277 #> Modularity 0.33781 #> Positive edge percentage 86.36364 #> Edge density 0.33333 #> Natural connectivity 0.19028 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.82023 #> Average path length** 1.52564 #> #> Whole network: #> #> Number of components 33.00000 #> Clustering coefficient 0.52341 #> Modularity 0.51212 #> Positive edge percentage 89.28571 #> Edge density 0.02286 #> Natural connectivity 0.02791 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the LCC #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 1 2 3 #> #: 4 5 3 #> #> ______________________________ #> Hubs #> - In 
alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 119010 #> 71543 #> 9715 # Network plot plot(amgut_props1) plot(amgut_props2) plot(amgut_props3) #---------------------------------------------------------------------------- # Plot the GCM heatmap plotHeat(mat = amgut_props1$graphletLCC$gcm1, pmat = amgut_props1$graphletLCC$pAdjust1, type = \"mixed\", title = \"GCM\", colorLim = c(-1, 1), mar = c(2, 0, 2, 0)) # Add rectangles graphics::rect(xleft = c( 0.5, 1.5, 4.5, 7.5), ybottom = c(11.5, 7.5, 4.5, 0.5), xright = c( 1.5, 4.5, 7.5, 11.5), ytop = c(10.5, 10.5, 7.5, 4.5), lwd = 2, xpd = NA) text(6, -0.2, xpd = NA, \"Significance codes: ***: 0.001; **: 0.01; *: 0.05\") #---------------------------------------------------------------------------- # Dissimilarity-based network (where nodes are subjects) amgut_net4 <- netConstruct(amgut1.filt, measure = \"aitchison\", filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = 30), zeroMethod = \"multRepl\", sparsMethod = \"knn\") #> Checking input arguments ... #> Done. #> Infos about changed arguments: #> Counts normalized to fractions for measure \"aitchison\". #> Data filtering ... #> 259 samples removed. #> 127 taxa and 30 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... #> Done. #> #> Normalization: #> Counts normalized by total sum scaling. #> #> Calculate 'aitchison' dissimilarities ... #> Done. #> #> Sparsify dissimilarities via 'knn' ... #> Done. 
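# (Illustrative sketch with a hypothetical toy matrix, not part of the
# original example.) clustMethod = \"hierarchical\" with clustPar = list(k = 3)
# amounts to hclust() + cutree() on a dissimilarity matrix; the linkage
# method used below is an assumption for illustration only.
toy_mat <- matrix(runif(100), 10, 10)
toy_diss <- as.dist((toy_mat + t(toy_mat)) / 2)   # symmetrized dissimilarities
toy_clust <- cutree(hclust(toy_diss, method = \"average\"), k = 3)
table(toy_clust)   # cluster sizes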
amgut_props4 <- netAnalyze(amgut_net4, clustMethod = \"hierarchical\", clustPar = list(k = 3)) plot(amgut_props4)"},{"path":"https://netcomi.de/reference/netCompare.html","id":null,"dir":"Reference","previous_headings":"","what":"Group Comparison of Network Properties — netCompare","title":"Group Comparison of Network Properties — netCompare","text":"Calculate and compare network properties of microbial networks using Jaccard's index, the adjusted Rand index, the Graphlet Correlation Distance, and permutation tests.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Group Comparison of Network Properties — netCompare","text":"","code":"netCompare( x, permTest = FALSE, jaccQuant = 0.75, lnormFit = NULL, testRand = TRUE, nPermRand = 1000L, gcd = TRUE, gcdOrb = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), verbose = TRUE, nPerm = 1000L, adjust = \"adaptBH\", trueNullMethod = \"convest\", cores = 1L, logFile = NULL, seed = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), returnPermProps = FALSE, returnPermCentr = FALSE, assoPerm = NULL, dissPerm = NULL )"},{"path":"https://netcomi.de/reference/netCompare.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Group Comparison of Network Properties — netCompare","text":"x An object of class microNetProps (returned by netAnalyze). permTest logical. If TRUE, a permutation test is conducted to test centrality measures and global network properties for group differences. Defaults to FALSE. May lead to considerably increased execution time! jaccQuant numeric value between 0 and 1 specifying the quantile used as a threshold to identify the most central nodes for each centrality measure. The resulting sets of nodes are used to calculate Jaccard's index (see details). Default is 0.75. lnormFit logical indicating whether a log-normal distribution should be fitted to the calculated centrality values for determining Jaccard's index (see details). If NULL (default), the value is adopted from the input, i.e., it equals the method used for determining hub nodes. testRand logical. If TRUE, a permutation test is conducted for the adjusted Rand index (H0: ARI = 0). Execution time may be increased for large networks. nPermRand integer giving the number of permutations used for testing whether the adjusted Rand index is significantly different from zero. Ignored if testRand = FALSE. Defaults to 1000L. gcd logical. If TRUE (default), the Graphlet Correlation Distance (GCD) is computed. gcdOrb numeric vector with integers between 0 and 14 defining the orbits used for calculating the GCD. Minimum length is 2. Defaults to c(0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11), thus excluding redundant orbits such as orbit o3. verbose logical. If TRUE (default), status messages are shown. nPerm integer giving the number of permutations if permTest = TRUE. Default is 1000L. adjust character indicating the method used for multiple testing adjustment of the permutation p-values. Possible values are \"lfdr\" (default) for local false discovery rate correction (via fdrtool), \"adaptBH\" for the adaptive Benjamini-Hochberg method (Benjamini and Hochberg, 2000), or one of the methods provided by p.adjust (see p.adjust.methods()). trueNullMethod character indicating the method used for estimating the proportion of true null hypotheses from a vector of p-values. Used for the adaptive Benjamini-Hochberg method for multiple testing adjustment (chosen by adjust = \"adaptBH\"). Accepts the options provided for the method argument of propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", and \"hist\". Can alternatively be \"farco\" for the \"iterative plug-in method\" proposed by Farcomeni (2007). cores integer indicating the number of CPU cores used for the permutation tests. If cores > 1, the tests are performed in parallel. Limited to the number of available CPU cores determined by detectCores. Defaults to 1L (no parallelization). logFile character string naming a log file to which the current iteration number is written (if permutation tests are performed). Defaults to NULL so that no log file is generated. seed integer giving a seed for reproducibility of the results. fileLoadAssoPerm character giving the name or path (without file extension) of the file containing the \"permuted\" association/dissimilarity matrices that have been generated by setting storeAssoPerm to TRUE. Only used for permutation tests. If NULL, no existing associations are used. fileLoadCountsPerm character giving the name or path (without file extension) of the file containing the \"permuted\" count matrices that have been generated by setting storeCountsPerm to TRUE. Only used for permutation tests, and only if fileLoadAssoPerm = NULL. If NULL, no existing count matrices are used. storeAssoPerm logical indicating whether the association/dissimilarity matrices of the permuted data should be saved to a file. The file name is given via fileStoreAssoPerm. If TRUE, the computed \"permutation\" association/dissimilarity matrices can be reused via fileLoadAssoPerm to save runtime. Defaults to FALSE. Ignored if fileLoadAssoPerm is not NULL. fileStoreAssoPerm character giving the name of the file to which the matrix with associations/dissimilarities of the permuted data is saved. Can also be a path. storeCountsPerm logical indicating whether the permuted count matrices should be saved to an external file. Defaults to FALSE. Ignored if fileLoadCountsPerm is not NULL. fileStoreCountsPerm character vector with two elements giving the names of the two files storing the permuted count matrices belonging to the two groups. returnPermProps logical. If TRUE, the global properties and their absolute differences for the permuted data are returned. returnPermCentr logical. If TRUE, the centralities and their absolute differences for the permuted data are returned. assoPerm Only needed for output generated with NetCoMi v1.0.1! A list with two elements used for the permutation procedure. Each entry must contain association matrices for \"nPerm\" permutations. This can be the \"assoPerm\" value as part of the output returned by either diffnet or netCompare. dissPerm Only needed for output generated with NetCoMi v1.0.1! Usage analog to assoPerm if a dissimilarity measure was used for network construction.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Group Comparison of Network Properties — netCompare","text":"An object of class microNetComp with the following elements: Additional output if permutation tests were conducted:","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Group Comparison of Network Properties — netCompare","text":"Permutation procedure: Used for testing centrality measures and global network properties for group differences. The null hypothesis of these tests is defined as $$H_0: c1_i - c2_i = 0,$$ where \\(c1_i\\) and \\(c2_i\\) denote the centrality values of taxon i in group 1 and group 2, respectively. To generate a sampling distribution of the differences under \\(H_0\\), the group labels are randomly reassigned to the samples while the group sizes are kept fixed. The associations are then re-estimated for each permuted data set. The p-values are calculated as the proportion of \"permutation-differences\" being larger than or equal to the observed difference. For non-exact tests, a pseudo-count is added to the numerator and denominator in order to avoid p-values of zero. Several methods for adjusting the p-values for multiplicity are available. Jaccard's index: Jaccard's index expresses for each centrality measure how equal the sets of most central nodes are among the two networks. These sets are defined as the nodes with a centrality value above a defined quantile (via jaccQuant) of either the empirical distribution of the centrality values (lnormFit = FALSE) or of a fitted log-normal distribution (lnormFit = TRUE). The index ranges from 0 to 1, where 1 means the sets of central nodes are exactly equal in both networks and 0 indicates that the central nodes are completely different. The index is calculated as suggested by Real and Vargas (1996). Rand index: The Rand index is used to express whether the determined clusterings are equal in both groups. The adjusted Rand index (ARI) ranges from -1 to 1, where 1 indicates that the two clusterings are exactly equal. The expected index value for two random clusterings is 0. The implemented test procedure is in accordance with the explanations of Qannari et al. (2014), where a p-value below the alpha level means that the ARI is significantly higher than expected for two random clusterings. Graphlet Correlation Distance: A graphlet-based distance measure, defined as the Euclidean distance of the upper triangle values of the Graphlet Correlation Matrices (GCM) of two networks (Yaveroglu et al., 2014). The GCM of a network is a matrix with Spearman's correlations between the network's node orbits (Hocevar and Demsar, 2016). See calcGCD for details.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Group Comparison of Network Properties — netCompare","text":"Benjamini and Hochberg (2000); Farcomeni (2007); Gill et al. (2010); Hocevar and Demsar (2016); Qannari et al. (2014); Real and Vargas (1996); Yaveroglu et al. (2014)","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netCompare.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Group Comparison of Network Properties — netCompare","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") amgut_season_yes #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 121 samples ] #> sample_data() Sample Data: [ 121 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] amgut_season_no #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 163 samples ] #> sample_data() Sample Data: [ 163 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] # Filter the 121 samples (sample size of the smaller group) with highest # frequency to make the sample sizes 
equal and thus ensure comparability. n_yes <- phyloseq::nsamples(amgut_season_yes) # Network construction amgut_net <- netConstruct(data = amgut_season_yes, data2 = amgut_season_no, measure = \"pearson\", filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 0 samples removed in data set 1. #> 42 samples removed in data set 2. #> 114 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 24 taxa and 120 samples remaining in group 1. #> 24 taxa and 121 samples remaining in group 2. #> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Network analysis # Note: Please zoom into the GCM plot or open a new window using: # x11(width = 10, height = 10) amgut_props <- netAnalyze(amgut_net, clustMethod = \"cluster_fast_greedy\") # Network plot plot(amgut_props, sameLayout = TRUE, title1 = \"Seasonal allergies\", title2 = \"No seasonal allergies\") #-------------------------- # Network comparison # Without permutation tests amgut_comp1 <- netCompare(amgut_props, permTest = FALSE) #> Checking input arguments ... #> Done. 
summary(amgut_comp1) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = FALSE) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' difference #> Number of components 1.000 2.000 1.000 #> Clustering coefficient 0.534 0.448 0.086 #> Modularity 0.168 0.155 0.012 #> Positive edge percentage 32.099 39.683 7.584 #> Edge density 0.293 0.249 0.044 #> Natural connectivity 0.070 0.068 0.002 #> Vertex connectivity 1.000 1.000 0.000 #> Edge connectivity 1.000 1.000 0.000 #> Average dissimilarity* 0.920 0.929 0.009 #> Average path length** 1.496 1.558 0.062 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.411 0.397 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203 0.954 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. #> 158660 0.522 0.261 0.261 #> 469709 0.391 0.174 0.217 #> 303304 0.261 0.043 0.217 #> 184983 0.174 0.348 0.174 #> 10116 0.478 0.304 0.174 #> 512309 0.565 0.391 0.174 #> 278234 0.174 0.043 0.130 #> 361496 0.130 0.000 0.130 #> 71543 0.522 0.391 0.130 #> 188236 0.565 0.435 0.130 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. #> 184983 0.000 0.147 0.147 #> 322235 0.087 0.195 0.108 #> 190597 0.099 0.000 0.099 #> 188236 0.225 0.143 0.082 #> 71543 0.123 0.043 0.079 #> 512309 0.083 0.139 0.056 #> 326792 0.000 0.043 0.043 #> 73352 0.055 0.095 0.040 #> 248140 0.000 0.026 0.026 #> 278234 0.020 0.000 0.020 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. #> 361496 0.643 0.000 0.643 #> 303304 0.790 0.510 0.280 #> 158660 1.011 0.812 0.200 #> 248140 0.478 0.675 0.197 #> 469709 0.931 0.772 0.159 #> 278234 0.678 0.539 0.139 #> 184983 0.775 0.909 0.135 #> 512309 1.045 0.912 0.133 #> 181016 0.544 0.665 0.121 #> 10116 0.966 0.850 0.115 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
#> 158660 0.971 0.314 0.657 #> 184983 0.319 0.774 0.455 #> 322235 0.857 0.403 0.454 #> 469709 0.695 0.309 0.386 #> 303304 0.397 0.037 0.360 #> 90487 0.483 0.200 0.283 #> 307981 0.682 0.965 0.283 #> 364563 0.716 0.990 0.274 #> 326792 0.707 0.954 0.246 #> 512309 1.000 0.828 0.172 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # \\donttest{ # With permutation tests (with only 100 permutations to decrease runtime) amgut_comp2 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, cores = 1L, storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm\", seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Files 'countsPerm1.bmat, countsPerm1.desc.txt, #> countsPerm2.bmat, and countsPerm2.desc.txt created. #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Execute permutation tests ... 
#> [progress bar output omitted] #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. # Rerun with a different adjustment method ... # ... using the stored permutation count matrices amgut_comp3 <- netCompare(amgut_props, adjust = \"BH\", permTest = TRUE, nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... 
#> | |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'BH' ... #> Done. # ... using the stored permutation association matrices amgut_comp4 <- netCompare(amgut_props, adjust = \"BH\", permTest = TRUE, nPerm = 100L, fileLoadAssoPerm = \"assoPerm\", seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... #> | |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'BH' ... #> Done. 
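The runs above show the key workflow: permutations are computed once, stored to file, and then reloaded so that different multiple-testing adjustments can be compared without recomputing anything. A minimal sketch of that pattern, assuming the `amgut_props` object and the stored file \"assoPerm\" from the calls above (the loop over adjustment methods is illustrative, not part of the original example):

```r
# Sketch: reuse the stored permutation association matrices to compare
# several p-value adjustment methods cheaply. Only the adjusted p-values
# change between iterations; the permutation test statistics are identical
# because the same stored matrices (and seed) are reused each time.
for (adj in c(\"BH\", \"adaptBH\")) {
  comp <- netCompare(amgut_props,
                     adjust = adj,            # adjustment method under comparison
                     permTest = TRUE,
                     nPerm = 100L,
                     fileLoadAssoPerm = \"assoPerm\",  # reload stored permutations
                     seed = 123456)
  summary(comp)
}
```

Because the expensive step (association estimation for each permuted data set) is skipped entirely when `fileLoadAssoPerm` is given, each iteration reduces to p-value computation and adjustment.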
# amgut_comp3 and amgut_comp4 should be equal all.equal(amgut_comp3$adjaMatrices, amgut_comp4$adjaMatrices) #> [1] TRUE all.equal(amgut_comp3$properties, amgut_comp4$properties) #> [1] TRUE summary(amgut_comp2) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, cores = 1, #> seed = 123456, storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm\", #> storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", #> \"countsPerm2\")) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' abs.diff. p-value #> Number of components 1.000 2.000 1.000 0.811881 #> Clustering coefficient 0.534 0.448 0.086 0.435644 #> Modularity 0.168 0.155 0.012 0.881188 #> Positive edge percentage 32.099 39.683 7.584 0.108911 #> Edge density 0.293 0.249 0.044 0.524752 #> Natural connectivity 0.070 0.068 0.002 0.891089 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.920 0.929 0.009 0.643564 #> Average path length** 1.496 1.558 0.062 0.712871 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. 
clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.412 0.399 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203000 0.95400 #> p-value 0.762376 0.90099 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.522 0.261 0.261 0.984441 #> 469709 0.391 0.174 0.217 0.984441 #> 303304 0.261 0.043 0.217 0.984441 #> 184983 0.174 0.348 0.174 0.984441 #> 10116 0.478 0.304 0.174 0.984441 #> 512309 0.565 0.391 0.174 0.984441 #> 278234 0.174 0.043 0.130 0.984441 #> 361496 0.130 0.000 0.130 0.984441 #> 71543 0.522 0.391 0.130 0.984441 #> 188236 0.565 0.435 0.130 0.984441 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 184983 0.000 0.147 0.147 0.861386 #> 322235 0.087 0.195 0.108 0.891089 #> 190597 0.099 0.000 0.099 0.861386 #> 188236 0.225 0.143 0.082 0.891089 #> 71543 0.123 0.043 0.079 0.891089 #> 512309 0.083 0.139 0.056 1.000000 #> 326792 0.000 0.043 0.043 0.861386 #> 73352 0.055 0.095 0.040 0.891089 #> 248140 0.000 0.026 0.026 0.861386 #> 278234 0.020 0.000 0.020 0.861386 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 361496 0.643 0.000 0.643 0.344046 #> 303304 0.790 0.510 0.280 0.344046 #> 158660 1.011 0.812 0.200 0.796029 #> 248140 0.478 0.675 0.197 0.796029 #> 469709 0.931 0.772 0.159 0.796029 #> 278234 0.678 0.539 0.139 0.796029 #> 184983 0.775 0.909 0.135 0.796029 #> 512309 1.045 0.912 0.133 0.796029 #> 181016 0.544 0.665 0.121 0.815517 #> 10116 0.966 0.850 0.115 0.796029 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.971 0.314 0.657 0.511232 #> 184983 0.319 0.774 0.455 0.511232 #> 322235 0.857 0.403 0.454 0.511232 #> 469709 0.695 0.309 0.386 0.526269 #> 303304 0.397 0.037 0.360 0.315761 #> 90487 0.483 0.200 0.283 0.631522 #> 307981 0.682 0.965 0.283 0.631522 #> 364563 0.716 0.990 0.274 0.511232 #> 326792 0.707 0.954 0.246 0.511232 #> 512309 1.000 0.828 0.172 0.721740 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 summary(amgut_comp3) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, adjust = \"BH\", #> seed = 123456, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\")) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' abs.diff. 
p-value #> Number of components 1.000 2.000 1.000 0.811881 #> Clustering coefficient 0.534 0.448 0.086 0.435644 #> Modularity 0.168 0.155 0.012 0.881188 #> Positive edge percentage 32.099 39.683 7.584 0.108911 #> Edge density 0.293 0.249 0.044 0.524752 #> Natural connectivity 0.070 0.068 0.002 0.891089 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.920 0.929 0.009 0.643564 #> Average path length** 1.496 1.558 0.062 0.712871 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.412 0.399 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203000 0.95400 #> p-value 0.762376 0.90099 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.522 0.261 0.261 0.984441 #> 469709 0.391 0.174 0.217 0.984441 #> 303304 0.261 0.043 0.217 0.984441 #> 184983 0.174 0.348 0.174 0.984441 #> 10116 0.478 0.304 0.174 0.984441 #> 512309 0.565 0.391 0.174 0.984441 #> 278234 0.174 0.043 0.130 0.984441 #> 361496 0.130 0.000 0.130 0.984441 #> 71543 0.522 0.391 0.130 0.984441 #> 188236 0.565 0.435 0.130 0.984441 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 184983 0.000 0.147 0.147 0.861386 #> 322235 0.087 0.195 0.108 0.891089 #> 190597 0.099 0.000 0.099 0.861386 #> 188236 0.225 0.143 0.082 0.891089 #> 71543 0.123 0.043 0.079 0.891089 #> 512309 0.083 0.139 0.056 1.000000 #> 326792 0.000 0.043 0.043 0.861386 #> 73352 0.055 0.095 0.040 0.891089 #> 248140 0.000 0.026 0.026 0.861386 #> 278234 0.020 0.000 0.020 0.861386 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 361496 0.643 0.000 0.643 0.356436 #> 303304 0.790 0.510 0.280 0.356436 #> 158660 1.011 0.812 0.200 0.824694 #> 248140 0.478 0.675 0.197 0.824694 #> 469709 0.931 0.772 0.159 0.824694 #> 278234 0.678 0.539 0.139 0.824694 #> 184983 0.775 0.909 0.135 0.824694 #> 512309 1.045 0.912 0.133 0.824694 #> 181016 0.544 0.665 0.121 0.844884 #> 10116 0.966 0.850 0.115 0.824694 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.971 0.314 0.657 0.577086 #> 184983 0.319 0.774 0.455 0.577086 #> 322235 0.857 0.403 0.454 0.577086 #> 469709 0.695 0.309 0.386 0.594059 #> 303304 0.397 0.037 0.360 0.356436 #> 90487 0.483 0.200 0.283 0.712871 #> 307981 0.682 0.965 0.283 0.712871 #> 364563 0.716 0.990 0.274 0.577086 #> 326792 0.707 0.954 0.246 0.577086 #> 512309 1.000 0.828 0.172 0.814710 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 summary(amgut_comp4) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, adjust = \"BH\", #> seed = 123456, fileLoadAssoPerm = \"assoPerm\") #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' abs.diff. 
p-value #> Number of components 1.000 2.000 1.000 0.811881 #> Clustering coefficient 0.534 0.448 0.086 0.435644 #> Modularity 0.168 0.155 0.012 0.881188 #> Positive edge percentage 32.099 39.683 7.584 0.108911 #> Edge density 0.293 0.249 0.044 0.524752 #> Natural connectivity 0.070 0.068 0.002 0.891089 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.920 0.929 0.009 0.643564 #> Average path length** 1.496 1.558 0.062 0.712871 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.412 0.399 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203000 0.95400 #> p-value 0.762376 0.90099 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.522 0.261 0.261 0.984441 #> 469709 0.391 0.174 0.217 0.984441 #> 303304 0.261 0.043 0.217 0.984441 #> 184983 0.174 0.348 0.174 0.984441 #> 10116 0.478 0.304 0.174 0.984441 #> 512309 0.565 0.391 0.174 0.984441 #> 278234 0.174 0.043 0.130 0.984441 #> 361496 0.130 0.000 0.130 0.984441 #> 71543 0.522 0.391 0.130 0.984441 #> 188236 0.565 0.435 0.130 0.984441 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 184983 0.000 0.147 0.147 0.861386 #> 322235 0.087 0.195 0.108 0.891089 #> 190597 0.099 0.000 0.099 0.861386 #> 188236 0.225 0.143 0.082 0.891089 #> 71543 0.123 0.043 0.079 0.891089 #> 512309 0.083 0.139 0.056 1.000000 #> 326792 0.000 0.043 0.043 0.861386 #> 73352 0.055 0.095 0.040 0.891089 #> 248140 0.000 0.026 0.026 0.861386 #> 278234 0.020 0.000 0.020 0.861386 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 361496 0.643 0.000 0.643 0.356436 #> 303304 0.790 0.510 0.280 0.356436 #> 158660 1.011 0.812 0.200 0.824694 #> 248140 0.478 0.675 0.197 0.824694 #> 469709 0.931 0.772 0.159 0.824694 #> 278234 0.678 0.539 0.139 0.824694 #> 184983 0.775 0.909 0.135 0.824694 #> 512309 1.045 0.912 0.133 0.824694 #> 181016 0.544 0.665 0.121 0.844884 #> 10116 0.966 0.850 0.115 0.824694 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.971 0.314 0.657 0.577086 #> 184983 0.319 0.774 0.455 0.577086 #> 322235 0.857 0.403 0.454 0.577086 #> 469709 0.695 0.309 0.386 0.594059 #> 303304 0.397 0.037 0.360 0.356436 #> 90487 0.483 0.200 0.283 0.712871 #> 307981 0.682 0.965 0.283 0.712871 #> 364563 0.716 0.990 0.274 0.577086 #> 326792 0.707 0.954 0.246 0.577086 #> 512309 1.000 0.828 0.172 0.814710 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 #-------------------------- # Use 'createAssoPerm' to create \"permuted\" count and association matrices createAssoPerm(amgut_props, nPerm = 100, computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), append = FALSE, seed = 123456) #> Create matrix with permuted group labels ... #> Done. #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Files 'countsPerm1.bmat, countsPerm1.desc.txt, countsPerm2.bmat, and countsPerm2.desc.txt created. #> Compute permutation associations ... 
#> | |======================================================================| 100% #> Done. amgut_comp5 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, fileLoadAssoPerm = \"assoPerm\") #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... #> | |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. all.equal(amgut_comp3$properties, amgut_comp5$properties) #> [1] TRUE summary(amgut_comp5) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\") #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' abs.diff. 
p-value #> Number of components 1.000 2.000 1.000 0.811881 #> Clustering coefficient 0.534 0.448 0.086 0.435644 #> Modularity 0.168 0.155 0.012 0.881188 #> Positive edge percentage 32.099 39.683 7.584 0.108911 #> Edge density 0.293 0.249 0.044 0.524752 #> Natural connectivity 0.070 0.068 0.002 0.891089 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.920 0.929 0.009 0.643564 #> Average path length** 1.496 1.558 0.062 0.712871 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.405 0.398 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203000 0.95400 #> p-value 0.762376 0.90099 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.522 0.261 0.261 0.984441 #> 469709 0.391 0.174 0.217 0.984441 #> 303304 0.261 0.043 0.217 0.984441 #> 184983 0.174 0.348 0.174 0.984441 #> 10116 0.478 0.304 0.174 0.984441 #> 512309 0.565 0.391 0.174 0.984441 #> 278234 0.174 0.043 0.130 0.984441 #> 361496 0.130 0.000 0.130 0.984441 #> 71543 0.522 0.391 0.130 0.984441 #> 188236 0.565 0.435 0.130 0.984441 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 184983 0.000 0.147 0.147 0.861386 #> 322235 0.087 0.195 0.108 0.891089 #> 190597 0.099 0.000 0.099 0.861386 #> 188236 0.225 0.143 0.082 0.891089 #> 71543 0.123 0.043 0.079 0.891089 #> 512309 0.083 0.139 0.056 1.000000 #> 326792 0.000 0.043 0.043 0.861386 #> 73352 0.055 0.095 0.040 0.891089 #> 248140 0.000 0.026 0.026 0.861386 #> 278234 0.020 0.000 0.020 0.861386 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 361496 0.643 0.000 0.643 0.344046 #> 303304 0.790 0.510 0.280 0.344046 #> 158660 1.011 0.812 0.200 0.796029 #> 248140 0.478 0.675 0.197 0.796029 #> 469709 0.931 0.772 0.159 0.796029 #> 278234 0.678 0.539 0.139 0.796029 #> 184983 0.775 0.909 0.135 0.796029 #> 512309 1.045 0.912 0.133 0.796029 #> 181016 0.544 0.665 0.121 0.815517 #> 10116 0.966 0.850 0.115 0.796029 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.971 0.314 0.657 0.511232 #> 184983 0.319 0.774 0.455 0.511232 #> 322235 0.857 0.403 0.454 0.511232 #> 469709 0.695 0.309 0.386 0.526269 #> 303304 0.397 0.037 0.360 0.315761 #> 90487 0.483 0.200 0.283 0.631522 #> 307981 0.682 0.965 0.283 0.631522 #> 364563 0.716 0.990 0.274 0.511232 #> 326792 0.707 0.954 0.246 0.511232 #> 512309 1.000 0.828 0.172 0.721740 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # }"},{"path":"https://netcomi.de/reference/netConstruct.html","id":null,"dir":"Reference","previous_headings":"","what":"Constructing Networks for Microbiome Data — netConstruct","title":"Constructing Networks for Microbiome Data — netConstruct","text":"Constructing microbial association networks or dissimilarity-based networks (where nodes are subjects) from compositional count data.","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Constructing Networks for Microbiome Data — netConstruct","text":"","code":"netConstruct(data, data2 = NULL, dataType = \"counts\", group = NULL, matchDesign = NULL, taxRank = NULL, # Association/dissimilarity measure: measure = \"spieceasi\", measurePar = NULL, # Preprocessing: jointPrepro = NULL, filtTax = \"none\", filtTaxPar = NULL, filtSamp = \"none\", filtSampPar = NULL, zeroMethod = \"none\", zeroPar = NULL, normMethod = \"none\", normPar = NULL, # Sparsification: sparsMethod = \"t-test\", thresh = 
0.3, alpha = 0.05, adjust = \"adaptBH\", trueNullMethod = \"convest\", lfdrThresh = 0.2, nboot = 1000L, assoBoot = NULL, cores = 1L, logFile = \"log.txt\", softThreshType = \"signed\", softThreshPower = NULL, softThreshCut = 0.8, kNeighbor = 3L, knnMutual = FALSE, # Transformation: dissFunc = \"signed\", dissFuncPar = NULL, simFunc = NULL, simFuncPar = NULL, scaleDiss = TRUE, weighted = TRUE, # Further arguments: sampleSize = NULL, verbose = 2, seed = NULL )"},{"path":"https://netcomi.de/reference/netConstruct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Constructing Networks for Microbiome Data — netConstruct","text":"data numeric matrix. Can count matrix (rows samples, columns OTUs/taxa), phyloseq object, association/dissimilarity matrix (dataType must set). second count matrix/phyloseq object second association/dissimilarity matrix. data2 optional numeric matrix used constructing second network (belonging group 2). Can either second count matrix/phyloseq object second association/dissimilarity matrix. dataType character indicating data type. Defaults \"counts\", means data (data2) count matrix object class phyloseq. options \"correlation\", \"partialCorr\" (partial correlation), \"condDependence\" (conditional dependence), \"proportionality\" \"dissimilarity\". group optional binary vector used splitting data two groups. group NULL (default) data2 set, single network constructed. See 'Details.' matchDesign Numeric vector two elements specifying optional matched-group (.e. matched-pair) design, used permutation tests netCompare diffnet. c(1,1) corresponds matched-pair design. 1:2 matching, instance, defined c(1,2), means first sample group 1 matched first two samples group 2 . appropriate order samples must ensured. NULL, group memberships shuffled randomly group sizes identical original data set ensured. taxRank character indicating taxonomic rank network constructed. used data (data 2) phyloseq object. 
given rank must match one column names taxonomic table (@tax_table slot phyloseq object). Taxa names chosen taxonomic rank must unique (consider using function renameTaxa make unique). phyloseq object given taxRank = NULL, row names OTU table used node labels. measure character specifying method used either computing associations taxa dissimilarities subjects. Ignored data count matrix (dataType set \"counts\"). Available measures : \"pearson\", \"spearman\", \"bicor\", \"sparcc\", \"cclasso\", \"ccrepe\", \"spieceasi\" (default), \"spring\", \"gcoda\" \"propr\" association measures, \"euclidean\", \"bray\", \"kld\", \"jeffrey\", \"jsd\", \"ckld\", \"aitchison\" dissimilarity measures. Parameters set via measurePar. measurePar list parameters passed function computing associations/dissimilarities. See 'Details' respective functions. SpiecEasi SPRING association measure, additional list element \"symBetaMode\" accepted define \"mode\" argument symBeta. jointPrepro logical indicating whether data preprocessing (filtering, zero treatment, normalization) done combined data sets, data set separately. Ignored single network constructed. Defaults TRUE group given, FALSE data2 given. Joint preprocessing possible dissimilarity networks. filtTax character indicating taxa shall filtered. Possible options : \"none\" Default. taxa kept. \"totalReads\" Keep taxa total number reads least x. \"relFreq\" Keep taxa whose number reads least x% total number reads. \"numbSamp\" Keep taxa observed least x samples. \"highestVar\" Keep x taxa highest variance. \"highestFreq\" Keep x taxa highest frequency. Except \"highestVar\" \"highestFreq\", different filter methods can combined. values x set via filtTaxPar. filtTaxPar list parameters filter methods given filtTax. Possible list entries : \"totalReads\" (int), \"relFreq\" (value [0,1]), \"numbSamp\" (int), \"highestVar\" (int), \"highestFreq\" (int). filtSamp character indicating samples shall filtered. 
Possible options : \"none\" Default. samples kept. \"totalReads\" Keep samples total number reads least x. \"numbTaxa\" Keep samples least x taxa observed. \"highestFreq\" Keep x samples highest frequency. Except \"highestFreq\", different filter methods can combined. values x set via filtSampPar. filtSampPar list parameters filter methods given filtSamp. Possible list entries : \"totalReads\" (int), \"numbTaxa\" (int), \"highestFreq\" (int). zeroMethod character indicating method used zero replacement. Possible values : \"none\" (default), \"pseudo\", \"pseudoZO\", \"multRepl\", \"alrEM\", \"bayesMult\". See 'Details'. corresponding parameters set via zeroPar. zeroMethod ignored approach calculating associations/dissimilarity includes zero handling. Defaults \"multRepl\" \"pseudo\" (depending expected input normalization function measure) zero replacement required. zeroPar list parameters passed function zero replacement (zeroMethod). See help page respective function details. zeroMethod \"pseudo\" \"pseudoZO\", pseudo count can specified via zeroPar = list(pseudocount = x) (x numeric). normMethod character indicating normalization method (make counts different samples comparable). Possible options : \"none\" (default), \"TSS\" (\"fractions\"), \"CSS\", \"COM\", \"rarefy\", \"VST\", \"clr\", \"mclr\". See 'Details'. corresponding parameters set via normPar. normPar list parameters passed function normalization (defined normMethod). sparsMethod character indicating method used sparsification (selected edges connected network). Available methods : \"none\" Leads fully connected network \"t-test\" Default. Associations significantly different zero selected using Student's t-test. Significance level multiple testing adjustment specified via alpha adjust. sampleSize must set dataType \"counts\". \"bootstrap\" Bootstrap procedure described Friedman Alm (2012). Corresponding arguments nboot, cores, logFile. Data type must \"counts\". 
\"threshold\" Selected taxa pairs absolute association/dissimilarity greater equal threshold defined via thresh. \"softThreshold\" Soft thresholding method according Zhang Horvath (2005) available WGCNA package. Corresponding arguments softThreshType, softThreshPower, softThreshCut. \"knn\" Construct k-nearest neighbor mutual k-nearest neighbor graph using nng. Corresponding arguments kNeighbor, knnMutual. Available dissimilarity networks . thresh numeric vector one two elements defining threshold used sparsification sparsMethod set \"threshold\". two networks constructed one value given, used groups. Defaults 0.3. alpha numeric vector one two elements indicating significance level. used Student's t-test bootstrap procedure used sparsification method. two networks constructed one value given, used groups. Defaults 0.05. adjust character indicating method used multiple testing adjustment (Student's t-test bootstrap procedure used edge selection). Possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool), \"adaptBH\" adaptive Benjamini-Hochberg method (Benjamini Hochberg, 2000), one methods provided p.adjust (see p.adjust.methods(). trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\"(default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). lfdrThresh numeric vector one two elements defining threshold(s) local FDR correction (adjust = \"locfdr\"). Defaults 0.2 meaning associations corresponding local FDR less equal 0.2 identified significant. two networks constructed one value given, used groups. nboot integer indicating number bootstrap samples, bootstrapping used sparsification method. assoBoot logical list. relevant bootstrapping. 
Set TRUE list (assoBoot) bootstrap association matrices returned. Can also list bootstrap association matrices, used sparsification. See example. cores integer indicating number CPU cores used bootstrapping. cores > 1, bootstrapping performed parallel. cores limited number available CPU cores determined detectCores. , core arguments function used association estimation (provided) set 1. logFile character defining log file iteration numbers stored bootstrapping used sparsification. file written current working directory. Defaults \"log.txt\". NULL, log file created. softThreshType character indicating method used transforming correlations similarities soft thresholding used sparsification method (sparsMethod = \"softThreshold\"). Possible values \"signed\", \"unsigned\", \"signed hybrid\" (according available options argument type adjacency WGCNA package). softThreshPower numeric vector one two elements defining power soft thresholding. used edgeSelect = \"softThreshold\". two networks constructed one value given, used groups. power set, computed using pickSoftThreshold, argument softThreshCut needed addition. softThreshCut numeric vector one two elements (0 1) indicating desired minimum scale free topology fitting index (corresponds argument \"RsquaredCut\" pickSoftThreshold). Defaults 0.8. two networks constructed one value given, used groups. kNeighbor integer specifying number neighbors k-nearest neighbor method used sparsification. Defaults 3L. knnMutual logical used k-nearest neighbor sparsification. TRUE, neighbors must mutual. Defaults FALSE. dissFunc method used transforming associations dissimilarities. Can character one following values: \"signed\"(default), \"unsigned\", \"signedPos\", \"TOMdiss\". Alternatively, function accepted association matrix first argument optional arguments, can set via dissFuncPar. Ignored dissimilarity measures. See 'Details.' dissFuncPar optional list parameters function passed dissFunc. 
simFunc A function for transforming dissimilarities into similarities. Defaults to f(x)=1-x for dissimilarities in [0,1], and f(x)=1/(1 + x) otherwise. simFuncPar An optional list with parameters for the function passed to simFunc. scaleDiss logical. Indicates whether the dissimilarity values should be scaled to [0,1] by (x - min(dissEst)) / (max(dissEst) - min(dissEst)), where dissEst is the matrix of estimated dissimilarities. Defaults to TRUE. weighted logical. If TRUE, the similarity values are used as adjacencies. FALSE leads to a binary adjacency matrix whose entries equal 1 for (sparsified) similarity values > 0, and 0 otherwise. sampleSize A numeric vector with one or two elements giving the number of samples that were used for computing the association matrix. Only needed if an association matrix is given instead of a count matrix and, in addition, Student's t-test is used for edge selection. If two networks are constructed and only one value is given, it is used for both groups. verbose An integer indicating the level of verbosity. Possible values: \"0\": no messages, \"1\": only important messages, \"2\"(default): progress messages, \"3\": messages returned by external functions are shown in addition. Can also be logical. seed An integer giving a seed for reproducibility of the results.","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Constructing Networks for Microbiome Data — netConstruct","text":"An object of class microNet containing the following elements: v1, v2: names of the adjacent nodes/vertices asso: estimated associations (only for association networks) diss: dissimilarities sim: similarities (only for unweighted networks) adja: adjacencies (equal to the similarities for weighted networks)","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Constructing Networks for Microbiome Data — netConstruct","text":"The object returned by netConstruct can either be passed to netAnalyze for network analysis, or to diffnet to construct a differential network from the estimated associations. The function enables the construction of either a single network or of two networks. The latter can be compared using the function netCompare. The network(s) can either be based on associations (correlation, partial correlation / conditional dependence, proportionality) or on dissimilarities. Several measures are available in each case to estimate associations or dissimilarities using netConstruct. Alternatively, a pre-generated association or dissimilarity matrix is accepted as input to start the workflow (the argument dataType must be set appropriately). Depending on the measure, the network nodes are either taxa or subjects: in association-based networks the nodes are taxa, whereas in dissimilarity-based networks the nodes are subjects. In order to perform a network comparison, the following options for constructing two networks are available: Passing the combined count matrix to data and a group vector to group (of length nrow(data) for association networks and of length ncol(data) for dissimilarity-based networks). Passing the count data for group 1 to data (matrix or phyloseq object) and the count data for group 2 to data2 (matrix or phyloseq object). For association networks, the column names must match, for dissimilarity networks the row names. Passing the association/dissimilarity matrix for group 1 to data and the association/dissimilarity matrix for group 2 to data2. Group labeling: If two networks are generated, the network belonging to data is always denoted by \"group 1\" and the network belonging to data2 by \"group 2\". If the group vector is used for splitting the data into two groups, the group names are assigned according to the order of the group levels. If group contains the levels 0 and 1, for instance, \"group 1\" is assigned to level 0 and \"group 2\" to level 1. In the network plot, group 1 is shown on the left and group 2 on the right if not defined otherwise (see plot.microNetProps). Association measures Dissimilarity measures Definitions: Kullback-Leibler divergence: Since the KLD is not symmetric, 0.5 * (KLD(p(x)||p(y)) + KLD(p(y)||p(x))) is returned. Jeffrey divergence: Jeff = KLD(p(x)||p(y)) + KLD(p(y)||p(x)) Jensen-Shannon divergence: JSD = 0.5 KLD(P||M) + 0.5 KLD(Q||M), where P=p(x), Q=p(y), M=0.5(P+Q). Compositional Kullback-Leibler divergence: cKLD(x,y) = p/2 * log(A(x/y) * A(y/x)), where A(x/y) is the arithmetic mean of the vector of ratios x/y. Aitchison distance: Euclidean distance between the clr-transformed data. 
Methods for zero replacement Normalization methods All methods (except rarefying) are described in Badri et al. (2020). Transformation methods Functions used for transforming associations into dissimilarities:","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Constructing Networks for Microbiome Data — netConstruct","text":"Badri et al. (2020): Shrinkage improves estimation of microbial associations under different normalization methods. Benjamini and Hochberg (2000): On the adaptive control of the false discovery rate in multiple testing with independent statistics. Farcomeni (2007): Some results on the control of the false discovery rate under dependence. Friedman and Alm (2012): Inferring correlation networks from genomic survey data. Langfelder and Horvath (2008): WGCNA: an R package for weighted correlation network analysis. Zhang and Horvath (2005): A general framework for weighted gene co-expression network analysis.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netConstruct.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Constructing Networks for Microbiome Data — netConstruct","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") data(\"amgut2.filt.phy\") # Single network with the following specifications: # - Association measure: SpiecEasi # - SpiecEasi parameters are defined via 'measurePar' # (check ?SpiecEasi::spiec.easi for available options) # - Note: 'rep.num' should be higher for real data sets # - Taxa filtering: Keep the 50 taxa with highest variance # - Sample filtering: Keep samples with a total number of reads # of at least 1000 net1 <- netConstruct(amgut2.filt.phy, measure = \"spieceasi\", measurePar = list(method = \"mb\", pulsar.params = list(rep.num = 10), symBetaMode = \"ave\"), filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), sparsMethod = \"none\", normMethod = \"none\", verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 88 taxa removed. #> 50 taxa and 261 samples remaining. #> #> Calculate 'spieceasi' associations ... #> #> Applying data transformations... #> Selecting model with pulsar using stars... #> Fitting final estimate with mb... #> done #> Done. 
# Network analysis (see ?netAnalyze for details) props1 <- netAnalyze(net1, clustMethod = \"cluster_fast_greedy\") # Network plot (see ?plot.microNetProps for details) plot(props1) #---------------------------------------------------------------------------- # Same network as before but on genus level and without taxa filtering amgut.genus.phy <- phyloseq::tax_glom(amgut2.filt.phy, taxrank = \"Rank6\") dim(phyloseq::otu_table(amgut.genus.phy)) #> [1] 43 296 # Rename taxonomic table and make Rank6 (genus) unique amgut.genus.renamed <- renameTaxa(amgut.genus.phy, pat = \"\", substPat = \"_()\", numDupli = \"Rank6\") #> Column 7 contains NAs only and is ignored. net_genus <- netConstruct(amgut.genus.renamed, taxRank = \"Rank6\", measure = \"spieceasi\", measurePar = list(method = \"mb\", pulsar.params = list(rep.num = 10), symBetaMode = \"ave\"), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), sparsMethod = \"none\", normMethod = \"none\", verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 43 taxa and 261 samples remaining. #> #> Calculate 'spieceasi' associations ... #> #> Applying data transformations... #> Selecting model with pulsar using stars... #> Fitting final estimate with mb... #> done #> Done. 
# Network analysis props_genus <- netAnalyze(net_genus, clustMethod = \"cluster_fast_greedy\") # Network plot (with some modifications) plot(props_genus, shortenLabels = \"none\", labelScale = FALSE, cexLabels = 0.8) #---------------------------------------------------------------------------- # Single network with the following specifications: # - Association measure: Pearson correlation # - Taxa filtering: Keep the 50 taxa with highest frequency # - Sample filtering: Keep samples with a total number of reads of at least # 1000 and with at least 10 taxa with a non-zero count # - Zero replacement: A pseudo count of 0.5 is added to all counts # - Normalization: clr transformation # - Sparsification: Threshold = 0.3 # (an edge exists between taxa with an estimated association >= 0.3) net2 <- netConstruct(amgut2.filt.phy, measure = \"pearson\", filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = c(\"numbTaxa\", \"totalReads\"), filtSampPar = list(totalReads = 1000, numbTaxa = 10), zeroMethod = \"pseudo\", zeroPar = list(pseudocount = 0.5), normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 88 taxa removed. #> 50 taxa and 261 samples remaining. #> #> Zero treatment: #> Pseudo count of 0.5 added. #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Network analysis props2 <- netAnalyze(net2, clustMethod = \"cluster_fast_greedy\") plot(props2) #---------------------------------------------------------------------------- # Constructing and analyzing two networks # - A random group variable is used for splitting the data into two groups set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Option 1: Use the count matrix and group vector as input: net3 <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"t-test\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... #> Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Option 2: Pass the count matrix of group 1 to 'data' # and that of group 2 to 'data2' # Note: Argument 'jointPrepro' is set to FALSE by default (the data sets # are filtered separately and the intersect of filtered taxa is kept, # which leads to less than 50 taxa in this example). amgut1 <- amgut1.filt[group == 1, ] amgut2 <- amgut1.filt[group == 2, ] net3 <- netConstruct(data = amgut1, data2 = amgut2, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"t-test\") #> Checking input arguments ... #> Done. #> Data filtering ... 
#> 85 taxa removed in each data set. #> 42 taxa and 138 samples remaining in group 1. #> 42 taxa and 151 samples remaining in group 2. #> #> Zero treatment in group 1: #> Execute multRepl() ... #> Done. #> #> Zero treatment in group 2: #> Execute multRepl() ... #> Done. #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Network analysis # Note: Please zoom into the GCM plot or open a new window using: # x11(width = 10, height = 10) props3 <- netAnalyze(net3, clustMethod = \"cluster_fast_greedy\") # Network plot (same layout is used in both groups) plot(props3, sameLayout = TRUE) # The two networks can be compared with NetCoMi's function netCompare(). #---------------------------------------------------------------------------- # Example of using the argument \"assoBoot\" # This functionality is useful for splitting up a large number of bootstrap # replicates and run the bootstrapping procedure iteratively. 
niter <- 5 nboot <- 1000 # Overall number of bootstrap replicates: 5000 # Use a different seed for each iteration seeds <- sample.int(1e8, size = niter) # List where all bootstrap association matrices are stored assoList <- list() for (i in 1:niter) { # assoBoot is set to TRUE to return the bootstrap association matrices net <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 0), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"bootstrap\", cores = 1, nboot = nboot, assoBoot = TRUE, verbose = 3, seed = seeds[i]) assoList[(1:nboot) + (i - 1) * nboot] <- net$assoBoot1 } #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... 
#> #> | | | 0% #> #> Attaching package: ‘gtools’ #> The following objects are masked from ‘package:LaplacesDemon’: #> #> ddirichlet, logit, rdirichlet #> The following object is masked from ‘package:permute’: #> #> permute #> 
|======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | |======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | 
|========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ | 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | |========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | 
|============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | |================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | |======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | 
|========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | |============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | 
|===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | 
|======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | |========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ | 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | 
|========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | |============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | |================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | 
|======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | |============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 
96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... 
#> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | |======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | |========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ 
| 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | |========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | |============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | 
|================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | |======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | 
|============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. 
#> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | |======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | |========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | 
#> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. # Construct the actual network with all 5000 bootstrap association matrices net_final <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 0), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"bootstrap\", cores = 1, nboot = nboot * niter, assoBoot = assoList, verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. 
# Network analysis props <- netAnalyze(net_final, clustMethod = \"cluster_fast_greedy\") # Network plot plot(props)"},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot method for objects of class diffnet — plot.diffnet","title":"Plot method for objects of class diffnet — plot.diffnet","text":"Plot method objects class diffnet","code":""},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot method for objects of class diffnet — plot.diffnet","text":"","code":"# S3 method for class 'diffnet' plot( x, adjusted = TRUE, layout = NULL, repulsion = 1, labels = NULL, shortenLabels = \"none\", labelLength = 6, labelPattern = c(5, \"'\", 3, \"'\", 3), charToRm = NULL, labelScale = TRUE, labelFont = 1, rmSingles = TRUE, nodeColor = \"gray90\", nodeTransp = 60, borderWidth = 1, borderCol = \"gray80\", edgeFilter = \"none\", edgeFilterPar = NULL, edgeWidth = 1, edgeTransp = 0, edgeCol = NULL, title = NULL, legend = TRUE, legendPos = \"topright\", legendGroupnames = NULL, legendTitle = NULL, legendArgs = NULL, cexNodes = 1, cexLabels = 1, cexTitle = 1.2, cexLegend = 1, mar = c(2, 2, 4, 6), ... )"},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot method for objects of class diffnet — plot.diffnet","text":"x object class diffnet (returned diffnet) containing adjacency matrix, whose entries absolute differences associations. adjusted logical indicating whether adjacency matrix based adjusted p-values used. Defaults TRUE. FALSE, adjacency matrix based non-adjusted p-values. Ignored discordant method. layout indicates layout used defining node positions. Can character one layouts provided qgraph: \"spring\"(default), \"circle\", \"groups\". 
Alternatively, layouts provided igraph (see layout_) accepted (must given character, e.g. \"layout_with_fr\"). Can also matrix row number equal number nodes two columns corresponding x y coordinate. repulsion integer specifying repulse radius spring layout; value lower 1, nodes placed apart labels defines node labels. Can character vector entry node. FALSE, labels plotted. Defaults row/column names association matrices. shortenLabels character indicating shorten node labels. Ignored node labels defined via labels. NetCoMi's function editLabels() used label editing. Available options : \"intelligent\" Elements charToRm removed, labels shortened length labelLength, duplicates removed using labelPattern. \"simple\" Elements charToRm removed labels shortened length labelLength. \"none\" Default. Original dimnames adjacency matrices used. labelLength integer defining length labels shall shortened shortenLabels set \"simple\" \"intelligent\". Defaults 6. labelPattern vector three five elements, used argument shortenLabels set \"intelligent\". cutting label length labelLength leads duplicates, label shortened according labelPattern, first entry gives length first part, second entry used separator, third entry length third part. labelPattern five elements shortened labels still unique, fourth element serves separator, fifth element gives length last label part. Defaults c(5, \"'\", 3, \"'\", 3). data contains, example, three bacteria \"Streptococcus1\", \"Streptococcus2\" \"Streptomyces\", default shortened \"Strep'coc'1\", \"Strep'coc'2\", \"Strep'myc\". charToRm vector characters remove node names. Ignored labels given via labels. labelScale logical. TRUE, node labels scaled according node size labelFont integer defining font node labels. Defaults 1. rmSingles logical. TRUE, unconnected nodes removed. nodeColor character numeric value specifying node colors. Can also vector color node. nodeTransp integer 0 100 indicating transparency node colors. 
0 means transparency, 100 means full transparency. Defaults 60. borderWidth numeric specifying width node borders. Defaults 1. borderCol character specifying color node borders. Defaults \"gray80\" edgeFilter character indicating whether edges filtered. Possible values \"none\" (edges shown) \"highestDiff\" (first x edges highest absolute difference shown). x defined edgeFilterPar. edgeFilterPar numeric value specifying \"x\" edgeFilter. edgeWidth numeric specifying edge width. See argument \"edge.width\" qgraph. edgeTransp integer 0 100 indicating transparency edge colors. 0 means transparency (default), 100 means full transparency. edgeCol character vector specifying edge colors. Must length 6 discordant method (default: c(\"hotpink\", \"aquamarine\", \"red\", \"orange\", \"green\", \"blue\")) lengths 9 permutation tests Fisher's z-test (default: c(\"chartreuse2\", \"chartreuse4\", \"cyan\", \"magenta\", \"orange\", \"red\", \"blue\", \"black\", \"purple\")). title optional character string main title. legend logical. TRUE, legend plotted. legendPos either character specifying legend's position numeric vector two elements giving x y coordinates legend. See description x y arguments legend details. legendGroupnames vector two elements giving group names shown legend. legendTitle character specifying legend title. legendArgs list arguments passed legend. cexNodes numeric scaling node sizes. Defaults 1. cexLabels numeric scaling node labels. Defaults 1. cexTitle numeric scaling title. Defaults 1.2. cexLegend numeric scaling legend size. Defaults 1. mar numeric vector form c(bottom, left, top, right) defining plot margins. Works similar mar argument par. Defaults c(2,2,4,6). ... 
arguments passed qgraph.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot Method for microNetProps Objects — plot.microNetProps","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"Plotting objects class microNetProps.","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"","code":"# S3 method for class 'microNetProps' plot(x, layout = \"spring\", sameLayout = FALSE, layoutGroup = \"union\", repulsion = 1, groupNames = NULL, groupsChanged = FALSE, labels = NULL, shortenLabels = \"none\", labelLength = 6L, labelPattern = c(5, \"'\", 3, \"'\", 3), charToRm = NULL, labelScale = TRUE, labelFont = 1, labelFile = NULL, # Nodes: nodeFilter = \"none\", nodeFilterPar = NULL, rmSingles = \"none\", nodeSize = \"fix\", normPar = NULL, nodeSizeSpread = 4, nodeColor = \"cluster\", colorVec = NULL, featVecCol = NULL, sameFeatCol = TRUE, sameClustCol = TRUE, sameColThresh = 2L, nodeShape = NULL, featVecShape = NULL, nodeTransp = 60, borderWidth = 1, borderCol = \"gray80\", # Hubs: highlightHubs = TRUE, hubTransp = NULL, hubLabelFont = NULL, hubBorderWidth = NULL, hubBorderCol = \"black\", # Edges: edgeFilter = \"none\", edgeFilterPar = NULL, edgeInvisFilter = \"none\", edgeInvisPar = NULL, edgeWidth = 1, negDiffCol = TRUE, posCol = NULL, negCol = NULL, cut = NULL, edgeTranspLow = 0, edgeTranspHigh = 0, # Additional arguments: cexNodes = 1, cexHubs = 1.2, cexLabels = 1, cexHubLabels = NULL, cexTitle = 1.2, showTitle = NULL, title1 = NULL, title2 = NULL, mar = c(1, 3, 3, 3), doPlot = TRUE, ...)"},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot Method for microNetProps Objects — 
plot.microNetProps","text":"x object class microNetProps layout indicates layout used defining node positions. Can character one layouts provided qgraph: \"spring\" (default), \"circle\", \"groups\". Alternatively, layouts provided igraph (see layout_ accepted (must given character, e.g. \"layout_with_fr\"). Can also matrix row number equal number nodes two columns corresponding x y coordinate. sameLayout logical. Indicates whether layout used networks. Ignored x contains one network. See argument layoutGroup. layoutGroup numeric character. Indicates group, layout taken argument sameLayout TRUE. layout computed group 1 (adopted group 2) set \"1\" computed group 2 set \"2\". Can alternatively set \"union\" (default) compute union layouts, nodes placed optimal possible equally networks. repulsion positive numeric value indicating strength repulsive forces \"spring\" layout. Nodes placed closer together smaller values apart higher values. See repulsion argument qgraph. groupNames character vector two entries naming groups networks belong. Defaults group names returned netConstruct: data set split according group variable, factor levels (increasing order) used. Ignored arguments title1 title2 set single network plotted. groupsChanged logical. Indicates order networks plotted. TRUE, order exchanged. See details. Defaults FALSE. labels defines node labels. Can named character vector, used groups (, adjacency matrices x must contain variables). Can also list two named vectors (names must match row/column names adjacency matrices). FALSE, labels plotted. Defaults row/column names adjacency matrices. shortenLabels character indicating shorten node labels. Ignored node labels defined via labels. NetCoMi's function editLabels() used label editing. Available options : \"intelligent\" Elements charToRm removed, labels shortened length labelLength, duplicates removed using labelPattern. \"simple\" Elements charToRm removed labels shortened length labelLength. \"none\" Default. 
Original dimnames adjacency matrices used. labelLength integer defining length labels shall shortened shortenLabels set \"simple\" \"intelligent\". Defaults 6. labelPattern vector three five elements, used argument shortenLabels set \"intelligent\". cutting label length labelLength leads duplicates, label shortened according labelPattern, first entry gives length first part, second entry used separator, third entry length third part. labelPattern five elements shortened labels still unique, fourth element serves separator, fifth element gives length last label part. Defaults c(5, \"'\", 3, \"'\", 3). data contains, example, three bacteria \"Streptococcus1\", \"Streptococcus2\" \"Streptomyces\", default shortened \"Strep'coc'1\", \"Strep'coc'2\", \"Strep'myc\". charToRm vector characters remove node names. Ignored labels given via labels. labelScale logical. TRUE, node labels scaled according node size labelFont integer defining font node labels. Defaults 1. labelFile optional character form \".txt\" naming file original renamed node labels stored. file stored current working directory. nodeFilter character indicating whether nodes filtered. Possible values : \"none\" Default. nodes plotted. \"highestConnect\" x nodes highest connectivity (sum edge weights) plotted. \"highestDegree\", \"highestBetween\", \"highestClose\", \"highestEigen\" x nodes highest degree/betweenness/closeness/eigenvector centrality plotted. \"clustTaxon\" nodes belonging cluster variables given character vector via nodeFilterPar. \"clustMin\" Plotted nodes belonging clusters minimum number nodes x. \"names\" Character vector variable names plotted Necessary parameters (e.g. \"x\") given via argument nodeFilterPar. nodeFilterPar parameters needed filtering method defined nodeFilter. rmSingles character value indicating handle unconnected nodes. Possible values \"\" (single nodes deleted), \"inboth\" (nodes unconnected networks removed) \"none\" (default; nodes removed). 
set \"\", layout used networks. nodeSize character indicating node sizes determined. Possible values : \"fix\" Default. nodes size (hub size can defined separately via cexHubs). \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" Size scaled according node's centrality \"counts\" Size scaled according sum counts (microbes samples, depending nodes express). \"normCounts\" Size scaled according sum normalized counts (microbes samples), exported netConstruct. \"TSS\", \"fractions\", \"CSS\", \"COM\", \"rarefy\", \"VST\", \"clr\", \"mclr\" Size scaled according sum normalized counts. Available options normMethod netConstruct. Parameters set via normPar. normPar list parameters passed function normalization nodeSize set normalization method. Used analogously normPar netConstruct(). nodeSizeSpread positive numeric value indicating spread node sizes. smaller value, similar node sizes. Node sizes calculated : (x - min(x)) / (max(x) - min(x)) * nodeSizeSpread + cexNodes. nodeSizeSpread = 4 (default) cexNodes = 1, node sizes range 1 5. nodeColor character specifying node colors. Possible values \"cluster\" (colors according determined clusters), \"feature\" (colors according node's features defined featVecCol), \"colorVec\" (vector colorVec). former two cases, colors can specified via colorVec. colorVec defined, rainbow function grDevices package used. Also accepted character value defining color, used nodes. NULL, \"grey40\" used nodes. colorVec vector list two vectors used specify node colors. Different usage depending \"nodeColor\" argument: nodeColor = \"cluster\" colorVec must vector. Depending sameClustCol argument, colors used one networks. vector long enough, warning returned colors rainbow() used remaining clusters. nodeColor = \"feature\" Defines color level featVecCol. Can list two vectors used two networks (single network, first element used) vector, used groups two networks plotted. 
nodeColor = \"colorVec\" colorVec defines color node implying names must match node's names (also ensured names match colnames original count matrix). Can list two vectors used two networks (single network, first element used) vector, used groups two networks plotted. featVecCol vector feature node. Used coloring nodes nodeColor set \"feature\". coerced factor. colorVec given, length must larger equal number feature levels. sameFeatCol logical indicating whether color used features networks (used two networks plotted, nodeColor = \"feature\", color vector/list given (via featVecCol)). sameClustCol TRUE (default) two networks plotted, clusters least sameColThresh nodes common color. used nodeColor set \"cluster\". sameColThresh indicates many nodes cluster must common two groups color. See argument sameClustCol. Defaults 2. nodeShape character vector specifying node shapes. Possible values \"circle\" (default), \"square\", \"triangle\", \"diamond\". featVecShape NULL, length nodeShape must equal number factor levels given featVecShape. , shape assigned one factor level (increasing order). featVecShape NULL, first shape used nodes. See example. featVecShape vector feature node. NULL, different node shape used feature. coerced factor mode. maximum number factor levels 4 corresponding four possible shapes defined via nodeShape. nodeTransp integer 0 100 indicating transparency node colors. 0 means transparency, 100 means full transparency. Defaults 60. borderWidth numeric specifying width node borders. Defaults 1. borderCol character specifying color node borders. Defaults \"gray80\" highlightHubs logical indicating hubs highlighted. TRUE, following features can defined separately hubs: transparency (hubTransp), label font (hubLabelFont), border width (hubBorderWidth), border color (hubBorderCol). hubTransp numeric 0 100 specifying color transparency hub nodes. See argument nodeTransp. Defaults 0.5*nodeTransp. Ignored highlightHubs FALSE. 
hubLabelFont integer specifying label font hub nodes. Defaults 2*labelFont. Ignored highlightHubs FALSE. hubBorderWidth numeric specifying border width hub nodes. Defaults 2*borderWidth. Ignored highlightHubs FALSE. hubBorderCol character specifying border color hub nodes. Defaults \"black\". Ignored highlightHubs FALSE. edgeFilter character specifying edges filtered. Possible values : \"none\" Default. edges plotted. \"threshold\" association networks, edges corresponding absolute association >= x plotted. dissimilarity networks, edges corresponding dissimilarity <= x plotted. behavior similar sparsification via threshold netConstruct(). \"highestWeight\" first x edges highest edge weight plotted. x defined edgeFilterPar, respectively. edgeFilterPar numeric specifying \"x\" edgeFilter. edgeInvisFilter similar edgeFilter edges removed computing layout edge removal influence layout. Defaults \"none\". edgeInvisPar numeric specifying \"x\" edgeInvisFilter. edgeWidth numeric specifying edge width. See argument \"edge.width\" qgraph. negDiffCol logical indicating edges negative corresponding association colored different. TRUE (default), argument posCol used edges positive association negCol negative association. FALSE dissimilarity networks, posCol used. posCol vector (character numeric) one two elements specifying color edges positive weight also edges negative weight negDiffCol set FALSE. first element used edges weight cut second edges weight cut. single value given, used cases. Defaults c(\"#009900\", \"darkgreen\"). negCol vector (character numeric) one two elements specifying color edges negative weight. first element used edges absolute weight cut second edges absolute weight cut. single value given, used cases. Ignored negDiffCol FALSE. Defaults c(\"red\", \"#BF0000\"). cut defines \"cut\" parameter qgraph. Can either numeric value (used groups two networks plotted) vector length two. default set analogous qgraph: \"0 graphs less 20 nodes. 
larger graphs cut value automatically chosen equal maximum 75th quantile absolute edge strengths edge strength corresponding 2n-th edge strength (n number nodes.)\" two networks plotted, mean two determined cut parameters used edge thicknesses comparable. edgeTranspLow numeric value 0 100 specifying transparency edges weight cut. higher value, higher transparency. edgeTranspHigh analogous edgeTranspLow, used edges weight cut. cexNodes numeric scaling node sizes. Defaults 1. cexHubs numeric scaling hub sizes. used nodeSize set \"hubs\". cexLabels numeric scaling node labels. Defaults 1. set 0, node labels plotted. cexHubLabels numeric scaling node labels hub nodes. Equals cexLabels default. Ignored, highlightHubs = FALSE. cexTitle numeric scaling title(s). Defaults 1.2. showTitle TRUE, title shown network, either defined via groupNames, title1 title2. Defaults TRUE two networks plotted FALSE single network. title1 character giving title first network. title2 character giving title second network (existing). mar numeric vector form c(bottom, left, top, right) defining plot margins. Works similar mar argument par. Defaults c(1,3,3,3). doPlot logical. FALSE, network plot suppressed. Useful saving output (e.g., layout) without plotting. ... 
arguments passed qgraph, used network plotting.","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"Returns (invisibly) list following elements:","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction amgut_net <- netConstruct(amgut1.filt, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Network analysis amgut_props <- netAnalyze(amgut_net) ### Network plots ### # Clusters are used for node coloring: plot(amgut_props, nodeColor = \"cluster\") # Remove singletons plot(amgut_props, nodeColor = \"cluster\", rmSingles = TRUE) # A higher repulsion places nodes with high edge weight closer together plot(amgut_props, nodeColor = \"cluster\", rmSingles = TRUE, repulsion = 1.2) # A feature vector is used for node coloring # (this could be a vector with phylum names of the ASVs) set.seed(123456) featVec <- sample(1:5, nrow(amgut1.filt), replace = TRUE) # Names must be equal to ASV names names(featVec) <- colnames(amgut1.filt) plot(amgut_props, rmSingles = TRUE, nodeColor = \"feature\", featVecCol = featVec, colorVec = heat.colors(5)) # Use a further feature vector for node shapes shapeVec <- sample(1:3, ncol(amgut1.filt), replace = TRUE) names(shapeVec) <- colnames(amgut1.filt) plot(amgut_props, rmSingles = TRUE, nodeColor = \"feature\", featVecCol = featVec, colorVec = heat.colors(5), nodeShape = c(\"circle\", \"square\", \"diamond\"), featVecShape = shapeVec, highlightHubs = FALSE)"},{"path":"https://netcomi.de/reference/plotHeat.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a heatmap with p-values — plotHeat","title":"Create a heatmap with p-values — plotHeat","text":"function draw heatmaps option use p-values significance codes cell text. allows draw mixed heatmap different cell text (values, p-values, significance code) lower upper triangle. 
function corrplot used plotting heatmap.","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a heatmap with p-values — plotHeat","text":"","code":"plotHeat( mat, pmat = NULL, type = \"full\", textUpp = \"mat\", textLow = \"code\", methUpp = \"color\", methLow = \"color\", diag = TRUE, title = \"\", mar = c(0, 0, 1, 0), labPos = \"lt\", labCol = \"gray40\", labCex = 1.1, textCol = \"black\", textCex = 1, textFont = 1, digits = 2L, legendPos = \"r\", colorPal = NULL, addWhite = TRUE, nCol = 51L, colorLim = NULL, revCol = FALSE, color = NULL, bg = \"white\", argsUpp = NULL, argsLow = NULL )"},{"path":"https://netcomi.de/reference/plotHeat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a heatmap with p-values — plotHeat","text":"mat numeric matrix values plotted. pmat optional matrix p-values. type character defining type heatmap. Possible values : \"full\" Default. cell text specified via textUpp used whole heatmap. \"mixed\" Different cell text used upper lower triangle. upper triangle specified via textUpp lower triangle via textLow. \"upper\" upper triangle plotted. text specified via textUpp. \"lower\" lower triangle plotted. text specified via textLow. textUpp character specifying cell text either full heatmap (type \"full\") upper triangle (type \"mixed\" \"upper\"). Default \"mat\". Possible values : \"mat\" Cells contain values matrix given mat \"sigmat\" \"mat\" insignificant values (cells) blank. \"pmat\" Cells contain p-values given p-mat. \"code\" Cells contain significance codes corresponding p-values given p-mat. following coding used: \"***: 0.001; **: 0.01; *: 0.05\". \"none\" cell text plotted. textLow textUpp lower triangle (type \"mixed\" \"lower\"). Default \"code\". methUpp character specifying values represented full heatmap (type \"full\") upper triangle (type \"mixed\" \"upper\"). 
Possible values : \"circle\", \"square\", \"ellipse\", \"number\", \"shade\", \"color\" (default), \"pie\". method passed method argument corrplot. methLow es methUpp lower triangle. diag logical. TRUE (default), diagonal printed. FALSE type \"full\" \"mixed\", diagonal cells white. FALSE type \"upper\" \"lower\", non-diagonal cells printed. title character giving title. mar vector specifying plot margins. See par. Default c(0, 0, 1, 0). labPos character defining label position. Possible values : \"lt\"(left top, default), \"ld\"(left diagonal; type must \"lower\"), \"td\"(top diagonal; type must \"upper\"), \"d\"(diagonal ), \"n\"(labels). Passed corrplot argument tl.pos. labCol label color. Default \"gray40\". Passed corrplot argument tl.col. labCex numeric defining label size. Default 1.1. Passed corrplot argument tl.cex. textCol color cell text (values, p-values, code). Default \"black\". textCex numeric defining text size. Default 1. Currently works types \"mat\" \"code\". textFont numeric defining text font. Default 1. Currently works type \"mat\". digits integer defining number decimal places used matrix values p-values. legendPos position color legend. Possible values : \"r\"(right; default), \"b\"(bottom), \"n\"(legend). colorPal character specifying color palette used cell coloring color set. Available sequential diverging color palettes RColorBrewer: Sequential: \"Blues\", \"BuGn\", \"BuPu\", \"GnBu\", \"Greens\", \"Greys\", \"Oranges\", \"OrRd\", \"PuBu\", \"PuBuGn\", \"PuRd\", \"Purples\", \"RdPu\", \"Reds\", \"YlGn\", \"YlGnBu\", \"YlOrBr\", \"YlOrRd\" Diverging: \"BrBG\", \"PiYG\", \"PRGn\", \"PuOr\", \"RdBu\", \"RdGy\", \"RdYlBu\", \"RdYlGn\", \"Spectral\" default, \"RdBu\" used first value colorLim negative \"YlOrRd\" otherwise. addWhite logical. TRUE, white added color palette. (first element sequential palettes middle element diverging palettes). diverging palette, nCol set odd number middle color white. 
nCol integer defining number colors color palette interpolated. Default 51L. colorRamp used color interpolation. colorLim numeric vector two values defining color limits. first element color vector assigned lower limit last element color vector upper limit. Default c(0,1) values mat [0,1], c(-1,1) values [-1,1], minimum maximum values otherwise. revCol logical. TRUE, reversed color vector used. Default FALSE. Ignored color given. color optional vector colors used cell coloring. bg background color cells. Default \"white\". argsUpp optional list arguments passed corrplot. Arguments set within plotHeat() overwritten arguments list. Used full heatmap type \"full\" upper triangle type \"mixed\" \"upper\". argsLow argsUpp lower triangle (type \"mixed\" \"lower\").","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a heatmap with p-values — plotHeat","text":"Invisible list two elements argsUpper argsLower containing corrplot arguments used upper lower triangle heatmap.","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a heatmap with p-values — plotHeat","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") # Sample sizes phyloseq::nsamples(amgut_season_yes) #> [1] 121 phyloseq::nsamples(amgut_season_no) #> [1] 163 # Make sample sizes equal to ensure comparability n_yes <- phyloseq::nsamples(amgut_season_yes) amgut_season_no <- phyloseq::subset_samples(amgut_season_no, X.SampleID %in% get_variable(amgut_season_no, \"X.SampleID\")[1:n_yes]) #> Error in 
h(simpleError(msg, call)): error in evaluating the argument 'table' in selecting a method for function '%in%': error in evaluating the argument 'object' in selecting a method for function 'sample_data': object 'amgut_season_no' not found # Network construction amgut_net <- netConstruct(data = amgut_season_yes, data2 = amgut_season_no, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"thresh\", thresh = 0.4, seed = 123456) #> Checking input arguments ... #> Done. #> Data filtering ... #> 95 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 1 rows with zero sum removed in group 2. #> 43 taxa and 120 samples remaining in group 1. #> 43 taxa and 162 samples remaining in group 2. #> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. #> #> Sparsify associations in group 2 ... #> Done. # Estimated and sparsified associations of group 1 plotHeat(amgut_net$assoEst1, textUpp = \"none\", labCex = 0.6) plotHeat(amgut_net$assoMat1, textUpp = \"none\", labCex = 0.6) # Compute graphlet correlation matrices and perform significance tests adja1 <- amgut_net$adjaMat1 adja2 <- amgut_net$adjaMat2 gcm1 <- calcGCM(adja1) gcm2 <- calcGCM(adja2) gcmtest <- testGCM(obj1 = gcm1, obj2 = gcm2) #> Perform Student's t-test for GCM1 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.22 #> Done. #> #> Perform Student's t-test for GCM2 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.08 #> Done. 
#> #> Test GCM1 and GCM2 for differences ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.64 #> Done. # Mixed heatmap of GCM1 and significance codes plotHeat(mat = gcmtest$gcm1, pmat = gcmtest$pAdjust1, type = \"mixed\", textLow = \"code\") # Mixed heatmap of GCM2 and p-values (diagonal disabled) plotHeat(mat = gcmtest$gcm2, pmat = gcmtest$pAdjust2, diag = FALSE, type = \"mixed\", textLow = \"pmat\") # Mixed heatmap of differences (GCM1 - GCM2) and significance codes plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"mixed\", textLow = \"code\", title = \"Differences between GCMs (GCM1 - GCM2)\", mar = c(0, 0, 2, 0)) # Heatmap of differences (insignificant values are blank) plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"full\", textUpp = \"sigmat\") # Same as before but with higher significance level plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"full\", textUpp = \"sigmat\", argsUpp = list(sig.level = 0.1)) # Heatmap of absolute differences # (different position of labels and legend) plotHeat(mat = gcmtest$absDiff, type = \"full\", labPos = \"d\", legendPos = \"b\") # Mixed heatmap of absolute differences # (different methods, text options, and color palette) plotHeat(mat = gcmtest$absDiff, type = \"mixed\", textLow = \"mat\", methUpp = \"number\", methLow = \"circle\", labCol = \"black\", textCol = \"gray50\", textCex = 1.3, textFont = 2, digits = 1L, colorLim = range(gcmtest$absDiff), colorPal = \"Blues\", nCol = 21L, bg = \"darkorange\", addWhite = FALSE) # Mixed heatmap of differences # (different methods, text options, and color palette) plotHeat(mat = gcmtest$diff, type = \"mixed\", textLow = \"none\", methUpp = \"number\", methLow = \"pie\", textCex = 1.3, textFont = 2, digits = 1L, colorLim = range(gcmtest$diff), colorPal = \"PiYG\", nCol = 21L, bg = \"gray80\") # Heatmap of differences with given color vector plotHeat(mat = gcmtest$diff, nCol = 21L, color = 
grDevices::colorRampPalette(c(\"blue\", \"white\", \"orange\"))(31))"},{"path":"https://netcomi.de/reference/print.GCD.html","id":null,"dir":"Reference","previous_headings":"","what":"Print method for GCD objects — print.GCD","title":"Print method for GCD objects — print.GCD","text":"Print method GCD objects","code":""},{"path":"https://netcomi.de/reference/print.GCD.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print method for GCD objects — print.GCD","text":"","code":"# S3 method for class 'GCD' print(x, ...)"},{"path":"https://netcomi.de/reference/print.GCD.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print method for GCD objects — print.GCD","text":"x object class GCD (returned calcGCD). ... used.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":null,"dir":"Reference","previous_headings":"","what":"Rename taxa — renameTaxa","title":"Rename taxa — renameTaxa","text":"Function renaming taxa taxonomic table, can given matrix phyloseq object. comes functionality making unknown unclassified taxa unique substituting next higher known taxonomic level, e.g., unknown genus \"g__\" can automatically renamed \"1_Streptococcaceae(F)\". User-defined patterns determine format known substituted names. Unknown names (e.g., NAs) unclassified taxa can handled separately. 
Duplicated names within one chosen ranks can also made unique numbering consecutively.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Rename taxa — renameTaxa","text":"","code":"renameTaxa( taxtab, pat = \"_\", substPat = \"___\", unknown = c(NA, \"\", \" \", \"__\"), numUnknown = TRUE, unclass = c(\"unclassified\", \"Unclassified\"), numUnclass = TRUE, numUnclassPat = \"\", numDupli = NULL, numDupliPat = \"\", ranks = NULL, ranksAbb = NULL, ignoreCols = NULL )"},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Rename taxa — renameTaxa","text":"taxtab taxonomic table (matrix containing taxonomic names; columns must taxonomic ranks) phyloseq object. pat character specifying pattern new taxonomic names current name KNOWN. See examples default value demo. Possible space holders : Taxonomic name (either original replaced one) Taxonomic rank lower case Taxonomic rank first letter upper case Abbreviated taxonomic rank lower case Abbreviated taxonomic rank upper case substPat character specifying pattern new taxonomic names current name UNKNOWN. current name substituted next higher existing name. Possible space holders (addition pat): Substituted taxonomic name (next higher existing name) Taxonomic rank substitute name lower case Taxonomic rank substitute name first letter upper case Abbreviated taxonomic rank substitute name lower case Abbreviated taxonomic rank substitute name upper case unknown character vector giving labels unknown taxa, without leading rank label (e.g., \"g_\" \"g__\" genus level). numUnknown = TRUE, unknown names replaced number. numUnknown logical. TRUE, number assigned unknown taxonomic names (defined unknown) make unique. unclass character vector giving label unclassified taxa, without leading rank label (e.g., \"g_\" \"g__\" genus level). 
numUnclass = TRUE, number added names unclassified taxa. Note unclassified taxa unknown taxa get separate numbering unclass set. replace unknown unclassified taxa numbers, add \"unclassified\" (appropriate counterpart) unknown set unclass NULL. numUnclass logical. TRUE, number assigned unclassified taxa (defined unclass) make unique. pattern defined via numUnclassPat. numUnclassPat character defining pattern used numbering unclassified taxa. Must include space holder name (\"\") one number (\"\"). Default \"\" resulting e.g., \"unclassified1\". numDupli character vector giving ranks made unique adding number. Elements must match column names. pattern defined via numDupliPat. numDupliPat character defining pattern used numbering duplicated names (numDupli given). Must include space holder name (\"\") one number (\"\"). Default \"\" resulting e.g., \"Ruminococcus1\". ranks character vector giving rank names used renaming taxa. NULL, functions tries automatically set rank names based common usage. ranksAbb character vector giving abbreviated rank names, directly used place holders , , , (former two lower case latter two upper case). NULL, first letter rank names used. ignoreCols numeric vector columns ignored. Names remain unchanged columns. Columns containing NAs ignored automatically ignoreCols = NULL. 
Note: length ranks ranksAbb must match number non-ignored columns.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Rename taxa — renameTaxa","text":"Renamed taxonomic table (matrix phyloseq object, depending input).","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Rename taxa — renameTaxa","text":"","code":"#--- Load and edit data ----------------------------------------------------- library(phyloseq) data(\"GlobalPatterns\") global <- subset_taxa(GlobalPatterns, Kingdom == \"Bacteria\") taxtab <- global@tax_table@.Data[1:10, ] # Add some unclassified taxa taxtab[c(2,3,5), \"Species\"] <- \"unclassified\" taxtab[c(2,3), \"Genus\"] <- \"unclassified\" taxtab[2, \"Family\"] <- \"unclassified\" # Add some blanks taxtab[7, \"Genus\"] <- \" \" taxtab[7:9, \"Species\"] <- \" \" # Add taxon that is unclassified up to Kingdom taxtab[9, ] <- \"unclassified\" taxtab[9, 1] <- \"Unclassified\" # Add row names rownames(taxtab) <- paste0(\"OTU\", 1:nrow(taxtab)) print(taxtab) #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"Unclassified\" \"unclassified\" \"unclassified\" \"unclassified\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" 
#> Family Genus Species #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU3 \"Propionibacteriaceae\" \"unclassified\" \"unclassified\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" NA #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified\" #> OTU6 NA NA NA #> OTU7 \"Nocardioidaceae\" \" \" \" \" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" \" \" #> OTU9 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU10 NA NA NA #--- Example 1 (default setting) -------------------------------------------- # Example 1 (default setting) # - Known names are replaced by \"_\" # - Unknown names are replaced by \"___\" # - Unclassified taxa have separate numbering # - Ranks are taken from column names # - e.g., unknown genus -> \"g_1_f_Streptococcaceae\" renamed1 <- renameTaxa(taxtab) renamed1 #> Kingdom Phylum #> OTU1 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU2 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU3 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU4 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU5 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU6 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU7 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU8 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU9 \"k_Unclassified1\" \"p_unclassified1_k_Unclassified1\" #> OTU10 \"k_Bacteria\" \"p_Actinobacteria\" #> Class Order #> OTU1 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU2 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU3 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU4 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU5 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU6 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU7 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU8 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU9 \"c_unclassified1_k_Unclassified1\" \"o_unclassified1_k_Unclassified1\" #> OTU10 \"c_Actinobacteria\" \"o_Actinomycetales\" #> Family #> OTU1 \"f_Propionibacteriaceae\" #> OTU2 
\"f_unclassified1_o_Actinomycetales\" #> OTU3 \"f_Propionibacteriaceae\" #> OTU4 \"f_Propionibacteriaceae\" #> OTU5 \"f_Propionibacteriaceae\" #> OTU6 \"f_1_o_Actinomycetales\" #> OTU7 \"f_Nocardioidaceae\" #> OTU8 \"f_Nocardioidaceae\" #> OTU9 \"f_unclassified2_k_Unclassified1\" #> OTU10 \"f_2_o_Actinomycetales\" #> Genus #> OTU1 \"g_Propionibacterium\" #> OTU2 \"g_unclassified1_o_Actinomycetales\" #> OTU3 \"g_unclassified2_f_Propionibacteriaceae\" #> OTU4 \"g_Tessaracoccus\" #> OTU5 \"g_Aestuariimicrobium\" #> OTU6 \"g_1_o_Actinomycetales\" #> OTU7 \"g_2_f_Nocardioidaceae\" #> OTU8 \"g_Propionicimonas\" #> OTU9 \"g_unclassified3_k_Unclassified1\" #> OTU10 \"g_3_o_Actinomycetales\" #> Species #> OTU1 \"s_Propionibacteriumacnes\" #> OTU2 \"s_unclassified1_o_Actinomycetales\" #> OTU3 \"s_unclassified2_f_Propionibacteriaceae\" #> OTU4 \"s_1_g_Tessaracoccus\" #> OTU5 \"s_unclassified3_g_Aestuariimicrobium\" #> OTU6 \"s_2_o_Actinomycetales\" #> OTU7 \"s_3_f_Nocardioidaceae\" #> OTU8 \"s_4_g_Propionicimonas\" #> OTU9 \"s_unclassified4_k_Unclassified1\" #> OTU10 \"s_5_o_Actinomycetales\" #--- Example 2 -------------------------------------------------------------- # - Use phyloseq object (subset of class clostridia to decrease runtime) global_sub <- subset_taxa(global, Class == \"Clostridia\") renamed2 <- renameTaxa(global_sub) tax_table(renamed2)[1:5, ] #> Taxonomy Table: [5 taxa by 7 taxonomic ranks]: #> Kingdom Phylum Class Order #> 69790 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Halanaerobiales\" #> 201587 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Halanaerobiales\" #> 14244 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> 589048 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> 310026 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> Family Genus #> 69790 \"f_Halobacteroidaceae\" \"g_1_f_Halobacteroidaceae\" #> 201587 \"f_Halanaerobiaceae\" \"g_2_f_Halanaerobiaceae\" #> 14244 
\"f_1_o_Clostridiales\" \"g_3_o_Clostridiales\" #> 589048 \"f_2_o_Clostridiales\" \"g_4_o_Clostridiales\" #> 310026 \"f_3_o_Clostridiales\" \"g_5_o_Clostridiales\" #> Species #> 69790 \"s_1_f_Halobacteroidaceae\" #> 201587 \"s_2_f_Halanaerobiaceae\" #> 14244 \"s_3_o_Clostridiales\" #> 589048 \"s_4_o_Clostridiales\" #> 310026 \"s_5_o_Clostridiales\" #--- Example 3 -------------------------------------------------------------- # - Known names remain unchanged # - Substituted names are indicated by their rank in brackets # - Pattern for numbering unclassified taxa changed # - e.g., unknown genus -> \"Streptococcaceae (F)\" # - Note: Numbering of unknowns is not shown because \"\" is not # included in \"substPat\" renamed3 <- renameTaxa(taxtab, numUnclassPat = \"_\", pat = \"\", substPat = \" ()\") renamed3 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified_1\" \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family Genus #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacteriaceae (F)\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" \"Nocardioidaceae (F)\" #> 
OTU8 \"Actinomycetales\" \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" #> OTU10 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"Actinomycetales (O)\" #> OTU3 \"Propionibacteriaceae (F)\" #> OTU4 \"Tessaracoccus (G)\" #> OTU5 \"Aestuariimicrobium (G)\" #> OTU6 \"Actinomycetales (O)\" #> OTU7 \"Nocardioidaceae (F)\" #> OTU8 \"Propionicimonas (G)\" #> OTU9 \"Unclassified_1 (K)\" #> OTU10 \"Actinomycetales (O)\" #--- Example 4 -------------------------------------------------------------- # - Same as before but numbering shown for unknown names # - e.g., unknown genus -> \"1 Streptococcaceae (F)\" renamed4 <- renameTaxa(taxtab, numUnclassPat = \"_\", pat = \"\", substPat = \" ()\") renamed4 #> Kingdom Phylum #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified_1\" \"unclassified_1 Unclassified_1 (K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Class Order #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified_1 Unclassified_1 (K)\" \"unclassified_1 Unclassified_1 (K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Family #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" 
#> OTU6 \"1 Actinomycetales (O)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified_2 Unclassified_1 (K)\" #> OTU10 \"2 Actinomycetales (O)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"unclassified_2 Propionibacteriaceae (F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1 Actinomycetales (O)\" #> OTU7 \"2 Nocardioidaceae (F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified_3 Unclassified_1 (K)\" #> OTU10 \"3 Actinomycetales (O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"unclassified_2 Propionibacteriaceae (F)\" #> OTU4 \"1 Tessaracoccus (G)\" #> OTU5 \"unclassified_3 Aestuariimicrobium (G)\" #> OTU6 \"2 Actinomycetales (O)\" #> OTU7 \"3 Nocardioidaceae (F)\" #> OTU8 \"4 Propionicimonas (G)\" #> OTU9 \"unclassified_4 Unclassified_1 (K)\" #> OTU10 \"5 Actinomycetales (O)\" #--- Example 5 -------------------------------------------------------------- # - Same numbering for unknown names and unclassified taxa # - e.g., unknown genus -> \"1_Streptococcaceae(F)\" # - Note: We get a warning here because \"Unclassified\" (with capital U) # is not included in \"unknown\" but occurs in the data renamed5 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\"), pat = \"\", substPat = \"_()\") #> Warning: Taxonomic table contains unclassified taxa. Consider adding \"Unclassified\" to argument \"unknown\". 
renamed5 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified\" \"1_Unclassified(K)\" \"1_Unclassified(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family Genus #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"Actinomycetales\" \"1_Actinomycetales(O)\" \"1_Actinomycetales(O)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" \"2_Propionibacteriaceae(F)\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"Actinomycetales\" \"2_Actinomycetales(O)\" \"3_Actinomycetales(O)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" \"4_Nocardioidaceae(F)\" #> OTU8 \"Actinomycetales\" \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"1_Unclassified(K)\" \"3_Unclassified(K)\" \"5_Unclassified(K)\" #> OTU10 \"Actinomycetales\" \"4_Actinomycetales(O)\" \"6_Actinomycetales(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1_Actinomycetales(O)\" #> OTU3 \"2_Propionibacteriaceae(F)\" #> OTU4 \"3_Tessaracoccus(G)\" #> OTU5 \"4_Aestuariimicrobium(G)\" #> OTU6 \"5_Actinomycetales(O)\" #> OTU7 \"6_Nocardioidaceae(F)\" #> OTU8 \"7_Propionicimonas(G)\" #> OTU9 \"8_Unclassified(K)\" #> OTU10 \"9_Actinomycetales(O)\" #--- Example 6 -------------------------------------------------------------- # - Same as before, but OTU9 is now renamed correctly renamed6 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\", \"Unclassified\"), 
pat = \"\", substPat = \"_()\") renamed6 #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"1\" \"1_1(K)\" \"1_1(K)\" \"1_1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Family Genus #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"1_Actinomycetales(O)\" \"1_Actinomycetales(O)\" #> OTU3 \"Propionibacteriaceae\" \"2_Propionibacteriaceae(F)\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"2_Actinomycetales(O)\" \"3_Actinomycetales(O)\" #> OTU7 \"Nocardioidaceae\" \"4_Nocardioidaceae(F)\" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"3_1(K)\" \"5_1(K)\" #> OTU10 \"4_Actinomycetales(O)\" \"6_Actinomycetales(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1_Actinomycetales(O)\" #> OTU3 \"2_Propionibacteriaceae(F)\" #> OTU4 \"3_Tessaracoccus(G)\" #> OTU5 \"4_Aestuariimicrobium(G)\" #> OTU6 \"5_Actinomycetales(O)\" #> OTU7 \"6_Nocardioidaceae(F)\" #> OTU8 \"7_Propionicimonas(G)\" #> OTU9 \"8_1(K)\" #> OTU10 \"9_Actinomycetales(O)\" #--- Example 7 -------------------------------------------------------------- # - Add \"(: unknown)\" to unknown names # - e.g., unknown genus -> \"1 Streptococcaceae (Genus: unknown)\" renamed7 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\", 
\"Unclassified\"), pat = \"\", substPat = \" (: unknown)\") renamed7 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"1\" \"1 1 (Phylum: unknown)\" \"1 1 (Class: unknown)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU2 \"Actinomycetales\" \"1 Actinomycetales (Family: unknown)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU6 \"Actinomycetales\" \"2 Actinomycetales (Family: unknown)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" #> OTU8 \"Actinomycetales\" \"Nocardioidaceae\" #> OTU9 \"1 1 (Order: unknown)\" \"3 1 (Family: unknown)\" #> OTU10 \"Actinomycetales\" \"4 Actinomycetales (Family: unknown)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"1 Actinomycetales (Genus: unknown)\" #> OTU3 \"2 Propionibacteriaceae (Genus: unknown)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"3 Actinomycetales (Genus: unknown)\" #> OTU7 \"4 Nocardioidaceae (Genus: unknown)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"5 1 (Genus: unknown)\" #> OTU10 \"6 Actinomycetales (Genus: unknown)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1 Actinomycetales (Species: unknown)\" #> OTU3 \"2 Propionibacteriaceae (Species: unknown)\" #> OTU4 \"3 Tessaracoccus (Species: unknown)\" #> OTU5 \"4 Aestuariimicrobium (Species: unknown)\" #> OTU6 \"5 Actinomycetales (Species: unknown)\" #> OTU7 \"6 
Nocardioidaceae (Species: unknown)\" #> OTU8 \"7 Propionicimonas (Species: unknown)\" #> OTU9 \"8 1 (Species: unknown)\" #> OTU10 \"9 Actinomycetales (Species: unknown)\" #--- Example 8 -------------------------------------------------------------- # - Do not substitute unknowns and unclassified taxa by higher ranks # - e.g., unknown genus -> \"1\" renamed8 <- renameTaxa(taxtab, pat = \"\", substPat = \"\") renamed8 #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"Unclassified1\" \"unclassified1\" \"unclassified1\" \"unclassified1\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Family Genus Species #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified1\" \"unclassified1\" \"unclassified1\" #> OTU3 \"Propionibacteriaceae\" \"unclassified2\" \"unclassified2\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" \"1\" #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified3\" #> OTU6 \"1\" \"1\" \"2\" #> OTU7 \"Nocardioidaceae\" \"2\" \"3\" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" \"4\" #> OTU9 \"unclassified2\" \"unclassified3\" \"unclassified4\" #> OTU10 \"2\" \"3\" \"5\" #--- Example 9 -------------------------------------------------------------- # - Error if ranks cannot be automatically determined # from column names or taxonomic names 
taxtab_noranks <- taxtab colnames(taxtab_noranks) <- paste0(\"Rank\", 1:ncol(taxtab)) head(taxtab_noranks) #> Rank1 Rank2 Rank3 Rank4 #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Rank5 Rank6 Rank7 #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU3 \"Propionibacteriaceae\" \"unclassified\" \"unclassified\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" NA #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified\" #> OTU6 NA NA NA if (FALSE) { # \\dontrun{ renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\") } # } # Ranks can either be given via \"ranks\" ... 
(ranks <- colnames(taxtab)) #> [1] \"Kingdom\" \"Phylum\" \"Class\" \"Order\" \"Family\" \"Genus\" \"Species\" renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\", ranks = ranks) renamed9 #> Rank1 Rank2 #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Rank3 Rank4 #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified1_Unclassified1(K)\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Rank5 #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" #> OTU6 \"1_Actinomycetales(O)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified2_Unclassified1(K)\" #> OTU10 \"2_Actinomycetales(O)\" #> Rank6 #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae(F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales(O)\" #> OTU7 \"2_Nocardioidaceae(F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(K)\" #> OTU10 \"3_Actinomycetales(O)\" #> Rank7 #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 
\"unclassified2_Propionibacteriaceae(F)\" #> OTU4 \"1_Tessaracoccus(G)\" #> OTU5 \"unclassified3_Aestuariimicrobium(G)\" #> OTU6 \"2_Actinomycetales(O)\" #> OTU7 \"3_Nocardioidaceae(F)\" #> OTU8 \"4_Propionicimonas(G)\" #> OTU9 \"unclassified4_Unclassified1(K)\" #> OTU10 \"5_Actinomycetales(O)\" # ... or \"ranksAbb\" (we now use the lower case within \"substPat\") (ranks <- substr(colnames(taxtab), 1, 1)) #> [1] \"K\" \"P\" \"C\" \"O\" \"F\" \"G\" \"S\" renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\", ranksAbb = ranks) renamed9 #> Rank1 Rank2 #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(k)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Rank3 Rank4 #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified1_Unclassified1(k)\" \"unclassified1_Unclassified1(k)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Rank5 #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" #> OTU6 \"1_Actinomycetales(o)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified2_Unclassified1(k)\" #> OTU10 \"2_Actinomycetales(o)\" #> Rank6 #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"unclassified2_Propionibacteriaceae(f)\" #> 
OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales(o)\" #> OTU7 \"2_Nocardioidaceae(f)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(k)\" #> OTU10 \"3_Actinomycetales(o)\" #> Rank7 #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"unclassified2_Propionibacteriaceae(f)\" #> OTU4 \"1_Tessaracoccus(g)\" #> OTU5 \"unclassified3_Aestuariimicrobium(g)\" #> OTU6 \"2_Actinomycetales(o)\" #> OTU7 \"3_Nocardioidaceae(f)\" #> OTU8 \"4_Propionicimonas(g)\" #> OTU9 \"unclassified4_Unclassified1(k)\" #> OTU10 \"5_Actinomycetales(o)\" #--- Example 10 ------------------------------------------------------------- # - Make names of ranks \"Family\" and \"Order\" unique by adding numbers to # duplicated names renamed10 <- renameTaxa(taxtab, pat = \"\", substPat = \"_()\", numDupli = c(\"Family\", \"Order\")) renamed10 #> Kingdom Phylum #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Class Order #> OTU1 \"Actinobacteria\" \"Actinomycetales1\" #> OTU2 \"Actinobacteria\" \"Actinomycetales2\" #> OTU3 \"Actinobacteria\" \"Actinomycetales3\" #> OTU4 \"Actinobacteria\" \"Actinomycetales4\" #> OTU5 \"Actinobacteria\" \"Actinomycetales5\" #> OTU6 \"Actinobacteria\" \"Actinomycetales6\" #> OTU7 \"Actinobacteria\" \"Actinomycetales7\" #> OTU8 \"Actinobacteria\" \"Actinomycetales8\" #> OTU9 \"unclassified1_Unclassified1(K)\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales9\" #> Family #> OTU1 \"Propionibacteriaceae1\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"Propionibacteriaceae2\" #> 
OTU4 \"Propionibacteriaceae3\" #> OTU5 \"Propionibacteriaceae4\" #> OTU6 \"1_Actinomycetales6(O)\" #> OTU7 \"Nocardioidaceae1\" #> OTU8 \"Nocardioidaceae2\" #> OTU9 \"unclassified2_Unclassified1(K)\" #> OTU10 \"2_Actinomycetales9(O)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae2(F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales6(O)\" #> OTU7 \"2_Nocardioidaceae1(F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(K)\" #> OTU10 \"3_Actinomycetales9(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae2(F)\" #> OTU4 \"1_Tessaracoccus(G)\" #> OTU5 \"unclassified3_Aestuariimicrobium(G)\" #> OTU6 \"2_Actinomycetales6(O)\" #> OTU7 \"3_Nocardioidaceae1(F)\" #> OTU8 \"4_Propionicimonas(G)\" #> OTU9 \"unclassified4_Unclassified1(K)\" #> OTU10 \"5_Actinomycetales9(O)\" any(duplicated(renamed10[, \"Family\"])) #> [1] FALSE any(duplicated(renamed10[, \"Order\"])) #> [1] FALSE"},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for Objects of Class microNetComp — summary.microNetComp","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"main results returned netCompare printed well-arranged format.","code":""},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"","code":"# S3 method for class 'microNetComp' summary( object, groupNames = NULL, showCentr = \"all\", numbNodes = 10L, showGlobal = TRUE, showGlobalLCC = TRUE, showJacc = TRUE, showRand = TRUE, showGCD = TRUE, pAdjust = TRUE, digits = 3L, digitsPval = 6L, ... 
) # S3 method for class 'summary.microNetComp' print(x, ...)"},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"object object class microNetComp returned netCompare. groupNames character vector two elements giving group names two networks. NULL, names adopted object. showCentr character vector indicating centrality measures included summary. Possible values \"\", \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" \"none\". numbNodes integer indicating many nodes centrality values shall printed. Defaults 10 meaning first 10 taxa highest absolute group difference specific centrality measure shown. showGlobal logical. TRUE, global network properties whole network printed. showGlobalLCC logical. TRUE, global network properties largest connected component printed. network connected (number components 1) global properties printed (one arguments showGlobal showGlobalLCC) TRUE. showJacc logical. TRUE, Jaccard index printed. showRand logical. TRUE, adjusted Rand index (existent) returned. showGCD logical. TRUE, Graphlet Correlation Distance (existent) printed. pAdjust logical. permutation p-values (existent) adjusted TRUE (default) adjusted FALSE. digits integer giving number decimal places results rounded. Defaults 3L. digitsPval integer giving number decimal places p-values rounded. Defaults 6L. ... used. 
x object class summary.microNetComp (returned summary.microNetComp).","code":""},{"path":[]},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for Objects of Class microNetProps — summary.microNetProps","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"main results returned netAnalyze printed well-arranged format.","code":""},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"","code":"# S3 method for class 'microNetProps' summary( object, groupNames = NULL, showCompSize = TRUE, showGlobal = TRUE, showGlobalLCC = TRUE, showCluster = TRUE, clusterLCC = FALSE, showHubs = TRUE, showCentr = \"all\", numbNodes = NULL, digits = 5L, ... ) # S3 method for class 'summary.microNetProps' print(x, ...)"},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"object object class microNetProps (returned netAnalyze). groupNames character vector two elements giving group names corresponding two networks. NULL, names adopted object. Ignored object contains single network. showCompSize logical. TRUE, component sizes printed. showGlobal logical. TRUE, global network properties whole network printed. showGlobalLCC logical. TRUE, global network properties largest connected component printed. network connected (number components 1) global properties printed (one arguments showGlobal showGlobalLCC) TRUE. showCluster logical. TRUE, cluster(s) printed. clusterLCC logical. TRUE, clusters printed largest connected component. Defaults FALSE (whole network). showHubs logical. TRUE, detected hubs printed. 
showCentr character vector indicating centrality measures results shall printed. Possible values \"\", \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" \"none\". numbNodes integer indicating many nodes centrality values shall printed. Defaults 10L single network 5L two networks. Thus, case single network, first 10 nodes highest centrality value specific centrality measure shown. object contains two networks, centrality measure split matrix shown upper part contains highest values first group, lower part highest values second group. digits integer giving number decimal places results rounded. Defaults 5L. ... used. x object class summary.microNetProps (returned summary.microNetProps).","code":""},{"path":[]},{"path":"https://netcomi.de/reference/testGCM.html","id":null,"dir":"Reference","previous_headings":"","what":"Test GCM(s) for statistical significance — testGCM","title":"Test GCM(s) for statistical significance — testGCM","text":"function tests whether graphlet correlations (entries GCM) significantly different zero. two GCMs given, graphlet correlations two networks tested significantly different, .e., Fisher's z-test performed test absolute differences graphlet correlations significantly different zero.","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test GCM(s) for statistical significance — testGCM","text":"","code":"testGCM( obj1, obj2 = NULL, adjust = \"adaptBH\", lfdrThresh = 0.2, trueNullMethod = \"convest\", alpha = 0.05, verbose = TRUE )"},{"path":"https://netcomi.de/reference/testGCM.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test GCM(s) for statistical significance — testGCM","text":"obj1 object class GCM GCD returned calcGCM calcGCD. See details. obj2 optional object class GCM returned calcGCM. See details. adjust character indicating method used multiple testing adjustment. 
Possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool), \"adaptBH\" adaptive Benjamini-Hochberg method (Benjamini Hochberg, 2000), one methods provided p.adjust. lfdrThresh defines threshold local fdr \"lfdr\" chosen method multiple testing correction. Defaults 0.2 meaning differences corresponding local fdr less equal 0.2 identified significant. trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). alpha numeric value 0 1 giving desired significance level. verbose logical. TRUE (default), progress messages printed.","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test GCM(s) for statistical significance — testGCM","text":"list following elements: Additional elements two GCMs given:","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Test GCM(s) for statistical significance — testGCM","text":"applying Student's t-test Fisher-transformed correlations, entries GCM(s) tested significantly different zero: H0: gc_ij = 0 vs. H1: gc_ij != 0, gc_ij graphlet correlations. GCMs given obj1 class GCD, absolute differences graphlet correlations tested different zero using Fisher's z-test. hypotheses : H0: |d_ij| = 0 vs. 
H1: |d_ij| > 0, d_ij = gc1_ij - gc2_ij","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test GCM(s) for statistical significance — testGCM","text":"","code":"# See help page of calcGCD() ?calcGCD"},{"path":[]},{"path":"https://netcomi.de/news/index.html","id":"new-features-1-1-0","dir":"Changelog","previous_headings":"","what":"New features","title":"NetCoMi 1.1.0","text":"renameTaxa(): New function renaming taxa taxonomic table. comes functionality making unknown unclassified taxa unique substituting next higher known taxonomic level. E.g., unknown genus “g__“, family next higher known level, can automatically renamed ”1_Streptococcaceae(F)“. User-defined patterns determine format known substituted names. Unknown names (e.g., NAs) unclassified taxa can handled separately. Duplicated names within one chosen ranks can also made unique numbering consecutively. editLabels(): New function editing node labels, .e., shortening certain length removing unwanted characters. used NetCoMi’s plot functions plot.microNetProps() plot.diffnet(). netCompare(): adjusted Rand index also computed largest connected component (LCC). summary method adapted. Argument “testRand” added netCompare(). Performing permutation test adjusted Rand index can now disabled save run time. Graphlet-based network measures implemented. NetCoMi contains two new exported functions calcGCM() calcGCD() compute Graphlet Correlation Matrix (GCM) network Graphlet Correlation Distance (GCD) two networks. Orbits graphlets four nodes considered. Furthermore, GCM computed netAnalyze() GCD netCompare() (whole network largest connected component, respectively). Also orbit counts returned. GCD added summary class microNetComp objects returned netCompare(). Significance test GCD: permutation tests conducted netCompare(), GCD tested significantly different zero. 
New function testGCM() test graphlet-based measures significance. single GCM, correlations tested significantly different zero. two GCMs given, tested correlations significantly different two groups, absolute differences correlations ( |gc1_{ij} - gc2_{ij}| ) tested different zero. New function plotHeat() plotting mixed heatmap , instance, values shown upper triangle corresponding p-values significance codes lower triangle. function used plotting heatmaps GCMs, also used association matrices. netAnalyze() now default returns heatmap GCM(s) graphlet correlations upper triangle significance codes lower triangle. Argument “doPlot” added plot.microNetProps() suppress plot return value interest. New “show” arguments added summary methods class microNetProps microNetComp objects. specify network properties printed summary. See help pages summary.microNetProps summary.microNetComp() details. New zero replacement method “pseudoZO” available netConstruct(). Instead adding desired pseudo count whole count matrix, added zero counts pseudoZO chosen. behavior “pseudo” (available method pseudo count added counts) changed. Adding pseudo count zeros preserves ratios non-zero counts, desirable. createAssoPerm() now accepts objects class microNet input (addition objects class microNetProps). SPRING's fast version latent correlation computation (implemented mixedCCA) available . can used setting netConstruct() parameter measurePar$Rmethod “approx”, now default . function multAdjust() now argument pTrueNull pre-define proportion true null hypotheses adaptive BH method. netConstruct() new argument assoBoot, enables computation bootstrap association matrices outside netConstruct() bootstrapping used sparsification. example added help page ?netConstruct. 
feature might useful large association matrices (working memory might reach limit).","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-1-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.1.0","text":"netConstruct(): Using “bootstrap” sparsification method combination one association methods “bicor”, “cclasso”, “ccrepe”, “gcoda” led error: argument \"verbose\" missing, default, fixed. “signedPos” transformation work properly. Dissimilarities corresponding negative correlations set zero instead infinity. editLabels(): function (thus also plot.microNetProps) threw error taxa renamed renameTaxa data contain 9 taxa equal names, double-digit numbers added avoid duplicates. Issues network analysis plotting association matrices used network construction, row /column names missing. (issue #65) diffnet() threw error association matrices used network construction instead count matrices. (issue #66) plot.microNetProps(): function now directly returns error x expected class. cut parameter changed. cclasso(): rare cases, function produced complex numbers, led error.","code":""},{"path":"https://netcomi.de/news/index.html","id":"further-changes-1-1-0","dir":"Changelog","previous_headings":"","what":"Further changes","title":"NetCoMi 1.1.0","text":"permutation tests: permuted group labels must now different original group vector. words, original group vector strictly avoided matrix permuted group labels. far, duplicates avoided. exact permutation tests (nPerm equals possible number permutations), original group vector still included permutation matrix. calculation p-values adapted new behavior: p=B/N exact p-values p=(B+1)/(N+1) approximated p-values, B number permutation test statistics larger equal observed one, N number permutations. far, p=(B+1)/(N+1) used cases. plot.microNetProps(): default shortenLabels now “none”, .e. labels shortened default, avoid confusion node labels. 
edge filter (specified via edgeFilter edgeInvisFilter) now refers estimated association/dissimilarities instead edge weights. E.g., setting threshold 0.3 association network hides edges corresponding absolute association 0.3 even though edge weight might different (depending transformation used network construction). (issue #26) two networks constructed cut parameter user-defined, mean two determined cut parameters now used networks edge thicknesses comparable. expressive messages errors diffnet plot.diffnet differential associations detected. New function .suppress_warnings() suppress certain warnings returned external functions. netConstruct “multRepl” used zero handling: warning proportion zeros suppressed setting multRepl() parameter “z.warning” 1. functions makeCluster stopCluster parallel package now used parallel computation snow package sometimes led problems Unix machines.","code":""},{"path":"https://netcomi.de/news/index.html","id":"style-1-1-0","dir":"Changelog","previous_headings":"","what":"Style","title":"NetCoMi 1.1.0","text":"whole R code reformatted follow general conventions. element \"clustering_lcc\" part netAnalyze output changed \"clusteringLCC\" line remaining output. Input argument checking exported function revised. New functions .checkArgsXxx() added perform argument checking outside main functions. Non-exported functions renamed follow general naming conventions, .e. Bioconductor: Use camelCase functions. Non-exported functions prefix “.” following functions renamed:","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcomi-103","dir":"Changelog","previous_headings":"","what":"NetCoMi 1.0.3","title":"NetCoMi 1.0.3","text":"minor release bug fixes changes documentation.","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-0-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.0.3","text":"netConstruct() threw error data row /column names, fixed. 
edge list added output netConstruct() (issue #41). See help page details. SPRING’s fast version latent correlation computation (implemented mixedCCA) currently available due deprecation R package chebpol. issue fixed setting netConstruct() parameter measurePar$Rmethod internally “original” SPRING used association estimation. plot.microNetProps(): xpd parameter changed NA plotting outside plot region possible (useful legends additional text). Labels network plot can now suppressed setting labels = FALSE (issue #43) netCompare() function threw error one permutation networks empty, .e. edges weight different zero (issue #38), now fixed. Fix issues #29 #40, permutation tests terminate small sample sizes. Now, possible number permutations (resulting sample size) smaller defined user, function stops returns error. Fix bug diffnet() (issue #51), colors differential networks changed. diffnet() threw error netConstruct() argument jointPrepro set TRUE.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcomi-102","dir":"Changelog","previous_headings":"","what":"NetCoMi 1.0.2","title":"NetCoMi 1.0.2","text":"release includes range new features fixes known bugs issues.","code":""},{"path":[]},{"path":"https://netcomi.de/news/index.html","id":"improved-installation-process-1-0-2","dir":"Changelog","previous_headings":"New features","what":"Improved installation process","title":"NetCoMi 1.0.2","text":"Packages optionally required certain settings installed together NetCoMi anymore. Instead, new function installNetCoMiPacks() installing remaining packages. 
installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":""},{"path":"https://netcomi.de/news/index.html","id":"installnetcomipacks-1-0-2","dir":"Changelog","previous_headings":"New features","what":"installNetCoMiPacks()","title":"NetCoMi 1.0.2","text":"New function installing R packages used NetCoMi listed dependencies imports NetCoMi’s description file.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netconstruct-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netConstruct()","title":"NetCoMi 1.0.2","text":"New argument matchDesign: Implements matched-group (.e. matched-pair) designs, used permutation tests netCompare() diffnet(). c(1,2), instance, means one sample first group matched two samples second group. argument NULL, matched-group design kept generating permuted data. New argument jointPrepro: Specifies whether two data sets (group one two) preprocessed together. Preprocessing includes sample taxa filtering, zero treatment, normalization. Defaults TRUE data group given, FALSE data data2 given, similar behavior NetCoMi 1.0.1. dissimilarity networks, joint preprocessing possible. mclr(){SPRING} now available normalization method. clr{SpiecEasi} used centered log-ratio transformation instead cenLR(){robCompositions}. \"symBetaMode\" accepted list element measurePar, passed symBeta(){SpiecEasi}. needed SpiecEasi SPRING associations. pseudocount (zeroMethod = \"pseudo\") may freely specified. v1.0.1, unit pseudocounts possible.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netanalyze-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netAnalyze()","title":"NetCoMi 1.0.2","text":"Global network properties now computed whole network well largest connected component (LCC). summary network properties now contains whole network statistics based shortest paths (, generally, also meaningful disconnected networks). 
LCC, global properties available NetCoMi shown. New global network properties (see docu netAnalyze() definitions): Number components (whole network) Relative LCC size (LCC) Positive edge percentage Natural connectivity Average dissimilarity (meaningful LCC) Average path length (meaningful LCC) New argument centrLCC: Specifies whether compute centralities LCC. TRUE, centrality values disconnected components zero. New argument avDissIgnoreInf: Indicates whether infinite values ignored average dissimilarity. FALSE, infinities set 1. New argument sPathAlgo: Algorithm used computing shortest paths New argument sPathNorm: Indicates whether shortest paths normalized average dissimilarity improve interpretability. New argument normNatConnect: Indicates whether normalize natural connectivity values. New argument weightClustCoef: Specifies algorithm used computing global clustering coefficient. FALSE, transitivity(){igraph} type = \"global\" used (similar NetCoMi 1.0.1). TRUE, local clustering coefficient computed using transitivity(){igraph} type = \"barrat\". global clustering coefficient arithmetic mean local values. Argument connect changed connectivity. Documentation extended definitions network properties.","code":""},{"path":"https://netcomi.de/news/index.html","id":"summarymicronetprops-1-0-2","dir":"Changelog","previous_headings":"New features","what":"summary.microNetProps()","title":"NetCoMi 1.0.2","text":"New argument clusterLCC: Indicates whether clusters shown whole network LCC. print method summary.microNetProps completely revised.","code":""},{"path":"https://netcomi.de/news/index.html","id":"plotmicronetprops-1-0-2","dir":"Changelog","previous_headings":"New features","what":"plot.microNetProps()","title":"NetCoMi 1.0.2","text":"normalization methods available network construction can now used scaling node sizes (argument nodeSize). New argument normPar: Optional parameters used normalization. 
Usage colorVec changed: Node colors can now set separately groups (colorVec can single vector list two vectors). Usage depends nodeColor (see docu colorVec). New argument sameFeatCol: nodeColor = \"feature\" colorVec given, sameFeatCol indicates whether features colors groups. Argument colorNegAsso renamed negDiffCol. Using old name leads warning. New functionality using layout groups (two networks plotted). addition computing layout one group adopting group, union layout can computed used groups nodes placed optimal possible equally networks. option applied via sameLayout = TRUE layoutGroup = \"union\". Many thanks Christian L. Müller Alice Sommer providing idea R code new feature!","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcompare-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netCompare()","title":"NetCoMi 1.0.2","text":"New arguments storing association count matrices permuted data external file: fileLoadAssoPerm fileLoadCountsPerm storeAssoPerm fileStoreAssoPerm storeCountsPerm fileStoreCountsPerm New argument returnPermProps: TRUE, global network properties respective absolute group differences permuted data returned. New argument returnPermCentr: TRUE, computed centrality values respective absolute group differences permuted data returned list matrix centrality measure. arguments assoPerm dissPerm still existent compatibility NetCoMi 1.0.1 former elements assoPerm dissPerm returned anymore (matrices stored external file instead).","code":""},{"path":"https://netcomi.de/news/index.html","id":"createassoperm-1-0-2","dir":"Changelog","previous_headings":"New features","what":"createAssoPerm()","title":"NetCoMi 1.0.2","text":"New function creating association/dissimilarity matrices permuted count data. stored count association/dissimilarity matrices can passed netCompare() diffnet() decrease runtime. function also allows generate matrix permuted group labels without computing associations. 
Using matrix, createAssoPerm() furthermore allows estimate permutation associations/dissimilarities blocks (passing subset permuted group matrix createAssoPerm()).","code":""},{"path":"https://netcomi.de/news/index.html","id":"summarymicronetcomp-1-0-2","dir":"Changelog","previous_headings":"New features","what":"summary.microNetComp()","title":"NetCoMi 1.0.2","text":"Summary method adapted new network properties (analogous summary microNetProps objects, returned netAnalyze())","code":""},{"path":"https://netcomi.de/news/index.html","id":"diffnet-1-0-2","dir":"Changelog","previous_headings":"New features","what":"diffnet()","title":"NetCoMi 1.0.2","text":"New arguments storing association count matrices permuted data external file: fileLoadAssoPerm fileLoadCountsPerm storeAssoPerm fileStoreAssoPerm storeCountsPerm fileStoreCountsPerm argument assoPerm still existent compatibility NetCoMi 1.0.1 former element assoPerm returned anymore (matrices stored external file instead). Changed output: permutation tests Fisher’s z-test, vector matrix p-values corresponding matrix group differences returned without multiple testing adjustment. Documentation revised.","code":""},{"path":"https://netcomi.de/news/index.html","id":"plotdiffnet-1-0-2","dir":"Changelog","previous_headings":"New features","what":"plot.diffnet()","title":"NetCoMi 1.0.2","text":"New argument adjusted: Indicates whether adjacency matrix (matrix group differences) based adjusted unadjusted p-values plotted. New argument legendPos positioning legend. New argument legendArgs specifying arguments passed legend.","code":""},{"path":"https://netcomi.de/news/index.html","id":"coltotransp-1-0-2","dir":"Changelog","previous_headings":"New features","what":"colToTransp()","title":"NetCoMi 1.0.2","text":"function now exported name changed col_to_transp() colToTransp(). 
function expects color vector input adds transparency color.","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-0-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.0.2","text":"major issues fixed release : following error solved: Error update.list(...): argument \"new\" missing. error caused conflict SpiecEasi metagenomeSeq, particular gplot dependency metagenomeSeq. former version gplot dependend gdata, caused conflict. , please update gplot remove package gdata fix error. sparcc() SpiecEasi package now used estimating SparCC associations. users, NetCoMi’s Rccp implementation SparCC caused errors installing NetCoMi. fixed, Rcpp implementation included , users can decide two SparCC versions. VST transformations now computed correctly. Error plotting two networks, one network empty, fixed.","code":""}] +[{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-construction","dir":"Articles","previous_headings":"","what":"Network construction","title":"Get started","text":"use SPRING package estimating associations (conditional dependence) OTUs. data filtered within netConstruct() follows: samples total number reads least 1000 included (argument filtSamp). 50 taxa highest frequency included (argument filtTax). measure defines association dissimilarity measure, \"spring\" case. Additional arguments passed SPRING() via measurePar. nlambda rep.num set 10 decreased execution time, higher real data. Rmethod set “approx” estimate correlations using hybrid multi-linear interpolation approach proposed @yoon2020fast. method considerably reduces runtime controlling approximation error. Normalization well zero handling performed internally SPRING(). Hence, set normMethod zeroMethod \"none\". furthermore set sparsMethod \"none\" SPRING returns sparse network additional sparsification step necessary. use “signed” method transforming associations dissimilarities (argument dissFunc). 
, strongly negatively associated taxa high dissimilarity , turn, low similarity, corresponds edge weights network plot. verbose argument set 3 messages generated netConstruct() well messages external functions printed.","code":"net_spring <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), measure = \"spring\", measurePar = list(nlambda=10, rep.num=10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Calculate 'spring' associations ... Registered S3 method overwritten by 'dendextend': #> method from #> rev.hclust vegan #> Registered S3 method overwritten by 'seriation': #> method from #> reorder.hclust vegan #> Done."},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-analysis","dir":"Articles","previous_headings":"","what":"Network analysis","title":"Get started","text":"NetCoMi’s netAnalyze() function used analyzing constructed network(s). , centrLCC set TRUE meaning centralities calculated nodes largest connected component (LCC). Clusters identified using greedy modularity optimization (cluster_fast_greedy() igraph package). Hubs nodes eigenvector centrality value empirical 95% quantile eigenvector centralities network (argument hubPar). weightDeg normDeg set FALSE degree node simply defined number nodes adjacent node. default, heatmap Graphlet Correlation Matrix (GCM) returned (graphlet correlations upper triangle significance codes resulting Student’s t-test lower triangle). 
See ?calcGCM ?testGCM details.","code":"props_spring <- netAnalyze(net_spring, centrLCC = TRUE, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\", weightDeg = FALSE, normDeg = FALSE) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. #?summary.microNetProps summary(props_spring, numbNodes = 5L) #> #> Component sizes #> ``````````````` #> size: 48 1 #> #: 1 2 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.96000 #> Clustering coefficient 0.33594 #> Modularity 0.53407 #> Positive edge percentage 88.34951 #> Edge density 0.09131 #> Natural connectivity 0.02855 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.97035 #> Average path length** 2.36912 #> #> Whole network: #> #> Number of components 3.00000 #> Clustering coefficient 0.33594 #> Modularity 0.53407 #> Positive edge percentage 88.34951 #> Edge density 0.08408 #> Natural connectivity 0.02714 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 2 12 17 10 5 4 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 190597 #> 288134 #> 311477 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> 
```````````````````````````````````````````````` #> Degree (unnormalized): #> #> 288134 10 #> 190597 9 #> 311477 9 #> 188236 8 #> 199487 8 #> #> Betweenness centrality (normalized): #> #> 302160 0.31360 #> 268332 0.24144 #> 259569 0.23404 #> 470973 0.21462 #> 119010 0.19611 #> #> Closeness centrality (normalized): #> #> 288134 0.68426 #> 311477 0.68413 #> 199487 0.68099 #> 302160 0.67518 #> 188236 0.66852 #> #> Eigenvector centrality (normalized): #> #> 288134 1.00000 #> 311477 0.94417 #> 190597 0.90794 #> 199487 0.85439 #> 188236 0.72684"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"plotting-the-gcm-heatmap-manually","dir":"Articles","previous_headings":"Network analysis","what":"Plotting the GCM heatmap manually","title":"Get started","text":"","code":"plotHeat(mat = props_spring$graphletLCC$gcm1, pmat = props_spring$graphletLCC$pAdjust1, type = \"mixed\", title = \"GCM\", colorLim = c(-1, 1), mar = c(2, 0, 2, 0)) # Add rectangles highlighting the four types of orbits graphics::rect(xleft = c( 0.5, 1.5, 4.5, 7.5), ybottom = c(11.5, 7.5, 4.5, 0.5), xright = c( 1.5, 4.5, 7.5, 11.5), ytop = c(10.5, 10.5, 7.5, 4.5), lwd = 2, xpd = NA) text(6, -0.2, xpd = NA, \"Significance codes: ***: 0.001; **: 0.01; *: 0.05\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"visualizing-the-network","dir":"Articles","previous_headings":"","what":"Visualizing the network","title":"Get started","text":"use determined clusters node colors scale node sizes according node’s eigenvector centrality. Note edge weights (non-negative) similarities, however, edges belonging negative estimated associations colored red default (negDiffCol = TRUE). default, different transparency value added edges absolute weight cut value (arguments edgeTranspLow edgeTranspHigh). 
determined cut value can read follows:","code":"# help page ?plot.microNetProps p <- plot(props_spring, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with SPRING associations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated association:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) p$q1$Arguments$cut #> 75% #> 0.337099"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"export-to-gephi","dir":"Articles","previous_headings":"","what":"Export to Gephi","title":"Get started","text":"users may interested export network Gephi. ’s example: exported .csv files can imported Gephi.","code":"# For Gephi, we have to generate an edge list with IDs. # The corresponding labels (and also further node features) are stored as node list. # Create edge object from the edge list exported by netConstruct() edges <- dplyr::select(net_spring$edgelist1, v1, v2) # Add Source and Target variables (as IDs) edges$Source <- as.numeric(factor(edges$v1)) edges$Target <- as.numeric(factor(edges$v2)) edges$Type <- \"Undirected\" edges$Weight <- net_spring$edgelist1$adja nodes <- unique(edges[,c('v1','Source')]) colnames(nodes) <- c(\"Label\", \"Id\") # Add category with clusters (can be used as node colors in Gephi) nodes$Category <- props_spring$clustering$clust1[nodes$Label] edges <- dplyr::select(edges, Source, Target, Type, Weight) write.csv(nodes, file = \"nodes.csv\", row.names = FALSE) write.csv(edges, file = \"edges.csv\", row.names = FALSE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-with-pearson-correlation-as-association-measure","dir":"Articles","previous_headings":"","what":"Network with Pearson correlation as association measure","title":"Get started","text":"Let’s construct another network using Pearson’s correlation coefficient association measure. input now phyloseq object. 
Since Pearson correlations may lead compositional effects applied sequencing data, use clr transformation normalization method. Zero treatment necessary case. threshold 0.3 used sparsification method, OTUs absolute correlation greater equal 0.3 connected. Network analysis plotting: Let’s improve visualization changing following arguments: repulsion = 0.8: Place nodes apart. rmSingles = TRUE: Single nodes removed. labelScale = FALSE cexLabels = 1.6: labels equal size enlarged improve readability small node’s labels. nodeSizeSpread = 3 (default 4): Node sizes similar value decreased. argument (combination cexNodes) useful enlarge small nodes keeping size big nodes. hubBorderCol = \"darkgray\": Change border color better readability node labels.","code":"net_pears <- netConstruct(amgut2.filt.phy, measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"multRepl\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... Done. #> 2 rows with zero sum removed. #> 138 taxa and 294 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Sparsify associations via 'threshold' ... Done. 
props_pears <- netAnalyze(net_pears, clustMethod = \"cluster_fast_greedy\") plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) plot(props_pears, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network on OTU level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"edge-filtering","dir":"Articles","previous_headings":"Network with Pearson correlation as association measure","what":"Edge filtering","title":"Get started","text":"network can sparsified using arguments edgeFilter (edges filtered layout computed) edgeInvisFilter (edges removed layout computed thus just made “invisible”).","code":"plot(props_pears, edgeInvisFilter = \"threshold\", edgeInvisPar = 0.4, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.8, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = paste0(\"Network on OTU level with Pearson correlations\", \"\\n(edge filter: threshold = 0.4)\"), showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"using-the-unsigned-transformation","dir":"Articles","previous_headings":"","what":"Using the “unsigned” 
transformation","title":"Get started","text":"network, “signed” transformation used transform estimated associations dissimilarities. leads network strongly positive correlated taxa high edge weight (1 correlation equals 1) strongly negative correlated taxa low edge weight (0 correlation equals -1). now use “unsigned” transformation edge weight strongly correlated taxa high, matter sign. Hence, correlation -1 1 lead edge weight 1.","code":""},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-construction-1","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Network construction","title":"Get started","text":"can pass network object netConstruct() save runtime.","code":"net_pears_unsigned <- netConstruct(data = net_pears$assoEst1, dataType = \"correlation\", sparsMethod = \"threshold\", thresh = 0.3, dissFunc = \"unsigned\", verbose = 3) #> Checking input arguments ... Done. #> #> Sparsify associations via 'threshold' ... Done."},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"estimated-correlations-and-adjacency-values","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Estimated correlations and adjacency values","title":"Get started","text":"following histograms demonstrate estimated correlations transformed adjacencies (= sparsified similarities weighted networks). 
Sparsified estimated correlations: Adjacency values computed using “signed” transformation (values different 0 1 edges network): Adjacency values computed using “unsigned” transformation:","code":"hist(net_pears$assoMat1, 100, xlim = c(-1, 1), ylim = c(0, 400), xlab = \"Estimated correlation\", main = \"Estimated correlations after sparsification\") hist(net_pears$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"signed\\\" transformation)\") hist(net_pears_unsigned$adjaMat1, 100, ylim = c(0, 400), xlab = \"Adjacency values\", main = \"Adjacencies (with \\\"unsigned\\\" transformation)\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-analysis-and-plotting","dir":"Articles","previous_headings":"Using the “unsigned” transformation","what":"Network analysis and plotting","title":"Get started","text":"“signed” transformation, positive correlated taxa likely belong cluster, “unsigned” transformation clusters contain strongly positive negative correlated taxa.","code":"props_pears_unsigned <- netAnalyze(net_pears_unsigned, clustMethod = \"cluster_fast_greedy\", gcmHeat = FALSE) plot(props_pears_unsigned, nodeColor = \"cluster\", nodeSize = \"eigenvector\", repulsion = 0.9, rmSingles = TRUE, labelScale = FALSE, cexLabels = 1.6, nodeSizeSpread = 3, cexNodes = 2, hubBorderCol = \"darkgray\", title1 = \"Network with Pearson correlations and \\\"unsigned\\\" transformation\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-on-genus-level","dir":"Articles","previous_headings":"","what":"Network on genus level","title":"Get started","text":"now construct network, OTUs agglomerated genera.","code":"library(phyloseq) data(\"amgut2.filt.phy\") # Agglomerate to genus level amgut_genus <- 
tax_glom(amgut2.filt.phy, taxrank = \"Rank6\") # Taxonomic table taxtab <- as(tax_table(amgut_genus), \"matrix\") # Rename taxonomic table and make Rank6 (genus) unique amgut_genus_renamed <- renameTaxa(amgut_genus, pat = \"\", substPat = \"_()\", numDupli = \"Rank6\") #> Column 7 contains NAs only and is ignored. # Network construction and analysis net_genus <- netConstruct(amgut_genus_renamed, taxRank = \"Rank6\", measure = \"pearson\", zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... #> Done. #> 2 rows with zero sum removed. #> 43 taxa and 294 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Sparsify associations via 'threshold' ... Done. props_genus <- netAnalyze(net_genus, clustMethod = \"cluster_fast_greedy\")"},{"path":"https://netcomi.de/articles/NetCoMi.html","id":"network-plots","dir":"Articles","previous_headings":"Network on genus level","what":"Network plots","title":"Get started","text":"Modifications: Fruchterman-Reingold layout algorithm igraph package used (passed plot matrix) Shortened labels (using “intelligent” method, avoids duplicates) Fixed node sizes, hubs enlarged Node color gray nodes (transparancy lower hub nodes default) Since visualization obviously optimal, make adjustments: time, Fruchterman-Reingold layout algorithm computed within plot function thus applied “reduced” network without singletons Labels scaled node sizes Single nodes removed Node sizes scaled column sums clr-transformed data Node colors represent determined clusters Border color hub nodes changed black darkgray Label size hubs enlarged Let’s check whether largest nodes actually highest column sums matrix normalized counts returned netConstruct(). 
order improve plot, use following modifications: time, choose “spring” layout part qgraph() (function generally used network plotting NetCoMi) repulsion value 1 places nodes apart Labels shortened anymore Nodes (bacteria genus level) colored according respective phylum Edges representing positive associations colored blue, negative ones orange (just give example alternative edge coloring) Transparency increased edges high weight improve readability node labels","code":"# Compute layout graph3 <- igraph::graph_from_adjacency_matrix(net_genus$adjaMat1, weighted = TRUE) set.seed(123456) lay_fr <- igraph::layout_with_fr(graph3) # Row names of the layout matrix must match the node names rownames(lay_fr) <- rownames(net_genus$adjaMat1) plot(props_genus, layout = lay_fr, shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), nodeSize = \"fix\", nodeColor = \"gray\", cexNodes = 0.8, cexHubs = 1.1, cexLabels = 1.2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) set.seed(123456) plot(props_genus, layout = \"layout_with_fr\", shortenLabels = \"intelligent\", labelLength = 10, labelPattern = c(5, \"'\", 3, \"'\", 3), labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeColor = \"cluster\", hubBorderCol = \"darkgray\", cexNodes = 2, cexLabels = 1.5, cexHubLabels = 2, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"#009900\",\"red\"), bty = \"n\", horiz = TRUE) sort(colSums(net_genus$normCounts1), decreasing = TRUE)[1:10] #> Bacteroides Klebsiella Faecalibacterium #> 1200.7971 1137.4928 708.0877 #> 5_Clostridiales(O) 2_Ruminococcaceae(F) 3_Lachnospiraceae(F) #> 
549.2647 502.1889 493.7558 #> 6_Enterobacteriaceae(F) Roseburia Parabacteroides #> 363.3841 333.8737 328.0495 #> Coprococcus #> 274.4082 # Get phyla names taxtab <- as(tax_table(amgut_genus_renamed), \"matrix\") phyla <- as.factor(gsub(\"p__\", \"\", taxtab[, \"Rank2\"])) names(phyla) <- taxtab[, \"Rank6\"] #table(phyla) # Define phylum colors phylcol <- c(\"cyan\", \"blue3\", \"red\", \"lawngreen\", \"yellow\", \"deeppink\") plot(props_genus, layout = \"spring\", repulsion = 0.84, shortenLabels = \"none\", charToRm = \"g__\", labelScale = FALSE, rmSingles = TRUE, nodeSize = \"clr\", nodeSizeSpread = 4, nodeColor = \"feature\", featVecCol = phyla, colorVec = phylcol, posCol = \"darkturquoise\", negCol = \"orange\", edgeTranspLow = 0, edgeTranspHigh = 40, cexNodes = 2, cexLabels = 2, cexHubLabels = 2.5, title1 = \"Network on genus level with Pearson correlations\", showTitle = TRUE, cexTitle = 2.3) # Colors used in the legend should be equally transparent as in the plot phylcol_transp <- colToTransp(phylcol, 60) legend(-1.2, 1.2, cex = 2, pt.cex = 2.5, title = \"Phylum:\", legend=levels(phyla), col = phylcol_transp, bty = \"n\", pch = 16) legend(0.7, 1.1, cex = 2.2, title = \"estimated correlation:\", legend = c(\"+\",\"-\"), lty = 1, lwd = 3, col = c(\"darkturquoise\",\"orange\"), bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-construction-and-analysis","dir":"Articles","previous_headings":"","what":"Network construction and analysis","title":"Generate permuted association matrices","text":"use data American Gut Project conduct network comparison subjects without lactose intolerance. demonstrate NetCoMi’s functionality matched data, build “fake” 1:2 matched data set, two samples LACTOSE = \"\" group assigned one sample LACTOSE = \"yes\" group. 
use subset 150 samples, leading 50 samples group “yes” 100 samples group “”.","code":"library(NetCoMi) library(phyloseq) set.seed(123456) # Load American Gut Data (from SpiecEasi package) data(\"amgut2.filt.phy\") #table(amgut2.filt.phy@sam_data@.Data[[which(amgut2.filt.phy@sam_data@names == \"LACTOSE\")]]) # Divide samples into two groups: with and without lactose intolerance lact_yes <- phyloseq::subset_samples(amgut2.filt.phy, LACTOSE == \"yes\") lact_no <- phyloseq::subset_samples(amgut2.filt.phy, LACTOSE == \"no\") # Extract count tables counts_yes <- t(as(phyloseq::otu_table(lact_yes), \"matrix\")) counts_no <- t(as(phyloseq::otu_table(lact_no), \"matrix\")) # Build the 1:2 matched data set counts_matched <- matrix(NA, nrow = 150, ncol = ncol(counts_yes)) colnames(counts_matched) <- colnames(counts_yes) rownames(counts_matched) <- 1:150 ind_yes <- ind_no <- 1 for (i in 1:150) { if ((i-1)%%3 == 0) { counts_matched[i, ] <- counts_yes[ind_yes, ] rownames(counts_matched)[i] <- rownames(counts_yes)[ind_yes] ind_yes <- ind_yes + 1 } else { counts_matched[i, ] <- counts_no[ind_no, ] rownames(counts_matched)[i] <- rownames(counts_no)[ind_no] ind_no <- ind_no + 1 } } # The corresponding group vector used for splitting the data into two subsets. group_vec <- rep(c(1,2,2), 50) # Note: group \"1\" belongs to \"yes\", group \"2\" belongs to \"no\" # Network construction net_amgut <- netConstruct(counts_matched, group = group_vec, matchDesign = c(1,2), filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", zeroMethod = \"pseudo\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.4, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 88 taxa removed. #> 50 taxa and 150 samples remaining. #> #> Zero treatment: #> Pseudo count of 1 added. #> #> Normalization: #> Execute clr(){SpiecEasi} ... Done. #> #> Calculate 'pearson' associations ... Done. #> #> Calculate associations in group 2 ... Done. 
#> #> Sparsify associations via 'threshold' ... Done. #> #> Sparsify associations in group 2 ... Done. # Network analysis with default values props_amgut <- netAnalyze(net_amgut) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. #summary(props_amgut) # Network plot plot(props_amgut, sameLayout = TRUE, layoutGroup = \"union\", nodeSize = \"clr\", repulsion = 0.9, cexTitle = 3.7, cexNodes = 2, cexLabels = 2, groupNames = c(\"LACTOSE = yes\", \"LACTOSE = no\")) legend(\"bottom\", title = \"estimated correlation:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-comparison-via-the-classical-way","dir":"Articles","previous_headings":"","what":"Network comparison via the “classical way”","title":"Generate permuted association matrices","text":"conduct network comparison permutation tests examine whether group differences significant. order reduce execution time, 100 permutations used. real data sets, number permutations least 1000 get reliable results. matrices estimated associations permuted data stored external file (current working directory) named \"assoPerm_comp\". network comparison repeated, time, stored permutation associations loaded netCompare(). option might useful rerun function alternative multiple testing adjustment, without need re-estimating associations. stored permutation associations can also passed diffnet() construct differential network. expected number permutations 100, differential associations multiple testing adjustment. 
Just take look differential network look like, plot differential network based non-adjusted p-values. Note approach statistically correct!","code":"# Network comparison comp_amgut_orig <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Files 'assoPerm_comp.bmat and assoPerm_comp.desc.txt created. #> Execute permutation tests ... #> |======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. summary(comp_amgut_orig) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = props_amgut, permTest = TRUE, nPerm = 100, seed = 123456, #> storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm_comp\", #> storeCountsPerm = FALSE) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff.
p-value #> Relative LCC size 0.480 0.400 0.080 0.950495 #> Clustering coefficient 0.510 0.635 0.125 0.584158 #> Modularity 0.261 0.175 0.085 0.524752 #> Positive edge percentage 57.627 62.500 4.873 0.554455 #> Edge density 0.214 0.295 0.081 0.693069 #> Natural connectivity 0.080 0.109 0.029 0.742574 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.921 0.887 0.033 0.693069 #> Average path length** 1.786 1.459 0.327 0.643564 #> #> Whole network: #> group '1' group '2' abs.diff. p-value #> Number of components 21.000 27.000 6.000 0.861386 #> Clustering coefficient 0.463 0.635 0.172 0.247525 #> Modularity 0.332 0.252 0.080 0.504950 #> Positive edge percentage 58.462 63.333 4.872 0.564356 #> Edge density 0.053 0.049 0.004 0.871287 #> Natural connectivity 0.030 0.031 0.001 0.772277 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.615 0.991177 0.034655 * #> betweenness centr. 0.312 0.546936 0.660877 #> closeness centr. 0.529 0.972716 0.075475 . #> eigenvec. centr. 0.625 0.995960 0.015945 * #> hub taxa 0.500 0.888889 0.407407 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.383 0.281 #> p-value 0.000 0.000 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.914000 2.241000 #> p-value 0.633663 0.574257 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 364563 0.061 0.245 0.184 1 #> 190597 0.102 0.000 0.102 1 #> 369164 0.102 0.000 0.102 1 #> 184983 0.184 0.102 0.082 1 #> 194648 0.000 0.082 0.082 1 #> 353985 0.082 0.000 0.082 1 #> 242070 0.082 0.000 0.082 1 #> 307981 0.122 0.184 0.061 1 #> 188236 0.122 0.184 0.061 1 #> 363302 0.102 0.041 0.061 1 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 353985 0.320 0.000 0.320 0.494890 #> 364563 0.040 0.216 0.177 0.999678 #> 188236 0.024 0.187 0.163 0.999678 #> 307981 0.020 0.158 0.138 0.999678 #> 157547 0.103 0.000 0.103 0.999678 #> 71543 0.091 0.000 0.091 0.999678 #> 590083 0.087 0.000 0.087 0.742335 #> 190597 0.079 0.000 0.079 0.999678 #> 194648 0.000 0.070 0.070 0.999678 #> 288134 0.063 0.129 0.065 0.999678 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 190597 0.871 0.000 0.871 0.489307 #> 194648 0.000 0.868 0.868 0.768911 #> 242070 0.809 0.000 0.809 0.733961 #> 369164 0.788 0.000 0.788 0.733961 #> 302160 0.000 0.677 0.677 0.988400 #> 353985 0.639 0.000 0.639 0.733961 #> 516022 0.000 0.561 0.561 0.988400 #> 181095 0.515 0.000 0.515 0.733961 #> 590083 0.507 0.000 0.507 0.489307 #> 470239 0.349 0.000 0.349 0.988400 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 242070 0.469 0.000 0.469 0.9998 #> 307981 0.334 0.672 0.339 0.9998 #> 301645 0.332 0.661 0.329 0.9998 #> 364563 0.079 0.339 0.260 0.9998 #> 369164 0.201 0.000 0.201 0.9998 #> 326792 0.130 0.300 0.170 0.9998 #> 157547 0.632 0.801 0.169 0.9998 #> 190597 0.146 0.000 0.146 0.9998 #> 363302 0.153 0.012 0.141 0.9998 #> 71543 0.910 0.777 0.133 0.9998 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # Network comparison comp_amgut1 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... #> | | | 0% | |= | 1% | |= | 2% | |== | 3% | |=== | 4% | |==== | 5% | |==== | 6% | |===== | 7% | |====== | 8% | |====== | 9% | |======= | 10% | |======== | 11% | |======== | 12% | |========= | 13% | |========== | 14% | |========== | 15% | |=========== | 16% | |============ | 17% | |============= | 18% | |============= | 19% | |============== | 20% | |=============== | 21% | |=============== | 22% | |================ | 23% | |================= | 24% | |================== | 25% | |================== | 26% | |=================== | 27% | |==================== | 28% | |==================== | 29% | |===================== | 30% | |====================== | 31% | |====================== | 32% | |======================= | 33% | |======================== | 34% | |======================== | 35% | |========================= | 36% | |========================== | 37% | |=========================== | 38% | |=========================== | 39% | |============================ | 40% | |============================= | 41% | |============================= | 42% | |============================== | 43% | |=============================== | 44% | |================================ | 45% | |================================ | 46% | 
|======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Check whether the second comparison leads to equal results all.equal(comp_amgut_orig$properties, comp_amgut1$properties) #> [1] TRUE # Construct differential network diffnet_amgut <- diffnet(net_amgut, diffMethod = \"permute\", nPerm = 100, fileLoadAssoPerm = \"assoPerm_comp\", storeCountsPerm = FALSE) #> Checking input arguments ... #> Done. #> Execute permutation tests ...
#> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> Done. #> No significant differential associations detected after multiple testing adjustment. plot(diffnet_amgut) #> Error in plot.diffnet(diffnet_amgut): There are no differential correlations to plot (after multiple testing adjustment). plot(diffnet_amgut, adjusted = FALSE, mar = c(2, 2, 5, 15), legendPos = c(1.2,1.2), legendArgs = list(bty = \"n\"), legendGroupnames = c(\"yes\", \"no\"), legendTitle = \"Correlations:\")"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"network-comparison-using-createassoperm","dir":"Articles","previous_headings":"","what":"Network comparison using createAssoPerm()","title":"Generate permuted association matrices","text":"This time, the permuted association matrices are generated using createAssoPerm() and then passed to netCompare(). The output is written to a variable because createAssoPerm() generally returns a matrix of permuted group labels. Let's take a look at these permuted group labels. To interpret the group labels correctly, it is important to know that within netConstruct(), the data set is divided into two matrices belonging to the two groups. For the permutation tests, the two matrices are combined by rows, and in each permutation, the samples are reassigned to one of the two groups while keeping the matching design for matched data. 
In our case, the permGroupMat matrix consists of 100 rows (nPerm = 100) and 150 columns (our sample size). The first 50 columns belong to the first group (group “yes” in our case) and columns 51 to 150 belong to the second group. Since two samples of group 2 are matched to one sample of group 1, the number of group labels in the matrix differs accordingly. Now we can see that the matching design is kept: Since sample 3 is assigned to group 1, samples 1 and 2 are assigned to group 2 (entries [1,1] and [1,51:52] of permGroupMat). Thereafter, the stored permutation association matrices are passed to netCompare(). Using the fm.open function, we take a look at the stored matrices themselves.","code":"permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = TRUE, append = FALSE, seed = 123456) seq1 <- seq(1,150, by = 3) seq2 <- seq(1:150)[!seq(1:150)%in%seq1] colnames(permGroupMat) <- c(seq1, seq2) permGroupMat[1:5, 1:10] #> 1 4 7 10 13 16 19 22 25 28 #> [1,] 2 2 2 2 1 2 2 1 2 2 #> [2,] 2 2 2 1 1 2 2 2 2 2 #> [3,] 2 2 1 1 2 2 2 1 2 2 #> [4,] 2 2 1 2 1 2 2 2 1 1 #> [5,] 2 2 2 2 2 1 2 2 2 2 permGroupMat[1:5, 51:71] #> 2 3 5 6 8 9 11 12 14 15 17 18 20 21 23 24 26 27 29 30 32 #> [1,] 2 1 1 2 2 1 1 2 2 2 1 2 2 1 2 2 2 1 1 2 1 #> [2,] 2 1 2 1 2 1 2 2 2 2 1 2 1 2 2 1 2 1 2 1 2 #> [3,] 2 1 1 2 2 2 2 2 1 2 1 2 2 1 2 2 1 2 2 1 2 #> [4,] 1 2 1 2 2 2 1 2 2 2 1 2 2 1 1 2 2 2 2 2 1 #> [5,] 2 1 2 1 1 2 2 1 1 2 2 2 2 1 1 2 1 2 1 2 2 comp_amgut2 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\", seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... 
#> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal? all.equal(comp_amgut_orig$properties, comp_amgut2$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- filematrix::fm.open(filenamebase = \"assoPerm_comp\" , readonly = TRUE) assoPerm2 <- filematrix::fm.open(filenamebase = \"assoPerm\" , readonly = TRUE) identical(as.matrix(assoPerm1), as.matrix(assoPerm2)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm2)) #> [1] 5000 100 # Close files filematrix::close(assoPerm1) filematrix::close(assoPerm2)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"block-wise-execution","dir":"Articles","previous_headings":"","what":"Block-wise execution","title":"Generate permuted association matrices","text":"Due to limited resources, it might be meaningful to estimate the associations in blocks, that is, for a subset of permutations instead of all permutations at once. We'll now see how to perform such a block-wise network comparison using NetCoMi's functions. Note that in this approach, the external file is extended in each iteration, which is why it is not parallelizable. In the first step, createAssoPerm is used to generate the matrix with permuted group labels (for all permutations!). Hence, the computeAsso parameter is set to FALSE. We now compute the association matrices in blocks of 20 permutations within a loop (leading to 5 iterations). Note: The nPerm argument must be set to the block size. 
The external file (containing the association matrices) must be extended in each iteration of the loop, except for the first iteration, where the file is created. Thus, append is set to TRUE for all iterations i >= 2. The stored file, which now contains the associations of all 100 permutations, can be passed to netCompare() as before.","code":"permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = FALSE, seed = 123456) #> Create matrix with permuted group labels ... Done. nPerm_all <- 100 blocksize <- 20 repetitions <- nPerm_all / blocksize for (i in 1:repetitions) { print(i) if (i == 1) { # Create a new file in the first run tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, append = FALSE) } else { tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, append = TRUE) } } #> [1] 1 #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Compute permutation associations ... 
#> Done. #> [1] 2 #> Compute permutation associations ... #> Done. #> [1] 3 #> Compute permutation associations ... #> Done. #> [1] 4 #> Compute permutation associations ... #> Done. #> [1] 5 #> Compute permutation associations ... #> Done. 
comp_amgut3 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, storeAssoPerm = TRUE, fileLoadAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... 
#> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal to the first comparison? all.equal(comp_amgut_orig$properties, comp_amgut3$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- fm.open(filenamebase = \"assoPerm_comp\" , readonly = TRUE) assoPerm3 <- fm.open(filenamebase = \"assoPerm\" , readonly = TRUE) all.equal(as.matrix(assoPerm1), as.matrix(assoPerm3)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm3)) #> [1] 5000 100 # Close files close(assoPerm1) close(assoPerm3)"},{"path":"https://netcomi.de/articles/create_asso_perm.html","id":"block-wise-execution-executable-in-parallel","dir":"Articles","previous_headings":"","what":"Block-wise execution (executable in parallel)","title":"Generate permuted association matrices","text":"If the blocks should be computed in parallel, extending the \"assoPerm\" file in each iteration would not work. To be able to run the blocks in parallel, we have to create a separate file in each iteration and combine the files at the end. 
In the last step, we pass the file containing the combined matrix to netCompare().","code":"# Create the matrix with permuted group labels (as before) permGroupMat <- createAssoPerm(props_amgut, nPerm = 100, computeAsso = FALSE, seed = 123456) #> Create matrix with permuted group labels ... Done. nPerm_all <- 100 blocksize <- 20 repetitions <- nPerm_all / blocksize # 5 repetitions # Execute as standard for-loop: for (i in 1:repetitions) { tmp <- createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = paste0(\"assoPerm\", i), storeCountsPerm = FALSE, append = FALSE) } #> Files 'assoPerm1.bmat and assoPerm1.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm2.bmat and assoPerm2.desc.txt created. #> Compute permutation associations ... 
#> Done. #> Files 'assoPerm3.bmat and assoPerm3.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm4.bmat and assoPerm4.desc.txt created. #> Compute permutation associations ... #> Done. #> Files 'assoPerm5.bmat and assoPerm5.desc.txt created. #> Compute permutation associations ... 
#> Done. # OR execute in parallel: library(\"foreach\") cores <- 2 # Please choose an appropriate number of cores cl <- parallel::makeCluster(cores) doSNOW::registerDoSNOW(cl) # Create progress bar: pb <- utils::txtProgressBar(0, repetitions, style=3) progress <- function(n) { utils::setTxtProgressBar(pb, n) } opts <- list(progress = progress) tmp <- foreach(i = 1:repetitions, .packages = c(\"NetCoMi\"), .options.snow = opts) %dopar% { progress(i) NetCoMi::createAssoPerm(props_amgut, nPerm = blocksize, permGroupMat = permGroupMat[(i-1) * blocksize + 1:blocksize, ], computeAsso = TRUE, fileStoreAssoPerm = paste0(\"assoPerm\", i), storeCountsPerm = FALSE, append = FALSE) } # Close progress bar close(pb) # Stop cluster parallel::stopCluster(cl) # Combine the matrices and 
store them into a new file (because netCompare() # needs an external file) assoPerm_all <- NULL for (i in 1:repetitions) { assoPerm_tmp <- fm.open(filenamebase = paste0(\"assoPerm\", i), readonly = TRUE) assoPerm_all <- rbind(assoPerm_all, as.matrix(assoPerm_tmp)) close(assoPerm_tmp) } dim(assoPerm_all) #> [1] 5000 100 # Combine the permutation association matrices fm.create.from.matrix(filenamebase = \"assoPerm\", mat = assoPerm_all) #> 5000 x 100 filematrix with 8 byte \"double\" elements comp_amgut4 <- netCompare(props_amgut, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, seed = 123456) #> Checking input arguments ... Done. #> Calculate network properties ... Done. #> Execute permutation tests ... #> | |   | 0% [...] | |======================================================================| 100% #> Done. #> Calculating p-values ... Done. #> Adjust for multiple testing using 'adaptBH' ... Done. # Are the network properties equal to those of the first comparison? 
all.equal(comp_amgut_orig$properties, comp_amgut4$properties) #> [1] TRUE # Open stored files and check whether they are equal assoPerm1 <- fm.open(filenamebase = \"assoPerm_comp\" , readonly = TRUE) assoPerm4 <- fm.open(filenamebase = \"assoPerm\" , readonly = TRUE) identical(as.matrix(assoPerm1), as.matrix(assoPerm4)) #> [1] TRUE dim(as.matrix(assoPerm1)) #> [1] 5000 100 dim(as.matrix(assoPerm4)) #> [1] 5000 100 # Close files close(assoPerm1) close(assoPerm4)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"network-construction","dir":"Articles","previous_headings":"","what":"Network construction","title":"Network comparison","text":"amgut data set split \"SEASONAL_ALLERGIES\" leading two subsets samples (without seasonal allergies). ignore “None” group. 50 nodes highest variance selected network construction get smaller networks. filter 121 samples (sample size smaller group) highest frequency make sample sizes equal thus ensure comparability. Alternatively, group vector passed group, according data set split two groups:","code":"data(\"amgut2.filt.phy\") # Split the phyloseq object into two groups amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") amgut_season_yes #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 121 samples ] #> sample_data() Sample Data: [ 121 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] amgut_season_no #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 163 samples ] #> sample_data() Sample Data: [ 163 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] n_yes <- phyloseq::nsamples(amgut_season_yes) # Network construction net_season <- netConstruct(data = amgut_season_no, data2 = amgut_season_yes, filtTax = \"highestVar\", filtTaxPar 
= list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda = 10, rep.num = 10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 2, seed = 123456) #> Checking input arguments ... Done. #> Data filtering ... #> 42 samples removed in data set 1. #> 0 samples removed in data set 2. #> 96 taxa removed in each data set. #> 1 rows with zero sum removed in group 2. #> 42 taxa and 121 samples remaining in group 1. #> 42 taxa and 120 samples remaining in group 2. #> #> Calculate 'spring' associations ... Registered S3 method overwritten by 'dendextend': #> method from #> rev.hclust vegan #> Registered S3 method overwritten by 'seriation': #> method from #> reorder.hclust vegan #> Done. #> #> Calculate associations in group 2 ... Done. # Get count table countMat <- phyloseq::otu_table(amgut2.filt.phy) # netConstruct() expects samples in rows countMat <- t(as(countMat, \"matrix\")) group_vec <- phyloseq::get_variable(amgut2.filt.phy, \"SEASONAL_ALLERGIES\") # Select the two groups of interest (level \"none\" is excluded) sel <- which(group_vec %in% c(\"no\", \"yes\")) group_vec <- group_vec[sel] countMat <- countMat[sel, ] net_season <- netConstruct(countMat, group = group_vec, filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), measure = \"spring\", measurePar = list(nlambda=10, rep.num=10, Rmethod = \"approx\"), normMethod = \"none\", zeroMethod = \"none\", sparsMethod = \"none\", dissFunc = \"signed\", verbose = 3, seed = 123456)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"network-analysis","dir":"Articles","previous_headings":"","what":"Network analysis","title":"Network comparison","text":"object returned netConstruct() containing networks passed netAnalyze(). Network properties computed networks simultaneously. 
demonstrate functionalities netAnalyze(), play around available arguments, even chosen setting might optimal. centrLCC = FALSE: Centralities calculated nodes (largest connected component). avDissIgnoreInf = TRUE: Nodes infinite dissimilarity ignored calculating average dissimilarity. sPathNorm = FALSE: Shortest paths normalized average dissimilarity. hubPar = c(\"degree\", \"eigenvector\"): Hubs nodes highest degree eigenvector centrality time. lnormFit = TRUE hubQuant = 0.9: log-normal distribution fitted centrality values identify nodes “highest” centrality values. , node identified hub three centrality measures, node’s centrality value 90% quantile fitted log-normal distribution. non-normalized centralities used four measures. Note! arguments must set carefully, depending research questions. NetCoMi’s default values generally preferable practical cases!","code":"props_season <- netAnalyze(net_season, centrLCC = FALSE, avDissIgnoreInf = TRUE, sPathNorm = FALSE, clustMethod = \"cluster_fast_greedy\", hubPar = c(\"degree\", \"eigenvector\"), hubQuant = 0.9, lnormFit = TRUE, normDeg = FALSE, normBetw = FALSE, normClose = FALSE, normEigen = FALSE) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was #> generated. 
summary(props_season) #> #> Component sizes #> ``````````````` #> group '1': #> size: 28 1 #> #: 1 14 #> group '2': #> size: 31 8 1 #> #: 1 1 3 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' #> Relative LCC size 0.66667 0.73810 #> Clustering coefficient 0.15161 0.27111 #> Modularity 0.62611 0.45823 #> Positive edge percentage 86.66667 100.00000 #> Edge density 0.07937 0.12473 #> Natural connectivity 0.04539 0.04362 #> Vertex connectivity 1.00000 1.00000 #> Edge connectivity 1.00000 1.00000 #> Average dissimilarity* 0.67251 0.68178 #> Average path length** 3.40008 1.86767 #> #> Whole network: #> group '1' group '2' #> Number of components 15.00000 5.00000 #> Clustering coefficient 0.15161 0.29755 #> Modularity 0.62611 0.55684 #> Positive edge percentage 86.66667 100.00000 #> Edge density 0.03484 0.08130 #> Natural connectivity 0.02826 0.03111 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Sum of dissimilarities along the path #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> group '1': #> name: 0 1 2 3 4 5 #> #: 14 7 6 5 4 6 #> #> group '2': #> name: 0 1 2 3 4 5 #> #: 3 5 14 4 8 8 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on log-normal quantiles of centralities #> ``````````````````````````````````````````````` #> group '1' group '2' #> 307981 322235 #> 363302 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the complete network #> ```````````````````````````````````` #> Degree (unnormalized): #> group '1' group '2' #> 307981 5 2 #> 9715 5 5 #> 364563 4 4 #> 259569 4 5 #> 322235 3 9 #> ______ ______ #> 322235 3 9 #> 363302 3 9 #> 158660 2 6 #> 188236 3 5 #> 259569 4 5 #> #> Betweenness centrality (unnormalized): #> group '1' group '2' #> 
307981 231 0 #> 331820 170 9 #> 158660 162 80 #> 188236 161 85 #> 322235 159 126 #> ______ ______ #> 322235 159 126 #> 363302 74 93 #> 188236 161 85 #> 158660 162 80 #> 326792 17 58 #> #> Closeness centrality (unnormalized): #> group '1' group '2' #> 307981 18.17276 7.80251 #> 9715 15.8134 9.27254 #> 188236 15.7949 23.24055 #> 301645 15.30177 9.01509 #> 364563 14.73566 21.21352 #> ______ ______ #> 322235 13.50232 26.36749 #> 363302 12.30297 24.19703 #> 158660 13.07106 23.31577 #> 188236 15.7949 23.24055 #> 326792 14.61391 22.52157 #> #> Eigenvector centrality (unnormalized): #> group '1' group '2' #> 307981 1 0.13142 #> 9715 0.83277 0.20513 #> 301645 0.78551 0.16298 #> 326792 0.50706 0.42082 #> 188236 0.48439 0.56626 #> ______ ______ #> 322235 0.03281 0.79487 #> 363302 0.06613 0.76293 #> 188236 0.48439 0.56626 #> 194648 0.00687 0.52039 #> 184983 0.172 0.49611"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"visual-network-comparison","dir":"Articles","previous_headings":"","what":"Visual network comparison","title":"Network comparison","text":"First, layout computed separately groups (qgraph’s “spring” layout case). Node sizes scaled according mclr-transformed data since SPRING uses mclr transformation normalization method. Node colors represent clusters. Note default, two clusters color groups least two nodes common (sameColThresh = 2). Set sameClustCol FALSE get different cluster colors. Using different layouts leads “nice-looking” network plot group, however, difficult identify group differences first glance. Thus, now use layout groups. following, layout computed group 1 (left network) taken group 2. rmSingles set \"inboth\" nodes unconnected groups can removed layout used. plot, can see clear differences groups. OTU “322235”, instance, strongly connected “Seasonal allergies” group group without seasonal allergies, hub right, left. 
However, layout one group simply taken , one networks (“seasonal allergies” group) usually nice-looking due long edges. Therefore, NetCoMi (>= 1.0.2) offers option (layoutGroup = \"union\"), union two layouts used groups. , nodes placed optimal possible equally networks. idea R code functionality provided Christian L. Müller Alice Sommer","code":"plot(props_season, sameLayout = FALSE, nodeColor = \"cluster\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.7, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, layoutGroup = 1, rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE) plot(props_season, sameLayout = TRUE, repulsion = 0.95, layoutGroup = \"union\", rmSingles = \"inboth\", nodeSize = \"mclr\", labelScale = FALSE, cexNodes = 1.5, cexLabels = 2.5, cexHubLabels = 3, cexTitle = 3.8, groupNames = c(\"No seasonal allergies\", \"Seasonal allergies\"), hubBorderCol = \"gray40\") legend(\"bottom\", title = \"estimated association:\", legend = c(\"+\",\"-\"), col = c(\"#009900\",\"red\"), inset = 0.02, cex = 4, lty = 1, lwd = 4, bty = \"n\", horiz = TRUE)"},{"path":"https://netcomi.de/articles/net_comparison.html","id":"quantitative-network-comparison","dir":"Articles","previous_headings":"","what":"Quantitative network comparison","title":"Network comparison","text":"Since runtime considerably increased 
permutation tests performed, set permTest parameter FALSE. See tutorial_createAssoPerm file network comparison including permutation tests. Since permutation tests still conducted Adjusted Rand Index, seed set reproducibility.","code":"comp_season <- netCompare(props_season, permTest = FALSE, verbose = FALSE, seed = 123456) summary(comp_season, groupNames = c(\"No allergies\", \"Allergies\"), showCentr = c(\"degree\", \"between\", \"closeness\"), numbNodes = 5) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = props_season, permTest = FALSE, verbose = FALSE, #> seed = 123456) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> No allergies Allergies difference #> Relative LCC size 0.667 0.738 0.071 #> Clustering coefficient 0.152 0.271 0.120 #> Modularity 0.626 0.458 0.168 #> Positive edge percentage 86.667 100.000 13.333 #> Edge density 0.079 0.125 0.045 #> Natural connectivity 0.045 0.044 0.002 #> Vertex connectivity 1.000 1.000 0.000 #> Edge connectivity 1.000 1.000 0.000 #> Average dissimilarity* 0.673 0.682 0.009 #> Average path length** 3.400 1.868 1.532 #> #> Whole network: #> No allergies Allergies difference #> Number of components 15.000 5.000 10.000 #> Clustering coefficient 0.152 0.298 0.146 #> Modularity 0.626 0.557 0.069 #> Positive edge percentage 86.667 100.000 13.333 #> Edge density 0.035 0.081 0.046 #> Natural connectivity 0.028 0.031 0.003 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Sum of dissimilarities along the path #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.556 0.957578 0.144846 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.231 0.322424 0.861268 #> eigenvec. centr. 
0.100 0.017593 * 0.996692 #> hub taxa 0.000 0.296296 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.232 0.355 #> p-value 0.000 0.000 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.577 1.863 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (unnormalized): #> No allergies Allergies abs.diff. #> 322235 3 9 6 #> 363302 3 9 6 #> 469709 0 4 4 #> 158660 2 6 4 #> 223059 0 4 4 #> #> Betweenness centrality (unnormalized): #> No allergies Allergies abs.diff. #> 307981 231 0 231 #> 331820 170 9 161 #> 259569 137 34 103 #> 158660 162 80 82 #> 184983 92 12 80 #> #> Closeness centrality (unnormalized): #> No allergies Allergies abs.diff. #> 469709 0 21.203 21.203 #> 541301 0 20.942 20.942 #> 181016 0 19.498 19.498 #> 361496 0 19.349 19.349 #> 223059 0 19.261 19.261 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1"},{"path":"https://netcomi.de/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Stefanie Peschel. Author, maintainer. Christian L. Müller. Contributor. Anne-Laure Boulesteix. Contributor. Erika von Mutius. Contributor. Martin Depner. 
Contributor.","code":""},{"path":"https://netcomi.de/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Stefanie Peschel, (2022). NetCoMi: Network Construction Comparison Microbiome Data. R package version 1.1.0. https://netcomi.de","code":"@Manual{, title = {{NetCoMi: Network Construction and Comparison for Microbiome Data}}, author = {Stefanie Peschel}, year = {2022}, note = {R package version 1.1.0}, url = {https://netcomi.de}, }"},{"path":"https://netcomi.de/index.html","id":"netcomi-","dir":"","previous_headings":"","what":"Network Construction and Comparison for Microbiome Data","title":"Network Construction and Comparison for Microbiome Data","text":"NetCoMi (Network Construction Comparison Microbiome Data) R package designed facilitate construction, analysis, comparison networks tailored microbial compositional data. implements comprehensive workflow introduced Peschel et al. (2020), guides users step network generation analysis strong emphasis reproducibility computational efficiency. NetCoMi, users can construct microbial association dissimilarity networks directly sequencing data, typically provided read count matrix. package includes broad selection methods handling zeros, normalizing data, computing associations microbial taxa, sparsifying resulting matrices. offering components modular format, NetCoMi allows users tailor workflow specific research needs, creating highly customizable microbial networks. package supports construction, analysis, visualization single network comparison two networks graphical quantitative approaches, including statistical testing. Additionally, NetCoMi offers capability constructing differential networks, differentially associated taxa connected. Exemplary network comparison using soil microbiome data (‘soilrep’ data phyloseq package). 
Microbial associations compared two experimental settings ‘warming’ ‘non-warming’ using layout groups.","code":""},{"path":[]},{"path":"https://netcomi.de/index.html","id":"methods-included-in-netcomi","dir":"","previous_headings":"","what":"Methods included in NetCoMi","title":"Network Construction and Comparison for Microbiome Data","text":"overview methods available network construction, together information implementation R: Association measures: Pearson coefficient (cor() stats package) Spearman coefficient (cor() stats package) Biweight Midcorrelation bicor() WGCNA package SparCC (sparcc() SpiecEasi package) CCLasso (R code GitHub) CCREPE (ccrepe package) SpiecEasi (SpiecEasi package) SPRING (SPRING package) gCoda (R code GitHub) propr (propr package) Dissimilarity measures: Euclidean distance (vegdist() vegan package) Bray-Curtis dissimilarity (vegdist() vegan package) Kullback-Leibler divergence (KLD) (KLD() LaplacesDemon package) Jeffrey divergence (code using KLD() LaplacesDemon package) Jensen-Shannon divergence (code using KLD() LaplacesDemon package) Compositional KLD (implementation following Martin-Fernández et al. (1999)) Aitchison distance (vegdist() clr() SpiecEasi package) Methods zero replacement: Add predefined pseudo count count table Replace zeros count table predefined pseudo count (ratios non-zero values preserved) Multiplicative replacement (multRepl zCompositions package) Modified EM alr-algorithm (lrEM zCompositions package) Bayesian-multiplicative replacement (cmultRepl zCompositions package) Normalization methods: Total Sum Scaling (TSS) (implementation) Cumulative Sum Scaling (CSS) (cumNormMat metagenomeSeq package) Common Sum Scaling (COM) (implementation) Rarefying (rrarefy vegan package) Variance Stabilizing Transformation (VST) (varianceStabilizingTransformation DESeq2 package) Centered log-ratio (clr) transformation (clr() SpiecEasi package) TSS, CSS, COM, VST, clr transformation described (Badri et al. 
2020).","code":""},{"path":"https://netcomi.de/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Network Construction and Comparison for Microbiome Data","text":"errors installation, please install missing dependencies manually. Packages optionally required certain settings installed together NetCoMi. can installed automatically using: installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":"# Required packages install.packages(\"devtools\") install.packages(\"BiocManager\") # Since two of NetCoMi's dependencies are only available on GitHub, # it is recommended to install them first: devtools::install_github(\"zdk123/SpiecEasi\") devtools::install_github(\"GraceYoon/SPRING\") # Install NetCoMi devtools::install_github(\"stefpeschel/NetCoMi\", repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories())) installNetCoMiPacks()"},{"path":"https://netcomi.de/index.html","id":"bioconda","dir":"","previous_headings":"","what":"Bioconda","title":"Network Construction and Comparison for Microbiome Data","text":"Thanks daydream-boost, NetCoMi can also installed conda bioconda channel ","code":"# You can install an individual environment firstly with # conda create -n NetCoMi # conda activate NetCoMi conda install -c bioconda -c conda-forge r-netcomi"},{"path":"https://netcomi.de/index.html","id":"development-version","dir":"","previous_headings":"","what":"Development version","title":"Network Construction and Comparison for Microbiome Data","text":"Everyone wants use new features included releases invited install NetCoMi’s development version: Please check NEWS document features implemented develop branch.","code":"devtools::install_github(\"stefpeschel/NetCoMi\", ref = \"develop\", repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories()))"},{"path":[]},{"path":"https://netcomi.de/readme.html","id":null,"dir":"","previous_headings":"","what":"NetCoMi 
","title":"NetCoMi ","text":"NetCoMi (Network Construction Comparison Microbiome Data) R package designed facilitate construction, analysis, comparison networks tailored microbial compositional data. implements comprehensive workflow introduced Peschel et al. (2020), guides users step network generation analysis strong emphasis reproducibility computational efficiency. NetCoMi, users can construct microbial association dissimilarity networks directly sequencing data, typically provided read count matrix. package includes broad selection methods handling zeros, normalizing data, computing associations microbial taxa, sparsifying resulting matrices. offering components modular format, NetCoMi allows users tailor workflow specific research needs, creating highly customizable microbial networks. package supports construction, analysis, visualization single network comparison two networks graphical quantitative approaches, including statistical testing. Additionally, NetCoMi offers capability constructing differential networks, differentially associated taxa connected. Exemplary network comparison using soil microbiome data (‘soilrep’ data phyloseq package). Microbial associations compared two experimental settings ‘warming’ ‘non-warming’ using layout groups.","code":""},{"path":"https://netcomi.de/readme.html","id":"website","dir":"","previous_headings":"","what":"Website","title":"NetCoMi ","text":"Please visit netcomi.de complete reference.","code":""},{"path":"https://netcomi.de/readme.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"NetCoMi ","text":"errors installation, please install missing dependencies manually. Packages optionally required certain settings installed together NetCoMi. 
can installed automatically using: installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":"# Required packages install.packages(\"devtools\") install.packages(\"BiocManager\") # Since two of NetCoMi's dependencies are only available on GitHub, # it is recommended to install them first: devtools::install_github(\"zdk123/SpiecEasi\") devtools::install_github(\"GraceYoon/SPRING\") # Install NetCoMi devtools::install_github(\"stefpeschel/NetCoMi\", repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories())) installNetCoMiPacks()"},{"path":"https://netcomi.de/readme.html","id":"bioconda","dir":"","previous_headings":"","what":"Bioconda","title":"NetCoMi ","text":"Thanks daydream-boost, NetCoMi can also installed conda bioconda channel ","code":"# You can install an individual environment firstly with # conda create -n NetCoMi # conda activate NetCoMi conda install -c bioconda -c conda-forge r-netcomi"},{"path":"https://netcomi.de/readme.html","id":"development-version","dir":"","previous_headings":"","what":"Development version","title":"NetCoMi ","text":"Everyone wants use new features included releases invited install NetCoMi’s development version: Please check NEWS document features implemented develop branch.","code":"devtools::install_github(\"stefpeschel/NetCoMi\", ref = \"develop\", repos = c(\"https://cloud.r-project.org/\", BiocManager::repositories()))"},{"path":[]},{"path":"https://netcomi.de/reference/NetCoMi-package.html","id":null,"dir":"Reference","previous_headings":"","what":"NetCoMi: Network Comparison for Microbial Compositional Data — NetCoMi-package","title":"NetCoMi: Network Comparison for Microbial Compositional Data — NetCoMi-package","text":"NetCoMi offers functions constructing, analyzing, comparing microbial association networks well dissimilarity-based networks compositional data. also includes function constructing differential association networks. 
main functions netConstruct, netAnalyze, netCompare, diffnet","code":""},{"path":[]},{"path":"https://netcomi.de/reference/NetCoMi-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"NetCoMi: Network Comparison for Microbial Compositional Data — NetCoMi-package","text":"Maintainer: Stefanie Peschel stefanie.peschel@mail.deAcknowledgments: Anne-Laure Boulesteix, Christian L. Müller, Martin Depner (inspiration theoretical background) Anastasiia Holovchak (package testing editing)","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":null,"dir":"Reference","previous_headings":"","what":"Graphlet Correlation Distance (GCD) — calcGCD","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"Computes Graphlet Correlation Distance (GCD) - graphlet-based distance measure - two networks. Following Yaveroglu et al. (2014), GCD defined Euclidean distance upper triangle values Graphlet Correlation Matrices (GCM) two networks, defined adjacency matrices. GCM network matrix Spearman's correlations network's node orbits (Hocevar Demsar, 2016). function considers orbits graphlets four nodes. Orbit counts determined using function count4 orca package. Unobserved orbits lead NAs correlation matrix, row pseudo counts 1 added orbit count matrices (ocount1 ocount2). function based R code provided Theresa Ullmann (https://orcid.org/0000-0003-1215-8561).","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"","code":"calcGCD(adja1, adja2, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1))"},{"path":"https://netcomi.de/reference/calcGCD.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"adja1, adja2 adjacency matrices (numeric) defining two networks GCD shall calculated. 
orbits: a numeric vector with integers between 0 and 14 defining the graphlet orbits to use for the GCD calculation. Minimum length is 2. Defaults to c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), thus excluding redundant orbits such as orbit o3. See details.","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"An object of class gcd containing the following elements:","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"By default, the 11 non-redundant orbits are used. They are grouped according to their role: orbit 0 represents the degree, orbits (2, 5, 7) represent nodes within a chain, orbits (8, 10, 11) represent nodes in a cycle, and orbits (6, 9, 4, 1) represent a terminal node.","code":""},{"path":"https://netcomi.de/reference/calcGCD.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"hocevar2016computationNetCoMi yaveroglu2014revealingNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/calcGCD.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Graphlet Correlation Distance (GCD) — calcGCD","text":"","code":"library(phyloseq) # Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") # Make sample sizes equal to ensure comparability n_yes <- phyloseq::nsamples(amgut_season_yes) ids_yes <- phyloseq::get_variable(amgut_season_no, \"X.SampleID\")[1:n_yes] amgut_season_no <- phyloseq::subset_samples(amgut_season_no, X.SampleID %in% ids_yes) #> Error in 
h(simpleError(msg, call)): error in evaluating the argument 'table' in selecting a method for function '%in%': object 'ids_yes' not found # Network construction net <- netConstruct(amgut_season_yes, amgut_season_no, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"thresh\", thresh = 0.5) #> Checking input arguments ... #> Done. #> Data filtering ... #> 94 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 1 rows with zero sum removed in group 2. #> 44 taxa and 120 samples remaining in group 1. #> 44 taxa and 162 samples remaining in group 2. #> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. #> #> Sparsify associations in group 2 ... #> Done. # Get adjacency matrices adja1 <- net$adjaMat1 adja2 <- net$adjaMat2 # Network visualization props <- netAnalyze(net) #> Warning: The `scale` argument of `eigen_centrality()` always as if TRUE as of igraph #> 2.1.1. #> ℹ Normalization is always performed #> ℹ The deprecated feature was likely used in the NetCoMi package. #> Please report the issue at . 
plot(props, rmSingles = TRUE, cexLabels = 1.7) # Calculate the GCD gcd <- calcGCD(adja1, adja2) gcd #> GCD: 2.40613 # Orbit counts head(gcd$ocount1) #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 2 0 0 0 0 3 0 0 3 0 3 #> 71543 3 0 0 0 0 4 0 0 1 0 2 #> 331820 0 0 0 0 0 0 0 0 0 0 0 #> 322235 2 1 1 0 0 0 0 0 0 0 1 #> 469709 2 1 1 0 0 0 0 0 0 0 1 #> 73352 0 0 0 0 0 0 0 0 0 0 0 head(gcd$ocount2) #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 2 0 0 0 0 3 0 3 0 0 3 #> 71543 1 0 0 0 0 0 0 5 1 0 4 #> 331820 0 0 0 0 0 0 0 0 0 0 0 #> 322235 0 0 0 0 0 0 0 0 0 0 0 #> 469709 0 0 0 0 0 0 0 0 0 0 0 #> 73352 0 0 0 0 0 0 0 0 0 0 0 # GCMs gcd$gcm1 #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.45431964 0.3404901 0.1339970 0.1339970 0.60502382 0.3091818 #> O2 0.4543196 1.00000000 0.8341060 0.4704992 0.4704992 0.07685639 0.7072981 #> O5 0.3404901 0.83410601 1.0000000 0.5640761 0.5640761 0.12776023 0.3649530 #> O7 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O8 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O10 0.6050238 0.07685639 0.1277602 0.3341264 0.3341264 1.00000000 0.1905216 #> O11 0.3091818 0.70729807 0.3649530 0.6825925 0.6825925 0.19052158 1.0000000 #> O6 0.1339970 0.47049925 0.5640761 1.0000000 1.0000000 0.33412645 0.6825925 #> O9 0.5960912 0.09182761 0.1452644 0.3638144 0.3638144 0.99263054 0.2112551 #> O4 0.2375512 0.22242827 0.2857143 0.5640761 0.5640761 0.12776023 0.3649530 #> O1 0.7481584 0.31969180 0.4248120 0.2396263 0.2396263 0.79591010 0.1091601 #> O6 O9 O4 O1 #> O0 0.1339970 0.59609122 0.2375512 0.7481584 #> O2 0.4704992 0.09182761 0.2224283 0.3196918 #> O5 0.5640761 0.14526441 0.2857143 0.4248120 #> O7 1.0000000 0.36381438 0.5640761 0.2396263 #> O8 1.0000000 0.36381438 0.5640761 0.2396263 #> O10 0.3341264 0.99263054 0.1277602 0.7959101 #> O11 0.6825925 0.21125512 0.3649530 0.1091601 #> O6 1.0000000 0.36381438 0.5640761 0.2396263 #> O9 0.3638144 1.00000000 0.1452644 0.7991260 #> O4 0.5640761 0.14526441 
1.0000000 0.4248120 #> O1 0.2396263 0.79912603 0.4248120 1.0000000 gcd$gcm2 #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.50195290 0.2172958 0.3929341 0.2172958 0.4845458 0.3929341 #> O2 0.5019529 1.00000000 0.5503546 0.8165505 0.5503546 0.2619803 0.8165505 #> O5 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O7 0.3929341 0.81655052 0.6825925 1.0000000 0.6825925 0.3459888 1.0000000 #> O8 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O10 0.4845458 0.26198027 0.5369313 0.3459888 0.5369313 1.0000000 0.3459888 #> O11 0.3929341 0.81655052 0.6825925 1.0000000 0.6825925 0.3459888 1.0000000 #> O6 0.6318618 0.12253339 0.3341264 0.1905216 0.3341264 0.6276289 0.1905216 #> O9 0.4502107 0.22249134 0.4826536 0.3030534 0.4826536 0.2155385 0.3030534 #> O4 0.2172958 0.55035458 1.0000000 0.6825925 1.0000000 0.5369313 0.6825925 #> O1 0.7296858 0.07776854 0.2788342 0.1440064 0.2788342 0.5466671 0.1440064 #> O6 O9 O4 O1 #> O0 0.6318618 0.4502107 0.2172958 0.72968585 #> O2 0.1225334 0.2224913 0.5503546 0.07776854 #> O5 0.3341264 0.4826536 1.0000000 0.27883424 #> O7 0.1905216 0.3030534 0.6825925 0.14400642 #> O8 0.3341264 0.4826536 1.0000000 0.27883424 #> O10 0.6276289 0.2155385 0.5369313 0.54666706 #> O11 0.1905216 0.3030534 0.6825925 0.14400642 #> O6 1.0000000 0.8144348 0.3341264 0.87997622 #> O9 0.8144348 1.0000000 0.4826536 0.71311180 #> O4 0.3341264 0.4826536 1.0000000 0.27883424 #> O1 0.8799762 0.7131118 0.2788342 1.00000000 # Test Graphlet Correlations for significant differences gcmtest <- testGCM(gcd) #> Perform Student's t-test for GCM1 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.2 #> Done. #> #> Perform Student's t-test for GCM2 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.02 #> Done. #> #> Test GCM1 and GCM2 for differences ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.58 #> Done. 
### Plot heatmaps # GCM 1 (with significance code in the lower triangle) plotHeat(gcmtest$gcm1, pmat = gcmtest$pAdjust1, type = \"mixed\") # GCM 2 (with significance code in the lower triangle) plotHeat(gcmtest$gcm2, pmat = gcmtest$pAdjust2, type = \"mixed\") # Difference GCM1-GCM2 (with p-values in the lower triangle) plotHeat(gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"mixed\", textLow = \"pmat\")"},{"path":"https://netcomi.de/reference/calcGCM.html","id":null,"dir":"Reference","previous_headings":"","what":"Graphlet Correlation Matrix (GCM) — calcGCM","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"Computes the Graphlet Correlation Matrix (GCM) of a network, given its adjacency matrix. The GCM of a network is the matrix of Spearman's correlations between the network's node orbits (Hocevar and Demsar, 2016; Yaveroglu et al., 2014). The function considers orbits of graphlets with up to four nodes. Orbit counts are determined using the function count4 from the orca package. Since unobserved orbits would lead to NAs in the correlation matrix, a row of pseudo counts of 1 is added to the orbit count matrix (ocount). The function is based on R code provided by Theresa Ullmann (https://orcid.org/0000-0003-1215-8561).","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"","code":"calcGCM(adja, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1))"},{"path":"https://netcomi.de/reference/calcGCM.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"adja: an adjacency matrix (numeric) defining the network for which the GCM is calculated. orbits: a numeric vector with integers between 0 and 14 defining the graphlet orbits to use for the GCM calculation. Minimum length is 2. 
Defaults to c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), thus excluding redundant orbits such as orbit o3.","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"A list with the following elements:","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"By default, the 11 non-redundant orbits are used. They are grouped according to their role: orbit 0 represents the degree, orbits (2, 5, 7) represent nodes within a chain, orbits (8, 10, 11) represent nodes in a cycle, and orbits (6, 9, 4, 1) represent a terminal node.","code":""},{"path":"https://netcomi.de/reference/calcGCM.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"hocevar2016computationNetCoMi yaveroglu2014revealingNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/calcGCM.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Graphlet Correlation Matrix (GCM) — calcGCM","text":"","code":"# Load data set from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction net <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"thresh\", thresh = 0.5) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Get adjacency matrices adja <- net$adjaMat1 # Network visualization props <- netAnalyze(net) plot(props, rmSingles = TRUE, cexLabels = 1.7) # Calculate Graphlet Correlation Matrix (GCM) gcm <- calcGCM(adja) gcm #> $gcm #> O0 O2 O5 O7 O8 O10 O11 #> O0 1.0000000 0.39267906 0.1582768 0.3242392 0.1582768 0.6766859 0.3242392 #> O2 0.3926791 1.00000000 0.5536742 0.8165382 0.5536742 0.1095796 0.8165382 #> O5 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O7 0.3242392 0.81653824 0.6855771 1.0000000 0.6855771 0.1730876 1.0000000 #> O8 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O10 0.6766859 0.10957964 0.3058042 0.1730876 0.3058042 1.0000000 0.1730876 #> O11 0.3242392 0.81653824 0.6855771 1.0000000 0.6855771 0.1730876 1.0000000 #> O6 0.6953513 0.08571015 0.2737340 0.1473592 0.2737340 0.9020241 0.1473592 #> O9 0.6953513 0.08571015 0.2737340 0.1473592 0.2737340 0.9020241 0.1473592 #> O4 0.1582768 0.55367418 1.0000000 0.6855771 1.0000000 0.3058042 0.6855771 #> O1 0.7524380 0.05372207 0.2360482 0.1147146 0.2360482 0.8186652 0.1147146 #> O6 O9 O4 O1 #> O0 0.69535132 0.69535132 0.1582768 0.75243796 #> O2 0.08571015 0.08571015 0.5536742 0.05372207 #> O5 0.27373396 0.27373396 1.0000000 0.23604818 #> O7 0.14735917 0.14735917 0.6855771 0.11471459 #> O8 0.27373396 0.27373396 1.0000000 0.23604818 #> O10 0.90202408 0.90202408 0.3058042 0.81866523 #> O11 0.14735917 0.14735917 0.6855771 0.11471459 #> O6 1.00000000 1.00000000 0.2737340 0.90849770 #> O9 1.00000000 1.00000000 0.2737340 0.90849770 #> O4 0.27373396 0.27373396 1.0000000 0.23604818 #> O1 0.90849770 0.90849770 0.2360482 1.00000000 #> #> $ocount #> O0 O2 O5 O7 O8 O10 O11 O6 O9 O4 O1 #> 307981 3 0 0 0 0 8 0 3 3 0 4 #> 331820 1 0 0 0 0 0 0 0 0 0 1 #> 73352 0 0 0 0 0 0 0 0 0 0 0 #> 322235 1 0 0 0 0 0 0 0 0 0 0 #> 71543 3 0 0 0 0 8 0 3 3 0 4 #> 469709 1 0 0 0 0 0 0 0 0 0 1 #> 158660 1 0 0 0 0 0 0 0 0 0 0 #> 512309 1 0 0 0 0 0 0 9 6 0 6 #> 188236 0 0 0 0 0 0 0 0 0 0 0 #> 248140 
0 0 0 0 0 0 0 0 0 0 0 #> 364563 0 0 0 0 0 0 0 0 0 0 0 #> 278234 0 0 0 0 0 0 0 0 0 0 0 #> 353985 0 0 0 0 0 0 0 0 0 0 0 #> 301645 3 0 0 0 0 8 0 3 3 0 4 #> 361496 0 0 0 0 0 0 0 0 0 0 0 #> 90487 0 0 0 0 0 0 0 0 0 0 0 #> 190597 0 0 0 0 0 0 0 0 0 0 0 #> 259569 0 0 0 0 0 0 0 0 0 0 0 #> 326792 0 0 0 0 0 0 0 0 0 0 0 #> 541301 0 0 0 0 0 0 0 0 0 0 0 #> 305760 3 0 0 0 0 8 0 3 3 0 4 #> 184983 0 0 0 0 0 0 0 0 0 0 0 #> 549871 0 0 0 0 0 0 0 0 0 0 0 #> 127309 0 0 0 0 0 0 0 0 0 0 0 #> 326977 0 0 0 0 0 0 0 0 0 0 0 #> 181095 0 0 0 0 0 0 0 0 0 0 0 #> 130663 0 0 0 0 0 0 0 0 0 0 0 #> 244304 0 0 0 0 0 0 0 0 0 0 0 #> 311477 0 0 0 0 0 0 0 0 0 0 0 #> 516022 0 0 0 0 0 0 0 0 0 0 0 #> 274244 0 0 0 0 0 0 0 0 0 0 0 #> 590083 0 0 0 0 0 0 0 0 0 0 0 #> 191541 0 0 0 0 0 0 0 0 0 0 0 #> 181016 0 0 0 0 0 0 0 0 0 0 0 #> 9715 7 15 0 9 0 0 24 0 0 0 0 #> 9753 3 0 0 0 0 8 0 3 3 0 4 #> 190464 0 0 0 0 0 0 0 0 0 0 0 #> 195102 0 0 0 0 0 0 0 0 0 0 0 #> 268332 2 1 0 0 0 0 0 0 0 0 0 #> 361480 0 0 0 0 0 0 0 0 0 0 0 #> 470973 0 0 0 0 0 0 0 0 0 0 0 #> 223059 0 0 0 0 0 0 0 0 0 0 0 #> 334393 1 0 0 0 0 0 0 0 0 0 0 #> 288134 0 0 0 0 0 0 0 0 0 0 0 #> 119010 3 0 0 0 0 8 0 3 3 0 4 #> 194648 0 0 0 0 0 0 0 0 0 0 0 #> 302160 0 0 0 0 0 0 0 0 0 0 0 #> 199487 0 0 0 0 0 0 0 0 0 0 0 #> 175617 1 0 0 0 0 0 0 0 0 0 0 #> 312461 0 0 0 0 0 0 0 0 0 0 0 #> pseudo 1 1 1 1 1 1 1 1 1 1 1 #> #> attr(,\"class\") #> [1] \"GCM\" # Plot heatmap of the GCM plotHeat(gcm$gcm)"},{"path":"https://netcomi.de/reference/cclasso.html","id":null,"dir":"Reference","previous_headings":"","what":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"Implementation CCLasso approach (Fang et al., 2015), published GitHub (Fang, 2016). 
The function is extended by a progress message.","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"","code":"cclasso( x, counts = F, pseudo = 0.5, sig = NULL, lams = 10^(seq(0, -8, by = -0.01)), K = 3, kmax = 5000, verbose = TRUE )"},{"path":"https://netcomi.de/reference/cclasso.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"x: a numeric matrix (n x p) with samples in rows and OTUs/taxa in columns. counts: logical indicating whether x contains counts or fractions. Defaults to FALSE, meaning x contains fractions whose rows sum up to 1. pseudo: numeric value giving a pseudo count, which is added to the counts if counts = TRUE. Default is 0.5. sig: numeric matrix giving an initial covariance matrix. If NULL (default), diag(rep(1, p)) is used. lams: numeric vector specifying the tuning parameter sequence. Default is 10^(seq(0, -8, by = -0.01)). K: numeric value (integer) giving the number of folds for crossvalidation. Defaults to 3. kmax: numeric value (integer) specifying the maximum number of iterations for the augmented Lagrangian method. Default is 5000. 
verbose: logical indicating whether a progress indicator is shown (TRUE by default).","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"A list containing the following elements:","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"fang2015cclassoNetCoMi fang2016cclassoGithubNetCoMi","code":""},{"path":"https://netcomi.de/reference/cclasso.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"CCLasso: Correlation inference of Composition data through Lasso method — cclasso","text":"Fang Huaying, Peking University (R code), and Stefanie Peschel (documentation)","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":null,"dir":"Reference","previous_headings":"","what":"Adding transparency to a color — colToTransp","title":"Adding transparency to a color — colToTransp","text":"Adds transparency to a color.","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Adding transparency to a color — colToTransp","text":"","code":"colToTransp(col, percent = 50)"},{"path":"https://netcomi.de/reference/colToTransp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Adding transparency to a color — colToTransp","text":"col: a color vector, specified similarly to the col argument of col2rgb. percent: numeric value between 0 and 100 giving the level of transparency. 
Defaults to 50.","code":""},{"path":"https://netcomi.de/reference/colToTransp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Adding transparency to a color — colToTransp","text":"","code":"# Accepts hexadecimal strings, written colors, or numbers as input colToTransp(\"#FF0000FF\", 50) #> [1] \"#FF00007F\" colToTransp(\"black\", 50) #> [1] \"#0000007F\" colToTransp(2) #> [1] \"#DF536B7F\" # Different shades of red r80 <- colToTransp(\"red\", 80) r50 <- colToTransp(\"red\", 50) r20 <- colToTransp(\"red\", 20) barplot(rep(5, 4), col = c(\"red\", r20, r50, r80), names.arg = 1:4) # Vector as input rain_transp <- colToTransp(rainbow(5), 50) barplot(rep(5, 5), col = rain_transp, names.arg = 1:5)"},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":null,"dir":"Reference","previous_headings":"","what":"Create and store association matrices for permuted data — createAssoPerm","title":"Create and store association matrices for permuted data — createAssoPerm","text":"The function creates and returns a matrix with permuted group labels and saves association matrices computed for the permuted data to an external file.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create and store association matrices for permuted data — createAssoPerm","text":"","code":"createAssoPerm( x, computeAsso = TRUE, nPerm = 1000L, cores = 1L, seed = NULL, permGroupMat = NULL, fileStoreAssoPerm = \"assoPerm\", append = TRUE, storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), logFile = NULL, verbose = TRUE )"},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create and store association matrices for permuted data — createAssoPerm","text":"x: an object of class \"microNet\" or \"microNetProps\" (returned by netConstruct or netAnalyze). 
computeAsso: logical indicating whether the association matrices should be computed. If FALSE, only the permuted group labels are computed and returned. nPerm: integer indicating the number of permutations. cores: integer indicating the number of CPU cores used for the permutation tests. If cores > 1, the tests are performed in parallel. The value is limited by the number of available CPU cores determined by detectCores. Defaults to 1L (no parallelization). seed: integer giving a seed for reproducibility of the results. permGroupMat: an optional matrix with permuted group labels (with nPerm rows and n1+n2 columns). fileStoreAssoPerm: character giving the name of the file to which the matrix with associations/dissimilarities of the permuted data is saved. Can also be a path. append: logical indicating whether existing files (given by fileStoreAssoPerm and fileStoreCountsPerm) should be extended. If TRUE, a new file is only created if none exists. If FALSE, a new file is created in any case. storeCountsPerm: logical indicating whether the permuted count matrices should be saved to an external file. Defaults to FALSE. Ignored if fileLoadCountsPerm is not NULL. fileStoreCountsPerm: character vector with two elements giving the names of the two files storing the permuted count matrices belonging to the two groups. logFile: character string naming the log file to which the current iteration number is written. Defaults to NULL, so that no log file is generated. verbose: logical. 
If TRUE (default), status messages are shown.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create and store association matrices for permuted data — createAssoPerm","text":"An invisible object: the matrix with permuted group labels.","code":""},{"path":"https://netcomi.de/reference/createAssoPerm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create and store association matrices for permuted data — createAssoPerm","text":"","code":"# \donttest{ # Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Generate a random group vector set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Network construction: amgut_net <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 97 taxa removed. #> 30 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. 
# Network analysis: amgut_props <- netAnalyze(amgut_net, clustMethod = \"cluster_fast_greedy\") # Use 'createAssoPerm' to create \"permuted\" count and association matrices, # which can be reused by netCompare() and diffNet() # Note: # createAssoPerm() accepts objects 'amgut_net' and 'amgut_props' as input createAssoPerm(amgut_props, nPerm = 100L, computeAsso = TRUE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), append = FALSE, seed = 123456) #> Create matrix with permuted group labels ... #> Done. #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Files 'countsPerm1.bmat, countsPerm1.desc.txt, countsPerm2.bmat, and countsPerm2.desc.txt created. #> Compute permutation associations ... #> | | | 0% #> Loading required package: dynamicTreeCut #> Loading required package: fastcluster #> #> Attaching package: ‘fastcluster’ #> The following object is masked from ‘package:stats’: #> #> hclust #> #> Attaching package: ‘WGCNA’ #> The following object is masked from ‘package:stats’: #> #> cor #> Loading required package: permute #> Loading required package: lattice #> This is vegan 2.6-8 #> #> Attaching package: ‘LaplacesDemon’ #> The following object is masked from ‘package:permute’: #> #> Blocks #> Loading required package: S4Vectors #> Loading required package: stats4 #> Loading required package: BiocGenerics #> #> Attaching package: ‘BiocGenerics’ #> The following objects are masked from ‘package:stats’: #> #> IQR, mad, sd, var, xtabs #> The following objects are masked from ‘package:base’: #> #> Filter, Find, Map, Position, Reduce, anyDuplicated, aperm, append, #> as.data.frame, basename, cbind, colnames, dirname, do.call, #> duplicated, eval, evalq, get, grep, grepl, intersect, is.unsorted, #> lapply, mapply, match, mget, order, paste, pmax, pmax.int, pmin, #> pmin.int, rank, rbind, rownames, sapply, saveRDS, setdiff, table, #> tapply, union, unique, unsplit, which.max, which.min #> #> 
Attaching package: ‘S4Vectors’ #> The following object is masked from ‘package:utils’: #> #> findMatches #> The following objects are masked from ‘package:base’: #> #> I, expand.grid, unname #> Loading required package: IRanges #> #> Attaching package: ‘IRanges’ #> The following object is masked from ‘package:phyloseq’: #> #> distance #> Loading required package: GenomicRanges #> Loading required package: GenomeInfoDb #> Loading required package: SummarizedExperiment #> Loading required package: MatrixGenerics #> Loading required package: matrixStats #> #> Attaching package: ‘MatrixGenerics’ #> The following objects are masked from ‘package:matrixStats’: #> #> colAlls, colAnyNAs, colAnys, colAvgsPerRowSet, colCollapse, #> colCounts, colCummaxs, colCummins, colCumprods, colCumsums, #> colDiffs, colIQRDiffs, colIQRs, colLogSumExps, colMadDiffs, #> colMads, colMaxs, colMeans2, colMedians, colMins, colOrderStats, #> colProds, colQuantiles, colRanges, colRanks, colSdDiffs, colSds, #> colSums2, colTabulates, colVarDiffs, colVars, colWeightedMads, #> colWeightedMeans, colWeightedMedians, colWeightedSds, #> colWeightedVars, rowAlls, rowAnyNAs, rowAnys, rowAvgsPerColSet, #> rowCollapse, rowCounts, rowCummaxs, rowCummins, rowCumprods, #> rowCumsums, rowDiffs, rowIQRDiffs, rowIQRs, rowLogSumExps, #> rowMadDiffs, rowMads, rowMaxs, rowMeans2, rowMedians, rowMins, #> rowOrderStats, rowProds, rowQuantiles, rowRanges, rowRanks, #> rowSdDiffs, rowSds, rowSums2, rowTabulates, rowVarDiffs, rowVars, #> rowWeightedMads, rowWeightedMeans, rowWeightedMedians, #> rowWeightedSds, rowWeightedVars #> Loading required package: Biobase #> Welcome to Bioconductor #> #> Vignettes contain introductory material; view with #> 'browseVignettes()'. To cite Bioconductor, see #> 'citation(\"Biobase\")', and for packages 'citation(\"pkgname\")'. 
#> #> Attaching package: ‘Biobase’ #> The following object is masked from ‘package:MatrixGenerics’: #> #> rowMedians #> The following objects are masked from ‘package:matrixStats’: #> #> anyMissing, rowMedians #> The following object is masked from ‘package:phyloseq’: #> #> sampleNames 
#> Done. # Run netcompare using the stored permutation count matrices # (association matrices are still computed within netCompare): amgut_comp1 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... 
|============================================ | 63% | |============================================= | 64% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 67% | |================================================ | 68% | |================================================ | 69% | |================================================= | 70% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 73% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 76% | |====================================================== | 77% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 80% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 83% | |=========================================================== | 84% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 87% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 90% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 
93% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 97% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. # Run netcompare using the stored permutation association matrices: amgut_comp2 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, fileLoadAssoPerm = \"assoPerm\") #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... #> | | | 0% | |= | 1% | |= | 2% | |== | 3% | |=== | 4% | |==== | 5% | |==== | 6% | |===== | 7% | |====== | 8% | |====== | 9% | |======= | 10% | |======== | 11% | |======== | 12% | |========= | 13% | |========== | 14% | |========== | 15% | |=========== | 16% | |============ | 17% | |============= | 18% | |============= | 19% | |============== | 20% | |=============== | 21% | |=============== | 22% | |================ | 23% | |================= | 24% | |================== | 25% | |================== | 26% | |=================== | 27% | |==================== | 28% | |==================== | 29% | |===================== | 30% | |====================== | 31% | |====================== | 32% | |======================= | 33% | |======================== | 34% | |======================== | 35% | |========================= | 36% | |========================== | 37% | |=========================== | 38% | |=========================== | 39% | |============================ | 40% | |============================= | 41% | |============================= | 42% 
| |============================== | 43% | |=============================== | 44% | |================================ | 45% | |================================ | 46% | |================================= | 47% | |================================== | 48% | |================================== | 49% | |=================================== | 50% | |==================================== | 51% | |==================================== | 52% | |===================================== | 53% | |====================================== | 54% | |====================================== | 55% | |======================================= | 56% | |======================================== | 57% | |========================================= | 58% | |========================================= | 59% | |========================================== | 60% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 63% | |============================================= | 64% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 67% | |================================================ | 68% | |================================================ | 69% | |================================================= | 70% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 73% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 76% | |====================================================== | 77% | |======================================================= | 78% | |======================================================= | 79% | 
|======================================================== | 80% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 83% | |=========================================================== | 84% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 87% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 90% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 93% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 97% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. 
summary(amgut_comp1) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, seed = 123456, #> fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\")) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff. p-value #> Relative LCC size 0.900 0.967 0.067 0.584158 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.584158 #> Positive edge percentage 43.023 32.432 10.591 0.039604 * #> Edge density 0.245 0.182 0.063 0.346535 #> Natural connectivity 0.062 0.052 0.010 0.198020 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.927 0.949 0.021 0.217822 #> Average path length** 1.624 1.819 0.195 0.435644 #> #> Whole network: #> group '1' group '2' abs.diff. p-value #> Number of components 4.000 2.000 2.000 0.544554 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.594059 #> Positive edge percentage 43.023 32.432 10.591 0.029703 * #> Edge density 0.198 0.170 0.028 0.752475 #> Natural connectivity 0.054 0.049 0.004 0.613861 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.333 0.631521 0.606925 #> betweenness centr. 0.333 0.631521 0.606925 #> closeness centr. 0.600 0.980338 0.076564 . #> eigenvec. centr. 0.778 0.999035 0.008281 ** #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. 
clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.095 0.095 #> p-value 0.047 0.064 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.421000 0.945000 #> p-value 0.990099 0.811881 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 181095 0.207 0.000 0.207 0.965347 #> 158660 0.414 0.241 0.172 0.965347 #> 301645 0.414 0.241 0.172 0.965347 #> 130663 0.069 0.207 0.138 0.965347 #> 331820 0.241 0.103 0.138 0.965347 #> 326977 0.207 0.069 0.138 0.965347 #> 364563 0.241 0.345 0.103 0.965347 #> 322235 0.310 0.207 0.103 0.965347 #> 353985 0.138 0.034 0.103 0.965347 #> 470973 0.138 0.034 0.103 0.965347 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 130663 0.000 0.169 0.169 0.878790 #> 512309 0.000 0.124 0.124 0.878790 #> 181095 0.105 0.000 0.105 0.292930 #> 326792 0.098 0.000 0.098 0.878790 #> 326977 0.086 0.000 0.086 0.894207 #> 331820 0.000 0.071 0.071 0.894207 #> 248140 0.095 0.161 0.066 0.894207 #> 188236 0.062 0.124 0.063 0.894207 #> 361496 0.000 0.058 0.058 0.894207 #> 9753 0.258 0.206 0.052 0.894207 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 181095 0.733 0.000 0.733 0.933522 #> 259569 0.000 0.573 0.573 0.933522 #> 127309 0.000 0.443 0.443 0.933522 #> 549871 0.000 0.394 0.394 0.933522 #> 470973 0.695 0.480 0.216 0.933522 #> 331820 0.764 0.568 0.197 0.933522 #> 353985 0.717 0.530 0.187 0.933522 #> 541301 0.621 0.448 0.173 0.933522 #> 158660 0.935 0.777 0.157 0.933522 #> 244304 0.631 0.486 0.145 0.933522 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 331820 0.504 0.101 0.402 0.866062 #> 301645 1.000 0.698 0.302 0.283710 #> 158660 0.639 0.352 0.287 0.866062 #> 181095 0.242 0.000 0.242 0.496493 #> 307981 0.993 0.762 0.231 0.496493 #> 364563 0.564 0.762 0.198 0.866062 #> 326977 0.354 0.159 0.196 0.866062 #> 353985 0.246 0.063 0.183 0.866062 #> 188236 0.621 0.763 0.142 0.866062 #> 259569 0.000 0.132 0.132 0.866062 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 summary(amgut_comp2) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, fileLoadAssoPerm = \"assoPerm\") #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> group '1' group '2' abs.diff. p-value #> Relative LCC size 0.900 0.967 0.067 0.584158 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.584158 #> Positive edge percentage 43.023 32.432 10.591 0.039604 * #> Edge density 0.245 0.182 0.063 0.346535 #> Natural connectivity 0.062 0.052 0.010 0.198020 #> Vertex connectivity 1.000 1.000 0.000 1.000000 #> Edge connectivity 1.000 1.000 0.000 1.000000 #> Average dissimilarity* 0.927 0.949 0.021 0.217822 #> Average path length** 1.624 1.819 0.195 0.435644 #> #> Whole network: #> group '1' group '2' abs.diff. 
p-value #> Number of components 4.000 2.000 2.000 0.544554 #> Clustering coefficient 0.522 0.394 0.128 0.277228 #> Modularity 0.269 0.209 0.060 0.594059 #> Positive edge percentage 43.023 32.432 10.591 0.029703 * #> Edge density 0.198 0.170 0.028 0.752475 #> Natural connectivity 0.054 0.049 0.004 0.613861 #> ----- #> p-values: one-tailed test with null hypothesis diff=0 #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.333 0.631521 0.606925 #> betweenness centr. 0.333 0.631521 0.606925 #> closeness centr. 0.600 0.980338 0.076564 . #> eigenvec. centr. 0.778 0.999035 0.008281 ** #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.095 0.095 #> p-value 0.051 0.057 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 0.421000 0.945000 #> p-value 0.990099 0.811881 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 181095 0.207 0.000 0.207 0.965347 #> 158660 0.414 0.241 0.172 0.965347 #> 301645 0.414 0.241 0.172 0.965347 #> 130663 0.069 0.207 0.138 0.965347 #> 331820 0.241 0.103 0.138 0.965347 #> 326977 0.207 0.069 0.138 0.965347 #> 364563 0.241 0.345 0.103 0.965347 #> 322235 0.310 0.207 0.103 0.965347 #> 353985 0.138 0.034 0.103 0.965347 #> 470973 0.138 0.034 0.103 0.965347 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 130663 0.000 0.169 0.169 0.878790 #> 512309 0.000 0.124 0.124 0.878790 #> 181095 0.105 0.000 0.105 0.292930 #> 326792 0.098 0.000 0.098 0.878790 #> 326977 0.086 0.000 0.086 0.894207 #> 331820 0.000 0.071 0.071 0.894207 #> 248140 0.095 0.161 0.066 0.894207 #> 188236 0.062 0.124 0.063 0.894207 #> 361496 0.000 0.058 0.058 0.894207 #> 9753 0.258 0.206 0.052 0.894207 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 181095 0.733 0.000 0.733 0.933522 #> 259569 0.000 0.573 0.573 0.933522 #> 127309 0.000 0.443 0.443 0.933522 #> 549871 0.000 0.394 0.394 0.933522 #> 470973 0.695 0.480 0.216 0.933522 #> 331820 0.764 0.568 0.197 0.933522 #> 353985 0.717 0.530 0.187 0.933522 #> 541301 0.621 0.448 0.173 0.933522 #> 158660 0.935 0.777 0.157 0.933522 #> 244304 0.631 0.486 0.145 0.933522 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 331820 0.504 0.101 0.402 0.866062 #> 301645 1.000 0.698 0.302 0.283710 #> 158660 0.639 0.352 0.287 0.866062 #> 181095 0.242 0.000 0.242 0.496493 #> 307981 0.993 0.762 0.231 0.496493 #> 364563 0.564 0.762 0.198 0.866062 #> 326977 0.354 0.159 0.196 0.866062 #> 353985 0.246 0.063 0.183 0.866062 #> 188236 0.621 0.763 0.142 0.866062 #> 259569 0.000 0.132 0.132 0.866062 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 all.equal(amgut_comp1$properties, amgut_comp2$properties) #> [1] TRUE # Run diffnet using the stored permutation count matrices diff1 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\")) #> Checking input arguments ... #> Done. #> Execute permutation tests ... #> | | 0% [... progress bar truncated ...] |======================================================================| 100% #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. # Run diffnet using the stored permutation association matrices diff2 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 100L, fileLoadAssoPerm = \"assoPerm\") #> Checking input arguments ... #> Done. #> Execute permutation tests ... #> | | 0% [... progress bar truncated ...] |======================================================================| 100% #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. #plot(diff1) #plot(diff2) # Note: Networks are empty (no significantly different associations) # for only 100 permutations # }"},{"path":"https://netcomi.de/reference/diffnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Constructing Differential Networks for Microbiome Data — diffnet","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"Constructs differential network objects of class microNet. 
Three methods for identifying differentially associated taxa are provided: Fisher's z-test, permutation tests, and the discordant method.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"","code":"diffnet( x, diffMethod = \"permute\", discordThresh = 0.8, n1 = NULL, n2 = NULL, fisherTrans = TRUE, nPerm = 1000L, permPvalsMethod = \"pseudo\", cores = 1L, verbose = TRUE, logFile = NULL, seed = NULL, alpha = 0.05, adjust = \"lfdr\", lfdrThresh = 0.2, trueNullMethod = \"convest\", pvalsVec = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), assoPerm = NULL )"},{"path":"https://netcomi.de/reference/diffnet.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"x An object of class microNet (returned by netConstruct). diffMethod Character string indicating the method used for determining differential associations. Possible values are \"permute\" (default) for performing permutation tests according to Gill et al. (2010), \"discordant\", which calls discordantRun (Siska and Kechris, 2016), and \"fisherTest\" for Fisher's z-test (Fisher, 1992). discordThresh Numeric value in [0,1]. Only used for the discordant method. Specifies the threshold for the posterior probability that a pair of taxa is differentially correlated between the groups. Taxa pairs with a posterior above the threshold are connected in the network. Defaults to 0.8. n1, n2 Integers giving the sample sizes of the two data sets used for network construction. Needed for Fisher's z-test if association matrices instead of count matrices were used for network construction. fisherTrans Logical. If TRUE (default), Fisher-transformed correlations are used for the permutation tests. nPerm Integer giving the number of permutations for the permutation tests. Defaults to 1000L. 
permPvalsMethod Character indicating the method used for determining p-values in the permutation tests. Currently, \"pseudo\" is the only available option (see details). cores Integer indicating the number of CPU cores used for the permutation tests. If cores > 1, the tests are performed in parallel. It is limited by the number of available CPU cores determined by detectCores. Defaults to 1L (no parallelization). verbose Logical. If TRUE (default), progress messages are shown. logFile Character string defining the name of a log file, which is created if permutation tests are conducted (the current iteration numbers are stored therein). Defaults to NULL so that no file is created. seed Integer giving a seed for the reproducibility of results. alpha Numeric value between 0 and 1 giving the significance level. Significantly different correlations are connected in the network. Defaults to 0.05. adjust Character indicating the method used for multiple testing adjustment of the tests for differentially correlated pairs of taxa. Possible values are \"lfdr\" (default) for local false discovery rate correction (via fdrtool), \"adaptBH\" for the adaptive Benjamini-Hochberg method (Benjamini and Hochberg, 2000), or one of the methods provided by p.adjust. lfdrThresh Defines the threshold for the local fdr if \"lfdr\" is chosen as the method for multiple testing correction. Defaults to 0.2, meaning that correlations with a corresponding local fdr less than or equal to 0.2 are identified as significant. trueNullMethod Character indicating the method used for estimating the proportion of true null hypotheses from a vector of p-values. Used for the adaptive Benjamini-Hochberg method for multiple testing adjustment (chosen by adjust = \"adaptBH\"). Accepts the options provided for the method argument of propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", and \"hist\". Can alternatively be \"farco\" for the \"iterative plug-in method\" proposed by Farcomeni (2007). pvalsVec Vector of p-values used for the permutation tests. Can be used for performing another method of multiple testing adjustment without executing the complete permutation process again. See the example. fileLoadAssoPerm Character giving the name (without extension) or path of the file storing the \"permuted\" association/dissimilarity matrices that have been exported by setting storeAssoPerm to TRUE. Only used for permutation tests. 
Set to NULL if no existing associations should be used. fileLoadCountsPerm Character giving the name (without extension) or path of the file storing the \"permuted\" count matrices that have been exported by setting storeCountsPerm to TRUE. Only used for permutation tests, and only if fileLoadAssoPerm = NULL. Set to NULL if no existing count matrices should be used. storeAssoPerm Logical indicating whether the association (or dissimilarity) matrices for the permuted data should be stored in a file. The filename is given via fileStoreAssoPerm. If TRUE, the computed \"permutation\" association/dissimilarity matrices can be reused via fileLoadAssoPerm to save runtime. Defaults to FALSE. fileStoreAssoPerm Character giving the file name for storing the matrix of associations/dissimilarities computed for the permuted data. Can also be a path. storeCountsPerm Logical indicating whether the permuted count matrices should be stored in an external file. Defaults to FALSE. fileStoreCountsPerm Character vector with two elements giving the names of the two files storing the permuted count matrices belonging to the two groups. assoPerm Only needed for output generated with NetCoMi v1.0.1! A list with two elements used for the permutation procedure. Each entry must contain association matrices for \"nPerm\" permutations. This can be the \"assoPerm\" value that is part of the output returned by diffnet or netCompare. See the example.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"The function returns an object of class diffnet. Depending on the performed test method, the output contains the following elements: Permutation tests: Discordant: Fisher's z-test:","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"Permutation procedure: The null hypothesis of the tests is defined as $$H_0: a1_{ij} - a2_{ij} = 0,$$ where \(a1_{ij}\) and \(a2_{ij}\) denote the association between taxa i and j in group 1 and group 2, respectively. 
generate sampling distribution differences \\(H_0\\), group labels randomly reassigned samples group sizes kept. associations re-estimated permuted data set. p-values calculated proportion \"permutation-differences\" larger observed difference. pseudo-count added numerator denominator order avoid zero p-values. p-values adjusted multiple testing.","code":""},{"path":"https://netcomi.de/reference/diffnet.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"benjamini2000adaptiveNetCoMi discordant2016NetCoMi farcomeni2007someNetCoMi fisher1992statisticalNetCoMi gill2010statisticalNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/diffnet.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Constructing Differential Networks for Microbiome Data — diffnet","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Generate a random group vector set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Network construction: amgut_net <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 97 taxa removed. #> 30 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. 
#--------------------- # Differential network # Fisher's z-test amgut_diff1 <- diffnet(amgut_net, diffMethod = \"fisherTest\") #> Checking input arguments ... #> Done. #> Adjust for multiple testing using 'lfdr' ... #> #> Execute fdrtool() ... #> Step 1... determine cutoff point #> Step 2... estimate parameters of null distribution and eta0 #> Step 3... compute p-values and estimate empirical PDF/CDF #> Step 4... compute q-values and local fdr #> #> Done. #> No significant differential associations detected after multiple testing adjustment. # Network contains no differentially correlated taxa: if (FALSE) { # \\dontrun{ plot(amgut_diff1) } # } # Without multiple testing correction (statistically not correct!) amgut_diff2 <- diffnet(amgut_net, diffMethod = \"fisherTest\", adjust = \"none\") #> Checking input arguments ... #> Done. plot(amgut_diff2) if (FALSE) { # \\dontrun{ # Permutation test (permutation matrices are stored) amgut_diff3 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, cores = 4L, adjust = \"lfdr\", storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm\", seed = 123456) # Use the p-values again (different adjustment method possible), but without # re-estimating the associations amgut_diff4 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", pvalsVec = amgut_diff3$pvalsVec) x11() plot(amgut_diff4) # Use the permutation associations again (same result as amgut_diff4) amgut_diff5 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", fileLoadAssoPerm = \"assoPerm\") x11() plot(amgut_diff5) # Use the permuted count matrices again (same result as amgut_diff4) amgut_diff6 <- diffnet(amgut_net, diffMethod = \"permute\", nPerm = 1000L, adjust = \"none\", fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) x11() plot(amgut_diff6) } # 
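# Illustrative sketch (not part of NetCoMi): what diffMethod = \"fisherTest\"
# does conceptually for a single pair of correlations r1 and r2 estimated
# from n1 and n2 samples. The function name and arguments are hypothetical.
fisherZTest <- function(r1, r2, n1, n2) {
  z1 <- atanh(r1)  # Fisher transformation: 0.5 * log((1 + r) / (1 - r))
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
  2 * pnorm(-abs((z1 - z2) / se))          # two-sided p-value
}
# Equal correlations give a p-value of 1; strongly differing
# correlations give a small p-value.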
}"},{"path":"https://netcomi.de/reference/dot-boottest.html","id":null,"dir":"Reference","previous_headings":"","what":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"The statistical significance of correlations between pairs of taxonomic units is tested using the bootstrap procedure proposed by Friedman and Alm (2012).","code":""},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"","code":".boottest( countMat, assoMat, nboot = 1000, measure, measurePar, cores = 4, logFile = NULL, verbose = TRUE, seed = NULL, assoBoot = NULL )"},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"countMat matrix containing microbiome data (read counts) for which the correlations are calculated (rows represent samples, columns represent taxa). assoMat matrix containing the associations estimated from countMat. nboot number of bootstrap samples. measure character specifying the method used for computing the associations between taxa. measurePar list with parameters passed to the function for computing associations/dissimilarities. See the details of the respective functions. cores number of CPU cores used for parallelization. logFile character defining a log file in which the current iteration number is stored. If NULL, no log file is created. verbose logical; if TRUE, the iteration numbers are printed to the R console. seed optional seed for reproducibility of the results. 
assoBoot list of bootstrap association matrices.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/dot-boottest.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bootstrap Procedure for Testing Statistical Significance of Correlation Values — .boottest","text":"friedman2012inferringNetCoMi","code":""},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute associations between taxa — .calcAssociation","title":"Compute associations between taxa — .calcAssociation","text":"Computes associations between taxa or distances between subjects for a given read count matrix.","code":""},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute associations between taxa — .calcAssociation","text":"","code":".calcAssociation(countMat, measure, measurePar, verbose)"},{"path":"https://netcomi.de/reference/dot-calcAssociation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute associations between taxa — .calcAssociation","text":"countMat numeric read count matrix, where rows are samples and columns are OTUs/taxa. 
measure character giving the measure used for estimating associations or dissimilarities. measurePar optional list with parameters passed to the function for estimating associations/dissimilarities. verbose if TRUE, progress messages are returned.","code":""},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate network properties — .calcProps","title":"Calculate network properties — .calcProps","text":"Calculates network properties for a given adjacency matrix.","code":""},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate network properties — .calcProps","text":"","code":".calcProps( adjaMat, dissMat, assoMat, avDissIgnoreInf, sPathNorm, sPathAlgo, normNatConnect, weighted, isempty, clustMethod, clustPar, weightClustCoef, hubPar, hubQuant, lnormFit, connectivity, graphlet, orbits, weightDeg, normDeg, normBetw, normClose, normEigen, centrLCC, jaccard = FALSE, jaccQuant = NULL, verbose = 0 )"},{"path":"https://netcomi.de/reference/dot-calcProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate network properties — .calcProps","text":"adjaMat adjacency matrix. dissMat dissimilarity matrix. assoMat association matrix. avDissIgnoreInf logical indicating whether to ignore infinities when calculating the average dissimilarity. If FALSE (default), infinity values are set to 1. sPathNorm logical. If TRUE (default), shortest paths are normalized by the average dissimilarity (only connected nodes are considered), i.e., a path is then interpreted as the number of steps with average dissimilarity. If FALSE, a shortest path is the minimum sum of dissimilarities between two nodes. sPathAlgo character indicating the algorithm used for computing shortest paths between all node pairs. distances (igraph) is used for the shortest path calculation. Possible values are: \"unweighted\", \"dijkstra\" (default), \"bellman-ford\", \"johnson\", or \"automatic\" (the fastest suitable algorithm is used). 
The shortest paths are needed for the average (shortest) path length and the closeness centrality. normNatConnect logical. If TRUE (default), the normalized natural connectivity is returned. weighted logical indicating whether the network is weighted. isempty logical indicating whether the network is empty. clustMethod character indicating the clustering algorithm. Possible values are \"hierarchical\" for a hierarchical algorithm based on dissimilarity values, or the clustering methods provided by the igraph package (see communities for possible methods). Defaults to \"cluster_fast_greedy\" for association-based networks and \"hierarchical\" for sample similarity networks. clustPar list with parameters passed to the clustering functions. If hierarchical clustering is used, the parameters are passed to hclust as well as cutree. weightClustCoef logical indicating whether the (global) clustering coefficient should be weighted (TRUE, default) or unweighted (FALSE). hubPar character vector with one or more elements (centrality measures) used for identifying hub nodes. Possible values are degree, betweenness, closeness, and eigenvector. If multiple measures are given, hubs are nodes with the highest centrality for all selected measures. See details. hubQuant quantile used for determining hub nodes. Defaults to 0.95. lnormFit hubs are nodes with a centrality value above the 95% quantile of the fitted log-normal distribution (lnormFit = TRUE) or of the empirical distribution of centrality values (lnormFit = FALSE; default). connectivity logical. If TRUE (default), edge and vertex connectivity are calculated. Might be disabled to reduce execution time. graphlet logical. If TRUE (default), graphlet-based network properties are computed: orbit counts for graphlets with 2-4 nodes (ocount) and the Graphlet Correlation Matrix (gcm). orbits numeric vector with integers from 0 to 14 defining the graphlet orbits. weightDeg logical. If TRUE, the weighted degree is used (see strength). Defaults to FALSE. Is automatically set to TRUE for a fully connected (dense) network. normDeg, normBetw, normClose, normEigen logical. If TRUE (default for all measures), a normalized version of the respective centrality values is returned. centrLCC logical indicating whether to compute centralities only for the largest connected component (LCC). 
If TRUE (default), centrality values of disconnected components are zero. jaccard shall the Jaccard index be calculated? jaccQuant quantile for the Jaccard index. verbose integer indicating the level of verbosity. Possible values: \"0\": no messages, \"1\": only important messages, \"2\" (default): all progress messages are shown. Can also be logical.","code":""},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":null,"dir":"Reference","previous_headings":"","what":"Function for Generating Vector Names — .getVecNames","title":"Function for Generating Vector Names — .getVecNames","text":"The function generates names for a vector that contains the elements of the lower triangle of a matrix. The R code is copied from getNames (an exported function of the discordant package) and changed so that the names are generated by columns (instead of by rows as in the original function, which produces implausible results).","code":""},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Function for Generating Vector Names — .getVecNames","text":"","code":".getVecNames(x)"},{"path":"https://netcomi.de/reference/dot-getVecNames.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Function for Generating Vector Names — .getVecNames","text":"x symmetric matrix whose column and row names are returned as a vector.","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":null,"dir":"Reference","previous_headings":"","what":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"The function implements procedures to test whether pairs of taxa are differentially associated, whether a taxon is differentially associated to all other taxa, or whether two networks are differentially associated between two groups, as proposed by Gill et al. (2010).","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Permutation Tests for 
Determining Differential Associations — .permTestDiffAsso","text":"","code":".permTestDiffAsso( countMat1, countMat2, countsJoint, normCounts1, normCounts2, assoMat1, assoMat2, paramsNetConstruct, method = c(\"connect.pairs\", \"connect.variables\", \"connect.network\"), fisherTrans = TRUE, pvalsMethod = \"pseudo\", adjust = \"lfdr\", adjust2 = \"holm\", trueNullMethod = \"convest\", alpha = 0.05, lfdrThresh = 0.2, nPerm = 1000, matchDesign = NULL, callNetConstr = NULL, cores = 4, verbose = TRUE, logFile = \"log.txt\", seed = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), assoPerm = NULL )"},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"countMat1, countMat2 matrices containing microbiome data (read counts) of group 1 and group 2 (rows represent samples and columns taxonomic units, respectively). countsJoint joint count matrices after preprocessing. normCounts1, normCounts2 normalized count matrices. assoMat1, assoMat2 association matrices corresponding to the two count matrices. The associations must have been estimated from the count matrices countMat1 and countMat2. paramsNetConstruct parameters used for network construction. method character vector indicating the tests to be performed. Possible values are \"connect.pairs\" (differentially correlated taxa pairs), \"connect.variables\" (one taxon against all others) and \"connect.network\" (differentially connected networks). By default, all three tests are conducted. fisherTrans logical indicating whether the correlation values should be Fisher-transformed. pvalsMethod currently, only \"pseudo\" is available, where 1 is added to both the number of permutations and the number of permutation test statistics being more extreme than the observed one in order to avoid zero p-values. 
adjust multiple testing adjustment for the tests for differentially correlated pairs of taxa; possible values are \"lfdr\" (default) for local false discovery rate correction (via fdrtool) or one of the methods provided by p.adjust. adjust2 multiple testing adjustment for the tests whether a taxon is differentially correlated to all other taxa; possible methods are those provided by p.adjust (at least a few hundred tests would be necessary for the local fdr correction). trueNullMethod character indicating the method used for estimating the proportion of true null hypotheses from a vector of p-values. Used for the adaptive Benjamini-Hochberg method for multiple testing adjustment (chosen by adjust = \"adaptBH\"). alpha significance level. lfdrThresh defines the threshold for the local fdr if \"lfdr\" is chosen as method for multiple testing correction; defaults to 0.2, which means that correlations with a corresponding local fdr less than or equal to 0.2 are identified as significant. nPerm number of permutations. matchDesign numeric vector with two elements specifying an optional matched-group (i.e. matched-pair) design, which is used for the permutation tests in netCompare and diffnet. c(1,1) corresponds to a matched-pair design. A 1:2 matching, for instance, is defined by c(1,2), which means that the first sample of group 1 is matched to the first two samples of group 2, and so on. The appropriate order of samples must be ensured. If NULL, the group memberships are shuffled randomly while group sizes identical to those of the original data set are ensured. callNetConstr call inherited from netConstruct(). cores number of CPU cores (the permutation tests are executed in parallel). verbose if TRUE, status messages and the numbers of SparCC iterations are printed. logFile character string naming a log file within which the current iteration number is stored. seed optional seed for reproducibility of the results. fileLoadAssoPerm character giving the name (without extension) or path of the file storing the \"permuted\" association/dissimilarity matrices that have been exported by setting storeAssoPerm to TRUE. Only used for permutation tests. Set to NULL if no existing associations should be used. fileLoadCountsPerm character giving the name (without extension) or path of the file storing the \"permuted\" count matrices that have been exported by setting storeCountsPerm to TRUE. Only used for permutation tests, and only if fileLoadAssoPerm = NULL. Set to NULL if no existing count matrices should be used. 
storeAssoPerm logical indicating whether the association (or dissimilarity) matrices for the permuted data should be stored in a file, whose name is given via fileStoreAssoPerm. If TRUE, the computed \"permutation\" association/dissimilarity matrices can be reused via fileLoadAssoPerm to save runtime. Defaults to FALSE. fileStoreAssoPerm character giving the name of a file to store a matrix containing the associations/dissimilarities for the permuted data. Can also be a path. storeCountsPerm logical indicating whether the permuted count matrices should be stored in an external file. Defaults to FALSE. fileStoreCountsPerm character vector with two elements giving the names of two files storing the permuted count matrices belonging to the two groups. assoPerm not used anymore.","code":""},{"path":"https://netcomi.de/reference/dot-permTestDiffAsso.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Permutation Tests for Determining Differential Associations — .permTestDiffAsso","text":"gill2010statisticalNetCoMi knijnenburg2009fewerNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/editLabels.html","id":null,"dir":"Reference","previous_headings":"","what":"Edit labels — editLabels","title":"Edit labels — editLabels","text":"Function for editing node labels, i.e., shortening them to a certain length and removing unwanted characters. The function is used by NetCoMi's plot functions plot.microNetProps and plot.diffnet.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Edit labels — editLabels","text":"","code":"editLabels( x, shortenLabels = c(\"intelligent\", \"simple\", \"none\"), labelLength = 6, labelPattern = NULL, addBrack = TRUE, charToRm = NULL, verbose = TRUE )"},{"path":"https://netcomi.de/reference/editLabels.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Edit labels — editLabels","text":"x character vector with node labels. shortenLabels character indicating how to shorten the labels. 
Available options are: \"intelligent\": Elements of charToRm are removed, the labels are shortened to length labelLength, and duplicates are removed using labelPattern. \"simple\": Elements of charToRm are removed and the labels are shortened to length labelLength. \"none\": Labels are not shortened. labelLength integer defining the length to which the labels shall be shortened if shortenLabels is used. Defaults to 6. labelPattern vector with three or five elements, which is used if the argument shortenLabels is set to \"intelligent\". If cutting a label to length labelLength leads to duplicates, the label is shortened according to labelPattern, where the first entry gives the length of the first part, the second entry is used as separator, and the third entry is the length of the third part. If labelPattern has five elements and the shortened labels are still not unique, the fourth element serves as a further separator, and the fifth element gives the length of the last label part. Defaults to c(4, \"'\", 3, \"'\", 3). See the details and example. addBrack logical indicating whether to add a closing square bracket. If TRUE, \"]\" is added if the first part contains \"[\". charToRm character vector giving one or more patterns to remove from the labels. verbose logical. If TRUE, the function is allowed to return messages.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Edit labels — editLabels","text":"Character vector with the edited labels.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Edit labels — editLabels","text":"Consider a vector with three bacteria names: \"Streptococcus1\", \"Streptococcus2\", and \"Streptomyces\". shortenLabels = \"simple\" with labelLength = 6 leads to the shortened labels \"Strept\", \"Strept\", and \"Strept\", which are not distinguishable. shortenLabels = \"intelligent\" with labelPattern = c(5, \"'\", 3) leads to the shortened labels \"Strep'coc\", \"Strep'coc\", and \"Strep'myc\", where the first two are not distinguishable. shortenLabels = \"intelligent\" with labelPattern = c(5, \"'\", 3, \"'\", 3) leads to the shortened labels \"Strep'coc'1\", \"Strep'coc'2\", and \"Strep'myc\", from which the original labels can be inferred. 
The intelligent approach works as follows: First, the labels are shortened to the defined length (argument labelLength). Then, labelPattern is applied to duplicated labels. Within each group of duplicates, the third label part starts at the letter where two labels differ for the first time. The five-part pattern (if given) applies if a group of duplicates consists of more than two labels or if the shortened labels are not unique after applying the three-part pattern. Then, the fifth part starts at the letter where all labels differ for the first time. A message is printed if the returned labels are not unique.","code":""},{"path":"https://netcomi.de/reference/editLabels.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Edit labels — editLabels","text":"","code":"labels <- c(\"Salmonella\", \"Clostridium\", \"Clostridiales(O)\", \"Ruminococcus\", \"Ruminococcaceae(F)\", \"Enterobacteriaceae\", \"Enterococcaceae\", \"[Bacillus] alkalinitrilicus\", \"[Bacillus] alkalisediminis\", \"[Bacillus] oceani\") # Use the \"simple\" method to shorten labels editLabels(labels, shortenLabels = \"simple\", labelLength = 6) #> [1] \"Salmon\" \"Clostr\" \"Clostr\" \"Rumino\" \"Rumino\" \"Entero\" \"Entero\" \"[Bacil\" #> [9] \"[Bacil\" \"[Bacil\" # -> Original labels cannot be inferred from shortened labels # Use the \"intelligent\" method to shorten labels with three-part pattern editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, labelPattern = c(6, \"'\", 4)) #> Shortened labels could not be made unique. 
#> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ales\" \"Rumino'us \" \"Rumino'acea\" #> [6] \"Entero'bact\" \"Entero'cocc\" \"[Bacil]'alka\" \"[Bacil]'alka\" \"[Bacil]'ocea\" # -> [Bacillus] alkalinitrilicus and [Bacillus] alkalisediminis not # distinguishable # Use the \"intelligent\" method to shorten labels with five-part pattern editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, labelPattern = c(6, \"'\", 3, \"'\", 3)) #> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ale\" \"Rumino'us \" #> [5] \"Rumino'ace\" \"Entero'bac\" \"Entero'coc\" \"[Bacil]'alk'nit\" #> [9] \"[Bacil]'alk'sed\" \"[Bacil]'oce\" # Same as before but no brackets are added editLabels(labels, shortenLabels = \"intelligent\", labelLength = 6, addBrack = FALSE, labelPattern = c(6, \"'\", 3, \"'\", 3)) #> [1] \"Salmon\" \"Clostr'um \" \"Clostr'ale\" \"Rumino'us \" #> [5] \"Rumino'ace\" \"Entero'bac\" \"Entero'coc\" \"[Bacil'alk'nit\" #> [9] \"[Bacil'alk'sed\" \"[Bacil'oce\" # Remove character pattern(s) (can also be a vector with multiple patterns) labels <- c(\"g__Faecalibacterium\", \"g__Clostridium\", \"g__Eubacterium\", \"g__Bifidobacterium\", \"g__Bacteroides\") editLabels(labels, charToRm = \"g__\") #> [1] \"Faecal\" \"Clostr\" \"Eubact\" \"Bifido\" \"Bacter\""},{"path":"https://netcomi.de/reference/gcoda.html","id":null,"dir":"Reference","previous_headings":"","what":"gCoda: conditional dependence network inference for compositional data — gcoda","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"A parallelized implementation of the gCoda approach (Fang et al., 2017), published on GitHub (Fang, 2016).","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"","code":"gcoda( x, counts = F, pseudo = 0.5, lambda.min.ratio = 1e-04, nlambda = 15, ebic.gamma = 0.5, cores = 1L, 
verbose = TRUE )"},{"path":"https://netcomi.de/reference/gcoda.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"x numeric matrix (nxp) with samples in rows and OTUs/taxa in columns. counts logical indicating whether x contains counts or fractions. Defaults to FALSE, meaning that x contains fractions whose rows sum up to 1. pseudo numeric value giving a pseudo count, which is added to all counts if counts = TRUE. Defaults to 0.5. lambda.min.ratio numeric value specifying lambda(max) / lambda(min). Defaults to 1e-4. nlambda numeric value (integer) giving the number of tuning parameters. Defaults to 15. ebic.gamma numeric value specifying the gamma value of the EBIC. Defaults to 0.5. cores integer indicating the number of CPU cores used for the computation. Defaults to 1L. If cores > 1L, foreach is used for parallel execution. verbose logical indicating whether a progress indicator is shown (TRUE by default).","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"A list containing the following elements:","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"fang2016gcodaGithubNetCoMi fang2017gcodaNetCoMi","code":""},{"path":"https://netcomi.de/reference/gcoda.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"gCoda: conditional dependence network inference for compositional data — gcoda","text":"Fang Huaying, Peking University (R code and documentation) Stefanie Peschel (Parts of the documentation; parallelization)","code":""},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":null,"dir":"Reference","previous_headings":"","what":"Install all packages used within NetCoMi — 
installNetCoMiPacks","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"The function installs all R packages used within NetCoMi that are not listed as dependencies or imports in NetCoMi's description file. These are optional packages only needed for certain network construction settings. BiocManager::install is used for the installation since it installs or updates Bioconductor as well as CRAN packages. Installed CRAN packages: cccd, LaplacesDemon, propr, zCompositions. Installed Bioconductor packages: ccrepe, DESeq2, discordant, limma, metagenomeSeq. If not installed via this function, these packages are installed by the respective NetCoMi functions when needed.","code":""},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"","code":"installNetCoMiPacks(onlyMissing = TRUE, lib = NULL, ...)"},{"path":"https://netcomi.de/reference/installNetCoMiPacks.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Install all packages used within NetCoMi — installNetCoMiPacks","text":"onlyMissing logical. If TRUE (default), installed.packages is used to read out the packages installed in the given library, and only missing packages are installed. If FALSE, all packages are installed or updated (if already installed). lib character vector giving the library directories where the missing packages are to be installed. If NULL, the first element of .libPaths is used. ... 
Additional arguments used by install or install.packages.","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":null,"dir":"Reference","previous_headings":"","what":"Multiple testing adjustment — multAdjust","title":"Multiple testing adjustment — multAdjust","text":"The function adjusts a vector of p-values for multiple testing.","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Multiple testing adjustment — multAdjust","text":"","code":"multAdjust( pvals, adjust = \"adaptBH\", trueNullMethod = \"convest\", pTrueNull = NULL, verbose = FALSE )"},{"path":"https://netcomi.de/reference/multAdjust.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Multiple testing adjustment — multAdjust","text":"pvals numeric vector of p-values to be adjusted. adjust character specifying the method used for adjustment. Can be \"lfdr\", \"adaptBH\", or one of the methods provided by p.adjust. trueNullMethod character indicating the method used for estimating the proportion of true null hypotheses from a vector of p-values. Used for the adaptive Benjamini-Hochberg method for multiple testing adjustment (chosen by adjust = \"adaptBH\"). Accepts the provided options of the method argument of propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", and \"hist\". Can alternatively be \"farco\" for the \"iterative plug-in method\" proposed by Farcomeni (2007). pTrueNull proportion of true null hypotheses used for the adaptBH method. If NULL, the proportion is computed using the method defined via trueNullMethod. 
verbose if TRUE, progress messages are returned.","code":""},{"path":"https://netcomi.de/reference/multAdjust.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Multiple testing adjustment — multAdjust","text":"farcomeni2007someNetCoMi","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":null,"dir":"Reference","previous_headings":"","what":"Microbiome Network Analysis — netAnalyze","title":"Microbiome Network Analysis — netAnalyze","text":"Determine network properties for objects of class microNet.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Microbiome Network Analysis — netAnalyze","text":"","code":"netAnalyze(net, # Centrality related: centrLCC = TRUE, weightDeg = FALSE, normDeg = TRUE, normBetw = TRUE, normClose = TRUE, normEigen = TRUE, # Cluster related: clustMethod = NULL, clustPar = NULL, clustPar2 = NULL, weightClustCoef = TRUE, # Hub related: hubPar = \"eigenvector\", hubQuant = 0.95, lnormFit = FALSE, # Graphlet related: graphlet = TRUE, orbits = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), gcmHeat = TRUE, gcmHeatLCC = TRUE, # Further arguments: avDissIgnoreInf = FALSE, sPathAlgo = \"dijkstra\", sPathNorm = TRUE, normNatConnect = TRUE, connectivity = TRUE, verbose = 1 )"},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Microbiome Network Analysis — netAnalyze","text":"net object of class microNet (returned by netConstruct). centrLCC logical indicating whether to compute centralities only for the largest connected component (LCC). If TRUE (default), centrality values of disconnected components are zero. weightDeg logical. If TRUE, the weighted degree is used (see strength). Defaults to FALSE. Is automatically set to TRUE for a fully connected (dense) network. normDeg, normBetw, normClose, normEigen logical. 
If TRUE (default for all measures), a normalized version of the respective centrality values is returned. clustMethod character indicating the clustering algorithm. Possible values are \"hierarchical\" for a hierarchical algorithm based on dissimilarity values, or the clustering methods provided by the igraph package (see communities for possible methods). Defaults to \"cluster_fast_greedy\" for association-based networks and \"hierarchical\" for sample similarity networks. clustPar list with parameters passed to the clustering functions. If hierarchical clustering is used, the parameters are passed to hclust and cutree (default: list(method = \"average\", k = 3)). clustPar2 same as clustPar, but for the second network. If NULL and net contains two networks, clustPar is used for the second network as well. weightClustCoef logical indicating whether the (global) clustering coefficient should be weighted (TRUE, default) or unweighted (FALSE). hubPar character vector with one or more elements (centrality measures) used for identifying hub nodes. Possible values are degree, betweenness, closeness, and eigenvector. If multiple measures are given, hubs are nodes with the highest centrality for all selected measures. See details. hubQuant quantile used for determining hub nodes. Defaults to 0.95. lnormFit hubs are nodes with a centrality value above the 95% quantile of the fitted log-normal distribution (lnormFit = TRUE) or of the empirical distribution of centrality values (lnormFit = FALSE; default). graphlet logical. If TRUE (default), graphlet-based network properties are computed: the orbit counts defined by orbits and the corresponding Graphlet Correlation Matrix (gcm). orbits numeric vector with integers between 0 and 14 defining the orbits used for calculating the GCM. Minimum length is 2. Defaults to c(0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11), thus excluding redundant orbits such as orbit o3. gcmHeat logical indicating if a heatmap of the GCM(s) should be plotted. Defaults to TRUE. gcmHeatLCC logical. The GCM heatmap is plotted for the LCC if TRUE (default) and for the whole network if FALSE. avDissIgnoreInf logical indicating whether to ignore infinities when calculating the average dissimilarity. If FALSE (default), infinity values are set to 1. sPathAlgo character indicating the algorithm used for computing shortest paths between all node pairs. 
distances (igraph) used shortest path calculation. Possible values : \"unweighted\", \"dijkstra\" (default), \"bellman-ford\", \"johnson\", \"automatic\" (fastest suitable algorithm used). shortest paths needed average (shortest) path length closeness centrality. sPathNorm logical. TRUE (default), shortest paths normalized average dissimilarity (connected nodes considered), .e., path interpreted steps average dissimilarity. FALSE, shortest path minimum sum dissimilarities two nodes. normNatConnect logical. TRUE (default), normalized natural connectivity returned. connectivity logical. TRUE (default), edge vertex connectivity calculated. Might disabled reduce execution time. verbose integer indicating level verbosity. Possible values: \"0\": messages, \"1\": important messages, \"2\"(default): progress messages shown. Can also logical.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Microbiome Network Analysis — netAnalyze","text":"object class microNetProps containing following elements:","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Microbiome Network Analysis — netAnalyze","text":"Definitions: (Connected) Component Subnetwork two nodes connected path. Number components Number connected components. Since single node connected trivial path, single node component. Largest connected component (LCC) connected component highest number nodes. Shortest paths Computed using distances. algorithm defined via sPathAlgo. Normalized shortest paths ( sPathNorm TRUE) calculated dividing shortest paths average dissimilarity (see ). Global network properties: Relative LCC size = (# nodes LCC) / (# nodes complete network) Clustering Coefficient weighted (global) clustering coefficient arithmetic mean local clustering coefficient defined Barrat et al. 
(computed transitivity type = \"barrat\"), NAs ignored. unweighted (global) clustering coefficient computed using transitivity type = \"global\". Modularity modularity score determined clustering computed using modularity.igraph. Positive edge percentage Percentage edges positive estimated association total number edges. Edge density Computed using edge_density. Natural connectivity Computed using natural.connectivity. \"norm\" parameter defined normNatConnect. Vertex / Edge connectivity Computed using vertex_connectivity edge_connectivity. equal zero disconnected network. Average dissimilarity Computed mean dissimilarity values (lower triangle dissMat). avDissIgnoreInf specified whether ignore infinite dissimilarities. average dissimilarity empty network 1. Average path length Computed mean shortest paths (normalized unnormalized). av. path length empty network 1. Clustering algorithms: Hierarchical clustering Based dissimilarity values. Computed using hclust cutree. cluster_optimal Modularity optimization. See cluster_optimal. cluster_fast_greedy Fast greedy modularity optimization. See cluster_fast_greedy. cluster_louvain Multilevel optimization modularity. See cluster_louvain. cluster_edge_betweenness Based edge betweenness. Dissimilarity values used. See cluster_edge_betweenness. cluster_leading_eigen Based leading eigenvector community matrix. See cluster_leading_eigen. cluster_spinglass Find communities via spin-glass model simulated annealing. See cluster_spinglass. cluster_walktrap Find communities via short random walks. See cluster_walktrap. Hubs: Hubs nodes highest centrality values one centrality measures. \"highest values\" regarding centrality measure defined values lying certain quantile (defined hubQuant) either empirical distribution centralities (lnormFit = FALSE) fitted log-normal distribution (lnormFit = TRUE; fitdistr used fitting). quantile set using hubQuant. 
clustPar contains multiple measures, centrality values hub node must given quantile measures time. Centrality measures: Via centrLCC decided whether centralities calculated whole network largest connected component. latter case (centrLCC = FALSE), nodes outside LCC centrality value zero. Degree unweighted degree (normalized unnormalized) computed using degree, weighted degree using strength. Betweenness centrality unnormalized normalized betweenness centrality computed using betweenness. Closeness centrality Unnormalized: closeness = sum(1/shortest paths) Normalized: closeness_unnorm = closeness / (# nodes – 1) Eigenvector centrality centrLCC == FALSE network consists one components: eigenvector centrality (EVC) computed component separately (using eigen_centrality) scaled according component size overcome fact nodes smaller components higher EVC. normEigen == TRUE, EVC values divided maximum EVC value. EVC single nodes zero. Otherwise, EVC computed LCC using eigen_centrality (scale argument set according normEigen). Graphlet-based properties: Orbit counts Count node orbits graphlets 2 4 nodes. See Hocevar Demsar (2016) details. count4 function orca package used orbit counting. Graphlet Correlation Matrix (GCM) Matrix Spearman's correlations network's (non-redundant) node orbits (Yaveroglu et al., 2014). default, 11 non-redundant orbits used. 
grouped according role: orbit 0 represents degree, orbits (2, 5, 7) represent nodes within chain, orbits (8, 10, 11) represent nodes cycle, orbits (6, 9, 4, 1) represent terminal node.","code":""},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Microbiome Network Analysis — netAnalyze","text":"hocevar2016computationNetCoMi yaveroglu2014revealingNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netAnalyze.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Microbiome Network Analysis — netAnalyze","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction amgut_net1 <- netConstruct(amgut1.filt, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.4) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Network analysis # Using eigenvector centrality as hub score amgut_props1 <- netAnalyze(amgut_net1, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\") summary(amgut_props1, showCentr = \"eigenvector\", numbNodes = 15L, digits = 3L) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.240 #> Clustering coefficient 0.733 #> Modularity 0.338 #> Positive edge percentage 86.364 #> Edge density 0.333 #> Natural connectivity 0.190 #> Vertex connectivity 1.000 #> Edge connectivity 1.000 #> Average dissimilarity* 0.820 #> Average path length** 1.526 #> #> Whole network: #> #> Number of components 33.000 #> Clustering coefficient 0.523 #> Modularity 0.512 #> Positive edge percentage 89.286 #> Edge density 0.023 #> Natural connectivity 0.028 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 30 6 4 2 5 3 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 119010 #> 71543 #> 9715 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Eigenvector centrality (normalized): #> #> 9715 1.000 #> 119010 0.733 #> 71543 0.723 #> 9753 0.670 #> 307981 0.670 #> 301645 0.670 #> 305760 0.669 #> 512309 0.607 #> 188236 0.131 #> 364563 0.026 #> 326792 0.023 #> 311477 0.005 #> 73352 0.000 #> 331820 0.000 #> 248140 0.000 # Using degree, betweenness and closeness centrality as hub scores amgut_props2 <- 
netAnalyze(amgut_net1, clustMethod = \"cluster_fast_greedy\", hubPar = c(\"degree\", \"betweenness\", \"closeness\")) summary(amgut_props2, showCentr = \"all\", numbNodes = 5L, digits = 5L) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.24000 #> Clustering coefficient 0.73277 #> Modularity 0.33781 #> Positive edge percentage 86.36364 #> Edge density 0.33333 #> Natural connectivity 0.19028 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.82023 #> Average path length** 1.52564 #> #> Whole network: #> #> Number of components 33.00000 #> Clustering coefficient 0.52341 #> Modularity 0.51212 #> Positive edge percentage 89.28571 #> Edge density 0.02286 #> Natural connectivity 0.02791 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the whole network #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 0 1 2 3 4 5 #> #: 30 6 4 2 5 3 #> #> ______________________________ #> Hubs #> - In alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> No hubs detected. 
#> ______________________________ #> Centrality measures #> - In decreasing order #> - Centrality of disconnected components is zero #> ```````````````````````````````````````````````` #> Degree (normalized): #> #> 9715 0.14286 #> 188236 0.10204 #> 307981 0.08163 #> 71543 0.08163 #> 512309 0.08163 #> #> Betweenness centrality (normalized): #> #> 9715 0.50909 #> 188236 0.47273 #> 307981 0.36364 #> 364563 0.18182 #> 73352 0.00000 #> #> Closeness centrality (normalized): #> #> 305760 2.17422 #> 301645 2.13487 #> 307981 2.12892 #> 119010 1.36913 #> 71543 1.33707 #> #> Eigenvector centrality (normalized): #> #> 9715 1.00000 #> 119010 0.73317 #> 71543 0.72255 #> 9753 0.67031 #> 307981 0.67026 # Calculate centralities only for the largest connected component amgut_props3 <- netAnalyze(amgut_net1, centrLCC = TRUE, clustMethod = \"cluster_fast_greedy\", hubPar = \"eigenvector\") summary(amgut_props3, showCentr = \"none\", clusterLCC = TRUE) #> #> Component sizes #> ``````````````` #> size: 12 6 2 1 #> #: 1 1 1 30 #> ______________________________ #> Global network properties #> ````````````````````````` #> Largest connected component (LCC): #> #> Relative LCC size 0.24000 #> Clustering coefficient 0.73277 #> Modularity 0.33781 #> Positive edge percentage 86.36364 #> Edge density 0.33333 #> Natural connectivity 0.19028 #> Vertex connectivity 1.00000 #> Edge connectivity 1.00000 #> Average dissimilarity* 0.82023 #> Average path length** 1.52564 #> #> Whole network: #> #> Number of components 33.00000 #> Clustering coefficient 0.52341 #> Modularity 0.51212 #> Positive edge percentage 89.28571 #> Edge density 0.02286 #> Natural connectivity 0.02791 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Clusters #> - In the LCC #> - Algorithm: cluster_fast_greedy #> ```````````````````````````````` #> #> name: 1 2 3 #> #: 4 5 3 #> #> ______________________________ #> Hubs #> - In 
alphabetical/numerical order #> - Based on empirical quantiles of centralities #> ``````````````````````````````````````````````` #> 119010 #> 71543 #> 9715 # Network plot plot(amgut_props1) plot(amgut_props2) plot(amgut_props3) #---------------------------------------------------------------------------- # Plot the GCM heatmap plotHeat(mat = amgut_props1$graphletLCC$gcm1, pmat = amgut_props1$graphletLCC$pAdjust1, type = \"mixed\", title = \"GCM\", colorLim = c(-1, 1), mar = c(2, 0, 2, 0)) # Add rectangles graphics::rect(xleft = c( 0.5, 1.5, 4.5, 7.5), ybottom = c(11.5, 7.5, 4.5, 0.5), xright = c( 1.5, 4.5, 7.5, 11.5), ytop = c(10.5, 10.5, 7.5, 4.5), lwd = 2, xpd = NA) text(6, -0.2, xpd = NA, \"Significance codes: ***: 0.001; **: 0.01; *: 0.05\") #---------------------------------------------------------------------------- # Dissimilarity-based network (where nodes are subjects) amgut_net4 <- netConstruct(amgut1.filt, measure = \"aitchison\", filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = 30), zeroMethod = \"multRepl\", sparsMethod = \"knn\") #> Checking input arguments ... #> Done. #> Infos about changed arguments: #> Counts normalized to fractions for measure \"aitchison\". #> Data filtering ... #> 259 samples removed. #> 127 taxa and 30 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... #> Done. #> #> Normalization: #> Counts normalized by total sum scaling. #> #> Calculate 'aitchison' dissimilarities ... #> Done. #> #> Sparsify dissimilarities via 'knn' ... #> Done. 
amgut_props4 <- netAnalyze(amgut_net4, clustMethod = \"hierarchical\", clustPar = list(k = 3)) plot(amgut_props4)"},{"path":"https://netcomi.de/reference/netCompare.html","id":null,"dir":"Reference","previous_headings":"","what":"Group Comparison of Network Properties — netCompare","title":"Group Comparison of Network Properties — netCompare","text":"Calculate compare network properties microbial networks using Jaccard's index, Rand index, Graphlet Correlation Distance, permutation tests.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Group Comparison of Network Properties — netCompare","text":"","code":"netCompare( x, permTest = FALSE, jaccQuant = 0.75, lnormFit = NULL, testRand = TRUE, nPermRand = 1000L, gcd = TRUE, gcdOrb = c(0, 2, 5, 7, 8, 10, 11, 6, 9, 4, 1), verbose = TRUE, nPerm = 1000L, adjust = \"adaptBH\", trueNullMethod = \"convest\", cores = 1L, logFile = NULL, seed = NULL, fileLoadAssoPerm = NULL, fileLoadCountsPerm = NULL, storeAssoPerm = FALSE, fileStoreAssoPerm = \"assoPerm\", storeCountsPerm = FALSE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), returnPermProps = FALSE, returnPermCentr = FALSE, assoPerm = NULL, dissPerm = NULL )"},{"path":"https://netcomi.de/reference/netCompare.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Group Comparison of Network Properties — netCompare","text":"x object class microNetProps (returned netAnalyze). permTest logical. TRUE, permutation test conducted test centrality measures global network properties group differences. Defaults FALSE. May lead considerably increased execution time! jaccQuant numeric value 0 1 specifying quantile used threshold identify central nodes centrality measure. resulting sets nodes used calculate Jaccard's index (see details). Default 0.75. 
lnormFit logical indicating whether log-normal distribution fitted calculated centrality values determining Jaccard's index (see details). NULL (default), value adopted input, .e., equals method used determining hub nodes. testRand logical. TRUE, permutation test conducted adjusted Rand index (H0: ARI = 0). Execution time may increased large networks. nPermRand integer giving number permutations used testing adjusted Rand index significantly different zero. Ignored testRand = FALSE. Defaults 1000L. gcd logical. TRUE (default), Graphlet Correlation Distance (GCD) computed. gcdOrb numeric vector integers 0 14 defining orbits used calculating GCD. Minimum length 2. Defaults c(0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11), thus excluding redundant orbits orbit o3. verbose logical. TRUE (default), status messages shown. nPerm integer giving number permutations permTest = TRUE. Default 1000L. adjust character indicating method used multiple testing adjustment permutation p-values. Possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool), \"adaptBH\" adaptive Benjamini-Hochberg method (Benjamini Hochberg, 2000), one methods provided p.adjust (see p.adjust.methods()). trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\"(default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). cores integer indicating number CPU cores used permutation tests. cores > 1, tests performed parallel. limited number available CPU cores determined detectCores. Defaults 1L (parallelization). logFile character string naming log file current iteration number written (permutation tests performed). Defaults NULL log file generated. seed integer giving seed reproducibility results. 
fileLoadAssoPerm character giving name path (without file extension) file containing \"permuted\" association/dissimilarity matrices generated setting storeAssoPerm TRUE. used permutation tests. NULL, existing associations used. fileLoadCountsPerm character giving name path (without file extension) file containing \"permuted\" count matrices generated setting storeCountsPerm TRUE. used permutation tests, fileLoadAssoPerm = NULL. NULL, existing count matrices used. storeAssoPerm logical indicating whether association/dissimilarity matrices permuted data saved file. file name given via fileStoreAssoPerm. TRUE, computed \"permutation\" association/dissimilarity matrices can reused via fileLoadAssoPerm save runtime. Defaults FALSE. Ignored fileLoadAssoPerm NULL. fileStoreAssoPerm character giving name file matrix associations/dissimilarities permuted data saved. Can also path. storeCountsPerm logical indicating whether permuted count matrices saved external file. Defaults FALSE. Ignored fileLoadCountsPerm NULL. fileStoreCountsPerm character vector two elements giving names two files storing permuted count matrices belonging two groups. returnPermProps logical. TRUE, global properties absolute differences permuted data returned. returnPermCentr logical. TRUE, centralities absolute differences permuted data returned. assoPerm needed output generated NetCoMi v1.0.1! list two elements used permutation procedure. entry must contain association matrices \"nPerm\" permutations. can \"assoPerm\" value part output either returned diffnet netCompare. dissPerm needed output generated NetCoMi v1.0.1! 
Usage analog assoPerm dissimilarity measure used network construction.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Group Comparison of Network Properties — netCompare","text":"Object class microNetComp following elements: Additional output permutation tests conducted:","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Group Comparison of Network Properties — netCompare","text":"Permutation procedure: Used testing centrality measures global network properties group differences. null hypothesis tests defined $$H_0: c1_i - c2_i = 0,$$ \\(c1_i\\) \\(c2_i\\) denote centrality values taxon group 1 2, respectively. generate sampling distribution differences \\(H_0\\), group labels randomly reassigned samples group sizes kept. associations re-estimated permuted data set. p-values calculated proportion \"permutation-differences\" larger equal observed difference. non-exact tests, pseudo-count added numerator denominator avoid p-values zero. Several methods adjusting p-values multiplicity available. Jaccard's index: Jaccard's index expresses centrality measure equal sets central nodes among two networks. sets defined nodes centrality value defined quantile (via jaccQuant) either empirical distribution centrality values (lnormFit = FALSE) fitted log-normal distribution (lnormFit = TRUE). index ranges 0 1, 1 means sets central nodes exactly equal networks 0 indicates central nodes completely different. index calculated suggested Real Vargas (1996). Rand index: Rand index used express whether determined clusterings equal groups. adjusted Rand index (ARI) ranges -1 1, 1 indicates two clusterings exactly equal. expected index value two random clusterings 0. implemented test procedure accordance explanations Qannari et al. 
(2014), p-value alpha levels means ARI significantly higher expected two random clusterings. Graphlet Correlation Distance: graphlet-based distance measure, defined Euclidean distance upper triangle values Graphlet Correlation Matrices (GCM) two networks (Yaveroglu et al., 2014). GCM network matrix Spearman's correlations network's node orbits (Hocevar Demsar, 2016). See calcGCD details.","code":""},{"path":"https://netcomi.de/reference/netCompare.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Group Comparison of Network Properties — netCompare","text":"benjamini2000adaptiveNetCoMi farcomeni2007someNetCoMi gill2010statisticalNetCoMi hocevar2016computationNetCoMi qannari2014significanceNetCoMi real1996probabilisticNetCoMi yaveroglu2014revealingNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netCompare.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Group Comparison of Network Properties — netCompare","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") amgut_season_yes #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 121 samples ] #> sample_data() Sample Data: [ 121 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] amgut_season_no #> phyloseq-class experiment-level object #> otu_table() OTU Table: [ 138 taxa and 163 samples ] #> sample_data() Sample Data: [ 163 samples by 166 sample variables ] #> tax_table() Taxonomy Table: [ 138 taxa by 7 taxonomic ranks ] # Filter the 121 samples (sample size of the smaller group) with highest # frequency to make the sample sizes 
equal and thus ensure comparability. n_yes <- phyloseq::nsamples(amgut_season_yes) # Network construction amgut_net <- netConstruct(data = amgut_season_yes, data2 = amgut_season_no, measure = \"pearson\", filtSamp = \"highestFreq\", filtSampPar = list(highestFreq = n_yes), filtTax = \"highestVar\", filtTaxPar = list(highestVar = 30), zeroMethod = \"pseudoZO\", normMethod = \"clr\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 0 samples removed in data set 1. #> 42 samples removed in data set 2. #> 114 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 24 taxa and 120 samples remaining in group 1. #> 24 taxa and 121 samples remaining in group 2. #> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Network analysis # Note: Please zoom into the GCM plot or open a new window using: # x11(width = 10, height = 10) amgut_props <- netAnalyze(amgut_net, clustMethod = \"cluster_fast_greedy\") # Network plot plot(amgut_props, sameLayout = TRUE, title1 = \"Seasonal allergies\", title2 = \"No seasonal allergies\") #-------------------------- # Network comparison # Without permutation tests amgut_comp1 <- netCompare(amgut_props, permTest = FALSE) #> Checking input arguments ... #> Done. 
summary(amgut_comp1) #> #> Comparison of Network Properties #> ---------------------------------- #> CALL: #> netCompare(x = amgut_props, permTest = FALSE) #> #> ______________________________ #> Global network properties #> ````````````````````````` #> Whole network: #> group '1' group '2' difference #> Number of components 1.000 2.000 1.000 #> Clustering coefficient 0.534 0.448 0.086 #> Modularity 0.168 0.155 0.012 #> Positive edge percentage 32.099 39.683 7.584 #> Edge density 0.293 0.249 0.044 #> Natural connectivity 0.070 0.068 0.002 #> Vertex connectivity 1.000 1.000 0.000 #> Edge connectivity 1.000 1.000 0.000 #> Average dissimilarity* 0.920 0.929 0.009 #> Average path length** 1.496 1.558 0.062 #> ----- #> *: Dissimilarity = 1 - edge weight #> **: Path length = Units with average dissimilarity #> #> ______________________________ #> Jaccard index (similarity betw. sets of most central nodes) #> ``````````````````````````````````````````````````````````` #> Jacc P(<=Jacc) P(>=Jacc) #> degree 0.167 0.351166 0.912209 #> betweenness centr. 0.333 0.650307 0.622822 #> closeness centr. 0.333 0.650307 0.622822 #> eigenvec. centr. 0.333 0.650307 0.622822 #> hub taxa 0.000 0.197531 1.000000 #> ----- #> Jaccard index in [0,1] (1 indicates perfect agreement) #> #> ______________________________ #> Adjusted Rand index (similarity betw. clusterings) #> `````````````````````````````````````````````````` #> wholeNet LCC #> ARI 0.054 0.054 #> p-value 0.411 0.397 #> ----- #> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203 0.954 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. #> 158660 0.522 0.261 0.261 #> 469709 0.391 0.174 0.217 #> 303304 0.261 0.043 0.217 #> 184983 0.174 0.348 0.174 #> 10116 0.478 0.304 0.174 #> 512309 0.565 0.391 0.174 #> 278234 0.174 0.043 0.130 #> 361496 0.130 0.000 0.130 #> 71543 0.522 0.391 0.130 #> 188236 0.565 0.435 0.130 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. #> 184983 0.000 0.147 0.147 #> 322235 0.087 0.195 0.108 #> 190597 0.099 0.000 0.099 #> 188236 0.225 0.143 0.082 #> 71543 0.123 0.043 0.079 #> 512309 0.083 0.139 0.056 #> 326792 0.000 0.043 0.043 #> 73352 0.055 0.095 0.040 #> 248140 0.000 0.026 0.026 #> 278234 0.020 0.000 0.020 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. #> 361496 0.643 0.000 0.643 #> 303304 0.790 0.510 0.280 #> 158660 1.011 0.812 0.200 #> 248140 0.478 0.675 0.197 #> 469709 0.931 0.772 0.159 #> 278234 0.678 0.539 0.139 #> 184983 0.775 0.909 0.135 #> 512309 1.045 0.912 0.133 #> 181016 0.544 0.665 0.121 #> 10116 0.966 0.850 0.115 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. 
#> 158660 0.971 0.314 0.657 #> 184983 0.319 0.774 0.455 #> 322235 0.857 0.403 0.454 #> 469709 0.695 0.309 0.386 #> 303304 0.397 0.037 0.360 #> 90487 0.483 0.200 0.283 #> 307981 0.682 0.965 0.283 #> 364563 0.716 0.990 0.274 #> 326792 0.707 0.954 0.246 #> 512309 1.000 0.828 0.172 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # \\donttest{ # With permutation tests (with only 100 permutations to decrease runtime) amgut_comp2 <- netCompare(amgut_props, permTest = TRUE, nPerm = 100L, cores = 1L, storeCountsPerm = TRUE, fileStoreCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), storeAssoPerm = TRUE, fileStoreAssoPerm = \"assoPerm\", seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Files 'countsPerm1.bmat, countsPerm1.desc.txt, #> countsPerm2.bmat, and countsPerm2.desc.txt created. #> Files 'assoPerm.bmat and assoPerm.desc.txt created. #> Execute permutation tests ... 
#>   |======================================================================| 100% #> Done. #> Calculating p-values ... #> Done. #> Adjust for multiple testing using 'adaptBH' ... #> Done. # Rerun with a different adjustment method ... # ... using the stored permutation count matrices amgut_comp3 <- netCompare(amgut_props, adjust = \"BH\", permTest = TRUE, nPerm = 100L, fileLoadCountsPerm = c(\"countsPerm1\", \"countsPerm2\"), seed = 123456) #> Checking input arguments ... #> Done. #> Calculate network properties ... #> Done. #> Execute permutation tests ... 
#> |======================================================================| 100%
#> Done.
#> Calculating p-values ...
#> Done.
#> Adjust for multiple testing using 'BH' ...
#> Done.

# ... using the stored permutation association matrices
amgut_comp4 <- netCompare(amgut_props, 
                          adjust = "BH",
                          permTest = TRUE, 
                          nPerm = 100L, 
                          fileLoadAssoPerm = "assoPerm",
                          seed = 123456)
#> Checking input arguments ...
#> Done.
#> Calculate network properties ...
#> Done.
#> Execute permutation tests ...
#> |======================================================================| 100%
#> Done.
#> Calculating p-values ...
#> Done.
#> Adjust for multiple testing using 'BH' ...
#> Done.
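Both reruns above depend on the matrix files written earlier via `fileStoreAssoPerm` and `fileStoreCountsPerm`. As a side note (a sketch, not part of the captured session), a quick base-R check can catch a missing or renamed file before `netCompare` fails while reloading. The file names follow the defaults used above (the `.bmat`/`.desc.txt` pairs reported in the console output); adjust them if you stored the matrices elsewhere:

```r
# Sketch: confirm the stored permutation matrix files are present
# before passing fileLoadAssoPerm / fileLoadCountsPerm to netCompare().
permFiles <- c("assoPerm.bmat", "assoPerm.desc.txt",
               "countsPerm1.bmat", "countsPerm1.desc.txt",
               "countsPerm2.bmat", "countsPerm2.desc.txt")
missing <- permFiles[!file.exists(permFiles)]
if (length(missing) > 0) {
  stop("Missing permutation files: ", paste(missing, collapse = ", "))
}
```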
# amgut_comp3 and amgut_comp4 should be equal
all.equal(amgut_comp3$adjaMatrices, amgut_comp4$adjaMatrices)
#> [1] TRUE
all.equal(amgut_comp3$properties, amgut_comp4$properties)
#> [1] TRUE

summary(amgut_comp2)
#> 
#> Comparison of Network Properties
#> ----------------------------------
#> CALL: 
#> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, cores = 1, 
#>     seed = 123456, storeAssoPerm = TRUE, fileStoreAssoPerm = "assoPerm", 
#>     storeCountsPerm = TRUE, fileStoreCountsPerm = c("countsPerm1", 
#>     "countsPerm2"))
#> 
#> ______________________________
#> Global network properties
#> `````````````````````````
#> Whole network:
#>                          group '1' group '2' abs.diff.  p-value
#> Number of components         1.000     2.000     1.000 0.811881
#> Clustering coefficient       0.534     0.448     0.086 0.435644
#> Modularity                   0.168     0.155     0.012 0.881188
#> Positive edge percentage    32.099    39.683     7.584 0.108911
#> Edge density                 0.293     0.249     0.044 0.524752
#> Natural connectivity         0.070     0.068     0.002 0.891089
#> Vertex connectivity          1.000     1.000     0.000 1.000000
#> Edge connectivity            1.000     1.000     0.000 1.000000
#> Average dissimilarity*       0.920     0.929     0.009 0.643564
#> Average path length**        1.496     1.558     0.062 0.712871
#> -----
#> p-values: one-tailed test with null hypothesis diff=0
#>  *: Dissimilarity = 1 - edge weight
#> **: Path length = Units with average dissimilarity
#> 
#> ______________________________
#> Jaccard index (similarity betw. sets of most central nodes)
#> ```````````````````````````````````````````````````````````
#>                     Jacc  P(<=Jacc)  P(>=Jacc)
#> degree             0.167   0.351166   0.912209
#> betweenness centr. 0.333   0.650307   0.622822
#> closeness centr.   0.333   0.650307   0.622822
#> eigenvec. centr.   0.333   0.650307   0.622822
#> hub taxa           0.000   0.197531   1.000000
#> -----
#> Jaccard index in [0,1] (1 indicates perfect agreement)
#> 
#> ______________________________
#> Adjusted Rand index (similarity betw. clusterings)
#> ``````````````````````````````````````````````````
#>         wholeNet   LCC
#> ARI        0.054 0.054
#> p-value    0.412 0.399
#> -----
#> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings
#> ARI=0: expected for two random clusterings
#> p-value: permutation test (n=1000) with null hypothesis ARI=0
#> 
#> ______________________________
#> Graphlet Correlation Distance
#> `````````````````````````````
#>         wholeNet     LCC
#> GCD     1.203000 0.95400
#> p-value 0.762376 0.90099
#> -----
#> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs)
#> p-value: permutation test with null hypothesis GCD=0
#> 
#> ______________________________
#> Centrality measures
#> - In decreasing order
#> - Computed for the whole network
#> ````````````````````````````````````
#> Degree (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.522     0.261     0.261    0.984441
#> 469709     0.391     0.174     0.217    0.984441
#> 303304     0.261     0.043     0.217    0.984441
#> 184983     0.174     0.348     0.174    0.984441
#> 10116      0.478     0.304     0.174    0.984441
#> 512309     0.565     0.391     0.174    0.984441
#> 278234     0.174     0.043     0.130    0.984441
#> 361496     0.130     0.000     0.130    0.984441
#> 71543      0.522     0.391     0.130    0.984441
#> 188236     0.565     0.435     0.130    0.984441
#> 
#> Betweenness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 184983     0.000     0.147     0.147    0.861386
#> 322235     0.087     0.195     0.108    0.891089
#> 190597     0.099     0.000     0.099    0.861386
#> 188236     0.225     0.143     0.082    0.891089
#> 71543      0.123     0.043     0.079    0.891089
#> 512309     0.083     0.139     0.056    1.000000
#> 326792     0.000     0.043     0.043    0.861386
#> 73352      0.055     0.095     0.040    0.891089
#> 248140     0.000     0.026     0.026    0.861386
#> 278234     0.020     0.000     0.020    0.861386
#> 
#> Closeness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 361496     0.643     0.000     0.643    0.344046
#> 303304     0.790     0.510     0.280    0.344046
#> 158660     1.011     0.812     0.200    0.796029
#> 248140     0.478     0.675     0.197    0.796029
#> 469709     0.931     0.772     0.159    0.796029
#> 278234     0.678     0.539     0.139    0.796029
#> 184983     0.775     0.909     0.135    0.796029
#> 512309     1.045     0.912     0.133    0.796029
#> 181016     0.544     0.665     0.121    0.815517
#> 10116      0.966     0.850     0.115    0.796029
#> 
#> Eigenvector centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.971     0.314     0.657    0.511232
#> 184983     0.319     0.774     0.455    0.511232
#> 322235     0.857     0.403     0.454    0.511232
#> 469709     0.695     0.309     0.386    0.526269
#> 303304     0.397     0.037     0.360    0.315761
#> 90487      0.483     0.200     0.283    0.631522
#> 307981     0.682     0.965     0.283    0.631522
#> 364563     0.716     0.990     0.274    0.511232
#> 326792     0.707     0.954     0.246    0.511232
#> 512309     1.000     0.828     0.172    0.721740
#> 
#> _________________________________________________________
#> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1

summary(amgut_comp3)
#> 
#> Comparison of Network Properties
#> ----------------------------------
#> CALL: 
#> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, adjust = "BH", 
#>     seed = 123456, fileLoadCountsPerm = c("countsPerm1", "countsPerm2"))
#> 
#> ______________________________
#> Global network properties
#> `````````````````````````
#> Whole network:
#>                          group '1' group '2' abs.diff.  p-value
#> Number of components         1.000     2.000     1.000 0.811881
#> Clustering coefficient       0.534     0.448     0.086 0.435644
#> Modularity                   0.168     0.155     0.012 0.881188
#> Positive edge percentage    32.099    39.683     7.584 0.108911
#> Edge density                 0.293     0.249     0.044 0.524752
#> Natural connectivity         0.070     0.068     0.002 0.891089
#> Vertex connectivity          1.000     1.000     0.000 1.000000
#> Edge connectivity            1.000     1.000     0.000 1.000000
#> Average dissimilarity*       0.920     0.929     0.009 0.643564
#> Average path length**        1.496     1.558     0.062 0.712871
#> -----
#> p-values: one-tailed test with null hypothesis diff=0
#>  *: Dissimilarity = 1 - edge weight
#> **: Path length = Units with average dissimilarity
#> 
#> ______________________________
#> Jaccard index (similarity betw. sets of most central nodes)
#> ```````````````````````````````````````````````````````````
#>                     Jacc  P(<=Jacc)  P(>=Jacc)
#> degree             0.167   0.351166   0.912209
#> betweenness centr. 0.333   0.650307   0.622822
#> closeness centr.   0.333   0.650307   0.622822
#> eigenvec. centr.   0.333   0.650307   0.622822
#> hub taxa           0.000   0.197531   1.000000
#> -----
#> Jaccard index in [0,1] (1 indicates perfect agreement)
#> 
#> ______________________________
#> Adjusted Rand index (similarity betw. clusterings)
#> ``````````````````````````````````````````````````
#>         wholeNet   LCC
#> ARI        0.054 0.054
#> p-value    0.412 0.399
#> -----
#> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings
#> ARI=0: expected for two random clusterings
#> p-value: permutation test (n=1000) with null hypothesis ARI=0
#> 
#> ______________________________
#> Graphlet Correlation Distance
#> `````````````````````````````
#>         wholeNet     LCC
#> GCD     1.203000 0.95400
#> p-value 0.762376 0.90099
#> -----
#> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs)
#> p-value: permutation test with null hypothesis GCD=0
#> 
#> ______________________________
#> Centrality measures
#> - In decreasing order
#> - Computed for the whole network
#> ````````````````````````````````````
#> Degree (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.522     0.261     0.261    0.984441
#> 469709     0.391     0.174     0.217    0.984441
#> 303304     0.261     0.043     0.217    0.984441
#> 184983     0.174     0.348     0.174    0.984441
#> 10116      0.478     0.304     0.174    0.984441
#> 512309     0.565     0.391     0.174    0.984441
#> 278234     0.174     0.043     0.130    0.984441
#> 361496     0.130     0.000     0.130    0.984441
#> 71543      0.522     0.391     0.130    0.984441
#> 188236     0.565     0.435     0.130    0.984441
#> 
#> Betweenness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 184983     0.000     0.147     0.147    0.861386
#> 322235     0.087     0.195     0.108    0.891089
#> 190597     0.099     0.000     0.099    0.861386
#> 188236     0.225     0.143     0.082    0.891089
#> 71543      0.123     0.043     0.079    0.891089
#> 512309     0.083     0.139     0.056    1.000000
#> 326792     0.000     0.043     0.043    0.861386
#> 73352      0.055     0.095     0.040    0.891089
#> 248140     0.000     0.026     0.026    0.861386
#> 278234     0.020     0.000     0.020    0.861386
#> 
#> Closeness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 361496     0.643     0.000     0.643    0.356436
#> 303304     0.790     0.510     0.280    0.356436
#> 158660     1.011     0.812     0.200    0.824694
#> 248140     0.478     0.675     0.197    0.824694
#> 469709     0.931     0.772     0.159    0.824694
#> 278234     0.678     0.539     0.139    0.824694
#> 184983     0.775     0.909     0.135    0.824694
#> 512309     1.045     0.912     0.133    0.824694
#> 181016     0.544     0.665     0.121    0.844884
#> 10116      0.966     0.850     0.115    0.824694
#> 
#> Eigenvector centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.971     0.314     0.657    0.577086
#> 184983     0.319     0.774     0.455    0.577086
#> 322235     0.857     0.403     0.454    0.577086
#> 469709     0.695     0.309     0.386    0.594059
#> 303304     0.397     0.037     0.360    0.356436
#> 90487      0.483     0.200     0.283    0.712871
#> 307981     0.682     0.965     0.283    0.712871
#> 364563     0.716     0.990     0.274    0.577086
#> 326792     0.707     0.954     0.246    0.577086
#> 512309     1.000     0.828     0.172    0.814710
#> 
#> _________________________________________________________
#> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1

summary(amgut_comp4)
#> 
#> Comparison of Network Properties
#> ----------------------------------
#> CALL: 
#> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, adjust = "BH", 
#>     seed = 123456, fileLoadAssoPerm = "assoPerm")
#> 
#> ______________________________
#> Global network properties
#> `````````````````````````
#> Whole network:
#>                          group '1' group '2' abs.diff.  p-value
#> Number of components         1.000     2.000     1.000 0.811881
#> Clustering coefficient       0.534     0.448     0.086 0.435644
#> Modularity                   0.168     0.155     0.012 0.881188
#> Positive edge percentage    32.099    39.683     7.584 0.108911
#> Edge density                 0.293     0.249     0.044 0.524752
#> Natural connectivity         0.070     0.068     0.002 0.891089
#> Vertex connectivity          1.000     1.000     0.000 1.000000
#> Edge connectivity            1.000     1.000     0.000 1.000000
#> Average dissimilarity*       0.920     0.929     0.009 0.643564
#> Average path length**        1.496     1.558     0.062 0.712871
#> -----
#> p-values: one-tailed test with null hypothesis diff=0
#>  *: Dissimilarity = 1 - edge weight
#> **: Path length = Units with average dissimilarity
#> 
#> ______________________________
#> Jaccard index (similarity betw. sets of most central nodes)
#> ```````````````````````````````````````````````````````````
#>                     Jacc  P(<=Jacc)  P(>=Jacc)
#> degree             0.167   0.351166   0.912209
#> betweenness centr. 0.333   0.650307   0.622822
#> closeness centr.   0.333   0.650307   0.622822
#> eigenvec. centr.   0.333   0.650307   0.622822
#> hub taxa           0.000   0.197531   1.000000
#> -----
#> Jaccard index in [0,1] (1 indicates perfect agreement)
#> 
#> ______________________________
#> Adjusted Rand index (similarity betw. clusterings)
#> ``````````````````````````````````````````````````
#>         wholeNet   LCC
#> ARI        0.054 0.054
#> p-value    0.412 0.399
#> -----
#> ARI in [-1,1] with ARI=1: perfect agreement betw. clusterings
#> ARI=0: expected for two random clusterings
#> p-value: permutation test (n=1000) with null hypothesis ARI=0
#> 
#> ______________________________
#> Graphlet Correlation Distance
#> `````````````````````````````
#>         wholeNet     LCC
#> GCD     1.203000 0.95400
#> p-value 0.762376 0.90099
#> -----
#> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs)
#> p-value: permutation test with null hypothesis GCD=0
#> 
#> ______________________________
#> Centrality measures
#> - In decreasing order
#> - Computed for the whole network
#> ````````````````````````````````````
#> Degree (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.522     0.261     0.261    0.984441
#> 469709     0.391     0.174     0.217    0.984441
#> 303304     0.261     0.043     0.217    0.984441
#> 184983     0.174     0.348     0.174    0.984441
#> 10116      0.478     0.304     0.174    0.984441
#> 512309     0.565     0.391     0.174    0.984441
#> 278234     0.174     0.043     0.130    0.984441
#> 361496     0.130     0.000     0.130    0.984441
#> 71543      0.522     0.391     0.130    0.984441
#> 188236     0.565     0.435     0.130    0.984441
#> 
#> Betweenness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 184983     0.000     0.147     0.147    0.861386
#> 322235     0.087     0.195     0.108    0.891089
#> 190597     0.099     0.000     0.099    0.861386
#> 188236     0.225     0.143     0.082    0.891089
#> 71543      0.123     0.043     0.079    0.891089
#> 512309     0.083     0.139     0.056    1.000000
#> 326792     0.000     0.043     0.043    0.861386
#> 73352      0.055     0.095     0.040    0.891089
#> 248140     0.000     0.026     0.026    0.861386
#> 278234     0.020     0.000     0.020    0.861386
#> 
#> Closeness centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 361496     0.643     0.000     0.643    0.356436
#> 303304     0.790     0.510     0.280    0.356436
#> 158660     1.011     0.812     0.200    0.824694
#> 248140     0.478     0.675     0.197    0.824694
#> 469709     0.931     0.772     0.159    0.824694
#> 278234     0.678     0.539     0.139    0.824694
#> 184983     0.775     0.909     0.135    0.824694
#> 512309     1.045     0.912     0.133    0.824694
#> 181016     0.544     0.665     0.121    0.844884
#> 10116      0.966     0.850     0.115    0.824694
#> 
#> Eigenvector centrality (normalized):
#>        group '1' group '2' abs.diff. adj.p-value
#> 158660     0.971     0.314     0.657    0.577086
#> 184983     0.319     0.774     0.455    0.577086
#> 322235     0.857     0.403     0.454    0.577086
#> 469709     0.695     0.309     0.386    0.594059
#> 303304     0.397     0.037     0.360    0.356436
#> 90487      0.483     0.200     0.283    0.712871
#> 307981     0.682     0.965     0.283    0.712871
#> 364563     0.716     0.990     0.274    0.577086
#> 326792     0.707     0.954     0.246    0.577086
#> 512309     1.000     0.828     0.172    0.814710
#> 
#> _________________________________________________________
#> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1

#--------------------------
# Use 'createAssoPerm' to create "permuted" count and association matrices
createAssoPerm(amgut_props, nPerm = 100, 
               computeAsso = TRUE,
               fileStoreAssoPerm = "assoPerm",
               storeCountsPerm = TRUE, 
               fileStoreCountsPerm = c("countsPerm1", "countsPerm2"),
               append = FALSE, seed = 123456)
#> Create matrix with permuted group labels ...
#> Done.
#> Files 'assoPerm.bmat and assoPerm.desc.txt created. 
#> Files 'countsPerm1.bmat, countsPerm1.desc.txt, countsPerm2.bmat, and countsPerm2.desc.txt created. 
#> Compute permutation associations ...
#> |======================================================================| 100%
#> Done.

amgut_comp5 <- netCompare(amgut_props, 
                          permTest = TRUE, 
                          nPerm = 100L, 
                          fileLoadAssoPerm = "assoPerm")
#> Checking input arguments ...
#> Done.
#> Calculate network properties ...
#> Done.
#> Execute permutation tests ...
#> |======================================================================| 100%
#> Done.
#> Calculating p-values ...
#> Done.
#> Adjust for multiple testing using 'adaptBH' ...
#> Done.

all.equal(amgut_comp3$properties, amgut_comp5$properties)
#> [1] TRUE

summary(amgut_comp5)
#> 
#> Comparison of Network Properties
#> ----------------------------------
#> CALL: 
#> netCompare(x = amgut_props, permTest = TRUE, nPerm = 100, 
#>     fileLoadAssoPerm = "assoPerm")
#> 
#> ______________________________
#> Global network properties
#> `````````````````````````
#> Whole network:
#>                          group '1' group '2' abs.diff.  p-value
#> Number of components         1.000     2.000     1.000 0.811881
#> Clustering coefficient       0.534     0.448     0.086 0.435644
#> Modularity                   0.168     0.155     0.012 0.881188
#> Positive edge percentage    32.099    39.683     7.584 0.108911
#> Edge density                 0.293     0.249     0.044 0.524752
#> Natural connectivity         0.070     0.068     0.002 0.891089
#> Vertex connectivity          1.000     1.000     0.000 1.000000
#> Edge connectivity            1.000     1.000     0.000 1.000000
#> Average dissimilarity*       0.920     0.929     0.009 0.643564
#> Average path length**        1.496     1.558     0.062 0.712871
#> -----
#> p-values: one-tailed test with null hypothesis diff=0
#>  *: Dissimilarity = 1 - edge weight
#> **: Path length = Units with average dissimilarity
#> 
#> ______________________________
#> Jaccard index (similarity betw. sets of most central nodes)
#> ```````````````````````````````````````````````````````````
#>                     Jacc  P(<=Jacc)  P(>=Jacc)
#> degree             0.167   0.351166   0.912209
#> betweenness centr. 0.333   0.650307   0.622822
#> closeness centr.   0.333   0.650307   0.622822
#> eigenvec. centr.   0.333   0.650307   0.622822
#> hub taxa           0.000   0.197531   1.000000
#> -----
#> Jaccard index in [0,1] (1 indicates perfect agreement)
#> 
#> ______________________________
#> Adjusted Rand index (similarity betw. clusterings)
#> ``````````````````````````````````````````````````
#>         wholeNet   LCC
#> ARI        0.054 0.054
#> p-value    0.405 0.398
#> -----
#> ARI in [-1,1] with ARI=1: perfect agreement betw. 
clusterings #> ARI=0: expected for two random clusterings #> p-value: permutation test (n=1000) with null hypothesis ARI=0 #> #> ______________________________ #> Graphlet Correlation Distance #> ````````````````````````````` #> wholeNet LCC #> GCD 1.203000 0.95400 #> p-value 0.762376 0.90099 #> ----- #> GCD >= 0 (GCD=0 indicates perfect agreement between GCMs) #> p-value: permutation test with null hypothesis GCD=0 #> #> ______________________________ #> Centrality measures #> - In decreasing order #> - Computed for the whole network #> ```````````````````````````````````` #> Degree (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.522 0.261 0.261 0.984441 #> 469709 0.391 0.174 0.217 0.984441 #> 303304 0.261 0.043 0.217 0.984441 #> 184983 0.174 0.348 0.174 0.984441 #> 10116 0.478 0.304 0.174 0.984441 #> 512309 0.565 0.391 0.174 0.984441 #> 278234 0.174 0.043 0.130 0.984441 #> 361496 0.130 0.000 0.130 0.984441 #> 71543 0.522 0.391 0.130 0.984441 #> 188236 0.565 0.435 0.130 0.984441 #> #> Betweenness centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 184983 0.000 0.147 0.147 0.861386 #> 322235 0.087 0.195 0.108 0.891089 #> 190597 0.099 0.000 0.099 0.861386 #> 188236 0.225 0.143 0.082 0.891089 #> 71543 0.123 0.043 0.079 0.891089 #> 512309 0.083 0.139 0.056 1.000000 #> 326792 0.000 0.043 0.043 0.861386 #> 73352 0.055 0.095 0.040 0.891089 #> 248140 0.000 0.026 0.026 0.861386 #> 278234 0.020 0.000 0.020 0.861386 #> #> Closeness centrality (normalized): #> group '1' group '2' abs.diff. 
adj.p-value #> 361496 0.643 0.000 0.643 0.344046 #> 303304 0.790 0.510 0.280 0.344046 #> 158660 1.011 0.812 0.200 0.796029 #> 248140 0.478 0.675 0.197 0.796029 #> 469709 0.931 0.772 0.159 0.796029 #> 278234 0.678 0.539 0.139 0.796029 #> 184983 0.775 0.909 0.135 0.796029 #> 512309 1.045 0.912 0.133 0.796029 #> 181016 0.544 0.665 0.121 0.815517 #> 10116 0.966 0.850 0.115 0.796029 #> #> Eigenvector centrality (normalized): #> group '1' group '2' abs.diff. adj.p-value #> 158660 0.971 0.314 0.657 0.511232 #> 184983 0.319 0.774 0.455 0.511232 #> 322235 0.857 0.403 0.454 0.511232 #> 469709 0.695 0.309 0.386 0.526269 #> 303304 0.397 0.037 0.360 0.315761 #> 90487 0.483 0.200 0.283 0.631522 #> 307981 0.682 0.965 0.283 0.631522 #> 364563 0.716 0.990 0.274 0.511232 #> 326792 0.707 0.954 0.246 0.511232 #> 512309 1.000 0.828 0.172 0.721740 #> #> _________________________________________________________ #> Significance codes: ***: 0.001, **: 0.01, *: 0.05, .: 0.1 # }"},{"path":"https://netcomi.de/reference/netConstruct.html","id":null,"dir":"Reference","previous_headings":"","what":"Constructing Networks for Microbiome Data — netConstruct","title":"Constructing Networks for Microbiome Data — netConstruct","text":"Constructing microbial association networks dissimilarity based networks (nodes subjects) compositional count data.","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Constructing Networks for Microbiome Data — netConstruct","text":"","code":"netConstruct(data, data2 = NULL, dataType = \"counts\", group = NULL, matchDesign = NULL, taxRank = NULL, # Association/dissimilarity measure: measure = \"spieceasi\", measurePar = NULL, # Preprocessing: jointPrepro = NULL, filtTax = \"none\", filtTaxPar = NULL, filtSamp = \"none\", filtSampPar = NULL, zeroMethod = \"none\", zeroPar = NULL, normMethod = \"none\", normPar = NULL, # Sparsification: sparsMethod = \"t-test\", thresh = 
0.3, alpha = 0.05, adjust = \"adaptBH\", trueNullMethod = \"convest\", lfdrThresh = 0.2, nboot = 1000L, assoBoot = NULL, cores = 1L, logFile = \"log.txt\", softThreshType = \"signed\", softThreshPower = NULL, softThreshCut = 0.8, kNeighbor = 3L, knnMutual = FALSE, # Transformation: dissFunc = \"signed\", dissFuncPar = NULL, simFunc = NULL, simFuncPar = NULL, scaleDiss = TRUE, weighted = TRUE, # Further arguments: sampleSize = NULL, verbose = 2, seed = NULL )"},{"path":"https://netcomi.de/reference/netConstruct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Constructing Networks for Microbiome Data — netConstruct","text":"data numeric matrix. Can count matrix (rows samples, columns OTUs/taxa), phyloseq object, association/dissimilarity matrix (dataType must set). second count matrix/phyloseq object second association/dissimilarity matrix. data2 optional numeric matrix used constructing second network (belonging group 2). Can either second count matrix/phyloseq object second association/dissimilarity matrix. dataType character indicating data type. Defaults \"counts\", means data (data2) count matrix object class phyloseq. options \"correlation\", \"partialCorr\" (partial correlation), \"condDependence\" (conditional dependence), \"proportionality\" \"dissimilarity\". group optional binary vector used splitting data two groups. group NULL (default) data2 set, single network constructed. See 'Details.' matchDesign Numeric vector two elements specifying optional matched-group (.e. matched-pair) design, used permutation tests netCompare diffnet. c(1,1) corresponds matched-pair design. 1:2 matching, instance, defined c(1,2), means first sample group 1 matched first two samples group 2 . appropriate order samples must ensured. NULL, group memberships shuffled randomly group sizes identical original data set ensured. taxRank character indicating taxonomic rank network constructed. used data (data 2) phyloseq object. 
given rank must match one column names taxonomic table (@tax_table slot phyloseq object). Taxa names chosen taxonomic rank must unique (consider using function renameTaxa make unique). phyloseq object given taxRank = NULL, row names OTU table used node labels. measure character specifying method used either computing associations taxa dissimilarities subjects. Ignored data count matrix (dataType set \"counts\"). Available measures : \"pearson\", \"spearman\", \"bicor\", \"sparcc\", \"cclasso\", \"ccrepe\", \"spieceasi\" (default), \"spring\", \"gcoda\" \"propr\" association measures, \"euclidean\", \"bray\", \"kld\", \"jeffrey\", \"jsd\", \"ckld\", \"aitchison\" dissimilarity measures. Parameters set via measurePar. measurePar list parameters passed function computing associations/dissimilarities. See 'Details' respective functions. SpiecEasi SPRING association measure, additional list element \"symBetaMode\" accepted define \"mode\" argument symBeta. jointPrepro logical indicating whether data preprocessing (filtering, zero treatment, normalization) done combined data sets, data set separately. Ignored single network constructed. Defaults TRUE group given, FALSE data2 given. Joint preprocessing possible dissimilarity networks. filtTax character indicating taxa shall filtered. Possible options : \"none\" Default. taxa kept. \"totalReads\" Keep taxa total number reads least x. \"relFreq\" Keep taxa whose number reads least x% total number reads. \"numbSamp\" Keep taxa observed least x samples. \"highestVar\" Keep x taxa highest variance. \"highestFreq\" Keep x taxa highest frequency. Except \"highestVar\" \"highestFreq\", different filter methods can combined. values x set via filtTaxPar. filtTaxPar list parameters filter methods given filtTax. Possible list entries : \"totalReads\" (int), \"relFreq\" (value [0,1]), \"numbSamp\" (int), \"highestVar\" (int), \"highestFreq\" (int). filtSamp character indicating samples shall filtered. 
Possible options : \"none\" Default. samples kept. \"totalReads\" Keep samples total number reads least x. \"numbTaxa\" Keep samples least x taxa observed. \"highestFreq\" Keep x samples highest frequency. Except \"highestFreq\", different filter methods can combined. values x set via filtSampPar. filtSampPar list parameters filter methods given filtSamp. Possible list entries : \"totalReads\" (int), \"numbTaxa\" (int), \"highestFreq\" (int). zeroMethod character indicating method used zero replacement. Possible values : \"none\" (default), \"pseudo\", \"pseudoZO\", \"multRepl\", \"alrEM\", \"bayesMult\". See 'Details'. corresponding parameters set via zeroPar. zeroMethod ignored approach calculating associations/dissimilarity includes zero handling. Defaults \"multRepl\" \"pseudo\" (depending expected input normalization function measure) zero replacement required. zeroPar list parameters passed function zero replacement (zeroMethod). See help page respective function details. zeroMethod \"pseudo\" \"pseudoZO\", pseudo count can specified via zeroPar = list(pseudocount = x) (x numeric). normMethod character indicating normalization method (make counts different samples comparable). Possible options : \"none\" (default), \"TSS\" (\"fractions\"), \"CSS\", \"COM\", \"rarefy\", \"VST\", \"clr\", \"mclr\". See 'Details'. corresponding parameters set via normPar. normPar list parameters passed function normalization (defined normMethod). sparsMethod character indicating method used sparsification (selected edges connected network). Available methods : \"none\" Leads fully connected network \"t-test\" Default. Associations significantly different zero selected using Student's t-test. Significance level multiple testing adjustment specified via alpha adjust. sampleSize must set dataType \"counts\". \"bootstrap\" Bootstrap procedure described Friedman Alm (2012). Corresponding arguments nboot, cores, logFile. Data type must \"counts\". 
\"threshold\" Selected taxa pairs absolute association/dissimilarity greater equal threshold defined via thresh. \"softThreshold\" Soft thresholding method according Zhang Horvath (2005) available WGCNA package. Corresponding arguments softThreshType, softThreshPower, softThreshCut. \"knn\" Construct k-nearest neighbor mutual k-nearest neighbor graph using nng. Corresponding arguments kNeighbor, knnMutual. Available dissimilarity networks . thresh numeric vector one two elements defining threshold used sparsification sparsMethod set \"threshold\". two networks constructed one value given, used groups. Defaults 0.3. alpha numeric vector one two elements indicating significance level. used Student's t-test bootstrap procedure used sparsification method. two networks constructed one value given, used groups. Defaults 0.05. adjust character indicating method used multiple testing adjustment (Student's t-test bootstrap procedure used edge selection). Possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool), \"adaptBH\" adaptive Benjamini-Hochberg method (Benjamini Hochberg, 2000), one methods provided p.adjust (see p.adjust.methods(). trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\"(default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). lfdrThresh numeric vector one two elements defining threshold(s) local FDR correction (adjust = \"locfdr\"). Defaults 0.2 meaning associations corresponding local FDR less equal 0.2 identified significant. two networks constructed one value given, used groups. nboot integer indicating number bootstrap samples, bootstrapping used sparsification method. assoBoot logical list. relevant bootstrapping. 
Set TRUE list (assoBoot) bootstrap association matrices returned. Can also list bootstrap association matrices, used sparsification. See example. cores integer indicating number CPU cores used bootstrapping. cores > 1, bootstrapping performed parallel. cores limited number available CPU cores determined detectCores. , core arguments function used association estimation (provided) set 1. logFile character defining log file iteration numbers stored bootstrapping used sparsification. file written current working directory. Defaults \"log.txt\". NULL, log file created. softThreshType character indicating method used transforming correlations similarities soft thresholding used sparsification method (sparsMethod = \"softThreshold\"). Possible values \"signed\", \"unsigned\", \"signed hybrid\" (according available options argument type adjacency WGCNA package). softThreshPower numeric vector one two elements defining power soft thresholding. used edgeSelect = \"softThreshold\". two networks constructed one value given, used groups. power set, computed using pickSoftThreshold, argument softThreshCut needed addition. softThreshCut numeric vector one two elements (0 1) indicating desired minimum scale free topology fitting index (corresponds argument \"RsquaredCut\" pickSoftThreshold). Defaults 0.8. two networks constructed one value given, used groups. kNeighbor integer specifying number neighbors k-nearest neighbor method used sparsification. Defaults 3L. knnMutual logical used k-nearest neighbor sparsification. TRUE, neighbors must mutual. Defaults FALSE. dissFunc method used transforming associations dissimilarities. Can character one following values: \"signed\"(default), \"unsigned\", \"signedPos\", \"TOMdiss\". Alternatively, function accepted association matrix first argument optional arguments, can set via dissFuncPar. Ignored dissimilarity measures. See 'Details.' dissFuncPar optional list parameters function passed dissFunc. 
simFunc function transforming dissimilarities similarities. Defaults f(x)=1-x dissimilarities [0,1], f(x)=1/(1 + x) otherwise. simFuncPar optional list parameters function passed simFunc. scaleDiss logical. Indicates whether dissimilarity values scaled [0,1] (x - min(dissEst)) / (max(dissEst) - min(dissEst)), dissEst matrix estimated dissimilarities. Defaults TRUE. weighted logical. TRUE, similarity values used adjacencies. FALSE leads binary adjacency matrix whose entries equal 1 (sparsified) similarity values > 0, 0 otherwise. sampleSize numeric vector one two elements giving number samples used computing association matrix. needed association matrix given instead count matrix , addition, Student's t-test used edge selection. two networks constructed one value given, used groups. verbose integer indicating level verbosity. Possible values: \"0\": messages, \"1\": important messages, \"2\"(default): progress messages, \"3\" messages returned external functions shown addition. Can also logical. seed integer giving seed reproducibility results.","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Constructing Networks for Microbiome Data — netConstruct","text":"object class microNet containing following elements: v1, v2: names adjacent nodes/vertices asso: estimated association (association networks) diss: dissimilarity sim: similarity (unweighted networks) adja: adjacency (equals similarity weighted networks)","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Constructing Networks for Microbiome Data — netConstruct","text":"object returned netConstruct can either passed netAnalyze network analysis, diffnet construct differential network estimated associations. function enables construction either single network two networks. latter can compared using function netCompare. 
network(s) can either based associations (correlation, partial correlation / conditional dependence, proportionality) dissimilarities. Several measures available, respectively, estimate associations dissimilarities using netConstruct. Alternatively, pre-generated association dissimilarity matrix accepted input start workflow (argument dataType must set appropriately). Depending measure, network nodes either taxa subjects: association-based networks nodes taxa, whereas dissimilarity-based networks nodes subjects. order perform network comparison, following options constructing two networks available: Passing combined count matrix data group vector group (length nrow(data) association networks length ncol(data) dissimilarity-based networks). Passing count data group 1 data (matrix phyloseq object) count data group 2 data2 (matrix phyloseq object). association networks, column names must match, dissimilarity networks row names. Passing association/dissimilarity matrix group 1 data association/dissimilarity matrix group 2 data2. Group labeling: two networks generated, network belonging data always denoted \"group 1\" network belonging data2 \"group 2\". group vector used splitting data two groups, group names assigned according order group levels. group contains levels 0 1, instance, \"group 1\" assigned level 0 \"group 2\" assigned level 1. network plot, group 1 shown left group 2 right defined otherwise (see plot.microNetProps). Association measures Dissimilarity measures Definitions: Kullback-Leibler divergence: Since KLD is not symmetric, 0.5 * (KLD(p(x)||p(y)) + KLD(p(y)||p(x))) is returned. Jeffrey divergence: Jeff = KLD(p(x)||p(y)) + KLD(p(y)||p(x)) Jensen-Shannon divergence: JSD = 0.5 KLD(P||M) + 0.5 KLD(Q||M), P=p(x), Q=p(y), M=0.5(P+Q). Compositional Kullback-Leibler divergence: cKLD(x,y) = p/2 * log(A(x/y) * A(y/x)), where A(x/y) is the arithmetic mean of the vector of ratios x/y. Aitchison distance: Euclidean distance clr-transformed data. 
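The dissimilarity definitions listed above can be sketched in a few lines of base R. This is a hedged illustration under the stated formulas, not NetCoMi's internal implementation; the helper names (`kld`, `kld_sym`, `jeffrey`, `jsd`, `clr`, `aitchison`) are ad hoc.

```r
# Illustrative sketch of the dissimilarity definitions (not NetCoMi internals).
# p and q are strictly positive probability vectors summing to 1.
kld <- function(p, q) sum(p * log(p / q))                # Kullback-Leibler divergence
kld_sym <- function(p, q) 0.5 * (kld(p, q) + kld(q, p))  # symmetrized KLD
jeffrey <- function(p, q) kld(p, q) + kld(q, p)          # Jeffrey divergence
jsd <- function(p, q) {                                  # Jensen-Shannon divergence
  m <- 0.5 * (p + q)
  0.5 * kld(p, m) + 0.5 * kld(q, m)
}
clr <- function(x) log(x) - mean(log(x))                 # centered log-ratio transform
aitchison <- function(x, y) sqrt(sum((clr(x) - clr(y))^2))  # Euclidean distance of clr data

p <- c(0.2, 0.3, 0.5)
q <- c(0.1, 0.4, 0.5)
```

Note that the Jeffrey divergence is exactly twice the symmetrized KLD, which the two definitions make explicit.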
Methods zero replacement Normalization methods methods (except rarefying) described Badri et al.(2020). Transformation methods Functions used transforming associations dissimilarities:","code":""},{"path":"https://netcomi.de/reference/netConstruct.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Constructing Networks for Microbiome Data — netConstruct","text":"badri2020normalizationNetCoMi benjamini2000adaptiveNetCoMi farcomeni2007someNetCoMi friedman2012inferringNetCoMi WGCNApackageNetCoMi zhang2005generalNetCoMi","code":""},{"path":[]},{"path":"https://netcomi.de/reference/netConstruct.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Constructing Networks for Microbiome Data — netConstruct","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") data(\"amgut2.filt.phy\") # Single network with the following specifications: # - Association measure: SpiecEasi # - SpiecEasi parameters are defined via 'measurePar' # (check ?SpiecEasi::spiec.easi for available options) # - Note: 'rep.num' should be higher for real data sets # - Taxa filtering: Keep the 50 taxa with highest variance # - Sample filtering: Keep samples with a total number of reads # of at least 1000 net1 <- netConstruct(amgut2.filt.phy, measure = \"spieceasi\", measurePar = list(method = \"mb\", pulsar.params = list(rep.num = 10), symBetaMode = \"ave\"), filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), sparsMethod = \"none\", normMethod = \"none\", verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 88 taxa removed. #> 50 taxa and 261 samples remaining. #> #> Calculate 'spieceasi' associations ... #> #> Applying data transformations... #> Selecting model with pulsar using stars... #> Fitting final estimate with mb... #> done #> Done. 
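The filtering rules used in the example above (keep samples with at least 1000 total reads, then the 50 taxa with highest variance) can be mimicked on a toy count matrix. This is a hedged base-R sketch of the selection logic only, not the package's filtering code; the toy data are simulated.

```r
# Hedged sketch of the example's filtering rules (not NetCoMi internals).
set.seed(42)
counts <- matrix(rnbinom(100 * 60, size = 1, mu = 20), nrow = 100)  # samples x taxa
counts <- counts[rowSums(counts) >= 1000, , drop = FALSE]  # filtSamp = "totalReads"
vars <- apply(counts, 2, var)
keep <- order(vars, decreasing = TRUE)[seq_len(min(50, ncol(counts)))]
counts <- counts[, keep, drop = FALSE]                     # filtTax = "highestVar"
```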
# Network analysis (see ?netAnalyze for details) props1 <- netAnalyze(net1, clustMethod = \"cluster_fast_greedy\") # Network plot (see ?plot.microNetProps for details) plot(props1) #---------------------------------------------------------------------------- # Same network as before but on genus level and without taxa filtering amgut.genus.phy <- phyloseq::tax_glom(amgut2.filt.phy, taxrank = \"Rank6\") dim(phyloseq::otu_table(amgut.genus.phy)) #> [1] 43 296 # Rename taxonomic table and make Rank6 (genus) unique amgut.genus.renamed <- renameTaxa(amgut.genus.phy, pat = \"\", substPat = \"_()\", numDupli = \"Rank6\") #> Column 7 contains NAs only and is ignored. net_genus <- netConstruct(amgut.genus.renamed, taxRank = \"Rank6\", measure = \"spieceasi\", measurePar = list(method = \"mb\", pulsar.params = list(rep.num = 10), symBetaMode = \"ave\"), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), sparsMethod = \"none\", normMethod = \"none\", verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 43 taxa and 261 samples remaining. #> #> Calculate 'spieceasi' associations ... #> #> Applying data transformations... #> Selecting model with pulsar using stars... #> Fitting final estimate with mb... #> done #> Done. 
# Network analysis props_genus <- netAnalyze(net_genus, clustMethod = \"cluster_fast_greedy\") # Network plot (with some modifications) plot(props_genus, shortenLabels = \"none\", labelScale = FALSE, cexLabels = 0.8) #---------------------------------------------------------------------------- # Single network with the following specifications: # - Association measure: Pearson correlation # - Taxa filtering: Keep the 50 taxa with highest frequency # - Sample filtering: Keep samples with a total number of reads of at least # 1000 and with at least 10 taxa with a non-zero count # - Zero replacement: A pseudo count of 0.5 is added to all counts # - Normalization: clr transformation # - Sparsification: Threshold = 0.3 # (an edge exists between taxa with an estimated association >= 0.3) net2 <- netConstruct(amgut2.filt.phy, measure = \"pearson\", filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = c(\"numbTaxa\", \"totalReads\"), filtSampPar = list(totalReads = 1000, numbTaxa = 10), zeroMethod = \"pseudo\", zeroPar = list(pseudocount = 0.5), normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3, verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 35 samples removed. #> 88 taxa removed. #> 50 taxa and 261 samples remaining. #> #> Zero treatment: #> Pseudo count of 0.5 added. #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
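The processing chain of the next example (pseudo count of 0.5, clr normalization, Pearson correlation, hard threshold at 0.3) can be sketched directly on a toy matrix. This is a hedged illustration of the chain's logic, not the package implementation (NetCoMi delegates the clr step to SpiecEasi's `clr()`); the simulated counts are assumptions.

```r
# Hedged sketch of the zero treatment / normalization / sparsification chain.
set.seed(1)
counts <- matrix(rpois(40 * 6, lambda = 5), nrow = 40)  # samples x taxa
counts <- counts + 0.5                                  # zeroMethod = "pseudo", pseudocount 0.5
clr_mat <- t(apply(counts, 1, function(x) log(x) - mean(log(x))))  # clr per sample
assoc <- cor(clr_mat, method = "pearson")               # measure = "pearson"
adja <- ifelse(abs(assoc) >= 0.3, assoc, 0)             # sparsMethod = "threshold", thresh 0.3
diag(adja) <- 0                                         # drop self-associations
```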
# Network analysis props2 <- netAnalyze(net2, clustMethod = \"cluster_fast_greedy\") plot(props2) #---------------------------------------------------------------------------- # Constructing and analyzing two networks # - A random group variable is used for splitting the data into two groups set.seed(123456) group <- sample(1:2, nrow(amgut1.filt), replace = TRUE) # Option 1: Use the count matrix and group vector as input: net3 <- netConstruct(amgut1.filt, group = group, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"t-test\") #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Execute multRepl() ... #> Done. #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Option 2: Pass the count matrix of group 1 to 'data' # and that of group 2 to 'data2' # Note: Argument 'jointPrepro' is set to FALSE by default (the data sets # are filtered separately and the intersect of filtered taxa is kept, # which leads to less than 50 taxa in this example). amgut1 <- amgut1.filt[group == 1, ] amgut2 <- amgut1.filt[group == 2, ] net3 <- netConstruct(data = amgut1, data2 = amgut2, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 1000), zeroMethod = \"multRepl\", normMethod = \"clr\", sparsMethod = \"t-test\") #> Checking input arguments ... #> Done. #> Data filtering ... 
#> 85 taxa removed in each data set. #> 42 taxa and 138 samples remaining in group 1. #> 42 taxa and 151 samples remaining in group 2. #> #> Zero treatment in group 1: #> Execute multRepl() ... #> Done. #> #> Zero treatment in group 2: #> Execute multRepl() ... #> Done. #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 't-test' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. #> #> Sparsify associations in group 2 ... #> #> Adjust for multiple testing via 'adaptBH' ... #> Done. #> Done. # Network analysis # Note: Please zoom into the GCM plot or open a new window using: # x11(width = 10, height = 10) props3 <- netAnalyze(net3, clustMethod = \"cluster_fast_greedy\") # Network plot (same layout is used in both groups) plot(props3, sameLayout = TRUE) # The two networks can be compared with NetCoMi's function netCompare(). #---------------------------------------------------------------------------- # Example of using the argument \"assoBoot\" # This functionality is useful for splitting up a large number of bootstrap # replicates and run the bootstrapping procedure iteratively. 
niter <- 5 nboot <- 1000 # Overall number of bootstrap replicates: 5000 # Use a different seed for each iteration seeds <- sample.int(1e8, size = niter) # List where all bootstrap association matrices are stored assoList <- list() for (i in 1:niter) { # assoBoot is set to TRUE to return the bootstrap association matrices net <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 0), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"bootstrap\", cores = 1, nboot = nboot, assoBoot = TRUE, verbose = 3, seed = seeds[i]) assoList[(1:nboot) + (i - 1) * nboot] <- net$assoBoot1 } #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... 
#> #> | | | 0% #> #> Attaching package: ‘gtools’ #> The following objects are masked from ‘package:LaplacesDemon’: #> #> ddirichlet, logit, rdirichlet #> The following object is masked from ‘package:permute’: #> #> permute #> | |======================================================================| 99% | 
|======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | |======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | 
|========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ | 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | |========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | 
|============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | |================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | |======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | 
|========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | |============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | 
|===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | 
|======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | |========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ | 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | 
|========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | |============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | |================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | 
|======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | |============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 
96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... 
#> #> | | | 0% | | | 1% | |= | 1% | |= | 2% | |== | 2% | |== | 3% | |== | 4% | |=== | 4% | |=== | 5% | |==== | 5% | |==== | 6% | |===== | 6% | |===== | 7% | |===== | 8% | |====== | 8% | |====== | 9% | |======= | 9% | |======= | 10% | |======= | 11% | |======== | 11% | |======== | 12% | |========= | 12% | |========= | 13% | |========= | 14% | |========== | 14% | |========== | 15% | |=========== | 15% | |=========== | 16% | |============ | 16% | |============ | 17% | |============ | 18% | |============= | 18% | |============= | 19% | |============== | 19% | |============== | 20% | |============== | 21% | |=============== | 21% | |=============== | 22% | |================ | 22% | |================ | 23% | |================ | 24% | |================= | 24% | |================= | 25% | |================== | 25% | |================== | 26% | |=================== | 26% | |=================== | 27% | |=================== | 28% | |==================== | 28% | |==================== | 29% | |===================== | 29% | |===================== | 30% | |===================== | 31% | |====================== | 31% | |====================== | 32% | |======================= | 32% | |======================= | 33% | |======================= | 34% | |======================== | 34% | |======================== | 35% | |========================= | 35% | |========================= | 36% | |========================== | 36% | |========================== | 37% | |========================== | 38% | |=========================== | 38% | |=========================== | 39% | |============================ | 39% | |============================ | 40% | |============================ | 41% | |============================= | 41% | |============================= | 42% | |============================== | 42% | |============================== | 43% | |============================== | 44% | |=============================== | 44% | |=============================== | 45% | |================================ 
| 45% | |================================ | 46% | |================================= | 46% | |================================= | 47% | |================================= | 48% | |================================== | 48% | |================================== | 49% | |=================================== | 49% | |=================================== | 50% | |=================================== | 51% | |==================================== | 51% | |==================================== | 52% | |===================================== | 52% | |===================================== | 53% | |===================================== | 54% | |====================================== | 54% | |====================================== | 55% | |======================================= | 55% | |======================================= | 56% | |======================================== | 56% | |======================================== | 57% | |======================================== | 58% | |========================================= | 58% | |========================================= | 59% | |========================================== | 59% | |========================================== | 60% | |========================================== | 61% | |=========================================== | 61% | |=========================================== | 62% | |============================================ | 62% | |============================================ | 63% | |============================================ | 64% | |============================================= | 64% | |============================================= | 65% | |============================================== | 65% | |============================================== | 66% | |=============================================== | 66% | |=============================================== | 67% | |=============================================== | 68% | |================================================ | 68% | 
|================================================ | 69% | |================================================= | 69% | |================================================= | 70% | |================================================= | 71% | |================================================== | 71% | |================================================== | 72% | |=================================================== | 72% | |=================================================== | 73% | |=================================================== | 74% | |==================================================== | 74% | |==================================================== | 75% | |===================================================== | 75% | |===================================================== | 76% | |====================================================== | 76% | |====================================================== | 77% | |====================================================== | 78% | |======================================================= | 78% | |======================================================= | 79% | |======================================================== | 79% | |======================================================== | 80% | |======================================================== | 81% | |========================================================= | 81% | |========================================================= | 82% | |========================================================== | 82% | |========================================================== | 83% | |========================================================== | 84% | |=========================================================== | 84% | |=========================================================== | 85% | |============================================================ | 85% | |============================================================ | 86% | |============================================================= | 86% | 
|============================================================= | 87% | |============================================================= | 88% | |============================================================== | 88% | |============================================================== | 89% | |=============================================================== | 89% | |=============================================================== | 90% | |=============================================================== | 91% | |================================================================ | 91% | |================================================================ | 92% | |================================================================= | 92% | |================================================================= | 93% | |================================================================= | 94% | |================================================================== | 94% | |================================================================== | 95% | |=================================================================== | 95% | |=================================================================== | 96% | |==================================================================== | 96% | |==================================================================== | 97% | |==================================================================== | 98% | |===================================================================== | 98% | |===================================================================== | 99% | |======================================================================| 99% | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. 
#> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> | |======================================================================| 100% #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. # Construct the actual network with all 5000 bootstrap association matrices net_final <- netConstruct(amgut1.filt, filtTax = \"highestFreq\", filtTaxPar = list(highestFreq = 50), filtSamp = \"totalReads\", filtSampPar = list(totalReads = 0), measure = \"pearson\", normMethod = \"clr\", zeroMethod = \"pseudoZO\", sparsMethod = \"bootstrap\", cores = 1, nboot = nboot * niter, assoBoot = assoList, verbose = 3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'bootstrap' ... #> #> Adjust for multiple testing via 'adaptBH' ... #> #> Proportion of true null hypotheses: 0.46 #> Done. #> Done. 
# Network analysis props <- netAnalyze(net_final, clustMethod = \"cluster_fast_greedy\") # Network plot plot(props)"},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot method for objects of class diffnet — plot.diffnet","title":"Plot method for objects of class diffnet — plot.diffnet","text":"Plot method objects class diffnet","code":""},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot method for objects of class diffnet — plot.diffnet","text":"","code":"# S3 method for class 'diffnet' plot( x, adjusted = TRUE, layout = NULL, repulsion = 1, labels = NULL, shortenLabels = \"none\", labelLength = 6, labelPattern = c(5, \"'\", 3, \"'\", 3), charToRm = NULL, labelScale = TRUE, labelFont = 1, rmSingles = TRUE, nodeColor = \"gray90\", nodeTransp = 60, borderWidth = 1, borderCol = \"gray80\", edgeFilter = \"none\", edgeFilterPar = NULL, edgeWidth = 1, edgeTransp = 0, edgeCol = NULL, title = NULL, legend = TRUE, legendPos = \"topright\", legendGroupnames = NULL, legendTitle = NULL, legendArgs = NULL, cexNodes = 1, cexLabels = 1, cexTitle = 1.2, cexLegend = 1, mar = c(2, 2, 4, 6), ... )"},{"path":"https://netcomi.de/reference/plot.diffnet.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot method for objects of class diffnet — plot.diffnet","text":"x object class diffnet (returned diffnet) containing adjacency matrix, whose entries absolute differences associations. adjusted logical indicating whether adjacency matrix based adjusted p-values used. Defaults TRUE. FALSE, adjacency matrix based non-adjusted p-values. Ignored discordant method. layout indicates layout used defining node positions. Can character one layouts provided qgraph: \"spring\"(default), \"circle\", \"groups\". 
Alternatively, layouts provided igraph (see layout_) accepted (must given character, e.g. \"layout_with_fr\"). Can also matrix row number equal number nodes two columns corresponding x y coordinate. repulsion integer specifying repulse radius spring layout; value lower 1, nodes placed apart labels defines node labels. Can character vector entry node. FALSE, labels plotted. Defaults row/column names association matrices. shortenLabels character indicating shorten node labels. Ignored node labels defined via labels. NetCoMi's function editLabels() used label editing. Available options : \"intelligent\" Elements charToRm removed, labels shortened length labelLength, duplicates removed using labelPattern. \"simple\" Elements charToRm removed labels shortened length labelLength. \"none\" Default. Original dimnames adjacency matrices used. labelLength integer defining length labels shall shortened shortenLabels set \"simple\" \"intelligent\". Defaults 6. labelPattern vector three five elements, used argument shortenLabels set \"intelligent\". cutting label length labelLength leads duplicates, label shortened according labelPattern, first entry gives length first part, second entry used separator, third entry length third part. labelPattern five elements shortened labels still unique, fourth element serves separator, fifth element gives length last label part. Defaults c(5, \"'\", 3, \"'\", 3). data contains, example, three bacteria \"Streptococcus1\", \"Streptococcus2\" \"Streptomyces\", default shortened \"Strep'coc'1\", \"Strep'coc'2\", \"Strep'myc\". charToRm vector characters remove node names. Ignored labels given via labels. labelScale logical. TRUE, node labels scaled according node size labelFont integer defining font node labels. Defaults 1. rmSingles logical. TRUE, unconnected nodes removed. nodeColor character numeric value specifying node colors. Can also vector color node. nodeTransp integer 0 100 indicating transparency node colors. 
0 means transparency, 100 means full transparency. Defaults 60. borderWidth numeric specifying width node borders. Defaults 1. borderCol character specifying color node borders. Defaults \"gray80\" edgeFilter character indicating whether edges filtered. Possible values \"none\" (edges shown) \"highestDiff\" (first x edges highest absolute difference shown). x defined edgeFilterPar. edgeFilterPar numeric value specifying \"x\" edgeFilter. edgeWidth numeric specifying edge width. See argument \"edge.width\" qgraph. edgeTransp integer 0 100 indicating transparency edge colors. 0 means transparency (default), 100 means full transparency. edgeCol character vector specifying edge colors. Must length 6 discordant method (default: c(\"hotpink\", \"aquamarine\", \"red\", \"orange\", \"green\", \"blue\")) lengths 9 permutation tests Fisher's z-test (default: c(\"chartreuse2\", \"chartreuse4\", \"cyan\", \"magenta\", \"orange\", \"red\", \"blue\", \"black\", \"purple\")). title optional character string main title. legend logical. TRUE, legend plotted. legendPos either character specifying legend's position numeric vector two elements giving x y coordinates legend. See description x y arguments legend details. legendGroupnames vector two elements giving group names shown legend. legendTitle character specifying legend title. legendArgs list arguments passed legend. cexNodes numeric scaling node sizes. Defaults 1. cexLabels numeric scaling node labels. Defaults 1. cexTitle numeric scaling title. Defaults 1.2. cexLegend numeric scaling legend size. Defaults 1. mar numeric vector form c(bottom, left, top, right) defining plot margins. Works similar mar argument par. Defaults c(2,2,4,6). ... 
arguments passed qgraph.","code":""},{"path":[]},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot Method for microNetProps Objects — plot.microNetProps","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"Plotting objects class microNetProps.","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"","code":"# S3 method for class 'microNetProps' plot(x, layout = \"spring\", sameLayout = FALSE, layoutGroup = \"union\", repulsion = 1, groupNames = NULL, groupsChanged = FALSE, labels = NULL, shortenLabels = \"none\", labelLength = 6L, labelPattern = c(5, \"'\", 3, \"'\", 3), charToRm = NULL, labelScale = TRUE, labelFont = 1, labelFile = NULL, # Nodes: nodeFilter = \"none\", nodeFilterPar = NULL, rmSingles = \"none\", nodeSize = \"fix\", normPar = NULL, nodeSizeSpread = 4, nodeColor = \"cluster\", colorVec = NULL, featVecCol = NULL, sameFeatCol = TRUE, sameClustCol = TRUE, sameColThresh = 2L, nodeShape = NULL, featVecShape = NULL, nodeTransp = 60, borderWidth = 1, borderCol = \"gray80\", # Hubs: highlightHubs = TRUE, hubTransp = NULL, hubLabelFont = NULL, hubBorderWidth = NULL, hubBorderCol = \"black\", # Edges: edgeFilter = \"none\", edgeFilterPar = NULL, edgeInvisFilter = \"none\", edgeInvisPar = NULL, edgeWidth = 1, negDiffCol = TRUE, posCol = NULL, negCol = NULL, cut = NULL, edgeTranspLow = 0, edgeTranspHigh = 0, # Additional arguments: cexNodes = 1, cexHubs = 1.2, cexLabels = 1, cexHubLabels = NULL, cexTitle = 1.2, showTitle = NULL, title1 = NULL, title2 = NULL, mar = c(1, 3, 3, 3), doPlot = TRUE, ...)"},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot Method for microNetProps Objects — 
plot.microNetProps","text":"x object class microNetProps layout indicates layout used defining node positions. Can character one layouts provided qgraph: \"spring\" (default), \"circle\", \"groups\". Alternatively, layouts provided igraph (see layout_) accepted (must given character, e.g. \"layout_with_fr\"). Can also matrix row number equal number nodes two columns corresponding x y coordinate. sameLayout logical. Indicates whether layout used networks. Ignored x contains one network. See argument layoutGroup. layoutGroup numeric character. Indicates group, layout taken argument sameLayout TRUE. layout computed group 1 (adopted group 2) set \"1\" computed group 2 set \"2\". Can alternatively set \"union\" (default) compute union layouts, nodes placed optimal possible equally networks. repulsion positive numeric value indicating strength repulsive forces \"spring\" layout. Nodes placed closer together smaller values apart higher values. See repulsion argument qgraph. groupNames character vector two entries naming groups networks belong. Defaults group names returned netConstruct: data set split according group variable, factor levels (increasing order) used. Ignored arguments title1 title2 set single network plotted. groupsChanged logical. Indicates order networks plotted. TRUE, order exchanged. See details. Defaults FALSE. labels defines node labels. Can named character vector, used groups (, adjacency matrices x must contain variables). Can also list two named vectors (names must match row/column names adjacency matrices). FALSE, labels plotted. Defaults row/column names adjacency matrices. shortenLabels character indicating shorten node labels. Ignored node labels defined via labels. NetCoMi's function editLabels() used label editing. Available options : \"intelligent\" Elements charToRm removed, labels shortened length labelLength, duplicates removed using labelPattern. \"simple\" Elements charToRm removed labels shortened length labelLength. \"none\" Default. 
Original dimnames adjacency matrices used. labelLength integer defining length labels shall shortened shortenLabels set \"simple\" \"intelligent\". Defaults 6. labelPattern vector three five elements, used argument shortenLabels set \"intelligent\". cutting label length labelLength leads duplicates, label shortened according labelPattern, first entry gives length first part, second entry used separator, third entry length third part. labelPattern five elements shortened labels still unique, fourth element serves separator, fifth element gives length last label part. Defaults c(5, \"'\", 3, \"'\", 3). data contains, example, three bacteria \"Streptococcus1\", \"Streptococcus2\" \"Streptomyces\", default shortened \"Strep'coc'1\", \"Strep'coc'2\", \"Strep'myc\". charToRm vector characters remove node names. Ignored labels given via labels. labelScale logical. TRUE, node labels scaled according node size labelFont integer defining font node labels. Defaults 1. labelFile optional character form \".txt\" naming file original renamed node labels stored. file stored current working directory. nodeFilter character indicating whether nodes filtered. Possible values : \"none\" Default. nodes plotted. \"highestConnect\" x nodes highest connectivity (sum edge weights) plotted. \"highestDegree\", \"highestBetween\", \"highestClose\", \"highestEigen\" x nodes highest degree/betweenness/closeness/eigenvector centrality plotted. \"clustTaxon\" nodes belonging cluster variables given character vector via nodeFilterPar. \"clustMin\" Plotted nodes belonging clusters minimum number nodes x. \"names\" Character vector variable names plotted Necessary parameters (e.g. \"x\") given via argument nodeFilterPar. nodeFilterPar parameters needed filtering method defined nodeFilter. rmSingles character value indicating handle unconnected nodes. Possible values \"\" (single nodes deleted), \"inboth\" (nodes unconnected networks removed) \"none\" (default; nodes removed). 
set \"\", layout used networks. nodeSize character indicating node sizes determined. Possible values : \"fix\" Default. nodes size (hub size can defined separately via cexHubs). \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" Size scaled according node's centrality \"counts\" Size scaled according sum counts (microbes samples, depending nodes express). \"normCounts\" Size scaled according sum normalized counts (microbes samples), exported netConstruct. \"TSS\", \"fractions\", \"CSS\", \"COM\", \"rarefy\", \"VST\", \"clr\", \"mclr\" Size scaled according sum normalized counts. Available options normMethod netConstruct. Parameters set via normPar. normPar list parameters passed function normalization nodeSize set normalization method. Used analogously normPar netConstruct(). nodeSizeSpread positive numeric value indicating spread node sizes. smaller value, similar node sizes. Node sizes calculated : (x - min(x)) / (max(x) - min(x)) * nodeSizeSpread + cexNodes. nodeSizeSpread = 4 (default) cexNodes = 1, node sizes range 1 5. nodeColor character specifying node colors. Possible values \"cluster\" (colors according determined clusters), \"feature\" (colors according node's features defined featVecCol), \"colorVec\" (vector colorVec). former two cases, colors can specified via colorVec. colorVec defined, rainbow function grDevices package used. Also accepted character value defining color, used nodes. NULL, \"grey40\" used nodes. colorVec vector list two vectors used specify node colors. Different usage depending \"nodeColor\" argument: nodeColor = \"cluster\" colorVec must vector. Depending sameClustCol argument, colors used one networks. vector long enough, warning returned colors rainbow() used remaining clusters. nodeColor = \"feature\" Defines color level featVecCol. Can list two vectors used two networks (single network, first element used) vector, used groups two networks plotted. 
nodeColor = \"colorVec\" colorVec defines color node implying names must match node's names (also ensured names match colnames original count matrix). Can list two vectors used two networks (single network, first element used) vector, used groups two networks plotted. featVecCol vector feature node. Used coloring nodes nodeColor set \"feature\". coerced factor. colorVec given, length must larger equal number feature levels. sameFeatCol logical indicating whether color used features networks (used two networks plotted, nodeColor = \"feature\", color vector/list given (via featVecCol)). sameClustCol TRUE (default) two networks plotted, clusters least sameColThresh nodes common color. used nodeColor set \"cluster\". sameColThresh indicates many nodes cluster must common two groups color. See argument sameClustCol. Defaults 2. nodeShape character vector specifying node shapes. Possible values \"circle\" (default), \"square\", \"triangle\", \"diamond\". featVecShape NULL, length nodeShape must equal number factor levels given featVecShape. , shape assigned one factor level (increasing order). featVecShape NULL, first shape used nodes. See example. featVecShape vector feature node. NULL, different node shape used feature. coerced factor mode. maximum number factor levels 4 corresponding four possible shapes defined via nodeShape. nodeTransp integer 0 100 indicating transparency node colors. 0 means transparency, 100 means full transparency. Defaults 60. borderWidth numeric specifying width node borders. Defaults 1. borderCol character specifying color node borders. Defaults \"gray80\" highlightHubs logical indicating hubs highlighted. TRUE, following features can defined separately hubs: transparency (hubTransp), label font (hubLabelFont), border width (hubBorderWidth), border color (hubBorderCol). hubTransp numeric 0 100 specifying color transparency hub nodes. See argument nodeTransp. Defaults 0.5*nodeTransp. Ignored highlightHubs FALSE. 
hubLabelFont integer specifying label font hub nodes. Defaults 2*labelFont. Ignored highlightHubs FALSE. hubBorderWidth numeric specifying border width hub nodes. Defaults 2*borderWidth. Ignored highlightHubs FALSE. hubBorderCol character specifying border color hub nodes. Defaults \"black\". Ignored highlightHubs FALSE. edgeFilter character specifying edges filtered. Possible values : \"none\" Default. edges plotted. \"threshold\" association networks, edges corresponding absolute association >= x plotted. dissimilarity networks, edges corresponding dissimilarity <= x plotted. behavior similar sparsification via threshold netConstruct(). \"highestWeight\" first x edges highest edge weight plotted. x defined edgeFilterPar, respectively. edgeFilterPar numeric specifying \"x\" edgeFilter. edgeInvisFilter similar edgeFilter edges removed computing layout edge removal influence layout. Defaults \"none\". edgeInvisPar numeric specifying \"x\" edgeInvisFilter. edgeWidth numeric specifying edge width. See argument \"edge.width\" qgraph. negDiffCol logical indicating edges negative corresponding association colored different. TRUE (default), argument posCol used edges positive association negCol negative association. FALSE dissimilarity networks, posCol used. posCol vector (character numeric) one two elements specifying color edges positive weight also edges negative weight negDiffCol set FALSE. first element used edges weight cut second edges weight cut. single value given, used cases. Defaults c(\"#009900\", \"darkgreen\"). negCol vector (character numeric) one two elements specifying color edges negative weight. first element used edges absolute weight cut second edges absolute weight cut. single value given, used cases. Ignored negDiffCol FALSE. Defaults c(\"red\", \"#BF0000\"). cut defines \"cut\" parameter qgraph. Can either numeric value (used groups two networks plotted) vector length two. default set analogous qgraph: \"0 graphs less 20 nodes. 
larger graphs cut value automatically chosen equal maximum 75th quantile absolute edge strengths edge strength corresponding 2n-th edge strength (n number nodes.)\" two networks plotted, mean two determined cut parameters used edge thicknesses comparable. edgeTranspLow numeric value 0 100 specifying transparency edges weight cut. higher value, higher transparency. edgeTranspHigh analogous edgeTranspLow, used edges weight cut. cexNodes numeric scaling node sizes. Defaults 1. cexHubs numeric scaling hub sizes. used nodeSize set \"hubs\". cexLabels numeric scaling node labels. Defaults 1. set 0, node labels plotted. cexHubLabels numeric scaling node labels hub nodes. Equals cexLabels default. Ignored, highlightHubs = FALSE. cexTitle numeric scaling title(s). Defaults 1.2. showTitle TRUE, title shown network, either defined via groupNames, title1 title2. Defaults TRUE two networks plotted FALSE single network. title1 character giving title first network. title2 character giving title second network (existing). mar numeric vector form c(bottom, left, top, right) defining plot margins. Works similar mar argument par. Defaults c(1,3,3,3). doPlot logical. FALSE, network plot suppressed. Useful saving output (e.g., layout) without plotting. ... 
arguments passed qgraph, used network plotting.","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"Returns (invisibly) list following elements:","code":""},{"path":"https://netcomi.de/reference/plot.microNetProps.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot Method for microNetProps Objects — plot.microNetProps","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut1.filt\") # Network construction amgut_net <- netConstruct(amgut1.filt, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"threshold\", thresh = 0.3) #> Checking input arguments ... #> Done. #> Data filtering ... #> 77 taxa removed. #> 50 taxa and 289 samples remaining. #> #> Zero treatment: #> Zero counts replaced by 1 #> #> Normalization: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. 
# Network analysis amgut_props <- netAnalyze(amgut_net) ### Network plots ### # Clusters are used for node coloring: plot(amgut_props, nodeColor = \"cluster\") # Remove singletons plot(amgut_props, nodeColor = \"cluster\", rmSingles = TRUE) # A higher repulsion places nodes with high edge weight closer together plot(amgut_props, nodeColor = \"cluster\", rmSingles = TRUE, repulsion = 1.2) # A feature vector is used for node coloring # (this could be a vector with phylum names of the ASVs) set.seed(123456) featVec <- sample(1:5, nrow(amgut1.filt), replace = TRUE) # Names must be equal to ASV names names(featVec) <- colnames(amgut1.filt) plot(amgut_props, rmSingles = TRUE, nodeColor = \"feature\", featVecCol = featVec, colorVec = heat.colors(5)) # Use a further feature vector for node shapes shapeVec <- sample(1:3, ncol(amgut1.filt), replace = TRUE) names(shapeVec) <- colnames(amgut1.filt) plot(amgut_props, rmSingles = TRUE, nodeColor = \"feature\", featVecCol = featVec, colorVec = heat.colors(5), nodeShape = c(\"circle\", \"square\", \"diamond\"), featVecShape = shapeVec, highlightHubs = FALSE)"},{"path":"https://netcomi.de/reference/plotHeat.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a heatmap with p-values — plotHeat","title":"Create a heatmap with p-values — plotHeat","text":"function draw heatmaps option use p-values significance codes cell text. allows draw mixed heatmap different cell text (values, p-values, significance code) lower upper triangle. 
function corrplot used plotting heatmap.","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a heatmap with p-values — plotHeat","text":"","code":"plotHeat( mat, pmat = NULL, type = \"full\", textUpp = \"mat\", textLow = \"code\", methUpp = \"color\", methLow = \"color\", diag = TRUE, title = \"\", mar = c(0, 0, 1, 0), labPos = \"lt\", labCol = \"gray40\", labCex = 1.1, textCol = \"black\", textCex = 1, textFont = 1, digits = 2L, legendPos = \"r\", colorPal = NULL, addWhite = TRUE, nCol = 51L, colorLim = NULL, revCol = FALSE, color = NULL, bg = \"white\", argsUpp = NULL, argsLow = NULL )"},{"path":"https://netcomi.de/reference/plotHeat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a heatmap with p-values — plotHeat","text":"mat numeric matrix values plotted. pmat optional matrix p-values. type character defining type heatmap. Possible values : \"full\" Default. cell text specified via textUpp used whole heatmap. \"mixed\" Different cell text used upper lower triangle. upper triangle specified via textUpp lower triangle via textLow. \"upper\" upper triangle plotted. text specified via textUpp. \"lower\" lower triangle plotted. text specified via textLow. textUpp character specifying cell text either full heatmap (type \"full\") upper triangle (type \"mixed\" \"upper\"). Default \"mat\". Possible values : \"mat\" Cells contain values matrix given mat \"sigmat\" \"mat\" insignificant values (cells) blank. \"pmat\" Cells contain p-values given p-mat. \"code\" Cells contain significance codes corresponding p-values given p-mat. following coding used: \"***: 0.001; **: 0.01; *: 0.05\". \"none\" cell text plotted. textLow textUpp lower triangle (type \"mixed\" \"lower\"). Default \"code\". methUpp character specifying values represented full heatmap (type \"full\") upper triangle (type \"mixed\" \"upper\"). 
Possible values : \"circle\", \"square\", \"ellipse\", \"number\", \"shade\", \"color\" (default), \"pie\". method passed method argument corrplot. methLow methUpp lower triangle. diag logical. TRUE (default), diagonal printed. FALSE type \"full\" \"mixed\", diagonal cells white. FALSE type \"upper\" \"lower\", non-diagonal cells printed. title character giving title. mar vector specifying plot margins. See par. Default c(0, 0, 1, 0). labPos character defining label position. Possible values : \"lt\"(left top, default), \"ld\"(left diagonal; type must \"lower\"), \"td\"(top diagonal; type must \"upper\"), \"d\"(diagonal ), \"n\"(labels). Passed corrplot argument tl.pos. labCol label color. Default \"gray40\". Passed corrplot argument tl.col. labCex numeric defining label size. Default 1.1. Passed corrplot argument tl.cex. textCol color cell text (values, p-values, code). Default \"black\". textCex numeric defining text size. Default 1. Currently works types \"mat\" \"code\". textFont numeric defining text font. Default 1. Currently works type \"mat\". digits integer defining number decimal places used matrix values p-values. legendPos position color legend. Possible values : \"r\"(right; default), \"b\"(bottom), \"n\"(legend). colorPal character specifying color palette used cell coloring color set. Available sequential diverging color palettes RColorBrewer: Sequential: \"Blues\", \"BuGn\", \"BuPu\", \"GnBu\", \"Greens\", \"Greys\", \"Oranges\", \"OrRd\", \"PuBu\", \"PuBuGn\", \"PuRd\", \"Purples\", \"RdPu\", \"Reds\", \"YlGn\", \"YlGnBu\", \"YlOrBr\", \"YlOrRd\" Diverging: \"BrBG\", \"PiYG\", \"PRGn\", \"PuOr\", \"RdBu\", \"RdGy\", \"RdYlBu\", \"RdYlGn\", \"Spectral\" default, \"RdBu\" used first value colorLim negative \"YlOrRd\" otherwise. addWhite logical. TRUE, white added color palette. (first element sequential palettes middle element diverging palettes). diverging palette, nCol set odd number middle color white. 
nCol integer defining number colors color palette interpolated. Default 51L. colorRamp used color interpolation. colorLim numeric vector two values defining color limits. first element color vector assigned lower limit last element color vector upper limit. Default c(0,1) values mat [0,1], c(-1,1) values [-1,1], minimum maximum values otherwise. revCol logical. TRUE, reversed color vector used. Default FALSE. Ignored color given. color optional vector colors used cell coloring. bg background color cells. Default \"white\". argsUpp optional list arguments passed corrplot. Arguments set within plotHeat() overwritten arguments list. Used full heatmap type \"full\" upper triangle type \"mixed\" \"upper\". argsLow argsUpp lower triangle (type \"mixed\" \"lower\").","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a heatmap with p-values — plotHeat","text":"Invisible list two elements argsUpper argsLower containing corrplot arguments used upper lower triangle heatmap.","code":""},{"path":"https://netcomi.de/reference/plotHeat.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a heatmap with p-values — plotHeat","text":"","code":"# Load data sets from American Gut Project (from SpiecEasi package) data(\"amgut2.filt.phy\") # Split data into two groups: with and without seasonal allergies amgut_season_yes <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"yes\") amgut_season_no <- phyloseq::subset_samples(amgut2.filt.phy, SEASONAL_ALLERGIES == \"no\") # Sample sizes phyloseq::nsamples(amgut_season_yes) #> [1] 121 phyloseq::nsamples(amgut_season_no) #> [1] 163 # Make sample sizes equal to ensure comparability n_yes <- phyloseq::nsamples(amgut_season_yes) amgut_season_no <- phyloseq::subset_samples(amgut_season_no, X.SampleID %in% get_variable(amgut_season_no, \"X.SampleID\")[1:n_yes]) #> Error in 
h(simpleError(msg, call)): error in evaluating the argument 'table' in selecting a method for function '%in%': error in evaluating the argument 'object' in selecting a method for function 'sample_data': object 'amgut_season_no' not found # Network construction amgut_net <- netConstruct(data = amgut_season_yes, data2 = amgut_season_no, measure = \"pearson\", filtTax = \"highestVar\", filtTaxPar = list(highestVar = 50), zeroMethod = \"pseudoZO\", normMethod = \"clr\", sparsMethod = \"thresh\", thresh = 0.4, seed = 123456) #> Checking input arguments ... #> Done. #> Data filtering ... #> 95 taxa removed in each data set. #> 1 rows with zero sum removed in group 1. #> 1 rows with zero sum removed in group 2. #> 43 taxa and 120 samples remaining in group 1. #> 43 taxa and 162 samples remaining in group 2. #> #> Zero treatment in group 1: #> Zero counts replaced by 1 #> #> Zero treatment in group 2: #> Zero counts replaced by 1 #> #> Normalization in group 1: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Normalization in group 2: #> Execute clr(){SpiecEasi} ... #> Done. #> #> Calculate 'pearson' associations ... #> Done. #> #> Calculate associations in group 2 ... #> Done. #> #> Sparsify associations via 'threshold' ... #> Done. #> #> Sparsify associations in group 2 ... #> Done. # Estimated and sparsified associations of group 1 plotHeat(amgut_net$assoEst1, textUpp = \"none\", labCex = 0.6) plotHeat(amgut_net$assoMat1, textUpp = \"none\", labCex = 0.6) # Compute graphlet correlation matrices and perform significance tests adja1 <- amgut_net$adjaMat1 adja2 <- amgut_net$adjaMat2 gcm1 <- calcGCM(adja1) gcm2 <- calcGCM(adja2) gcmtest <- testGCM(obj1 = gcm1, obj2 = gcm2) #> Perform Student's t-test for GCM1 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.22 #> Done. #> #> Perform Student's t-test for GCM2 ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.08 #> Done. 
#> #> Test GCM1 and GCM2 for differences ... #> Adjust for multiple testing ... #> #> Proportion of true null hypotheses: 0.64 #> Done. # Mixed heatmap of GCM1 and significance codes plotHeat(mat = gcmtest$gcm1, pmat = gcmtest$pAdjust1, type = \"mixed\", textLow = \"code\") # Mixed heatmap of GCM2 and p-values (diagonal disabled) plotHeat(mat = gcmtest$gcm2, pmat = gcmtest$pAdjust2, diag = FALSE, type = \"mixed\", textLow = \"pmat\") # Mixed heatmap of differences (GCM1 - GCM2) and significance codes plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"mixed\", textLow = \"code\", title = \"Differences between GCMs (GCM1 - GCM2)\", mar = c(0, 0, 2, 0)) # Heatmap of differences (insignificant values are blank) plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"full\", textUpp = \"sigmat\") # Same as before but with higher significance level plotHeat(mat = gcmtest$diff, pmat = gcmtest$pAdjustDiff, type = \"full\", textUpp = \"sigmat\", argsUpp = list(sig.level = 0.1)) # Heatmap of absolute differences # (different position of labels and legend) plotHeat(mat = gcmtest$absDiff, type = \"full\", labPos = \"d\", legendPos = \"b\") # Mixed heatmap of absolute differences # (different methods, text options, and color palette) plotHeat(mat = gcmtest$absDiff, type = \"mixed\", textLow = \"mat\", methUpp = \"number\", methLow = \"circle\", labCol = \"black\", textCol = \"gray50\", textCex = 1.3, textFont = 2, digits = 1L, colorLim = range(gcmtest$absDiff), colorPal = \"Blues\", nCol = 21L, bg = \"darkorange\", addWhite = FALSE) # Mixed heatmap of differences # (different methods, text options, and color palette) plotHeat(mat = gcmtest$diff, type = \"mixed\", textLow = \"none\", methUpp = \"number\", methLow = \"pie\", textCex = 1.3, textFont = 2, digits = 1L, colorLim = range(gcmtest$diff), colorPal = \"PiYG\", nCol = 21L, bg = \"gray80\") # Heatmap of differences with given color vector plotHeat(mat = gcmtest$diff, nCol = 21L, color = 
grDevices::colorRampPalette(c(\"blue\", \"white\", \"orange\"))(31))"},{"path":"https://netcomi.de/reference/print.GCD.html","id":null,"dir":"Reference","previous_headings":"","what":"Print method for GCD objects — print.GCD","title":"Print method for GCD objects — print.GCD","text":"Print method for GCD objects.","code":""},{"path":"https://netcomi.de/reference/print.GCD.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print method for GCD objects — print.GCD","text":"","code":"# S3 method for class 'GCD' print(x, ...)"},{"path":"https://netcomi.de/reference/print.GCD.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print method for GCD objects — print.GCD","text":"x An object of class GCD (as returned by calcGCD). ... Not used.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":null,"dir":"Reference","previous_headings":"","what":"Rename taxa — renameTaxa","title":"Rename taxa — renameTaxa","text":"Function for renaming the taxa in a taxonomic table, which can be given as a matrix or phyloseq object. It comes with the functionality of making unknown and unclassified taxa unique by substituting them with the next higher known taxonomic level. E.g., an unknown genus \"g__\" can automatically be renamed to \"1_Streptococcaceae(F)\". User-defined patterns determine the format of known and substituted names. Unknown names (e.g., NAs) and unclassified taxa can be handled separately. 
Duplicated names within one of the chosen ranks can also be made unique by numbering them consecutively.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Rename taxa — renameTaxa","text":"","code":"renameTaxa( taxtab, pat = \"_\", substPat = \"___\", unknown = c(NA, \"\", \" \", \"__\"), numUnknown = TRUE, unclass = c(\"unclassified\", \"Unclassified\"), numUnclass = TRUE, numUnclassPat = \"\", numDupli = NULL, numDupliPat = \"\", ranks = NULL, ranksAbb = NULL, ignoreCols = NULL )"},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Rename taxa — renameTaxa","text":"taxtab taxonomic table (a matrix containing taxonomic names, whose columns must be the taxonomic ranks) or a phyloseq object. pat character specifying the pattern of the new taxonomic names if the current name is KNOWN. See the examples for a demo of the default value. Possible space holders are: the taxonomic name (either the original or the replaced one), the taxonomic rank in lower case, the taxonomic rank with the first letter in upper case, the abbreviated taxonomic rank in lower case, and the abbreviated taxonomic rank in upper case. substPat character specifying the pattern of the new taxonomic names if the current name is UNKNOWN. The current name is substituted by the next higher existing name. Possible space holders (in addition to those of pat) are: the substituted taxonomic name (the next higher existing name), the taxonomic rank of the substitute name in lower case, the taxonomic rank of the substitute name with the first letter in upper case, the abbreviated taxonomic rank of the substitute name in lower case, and the abbreviated taxonomic rank of the substitute name in upper case. unknown character vector giving the labels of unknown taxa, without the leading rank label (e.g., \"g_\" or \"g__\" at genus level). If numUnknown = TRUE, unknown names are replaced by a number. numUnknown logical. If TRUE, a number is assigned to unknown taxonomic names (defined by unknown) to make them unique. unclass character vector giving the label of unclassified taxa, without the leading rank label (e.g., \"g_\" or \"g__\" at genus level). 
If numUnclass = TRUE, a number is added to the names of unclassified taxa. Note that unclassified taxa and unknown taxa get a separate numbering if unclass is set. To replace both unknown and unclassified taxa by numbers, add \"unclassified\" (or the appropriate counterpart) to unknown and set unclass to NULL. numUnclass logical. If TRUE, a number is assigned to unclassified taxa (defined by unclass) to make them unique. The pattern is defined via numUnclassPat. numUnclassPat character defining the pattern used for numbering unclassified taxa. Must include a space holder for the name (\"\") and one for the number (\"\"). The default (\"\") results in, e.g., \"unclassified1\". numDupli character vector giving the ranks that should be made unique by adding a number. Elements must match the column names. The pattern is defined via numDupliPat. numDupliPat character defining the pattern used for numbering duplicated names (if numDupli is given). Must include a space holder for the name (\"\") and one for the number (\"\"). The default (\"\") results in, e.g., \"Ruminococcus1\". ranks character vector giving the rank names used for renaming the taxa. If NULL, the function tries to set the rank names automatically based on common usage. ranksAbb character vector giving the abbreviated rank names, which are directly used in place of the corresponding space holders (the former two in lower case, the latter two in upper case). If NULL, the first letter of the rank names is used. ignoreCols numeric vector with columns to be ignored. Names remain unchanged in these columns. Columns containing NAs are ignored automatically if ignoreCols = NULL. 
Note: the length of ranks and ranksAbb must match the number of non-ignored columns.","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Rename taxa — renameTaxa","text":"The renamed taxonomic table (a matrix or phyloseq object, depending on the input).","code":""},{"path":"https://netcomi.de/reference/renameTaxa.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Rename taxa — renameTaxa","text":"","code":"#--- Load and edit data ----------------------------------------------------- library(phyloseq) data(\"GlobalPatterns\") global <- subset_taxa(GlobalPatterns, Kingdom == \"Bacteria\") taxtab <- global@tax_table@.Data[1:10, ] # Add some unclassified taxa taxtab[c(2,3,5), \"Species\"] <- \"unclassified\" taxtab[c(2,3), \"Genus\"] <- \"unclassified\" taxtab[2, \"Family\"] <- \"unclassified\" # Add some blanks taxtab[7, \"Genus\"] <- \" \" taxtab[7:9, \"Species\"] <- \" \" # Add taxon that is unclassified up to Kingdom taxtab[9, ] <- \"unclassified\" taxtab[9, 1] <- \"Unclassified\" # Add row names rownames(taxtab) <- paste0(\"OTU\", 1:nrow(taxtab)) print(taxtab) #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"Unclassified\" \"unclassified\" \"unclassified\" \"unclassified\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" 
#> Family Genus Species #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU3 \"Propionibacteriaceae\" \"unclassified\" \"unclassified\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" NA #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified\" #> OTU6 NA NA NA #> OTU7 \"Nocardioidaceae\" \" \" \" \" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" \" \" #> OTU9 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU10 NA NA NA #--- Example 1 (default setting) -------------------------------------------- # Example 1 (default setting) # - Known names are replaced by \"_\" # - Unknown names are replaced by \"___\" # - Unclassified taxa have separate numbering # - Ranks are taken from column names # - e.g., unknown genus -> \"g_1_f_Streptococcaceae\" renamed1 <- renameTaxa(taxtab) renamed1 #> Kingdom Phylum #> OTU1 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU2 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU3 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU4 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU5 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU6 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU7 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU8 \"k_Bacteria\" \"p_Actinobacteria\" #> OTU9 \"k_Unclassified1\" \"p_unclassified1_k_Unclassified1\" #> OTU10 \"k_Bacteria\" \"p_Actinobacteria\" #> Class Order #> OTU1 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU2 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU3 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU4 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU5 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU6 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU7 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU8 \"c_Actinobacteria\" \"o_Actinomycetales\" #> OTU9 \"c_unclassified1_k_Unclassified1\" \"o_unclassified1_k_Unclassified1\" #> OTU10 \"c_Actinobacteria\" \"o_Actinomycetales\" #> Family #> OTU1 \"f_Propionibacteriaceae\" #> OTU2 
\"f_unclassified1_o_Actinomycetales\" #> OTU3 \"f_Propionibacteriaceae\" #> OTU4 \"f_Propionibacteriaceae\" #> OTU5 \"f_Propionibacteriaceae\" #> OTU6 \"f_1_o_Actinomycetales\" #> OTU7 \"f_Nocardioidaceae\" #> OTU8 \"f_Nocardioidaceae\" #> OTU9 \"f_unclassified2_k_Unclassified1\" #> OTU10 \"f_2_o_Actinomycetales\" #> Genus #> OTU1 \"g_Propionibacterium\" #> OTU2 \"g_unclassified1_o_Actinomycetales\" #> OTU3 \"g_unclassified2_f_Propionibacteriaceae\" #> OTU4 \"g_Tessaracoccus\" #> OTU5 \"g_Aestuariimicrobium\" #> OTU6 \"g_1_o_Actinomycetales\" #> OTU7 \"g_2_f_Nocardioidaceae\" #> OTU8 \"g_Propionicimonas\" #> OTU9 \"g_unclassified3_k_Unclassified1\" #> OTU10 \"g_3_o_Actinomycetales\" #> Species #> OTU1 \"s_Propionibacteriumacnes\" #> OTU2 \"s_unclassified1_o_Actinomycetales\" #> OTU3 \"s_unclassified2_f_Propionibacteriaceae\" #> OTU4 \"s_1_g_Tessaracoccus\" #> OTU5 \"s_unclassified3_g_Aestuariimicrobium\" #> OTU6 \"s_2_o_Actinomycetales\" #> OTU7 \"s_3_f_Nocardioidaceae\" #> OTU8 \"s_4_g_Propionicimonas\" #> OTU9 \"s_unclassified4_k_Unclassified1\" #> OTU10 \"s_5_o_Actinomycetales\" #--- Example 2 -------------------------------------------------------------- # - Use phyloseq object (subset of class clostridia to decrease runtime) global_sub <- subset_taxa(global, Class == \"Clostridia\") renamed2 <- renameTaxa(global_sub) tax_table(renamed2)[1:5, ] #> Taxonomy Table: [5 taxa by 7 taxonomic ranks]: #> Kingdom Phylum Class Order #> 69790 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Halanaerobiales\" #> 201587 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Halanaerobiales\" #> 14244 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> 589048 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> 310026 \"k_Bacteria\" \"p_Firmicutes\" \"c_Clostridia\" \"o_Clostridiales\" #> Family Genus #> 69790 \"f_Halobacteroidaceae\" \"g_1_f_Halobacteroidaceae\" #> 201587 \"f_Halanaerobiaceae\" \"g_2_f_Halanaerobiaceae\" #> 14244 
\"f_1_o_Clostridiales\" \"g_3_o_Clostridiales\" #> 589048 \"f_2_o_Clostridiales\" \"g_4_o_Clostridiales\" #> 310026 \"f_3_o_Clostridiales\" \"g_5_o_Clostridiales\" #> Species #> 69790 \"s_1_f_Halobacteroidaceae\" #> 201587 \"s_2_f_Halanaerobiaceae\" #> 14244 \"s_3_o_Clostridiales\" #> 589048 \"s_4_o_Clostridiales\" #> 310026 \"s_5_o_Clostridiales\" #--- Example 3 -------------------------------------------------------------- # - Known names remain unchanged # - Substituted names are indicated by their rank in brackets # - Pattern for numbering unclassified taxa changed # - e.g., unknown genus -> \"Streptococcaceae (F)\" # - Note: Numbering of unknowns is not shown because \"\" is not # included in \"substPat\" renamed3 <- renameTaxa(taxtab, numUnclassPat = \"_\", pat = \"\", substPat = \" ()\") renamed3 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified_1\" \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family Genus #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacteriaceae (F)\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" \"Nocardioidaceae (F)\" #> 
OTU8 \"Actinomycetales\" \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" \"Unclassified_1 (K)\" #> OTU10 \"Actinomycetales\" \"Actinomycetales (O)\" \"Actinomycetales (O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"Actinomycetales (O)\" #> OTU3 \"Propionibacteriaceae (F)\" #> OTU4 \"Tessaracoccus (G)\" #> OTU5 \"Aestuariimicrobium (G)\" #> OTU6 \"Actinomycetales (O)\" #> OTU7 \"Nocardioidaceae (F)\" #> OTU8 \"Propionicimonas (G)\" #> OTU9 \"Unclassified_1 (K)\" #> OTU10 \"Actinomycetales (O)\" #--- Example 4 -------------------------------------------------------------- # - Same as before but numbering shown for unknown names # - e.g., unknown genus -> \"1 Streptococcaceae (F)\" renamed4 <- renameTaxa(taxtab, numUnclassPat = \"_\", pat = \"\", substPat = \" ()\") renamed4 #> Kingdom Phylum #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified_1\" \"unclassified_1 Unclassified_1 (K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Class Order #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified_1 Unclassified_1 (K)\" \"unclassified_1 Unclassified_1 (K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Family #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" 
#> OTU6 \"1 Actinomycetales (O)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified_2 Unclassified_1 (K)\" #> OTU10 \"2 Actinomycetales (O)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"unclassified_2 Propionibacteriaceae (F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1 Actinomycetales (O)\" #> OTU7 \"2 Nocardioidaceae (F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified_3 Unclassified_1 (K)\" #> OTU10 \"3 Actinomycetales (O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified_1 Actinomycetales (O)\" #> OTU3 \"unclassified_2 Propionibacteriaceae (F)\" #> OTU4 \"1 Tessaracoccus (G)\" #> OTU5 \"unclassified_3 Aestuariimicrobium (G)\" #> OTU6 \"2 Actinomycetales (O)\" #> OTU7 \"3 Nocardioidaceae (F)\" #> OTU8 \"4 Propionicimonas (G)\" #> OTU9 \"unclassified_4 Unclassified_1 (K)\" #> OTU10 \"5 Actinomycetales (O)\" #--- Example 5 -------------------------------------------------------------- # - Same numbering for unknown names and unclassified taxa # - e.g., unknown genus -> \"1_Streptococcaceae(F)\" # - Note: We get a warning here because \"Unclassified\" (with capital U) # is not included in \"unknown\" but occurs in the data renamed5 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\"), pat = \"\", substPat = \"_()\") #> Warning: Taxonomic table contains unclassified taxa. Consider adding \"Unclassified\" to argument \"unknown\". 
renamed5 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified\" \"1_Unclassified(K)\" \"1_Unclassified(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family Genus #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"Actinomycetales\" \"1_Actinomycetales(O)\" \"1_Actinomycetales(O)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" \"2_Propionibacteriaceae(F)\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"Actinomycetales\" \"2_Actinomycetales(O)\" \"3_Actinomycetales(O)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" \"4_Nocardioidaceae(F)\" #> OTU8 \"Actinomycetales\" \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"1_Unclassified(K)\" \"3_Unclassified(K)\" \"5_Unclassified(K)\" #> OTU10 \"Actinomycetales\" \"4_Actinomycetales(O)\" \"6_Actinomycetales(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1_Actinomycetales(O)\" #> OTU3 \"2_Propionibacteriaceae(F)\" #> OTU4 \"3_Tessaracoccus(G)\" #> OTU5 \"4_Aestuariimicrobium(G)\" #> OTU6 \"5_Actinomycetales(O)\" #> OTU7 \"6_Nocardioidaceae(F)\" #> OTU8 \"7_Propionicimonas(G)\" #> OTU9 \"8_Unclassified(K)\" #> OTU10 \"9_Actinomycetales(O)\" #--- Example 6 -------------------------------------------------------------- # - Same as before, but OTU9 is now renamed correctly renamed6 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\", \"Unclassified\"), 
pat = \"\", substPat = \"_()\") renamed6 #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"1\" \"1_1(K)\" \"1_1(K)\" \"1_1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Family Genus #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" #> OTU2 \"1_Actinomycetales(O)\" \"1_Actinomycetales(O)\" #> OTU3 \"Propionibacteriaceae\" \"2_Propionibacteriaceae(F)\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" #> OTU6 \"2_Actinomycetales(O)\" \"3_Actinomycetales(O)\" #> OTU7 \"Nocardioidaceae\" \"4_Nocardioidaceae(F)\" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" #> OTU9 \"3_1(K)\" \"5_1(K)\" #> OTU10 \"4_Actinomycetales(O)\" \"6_Actinomycetales(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1_Actinomycetales(O)\" #> OTU3 \"2_Propionibacteriaceae(F)\" #> OTU4 \"3_Tessaracoccus(G)\" #> OTU5 \"4_Aestuariimicrobium(G)\" #> OTU6 \"5_Actinomycetales(O)\" #> OTU7 \"6_Nocardioidaceae(F)\" #> OTU8 \"7_Propionicimonas(G)\" #> OTU9 \"8_1(K)\" #> OTU10 \"9_Actinomycetales(O)\" #--- Example 7 -------------------------------------------------------------- # - Add \"(: unknown)\" to unknown names # - e.g., unknown genus -> \"1 Streptococcaceae (Genus: unknown)\" renamed7 <- renameTaxa(taxtab, unclass = NULL, unknown = c(NA, \" \", \"unclassified\", 
\"Unclassified\"), pat = \"\", substPat = \" (: unknown)\") renamed7 #> Kingdom Phylum Class #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> OTU9 \"1\" \"1 1 (Phylum: unknown)\" \"1 1 (Class: unknown)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" #> Order Family #> OTU1 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU2 \"Actinomycetales\" \"1 Actinomycetales (Family: unknown)\" #> OTU3 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU4 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU5 \"Actinomycetales\" \"Propionibacteriaceae\" #> OTU6 \"Actinomycetales\" \"2 Actinomycetales (Family: unknown)\" #> OTU7 \"Actinomycetales\" \"Nocardioidaceae\" #> OTU8 \"Actinomycetales\" \"Nocardioidaceae\" #> OTU9 \"1 1 (Order: unknown)\" \"3 1 (Family: unknown)\" #> OTU10 \"Actinomycetales\" \"4 Actinomycetales (Family: unknown)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"1 Actinomycetales (Genus: unknown)\" #> OTU3 \"2 Propionibacteriaceae (Genus: unknown)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"3 Actinomycetales (Genus: unknown)\" #> OTU7 \"4 Nocardioidaceae (Genus: unknown)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"5 1 (Genus: unknown)\" #> OTU10 \"6 Actinomycetales (Genus: unknown)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"1 Actinomycetales (Species: unknown)\" #> OTU3 \"2 Propionibacteriaceae (Species: unknown)\" #> OTU4 \"3 Tessaracoccus (Species: unknown)\" #> OTU5 \"4 Aestuariimicrobium (Species: unknown)\" #> OTU6 \"5 Actinomycetales (Species: unknown)\" #> OTU7 \"6 
Nocardioidaceae (Species: unknown)\" #> OTU8 \"7 Propionicimonas (Species: unknown)\" #> OTU9 \"8 1 (Species: unknown)\" #> OTU10 \"9 Actinomycetales (Species: unknown)\" #--- Example 8 -------------------------------------------------------------- # - Do not substitute unknowns and unclassified taxa by higher ranks # - e.g., unknown genus -> \"1\" renamed8 <- renameTaxa(taxtab, pat = \"\", substPat = \"\") renamed8 #> Kingdom Phylum Class Order #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"Unclassified1\" \"unclassified1\" \"unclassified1\" \"unclassified1\" #> OTU10 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Family Genus Species #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified1\" \"unclassified1\" \"unclassified1\" #> OTU3 \"Propionibacteriaceae\" \"unclassified2\" \"unclassified2\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" \"1\" #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified3\" #> OTU6 \"1\" \"1\" \"2\" #> OTU7 \"Nocardioidaceae\" \"2\" \"3\" #> OTU8 \"Nocardioidaceae\" \"Propionicimonas\" \"4\" #> OTU9 \"unclassified2\" \"unclassified3\" \"unclassified4\" #> OTU10 \"2\" \"3\" \"5\" #--- Example 9 -------------------------------------------------------------- # - Error if ranks cannot be automatically determined # from column names or taxonomic names 
taxtab_noranks <- taxtab colnames(taxtab_noranks) <- paste0(\"Rank\", 1:ncol(taxtab)) head(taxtab_noranks) #> Rank1 Rank2 Rank3 Rank4 #> OTU1 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Bacteria\" \"Actinobacteria\" \"Actinobacteria\" \"Actinomycetales\" #> Rank5 Rank6 Rank7 #> OTU1 \"Propionibacteriaceae\" \"Propionibacterium\" \"Propionibacteriumacnes\" #> OTU2 \"unclassified\" \"unclassified\" \"unclassified\" #> OTU3 \"Propionibacteriaceae\" \"unclassified\" \"unclassified\" #> OTU4 \"Propionibacteriaceae\" \"Tessaracoccus\" NA #> OTU5 \"Propionibacteriaceae\" \"Aestuariimicrobium\" \"unclassified\" #> OTU6 NA NA NA if (FALSE) { # \\dontrun{ renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\") } # } # Ranks can either be given via \"ranks\" ... 
(ranks <- colnames(taxtab)) #> [1] \"Kingdom\" \"Phylum\" \"Class\" \"Order\" \"Family\" \"Genus\" \"Species\" renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\", ranks = ranks) renamed9 #> Rank1 Rank2 #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Rank3 Rank4 #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified1_Unclassified1(K)\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Rank5 #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" #> OTU6 \"1_Actinomycetales(O)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified2_Unclassified1(K)\" #> OTU10 \"2_Actinomycetales(O)\" #> Rank6 #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae(F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales(O)\" #> OTU7 \"2_Nocardioidaceae(F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(K)\" #> OTU10 \"3_Actinomycetales(O)\" #> Rank7 #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales(O)\" #> OTU3 
\"unclassified2_Propionibacteriaceae(F)\" #> OTU4 \"1_Tessaracoccus(G)\" #> OTU5 \"unclassified3_Aestuariimicrobium(G)\" #> OTU6 \"2_Actinomycetales(O)\" #> OTU7 \"3_Nocardioidaceae(F)\" #> OTU8 \"4_Propionicimonas(G)\" #> OTU9 \"unclassified4_Unclassified1(K)\" #> OTU10 \"5_Actinomycetales(O)\" # ... or \"ranksAbb\" (we now use the lower case within \"substPat\") (ranks <- substr(colnames(taxtab), 1, 1)) #> [1] \"K\" \"P\" \"C\" \"O\" \"F\" \"G\" \"S\" renamed9 <- renameTaxa(taxtab_noranks, pat = \"\", substPat = \"_()\", ranksAbb = ranks) renamed9 #> Rank1 Rank2 #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(k)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Rank3 Rank4 #> OTU1 \"Actinobacteria\" \"Actinomycetales\" #> OTU2 \"Actinobacteria\" \"Actinomycetales\" #> OTU3 \"Actinobacteria\" \"Actinomycetales\" #> OTU4 \"Actinobacteria\" \"Actinomycetales\" #> OTU5 \"Actinobacteria\" \"Actinomycetales\" #> OTU6 \"Actinobacteria\" \"Actinomycetales\" #> OTU7 \"Actinobacteria\" \"Actinomycetales\" #> OTU8 \"Actinobacteria\" \"Actinomycetales\" #> OTU9 \"unclassified1_Unclassified1(k)\" \"unclassified1_Unclassified1(k)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales\" #> Rank5 #> OTU1 \"Propionibacteriaceae\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"Propionibacteriaceae\" #> OTU4 \"Propionibacteriaceae\" #> OTU5 \"Propionibacteriaceae\" #> OTU6 \"1_Actinomycetales(o)\" #> OTU7 \"Nocardioidaceae\" #> OTU8 \"Nocardioidaceae\" #> OTU9 \"unclassified2_Unclassified1(k)\" #> OTU10 \"2_Actinomycetales(o)\" #> Rank6 #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"unclassified2_Propionibacteriaceae(f)\" #> 
OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales(o)\" #> OTU7 \"2_Nocardioidaceae(f)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(k)\" #> OTU10 \"3_Actinomycetales(o)\" #> Rank7 #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales(o)\" #> OTU3 \"unclassified2_Propionibacteriaceae(f)\" #> OTU4 \"1_Tessaracoccus(g)\" #> OTU5 \"unclassified3_Aestuariimicrobium(g)\" #> OTU6 \"2_Actinomycetales(o)\" #> OTU7 \"3_Nocardioidaceae(f)\" #> OTU8 \"4_Propionicimonas(g)\" #> OTU9 \"unclassified4_Unclassified1(k)\" #> OTU10 \"5_Actinomycetales(o)\" #--- Example 10 ------------------------------------------------------------- # - Make names of ranks \"Family\" and \"Order\" unique by adding numbers to # duplicated names renamed10 <- renameTaxa(taxtab, pat = \"\", substPat = \"_()\", numDupli = c(\"Family\", \"Order\")) renamed10 #> Kingdom Phylum #> OTU1 \"Bacteria\" \"Actinobacteria\" #> OTU2 \"Bacteria\" \"Actinobacteria\" #> OTU3 \"Bacteria\" \"Actinobacteria\" #> OTU4 \"Bacteria\" \"Actinobacteria\" #> OTU5 \"Bacteria\" \"Actinobacteria\" #> OTU6 \"Bacteria\" \"Actinobacteria\" #> OTU7 \"Bacteria\" \"Actinobacteria\" #> OTU8 \"Bacteria\" \"Actinobacteria\" #> OTU9 \"Unclassified1\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Bacteria\" \"Actinobacteria\" #> Class Order #> OTU1 \"Actinobacteria\" \"Actinomycetales1\" #> OTU2 \"Actinobacteria\" \"Actinomycetales2\" #> OTU3 \"Actinobacteria\" \"Actinomycetales3\" #> OTU4 \"Actinobacteria\" \"Actinomycetales4\" #> OTU5 \"Actinobacteria\" \"Actinomycetales5\" #> OTU6 \"Actinobacteria\" \"Actinomycetales6\" #> OTU7 \"Actinobacteria\" \"Actinomycetales7\" #> OTU8 \"Actinobacteria\" \"Actinomycetales8\" #> OTU9 \"unclassified1_Unclassified1(K)\" \"unclassified1_Unclassified1(K)\" #> OTU10 \"Actinobacteria\" \"Actinomycetales9\" #> Family #> OTU1 \"Propionibacteriaceae1\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"Propionibacteriaceae2\" #> 
OTU4 \"Propionibacteriaceae3\" #> OTU5 \"Propionibacteriaceae4\" #> OTU6 \"1_Actinomycetales6(O)\" #> OTU7 \"Nocardioidaceae1\" #> OTU8 \"Nocardioidaceae2\" #> OTU9 \"unclassified2_Unclassified1(K)\" #> OTU10 \"2_Actinomycetales9(O)\" #> Genus #> OTU1 \"Propionibacterium\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae2(F)\" #> OTU4 \"Tessaracoccus\" #> OTU5 \"Aestuariimicrobium\" #> OTU6 \"1_Actinomycetales6(O)\" #> OTU7 \"2_Nocardioidaceae1(F)\" #> OTU8 \"Propionicimonas\" #> OTU9 \"unclassified3_Unclassified1(K)\" #> OTU10 \"3_Actinomycetales9(O)\" #> Species #> OTU1 \"Propionibacteriumacnes\" #> OTU2 \"unclassified1_Actinomycetales2(O)\" #> OTU3 \"unclassified2_Propionibacteriaceae2(F)\" #> OTU4 \"1_Tessaracoccus(G)\" #> OTU5 \"unclassified3_Aestuariimicrobium(G)\" #> OTU6 \"2_Actinomycetales6(O)\" #> OTU7 \"3_Nocardioidaceae1(F)\" #> OTU8 \"4_Propionicimonas(G)\" #> OTU9 \"unclassified4_Unclassified1(K)\" #> OTU10 \"5_Actinomycetales9(O)\" any(duplicated(renamed10[, \"Family\"])) #> [1] FALSE any(duplicated(renamed10[, \"Order\"])) #> [1] FALSE"},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for Objects of Class microNetComp — summary.microNetComp","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"main results returned netCompare printed well-arranged format.","code":""},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"","code":"# S3 method for class 'microNetComp' summary( object, groupNames = NULL, showCentr = \"all\", numbNodes = 10L, showGlobal = TRUE, showGlobalLCC = TRUE, showJacc = TRUE, showRand = TRUE, showGCD = TRUE, pAdjust = TRUE, digits = 3L, digitsPval = 6L, ... 
) # S3 method for class 'summary.microNetComp' print(x, ...)"},{"path":"https://netcomi.de/reference/summarize.microNetComp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for Objects of Class microNetComp — summary.microNetComp","text":"object object class microNetComp returned netCompare. groupNames character vector two elements giving group names two networks. NULL, names adopted object. showCentr character vector indicating centrality measures included summary. Possible values \"\", \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" \"none\". numbNodes integer indicating many nodes centrality values shall printed. Defaults 10 meaning first 10 taxa highest absolute group difference specific centrality measure shown. showGlobal logical. TRUE, global network properties whole network printed. showGlobalLCC logical. TRUE, global network properties largest connected component printed. network connected (number components 1) global properties printed (one arguments showGlobal showGlobalLCC) TRUE. showJacc logical. TRUE, Jaccard index printed. showRand logical. TRUE, adjusted Rand index (existent) returned. showGCD logical. TRUE, Graphlet Correlation Distance (existent) printed. pAdjust logical. permutation p-values (existent) adjusted TRUE (default) adjusted FALSE. digits integer giving number decimal places results rounded. Defaults 3L. digitsPval integer giving number decimal places p-values rounded. Defaults 6L. ... used. 
x object class summary.microNetComp (returned summary.microNetComp).","code":""},{"path":[]},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary Method for Objects of Class microNetProps — summary.microNetProps","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"main results returned netAnalyze printed well-arranged format.","code":""},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"","code":"# S3 method for class 'microNetProps' summary( object, groupNames = NULL, showCompSize = TRUE, showGlobal = TRUE, showGlobalLCC = TRUE, showCluster = TRUE, clusterLCC = FALSE, showHubs = TRUE, showCentr = \"all\", numbNodes = NULL, digits = 5L, ... ) # S3 method for class 'summary.microNetProps' print(x, ...)"},{"path":"https://netcomi.de/reference/summarize.microNetProps.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary Method for Objects of Class microNetProps — summary.microNetProps","text":"object object class microNetProps (returned netAnalyze). groupNames character vector two elements giving group names corresponding two networks. NULL, names adopted object. Ignored object contains single network. showCompSize logical. TRUE, component sizes printed. showGlobal logical. TRUE, global network properties whole network printed. showGlobalLCC logical. TRUE, global network properties largest connected component printed. network connected (number components 1) global properties printed (one arguments showGlobal showGlobalLCC) TRUE. showCluster logical. TRUE, cluster(s) printed. clusterLCC logical. TRUE, clusters printed largest connected component. Defaults FALSE (whole network). showHubs logical. TRUE, detected hubs printed. 
showCentr character vector indicating centrality measures results shall printed. Possible values \"\", \"degree\", \"betweenness\", \"closeness\", \"eigenvector\" \"none\". numbNodes integer indicating many nodes centrality values shall printed. Defaults 10L single network 5L two networks. Thus, case single network, first 10 nodes highest centrality value specific centrality measure shown. object contains two networks, centrality measure split matrix shown upper part contains highest values first group, lower part highest values second group. digits integer giving number decimal places results rounded. Defaults 5L. ... used. x object class summary.microNetProps (returned summary.microNetProps).","code":""},{"path":[]},{"path":"https://netcomi.de/reference/testGCM.html","id":null,"dir":"Reference","previous_headings":"","what":"Test GCM(s) for statistical significance — testGCM","title":"Test GCM(s) for statistical significance — testGCM","text":"function tests whether graphlet correlations (entries GCM) significantly different zero. two GCMs given, graphlet correlations two networks tested significantly different, .e., Fisher's z-test performed test absolute differences graphlet correlations significantly different zero.","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test GCM(s) for statistical significance — testGCM","text":"","code":"testGCM( obj1, obj2 = NULL, adjust = \"adaptBH\", lfdrThresh = 0.2, trueNullMethod = \"convest\", alpha = 0.05, verbose = TRUE )"},{"path":"https://netcomi.de/reference/testGCM.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test GCM(s) for statistical significance — testGCM","text":"obj1 object class GCM GCD returned calcGCM calcGCD. See details. obj2 optional object class GCM returned calcGCM. See details. adjust character indicating method used multiple testing adjustment. 
Possible values \"lfdr\" (default) local false discovery rate correction (via fdrtool), \"adaptBH\" adaptive Benjamini-Hochberg method (Benjamini Hochberg, 2000), one methods provided p.adjust. lfdrThresh defines threshold local fdr \"lfdr\" chosen method multiple testing correction. Defaults 0.2 meaning differences corresponding local fdr less equal 0.2 identified significant. trueNullMethod character indicating method used estimating proportion true null hypotheses vector p-values. Used adaptive Benjamini-Hochberg method multiple testing adjustment (chosen adjust = \"adaptBH\"). Accepts provided options method argument propTrueNull: \"convest\" (default), \"lfdr\", \"mean\", \"hist\". Can alternatively \"farco\" \"iterative plug-method\" proposed Farcomeni (2007). alpha numeric value 0 1 giving desired significance level. verbose logical. TRUE (default), progress messages printed.","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test GCM(s) for statistical significance — testGCM","text":"list following elements: Additional elements two GCMs given:","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Test GCM(s) for statistical significance — testGCM","text":"applying Student's t-test Fisher-transformed correlations, entries GCM(s) tested significantly different zero: H0: gc_ij = 0 vs. H1: gc_ij != 0, gc_ij graphlet correlations. GCMs given obj1 class GCD, absolute differences graphlet correlations tested different zero using Fisher's z-test. hypotheses : H0: |d_ij| = 0 vs. 
H1: |d_ij| > 0, d_ij = gc1_ij - gc2_ij","code":""},{"path":"https://netcomi.de/reference/testGCM.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test GCM(s) for statistical significance — testGCM","text":"","code":"# See help page of calcGCD() ?calcGCD"},{"path":[]},{"path":"https://netcomi.de/news/index.html","id":"new-features-1-1-0","dir":"Changelog","previous_headings":"","what":"New features","title":"NetCoMi 1.1.0","text":"renameTaxa(): New function renaming taxa taxonomic table. comes functionality making unknown unclassified taxa unique substituting next higher known taxonomic level. E.g., unknown genus “g__“, family next higher known level, can automatically renamed ”1_Streptococcaceae(F)“. User-defined patterns determine format known substituted names. Unknown names (e.g., NAs) unclassified taxa can handled separately. Duplicated names within one chosen ranks can also made unique numbering consecutively. editLabels(): New function editing node labels, .e., shortening certain length removing unwanted characters. used NetCoMi’s plot functions plot.microNetProps() plot.diffnet(). netCompare(): adjusted Rand index also computed largest connected component (LCC). summary method adapted. Argument “testRand” added netCompare(). Performing permutation test adjusted Rand index can now disabled save run time. Graphlet-based network measures implemented. NetCoMi contains two new exported functions calcGCM() calcGCD() compute Graphlet Correlation Matrix (GCM) network Graphlet Correlation Distance (GCD) two networks. Orbits graphlets four nodes considered. Furthermore, GCM computed netAnalyze() GCD netCompare() (whole network largest connected component, respectively). Also orbit counts returned. GCD added summary class microNetComp objects returned netCompare(). Significance test GCD: permutation tests conducted netCompare(), GCD tested significantly different zero. 
New function testGCM() test graphlet-based measures significance. single GCM, correlations tested significantly different zero. two GCMs given, tested correlations significantly different two groups, , absolute differences correlations ( |gc1_{ij}-gc2_{ij}| ) tested different zero. New function plotHeat() plotting mixed heatmap , instance, values shown upper triangle corresponding p-values significance codes lower triangle. function used plotting heatmaps GCMs, also used association matrices. netAnalyze() now default returns heatmap GCM(s) graphlet correlations upper triangle significance codes lower triangle. Argument “doPlot” added plot.microNetProps() suppress plot return value interest. New “show” arguments added summary methods class microNetProps microNetComp objects. specify network properties printed summary. See help pages summary.microNetProps summary.microNetComp() details. New zero replacement method “pseudoZO” available netConstruct(). Instead adding desired pseudo count whole count matrix, added zero counts pseudoZO chosen. behavior “pseudo” (available method pseudo count added counts) changed. Adding pseudo count zeros preserves ratios non-zero counts, desirable. createAssoPerm() now accepts objects class microNet input (addition objects class microNetProps). SPRING's fast version latent correlation computation (implemented mixedCCA) available . can used setting netConstruct() parameter measurePar$Rmethod “approx”, now default . function multAdjust() now argument pTrueNull pre-define proportion true null hypotheses adaptive BH method. netConstruct() new argument assoBoot, enables computation bootstrap association matrices outside netConstruct() bootstrapping used sparsification. example added help page ?netConstruct. 
feature might useful large association matrices (working memory might reach limit).","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-1-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.1.0","text":"netConstruct(): Using “bootstrap” sparsification method combination one association methods “bicor”, “cclasso”, “ccrepe”, “gcoda” led error: argument \"verbose\" missing, default, fixed. “signedPos” transformation work properly. Dissimilarities corresponding negative correlations set zero instead infinity. editLabels(): function (thus also plot.microNetProps) threw error taxa renamed renameTaxa data contain 9 taxa equal names, double-digit numbers added avoid duplicates. Issues network analysis plotting association matrices used network construction, row /column names missing. (issue #65) diffnet() threw error association matrices used network construction instead count matrices. (issue #66) plot.microNetProps(): function now directly returns error x expected class. cut parameter changed. cclasso(): rare cases, function produced complex numbers, led error.","code":""},{"path":"https://netcomi.de/news/index.html","id":"further-changes-1-1-0","dir":"Changelog","previous_headings":"","what":"Further changes","title":"NetCoMi 1.1.0","text":"permutation tests: permuted group labels must now different original group vector. words, original group vector strictly avoided matrix permuted group labels. far, duplicates avoided. exact permutation tests (nPerm equals possible number permutations), original group vector still included permutation matrix. calculation p-values adapted new behavior: p=B/N exact p-values p=(B+1)/(N+1) approximated p-values, B number permutation test statistics larger equal observed one, N number permutations. far, p=(B+1)/(N+1) used cases. plot.microNetProps(): default shortenLabels now “none”, .e. labels shortened default, avoid confusion node labels. 
edge filter (specified via edgeFilter edgeInvisFilter) now refers estimated association/dissimilarities instead edge weights. E.g., setting threshold 0.3 association network hides edges corresponding absolute association 0.3 even though edge weight might different (depending transformation used network construction). (issue #26) two networks constructed cut parameter user-defined, mean two determined cut parameters now used networks edge thicknesses comparable. expressive messages errors diffnet plot.diffnet differential associations detected. New function .suppress_warnings() suppress certain warnings returned external functions. netConstruct “multRepl” used zero handling: warning proportion zeros suppressed setting multRepl() parameter “z.warning” 1. functions makeCluster stopCluster parallel package now used parallel computation snow package sometimes led problems Unix machines.","code":""},{"path":"https://netcomi.de/news/index.html","id":"style-1-1-0","dir":"Changelog","previous_headings":"","what":"Style","title":"NetCoMi 1.1.0","text":"whole R code reformatted follow general conventions. element \"clustering_lcc\" part netAnalyze output changed \"clusteringLCC\" line remaining output. Input argument checking exported function revised. New functions .checkArgsXxx() added perform argument checking outside main functions. Non-exported functions renamed follow general naming conventions, .e. Bioconductor: Use camelCase functions. Non-exported functions prefix “.” following functions renamed:","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcomi-103","dir":"Changelog","previous_headings":"","what":"NetCoMi 1.0.3","title":"NetCoMi 1.0.3","text":"minor release bug fixes changes documentation.","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-0-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.0.3","text":"netConstruct() threw error data row /column names, fixed. 
edge list added output netConstruct() (issue #41). See help page details. SPRING’s fast version latent correlation computation (implemented mixedCCA) currently available due deprecation R package chebpol. issue fixed setting netConstruct() parameter measurePar$Rmethod internally “original” SPRING used association estimation. plot.microNetProps(): xpd parameter changed NA plotting outside plot region possible (useful legends additional text). Labels network plot can now suppressed setting labels = FALSE (issue #43) netCompare() function threw error one permutation networks empty, .e. edges weight different zero (issue #38), now fixed. Fix issues #29 #40, permutation tests terminate small sample sizes. Now, possible number permutations (resulting sample size) smaller defined user, function stops returns error. Fix bug diffnet() (issue #51), colors differential networks changed. diffnet() threw error netConstruct() argument jointPrepro set TRUE.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcomi-102","dir":"Changelog","previous_headings":"","what":"NetCoMi 1.0.2","title":"NetCoMi 1.0.2","text":"release includes range new features fixes known bugs issues.","code":""},{"path":[]},{"path":"https://netcomi.de/news/index.html","id":"improved-installation-process-1-0-2","dir":"Changelog","previous_headings":"New features","what":"Improved installation process","title":"NetCoMi 1.0.2","text":"Packages optionally required certain settings installed together NetCoMi anymore. Instead, new function installNetCoMiPacks() installing remaining packages. 
installed via installNetCoMiPacks(), required package installed respective NetCoMi function needed.","code":""},{"path":"https://netcomi.de/news/index.html","id":"installnetcomipacks-1-0-2","dir":"Changelog","previous_headings":"New features","what":"installNetCoMiPacks()","title":"NetCoMi 1.0.2","text":"New function installing R packages used NetCoMi listed dependencies imports NetCoMi’s description file.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netconstruct-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netConstruct()","title":"NetCoMi 1.0.2","text":"New argument matchDesign: Implements matched-group (.e. matched-pair) designs, used permutation tests netCompare() diffnet(). c(1,2), instance, means one sample first group matched two samples second group. argument NULL, matched-group design kept generating permuted data. New argument jointPrepro: Specifies whether two data sets (group one two) preprocessed together. Preprocessing includes sample taxa filtering, zero treatment, normalization. Defaults TRUE data group given, FALSE data data2 given, similar behavior NetCoMi 1.0.1. dissimilarity networks, joint preprocessing possible. mclr(){SPRING} now available normalization method. clr{SpiecEasi} used centered log-ratio transformation instead cenLR(){robCompositions}. \"symBetaMode\" accepted list element measurePar, passed symBeta(){SpiecEasi}. needed SpiecEasi SPRING associations. pseudocount (zeroMethod = \"pseudo\") may freely specified. v1.0.1, unit pseudocounts possible.","code":""},{"path":"https://netcomi.de/news/index.html","id":"netanalyze-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netAnalyze()","title":"NetCoMi 1.0.2","text":"Global network properties now computed whole network well largest connected component (LCC). summary network properties now contains whole network statistics based shortest paths (, generally, also meaningful disconnected networks). 
LCC, global properties available NetCoMi shown. New global network properties (see docu netAnalyze() definitions): Number components (whole network) Relative LCC size (LCC) Positive edge percentage Natural connectivity Average dissimilarity (meaningful LCC) Average path length (meaningful LCC) New argument centrLCC: Specifies whether compute centralities LCC. TRUE, centrality values disconnected components zero. New argument avDissIgnoreInf: Indicates whether infinite values ignored average dissimilarity. FALSE, infinities set 1. New argument sPathAlgo: Algorithm used computing shortest paths New argument sPathNorm: Indicates whether shortest paths normalized average dissimilarity improve interpretability. New argument normNatConnect: Indicates whether normalize natural connectivity values. New argument weightClustCoef: Specifies algorithm used computing global clustering coefficient. FALSE, transitivity(){igraph} type = \"global\" used (similar NetCoMi 1.0.1). TRUE, local clustering coefficient computed using transitivity(){igraph} type = \"barrat\". global clustering coefficient arithmetic mean local values. Argument connect changed connectivity. Documentation extended definitions network properties.","code":""},{"path":"https://netcomi.de/news/index.html","id":"summarymicronetprops-1-0-2","dir":"Changelog","previous_headings":"New features","what":"summary.microNetProps()","title":"NetCoMi 1.0.2","text":"New argument clusterLCC: Indicates whether clusters shown whole network LCC. print method summary.microNetProps completely revised.","code":""},{"path":"https://netcomi.de/news/index.html","id":"plotmicronetprops-1-0-2","dir":"Changelog","previous_headings":"New features","what":"plot.microNetProps()","title":"NetCoMi 1.0.2","text":"normalization methods available network construction can now used scaling node sizes (argument nodeSize). New argument normPar: Optional parameters used normalization. 
Usage colorVec changed: Node colors can now set separately groups (colorVec can single vector list two vectors). Usage depends nodeColor (see docu colorVec). New argument sameFeatCol: nodeColor = \"feature\" colorVec given, sameFeatCol indicates whether features colors groups. Argument colorNegAsso renamed negDiffCol. Using old name leads warning. New functionality using layout groups (two networks plotted). addition computing layout one group adopting group, union layout can computed used groups nodes placed optimal possible equally networks. option applied via sameLayout = TRUE layoutGroup = \"union\". Many thanks Christian L. Müller Alice Sommer providing idea R code new feature!","code":""},{"path":"https://netcomi.de/news/index.html","id":"netcompare-1-0-2","dir":"Changelog","previous_headings":"New features","what":"netCompare()","title":"NetCoMi 1.0.2","text":"New arguments storing association count matrices permuted data external file: fileLoadAssoPerm fileLoadCountsPerm storeAssoPerm fileStoreAssoPerm storeCountsPerm fileStoreCountsPerm New argument returnPermProps: TRUE, global network properties respective absolute group differences permuted data returned. New argument returnPermCentr: TRUE, computed centrality values respective absolute group differences permuted data returned list matrix centrality measure. arguments assoPerm dissPerm still existent compatibility NetCoMi 1.0.1 former elements assoPerm dissPerm returned anymore (matrices stored external file instead).","code":""},{"path":"https://netcomi.de/news/index.html","id":"createassoperm-1-0-2","dir":"Changelog","previous_headings":"New features","what":"createAssoPerm()","title":"NetCoMi 1.0.2","text":"New function creating association/dissimilarity matrices permuted count data. stored count association/dissimilarity matrices can passed netCompare() diffnet() decrease runtime. function also allows generate matrix permuted group labels without computing associations. 
Using matrix, createAssoPerm() furthermore allows estimate permutation associations/dissimilarities blocks (passing subset permuted group matrix createAssoPerm()).","code":""},{"path":"https://netcomi.de/news/index.html","id":"summarymicronetcomp-1-0-2","dir":"Changelog","previous_headings":"New features","what":"summary.microNetComp()","title":"NetCoMi 1.0.2","text":"Summary method adapted new network properties (analogous summary microNetProps objects, returned netAnalyze())","code":""},{"path":"https://netcomi.de/news/index.html","id":"diffnet-1-0-2","dir":"Changelog","previous_headings":"New features","what":"diffnet()","title":"NetCoMi 1.0.2","text":"New arguments storing association count matrices permuted data external file: fileLoadAssoPerm fileLoadCountsPerm storeAssoPerm fileStoreAssoPerm storeCountsPerm fileStoreCountsPerm argument assoPerm still existent compatibility NetCoMi 1.0.1 former element assoPerm returned anymore (matrices stored external file instead). Changed output: permutation tests Fisher’s z-test, vector matrix p-values corresponding matrix group differences returned without multiple testing adjustment. Documentation revised.","code":""},{"path":"https://netcomi.de/news/index.html","id":"plotdiffnet-1-0-2","dir":"Changelog","previous_headings":"New features","what":"plot.diffnet()","title":"NetCoMi 1.0.2","text":"New argument adjusted: Indicates whether adjacency matrix (matrix group differences) based adjusted unadjusted p-values plotted. New argument legendPos positioning legend. New argument legendArgs specifying arguments passed legend.","code":""},{"path":"https://netcomi.de/news/index.html","id":"coltotransp-1-0-2","dir":"Changelog","previous_headings":"New features","what":"colToTransp()","title":"NetCoMi 1.0.2","text":"function now exported name changed col_to_transp() colToTransp(). 
function expects color vector input adds transparency color.","code":""},{"path":"https://netcomi.de/news/index.html","id":"bug-fixes-1-0-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"NetCoMi 1.0.2","text":"major issues fixed release : following error solved: Error update.list(...): argument \"new\" missing. error caused conflict SpiecEasi metagenomeSeq, particular gplot dependency metagenomeSeq. former version gplot depended gdata, caused conflict. , please update gplot remove package gdata fix error. sparcc() SpiecEasi package now used estimating SparCC associations. users, NetCoMi’s Rcpp implementation SparCC caused errors installing NetCoMi. fixed, Rcpp implementation included , users can decide two SparCC versions. VST transformations now computed correctly. Error plotting two networks, one network empty, fixed.","code":""}]