Network building - suggestion of a pipeline and discussion

Ensemble pipeline

Here, the details of the ensemble part of the network building pipeline published in PLoS Computational Biology 8, e1002606 are explained. Tutorial 3 exemplifies this pipeline.
  1. The matrix with read counts is filtered (to discard taxa that do not occur in enough samples) and normalized sample-wise for each row group (body sites in our case). After normalization, abundances of higher-level taxa are computed as the sum of their member taxa's abundances (a minimal R sketch of this step is given after the list).
  2. For each of four measures (Pearson, Spearman, Kullback-Leibler, Bray-Curtis), all allowed pairwise scores are computed (parent-child relationships between taxa are forbidden) and the 1,000 top- and 1,000 bottom-ranked edges are kept for each measure, resulting in an initial multigraph with 1,000 x 2 x 4 = 8,000 edges (the four measures are sketched after the list).
  3. The network is recomputed for 1,000 permutations, in which taxon profiles are shuffled. The permutations are carried out edge-wise and, for Pearson and Spearman, are followed by a renormalization of the full matrix. Renormalization can shift the permutation (null) distribution of correlation coefficients away from zero, thus mitigating compositional effects (sketched after the list).
  4. Next, the network is recomputed for 1,000 bootstrapped matrices, that is, matrices obtained from the original matrix by sampling its columns (with equal probability) with replacement. Bootstrapping yields a confidence interval around each edge score (steps 4 and 5 are sketched together after the list).
  5. Edges with scores not within the limits of the 95% confidence interval defined by the bootstrap distribution are discarded.
  6. A measure- and edge-specific p-value is then obtained from the Gauss curve defined by the mean and standard deviation of the bootstrap distribution. The p-value is the area under this curve from the mean of the permutation distribution to the left or the right tail (which tail depends on whether the measure is a distance or a similarity). In R, the p-value can be computed thus: p_val <- pnorm(mean_permut, mean=mean_boot, sd=sd_boot)
  7. In the article, we pooled the variances of the bootstrap and permutation distributions as follows: pooled_sd <- sqrt((var_boot + var_permut)/2)
    and computed the p-value as: p_val <- pnorm(mean_permut, mean=mean_boot, sd=pooled_sd)
    Variance pooling takes the standard deviation of the null distribution into account, which would otherwise be ignored.
  8. Since the p-value is computed in a one-sided test, very high p-values correspond to negative relationships (low similarities and high distances). These are converted into low p-values by computing 1 - p-value for all p-values above 0.5 (steps 6 to 8 are sketched together after the list).
  9. Each edge is now supported by a set of measure-specific p-values. These are dependent, since the measures are correlated (that is, the ranks they assign to edges are correlated). To merge the p-values, we used Simes' method: given the ordered p-values p(1) <= ... <= p(n), the merged p-value of the edge is the minimum of n*p(i)/i over all i (steps 9 and 10 are sketched together after the list).
  10. Merged p-values were then corrected for multiple testing with Benjamini-Hochberg (BH). In our article, multiple regression was included as a network inference method, which introduces dependencies between merged edge p-values, so BH was performed with the Yekutieli correction factor (Benjamini-Yekutieli). Without the regressions, BH alone is sufficient. Since the p-values of edges outside the initial edge set are not included in the p-value list, the p-values resulting from BH are currently over-optimistic. The initial edge list can of course be extended (by setting less restrictive initial thresholds), but at a computational cost that can make network building intractable.
  11. Edges with corrected p-values above the significance threshold of 0.05 were discarded.
  12. Additional filter steps, such as keeping only edges supported by at least two methods, are recommended.
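
A minimal R sketch of step 1 (filtering, sample-wise normalization and computation of a higher-level taxon). The toy matrix, the minimum occurrence of four samples and the choice of member taxa are invented for illustration; CoNet performs these steps internally:

    # toy count matrix: taxa as rows, samples as columns (invented data)
    counts <- matrix(rpois(5 * 10, lambda = 20), nrow = 5,
                     dimnames = list(paste0("taxon", 1:5), paste0("sample", 1:10)))
    # filter: discard taxa occurring (non-zero counts) in fewer than 4 samples
    counts <- counts[rowSums(counts > 0) >= 4, , drop = FALSE]
    # sample-wise normalization: divide each column by its sum
    rel <- sweep(counts, 2, colSums(counts), "/")
    # a higher-level taxon is the sum of its member taxa's abundances
    parent <- colSums(rel[c("taxon1", "taxon2"), ])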
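
A sketch of the four measures of step 2 for a single pair of taxa, continuing from the matrix rel above. The pseudocount in the Kullback-Leibler part is an assumption of this sketch; the exact variant implemented in CoNet may differ:

    x <- rel["taxon1", ]
    y <- rel["taxon3", ]
    pearson  <- cor(x, y, method = "pearson")
    spearman <- cor(x, y, method = "spearman")
    # Bray-Curtis dissimilarity
    bray <- sum(abs(x - y)) / sum(x + y)
    # symmetrized Kullback-Leibler dissimilarity; each profile is renormalized
    # to sum to one, and a small pseudocount avoids log(0)
    p <- (x + 1e-10) / sum(x + 1e-10)
    q <- (y + 1e-10) / sum(y + 1e-10)
    kld <- sum(p * log(p / q)) + sum(q * log(q / p))

In the full pipeline, such scores are computed for all allowed pairs, and the 1,000 best and 1,000 worst edges per measure are kept.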
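
A sketch of step 3, continuing from counts above: a permutation null distribution for Pearson with renormalization of the full matrix after each shuffle. Shuffling both member profiles of the edge is a choice of this sketch, not necessarily CoNet's exact implementation:

    n_perm <- 1000
    perm_scores <- replicate(n_perm, {
      shuffled <- counts
      shuffled["taxon1", ] <- sample(shuffled["taxon1", ])  # shuffle the profiles
      shuffled["taxon3", ] <- sample(shuffled["taxon3", ])
      relp <- sweep(shuffled, 2, colSums(shuffled), "/")    # renormalize the full matrix
      cor(relp["taxon1", ], relp["taxon3", ])               # recompute the edge score
    })
    mean(perm_scores)  # with few taxa, compositional effects shift this away from 0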
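
A sketch of steps 4 and 5, continuing from rel above: the edge score is bootstrapped by resampling the samples (columns) with replacement, and the edge is discarded if the observed score falls outside the 95% confidence interval of the bootstrap distribution:

    n_boot <- 1000
    boot_scores <- replicate(n_boot, {
      idx <- sample(ncol(rel), replace = TRUE)       # resample columns with replacement
      cor(rel["taxon1", idx], rel["taxon3", idx])
    })
    ci  <- quantile(boot_scores, c(0.025, 0.975))    # 95% confidence interval
    obs <- cor(rel["taxon1", ], rel["taxon3", ])     # observed edge score
    keep <- (obs >= ci[1]) && (obs <= ci[2])         # step 5: discard the edge if FALSE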
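
A sketch of steps 6 to 8, continuing from perm_scores and boot_scores above: the p-value is computed with and without variance pooling, and high p-values (negative relationships) are flipped:

    mean_boot   <- mean(boot_scores)
    sd_boot     <- sd(boot_scores)
    mean_permut <- mean(perm_scores)
    # step 6: area under the bootstrap Gauss curve up to the permutation mean
    p_val <- pnorm(mean_permut, mean = mean_boot, sd = sd_boot)
    # step 7: the pooled variant used in the article
    pooled_sd <- sqrt((var(boot_scores) + var(perm_scores)) / 2)
    p_val <- pnorm(mean_permut, mean = mean_boot, sd = pooled_sd)
    # step 8: in the one-sided test, p-values above 0.5 indicate negative relationships
    if (p_val > 0.5) p_val <- 1 - p_val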
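
A sketch of steps 9 and 10: measure-specific p-values of an edge are merged with Simes' method, and the merged p-values of all edges are corrected with R's p.adjust. The p-values below are invented; with the regressions included, method = "BY" applies the Yekutieli correction factor:

    # measure-specific p-values of one edge (invented values)
    p_edge <- c(pearson = 0.001, spearman = 0.003, kld = 0.04, bray = 0.02)
    # Simes' merged p-value: minimum of n * p(i) / i over the ordered p-values
    simes <- function(p) {
      p <- sort(p)
      min(length(p) * p / seq_along(p))
    }
    merged <- simes(p_edge)
    # step 10: multiple-testing correction across all merged edge p-values
    merged_all <- runif(8000)                       # stand-in for the merged p-values
    adj_bh <- p.adjust(merged_all, method = "BH")   # BH alone, without regressions
    adj_by <- p.adjust(merged_all, method = "BY")   # BH with the Yekutieli factor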

Further considerations

This pipeline has a number of problems and is still being refined. For instance, Simes' method does not account for the correlation between methods. We are experimenting with Brown's method for merging dependent p-values (Brown, "A Method for Combining Non-Independent, One-Sided Tests of Significance", Biometrics 31(4), 987-992), but it has its own disadvantage in the context of our pipeline: it requires the p-values of badly scoring edges, which we discard.

It is also debatable whether the Bray-Curtis dissimilarity should be dropped altogether. We observed it to be close to Kullback-Leibler in terms of edge ranking, but it is sensitive to outliers when used to discover negative relationships and is therefore not well suited for that purpose. There is, in addition, a conceptual problem in merging hyper-edges coming from multiple regression with edges from similarity measures.

Finally, we have observed cases where the interaction type (i.e. positive or negative relationship) as defined by the one-sided test (where high p-values point to a negative relationship) contradicts the interaction type defined by the initial thresholds. CoNet now removes all edges with such inconsistent interaction types (a sketch of this check is given below). When different measures disagree on the interaction type, CoNet sets the type to unknown.
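
The consistency check can be illustrated with a minimal R sketch for a similarity measure (for a distance, the mapping between test tail and interaction type is inverted). The function and threshold names are hypothetical, not CoNet's API:

    # hypothetical helper: TRUE if the one-sided test and the initial thresholds
    # agree on the interaction type of an edge scored by a similarity measure
    consistent <- function(p_raw, score, upper_init, lower_init) {
      sign_test <- if (p_raw > 0.5) "negative" else "positive"  # p-value before flipping
      sign_thresh <- NA
      if (score >= upper_init) sign_thresh <- "positive"
      if (score <= lower_init) sign_thresh <- "negative"
      identical(sign_test, sign_thresh)
    }
    # the edge passed the upper (positive) threshold, but the test points to negative:
    consistent(p_raw = 0.98, score = 0.7, upper_init = 0.5, lower_init = -0.5)  # FALSE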