ebdbn {ebdbNet}        R Documentation
A function to infer the posterior mean and variance of network parameters using an empirical Bayes estimation procedure for a Dynamic Bayesian Network (DBN).
ebdbn(y, K, input = "feedback", conv.1 = .15, conv.2 = .05, conv.3 = .01, verbose = TRUE, max.iter = 100, max.subiter = 200)
y: A list of R (PxT) matrices of observed time course profiles (P genes, T time points)
K: Number of hidden states
input: "feedback" for feedback loop networks, or a list of R (MxT) matrices of input profiles
conv.1: Value of convergence criterion 1
conv.2: Value of convergence criterion 2
conv.3: Value of convergence criterion 3
verbose: Verbose output
max.iter: Maximum overall iterations (default value is 100)
max.subiter: Maximum iterations for hyperparameter updates (default value is 200)
ebdbn returns an object of class ebdbNet; its components are described below.
This function infers the parameters of a network, based on the state space model

x(t) = Ax(t-1) + Bu(t) + w(t)
y(t) = Cx(t) + Du(t) + z(t)

where x(t) represents the expression of K hidden states at time t, y(t) represents the expression of P observed states (e.g., genes) at time t, u(t) represents the values of M inputs at time t, w(t) ~ MVN(0, I), and z(t) ~ MVN(0, V^(-1)), with V = diag(v_1, ..., v_P). Note that the dimensions of the matrices A, B, C, and D are (KxK), (KxM), (PxK), and (PxM), respectively. When a network is estimated with feedback rather than inputs (input = "feedback"), the state space model is

x(t) = Ax(t-1) + By(t-1) + w(t)
y(t) = Cx(t) + Dy(t-1) + z(t)
The parameters of greatest interest are typically contained in the matrix D, which encodes the direct interactions among observed variables from one time to the next (in the case of feedback loops), or the direct interactions between inputs and observed variables at each time point (in the case of inputs).
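To make the feedback form concrete, the following is a minimal, purely illustrative R sketch of a single transition of this model. The dimensions and the matrices A, B, C, and D are invented here for illustration (in practice they are what ebdbn estimates, not what it takes as input); in the feedback case M = P, so B is (KxP) and D is (PxP).

## Illustrative sketch only: one step of the feedback-form state space model,
## with made-up dimensions and randomly generated A, B, C, D.
K <- 2; P <- 4                        # hypothetical numbers of hidden / observed states
A <- matrix(rnorm(K * K), K, K)       # (K x K): hidden-state dynamics
B <- matrix(rnorm(K * P), K, P)       # (K x P): feedback from observed states to hidden states
C <- matrix(rnorm(P * K), P, K)       # (P x K): loadings of hidden states onto observations
D <- matrix(rnorm(P * P), P, P)       # (P x P): direct interactions among observed states
x.prev <- rnorm(K)
y.prev <- rnorm(P)
x.new <- A %*% x.prev + B %*% y.prev + rnorm(K)                  # w(t) ~ MVN(0, I)
y.new <- C %*% x.new + D %*% y.prev + rnorm(P, sd = 1/sqrt(10))  # z(t) ~ MVN(0, V^(-1)), illustrative v_i = 10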
The value of K is chosen prior to running the algorithm by using hankel.
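For example, as in the example code at the end of this help page, K can be set from the data before the call:

## Select the number of hidden states from the observed time courses
## (y is the same list of (PxT) matrices that is passed to ebdbn)
K <- hankel(y)$dim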
The hidden states are estimated using the classic Kalman filter. Posterior distributions of A, B, C, and D are estimated using an empirical Bayes procedure based on a hierarchical Bayesian structure defined over the parameter set. Namely, if a(j), b(j), c(j), and d(j) denote vectors made up of the rows of matrices A, B, C, and D, respectively, then
a(j)|alpha ~ N(0, diag(alpha)^(-1))
b(j)|beta ~ N(0, diag(beta)^(-1))
c(j)|gamma ~ N(0, diag(gamma)^(-1))
d(j)|delta ~ N(0, diag(delta)^(-1))
where alpha = (alpha_1, ..., alpha_K), beta = (beta_1, ..., beta_M), gamma = (gamma_1, ..., gamma_K), and delta = (delta_1, ..., delta_M). An EM-like algorithm is used to estimate the hyperparameters in an iterative procedure conditioned on current estimates of the hidden states.
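As a purely illustrative aside (not part of the ebdbn call itself), the prior on each row d(j) of D is a zero-mean Gaussian whose element-wise variances are the inverses of the delta hyperparameters, so a large delta_m shrinks the corresponding coefficients toward zero. A small sketch with made-up numbers:

## Illustration only: draw one row d(j) of D from its prior N(0, diag(delta)^(-1))
M <- 3
delta <- c(0.5, 2, 10)                          # hypothetical precision hyperparameters
d.j <- rnorm(M, mean = 0, sd = 1 / sqrt(delta)) # larger delta_m => d_jm more tightly shrunk toward 0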
conv.1, conv.2, and conv.3 correspond to the convergence criteria Delta_1, Delta_2, and Delta_3 in the reference below, respectively. After the algorithm terminates, z-scores are calculated from the posterior distribution of the D matrix; these in turn determine the presence or absence of edges in the network.
See the reference below for additional details about the implementation of the algorithm.
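For instance, one simple way to read the returned z-scores (the z component listed below) is a two-sided standard normal cutoff; this is only a sketch of the idea, and plot.ebdbNet with its sig.level argument (see the example code at the end of this page) provides the packaged visualization of the resulting network.

## Sketch only: declare an edge wherever |z| exceeds the two-sided 95% normal cutoff
## (here z stands in for the z component returned by ebdbn)
z <- matrix(c(0.3, -2.5, 1.1, 3.0), nrow = 2)   # hypothetical z-statistics for D
edges <- abs(z) > qnorm(0.975)                  # TRUE = edge declared present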
APost: Posterior mean of matrix A
BPost: Posterior mean of matrix B
CPost: Posterior mean of matrix C
DPost: Posterior mean of matrix D
CvarPost: Posterior variance of matrix C
DvarPost: Posterior variance of matrix D
xPost: Posterior mean of hidden states x
alphaEst: Estimated value of alpha
betaEst: Estimated value of beta
gammaEst: Estimated value of gamma
deltaEst: Estimated value of delta
vEst: Estimated value of precisions v
muEst: Estimated value of mu
sigmaEst: Estimated value of Sigma
alliterations: Total number of iterations run
z: Z-statistics calculated from the posterior distribution of matrix D
type: Either "input" or "feedback", as specified by the user
Andrea Rau
Andrea Rau, Florence Jaffrezic, Jean-Louis Foulley, and R. W. Doerge (2010). An Empirical Bayesian Method for Estimating Biological Networks from Temporal Microarray Data. Statistical Applications in Genetics and Molecular Biology 9. Article 9.
hankel, calcSensSpec, plot.ebdbNet
library(ebdbNet)
tmp <- runif(1) ## Initialize random number generator
set.seed(125214) ## Save seed

## Simulate data
R <- 5
T <- 10
P <- 10
simData <- simulateVAR(R, T, P, v = rep(10, P), perc = 0.10)
Dtrue <- simData$Dtrue
y <- simData$y

## Simulate 8 inputs
u <- vector("list", R)
M <- 8
for(r in 1:R) {
    u[[r]] <- matrix(rnorm(M*T), nrow = M, ncol = T)
}

####################################################
## Run EB-DBN without hidden states
####################################################
## Choose alternative value of K using hankel if hidden states are to be estimated
## K <- hankel(y)$dim

## Run algorithm
net <- ebdbn(y = y, K = 0, input = u, conv.1 = 0.15, conv.2 = 0.10,
    conv.3 = 0.10, verbose = TRUE)

## Visualize results
## Note: no edges here, which is unsurprising as inputs were randomly simulated
## (in input networks, edges only go from inputs to genes)
## plot(net, sig.level = 0.95)