Part of Advances in Neural Information Processing Systems 28 (NIPS 2015)
Siddhartha Banerjee, Peter Lofgren
We develop a new bidirectional algorithm for estimating Markov chain multi-step transition probabilities: given a Markov chain, we want to estimate the probability of hitting a given target state in ℓ steps after starting from a given source distribution. Given the target state t, we use a (reverse) local power iteration to construct an 'expanded target distribution', which has the same mean as the quantity we want to estimate, but a smaller variance -- this can then be sampled efficiently by a Monte Carlo algorithm. Our method extends to any Markov chain on a discrete (finite or countable) state-space, and can be extended to compute functions of multi-step transition probabilities such as PageRank, graph diffusions, hitting/return times, etc. Our main result is that in 'sparse' Markov chains -- wherein the number of transitions between states is comparable to the number of states -- the running time of our algorithm for a uniform-random target node is order-wise smaller than Monte Carlo and power iteration based algorithms; in particular, our method can estimate a probability p using only O(1/√p) running time.
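To make the two phases concrete, below is a minimal sketch of the bidirectional idea, not the paper's reference implementation: a reverse push phase settles probability mass backward from the target t level by level, leaving only small residuals, and a forward Monte Carlo phase runs random walks from the source and averages the residuals encountered along the way. The interfaces in_nbrs (in-edges with transition probabilities), step (one forward transition of the chain), and the parameters eps and n_walks are illustrative assumptions, not names from the paper.

```python
import random
from collections import defaultdict

def reverse_push(in_nbrs, t, ell, eps):
    """Reverse local power iteration toward target t.

    Maintains per-step estimates q[k][v] ~= P(X_k = t | X_0 = v) and
    residuals r[k][v], pushing any residual mass of at least eps one
    step further back along in-edges. in_nbrs(v) must yield pairs
    (u, P(u -> v)) -- a hypothetical interface for this sketch.
    """
    q = [defaultdict(float) for _ in range(ell + 1)]
    r = [defaultdict(float) for _ in range(ell + 1)]
    r[0][t] = 1.0
    for k in range(ell):                  # pushes at level k only feed level k+1
        for v, res in list(r[k].items()):
            if res < eps:
                continue                  # small residuals wait for the forward phase
            q[k][v] += res
            r[k][v] = 0.0
            for u, p_uv in in_nbrs(v):
                r[k + 1][u] += p_uv * res
    return q, r

def estimate(s, t, ell, step, in_nbrs, eps=1e-4, n_walks=2000):
    """Bidirectional estimate of P(X_ell = t | X_0 = s)."""
    q, r = reverse_push(in_nbrs, t, ell, eps)
    acc = 0.0
    for _ in range(n_walks):
        v = s
        walk_sum = r[ell].get(v, 0.0)     # residual met at step 0
        for i in range(1, ell + 1):
            v = step(v)                   # sample one forward transition
            walk_sum += r[ell - i].get(v, 0.0)
        acc += walk_sum
    return q[ell].get(s, 0.0) + acc / n_walks

# Toy 3-state chain, made up for illustration: P[u][v] = Pr(u -> v).
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.25, 2: 0.75}, 2: {1: 1.0}}
in_edges = defaultdict(list)
for u, row in P.items():
    for v, p in row.items():
        in_edges[v].append((u, p))

def step(v):
    x, last = random.random(), None
    for w, p in P[v].items():
        x, last = x - p, w
        if x <= 0:
            return w
    return last                           # guard against float rounding

print(estimate(s=0, t=2, ell=4, step=step, in_nbrs=lambda v: in_edges[v]))
```

The variance reduction the abstract describes shows up here as the cap eps on every residual a walk can collect: each walk's contribution is small and bounded, so far fewer samples are needed than for naive Monte Carlo, which must wait for rare exact hits of t.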