Part of Advances in Neural Information Processing Systems 11 (NIPS 1998)
Michael Lewicki, Terrence J. Sejnowski
A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time series that does not require blocking the data. The algorithm finds an efficient representation by inferring the best temporal positions for functions in a kernel basis. These can have arbitrary temporal extent and are not constrained to be orthogonal. This allows the model to capture structure in the signal that may occur at arbitrary temporal positions and preserves the relative temporal structure of underlying events. The model is shown to be equivalent to a very sparse and highly overcomplete basis. Under this model, the mapping from the data to the representation is nonlinear, but can be computed efficiently. This form also allows the use of existing methods for adapting the basis itself to the data. This approach is applied to speech data and results in a shift-invariant, spike-like representation that resembles coding in the cochlear nerve.
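The generative model described here can be sketched in a few lines: a signal is synthesized as a sum of kernel functions placed at arbitrary (non-block-aligned) temporal positions, which is equivalent to convolving a sparse "spike train" with each kernel. The following is a minimal illustrative sketch of that equivalence, not the paper's implementation; the kernel shape, event times, and amplitudes are hypothetical.

```python
import numpy as np

def synthesize(spike_times, amplitudes, kernel, length):
    """Reconstruct a signal from sparse (time, amplitude) events,
    each placing a copy of the kernel at its temporal position."""
    x = np.zeros(length)
    for t, a in zip(spike_times, amplitudes):
        end = min(length, t + len(kernel))
        x[t:end] += a * kernel[:end - t]
    return x

# A short damped-oscillation kernel (illustrative shape only).
n = np.arange(32)
kernel = (n ** 2) * np.exp(-n / 4.0) * np.sin(2 * np.pi * n / 8.0)
kernel /= np.linalg.norm(kernel)

# A very sparse code: three events at arbitrary temporal positions,
# not constrained to any block boundary.
spike_times = [5, 40, 97]
amplitudes = [1.0, -0.5, 0.8]
x = synthesize(spike_times, amplitudes, kernel, length=160)

# The same signal via convolution with an explicit sparse spike train,
# showing the equivalence to a highly overcomplete, shift-invariant basis
# (one basis function per kernel per temporal shift).
s = np.zeros(160)
for t, a in zip(spike_times, amplitudes):
    s[t] = a
x_conv = np.convolve(s, kernel)[:160]
assert np.allclose(x, x_conv)
```

Because every temporal shift of the kernel is an available basis function, the code is shift invariant: delaying the events simply delays the reconstruction, preserving the relative timing of the underlying events.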