The Interplay of Symbolic and Subsymbolic Processes in Anagram Problem Solving

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)



David Grimes, Michael C. Mozer


Although connectionist models have provided insights into the nature of perception and motor control, connectionist accounts of higher cognition seldom go beyond an implementation of traditional symbol-processing theories. We describe a connectionist constraint satisfaction model of how people solve anagram problems. The model exploits statistics of English orthography, but also addresses the interplay of subsymbolic and symbolic computation by a mechanism that extracts approximate symbolic representations (partial orderings of letters) from subsymbolic structures and injects the extracted representation back into the model to assist in the solution of the anagram. We show the computational benefit of this extraction-injection process and discuss its relationship to conscious mental processes and working memory. We also account for experimental data concerning the difficulty of anagram solution based on the orthographic structure of the anagram string and the target word.

Historically, the mind has been viewed from two opposing computational perspectives. The symbolic perspective views the mind as a symbolic information processing engine. According to this perspective, cognition operates on representations that encode logical relationships among discrete symbolic elements, such as stacks and structured trees, and cognition involves basic operations such as means-ends analysis and best-first search. In contrast, the subsymbolic perspective views the mind as performing statistical inference, and involves basic operations such as constraint-satisfaction search. The data structures on which these operations take place are numerical vectors.

In some domains of cognition, significant progress has been made through analysis from one computational perspective or the other. The thesis of our work is that many of these domains might be understood more completely by focusing on the interplay of subsymbolic and symbolic information processing. Consider the higher-cognitive domain of problem solving. At an abstract level of description, problem solving tasks can readily be formalized in terms of symbolic representations and operations. However, the neurobiological hardware that underlies human cognition appears to be subsymbolic: representations are noisy and graded, and the brain operates and adapts in a continuous fashion that is difficult to characterize in discrete symbolic terms. At some level, between the computational level of the task description and the implementation level of human neurobiology, the symbolic and subsymbolic accounts must come into contact with one another. We focus on this point of contact by proposing mechanisms by which symbolic representations can modulate subsymbolic processing, and mechanisms by which subsymbolic representations

are made symbolic. We conjecture that these mechanisms not only provide an account of the interplay of symbolic and subsymbolic processes in cognition, but also form a sensible computational strategy that outperforms purely subsymbolic computation, and hence that symbolic reasoning makes sense from an evolutionary perspective.

In this paper, we apply our approach to a high-level cognitive task, anagram problem solving. An anagram is a nonsense string of letters whose letters can be rearranged to form a word. For example, the solution to the anagram puzzle RYTEHO is THEORY. Anagram solving is an interesting task because it taps higher cognitive abilities and issues of awareness, it has a tractable state space, and interesting psychological data are available to model.
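As an aside, the task itself is easy to state computationally: a candidate word solves an anagram exactly when the two strings contain the same multiset of letters. A minimal sketch of that check (our own illustration, not part of the paper's model):

```python
# Illustration only: verify that a candidate word is a rearrangement of an
# anagram string by comparing the two letter multisets.
from collections import Counter

def is_solution(anagram: str, candidate: str) -> bool:
    """True iff the candidate uses exactly the letters of the anagram."""
    return Counter(anagram.upper()) == Counter(candidate.upper())

print(is_solution("RYTEHO", "THEORY"))  # the example from the text: True
```

The hard part, of course, is not verifying a solution but searching the space of permutations, which is what the model addresses.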

1 A Subsymbolic Computational Model

We start by presenting a purely subsymbolic model of anagram processing. By subsymbolic, we mean that the model utilizes only English orthographic statistics and does not have access to an English lexicon. We will argue that this model proves insufficient to explain human performance on anagram problem solving. However, it is a key component of a hybrid symbolic-subsymbolic model we propose, and is thus described in detail.

1.1 Problem Representation

A computational model of anagram processing must represent letter orderings. For example, the model must be capable of representing a solution such as <THEORY>, or any permutation of the letters such as <RYTEHO>. (The symbols "<" and ">" will be used to delimit the beginning and end of a string, respectively.) We adopted a representation of letter strings in which a string is encoded by the set of letter pairs (hereafter, bigrams) contained in the string; for example, the bigrams in <THEORY> are: <T, TH, HE, EO, OR, RY, and Y>. The delimiters < and > are treated as ordinary symbols of the alphabet. We capture letter pairings in a symbolic letter-ordering matrix, or symbolic ordering for short. Figure 1(a) shows the matrix, in which the rows indicate the first letter of the bigram, and the columns indicate the second. A cell of the matrix contains a value of 1 if the corresponding bigram is present in the string. (This matrix formalism and all procedures in the paper can be extended to handle strings with repeated letters, which we do not have space to discuss.) The matrix columns and rows can be thought of as consisting of all letters from A to Z, along with the delimiters < and >. However, in the Figure we have omitted rows and columns corresponding to letters not present in the anagram. Similarly, we have omitted the < from the column space and the > from the row space, as they could not by definition be part of any bigram. The seven bigrams indicated by the seven ones in the Figure uniquely specify the string THEORY.
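The encoding just described can be sketched in a few lines. The function and variable names below are our own illustration, not the paper's notation; the matrix is kept as a dictionary keyed by (row, column) pairs rather than an array:

```python
# Sketch of the symbolic letter-ordering matrix: a delimited string is
# encoded by the set of bigrams it contains; cell (r, c) = 1 iff the
# bigram r+c appears in the string.

# 28 symbols: the two delimiters plus the 26 letters A-Z.
ALPHABET = ['<'] + [chr(c) for c in range(ord('A'), ord('Z') + 1)] + ['>']

def bigrams(word: str) -> set:
    """Bigrams of the delimited string, e.g. THEORY -> {'<T','TH',...,'Y>'}."""
    s = '<' + word.upper() + '>'
    return {s[i] + s[i + 1] for i in range(len(s) - 1)}

def symbolic_ordering(word: str) -> dict:
    """Matrix as a dict: cell (row, col) = 1 iff bigram row+col is present."""
    bg = bigrams(word)
    return {(r, c): int(r + c in bg) for r in ALPHABET for c in ALPHABET}

M = symbolic_ordering("THEORY")
print(sorted(bigrams("THEORY")))  # the seven bigrams of <THEORY>
print(sum(M.values()))            # seven ones in the matrix
```

Note that, as in the text, a string with no repeated letters is uniquely recoverable from its bigram set by chaining bigrams from < to >.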

As we've described the matrix, cells contain the truth value of the proposition that a particular bigram appears in the string being represented. However, the cell values have an interesting alternative interpretation: as the probability that a particular bigram is present. Figure 1(b) illustrates a matrix of this sort, which we call a subsymbolic letter-ordering matrix, or subsymbolic ordering for short. In the Figure, the bigram TH occurs with probability 0.8. Although the symbolic orderings are obviously a subset of the subsymbolic orderings, the two representations play critically disparate roles in our model, and thus are treated as separate entities.

To formally characterize symbolic and subsymbolic ordering matrices, we define a mask vector, µ, having N = 28 elements, corresponding to the 26 letters of the alphabet plus the two delimiters. Element i of the mask, µi, is set to one if the corresponding letter appears in the anagram string and zero if it does not. In both the symbolic and subsymbolic orderings, the matrices are constrained such that elements in row i and column i must sum
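The mask vector itself is straightforward to construct. A minimal sketch, using our own names and the same 28-symbol alphabet as before:

```python
# Sketch of the mask vector mu: N = 28 elements (26 letters plus the two
# delimiters), with mu_i = 1 iff symbol i appears in the delimited anagram.
ALPHABET = ['<'] + [chr(c) for c in range(ord('A'), ord('Z') + 1)] + ['>']

def mask(anagram: str) -> list:
    """0/1 presence vector over the 28-symbol alphabet."""
    present = set('<' + anagram.upper() + '>')
    return [int(sym in present) for sym in ALPHABET]

mu = mask("RYTEHO")
print(len(mu), sum(mu))  # 28 symbols, 8 present (6 letters + 2 delimiters)
```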

[Figure 1: (a) symbolic and (b) subsymbolic letter-ordering matrices for the string THEORY. Rows index the first letter of each bigram and columns the second; rows and columns for letters absent from the anagram are omitted.]