Urs A. Müller, Michael Kocheisen, Anton Gunzinger
The performance requirements of experimental research on artificial neural nets often far exceed the capabilities of workstations and PCs. But speed is not the only requirement: flexibility and the implementation time for new algorithms are usually of equal importance. This paper describes the simulation of neural nets on the MUSIC parallel supercomputer (MUSIC stands for Multiprocessor System with Intelligent Communication), a system that balances these three requirements well and has therefore made possible many research projects that were unthinkable before.
1 Overview of the MUSIC System
The goal of the MUSIC project was to build a fast parallel system and to use it in real-world applications such as neural net simulations, image processing, or simulations in chemistry and physics [1, 2]. The system should be flexible and simple to program, and the realization time should be short enough that the system would not be obsolete by the time it was finished. Therefore, the fastest available standard components were used. The key idea of the architecture is to support the collection and redistribution of complete data blocks through a simple, efficient, and autonomously working communication network realized in hardware. Instead of specifying where to send data and where to receive data from, each processing element determines which part of a (virtual) data block it has produced and which other part of the same data block it wants to receive for the continuation of the algorithm.
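The block-oriented communication model can be illustrated with a small sketch. This is not the actual MUSIC interface; the function and variable names (`exchange`, `produced`, `requests`) are hypothetical, and the hardware network is simulated here by an ordinary dictionary that plays the role of the virtual data block:

```python
def exchange(produced_parts, requests):
    """Simulate MUSIC-style block communication: collect the parts each
    processing element produced into one virtual data block, then hand
    each element the slice of that block it asked for."""
    # Collection phase: assemble the complete (virtual) data block.
    block = {}
    for part in produced_parts:
        block.update(part)
    # Redistribution phase: each processing element receives exactly
    # the indices it requested, regardless of who produced them.
    return [{i: block[i] for i in want} for want in requests]

# Three simulated processing elements each produce a third of a
# six-element virtual block ...
produced = [{0: 'a', 1: 'b'}, {2: 'c', 3: 'd'}, {4: 'e', 5: 'f'}]
# ... and each declares which part it needs for the next step.
requests = [[2, 3], [4, 5], [0, 1]]
received = exchange(produced, requests)
```

The point of the abstraction is that a processing element never names a communication partner; it only describes its produced and desired slices of the shared block, and the (here simulated) network does the routing.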