A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, together with a cooperative adaptation of the weights (e.g. by using the perceptron learning rule). Since the introduction of its basic, single-output version, the CHIR algorithm has been generalized to train any feedforward network of binary neurons. Here we present the generalized version of the CHIR algorithm, and further demonstrate its versatility by describing how it can be modified to train networks with binary (±1) weights. Preliminary tests of this binary version on the random teacher problem are also reported.
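The weight-adaptation step mentioned above can be sketched with the classic perceptron learning rule for a single ±1 neuron. This is a minimal illustration of the standard rule only, not of the CHIR search procedure itself; all names and the toy data are our own.

```python
import numpy as np

def perceptron_update(w, x, target, lr=1.0):
    """One step of the classic perceptron rule for a binary (+/-1) neuron:
    the weights change only when the example is misclassified.
    (Illustrative sketch; names are ours, not from the paper.)"""
    y = 1 if np.dot(w, x) >= 0 else -1   # binary neuron output
    if y != target:                       # update only on error
        w = w + lr * target * x
    return w

# Tiny demo: a linearly separable AND-like task (first column is a bias input)
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w = np.zeros(3)
for _ in range(10):                       # a few epochs suffice here
    for x, target in zip(X, t):
        w = perceptron_update(w, x, target)
```

Because the task is linearly separable, the rule converges to weights that classify all four patterns correctly.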