If neurons sum up their inputs in a non-linear way, as some simulations suggest, how is this distributed fine-grained non-linearity exploited during learning? How are all the small sigmoids in synapse, spine and dendritic tree lined up in the right areas of their respective input spaces? In this report, I show how an abstract, atemporal, highly nested tree structure, with a quadratic transfer function associated with each branchpoint, can self-organise using only a single global reinforcement scalar to perform binary classification tasks. The procedure works well, solving the 6-multiplexer and a difficult phoneme classification task as well as back-propagation does, and faster. Furthermore, it does not calculate an error gradient, but uses a statistical scheme to build moving models of the reinforcement signal.
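To make the architecture concrete, the following is a minimal sketch of a nested tree in which every branchpoint applies a quadratic transfer function to its children's outputs, and learning is driven only by a single global reinforcement scalar. The update rule here is a simple reward-guided random perturbation (keep a weight change only if the global reward does not drop), standing in for the paper's statistical moving-model scheme, which is not specified in the abstract; the task is a toy two-input XOR rather than the 6-multiplexer.

```python
import random

class Node:
    """A tree node: either a leaf reading one input, or a branchpoint
    applying a quadratic transfer function to its two children."""
    def __init__(self, left=None, right=None, leaf_index=None):
        self.left, self.right, self.leaf_index = left, right, leaf_index
        # quadratic transfer: w0 + w1*a + w2*b + w3*a*b + w4*a^2 + w5*b^2
        self.w = [random.uniform(-1, 1) for _ in range(6)]

    def output(self, x):
        if self.leaf_index is not None:
            return x[self.leaf_index]
        a, b = self.left.output(x), self.right.output(x)
        w = self.w
        return w[0] + w[1]*a + w[2]*b + w[3]*a*b + w[4]*a*a + w[5]*b*b

def build_tree(indices):
    """Build a nested binary tree whose leaves read the given inputs."""
    if len(indices) == 1:
        return Node(leaf_index=indices[0])
    mid = len(indices) // 2
    return Node(build_tree(indices[:mid]), build_tree(indices[mid:]))

def branchpoints(root):
    stack, out = [root], []
    while stack:
        n = stack.pop()
        if n.leaf_index is None:
            out.append(n)
            stack.extend([n.left, n.right])
    return out

def reward(tree, data):
    """The single global reinforcement scalar: classification accuracy,
    taking the sign of the root's output as the predicted class."""
    correct = sum((tree.output(x) > 0) == y for x, y in data)
    return correct / len(data)

random.seed(0)
# toy binary task: XOR of two inputs coded as -1/+1
data = [((a, b), a != b) for a in (-1, 1) for b in (-1, 1)]
tree = build_tree([0, 1])
best = reward(tree, data)
for _ in range(2000):
    node = random.choice(branchpoints(tree))
    i = random.randrange(6)
    old = node.w[i]
    node.w[i] += random.gauss(0.0, 0.3)   # perturb one weight
    r = reward(tree, data)
    if r >= best:
        best = r                           # keep the change
    else:
        node.w[i] = old                    # reject: reward fell
print(best)
```

No error gradient is computed anywhere: the only training signal reaching a branchpoint is the scalar reward, in the spirit of the abstract's claim, though the perturbation rule above is an illustrative stand-in rather than the reported algorithm.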