In this paper, we propose a method that unifies information maximization and minimization in hidden units. Maximization and minimization are performed at two different levels, collective and individual, and two corresponding kinds of information, collective information and individual information, are defined. By maximizing collective information while minimizing individual information, networks that are simple in terms of the number of connections and the number of hidden units can be generated. The resulting networks are expected to give better generalization and more interpretable internal representations. The method was applied to the inference of the maximum onset principle of an artificial language. In this problem, individual information minimization was shown not to contradict collective information maximization. In addition, experimental results confirmed improved generalization performance, because over-training could be significantly suppressed.
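The abstract does not give the formal definitions of the two measures. As a purely illustrative sketch, one can imagine entropy-style quantities over hidden-unit activations: a collective measure computed from the normalized activation pattern across all hidden units, and an individual measure computed per unit. The definitions below (`collective_information`, `individual_information`) are hypothetical stand-ins, not the paper's formulation:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps))

def collective_information(h):
    """Illustrative 'collective' measure (hypothetical definition):
    entropy decrease of the normalized activation pattern over ALL
    hidden units, relative to the uniform (maximum-entropy) pattern."""
    h = np.asarray(h, dtype=float)
    p = h / h.sum()                       # normalize activations to a distribution
    return np.log(h.size) - entropy(p)    # maximum entropy minus observed entropy

def individual_information(h):
    """Illustrative 'individual' measure (hypothetical definition):
    mean entropy decrease of each unit's activation, treated as a
    Bernoulli firing probability."""
    h = np.clip(np.asarray(h, dtype=float), 1e-12, 1 - 1e-12)
    per_unit = -(h * np.log(h) + (1 - h) * np.log(1 - h))  # per-unit entropy
    return float(np.mean(np.log(2) - per_unit))            # decrease from max (log 2)

# A sharply peaked pattern carries high collective information, and its
# near-0/1 activations also carry high individual information; a flat
# pattern carries little of either.
peaked = np.array([0.97, 0.01, 0.01, 0.01])
flat = np.array([0.25, 0.25, 0.25, 0.25])
print(collective_information(peaked) > collective_information(flat))  # True
print(individual_information(peaked) > individual_information(flat))  # True
```

Under these toy definitions, maximizing the collective measure while penalizing the individual one would push the network toward structured but sparse hidden representations, loosely matching the simplification effect the abstract describes.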