This paper analyzes the approximation ability and generalization ability of deep learning models with reasoning layers. The analysis connects properties of the underlying algorithm to the performance of the deep learning model. In the learning-theory analysis, the local Rademacher complexity technique is used to obtain a tighter bound, which makes it possible to reveal a trade-off with respect to the number of layers.

The paper addresses a new problem setting and provides a solid first step. Although the problem setting is quite simple, this kind of study can be expected to open up a new direction of research. The numerical experiments support the theoretical analysis well.
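
As context on the proof technique (this is a generic illustration, not the paper's exact statement), local Rademacher complexity analyses typically give a bound of the following form. Let $r^*$ be the fixed point of a sub-root function $\psi$ dominating the localized complexity $\mathbb{E}\,\mathfrak{R}_n\{f \in \mathcal{F} : \mathbb{E} f^2 \le r\}$; then with probability at least $1-\delta$, uniformly over $f \in \mathcal{F}$,

$$
\mathbb{E}[f] \;\le\; c_0\,\widehat{\mathbb{E}}_n[f] \;+\; c_1\, r^* \;+\; c_2\,\frac{\log(1/\delta)}{n},
$$

with absolute constants $c_0, c_1, c_2$. Since $r^*$ generally grows with the richness of the hypothesis class (here, presumably with the number of reasoning layers) while the approximation error shrinks with depth, a trade-off of the kind the paper identifies can arise; the paper's own theorems should be consulted for the precise constants and assumptions.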