NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 5087
Title: Random Tessellation Forests


All reviews for this paper are positive and the authors' response looks convincing, so I think the paper should be accepted. Personally, while I very much like the proposed approach and agree that the technical contribution is strong, I am somewhat disappointed by the empirical validation: although some improvements over existing baselines are shown, only very small, non-standard datasets are considered, and the methods are compared only in terms of classification error, which does not really demonstrate the benefit of using a Bayesian framework. I also miss, among the baselines, non-Bayesian forests of oblique trees, since the ability to consider non-axis-parallel splits seems to be one of the main benefits of the approach. Any attempt to improve the empirical validation along these lines would be greatly appreciated and would, I think, clearly strengthen the paper.