NeurIPS 2020

Federated Bayesian Optimization via Thompson Sampling


Meta Review

The paper contains a range of interesting ideas. In its current (submitted) form, it is not a great paper, but an acceptable one. However, we expect the authors to make the changes the reviewers asked for (and that the authors committed to) in order to raise the paper to the next level. Specifically:

1. Adapt RGPE and TAF to the federated setting (which is 'easily possible') and provide a comparison.
2. Discuss whether/how alternative GP approximations (e.g., inducing point methods) can be used in this setting (i.e., to avoid sharing raw observations between agents).
3. Interpret the current approach as a specific instance of a Thompson sampling scheme under a model mixture, and relate it to the existing literature on BO under model mixtures.
4. Discuss a setting where agents have different levels of fidelity (if possible), and place your approach in the context of the existing literature on multi-fidelity / multi-source / multi-model BO.
5. Discuss to what extent the proposed approach could be applied to more generic hyperparameters (e.g., kernel hyperparameters beyond the coefficients in the linear case).
6. Place the paper in the context of previous work on parallel/distributed BO/TS.