Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
Shivam Gupta, Jasper Lee, Eric Price, Paul Valiant
We consider 1-dimensional location estimation, where we estimate a parameter λ from n samples λ + η_i, with each η_i drawn i.i.d. from a known distribution f. For fixed f, the maximum-likelihood estimate (MLE) is well known to be optimal in the limit as n → ∞: it is asymptotically normal with variance matching the Cramér-Rao lower bound of 1/(nI), where I is the Fisher information of f. However, this bound does not hold for finite n, or when f varies with n. We show for arbitrary f and n that one can recover a similar theory based on the Fisher information of a smoothed version of f, where the smoothing radius decays with n.
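The classical setup above can be checked numerically in the one case where everything is analytic: Gaussian noise. If f = N(0, σ²), its Fisher information is I = 1/σ², the MLE of the location is the sample mean, and Gaussian smoothing with radius r simply yields N(0, σ² + r²) with Fisher information 1/(σ² + r²). The sketch below (an illustration of these textbook facts, not the paper's estimator; the parameter values are arbitrary) computes the MLE and the Cramér-Rao bound 1/(nI) for this case.

```python
import math
import random

def fisher_information_gaussian(sigma):
    # Fisher information of the location family N(lam, sigma^2): I = 1/sigma^2.
    return 1.0 / sigma**2

def smoothed_fisher_information(sigma, r):
    # Convolving N(0, sigma^2) with a Gaussian of radius r gives
    # N(0, sigma^2 + r^2), whose Fisher information is 1/(sigma^2 + r^2).
    return 1.0 / (sigma**2 + r**2)

def mle_location(samples):
    # For Gaussian noise, the MLE of the location parameter is the sample mean.
    return sum(samples) / len(samples)

random.seed(0)
sigma, lam, n = 2.0, 3.0, 10_000

# n samples of the form lam + eta_i, eta_i ~ N(0, sigma^2).
samples = [lam + random.gauss(0.0, sigma) for _ in range(n)]
est = mle_location(samples)

# Cramér-Rao lower bound on the estimator's variance: 1/(n I).
crlb = 1.0 / (n * fisher_information_gaussian(sigma))
print(est, crlb)
```

Note that smoothed_fisher_information is monotone decreasing in r, so smoothing can only lower the Fisher information here; the abstract's point is that for general f, a bound stated in terms of this smoothed quantity holds at finite n, with r shrinking as n grows.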