__ Summary and Contributions__: This paper considers the problem of robust regression and studies a simple algorithm: run SGD on the ell_1 loss. The authors establish a convergence rate of 1/((1-eta)^2 * n) for this algorithm. Though the dependence on eta might not be optimal, given the simplicity of the algorithm and its strong empirical performance, I think this result deserves to be presented at the conference.
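The algorithm under review is simple enough to sketch. Below is a minimal illustrative version of one-pass SGD on the absolute loss with iterate averaging; the step-size schedule `c / sqrt(t + 1)` and the zero initialization are my own illustrative guesses, not necessarily the authors' exact choices:

```python
import numpy as np

def l1_sgd_average(X, y, c=0.5):
    """One-pass SGD on the absolute loss |<w, x_t> - y_t|,
    returning the average of the iterates.

    The step-size schedule c / sqrt(t + 1) and the zero
    initialization are illustrative, not the paper's exact choices.
    """
    n, d = X.shape
    w = np.zeros(d)
    avg = np.zeros(d)
    for t in range(n):
        # Subgradient of the ell_1 loss at the t-th streaming sample.
        g = np.sign(X[t] @ w - y[t]) * X[t]
        w = w - c / np.sqrt(t + 1) * g
        avg += (w - avg) / (t + 1)  # running average of the iterates
    return avg
```

Note that the subgradient is bounded by ||x_t|| no matter how large a corrupted response y_t is, so each outlier can move the iterate by at most one step; this is the intuition behind the robustness of the approach.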

__ Strengths__: Clear theoretical justification of the proposed approach with guarantees.

__ Weaknesses__: No real-data experiments to verify the effectiveness of the approach in the real world.

__ Correctness__: Theoretical results are clearly stated and seem correct. Empirical results look reasonable.

__ Clarity__: This paper is well written.

__ Relation to Prior Work__: Related work is extensively discussed.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: This paper performs a theoretical analysis of empirical loss minimization with the absolute loss. In a setting they call online oblivious response corruption, the authors prove that the average of the iterate sequence produced by SGD achieves a convergence rate of 1/n.

__ Strengths__: They show the proposed algorithm (averaging of the SGD iterates) is highly scalable and optimal.
They show other benefits of the proposed algorithm, such as low dependence on the noise level and on feature confounding.

__ Weaknesses__: Their analysis seems to depend strongly on the assumption that the data are drawn from a zero-mean Gaussian distribution.
When 1/n is claimed to be the optimal rate, one usually assumes much less.

__ Correctness__: Their proof seems to be correct.

__ Clarity__: Yes

__ Relation to Prior Work__: The authors relate this work to prior literature from the perspectives of both robust statistics and stochastic optimization.

__ Reproducibility__: No

__ Additional Feedback__:

__ Summary and Contributions__: The robust linear regression problem is studied in the online setting. Under certain conditions, the convergence rate of the averaged iterate is obtained.

__ Strengths__: The online version of SGD for the robust linear regression problem seems novel.

__ Weaknesses__: The linear model is simple, and the assumptions on the data seem a little restrictive.

__ Correctness__: The proofs seem correct, but the referee did not check them line by line.

__ Clarity__: yes

__ Relation to Prior Work__: seems so

__ Reproducibility__: Yes

__ Additional Feedback__:
After the rebuttal:
I am not very familiar with this area, and the authors only partially addressed my concerns. Hence, I decide to keep my current score.

__ Summary and Contributions__: This paper addresses the task of online learning for robust linear regression (with the L1 loss). In particular, based on a smoothing mechanism, the authors propose stochastic gradient descent on the l1 loss with guaranteed convergence. The authors show some encouraging results on robust regression.

__ Strengths__: - The problem of online learning for robust regression is quite important in several machine learning tasks where the algorithm only has access to the data in a streaming manner. Thus, the proposed algorithm would be very useful for such applications.
- The non-smoothness of the l1 loss is addressed using Gaussian smoothing. Although Gaussian smoothing is not new, using it in this online learning context is rather interesting.
- The final algorithm is quite simple, which can be easily implemented using existing optimization frameworks.
- A proof for convergence guarantee is provided.
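For intuition on the Gaussian-smoothing idea mentioned above, here is a small sketch (my own illustration, not necessarily the authors' exact mechanism): convolving the absolute value |r| with an N(0, sigma^2) density yields a smooth surrogate whose derivative replaces the discontinuous sign(r) with erf(r / (sqrt(2) * sigma)):

```python
import math

def sign_grad(r):
    # Subgradient of the raw absolute loss: discontinuous at r = 0.
    return math.copysign(1.0, r) if r != 0 else 0.0

def smoothed_sign_grad(r, sigma=0.1):
    # Derivative of E_{z ~ N(0, sigma^2)} |r + z|, the Gaussian-smoothed
    # absolute loss: erf(r / (sqrt(2) * sigma)). Smooth everywhere, and
    # it approaches sign(r) as sigma -> 0.
    return math.erf(r / (math.sqrt(2.0) * sigma))
```

Using the smoothed derivative in place of the hard sign makes the per-sample loss differentiable while the gradient remains bounded in [-1, 1], so robustness to large residuals is preserved.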

__ Weaknesses__: - As can be seen from Lemma 3, the higher the outlier proportion, the worse the local conditioning becomes. Also, if the noise level tends to 0, the problem becomes non-smooth. It is not clear whether the proposed algorithm still behaves well in such extreme cases.
- Some work on robust online regression has been previously published, for example:
+ Briegel, Thomas, and Volker Tresp. "Robust neural network regression for offline and online learning." Advances in Neural Information Processing Systems. 2000.
It is not clear why such references are not mentioned and compared against in the paper.
========
Update after rebuttal:
I thank the authors for providing feedback to all reviewers' comments. Some of my concerns have been addressed. Therefore, I have upgraded my rating for this work to "7. Accept".

__ Correctness__: - The proofs look reasonable (although the details have not been carefully checked).
- The empirical results support the theory.

__ Clarity__: - The paper is well written, and most details are clearly explained.

__ Relation to Prior Work__: Some related works that addressed similar problems are not thoroughly discussed.

__ Reproducibility__: Yes

__ Additional Feedback__: - The robustness of the algorithm is attained through the L1 (or Huber) loss. However, in many practical applications with high outlier rates, one needs to use more challenging (e.g., non-convex) loss functions such as the Tukey loss. It would be interesting to discuss whether it is straightforward to extend this work to such losses.