
No, that's exactly why absolute error is better. "Big errors" are called outliers; they're (relatively) rare, often caused by bad data (measurement errors, typos, etc.), and they substantially influence the outcome of your calculation. In other words, squared error is less robust.

But squared error is easier to work with mathematically (it's differentiable everywhere and often yields closed-form solutions). So, in practice, what you do is you remove outliers (e.g. cap the data at +-3 sigma), then use squared error.
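A minimal sketch of the cap-then-average idea with numpy; the synthetic data, the injected outliers, and the single clipping pass are all illustrative assumptions, not a prescription:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=1.0, size=1000)
data[:5] = 100.0  # inject a few gross outliers (bad measurements, typos, ...)

# Cap the data at +-3 sigma around the sample mean, then use the
# squared-error-optimal estimator (the plain mean) on the clipped data.
mu, sigma = data.mean(), data.std()
clipped = np.clip(data, mu - 3 * sigma, mu + 3 * sigma)
estimate = clipped.mean()
```

Note the mean and sigma used for clipping are themselves computed on the contaminated data, so they're inflated by the outliers; with heavy contamination you'd repeat the clip-and-re-estimate step a few times.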




> So, in practice, what you do is you remove outliers (e.g. cap the data at +-3sigma) then use squared error.

But if you are, say, fitting a function to the data, you can't tell beforehand which data points are the outliers. So in that case perhaps you need an iterative approach of removing them (?)
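That iterative idea can be sketched as fit / flag large residuals / refit until the flagged set stops changing. A toy version for a straight-line fit, assuming a hypothetical dataset with a few gross outliers and a 3-sigma residual cutoff:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)
y[::40] += 30.0  # a few gross outliers

mask = np.ones(x.size, dtype=bool)  # points currently treated as inliers
for _ in range(5):
    # Least-squares fit on the points we currently trust
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    resid = y - (slope * x + intercept)
    # Re-flag: keep points whose residual is within 3 sigma of the inlier residuals
    new_mask = np.abs(resid) < 3 * resid[mask].std()
    if np.array_equal(new_mask, mask):
        break  # converged: the inlier set is stable
    mask = new_mask
```

This is essentially iteratively reweighted fitting with hard 0/1 weights; robust regression methods (Huber loss, RANSAC) are more principled versions of the same idea.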



