Hacker News

I wonder if reasoning is intrinsically tied to a model's need to perform well with extremely limited training data. Current LLMs require something close to the sum total of all human knowledge to work correctly (even a few decades ago, there was nowhere near enough data to train a useful one, regardless of available compute), whereas humans growing up with access to only a few books and conversations with uneducated people can still arrive at brilliant ideas.



> Current LLMs require the sum total of all human knowledge to work correctly

The total sum of human knowledge is not found in the digital world.


I'd argue the vast majority of it is, from newspaper articles spanning back centuries to most books ever written, down to every conversation held on public social media sites such as Reddit.



