
There is an old AI joke about a robot that, after being told to go to the Moon, climbs a tree, sees that it has taken the first baby steps toward the goal, and then gets stuck.

The way people are trying to use ChatGPT is certainly an example of what humans _hope_ the future of human/computer interaction will be. Whether or not Large Language Models such as ChatGPT are the path forward remains to be seen. Personally, I think the model of "ever-increasing neural network sizes" is a dead end. What is needed is better semantic understanding --- that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words. We don't know how to do this today; all we know how to do is to make the neural networks larger and larger.

What we need is a way to build networks of networks: networks that can handle memory, a sense of time, and reasoning, such that the network of networks has pre-defined structures for these various skills, along with ways of training those sub-networks. This is all something that organic brains have, but which neural networks today do not.
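To make the shape of that idea a bit more concrete, here is a minimal PyTorch sketch of a "network of networks": dedicated sub-networks for memory, time sense, and reasoning, composed under a fixed top-level structure. Every module choice, name, and size here is invented purely for illustration; nothing in this sketch claims to actually deliver those capabilities.

  # Hypothetical sketch only; the sub-networks and their wiring are made up.
  import torch
  import torch.nn as nn

  class NetworkOfNetworks(nn.Module):
      def __init__(self, vocab_size=10000, dim=256):
          super().__init__()
          self.encoder = nn.Embedding(vocab_size, dim)           # words -> vectors
          self.memory = nn.LSTM(dim, dim, batch_first=True)      # "memory" sub-network
          self.time_sense = nn.GRU(dim, dim, batch_first=True)   # "time sense" sub-network
          self.reasoner = nn.TransformerEncoder(                 # "reasoning" sub-network
              nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
              num_layers=2)
          self.decoder = nn.Linear(dim, vocab_size)              # vectors -> words

      def forward(self, tokens):
          x = self.encoder(tokens)       # (batch, seq) -> (batch, seq, dim)
          x, _ = self.memory(x)
          x, _ = self.time_sense(x)
          x = self.reasoner(x)
          return self.decoder(x)

  # In principle each sub-network could be pre-trained on its own task and
  # then frozen or fine-tuned inside the larger, pre-defined structure.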




> What is needed is better semantic understanding --- that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words. We don't know how to do this today; all we know how to do is to make the neural networks larger and larger.

It's pretty clear that these LLMs can already basically do this - they can solve the exact same tasks in a different language, mapping from the concept space they were trained on in English into other languages. It seems like you are waiting for a time when we explicitly create a concept space with operations performed on it; that will never happen.


> that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words

I feel like DNNs do this today. At higher levels of the network they create abstractions, and then the eventual output maps them to something. What you describe seems evolutionary, rather than revolutionary, to me. This feels more like we finally discovered booster rockets but still aren't able to fully get out of the atmosphere.


They might have their own semantics, but it's not our semantics! The written word already can only approximate our human experience, and now this is an approximation of an approximation. Perhaps if we were born as writing animals instead of talking ones...


This is true, but it still feels evolutionary to me. We need to train models using all of the inputs that we have: touch, sight, sound, smell. But I think if we did that, they'd be eerily close to us.
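As a rough illustration of what training on several senses at once might look like, here is a toy PyTorch sketch that fuses separate sight, sound, and touch encoders into one shared representation. The encoders, shapes, and dimensions are all invented for illustration; real multimodal training is far more involved than this.

  # Hypothetical sketch only; encoder designs and sizes are made up.
  import torch
  import torch.nn as nn

  class MultiModalEncoder(nn.Module):
      def __init__(self, dim=128):
          super().__init__()
          self.sight = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),   # image -> features
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(16, dim))
          self.sound = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2),   # audio -> features
                                     nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                     nn.Linear(16, dim))
          self.touch = nn.Linear(32, dim)        # e.g. a vector of pressure sensors
          self.fuse = nn.Linear(3 * dim, dim)    # shared representation

      def forward(self, image, audio, touch):
          parts = [self.sight(image), self.sound(audio), self.touch(touch)]
          return self.fuse(torch.cat(parts, dim=-1))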



