Whatever you saw a decade ago, it definitely wasn’t this.
I do recommend you play with it. But if you don’t feel like signing up with their free account, here’s a screen recording of me asking it some random general knowledge questions and instructing it to use a different language in the response each time: https://youtu.be/XX2rbcrXblk
I had to look up MegaHAL; apparently it was based, at least in part, on a hidden Markov model.
Using "it" in this way to refer to both that and GPT-family LLMs, or similarly saying "I remember being shown this software almost a decade ago" like the other commenter, is like saying "I remember seeing Kitty Hawk fly, and there's no way that can get someone across the Atlantic" when being presented with a chance to a free seat on the first flight of the Concord. (Actual human level intelligence in this analogy is a spaceship we still have not built, that's not where I'm going with this).
It’s not clear to me that MegaHAL-style language models are less powerful than transformer models. The difference is the “largeness,” but that’s a hardware/dataset/budget detail.
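For context on what a Markov-style model actually does: it predicts the next word purely from a fixed-length window of preceding words, with no attention over longer context. A minimal sketch (not MegaHAL's actual implementation, which adds more machinery on top):

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Build an order-n Markov model: map each n-gram of words
    to the list of words observed to follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=10, rng=None):
    """Extend the seed by repeatedly sampling a successor of the
    last `order` words; stops early if the context was never seen."""
    rng = rng or random.Random(0)
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        key = tuple(out[-order:])
        if key not in model:
            break
        out.append(rng.choice(model[key]))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
model = train_markov(corpus)
print(generate(model, ("the", "cat")))
```

The fixed window is the architectural difference: an order-2 model forgets everything more than two words back, whereas a transformer can condition on thousands of tokens at once. Whether that gap is fundamental or just another scaling detail is exactly the question being argued here.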