It really does seem like a lot of the insane valuations are turning out to be exactly that.
There was a lot of trust in the markets that AI companies would be able to leverage closed models for permanent rent-seeking. DeepSeek proved that this is - at least for now - wishful thinking not based in reality. The same goes for the "new dawn" of nuclear power to support increasingly power-hungry LLM server farms.
I'm certainly excited for the future open models. Interesting times indeed.
No technical moat, no money moat. Anybody can get to SOTA AI quickly, with reasonable money (no US$500B datacenters, no $5-10B cash in the bank required), just like anybody could develop the next TikTok or Instagram.
The most important takeaway is that new hardware has now most probably been demonstrated NOT to add any significant moat for future frontier models. If you own or have access to relatively old datacenter-grade GPU hardware (like the kind offered in many public clouds and/or private offerings - think old crypto farms pivoting - everywhere), you should be well positioned to develop frontier models in a snap (2-3 months), from scratch.
There's also a fairly new technique, distillation of models, which can be done using R1. You could train a relatively weak model - not yet frontier - and then upgrade it right to SOTA level (right now probably o3) just by distilling it with an R1 reasoner. This changes the game because you already have lots of advanced, publicly downloadable models, plus whatever you can train yourself; you can now start distilling in earnest, seriously improving the intelligence of those models. It is so new that we have yet to see how it develops over the coming weeks and months.
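To make the distillation idea concrete, here is a minimal pure-Python sketch of the core loss: the student is trained to match the teacher's temperature-softened output distribution. The logit values and temperature are hypothetical illustration, not anything from R1's actual training recipe.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the distribution: higher temperature spreads probability mass.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions;
    # the student minimizes this, learning to imitate the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a single token position:
teacher = [4.0, 1.0, 0.5]   # strong model (e.g. an R1-style reasoner)
student = [2.0, 1.5, 1.0]   # weaker model being upgraded
print(round(distillation_loss(teacher, student), 4))
```

The loss is zero when the student already matches the teacher and positive otherwise, which is what makes a stronger model's outputs usable as a training signal for a weaker one.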
Or that moats exist but in the old-fashioned way: brand, user experience, great understanding of users, highly optimised sales tactics. In short: no free lunch :)
DeepSeek proved that there is no moat, and thus no path to profitability for OpenAI, Anthropic & co.
Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permission of the copyright holders.
Now the same is being done to OpenAI. Well, too bad.
> Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permission of the copyright holders.
OpenAI and other LLM vendors scraping the internet is probably covered under fair use. DeepSeek training on OpenAI's outputs, on the other hand, is pretty clearly a violation of OpenAI's terms and not legal.
And within half an hour, somebody invested in Nvidia stock is going to swoop in and explain how they totally (trust me bro) made X thousand with an app written by an LLM.
Every. Single. Time.
Almost as if there was a financial incentive to do that.
Because the hype is not only annoying, but it makes potentially cool and interesting technology toxic once people figure out that the people hyping things up know it to be mostly bullshit.
Great things take many years, sometimes decades to develop properly. Different generations of people to experiment and try things out.
That is not good for the ones pushing up the hype. You don't get rich quick by doing that, you don't get to scam enough investors by something being slowly improved.
You may call it bitterness, whereas I am just jaded by watching things play out.
This is an extremist position and disagrees with a century of science on the topic. That simplistic "weak-minded" jab indicates that you are just looking for a chance to make yourself feel/look better than others, and are unable to get positive stimuli from your own positive merits.
Additionally, you seem to need this illusion of control, that you and your "strong mind" are actually in charge, which current research heavily disagrees with. If you want to put that to the test, I've got a few ideas for you...
Humans are not as simple as you seem to think. If only it were that easy.
I'd argue actual adults are primarily able to discuss a topic without shitting on those they perceive as "weak".
Greetings from somebody who used to work on the treatment side of things.
> unable to get positive stimuli on your own positive merits
Ahh, the typical reaction, lashing out, and ad-hominem. I am sooo unsurprised.
I am well aware that everyone has their breaking point. What is easy for one individual is hard for another. No worries. However, I never expected phone compulsion to make someone fail the Gom Jabbar test. I mean, seriously? If you have trouble regulating your tool usage, how do you even get through life?
What's with that idea that compulsions aren't real? They are just as real and difficult to drop as substance-based addictions, for a rather simple reason:
The physical addiction is only a compounding factor, not the core difficulty in a "classic" addiction.
What really makes people always come back, no matter what, is the psychological addiction, not the physical one. Which is also why phones can be just as difficult to stop as gambling compulsions or drug addictions.
Because a massive share of Kagi users are part of the HN-adjacent crowd. When you look at the most manually upranked domains, you'll probably get a clearer picture.
https://imgur.com/a/1Ed23d6
The typical Kagi user uses HN. In the past, HN was even further up, though I guess they're slowly getting "normal" people too.
Hype. HN is _the_ platform for creating hype among the early-adopter, super-spreader/tech-exec kind of people, and because of that it has an absolutely massive indirect reach.
Just look at how often PR reps appear here to reply to accusations - they wouldn't bother at all if this were just some random platform like Reddit.
I'm not convinced HN is awash in bots, but there are certainly some inauthentic accounts here.
What if you want to change public opinion about $evilcorp or $evilleader or $evilpolicy? You could explain to people who love contrarian narratives how $evilcorp, $evilleader and $evilpolicy are actually not as bad as mainstreamers believe, and how their competitors and alternatives are actually more evil than most people understand.
HN is an inexpensive and mostly frictionless way to run an inception campaign on people who are generally better connected and respected than the typical blue check on X.
Their objective probably isn't to accumulate karma because karma is mostly worthless.
They really only need enough karma to flag posts contrary to their interests. Even if the flagged posts aren't flagged to death, it doesn't take much to downrank them off the front page.
You shouldn't underestimate the chance that this is a bot playground/training ground with no particular purpose beyond getting the bots to produce realistic/interesting replies.
I have zero interest in bots, but if I did, the hacker news API would be exactly how I would start.
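For context on why the HN API is such a low bar: the public Firebase endpoints (`/v0/topstories.json`, `/v0/item/{id}.json`) need no auth at all. Below is a minimal sketch; the endpoints are the real public API, but `fetch_thread` and its breadth-first traversal are a hypothetical helper, not an official client.

```python
import json
from urllib.request import urlopen

API = "https://hacker-news.firebaseio.com/v0"

def item_url(item_id):
    # Every story and comment is a plain JSON document keyed by numeric id.
    return f"{API}/item/{item_id}.json"

def fetch_item(item_id):
    # No API key, no auth header: a single unauthenticated GET per item.
    with urlopen(item_url(item_id)) as resp:
        return json.load(resp)

def fetch_thread(story_id, limit=50):
    # Walk a story's comment tree breadth-first, collecting raw comment text.
    story = fetch_item(story_id)
    queue = list(story.get("kids", []))
    texts = []
    while queue and len(texts) < limit:
        item = fetch_item(queue.pop(0))
        if not item:
            continue
        if not item.get("deleted") and "text" in item:
            texts.append(item["text"])
        queue.extend(item.get("kids", []))
    return texts
```

A couple of dozen lines of stdlib Python gets you an endless stream of threaded human conversation, which is exactly why it would be the obvious starting point.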
I've hardly seen any proselytizers here from Oracle, Salesforce, or IBM, and they are doing just fine. Ditto for Amazon/Google/Microsoft/Facebook - they used to be represented more here, but their exodus hardly made any difference.
Gartner has more influence on tech than Hacker News.