Hacker News
Why OpenAI's $157B valuation misreads AI's future (Oct 2024) (foundationcapital.com)
138 points by pcurve 2 days ago | 132 comments





"Linux ultimately prevailed-not because it was better from the start, but because it allowed developers to modify the code freely, run it more securely and affordably, and build a broader ecosystem that enabled more capabilities than any closed system"

DeepSeek followed Llama and will be followed by others, in the usual mushrooming fashion of open source. People really don't appreciate the magnitude of the disruptive force that the open source paradigm unleashes. A year from now the landscape will be brimming with new initiatives. In a few years nobody will even remember "open"ai.

Conventional economic theory will always misread the future of computing (and thus "AI"). Zero marginal cost and infinite replicability are not a bug; they're a feature. But so far we don't really have a good model for how to think about this and merge it with mainstream business models. Something must pay the bills eventually, but these are very different bills from those of conventional scarcity-based businesses. Ironically, in the end the main scarcity is human ingenuity. Read the interview with the DeepSeek founder on why their models are open source.


With current-generation AI we really need AI + humans to get good results. It seems likely that the entire LLM branch of products will have this limitation. If that's the case, let's race to make the best open source AI as fast as possible, so it can be spread as widely as possible and used to fix our shared problems. It's the wide spread that will lead to breakout results in fields such as cancer research, just as much as having the most intelligent system, because humans bring some X factor of creativity/ingenuity/novelty.

DeepSeek has demonstrated that there is no technical moat. Model training costs are plummeting, and the margins for APIs will just get slimmer. Plus model capabilities are plateauing. Once model improvement slows down enough, seems to me like the battle is to be fought in the application layer. Whoever can make the killer app will capture the market.

Model capabilities are not plateauing; in fact, they are improving exponentially. I believe people struggle to grasp how AI works and how it differs from other technologies we invented. Our brains tend to think linearly; that's why we see AI as an "app." With AI (ASI), everything accelerates. There will be no concept of an "app" in ASI world.

Hype. We were already plateauing with next token prediction and letting the models think out loud has simply pushed the frontier a little bit in purely test-taking domains.

Can you give examples? GPT-4 came out in 2023, and since then nothing comparable to the 3.5-to-4 or 2-to-3 jumps has come out. It has been two years now. All signs point toward OpenAI struggling to get improvements out of its LLMs; the releases since 2023 have been minor.

The chain-of-thought models provide huge improvements in certain areas. Two years is also hardly enough time to claim they're stuck.

Money is not cheap anymore, and OpenAI costs a lot to run. The longer it takes between impactful releases, the harder it gets for OpenAI to raise money, especially in the face of significant competition domestically, in the open source world, and from China.

I keep reading this comment all over the internet.

They are absolutely not improving exponentially, by any metric. This is a completely absurd claim. I don’t think you understand what exponential means.

Something worth noting is that ChatGPT currently is the killer app -- DeepSeek's current chart-topping app notwithstanding (not clear if viral blip or long-term trend).

ChatGPT Plus gives me a limited number of o1 calls and o1 doesn't have web access, so I mostly have been using 4o in the last month and supplementing it with DeepSeek in the last week, for when I need advanced reasoning (with web search in DeepSeek as a bonus).

The killer app had better start delivering better value, or I'd gladly pay the same amount to DeepSeek for unlimited access if they decided to charge.


You can't be a killer app if your competitor is just as good and free.

For me ChatGPT was not that useful for work, the killer app was Cursor. It’ll be similar for other industries, it needs to be integrated directly in core business apps.

Killer app for what platform?

Can you unpack why you think there'll be defensible moats at the application layer?

(I thought you had this exactly right when I read it, but I kept noodling on it while brushing my teeth, and now I'm not so sure LLMs won't just prove hard to build durable margins on at meaningful volume.)


I agree that the moats are weak for applications, but I think there are possible strategies to capture users. One way is to make it difficult to switch, similar to Apple Music vs Spotify or iPhone vs Android. Although these platforms offer very similar features, switching has high friction. As an example, I can imagine an AI app that has been adapted to your use cases, maybe it has a lot of useful information stored about you and your projects which makes it much more effective, and switching to some other app would mean you have to move all this data or possibly start over from zero.

Just playing devil's advocate:

VCs (especially those who missed out on OpenAI) are heavily incentivized to root for OpenAI to fail and to commoditize the biggest COGS item (AI models).

This guy is just talking his book.


A wise man once said: "the devil doesn't need advocates, hell is full of them".

You can both talk your book and sincerely believe what you say. Ad hominem (or whatever the Latin equivalent of ad bookinem is) is not as substantive a criticism as you make it out to be. He can be both biased and correct.


And you can even turn the viewpoint around and say that he is putting his money where his mouth is.

That doesn't make any sense. Not investing is not a bet against them by any means, unless this VC invested in the concept of less spam or more workers.

VCs raise money from investors. Investors want to put their money with good VCs. If OpenAI is huge and you missed it, that looks bad. If OpenAI flops and you strategically held off, that looks good.

VCs with deep pocketbooks, their startups, and the hardware vendors they purchased from (not to mention politicians) are heavily incentivised to believe that their value-add can't be commoditized.

If your grand dream is to dominate the market through sheer massive scale and that's what you're selling to capital, you're not exactly looking for reasons to buy less hardware and your vendor is hardly going to talk you out of it.

"It's hard to get a man to understand something when his fat valuation depends on his not understanding it"


Whether you love or hate OpenAI, the CapEx involved with this company will be viewed as historic in the future, and will change (has already changed) the paradigm of how tech startups/projects are funded.

I think it will have an adverse effect on the funding ecosystem.

The inevitable haircut all the funds are going to take on OpenAI and other AI startups when revenue fails to materialize[1] will herald a bust cycle and a lot more circumspection about large investments, like what happened a few years back when a large number of SoftBank investments did not pan out. Notably, most of them relied on big funding rounds to muscle out other players; not all of them failed, but all of them lost enterprise value for investors.

---

[1] This is inevitable regardless of the success of the space, because the cost of inference keeps dropping and the incumbents are competing with high quality open weight models, as DeepSeek, Stable Diffusion and others have shown. That puts strong downward pressure on pricing, hitting both revenue and profit.


If it happens, I call dibs on the name "AI Nuclear Winter".

AI China Syndrome?

It has changed the paradigm negatively so far: it vacuumed up so much money that anything non-AI is not getting much funding, it locked up CapEx, and now it looks like there won't be a massive ROI on the capital spent.

The AI hype may well have set back a lot of other products that never got the funding needed to get off the ground. It's not looking pretty.


We've just learned that it's possible to do AI on less compute (DeepSeek). If OpenAI doesn't scale and that's the problem, then I'd argue that in the long run, if you believe in their ability to do research, the news this week is a very bullish sign.

IMO the equivalent of Moore's law for AI (in both software and hardware development) is baked into the price, which doesn't make the valuation all too crazy.


> We’ve just learned that it’s possible to do AI on less compute (deepseek).

There's a huge motte-and-bailey thing going on with the DeepSeek conversation, where the bailey is "It only took $5.5 million!*" (* for exactly one training run of one of several models, at dirt-cheap per-hour spot prices for H800s) and the motte is all sorts of stuff.

The truth is that one run of one model took 2048 GPUs full-time for two months, and in my experience with FAANG ML, that means it took six months part-time, plus another 1.5-2.5 runs that went absolutely nowhere.
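Plugging the parent's numbers into a back-of-envelope check shows how the headline figure falls out; the ~$2/hour GPU rental rate is an assumption here, not a quoted price:

```python
# Back-of-envelope check of the "$5.5M training run" figure, using the
# numbers above: 2048 GPUs, full-time, for roughly two months.
# The $2/GPU-hour rental rate is an assumption, not a quoted price.

gpus = 2048
hours = 2 * 30 * 24            # ~2 months of wall-clock time
gpu_hours = gpus * hours       # ~2.95M GPU-hours
rate_per_gpu_hour = 2.0        # assumed per-hour rental price in USD

cost = gpu_hours * rate_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours -> ${cost / 1e6:.1f}M")
```

That lands around $5.9M, the same ballpark as the widely repeated $5.5M claim, which is exactly the point: the number covers rented compute for one run, not the full program cost.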


> is baked into the price, which doesn’t make the valuation all too crazy.

Valuations for most large companies have been crazy for a while now. No one values a company based on fundamentals anymore; it's all pure gambling on future predictions.

This isn't unique to OpenAI by any means, but they are a good example. Last I checked, their valuation-to-revenue multiple was in the range of 42x. That's crazy.
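The arithmetic behind a roughly 42x multiple is simple; the ~$3.7B annualized revenue figure used here is a reported 2024 estimate and should be treated as an assumption:

```python
# Rough arithmetic behind a ~42x valuation-to-revenue multiple.
# The $3.7B annualized revenue is a reported 2024 estimate (assumption).
valuation_b = 157.0   # valuation, in $B
revenue_b = 3.7       # annualized revenue, in $B (assumed)

multiple = valuation_b / revenue_b
print(f"~{multiple:.0f}x revenue")
```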


Does anyone know how Deepseek does it yet?

(Summary from Reddit)

- fp8 instead of fp32 precision training = 75% less memory

- multi-token prediction to vastly speed up token output

- Mixture of Experts (MoE), so that inference only uses part of the model rather than the entire model (~37B parameters active at a time, out of 671B), which increases efficiency

- PTX (basically low-level assembly for Nvidia GPUs) hacking to squeeze as much performance as possible out of their export-restricted H800 GPUs

Then, the big innovation of R1 and R1-Zero was finding a way to utilize reinforcement learning within their LLM training.
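The MoE point above can be sketched in a few lines. This is an illustration of generic top-k expert routing, not DeepSeek's actual gating (which also uses shared experts and far larger expert counts); all names and dimensions are toy values:

```python
import math
import random

# Minimal sketch of top-k Mixture-of-Experts routing: a gate scores all
# experts for each token, but only the top-k actually run, so compute
# scales with *active* parameters (e.g. ~37B) rather than total (671B).

NUM_EXPERTS, TOP_K = 8, 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token_vec, gate_weights, experts):
    # gate: one score per expert for this token
    scores = [sum(w * x for w, x in zip(row, token_vec)) for row in gate_weights]
    probs = softmax(scores)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    # only the selected experts run; outputs are mixed by gate probability
    out = [0.0] * len(token_vec)
    for i in top:
        expert_out = experts[i](token_vec)
        for d in range(len(out)):
            out[d] += probs[i] * expert_out[d]
    return out, top

random.seed(0)
dim = 4
gate = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(NUM_EXPERTS)]
# toy experts: each just scales its input differently
experts = [lambda v, i=i: [x * (i + 1) for x in v] for i in range(NUM_EXPERTS)]

out, active = moe_layer([0.1, -0.2, 0.3, 0.4], gate, experts)
print(f"active experts: {sorted(active)} of {NUM_EXPERTS}")
```

Only 2 of the 8 experts execute per token here; scale that ratio up and you get the 37B-of-671B efficiency the summary describes.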


They also use a kind of factorized attention (multi-head latent attention) that compresses the cached keys and values (I still haven't read their papers, so I can't be clearer than this).
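The core idea, hedged heavily since the parent hasn't read the papers either: rather than caching full per-head keys/values for every past token, cache a small latent vector and up-project it at attention time. A toy sketch (dimensions and matrices are illustrative, not the real model's):

```python
# Sketch of the KV-cache compression idea behind DeepSeek's multi-head
# latent attention (MLA): cache a small latent per token instead of the
# full key/value vectors, and expand it when attending. Toy dimensions.

D_MODEL, D_LATENT = 8, 2   # latent is much smaller than the hidden size

def matvec(mat, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in mat]

down_proj = [[0.1] * D_MODEL for _ in range(D_LATENT)]  # d_model -> d_latent
up_proj = [[0.1] * D_LATENT for _ in range(D_MODEL)]    # d_latent -> d_model

hidden = [1.0] * D_MODEL
latent = matvec(down_proj, hidden)   # this tiny vector is what gets cached
kv = matvec(up_proj, latent)         # reconstructed K/V at attention time

print(f"cached {len(latent)} floats per token instead of {len(hidden)}")
```

The memory win is the ratio of the two dimensions: the KV cache shrinks by roughly d_model/d_latent, at the cost of an extra projection per attention step.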

Honestly, I’m not sure I’m completely sold on the value of LLMs long term but this is the most realistic and reasonable take I’ve read on this post so far.

If anything, it's a downward adjustment in the cost implications, but it could actually unlock exponential improvements on a shorter time horizon than expected because of that. Investors getting scared is probably a good opportunity to buy in.


Bullish on the use. Bearish on the profit margins for the big players.

If (big if!) I understand correctly, the ceiling for edge/local/offline AI has just blown off.


Is there an acronym for edge/local/offline? ELO could be confused with something AI already dominates at. As someone working in the edge/local/offline space it’s interesting to hear these together though. Offline is local but local often isn’t offline :)

Bullish on the prospects for small players, then.

It's always been possible to "do (worse) AI on less compute". We've had years of open models! I also don't understand how anyone can see this as anything but good news for OpenAI. The ultimate value proposition of AI has always depended on whether it stretches to AGI and beyond, and R1 demonstrates that there are several orders of magnitude of hardware overhang. This makes it easier for OpenAI to succeed, not harder, because it makes it less likely that they'll scale to their financial limits and still fail to surpass humans.

The point is that this was developed outside of OpenAI.

So the real question is why does anyone believe that OpenAI will bring AGI when actual innovation was happening in some hedge fund in China while OpenAI was going on an international tour trying to drum up a trillion dollars.


Okay, that argument makes no sense to me. I thought the whole point of VC is that money is cheaper than time to market? So OpenAI didn't microoptimize their training code, sure, but they didn't need to. All the innovation of R1 is that they managed to match OpenAI's tech demo from like a year ago using considerably worse hardware by microoptimizing the hell out of it. And that's cool, full credit to them, it's a mighty impressive model. But they did it like that because they had to. It's very impressive given their constraints, but it doesn't actually advance the field.

The interesting part is that distillations based on reinforcement learning based models are performing so well. That brings the cost down dramatically to do certain tasks.

I thought the distillations were SFT only?

They're SFT on the chain of thought output of R1
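Concretely, "SFT on R1's chain of thought" means sampling reasoning traces from the teacher and fine-tuning the student on them as ordinary supervised pairs. A sketch, where `teacher_generate` is a stub standing in for R1 (not a real API):

```python
# Sketch of distillation via supervised fine-tuning (SFT): collect the
# teacher's chain-of-thought completions and train the student on them
# as plain (prompt -> completion) pairs with next-token loss. No RL step
# is applied to the student. `teacher_generate` is a placeholder stub.

def teacher_generate(prompt):
    # stand-in for sampling the teacher's reasoning trace + final answer
    return f"<think>reasoning about: {prompt}</think>\nAnswer: 42"

def build_sft_dataset(prompts):
    dataset = []
    for p in prompts:
        dataset.append({"prompt": p, "completion": teacher_generate(p)})
    return dataset

data = build_sft_dataset(["What is 6 * 7?", "Sum the first 3 primes."])
# a student model would then be fine-tuned on `data` with ordinary SFT
print(len(data), "SFT examples")
```

This is why the distilled models are so cheap to produce: the expensive part (RL on the teacher) is done once, and everything downstream is conventional supervised training.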

> But while Facebook’s costs decreased as it scaled, OpenAI’s costs are growing in lockstep with its revenue, and sometimes faster

And here comes DeepSeek and takes the steam out of this and the cost arguments that follow it.


It's lose-lose for their valuation regardless. This scenario might, if anything, be worse for them. Now they have massive sunk capital investments that the second mover may be able to avoid. If the open source models get small enough and high enough quality, the rationale for running them in the cloud in the first place starts to evaporate.

How does OpenAI get paid for a use case that can easily be run locally on an iPhone?


Because they deliver models you cannot run locally on an iPhone, and those are almost always the models you want to run.

Are they though? Are they the ones you want to run "almost always"?

Some of the distilled models we're seeing are very good.


Very good at what use cases?

Isn't the argument that, after all of OpenAI's spending, an upstart derived a cheaper but similarly capable alternative from their investment? Where is the profitability in that equation coming from?

Now there's another supplier to meet the (potential?) consumer or corporate demand, which is diffused among more competitors, and open source at that.


So far we know only what they claim the costs are.

Inference costs can't be faked though, since the model can be run locally by anyone with capable hardware.

Even if the whole story about the training cost was fake, R1 and the distilled models are still very efficient at inference.


The shock for the industry was the claimed training costs and used hardware.

unless OAI has all the optimization tricks already, which they probably do

Is the model architecture actually that different from anything else? Or are you just saying that you can get away with smaller models now?

What DS did is in line with my expectations, as I see a lot of performance optimization possible at the algorithmic level. So even if the DS numbers are BS, somebody else will reach and even beat them tomorrow.

People are announcing the death of foundational models too early. Don’t people realize that the big AI players will take all of the proprietary things they’ve been building up behind closed doors and simply layer onto them all the winning techniques everyone else is publishing (like what DeepSeek has used)? DeepSeek itself is taking ideas that have proven out in various other papers and stacking them up to produce their gains (which they’ve been transparent about in their papers).

I also still don't believe their cost figures; I think they're leaving out the capital to acquire their secret GPU stash and the cost of pre-training their base model (DeepSeek-V3-base). I also suspect their training corpus, which they've only vaguely described, would reveal that the savings came from building off other foundation models' work without counting those costs in their figure.

For now, I treat the cost claim as simply a calculated strategy for China to not look like they’re behind in the most important race, to prevent investors from continuing to boost US technology by causing them to doubt the ROI, and to take value out of the US stock market as they did today.


Yeah, the cost figures need more scrutiny; they started with Llama 3, which they got for free. Had they had to build it from scratch, it would have cost more than $6M.

But as for your first paragraph: even if the "big AI players" have some secret sauce that will make their products better (and that they can actually keep secret), it seems unlikely it would be enough to command higher prices durably.

A model would have to be incredibly superior to justify paying for it, when there are so many free (or dirt cheap) alternatives that are simply good enough.


I don't know where you're getting your information from. Maybe you're confusing DeepSeek v3/r1 and the distilled r1 models.

DeepSeek V3/R1 architecture isn't anything like Llama 3. Llama 3 isn't even a mixture of experts, not to mention the various other differences like attention compression etc


Indeed I got confused. DeepSeek V3 is not based on Llama 3. Sorry about that.

You make a good point, that maybe the models won’t perform much better with those improvements, or at least not enough to get people to pay more.

I’m curious about the Llama3 bit - do you have a source for that? I’ve been hearing they trained using OpenAI outputs (not sure how that would work).


Pricing is the betting, or the wishing, embedded in a valuation. The buyer thinks it will increase; the seller thinks it's enough. No one knows what will happen in the future; maybe the Fed will print too much money.

A pretty good read that succinctly picks apart the realities of current AI businesses. Easily something I’d reference as a “primer” to someone that is more business-minded than technically-minded.

One point I’ll agree on is his final one: that the true big players haven’t even been founded yet. Right now, the AI hype seems to still revolve around the dream of replacing humans with machines and still magically making Capitalism work in the process, which is something I (and other “contrarians”) have beaten to death in other threads. That said, what these companies have managed to demonstrate is that transformer-based predictive models are a part of the future - just not AGI.

If I were a VC, I’d be looking at startups that take the same training techniques but apply them in niche fields with higher success rates than general models. An example might be a firm that puts in the grunt work of training a foundational model in a specific realm of medicine, and then makes it easier for a hospital network to run said model locally against patient data while also continuously training and fine-tuning the underlying model. I wouldn’t want to get into the muck of SaaS in these cases, because data sovereignty is only going to become an ever-thornier issue in the coming decades, and these prediction models can leak user data like a sieve if not implemented correctly. Same goes for other narrow applications, like single-mode logistics networks or on-site hospitality interfaces. The real money will be in the ability to run foundational models against your own data in privacy and security, with inference at the edge or on-device rather than off in a hyperscaler datacenter somewhere.

Then again, I could be totally wrong. Guess we’ll all find out together.


I believe one of the real insights from the widespread adoption of LLMs across problem domains is that the general knowledge such models acquire actually maps to increased performance on specific domain tasks. Hence fine-tuning is a better approach than training from scratch, unless you have insane compute (at which point, why restrict yourself to a narrow domain?)

Aren't there already a ton of startups doing finetunes for their local niche? Many aren't even "AI" companies; it's pretty easy to slap a finetune together if you have enough data.

If you mean developing a model from scratch just for your niche - the bitter lesson is that scale is everything and that a finetune from an internet-scale model will outperform you easily.


DeepSeek has done something pretty remarkable. It's certainly not "just" fine-tuning a Llama or a GPT prompt; it's more like an order-of-magnitude optimization.

A year ago Sam Altman was going around trying to convince people we all needed to drop 7 trillion dollars to build hundreds of fabs and nuclear power plants to fuel his AI ambitions. Only a week ago he was triumphantly announcing 500 billion dollar deals with our new President.

The (regrettably temporary) ousting of Sam Altman looks like the right call, in hindsight. Of course some amount of showmanship is expected, but the extreme nature of this self-serving BS is just laughable.

6 months from now we may be looking at Sam Altman the way we look at Adam Neumann.


Or Elizabeth Holmes....

At least OpenAI has a useful product

Said another way: Elizabeth Holmes's product hurt people by failing. Sam's product may well end up hurting society by working.

The bitter lesson was never about hardware scaling being the only ingredient. It was about favouring generalist approaches that can scale with your hardware budget over carefully handcrafted bespoke solutions that conserve the "cheap" resource.

I'm not seeing anyone at OpenAI abandon the static-weights model, and yet they have the audacity to claim that they just need to scale more?


The timing with the 500b deal is just perfect

Reality can rarely compete with good showmanship. Weren't we discussing HyperNormalisation just yesterday?

Latest word from The Leader:

"Even as some U.S. tech stocks plunged on Monday after it appeared that DeepSeek could produce similar results as rival models with a system that was cheaper to build, Trump projected confidence, calling it “very much a positive development.” He reasoned that American companies would be able to adapt and evolve based on DeepSeek’s demonstration that effective systems can be developed more easily than some assumed."


It’s not wrong

> I’d argue that the most valuable companies of the AI era don’t exist yet. They’ll be the startups that harness AI’s potential to solve specific, costly problems across our economy—from engineering and finance to healthcare, logistics, legal, marketing, sales, and more.

I feel like the author's concluding point contradicts himself. There is a gold rush and OpenAI is selling shovels.


I’d say Nvidia is selling shovel factories (training hardware), OpenAI is renting shovels (trained models as a service), and DeepSeek gave everyone a shovel for free.

But Nvidia is also selling steroids (inference hardware) that everyone will need to use their new free shovels.

This analogy may have gotten out of hand.


I thought Nvidia is selling shovels

Nvidia is selling shovels. OpenAI is renting out mining crews

Mickey Mouse animated the shovels with the big spell book, and now they're marching around and need shovels of their own...

This has me imagining it in terms of the game Cuphead. That’s also good for visualizing this I think.

It's that OpenAI is investing tens of billions of dollars in shovels while others, like DeepSeek, are open sourcing equivalently good shovels.

The vertical-specific companies, though, are harder to clone, as they invest in the product offering around/on top of the AI.


> OpenAI is selling shovels.

I think the author argues that OpenAI is not the only one selling shovels, and their shovels won't always be better than others'.


Glad to finally see Ashu Garg's writings on HN.

It’s the year 2000. We have the internet, a technology that will change the world. Yahoo is the most valuable company on earth. Among the coolest things people do is go to CompUSA and pay money for a web browser, Netscape Navigator, because it supports the <blink> tag so you can make your geocities page even more awesome. Google is still operating out of a garage somewhere, and won’t be household name until after the bubble bursts.

That’s where we are in the AI journey in 2025. The year 2000.


Everyone thinks their bet is the next Ford, Wright bros, Netscape, etc.

Before or after the bubble popped ?

During

And Microsoft literally spent $80 billion on top of that. Like, bro, imagine: an $80 billion company would be in the top 0.01 percent.

And that valuation could crumble because of DeepSeek.


What did they spend it on? I’m sure the next gen models will still be better to train on that new infra

That's the total investment (so far). I believe Microsoft also has a deal with OpenAI outside the series funding rounds.

Well, most of that money comes back to MS anyway, since OAI uses Azure heavily. But it's still a lot of money, and OAI's stock value will tank sooner or later when competitors like DeepSeek come along.


Why do we not value QBASIC in the billions? Honestly, we value Van Gogh paintings in the billions. The past cost us more; we got here because that art fought through decades of litigation. Does progress mean we forget all of that and pin our hopes on a promise of easy answers?

Scarcity.

Van Goghs have it; QBASIC doesn't. Anyone can download QBASIC for free.


For about a month now I've been paying $20-$30/day to delegate the bulk of my coding to Sonnet. The agentic loop that's trained into it is simply not matched by any other model.

I can't tell myself anymore that there's any open question as to whether there is long-term value here.

I expect within 2 years, this will seem like a non-controversial idea, and it won't bring in a ton of assumptions about the speaker.

I have invested much time and effort in making sure local models are peers to remote ones in my app, and none, including DeepSeek's local models, come remotely close to what's needed to make that flow work.

EDIT: Reply-throttled, so answering replies here:

- The machine is building the machine: Telosnex, a cross-platform Flutter app

- it can do 90% of the scope, especially after I wrote precanned instructions for doing e.g. property-based testing.

- Things it's done mostly wholesale: a secure iframe environment, on all 6 platforms, to execute JS in or render React components it wrote; completely refactoring my llama.cpp inference to use non-deprecated APIs.

- Codebase is about 40K real lines of code. (I have to think this helps a lot; I doubt that, e.g., from scratch it would be able to build a Flutter app that used llama.cpp.)

- $30/day!? Yeah, it's crazy; it's up an order of magnitude from my busiest days when I just copy-pasted back and forth. It reads as much code as it wants, and you're literally doing more work, so it adds up.

- $20/day is realistic average

- Lines added per day +55%, lines deleted per day +29%, files changed per day 9 -> 21 https://x.com/jpohhhh/status/1881453489852948561


$20 to $30 a day? How?? I have been using Sonnet every day for a month and have spent a total of... $7 on OpenRouter ($7/30 ≈ $0.23 a day on average).

Granted, I ask it very specific questions that generate short answers (many of which are incorrect, btw), but still, it's difficult to imagine what kind of tasks, done by a single person, would generate such amounts?


What sort of coding? Are you maintaining any large projects?

What tools are you using to delegate?

My own app, it's called Telosnex.

Unfortunately, the current available version doesn't have the agent stuff yet.

Hopefully in a week, realistically two.

I had the existing client app I've released-but-not-announced-out-loud. A couple of days before Christmas, for fun, I spent a couple of hours wiring up the Anthropic Model Context Protocol filesystem server example. Within an hour it was clear this was special and I needed to get it out ASAP. Stunning stuff in action.


The website for your editor (https://telosnex.com/) has some... character. That said, I believe it's worth a second look at making it nicer. I know you aren't a designer and are probably going for a more "raw" and "friendly" look by not putting much effort in, with conflicting fonts, colour schemes, etc., and I agree there's a lot of value in avoiding corpo-internet styles, but I still think the homepage could stand to look less like a mixture of AI sludge and poor Photoshop jobs.

Maybe consider something like https://www.gyan.dev/ffmpeg/builds/

or https://nemo.foo/

Nonetheless I will try it out


> Hopefully in a week, realistically two.

Wouldn't it be just a few prompts to get it done?


Scope creep! ;)

What a great take, I have thought this for a while.

The OpenAI vs. DeepSeek debate is fascinating... but I think people are oversimplifying both the challenges and the opportunities here.

First, OpenAI’s valuation is a bit wild—$157B on 13.5x forward revenue? That’s Meta/Facebook-level multiples at IPO, and OpenAI’s economics don’t scale the same way. Generative AI costs grow with usage, and compute isn’t getting cheaper fast enough to balance that out. Throw in the $6B+ infrastructure spend for 2025, and yeah, there’s a lot of financial risk. But that said... their growth is still insane. $300M monthly revenue by late 2023? That’s the kind of user adoption that others dream about, even if the profits aren’t there yet.

Now, the “no moat” argument... sure, DeepSeek showed us what’s possible on a budget, but let’s not pretend OpenAI is standing still. These open-source innovations (DeepSeek included) still build on years of foundational work by OpenAI, Google, and Meta. And while open models are narrowing the gap, it’s the ecosystem that wins long-term. Think Linux vs. proprietary Unix. OpenAI is like Microsoft here—if they play it right, they don’t need to have the best models; they need to be the default toolset for businesses and developers. (Also, let’s not forget how hard it is to maintain consistency and reliability at OpenAI’s scale—DeepSeek isn’t running 10M paying users yet.)

That said... I get the doubts. If your competitors can offer “good enough” models for free or dirt cheap, how do you justify charging $44/month (or whatever)? The killer app for AI might not even look like ChatGPT—Cursor, for example, has been far more useful for me at work. OpenAI needs to think beyond just being a platform or consumer product and figure out how to integrate AI into industry workflows in a way that really adds value. Otherwise, someone else will take that pie.

One thing OpenAI could do better? Focus on edge AI or lightweight models. DeepSeek already showed us that efficient, local models can challenge the hyperscaler approach. Why not explore something like “ChatGPT Lite” for mobile devices or edge environments? This could open new markets, especially in areas where high latency or data privacy is a concern.

Finally... the open-source thing. OpenAI’s “open” branding feels increasingly ironic, and it’s creating a trust gap. What if they flipped the script and started contributing more to the open-source ecosystem? It might look counterintuitive, but being seen as a collaborator could soften some of the backlash and even boost adoption indirectly.

OpenAI is still the frontrunner, but the path ahead isn’t clear-cut. They need to address their cost structure, competition from open models, and what comes after ChatGPT. If they don’t adapt quickly, they risk becoming Yahoo in a Google world. But if they pivot smartly—edge AI, better B2B integrations, maybe even some open-source goodwill—they still have the potential to lead this space.


AI is still a fad.

I wrote a tested prototype MMO in the last few weekends with AI as my power tools.

You're holding it wrong.


I'm VERY interested in your project. Not in playing it; I mean the techniques and tech stack. That sounds entirely out of my reach with an AI, and I've written game engines in C++ before. Networking, synchronization problems, etc. are really, really hard.

What process did you use with the AIs? Any prompting insights (context, agentic prompts, etc.)?

What tech stack did you use that you found the AIs were familiar enough with? I've found them woefully misinformed about most libraries and technologies I've tried them with in game development, often confidently mixing out-of-date and new information.


I'm using Cursor's composer agent mode with Sonnet 3.5 (I don't use OpenAI on principle, snakes). It does a great job of finding the relevant code without overloading its context window.

I experimented today with Aider (to get R1 involved) and had less success, but it might be that I don't have the workflow down.

I have found cursor can handle a .NET C# back-end using highly standard code structures very well. SignalR for networking.

I've created servers and very basic HTML visualization for three projects - a fairly simple autobattler (took a day), a web-based beat-em-up (2 days), and now a bit more ambitiously my dream RTS-MMO (3rd weekend running).

I started with concise MVP specifications including requirements for future scaling, and from these worked with the AI to make dot-point architectural documents. Once we had those down I moved step by step, developing elements and tests simultaneously, then having the agent automatically run the tests and debug. The test-driven debugging is the part that saved the most frustration, as the initial implementation was almost always broken. Left to its own devices (tabbing in and typing "continue" when hitting Cursor's 25-tool-call limit, sometimes for hours), the agent used the tests to guide bug fixing and amazingly got there fairly consistently, though occasionally it would go off the rails and start modifying the tests to pass or inventing unwanted functionality.
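The "run the tests, feed the failures back, repeat until green or the tool-call budget runs out" loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Cursor's actual implementation: `run_tests` and `ask_model` are stand-in hooks for the agent's tooling, and the 25-round budget mirrors the tool-call limit mentioned.

```python
def agent_debug_loop(run_tests, ask_model, max_rounds=25):
    """Hypothetical sketch of a test-driven agent debug loop.

    run_tests: callable returning (passed: bool, output: str)
    ask_model: callable taking a prompt string; in a real agent this
               would produce and apply a code patch as a side effect.
    Returns the round number on which the suite went green.
    """
    for round_num in range(1, max_rounds + 1):
        passed, output = run_tests()
        if passed:
            return round_num  # tests green: stop iterating
        # Feed failures back; the guardrail phrasing matters, since
        # agents sometimes "fix" the tests instead of the code.
        ask_model(f"Tests failing:\n{output}\nFix the code, not the tests.")
    raise RuntimeError("tool-call budget exhausted without green tests")
```

The key design point the poster hits on is that the test suite, not the human, supplies the feedback signal each round, which is why broken initial implementations still converge.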

The code is as standard as possible, with the servers all organized identically API -> Application -> Domain <- Infrastructure, and well separated between client/server. Getting basic HTML representations wasn't an issue, but it does begin to struggle and requires a lot more direction when it comes to client-side code that expands beyond initial visualization. I had a lot more success with Monogame C# than Phaser or other web formats (e.g. I quickly gave up on SFML, same issues you were having).
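The API -> Application -> Domain <- Infrastructure layering described above is essentially ports-and-adapters: the domain stays pure, the application layer orchestrates it behind an interface, and infrastructure plugs in from the side. A minimal sketch of the idea, in Python rather than the poster's C#, with made-up `Unit`/`AttackUseCase` names purely for illustration:

```python
from dataclasses import dataclass, replace

# Domain: pure game rules, no dependencies on the other layers.
@dataclass(frozen=True)
class Unit:
    id: str
    hp: int

def apply_damage(unit: Unit, amount: int) -> Unit:
    return replace(unit, hp=max(0, unit.hp - amount))

# Application: a use case orchestrating domain logic behind a "port"
# (any repository object exposing get/save).
class AttackUseCase:
    def __init__(self, repo):
        self.repo = repo

    def execute(self, target_id: str, damage: int) -> Unit:
        updated = apply_damage(self.repo.get(target_id), damage)
        self.repo.save(updated)
        return updated

# Infrastructure: a concrete adapter implementing the port; a real
# server would swap in a database- or network-backed version.
class InMemoryUnitRepository:
    def __init__(self):
        self._units = {}

    def get(self, unit_id):
        return self._units[unit_id]

    def save(self, unit):
        self._units[unit.id] = unit
```

Keeping every server organized identically this way is plausibly why the agent copes so well: it only ever has to pattern-match one structure.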

I'm a professional game developer but without formal CS/programming training, so I'm aware of my requirements but not always how to implement them cleanly. I understand the code it writes which feels vital when it occasionally rolls a critical miss, but these projects would have taken me months without AI.


How so? There are so many tangible applications already: reduced customers service costs, legal research, analysis of medical records or imaging, self-driving Waymos, and so on. Just the things I listed will have profound impacts on cost savings, productivity, and quality of life.

Little Boy certainly had "profound impacts on cost savings, productivity, and quality of life" in Hiroshima.

Impact isn't inherently positive.


It certainly lets untrained people create stuff they probably should not

How so?

When everyday people are telling you their opinions about what AI is gonna do you know it’s a fad.

Everyday people were talking about the iPhone. That was what made the iPhone a fad and prevented Apple from becoming the company with the highest market cap.

iPhone was a fad. Everyone and their grandmother had an idea for some kind of app. It was all people talked about, apps apps apps.

Then the market blew up and a few big winners ate up all the profit and everyone else died on the long tail.

Today, the iPhone exists, but people don't even think about it much. They just use it.

Pestering someone now about your iPhone app idea is like pestering someone about your website idea back in 2015.


What you’re talking about really isn’t that iPhone is or was a fad. What’s really happened is innovation has essentially halted and it has become a commodity like all smartphones.

Everyone and their dog keep trying to force me to use their apps, like there's a mantra inside their marketing divisions that apps are the shit. Doesn't look like apps are over. The market is just saturated with "app for X". The hype of getting rich from zero through one app is over, but apps aren't.

VR movies were a fad. You can date a TV to within a few years by the "VR mode" feature on its remote. No one is trying to sell me VR TVs anymore. That's what a fad looks like.


Everyday people were doing that with the Internet.

Yea, and then came the dotcom bust.

Yeah, and the Internet just never recovered!

I wish this whole internet fad would just die already!

I'd write a script that crawls all these AI topics and use an LLM to count how many times this conversation has been had, but I'm too tired to do so lol. Just don't feed the troll. If they're too stupid to see how revolutionary LLMs are by this point, just let them be. Downvote and move on with your life.

We’ll know when it can park a car in an everyday parking spot without messing up your Grandma’s Camry I suppose.

Oh yeah, let's all wait till then to get the value out of models today.

I'm not saying it doesn't have value, but why is it worth my time to spend 20 minutes prompt-engineering a tool to write me a plagiarized document I could write myself in 20 minutes? Why would I invest my time into using a tool that undermines my own value? What's the value prop for me?

Your inability to find value propositions and use cases is your issue.

K, just keep this conversation in mind in a decade or two when you realize that your input is the product, not what you got out of it.

Oh yeah—tell that to my AI pipelines on my local compute.

Local pipelines are great and all for now but there’s practically no way those will be able to keep up with server based models long-term. If those are useful for you today, that’s great.

Right and those "server based models" can be deployed by me or my team, on clusters we own.

You're talking in a thread about Deepseek...

Only if you’re moving the goal posts every other week.

Oh! It's "Open"AI because there's no moat! /s


