I am getting 100x value out of my 30 chatgpt bucks. I am doing things that I could not have done pre-gpt4, being more productive by a factor of, idk, 1.25 maybe.
It's quite simply the largest/simplest productivity improvement in my life so far. Given that it's only going to get better, unless they are underpricing the service by an enormous margin (as in: defrauding-shareholders margin), I have a hard time understanding what shape the bubble could possibly have.
I get that there are limitations with LLMs, but I don't understand people saying they have no value just because they occasionally hallucinate. Over the past week I've used ChatGPT to code not one but two things that were completely beyond my knowledge (an auto-delete JS snippet, and a GNOME extension that turns my dock red if my VPN turns off). These are just two examples. I've also used it to write a handy regex and a better bash script.
LLMs are insanely helpful if you use them with their limitations in mind.
> LLMs are insanely helpful if you use them with their limitations in mind.
This depends on your use case. I can honestly say that all the chatbot AIs don't "get" my kind of thinking about mathematics and programming.
Since a friend of mine, a graduate student in computer science, did not believe my judgement, I verbally presented him with some test prompts for programming tasks where I wanted the AI to help me (these are not the most representative ones for my kind of thinking, but they are prompts for which it is rather easy to decide whether the AI is helpful or not).
He had to agree from the description alone that the AIs would have difficulties with these tasks, despite the fact that these are common, very well-defined programming problems. He opined that these tasks are simply too complex for the existing AIs, and suggested that if I split them into much smaller subtasks, the AI might be helpful. Let me put it this way: I personally doubt that if I stated the subtasks in the way in which I would organize the respective programs, the AI would be of help. :-)
What was important for me was to be able to convince my counterpart that whether AIs are helpful for programming depends a lot on your kind of thinking about programming and your programming style. :-)
I would say that the ability to break a problem down into manageable chunks is the mark of a senior dev. I think of ChatGPT as a junior that's read a lot but understands only a little. To crib Kurzweil, you gotta 'run with the machine'.
This is a rather long post; I'm genuinely curious why you did not describe the problem that you want to solve. Is it too complex for even humans to understand?
Don't take the following prompts literally, but think along the lines of:
"Create a simple DNS client using C++ running on Windows using IO Completion ports."
"Create a simple DNS client using C++ running on GNU/Linux using epoll."
"Write assembler code running on x86-64 running in ring 0 that sets up a minimal working page table in long mode."
"Write a simple implementation of the PS/2 protocol in C running on the Arduino Uno to handle a mouse|keyboard connected to it."
"Write Python code that solves the equivalence problem of word equivalence in the braid group B_n"
"Write C++|Java|C# code that solves the weighted maximunm matching matching problem in the case of a non-bipartite graph"
...
I experimented with such types of prompts in the past and the results were very disappointing.
All of these are tasks that I am interested in (in my free time), but they would take some literature research to get a correct implementation, so an AI could theoretically be of help if it were capable of doing these tasks. But since, for each of these tasks, I don't know all the required details from memory, the code that the AI generates has to be "quite correct"; otherwise I have to investigate the literature, and if I have to do that anyway, the benefit that the AI brings strongly decreases.
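For what it's worth, the last prompt is the cheapest one to verify: networkx ships a blossom-based max_weight_matching that handles general (non-bipartite) graphs, so any AI-generated solution can at least be sanity-checked against it. A minimal sketch, assuming networkx is installed (the example graph is made up):

    # sanity check for the non-bipartite weighted matching prompt, using
    # networkx's blossom-based implementation; the graph is illustrative
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        (1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3),
    ])
    matching = nx.max_weight_matching(G)  # set of matched vertex pairs
    print(matching)  # e.g. {(2, 4), (5, 3)}, total weight 7 + 9 = 16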
I tried the first one, but currently I don't have time to verify what it generated.
But I have done many Arduino/Raspberry Pi things lately for the first time in my life, and I feel like ChatGPT/Copilot has given me a huge boost. Even if it doesn't always give 100 percent working code out of the box, it gives me a strong starting point that I can keep tweaking myself.
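For anyone wanting to eyeball what a correct answer to the first prompt should contain: the wire-format half of it is only a few lines. A minimal blocking-Python sketch (it deliberately skips the IOCP/epoll machinery that makes the original prompts hard; the resolver address and domain are just examples):

    # minimal DNS A-record query over plain UDP (blocking, illustrative only)
    import socket, struct

    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, QDCOUNT=1
    qname = b"".join(bytes([len(p)]) + p for p in b"example.com".split(b".")) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(query, ("8.8.8.8", 53))
        resp, _ = s.recvfrom(512)

    print(struct.unpack(">H", resp[6:8])[0], "answer record(s)")  # ANCOUNT field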
> LLMs are insanely helpful if you use them with their limitations in mind.
The fact that LLM responses can't be ad-supported (yet) makes them much more valuable than internet search, IMO. You have to pay for ChatGPT because there are no ads. No ads means no constant manipulation of content and of your searches to get more ads in front of you.
Having to pay to use GenAI is, ironically, its best selling point.
You're really generating $3000 per month from ChatGPT? Can you give a hint about what you've built that generates this kind of ROI?
I have only seen people making money in AI by selling AI products/promises to other people who are losing money. The practical uses of these tools still seem to be largely untapped outside of as enhanced search engines. They're great at that, but that does not have a return on value that is in proportion to current investment in this space.
> Can you give a hint about what you've built that generates this kind of ROI?
Sure. Absolutely nothing amazing: (Mostly) internal software for a medical business I am currently building.
It's just that the actual cost of hiring someone is quite a bit higher than what is printed on the paycheck, and the risk attached to anyone leaving on a small team is huge (n=0 and n=1 is an insane difference). GPT4 has bridged the gap between being able to do something and not being able to do something at various points over the past year.
EDIT: And to be clear, while I won't claim "rockstar programmer", I have coded for roughly 20 years, which is the larger part of my life.
Just spoke to a restaurant group owner in Mexico who was able to eliminate their web developer because he can now ask ChatGPT to draft up a basic website.
The kicker? It couldn't do the interactive menu their old website did, so now clicking menu links to a PDF. Which is always, ALWAYS, better.
Even just looking at ChatGPT as a better frontend to the Wix help docs, ChatGPT empowered this restaurant owner to do the job themselves, rather than having to have a person do it. Which means that person is out of a job. Good for the restaurant owner, but bad for that person. Which means it's down to personal relationships and how you treat people and all those soft skills that aren't programming.
Yes, but which one of those thousands? How long would it take to learn how to use it? Etc. There's still less friction in just asking ChatGPT to do this via the same interface you ask it to do a bunch of other stuff.
Sorry, but why is pdf better than html? If pdf is better, would you prefer every website just downloaded a pdf to your phone when you visit their url, instead of serving you html? If not, why is it different for a restaurant menu?
It's better in the pragmatic sense of like, it's more likely to be updated. They already have a PDF or docx laying around because they had to design their print menu, so now they can just upload it. But yes, ideally the menu would be html and would be accurate and up to date and responsive on mobile.
This is just a +1 to the ROI discussion, but I'd say that AI tooling roughly doubles my development productivity.
Some of it is in asking ChatGPT "Give me 3 possible ways to implement X" and getting something back I hadn't considered. A lot of it is in a sort of "super code completion".
I use Cursor and the UI is very slick. If I'm stuck on something (like a method that's not working) I can highlight it and hit Cmd+L and it will explain the code and then suggest how to fix it.
Hit Cmd+K and it will write out the code for you. I've also gotten a lot of mileage out of writing a rough version of something in a language I know and then getting the AI to turn it into something else (ex: Ruby to Lua).
You are only looking at one dimension. What is your hourly rate, based on your salary? If ChatGPT saves you 10 hours a month, that could easily be over $2,000.
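Back-of-envelope, with purely illustrative numbers (the salary and overhead multiplier are assumptions, not anyone's actual figures):

    # rough sketch of the hourly-rate math; every number is illustrative
    salary = 300_000             # assumed base salary
    loaded = salary * 1.4        # rough fully-loaded-cost multiplier
    hourly = loaded / (52 * 40)  # about $202/hour
    print(round(hourly * 10))    # about $2019 for 10 saved hours a month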
But that’s only true if it eventually puts an extra $2,000 in your pocket or an extra 10 hours in your life.
If you estimate that it saves you 10 hours per month, but your salary stays the same and you don't work fewer hours, did it really give you $2,000 in value?
Obviously I don't know the details of OP's situation. Maybe they aren't salaried. Maybe they work for themselves. Etc. I just think people tend to overestimate the value of GPTs unless it is actually leaving them with more money in their pocket.
It's that "it's only going to get better" part that is driving the bubble, I think.
The market has this idea that over the next year we're somehow going to have AI that's literally perfect. Yet that's not how technology works; it takes decades to get there.
It'd be like if the first LCD TV was invented, and all of a sudden everyone is expecting 8k OLED by the next year. It just doesn't work like that.
Fair, but again: if it just stayed frozen in the state it is in now (and that assumption is about as unreasonable as it being "perfect" in a year), it's already going to be tremendously useful for increasingly many people (when cost goes down, and it will, accessibility will go up), at least until something better comes around.
For those who extract value right now, the simple alternative (just not using it) is never going to be the better choice again. It's transformative.
Yeah, but this is different because it's largely just money -> more GPUs -> more people looking at the problem -> better results. You can't stumble upon an 8K TV overnight but you can luck upon the right gold mining claim and you can luck upon some new training algorithm that changes the game immediately.
Do any AI companies actually turn a profit? I feel like the only real winner is Nvidia because they are selling shovels to the gold diggers, while all the gold diggers are trying to outspend each other without a business model that has any consideration for unit economics.
I love a prudent take on company money – but given how investing works and how young this entire thing is and the (to me) absolutely real value, I find it hard to be very worried about that part right now.
I can literally run a ballpark model on my MB Pro, right now, at marginal additional electricity cost. I will be the first to say that all of this (including GPT4) is still fairly garbage, but I can't remember a time in the history of tech when less fantasy was required to get from here to something genuinely good.
The thing is that the bigger giants like MSFT or Amazon probably profit quite nicely from AI. Smaller companies not aligned with any big giant, probably not.
I’m really curious what is making you so much more productive. My experience with AI has largely been the opposite. Also curious how you’re using AI to make $3,000 per month more than you would without it.
I feel the same way. I think LLMs are neat, and I find them interesting from a technical standpoint, but I have yet to have them do anything for me that's more than just a novelty. Even things like Copilot, which I'll admit has impressed me a bit, doesn't feel like it would radically change my life, even if it was completely foolproof.
Then he is either making an extra $3,000/month or working 30 hours less per month. If the former isn’t true, I am extremely doubtful that the latter is.
Seems more likely that he is overestimating the value that LLMs are bringing him. Or he is an extreme outlier, which is why I was asking for further details.
> largest/simplest productivity improvement in my life, so far
There have been many productivity improvements in recent years: internet search, internet forums, Wikipedia, etc.
LLMs and other AI models are a continuation of this improvement in information processing.
The bubble is that every $1 in capital going to OpenAI/Nvidia is a $1 that cannot be invested anywhere else: Healthcare, Automotive, Education, etc. Of course OAI and Nvidia will invest those funds, but in areas beneficial purely to them. Meta has lost $20bn trying to make Horizon Worlds a success, and appears to have abandoned it.
Even government-led industrialization efforts in socialist economies led to actual products, like the production of the Yagan automobile in Chile in the 1970s[0].
We've already had a decade-plus of sovereign wealth funds sinking tens of billions into Uber and autonomous driving. We still don't have those types of cars on the road, and it's questionable whether self-driving will even generate the economic growth multiplier that its investment levels should merit.
I did a quick check and it seems like the entire Uber and clone industry is net negative. Uber, Lyft, Didi, Grab seem to have lost more money than was invested and once they stabilize they look like mediocre businesses at a global scale (Uber's been banned from many jurisdictions for predatory practices and in many other jurisdictions it seems to trend towards being as expensive as taxis or more once profitability becomes a target).
This sounds like the broken window fallacy. You could use the same logic to suggest that Meta dump piles of cash on the sidewalk in front of their office - it’d circulate but it wouldn’t help them.
If the same $20bn was spent on fixing a bridge, people would spend those wages to boost economic activity AND have a fixed bridge that will improve output even more. Horizon Worlds isn't a productive use of capital in that regard.
It'd be one thing if they open-sourced their VR tech, some of that could lead to productive tech down the line, but as a private company, they're not obliged to do any of that.
Last night I asked ChatGPT 4 to help me write a quick bash script to find and replace a set of 20 strings across some liquid files with a set of 20 other strings. The strings were hardcoded, it knew exactly what they were in no unclear terms. I just wanted it to whip up a script that would use ripgrep and sed to find and replace.
First, it gave me a bash script that looked pretty much exactly like what I wanted at first glance. I looked it over, verified it even used sed correctly for macOS like I told it to, and then tried to run it. No dice:
    replace.sh: line 5: designer_option_calendar.start_month.label: syntax error: invalid arithmetic operator (error token is ".start_month.label")
Not wanting to fix the 20 lines myself, I fed the error back to ChatGPT. It spun me some bullshit about the problem being the “declaration of [my] associative array, likely because bash tries to parse elements within the array that aren’t properly quoted or when it misinterprets special characters.”
It then spat out a “fixed” version of the script that was exactly the same, it just changed the name of the variable. Of course, that didn’t work so I switched tactics and asked it to write a python script to do what I wanted. The python script was more successful, but the first time it left off half of the strings I wanted it to replace, so I had to ask it to do it again and this time “please make sure you include all of the strings that we originally discussed.”
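(Incidentally, that bash error is the classic symptom of assigning to an associative array that was never declared with declare -A; without the declaration, bash parses the dotted key as an arithmetic expression.) The Python approach it eventually got right is roughly this shape, a sketch with illustrative strings and paths, not the actual script:

    # sketch of the find-and-replace script; strings and paths illustrative
    from pathlib import Path

    REPLACEMENTS = {
        "designer_option_calendar.start_month.label": "calendar.start_month.label",
        # ...the other 19 pairs go here
    }

    for path in Path(".").rglob("*.liquid"):
        text = path.read_text()
        for old, new in REPLACEMENTS.items():
            text = text.replace(old, new)
        path.write_text(text)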
Another short AI example, this time featuring Mistral's open-source model on Ollama. I'd been interested in a script that uses AI to interpret natural language and turn it into timespans. Asking Mistral "if it's currently 20:35, how much time remains until 08:00 tomorrow morning" had the model return its typical slew of nonsense and the answer "13.xx hours". This is obviously incorrect, though funnily enough, when I plugged its answer into ChatGPT and asked how it thought Mistral might have come to that answer, it understood that Mistral did not understand midnight on a 24-hour clock.
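For reference, the arithmetic it should have done; the date below is arbitrary, only the clock times come from the prompt:

    # the timespan Mistral got wrong: 20:35 one evening to 08:00 next morning
    from datetime import datetime

    now = datetime(2024, 1, 1, 20, 35)   # arbitrary date, 20:35
    target = datetime(2024, 1, 2, 8, 0)  # 08:00 the next day
    print(target - now)                  # 11:25:00, i.e. ~11.42 hours, not 13.xx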
These are just some of my recent issues with AI in the past week. I don’t trust it for programming tasks especially — it gets F# (my main language) consistently wrong.
Don’t mistake me though, I do find it genuinely useful for plenty of tasks, but I don’t think the parent commenter is wrong calling it snake oil either. Big tech sells it as a miracle cure to everything, the magic robot that can solve all problems if you can just tell it what the problem is. In my experience, it has big pitfalls.
I have the same experience. Every time I try to have it code something that isn't completely trivial or all over the internet like quicksort, it always has bugs and invents calls to functions that don't exist. And yes, I'm using GPT-4, the best model available.
And I'm not even asking about an exotic language like F#, I'm asking it questions about C++ or Python.
People are out there claiming that GPT is doing all their coding for them. I just don't see how, unless they simply did not know how to program at all.
I feel like I'm either crazy, or all these people are lying.
> I feel like I'm either crazy, or all these people are lying.
With some careful prompting I've been able to get some decent code that is 95% usable out of the box. If that saves me time and changes my role there into code review versus dev + code review, that's a win.
If you just ask GPT4 to write a program and don't give it fairly specific guardrails I agree it spits out nearly junk.
> If you just ask GPT4 to write a program and don't give it fairly specific guardrails I agree it spits out nearly junk.
The thing is, if you do start drilling down and fixing all the issues, etc., is it a long-term net time saver? I can't imagine we have research clarifying this question.
> People are out there claiming that GPT is doing all their coding for them. I just don't see how, unless they simply did not know how to program at all.
I doubt it, and certainly not for anything beyond the basics. I've seen (and tried) GPTs for code input a lot, and they often come back with errors or weird implementations.
I made one request yesterday for a linear regression function (yes, because I was being lazy). So was ChatGPT... It spat out a trashy, broken function that wasn't even remotely close to working; it was more along the lines of pseudocode.
I complained, saying "WTH, that doesn't even work", and it said "my apologies" and spat out a perfectly working, accurate function! Go figure.
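For reference, a correct version really is only a few lines; a minimal closed-form least-squares sketch (illustrative, not ChatGPT's exact output):

    # simple linear regression via closed-form least squares (illustrative)
    def linear_regression(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs)
        slope = cov / var
        return slope, mean_y - slope * mean_x  # (slope, intercept)

    print(linear_regression([1, 2, 3], [2, 4, 6]))  # (2.0, 0.0)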
I hear you. It's all pretty bad. I have spent half-days getting accustomed to and dealing with gpt garbage — but then I have done that plenty of times in my life, with my own garbage and that of co-workers.
On the margins it's getting stuff good enough, often enough, quick enough. But it very much transformed my coding experience from slow deliberation to a rocket ride: Things will explode and often. Not loving that part, but there's a reason we still have rockets.
I've had the same experience, but I usually get what I want. Admittedly, I'm using it to script around ffmpeg which is a huge pain in the ass.
That said, every single script it churns out is unsafe for files with spaces on the first go-round. Like... OK. It's like having a junior programmer with no common sense available.
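The spaces problem disappears entirely if you skip the shell: argument lists passed to Python's subprocess are never re-split on whitespace. A minimal sketch (the ffmpeg flags and paths are illustrative):

    # space-safe ffmpeg invocation: list arguments bypass shell word-splitting
    import subprocess
    from pathlib import Path

    for src in Path("videos").glob("*.mkv"):  # handles "my file.mkv" fine
        dst = src.with_suffix(".mp4")
        subprocess.run(
            ["ffmpeg", "-i", str(src), "-c:v", "libx264", str(dst)],
            check=True,
        )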
How many GPUs are being delivered today and for how long will they be used / what's their life?
Who is funding the purchase of those GPUs?
If VC money then what happens if the startups don't make money?
Are users only using AI apps because they are free, and will they dump them soon?
Isn't there competition in semiconductors? Won't we have chips-as-a-commodity soon? LLMs-as-a-commodity?
Is Big Tech spending all this money to create VALUE or just to survive the next phase of the technological revolution? (e.g. the AI rush)
If prices are high, and sales are high, and competition is still low, then how much is Nvidia actually worth? And if we don't know, why is it selling for so many times earnings?