I want to like LLMs. People are clearly betting trillions on them, and if those bets go sour, it will be bad for everyone. But my impression right now is that a large percentage of LLM use is for spamming, scamming, and cheating on homework. On the plus side, some software engineers save a couple minutes here and there with a slightly nicer autocomplete, white collar workers can get some advice on email style, and maybe it helps with sifting through millions of pages of legal discovery.
Overall, this stuff by itself really doesn't seem world-changing to me. People should continue learning the same basic skills that have been valuable in the past. Speculation on how it will change society seems about as baseless and premature as guesstimating how society will change if quantum computers can somehow solve all problems instantly, ignoring the fact that they almost certainly can't.
I am hopeful for some of the more advanced applications, e.g. proof assistance in math, but I think it will take a while to know what the killer apps are there, and what their impact will be.
The next decade will be interesting, when the ChatGPT babies hit the job market, having done 100% of their homework with AI and passed their interviews thanks to AI, only to enter code bases written by AI.
The other problem is that unless we hit AGI really fast, the quality can go nowhere but down: there will be less and less unpolluted data to train on, and code bases will become more and more bloated because, let's be real, LLMs don't produce very good code, &c.
In the meantime we're left with AI-generated scams, AI-generated cooking recipes that don't work, AI-generated art that looks like ass, AI bots flooding social media, &c.
> But my impression right now is that a large percentage of LLM use is for spamming, scamming, and cheating on homework.
LLMs as we have them now are the application that lets a computer understand a non-programmer and communicate with a non-programmer; they are the new eyes, ears, and mouth of computers.
This works so well that it can already pretend to be a human. It is like realizing: yes, in fact, this mouse lets me move a cursor on the screen and click on an application, as if it were my finger inside the computer.
But I guess it will ultimately be agents that show the true power of this new interface. Agents that assist your memory by reminding you of things at the right time, that let you ask questions while you are leaving the house and so on, and that even contact other agents or other people to answer your questions or carry out what you asked.
It's funny how back in the day there were people who were CLI elitists and forwent using a mouse for as long as possible because "the CLI is just better at everything". People like this still exist to some extent, and man can they ever wield a keyboard, but if computer development had been left to them, we regular people would all be slogging through a command line every day to do everything.
I've actually seen much more of the opposite since I got into the industry a decade ago, with some developers absolutely terrified of touching a CLI. That approach can work out fine for some roles but man I think it's unnecessarily limiting. I'm far from a CLI guru but find some basic knowledge there pretty useful from time to time. Most of the CLI experts I've personally met are experts at much newer tools as well.
Gatekeeping is not always bad. Most people, including many members of this forum, have not seen their happiness and value to society increase because of ubiquitous personal computing. Perhaps the CLI interfaces should have prevailed and compute should have remained the domain of the educational, research, industrial and financial sectors.
Perhaps more people should heed the 'AI' skeptics. Just because we can do something doesn't mean we should.
It isn't even a specifically nerd thing for terminal tools to be more efficient as a whole. Pharmacies here until very recently used MS-DOS-based software (probably running in an emulator) to type in your data, order drugs, etc., and the efficiency was much greater than your average mouse bumbler's. I cringe every time I see a point-of-sale device with a touchscreen. Absolute abominations.
For one, the principal-agent problem. Who's building, running, and maintaining these agents? I doubt that these are running locally and tweaked by the users themselves. The business model is to sell these things as a service running in the cloud. That means the operator has final say on how these systems are implemented. Sure, your helpful AI agent today may remind you to pick up some soda at the store. But as the last decade-plus of "enshittification" has shown, it's only a matter of time before your helpful AI agent will remind you to pick up some delicious Mountain Dew (TM) at the Piggly Wiggly (TM) and use your American Express (TM) card to earn Roblox (TM) points.
And even if we assume these agents don't eventually come to serve masters other than the end-user, the internet is already a user-hostile place. It's entirely reasonable to believe it'll be agent-hostile too, where APIs and UI designs adapt to trick AI agents with "dark patterns" to do things that aren't necessarily in their users' best interests.
Techno-utopianism might feel good, but economics and money-making will drive how these things actually play out.
You are not wrong, just ignore the comments here. Current-day LLMs are a "deep seek" mechanism. For every task that had little or no equivalent on the internet, they were miserably bad (from o1 to DeepSeek to Claude). We clearly need a different paradigm for "intelligence" and a sort of "consciousness" that gets a "handle" on reality. LLMs are very inefficient because they are trying to implement these mechanisms within a "predict the next token" environment.
All advancements in "AI" in the last year or so were in smaller models becoming more powerful, more companies reaching OpenAI levels, and a bit more accuracy. I think this is the plateau for the current theoretical tech.
Plus: one case I bumped into today: SOCKS5 with the latest hyper version to proxy HTTPS requests. All models were completely off track. None was able to produce a functioning implementation. This is just a few lines of code.
I’m waiting too, because people around me are ecstatic (without knowing the technical details and limitations), but I don’t see the point.
The internet, email, Gmail, and smartphones were some kind of revolution because they helped me. AI is only here to prevent me from thinking instead of improving my life one way or another.
All I see is it preventing me from doing what I do, potential hallucinations that can harm work and relationships, and a decline in the quality of that work (especially SWE, which gets worse every decade). That’s a bad kind of revolution.
>On the plus side, some software engineers save a couple minutes here and there with a slightly nicer autocomplete, white collar workers can get some advice on email style, and maybe it helps with sifting through millions of pages of legal discovery.
Lol get a grip. Jeff Dean just said 25% of the code being checked into Google's internal codebase is now AI generated. If you think it's just kids using it for homework and engineers saving a few minutes here and there you're kidding yourself.
>I am hopeful for some of the more advanced applications, e.g. proof assistance in math,
"I expect, say, 2026-level AI, when used properly, will be a trustworthy co-author in mathematical research, and in many other fields as well." You just aren't in the same universe as what the top experts are saying about this tech.
You seem like one of those people who used GPT-3.5 once and haven't updated at all since then. You're criticizing the Wright Flyer and we're already at turboprops.
> Jeff Dean just said 25% of the code being checked into Google's internal codebase is now AI generated.
This assertion feels misleading. When I worked at a FAANG, 10% of our codebase was generated Protobuf. Obviously, this was highly repetitive boilerplate that was equivalent to maybe a few thousand actual lines of code. What does the 25% of LLM-generated code actually look like? I'm guessing it's doing nothing particularly novel or interesting. (Also, I have to wonder if a large chunk of that 25% could be encapsulated as a more reliable and deterministic abstraction.)
I feel like that's people not yet knowing how to use them. With just a little bit of context-setting, they can be force multipliers. I do agree with you that you need the underlying skills. Frequently I use them to do something I've done before, skipping the time I'd spend looking up syntax, etc.
I've used them for syntax too, and gotten genuine benefit from that. I just don't think it was all that hard before, and the risk that people feel they no longer need to actually understand their tools looms large in my mind.
Perhaps, but once upon a time you'd either memorize syntax or work with a reference book in front of you. Over time we came to use various syntax completion and introspection tools, as well as Internet references, and overall I'd say we aren't dumber than we were then.
It's taboo to talk about how much you use it? Tell that to my LinkedIn feed. Everyone is desperate to tell me how desperately I need to use AI or I will fall behind in the rat race. Every app has an animated glowing starry "use our new AI features!" thingy that I mentally block out, no matter how much it (often literally) jumps up and down and waves at me. There is no question in my mind how badly the powerful want me to use AI and get others to use it.
The things LLMs can actually be put to good use for are either bad for society (spam, scams, cheating on homework) or do not justify the massive investment. Better emails are great, but they're just not worth billions of dollars.
I want to be clear, from an academic perspective these things are fascinating and very, very cool! You can teach a computer to talk like a person might simply by having it read all the books! But there is not a business reason to have this. Not at the current costs. Maybe LLMs are a step we need on the way to actually useful AI tools, but as currently constituted they're just not there.
The killer app of LLMs is how they're gonna kill the industry when the investments fail to make the expected return. A lot of people are gonna lose their jobs when managers try to replace them with AI, and even more are gonna lose their jobs when that doesn't work and all those companies go under, and there's another tech bubble burst.
>Better emails is great, but its just not worth billions of dollars.
I mean, let's just say your premise is right: how much time is spent worldwide writing emails that could be automated? Just on a pure man-hour basis it's surely worth billions, even if it did nothing else.
>But there is not a business reason to have this. Not at the current costs.
ChatGPT Pro tier is $200/mo; if you can't find ways to get that value, I'm sorry, but it's a skill issue and you're ngmi. The API tier is like $/million tokens. It's a joke to complain about the cost at this point...
Assuming they mean the popular definition of "the internet", it wasn't new 20 years ago. If you want to relate it to the current state of LLMs, you'd have to go back 30.
And now look at how politically divided most western countries are, how everything is some kind of ad delivery mechanism, how brains get fried by constant attention traps... yeah I'm not sure these people were 100% wrong
Honestly, each year it seems like the value that the internet offers goes down while the harm that it causes goes up.
The enshittification of search and Quora means that outside of Wikipedia and sometimes Reddit, the Internet's offer of "access to all of the world's information" is mostly dead.
With social media becoming a hellscape, people have mostly retreated to semi-private Discord servers, which severely damages the Internet's offer of "participate in a global commons".
There are a few bright spots like easy video sharing. However, I'm not sure I trust them not to get enshittified next.
The invention of the chainsaw didn't make the knowledge of how to fell a tree obsolete, but it sure made the process a whole lot faster.
If you're writing software and not using LLMs, you're working at a fraction of the pace of those who do. While it's not going to create some world-changing algo today, it completely eliminates most of the boilerplate you need to type out and makes implementing tests much, much faster. At least that has been my experience in Golang at work.
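To give a concrete sense of what I mean by test boilerplate, here's a minimal sketch of a Go table-driven test; the Add function and the cases are made-up stand-ins for illustration, not anything from my actual codebase:

    package calc

    import "testing"

    // Add is a stand-in for whatever function is under test.
    func Add(a, b int) int { return a + b }

    // TestAdd shows the repetitive table-driven scaffolding an LLM can
    // fill in quickly once you hand it the function signature.
    func TestAdd(t *testing.T) {
        cases := []struct {
            name string
            a, b int
            want int
        }{
            {"zero", 0, 0, 0},
            {"positive", 2, 3, 5},
            {"negative", -2, -3, -5},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                if got := Add(tc.a, tc.b); got != tc.want {
                    t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
                }
            })
        }
    }

Nothing in there is hard, it's just typing; the LLM spits out the table and the loop so I only have to fill in the cases that matter.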
> At least that has been my experience in Golang at work.
Maybe so for you, but you might want to think about why your work involved so much boilerplate and routine content in the first place. If so much of it is easily completed by an LLM already, what was your employer really paying your five or six figures of annual compensation for? Do you think most of your value to the company has just been in typing boilerplate and mindlessly generating tests until now?
Personally, I'd feel pretty insecure in my career and underdeveloped in my trade if I could answer yes to that.
If it does apply to you (or others reading this), hopefully you can exploit this brief window of gained personal productivity to develop more valuable skills and get yourself into more valuable roles ASAP. Software engineering as a profession isn't going anywhere, but the people most helped by today's fairly primitive LLM coding assistants are the ones actually vulnerable to being squeezed out of the trade, because it means they're not contributing very much specific expertise.
The productivity change you've experienced should be taken as a bright red warning flag that you've unwittingly settled into a role as a technician getting paid highly because of an ephemeral demand bubble, rather than the valuable engineer you might imagine yourself to be.
I just want to add that many desk jobs do the equivalent of writing boilerplate code. Copying from calendar A to calendar B is a real $70k/yr job I’ve seen. And from what I can tell, being a human shell script plus representing that function (a responsible face and name) characterizes lower-tier white-collar jobs.
So the mere existence of “worthless” churn in your job isn’t evidence that you aren’t solving business problems that are worthy of pay.
To put it another way. Most jobs exist to make a problem go away for an executive.
I understand you are also getting at the ethos of good engineering and encouraging this person to take pride in their work.
> you've unwittingly settled into a role as a technician getting paid highly because of an ephemeral demand bubble, rather than the valuable engineer you might imagine yourself to be.
I have not seen this put so succinctly and this cannot be repeated enough.
Most of what I do at work is not necessarily the same thing as the value that I add. I solve business problems with software. A very good portion of that is just boilerplate and lines of code that anyone can write. Someone has to write the main.go, read in the CLI args and the environment args, define and initialize the API clients, define the server API spec, and then finally, you write the business logic. Then you write the tests to validate the business logic.
Many of those tasks can be completed via normal copy and paste. It's much faster to have the LLM do it. When you're initializing a client for a database in Go, you can refer to the library docs on how to do it, or you can just let the LLM do it.
Maybe you have a photographic memory and don't need to reference docs for other people's libraries; I don't have that ability. But the LLMs do, so I use them.
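For illustration, a minimal sketch of the kind of main.go setup boilerplate I'm talking about; the environment variable name and the Postgres driver are assumptions made for the example, not anything specific to my job:

    package main

    import (
        "database/sql"
        "log"
        "os"

        _ "github.com/lib/pq" // illustrative choice of Postgres driver
    )

    func main() {
        // Read configuration from the environment; the variable name is made up.
        dsn := os.Getenv("DATABASE_URL")
        if dsn == "" {
            log.Fatal("DATABASE_URL is not set")
        }

        // Initialize the database client: the kind of thing you either copy
        // from the driver's docs or let an LLM fill in.
        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatalf("open database: %v", err)
        }
        defer db.Close()

        if err := db.Ping(); err != nil {
            log.Fatalf("ping database: %v", err)
        }

        // ...business logic goes here.
    }

None of that is the value I add, but someone has to type it, and the LLM types it faster than I can.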
I saw a post recently on LinkedIn from a designer, and I thought it was remarkably accurate. It basically said that AI was going to eliminate a lot of the "low value" creative jobs that don't actually have a lot of creativity: e.g. technical writing and illustration that really serves just to summarize some application (AI is great at summarization), social media ad copy, etc.
But, the weird part to me is that this guy said he was optimistic because he works with world class designers and writers who will be able to use AI to enhance their creativity. I agreed with that statement, but I completely don't understand his optimism given obviously the vast majority of creatives (or really, any occupation) are not "world class", by definition. AI will just set a much higher bar for being able to contribute economically.
We're basically headed to a world where most non-physical occupations look like professional sports or acting. That is, a world where a few "stars" that are unique in their abilities vacuum up the vast, vast majority of the money, and everyone else can essentially have their skills replicated, and thus replaced, by AI.
The reason I think the AI optimists are full of shit is because I've never seen a decent response to this argument. Usually I just get something hand-wavy, or worse, "well people at the bottom will just need to up-skill and become world class." Yeah, just tell everyone to be in the top 5%, that makes mathematical sense...
> Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.
> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
The idea that distributing a "universal basic compute budget" to every person would do anything to solve potential economic inequality that may arise due to hypothetical AGI is just comically simplistic and childish. Giving people access to AI won't fix power imbalances if wealth continues to concentrate.
Sam Altman is a professional bullshit peddler—particularly at this stage in his career, I rarely hear of him doing anything else.
Even if that turns out to be true - I don't want human life to be about accomplishment only. We're not here as economic machines to squeeze value out of the next business idea.
If you broaden the notion of "accomplishment" enough then I guess this sentiment is ok. But raising a family, enjoying nature, or attaining good health (for example) don't really seem like things a chatbot could help with much. I guess, charitably, AI could increase the amount of wealth in society to the point where people have more time to pursue those things, but I'm not sure that's the most likely outcome.
I’m unconvinced. Working hours in industrial countries have gone down over the last century or two, not up. Things were even worse for the vast majority of people before the 1800s, most people didn’t even get “careers” or have the luxury of thinking in terms of “accomplishments”.
I think waking time away from work, and not work pay, is the most relevant metric to answer the question raised in this sub-thread. (Though there is obviously overlap, what you can do with your free time and how much you have is obviously driven by your income.)
While you have a good point - there is real inequality and it’s still a struggle to be poor, no question - this doesn’t actually contradict my point, which is that people have on average more free time than they used to, and that free time is what I think @v3xro is referring to.
“Productivity” is a loaded word: it means dollars collected, and a productivity increase does not necessarily mean workers worked harder (though that does happen sometimes). While it sucks for workers if companies can collect more dollars without paying more for production, it’s not directly relevant to workers’ lifestyles.
This chart shows adjusted wages increasing at a slow, moderate rate for the last 50 years: https://fred.stlouisfed.org/series/LES1252881600Q. It’s not that bad if wages “stagnate” if we’re talking inflation-adjusted purchasing parity, which I think you are. While it might be unfair for capitalists to reap relatively more, that doesn’t actually bear on the question of what human life is about and whether “accomplishments” at work are the primary meaning of our lives.
Or even rich countries. How many people live paycheck to paycheck in shitty conditions in the top 10 richest countries in the world?
Technosolutionists have a huge blind spot: as long as we run the free-for-all capitalist software, it doesn't matter if we get AGI, immortality, and superman powers; most people get fucked.
He's either delusional or, more likely, lying. What he's saying is absolute nonsense and he should know better, but if he told the truth, people might ask why he needs tens of billions of dollars.
Remember that time when The Economist did a dumb article about your country or your area of expertise? These are the same people, writing again about something they have no clue about. For clicks.
There have been a number of times when I've seen their reporting to be higher-quality than anything available domestically in the country. Above all when they dedicate a 20-page Special Report on a country or a new technology. They go into a level of detail and analysis that you basically can't get anywhere else. And especially with technology, they tend to look at the bigger social picture -- the wider implications -- that purely technology-focused sources tend to either ignore or else have shallow "hot takes" on.
Agreed, the fact that the technology quarterly and other tech writings appear to be written by people who understand what they're writing about is exactly why I'm willing to extend some trust to them on subjects where I'm clueless.
Every single article about my country is ridiculous. They confuse the political parties. And about economics, they pick something random and use it to explain why that is the single cause of all the problems.
Sure, The Economist is above average. But the average is pretty low and they are still mostly useless, IMO.
Used to be every piece was made by hand. That was the only way it could be done at all. It took however long it took, it cost however much it cost, and you could (slightly) tweak those knobs to control the overall quality of the finished product based on the customer's budget and tastes.
Along comes mass-produced flat-packed particle board stuff. It's cheap garbage, everybody makes it, most customers buy it, and it works "fine" but perhaps there is something more spiritual about it that we have all collectively lost.
The thing is, a small group of people still perform the old craft for a small group of customers who still want to pay for it. The market for it shrank, but it still exists. It sort of doesn't matter what happens in the flat-packed space, because handmade furniture is almost an entirely different universe from particle board.
I'm sure somebody with a better grasp of macroeconomics could mop the floor with me, but that's my intuition about it.
It's not the spirituality of it. It's the longevity.
If you own your living place or have an expectation of stability, well-built furniture is an investment -- not necessarily a hugely expensive one -- that can last your entire life.
If you have to move frequently -- one year rentals, turbulent economy -- anything that's heavy and bulky doesn't fit into your plans. Or not as much of it. But the depressing thing is that the flatpack stuff is prone to breaking when you move it, so it becomes disposable much faster.
It's not even spiritual, it's just care. If you care about something you build, and you are able to put in the required time and effort, there is a great possibility your output will be high-quality.
Unfortunately, high-quality is playing second fiddle to low-cost (and does the job) for the majority.
Not sure what the relevance is - there are free versions of that, not to mention Gemini and co. In any case, what you're saying is true, again, of any productivity tool.
As he was delivering that segment, I kept getting the feeling that there was no way Fareed knew anything about this, yet he was broadcasting with such authority. And it's true: he had no idea how DeepSeek trained off OpenAI.
This is probably very true for all mainstream media talking about AI, they literally are clueless and can be used to parrot anything that sounds world-changing.
CNBC the other day had reporters warning that DeepSeek can download all the data on your phone and send it to China, and that you should remove the app immediately. It froze my brain for a good minute.
There’s no hope for someone if they take Zakaria seriously about anything. He’s the pinnacle of vacuous talking head infotainment.
It used to be generally well known that Zakaria-types were for dimwits who mistook the appearance of insight for insight, but in an increasingly vacuous, video-soaked world, what used to be obviously pap for the plebs seems now solidly respectable, middle-brow.
The devil is probably in the details. I don't stop watching someone because I disagree with them; if they have a generally calm and moderate tone, that's good enough for me (Fareed is awesome in that regard). The best way to see if someone is compelling on a topic is to see if they discuss it on a regular basis.
Fareed is very good on the Middle East, for example. AI is a big topic, but so is transgenderism. Not everyone should be talking about it (this is an indictment of many; Joe Rogan is a major culprit). I'll leave it there.
My fear is that AI will further hurt neurodivergent people's ability to compete in tech. It seems to me that AI deemphasizes some of the advantages that some neurodivergent people have; e.g. hyperfocus, nonlinear thinking, etc. At the same time it introduces extra context switching which many neurodivergent people struggle with.
I've already seen tech become much more hostile to neurodivergent people over the years. I'm afraid that this will just make things worse.
It reminds me of the experiments with Garry Kasparov and chess, where teams with and without computers played against each other. The findings showed that teams that effectively combined human strategic thinking with computer analytical power performed better than either humans or computers alone. This highlighted the potential for synergy between human and artificial intelligence, where the combination of both could surpass the capabilities of each individually.
I think that’s the challenge of this time. Clearly AI is a tool to reduce the time it takes to do something, and it favors people who are generally good or have some domain knowledge, along with the skills to leverage it.
> This highlighted the potential for synergy between human and artificial intelligence, where the combination of both could surpass the capabilities of each individually.
This potential only existed for a very short period of time; current chess engines eclipse human capabilities completely; human assistance is about as useful to them as an amateur player's advice would be to Magnus Carlsen.
Please boost this response. People are willfully blind or are being deceived into thinking Human + AI offers anything but liability in competition with a pure AI system. The quoted example was only true for a very short period of time, in a specialized form of chess: correspondence chess. No system with a human in the loop can currently come close to beating the top chess engines.
This is only ever true during "transition" periods. Chess computers at this point are strictly better than just humans or human computer teams. Of course, it is currently true for LLMs, but the question is how long?
I reached 70 this year (still hacking...). I have seen a lot of changes in my lifetime: technical, economic, and geopolitical. It is hard to predict the future. A common underlying pattern of change has been that income disparities grow, both at a personal level and between companies. AI will accelerate this phenomenon. Yes, jobs will be lost and unemployment will produce social friction and instability. At the moment, to most HN readers it is perhaps just a productivity tool (and not a very good one).
I am hoping that applied AI will bring the costs of services and products down through efficiencies, so that people can still survive.
All these optimists want to say it will just "enhance" our abilities, not replace us. Well, AI did "enhance" our chess-playing skills, but it completely replaced the top skill level. That's OK for a game where being the best has no economic value. But with these same people advocating that we can't stop because the potential economic value is so great and economics improves people's lives... we will use the replacement AI for the economic value, not the inferior "enhanced" chess players.
I don't buy it that AI will widen social divides. The evidence points the other way. LLMs are mostly useful for beginners, and they are basically a fast lane to knowledge. It's never been easier to start from nothing, absorb lots of synthesized knowledge, and go beyond what an LLM can do - essentially becoming "the best" according to the article. The path to being "the best" in any discipline is not decades today, it is years.
Personally, I've picked up so many new skills in the last five years thanks to the LLMs. I now do my own car maintenance (fully), I'm restoring my home, and handling years of programmer health neglect. Each of these would have taken me either years to learn, or at least months with professional supervision. Thanks to LLMs, I basically have a direct line to a somewhat knowledgeable "person" that can answer all my questions immediately, and it takes days to learn how to be somewhat good at anything. I'm not saying I've become an expert car mechanic, builder, or personal health expert, but I have become functionally good. I can look at my work and say "I've actually done this better than the contractor I hired 2 years ago!", etc.
It's true that there is a divide between "the best" and "the rest", but the divide is that "the best" don't benefit much from LLMs. At work, where I am a senior SWE, there is nothing the current LLMs (including GitHub Copilot) can help me with. But on the weekends, when I'm learning a fun new programming language as a beginner, I can get up to speed in a few hours. Those are the different effects LLMs have on "the best" and "the rest".
If LLMs did not exist, and one needed to have really extensive domain expertise, plus ML, tensorflow, and python skills - then I would agree with The Economist that AI benefits technologists more. But LLMs exist, are widely available, and also are probably the main application of ML today. So I think the article misses the point very much.
To some extent I agree that if you are a professional in some area, current LLMs may not solve the issues you are facing. But while I consider myself an "expert" in one very specific area, and don't find huge help from ChatGPT in that domain, I don't live in a bubble. I sometimes have to touch on completely orthogonal skills where I am not an expert at all. And in those areas, I get huge benefit even though LLMs give me "average" responses. It is still better than my own, below-average judgement. That is a very strong point.
Yes, that is a very strong point. And we all have differing levels of expertise in different areas. We are all pulled up to some extent.
The article, though, speaks about the context of work, research, and industry - the professional context. In strictly that context, if you are "the best" at your day job, perhaps you won't benefit much there. By saying that "the best" benefit more than "the rest", The Economist is completely at odds with reality.
If more of us grew up with unconditional positive regard, and internalized that, I reckon this focus on competition would be less.
It's okay to draw lines in the sand. Yeah, I grew up with computers, and yeah, they're cool and all, but unnecessary for a meaningful life. I have yet to intentionally use ML/LLM tech, and I make efforts to turn it off in DuckDuckGo (haven't made an account to save that setting yet...) and others when I'm aware of it.
I would far rather promote education of humans by humans, and the idea of "public luxury, private sufficiency."
Two books come to mind:
The Evolved Nest, by Darcia Narvaez
Invisible Doctrine, by George Monbiot and Peter Hutchison
I recommend checking your public library for these, or bookshop dot org or thriftbooks, rather than Amazon.
Not really. I think it will shadow and cover people's defects, like their inability to communicate well. And it might shine a light on what they do right. It will optimize people to be specialized in only a limited set of skills. And this will likely hurt their development of those very skills.
It's the sweet dream of the rising far right. Of course, "the best" will be decided not through AI-enhanced productivity, but through the ability to utilize "the rest" and keep them where they are or push them even lower.
It's like having a personal assistant for free. For example, tell it what you want to do that day, and it can schedule your day. Ask for a markdown checklist. Ask it to improve your health and it can make a cooking and shopping list for you. It gives you ideas you can then improve.
Great time saver too...
Ask how to do something obscure in the terminal and it just returns the commands in seconds you would need to research for hours.
Open-source models drastically reduce the ability of the closed-source proprietary models to be used to control markets, establish monopolies, and extract rents from users. Since The Economist doesn't even mention this divide, the article is best ignored.
The real DEI crisis is incompetent LLMs being promoted into jobs over qualified humans. LLMs suck at almost everything they try to do, while fan boys continue to make excuses for them.
As an analyst it is already vastly superior to any human.
Yet there appears to be an inventiveness gap. As an inventor -- or at the frontiers of scientific discovery -- it's hopeless and still vastly inferior to the average practitioner. (Hence Dwarkesh's question: https://marginalrevolution.com/marginalrevolution/2025/02/dw... )
I've tried prompting o1-Pro to invent something genuinely novel in materials science, and it resists. It appears incapable.
On that note, there was this AI materials science project that was run by Deepmind. They spent a ton of money to identify a bunch of "new compounds" -- which, it turns out, were not "novel, credible, or useful." (https://www.theregister.com/2024/04/11/google_deepmind_mater...)
...Most interestingly, as I recall, the ones that were "credible" and "useful" were not novel, they were derivative of existing compounds already known to science. So the AI really achieved nothing at all, and, pointedly, this AI didn't have a gimped set of instructions.
What we're seeing is that AIs and SotA LLMs excel at certain things -- analysis, linear prediction, and linguistic processing -- but fail utterly at other things such as multidisciplinary logical inference.
So what I predict is not that AI will divide the best from the rest, but that it will exceed the best in some tasks, and prove almost entirely incapable of others. Bullish for science and advanced engineering, bad for financial analysts...
You're 19 and choosing a college major. Should you study languages in the hope of working as a translator?
> slaves to Sam Altman
Um, what would Sam Altman need slaves for, when the AI does everything better, potentially for less than the cost of feeding them? And, in that scenario, Sam Altman had better have a pretty good Robot Army(TM) to protect him from the torches and pitchforks.
It's also a pretty significant assumption to think that a giant network of truly intelligent machines, robotically running an entire world economy, keeps listening to what Sam Altman wants. Maybe for a while.
You know the most far-out thing I remember in sci-fi from when I was a kid? Some guy was on his way to start in the Space Patrol Academy or whatever, and his phone rang. Like this 18 year old kid was gonna be carrying a phone around with him everywhere. I mean, sure, nuclear rockets, you can see how to do that, but some kind of nationwide radiophone system?
Anyway, 2 to 20 is only about a 90 percent confidence interval. It could get stalled by power or space demands.
Hey! I understood that reference! That was something that stayed with me long after the "big science" (atomic rockets, Venusians, etc.) of Space Cadet. It was something that everybody had, and then it started to come true, which was amazing.
Unfortunately, it's getting to be where the part where he just mailed the phone back to his family is the more science-fictional scenario...
I was there when Musk said we'd have fully self-driving Teslas in 2 years, in 2021. I was there when he said we'd be back on Mars in 2025. I was there when Uber said they would buy every single car Tesla could manufacture by the year 2020.
It's all bullshit. How can people fall for it, and keep falling for it over and over and over?
The line went up a bit faster in the last 5 years, so the line must continue to go up exponentially? That's about the only "logical" way to get from LLMs to AGI.
Replacing "the best of us" in 2 years... come on, even Altman isn't getting that high on his own supply
> I think you're mistaking me for somebody who's spent a lot less time on these issues.
Unless you're God himself, I doubt you can predict the future of events that never happened in the past with any kind of accuracy. There have been thousands of people like you with prophecies of AGI in the next "10 years" ever since computers became a thing.
What? LLMs aren't going to be AGI, or at least I'd be really surprised if anything that was basically an LLM got to AGI. LLMs may be components of AGI, but even a "reasoning" LLM isn't going to be AGI. Not even a multimodal model. Probably not even with a planning scaffold. And even if you can call the core of your first AGI an LLM, I really doubt it'll recognizably be a transformer.
What gets you to AGI is dumping, I don't know, maybe 100 billion dollars of investment into trying out the kinds of not-particularly-LLM-like architectural innovations that people are already playing with on top of deep learning generally, plus the hybrid architectures that will inevitably pop up.
We are probably 2 or 3 breakthroughs away from AGI. When you have that much attention on a topic, especially when it's aided by what we already have, you can reasonably expect to get them relatively soon. Then there are the scaling issues.
Missing capabilities that probably won't be achievable purely with LLMs or similar:
+ Long-term learning (not just context hacks) from live experience, including deciding what it needs to learn and choosing what to do or ingest to achieve that. This basically eliminates the training-versus-inference distinction.
+ Critical evaluation of incoming data during ingestion, in all learning phases, or at least after very early pretraining.
+ Better critical evaluation of its own ideas (although "reasoning" is a step toward this one).
+ A decent predictive understanding of the physical world, which probably demands embodied robotics with a variety of sensory and motor modes (probably plus "pure digital data" modes like existing models have).
+ Complex, coherent, flexible and adaptable long-term planning and execution at a variety of time scales and levels of abstraction.
+ Generally better attention control in the "nontechnical" sense of attention.
+ Self-improvement, including both architectural innovation and performance optimization (probably falls out of the above).
+ Scalability and performance. Even if you build significantly superhuman AGI, it doesn't change the world much unless you can run a lot of it.
I would expect most architectural breakthroughs to help with more than one of those, and a really good architectural breakthrough to help with most of them all at the same time. So I'm guessing two or three breakthroughs. And I don't mean "breakthroughs" in the sense of, say, relativity. Just significant new architectural insights, on the order of transformers or maybe deep learning generally.
The one I'm least sure will be solved soon is the scaling one. If it turns out that the only way to scale up enough is to completely reinvent computers, perhaps out of biotech or nanotech or whatever, it's going to take a long time. On the other hand, if you do get even remotely decent scaling, and you have even the clunkiest form of generalizable self-improvement, you're probably basically done.
> We're also "2 or 3 breakthrough away" from anything
That whole line of argument is invalid from the start, since it has nothing to do with the actual subject under discussion. But I'll take the bait.
We are not, in fact, 2 or 3 breakthroughs away from any of those. Nor is there remotely the level of investment in any of those that AI and ML are getting, which means any breakthroughs we do need are going to take longer.
> fusion
Probably one breakthrough needed... but with or without that, fusion energy may not be feasible at all, at least not at a price point that makes it competitive with the energy sources we already have.
If feasible, fusion will need a crapton of incremental, interdisciplinary engineering and testing on top of the breakthrough. That kind of grind work is harder and slower than breakthroughs, because there are so many conceptually unrelated, but interacting things to get right before you have a power plant.
Then you have to get the permits.
Fusion plants will probably not be on the grid in 20 years. If they are, it'll be because of some radical changes beyond the borders of energy generation.
> space exploration
There's space exploration going on now, and has been for decades.
If you mean long-term, self-sustaining space living, there are a ton of engineering problems to grind through, but I don't know if you need even one technical breakthrough. The bigger problems are that--
+ You have to do it on an immense, prohibitively expensive scale for it to be viable at all.
+ The social and political problems are enormous.
+ In the end, space just plain sucks as a place for humans to live anyhow.
You might be able to fix that last one with bioengineering capabilities on the order of the ones you'd need for immortality. Which is a long-term project.
Personally I think it's not worth the trouble.
> immortality
You might eventually be able to get to (quasi-)immortality, but it's going to require a ridiculous number of diverse interventions in extremely complex systems... plus the tools to actually do those interventions. Which depend on understanding the systems. Which requires more tools.
You need to do massive amounts of grunt work and probably make a lot of breakthroughs. Nobody who has any real understanding of the problem has ever suggested that it's two or three breakthroughs away. And you might need superhuman AGIs first to understand the total effect of all the hackery you were doing.
Moderately better than human AGI can probably speed up any of those, but probably not as far as some people think. Unless you're a superintelligence, you have to put in lab time for all of that stuff. You might have to do that even if you are a superintelligence. And superintelligence feasibility, capabilities and timelines are harder to guess than simple AGI.
I'm against the idea that past overestimates of technological progress imply that CurrentTechPredictionX must also be an overestimate. It's a lazy way of thinking that fails to actually look at any specific details of either situation.
I'm not implying the current prediction >must< also be an overestimate. I'm saying be careful when adjusting your expectations based on current rapid momentum.
IMO it is better not to assume a comment is trying to support one of two sides of an argument. For example, it might be critiquing the general way the argument has been framed (where the line’s been drawn between the two sides), or the method of making the argument (if everyone is sitting around looking at some obviously incomplete data and trying to tweak it to push their point, it is worth pointing out that whatever conclusion the group comes to is probably meaningless).
Or it could just be a bit vague and hard to interpret.