> And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you.
This is how I would like it to be only because I can't trust it to treat my information well. If it was a trustworthy system, this would be an incredible capability.
It would be great to have this kind of agent and trust it is truly working for you, yes. Given economic (and political) reality, I don't think anyone needs to wonder where things are most likely to land. There will be practically no market for actual user agents that work purely on behalf of end users. Rather, "your" agent will tattle to the police if you do something naughty, tell advertisers how you spend all your time and every preference you express, tattle to copyright holders if you play music for a party of unlicensed listeners, or just make sure you're billed properly every time you attend such a party. I think Gates refers to the question of how data sovereignty and privacy play into this as an "open question" in order to avoid addressing it, knowing full well that lack of data sovereignty turns this tool of empowerment into something much more dystopian.
I'm anxiously watching two developing spaces for this reason:
1. Homomorphic Encryption
2. Self-Hosted AI Systems
Homomorphic encryption is the idea that you can perform ML operations on encrypted data -- so that the providers of powerful AI need not know precisely what data you have.
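For flavor, here's the weaker (additive-only) version of that idea in toy form -- a Paillier sketch with insecure demo-sized keys. Real ML-on-encrypted-data needs lattice schemes like CKKS via libraries such as Microsoft SEAL or OpenFHE, but the core trick is the same: the server computes on ciphertexts it cannot read.

```python
import math, random

# Toy Paillier cryptosystem: additively homomorphic, so a server can
# add two encrypted numbers without ever decrypting them.
# Demo-sized primes only -- wildly insecure, purely illustrative.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                        # standard generator choice
lam = math.lcm(p - 1, q - 1)     # Python 3.9+

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(20), encrypt(22)
ciphertext_sum = (a * b) % n2    # server multiplies ciphertexts...
print(decrypt(ciphertext_sum))   # ...plaintexts add underneath: 42
```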
Self-Hosted AI is just what it sounds like -- what if your Alexa processed on-device, and never left your home? What if your mobile devices could "phone home" to it from wherever you were, using an encrypted tunnel?
The second point is possible today, but the issue is keeping your local model on pace with the cloud offerings.
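Concretely, the self-hosted half can be as simple today as an Ollama server on the home box behind a WireGuard tunnel -- a minimal sketch, where 10.0.0.2 is a hypothetical tunnel address and the model is whatever you've pulled locally:

```python
# Query a self-hosted model from anywhere, assuming an Ollama server
# running on the home box and an already-configured encrypted tunnel.
import requests

resp = requests.post(
    "http://10.0.0.2:11434/api/generate",   # hypothetical tunnel address
    json={"model": "llama3",                # whatever model you've pulled
          "prompt": "Summarize my shopping list.",
          "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```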
I think device-specific, self-hosted AI is coming. At least, that's what I see from the marketing material the chip companies send me. It could make some sense from a cost perspective: the manufacturer of an appliance likely doesn't want to pay recurring fees for smart features that require uploading audio and video to the cloud.
Worth noting that that's probably how most people will look at it, and I'm not sure most people have the background to understand the privacy issues.
I guess I'm suggesting that Mr. Gates is probably right, even if it makes us here on HN uncomfortable.
If that's the future of using computers, he can keep it. I want none of that, personally. Most of what he lists is stuff that comes with such serious downsides as to be actively objectionable.
How do you plan to avoid the AI-overlord camera systems that are already in many grocery stores? Instant facial recognition when you walk into the store, tied directly to the credit card you used on the way out last time. They've mapped out the route you're most likely to take. How long until your phone buzzes to alert you to a hot deal on aisle 12, when you're in aisle 12, for that sugar-free soda you drink?
Oh yeah, don't forget they're sharing this information with an ad agency that covered the installation costs at not only your grocery store, but also your bank and gas station. All of those preferences, tied together in a neat little digital folder, ready to sell to the highest-bidding government agency.
I was talking about my use of my machines (the topic of TFA), not how others use their computers against me.
On your topic, I'm not totally helpless, though. I can't do anything about store surveillance except go to stores that don't do that (while they still exist).
That said... I can and do put my phone into airplane mode before I enter stores, and I avoid using credit cards in them, specifically to help protect against those kinds of spying.
"The next generation of interesting software will be done on the Macintosh, not the IBM PC" (1984)
"I see little commercial potential for the internet for the next 10 years" (1994)
"Today's Internet is not the information highway I imagine, although you can think of it as the beginning of the highway" (1995)
"There are no significant bugs in our released software that any significant number of users want fixed" (1995)
"One thing we have got to change in our strategy - allowing Office documents to be rendered very well by other peoples browsers is one of the most destructive things we could do to the company. We have to stop putting any effort into this and make sure that Office documents very well depends on PROPRIETARY IE capabilities" (1998)
Under oath: "I don't recall" 6x, "I don't remember" 14x, "I don't know" 22x (1998)
"[E-mail] spam will be a thing of the past in two years' time" (2004)
> And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life.
That understanding will belong to some company whose interests and goals are not yours.
This will not bother most people. Most people are willing to use GMail, even though it snoops on their private emails and uses that info for advertising purposes.
> This will not bother most people. Most people are willing to use GMail, even though it snoops on their private emails and uses that info for advertising purposes.
While I agree with your larger point and am no fan of Google's privacy practices, they stopped this specific behavior in 2017.
No, at that time Google issued an announcement that at some future time they would stop doing that. However, Google did not modify their terms of service [1] to prohibit them from doing that.
"The activity information that we collect may include:
Terms that you search for
Videos that you watch
Views and interactions with content and ads
Voice and audio information
Purchase activity
People with whom you communicate or share content"
Google does write "We don’t show you personalised ads based on your content from Drive, Gmail or Photos." That's a narrow exclusion. That info could still affect your YouTube selections or search results.
I desperately need a personal assistant because it has become impossible to get anything done now that companies have automated everything and made mistakes and edge cases impossible to resolve. I hope this new wave of automation won't make things even more difficult to manage.
You can bet that it will be much more difficult. Look at Google, how impossible it is to resolve anything with them and how successful they still are. This will be the new normal for everything.
Another take on this topic: It is reality right now, but only for poor people, so we can't see it. This will change, we will all suffer from automation.
The idea of my agent and Google's customer service agent babbling at each other in endless futility and failing to solve a problem that two humans could solve together in 15 minutes is a dystopian hell. But it will be celebrated as a wonderful productivity boost because no humans will be on a phone call.
An agent is an amazing idea, and it would add enormous convenience - if implemented well.
But we're a long, long, long, long, long way away from agents. What we have now aren't even precursors to agents, more like very distant cousins to an ancestor.
In order to make what he's talking about, you'd need true general artificial intelligence operating at a human's level or above.
And if we have that, the entire world is going to change regardless - personal assistants will be insignificant in comparison.
It's difficult to predict when a breakthrough like that would happen, but I doubt it'll be within the next 200 years.
What we have now isn't even a stepping stone in the right direction, it's just smoke and mirrors that looks like it is if you only look at it on a surface level.
What happens when Slack starts remembering tasks someone asked you to do and follows up with you to make sure you did them? And with the person who originally asked you?
Right now you arrive at work with a few things to do, but your brain only remembers one of them, and barely at that. With AI, it will have emails ready for you, already a few iterations in.
This is all possible right now with an LLM of today's caliber. He's not talking about general AI.
What he's talking about is absolutely not possible with an LLM. An LLM is just for language and has no deeper understanding, reasoning, logic, learning and growth, etc.
See here:
"Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior."
None of that is possible with today's LLMs - not in the way he's describing it.
> They’re proactive—capable of making suggestions before you ask for them.
An LLM can capture things that are "todos" very easily now. If someone messages you and says "can you collect data on the Brenda client" it can know that this is a request.
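A minimal sketch of that capture step (OpenAI Python client; the model choice and the prompt are illustrative):

```python
# Sketch: turn a chat message into a structured "todo" with an LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

msg = "Hey, can you collect data on the Brenda client before Friday?"

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content":
            'If the message contains a request, reply with JSON like '
            '{"is_todo": true, "task": "...", "due": "..."}; '
            'otherwise reply with {"is_todo": false}.'},
        {"role": "user", "content": msg},
    ],
)
print(resp.choices[0].message.content)
# e.g. {"is_todo": true, "task": "collect data on the Brenda client",
#       "due": "Friday"}
```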
Further, if you give it access to Notion or a company wiki, it could be tuned to understand your clients, your work, and much more. The LLM can, in today's world, open an email, or a VSCode instance with a pre-filled connection to your analytics database -- maybe even a sample query with the Brenda client pre-filled.
Across applications: this is more or less coming soon. Within a few months you will see pretty much every major OS (Linux, macOS, Windows) announce OS-level APIs to interact with a basic LLM, plus paid-for LLMs. This will enable applications to start communicating with each other, because they will generally understand what you're typing and output function calls.
When applications can ubiquitously call "predict(...)" at the OS level, and probably many other AI APIs that will be introduced, you will see a HUGE change in interactivity.
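Today's closest approximation of that predict(...) call is function calling: the model emits a structured call that the host dispatches. A sketch with the OpenAI tools API, where open_app is a hypothetical OS-provided function:

```python
# Sketch: the model doesn't run anything itself; it emits a structured
# call for the OS (or app) to dispatch. `open_app` is hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "open_app",
        "description": "Open an application, optionally with a file.",
        "parameters": {
            "type": "object",
            "properties": {
                "app": {"type": "string"},
                "file": {"type": "string"},
            },
            "required": ["app"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user",
               "content": "Open the Brenda query in VSCode"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # structured call, not prose
```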
Now, that being said, it sounds like you're picturing Tom Cruise in Minority Report, or Will Smith in I, Robot.
There's a huge in-between here, and extending his point all the way to that extreme is totally unnecessary.
> In order to make what he's talking about, you'd need true general artificial intelligence operating at a human's level or above.
Worse, you'd need an AGI that was relentless, creative, able to lie and deceive, to mimic, to charm and flatter, to subtly threaten or act subservient, to be taciturn and know when to be silent, and generally socially engineer, understand rules and break them.
And that's just me dealing with a bank on a normal day with someone who actually wants to help me but is hamstrung by policies, checklists, scripts and service non-interoperability.
Wait until your "agent" is dealing with another "agent" purposed by the company (adversary) to also dodge, lie, trick you into agreement, procrastinate, time-waste, discombobulate and deliberately annoy.
The moment Bill Gates' "Agent" meets its counterpart from Verizon it will be like that first time you set your mail auto-responder to respond to their auto-responder.... they will instantaneously consume all the computing resources on the planet while locked in a battle of corporate shitfuckery to the death.
This is the endgame for "online computing services".
> Worse, you'd need an AGI that was relentless, creative, able to lie and deceive, to mimic, to charm and flatter, to subtly threaten or act subservient, to be taciturn and know when to be silent, and generally socially engineer, understand rules and break them.
> The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems
Yeah, could you f**ng not? I pay taxes and insurance to get proper medical guidance, not answers from a bot. Public healthcare in the UK is nearly dead anyway, and its patch fixes are “virtual” nonsense, while private healthcare is riddled with “ai” that does nothing but annoy customers. Babylon Health, a former UK AI “healthcare” darling, rightfully went under because, among many other things, it was gibberish.
Use of GPT-4 to Analyze Medical Records of Patients With Extensive Investigations and Delayed Diagnosis
"Six patients 65 years or older (2 women and 4 men) were included in the analysis. The accuracy of the primary diagnoses made by GPT-4, clinicians, and Isabel DDx Companion was 4 of 6 patients (66.7%), 2 of 6 patients (33.3%), and 0 patients, respectively. If including differential diagnoses, the accuracy was 5 of 6 (83.3%) for GPT-4, 3 of 6 (50.0%) for clinicians, and 2 of 6 (33.3%) for Isabel DDx Companion"
That's great the bot can tell you that you have cancer, congrats. It isn't human, can't comfort the cancer patient, can't actually _help_ the cancer patient in any meaningful way to solve their issue.
AI is not intelligence. It lacks empathy. It lacks just about everything a doctor has, except the ability to regurgitate diagnoses based on data -- data gathered by doctors.
Ah yes, a study made on six patients. Very science, fits the pattern.
But this alone invalidates the paper:
“AI relies on clinical imaging.1 In low-income countries, where specialist care may be lacking”
Have they been to a British hospital recently? Specialist-care scarcity there is fast approaching that of low-income countries. The only option left is the poor man’s choice - chatbots.
> In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do.
This kind of thinking, to me, smacks of the kind of ignorant projections people used to make in the early 20th century about technology in the 21st century.
“Robotics will advance, so everyone will have their own personal robot assistant!” Nope, though we do have robots doing things that are commensurate with their cost and the value they provide.
“Advances in aviation mean everyone will have a flying car.” Nope again. Turns out the added utility of being able to fly directly to places vs the cost doesn’t trump a car in most cases, and when those cases don’t apply, airplanes are plenty economical.
“You’ll just tell your device what you want to do.” No again. In some cases this is useful (I already tell my device to remind me about something or to schedule some sort of event on my calendar, where the time to input far exceeds speaking it), but in most cases, I can do it faster myself with a good UI, or maybe I don’t want everyone around me hearing me blather into my phone.
It’s like, just imagine for a second what it would be like to have to dictate everything you do on your phone. It would be utterly exhausting.
The question I have is: what if these agents are really good at doing things they've learned about, but not at new things? How will that impact people's actions?
Will we end up with a static world of people doing only things the AI can do? A kind of 1999 Matrix stasis world of 'the best we can do now'?
What will happen to what Arthur Koestler called 'blue' thoughts in such an AI world?
Because IMO, the purpose of computers is to make models and see what happens -- to test ideas in a powerful way via computation. If the AI can ONLY select existing models and not make new ones, or even prevents you from making new ones, then I don't see how these AI agents are anything but a step backwards.
Alan Kay gave a great talk called The Best Way to Predict the Future is to Invent It that touches on some of these ideas.
Is this motivated reasoning from the perspective of an OS vendor? It seems like intermediating the user's intent using AI has the same hazards as intermediating the internet through a single search provider... i.e. it'll happen, but will tend towards benefiting larger interests, leaving the experience a little less rich than before.
I think I'm really going to enjoy this future. Modern LLMs are fantastic. I've fixed my Peloton when I was stuck, diagnosed a small lesion, and handled ancient proprietary systems. The future is going to be good.
The writing is on the wall. It's not right now, but it will be in my lifetime. What are some AI-proof(ish) industries we can jump into after software engineering is drastically reduced?
AI will never write software for the same reason professional chefs will never be replaced by robots. Sure, certain parts of our jobs (the fast food equivalent work) will be assisted, and then replaced, with AI help. Ops is already benefitting from AI assistance with things like log correlation and managing incidents, and all of that is great! But the fact is that AI will still require the input of humans to solve problems, and specifying a problem completely enough that a machine can write the code for it is equivalent in effort to just writing the code yourself.
I see the introduction of AI to the SWE workforce as roughly equivalent to the introduction of digital kitchen scales, thermometers, etc. to professional kitchens. It's going to help us up our game, which is awesome! It's going to make a lot of the boring, mechanical stuff simpler so we can focus on the creative aspects of our jobs. But a burger-flipping robot isn't replacing a Michelin-star chef anytime soon, and AI isn't replacing you anytime soon.
I’m hopeful this is the right answer, but I’m not optimistic it is. AI seems more like the introduction of the assembly line for software workers, minus the huge blockers physical labor has. It’s not there now, but at this rate? Not infeasible either.
How will you tell the AI what you want to build? I'll take an example from my current work: I'm currently adding code to produce data to a Kafka feed from an API server, to be aggregated downstream into a big in-memory datastore for fast access. That's a moderately sized problem, one that a reasonably competent engineer can solve in a week or two, but how much work will you have to do to tell the AI that this is what you want? And if you tell it at a higher level than this, how will it know what tools it has available? How will it deploy stuff?
There are some fairly intractable problems in the "AI writes software" space. I don't think they're getting solved any time soon.
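To put a number on that: the "easy" sliver of the task above is a few lines (a sketch with confluent-kafka; the broker, topic, and payload are made up), and none of the hard parts an AI would need to get right -- schema choices, partitioning, the downstream aggregation, deployment -- appear in it:

```python
# The "easy" sliver of the task: publish API events to a Kafka topic.
# Broker address, topic name, and payload are hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "kafka-1:9092"})

def publish(event: dict) -> None:
    producer.produce(
        "api.events",                        # topic
        key=event["entity_id"],              # keeps per-entity ordering
        value=json.dumps(event).encode(),
    )

publish({"entity_id": "brenda-client", "action": "updated"})
producer.flush()  # block until delivery
```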
If software engineers have been replaced, it won't be any of us telling the AI anything; it would be a manager. Ask a manager about Kafka, and I reckon more would respond with something about Franz than anything about Apache.
> And if you tell it at a higher level than this, how will it know what tools it has available? How will it deploy stuff?
Step 1. Ask it to come up with a plan, after giving it read access to your corporate documents.
Step 2. Ask it to execute the plan it came up with in step 1, after giving it write access.
Of course right now there's a step 3: "Get real engineer to read and then perform business continuity/disaster recovery plan because current AI is about as good at this as someone fresh out of university, what on earth were you thinking, …" etc.
I don't know how long we've got in these jobs, but based on the rate of change (and the training cost for the larger models and supply shortage for the chips used to train the better models) I'm thinking at least 2 and no more than 15 years — though I'm hoping more towards the latter.
Sounds more like hope to me. There's nothing special about creativity.
The real reason chefs won't be replaced by robots sooner than AI replaces software devs is a lot more boring: sensorimotor control and physical perception are harder than reasoning and "creativity".
It doesn't help that we don't know how to digitize taste, so any model with a good sense of taste will have to develop it indirectly, incentivized by something else (e.g. a language model training on recipes).
GPT is a predictor. It will just continue to reduce loss until it has modelled the data entirely.
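That loop in miniature, for concreteness -- a toy bigram next-token predictor on random stand-in data (PyTorch):

```python
# "Reduce loss until the data is modelled", in miniature: a bigram
# next-token predictor trained on stand-in tokens. A real LLM swaps
# the lookup table for a transformer; the objective is the same.
import torch
import torch.nn as nn

vocab = 100
tokens = torch.randint(vocab, (1000,))     # stand-in corpus
model = nn.Embedding(vocab, vocab)         # token -> next-token logits
opt = torch.optim.Adam(model.parameters(), lr=0.1)

for step in range(200):
    logits = model(tokens[:-1])            # predict each next token
    loss = nn.functional.cross_entropy(logits, tokens[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()                             # reduce loss, repeat
```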
I see GPT and similar LLMs as basically like managing an over-eager intern. Have you ever tried to do that?
They're overflowing with ideas and knowledge and passion, and "all" you have to do is point them in the right direction. Except that when you review their code, you find that they didn't consider about a thousand different edge cases. Oh, and they didn't follow the style guide. Oh, and they are way over-focused on the wrong parts of the problem, prematurely optimizing performance in places where it doesn't matter. Oh, and their code is nigh-on unreadable. Oh, and they wrote a bunch of code that is already provided in libraries, but they just didn't know about it, and their implementation is probably full of subtle bugs so they should just use the library version. Oh, and they forgot to update the CI because now they need to pull in a new dependency to run their tests. Oh, and... the list goes on.
I don't ever see AI progressing past the regurgitation stage; you can give it as much knowledge as you want, and it can rearrange and reproduce and restate that knowledge in a thousand different ways; but we're so far from AI being able to handle all the details of our work, that as I said above, fully specifying the problem is going to be just as much work as doing the work yourself. And you'll still need expertise to do so, because our software systems are complex and full of nuance that can't be easily communicated to a machine.
AI doomers always strike me as people who have never had to try to corral interns; maybe that's an overly specific life experience to expect someone to have, but it's a really useful proxy for how much "more productive" we're going to get with AI. A really good intern can solve simple problems given exact constraints, but anything requiring lateral thinking will take them a lot of coaxing to get to the right solution. And that's okay! People can learn and get better. But I don't see AI getting better "enough" to take on anything beyond that first rung of the ladder of complexity.
>I see GPT and similar LLMs as basically like managing an over-eager intern. Have you ever tried to do that?
So? Assuming you're right, GPT-3 was not at the level of an intern. GPT-2 could not even write coherent text.
I bet you didn't expect any of those developments either.
It's interesting how much people struggle to look forward. I guess we never needed to for nearly all of our evolutionary history. Like very few people genuinely think they'll be replaced right before they are.
Your argument basically boils down to "I have a hunch language models will stop improving".
Your hunch is unfounded, backed by nothing except vague assertions that "surely this time" these goals won't be reached.
Assertions that many people parroted just a few years ago and have continued to be proven wrong.
The text is data. Gradient descent and prediction will strive to model the data. The data will be modelled. That's really all there is to it.
Is it though? It is not having a significant impact in most fields, despite all the hype. Companies will use it to continue to make customer services worse so they can spend less money on it. People writing marketing copy that no one wants to read anyways will lose their jobs. Semi-competent software engineers will use it instead of searching Stack Overflow five times per day and still write questionable code.
To be clear, I'm referring to LLM. Obviously ML is used for a lot of analytics, but I also think that most of the hype around ML has died down as companies realized that not every problem has a ML solution. In a few years the same will happen with LLM.
> It is not having a significant impact in most fields, despite all the hype. Companies will use it to continue to make customer services worse so they can spend less money on it.
There's some delicious irony in this: tech-savvy HN posters are so bullish on AI replacing everyone's jobs, yet there have been so many posts here about people losing their Google/Apple/bank/whatever accounts after an AI mistakenly flagged them, then being unable to recover because they can't reach an actual human, with begging for help on social media as their last resort.
It seems like a variant of Gell-Mann amnesia[0]: people believe AI is capable of replacing humans in so many aspects of daily life, except for the aspects of daily life which they personally experience.
This type of sentiment is not valuable at all and overly pessimistic. Current AI tech is not very likely to replace work that requires a modicum of logical reasoning and the tech to do so has no known path forward as of now.
Why do you think those things? Logic & theorem provers massively predate LLMs, and getting LLMs to use them is as easy as asking the LLM to write a proof in the language of your theorem prover of choice, which you can then copy-paste into that theorem prover and execute. And if/when it doesn't work, the error message itself helps with a significant fraction of most other programming problems, so my guess is they would also help here.
Also, there have been substantial new developments and discoveries about what transformer models do (both internally and in terms of capacity) every week or two for most of this year, so why do you think there's no known path forward?
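The loop really is just: prompt for a proof, paste, run the checker. For instance (Lean 4, with a deliberately trivial theorem):

```lean
-- The kind of artifact you'd paste back into the prover: if the
-- editor (or `lake build`) accepts it, the proof is machine-checked,
-- and if not, the error message goes back to the LLM.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```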
Thinking of logic, I just tried the following with gpt-3.5, gpt-4, and gpt-4-1106-preview. The newest model spotted the trick (and then still got it wrong); the older two didn't even spot it. Can you spot the trick?
-
A person is in Nairobi. They board a plane, fly 9000 km north, then 1000 km east, then 9000 km south, then 1000 km west. Where are they now?
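(Spoiler, for anyone who wants to check their answer: a rough calculation on a spherical Earth, assuming R = 6371 km and approximate Nairobi coordinates.)

```python
# Rough spoiler-check on a spherical Earth (R = 6371 km): the eastward
# leg near the pole sweeps far more longitude than the westward leg
# near the equator, so the traveller does NOT return to Nairobi.
from math import cos, radians, degrees

R = 6371.0
lat, lon = -1.29, 36.82                          # Nairobi, approximately

lat += degrees(9000 / R)                         # north, to ~79.7 N
lon += degrees(1000 / (R * cos(radians(lat))))   # east along a small circle
lat -= degrees(9000 / R)                         # south, back to ~1.3 S
lon -= degrees(1000 / (R * cos(radians(lat))))   # west near the equator

print(f"{lat:.2f}, {lon:.2f}")  # ~ -1.29, 77.9 -> over the Indian Ocean
```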
Drastically reducing dev work just means you can do far more with the same resources.
I don't know about you but my team's backlog is virtually endless. If AI/LLM/whatever can automate away a ton of dev work, that doesn't mean we're going to fire 90% of our team. It means we might have a glimpse of being able to actually make a dent in all the work that we want to do but can't.
Are you currently functionally limited by your typing speed? No? You're probably fine. I'd take a team of juniors with full autonomy to make informed decisions, plus some hand-holding, over a tool that has no logical reasoning and just spits out the statistically average next thing to do.
In Fifth Element, the bartender is a robot. (Works of fiction may or may not reflect our future!)
Whether customers prefer that or not may or may not matter to the business decisions that are made (given that it's not a perfect customer-driven market).
Have you been to McDonald's? They really want you to use a touch screen in person, or an app even when you're in your car, instead of talking to a person.
Perhaps an underground movement can maintain a grass roots database of "real human service" locations, after the revolution.
A bar is not McDonald’s. Robot bartenders have existed for some time but people go to bars for the human connection not simply to consume alcohol. Even here, a human bartender could probably benefit from a robot doing the work of cleaning glassware and restocking ingredients for them.
>What are some AI-proof(ish) industries we can jump into after software engineering is drastically reduced?
Hospitality will always be a thing. There's a reason why the Enterprise-D still has a human bartender that makes real drinks, even though it's completely superfluous. People will always want to be served by other humans, even if AI Android tech became perfect.
The Doylist reason was "we need a role for Whoopi Goldberg". There is no Watsonian reason, as Data did bar duty in the episode they had their memories wiped.
Looking back at them, the talking points about AI are basically the same now as in every episode of TNG, DS9, and VOY focused on AI, in the form of Data, Moriarty, The Doctor (and whether he has copyright over his holonovel), that Irish village in Voyager, Wesley Crusher's nanites, or the ship's main computer becoming sentient…
>The Doylist reason was "we need a role for Whoopi Goldberg"
Fair, but the point still stands. A better example would be Quark's from DS9. Even in a world of perfect replication and perfect androids, people are always going to want a human touch for these things.
This in turn may or may not be sufficient for work once the issues with bioprinting and/or tissue/organ culture get solved and someone prints/grows a brainless body, connects its organic lower nervous system to a silicon chip, and has the higher functions all performed by an AI on that chip.
And by "someone" I mainly mean The Thought Emporium given they're already trying to do exactly that.
We're talking about the guy who missed the boat on the Internet and had to rewrite part of his book and change the direction of the company when it was obvious that MS couldn't just ignore it.
Enshittification has really ruined the promise of every new technology. Sure, I believe there's _potential_ for something cool in AI. But all of that potential is overridden by the rent-seeking profit interests of companies that want to carve out a monopoly position in a space and extract the maximum value forever.
> Imagine that you want to plan a trip. A travel bot will identify hotels that fit your budget. An agent will know what time of year you’ll be traveling and, based on its knowledge about whether you always try a new destination or like to return to the same place repeatedly, it will be able to suggest locations. When asked, it will recommend things to do based on your interests and propensity for adventure, and it will book reservations at the types of restaurants you would enjoy.
So sort of like AirBnB, but for the entire process of booking a vacation? Sounds neat at first, though I'm skeptical that an AI will ever match my ability to choose destinations, events, and travel plans that encompass all of my minute preferences (many of which have taken me years to discover about myself). I'm sure that this "agent" won't start recommending shitty and overpriced vacations when it achieves critical market share and the time comes to turn the profit screws!
This gets much, much, much, much scarier in the next section:
> The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
Just what I want: our private healthcare industry defining how I should seek medical attention using a faceless algorithm. Nurses and doctors, naturally, have empathy for other human beings and occasionally work in the grey boundaries of the system to help people. But that costs profit. Fortunately, the AI will optimize that empathy away!
> AI agents that are well trained in mental health will make therapy much more affordable and easier to get.
Do mental health experts truly believe that an AI agent can replace an actual therapist? What a scary, dystopian world, where Bill Gates can afford daily therapy sessions with a human but the lowly Microsoft code monkeys will only be able to speak to a machine...
The rest of the post is an absolute horror show of the kind of out-of-touch nonsense I've come to expect from billionaires like Gates:
> liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job
Paperwork, sure. What are these "other tasks", though? Lesson planning? I'm pretty sure optimizing away a small portion of busywork is not the most pressing issue in the teaching world. And naturally the dark side of this is a massive yearly cost extracted from our school system, and oodles of personal data collected about every child, starting in pre-K. Think of the efficiency when we can start advertising to these kids based on what subjects they know best!
> If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes.
The next level of "fellow kids".
> If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like.
If AI can do all of that, what's the human business owner even doing?
> If your friend just had surgery, your agent will offer to send flowers and be able to order them for you.
Isn't the whole point of that kind of statement that a human being thought of another human being and reached out? But I guess Gates just sees the opportunity to take a cut of flower orders. At least most of your friends won't be able to afford surgery when their insurance AI agents use watch data to deny them healthcare.
> Spotify has an AI-powered DJ that not only plays songs based on your preferences but talks to you and can even call you by name.
Wow, AI that _knows my name_? We're really living in the future.
> If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
At what point does a human being manage to outsource hobbies to AI? Because if a friend came to me asking about cameras today, but didn't want to do any of the research or learn anything about the space at all to inform a decision, I would simply respond "don't buy a camera, you won't have any clue how to use it".
> To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want.
Interesting. Didn't Gates open this blog post claiming that we wouldn't use apps at all any more? Why would I use an AI agent to create an app in a post-app world?
> Although some agents will be free to use (and supported by ads), I think you’ll pay for most of them, which means companies will have an incentive to make agents work on your behalf and not an advertiser’s.
If there's one thing smart TVs, Spotify, iOS, macOS, Android, and Windows have taught me, it's that paying does not matter. If a company can make more money by collecting data and inserting ads, they will. You'd think Gates would know that when Windows literally advertises right on the desktop.
> Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
Aaaaaaaaaaaaaaaand now we've come to Gates' main point: he apparently wants to live in Communist China, where a single entity owns literally everything. At the rate of monopolisation we've seen in the USA, at least I can believe this prediction.
If you read beyond the headline (shocking, I know)
“As an asset class, you’re not producing anything and so you shouldn’t expect it to go up. It’s kind of a pure ‘greater fool theory’ type of investment,” Gates said on CNBC’s “Squawk Box.”
“I agree I would short it if there was an easy way to do it,” he said.
Anyone who thinks the entire "value" of crypto isn't a giant pyramid scheme (or pump and dump) is deluding themselves
Just thinking about Bitcoin's utility: could it be wise for a country to hold its reserves in Bitcoin, just in case another country tries to freeze its holdings?
Good, that's how I would like it to be!