So much this. When you see Zuck or even Jensen Huang saying "software engineers won't be needed anymore" and getting excited about it, you get pissed off as a software engineer lol.
I feel like Microsoft's whole thing with Windows 11 has been "just force the users to do what we want them to do, we know better than they do" so it doesn't surprise me that 365 went the same way.
I'm saying this and I'm a person that's usually extremely enthusiastic about new tech, but I'm just burnt out on these companies trying to shove AI down our throats.
Had it been opt-in and gradual, I would be far more optimistic and enthusiastic. I guess my question is "why such a rush?". Even Apple rushed into it with something half-baked and unfinished.
> So much this. When you see Zuck or even Jensen Huang saying "software engineers won't be needed anymore" and getting excited about it, you get pissed off as a software engineer lol.
The real story nobody is saying out loud is that CEOs are much more replaceable by AI than are software engineers.
Exactly zero management or executive positions at my workplace have had the “can an AI do this?” exercise intended to explore ways of reducing headcount.
CEO is the one job role that AI can’t take because AI lacks accountability. Who is the person using the AI that will get blamed by the board if they screw up? That’s the CEO, even if you decide to give them a different title.
CEOs also lack real accountability though. Every time something goes really wrong they claim that they have no responsibility because people below them caused the problems.
How the F do you square that circle? If you're at the top you either are responsible or you aren't.
That applies to many roles. Lawyer AI can’t actually lawyer because someone needs to be accountable. Warfighting AI needs a human to decide whom to kill. Doctor AI needs handholding. If we can find a legal construct for an AI surgeon operating on your child, I think we can find one for an agent running a marketing company on shareholders’ behalf.
Yeah, the accountability argument doesn't make sense to me from a practical viewpoint. The benefit of accountability is that it provides a path to avoiding repeated errors. There are other ways to achieve this using software tools.
I've been saying for a long time that the real definition of 'personhood' is the ability to take liability. If something can be sued in court (not counting civil forfeiture sophistry) then it counts as a person.
How about "If it gets sentenced to prison time" instead because corporations routinely get sued in court only to be charged a small percentage of the profit they make through criminal activity and that isn't "liability" it's just paying the justice system a cut of the action as a cost of doing business.
What accountability does Elon Reeve Musk, richest grifter on the planet and CEO of half a dozen companies ever face?
An AI can be turned off and replaced at any time.
> The real story nobody is saying out loud is that CEOs are much more replaceable by AI than are software engineers.
Sorry, but that’s not true at all. It doesn’t really make a lot of sense. Who is replacing the CEO of a company with AI? The board? The board doesn’t want to/can’t run the company. They will hire someone to “run the CEO AI”? Won’t that just be a CEO using AI? Maybe that makes it so the CEO is paid less, because now they just run OpenCEOv4? I don’t see it happening though. Also, a very large portion of the day-to-day of CEO-level execs at those big companies is interpersonal and/or performative. You won’t be replacing that with AI anytime soon. You still need a face for it at the end of the day.
Let's say that CEOs are no more or less replaceable with AI than all the other jobs.
Throwing away all the humans to replace them with AIs is a move only an AI CEO would make, because an AI couldn't give less of a shit if something it does blows up in its face. Well, apparently real CEOs couldn't care less either. Imagining you're going to run a successful software organization (once you've hollowed out the people who both understand the work and actually give a shit about it) is insane. These people know that it will blow up, but they hope that it is only after they can rake in their bonus pay for the mounds of short-term profits that result from layoffs, after which they'll float away under their golden parachutes, leaving their former companies to collapse under the weight of institutionalized incompetence.
Yes, I agree with the picture you're painting here, but it befuddles me: where do all these grifters think they'll go?
They'll have to live in bunkers for the rest of their lives.
Imagine that you're one of them, and you want to go watch a game (or a T. Swift concert) -- so you hop on your private jet and go to the stadium/arena. If you happen to be caught on-camera and your face is up there, on the Jumbotron, every single person there is gonna boo you. That's what your life will be like. What's the point?
Again, that also makes no sense. If AI reaches that level of competence, then why would it only replace the CEO position, or be better at being a CEO than, say, a people manager? An engineer, HR, legal, sales, marketing, etc.? In fact, why not just create an "all-AI company"? You just register it, run OpenCompanyAI, and feed it some parameters: "You are a company selling
This just isn't true unfortunately, but the angle of attack that is true is AI replacing the need for the rest of the company, so you can make money as an individual.
>I feel like Microsoft's whole thing with Windows 11 has been "just force the users to do what we want them to do, we know better than they do"
My biggest gripe about Win11 is that they stole from us the ability to move the taskbar. It can ONLY be pinned to the bottom of the screen now. For the longest time, I was a top-of-the-screen taskbar user. From what I've read, they have no plans to implement or change this "feature".
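For reference, the workaround that circulated for early Win11 builds was flipping a byte in the StuckRects3 registry value. Here's a minimal Python sketch, assuming the community-documented layout (edge byte at offset 12); it's an unofficial hack, not an API, and current builds reportedly ignore it:

    # Unofficial StuckRects3 hack (worked on early Windows 11 builds).
    # Edge byte at offset 12: 0=left, 1=top, 2=right, 3=bottom.
    import winreg

    KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\StuckRects3"
    EDGE_TOP = 1

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
        settings, value_type = winreg.QueryValueEx(key, "Settings")
        data = bytearray(settings)
        data[12] = EDGE_TOP  # move the taskbar to the top edge
        winreg.SetValueEx(key, "Settings", 0, value_type, bytes(data))
    # Explorer has to be restarted for the change to take effect:
    #   taskkill /f /im explorer.exe && start explorer.exe

That a registry poke was ever the only remaining option says a lot about how deliberate the removal is.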
Microsoft isn't the only one. macOS upgrades effectively brick their own hardware. I’ve got a beautiful 27-inch iMac with a 5K Retina screen … unusable now on the last 2 OS updates.
“As to be expected, macOS Sequoia support is still in active development. This is a community-driven project, and as such we ask users to keep expectations in check and use older OSes if you encounter issues that affect you.”
Perhaps, after decades of application bugs caused by developers not properly accounting for taskbar position and size, and random ordinary users being distressed by accidentally moving or resizing the taskbar and unable to get it back (leaving aside savvy users), Microsoft have given up? I seem to recall various articles on this over the years, including on Raymond Chen's "The Old New Thing".
This seems extremely niche. Maybe even petty? Why do you care where the taskbar is? Remember that every change breaks someone's workflow: https://xkcd.com/1172/
It's been a feature of Windows for a decade+, why on earth would they remove it? I've had mine aligned to the top of the screen for as long as I can remember, and there's even 3rd party tools that restore this basic ass behavior in Win11 (not that I'll ever use that bloatware willingly again). M$ has no excuse, other than them being a completely incompetent entity of course.
But can you explain why this feature is/was so important? The fact that there are 3rd party tools that can restore it makes it even more marginal. There are gazillions of UI changes in every app we use, it seems strange that this very marginal one should get someone up in arms.
> M$ has no excuse, other than them being a completely incompetent entity of course.
No, it's because "M$" doesn't think it's an important feature. There's no incompetence. Do you actually believe what you said?
Is this some form of OCD or hypersensitivity? Because I struggle to understand the type of mind that cares at all about this. And how do such people go through life, especially working in software, where things change constantly?
Do you actually not understand why customization is something people want out of the operating system they use for upwards of 8+ hours a day, every single day? Should we disallow changing background images? Should we stop letting people set color accents as they can right now? Should we stop letting them choose where they want their desktop icons to be, or which desktop icons they can have there? Or should we disable choosing which applications can be pinned to the taskbar? Are all of these as incomprehensible to you as choosing where the giant-ass taskbar that permanently fills up a non-negligible percentage of your screen real estate sits?
Would you also defend any of the above?
Also, this has been a feature since the XP days; people have built muscle memory around it, and then for literally no reason M$ decides to remove it, requiring 3rd party devs to do their job for them and restore basic functionality that has been there forever.
> There are gazillions of UI changes in every app we use
And this is a good thing to you somehow? I don't want my OS switching up on me at random every day of the week. I want to log in the next day and have everything be where I last placed it, not have some troglodyte PM at M$ trying to suckle on the promotional teat decide where my icons go for me.
> "M$"
Are you perhaps one of the aforementioned troglodyte PMs over there? I've noticed that M$ employees get pretty bothered about that little dollar sign in the name.
Perhaps the problem is a deeper one, which is constant and likely unnecessary UI makeovers. But among the endless blizzard of these, this is an odd one to make your last stand for. More to the point, on to:
> Do you actually not understand why customization is something people want out of the operating system they use for upwards of 8+ hours a day, every single day?
All the examples you listed are much more significant IMO than whether the taskbar goes to the bottom or top of the screen. I imagine that MS would be less likely to remove those features.
> M$ decides to remove it, requiring 3rd party devs to do their job for them
But why isn't that the end of the issue then? The feature can be fully restored.
> But among the endless blizzard of these, this is an odd one to make your last stand for. More to the point, on to
For me it's personally not; it's mostly about how slow and bloated W11 feels. On my 9800X3D (an insanely powerful and fast chip!), explorer is slower than it was in the Windows 95 days on the i386 of yore. They've converted half of everything in Windows into a web view and you can feel it, and don't even get me started on the adware.
> All the examples you listed are much more significant IMO than whether the taskbar goes to the bottom or top of the screen. I imagine that MS would be less likely to remove those features.
I guess we'll have to agree to disagree here; to me the taskbar is one of the most central parts of a modern OS, next to file navigation and things like the start menu. It sits on your screen, visibly and permanently, and takes up a good chunk of your screen real estate. I know many people prefer it vertical because otherwise they have a kilometer-wide, mostly-empty taskbar taking up ~5-10% of the bottom of their screen. For me, I keep it on the top of the screen because A) it's what I'm used to since the XP days and B) I find it easier to reach than the bottom, because my cursor is pretty much always on the top half of the screen rather than the bottom half.
> But why isn't that the end of the issue then? The feature can be fully restored.
You could make this same argument for basically every feature though, including the ones I listed. Why not remove background images and let it be handled by some 3rd party tool as well? It's about as basic as customization can get in an OS, so it should be handled natively by the OS. Not to mention that, due to it being 3rd party, it's also prone to breaking on Windows updates, and some applications rely on the OS to handle things like where to place the application window, and having things hack around OS-imposed limits like this does lead to weird behaviors and bugs.
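To make the window-placement point concrete: well-behaved apps ask the shell where the taskbar is through the Win32 appbar API, so a 3rd party tool that moves the bar without updating what the shell reports leaves apps positioning windows against stale geometry. A rough Python/ctypes sketch of that query (SHAppBarMessage and ABM_GETTASKBARPOS are real Win32 APIs; the printout is just illustrative):

    # Ask the shell where it *thinks* the taskbar is (Win32 appbar API).
    # Third-party taskbar movers that bypass the shell leave this stale,
    # which is exactly where the weird placement bugs come from.
    import ctypes
    from ctypes import wintypes

    ABM_GETTASKBARPOS = 0x0005
    EDGES = {0: "left", 1: "top", 2: "right", 3: "bottom"}

    class APPBARDATA(ctypes.Structure):
        _fields_ = [("cbSize", wintypes.DWORD),
                    ("hWnd", wintypes.HWND),
                    ("uCallbackMessage", wintypes.UINT),
                    ("uEdge", wintypes.UINT),
                    ("rc", wintypes.RECT),
                    ("lParam", wintypes.LPARAM)]

    abd = APPBARDATA()
    abd.cbSize = ctypes.sizeof(APPBARDATA)
    if ctypes.windll.shell32.SHAppBarMessage(ABM_GETTASKBARPOS,
                                             ctypes.byref(abd)):
        print("taskbar edge:", EDGES.get(abd.uEdge, "unknown"))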
It shouldn't require someone to download 3rd party tools for functionality this basic, especially since it's been there since basically the inception of the OS. Even macOS lets you do the vertical taskbar/Dock, and that's the epitome of closed-down customization (not that Apple doesn't have similar problems, such as requiring 3rd party tools for external mice or window tiling via Rectangle).
> But can you explain why this feature is/was so important?
1) It's a line in the sand: decades of customization versus "my way or the highway [third party tools, some of which get banned for 'hacking' to implement their features, all of which are generally banned in things like corporate environments]".
2) It's a customization feature that has existed since Windows 95. Removing that feature broke decades of user habits.
3) At least one of the third-party tools has been briefly banned by Microsoft Defender for "hacking"/"reverse engineering" Windows. Most of them can be accused of that. The existence of third party tools today does not imply the continued existence of third party tools.
4) It's often compared to how Apple prefers a lack of configuration for "strong opinions". It's an especially funny comparison because Apple has almost always allowed you to move the macOS Dock to different screen edges.
5) It's a great waste of space on widescreen monitors, and especially ultrawide monitors. I've been using a right-hand side taskbar since square CRTs, because I felt even in 4:3 that vertical real estate was at a much higher premium than horizontal space. As a user of widescreens, including ultrawide, I especially feel that horizontal real estate is much less useful to me as additional app space than vertical space.
6) Microsoft knows how much real estate is spent on a bottom taskbar. They know that's a valuable band of real estate. They've been selling ads on it, and assuming things like Copilot can go directly on it without user opt-in, because they seem to feel they've got all the space they want and "own" that space. It's not the user's to own anymore.
It's a marginal feature in the number of users that used it, but those that did use it, used it for decades in many cases (myself included), and taking it away is a message that Windows belongs less to the users and User Customization is less important in today's Windows than yesterday's. It's emblematic of so many other problems in Windows 11. As a modest feature, it feels so much like a synecdoche, a part that resembles the whole, a part (of the problem) that represents the whole (problem).
> It's a marginal feature in the number of users that used it, but those that did use it, used it for decades in many cases
Not only that, but the users who did move the taskbar are more likely to be the "power users" who will help to evangelize Windows in an organization. Those are users who Microsoft should want to keep happy. When you lose them, you lose any sort of grassroots support in an org.
It's a small change in terms of code and number of users affected. It's a big change to the affected users and in user perceptions.
Apparently, it isn't that rare, as I too used to place the taskbar at the top of the screen (I switched to Mac 10 years ago). I simply wanted the taskbar to be near the menu bar so I wouldn't have to move the mouse more than necessary - that was the first thing I customized when installing Windows.
Well, it is not software developers that won't be needed anymore. It is large corporations. If a small team of developers can build huge projects, there is no reason for them to work for a large business.
I think this part depends on the person. I've personally been programming since I was a kid making games for my TI-83+, and in all that time, time and fatigue have been the limiting factors in how much of what I wanted to build I actually could build.
So something able to write code rigorously enough to replace SWEs would be an absolute dream! I love programming with all my heart, and it's the thing I've spent most of my life doing... but I fell in love with it because it could make things.
A way to make even more things, at a greater scale than I'm individually capable of, is such a joyous idea that, if anything, I get annoyed at the idea they'd tease it without knowing it's possible (of course, they're trying to raise, so...)
The aspect of wanting to replace SWEs is completely ok with me and I think there should be a rush to see it through. Imagine if every researcher could have an army of top tier SWEs at their beck and call for example. Or even imagine learning to program alongside a personal world-class expert from day 1, after all the fact AI could do it wouldn't mean we couldn't still do it ourselves if we wanted to.
-
Unlike the "AGI in 3 years crowd" I don't actually know that it's possible, but where I agree with them is that the route there is probably not going to be a slow burn. Most companies need to demonstrate some external value along the way or they won't be able to continue, hence the chasing down of usecases that they can ship today.
Unfortunately not all of us can raise $1B on a txt file and a promise not to release our product :)
I would guess that the pushback is more about the possible economic imbalance that will happen and less about being replaced on the actual effort of coding.
Maybe because I'm not originally from this country my view is different, but I think the fact that the majority of the world's increase in population is about to happen in extremely poor places that will be subject to the worst of climate change means that the AI will have to be immensely powerful to actually create worse imbalances than the ones we're already headed for.
So powerful that it'd also raise the floor on quality of life for the 8 billion people on earth almost with ease, even if its owners stayed deeply profit/power motivated.
After all, it's not like the tech bros will get to make money by hoarding the AI and its fruits. They need to apply it to downstream tasks to actually cash in on its value. They could hoard the AI itself, but if OpenAI were suddenly able to break into every industry with a tireless AI army of top engineers and researchers, they'd still be producing real advancements for the world.
(and to be clear, that's closer to the worst timelines, where AI advances so greatly. I think more realistically we'd see competition lead to something much closer to widespread advancements rather than some singular superpower emerging)
I am almost certainly wrong, and we will find some solution, or sue the hell out of the genAI firms, but this is the economic issue I see. It competes with productivity as a core economic driver of human wellbeing:
Concentration of wealth.
GenAI consumes content, even that created in low-resource languages and regions, and spits it back out, separating the creator from the traffic owed to their labor.
This isn’t entirely unknown - we’ve all been inspired by someone else’s stuff and made our own.
Now, genAI firms have inserted themselves into this loop. And they’re cutting out the creator.
The scaled, automated pseudo-workers that these firms promise are owned by the firms. The productivity they create accrues to a small group of foreign multinationals.
Economically, this shouldn’t be an issue. More productivity means more capacity for people to do newer work.
I do expect this to happen. However firms are also very good at making sure they capture the greater share of the market.
That researcher probably wouldn't have an army of SWEs at his or her disposal but be out of a job like the SWEs. If they get AI to a point where it can be a safe and competent senior SWE, it'll be able to fill a huge breadth of other roles as well. Human creativity isn't looking like quite the moat it was supposed to be.
Our societies are not in any way equipped to deal with putting what may well be a sizable majority of working-age people out of work, possibly for good, nor are we in any way ready for the kind of power certain tech billionaires would have if their workforce were to scale with just the amount of hardware they own.
At this point I kind of hope the current breed of AIs will plateau quickly and stay there for a while so that maybe society can catch up instead of getting surprise bulldozed by a gaggle of tech giants.
Of what use are the peasants in an AI-driven and dominated feudal society, though? Maybe us peasants would be useful in wars or battles between the different lords, but that would also probably be performed by automated drones and robots.
It would seem that the lords will have nothing to lord over, though. Who’s gonna buy their crap, and for what reason will they create anything? The whole thing seems like a massive doom spiral.
> It would seem that the lords will have nothing to lord over, though.
You have machines that can design and build mega-yachts, mansions, private space craft, ....
You can afford to purchase vast areas of land from people selling whatever they have to get by, ...
--
Asking what billionaires will do when they can't sell to the poor is like asking why the human economy didn't crash ages ago because we don't have any off-planet aliens to sell to.
Or how did we keep the economy going all this time, given the ants and trees couldn't afford anything we produce?
All an economy needs is someone with the means of production, who is able to get resources, and use those resources to produce something they want. I.e. you can have a working economy with just a single person. Or self-interested AI.
Hermits have an economy. Now imagine the hermit has trillions of dollars of resources and square miles of intelligent circuitry and robotic servants.
That hermit doesn't need the rest of us. Customers? Where we are going, we don't need customers.
> Our societies are not in any way equipped to deal with putting what may well be a sizable majority of working-age people out of work
Our societies were built before there was a technology that could replace their smartest people with machines that never tire?
I don't get why people keep trying to imagine current society + super-intelligent AI: by definition it won't be our current society if we can actually get there, would it?
I mean if we have AI that can even replace the researchers (I wouldn't dream so boldly tbh), imagine how much faster the pace of scientific discovery becomes. Imagine how much more efficient we can make power generation and transmission, discover new treatments for disease, democratize learning at costs never before possible...
I don't love spending too much time daydreaming about what we could do down that path, because replacing SWEs already feels like a bit of a pipe dream, so all novel research being automated away is just completely in fantasy land... but realistically we're already on a pretty terrible trajectory otherwise.
Our next billion people are about to be born into some of the worst-off parts of the planet. AI becoming good enough to replace researchers would be an infinitely more positive trajectory than some of the others we could end up on otherwise.
Social media could have been utopian, too, yet those apps are algorithmic manipulation hellscapes that threaten to bring down even the most robust democracies. The same people who make it so are poised to be the ones in control of these AIs. I don't think they want the kind of utopia you imagine.
What I described doesn't have to be utopian in an absolute sense, just significantly better than where we're currently headed.
I think a lot of the unchecked pessimism around super-intelligent AI is just people being a bit naive or shut off from the reality of just how terrible things are going to be over the next century.
We're waging 25% tariffs over planefuls of people, what's going to happen when it's 100 million people trampling over borders trying to escape disease, famine, and temperatures incompatible with human life?
Compared to that, even if these companies abuse their ownership of AI and monopolize the gains, an AI capable of producing novel research and development by itself would still bring us much closer to solving major problems than otherwise.
And you think the tools that are, as we speak, boiling rivers and lakes in order to power the insanely resource-hungry AIs are the solution to any of those problems, such as famine and rising temperatures? If anything, they're accelerating us towards these issues.
There's about to be 500 billion dollars invested in generating even more electricity for these monstrosities, instead of literally anything else actually useful today that we could be putting towards climate research or renewable energy. Nope, we're just gonna generate even more spam and bullshit while spinning up nuclear reactors to power it all.
If they don't pan out, we won't even reach 500 billion dollars actually invested and the bubble will pop. And it'll take many, many, many trillions before AI is even close to the biggest reason climate change is killing people.
People tend to mix together pieces of mutually exclusive end states. My comment is speaking to a fantasy-like outcome where, again, we have AI that puts all researchers out of jobs.
If it can do that (which I don't think it will, but if we're dreaming it), let it boil oceans, let them abuse it, you can become unfathomably wealthy by solving world hunger.
Let them force countries to take out loans that practically put them in OpenAI's permanent debt to access the advancements they come up with.
It's still better than the alternative, and it still allows room for them to be as evil as they want.
I mean, either way, countries are going to become indebted to companies eventually if we keep down the current path. I don't see how the ultra wealthy couldn't use mass upheaval and desperation to secure extremely cheap labor and solidify their power even without AI.
At least now, even if it's for their own personal gain and amusement, they can trivially solve
-
It's like we're talking about someone potentially developing a cure for all cancer, but everyone is worried because the company behind it is evil and will hoard it for themselves while charging $1B per dose: let's still get to that cure if it can be developed.
It'll be a truly miserable and awful world watching people we know die while there's a cure, and watching them die because they're not able to pay this villain... but today there's no cure, no proof one can exist, no hints on how to get one.
It's better to have the cure and no clue how to fix the broken situation that it creates, than to have no cure and no clue how to fix cancer, because one is a problem of people/power/ethics and the other is a problem of unknown proportions and guaranteed ongoing suffering, no less damaging than having a cure, no matter how inequitable access to it is.
Though I think you misunderstand or underestimate the self-interested side of human nature. Everyone who is in control of a superpower like this will abuse it. Be it on a presidential level, be it at CEO level, be it a major shareholder of a foreign NGO. That is why we had democratic separation of powers in the first place. I say "had" because the trend globally is leading to right-wing autocratic ideas due to manipulation of social media.
Human self-interest and egocentric worldviews are what gave us this mess.
The only thing capable of evening out the odds is a federalist, decentralized approach, which we desperately need for AI. Something like a legislative system of lots of overfitted mini AI assistants that also gives outliers a chance to set the social trend.
Otherwise we will end up with the ministry of truth, which, right now, is effectively Facebook and TikTok. The younger generations that grew up with social media tend heavily towards populist right-wing ideas because those are easily marketable in 30 seconds. Paint the bad guy, say that it is established fact, next video. Nobody is interested in the rationale behind it, let alone finding and discussing a compromise like you would in a real debate that wants to find a solution.
We need to find a way to change beliefs through rationale rather than emotions. Ironically, this problem is also reflected in trained LLMs that turn into circlejerks, because they've learned that from the dataset of us easily manipulable humans.
I don't think it's that simple. For one thing, AI isn't a democratizing force. If it's as good as you think it will be, it will be less like having a good education and more like having an indentured servant. Some people will have whole fleets of such servants doing their bidding, while others will have none.
For another, research isn't an end unto itself. As you note, for some people an already-unfathomable level of societal knowledge has resulted in nothing but continued poverty. Benefiting from scientific knowledge requires a stable economy full of consumers who can and will purchase high-tech items. Where will that wealth come from once the value of human intellectual labor has been so undercut by cheap AI intellectual labor? Without the capital to make AIs work for us, many if not most people will be left to live their lives as servants to AIs so that those AIs are able to have autonomy in the real world.
My thought has been that they are forcing it knowing nobody, as in 98% of people, would give a crap about most of the AI features. People have been using these tools for decades now to solve their problems, and there's a lot of muscle memory to overcome even if the 'new way' were in fact better. I myself have found I only adopt new software and techniques (including things like using/learning keyboard shortcuts, etc.) that are a minimum of 2X better/faster than my de facto personal preference or legacy approach to tackling the problem. And even if >10X, if it's something I do infrequently I still won't be interested in changing my ways. I have a lot of muscle memory that goes into how I build something like a new spreadsheet, even complicated ones. I'm not interested in putting AI into that process.
I have a grandfather who actually took an early retirement package at age 55, specifically because his company gave him an ultimatum regarding switching from a typewriter to a PC in the 80s. I feel like AI is pushing me towards making that same choice; I don't really care for using it in my work specifically (I have no ultimatum at present).
I use it sparsely and it's more of a toy/novelty to me. Although, I do see how it helps other fields more/less and could replace humans in some professions - I'm not a SWE.