Surprisingly enough, studies have never been able to reliably correlate phone or screen usage with declines in cognitive ability or well-being. Some weak effects appear and disappear across studies, but you'd think that if the effect on society were this profound, the results would jump out. I can buy that it's a factor, but there's more going on.
> in other words, an end in and of itself — we have recently considered the possibility that enhanced attention is instead a means to an end, with that end being better probabilistic inference [54]
Reminds me of the "running on cars!" video that went over some of the teaching tricks in older video games: how they would introduce a new enemy or mechanic in a safe context, then drop you into it with changes that make it feel different. The final "climb" of Super Mario Brothers is a great example. Just removing the lower parts of the tower adds to the tension, when it should have no impact.
I doubt phones were responsible for the initial drop from the pandemic, but I can easily accept that they play a huge role in our inability to return to pre-pandemic levels.
My wife is a teacher, and phones are a constant problem, but the administration refuses to do anything about them. It seems they recognize that some parents will freak out if they do not have access to their kids at all times, and the admins are too spineless to take a stand against those few parents for the sake of everyone's education. It's absurd.
Several schools in my state (including one in the same school district as my wife's) have been participating in an experimental phone ban for the last year, and the results have been better than I could have expected. Grades and attendance are up by over 10%, fights are down significantly, and the overwhelming majority of parents are supportive. It's only a tiny few who object. But even with the numbers right in front of them, many admins are unwilling to adopt the bans in their schools. At this point it will probably take a mandate from the state government before they do the common-sense thing.
Same reply as in the other thread: they don't think about us at all, nor should they have to. What they want will be good for us, too... as long as it's available to us.
Because billionaires think that you are a horse and that the best course of action is to turn you into glue while they hope AGI lets them live forever.
Billionaires don't think about you at all. That's what nobody seems to get.
We enjoy many luxuries unavailable even to billionaires only a few decades ago. For this trend to continue, the same thing needs to happen in other sectors that happened in (for example) the agricultural sector over the course of the 20th century: replacement of human workers by mass automation and superior organization.
In the past, human workers were displaced. The value of their labour for certain tasks became lower than what automation could achieve, but they could still find other things to do to earn a living. What people are worrying about here is what happens when the value of human labour drops to zero, full stop. If AI becomes better than us at everything, then we will do nothing, we will earn nothing, and we will have nothing that isn't gifted to us. We will have no bargaining power, so we just have to hope the rich and powerful will like us enough to share.
If anything like that had actually happened in the past, you might have a point. When it comes to what happens when the value of human labor drops to zero, my guess is every bit as good as yours.
I say it will be a Good Thing. "Work" is what you call whatever you're doing when you'd rather be doing something else.
The value of our labour is what enables us to acquire things and property, with which we can live and do stuff. If your labour is valueless because robots can do anything you can do better, how do you get any of the possessions you require in order to do that something else you'd rather be doing? Capitalism won't just give them to you. If you do not own land, physical resources or robots, and you can't work, how do you get food? Charity? I'd argue there will need to be a pretty comprehensive redistribution scheme for the people at large to benefit.
What we see throughout history is that the cost of human labour goes up while the cost of machines goes down.
Suppose you want to have your car washed. Hiring someone to do it will most likely give the best outcome: fewer physical resources used (soap, water, wear on cloths), less wear and tear on the car's surface, less pollution, and arguably a better finish.
Still, when you do the math, the benefit/cost equation clearly favors the machine, even though it uses more resources in the process.
What is broken in our capitalist economic system is that hiring people to perform services is punished with much higher taxes than using a machine, which is often even tax deductible. That way, the machine brings benefits mainly to its user (often a wealthier person) and much less to society as a whole. If only someone could find a solution to this tragedy.
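To make the tax-wedge point concrete, here's a back-of-the-envelope sketch in Python. Every number in it is made up for illustration; none of the rates come from any real tax code:

```python
# Hypothetical comparison: paying a person vs. buying a machine wash.
# All rates and prices below are invented for illustration only.

gross_wage = 30.0        # what the customer pays per hour of hand-washing
labour_tax_rate = 0.45   # assumed payroll-tax + VAT wedge on the service
worker_takes_home = gross_wage * (1 - labour_tax_rate)

machine_price = 20.0     # assumed sticker price of one machine wash
deduction_rate = 0.25    # assumed tax saved by deducting the machine cost
machine_net_cost = machine_price * (1 - deduction_rate)

# The wedge works in opposite directions: taxes inflate the effective cost
# of labour, while deductibility shrinks the effective cost of the machine.
print(f"Labour: customer pays {gross_wage:.2f}, worker keeps {worker_takes_home:.2f}")
print(f"Machine: listed at {machine_price:.2f}, effective cost {machine_net_cost:.2f}")
```

Under these invented numbers the hand wash costs the buyer half again as much as the worker ever sees, while the machine gets cheaper after taxes, which is the asymmetry the comment is complaining about.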
Forgetting the offhand implication that $6,000 is not out of reach for anyone, this will do nothing. If we're really taking this to its natural conclusion, that AI will be capable of doing most jobs, companies won't care that you have an AI. They will not assign you work that can be done with AI. They have their own AI. You will not compete with any of them, and even if you find a novel way to use it that gives you the gift of income, that won't be possible for even a small fraction of the population to replicate.
You can keep shoehorning lazy political slurs into everything you post, but the reality is going to hit the working class, not privileged programmers casually dumping 6 grand so they can build their CRUD app faster.
But you're essentially arguing for Marxism in every other post on this thread, whether you realize it or not.
Yeah, there's always some reason why you can't do something, I guess... or why The Man is always keeping you down, even after putting capabilities into your hands that were previously the exclusive province of mythology.
I prefer not to use -ists and -isms. I read that Marx wrote that he was not a Marxist. Surely his studies and literature got used as a frame of reference for a rather wide set of ideologies. Maybe someone with a deeper background on the topic can chime in with ideas?
In my experience it's not even dismissing the humanity of others, it's recognizing that their own minds follow similar patterns.
In my youth I lacked the confidence to speak without a sentence "pre-written" in my mind and would stall out if I ran out of written material. It caused delays in conversation, and I'd sometimes lag minutes behind the chatter of my peers.
Since I've gained more experience and confidence in adulthood, I can talk normally and trust that the sentence will "work itself out," but it seems like most people gloss over that implicit trust in their own impulses. Being too self-conscious really gets in the way, so I can understand why most people would benefit from being able to ignore it... selfishly, at least. Lots of stupidity comes from people not thinking through the cumulative/collective effects of their actions if everyone follows the same patterns, though.
I think a lot of this confidence that the sentence will "work itself out" has to do with being able to frame a general direction for the thought before you start, without having the precise sentence. It takes advantage of the continual parallel processing the human brain performs: confidence in a simple structure of what you expect to convey.

When LLMs are able to generate this kind of dynamic structure from a separate logical/heuristic process and fill in the blanks efficiently, then I think we are getting close to AGI. That's a very Chomsky-informed view of sentence structure and human communication, but I believe it is correct. Currently, each future token depends on the probabilistically chosen next token, rather than on an outline of the structure (of the sentence or the idea) fixed from the start.

When LLMs can incorporate that structured outline, I think we will be much closer to AGI, but that is an algorithmic problem we have not been able to figure out, and one that may not be feasible until we have parallel processing equivalent to a human brain.
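Purely to illustrate the distinction (toy code, invented names, no real model involved), here's the difference between left-to-right autoregressive sampling and the outline-first scheme described above, sketched in Python:

```python
import random

random.seed(0)

# Toy sketch; the "model" is just a random word picker.
VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def next_token(tokens):
    # Stand-in for an LLM's next-token sampler. A real model would condition
    # on `tokens`; here we pick at random just to keep the sketch runnable.
    return random.choice(VOCAB)

def autoregressive(prompt, n=6):
    # Left-to-right generation: the sequence grows one token at a time,
    # with no global plan committed to in advance.
    tokens = prompt.split()
    for _ in range(n):
        tokens.append(next_token(tokens))
    return " ".join(tokens)

def outline_then_fill(outline, fillers):
    # The hypothetical alternative: commit to a structural outline first,
    # then fill each slot independently (in principle, in parallel).
    return " ".join(random.choice(fillers[role]) for role in outline)

print(autoregressive("the"))
print(outline_then_fill(
    ["subject", "verb", "place"],
    {"subject": ["the cat", "the dog"],
     "verb": ["sat on", "ran to"],
     "place": ["a mat", "the porch"]},
))
```

The point of the contrast: in the first function every choice can drift wherever the last token leads, while in the second the skeleton is fixed before any words are chosen, which is the "structure determined from the start" the comment is asking for.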
AI bros have a vested interest in people believing the hype that we're just around the corner from figuring out AGI or whatever the buzzword of the week is.
They'll anthropomorphize LLMs in a variety of ways so that people will be more likely to believe they're some kind of magical system that can do everyone's jobs. It's also useful for downplaying the harm these systems cause, like the mass theft of literally everything in order to facilitate training (the favorite refrain of "They learn just like humans do!" - by consuming 19 trillion books and then spitting out random words from those books, yeah, real humanlike), boiling entire lakes to power training, wasting billions of dollars, etc.
Many of them are also solipsistic sociopaths who believe everyone else is just a useful idiot who'll help make them fabulously wealthy, so they have to bring everyone else down in order for the AIs to seem useful beyond the initial marketing coolness.