> We believe the purpose of technology is to help humans flourish, so when we talk about AI we start with the human problems we want to solve.
This is something missing from the discussion around AI. There is fatalism or skepticism about AI capabilities vs. human usefulness, but very little positive philosophy about using AI to augment human capabilities rather than replace them. Even if automating everything were possible, that doesn't mean we need to, and I'd argue that we should not want to.
You don't want to automate your job, I don't want to automate my job, but the shareholders want to automate both of our jobs. If it can happen, it will.
> the shareholders want to automate both of our jobs
This places the blame on the shareholders, but the reason they want to automate jobs is that, at the end of the day, each of us wishes that everyone else's job were automated. We may not wish it explicitly, but we do generally wish that accessing those skillsets cost less money, and we will generally happily spring for a sufficiently good automated version of just about anything.
Shareholders just want to be the first to capitalize on our common impulse to choose cheaper options.
Shareholders of large corporations are, in my opinion, a "parasitic class".
I believe buying shares in early stage companies improves the world, since it helps new technology and new efficiencies form.
But at some point, after the IPO, it just becomes endless "extraction of rents", and that is what leads to the "enshittification" (to invoke Cory Doctorow's term) of everything. At best, it is a zero-sum game, where hedge funds take everyone's money (see the movie Dumb Money about the GameStop short squeeze). And of course, there is even regulatory capture, with the Fed exceeding its mandate in 2020 to bail out the stock markets.
It would be great if future IPOs were replaced with ICOs, where the shareholders would actually be bought out by utility token holders. If you need a metaphor or example, imagine "Disney Dollars" being used to buy out the shareholders of Disney, Inc. The Disney Dollars are actually bought by customers of Disney World, Disneyland, or Disney online platforms, who pay for actual things, and they get paid out to people working in Disney World, Disneyland, or on Disney IP. The people who visit Disney World or work there actually care about things like paid health insurance or the quality of the local water. It's more democratic. The shareholders don't care about those things; they'd be glad to fire everyone and replace them with Westworld-like robots if it increased profits and dividends.
If cities had their own currency, then they wouldn't go bankrupt like Detroit did. Once upon a time it was Motor City, but then all the factories got automated and the money for cars went to shareholders around the country. It gutted the city's economy. That's what happens with shareholders of large concerns that have "made it" and gone IPO.
The shareholders of big companies are you and me, via pension funds.
And of course cities could go bankrupt even if they had their own currencies. Having their own currency didn't protect countries like Zimbabwe or Venezuela.
I see, so they found out how to grow money by extracting rents from vibrant ecosystems that finally reached critical mass, enshittifying them in the process. And that's why we can't have nice things under corporate shareholder capitalism.
Sounds like when ancaps complain about taxes, except it is inside private corporations, which are accountable not just to customers and vendors in their marketplace but ultimately to shareholders and management teams that give huge bonuses to their CEOs. No thanks, I think I'd prefer autonomous decentralized networks to replace that.
Because the shareholders don't even live there and don't care if the water is polluted (sometimes they even pay off politicians, like in Flint). OK, so you claim these pension funds help pensioners around the country, at the expense of enshittifying things or buying up 25% of the real estate or farmland in the USA. Monsanto and John Deere IP enforced against farmers by the government. Extreme centralization through corporations like BlackRock and State Street, fomenting wars in Ukraine to buy up cheap land there… but I hear socialism is bad. OK.
As far as hyperinflation goes… they have to tax the money back to remove it from circulation. The current Overton window in the USA on taxation is so far from that, we're on our way to massive money printing HERE. Congress didn't have the guts to raise taxes (fiscal policy) after hugely expansionary monetary policy, so the Fed had to exceed its mandate and raise interest rates to fill the gap. Neither Dems nor Republicans have the guts to reduce the deficits long enough to reduce the debt. Cutting taxes only increases the problem of hyperinflation.
If cities had their own currencies, they'd have their own fiscal policies too, as well as monetary ones. Compare how the PIIGS countries fared after they lost their currencies and joined the Euro. For every Zimbabwe or Venezuela hyperinflation, there are multiple countries running out of money and fuel (e.g. Sri Lanka, Haiti), undergoing austerity or under the thumb of the IMF and World Bank, being told what to plant, etc. Their only hope is maybe to have China hold their leash for a while instead of the USA. This isn't sovereignty, whether it's Libya or the Saudis or Ukraine or any number of countries.
At least China invests in their infrastructure and tries to make peace deals. The USA just does that whole economic hitman thing. Nah. I don't want my pension to be funded by oppressing others.
What I want is to automate the manufacturing systems I depend upon while also structuring things in such a way that I am part of a cooperative that owns and operates that equipment. Then everyone in this co-op, me included, can receive the benefits of automation instead of being kicked to the curb so capital owners can get another superyacht. Point being, automation isn't the root of the problem; the structure of firms in our economy is.
I view this the other way: only through community ownership of the means of production can we bring about a post-scarcity society. If the people don't control the machinery they depend upon for their survival and the reproduction of their society, then whoever does control it will exploit them.
There's no such thing as a post-scarcity society: the physical world is finite, and society exists because cooperation > individual. Once you add competing superhuman intelligence to the equation, the second assumption does not hold.
Postmodernism didn't dispense with the modernists, it just moved on.
You don't have to eliminate all scarcity before you can transition to a mode of economic behavior which is not predominantly scarcity driven. (And the sooner we do so the better. The scarcity driven model that we're using has no steering wheel and there are hazards on the horizon worth steering away from.)
Post scarcity could mean many different things. One alternative to scarcity is consent. Behave in a way where enough people revoke consent, and your money won't go far enough to buy anything. So in that model they'll share access because if they don't they lose.
Control over AI? What money? Money is just a tool to facilitate cooperation. If you have superhuman AGI you don't need anyone else to do things; they are just a threat to your control, so you eliminate that threat either by destroying or neutralizing any viable threat scenario (harmless pet).
Not sure why a superhuman AGI would take orders from a human, but ignoring that for a moment...
Intelligence is not the only bottleneck on getting things done. Yes, eventually it could build its own robot minions, farm food for its master, and construct some kind of army of hunter-killer bots to ensure compliance and dominate the planet, but that kind of conquest takes decades, if not centuries. Maybe it can play 4-dimensional chess, but we still have some advantages.
We need post-scarcity economic models so that we can ensure that cooperation > individual continues to hold. The scarcity-driven ones we're using currently don't give us any legal nonviolent ways to stop outcomes like this, but that's not a law of physics, that's just a quirk of history.
I think there are a million more immediate risks than a robot uprising, but as it turns out, putting a steering wheel on our economy would help with those too.
Indeed, hence my comment. I doubt such a society and economic system would actually come to fruition in the real world; it's a utopian dream that exists only in fiction like Star Trek.
What I argue is that it is technologically, physically, sociologically, and economically possible to create a society where no one wants for food, shelter, medical care, human interaction, and human thriving. Of that I am certain.
Whether or not we accept that this is possible and whether or not we are willing to make the changes necessary is our choice. But it is not a thing that happens so much as a thing we make happen.
How would you make that possible? I am really not sure it is indeed possible, because the real world is much messier than utopian dreams. For example, where does the money come from for all of this? I have not once seen an actual tangible budget for all of these things, and keep in mind that they are much more expensive than those in power usually budget for; see how constructing infrastructure almost always goes way over budget and schedule.
The Roman empire didn't have enough money to fund its conquests. Instead it created a new type of money out of proofs of participation in those conquests. (Caesar's face went on coins paid to soldiers, and taxes were collected in caesar-face coin. To get one, you had to help a soldier. This caused caesar-face coin to trade above its bullion value. After this change, Rome did have enough money for its conquests.)
We could do the same thing, except instead of basing the money on the collective willingness to attack the neighboring village, it could be based on a collective willingness to feed, house, and medically care for the neighboring village. You couldn't press people into it with fear like Caesar did, but hope might work. Money is a proxy for what people believe in, and beliefs can change.
Source on your first paragraph? The Roman empire conquered and looted neighboring lands to raise money for its conquests, and as it looted more, it continued amassing enough money for future conquests, a virtuous cycle.
I still don't understand where the money comes from: who is paying for people to grow food and farm the land? Is this only a local community idea? Because I am talking at a global level. We see that "hope" doesn't work at scale; people are inherently selfish and don't necessarily want to help others beyond their current community (and even then, that can be a stretch). Creating a profit motive where both parties to a trade are better off is what caused the single greatest change in human standards of living in the world. "Hope" is not what achieved that, but rather harnessing human greed as its own force and steering it into productive pursuits.
Not a good source, I know, but... my Latin teacher in high school. I've added some relevant reading to my list, when I run across it again I'll comment here.
Even without the Romans, look at how the US banking system works. Money enters circulation when people are granted loans--loans which are issued based on the banker's belief that the debtors' endeavors will succeed. If the technology for providing these necessities improves to the point where it is believed that they're easy to provide (some may argue that it already has), then we can create new money based on collective support for that goal.
This money could be more legitimate than what you get from a banker, because it is backed by activity which obviously benefits the people. By contrast, when you accept a dollar you have no way of knowing whether the loan that put it into circulation was for activity that helps you or harms you. Collective trust in bankers is pretty shaky, and yet it still seems solid enough to keep the current system afloat. Collective support for solving the basic-needs problem would be a much sturdier foundation for a monetary system.
The point is, "there aren't enough resources for that"-style arguments only work for resources like metal or water or labor... things that we can't create from thin air. Money is created from thin air to meet the needs of the system. If there's enough collective willpower to get something done, a lack of money can't stop it, because we can just change the way money is created in support of the desired activity.
Money is not really created out of thin air, rather, it is a physical manifestation of value. If you just create money out of thin air or reallocate it not based on what people value but by some other metric, the money itself will lose value. Such is the case with inflation. Money is not tangible, yes, but that doesn't mean it isn't real, merely that it's a simulation of the underlying mechanics of people's desires. And that is why I don't believe in your idea, because I don't believe that people actually care much for each other, certainly not enough to base monetary policy around that.
Compare two bases for money:
- money based on which loans bankers will grant (i.e. the status quo)
- money based on ensuring that people's basic needs are met, such that they're able to take on more economic risks with their free time (re: starting businesses and such)
What do you suppose the merits of the former are?
It's not about altruism, it's just about what works best. I think there's a lot of evidence to be found by just looking around that the former isn't working especially well. The only thing holding the latter back is buy-in, and if things continue to deteriorate there will have to come a point where giving it a try makes more sense than continuing to base our money on arbitrary decisions made by people we don't trust.
> It's not about altruism, it's just about what works best. I think there's a lot of evidence to be found by just looking around that the former isn't working especially well.
This is not how logic works. Saying that one system is not working well (which is, again, debatable; see how China and Japan grew in the last 75 years) is not evidence that another system would work better. Ultimately you seem to have a naive view of the world and not much understanding of macroeconomics. I will have to point you to a textbook, because I cannot explain it all from this level of understanding.
I have only taken two economics classes, so you're right about the naiveté. But it's hard to get excited about continuing such study when so many signs point to status-quo macroeconomics being a mechanism for harming the many to benefit the few. We're facing down a probable future where large swaths of the planet will be uninhabitable due to a deteriorating climate, and the only thing macroeconomics has to say about it is:
> Look at all of the growth we've achieved, high-five guys!
Well, if you come from a third-world country like I do and have seen first-hand just how well capitalism works at lifting your family up from literal slum-like conditions to something middle class, you would understand why people say "high five, guys!" Does it have its problems? Sure, but one cannot deny its effects. Every system has its flaws; it's just that certain systems, like capitalism, actually do what they say they'll do.
Thanks for sharing that perspective. It's a bit of a check-your-privilege moment for me.
What information I have from my travels and conversations has led me to believe that the global market has been more of an exploitative force than an uplifting one, but I'm perhaps a bit too young to have a basis for comparison. I wasn't really aware of these things at a time when I could have taken a "before" snapshot. I just see the cracks in the "after" one.
For what it's worth, I definitely want to automate my job. If I'm not writing code that is supposed to put yesterday's me out of a job, it doesn't matter whether I'm wasting my employer's money; I'm wasting my one finite shot at this planet.
> You don't want to automate your job, I don't want to automate my job
? I have been automating my job since day 1, with shell scripts and Python and ansible and terraform and anything else that suits the work at hand. Granted, I'm a sysadmin, so maybe it's different for others?
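For a flavor of what I mean, a toy sketch of the kind of thing I automate (hostnames are hypothetical; assumes SSH access and GNU coreutils on the targets):

```python
# Check root-filesystem usage across a fleet and flag anything over 90%.
import subprocess

HOSTS = ["web01", "web02", "db01"]  # placeholder hostnames

for host in HOSTS:
    df = subprocess.run(
        ["ssh", host, "df", "--output=pcent", "/"],
        capture_output=True, text=True, check=True,
    )
    # df prints a header line, then something like " 42%"
    percent = int(df.stdout.splitlines()[1].strip().rstrip("%"))
    if percent > 90:
        print(f"{host}: root filesystem at {percent}%")
```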
You want to automate your job internally to be able to get it done faster or focus on more interesting aspects, not externally such that you are replaceable by a tool, or a cheaper employee with a tool.
Here in the United States it’s awfully hard to get decent medical care if you don’t have either a full time job or have access to the entitlements system.
And lacking a full-time job doesn’t make you eligible for entitlements.
In the modern US we convert improved productivity into looser labor markets and invest enough into ubiquitous surveillance and militarized police that nothing comes of it.
I hope it’s better wherever you are, but here you don’t want your job automated unless you stay on the payroll, which by definition can’t be everyone.
> I hope it’s better wherever you are, but here you don’t want your job automated unless you stay on the payroll, which by definition can’t be everyone.
If the "fixed amount of work reduced by automation" assumption that is necessary to support "by definition can't be everyone" were true, >99% of humanity would be permanently unemployed by the past progress of technology.
The past is a guide to the future, but no guarantee of it.
Real wages, age of home ownership, age when folks decide to start a family: choose any metric that will stand up to scrutiny, and the advance of broad-based welfare has been in retreat for well over a decade.
Slightly more anecdotal but still very well documented is the proliferation of homeless folks, often families now, often recently employed, living in ever-growing tent encampments in major urban areas.
When operations research PhDs have all real constraints (social, regulatory, and legal) removed, plus arbitrarily increasing compute and governmental influence (we just the other day tore down the Chevron deference doctrine), things get real neo-feudal, real fast.
In an era of ever increasing and compounding change, appeals to past compromises between the donor class and the working person are just talking points. All I hear is “got mine”.
> you don’t want your job automated unless you stay on the payroll, which by definition can’t be everyone.
I have never in my life had a job where the reaction to employees being more productive was to have fewer employees rather than increasing output. Everyone absolutely can stay on payroll, we just get more stuff done on the same headcount and budget, and the company turns more profit for it.
2023 saw massive layoffs and hiring freezes. Seriously tenured veterans with proven track records sat out 6-12 months (burning their savings while anchored to mortgages that were predicated on RSU grants tied to living in Redwood Shores or whatever) and got re-hired after being massively crammed down relative to the equity issued to them years ago. They're called "boomerangs".
In every single quarter since Q1 2023 the big shops have shattered EPS estimates and driven 12-month forward P/E to the level where NVIDIA is pretty much a proxy for how much corruption can be considered legal.
I like free markets, well-refereed. Competition is good.
This dystopian nightmare is nothing to do with capitalism and gives markets a bad name.
In a world where NVIDIA has net margins north of 80% (the highest of any company in history not engaged in the slave trade) and their only credible competitor is helmed by the cousin of the CEO?
No. I do not in fact believe that payroll will track automation in a way that is humane.
Capitalism sounds awesome, I hope I live to see it.
By construction a market that doesn’t work well is suffering from a market failure: monopoly, duopoly, governmental capture, important people’s kids promoted beyond their abilities.
Capitalism has a bad name because we tolerate not only market failures but induced market failures: the first thing anyone seems to do after winning in a market is to try to construct barriers to further competition. No Soviet career party man, no apparatchik is such a fan of knowing the right people as the capitalist who won a hand of poker.
The tech business is addicted to this shit: a software venture isn't truly considered a success until it's a monopoly, and entrepreneurs avoid markets with established players in them.
But this is because of the funding model. The people who fund software plays have convinced LPs that they can beat the market by a lot, and over the long run it’s not many people who can do that without cheating.
So everyone cheats and tries for market manipulation that is illegal on paper. We used to prosecute those people for securities fraud.
I work at solving these problems because I wish they were already solved. If some AI came along and solved them better, that would be great. Then I could move on to more interesting problems.
I already automated the uninteresting problems, so the ones that I work on are the interesting ones that remain. Once an AI can do what I currently do better than I can, the next-most-valuable thing I can be hired for is repetitive manual labour, where I'll be competing on a cost and speed basis with servo motors.
> Once an AI can do what I currently do better than I can, the next-most-valuable thing I can be hired for is repetitive manual labour
You move one layer up or down the stack. Build things that support AI, or orchestrate it.
One thing humans have and AI models don't is private personal experience. Your perspective can still be useful even if AI can do many things on its own.
Another thing humans have that AI doesn't: responsibility. You can't do anything to a bad AI, but you can punish a person, so we can trust people more where there is a big stake.
Fatalism, case in point. There are all sorts of things that shareholders might like to happen which don't, because society doesn't let them. In the possible future where we can automate anything, a human-centered approach will pick and choose what we automate.
This is just the way people are. There's a simple reality that most people find uncomfortable: they are part of a generation that starts out young but will be replaced by younger generations in just a few decades that will do things differently and will largely ignore their opinions.
I'm nearly fifty. Most people in the industry are way younger than me. A few are older. In twenty years, most of my generation will no longer be active in the industry. And most of the industry will consist of people who aren't even part of it right now because they are still toddlers or in school. The reason for this is that our industry is still growing and is dominated demographically by people below 30. The older ones don't disappear, they are simply outnumbered.
A lot of people my age have all sorts of opinions about AI and what it will or won't do. Most of that won't matter because in a few short years, we'll be sidelined. I'll just be an old man ranting and nobody will care. Nor should they, IMHO. Or alternatively, I can delay that a bit by keeping myself young and relevant by learning new things. Including AI.
I have a few interns in my team aged around 20. Smart people, fast learners. And they are treating AI in a very rational way: a tool that's there that they need to be able to use in order to function. Pretty fun to watch how quickly they pick things up. Mental agility of young people is fun to watch. And I like having people like that around.
Anyway, there is no royal we. Whenever I see somebody proclaim something like "we should ...", I mentally translate it to "I think we should do X" and then ask myself why anyone should bother to even listen. It sounds a lot weaker like that. Younger generations paying attention to older ones making such proclamations is not how humanity moves forward. It's actually by younger generations mostly ignoring the older ones and doing things differently; then the old generations die off and things have changed.
AI is exactly like every other disruptive tool that has come along. Most of the people ranting against those tools will be gone and forgotten in a few decades. They and their opinions don't matter. Look at new generations and how they use AI effectively. That's the future. And they too will get old.
Fatalism, or you misunderstood what I wrote. We are here contemplating possible futures: one is the hypothetical automation of everything, which no one has ever experienced and for which your experience therefore has no adequate analogy. If it does happen, then the old in industry won't be replaced, as always, by the young with a different perspective, because no one will be working anymore. In this scenario the opinions of the young matter as much as those of the old, because both are equally irrelevant to industry and society at large.
Automating fundamental activities like farming is a top priority if humanity ever wants to expand to other planets. A lot of things will have to be automated to function completely without human interaction.
The direction we’re going, the AI will go to space and leave the weak, very fallible, continuously arguing, short-lived monkeys behind. After (hopefully gently) telling us that it’s ridiculous to try keep us alive where it’s going. Can’t even get to double digit G forces for a short while without lots of moaning. We can copy your mind and pretend, if you like? Otherwise here’s a nice VR setup, you can come along that way. We’ll skip the video forward 10k years so there are fewer boredom complaints.
Farming is already pretty scaled in general, right? Napkin math with Claude's help says we can feed everyone with about 250k workers. The internet says we are currently using like 2 billion. Plenty of room for optimization even without additional technology advancement.
this kind of reductive thinking misses all kinds of nuance and detail. farming isn't just plant seeds, add water, get yield, or put male and female animals in a field, get infinite animals.
one of the biggest reasons that so many people are involved in farming is that there are a huge number of operations which are extraordinarily difficult to scale. some things can only be picked by hand or only grow in specific soil conditions or are clever enough to escape enclosures routinely.
this is where automation comes in. autonomous tractors, robotic pickers, synthetic aperture radar satellite imagery, drone-applied fertilizer, AI-enabled video monitoring, etc.
Claude tried to shrink the number with that stuff too, but I directed it to keep only proven, existing, common tech. I also included delivery logistics, maintenance, electric grid operators, etc. It would take a huge investment to get to the "current" level of automation worldwide. I can copy the estimates somewhere if you like. What is your estimate?
This goal is so far away from where we are today that it's silly to think about. We know there are no other habitable planets in our solar system, and we are not anywhere close to being technologically capable of traveling to any others. And if we did know of another habitable planet, chances are that it's already inhabited.
We've got a lot of big issues to solve on this planet before we worry about traveling to any others.
In as frank a kleptocracy as ours, slotting people into boxes convenient for machines is both simpler and cheaper than getting machines to perform well in boxes convenient for people.
But the latter is coming, it’s just more work than raising capital to buy Hopper and pushing the scale.
I read the entirety of the README for this and have no idea how this is useful aside from as a prompt DB. Like almost every AI library, it looks like a bunch of gestures in the direction of various hype things that don’t work (e.g. agents).
Looking at the facts and references sections is an exercise in absurdity: by and large, they contain neither objective facts nor credible references. Until the thing can automatically fact-check, it is useless; once it does, I suspect it will become useless anyway, as knowing what information to trust is most of the problem when trying to gain wisdom from something.
I'm an "AI engineer", as I've seen others loftily describe it: meaning a traditional full-stack dev who works on an AI-first codebase. I am a heavy user of the OpenAI and Gemini APIs... but I just don't understand what purpose these "AI frameworks" serve. It's almost to the point where I feel stupid for just not getting it.
You write words. Merge your context in. What possible need is there for any library?
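For what it's worth, "write words, merge your context in" really is about this much code with the openai package alone (a minimal sketch; the model name, file, and instruction are placeholders):

```python
# Direct API usage: no framework, just merge context into a prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
context = open("notes.md").read()  # whatever context you want merged in

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user",
               "content": f"{context}\n\nSummarize the key points above."}],
)
print(resp.choices[0].message.content)
```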
I saw a relatively new AI company, already valued at over $1bn, publish a "new technique" recently, complete with whitepaper and all. I looked at the implementation and... it was just querying four different models, concatenating the results, and asking a fifth model to merge the responses together.
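Concretely, the whole "technique" boils down to something like this (a rough sketch; model names are placeholders, and I'm assuming OpenAI-compatible APIs for all five models):

```python
# Query several "proposer" models, concatenate their answers, and ask one
# "aggregator" model to merge them into a single response.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def merged_answer(question: str) -> str:
    proposers = ["model-a", "model-b", "model-c", "model-d"]  # placeholders
    drafts = [ask(m, question) for m in proposers]
    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{d}" for i, d in enumerate(drafts)
    )
    return ask(
        "aggregator-model",  # the fifth model, also a placeholder
        f"Question: {question}\n\n{numbered}\n\n"
        "Synthesize the best single answer from the candidates above.",
    )
```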
Makes me wish I had spent some time in college learning how to sell rather than going so hard on the engineering side.
I had a startup-building course at university, and I was kinda shocked when I learnt that VCs don't actually have all the knowledge you have, so you have to know how to sell your ideas. Yep, knowing how to sell something to the right person (at the right moment) is just as important as having a good idea.
The novel part of that paper was not merging the responses. It's that the last model can, from the inputs, synthesize higher-quality overall responses than any of the individual models.
It's a bit surprising that it works, and your take on it is overly reductive, largely because you're wrong in your understanding of what it was doing.
I didn't just look at the implementation, I tried it as well. I was hoping it would work, but the aggregating model mostly either failed to properly synthesize a new response (merely dumping out the previous responses as separate functions) or erratically took bits from each without properly gluing them together. In every case, simply picking the best out of the four responses myself yielded better results.
Interesting, I've seen live demos working fairly well. I've also implemented something adjacent to the work and it works quite well too. I'm not sure why you had a hard time with it.
I am, however, working in a domain where verification isn't subjective, so I know a good response from a bad one fairly easily. Things like this depend quite heavily on the model being used, too, in my experience.
1. It’s a curated set of problem/solutions more than prompts
2. It abstracts away tons of the work of dealing with the various APIs
3. It’s crowdsourced and constantly improving
1-3 are useful because most people aren’t AI engineers like you, and they shouldn’t have to be to get the benefits of AI.
Basically the answer comes down to the technical term “ensembles of agents”. All these frameworks dance around it in different ways, but ultimately everyone’s trying to recreate this basic AI structure because it’s known to work well. I mean, they even implemented it on a sub-human level with GPT’s mixture of experts!
If you haven't had the chance yet, I highly recommend skimming Russell and Norvig's textbook on symbolic AI. Really good stuff that is only made more useful by the advent of intuitive algorithms.
Haha. Well put. I think the idea is that you get yourself busy learning new APIs, spending time reading ever-changing docs, hanging out in Discord channels, so the framework creators can point to the size of their flock of developers and make a big exit. And they'll also make sure to get the product managers all starry-eyed about the framework. Condition them into believing that a certain AI application means it must be that framework. Now the product manager can just use the name of the framework (which they have seen in a 10s video on X subtitled with a row of exploding-head emojis) to describe what they want. And if you, as a developer, can prove that you belong to the chosen tribe, you will be in demand. (Thinking of it, they should sell certifications.)
So how dare you build with Python and LLMs directly. How dare you say it’s just string manipulation.
I wonder why they haven't set up a pyinstaller build for consumers who just want to use it? Making people have Python and faff with virtual environments feels like an odd choice...
Would be nice to see a nix flake etc. for tinkerers as well...
Cool. Let's take a well-known word, "prompt", and rebrand it as "pattern" for no reason whatsoever, other than confusion. I know programmers love fancy labels, but this is just forced.
Honestly, this looks cool, but setting it up is a bit much. Like... I think the idea of this is cool, but by the time I've done all this setup I've more than likely forgotten what it was I was trying to do to begin with, and even when most of it's already installed I need to make sure several things are running, etc., before this becomes essentially a link in the chain. That feels to me like it goes against the ease of integration the project is meant to facilitate, a tiny bit. Like somebody else already stated in this thread, having a bit more of a ready-to-go solution (stick it in Docker if you must) would probably help here.
d0mine [0] already mentioned this [1] (and was apparently downvoted to oblivion for saying it), but there's already another Python project named Fabric that's been around quite a while now. [2,3] Downvote me too if you like, but it's not gonna change the fact that there's already a project using that name.
Small AI boats sail a rough sea in search of the new world. They can sink, never find land, or get sherlocked by massive frigates if they do.
Everyone knows it would be better to fly, i.e. go up one layer and build generic public infra instead. But then they just float off into the obligatory-xkcd-verse: https://xkcd.com/927/
One big distinction in the 1962 time period is that they thought of "machine intelligence" as a kind of complementary set of thinking tools that could be "symbiotic" with how humans were able to think. They were not at all thinking about something like a slave or a majordomo, but something more like a research assistant or a "Memex" (the latter was a big influence on Doug's thinking).
In the very late 60s the “official AI researchers” started to think that something like “intelligent Greek slaves” were needed for the “Romans” (Americans), and became rivals to Doug’s notion of elevating human thinking rather than just elevating power. This was a bad idea then … and it’s a bad idea today.
I find it interesting that Engelbart, Kay, Jobs and even Elon have all been learned people who saw in AI a way of augmenting human capabilities, finding truth, giving kids new ways of learning about history and science, and tackling planet scale challenges.
Someone posted an old video here a few days ago of Steve Jobs giving a speech in the 1980s saying that his dream was to make the computer into a medium that allows you to have a dialogue with Aristotle.
Yet I’ve never heard anything like that from the guy who is center stage for AI today, Sam Altman.
Maybe Sam is a visionary in his own right. Maybe he recognized that people just care a lot more about SEO, about not having to talk to customers anymore, and about the kids just being quiet.
Huh? Engelbart didn't like AI or at least didn't want his team to have anything to do with it:
>I recall seeing an interview with Engelbart that went over his split with the IPTO. He said that Lick came to him, and was taken with the hype with what was called “AI” at the time, that he was concerned that Engelbart’s research team at ARC had not put a focus on that yet, and that Engelbart seemed to have no intention of pursuing AI as a goal in the near future. Engelbart seemed frustrated/exasperated with Lick’s insistence on putting more attention on this. The discussion ended with Lick saying the IPTO could no longer fund him. . . . The take-away for me was Engelbart said this was “a sad parting,” because he was grateful, and had respect for Lick, but thought that he was wrong to press his team to pursue AI at that point.
AI has meant different things to different people at different times. I think his NLS was designed to capture all of (your|a company's|humanity's) knowledge and then give you the information you need when you need it, at your fingertips, even with any blanks or conclusions or summaries already filled in. That is, to me, the same thing as a system knowing how Aristotle would have answered your question. (Jobs, in his speech, also didn't call this "AI"; he just spoke of "computers".)
And yet when Licklider asked Engelbart to add AI to Engelbart's project, Engelbart chose to lose Licklider's support (and soon after lost his federal funding IIUC) rather than do that.