
I prefer the physical cubes (or other shapes) that you rotate

The problem is, eventually, what are LLMs going to draw from? They're not creating new information, just regurgitating and combining existing info. That's why they perform so poorly on code for which there aren't many publicly available samples, SO/Reddit answers, etc.

Fwiw, GPT o1 helped me figure out a fairly complex use case of epub.js, an open-source library with pretty opaque documentation and relatively few public samples. It took a few back-and-forths to get to a working solution, but it did get there.

It makes me wonder whether the AI successfully found and digested obscure sources on the internet, or was just better at making sense of the esoteric documentation than I was. If the latter, perhaps the need for public samples will diminish.


Well, Gemini completely hallucinated command-line switches on a recent question I asked it about the program “john the ripper”.

We absolutely need public sources of truth at the very least until we can build systems that actually reason based on a combination of first principles and experience, and even then we need sources of truth for experience.

You simply cannot create solutions to new problems if your data gets too old to encompass the new subject matter. We have no systems which can adequately determine fact from fiction, and new human experiences will always need to be documented for machines to understand them.


In my experience o1 is not comparable to any other LLM experience. I have had multiple PhD friends test it; it's what has turned them from stochastic-parrot campers into possible believers.

And to be clear: as a layman (in almost every field), I'd recognized that LLMs weren't up to the challenge of disabusing my PhD friends of that notion until o1, so I never even tried, even though I've 'believed' since GPT-2.


I haven't really found any use case related to actual work where o1 was better than 4o or 3.5 Sonnet.

Any time I tried some of the more complex prompts I was working through with Sonnet or 4o, o1 would totally miss important points and ignore a lot of the instructions, while going really deep trying to figure out some relatively basic portions of the prompt.

Seems fine for reasonably simple prompts, but gets caught up when things get deeper.


Yeah, I generally agree with that. That's why I said it only moved them from stochastic-parrot campers to "possible" believers. To clarify, the few I've had test it have all pretty much said "this feels like it could lead to real reasoning/productivity/advances/intelligence".

Experienced the same thing with a library that has no documentation and takes advantage of C++23 (latest) features.

Same, I’m pretty convinced it does in fact do genuinely original reasoning in at least a few areas, after enough prompts and with enough prodding.

But it takes so long and so much prompting skill to get to that point that the use cases seem limited.


I'm having a similar experience with o1. It's the only model that can identify causes of bugs for me. Perhaps it's already clever enough to be used to generate synthetic coding data to train and improve less capable models. Even in a synthetic StackOverflow Q&A format.

To be honest, many times GPT-4o helps me understand poorly written emails from colleagues. I often find myself asking it "Did he mean this when he wrote that?"... I'm a bit on the spectrum, so if someone asks me vague questions or hallucinates words for things that don't exist, I have to verify with ChatGPT to reaffirm that they are in fact just stupid.

Curious about your complex use case of epub.js. What were you trying to do with it?

I'm building an e-reader app where "enhancement content" such as illustrations, context-appropriate summaries, and group chat can be integrated into the reading experience.

The way I am connecting external content to the epub is through an anchoring system: sequences of words can be hashed to form unique IDs that are referenced by the enhancement. Doing this lets me index the enhancement content in a way that is format-independent and doesn't require modifying the underlying epub.

The specific task o1 helped me with was determining what text is visible at any given point in time. This text is then turned into hashes to pull the relevant enhancement content.

Getting the current words on the page doesn't seem all that complex, but the epub.js API is pretty confusing. There are abstractions like "Rendition", "Location", "Contents", and "Range", and it's not always intuitive which of these will provide the appropriate methods. I'm sure I would have figured it out eventually by looking at the API docs, but GPT probably saved me an hour or two of trial and error.
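For what it's worth, the word-hashing anchor idea can be sketched in a few lines. This is only an illustrative sketch, not the app's actual code; the function name `anchor_ids`, the window size, and the choice of SHA-1 are all assumptions:

```python
import hashlib

def anchor_ids(words, window=8):
    # Hash overlapping word windows into stable, format-independent IDs.
    # The IDs depend only on the text itself, so they survive reflowing
    # and different renderers, and the underlying epub is never modified.
    ids = []
    for i in range(max(1, len(words) - window + 1)):
        chunk = " ".join(w.lower() for w in words[i:i + window])
        ids.append(hashlib.sha1(chunk.encode("utf-8")).hexdigest()[:16])
    return ids

# Whatever text is currently visible gets hashed, and the hashes are
# used to look up enhancement content keyed on the same IDs.
visible = "It was a dark and stormy night and the rain fell".split()
print(anchor_ids(visible))
```

Overlapping windows mean the lookup still works even if the visible region starts mid-sentence.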


> group chat

hmm having a sort of mini-forum-like experience tied to particular pages in a book seems like a fascinating idea! being able to discuss plot twists and such only once you've already gotten to that point?

wow this seems like an amazing idea actually! any names yet? I'd love to check it out once it's done!


Interesting. How did you come across this idea. And how long have you been working on it?

I'm not sure when I first had the idea. I read a lot of mystery, but I'm often frustrated trying to remember all the details. A virtual notebook seems like it could help a lot for me personally.

My mom and grandma also read books with a physical notebook handy, and it seems like modern technology should make that unnecessary.

I was laid off as part of the massive layoffs in 2022 and made a first pass at the project. I didn't use epubjs and instead wasted a lot of time making a custom rendering engine -- this was dumb. Eventually I got another job and paused the project. I started again in earnest a month or so ago using epubjs as the base.


Why group chat? How did you think of that?

From being in book clubs

> They’re not creating new information

Most of this "knowledge sharing on online Q&A platforms" is NOT creative activity. It's endless questions about the same issues everyone is having except the system developers themselves. Much of this is just displacing search platforms.


It can be simultaneously true that (1) most answers on online Q&A platforms are not novel, and (2) that LLMs reduce the proportional novelty of answers.

For the purposes of the argument it is: these are the interface between the unseen "real world" and the LLMs. So information coming from these forums, even if regurgitated from "real life" or "education" or "experience"... the writer or someone else's, is a "creation" to the LLM.

It may be an interesting side effect that people stop so gratuitously inventing random new software languages and frameworks because the LLMs don't know about them. I know I'm already leaning towards tech that the LLM can work well with, simply because being able to ask the LLM to solve 90% of the problem outweighs any marginal advantage a slightly better language or framework offers. For example, I dislike Python as a language pretty intensely, but I can't deny that the LLMs are significantly better at Python than at many other languages.

It kinda makes me sad. I hope we don't enter an era of stagnation because LLMs only understand our current tech. There are so many brilliant ideas that mainstream languages haven't adopted (and may never be able to adopt). There are language features outside of python and JavaScript that will change the way you think about systems and problem solving. Effects systems, structured concurrency, advanced type systems, distributed computing... There are brilliant people discovering safer and smarter ways to solve these problems but it seems nobody will care anymore.

Yes, sticking to the most popular technologies increases quality of the output, enabling even smaller startups to build applications like https://youtu.be/oafdA2WXvEc?feature=shared

I created a new framework and fed my documentation plus certain important code snippets into the LLM. It worked out fantastically. Nowadays, though, the LLM never follows links and will hallucinate the whole thing, in a completely wrong language.

You can probably fine-tune a general purpose programming model on the code and documentation of your language project (the documentation being in large part written by an LLM too, of course).

Alternatively, esoteric languages and frameworks will become even more lucrative, simply because only the person who invented them and their hardcore following will understand half of it.

Obviously, not a given, but not unreasonable given what we have seen historically.


> become even more lucrative

why would it be lucrative? The person paying would consider whether they'd get locked in to the framework/language and be held hostage by the creator(s). This is friction to adoption. So LLMs will make popular, corporate-backed languages/frameworks even more popular and drown out the small ones.


<< why would it be lucrative?

Scarcity of some knowledge. Not all knowledge exists on SO. You are right about the popular stuff, but the niche stuff will be like everything else niche: harder to get and thus more expensive. COBOL is typically used as an example of this, but COBOL was at least documented. I am saying this because, while I completely buy that there will be executives who will attempt to do this, it won't be so easy to rewrite it all like Jassy from Amazon claims (or, more accurately, it will be easy, but with exciting new ways for ATMs, airlines and so on to mess with one's day).

<< The person paying would consider whether they'd get locked in

I want to believe a rational actor would do that. It makes sense. On the other hand, companies and people sign contracts that lock them in all the time, to all sorts of things and for a myriad of reasons, including but not limited to being wined and dined.

Again, I think you are right about the trend (as it will exacerbate already existing issues), but wrong about the end result.


I mean, this calculus was already there before LLMs when choosing a stack.

> The problem is eventually what are LLMs going’s to draw from?

Published documentation.

I'm going to make up a number but I'll defend it: 90% of the information content of stackoverflow is regurgitated from some manual somewhere. The problem is that the specific information you're looking for in the relevant documentation is often hard to find, and even when found is often hard to read. LLMs are fantastic at reading and understanding documentation.


That is only true for trivial questions.

I've answered dozens of questions on stackoverflow.com with tags like SIMD, SSE, AVX, NEON. Only a minority of these asked for a single SIMD instruction which does something specific. Usually people ask how to use the complete instruction set to accomplish something higher level.

Documentation alone doesn't answer questions like that, you need an expert who actually used that stuff.


And even for trivial questions, there is a lot out there that the doc developers ignored or hid.

For me, the current state of affairs is the difficulty of aiming search engines at the right answer. The answer might be out there, but figuring out HOW to ask the question, which are the right keywords, etc., basically requires knowing where the answer is. LLMs have potential in rephrasing both the question and what they might have glanced at here and there, even if it's obvious in hindsight.


Don’t use search engines unless you know what you’re searching for. That means I start with other material first (books, wikis, manuals, …), which gives me enough of an idea of what I'm looking for. Starting with a search engine is like searching for a jigsaw piece with only the picture on the box: you have to know its shape first and where it would go.

It's a very similar situation with concurrent programming. Knowing which instruction does a CAS, or the exact memory semantics of a particular load/store/etc., doesn't do much on its own.

Published documentation has been and can be wrong. In the late 1990s and early 2000s, when I still did a mix of Microsoft technologies and Java, I found several bad, non-obvious errors in MSDN documentation. AI today would likely regurgitate them in a mild but authoritative-sounding way. At least when discussing with real people, after the arrows fly and the dust settles, we can figure out the truth.

Everything (and everyone, for that matter) can be and has been wrong. What matters is whether it is useful. And AI as it is now is pretty decent at finding ("regurgitating") information in large bodies of data much faster than humans, and with enough accuracy to be "good enough" for most uses.

Nothing will ever replace your own critical thinking and judgment.

> At least when discussing with real people after the arrows fly and the dust settles, we can figure out the truth.

You can actually do that with AI now. I have been able to correct AI many times via a Socratic approach (where I didn't know the correct answer, but I knew the answer the AI gave me was wrong).


Yeah, this is wildly optimistic.

From personal experience, I'm skeptical of the quantity and especially quality of published documentation available, the completeness of that documentation, the degree to which it both recognizes and covers all the relevant edge cases, etc. Even Apple, which used to be quite good at that kind of thing, has increasingly effectively referred developers to their WWDC videos. I'm also skeptical of the ability of the LLMs to ingest and properly synthesize that documentation - I'm willing to bet the answers from SO and Reddit are doing more heavy lifting on shaping the LLM's "answers" than you're hoping here.

There is nothing in my couple of decades of programming or my experience with LLMs that suggests published documentation will be sufficient for an LLM to produce sufficient-quality output without human synthesis somewhere in the loop.


> regurgitated from some manual somewhere

yes, a lot of stuff is like that, and LLMs are a good replacement for searching the docs; but the most useful SO answers are regarding issues that are either not documented or poorly documented, and someone knows because they tried it and found what did or did not work


Knowledge gained from experience that is not included in documentation is also a significant part of SO. For example, "This library will not work with service Y because of X, they do not support feature Y, as I discovered when I tried to use it myself", or other empirical evidence about the behavior of software that isn't documented.

Following the article’s conclusion further, humans would stop producing new documentation with new concepts.

I find that it sloppily goes back and forth between old and new methods, and as your LLM spaghetti code grows it becomes incapable of adding functions with precision without breaking existing logic. All those tech demos of it instantly creating a whole app with one or a few prompts are junk. If you don't know what you're doing, then as you keep adding features it WILL constantly switch up the way you make API calls (here's a file with 3 native fetch functions; let's install and use axios for no reason), the way you handle state, your CSS library, etc.

{/* rest of your functions here */} - DELETED

After a while it's only safe for doing tedious things like loops and switches.

So I guess our jobs are safe for a little while longer


Naively asking it for code for anything remotely complex is foolish, but if you do know what you're doing and understand how to manage context, it's a ridiculously potent force multiplier. I rarely ask it for anything without specifying which libraries I want to use, and if I'm not sure which library I want, I'll ask it about options and review before proceeding.

It’s just like any other tool. If you know how to leverage it you’ll gain a lot more from it.

LLMs show their limits when you ask about something new (introduced in the last 6-12 months) that isn't widely used yet. I was asking Claude and GPT-4o about a new feature of Go, and they just gave me some old stuff from the Go docs. Then I went to the official Go docs and found what I was looking for anyway; the feature was released two major versions back, but somehow neither GPT-4o nor Claude knew about it.

With GPT-4o I had some success pointing it to the current documentation of the projects I needed, and it gave me current and accurate answers.

Like "Help me to do this and that and use this list of internet resources to answer my questions"


Simple. Codebases on GitHub, research papers posted in journals, raw data posted online, phone data, emails...

There is plenty of data even without QA forums.


Speech. The speech-to-text pipeline is inherent in us. The conversion model relies on our education and cultural factors. The models can transcribe speech and perform this conversion to generate new data. Put 10 mics in a public square and you'll have an infinite dataset (not necessarily a very smart one...).

All of the conversations they are having with real humans? If it's as big as ChatGPT, why can't it just consume its own information, which is at least half from humans?

Seems like the humans using it can be also training and effectively paying for the right to do so? I would guess this could be easily gamified.


They aren’t even regurgitating, they’re just making it up. It may regurgitate something it has seen, or it may invent something which isn’t even a part of the language. It’s down to chance.

I thought synthetic data is what is partially training the new multimodal large models, i.e. AlphaGeometry, o1, etc.

Synthetic data can never contain more information than the statistical model from which it is derived: it is simply the evaluation of a non-deterministic function on the model parameters. And the model parameters are simply a function of the training data.

I don't see how you can "bootstrap a smarter model" on synthetic data from a previous-gen model this way. You may as well just train your new model on the original training data.


>Synthetic data can never contain more information than the statistical model from which it is derived: it is simply the evaluation of a non-deterministic function on the model parameters. And the model parameters are simply a function of the training data.

The information in the data isn't just about the output but its rate of occurrence/distribution. If what your base model has learned is only enough to have the occasional flash of brilliance, say 1 out of 40 responses, and you are able to filter out those responses and generate as many as you like, then you can very much "bootstrap a better model" by training on the filtered results. You are only getting a function of the model's parameters if you train on its unfiltered, unaltered output.
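That filtering loop can be sketched as a toy (in the spirit of rejection sampling or STaR-style self-training). The function names are made up, and in practice `accept` would be an external verifier such as a test suite or a human rater, not something the model could compute itself:

```python
import random

def bootstrap_dataset(generate, accept, prompts, samples_per_prompt=40):
    # Sample many responses per prompt and keep only the ones that pass
    # an external filter; training on the kept pairs shifts the new
    # model's distribution toward the rare good outputs.
    kept = []
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            response = generate(prompt)
            if accept(prompt, response):
                kept.append((prompt, response))
    return kept

# Stand-in "model": right roughly 1 in 4 times. The filter can
# recognize correctness without knowing how to produce it.
random.seed(0)
noisy_model = lambda p: p + random.choice([0, 1, 2, 3])
is_correct = lambda p, r: r == p
data = bootstrap_dataset(noisy_model, is_correct, [1, 2, 3])
print(len(data), "of", 3 * 40, "samples kept")
```

The kept pairs are all correct even though the generator mostly isn't, which is exactly the extra information the filter injects.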


It's already been proven possible: https://arxiv.org/abs/2203.14465

Synthetic data without some kind of external validation is garbage.

E.g. you can't just synthetically generate code, something or someone needs to run it and see if it performs the functions you actually asked of it.

You need to feed the LLM output into some kind of formal verification system, and only then add it back to the synthetic training dataset.
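A minimal sketch of that generate-run-filter step, using Python's `exec` as a stand-in for a real sandbox. Everything here (`verify_candidate`, the toy snippets) is illustrative; a production pipeline would isolate execution properly rather than running untrusted code in-process:

```python
def verify_candidate(code, tests):
    # Execute a generated snippet and its checks in a scratch namespace;
    # only code that runs and passes every assertion gets added back to
    # the synthetic training set. A real pipeline would sandbox this.
    namespace = {}
    try:
        exec(code, namespace)   # define the candidate function
        exec(tests, namespace)  # run the external checks against it
        return True
    except Exception:
        return False

good = "def double(x):\n    return 2 * x"
bad = "def double(x):\n    return x ** 2"  # plausible-looking but wrong
tests = "assert double(3) == 6\nassert double(0) == 0"

print(verify_candidate(good, tests))  # kept
print(verify_candidate(bad, tests))   # discarded
```

Note the wrong candidate even passes one of the two checks, which is why the validation has to be external to the generator.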

Here, for example - dumb recursive training causes model collapse:

https://www.nature.com/articles/s41586-024-07566-y


Anecdotally, synthetic data can get good if the generation involves a nugget of human labels/feedback that gets scaled up w/ a generative process.

There are definitely a lot of wrong ways to do it. Doesn't mean the basic idea is unsound.

Yeah, there was a reference to this in a paywalled article a year ago (https://www.theinformation.com/articles/openai-made-an-ai-br...): "Sutskever's breakthrough allowed OpenAI to overcome limitations on obtaining high-quality data to train new models, according to the person with knowledge, a major obstacle for developing next-generation models. The research involved using computer-generated, rather than real-world, data like text or images pulled from the internet to train new models."

I suspect most foundational models are now knowingly trained on at least some synthetic data.


Edit: OP had actually qualified their statement to refer to only underrepresented coding languages. That's 100% true - LLM coding performance is super biased in favor of well-represented languages, esp. in public repos.

Interesting - I actually think they perform quite well on code, considering that code has a set of correct answers (unlike most other tasks we use LLMs for on a daily basis). GitHub Copilot had a 30%+ acceptance rate (https://github.blog/news-insights/research/research-quantify...). How often does one accept the first answer that ChatGPT returns?

To answer your first question: new content is still being created in an LLM-assisted way, and a lot of it can be quite good. The rate of that happening is a lot lower than that of LLM-generated spam - this is the concerning part.


The OP qualified "code" as code with poor availability of samples online. My experience with LLMs on a proprietary language with little online presence confirms their statement. In many cases, it is not even worth trying.

Fair point - I actually had parsed OP's sentence differently. I'll edit my comment.

I agree, LLM performance on coding tasks is heavily biased in favor of well-represented languages. I think this is what GitHub is trying to solve with custom private models for Copilot, but I expect that to be enterprise-only.


Data annotation is a thing that will be a huge business going forward.

Curious about this statement, do you mind expanding?

I'm also curious. For folks who've been around, the semantic web, which was all about data annotation, failed horribly. Nobody wants to do it.

There is still publicly available code and documentation to draw from. As models get smarter and bootstrapped on top of older models, they should need less and less training data. In theory, just providing the grammar for a new programming language should be enough for a sufficiently smart LLM to answer problems in that language.

Unlike freeform writing tasks, coding also has a strong feedback loop (i.e. does the code compile, run successfully, and output a result?), which means it is probably easier to generate synthetic training data for models.


> In theory, just providing the grammar for a new programming language should be enough for a sufficiently smart LLM to answer problems in that language.

I doubt it. Take a language like Rust or Haskell or even modern Java or Python. Without prolonged experience with the language, you have no idea how the various features interact in practice, what the best practices and typical pitfalls are, what common patterns and habits have been established by its practitioners, and so on. At best, the system would have to simulate building a number of nontrivial systems using the language in order to discover that knowledge, and in the end it would still be like someone locked in a room without knowledge of how the language is actually applied in the real world.


> sufficiently smart LLM

Cousin of the sufficiently smart compiler? :-p


The answer is already known, and it is a multi billion dollars business: https://news.ycombinator.com/item?id=41680116

That's a separate issue. That discussion is about training the model's responses through fairly standard RL (human-generated in this case), not providing it with a new corpus of data from which to draw its responses.

luckily LLMs do not upvote SO/Reddit answers, which is still done by humans.

There are a great many problems with LLMs and AI in general. The underlying problem is that LLMs break the social contract in about as many ways as a human can dream up. There is no useful, beneficial purpose that doesn't also open the door to intractable destructive forces manifold over that benefit; it's a modern-day Pandora's box, and it's hubris to think otherwise.

This underlying societal problem has knock-on effects just like chilling effects on free speech, but these effects we talk about are much more pernicious.

To understand, you have to know something about volunteer work and what charity is. Charity is intended to help others who are doing poorly, at little to no personal benefit to you, to improve their circumstances. It's God's work; there is very little expectation involved on the giver's part. It's not charity if you are forced to give.

When you give something to someone as charity, and then that someone, or an observing third party related to that charity, extorts you and attempts to coerce you for more, what do you think the natural inclination is going to be?

There is now an additional coercive personal cost attached to that which was not given freely. This is theft.

Volunteer psychology says you stop giving when it starts costing you more than you were willing to give. Those costs may not be monetary; in many cases they are personal, and many people who give often vet those they are giving to, so as to help good people rather than enable evil people. LLMs make no distinction.

Fewer people give, but that is not all. Fewer intelligent people give anything. They withdraw their support. Where the average person in society was once pulled up to higher forms of thought by those with intelligence and a gift for conveying thought, that vanishes as communication vanishes. From that point forward, the average is pulled toward baser and baser forms of thought, and the average people seem like the smartest people in the room, though that feeling is flawed (because they aren't that intelligent).

The exchange for charity is often I'll help you here, and you are obligated to use it to get yourself into a better position so you can help both yourself and other people along the way. Paying it forward if you can. Charity is wasted on those that would use what is given to support destructive and evil acts.

There are types of people who will just take until no more can be given, which is a type of evil, which is why you need to vet people beforehand; but an LLM is simply a programmatic implementation of taking. The entire idea of a robot worker replacing low- to mid-level jobs during limits of growth seems enticing from the short-sighted producer side, except it ends up stalling economic activity. Economies run on an equal exchange between two groups. The moment there is no exchange, because money isn't being provided for labor, products can't be bought. Shortages occur, then unrest, usually slavery, then death when production systems for food break down.

We are already seeing this disruption in the tech sector, with unemployment five times the national rate during peak hiring (where off-peak has hiring freezes), in a single year. When you can't keep talent, that talent has options and goes elsewhere, either contributing within the economy or creating their own (black markets). The ECP in non-market socialist systems (like what we are about to have in about 5 years) guarantees a lot of death or slavery.

People volunteer to help others; if someone then uses those conversations to create a homunculus that steals that knowledge and gives it to anyone who asks, even potentially terrorists or other evil actors, no one who worked for the expertise will support that.

Inevitably it also means someone will use that same thing to replace the entry-level portions of those expert jobs with a machine that can't reason, and no new candidates can get the experience to become experts. It's a slow descent into an age of ruin that ends in collapse.

You end up with cyclical, chaotic spirals of economic stagnation, followed by shortage, followed by slavery or death from starvation; no AGI needed. All that's needed is imposing arbitrary additional costs on those least able to bear them: interference in the job market making it impossible to find work, gender relations (making it impossible to couple up and have babies), political relations (that drive greater government control while limiting any future). It's dystopian, but it's becoming a reality because of evil people in positions that allow front-of-line blocking of course corrections to continue their evil work; and they are characteristically willfully blind to the destructive acts they perform and their consequences.

There are a lot of low-pay jobs that will just vanish; people don't have enough money to raise children as it is; the old crowds out the new until, finally, a great dying occurs. That is the future if we don't stop it and reverse the changes that were made (as the bridges were burnt) by the current generation in political power. They should have peaked in power in '95 and exited in 2005-2010. They still hold the majority, and it's 2024 (30 years after taking power). They'll die of old age soon, and because they were the only ones who knew all the changes that were made, new bridges will have to be built from scratch with less knowledge, fewer resources, less goodwill/trust, and fewer people. They claimed they wanted a better world for their children, while instead creating a hellscape for them made of lies and deceit.

Very few people are taught specifically what the social contract entails. It's well past time people learned, lest they end up being the supporting cause of their own destruction.

Thomas Paine's Rights of Man has a lot of good points on this.


> There are types of people who will just take until no more can be given which is a type of evil, which is why you need to vet people beforehand but an LLM is simply a programmatic implementation of taking.

Your first argument largely rests on this, but isn't the fact that so many LLM-created answers get posted actually proof of the opposite? The LLM is, after all, "giving" here - and it's not like those answers are useless, just because they're made by an LLM. LLMs are perfectly capable of giving good answers to a large number of common tech support questions, and are therefore ("voluntarily") helping people.

> We are already seeing this disruption in jobs in the Tech sector that is 5 times the national unemployment during peak hiring (where offpeak has hiring freezes), in a single year. [...] Inevitably it also means someone will use that same thing to replace those expert jobs entry level portions with a machine that can't reason, and no new candidates can get the experience to become an expert. Its a slow descent into an age of ruin that ends in collapse.

This is your other argument, which is slightly more solid - I think that at least in the short term, we are going to see many of those effects. But overall, this dystopian "collapse" scenario is economically naive, for at least two reasons:

1. As it becomes harder to enter professions requiring high degrees of specialization because the entry-level positions dry up, the demand for real specialists in these fields increases accordingly. At some point, it becomes viable for new talent to enter those sectors even in the face of a low number of "true" entry-level positions, as there is now an incentive to accept a much steeper upfront cost in terms of time invested in learning the specialization. If there are jobs that can be done at all and that pay $500,000 per year, somebody is going to do them.

2. And I mean, besides, the entire "ECP in non-market socialist systems [...] guarantees a lot of death or slavery" argument is plain wrong. ("Pure ideology! *sniffs*"). We already live in a post-market capitalist society. Amazon controls enough market share in a large number of sectors that their operating model resembles Soviet-style demand planning when calculating how much produce to order (that is, order to manufacture). The Economic Calculation Problem, in the age of computers, is solved. We are not playing with abacuses anymore.


Why would you trust Amazon?

At least the USSR's ideology claimed to represent the worker's interest, and so once that stopped happening in practice, the system collapsed. (And they were already in the age of computers, trying to do some of the first cybernetics.)


I would not dare say I trust Amazon; however, our lived experience of ordering from there entails neither death nor slavery.

If the latter were true, we would have seen demand for all those specialized fields increase among graduates, but it's just like it was: no jobs and more debt. Older people fill the roles until they retire or die, and positions go unfilled despite being advertised for months to years. My generation will be dead within that time period; you should see at least some replacement by now, but if you look closely there isn't much replacement going on at all, because there's no money in it. Greybeards are the only minders left running many of these IT departments, if they are left at all and not completely outsourced.

For the amount of responsibility to get things right, you now have jobs with significantly fewer responsibilities or entry skills competing with the same positions thanks to inflation. Who would bother signing up for that stress for no economic benefit when they can get any number of other jobs at 3/4 of the pay and none of the responsibility? This type of chaos is what you've discounted as not happening (ECP).

> The economic calculation problem ... is solved

Then why are we seeing chaotic disruptions of growing frequency and amplitude, ever more concentrated business sectors (recalling Lenin's remark to Keynes about attacking capitalism by debauching the currency through inflation, toward non-market socialism), and, more importantly, shortages that are starting to self-sustain at the individual level?

When common food goods are not getting replenished on a weekly basis, and they are 2-3x the normal price, there is an issue. When it hits 2-3 weeks it starts to compound as demand skyrockets for non-discretionary goods (and they can't keep up). Goods sell out almost immediately (as they have been more recently), and people start hoarding (which they are doing again with TP).

You don't seem to realize this is the problem in action. It is hardly solved; we aren't even at the non-market part yet, and we are already far enough along to see that chaos is increasing, distortions are increasing, and shortages are increasing without being corrected within JIT shipping schedules.

Those three observations are cause for concern, reason enough to stop and look around, because chaos is a major characteristic of the ECP. Moving goods around doesn't perform economic calculation, and Soviet-style demand planning failed historically once shortages occurred; the next thing coming in the near future is sustained shortage and death from famine. India stopped exporting its food because they see this on the horizon and have an intimate history with famine.

The 1921 famine (Bolshevik non-market) reduced the national population by 4% YOY and dropped the birthrate negative; Mao killed off more than 33 million during his non-market stint, which caused famine. There are no accurate numbers to draw up a YOY percentage, but it was significant.

Every single non-market socialist state in the history of socialism has failed, and that includes its derivatives. It's not appropriate to call this ideology when the historic record shows that not even a single non-market state exists today. The fact that they all had to have markets tied to capitalist states to get their prices shows the gravity of the issue, and now that the last capitalist state is falling to non-market socialism (through intentional design/sabotage), what do you suppose will happen to all those market-socialist countries that can no longer get their price data? It's already happening, and chaos is unpredictable; inflation/deflation measures are limited in resolution (a high-hysteresis problem).

Non-market socialist states run into the slavery and death part, but you are not recognizing that, and you discount it because it hasn't happened to us yet (though a point of no return is likely within 5 years), even though it has happened to every non-market system.

The point of no return to non-market socialism is when debt growth exceeds GDP (outflows exceed inflows). This is structural to any Ponzi scheme. Who decides? The producers. This is when rational producers abandon the market and start shutting down, because it is no longer possible to make a profit in terms of purchasing power. From there it's a short but inevitable jaunt to collapse.

Have you noticed that China's stimulus through printing isn't doing much? Every round of printing requires exponentially greater amounts, with diminishing returns.

I think you've sorely confused objective observation with ideology and discounted it without rationally taking the factors into account; these factors show a clear and present danger to all existing market-socialist states, as well as to those capitalist bastions that will be pulled down in the near future by ongoing subversion.

Amazon doesn't produce; it's a logistics company. Shortages self-sustain because producers are leaving the market. We saw this first with baby formula in 2020, when the last producer was closed, but what's coming will leave only state-run capacity in most sectors and industries.

Most knowledgeable people know these systems are brittle and parasitic, and that they aren't capable of economic calculation on their own, for a myriad of reasons already rationally written down.

Socialist production, absent a market, doesn't work without slave labor. It doesn't even work longer-term with slave labor, because the inefficiencies at current population levels don't scale well with tech levels.

You might be fine claiming everything is fine now, despite a lot of evidence to the contrary that things are getting progressively worse and will continue down that path because of these consequences, but what about in 5 years when you can't get food? Will it matter then, when you can't do anything about it, because the time to do something came and passed?

You think the Elites in China don't know this is going to happen? They are pushing so hard for the BRICS because they know this is going to happen, because they helped cause it in their dogmatic and blind fervor and orchestration.

Name one modern real non-market socialist state that exists today and has not failed, and I'll be willing to consider your last statement. As far as I know, none exist, and that should frighten every single person today, because that is where we are going whether we like it or not (as a function of system dynamics).


> Name one modern real non-market socialist state that exists today that has not failed, and I'll be willing to consider your last statement.

Your focus on the false dichotomy between state action and private action makes me want to go tongue-in-cheek and name France. Definitely a non-market socialist state.

And for the record, I do agree that everything will get progressively worse, for ideological and for material reasons. But in both cases for different reasons than you suggest. The unholy marriage of extreme concentrations of wealth and the reemergence of fascism has been blessed with many children, but the Musks, Murdochs, Kochs and Thiels of this world will have little recourse against an economic reality wrecked by climate change, the drying up of fossil fuels and the bitter demographic outlook.


While France's debt-to-GDP as a single country has risen above that threshold, the driver of statism is the primary currency they use, and they pooled theirs with the eurozone (i.e. the euro); the aggregate country debt ratio for the euro is still only around 88%, largely because it is tied indirectly to the USD.

Still, though, you see high concentration, loss of jobs, and high unemployment, all classic signs often seen in socialist states right before major calamity or outright failure.

I'd say, though, that France won't have gotten to non-market socialism until its primary underlying currency hits those milestones, but it will see the chaos much sooner when the shoe does drop.


In a very real sense, that’s also how human brains work.

This argument always conflates simple processes with complex ones. Humans can work with abstract concepts at a level LLMs currently can’t and don’t seem likely capable of. “True” and “False” are the best examples.

It doesn’t conflate anything though. It points to exactly that as a main difference (along with comparative functional neuroanatomy).

It’s helpful to realize the ways in which we do work the same way as AI, because it gives us perspective unto ourselves.

(I don’t follow regarding your true and false statement, and I don’t share your apparent pessimism about the fundamental limits of AI.)


User data is the new gold, at least until AI is good enough to create that gold from iron.

AI companies are already paying humans to produce new data to train on and will continue to do that. There's also additional modalities -- they've already added text, video, and audio, and there's probably more possible. Right now almost all the content being fed into these AIs is stuff that humans can sense and understand, but why does it have to limit itself to that? There's probably all kinds of data types it could train on that could give it more knowledge about the world.

Even limiting yourself to code generation, there are going to be a lot of software developers employed to write or generate code examples and documentation just for AIs to ingest.

I think eventually AIs will begin coding in programming languages that are designed for AI to understand and work with and not for people to understand.


> AI companies are already paying humans to produce new data to train on and will continue to do that.

The sheer difference in scale between the domain of “here are all the people in the world that have shared data publicly until now” and “here is the relatively tiny population of people being paid to add new information to an LLM” dooms the LLM to become outdated in an information hoarding society. So, the question in my mind is, “Why will people keep producing public information just for it to be devalued into LLMs?”


How would a custom language differ from what we have now?

If you mean obfuscation, then yeah, maybe that makes sense to fit more into the window. But it’s easy to unobfuscate, usually.

Otherwise, I'm not sure what the goal of an LLM-specific language could be. Because I don't feel most languages have been made purely to accommodate humans anyway; they balance a lot of factors, like being true to the metal (like C) or functional purity (Haskell) or fault tolerance (Erlang). I'm not sure what "being for LLMs" could look like.


Hmm, has there even been much success in training neural networks on taste, touch, or smell? I kind of doubt we have good enough sensors for that.

I don't understand why, given that Germans are generally quite precise (German engineering and manufacturing is world-famous for a reason), DBahn can't get the trains to run on time. This isn't Italy, after all. The Swiss and Japanese manage to do it. Can someone explain?

DBahn was privatized in a way that combines the worst of both worlds: private, but 100% owned by the state. An opaque labyrinth of 600 subcompanies.

Also, some textbook examples of bad incentives. For example, repairing bridges is paid for by DB. If a bridge is beyond repair, then the taxpayer pays. Predictable result: doing hardly any maintenance until the bridge fails. (That example has by now been fixed, but it shows the clear lack of planning.)

Finally, the system is just over capacity. Many more trains, but the same infrastructure as decades ago. If you got rid of half the schedule and used a Swiss-like system of stopping for several minutes at every station, the trains could be on time. But the trains are already overcrowded, so then what do you do with all the passengers?


It's the classic combo of poor budgets and conflicting budgeting, aging infrastructure, and bad management. DB was privatized 20 or 30 years ago, but parts of the infrastructure are public property and managed by the state. So DB willingly let them rot until the state starts renewing them, which of course takes ages. At the same time, DB also started saving significant money, because that's what private companies do, so they removed parts of the infrastructure, which later started harming planning on a nationwide level. And on top of this, there are parts of the infrastructure that are literally up to 150 years old, but still in use for whatever reason.

Or in other words: classical underfunded, overused legacy system showing its age.


> At the same time, DB also started saving significant money, because that's what private companies are doing, so they removed parts of the infrastructure

As strongly encouraged by their regulators (the Bundesnetzagentur and the Eisenbahnbundesamt), though, too, because for a long time the main political mandate regarding the railways was mainly to save money.


Makes sense. Is SBB also privatized, or is it state-owned?

Mismanagement. Years of cutting maintenance and personnel to make profits.

It's money. If you look at investment into the railway system per capita, Italy actually spends more on its railway system than Germany: https://www.allianz-pro-schiene.de/presse/pressemitteilungen...

Because it's a cliche that's been untrue since at least the 90s.

Germans are generally quite precise, which can be both good and bad. It can mean that things are done correctly, but it can also mean that people waste endless time obsessing over how to exactly implement every minute detail of a rule.

Just to give you an example of the latter: a Vietnamese-German woman (a German citizen who speaks German natively) was recently in the news because the local authorities have refused to issue her baby a birth certificate for over 6 months.[0] When she went to get a birth certificate, the authorities told her that her last name, Le Nguyen - which already appears on all of her German documents - does not meet their standards. They refuse to issue a birth certificate for her baby until she changes her last name to something the German regulations allow.

To make the situation even more absurd, the problem is that the German regulations state that double family names do not exist in Vietnamese (surprise: they do).[1] If the mother had a French double name, it would be no problem.

The problem is not even the baby's last name - it has the father's last name. The problem is that the birth certificate has to list the mother and father, and the authorities refuse to write the mother's name.

Instead of just giving the mother a bit of leeway and issuing a birth certificate, the authorities insist on inflexibly following some completely incomprehensible interpretation of the regulations. Suddenly, after thirty-something years of living in Germany, this lady has to change her last name, and until she does, her baby - a German citizen - has no official documentation and cannot receive any government benefits.

Repeat this daily across many aspects of daily life, and you can see how it would become a problem.

0. https://www.tagesschau.de/inland/regional/berlin/rbb-keine-g...

1. https://www.rbb24.de/panorama/beitrag/2024/10/interview-berl...


> I don’t understand why, given that Germans are generally quite precise - German engineering and manufacturing is world-famous for a reason

Yeah, that reason is propaganda and myth, it might have been true in the 1800s and early 1900s but now it's the same shit as everywhere else

Even Spanish trains are faster and more punctual on average than German ones, and Spaniards are world-famous for siestas


That’s a nice but false narrative. The Swiss are wealthier than the Americans and yet they use public transport extensively.

Good public infra is wealth with much higher utility than private infra.

Roads are public infra. I agree they should be kept in good order.

In general yes. In particular some roads provide negative utility and should be closed or not built.

And he pays no attention to the reports about car crashes on the road because those aren’t even reported.

That is pure BS. At least as some general sweeping statement.

> It is the phones specifically that are the problem, not social media.

disagree; it's the social media that's the problem; the phone just makes it possible to access it at all times

remove social media and you won't have kids on their phones all the time (or, if you're very lucky, they might use them for something productive)


Mobile operating systems are designed to engage and distract the owner. Lost your attention for a few minutes? Screen dims and threatens to darken! Oh no! Touch its face and keep it alive! Notifications from everything. Alarms that won't shut off until you do something about it. Pleading with us to recharge the battery. Insisting that radios be turned back on and sensor access be granted.

Every app, every website you visit is infested with dialogs and pop-ups that you're brushing out of the way, trying to get something accomplished.

It really tests your resolve and concentration, to see if you finished that one task you had in mind when you unlocked your phone, without going into 5 more on the side, or whether you can tame your phone sufficiently to be an assistant or productivity tool, rather than a firehose of marketing from dozens of companies to you, the consumer.


> remove social media and you won't have kids on their phones all the time

This is a testable hypothesis. We shouldn’t conclude either way without more evidence. As it stands, enforcing a phone ban has advantages over an app one.


FYI: Schools have firewalls that block social media, but kids learn to use VPNs.

Firewalls also do nothing if the kid has a decent data plan.

Small sample size but already tested in my household. Block the social media apps and website and phone use is greatly reduced.

Yeah in computer class in high school the normal kids were on MySpace or chat in the computer lab instead of doing the boring work.

smartphone-free cafes are starting to pop up [0]

[0] https://www.facebook.com/reel/1041312411329542


Isn't it already illegal in most places? (Sure, people do it anyway, but if you're caught there are heavy fines.)

Last year in Michigan it was made a primary offense. Driving was really pleasant for a month or two, until people realized nobody enforces it where it would help most.

In Montana it's apparently still legal, but I'm sure cops could write you a ticket for the infractions you commit while on your phone, just not for the phone itself.

They're never caught

> downloading huge quantities of textbooks and the like onto my phone at the age of 14 or 15

I also used the internet to get interesting tech manuals (and plenty of other books) from IRC channels when I was young

But that was before social media, which has IMO destroyed (not entirely, but largely) the positive aspects of internet connections for teenagers

A phone is now primarily a source of always-on entertainment (in 10-second bytes).

