
Articles on his blog about Unity are on point! e.g. https://garry.net/posts/unity-can-get-fucked and several others leading up to it

1. Awesome and well done!

2. But it doesn't have the classic "win" sound that I remember so fondly :(

3. Prepare to be destroyed by Nintendo


If I had children aged 7-17 and felt China was intentionally nudging them via algorithmic suggestions away from STEM and toward vapidness, and if I was unable to control their access to it, I guess I might appreciate that my government had banned it. But, as others have mentioned, it sets a dangerous precedent. If nothing else, this attempted ban has raised national awareness about the negative impacts of TikTok. What could the US Federal Government do instead, assuming it is important to weigh such platforms by their effects on the population?

If China sold candies that contained poison and marketed them to US children, a ban would be easy, since the FDA already prohibits this. If the FDA didn't exist, perhaps poisoned candy sales would prompt the creation of such a regulatory body.

So I guess I oppose the ban while recognizing the danger, and suggest we consider regulating digital goods in the same manner as consumable foods: if provably harmful effects are evident, that is grounds for banning a product on the basis of health protection.


The forced divestment is for national security reasons. Bytedance, as a Chinese company, is required by law (Cybersecurity Law of the People's Republic of China) to provide full data access to the Chinese government on request, and they are compelled not to reveal when this occurs. Since this is done through legitimate channels (on Bytedance's side), this won't even be caught with an audit. So you have a situation where an app installed on half of America's phones shares all its data with China, along with any potential changes the government recommends for influencing the content.


Meta was selling data to Chinese groups and buried a report stating this until recently. This has nothing to do with national defence and everything to do with ensuring American companies control the narrative without competition.


“selling” data is not in the same bucket for risk as CCP using TikTok for propaganda.


> Meta was selling data to Chinese groups

Source?


>The forced divestment is for national security reasons.

Would you like to buy a bridge?


Meta & co are required by US law to do the same for people in the rest of the world. Didn't see a huge US outcry about that, in fact I saw a lot of hate for things like GDPR


The hate for the GDPR that I read about is actually directed at the "allow cookie" popups, which aren't needed at all and are just a form of protest by those individual sites, because they are storing and selling personal information, including IP addresses.

If you aren't engaged in those practices then there's no need for any GDPR annoyances for users.

I may misunderstand, I'm in America currently.


The "allow cookies" popup was already there because of the cookie banner law; unfortunately the GDPR did not stop the cookie law. But the GDPR does say you need a way to agree to tracking etc., and to be informed when it happens, so it seems reasonable that the "allow cookies" popups would be used for this as well.


I think the easier framework is this: China has banned her citizens from using most United States-based social networks. This prevents American companies from accruing profit from Chinese citizens and advertisers, and shrinks their potential pool of user data for refining algorithms or selling. As such, it's effectively a trade policy for us to in turn ban her social networks. Unless and until we are equally able to harvest Chinese data and suck yuan out of China, she will not be allowed to harvest American data and suck dollars out of here.


While this is a fair take, it's not what the law has in mind.


China is classified as a foreign adversary, so this goes beyond trade policy. Foreign adversaries show a pattern of conduct that threatens national security. People are not comfortable with foreign adversaries having a direct line to our youth's attention and having their finger on the dial.


i agree wholeheartedly. i provided this as an additional line of reasoning for people who are either America-haters or doves.


It’s overly simplistic. It doesn’t take into account the ideals the USA was founded on (including free speech as an inalienable right), nor does it take into account the large shift in US government policy.




i'm not sure how much, if any, game theory you've studied, but an always-cooperate strategy rarely works in repeated games.


That’s asinine. Every nation responds to things such as tariffs with a proportional response.

We have plenty of evidence that the U.S. has been harmed by our open approach to unfettered access to our electronic systems. Meanwhile our geopolitical adversaries have no qualms about firewalling their citizens from accessing foreign networks at all.

This is a clear case where the U.S. should treat them as they treat us. IMO any 1st amendment arguments are made in bad faith because there are no shortages of non-hostile channels for Americans to speak freely and openly.

Does anyone else remember “free speech zones” from the Iraq War era? Where was this argument then?


Wait, so your position is that the US is harmed by being the global master of the internet (through companies like Meta that are synonymous with the internet in some places in the world) and that we should build the great American firewall to keep our internet in and others' out?


Well, it does offer an avenue for enacting some form of ban. And I'm not so sure it's all that morally low.

Because what China's ban of US social media might say is that China recognizes social media's power to influence the populace (think Russia's use of Twitter and FB in the 2016 election). Yeah, I am actually in favor of some form of restrictions, because we as a country need to realize that social media is a tool that can be used against us.

Yeah, if it was an outside country owning a major US newspaper, it'd be more clearcut.


If social media is so bad let's regulate ALL of it, US firms included.

Personally I fear US-controlled social media more than Sino-controlled ones. It's not like the CCP can come and arrest me here in the US, or really use my data against me in any way. Both have plenty of reasons to throw propaganda at me, or censor certain viewpoints.

At least when it's a foreign country I have a chance of seeing through it, compared to so many domestic media sources currently licking Trump's feet. I'm supposed to feel secure that they'll be telling me the truth over the next 4 years? Acting in my interest?


I understand the sentiment, and in the short term I agree with you (what is a foreign-controlled TikTok going to do to us?). But I think giving influence over our online behavior directly to an adversarial foreign power doesn't sound like a good idea. To me, it recalls WWII and the codebreakers in England who broke the German codes. They didn't use it to decode messages all the time, because they didn't want to alert the Germans; only during significant events. Yeah, control over a major social media outlet could be a tool to use selectively during important events to influence the masses. Not a good idea in my opinion, but I am no expert in this.


That's fair. I do think China would probably just omit viewpoints rather than outright lie and it'd be hard to detect.

I guess I don't personally fear China as much as Congress does, I view them more as a rival than an enemy. I wish them well, and hope they make more cool apps and games for me to enjoy. And they'd probably love nothing more than for us to overthrow our oligarchs (just like we want to see the fall of the CCP), and that's not necessarily against my interests as a regular Joe.

I can understand why Congress is concerned, I just wish they'd try to address the root of the problem (enforce common safety and transparency standards for all social media) rather than the targeted approach they've taken.


Ah, true that. Congress (and the law) definitely isn't very consistent in its enforcement, nor well thought out.

And actually, I didn't fear China, but the invasion of Taiwan is becoming more and more real. A bit scary.


meta also cannot arrest you here in the US. we should maintain tighter controls on the state's use of data and access thereto because it's the entity we've given a monopoly on violence. that is the appropriate point for controls to be applied.


> If China sold candies that contained poison and marketed them to Us children, it would be easy, since the FDA prohibits this.

The FDA was created by an act of Congress, as was this ban. These are identical scenarios -- the FDA has a mandate to block certain things, as does the TikTok ban. What's being debated is the constitutionality of it; and there are arguments both ways, but it seems very likely that the ban will hold.


I do have children in that age range and see US social media damaging them. Would HN be OK with European governments banning Meta, X, Discord etc?


> Would HN be OK with European governments banning Meta, X, Discord etc?

I'm a bit surprised it hasn't happened yet, although those companies are also willing to adjust policies in foreign nations—for instance, Meta saying it won't eliminate fact checking outside of the US.


A very naive and hopeful part of me would wish for Facebook, Twitter, and other vapidness-enhancing platforms to be regulated too. But the untrusting, freedom-loving, red-blooded American in me is also wary of government controls and power consolidation bordering on censorship. No easy answers I suppose; we'll just have to find a way to thrive in spite of platforms that profit from our wasted time.


Hey, one platform is Chinese, the others are American. That's the difference you're looking for


I appreciate a response like this on HN.

IF there is a problem, let's solve the root issue (which may include looking at the algo feeds of all big tech, etc).


> felt China was intentionally nudging them via algorithmic suggestions away from STEM and toward vapidness

A ban based on a feeling?


I think the US social media mega corps are kindred spirits and if TikTok is considered harmful/propaganda then so are the US products. The subject draws an uncomfortable amount of heat.


I think that's where we were with seatbelts in the 1950s, tobacco in the 1920s and alcohol in the 1850s. In all of those cases, society ultimately decided that guardrails were needed.


Yet imagine a law that mandates seatbelts only for non-US manufactured cars...

The main problem is the hyperoptimized addictive nature of some modern social media apps, not who makes them.


> Candies

Starts with an F ends in L


Speed-ran the game using this (well, I injected jQuery first to select the element using $(), because I'm an absolute Baboon) in about 45 seconds, spam-clicking all the upgrades; clicks stopped going up after hitting "342,044,125,797,992,850,000,000,000,000 stimulation" at 10k clicks per second.

What a ride. Love the implied commentary on our over-stimulated lives!


Fun fact: browsers' devtools consoles have de-facto standardized convenience aliases for querying the DOM, similar to jQuery [0][1][2][3][4]. This means you could do something as simple as:

    setInterval(()=>$('.main-btn')?.click(), 0)
    setInterval(()=>$$('.upgrade')?.forEach(_=>_.click()), 1000)
to create the simplest dependency-free cheat speed runner. (And, as mentioned earlier, shrinking -- or logically also zooming in -- the page results in more DVD bounces.)

[0] https://devtoolstips.org/tips/en/query-dom-from-console/ [1] https://firefox-source-docs.mozilla.org/devtools-user/web_co... [2] https://developer.chrome.com/docs/devtools/console/utilities... [3] https://learn.microsoft.com/en-us/microsoft-edge/devtools-gu... [4] https://developer.apple.com/library/archive/documentation/Ap...


I get a reference error when I try this (chrome stable on linux)


Ah, thanks for the heads-up, apparently there is something borked in Chromium wrt $ / $$ encapsulation: it seems they are not reachable from the (global) context of setInterval, so doing `window.$ = $; window.$$ = $$;` fixes that in Chrome. Not sure why. (Yet again embarrassed myself by trying a snippet that "simply must work ® according to all documentations ™" in a single browser only before posting. Sigh.)



I bet it's working as intended. The $ symbol is probably a special feature of the console and is not intended to be a property of window. Inside setInterval, the function is no longer being executed in the special console environment, which has access to that symbol.


Yes, I guess there could be some intention behind that, presumably some security precaution. But still: the fact that you can see $ in globalThis (as a non-enumerable prop), and that the globalThis you see from inside the timeout'ed function is strictly equal to the globalThis seen directly from the console, makes it somewhat spooky.

    console.log(Object.getOwnPropertyDescriptor(globalThis, '$'))
    // {writable: true, enumerable: true, configurable: true, value: f}
    globalThis.globalThat = globalThis
    globalThat.$ === globalThis.$
    // true
    setTimeout(()=>console.log(globalThis.globalThat === globalThis))
    // true
    setTimeout(()=>console.log(Object.getOwnPropertyDescriptor(globalThis, '$')))
    // undefined (!)
    $ = $
    setTimeout(()=>console.log(Object.getOwnPropertyDescriptor(globalThis, '$')))
    // { writable: true, enumerable: true, configurable: true, value: f}
And it (`setTimeout(()=>{console.log(typeof $==="function")},0)`) works in Firefox. (Interestingly, you cannot get $'s descriptor there, but you always have it available in the timeout.)


Was closed as "Infeasible", i.e. wontfix.


> I injected jquery first to select the element using $()

In Chrome and Firefox, $ and $$ are available in the console as replacements for document.querySelector and document.querySelectorAll, respectively.

This doesn't work in scripts though; only in the console. In a script you can use this:

    const $ = document.querySelector.bind(document);
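And for $$, something similar works (a sketch; spreading the NodeList into an array mirrors the console's behavior, where $$ returns an array rather than a NodeList):

    const $$ = (sel) => [...document.querySelectorAll(sel)];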


I wonder what the limiting factor is here; I'm currently at

332,446,225,163,762,970,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 stimulation

131,903,042,042,866,960,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 stimulation per second

And there doesn't seem to be an end in sight.


Going to the ocean is the end; I got there with only a couple dozen million stim.


My brain has never been happier to see a credits screen


Nah, just remove the element and keep going.


I envy your rig - mine glitched a lot to get it in <3min. Might not be doing myself a service by actually answering the Duolingo questions via LLM... https://www.youtube.com/watch?v=I-J0ppP-H9s


Ah, a new yardstick for browser performance :-P


Sir that's true evil! That's evil you know?


Majestic.


Wonderful article on fractals and fractal zooming/rendering! I had never considered the inherent limitations and complications of maintaining accuracy when doing deep zooms. Some questions that came up for me while reading the article:

1. What are the fundamental limits on how deeply a fractal can be accurately zoomed? What's the best way to understand and map this limit mathematically?

2. Is it possible to renormalize a fractal (perhaps only "well behaved"/"clean" fractals like the Mandelbrot set) at an arbitrary level of zoom by deriving a new formula for the fractal at that level of zoom? (Intuition says no; well, maybe, but with additional complexities/limitations; perhaps it just pushes the problem around.) (My experience with fractal math is limited.) I'll admit this is where I hit my own limits of knowledge in the article, as it discussed this in terms of normalizing the mantissa, with the limit being that you then need to compute each pixel on the CPU.

3. If we assume that there are fundamental limits on zoom, mathematically speaking, then should we consider an alternative that looks perfect with no artifacts (though it would not be technically accurate) at arbitrarily deep levels of zoom? Is it in principle possible to have the mega-zoomed-in fractal appear flawless, or is it provable that at some level of zoom there is simply no way to render any coherent fractal or appearance of one?

I always thought of fractals as a view into infinity from the 2D plane (indeed the term "fractal" is meant to convey a fractional dimension, one exceeding the shape's topological dimension). But I never considered our limits as sentient beings with physical computers that would never be able to fully explore a fractal; thus it is only an infinity in idea, and not in reality, to us.


> What are the fundamental limits on how deeply a fractal can be accurately zoomed?

This question is causing all sorts of confusion.

There is no fundamental limit on how much detail a fractal contains, but if you want to render it, there's always going to be a practical limit on how far it can accurately be zoomed.

Our current computers kinda struggle with hexadecuple precision floats (512-bit).


1. No limit. But you need to find an interesting point; the information is encoded in the numerous digits of this (x,y) point for the Mandelbrot set. Otherwise you'll end up in flat space at some point when zooming.

2. Renormalization to do what? In the case of the Mandelbrot set, you can use a neighboring point to create its Julia set and get similar patterns in a more predictable way.

3. You can compute the perfect version, but it takes more time; this article discusses optimizations and shortcuts.


1. There must be a limit; there are only around 10^80 atoms in our universe, so even a universe-sized supercomputer could not calculate an arbitrarily deep zoom that required 10^81 bits of precision. Right?

2. Renormalization just "moves the problem around" since you lose precision when you recalculate the image algorithm at a specific zoom level. This would create discrepancies as you zoom in and out.

3. You cannot, because of the fundamental limits on computing power. I think you cannot compute a mathematically accurate and perfect Mandelbrot set at an arbitrarily high level of zoom, say 10^81, because we don't have enough compute or memory available for the required precision.


1. You asked about the fundamental limits, not the practical limits. Obviously practically you're limited by how much memory you have and how much time you're willing to let the computer run to draw the fractal.


1. The Mandelbrot set is infinite. The number pi is infinite too, and contains more information than the universe.

2. I don't know what you mean by, or are looking for with, renormalization, so I can't answer more.

3. It depends on what you mean by computing Mandelbrot. We are always making approximations for visualisation by humans; that's what we're talking about here. If you mean we will never discover more digits in pi than there are atoms in the universe, then yes, I agree, but that's a different problem.


Pi doesn't contain a lot of information since it can be computed with a reasonably small program. For numbers with high information content you want other examples like Chaitin's constant.
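To make "reasonably small" concrete, here's a sketch of Gibbons' unbounded spigot algorithm in JS: about a dozen lines that stream the decimal digits of pi forever (BigInt `/` truncates, which acts as floor here since everything stays positive):

    // Gibbons' unbounded spigot: yields the decimal digits of pi, one at a time.
    function* piDigits() {
      let q = 1n, r = 0n, t = 1n, k = 1n, n = 3n, l = 3n;
      while (true) {
        if (4n*q + r - t < n*t) {
          yield n; // this digit is now settled
          [q, r, n] = [10n*q, 10n*(r - n*t), (10n*(3n*q + r))/t - 10n*n];
        } else {
          // not enough precision yet; crank the state forward
          [q, r, t, k, n, l] =
            [q*k, (2n*q + r)*l, t*l, k + 1n, (q*(7n*k + 2n) + r*l)/(t*l), l + 2n];
        }
      }
    }
    const gen = piDigits();
    console.log([...Array(12)].map(() => gen.next().value).join('')); // 314159265358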


> Pi doesn't contain a lot of information since it can be computed with a reasonably small program.

It can be described with a small program. But it contains more information than that. You can only compute finite approximations, but the quantity of information in pi is infinite.

The computation is fooling you because the digits of pi are not all equally significant. This is irrelevant to the information theory.


No, it does not contain more information than the smallest representation. This is fundamental, and follows from many arguments, e.g., Shannon information, compression, Chaitin's work, Kolmogorov complexity, entropy, and more.

The phrase “infinite number of 0’s” does not contain infinite information. It contains at most what it took to describe it.


Descriptions are not all equally informative. "Infinite number of 0s" will let you instantly know the value of any part of the string that you might want to know.

The smallest representation of Chaitin's constant is "Ω". This matches the smallest representation of pi.


"Representation" has a formal definition in information theory that matches a small program that computes the number but does not match "pi" or "omega".


No, it doesn't. That's just the error of achieving extreme compression by not counting the information you included in the decompressor. You can think about an algorithm in the abstract, but this is not possible for a program.


You seem wholly confused about the concept of information. Have you had a course on information theory? If not, you should not argue against those who’ve learned it much better. Cover’s book “Elements of information theory” is a common text that would clear up all your confusion.

The “information” in a sequence of symbols is a measure of the “surprise” on obtaining the next symbol, and this is given a very precise mathematical definition, satisfying a few important properties. The resulting formula for many cases looks like the formula derived for entropy in statistical mechanics, so is often called symbol entropy (and leads down a lot of deep connections between information and reality, the whole “It from Bit” stuff…).

For a sequence to have infinite information, it must provide nonzero “surprise” for infinitely many symbols. Pi does not do this, since it has a finite specification. After the specification is given, there is zero more surprise. For a sequence to have infinite information, it cannot have a finite specification. End of story.

The specification has the information, since during the specification one could change symbols (getting a different generated sequence). But once the specification is finished, that is it. No more information exists.

Information content also does not care about computational efficiency, otherwise the information in a sequence would vary as technology changes, which would be a poor definition. You keep confusing these different topics.

Now, if you’ve never studied this topic properly, stop arguing about things you don’t understand with those who have. It’s foolish. If you’ve studied information theory in depth, then you’d not keep doubling down on this claim. We’ve given you enough places to learn the relevant topics.


Actually it does, you can look it up. It’s naturally a bit more involved than what I use in a casual HN comment.


> could not calculate an arbitrarily deep zoom that required 10^81 bits of precision. Right?

I’m here to nitpick.

Number of bits is not strictly 1:1 to number of particles. I would propose to use distances between particles to encode information.


... and how would you decode that information? Heisenberg sends his regards.

EDIT: ... and of course the point isn't that it's 1:1 wrt. bits and atoms, but I think the point was that there is obviously some maximum information density -- too much information in "one place" leads to a black hole.


Fun fact: the maximum amount of information you can store in a place is the entropy of a black hole, and it's proportional to the surface area, not the volume.
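For reference, that's the Bekenstein-Hawking entropy, with A the horizon area (about one nat of information per four Planck areas):

    S = k_B * c^3 * A / (4 * G * ħ)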


Yeah, I forgot to mention that in my edit. The area relation throws up so many weird things about what information and space even is, etc.


10^81 zoom is easy. You run out of bits at 2^(10^81) or 2^100000000000000000000000000000000000000000000000000000000000000000000000000000000.
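(A zoom factor of 10^81 needs only ceil(log2(10^81)) = ceil(81 * log2(10)) ≈ 270 bits of precision for the coordinates, which is why it's easy.)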


We can create enough compute and SRAM memory for a few hundred million dollars. If we apply science, there are virtually no limits within a few years.

See my other post in this discussion.


In the case of the Mandelbrot set, there is a self-similar renormalization process, so you can obtain such a formula. For the "fixed points" of the renormalization process, the formula is super simple; for other points, you might need more computations, but it's nevertheless an efficient method. There is a paper by Bartholdi where he explains this in terms of automata.


As for practical limits, if you do the arithmetic naively, then you'll generally need O(n) memory to capture a region of size 10^-n (or 2^-n, or any other base). It seems to be the exception rather than the rule when it's possible to use less than O(n) memory.

For instance, there's no known practical way to compute the 10^100th bit of sqrt(2), despite how simple the number is. (Or at least, a thorough search yielded nothing better than Newton's method and its variations, which must compute all the bits. It's even worse than π with its BBP formula.)

Of course, there may be tricks with self-similarity that can speed up the computation, but I'd be very surprised if you could get past the O(n) memory requirement just to represent the coordinates.
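Back-of-the-envelope, assuming a plain fixed-point representation of the center coordinate (illustrative only):

    // One bit per halving of the viewport: a region of width 2^-n needs
    // about n bits just to address its center. For a decimal zoom of 10^d:
    const bitsForZoom = (d) => Math.ceil(d * Math.log2(10));
    console.log(bitsForZoom(100)); // 333 bits for a 10^100x zoom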


This is right up my alley :-) I'll message you


"goal" in this case meaning not "good for the economy, most businesses, and everyday people" - I think the implicit goal being "give asymmetrical power to larger and more entrenched organizations, at the detriment of literally everyone else, to help maintain and consolidate power." I've gotta admit DMCA has been extremely beneficial as a regulatory capture method.


Just because this article isn't well formed or sourced doesn't make its claim incorrect.

I program daily and I use AI for it daily.

For short and simple programs AI can do it 100x faster, but it is fundamentally limited by its context size. As a program grows in complexity, AI is not currently able to create or modify the codebase with success (around 2,000 lines is where I found the barrier). I suspect it's due to exponential complexity associated with input size.

Show me an AI that can do this for a 10,000 lines complex program and I'll eat my own shorts


Doesn't even take that much. Today I did some basic Terraform with the help of GenAI - it can certainly print out the fundamentals (VPC, subnets) faster than I can type them myself, but the wheels came off quickly. It hallucinated 2 or 3 things (some non-existent provider features, invalid TF syntax, etc).

When you take into account prompt writing time and the effort of fixing its mistakes, I would have been better off doing the whole thing by hand. To make matters worse, I find that my mental model of what was created is nowhere near as strong as it would have been if I did things myself - meaning that when I go back to the code tomorrow, I might be better off just starting from scratch.


Here's a thought experiment. Think back on how that statement would have sounded to past-you, 3 years ago. You would probably have dismissed it as bullshit, right? We've come a long way since then, both in terms of better, faster, and cheaper models, and in how they're being intertwined with developer tooling.

Now imagine 3 years from now.


You could have said the same for crypto/blockchain 3-4 years ago (or whenever it was at peak hype).

Eventually we realized what is and isn't possible or practical to use blockchain for. It didn't really live up to all the original hype years ago, but it's still a good technology to have around.

It's possible LLMs could follow a similar pattern, but who knows.


What good thing has blockchain ever done that isn't facilitating crime or tax evasion?


It created a speculative asset that some people are passionate about.

However, if you saw the homepage of HN during blockchain peak hype, being a speculative asset / digital currency was seen almost as a side effect of the underlying technology, but it turns out that’s pretty much all it turned out to be useful for.


As you inadvertently pointed out, AI improvements are not linear. They depend on new discoveries more than they do on iteration. We could either be out of jobs or lamenting the stagnation of AI (again).


After an innovation phase there is an implementation phase. Depending on the usefulness of the innovation, the integration with existing systems takes time, measured in years, even tens of years. Think back to the '80s and '90s, when it took years to integrate PCs into offices and workspaces.

From your comment, it sounds like you think that the implementation phase of LLMs is already over? And if so, how do you come to this conclusion?


It's not as if we have no idea how to make use of AI in programming. We've been working on AI in one form or another since the '70s, and have integrated it with our programming workflow for almost as long (more recently in the form of autocomplete using natural language processing and machine learning models). It's already completely integrated into our IDEs, often with options to collate the output of multiple LLMs.

What further implementations of integrating AI and programming workflows have LLMs shown to be missing?


You can imagine all sorts of things, and then something else might happen. You can’t rely on “proof by imagination” or “proof by lack of imagination.”

We shouldn’t be highly confident in any claims about where AI will be in three years, because it depends on how successful the research is. Figuring out how to apply the technology to create successful products takes time, too.


Same thing that can be said about autonomous cars in 2014?

Not everything will grow exponentially forever


GPT-4 has been out for 1.5 years and I haven't seen much improvement in code quality across LLMs. It's still one of the best.


Or you are extrapolating from the exponential growth phase in a sigmoid curve. Hard to say.


Ten years ago when Siri/Google/Alexa were launching, I really wouldn't have expected that 2024 voice assistants would be mere egg timers, and frustrating ones at that - requiring considered phrasing and regular repeating/cancelling/yelling to trick it into doing what you want.

A 10x near future isn't inconceivable, but neither is one where we look back and laugh at how hyped we got at that early-20s version of language models.


It is a great point.

It also might be that the language everyone uses 20 years from now, the one that gives a 50x over today, is just being worked on right now, or won't come along for another 5 years.

In the same way, people who thought that humans could never fly were not completely wrong before the airplane. After the airplane, though, we are really talking about two different versions of a "human that can fly".


In my very uninformed opinion, all we need is more clever indexing, prompting, and agents that can iteratively load parts of the codebase into their context and make modifications.

Real engineers aren’t expected to hold 10,000 lines of exact code in their head, they know the overall structure and general patterns used throughout the codebase, then look up the parts they need to make a modification.
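Something like this loop, say (repoIndex and llm are hypothetical interfaces, purely illustrative):

    // Illustrative agent loop: retrieve only the relevant slices, edit, repeat.
    async function agentEdit(task, repoIndex, llm) {
      // Start with a search over the indexed codebase (assumed API).
      let context = await repoIndex.search(task, { topK: 5 });
      for (let step = 0; step < 10; step++) {
        const reply = await llm.complete({ task, context });
        if (reply.needsFiles) {
          // The model asked to see more of the codebase; load it and retry.
          context = context.concat(await repoIndex.fetch(reply.needsFiles));
        } else {
          return reply.patch; // a proposed diff to apply
        }
      }
      throw new Error('gave up after 10 retrieval rounds');
    }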


> I suspect it's due to exponential complexity associated with input size.

I am curious how you get to exponential complexity. The time complexity of a normal transformer is quadratic.

Or do you mean that the complexity of dealing with a codebase grows exponentially with the length?


Generally speaking, complexity grows exponentially with input size.

A program of length 100 lines has the potential to be 1000x as complex as a program with length 10 lines.

Consider the amount of information that could be stored in 4 bits vs 8 bits; 2^4 vs 2^8.
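(That's 16 distinct states vs. 256; doubling the bits squares the number of states, since 2^8 = (2^4)^2.)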

As the potential for complexity grows, current AI's ability to effectively write and modify code at that scale falls away.


I would recommend giving o1-preview a try in coding tasks like this one.

It is one level above Claude 3.5 Sonnet, which currently is the most popular tool among my peers.


This is incorrect. Links to the EULA are not enough; the acknowledgment must be separate and distinct from any other terms. Words like "BUY" are also expressly forbidden.

Quoted from the link in parent comment ( https://legiscan.com/CA/text/AB2426/id/2966792 )

- (1) It shall be unlawful for a person to advertise or offer for sale a digital good with the terms “buy,” “purchase,” or any other term which a reasonable person would understand to confer an unrestricted ownership interest

(B) The affirmative acknowledgment from the purchaser pursuant to subparagraph (A) shall be distinct and separate from any other terms and conditions of the transaction that the purchaser acknowledges or agrees to.


> Words like "BUY" are also expressly forbidden.

I strongly disagree.

(b)(1) says that "buy" is not permitted for these goods... EXCEPT

(b)(2)(A) says that it IS permitted, if you follow the rules in subsections i through iii.

> (2) (A) Notwithstanding paragraph (1), a person may advertise or offer for sale a digital good with the terms “buy,” “purchase,” or any other term which a reasonable person would understand to confer an unrestricted ownership interest in the digital good, or alongside an option for a time-limited rental, if the seller receives at the time of each transaction an affirmative acknowledgment from the purchaser of all of the following:

My read on that is that either (b)(1) controls and you cannot use the words "buy" and friends, OR you do the things in (b)(2) and you CAN use "buy" & etc.

My read on subsection (ii) when combined with (i) is that simply "providing" the EULA for a digital software download and making the customer tick a box saying that they've "received" the EULA would be sufficient. If it's not (and it might not be), then having them scroll through the whole EULA to "prove" that they read it would clearly be sufficient, as it's common practice.

> (B) The affirmative acknowledgment from the purchaser pursuant to subparagraph (A) shall be distinct and separate from any other terms and conditions of the transaction that the purchaser acknowledges or agrees to.

Yes, but I think that this just means that this acknowledgement is a thing that's separate from the EULA, and separate from extended warranties, and such. The language that says that the customer must acknowledge that they received the license for the thing they're "purchasing" indicates that they must be -at minimum- given a chance to read the EULA... and I'm pretty sure common practice is to either provide a link to the EULA, or force you to scroll through it.


That's interesting. I don't for a second think this will actually curtail the harmful business practices, but what do you reckon they'll write on their buttons? Maybe just dance around any meaningful verbiage with a button that just has a dollar sign or shopping cart on it? Just "Proceed" or "Confirm"?


“Get” is already used on iOS for this purpose.


“Get” replaced “free”, because it was misleading to call apps free when most have in-app purchases.


“Get” sounds good to me. I’ll know not to get any games that have “Get” button. Hopefully this law spreads to Steam across the board so that people outside of California can also benefit from it.


"add to cart" and "checkout"


I’d argue that a reasonable person would understand these terms to confer an unrestricted ownership interest.

I’m putting this good into a metaphorical container and taking it to a metaphorical till. This implies a sort of tangibility, a property of physical goods that I’d walk out of the metaphorical store to own.


That’s a good point. The real world experience they’re analogizing is me putting a bottle of ketchup in a shopping cart at a grocery store and checking out at the cashier. Afterward, I own that bottle of ketchup, not a license to ketchup, but that instance of it. “Shopping cart” and “checkout” imply “buying”, and I can’t think of a counterexample.


That’s so naive. They’ll just replace the terminology industry-wide and carry on with a wave of irony about it.

Feels like regulators were never in kindergarten, or at least school; it could be a refreshing experience for them, because it all works like that there.


Replacing the terminology is the first step to this methinks. You'll always be able to buy a bagel, but not a video game. It's still shitty, but it's not deceptively shitty.


This struck me from two angles: First, it's a beautiful, well-produced, clear, and concise representation of the federal budget, including key areas like the deficit and spending breakdowns. However, it also struck me as "useless", in the sense of "I found it difficult to take away any useful, new, or actionable information". I'm not sure what Ballmer's intended audience or result was, but it must not've been me ...

Personally I would have benefited from less "moving flashy graphs" / "Steve explaining each node in moderate detail", and more "Here's one clean boring way to look at the data", "here's what X means for us / here's what people are considering because of X."

That said I respect what's being done and hope he continues to produce more informational content!


It gives people a baseline for where to start in any discussion regarding the Federal budget. Whether or not that's immediately actionable, it is useful to have the vocabulary to discuss the budget if you're going to talk about it at all.

I’m occasionally having to correct people who believe, and insist, that our largest-ticket spending item is the military. It’s certainly the largest part of our discretionary budget, but if you’re going to talk about the Federal budget, it is unhelpful to disregard the non-discretionary budget, which goes mostly to Social Security, Medicare, and Medicaid. Giving people an education on discretionary vs. non-discretionary spending drags out conversations; then it’s a coin toss whether that person retains any of it the next time you talk politics, or whether it got muddled up in their mind by poor news reporting in between, the news they follow to “stay informed”.


It’s because he presented what could have been a single chart and did so by basically just reading the data. There was no analysis other than the tidbits about debt-to-GDP being at a high point. That said, great production, and I hope it reaches people less familiar with these budgetary line items. If you’re aware of what they are, like I am, it landed pretty flat, but if you’re not familiar with them I could see it being quite informative. It’s good to build a baseline of financial literacy, which I think we have a general deficit of too.

