Hacker News | astrange's comments

Well, low crime, not high trust. Like, no one trusts each other to throw out their trash the right way.

Were those rides in hybrids? I've noticed the way they brake is a lot more likely to get me carsick if I'm sitting in the back, especially Priuses.

Cars of all kinds, really. But it seems to be more common in either crappy old gas cars or Teslas. I know Teslas have weird brake settings, but every ride where I've experienced this, it has definitely come from aggressive and reckless driving.

The government isn't giving 500 billion to anyone. They just let Trump announce a private deal he has no involvement in.

Correct, as I stated the government is just giving their 'blessing'.

There are no zero-sum games in a growing industry like this.

As long as the semiconductor fabs are running at capacity, yes, it's one big zero-sum game. If you win one chip, I lose one chip, and vice versa.

This situation is temporary, of course. China previously had a large incentive to get their own leading-edge nodes into production, and now they have a Manhattan Project-size incentive.


They aren't running at full capacity on AI chips though - TSMC's main customer is Apple's iPhones, as far as I know. So you could take away production time from them, though it's still zero-summy.

But TSMC is building new leading edge fabs right now.


You forgot "are a child".

ChatGPT can't provide answers about current events because it has to be updated first.

That is impossible. Their supply chain is several layers deep in sole suppliers.

There isn't a "the pipeline". You'd have to work at DeepSeek for your low-level work to affect it.

Compression is inherently unpredictable (if you can predict it, it's not compressed enough), which is, vaguely speaking, how it can help x264.
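A quick way to see this concretely (a toy sketch, not anything from x264 itself): a well-compressed stream should look statistically close to random noise, so its per-byte Shannon entropy approaches the 8-bit ceiling even when the input's entropy was low. The alphabet and sizes below are arbitrary illustrations.

```python
import math
import random
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy of the byte-value distribution, in bits per byte.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

random.seed(0)
# A stream over a 4-symbol alphabet carries ~2 bits of information per byte.
raw = bytes(random.choice(b"ACGT") for _ in range(200_000))
packed = zlib.compress(raw, 9)  # DEFLATE output should look near-uniform
```

The raw stream measures around 2 bits/byte; the compressed stream is much smaller but its bytes are nearly uniformly distributed, i.e. nearly unpredictable.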

I agree that compression is all about increasing entropy per bit, which makes the output of a good compressor highly unpredictable.

But that doesn’t mean the process of compression involves significant amounts of unpredictable branching, if for no other reason than that it would be extremely slow and inefficient: many branching operations mean you’re either processing input pixel by pixel, or your SIMD pipeline is full of dead zones that you can’t actually re-schedule around, because doing so would desync your processing waves.

Video compression is mostly very clever signal processing built on top of primitives like convolutions. You’re taking large blocks of data, and performing uniform mathematical operations over all the data to perform what is effectively statistical analysis of that data. That analysis can then be used to drive a predictor, then you “just” need to XOR the predictor output with the actual data, and record the result (using some kind of variable length encoding scheme that lets you remove most of the unneeded bytes).
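The predict-then-record-the-residual idea in that paragraph can be sketched in a few lines (a toy illustration, not any real codec's pipeline; the comment says XOR, while real codecs typically subtract, but either way a good predictor leaves a residual that codes in far fewer bits):

```python
import math
import zlib

def delta_residual(pixels: list[int]) -> list[int]:
    # Toy "predict from the left neighbor" stage: record only the
    # difference from the prediction (mod 256 to stay in byte range).
    out, prev = [], 0
    for p in pixels:
        out.append((p - prev) % 256)
        prev = p
    return out

# A smooth, predictable "scanline": raw values span ~200 byte values,
# but the residuals are tiny, so they entropy-code far better.
signal = [int(128 + 100 * math.sin(i / 10)) for i in range(50_000)]
plain = zlib.compress(bytes(signal), 9)
coded = zlib.compress(bytes(delta_residual(signal)), 9)
```

Here zlib stands in for the variable-length encoding scheme; the residual stream compresses noticeably smaller than the raw signal.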

Just as computing the median of a large dataset can be done with no branches, regardless of how random or how large the input is, video compression can largely be done the same way, and indeed has to be to perform well. There’s no other way to cram frames of up to 4K * 3 bytes per pixel (~11MB) through a commercial CPU at a reasonable compression speed. You must build your compressor on top of SIMD primitives, which inherently makes branching extremely expensive (many orders of magnitude more expensive than branching in SISD code).


> You’re taking large blocks of data, and performing uniform mathematical operations over all the data to perform what is effectively statistical analysis of that data.

It doesn't behave this way. If you're thinking of the DCT, the one it uses is mostly 4x4, which is not very large. As for motion analysis, there are so many possible candidates (since it's on quarter-pixels) that it can't try all of them, and it very quickly starts filtering them out.
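The candidate-filtering idea can be sketched as an early-terminating SAD search (purely illustrative names and data, not x264's actual search): promising predicted vectors are tried first, and the running best cost lets later candidates be abandoned before every pixel is summed.

```python
def motion_search(current, candidate_mvs, ref_blocks):
    # Try candidate motion vectors in order; prune any candidate whose
    # partial SAD already exceeds the best cost seen so far.
    best_cost, best_mv = float("inf"), None
    for mv in candidate_mvs:
        cost = 0
        for cur, ref in zip(current, ref_blocks[mv]):
            cost += abs(cur - ref)
            if cost >= best_cost:   # early termination: abandon this candidate
                break
        else:
            best_cost, best_mv = cost, mv
    return best_mv, best_cost

# Hypothetical 2x2 reference blocks, flattened to length-4 lists.
ref_blocks = {
    (0, 0): [10, 10, 10, 10],
    (1, 0): [10, 11, 10, 10],
    (0, 1): [90, 90, 90, 90],
}
current = [10, 11, 10, 10]
```

Note this is inherently branchy and data-dependent, which is exactly why real encoders keep the pruning decisions coarse and do the per-pixel SAD work in SIMD.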


> it uses that's mostly 4x4 which is not very large

That's 16 elements x 32 bits, i.e. a 512-bit vector, which is AVX-512. What other size would you suggest using, and (more importantly) what commercially available CPU architecture are you running it on?


Not inherently, it's just a linker default. You can run 32-bit processes through WINE.

Only in Rosetta. On arm64 these binaries will not be allowed to load.
