
I did speak to folks on the North Vietnamese side. I kinda found it really interesting.

They read these stories of escape as emblematic of the southern government’s cowardice rather than heroism. In some ways these are stories of active military deserting their posts.

It was no surprise to these North Vietnamese patriots that they triumphed.


Considering the hundreds of thousands (millions?) that were imprisoned after the war for any suspected contact with the Americans or South Vietnamese government, and the horrific conditions they were held under (forced labor, malnutrition, disease, death) they maybe shouldn't act so surprised?

Even today in Vietnam, these families are still "marked" by the regime and not allowed to serve in government roles for three generations. It's a blood libel.


I like to think if I saw an opponent escaping a lost war with his family I would understand that someone who served is not a coward.

Not a good idea since they might be regrouping to fight another day.

War is awful.


Given the acts of brutality committed by the VC against those who didn't willingly join their cause, I wouldn't have stuck around with my family, either. One example from, as I recall, early in the division was burning alive the mayor of a village that refused to adopt communism. These things never get talked about and the only reason I've even heard of this was from listening to an interview on Dan Carlin's Hardcore History Addendum podcast.

> I wouldn't have stuck around with my family, either

Which is a major plot point of stories like The Handmaid's Tale: you get caught in a civil war, and the side that controls the territory you live in holds views opposed to yours. You didn't start the war, but now you're an enemy just for living in your home. Do you give up your beliefs, or do you try to get out of there?

All war is hell, but civil wars especially so.


It's true-- just like in Marvel the comic book movie, where the bad guy causes upwards of 50% casualties on the entire planet!

The US literally industrialized the process of burning alive people in Vietnam who didn't support the colonial government. Would love a source on the mayor claim.

You're a perfect example of why reporting about Vietnam was horribly broken. We all know about the atrocities committed by the Americans, but nobody understands how horrendous the VC were, and the direct threat they presented to those conquered.

I've already given you the source for the mayor story, but you're not actually interested in it, are you? You'll stay blissfully ignorant with false plausible deniability.


Do you have a source that's written, i.e. not a podcast? Googling the mayor-burning claim just brings me to your comment.

Even just a transcript of the episode you're discussing would help. Generally I find that if someone responds with ire rather than a link, the event usually did not happen as they remembered it.


Wikipedia has an article, with links to sources. I think the main source is Douglas Pike's The Vietcong Strategy of Terror [0]. Another Vietnam expert, Bernard Fall, documents some of it in Vietnam Witness.

https://en.wikipedia.org/wiki/Viet_Cong_and_People%27s_Army_...

[0] https://web.archive.org/web/20230405095258/https://vva.vietn...


There is no mention of this 'mayor burned alive' story in either of these sources. Nor do I really trust this book, written during the Vietnam War while US propaganda efforts were mounting.

I disagree with whoever is downvoting you - what you describe is exactly how this would have been presented to northern soldiers and civilians, whether or not it's true.

The real problem is that MDPI occasionally hosts more legitimate work.

What if they just charged Apple users 30% more?


In the past, Apple explicitly forbade this except in certain negotiated cases like Spotify and Netflix if I recall correctly. Has that changed?


Explicitly forbidden by Apple ToS.

You mention any of that and your app will be rejected.


No it’s not. Spotify did just that before they removed IAP.


You’re repeating the type and hoping it doesn’t change when you really just wanted the value.


Can we direct link to the actual chat site?



Mermaid doesn't scale beyond simple plots. Try https://perfetto.dev/, which can ingest basic CSVs.


    “exotic” signal representation without much practical utility despite the fact that they have been around the signal and image processing community for more than 30 years now. 
Maybe they aren't that good? Maxwell's equations got a lot better when they dumped them; the same goes for the few uses in video game physics/camera tracing.


Actually, Maxwell's equations become unintuitive and very difficult to model without quaternions. Unlike other waveforms, for example sound, electromagnetic (EM) waves have polarization components that can be intuitively, properly, and comprehensively modeled using quaternions. We are currently using quaternions to model robust and reliable wireless PHY modulation based on polarization that works in very limited line-of-sight (LoS) environments; the performance is much better than conventional wireless PHY.


Are you referring to a particular EM modeling software package or codebase?


I'm referring to the EM model of propagation including its polarization in 3D representing the real world scenario.


I didn't think quaternions were even exotic, and I have used them extensively to represent rotations. I'm not familiar with video games, but it seems a pretty natural fit to the problem. What is the alternative that is so much better?



In 3D space, rotors are quaternions with different labels. If rotors are better, then the only conclusion we can draw is that people don’t like the name “quaternion”.


They do sound weird.

I propose we call them fanciful numbers.


I mean, if you're in high school and poking under the hood of Roblox or whatever to write your first mod, rotor is a better name than quaternion for the thing that manages rotations.


I would go for something like “orientation”. Or “rotation”. Ordinary English words that represent what are, more or less, ordinary, familiar concepts.

The fact that it’s a “quaternion” or a “rotor” is kind of an implementation detail.

Of these four terms, Quaternion is the most precisely correct. The reason is that rotors are, by definition, constrained. Quaternions can take any value. Due to floating-point precision problems, your rotor will not always be exactly a rotor, but may be some multivector which is not a rotor.

This is kind of like representing a point on a sphere using (x,y,z) coordinates. You can call it PointOnSphere or Vector. I would rather call it Vector, because PointOnSphere implies a constraint which won’t be exactly satisfied, and I want to be reminded of that fact. The type name (Vector here, and Quaternion above) represents the object’s structure and the constraints which are actually enforced by the underlying representation.
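To make that concrete, here is a minimal C++ sketch (hypothetical Quat type, not from any particular engine): nothing in the representation enforces unit length, so after enough floating-point multiplications your "rotor" is no longer exactly a rotor until you renormalize.

    #include <cmath>
    #include <cstdio>

    // Minimal quaternion type; nothing here enforces |q| == 1.
    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions.
    Quat mul(const Quat& a, const Quat& b) {
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    float norm(const Quat& q) {
        return std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    }

    int main() {
        // A (nominally) unit quaternion for a tiny rotation about z.
        float h = 1e-3f;
        Quat step = { std::cos(h), 0.0f, 0.0f, std::sin(h) };
        Quat q    = { 1.0f, 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < 1000000; ++i) q = mul(q, step);
        std::printf("|q| after 1e6 multiplies: %.7f\n", norm(q)); // not exactly 1
        // In practice you renormalize periodically: divide each component by norm(q).
    }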


From your link: "We can notice that 3D Rotors look a lot like Quaternions". "In fact the code/math is basically the same!"


Yeah, why remove quaternions if the math is the same as with rotors? They also have value of their own, apart from applications to geometry. The article is good, though.

BTW, it's easy to understand the relations between i,j,k if you're familiar with Pauli matrices.
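For anyone who wants that spelled out, one common convention (signs differ between authors) identifies the quaternion units with the Pauli matrices like this:

    i \mapsto -i\sigma_1, \qquad j \mapsto -i\sigma_2, \qquad k \mapsto -i\sigma_3

    (-i\sigma_a)^2 = -\sigma_a^2 = -I, \qquad (-i\sigma_1)(-i\sigma_2) = -\sigma_1\sigma_2 = -i\sigma_3 = k

so i^2 = j^2 = k^2 = ijk = -1 drops straight out of the Pauli algebra \sigma_a \sigma_b = \delta_{ab} I + i \epsilon_{abc} \sigma_c.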


The article itself says they are the same in 3D space. Not very convincing that they are actually better. In fact a rotor is a unit quaternion...


Thank you for this link.

I just dabbled in webgl/threejs and tried creating a small movement engine and was confused from beginning to end. Quaternions as a black box is pretty accurate.


To add to what the other guy said, rotors (and by extension Clifford algebra) are better. A fatal issue with quaternions is that while they handle rotations perfectly, things like the norm or the cross product of two vectors are messy.

This is unsurprising when several of your basis elements square to -1. That's why, historically, Gibbs-Heaviside vector algebra (dot and cross product) became dominant over quaternions.

Clifford algebra is better than both: you can seamlessly do dot and cross (wedge in CA) products, and you can also embed quaternions within the system. I've heard that it can also accommodate some of the nonmetrical aspects that make differential forms appealing for manifold integration, but that's currently outside my range of knowledge.
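For anyone curious what "dot and wedge live in the same product" looks like numerically, here is a toy C++ sketch (hypothetical types, not a real GA library): the geometric product of two 3D vectors splits into a scalar part, which is the dot product, and a bivector part whose three components are the same numbers the cross product produces.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // The only grades a product of two vectors can produce:
    // a scalar plus a bivector (components on e2^e3, e3^e1, e1^e2).
    struct ScalarPlusBivector { double s, b23, b31, b12; };

    // Geometric product of two 3D vectors: ab = a.b + a^b.
    ScalarPlusBivector gp(const Vec3& a, const Vec3& b) {
        return {
            a.x*b.x + a.y*b.y + a.z*b.z,  // scalar part: the dot product
            a.y*b.z - a.z*b.y,            // bivector part: numerically the same
            a.z*b.x - a.x*b.z,            //   components as the cross product,
            a.x*b.y - a.y*b.x             //   but attached to oriented planes
        };
    }

    int main() {
        Vec3 a{1, 2, 3}, b{4, 5, 6};
        ScalarPlusBivector ab = gp(a, b);
        std::printf("dot = %g, wedge = (%g, %g, %g)\n", ab.s, ab.b23, ab.b31, ab.b12);
        // For unit a and b, this (scalar, bivector) pair has the same four-component
        // layout as a quaternion, which is the sense in which 3D rotors "are"
        // quaternions up to sign conventions.
    }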


Care to elaborate on why Maxwell's equations got better, and on the use cases in the gaming industry that improved thanks to dropping them?


Assuming this is not a tongue in cheek question, you can read about it more here:

https://hsm.stackexchange.com/questions/8173/did-maxwell-ori...

The answer also includes a link that shows many other representations including Einstein's "4d generally covariant tensor calculus".

The point is that the most common formulation today is not the one Maxwell made (with 12 equations). The idea is preserved, but it was Gibbs and Heaviside that formulated the current 4 equation representation.
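For reference, the four-equation Heaviside/Gibbs vector-calculus form that is standard today (microscopic version, SI units):

    \nabla \cdot \mathbf{E} = \rho / \varepsilon_0, \qquad \nabla \cdot \mathbf{B} = 0

    \nabla \times \mathbf{E} = -\,\partial \mathbf{B} / \partial t, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t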


> Maxwell's equations got a lot better when they dumped them,

Tell me you don't understand special relativity.


Tell me you don't understand differential forms.


Geometric algebra, anyone?


My PhD suggests otherwise lol


Struggling with the use case.

It seems like this is for when you have the source or the libs but choose to mix x86 and ARM?

It would seem that if you have the source, etc., you should just bite the bullet and port everything.


Two use-cases jump to mind:

* Allows incremental porting of large codebases to ARM. (It's not always feasible to port everything at once-- I have a few projects with lots of hand-optimized SSE code, for example.)

* Allows usage of third-party x64 DLLs in ARM apps without recompilation. (Source isn't always available or might be too much of a headache to port on your own.)


3. Improves x64 emulation performance for everybody. Windows 11 on ARM ships system DLLs compiled as Arm64EC, which lets x64 binaries run native ARM code at least within the system libraries.


It's not worth using ARM64EC just for incremental porting -- it's an unusual mode with even less build/project support than Windows ARM64, and there are EC-specific issues like missing x64 intrinsic emulations and slower indirect calls. I wouldn't recommend it except for the second case, with external x64 DLLs.


At that point why trust the emulator over the port? Either you have sufficient tests for your workload or you don’t, anything else is voodoo/tarot/tea leaves/SWAG.


"Why trust the emulator?" sounds a lot like asking "why trust the compiler?". It's going to be much more widely-used and broadly-tested than your own code, and probably more thoroughly optimized.


We might be lucky, and the emulator guys might have done enough testing.


> Allows incremental porting of large codebases to ARM. (It's not always feasible to port everything at once-- I have a few projects with lots of hand-optimized SSE code, for example.)

Wouldn't it make more sense to have a translator that translates the assembly, instead of an emulator that runs the machine code?


Yeah, but you need to port the SIMD before shipping anyway?

So if you're doing incremental stuff, you might as well stub out the calls with "not implemented" and start filling them in.


The SIMD part will be emulated as normal, as far as I understand. So you can ship a first version with all-emulated code, and then incrementally port hotspots to native code, while letting the emulator handle the non-critical parts.

At least in theory, we'll see how it actually pans out in practice.
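As a concrete sketch of that workflow (this assumes MSVC's documented _M_ARM64EC/_M_ARM64 predefined macros; the function itself is made up): guard the hand-optimized SSE hotspot so the x64 build keeps using it, give the ARM builds a plain scalar fallback first, and swap in a NEON version later once profiling says it matters.

    // sum.cpp -- toy hotspot illustrating incremental porting.
    #include <cstddef>

    #if defined(_M_ARM64EC) || defined(_M_ARM64)
    // Step 1 on ARM: correct-but-slow scalar fallback.
    // Step 2 (later): replace with a NEON implementation if it shows up in profiles.
    float sum(const float* p, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += p[i];
        return s;
    }
    #else
    // Existing hand-optimized SSE path, still used by the plain x64 build
    // (and by any unported x64 modules running under emulation).
    #include <xmmintrin.h>
    float sum(const float* p, std::size_t n) {
        __m128 acc = _mm_setzero_ps();
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) acc = _mm_add_ps(acc, _mm_loadu_ps(p + i));
        float lanes[4]; _mm_storeu_ps(lanes, acc);
        float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; ++i) s += p[i];
        return s;
    }
    #endif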


I feel like binary translation is a better approach. It’s a temporary workaround that allows users to run non-native programs while they are ported properly. ARM64EC seems like it will incentivize “eh, that’s good enough” partial porting efforts that never result in a full port, while making the whole system more complicated, with a larger attack surface (binary translation also makes the system more complicated, but it seems more isolated and less integrated with the rest of the OS).


My understanding is that ARM64EC only makes sense in terms of binary translation. That is, the x64 bits get translated and the ARM bits don’t.


The use-case is huge apps that have a native plugin ecosystem, think Photoshop and friends. Regular apps will typically just compile separate x64 and ARM64 versions.


Yes, bite the bullet and port. Of course it makes no sense.

These sorts of things are only conceived in conversations between two huge corporations.

Like Microsoft needs game developers to build for ARM. There’s no market there. So their “people” author GPT-like content at each other, with a ratio of like 10 middleman hours per 1 engineer hour, to agree to something that narratively fulfills a desire to build games for ARM. I can speculate endlessly about how a conversation between MS and EA led to this exact standard, but it’s meaningless; I mean, both MS and EA do a ton of things that make no sense, and I can’t come up with nonsense answers.

Anyway, so this thing gets published many, many months after it got on some MS PM’s boss’s partner’s radar. Like the fucking devices are out! It’s too late for any of this to matter.

You can’t play Overwatch on a Snapdragon whatever (https://www.pcgamer.com/hardware/gaming-laptops/emulation-pr... ). End of story. Who cares what the ABI details are.

Microsoft OWNS Blizzard and couldn’t figure this out. Whom is this for?


> Anyway, so this thing gets published many, many months after it got on some MS PM’s boss’s partner’s radar.

Arm64EC is not new. It was released back in 2021.


8 year old unicorn++ with a public demo sounds credible?


SambaNova is on gen-4 silicon and still lagging; somebody over there is doing something wrong.


How are they lagging? They are running faster than anyone else at full precision and with many many fewer chips than Groq. Groq is not real.


Well, I've been using the Groq public API, and it's running at approximately the rates claimed.

Economics and costs are hard to predict. For example, Groq is not using HBM chips. So probably the cards are a lot easier to source.

It's not clear what the capacity of these systems is in terms of total users, or even tokens per second. Then you factor in cost. Then you realize all vendors will match a competitor's pricing. Then you realize Groq doesn't sell chips.

¯\_(ツ)_/¯

The only thing you have is the public API to benchmark against: https://artificialanalysis.ai/


- Groq has exactly 0 dollars in revenue

- Groq requires 576 chips to run a single model

- Groq can do low latency inference, but can't handle batches, and can't run a diversity of different models on each deployment

- Groq quantizes the models, significantly affecting quality to get more speed (and don't communicate this to end users, which is very deceptive)

- Groq can only run inference, cannot train on their systems

- SambaNova has real revenue from big customers

- SambaNova can run any model on a single node at the speed Groq requires

- SambaNova can do low latency inference just like Groq, but can also run large batches and host hundreds of models on a single deployment

- SambaNova does not quantize models unless explicitly stated

- SambaNova can run training at perf competitive with Nvidia, as well as fastest inference in the world at full precision

It really isn't a competition. Groq has done a great job of garnering hype in recent months, but it is a house of cards.


I think SemiAnalysis commented that they have pipelines instead of batches [1].

So every clock cycle you're doing useful work rather than waiting to gather requests into batches. And that's why the arch will probably win for inference; for training you're basically competing with the software ecosystem and silicon density, i.e. NVIDIA can give TSMC more money to get more ALUs on the die.

I think other places have attempted dataflow (FPGAs etc.), but they all basically had buffers (due to non-determinism in the network stack and even RAM). SambaNova seems indistinguishable from an FPGA, give or take a few clock cycles. I think they blew their shot with a Series D ($600 million???) where they made more of the same old thing. Maybe Intel will buy them to augment Altera? Looks like chasing parity with existing strategies.

I buy the Groq hype because it's something different; certainly the public demo helped. HN is about the future.

[1] https://www.semianalysis.com/p/groq-inference-tokenomics-spe...

