Instruction set baselines should ideally be well-regulated open standards. They should also be good, and not moronic academic projects running out of 32-bit opcode space because of a religious dedication to silly extensions and to one uniform ISA, all to save pennies from microcontrollers up to high-performance CPUs.

RISC-V in principle is a great idea. Hopefully we'll someday get something of the caliber of a well-oiled machine backed by real experience and practical high-performance use, like Armv8 and Armv9, but a bit more open. As of right now, RISC-V not only isn't that on a technical level but is fighting some serious fragmentation.

https://www.theregister.com/2024/05/29/riscv_messsaging_stru...

And here’s David Chisnall on ISAs, which do matter:

https://queue.acm.org/detail.cfm?id=3639445


I don't see why "moronic" was needed there. Also, RISC-V has profiles that group multiple extensions together for specific use cases, for example RVA23[0], which requires 64-bit and the vector extension among many others. Operating systems like Android can (and most likely will) specify that they only support certain profiles.[1] Lastly, ARM is also fragmented between Armv8 and Armv7, which Android developers are still supporting. (A sketch of what extension detection looks like to software follows the footnotes.)

>RISC-V not only isn’t that on a technical level but is fighting some serious fragmentation.

Do you have any evidence to support this? Seems like RVA23 will be the first widely supported profile. All the "high performance CPUs" right now are just dev kits, so I don't see how there can be fragmentation in a market that does not yet even exist.

[0] https://github.com/riscv/riscv-profiles/blob/main/src/rva23-...

[1] https://opensource.googleblog.com/2023/10/android-and-risc-v... (note: the blog mentions RVA22, but this has most likely been switched to RVA23 before full RISC-V support lands in Android).
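
For a concrete sense of what profile/extension detection looks like to software, here's a minimal sketch in C. It assumes a RISC-V Linux machine where /proc/cpuinfo exposes an "isa" line (e.g. "isa : rv64imafdcv_zba_zbb"), which is how current kernels report it; a real implementation would prefer the hwprobe syscall where available, and the helper name here is made up:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: does this machine's "isa" line advertise the single-letter
       V (vector) extension? Multi-letter extensions (zba, zbb, ...) come
       after underscores and are ignored here for brevity. */
    static bool cpu_has_vector(void) {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f)
            return false;
        char line[512];
        bool found = false;
        while (fgets(line, sizeof line, f)) {
            char *p = strstr(line, "rv64");
            if (!p)
                p = strstr(line, "rv32");
            if (!p)
                continue;
            /* Single-letter extensions run from just after "rv64"/"rv32"
               until the first '_' or end of line. */
            for (p += 4; *p && *p != '_' && *p != '\n'; p++) {
                if (*p == 'v')
                    found = true;
            }
            break;
        }
        fclose(f);
        return found;
    }

    int main(void) {
        printf("V extension: %s\n", cpu_has_vector() ? "yes" : "no");
        return 0;
    }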


>All the "high performance CPUs" right now are just dev kits, so I don't see how there can be fragmentation in a market that does not yet even exist.

It comes straight from the "RISC-V know the facts" FUD campaign ARM infamously ran.

Yet, not even these dev kits suffer from "fragmentation". Basically:

- Previous wave implements RVA20, some of them with custom extensions, such as a harmless pre-ratification V extension.

- The open software ecosystem is built for RVA20, which the hardware supports. Vendors run their experiments within custom extension space, no harm is done.

- Current wave implements RVA22 with the ratified V extension, some of them with harmless custom extensions. As newer RVA profiles build on older RVA profiles, these chips run RVA20 code no worse than the previous wave.


I think David Chisnall's article is very good. However, I don't think you should oversell the few issues with RISC-V. There are definitely some design mistakes, but overall it is good. I would say as good as ARM (but not as mature yet).

Also... consider how successful an insane instruction set like x86 is! The ISA definitely matters for performance, but it clearly doesn't matter that much.

Also, the uniformity of the ISA is very nice for compilers. Sure, in practice you're only ever going to use x1 or x5 for your return address, but adding the hard constraint that you can only use those would definitely complicate some software.
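
To make that concrete: in RISC-V the link register is just the rd field of JAL/JALR, so any of x0..x31 can hold the return address; the uniformity is that nothing in the encoding special-cases x1/x5. A minimal sketch of the J-type encoding (the bit layout is from the spec; the helper function is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Encode a RISC-V JAL instruction. The link register is just the
       5-bit rd field, so the ISA leaves the choice to the compiler;
       the x1/x5 convention lives in the ABI and in return-address
       predictor hints, not in the encoding. */
    static uint32_t encode_jal(uint32_t rd, int32_t offset) {
        uint32_t imm = (uint32_t)offset;
        return (((imm >> 20) & 0x1)   << 31) |  /* imm[20]    */
               (((imm >> 1)  & 0x3FF) << 21) |  /* imm[10:1]  */
               (((imm >> 11) & 0x1)   << 20) |  /* imm[11]    */
               (((imm >> 12) & 0xFF)  << 12) |  /* imm[19:12] */
               ((rd & 0x1F) << 7) |             /* rd = link register */
               0x6F;                            /* JAL opcode */
    }

    int main(void) {
        printf("jal x1, +8 -> %08x\n", encode_jal(1, 8)); /* standard ra */
        printf("jal x5, +8 -> %08x\n", encode_jal(5, 8)); /* alternate link */
        printf("jal x0, +8 -> %08x\n", encode_jal(0, 8)); /* plain jump */
        return 0;
    }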

I'm not sure what you mean about fighting fragmentation. I used to think that, but that was before I knew about the RVA profiles.


Indeed, I think he's repeating information that is outdated today. With RVA profiles, we know what desktop-class RISC-V looks like, and that's what people might compare against ARM.


> Avoiding flags also has some interesting effects on encoding density.

One thing I liked about the Mill CPU was the belt, but I thought it was misplaced for data; it would be a great way to just carry the FLAGS register instead.

This would make conditional testing much easier and would mean you don't have to be as concerned about intermediate instructions updating flags before you use them.

I never had time to deeply think about it. Does someone want to tell me why this is actually a bad idea?
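
For reference, the flag-free style the quote is talking about looks like this from C: with no FLAGS register at all, a carry is recomputed with an unsigned compare (a single sltu on RISC-V) instead of being read out of condition codes, so nothing can clobber it. A minimal sketch:

    #include <stdint.h>

    /* 128-bit add without a flags register, RISC-V style: the carry
       out of the low half is rematerialized as an unsigned compare
       (sltu) rather than read from a condition-code register. */
    static void add128(uint64_t a_lo, uint64_t a_hi,
                       uint64_t b_lo, uint64_t b_hi,
                       uint64_t *r_lo, uint64_t *r_hi) {
        uint64_t lo = a_lo + b_lo;
        uint64_t carry = lo < a_lo;   /* 1 if the low add wrapped */
        *r_lo = lo;
        *r_hi = a_hi + b_hi + carry;
    }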


Surprisingly, this page looks like it's been updated more recently. They also added RISC-V support for Starnix not long ago.


This is naïve, though. Regulation, especially regulation such as this, has to be enforced, and there is obviously room to over- or under-interpret the text of the law on a whim, or to vary the fines. OAI knows this, and looking at how the EU has acted lately, what they're doing is wise.


Yes. The whole "non-maneuverable" line is kind of a blurry one given modern algorithms/comms and the altitude adjustments that are possible.


Yeah, it's not new; people have known.


I don't think it was 50K in practice. That may have been the published figure, but it was widely known it reached higher.


Absolutely. This is the primary reason those "UFOs", AKA balloons and drones, are concerning: signals intelligence (and/or, in the same vein, radar jamming, which the DoD reported has occurred off the coast of Virginia).

It's alarming that many leaped to suggest LEO satellites obviate the need for balloons/drones/spy planes, because it really isn't true; there are some things for which proper resolution and capture are simply only possible with proximity, at least more than a satellite has. In fact, that's why we still use (upgraded) U-2 spy planes, and did for the balloon.

Given the number of unidentified drone/balloon incursions near ships and air force bases reported by the Pentagon in the last few years, I do wonder what's been exposed about our radars and/or datalinks. It also doesn't necessarily matter that the data is encrypted (a weird refrain I saw), because the operating frequencies and behavior of the emitters on our aircraft and ships are in and of themselves valuable information.


Can you expand on this? Any links?


I could; what are you curious about? I tried to describe, at the highest level possible, what Apple is doing in the AI space without naming anything specific.

Apple is currently designing silicon around speed and efficiency bottlenecks: they analyze their performance and adjust their designs to address those bottlenecks. If one looks at their AI-oriented frameworks and their job-processing framework, both are being optimized for their custom silicon. Apple hires many PhDs to work on some of these issues.


"But I actually think, 10 years from now, everyone in the world is going to be trying to figure out how [best] to use Fuchsia. I think there’s gonna be some serious competitive advantages that using Fuchsia is going to give companies, and they’re going to need to figure out how they’re going to adopt it. That’s where I think [Fuchsia] will be in about 10 years."

- Chris McKillop.

Google is also making Fuchsia compatible with the ADB tool for developers at the moment. (https://9to5google.com/2022/08/26/fuchsia-adb-proposal/) (https://fuchsia-review.googlesource.com/c/fuchsia/+/715977/)

And they just finished their Fuchsia rollout for the Nest Hub Max, with the rollout for the entry-level Nest Hubs completed last year. (https://9to5google.com/2022/08/24/nest-hub-max-fuchsia-rollo...).


I like the part just above that:

>I think there’s a small chance that everything that Fuchsia has done ends up being inside the Linux kernel. They’re trying! They’re actually doing things now, which is great. That would be an awesome outcome, because it’s really about the features we were trying to build.

>I’m not a big believer in NIH [not invented here] or that kind of thinking, but I do feel people become complacent. Linux had “won,” and so it was very easy to be complacent. When you have new things come along and show that new ideas are possible, then other people adopt those new ideas. I think that’s awesome. That’s always a possibility.

I think the problem with Fuchsia is that it is just too much of a Google-only project. Samsung has basically only contributed F2FS, and that is presumably so that they can sell their flash hardware to Google. Other than that, few, if any, major companies are hacking on Fuchsia, compared to dozens on Linux. One of Linux's biggest strengths is that it has been a big tent that balances the needs of different vendors (who are often in competition) in a mostly fair manner.


If anything, I believe this casts more doubt than certainty around Fuchsia.


Why is that?


I love the freedom Android provides, what with the utter clusterfuck that is the Linux kernel's driver interface and the GPL. Yay, freedom.

An MIT license is fine. Great, even, because Fuchsia is in fact still an open source OS.

Hardware OEMs don't owe the public transparent firmware blobs.

