I attended a talk by these folks at KubeCon last year - really great stuff. Sadly I'm not in a position to do anything with WASM in my professional life (and I have other priorities in my personal projects), but this for me is the "next big thing", more than any AI fluff - it will go unnoticed because it's low-level and transparent, but I'd bet paycheques on it having a huge impact in the coming years. Seems like a really exciting thing to be a part of.
Theoretically, any language that can target WASM, like Rust, Zig, or even C. I think the plan is to eventually support higher level languages with garbage collectors, but I don't know how much progress has been made on that front.
I haven't played with it enough to know which ones provide the best experience at the moment, so I'd be interested to hear suggestions from others who have used it more extensively, especially with newer parts of the ecosystem like the component model, interface types, WASI, and generating ergonomic bindings to other languages (e.g. JS <> WASM).
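To make the "any language that can target WASM" point concrete, here's a minimal sketch (not from the thread): a plain Rust program with no wasm-specific code at all. Assuming you have the WASI target installed (`rustup target add wasm32-wasip1`; older toolchains call it `wasm32-wasi`), the same source compiles unchanged to a `.wasm` module you can run under a runtime like wasmtime.

```rust
// A plain Rust program; nothing here is wasm-specific.
// Native build:  cargo build
// WASI build:    cargo build --target wasm32-wasip1
// Then run the module with a WASI runtime, e.g.:
//   wasmtime target/wasm32-wasip1/debug/hello.wasm

fn greet(name: &str) -> String {
    format!("Hello from Wasm-capable Rust, {}!", name)
}

fn main() {
    // Prints via WASI's stdout when compiled to wasm,
    // via the OS directly when compiled natively.
    println!("{}", greet("KubeCon"));
}
```

The point is that for pure-computation and basic-I/O code, targeting WASM is just another compilation target; the bindings and component-model questions only come up at the boundaries.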
Basically, higher-level languages with garbage collectors have to be converted twice: from their own bytecode into WASM, then run on a runtime that will never be as finely tuned as their own language-specific runtime.
For example, the WASM GC design doesn't support all the use cases of the .NET GC, such as interior pointers.
There is really no reason to use this instead of packaging higher level languages with garbage collectors into a regular Kubernetes container.
Why? It's just an (allegedly) faster and lighter weight applications runtime on Kubernetes. For many applications, this will be less convenient than proprietary serverless stuff.
Are you saying this is bigger than AIs that are so smart they can do software development?
I can't comment on Spinkube specifically, but I'd bet on serverside WASM-based orchestration, and WASM in general, being a big part of the next generation of IT stacks. If somebody can get the UX right, the "write once, run anywhere" promise of Java may finally come true.
The thing that interests me about wasm is having ultra-lightweight small processes, which happen to tap into very large pools of already-resident libraries in a safe, sandboxed fashion.
There have been huge, huge strides in low-profile, elegant serverless platforms built around v8 isolates. We can get all those wins and more, in a way more scaled-out, more multi-tenant way, with wasm. If it lives up to its aspirations, which it seems on target to do.
Luke Wagner's WasmCon talk iterated through the core attributes of what makes wasm wasm and what those attributes unlock. The ability to host much, much more with a much lower profile, with extremely short-lived sandboxed runtimes, is incredibly enticing. The app-server model could finally be disrupted.
I have no patience for "bigger than AI" apples-to-oranges comparisons. Wasm is orthogonal; it has its own features and distinctions. A direct run-off comparison sells both short.
> I have no patience for "bigger than AI" apples-to-oranges comparisons. Wasm is orthogonal; it has its own features and distinctions. A direct run-off comparison sells both short.
Eh, fair. At the time I thought it was actually a fairly reasonable and pithy way to compare a hype-inflated technology that is being mindlessly applied to every possible target (while also, I concede, being thoughtfully, practically, and usefully applied in some cases), to an under-hyped and (AFAICT) almost unheard-of technology that seems on track to provide an unshowy but significant optimization to basically any web technology, everywhere. But you're right that they are, indeed, trying to do different things, and comparing their impact is somewhat reductive. WASM is certainly not better than AI at the things that AI is trying to do.
In this context WASM is more like a slower container than an application runtime. The potential advantages being:
* Far better isolation than a normal Linux container
* If WASM takes off, maybe architecture-independent artifact distribution
* If WASM takes off, maybe improved ability to call from one language into another
As a Kubernetes container runtime WASM today is probably most comparable to gVisor, with the somewhat huge caveat that it prevents you from just running normal Linux software.
So interesting seeing this hit the front page while being sat in an auditorium in Barcelona at wasm i/o listening to one of the original creators of Krustlet, Taylor Thomas from Cosmonic, compare and contrast literally this 'wrapped in k8s' approach to the 'alongside k8s' interop alternative.
Great talk, highly recommend watching it on YouTube if you're into this kind of thing. No link but I imagine it'll be up pretty soon.
From what I've read, it seems much more complicated than it has to be. I've been running crun with wasmtime enabled for about a year in my private test clusters. No shim, no operator, no runtime class manager. I just have to set one extra annotation on the pod.
The only problem is that most k8s distros ship with runc, so there is no out-of-the-box experience yet, but it is a super efficient setup.
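For readers wondering what "one extra annotation" looks like, a sketch of a pod spec for crun's wasmtime handler (the annotation key is the one crun's wasm support documents; the image name is hypothetical, and you'd want to confirm the exact variant value against your crun build):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
  annotations:
    # Tells crun (compiled with wasmtime support) to execute the image's
    # entrypoint as a Wasm module rather than a native ELF binary.
    module.wasm.image/variant: compat
spec:
  containers:
    - name: app
      image: registry.example.com/my-wasm-app:latest  # hypothetical image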
Your approach seems more managed/enterprise-ready, but also more complicated. I don't want to be rude; I just want to understand what additional value I would get from this, since there are so many small differences in the WASM runtime world and I'm not super familiar with most of them.
For context, the shim is just an implementation detail of how containerd runs workloads: some come pre-built by default on some systems (like runc), others you configure yourself (like kata-containers, or in this case, the Spin shim), and some Kubernetes distros already have built-in shims (or will soon), like k3s.
Your setup seems to work really well for you (and as a side note, I'm curious to learn more about the kinds of workloads you are running) — I'll note that you can also set the runtime class on a regular Kubernetes deployment / pod spec and you can run the workload like this with the Spin shim (for reference https://github.com/spinkube/containerd-shim-spin/blob/main/d...).
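As a sketch of the "runtime class on a regular pod spec" path mentioned above (the `wasmtime-spin-v2` / `spin` names follow SpinKube's documentation, but confirm them for your distro; the image reference is hypothetical):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
# Must match the runtime handler name registered with containerd for the shim.
handler: spin
---
apiVersion: v1
kind: Pod
metadata:
  name: spin-demo
spec:
  # Routes this pod to the Spin containerd shim instead of runc.
  runtimeClassName: wasmtime-spin-v2
  containers:
    - name: app
      image: ghcr.io/example/spin-app:latest  # hypothetical OCI artifact
```

The trade-off versus the annotation approach is that the routing decision lives in a first-class Kubernetes object rather than a runtime-specific annotation.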
For Spin, we focused on the end-to-end developer experience, and on building a set of tools that take advantage of the benefits you get from Wasm while integrating with a lot of cloud native tooling, allowing you to more easily build APIs and event-driven functions.
Our goal with SpinKube is to integrate with how organizations run workloads in their infrastructure, and to make building, distributing, and running Wasm as seamless as possible.
Happy to dive into details for any of the points above, and would love to learn more about what you are running as Wasm.
Why is anything special or unique required to run WASM on kube? A container is a container, no? Serious question. To me the benefit of kube is that the api is a container and allows for any container to run there.
With WASM, you get better isolation than a regular virtual machine, you can be more granular with scheduling, and the attack surface is far smaller than a regular VM's. Compared to namespace-based containers, you don't need to rely on the kernel attack surface being tight for security. And you get to intercept all syscalls, à la gVisor, with less complexity. The downside is interaction with specialty hardware, and performance.
A container can be a VM, this provides a container with similar isolation characteristics to a VM with less complexity on the orchestrator/runc side of things.
Hey, SpinKube maintainer here. You're right that you could just run your Wasm (or Spin app) directly in a container, and there are often reasons you might want to do that. But SpinKube executes WebAssembly using a containerd shim, which means we avoid the overhead of starting a container and instead directly execute the Wasm.
The project is focused on supporting Spin. But Spin supports a large amount of the WASI API surface and is headed in the direction of using WASI in place of Spin-specific APIs wherever possible.
You are correct that there is nothing extra required if what you want is to run Wasm inside a container.
However, that comes with a few disadvantages, primarily stemming from bundling your Wasm runtime with each container image: while your Wasm workload is portable, the runtime itself needs to be compiled for the specific OS and architecture you execute on.
SpinKube comes with a containerd shim (or plugin) that executes Wasm workloads directly, so you can continue to integrate with the Kubernetes API (either by using regular pods with a runtime class, or the Spin application manifest), but not get the overhead of a full Linux container for your lightweight wasm component.
Krustlet implemented the Kubelet API rather than the containerd-shim API. With containerd shims, you can keep the default K8s kubelet and containerd, and just extend your containerd runtime with a shim for executing Wasm. The kubelet is much higher in the stack, and implementing it means also implementing everything below it in the stack.
Just FYI, but Krustlet is dead, and the community has moved on to approaches where the container runtime handles the difference instead of having special nodes for it.