
For a given function like `power`, you can unroll the interpreter loop and eliminate the dispatch branch on each instruction, so the code has less branching overall.
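
To make that concrete, here's a toy register VM of my own; the opcodes, encoding, and `POWER` bytecode are made up for illustration and aren't from the post. The point is that the dispatch loop pays an opcode load plus a switch branch for every instruction it executes:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical opcodes; fixed 4-word encoding per instruction: op, a, b, c. */
    enum { OP_CONST, OP_MUL, OP_SUBI, OP_JZ, OP_JMP, OP_RET };

    /* Generic dispatch loop: one switch (branch) per executed instruction. */
    static int64_t run(const int64_t *code, int64_t x, int64_t y) {
        int64_t r[3] = { x, y, 0 };          /* r0 = x, r1 = y, r2 = result */
        int pc = 0;
        for (;;) {
            const int64_t *i = &code[pc * 4];
            switch (i[0]) {
            case OP_CONST: r[i[1]] = i[2];               pc++; break;
            case OP_MUL:   r[i[1]] = r[i[2]] * r[i[3]];  pc++; break;
            case OP_SUBI:  r[i[1]] = r[i[2]] - i[3];     pc++; break;
            case OP_JZ:    pc = r[i[1]] ? pc + 1 : (int)i[2];  break;
            case OP_JMP:   pc = (int)i[1];                     break;
            case OP_RET:   return r[i[1]];
            }
        }
    }

    /* Bytecode for power: result = 1; while (y != 0) { result *= x; y -= 1; } */
    static const int64_t POWER[] = {
        OP_CONST, 2, 1, 0,   /* 0: r2 = 1            */
        OP_JZ,    1, 5, 0,   /* 1: if r1 == 0 goto 5 */
        OP_MUL,   2, 2, 0,   /* 2: r2 = r2 * r0      */
        OP_SUBI,  1, 1, 1,   /* 3: r1 = r1 - 1       */
        OP_JMP,   1, 0, 0,   /* 4: goto 1            */
        OP_RET,   2, 0, 0,   /* 5: return r2         */
    };

    int main(void) {
        printf("%lld\n", (long long)run(POWER, 2, 10));   /* prints 1024 */
        return 0;
    }

While `code` can be anything, the switch has to stay. Once `code` is known to be `POWER`, each step's switch can be resolved ahead of time, which is the unrolling and branch elimination described above.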



But can you only do that when you know what the inputs are? That is, if x and y are determined only at runtime for power(x, y), then I don't see what can be optimized. But that seems like the common case to me. Likely I'm missing something, and maybe I'm not asking the question clearly, sorry.


> That is, if x and y are determined only at runtime for power(x, y) then I don't see what can be optimized.

Yes, the example in Max's post is specifically assuming one wants to generate a specialized version of `power` where `y` is fixed.
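
Roughly, with my own sketch rather than the exact code from the post: pinning `y` lets the loop unroll completely, and the `y`-dependent branch disappears.

    #include <stdint.h>
    #include <stdio.h>

    /* General form: the trip count and exit branch depend on y. */
    static int64_t power(int64_t x, int64_t y) {
        int64_t result = 1;
        while (y != 0) { result *= x; y -= 1; }
        return result;
    }

    /* Residual function once y is fixed at 3: unrolled, branch-free. */
    static int64_t power_y3(int64_t x) {
        return x * x * x;
    }

    int main(void) {
        printf("%lld %lld\n", (long long)power(5, 3), (long long)power_y3(5));
        return 0;
    }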

To take it back to weval: we can know what the bytecode input to the interpreter is; we provide an intrinsic (part of the "wevaling" request) to indicate that some function argument is a pointer to memory with constant, guaranteed-not-to-change content. That, together with context specialization on the interpreter's PC (another intrinsic), allows us to unroll the interpreter loop and branch-fold it, so we get the equivalent of a template method compiler that reconstitutes the CFG embedded in the bytecode.
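
To sketch what that produces, hand-written rather than actual weval output: take a toy register VM whose bytecode for `power` encodes `result = 1; while (y != 0) { result *= x; y -= 1; }` (my own example, not our real interpreter). Unrolling the dispatch loop over that known bytecode and folding the per-instruction branches leaves something like:

    #include <stdint.h>
    #include <stdio.h>

    /* Residual of specializing the interpreter on the power bytecode:
       each bytecode instruction becomes direct code, and the bytecode's
       jumps are reconstituted as ordinary control flow. */
    static int64_t run_power(int64_t x, int64_t y) {
        int64_t r0 = x, r1 = y, r2;
        r2 = 1;                  /* pc 0: CONST r2, 1     */
    pc1:
        if (r1 == 0) goto pc5;   /* pc 1: JZ r1 -> 5      */
        r2 = r2 * r0;            /* pc 2: MUL r2, r2, r0  */
        r1 = r1 - 1;             /* pc 3: SUBI r1, r1, 1  */
        goto pc1;                /* pc 4: JMP 1           */
    pc5:
        return r2;               /* pc 5: RET r2          */
    }

    int main(void) {
        printf("%lld\n", (long long)run_power(2, 10));   /* prints 1024 */
        return 0;
    }

Note that `y` is still a runtime value, so the guest program's loop is still there; what's gone is the opcode fetch and dispatch branch for every executed instruction.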


Thanks, I think I see now. So `y` is the bytecode, in the analogy. Makes sense.

(For me at least, a concrete example would have helped, something like showing the specialized output of running that interpreter on the bytecode for `power`. But maybe that would be too verbose...)



