> That is, if x and y are determined only at runtime for power(x, y) then I don't see what can be optimized.
Yes, the example in Max's post is specifically assuming one wants to generate a specialized version of `power` where `y` is fixed.
To take it back to weval: we can know what the bytecode input to the interpreter is, so we provide an intrinsic (part of the "wevaling" request) to indicate that a given function argument is a pointer to memory with constant, guaranteed-not-to-change contents. That, together with context specialization on the PC (another intrinsic), lets us unroll the interpreter loop and branch-fold it, so we get the equivalent of a template method compiler that reconstitutes the CFG embedded in the bytecode.
Thanks, I think I see now. So `y` is the bytecode, in the analogy. Makes sense.
(For me at least a concrete example would have helped, something like showing the specialized output of running on the bytecode for `power` with that interpreter. But maybe that would be too verbose...)
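For what it's worth, here's a rough sketch of that idea in Python. The opcodes and interpreter are invented for illustration (this is not weval's actual input or output): a generic stack-bytecode interpreter, the bytecode for `power(x, 3)`, and the residual straight-line function a specializer could emit once the bytecode is known constant, with the dispatch loop unrolled and every opcode branch folded away:

```python
# Toy opcodes -- hypothetical, for illustration only.
LOAD_X, LOAD_ACC, MUL, STORE_ACC, RET = range(5)

def interp(bytecode, x):
    """Generic interpreter: dispatches on each opcode at runtime."""
    acc = 1
    stack = []
    pc = 0
    while True:
        op = bytecode[pc]
        if op == LOAD_X:
            stack.append(x)
        elif op == LOAD_ACC:
            stack.append(acc)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == STORE_ACC:
            acc = stack.pop()
        elif op == RET:
            return acc
        pc += 1

# Bytecode for power(x, 3): acc = ((1 * x) * x) * x
POWER3 = [LOAD_ACC, LOAD_X, MUL, STORE_ACC] * 3 + [RET]

# The residual function a specializer could emit once POWER3 is
# known constant: the dispatch loop is unrolled across PCs, every
# `if op == ...` branch folds, and only the arithmetic remains.
def power3_specialized(x):
    acc = 1
    acc = acc * x
    acc = acc * x
    acc = acc * x
    return acc

assert interp(POWER3, 5) == power3_specialized(5) == 125
```

The "unroll and branch-fold on constant bytecode" step here is done by hand; the point is just what the residual code looks like.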