64-bit integers are one small example of something you can't do no matter how fast you make JavaScript.



JavaScript needs 64-bit integers no matter what; the Node folks have been clamoring for it for a long time. We need to add them. Same with SIMD.

JavaScript is not some immutable thing. It needs enhancing, not replacing.


Value types (64-bit ints, DFP, etc.) are possibly on the table for ES7. Brendan has a strawman he shows in talks; he discusses it in his JSConf.EU video.

edit: Video link http://www.youtube.com/watch?v=IXIkTrq3Rgg


That's not quite true -- it is entirely possible to write a 64-bit integer class in JavaScript and have a JIT / asm.js compiler recognize it and swap in a native version (see ecmascript_simd [1], which I believe was written with this idea in mind).

[1] https://github.com/johnmccutchan/ecmascript_simd
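
To make the idea concrete, here's a minimal sketch (my own illustration, not the ecmascript_simd API) of a 64-bit int built from two 32-bit halves -- the kind of value class an engine could, in principle, recognize and lower to native 64-bit ops:

    // Hypothetical Int64 value class: two unsigned 32-bit halves.
    function Int64(hi, lo) {
      this.hi = hi >>> 0;  // high 32 bits
      this.lo = lo >>> 0;  // low 32 bits
    }
    Int64.prototype.add = function (other) {
      var lo = (this.lo + other.lo) >>> 0;
      var carry = lo < this.lo ? 1 : 0;        // detect 32-bit overflow
      var hi = (this.hi + other.hi + carry) >>> 0;
      return new Int64(hi, lo);
    };

    var a = new Int64(0, 0xffffffff);
    var b = new Int64(0, 1);
    var c = a.add(b);   // c.hi === 1, c.lo === 0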


Do you have any pointers to how value types can be implemented efficiently without adding type tags to the language? I can see how that would work in simple cases (say, within one function), but it seems there would be many cases where the VM cannot assume the type of a variable.

I looked at the spec briefly (http://wiki.ecmascript.org/doku.php?id=strawman:value_object...), but it doesn't talk about the how.


Many JS implementations use type tags already. SpiderMonkey, for example - they call it 'NaN-boxing'. See http://stackoverflow.com/questions/9435338/what-is-the-aim-o...
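
Rough illustration of the idea (my own example, not SpiderMonkey's actual encoding): any double whose exponent bits are all 1s and whose mantissa is non-zero reads back as NaN, so an engine has roughly 2^51 spare quiet-NaN bit patterns in which to tag and stash non-double values (pointers, int32s, booleans, ...).

    var view = new DataView(new ArrayBuffer(8));
    view.setUint32(0, 0x7ff80000, false); // sign 0, exponent all 1s, quiet bit set
    view.setUint32(4, 42, false);         // a payload hidden in the low 32 bits
    view.getFloat64(0, false);            // NaN -- indistinguishable to JS code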


Sorry, I meant type "tags" in the sense of modifying the language syntax to have required type annotations.

I understand NaN-boxing and similar techniques, but they seem to imply at least some runtime overhead to test the type of the value in some cases. Also, AFAIK, 64-bit integers cannot be represented with NaN-boxing, as there are only 51 bits available for the payload.


> Sorry, I meant type "tags" in the sense of modifying the language syntax to have required type annotations.

There are already multiple types in JavaScript, and this already affects performance; for example, "+" is defined for both numbers and strings. All JavaScript engines handle this in more or less the same way. You start out with a "baseline JIT" that does not make assumptions about the types of objects and has type-check-and-dispatch on operations like "+". These type checks record the types of the objects that flow through each point. Once enough types have flowed through that we can reasonably predict them, we recompile the function to assume the same type (for example, number) keeps going through, which enables greater optimizations. You then hoist the type guards up (or eliminate them if you can prove they are never taken); if a guard does fail, you deoptimize and bail out to the baseline code.
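
A toy example of that feedback cycle (illustrative only -- the exact heuristics vary by engine):

    // "+" alone tells the engine nothing about types, so baseline code
    // must check-and-dispatch on every call.
    function add(a, b) { return a + b; }

    // Warm-up: only numbers flow through, so after enough calls a typical
    // engine recompiles add() assuming numeric "+", with a guard hoisted
    // to the entry.
    for (var i = 0; i < 100000; i++) add(i, 1);

    // A call that violates the assumption trips the guard: the optimized
    // code is thrown away (deoptimization) and we fall back to baseline.
    add("foo", "bar");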

Value objects don't change this overall picture, they just add more types.

All of this, however, is irrelevant to asm.js. With an ahead-of-time optimizing compiler, we know what the types are beforehand via the asm.js spec. So there are no type guards inserted at all, and none of this is an issue. For example, NaN-boxing is not used in the Firefox asm.js compiler (OdinMonkey). We know which values are doubles and which are integers, so we need no runtime guards or type tests at all.
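
For example, a sketch of how an asm.js module spells out its types through coercions (my own example, not taken from the spec), so the ahead-of-time compiler needs no guards:

    function MyModule(stdlib, foreign, heap) {
      "use asm";
      function addInt(x, y) {
        x = x | 0;           // parameter declared int32
        y = y | 0;
        return (x + y) | 0;  // result coerced back to int32
      }
      function addDouble(x, y) {
        x = +x;              // parameter declared double
        y = +y;
        return +(x + y);
      }
      return { addInt: addInt, addDouble: addDouble };
    }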


I was responding specifically to the comment that if the V8 team continues optimizing in response to asm.js as they have been, perhaps there is no need for NaCl (or asm.js). I don't think that will ever be true.


That's correct, NaN-boxing doesn't help for 64-bit integers.

On the other hand, Dart doesn't have required type annotations either and it has 64-bit ints, doesn't it? So it should be possible to introduce them in JS.

Runtime overhead is likely, but when you're in the JIT sweet spot the types are all known, so you get JIT code that doesn't check types.


I think that part of the value people see in things like NaCl is not having to rely on JIT magic to figure out when to optimize 64-bit math. You get direct control over what is going on. It may be true that in certain edge cases the JIT can actually out-perform clean native code, but people are willing to sacrifice that for direct control over what the machine does.

GC is a related example of a place where VM designers told us not to worry ourselves, but it turns out that GC is inherently hard and you are going to pay for that convenience - either in performance or in memory. See: http://sealedabstract.com/rants/why-mobile-web-apps-are-slow....


> I think that part of the value people see in things like NaCl is not having to rely on JIT magic to figure out when to optimize 64-bit math. You get direct control over what is going on. It may be true that in certain edge cases the JIT can actually out-perform clean native code, but people are willing to sacrifice that for direct control over what the machine does.

This is precisely what "use asm" is for. It gives you direct control: each allowable operation in the subset has a direct analogue to the appropriate machine instruction(s). If you fall off the happy path and stray into "JIT magic" territory, you get a message in the developer console telling you to fix your code.
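
For instance (my own example), a module that falls off the happy path because a return value is never coerced -- it fails asm.js validation, Firefox logs an asm.js type error in the console (exact wording varies), and the code simply runs as ordinary JavaScript instead:

    function Broken(stdlib) {
      "use asm";
      function f(x) {
        x = x | 0;
        return x + 1;   // should be (x + 1) | 0 to validate
      }
      return { f: f };
    }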

Unfortunately, V8 is opposed to "use asm" in favor of the "JIT magic".


That post you linked to is actually full of some fundamental misunderstandings about GC and JS VMs :) But yes, GC isn't a free lunch - it comes with costs, and you have to design your applications to avoid the weaknesses of a given GC. It's rough.

I agree that not having to rely on the JIT to figure out how to optimize your code with 'magic' is preferable. I tend to lean towards that where possible in most of the JIT/GC-based environments I use (C#, JavaScript, etc), and it tends to pay off.



