feralchimp's comments

"gimme gimme gimme" from man after midnight, you say?


Really nice job with this project, and especially the clear/concise write-up of the pipeline steps.


If this is real, I don't think it's about release timing at all, but about the technical complexity of implementing and integrating the specific new features they've chosen.

I don't have a list of specific examples to back up that claim, but I encourage other apple devs here to think about recent releases from that perspective.


This is a great review for two reasons:

a) it provides readers with a laundry list of things to go study independently

b) the book author can, given time and inclination, do the same study and improve the book


"Grave error" implies something that a non-fool might blunder into. Fools don't blunder, per se, they simply exist.

One of the first jokes I ever heard at a startup was "We'll give the product away and make it up on volume!" Changing the joke to "We'll pay people to take the product and make it up on volume!" doesn't make it less obviously ridiculous.

I'm amazed they were able to hire employees, let alone find investors. Doesn't make the guy's death any less tragic though.


That's an oldie but a goodie.

Saturday Night Live Clip (First CityWide Change Bank 2) http://www.imdb.com/video/hulu/vi416284697/


Before startups or the Internet existed, the old joke was "Lose a little on every sale and make it up in volume".


If you're wondering how to write thorough doc for an engineering audience, I highly recommend looking at IBM's doc for z/OS. The first thing you'll notice is: there's a lot of it.


And yet, an order of magnitude less arrogant than the recruiting attitude that led to its authorship.


The market for people who want a native terminal on their phone should be roughly equivalent to people who ever want to terminal INTO their phone.

The market already well served by iOS, Android, etc., is people who want to terminal OUT OF their phone and onto a real computer someplace.

What are the real-world tasks you expect to do with the former that you couldn't do, or couldn't do as well, with the latter?


If I'm understanding you correctly, you wouldn't need to SSH into your phone from another device. You could run something like SQLite locally and access it from the terminal, or write Ubuntu applications that expect services to be available on certain ports.


Does Linux behave similarly by default if one uses calloc instead?


Of course, it's basically a wrapper around malloc. All allocated memory is subject to this behaviour: malloc/calloc, forked "copy-on-write" memory, mmaps and statically allocated segments (especially the stack).

In a system with an MMU, all that allocating memory does is tell the kernel not to give you a segfault when you access some range of virtual addresses. To actually materialize a virtual page, you have to access it.


I wasn't sure why that would be the case. To me, the naïve answer would be that they wouldn't behave the same, as calloc is required to return cleared memory, meaning memset or similar. Saving time on allocation but clearing it on access would seem to require kernel-level access below that of the C library. So I took a look.

Interestingly, in glibc-2.15, the code for calloc (well, public_cALLOc) is longer than that for malloc. Most of that actually seems just to be doing magic to figure out when it actually needs to clear it. So in any cases where calloc is touching reused memory, it clears that itself. Otherwise, things end up at either mmapping MAP_ANONYMOUS or sbrking, which simply allocates pages that all get cleared on access (well, on write) anyway for security reasons.

Honestly I was surprised how much indirection and optimization there was. I knew that there were some clever things being done and they try to take advantage of processor features. But this stuff really does everything possible to avoid even a single unnecessary cycle. I'm impressed.

So in the general case, calloc might do some extra work. But where you're just callocing heaps and heaps (tee hee) it's not likely to. Strange, I could have sworn I just stumbled upon a case where someone claimed doing that was noticeably slower, but now I can't find it... was going to be very curious how that was the case. But after this romp I'm tired.


It doesn't behave the same with calloc because calloc zeros the memory and therefore writes to each page.

Just try the same program, replacing malloc(1 << 30) with calloc(1, 1 << 30).


>because calloc zeros the memory and therefore writes to each page.

One does not imply the other. Internally what the kernel can do is link the page address it gives you to the zero page and mark it as copy on write. Only when you actually write to it will it allocate an actual page to back it. Only if your libc implements calloc as malloc+memset would this be a problem. Does glibc do that?

In fact the copy on write is probably also done on malloc as well. Even though the manpage implies different behavior (malloc doesn't guarantee setting the memory to 0, while calloc does) I don't think any sane kernel will give you someone else's free()'d memory. It would be a security leak.


> Only if your libc implements calloc as malloc+memset would this be a problem. Does glibc do that?

I just checked (see my reply to the parent) and it doesn't.

> In fact the copy on write is probably also done on malloc as well. [...] I don't think any sane kernel will give you someone else's free()'d memory

You won't get someone else's freed memory but you're quite likely to get your own back and in that case it won't necessarily be zeroed.


>You won't get someone else's freed memory but you're quite likely to get your own back and in that case it won't necessarily be zeroed.

Surely the kernel never gives a process its own pages back. That would mean keeping an unneeded page around that could instead just point to the zero page.

Reading your other comment, I assume what you mean is that it's a two-step process: the kernel always gives out zero pages, but glibc's malloc implementation keeps a stock of pages and will hand them back in a malloc() after a free(). That way you're not guaranteed to get zeroed memory on every malloc(), since not all of it comes straight from the kernel.

The calloc() implementation has checks for that and will do the clearing itself when the memory comes from the glibc stock rather than the kernel. But even then, it only clears pages that are already in the process address space. So a process will always receive zero pages from the kernel, and the malloc() implementation is made more efficient by giving you back some of your own free()'d memory that, from the kernel's point of view, was never given back.

Does that sound about right?


I didn't look in enough detail on the Linux side to see whether it always hands out a zero-page reference, or whether it does/doesn't clear out a fresh page when it's referenced, depending on how it was allocated and by whom. I could see that saving time, but I can also easily see just nuking a whole page being faster than the work of tracking and checking.

From the glibc side I believe you are exactly correct.


> Only if your libc implements calloc as malloc+memset would this be a problem. Does glibc do that?

OK, it's not guaranteed that it will be, but the source shows several code paths where memset can be called during calloc():

http://sourceware.org/git/?p=glibc.git;a=blob_plain;f=malloc...

My quick experiment with 32-bit libc-2.13 showed that using calloc() is significantly slower than using malloc().

[EDIT] Should have read pedrocr's response fully.


This is awesome, Danny. Now get on that 30-page version. :)

