
How does the reporter know this is a leak? High memory consumption is not a "memory leak".

The RFC says the data can be retained in history buffers as part of normal operation and as far as I can tell that's what is happening here.




It is OK to have an in-memory cache for certain resources. It isn't OK when this cache doesn't shrink under memory pressure. This usually leads to swap thrashing once the cache grows bigger than the available physical RAM on the system, and swap thrashing makes the system unusable in most cases. See comment #6: http://code.google.com/p/chromium/issues/detail?id=81517#c6
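The difference between a well-behaved cache and the problem described above can be sketched in Python (hypothetical names; a minimal illustration, not Chromium's actual cache): a cache with a size cap evicts old entries instead of growing without bound.

```python
from collections import OrderedDict

class BoundedCache:
    """A simple LRU cache that evicts its oldest entries once a size
    cap is reached, so it cannot grow without limit."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._store = OrderedDict()

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        # Evict least-recently-used entries instead of growing forever.
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)

    def get(self, key, default=None):
        if key in self._store:
            self._store.move_to_end(key)  # mark as recently used
            return self._store[key]
        return default

cache = BoundedCache(max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)  # cap reached: "a", the oldest entry, is evicted
```

A real browser cache would evict based on memory pressure signals rather than a fixed entry count, but the principle is the same: retained data must have an upper bound.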


Ah, that was added after my comment. Nevertheless, there is no evidence this is a memory leak.


It depends on how strictly you define "memory leak." I'm currently in the process of diagnosing a problem that causes steadily increasing memory consumption in one of our servers. Odds are that we are simply keeping a bunch of data around past the time when it should be detected as stale and expunged. If that's true, we still have pointers to the objects and know exactly where and what they are, so the memory has not been "leaked" by the strict definition that we are no longer able to access or deallocate it.

Still, pending diagnosis, everyone refers to it as the "memory leak," technical and nontechnical people alike. Usually there is a lot of discussion of such a bug before the cause is detected and not so much afterwards, so there would not be much opportunity to use the term "memory leak" if it were only used postmortem. (Not to mention that true memory leaks are pretty rare nowadays, except for those unfortunate enough to be programming in C.)
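The scenario described above can be sketched in Python (hypothetical names, a toy illustration of the pattern, not the poster's actual server): every entry stays reachable through the table, so nothing is "leaked" in the strict sense, yet memory grows steadily until stale entries are expunged.

```python
class SessionTable:
    """Grows steadily if stale entries are never expunged, even though
    every entry remains reachable (so not a 'leak' in the strict sense)."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (value, last_touched_timestamp)

    def put(self, key, value, now):
        self._entries[key] = (value, now)

    def expunge_stale(self, now):
        """The fix: periodically drop entries older than the TTL."""
        stale = [k for k, (_, t) in self._entries.items() if now - t > self.ttl]
        for k in stale:
            del self._entries[k]
        return len(stale)

table = SessionTable(ttl_seconds=60.0)
table.put("old", "data", now=0.0)    # touched 100 seconds ago
table.put("new", "data", now=100.0)  # touched just now
removed = table.expunge_stale(now=100.0)  # drops only the stale entry
```

Without the `expunge_stale` step (or with a bug that prevents entries from being detected as stale), the dictionary grows forever, which is exactly the "leak that isn't technically a leak" being described.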


I've had Ruby processes run wild (think turning off the garbage collector) and grow to 20-30GB on my MacBook and it didn't make my system unusable. Things slowed down, but I could pretty easily open up the Activity Monitor and kill the offending process.


In the highly unlikely scenario where you actually allowed the system to allocate 30GB of swap space, this might not cause the system to become unresponsive since most of the memory in "use" would never be touched, as it comes from discarded but uncollected Ruby objects.
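The "discarded but uncollected" behavior has a Python analogue (an illustration under the assumption that CPython's cycle collector stands in for Ruby's GC here): with the collector disabled, objects that form reference cycles accumulate even though nothing in the program can reach them any more.

```python
import gc

class Node:
    """Objects in a reference cycle need the cycle collector to be freed;
    reference counting alone cannot reclaim them."""
    def __init__(self):
        self.partner = None

def make_cycle():
    a, b = Node(), Node()
    a.partner, b.partner = b, a  # cycle: a -> b -> a

gc.disable()   # analogous to turning off the garbage collector
gc.collect()   # start from a clean slate
for _ in range(10_000):
    make_cycle()           # cyclic garbage piles up while GC is off
leaked = gc.collect()      # explicit collection still works: counts the
gc.enable()                # unreachable objects it had to reclaim
```

None of those uncollected objects are ever touched again after `make_cycle` returns, which is why their pages could sit in swap without being faulted back in, much as described above.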


This is a pretty solid hypothesis, but wouldn't the same be true of the cached images here? They are essentially the same thing: stale resources that won't be used again, whose backing memory needs to be freed.


I suspect there are two major causes for the differences: (1) the faster allocation rate in the browser prevents the system from keeping OS/GUI-critical pages in RAM, and (2) the browser is constantly checking the cache for matching images, so the pages used by the cache are being kept in RAM and/or repeatedly loaded and unloaded.



