Hacker News

Thanks for mentioning my project (fastgron).

It reads the file into memory once, then makes a single pass over it, so it shouldn't need much more memory than the file size.

Also, I put a lot of work into making fastgron -u fast, but you can grep the file directly as well.
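For illustration, the -u (ungron) direction turns the flat assignment lines back into nested JSON. A minimal Python sketch of that inverse transformation, assuming bare identifier keys (real gron also emits json["quoted key"] paths):

```python
import json
import re

def ungron(lines):
    # Rebuild a nested value from gron-style lines. Illustrative sketch,
    # not fastgron's actual implementation.
    wrapper = {}
    for line in lines:
        path, _, raw = line.strip().rstrip(";").partition(" = ")
        # "json.a.b[0]" -> ["json", "a", "b", 0]
        tokens = ["json"] + [key if key else int(idx)
                             for key, idx in re.findall(r"\.(\w+)|\[(\d+)\]", path)]
        value = {} if raw == "{}" else [] if raw == "[]" else json.loads(raw)
        node = wrapper
        for tok in tokens[:-1]:
            node = node[tok]          # containers were created by earlier lines
        if isinstance(node, list):
            node.append(value)        # gron emits array indices in order
        else:
            node[tokens[-1]] = value
    return wrapper["json"]

print(ungron([
    'json = {};',
    'json.a = [];',
    'json.a[0] = 1;',
    'json.a[1] = "x";',
]))  # -> {'a': [1, 'x']}
```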




Thank you for making fastgron! I use it daily and also have it aliased to 'gron' in my shell so I don't accidentally forget to use it.


Thanks! It would be great to make it the official gron 2.0; I tried to achieve full feature parity with the original. Also, a serious buffer-overflow bug was just fixed, so make sure to upgrade to 0.7.

I'm thinking of doing some marketing (for example, a blog post about the main lessons in I/O and memory management that made this speed possible).


Maybe I'm misunderstanding, but why does it need to read the file into memory at all? Can't it just parse directly as the data streams in? It should be possible to gron-ify a JSON file that is far bigger than the memory available - the only part that needs to stay in memory is the key you are currently working on.
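The transformation itself does only need the current path: each output line is just the path from the root plus one scalar. A minimal Python sketch of gron's output format over an already-parsed value (a streaming parser could drive the same recursion without holding the whole input):

```python
import json

def gron(value, path="json"):
    """Recursively emit gron-style assignment lines for a parsed JSON value.
    Only the current key path is carried down each branch of the recursion."""
    if isinstance(value, dict):
        yield f"{path} = {{}};"
        for key, child in value.items():
            yield from gron(child, f"{path}.{key}")
    elif isinstance(value, list):
        yield f"{path} = [];"
        for i, child in enumerate(value):
            yield from gron(child, f"{path}[{i}]")
    else:
        yield f"{path} = {json.dumps(value)};"

for line in gron(json.loads('{"a": {"b": [1, "x"]}}')):
    print(line)
# json = {};
# json.a = {};
# json.a.b = [];
# json.a.b[0] = 1;
# json.a.b[1] = "x";
```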





