If you haven't already, check out Synaptic [1] by Juan Cazala. I use it in many of my ML projects and it contains a lot of useful examples and documentation. I recommend you take a look at the source as well, which is easy enough to understand.
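For a quick taste of the API, here's a minimal sketch of training an XOR network with Synaptic's Architect and Trainer (the hyperparameters are illustrative, not tuned):

    const { Architect, Trainer } = require('synaptic');

    // A 2-3-1 perceptron: 2 inputs, one hidden layer of 3 units, 1 output.
    const network = new Architect.Perceptron(2, 3, 1);
    const trainer = new Trainer(network);

    // Train on the XOR truth table.
    trainer.train([
      { input: [0, 0], output: [0] },
      { input: [0, 1], output: [1] },
      { input: [1, 0], output: [1] },
      { input: [1, 1], output: [0] },
    ], { rate: 0.1, iterations: 20000, error: 0.005 });

    console.log(network.activate([0, 1])); // close to 1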
Those of you HNers working with or studying artificial neural networks (or armchair enthusiasts), is there any research you've seen recently that had you particularly excited?
Jason Weston (Facebook) and colleagues' work on Memory Networks[1]
Anything to do with transfer learning (the 2013 Zero-Shot Learning Through Cross-Modal Transfer[2] paper is a good place to start)
The increasing number of demos using NNs to generate "things" that look kinda-almost "intelligent". I can't point at a paper, but Andrej Karpathy's demo of generating Shakespeare-like writing, "Wikipedia" pages and "C" code in The Unreasonable Effectiveness of Recurrent Neural Networks[3] is the kind of thing I'm talking about.
The beginnings of work around goal-seeking: the (now Google) DeepMind Atari demo[4] and MarI/O[5]
Finally, the work being done on making this stuff usable by programmers (Torch etc).
Browsers don't support WebCL yet, and their asm.js is single-threaded. Their auto-vectorization abilities might be questionable, too. I don't see how it could possibly compete with NNs running on a GPU.
"Fast" might simply refer to efficient algorithms implemented in C?
The WebCL spec has been finished for well over a year now, and was fairly stable for a while before that, yet no browsers have implemented it. Mozilla has publicly stated they aren't interested and are going with WebGL 2.0 instead.
The 'fast' in the title isn't really a claim about the JS version; it comes from the library that was emscriptened - http://leenissen.dk/fann/wp/ - which is 'fast' in the sense that it uses efficient algorithms, not in the sense of being actually fast.
Besides the fann library, I will benchmark Jet's Neural Library [Heller, 2002] (hereafter known as jneural) and Lightweight Neural Network [van Rossum, 2003] (hereafter known as lwnn). I have made sure that all three libraries have been compiled with the same compiler and the same compile options.

I have downloaded several other ANN libraries, but most of them had some problem making them difficult to use. Either they were not libraries but programs [Anguita, 1993], [Zell, 2003], they could not compile [Software, 2002], or the documentation was so inadequate that it was not possible to implement the features needed in the benchmark [Darrington, 2003].

Even though I will only benchmark two libraries besides the fann library, I still think that they give a good coverage of the different libraries which are available. I will now briefly discuss the pros and cons of these two libraries.
Yeah, so the author picked two simple, "easy to use" NN libraries to benchmark against, dropped the others after running into trouble with them, and then claimed the library is fast because it performed "up to" 140 times faster than one of those two.
Seems kinda like "get a paper out" work: technically correct, but it doesn't measure up to real-world standards.
Real implementations would use vectorized math, math libraries with hand-optimized assembly, or run straight on the GPU.
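To make that concrete, here's a minimal sketch (illustrative, not taken from FANN or any of the libraries above) of one dense layer computed over a flat Float32Array - the cache-friendly, allocation-free layout that compiled code and emscripten output rely on, and the starting point before you even get to SIMD, BLAS, or the GPU:

    // One sigmoid layer over row-major weights stored in a flat typed array.
    function forwardLayer(weights, biases, input, nIn, nOut) {
      const out = new Float32Array(nOut);
      for (let j = 0; j < nOut; j++) {
        let sum = biases[j];
        const row = j * nIn; // start of row j in the weight matrix
        for (let i = 0; i < nIn; i++) {
          sum += weights[row + i] * input[i];
        }
        out[j] = 1 / (1 + Math.exp(-sum)); // sigmoid activation
      }
      return out;
    }

    // Usage: a 3-in, 2-out layer with made-up weights.
    const w = Float32Array.from([0.1, 0.2, 0.3, -0.1, -0.2, -0.3]);
    const b = Float32Array.from([0.0, 0.5]);
    console.log(forwardLayer(w, b, Float32Array.from([1, 0, 1]), 3, 2));

A hand-optimized library replaces that inner loop with vectorized kernels; a GPU implementation computes all the output neurons in parallel.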
[1] https://github.com/cazala/synaptic