ConvNets can handle sequences just as well (cf. TDNNs); you just treat time as another dimension. Whether you want to call each time step a "frame" or not is open to question.
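A minimal sketch of that idea (PyTorch here, purely illustrative; the feature counts and layer sizes are made up): a 1-D convolution whose sliding dimension is time, which is essentially what a TDNN does.

```python
import torch
import torch.nn as nn

# Treat time as the dimension the convolution slides over (TDNN-style):
# input is (batch, features, time); each kernel spans a fixed temporal window.
tdnn = nn.Sequential(
    nn.Conv1d(in_channels=40, out_channels=64, kernel_size=5),
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, dilation=2),  # dilation widens the temporal context
    nn.ReLU(),
)

x = torch.randn(8, 40, 200)   # e.g. 8 utterances, 40 filterbank features, 200 frames
y = tdnn(x)                   # (8, 64, 192): each output column sees a window of input frames
```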
With an explicit (potentially arbitrarily large) memory component, they can also capture long-term dependencies that don't have to be maintained in, say, the convolution parameters between presentations of data. This is one thing we want to pursue with the Faiss library at FAIR.
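To make the "explicit memory" idea concrete, here is a rough sketch using Faiss as an external key store (this is just an illustrative nearest-neighbour lookup, not the specific approach referenced above; the dimensions and data are made up):

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 128                                   # dimensionality of the stored memory keys
memory = faiss.IndexFlatL2(d)             # exact L2 index acting as the external memory

# Write: append key vectors produced earlier (e.g. past hidden states) to the memory.
keys = np.random.rand(10_000, d).astype('float32')
memory.add(keys)

# Read: for the current query vector, retrieve the k closest stored entries.
query = np.random.rand(1, d).astype('float32')
distances, ids = memory.search(query, k=5)
print(ids[0])   # indices of the retrieved memories, usable to look up their payloads
```

The point being that the memory can grow arbitrarily large and persist across presentations of data, rather than everything having to live in the convolution parameters.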
Regardless of how something is implemented, it's the what that matters. What's the actual algorithm or algorithms that the brain uses? It's likely easier to explore that question in floating-point math than by putting together a lot of spiking-neuron-like things and trying to cargo-cult your way to the answer.
It's interesting you used a Feynman reference here, as he was actually involved in spawning the neuromorphic field at Caltech with Carver Mead [0].
You're thinking about this as if it's purely an algorithm problem and that computer architecture should always be designed to do the algorithm's bidding.
Neuromorphic flips the problem around and creates an efficient architecture where the algorithms do the architecture's bidding.
The former is much better in the cloud with massive armies of general-purpose computers, while the latter is much better on the edge for anything implantable (very little heat can be generated or else you cook) or anything that needs super low power (less than 1 W).
Having worked on both sides of the spectrum, I think there's room for both, and the relative research funding seems reasonable: much more is invested in straightforward neural network research with GPUs etc. than in the neuromorphic field, where it's mostly a handful of Caltech folks left at places like Stanford, UCSD, GT, Johns Hopkins, or UF.
https://arxiv.org/pdf/1611.02344.pdf