Authors here. If you have any questions we'll do our best to answer them! Glad to see people find our work interesting thus far.
We also encourage anyone interested to play with the linked Google Colabs [1][2] and read the other articles in the Distill thread. In the Colabs you'll find a bunch more pre-trained textures as well as a workflow to train on your own images, plus some of the scaffolding to recreate figures.
Really impressive work - in seconds I can see so much richness of ideas and potential!
And, as is so often the case, the really interesting work happens at the intersection of two fields - here, neural nets and cellular automata. I've got tons of new reading to do now!
There's some recent work that applies NCAs in a 3D setting by Horibe et al. [1] (see also the tweet [2]). Other work by Risi and collaborators is definitely worth checking out as well.
Great post, thanks! In the Growing Neural Cellular Automata article you describe a strategy for getting the model to learn attractor dynamics, which reminded me of Deep Equilibrium Models (https://arxiv.org/abs/1909.01377).
Is there a relationship between these models, and do you think those root-finding and implicit-differentiation techniques could be used to train cellular automata too?
The second cell looks like a section title ("Imports and Notebook Utilities"), but it contains the imports and the definitions of those functions. Run this cell and I suspect things should work.
In typical parlance today, "seminal" means "from which a bunch of important things have sprung" but I think there is an older definition which is simply "first".
Apologies, not my intention. I was also under the impression that "seminal" could be used to mean "first" in a succession of works, and that is what I had intended to communicate.
As a non-native but long-term speaker of English, I understand "seminal" as in "their seminal work" as "groundbreaking" (and better to be avoided when referring to one's own work). But slips of the pen are inevitable, so no harm done.
The article says "our NCA model contains 16 channels. The first three are visible RGB channels and the rest we treat as latent channels which are visible to adjacent cells during update steps, but excluded from loss functions."
Thanks for noticing. This is a typo stemming from early experiments. We started out with 16 channels, but switched to 12 channels of state when this worked just as well. I've submitted a correction.
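To make the channel layout concrete, here's a rough NumPy sketch of the idea (hypothetical names, not the actual Colab code): each cell carries a 12-channel vector, the loss only sees the first three (RGB), but an update step perceives all twelve channels of the 3x3 neighbourhood.

```python
import numpy as np

N_CHANNELS = 12  # 3 visible RGB channels + 9 latent channels
H = W = 32

state = np.zeros((H, W, N_CHANNELS), dtype=np.float32)

def visible_rgb(state):
    # Only the first three channels feed the texture loss.
    return state[..., :3]

def neighbourhood_input(state):
    # During an update step a cell perceives *all* channels of its
    # 3x3 neighbourhood, latent ones included (toroidal wrap-around).
    h, w, c = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    patches = np.stack(
        [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)],
        axis=-1,
    )
    return patches.reshape(h, w, c * 9)
```

Changing a latent channel leaves `visible_rgb` (and hence the loss) untouched, but it does change what neighbouring cells see on the next step - which is exactly why the latent channels can carry hidden coordination signals.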
If you haven't heard or seen any presentations about the work coming out of the Levin lab, it is super interesting. I don't really know anything about biology, but the work around modifying organisms via changing electrical circuits rather than genes is fascinating, and to a lay-person such as myself seems like the future of bio.
This reminds me a lot of the WaveFunctionCollapse texture generation algorithm [0]. It "generates bitmaps that are locally similar to the input bitmap."
This is the first I've ever read about neural cellular automata, though I thought I was relatively up to date on both cellular automata and deep learning. I think I was able to pick up the broad strokes from context, but is there a good introductory resource for neural cellular automata?
See also: using differentiable approximations of cellular automata in PyTorch to reverse Conway's Game of Life; in some cases, you can get striking Turing patterns similar to what's described in this paper! http://hardmath123.github.io/conways-gradient.html
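The core trick there can be sketched in a few lines: replace the hard birth/survival rule with smooth windows over the neighbour count, so the whole update is differentiable. This is a NumPy sketch of the forward pass only (in practice you'd write it in PyTorch or JAX to get gradients; the function names are mine, not from the linked post):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_window(n, lo, hi, sharpness):
    # Smooth indicator of lo < n < hi; approaches a hard window as
    # sharpness grows, but stays differentiable everywhere.
    return _sigmoid(sharpness * (n - lo)) * _sigmoid(sharpness * (hi - n))

def neighbour_sum(grid):
    # Sum the 8 neighbours via shifted copies, with toroidal wrap-around.
    total = np.zeros_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
    return total

def soft_life_step(grid, sharpness=20.0):
    # The classic rules - birth on exactly 3 neighbours, survival on
    # 2 or 3 - expressed with smooth windows instead of hard thresholds.
    n = neighbour_sum(grid)
    birth = soft_window(n, 2.5, 3.5, sharpness)
    survive = soft_window(n, 1.5, 3.5, sharpness)
    return (1.0 - grid) * birth + grid * survive

# At high sharpness the soft step reproduces the hard rule: a vertical
# blinker flips to a horizontal one.
grid = np.zeros((5, 5))
grid[1:4, 2] = 1.0
out = np.round(soft_life_step(grid))
```

Because every operation is smooth, you can backpropagate through several of these steps and optimize an initial grid so that it evolves into a target pattern - which is essentially the "reversing the Game of Life" setup.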
I usually have a very strong trypophobic response (an unbearably itchy feeling) to anything "organic" that has clusters of protrusions or holes, e.g. lotus seed pods, but none of these examples have any such effect. I think it's because of the bright colours and low resolution.
No problem if it's inorganic. I expect there must be some kind of evolutionary reason for it - the fact that it's an itching/crawling feeling strongly suggests it has/had something to do with bugs.
> In the same way that cells form eye patterns on the wings of butterflies to excite neurons in the brains of predators, our NCA’s population of cells has learned to collaborate to produce a pattern that excites certain neurons in an external neural network.
I know there has been other work on adversarial networks, but this analogy (along with the photo of the butterfly) really communicates the idea well. And although I'm generally skeptical of claims that ANN "x" is the true model of how the human brain works, it makes a lot of sense to me that this is how adversarial self-organizing biological structures interact.
Also, it's a powerful example because of just how effective the butterfly wing's "eye" is. Despite understanding that it's a decoy, I still can't look at it and not be unnerved a bit by it.
What is the significance of this, e.g. can we use this approach to build arbitrary material or even living tissue? I can't help but think of this video [0], it seems there may be commonalities between what's happening in life and simple cellular automata.
This reminds me of a shareware program I had way back in '98 or something. It let you generate (or evolve?) seamlessly tiling textures using a cellular automaton with a bunch of parameters. I remember it being really cool at the time but can't for the life of me remember what it was called.
Besides their organic, slightly bacterial-colony-like look, I wonder if you also have trypophobia. Some of the images were triggering the same kind of feelings I get from that.
Very nice! The results look similar to my experiments in applying feedback to style transfer networks (https://www.youtube.com/watch?v=fGSXbYDpI9c), though the self-healing properties of CA make this more interesting!
This does a good job of illustrating how patterns formed by cells in nature can self-organize. I haven't dug in to see how similar the implementation is, but watching the way these textures develop, it sure looks similar.
[1] https://colab.sandbox.google.com/github/google-research/self... [2] https://colab.sandbox.google.com/github/google-research/self...