The color of each point is the number of iterations it takes to "escape" past the (-2-2i, 2+2i) barrier. So if you start at a point and iterate 3 times before escaping, it gets one color. If you start at a different point and have to iterate 4 times, then it gets the next color. All the colored pixels are outside the Mandelbrot set, and all the black pixels are inside (or you haven't done enough iterations to know).
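A minimal sketch of that escape-time coloring in Python, assuming the Burning Ship iteration z -> (|Re z| + i|Im z|)^2 + c (the bailout radius and iteration cap here are arbitrary illustrative choices, not anything from the original post):

    # Escape-time count for one point c; None means "didn't escape, paint it black".
    def escape_time(c, max_iter=100, bailout=2.0):
        z = 0j
        for n in range(max_iter):
            if abs(z) > bailout:
                return n                      # escaped after n iterations -> color n
            z = complex(abs(z.real), abs(z.imag)) ** 2 + c
        return None

    # One row of pixels: each count is then mapped to a color by the renderer.
    row = [escape_time(complex(x / 100.0, -0.5)) for x in range(-200, 200)]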
For the Mandelbrot set, I'm familiar with the proof that if |z_n| > 2 at any point, then the orbit of 0 escapes to infinity, regardless of c. The proof depends on it being an analytic function. How do you know that's also true for this equation?
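For reference, the bound I have in mind for the Mandelbrot case, as a sketch (it assumes the orbit has already reached |z_n| > 2 with |z_n| >= |c|):

    \[
      |z_{n+1}| = |z_n^2 + c| \ge |z_n|^2 - |c| \ge |z_n|^2 - |z_n| = |z_n|\,(|z_n| - 1) > |z_n|,
    \]

and since the factor |z_n| - 1 exceeds 1 and keeps growing, the modulus increases at least geometrically from then on, so the orbit escapes.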
I agree. It's very strange that extremely simple, very fundamental functions lead to such stochastic-looking, nonsymmetric behavior. Where does all that "entropy" come from? It's certainly not hidden in any big numbers in the fractal definition.
Your scare quote suggests you may already know this, but for the benefit of others, the answer is that there isn't any. Fractals may look visually complicated, but their information content is fully captured by the routines used to generate them, which include the formula and the coloring system being used.
This is one of the ways in which "information" is a highly counter-intuitive quantity for people. Very small numbers of bits in a given encoding scheme can produce incredibly complicated pictures, but there's still no more information than what was put in to start with. Simply looking at something and going "Yup, that's complicated" does not mean it has a lot of information in it.
Yes, that is what I mean. Why is there a disconnect between the complexity of the compact description and the naive description? Why are simple expressions not simple in all "natural" representations?
This is not addressable with the contrived example of "you can create an encoding scheme that reduces an arbitrarily complex description to an arbitrarily short identifier". After all, this is not a constructed compression scheme. Why does nature expand simple expressions to these particular complicated forms?
I don't really agree that the primes “have a very simple definition”, in that the notion of ‘primeness’ is one of satisfiability, and no constructive algorithm for producing the primes is known to exist.
I'm not sure what you mean by not having a constructive algorithm to produce primes. The Sieve of Eratosthenes will output any given prime, with the primes coming out in order, given enough time. There may not be an efficient algorithm, but for most definitions of "information" that won't matter. The sequence of all primes is at once so richly structured that we've probably only scratched the surface, and almost entirely bereft of information.
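A minimal sketch of that, in case it's useful (the cutoff is an arbitrary choice; an unbounded enumeration would use a segmented sieve instead):

    # Sieve of Eratosthenes: emits every prime up to `limit`, in order.
    def primes_up_to(limit):
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, limit + 1, p):
                    is_prime[multiple] = False
        return [n for n in range(2, limit + 1) if is_prime[n]]

    print(primes_up_to(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]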
I'm not sure what you're getting at here. Is it not relevant that there is a simple algorithm to enumerate the primes, or, even more so, determine whether a given value is prime?
> This is not addressable with the contrived example of "you can create an encoding scheme that reduces an arbitrarily complex description to an arbitrarily short identifier". After all, this is not a constructed compression scheme. Why does nature expand simple expressions to these particular complicated forms?
If not these particular forms then it would be some others. That fractals exist is pretty much the same fact as that pseudo-random number generators exist: you plug in a small amount of input and they generate noise. The fact that it looks like something is more a sign of the overactive human facility for pattern recognition than anything else. (Also, I suspect it would look a lot less like a burning ship if we hadn't been told it was a burning ship.)
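To make the PRNG comparison concrete (a sketch; xorshift32 is just one arbitrary choice of tiny generator, nothing the parent specified):

    # xorshift32: a few lines of state-mixing that turn one small seed
    # into an arbitrarily long, noise-looking stream.
    def xorshift32(seed):
        state = seed & 0xFFFFFFFF
        while True:
            state ^= (state << 13) & 0xFFFFFFFF
            state ^= state >> 17
            state ^= (state << 5) & 0xFFFFFFFF
            yield state

    gen = xorshift32(12345)
    print([next(gen) for _ in range(5)])   # looks random, but every bit is determined by the seed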
Well, I think there is a lot of complexity in the interpreter of the math string. For example, imagine I write an interpreter that takes a single character, and if it's 'A' gives a Mandelbrot fractal, 'B' a Burning Ship fractal, 'C' an image of the sun that is hardcoded into the program (what's the difference between such hardcoding of information and the 'hardcoding' of how multiplication is performed, anyways?). That's a program that produces a lot of 'complexity' from only 3 possible inputs.
My point is basically, the system includes the interpreter all the way up to the eye and the mind, and it seems tricky to completely detach such context from a description of how 'complex' things are.
It comes from the aperiodic structure of real numbers (that is a weird and wrong expression, but I cannot find a better way to explain it). Irrational real numbers are inherently chaotic as seen from the inside (even rational numbers might be seen as such).
Real numbers are weird. Totally.
Also, the fact that you are defining something in terms of convergence makes it even weirder, as limits do not tend to commute with anything unless smoothness conditions hold. And for these fractals, there is no smoothness near the "border".
While "real" numbers are indeed weird, I don't think that's the real explanation here. There are lots of small programs with complex output that have nothing to do with real numbers. If someone gave you one of these programs without telling you what it was for, real numbers would at best be a useful abstraction (bear in mind that there are no actual "real numbers" here, only finite bit-strings). But it might turn out to be the equivalent of a stream cipher, instead. That would, if done correctly, have even more apparent entropy than your typical fractal, which at least has some higher-level structure.
In my view, the real source of complexity is the fact that the algorithm can use arbitrary iterations to magnify small differences. It's the unpredictable nature of computing machines, not real numbers. The nature of real numbers is only a guide to developing and explaining that underlying complexity.
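A tiny illustration of that magnification (a sketch; c = -2 is chosen only because the real iteration x -> x^2 - 2 on [-2, 2] is a textbook chaotic case, not because it appears above):

    # Two starting points differing by 1e-12, iterated under x -> x^2 - 2.
    # The gap roughly doubles per step on average until the orbits are unrelated.
    a, b = 0.5, 0.5 + 1e-12
    for n in range(1, 46):
        a, b = a * a - 2, b * b - 2
        if n % 5 == 0:
            print(n, abs(a - b))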
But computing machines aren't unpredictable. They are very predictable. They appear to pull complex structure out of the aether.
Perhaps the interesting thing is that the set of n-bit programs yields at most 2^n possible outputs. Why are some finite number of infinite-length outputs accessible, but not others? Why does nature favor those sequences?
>But computing machines aren't unpredictable. They are very predictable.
Are they though? A 5-state Turing machine might seem trivial, but there are 5-state machines for which we don't know whether they ever halt [1]. Their behavior is, for all practical purposes, unpredictable to us. There are also minimalistic cellular automata that produce completely unpredictable patterns. [2]
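For a concrete version of the cellular-automaton point (a sketch; Rule 30 is my guess at the kind of minimal automaton meant, it isn't named above):

    # Rule 30: each cell's next state depends only on itself and its two
    # neighbors, yet the pattern below the single seed cell looks random.
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                       # single "on" cell in the middle
    for _ in range(15):
        print(''.join('#' if c else '.' for c in cells))
        cells = rule30_step(cells)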
There are no non-rational “real” numbers involved whatsoever in drawing this fractal (or e.g. the Mandelbrot set). Only rational numbers, simple purely rational functions, and some amount of truncation of the least significant bits at each step.
The deep “weird” “inherently chaotic” structure you’re talking about is the structure of the rational numbers.
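That can be made concrete (a sketch using exact rational arithmetic in place of floats; the particular c and the Burning Ship step are my choices for illustration):

    from fractions import Fraction

    # One Burning Ship step, z -> (|Re z| + i|Im z|)^2 + c, in exact rational
    # arithmetic: if c is rational, every iterate is exactly rational too.
    def step(re, im, cre, cim):
        re, im = abs(re), abs(im)
        return re * re - im * im + cre, 2 * re * im + cim

    re, im = Fraction(0), Fraction(0)
    cre, cim = Fraction(-3, 2), Fraction(-1, 100)
    for n in range(5):
        re, im = step(re, im, cre, cim)
        print(n, re, im)   # the denominators grow quickly, but nothing here is irrational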
Structure is not the same thing as information. One is accidental, even if useful, and the other has to be "designed".
People arguing about the complexity of the human brain (especially those that know a lot about the way it works) fall into this trap constantly, and it drives me nuts: there's a ton of structure, but proportionally negligible information in the brain's programming.