They're all strangely intention-less on every imaginable level. That can be unsettling.
I've listened to, and made, a lot of music like that. When I want to relax or think, I like music that engages me in some way but has a sort of drift, an ambient (in the Eno sense) quality even if it's slamming Berlin techno. It's different from the prog-rock I grew up with, which was difficult and loaded with intentionality.
I ended up experimenting with music that was more generative, doing pretty sophisticated things with a degree of structure but without intentionality: always a random factor, a journey to an unknown goal (typical end result: wandering around for a while)
AI art is like that. We see it wandering around for a while. It's getting better at picking up the trappings of identity, but persistently lacks intentionality.
Maybe the trick is to supply the intentionality instead of the identity? Instead of 'creepy moat', a visual identity, make it do a painting of "you are not going to survive and that's good" or "thank you but I am not worthy".
If you have to feed it words, get a poet, don't describe the picture.
There's no difference between this and synthesizer music which sounded shockingly alien when it first came out. My uncle tells me about people going to Pink Floyd or Tangerine Dream concerts and it was virtually a religious experience to hear music so utterly otherworldly. Techno music had similar effects when it first hit the scene with pioneers like Kraftwerk or the Detroit and Chicago Techno artists.
It also came with the criticism/fear that "machines are replacing people as musicians" which was bullshit. We just got a new set of musical instruments based on electronics that musicians could play and that led to an explosion of new musical genres.
Photography was initially derided as "not art" or "machines replacing human artists" too. It was neither. It was just a new way to make pictures that spawned new art forms.
This is just another new set of artistic tools. It will spawn new art forms with their own rules and sense of technique, aesthetics, and style.
>Photography was initially derided as "not art" or "machines replacing human artists" too. It was neither. It was just a new way to make pictures that spawned new art forms.
This is one thing about progress and art that I have found really interesting.
Like, how the invention of photography completely changed painting.
Ultra-realism became uninteresting overnight, as soon as it was possible to create perfect depictions via cameras.
Painting, as an art form, didn't "die". Painters weren't replaced by machines.
Instead, artists' interpretation became more important than their technical ability to reproduce reality.
We've since seen impressionism, expressionism, and more abstract movements like cubism.
The invention of photography ended up being a gift to art.
And now we see artists embracing digital artworks as their medium of expression.
So, I guess the fear of AI and machine learning being the end of art as we know it might very well be unfounded.
It looks to me more like the possible birthplace of new modes of expression. A reason for artists to rethink their art.
I think at the moment there is an important difference that the OP is talking about, namely the ability of the artist to channel their intention into the work. I think even early synthesizers offered a degree of control which even if the results were other-worldly were still channeling the artist’s intention. At the risk of wading into the murky waters of defining “what is art,” I don’t really feel like feeding an ML model a short sequence of words is “art-ing.”
However I think you’re right that in the long run these types of technology will become another tool in the artist’s toolbox and are not in danger of replacing artists. But I also think the OP’s criticism is valid. I suppose you could argue this is a bit like early criticism of photography as “not art” and that feeding the model is like picking what to photograph and how to frame it and such, but I still feel like there is a key difference in terms of the ability to have an intention and to have some ability to foresee how that intention will be realized in the work. Feeding a model inputs and guess-and-checking the results until you get something cool does feel “less artistic” to me.
I think it would be one thing if these tools worked such that with experience you gained a kind of intuitive understanding of how your inputs map to the outputs and a finesse in crafting that poetry for the machine. But based on my (limited) understanding of ML models I have a hard time imagining that is the case. These models are complete black boxes, where a small perturbation of the input can create large, unpredictable variations in output. That makes me think that there is a strong “guess and check” aspect to these creations. And I think tools with that characteristic are limited and frustrating to create with, because you cannot channel your intention effectively through that unpredictable mapping of input to output. But I have no doubt these tools will continue to evolve in the direction of being able to be wielded more intentionally.
"Intention" is what the final fraction of a bit gap in predictive performance feels like from the inside of your head.
It has all the low-order correlations learned well, but there are long-range correlations still lacking. (Think of a detective novel where the clues are hidden thousands of words apart, in very slight tweaks to wording like an object being 'red' rather than 'blue'.) As models descend towards the optimal prediction, 'intention' suddenly snaps into place. You can feel the difference in music between something like a char-RNN and a GPT-2 model: it now sounds like it's "going somewhere". (When I generate Irish music with char-RNN, it definitely feels 'intention-less', but when I generate with GPT-2-1.5b, for some pieces, suddenly it feels like there's an actual coherent musical piece which builds, develops a melody and theme, and closes 'as if' it were deliberately composed. Similarly for comparing GPT-2 stories to GPT-3. GPT-2 stories or poems typically meander; GPT-3 ones often meander too but sometimes they come to an ending that feels as if planned and intended.)
Once this final gap is closed, it will just feel real. Like if you look at No Alias GAN (~StyleGAN4) faces, there's no 'lack of intention' to the faces. They just look real.
"intention-less" - agreed. Lacking in connection with anything human. It feels like a glorified /dev/urandom. I feel empty looking at these. I once wrote a program that iterated over every combination of colours of pixels to create every possible image for the specified size - similarly empty.
You feel empty because you know they were generated by an AI. If you didn't know that beforehand, you would probably ascribe intent to them as you would anything created by a human, you would feel something, but only because an algorithm was pushing buttons in your monkey brain.
That implies "connection" and "intention" aren't properties of the art or the artist but something entirely manufactured by the viewer.
there is, of course, the technical aspect: the blurred edges, the repeated elements, the relative placement of things and unrelated objects, a kind of statistical uniformity/non-uniformity. this is the technical aspect which shows they were generated by a machine and a particular implementation of neural networks/ai.
but then there's also the intentlessness that is symptomatic of a lot of modern art, and symptomatic of why a lot of people don't like modern art.
these combine both aspects, and both aspects can be talked about and discussed separately.
To be sure, maybe one day we'll be able to consistently generate something artistic and convincingly filled with what our minds confuse for intent, but these aren't there and ai isn't currently at that level, and there is a particular property of these that makes them feel empty, just like a lot of modern art also feels empty and intentless.
>there is, of course, the technical aspect: the blurred edges, the repeated elements, the relative placement of things and unrelated objects, a kind of statistical uniformity/non-uniformity. this is the technical aspect which shows they were generated by a machine and a particular implementation of neural networks/ai.
I agree there is a specific signature to this algorithmically generated art that becomes recognizable after you've seen enough of it, but these elements have also been purposely employed by human artists for years; they don't objectively prove something was machine generated.
If you didn't know what to look for, you would probably assume “an abstract painting of a planet ruled by little castles” was done by an artist on a digital canvas. And I could absolutely see “The Wasteland” by T.S. Eliot as an album cover.
>but then there's also the intentlessness that is symptomatic of a lot of modern art, and symptomatic of why a lot of people don't like modern art.
I think you're inadvertently proving my point. AI generated works feel empty and intentless not simply because of their repetition or arrangement but because they resemble modern art, which despite being created by humans is interpreted by many people as empty and intentless. This interpretation isn't a rejection of machine-generated art so much as a rejection of the aesthetics of human postmodernism.
Very interesting. There is a principle with a lot of modern, abstract art, which is that the work is not complete until the viewer sees it - that is to say, the observer brings a portion of the meaning to the work. This AI art seems to go one step further, there isn't really an artist anymore, in the traditional sense. The creator of the work is more like an observer - choosing which output they like best. The "artist" becomes a consumer!
The artist was already the consumer. Whatever the medium, the artist has to ultimately select particular things as "works of [their] art". If I paint my bathroom and then go on to paint a masterful painting, probably only one of those things is going to end up in a gallery, and that relies on there being a consumer.
I am not sure how to say this best, it probably sounds negative but I don’t mean it that way. It is really interesting because using these tools abstracts away the talent/skills part of creating art - you say your idea, and get the computer to keep showing you completed pieces until you find what you meant, or at least something you found interesting. It reminds me a bit of the role of the DJ to do the searching and curating, but not actually create the music.
In fact you could imagine a musical version of this where the operator is describing feelings or styles and the AI is generating music to match where the operator wants to go.
You're right that it removes some (honestly perhaps most) of the craftsmanship that comes with creating art. I do think the resulting work is still interesting, valuable, and art from a practical point of view, though. There is perhaps a threshold where this is not true, perhaps related to how much curation/pruning/how indiscriminate the human is in determining the final outcome.
That is to say, if there is some text to image generation program, and the human generates 1 image based on the first phrase they thought of, that image they generated is probably not going to be very interesting, at least based on the image generation stuff I've played around with so far. It often takes time and several attempts / variations on the phrasing to get something which is interesting and novel.
People have been finding tricks like appending the words "Unreal Engine" onto the phrase in order to produce certain types of results. Some people actually go into the code to tweak things to get desired results.
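Just to make concrete how a suffix like that enters the pipeline, here's a minimal sketch assuming a CLIP-guided setup like the ones discussed in the article (the prompts, the image size, and the random stand-in image are purely illustrative). The whole prompt, suffix and all, is encoded by CLIP, and the generator keeps nudging its output toward images whose embedding scores higher against that text, so "rendered in unreal engine" just shifts what counts as a good match:

```python
import numpy as np
import torch
import clip                      # OpenAI's CLIP package (github.com/openai/CLIP)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# The same scene described with and without the suffix trick.
prompts = [
    "a castle on a cliff at sunset",
    "a castle on a cliff at sunset, rendered in unreal engine",
]
text = clip.tokenize(prompts).to(device)

# A random image stands in for one candidate frame from whatever generator
# (VQGAN, BigGAN, etc.) is being steered.
candidate = Image.fromarray(np.uint8(np.random.rand(512, 512, 3) * 255))
image = preprocess(candidate).unsqueeze(0).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Cosine similarity of the candidate against each prompt variant; a
    # CLIP-guided generator repeatedly nudges its output toward higher scores.
    scores = (img_emb @ txt_emb.T).squeeze(0)

for p, s in zip(prompts, scores.tolist()):
    print(f"{s:.3f}  {p}")
```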
I do wonder how true this will be going forward. It feels like this media synthesis stuff is progressing at the speed of light, and perhaps it'll only take 1 iteration / attempt to craft the interesting and novel stuff that we're looking for.
So at least at the moment it's maybe not so different from anything else. I think of it as exploring the possibilities in a given space. I work on video game development, and it's the same story there. If you're creating a game prototype, you might have an initial idea which sparks development, but once the initial seed is implemented the game often takes on, and is guided by, a life of its own, shaped by the possibilities in that given space which you are narrowing down into a subset which seem to work and gel together. That's how I view it anyway.
>You're right that it removes some (honestly perhaps most) of the craftsmanship that comes with creating art.
Perhaps art, going forward, is simply described as the process, not the result.
I don't doubt that a computer could (or will) produce a perfectly convincing Charlie Parker solo or Richard M. Powers painting. There really was less there than met the eye (or ear) and most of human output consists of cliches in any case.
My take is to become increasingly archaic and self-sufficient in interests. Greek red figure pottery looks like a good place to end up after the walls are filled with amateur art, but a kiln looks pricey.
‘Beautiful thing’ will indeed be massively available (as it is right now), but ‘beautiful thing touched by a human’ will always remain scarce. Only the latter will be truly considered an art object.
" It reminds me a bit of the role of the DJ to do the searching and curating, but not actually create the music"
It very much depends on the type of DJ. But a good DJ performing live is indeed generating music, as he feels the vibe of the crowd and mixes in exactly the tracks that fit the moment.
Not predefined track after track, but tracks blending into and over each other.
If you do this with parts of a song from here and then there, with a small snippet from something else - it is indeed generating new music in my book.
And I think alien dreams and ai generated art alike ... can move into that direction. Generating art, by knowing the various tools and combining them.
I am a DJ myself, but I would still give credit to the authors of the tracks rather than claiming it for myself (even when it’s a unique interesting combination or flow).
As is done here with the generated art, it probably does not make much sense for one person to claim he "created" and owns this image.
It would be hard to judge how much credit goes to the programmers, the basic AI researchers, and all the people who created the input the AI learned from ...
>> In fact you could imagine a musical version of this where the operator is describing feelings or styles and the AI is generating music to match where the operator wants to go.
OpenAI already did this with Jukebox! I'm just surprised more people have not used it as an "instrument" vs as a novelty.
One downside with Jukebox: it takes ~9 hours to generate ~90 seconds of audio (even on an NVIDIA V100 GPU) since it's an auto-regressive model, making experimentation and 'co-creation' much harder.
>> it takes ~9 hours to generate ~90 seconds of audio (even on a NVIDIA V100 GPU)
btw this is true if you upsample all 3 levels. I have found in practice that you are fine upsampling 2 levels and then using a DAW to "clean up"/"remaster" [0]... again, this will work better for certain sounds than others.
The last upsampling step is by far the most expensive, so cutting that down cuts total time by 75%.
agreed. full waveform "music" (unlike say MIDI) is just many more variables than a "picture" or even "video". Also you have to "stitch together" a lot of these samples to get anything that resembles a "new" song. More akin to mining... but still it's kind of crazy what can be done with it.
>> Why doesn't the architecture use a model to generate sheet music / MIDI, then layer another model on top to create instrumentals?
there are various models that do this [0]... they just don't have the same power to generate waveform music that Jukebox does.
Jukebox is effectively a 2-step model: a) a 3-level VQ-VAE compresses the music (so think of this as the MIDI equivalent) and b) a transformer then learns/generates sequences.
The compression is the expensive part of the model.
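To make that two-step picture concrete, here's a rough toy sketch of the pipeline's shape. Everything in it is an illustrative stand-in rather than the real openai/jukebox code (the hop sizes are roughly the compression factors described in the paper); it mainly shows why the bottom level dominates the cost:

```python
import torch

# Toy sketch of the two-step structure described above. Numbers and function
# bodies are illustrative stand-ins, NOT the real openai/jukebox API.

CODEBOOK = 2048                                   # discrete codes per level
HOPS = {"top": 128, "middle": 32, "bottom": 8}    # audio samples per code
SAMPLE_RATE = 44100

def sample_prior(n_codes):
    """Step (b) stand-in: the transformer prior samples codes autoregressively,
    one at a time, which is why minutes of audio take hours. Here: random."""
    return torch.randint(CODEBOOK, (n_codes,))

def decode(codes, hop):
    """Step (a) stand-in: the VQ-VAE decoder maps codes back to a waveform.
    The real decoder is a learned upsampler; this just repeats values."""
    return torch.randn(len(codes)).repeat_interleave(hop)

seconds = 10
for level, hop in HOPS.items():
    n = seconds * SAMPLE_RATE // hop
    wav = decode(sample_prior(n), hop)
    print(f"{level:>6}: {n:6d} codes -> {wav.numel()} audio samples")
# The bottom level needs 16x more codes than the top one, which is why skipping
# its pass (as suggested upthread) saves most of the wall-clock time.
```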
I'm not familiar with Jukebox, so maybe I'm wrong. It seems like you'd want a fast architecture to use a model to generate sheet music / MIDI, then layer another model on top to create instrumentals?
To me it feels like AI-assisted divination. It's like staring into clouds of smoke or a crystal ball until you find something that looks significant, but using a computer algorithm instead.
Divination has been around throughout human history, so it makes sense that it would emerge again through the use of machine learning algorithms.
The spiritualist movement and similar late 19th / early 20th century occultism can easily be re-interpreted as oblique science fiction... or vice versa.
The whole industrial revolution is at least culturally and historically inseparable from the occult revival in the sense that all that "natural philosophy" stuff was the primordial soup of ideas from which modern science emerged. Methodological scientific empiricism is the gold that was left in the crucible, and it yields magic that actually works.
... now back to writing obscure runes in an alien language to instruct the daemons in the ether ...
I think it's a little silly to ask "but is it _really_ art?" To me, it feels like it's no less art than photography. Photographers don't create the world they're photographing, but they engage in an incredibly high level of curation to show the viewer something unique.
At the end of the day, anyone who looks at this and enjoys it doesn't really care if it's _really_ art. But they'll get bored with it eventually as they start to notice little similarities. That is, until they find a set of pieces generated by someone else's prompts which, for some reason, feel fundamentally different. At that point I think the "it's not art" argument starts breaking down.
I've made generative art and it involves studying the algorithms, guessing what inputs might produce good output, and generating hundreds of tests and samples, almost all of which are discarded without value. It takes time and skill to predict what'll work, and touch up the result to something presentable.
It's funny the linked article mentions "planet ruled by little castles" specifically because I've also made those [1] using generative methods, though in this case style transfer rather than language visualization.
Interesting! I've also been doing a bunch of generative art recently. That description almost reminds me of this artblocks project https://artblocks.io/project/31
Yes. It feels more like exploring than painting. Often it feels like watching a small child draw and trying to steer what is drawn. Like when I tried to ask for storm clouds made out of lava https://www.reddit.com/r/bigsleep/comments/o033x1/playing_wi... the AI was like "No, Dad! Lava stays on the Ground!"
But, I still believe that "It is Art". (As much as I and everyone else hate that whole debate.) I think of "Can a computer make Art?" the same as "Can a brush make a painting?" Unless you are finger painting, you don't make the image. The brush does. But, without you, the brush would be inert. Or, at least undirected.
With the GAN, random noise inputs get random noise outputs. Finding interesting results is a lot of work. Much like finding an interesting sculpture within a block of stone. "Interesting (to humans)" being the key word that brings the humanity into the process.
It takes 2-3 minutes for an experiment to warm up. If you are paying attention, you usually can tell it's not going well at that point and cancel it. If it looks like it might go well, it can take around 10 minutes to settle. I've seen a few cases where some people got great results by letting it run for hours. But, with long runs I've also seen more cases where the evolution "gets bored" with where it is settling and suddenly makes a large jump that rarely looks better than what it abandoned.
I liked how the T.S. Eliot one, which the author calls "sublime", was actually impressively sophomoric. It really was like a painting an able, shallow person would come up with to reflect their idea of T.S. Eliot's poetry.
I went and stared at the pictures for a bit after reading this.
Not much experience of art really, but I think you're probably right. They're all interesting pictures - but "let's assemble some related elements in a new combination".
I did really like “a face like an M.C. Escher drawing” though.
Maybe this is the final form of postmodernism? "There's no meaning here, but will you ever figure that out?"
I've noticed that AI-generated images give me a weird, 'skin crawling' sensation. Somewhat similar to nails on a chalkboard. Does anyone else experience this?
Malfunctioning pattern-matching is a hallmark of mental illness, so it makes sense that any but a nearly-perfect "AI" system would often produce work that's creepy or off-putting, for that reason.
What’s creepy about AI images I believe can be traced to the inputs. Give an AI images of people and it doesn’t have the same sensitivity to its own existence to preempt what we meat puppets find shocking and gross.
As an artist who uses generation as part of my art (the rest is me) I find these images extremely cool, though I prefer non objective abstractions. Given the technology is still immature, I wonder if improving the technology will lead to less "art" — what we see currently are really imperfections in the process rather than deliberate choices. Maybe in a few years the end result will be more realistic and less fantastic. Will that be better?
I can imagine there will be software to iterate on an image... "A man. Yes, a very tall man. Not that tall. Standing in a field of wheat. The wheat is abstract. The man is rendered in Unreal Engine."
> Will that be better?
Most importantly, it will be better when it stops putting swirly nightmare creatures in everything. Those give me the heebie-jeebies.
There are many theories of what "real" art is: that which is beautiful, that which powerful institutions put in a museum, that which satisfies a particular psychological itch, that which captures the values of our times, and so on. There are entire courses built up on this.
For my purposes, I am happy to say art is whatever people decide to call art. So this is art.
But it isn't "valuable" art, and this is not a deep philosophical point about intention in the making of art, or the ability to pick up human emotions or something.
This is a shallow point about economic supply and demand: if we can produce a lot of this in bulk -- and from this demo it seems we can -- then in general it will be devalued.
I think there is still a role for critics to play though, and that part might be exciting to us old school humans still deploying "natural intelligence." The interpretation of art is still constrained by our ability to imbue the visuals with meaning. In fact, we may be able to use that ability to make particular outputs more valuable than others.
If anything, it might be valuable in that a printer can have access to "unique" designs to print and sell. Some of this stuff (especially cityscapes & landscapes) reminds me of posters you buy, pre-framed, off the shelf at home stores. The difference here is that these images are somewhat unique, and royalty free (assuming it is...). Once sales of a design start slipping, roll a new image and get a new one to sell.
Also imagine, if a printer can print different images on demand, being able to sell a random image generated with the prompt being your wedding day, etc. It's like printing out and hanging your DNA sequence. A bit of you was the seed for its creation. It's the sentimental bit that could be valuable.
Art exists in context, in its place within history. This junk lacks any context. Its creator lacks understanding, rendering only images without meaning. If people find them pleasant then that's good for them. I find many trees pleasant to look at. That doesn't make them art.
Wow, I didn't know about this and it is definitely connected to the prompt experimentation I've been doing with GPT-3. Incredible images and lots of exciting paths forward to generate new styles of outputs. I wonder how seriously this is being taken in the art world. It seems like it could be an incredible installation in an art museum.
One problem with images is that where a text generator's output could be nonsensical or ungrammatical, the output of an image generator can be nightmarish. See the Face Like an Escher Drawing in the article. I'm not sure I want to spend an hour looking at the outputs of the generator to select the good images and drop the bad ones.
So I'm wondering if image-based NN generation will ever run across the same copyright issues as the source code generation of Copilot that's becoming a hot issue at the moment.
It's already known that Copilot was trained on GPL licensed code, and code can easily be searched to check for copying.
What about a neural net that generates collages that resemble some obscure photographer's work that's under copyright? What about determining the copyright status of a model without knowing its source material? What will be the standard for determining copyright violations when, unlike code, it is not possible to coerce the model to output a near-1:1 duplicate of the original?
And what about training a neural net on images that are illegal to possess in certain jurisdictions?
Does anyone know of any AI generated visual art that's about the output (form and composition) that isn't complete garbage? It's so far away from competing with the generations of aesthetic development we see in the status quo of contemporary fine art... stuff that's advanced enough that we need college degrees to begin to intellectualize its effectiveness. Artworks that better deal with power and agency and the process and implementation of AI are another story, IMO, but those usually manifest as films, performances, or other forms of documentation, not an AI generated thing itself.
Are you asking for stuff that's not complete garbage? Status quo? Or, advanced enough that we need college degrees to begin to intellectualize their effectiveness?
Within the tiny community I've been hanging out with, I like to think some of these are not complete garbage.
Art is quite subjective. So hard to say what is "garbage" and what isn't. I assume this comment will get downvoted due to that.
That being said I do agree with you :). After staring at GAN art for a year I find it is extremely "empty" and essentially all looks the same, with very similar visual artifacts that make them look bad.
I view style transfer and GAN art generally as "trippy artistic filters". I don't think it's more than that at the current state of the art. Even in images linked above me: they are cool, but do just seem to be trippy distortions of existing things. If you like that that's fine, but I don't think it's more than that.
Looking at ML as a type of artistic filter though is interesting: and does open up some interesting use cases. I created a project in college using style transfer filters to make a short film. I think it achieved an effect that would have been unachievable without the ML.
https://www.youtube.com/watch?v=mfZ9-9tH_uE
What are paintings if they are not trippy distortions of existing things? ;)
I've been having a lot of fun using GANs as what I think is a better form of style transfer. The default approach uses noise as the initializer. But, by specifying an initial image, you can get a result that is very similar to style transfer, but more scene-coherent.
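In case anyone wants to play with the idea, here's a hypothetical sketch with toy stand-ins (none of these names are a real library API; the tiny encoder/decoder and the scoring function are placeholders). The only difference between the default and the "init image" variant is where the latent you optimize comes from:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the "initial image" trick for a latent-optimization
# setup (VQGAN+CLIP style). Everything below is an illustrative stand-in.

LATENT_CH = 64
encoder = nn.Sequential(nn.Conv2d(3, LATENT_CH, 3, padding=1), nn.AvgPool2d(16))
decoder = nn.Sequential(nn.Upsample(scale_factor=16), nn.Conv2d(LATENT_CH, 3, 3, padding=1))

def clip_score(image, prompt):
    # Stand-in for CLIP's image/text similarity; the real thing embeds both
    # and returns a cosine similarity to be maximized.
    return -((image - 0.5) ** 2).mean()

def make_latent(init_image=None):
    if init_image is None:
        # Default: start from noise, so early steps wander before settling.
        z = torch.randn(1, LATENT_CH, 16, 16)
    else:
        # "Init image" variant: encode an existing picture into latent space
        # and optimize from there, so the scene layout stays coherent while
        # texture and style drift toward the prompt.
        with torch.no_grad():
            z = encoder(init_image)
    return z.clone().requires_grad_(True)

def generate(prompt, init_image=None, steps=100, lr=0.1):
    z = make_latent(init_image)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = decoder(z)                  # latent -> image
        loss = -clip_score(image, prompt)   # steer the image toward the prompt
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()

# Noise init vs. starting from an existing 256x256 photo.
from_noise = generate("a planet ruled by little castles")
from_photo = generate("a planet ruled by little castles", init_image=torch.rand(1, 3, 256, 256))
```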
I appreciate you adding a more technical description to aid my initial sentiment, and thanks for sharing your vid; it was funny, and you actually managed to harness style transfer filters to achieve a particular feeling, which I haven't seen much of.
Whoah, it just came to my mind it could be an interesting mechanism to explore for procedural generation (of levels and other stuff) in games! Does anyone know of any experimental games already trying to do that?
The technology doesn't scale yet to consumer hardware. It's limited to web-accessed mainframes (like AI Dungeon) and researchers with racks of the latest graphics accelerators.
This is very interesting, I would love to play around with this.
However, it looks like the quality of the result really depends on the prompt, or in other words, on the quality of the training dataset captions.
Would there be a way to "fine-tune" the network on a smaller, better-captioned dataset to have better control over the output?
Can I make my own dataset and use it with few-shots learning to expand the capabilities of existing networks? Making it better at generating say, spaceships or fantasy animals?
This one's interesting. It seems the typo in "weaping" guided the model to generate images of 40K weapons. I see a definite bolter on the right and probably an Eldar whatchamacallit or Chaos Marine's bolter on the left. Or a Tyranid bioweapon?
new and impressive kinds of food without nutrition; like a fancy dance ball that never ends - no beginning and no end; like a series of sunrise and sunset from around the world, while you never leave a chair in a closed room. Machine, master?
>It was discovered by @jbustter in EleutherAI’s Discord just a few weeks ago that if you add “rendered in unreal engine” to your prompt, the outputs look much more realistic
If we are calling it art, the people giving it prompts certainly can't be credited as the artists. The machine is the artist you own. Make it a request and it will create a painting for you.
Hmm, I certainly see where you are coming from; a large part of art is that an artist really did some feat in creating it. I am sympathetic to the notion that aspects of modern art seem to value vision more than outright skill.
Would you say that the person who made the machine/software at least has claim to the title of artist?
I think about Sol LeWitt, where his main works are instructions about installing the piece. You have works that are in buildings and such, but then, you have works in museums, where the way you 'acquire' a piece seems to work is that a representative from LeWitt's studio (this seems like a requirement? not an authority, any description of the endeavor mentions how exacting it is) helps the museum staff install (i.e. draw, paint) the 'actual' piece.
> Would you say that the person who made the machine/software at least has claim to the title of artist?
Here a machine has memorized/trained on millions of art works. Then you ask it to make something out of what it knows (understands?).
This in its own right is like an artist who sees other people's artwork and creates his own out of some idea.
If you are a good painter and I ask you to paint something for me because I can't, I will only be claiming that idea, not the artwork you created.
It's like GPT-3, which can write an article based on some initial idea/prompt. Can I claim the whole thing it generated for me based on the initial prompt? I recently read that "unicorn who speak English" text that it generated and it's amazing. It's generated text but it's amazing. It won't be fair to claim ownership because it was trained on millions of texts written by other humans.
But again that's also how writers train. They read lots of text written by other people. If you write something for me, you are the writer.
This is how I think about art/text generated by machines that learned that specific domain.
EDIT: writing code to draw a line and then drawing a line is different from training on millions of lines drawn by other people to explain what a line is and then asking to draw something that looks like a line. So to answer your original question, we still can't say the person who wrote that software is the artist. He can be if he feeds it his own artwork only but that's not the case here.
doubt it, it really depends on how the public is moved by this form of creation
not necessarily the level of discipline involved, but the lack of discipline necessary and low barrier of entry will ensure that the public is not interested in discerning value in these works
there will only be some personalities that the public gathers around
Except this is not "Art" with a capital "A", it is pretty imagery, with an interesting backstory. The technology is fantastic, but do not get carried away - this is in no way "Art" in the same way human constructions in a museum is Art.
Art is a human expression of an aspect of life: a human reflecting that aspect of life in a manner betraying a comprehension only possible with the emotions and understanding of a human facing our complex society from their limited frame of reference, and describing their situation through the mediums we call "Art".
Art is not pretty pictures. Art is a complex, multi-dimensional communication medium between highly complex, self aware, comprehending entities. AI and AI generated "art" has none of these qualities. These are pretty pictures with an interesting technological backstory. They may be sold as art to unaware consumers, but they are less "art" than a bikini girl poster.
It seems more like mimicry to me than actual “creativity”, although I am pretty certain there will be a whole trend of artists doing this for the next ten years and we will get someone really big out of it. It could be someone like Andy Warhol who would democratise “fine” art and not just pop art. For example, imagine a combination of 3D printers with brushes and deep-learning-inspired paintings. These will not be as unique as the original inspirations for those paintings; however, they would be a cheap alternative to what you can currently only get as a very cheap-looking art print.
The most fascinating thing about these programs is the undeniably imaginative, dreamlike nature of the output. Spend time actually using them and you will be continually surprised and engaged. Any human nuance you can interpret visually can be generated.
There is a high level of similarity with the mechanism human brains use to visualize things. This makes sense, because the neural networks are being trained to approximate that very function.
The output of these programs is not mimicry any more than the output of a human artist is. It's just more limited in the computational power available to the algorithm. The space of these generative expressions is vast, and significantly overlaps the space of art humans are able to produce.
Moore's law and algorithmic improvements will inevitably expand the capability of software to produce art to such an extent that all "human" capacities will be exceeded, for any and all metrics and nuances you can imagine.
I am in no way saying they will not be popular with consumers, these curated AI constructions; nor am I saying the people behind them will not hire PR agencies to promote them and their work as serious Art. It is the curation that carries any human expression within them, not the idiot-savant output the generative AI poops out.
Ernst Gombrich (https://en.m.wikipedia.org/wiki/Ernst_Gombrich ) talked about a “motif in the artists’ inner world” in relation to art creation. There is not an inner world to be represented in the case of the AI creations.
There is no distinction of art with a capital A versus with a little a.
I agree that there will be challenges to broad respect of this creation process and result, but I can see communities gathering around specific personalities no matter what they create.
Similarly, the public will likely be moved by stories; perhaps some artists will even create new poetry specifically for generating an image, where the poetry itself moves the audience too.
But the distinctions you have created are not a relevant way of articulating this path.
It's a different sophisticated and mysterious brush for a human artist to use - but the AI is not generating any art. The human curating the AI's output through their lens of human experience is the Art.
That's like saying it's not the painter generating any Art, the gallery curating the art through their lens of critical artistic appreciation is the Art.
No, a human behind the brush is significantly different than an algorithm. The human operating the brush "curates" what to create in themselves, the artistic creative process. But, at the same time, many an art gallery is known through its curation more than the specific artists it carries at any given moment. This is similar to the fame of the classic club DJ - their curation of what to play when is the cause for a club's popularity, and not just a listing of the songs played, in alphabetical order. Art is collaborative, even when performed by a single artist. Art is a communication. Algorithms generating images are not communications, unless the developer behind the algorithm is trying to operate "their brush" for specific image creations the artist imagines in their head and is using algorithms in place of a brush because they are that artistically and technically sophisticated.