Dall-E 2 mini: what exactly is ‘AI-generated art’? How does it work? Will it replace human visual artists?

Josh, I’ve been hearing a lot about ‘AI-generated art’ and seeing a whole lot of truly insane-looking memes. What’s going on, are the machines picking up paintbrushes now?

Not paintbrushes, no. What you’re seeing are neural networks (algorithms that supposedly mimic how our neurons signal each other) trained to generate images from text. It’s basically a lot of maths.

Neural networks? Generating images from text? So, like, you plug ‘Kermit the Frog in Blade Runner’ into a computer and it spits out pictures of … that?

You aren’t thinking outside the box enough! Sure, you can create all the Kermit images you want. But the reason you’re hearing about AI art is that these models can create images from ideas no one has ever expressed before. If you do a Google search for “a kangaroo made of cheese” you won’t really find anything. But here are nine of them generated by a model.
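
If you want a sense of how someone actually does this, here is a minimal sketch using an open-source text-to-image pipeline from Hugging Face’s diffusers library. It is not the model used to make the kangaroo images in this article, and the model checkpoint named here is just one publicly available example; the idea, though, is the same: type a prompt, get back pictures.

```python
# A minimal sketch, assuming the open-source diffusers library and a publicly
# available Stable Diffusion checkpoint (not the model used for this article).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")  # needs a GPU; drop torch_dtype and .to("cuda") to run (slowly) on CPU

# Ask for nine images of something no one has ever photographed
images = pipe("a kangaroo made of cheese", num_images_per_prompt=9).images
for i, img in enumerate(images):
    img.save(f"cheese_kangaroo_{i}.png")
```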

You mentioned that it’s all a load of maths before, but – putting it as simply as you can – how does it actually work?

I’m no expert, but essentially what they’ve done is get a computer to “look” at millions or billions of pictures of cats and bridges and so on. These are usually scraped from the internet, along with the captions associated with them.

The algorithms identify patterns in the images and captions and eventually can start predicting what captions and images go together. Once a model can predict what an image “should” look like based on a caption, the next step is reversing it – creating entirely novel images from new “captions”.
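
To make that “which captions and images go together” step concrete, here is a minimal sketch using the openly available CLIP model from Hugging Face’s transformers library, as a stand-in for whatever larger models sit behind the commercial systems. Given one image and a few candidate captions, it scores how well each caption fits. The image URL is just a placeholder example.

```python
# A minimal sketch, assuming the open-source CLIP model as a stand-in for the
# much larger proprietary models discussed in the article.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image will do; this URL is just a placeholder example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = ["a photo of two cats", "a photo of a bridge", "a kangaroo made of cheese"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # one score per caption

for caption, prob in zip(captions, logits.softmax(dim=1)[0]):
    print(f"{prob:.1%}  {caption}")
```

The “reversing it” step described above is the hard part: turning that matching ability into a model that produces new pixels from a caption it has never seen.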

When these programs are making new images, are they finding commonalities – like, all my images tagged ‘kangaroos’ are usually big blocks of shapes like this, and ‘cheese’ is usually a bunch of pixels that look like this – and just spinning up variations on that?

It’s a bit more than that. If you look at this blog post from 2018 you can see how much trouble older models had. When given the caption “a herd of giraffes on a ship”, one model created a bunch of giraffe-coloured blobs standing in water. So the fact we are getting recognisable kangaroos and several kinds of cheese shows there has been a big leap in the algorithms’ “understanding”.

Dang. So what’s changed so that the stuff it makes doesn’t resemble completely horrible nightmares any more?

There have been a number of developments in techniques, as well as in the datasets they are trained on. In 2020 a company named OpenAI released GPT-3 – an algorithm that can generate text eerily close to what a human could write. One of the most hyped text-to-image algorithms, DALL-E, is based on GPT-3; more recently, Google released Imagen, which uses its own text models.


These algorithms are fed massive amounts of data and forced to do thousands of “exercises” to get better at prediction.
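
As a loose illustration of what one of those “exercises” looks like under the hood, here is a toy training step in PyTorch. The tiny model and the random data are entirely made up; real systems do something far more elaborate, billions of times over, on image-caption pairs.

```python
# A toy sketch of training "exercises": the model makes a prediction, is scored
# on how wrong it was, and its weights are nudged so the error shrinks next time.
import torch

model = torch.nn.Linear(512, 512)            # stand-in for a much larger network
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

image_features = torch.randn(8, 512)          # pretend these encode 8 images
caption_features = torch.randn(8, 512)        # ...and their matching captions

for step in range(1000):                      # thousands of "exercises"
    predicted = model(image_features)         # guess the caption features
    loss = torch.nn.functional.mse_loss(predicted, caption_features)
    optimiser.zero_grad()
    loss.backward()                           # work out how to improve
    optimiser.step()                          # nudge the weights
```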

‘Exercises’? Are there still actual people involved, like telling the algorithms if what they’re making is right or wrong?

Actually, this is another big development. When you use one of these models you’re probably only seeing a handful of the images that were actually generated. Similar to how these models were initially trained to predict the best captions for images, they only show you the images that best fit the text you gave them. They are marking themselves.
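
Here is a rough sketch of that self-marking step, again using the open CLIP model as a stand-in for whatever scoring the commercial systems use internally: generate a pile of candidates, score each one against the prompt, and only surface the best few. The generate_candidates function in the usage note is hypothetical, a stand-in for any text-to-image model.

```python
# A minimal sketch of re-ranking generated images against the prompt.
# Assumptions: the open CLIP model stands in for the real systems' scorer,
# and generate_candidates (below) is a hypothetical text-to-image function.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_best(prompt, candidate_images, keep=4):
    """Return the `keep` candidate images that best match the prompt."""
    inputs = processor(text=[prompt], images=candidate_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_text[0]  # one score per image
    best = scores.topk(keep).indices.tolist()
    return [candidate_images[i] for i in best]

# Usage, assuming some generator exists:
# candidates = generate_candidates("a kangaroo made of cheese", n=32)
# shown_to_user = pick_best("a kangaroo made of cheese", candidates)
```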

But there are still weaknesses in this generation process, right?

I can’t stress enough that this isn’t intelligence. The algorithms don’t “understand” the words or the images in the same way you or I do. It’s more like a best guess based on what they’ve “seen” before. So there are quite a few limitations, both in what these models can do and in what they do that they probably shouldn’t (such as generating potentially graphic imagery).

OK, so if the machines are making pictures on request now, how many artists will this put out of work?

For now, these algorithms are largely restricted or pricey to use. I’m still on the waiting list to try DALL-E. But computing power is getting cheaper, there are many huge image datasets, and even regular people are creating their own models, like the one we used to create the kangaroo images. There’s also a version online called Dall-E 2 mini, which is the one people are using, exploring and sharing to create everything from Boris Johnson eating a fish to kangaroos made of cheese.

I doubt anyone knows what will happen to artists. But there are still so many edge cases where these models break down that I wouldn’t be relying on them exclusively.

Are there other issues with making images based purely on pattern-matching and then marking themselves on their answers? Any questions of bias, say, or unfortunate associations?

Something you’ll notice in the corporate announcements of these models is they tend to use innocuous examples. Lots of generated images of animals. This speaks to one of the massive issues with using the internet to train a pattern matching algorithm – so much of it is absolutely terrible.

A couple of years ago a dataset of 80m images used to train algorithms was taken down by MIT researchers because of “derogatory terms as categories and offensive images”. Something we’ve noticed in our experiments is that “businessy” words seem to be associated with generated images of men.

So right now it’s just about good enough for memes, and still makes weird nightmare images (especially of faces), but not as much as it used to. But who knows about the future. Thanks Josh.