Can the outputs of AI systems qualify as artworks?
Anna Linne (2021-12)

I. Recent Technological Developments in AI Art

One of the key characteristics of modern AI technology is its ability to detect feature associations in large training datasets. Different models can be trained to emphasize different associations. To the extent that data about human reactions to art can be captured, similar reactions to similar art can be predicted. In visual art, a large body of images throughout history is recognized as great work, and these images, or a subset of them, are used to train the various AI models. Several notable models are highlighted below.

Generative adversarial networks (GANs) were designed by Ian Goodfellow and his colleagues in 2014. A GAN system simultaneously trains two competing machine learning models: one aims to generate images that can be mistaken for real images; the other aims to accurately identify generated images. As each model improves, the first at generating images that evade detection and the second at catching generated images, GANs come to produce images that even human eyes sometimes cannot recognize as the output of neural networks. The images generated by GANs are new yet similar to known images. Mario Klingemann, an AI art pioneer, has an installation piece, Memories of Passersby I (2018), based on GANs; in this work, brand-new portraits are generated by machines in real time in an endless feedback loop. Klingemann's GAN artworks have been exhibited at MoMA New York, the Metropolitan Museum of Art New York, the Photographers' Gallery London, ZKM Karlsruhe, and the Centre Pompidou Paris. He received the British Library Labs Artistic Award 2016 and the Lumen Prize Gold Award 2018.
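
The adversarial setup can be sketched in a toy form. The sketch below stands in a one-dimensional world: samples from a fixed Gaussian play the role of "real" images, a linear map plays the generator, and a logistic classifier plays the discriminator; the two take alternating gradient steps on their competing objectives. Everything here (the distribution, learning rate, and model forms) is an illustrative assumption, not any published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "real" data: samples from N(3, 0.5) stand in for real images.
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator: maps noise z ~ N(0,1) to a sample, parameters (a, b).
# Discriminator: logistic classifier with parameters (w, c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    n = 64
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) + np.mean(-d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator update: ascend log D(fake) (the "non-saturating" loss).
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_signal = (1 - d_fake) * w          # d/dx log D(x) = (1 - D(x)) * w
    a += lr * np.mean(g_signal * z)
    b += lr * np.mean(g_signal)
```

After training, the generator's output distribution drifts toward the real one (its offset b approaches the real mean of 3.0), which is the 1-D analogue of GAN images becoming indistinguishable from real ones.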

DeepDream was created by Alexander Mordvintsev at Google in 2015. It uses neural networks to find and enhance patterns in existing artworks; the patterns being enhanced exploit the human tendency to see meaningful images in chaos, the same tendency by which humans see shapes of animals or objects in clouds. With DeepDream, an existing artwork can be enhanced so that it contains many other images. For example, DeepDream can enhance Leonardo da Vinci's Mona Lisa so that it contains numerous animal faces and eyes. The potential of DeepDream is limitless, as a vast number of DeepDream effects can be applied to each painting with vastly different results; instead of animal faces, for instance, fruits or other objects can be found and enhanced. Each creation from a different DeepDream effect is a new artwork, which one can evaluate according to taste. Gaining in popularity, DeepDream has become a new form of psychedelic and abstract art.
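
The core DeepDream move is gradient ascent on the input image itself: pixels are nudged so as to amplify whatever a chosen network layer already responds to. A minimal sketch, assuming a fixed random linear map as a stand-in for a layer of a trained convolutional network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "layer": a fixed random linear map whose activations we amplify.
# In real DeepDream this would be a chosen layer of a trained CNN.
W = rng.normal(size=(16, 64))

img = rng.normal(scale=0.1, size=64)   # flattened 8x8 "image"
lr = 0.01

def layer_activation_energy(x):
    a = W @ x
    return float(a @ a)                # squared norm of the layer's activations

before = layer_activation_energy(img)
for _ in range(100):
    # Gradient of ||W x||^2 w.r.t. x is 2 W^T W x: gradient *ascent* on the
    # input enhances the patterns the layer detects, as DeepDream does.
    grad = 2.0 * W.T @ (W @ img)
    img += lr * grad / (np.abs(grad).mean() + 1e-8)   # normalized step
after = layer_activation_energy(img)
```

Each ascent step strictly increases the layer's response, which is why repeated application makes the "dreamed" patterns ever more pronounced.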

Neural Style Transfer (NST) was introduced by Leon Gatys et al. in 2016. It uses convolutional neural networks to blend two images, one as content and the other as a style reference. The style reference image is often an artwork by a famous painter. The output image looks as if the content image were painted in the style of the style reference image. For example, one can use the image of a dog as the content image and Wassily Kandinsky's Composition 7 as the style reference. Upon applying NST, the output shows the dog painted in Kandinsky's style. NST can potentially turn many photographs into artworks.
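
NST works by minimizing a weighted sum of a content loss (feature similarity to the content image) and a style loss (similarity of feature correlations, the Gram matrices, to the style image). A minimal sketch of that objective, assuming a fixed random linear map as a stand-in for the feature extractor (a real implementation uses layers of a pretrained CNN such as VGG-19):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in feature extractor: maps a flat 64-pixel image to a feature map
# of 4 channels x 16 positions.
F = rng.normal(size=(64, 64))

def features(img):
    return (F @ img).reshape(4, 16)

def gram(f):
    # Style is captured by channel-to-channel correlations, not locations.
    return f @ f.T / f.shape[1]

def nst_loss(x, content_img, style_img, alpha=1.0, beta=1e3):
    fx, fc, fs = features(x), features(content_img), features(style_img)
    content_loss = np.mean((fx - fc) ** 2)
    style_loss = np.mean((gram(fx) - gram(fs)) ** 2)
    return float(alpha * content_loss + beta * style_loss)

content = rng.normal(size=64)
style = rng.normal(size=64)
# Starting from the content image, the content term is zero and all remaining
# loss comes from the style mismatch -- the quantity NST then minimizes.
```

Optimizing the output image against this loss is what produces "the dog painted in Kandinsky's style."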

Artificial Intelligence Creative Adversarial Network (AICAN) was introduced by Ahmed Elgammal et al. in 2017. Like GANs, an AICAN system trains two competing neural network models. One model aims to create new images similar to existing artworks in form, subject, and style, and the other aims to reject images that are too similar to known artworks. A current AICAN system is trained on 100,000 of the greatest works in art history. As a result, AICAN can produce novel artworks that are similar to yet different from great artworks. AICAN art is said to pass a Turing test for art because, most of the time, human observers cannot tell it apart from art created by human artists. As AICAN art becomes increasingly popular, more and more AICAN artworks are being sold at auction.
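
The "reject images that are too similar" pressure can be expressed as a style-ambiguity term: the generator is rewarded when a style classifier cannot decide which known art style an image belongs to. A minimal sketch of that term (the five-style example values are illustrative assumptions):

```python
import numpy as np

def style_ambiguity(style_probs):
    k = len(style_probs)
    # Cross-entropy between the style classifier's output and the uniform
    # distribution over k known styles; it is smallest when no single
    # style dominates, i.e. when the image deviates from all known styles.
    return float(-np.sum(np.log(style_probs + 1e-12)) / k)

uniform = np.full(5, 0.2)                            # classifier cannot pick a style
peaked = np.array([0.96, 0.01, 0.01, 0.01, 0.01])    # clearly one known style
# The generator is pushed toward images that score like `uniform`:
# recognizable as art, yet not assignable to any established style.
```

Combined with an ordinary adversarial "is this art?" loss, this ambiguity pressure is what steers the system toward novel-yet-familiar images.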

DALL·E was introduced by OpenAI in 2021. It creates images from text prompts, such as “a green leather purse shaped like an apple.” The text prompts are interpreted by the GPT-3 transformer model. The images generated by DALL·E are ranked and curated by another AI technology developed by OpenAI, called Contrastive Language-Image Pre-training (CLIP). The result is that DALL·E can create images from various concepts and combinations of concepts, including novel combinations. However, it will remain a great challenge for AI to learn to present art from concepts, especially abstract concepts. The existing technology does not equip AI to grasp complex concepts, let alone present complex concepts as art. The text prompts processed and the images generated by DALL·E remain at a rudimentary level.
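
CLIP's curation step amounts to embedding the prompt and each candidate image into a shared space and ranking candidates by cosine similarity to the prompt. A minimal sketch, assuming fixed random projections as stand-ins for CLIP's trained text and image encoders (the vector sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in encoders: fixed random projections into a shared 16-dim space.
# Real CLIP uses a trained text transformer and a trained image encoder.
W_text = rng.normal(size=(16, 32))
W_img = rng.normal(size=(16, 48))

def embed(W, x):
    v = W @ x
    return v / np.linalg.norm(v)       # unit-normalize for cosine similarity

def rank_by_clip_score(text_vec, image_vecs):
    t = embed(W_text, text_vec)
    scores = [float(t @ embed(W_img, im)) for im in image_vecs]
    # Highest cosine similarity first: the images judged most faithful to
    # the prompt, which is how CLIP curates generated samples.
    return sorted(range(len(scores)), key=lambda i: -scores[i])

prompt_vec = rng.normal(size=32)                  # stand-in encoded prompt
candidates = [rng.normal(size=48) for _ in range(5)]  # stand-in generated images
order = rank_by_clip_score(prompt_vec, candidates)
```

Only the top-ranked candidates are shown to the user, so the curation model shapes which outputs are ever seen as "the" artwork.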

In addition to these important developments of AI technology in art, some artworks are created through the AI system's unique ability to process millions of images. For example, artist Refik Anadol uses large datasets and machine learning to create mesmerizing dynamic images from large collections of images; he turns millions of New York City photographs into a machine-hallucination movie. Such images are unique to machine output because human artists could not create them without machines. Anadol's Machine Hallucinations series, based on GANs and sold as NFTs, has become widely sought after. Several projects try to associate images with aesthetic judgment by developing large-scale datasets of images annotated with subjective scores of aesthetic evaluation and sentimental reaction (Cetinic et al., 2021). Training AI models on such datasets will further improve their ability to predict aesthetic evaluations and to produce images with a higher probability of being deemed aesthetically pleasing.
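
Predicting aesthetic evaluations from such annotated datasets is, at its simplest, a supervised regression problem: image features in, mean human score out. A minimal sketch with synthetic data (the features, scores, and ridge-regression model are all illustrative assumptions, not any project's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dataset: 200 images reduced to 10-dim feature vectors, each
# annotated with a mean human aesthetic score, as in the annotated
# datasets the section describes.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=200)   # noisy "human scores"

# Ridge regression: fit weights mapping features to aesthetic scores.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

pred = X @ w   # predicted aesthetic scores for the training images
```

A model of this kind can then score unseen images, giving a generator a signal for producing images more likely to be judged aesthetically pleasing.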



License: Creative Commons License, Attribution 4.0 International (CC-BY-4.0)

