Yesterday, I wrote about the history and future of AI in general. Today, I am continuing my explorations by examining the role of AI in the arts and the impact of the arts on AI. The exploration is based on this notebook, with 60 sources collected by NotebookLM.
AI in the Arts
AI has a multi-generational history in the arts, transitioning from centuries-old mechanical automatons to symbolic rule-making and eventually to deep learning-based approaches. In the following, I will go through some of the works picked out by NotebookLM. Some of them I know well, others I hadn’t heard about before. A complete list of detected works is at the end of the post.
Fine and Visual Arts
Early AI art was pioneered by Harold Cohen, who developed AARON beginning in the late 1960s to codify the act of drawing through symbolic rules. I still remember a weeklong workshop with Harold Cohen while I was studying at Chalmers in Göteborg in the early 2000s. We had good discussions about the possibilities and limitations of rule-based systems and how he had gradually developed his system over decades. Here are examples of how AARON paints:
In the modern era, Generative Adversarial Networks (GANs) have become central, with artists like Refik Anadol creating immersive “data paintings” from massive datasets and Anna Ridler using GANs to explore financial speculation through AI-generated imagery. I hadn’t heard about Ben Snell before, but I love his sculpture Dio, which was first developed by a computer model and then cast using the ground-up remains of the computer that designed it.
Dance
I have been highly inspired by choreographer Merce Cunningham in my dance studies and even tried his LifeForms software back in the day. I hadn’t made the connection to AI before, but I do recall how he used the machine to “sketch” movement ideas that his aging body could no longer create, leading to the discovery of “impossible” movements for his live dancers.
Similarly, Wayne McGregor collaborated with Google and other labs to create generative choreographic tools and a living archive of movement.
Theatre and Acting
I don’t have a good overview of theatre and acting in general, and even less so when it comes to AI in theatre and acting. NotebookLM brought up examples of how AI has been used to generate scripts, such as the surreal short film Sunspring (2016) written by an LSTM model named “Benjamin.”
There are fewer historical references to the use of AI in theatre, but today, of course, AI supports all parts of the process: it assists casting through voice analysis, automates complex lighting and sound cues, and even feeds “live scripts” to actors via earbuds in real time.
Music
There are many examples of AI in music, so I am surprised that NotebookLM didn’t find much beyond the most obvious ones. It starts with what is arguably the first computer-composed piece, the Illiac Suite, written in 1957, which used probabilistic rules. Another “classic” is David Cope’s EMI project (Experiments in Musical Intelligence), which successfully mimicked the styles of classical masters like Mozart and Bach.
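To give a flavor of what “probabilistic rules” can mean in composition, here is a minimal, hypothetical Markov-chain melody sketch in Python. This is my own toy illustration of the general idea; the Illiac Suite’s actual rules were far more elaborate and included screening tests:

```python
import random

# Toy Markov-chain melody generator (illustrative only).
# Each note may be followed only by certain other notes,
# chosen at random from a transition table.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def generate_melody(start="C", length=8, seed=None):
    """Walk the transition table: each note depends only on the previous one."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody(seed=1957))
```

Even this tiny example shows the core trade-off of rule-based composition: the output is always “legal,” but any musicality has to be baked into the transition table.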
When it comes to real-time performance, composer–performer George E. Lewis developed the interactive improvisation system Voyager, modeling real-time musical dialogue between human performers and software. That system is primarily rule-based, as far as I know. For learning-based human–computer interaction, I think François Pachet’s Flow Machines are great examples:
More recently, Suno enables the creation of complete music tracks with simple prompts.
Film, TV, and animation
I was surprised that NotebookLM didn’t find many references to old film and animation. Instead, it points to relatively new systems and productions, including Massive, which uses AI agents and “fuzzy logic” to simulate realistic, large-scale battle scenes involving hundreds of thousands of autonomous individuals, most famously in The Lord of the Rings. AI is also used for de-aging actors (e.g., The Irishman) and creating hyper-realistic digital humans and digital actors via motion capture. Tools like Google’s Tilt Brush, a VR painting and sculpting platform, are used to create immersive 3D artworks.
Again, these are just a few examples highlighted by NotebookLM. Check the complete list below for more.
The impact of art on AI development
While AI has indeed been used to create art over the years, I think it is also interesting to reflect upon how artistic use—and misuse—has fed back into AI development, pushing technical boundaries and advancing theoretical frameworks.
By attempting to encode the artistic process, as in AARON for painting and Voyager for music, artist–researchers gained insights into how human artistic decision-making can be translated into code. Those systems were developed by the artists themselves. In other cases, such as LifeForms, the tools were shaped by the developers’ ongoing discussions with Merce Cunningham.
I need to dig deeper into whether it is true, but NotebookLM claims that the development of “fuzzy logic” in Massive was directly inspired by the challenges of creating realistic crowds in films. The approach allowed AI agents to move from binary true/false reactions to a spectrum of “shades of gray,” enabling digital entities to interact in unpredictable, lifelike ways that traditional programming could not.
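As an illustration of that shift from binary to graded decisions, here is a minimal fuzzy-logic sketch in Python. This is my own toy example, not Massive’s actual implementation:

```python
import random

# Minimal sketch of fuzzy-logic agent decisions (illustrative only).
# Instead of a binary "enemy near: yes/no" trigger, each agent computes
# a degree of truth in [0, 1] and blends competing responses.

def near(distance, full_at=5.0, zero_at=30.0):
    """Fuzzy membership: 1.0 when very close, fading linearly to 0.0 when far."""
    if distance <= full_at:
        return 1.0
    if distance >= zero_at:
        return 0.0
    return (zero_at - distance) / (zero_at - full_at)

def decide(distance, bravery):
    """Blend 'attack' and 'flee' tendencies instead of a hard if/else."""
    threat = near(distance)
    attack = min(threat, bravery)        # fuzzy AND: threat present AND brave
    flee = min(threat, 1.0 - bravery)    # fuzzy AND: threat present AND timid
    if threat < 0.1:
        return "wander"
    return "attack" if attack >= flee else "flee"

# Agents with slightly different 'bravery' react differently to the
# same situation, producing varied, lifelike crowd behaviour.
crowd = [decide(distance=12.0, bravery=random.random()) for _ in range(5)]
```

The point is that a population of such agents, each with slightly different parameters, diverges naturally in behavior, which is exactly what makes simulated crowds look alive rather than choreographed.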
Story generation remains at an exploratory stage in AI research. Creating narratives is easy for humans but difficult for machines. Early attempts, such as James Meehan’s TALE-SPIN, helped researchers identify the “brittleness of symbolic AI”: bizarre outcomes in the generated tales highlighted that AI required vast amounts of “mundane” real-world knowledge to simulate human-like reasoning.
One of my big arguments for working on creative AI is that artistic projects are excellent “laboratories” for exploring human–computer interaction. You can quickly get a sense of whether something “works” and test it in real-life settings, such as installations or performances.
Key challenges in art and AI
Artistic work on and with AI both creates and exposes numerous challenges, many of which we will address in MishMash. Some key challenges include:
- Authorship: Who is actually responsible for an artwork created (largely) by AI? What are the relationships between the underlying data and algorithms and the human and machine creators? How does an audience perceive these uncertainties?
- Ethical and legal issues: Many of today’s large AI models are trained on copyrighted data without the consent or compensation of the original artists. There are also numerous challenges related to privacy, and questions about ownership of one’s own voice, appearance, and expression.
- Cultural bias and homogenization: Given the imbalance in training material, current large models are skewed towards Western, popular cultural expressions. Thus, new artistic outputs will carry this bias. There is also the risk of “aesthetic convergence,” in which AI models reinforce dominant norms in their training data, potentially erasing minority or radical voices.
These challenges are not easy to solve, but it helps to begin by identifying and criticizing them. Focusing on human–machine co-creation also helps. Such a “human-in-the-loop” approach ensures the human remains in control while leveraging AI-based tools to handle repetitive tasks or generate initial “sketches” for further refinement.
In sum, while I think we should remain critical, I also see significant potential in using AI in artistic projects. Ultimately, artistic (ab)use can also help AI in general move forward.
Catalogue of AI art
Here is a list of all the AI-based artworks NotebookLM found for me. This is undoubtedly not complete, and probably biased, but I keep it here for reference.
| Year | Project Name | Creator | Artistic Domain | Key Technology | Milestone or Impact |
|---|---|---|---|---|---|
| 1940 | Nimatron | Edward U. Condon, Willard A. Derr and Gerald L. Tawney | Games | Electromechanical relays | Machine constructed for the New York World’s Fair that could play two game strategies of Nim against a human competitor. |
| 1948 | Cathode Ray Tube Amusement Device | Thomas T. Goldsmith Jr. and Estle Ray Mann | Games | Cathode ray tube with overlay | A conceptual shooting game where a point on a tube served as a bullet to hit targets on a transparency overlay. |
| 1948 | Nim machine | Raymond Redheffer | Games | Electrical circuits | A small, five-pound machine for playing the game of Nim; a precursor was planned in 1941-42 with relays. |
| 1950 | Sketchpad (precursor concepts) | Ivan Sutherland | Visual Art / Digital Graphics | Digital graphics foundation | Laid the groundwork for digital graphics and interactive computer design; Sketchpad itself was demonstrated in 1963. |
| 1950 | Turing Test | Alan Turing | Poetry | Imitation Game (Logic-based benchmark) | Conceptualized artificial intelligence and whether machines could convincingly mimic human conversation, laying seeds for machine creativity. |
| 1951 | Ferranti Nimrod | John Bennett and Raymond Stuart-Williams | Games | Digital computer (480 tubes) | Exhibited at the Festival of Britain as an ’electronic brain’ to demonstrate the computing capacities of automatic computers. |
| 1952 | Love-letters | Christopher Strachey | Poetry/Literature | Markov-like random selection from database | Used a Ferranti Mark I to combine words from a selection of Roget’s Thesaurus into billions of different possible letters. |
| 1952 | OXO (Tic Tac Toe) | Alexander S. Douglas | Games | EDSAC mainframe computer | First game to use a monitor presentation (cathode ray tube) as a central part of the interface. |
| 1957 | Illiac Suite | Lejaren Hiller and Leonard Isaacson | Music | Markov chains (Probabilistic rules) | First piece of music composed by a computer; a string quartet coded on the ILLIAC I computer. |
| 1958 | Tennis for Two | William Higinbotham | Games | Analog computer and oscilloscope | Early interactive game representing a tennis court side-view; used for a visitors’ day at Brookhaven National Laboratory. |
| 1958 | The 7th Voyage of Sinbad | Ray Harryhausen | CGI/Digital Effects | Stop-motion animation (manual precursor to digital) | Pioneered mythological creatures and scale disparity effects later used in The Lord of the Rings. |
| 1959 | Stochastic Texts | Theo Lutz | Poetry/Literature | Stochastic procedures | Generated sentences with a correct syntax using a database of 100 words from Franz Kafka’s novel ‘The Castle’. |
| 1960 | Stochastic Algorithms | Iannis Xenakis | Music | Probability-based systems | Pioneered the use of probability-based musical architecture to manage massive sets of sonic variables. |
| 1961 | First CGI Animation | Edward E. Zajac at Bell Labs | Film / Animation | Computer-generated imagery (CGI) | The first tangible application of CGI technology in film animation. |
| 1962 | Spacewar! | Stephen R. Russell, Martin Graetz, and Wayne Wiitanen | Games | PDP-1 minicomputer (vector graphics) | First widely recognized computer-based shooter game; included gravity effects and influenced early joystick development. |
| 1963 | Jason and the Argonauts | Ray Harryhausen | CGI/Digital Effects | Stop-motion animation (manual precursor to digital) | Early milestone in creature effects and theatrical reveals matches against giant monsters. |
| 1964 | Computer-programmed choreography | Jeanne Beaman and Paul Le Vasseur | Dance Choreography | Random generation of movement parameters | First known use of computers to generate random, performable dance sequences (70 dances produced). |
| 1966 | ELIZA | Joseph Weizenbaum | Poetry | Pattern matching and rule-based framework | First chatbot; demonstrated how procedural generation could create the illusion of understanding in conversational dialogue and poetic language. |
| 1966 | Poemfield No.2 | Stan VanDerBeek and Kenneth C. Knowlton | CGI/Film | BEFLIX (mosaic patterns) | Produced complex digital animations where text and patterns dissolved into entropic fields; part of a series of ten films. |
| 1967 | Hummingbird | Charles Csuri and James P. Shaffer | Visual Art/Film | FORTRAN-based line transformation | Early example of computer-aided ‘morphing’ where a digitised drawing of a bird was fragmented and reconstructed through 14,000 frames. |
| 1967 | Stick figure animation | Michael Noll | Dance and Film | Random selection of movement | Three-minute animated film of stick figures performing movement selected randomly by computer; compared AI art to Mondrian. |
| 1968 | Cybernetic Serendipity | Jasia Reichardt (Curator) | Multi-domain (Visual Art, Music, Poetry, Film) | Cybernetic systems and algorithms | Landmark international exhibition exploring the relationship between computing and art; showcased animation and computer-composed music. |
| 1968 | SAM (Sound Activated Mobile) | Edward Ihnatowicz | Cybernetic Sculpture | Hydraulic controlled vertebrae | Exhibited at ‘Cybernetic Serendipity’; moved its reflector towards quiet sounds recognized by microphones. |
| 1970 | AARON | Harold Cohen | Visual Art | Rule-based autonomous drawing system | Pioneering computer painter that could make pictures autonomously; exhibited at the Tate, Victoria and Albert Museum, and LACMA. |
| 1970 | Seek | Architecture Machine Group (Nicholas Negroponte) | CGI/Digital Installation | Interdata Model 3 computer and sensors | An environment for gerbils where a robot arm replaced blocks based on movements; anticipated ‘architectural intelligence’. |
| 1970 | The Senster | Edward Ihnatowicz | Cybernetic Sculpture | Electro-hydraulic servo systems (Philips P 9201) | Large steel structure that reacted to visitors’ motions and sounds; considered a precursor to artificial intelligence. |
| 1971 | Computer Space | Nolan Bushnell (Nutting Associates) | Games | Arcade cabinet with TTL logic | The first commercially sold arcade game; an adaptation of ‘Spacewar!’. |
| 1972 | Odyssey | Ralph Baer (Magnavox) | Games | Home game console | First home video game console connected to televisions; included a tennis game that was a successor to ‘Tennis for Two’. |
| 1972 | Scape-mates | Ed Emshwiller | Video Art | SCANIMATE (analog computer) | Introduced a new vocabulary of video image-making using 3D sculptural illusions and real-time colorization. |
| 1973 | Novel Writer | Sheldon Klein | Poetry | Rule-governed state changes and micro-simulation | First storytelling system on record; generated 2100-word murder mystery stories in less than 20 seconds. |
| 1974 | Labanotation Interpreter | Zella Wolofsky | Dance | Symbolic notation translation | M.Sc. thesis project for computer interpretation of selected Labanotation commands. |
| 1976 | Labanotation Graphics Editor | Brown and Smoliar | Dance | Interactive graphics editor | Among the first to develop an interactive graphics editor for entering and storing Labanotation symbols. |
| 1977 | CHOREO | Savage and Officer | Dance | Interactive computer model | Included an animator which simulated a moving two-dimensional figure based on notation scores. |
| 1977 | Star Wars | George Lucas / Lucasfilm | Film Production | Rudimentary computer effects | Incorporated early computer effects alongside traditional techniques to enhance storytelling. |
| 1977 | TALE-SPIN | James R. Meehan | Poetry | Goal-directed planning (Planboxes) | Interactive program that wrote stories by simulating rational behavior of woodland creatures; noted for ‘mis-spun’ tales revealing system brittleness. |
| 1978 | Keyframe Choreography | John Lansdown | Dance | Positional orientation keyframes | Computer composed dancer’s positional keyframes at specific points in time. |
| 1981 | AUTHOR | Natalie Dehn | Poetry | Simulating authorial meta-goals | Aimed to simulate the author’s mind as she makes up a story by achieving a complex web of author goals. |
| 1982 | Tron | Steven Lisberger (Walt Disney Productions) | Film Production / CGI | Procedural rule-based generation | Featured extensive CGI, including the 30-minute Light Cycle scene; an early milestone in algorithmic animation for visual effects. |
| 1983 | Benesh Editor | Ryman, Singh, Beatty, and Booth | Dance | Interactive user interface with graphical icons | First movement notation editor to be ported to a personal computer system (Apple Macintosh in 1984). |
| 1983 | UNIVERSE | Michael Lebowitz | Poetry | Planning and plot snippet reuse | Modeled the generation of scripts for TV soap opera episodes; first system to focus on independent character creation. |
| 1984 | afternoon: a story | Michael Joyce | Poetry/Literature | Storyspace (Hypertext) | A seminal 1987 hyperfiction (early development 1984) consisting of 539 lexia; readers follow multilinear paths. |
| 1984 | RACTER | William Chamberlain and Thomas Etter | Poetry / Prose | Markov chains / Pre-coded templates | Produced a full book titled The Policeman’s Beard Is Half Constructed, the first book attributed to a computer program. |
| 1984 | STANZA | Roger Carl Schank | Poetry | Knowledge representation systems (Frames) | One of the first integrations of knowledge representation in NLP; claimed to be able to narrate stories. |
| 1985 | Money for Nothing | Steve Barron | Music Video | Bosch FGS-4000 and Quantel Paintbox | Dire Straits music video featuring computer-animated characters made of stereometric volumes; won MTV ‘Video of the Year’. |
| 1986 | EMI (Experiments in Musical Intelligence) | David Cope | Music | Augmented transition networks (ATN) | Successfully mimicked classical composers’ styles; produced sonatas in the style of Mozart and Bach that fooled listeners. |
| 1986 | LifeForms | Dr. Thomas W. Calvert / Merce Cunningham | Dance and CGI | Inverse Kinematics and 3D animation | Compositional tool used to compose ‘Trackers’; allowed the discovery of ‘impossible’ movement sequences in choreography. |
| 1989 | Animate Tokens | Bradford and Cote-Lawrence | Dance | High-level abstract movement patterns | Visualized dancers as patterns of energy rather than human forms to facilitate creative exploration. |
| 1989 | The Legible City | Jeffrey Shaw | Digital Installation | Silicon Graphics Workstation and bicycle interface | Interactive installation where observers ‘ride’ a bicycle through a virtual city made of architectural-sized letters. |
| 1991 | Artificial Evolution Videos | Karl Sims | Visual Art | Artificial life and genetic programming | Sims won the Golden Nica at Prix Ars Electronica in 1991 and 1992 for videos utilizing artificial evolution. |
| 1991 | Terminator 2: Judgment Day | James Cameron | Film Production | Liquid metal effects (CGI) | Showcased realistic liquid metal CGI effects that significantly advanced the medium. |
| 1993 | Doom | id Software | Games | Raycasting engine | Revolutionary first-person shooter that set new standards for 3D graphics and fostered an internet-based player subculture. |
| 1993 | MINSTREL | Scott R. Turner | Poetry | Transform Recall Adapt Methods (TRAMs) | Simulated creative re-use of prior material to generate stories about King Arthur and the Knights of the Round Table. |
| 1994 | Evolving Virtual Creatures | Karl Sims | CGI/Digital Effects | Genetic Algorithms and Artificial Intelligence | Influenced crowd simulation logic; agents evolved to walk or swim in simulated environments. |
| 1994 | GNARL | Peter J. Angeline | Poetry | Genetic algorithm | Devised to engender poetic compositions through evolutionary mutation and selection processes. |
| 1994 | JAPE | Kim Binsted and Graeme Ritchie | Poetry | Symbolic pattern-matching and WordNet | Successfully generated pun-based riddles consistently evaluated as humorous by young children. |
| 1995 | Toy Story | John Lasseter (Disney-Pixar) | Film/CGI | 3D Computer Animation | The first feature-length film created entirely with computer animation. |
| 1996 | Massive | Steven Regelous (Weta Digital) | CGI/Digital Effects | Fuzzy Logic / Autonomous Multi-Agent Simulation | Software employing ‘fuzzy logic’ as a method for creating autonomous character decisions; used for 200,000 Uruk-hai in The Lord of the Rings. |
| 1997 | Deep Blue | IBM | Strategy Gaming | Brute-force processing | First computer system to defeat a reigning world chess champion (Garry Kasparov). |
| 1997 | LILIPUTIANS | David Cope | Music | Genetic programming | Iteratively evolved musical compositions; won the Prix Ars Electronica Golden Nica for Interactive Artistry in 2001. |
| 1999 | Voyager | George E. Lewis | Music / Interactive Improvisation | Real-time algorithmic improviser | Autonomous improvising system that listens to and responds to live musicians, pioneering human–computer interaction in jazz performance. |
| 1999 | BRUTUS | Selmer Bringsjord and David A. Ferrucci | Poetry | First-order logic and production rules | Designed to write stories specifically about betrayal based on detailed thematic knowledge and logical models. |
| 1999 | Electric Sheep | Scott Draves | Visual Art/Generative Art | Fractal Flame Algorithm / Distributed computing | A screensaver that evolves animations based on user votes; functions as a distributed ‘cyborg mind’ network. |
| 1999 | MEXICA | Rafael Pérez y Pérez | Poetry | Engagement-reflection cognitive model | Modeled the creative writing process through cycles of reflection to produce short stories about early Mexican inhabitants. |
| 2001 | BotFighters | It’s Alive | Games | Cell-ID positioning (Mobile telephony) | Pervasive game for mobile phones where real urban locations became combat zones between players. |
| 2001 | The Lord of the Rings | Peter Jackson / Weta Digital | Film Production | Motion capture and AI crowd simulation | Integration of live-action with hyper-realistic digital creations like Gollum and photorealistic autonomous battle crowds. |
| 2002 | Particle-based Crowd Effects | Industrial Light & Magic | CGI/Digital Effects | AI and Particle Physics | Arena sequence in Star Wars: Episode II used AI for 30,000 individual actions. |
| 2005 | The Dark Knight / The Hangover | Legendary Entertainment | Film Production | AI for user preferences | Used audience behavior data and user preferences to inform production strategies. |
| 2012 | Flow Machines | Sony CSL Paris | Music | Machine learning and signal processing | Project started to achieve augmented creativity in music composition. |
| 2012 | Skyfall | MGM / Eon Productions | Film Production | Big Data Analytics | Utilized AI-driven big data to analyze audience demographics and inform targeted marketing/distribution. |
| 2013 | Cinelytic License | Cinelytic | Film Production | Machine learning and historical data | Los Angeles startup that cross-references movie performances to match key talent for production. |
| 2014 | Conversations with Bina48 | Stephanie Dinkins | Visual Art | Social robotics and interactive AI | Recorded ongoing conversations with BINA48 to explore the culture of people of color in artificial intelligence. |
| 2014 | Ex Machina | A24 | CGI/Digital Effects | AI-driven character animation | Utilized advanced animation techniques to bring the humanoid character Ava to life; sparked ethical discussions. |
| 2014 | GANs (Generative Adversarial Networks) | Ian Goodfellow | Visual Art / Multi-domain | Neural networks (Generator vs. Discriminator) | New architecture where two networks compete; a catalyst for modern AI art enabling highly realistic synthetic content. |
| 2015 | DeepDream | Alexander Mordvintsev (Google) | Visual Art | Convolutional neural network | Popularized AI-generated surrealism by amplifying patterns in data to create psychedelic, dream-like images. |
| 2016 | AlphaGo | Google DeepMind | Strategy Gaming | Deep Reinforcement Learning | Defeated world Go champion Lee Sedol; showcased ‘unconventional’ and ‘creative’ play style. |
| 2016 | Daddy’s Car | Sony CSL (François Pachet / SKYGGE) | Music | Flow Machines (Recombinant AI) | Milestone AI song composed in the style of The Beatles; highlighted tensions regarding training on intellectual property. |
| 2016 | Morgan (Trailer) | IBM Watson | Film Production | AI editing software | The world’s first movie trailer edited using artificial intelligence. |
| 2016 | Sunspring | Oscar Sharp and Ross Goodwin | Film Production / Poetry | LSTM (Long Short-Term Memory) | First short film written entirely by an AI algorithm (Benjamin) after analyzing sci-fi screenplays. |
| 2018 | Dio | Ben Snell | Visual Art / Sculpture | Machine Learning (3D modeling training) | The first AI-created sculpture sold at Phillips; cast from the ground-up remains of the computer that designed it. |
| 2018 | Portrait of Edmond de Belamy | Obvious (Art Collective) | Visual Art | Generative Adversarial Networks (GANs) | First AI-generated artwork sold at a major auction (Christie’s) for $432,500, signaling market recognition. |
| 2018 | Transformer Architecture | Google / Vaswani et al. | Poetry | Self-attention mechanism | Revolutionized NLP by allowing parallel processing of data, enabling modern Large Language Models. |
| 2019 | Avengers: Endgame | Digital Domain / Disney | CGI/Digital Effects | Machine Learning and facial capture | Used AI to recreate facial expressions for Thanos and for digital de-aging of actors. |
| 2019 | Human Allocation of Space | Scott Eaton | Visual Art / Sculpture | Custom ‘shape’ neural network | A sculptural work drawn by the artist and translated into 3D form by a neural network trained on a synthetic dataset. |
| 2019 | Mosaic Virus | Anna Ridler | Visual Art / Video Installation | GANs trained on custom datasets | 3-screen installation where AI tulips change based on Bitcoin prices; critical engagement with financial speculation. |
| 2019 | The Irishman | Netflix | CGI/Digital Effects | De-aging technology | Digitally rejuvenated actors De Niro, Pacino, and Pesci for flashback scenes across several decades. |
| 2020 | GPT-3 | OpenAI | Poetry | Large Language Model (LLM) | State-of-the-art model with 175 billion parameters capable of generating text that closely resembles human-authored work. |
| 2021 | DALL·E | OpenAI | Visual Art | Autoregressive transformer / Multimodal learning (diffusion in DALL·E 2) | Revolutionized art creation by generating high-quality images from natural language text descriptions. |
| 2022 | Stable Diffusion | Stability AI | Visual Art | Latent Diffusion Model | Open-source release that democratized high-quality image generation for the broader creative community. |
| 2022 | Théâtre D’opéra Spatial | Jason M. Allen | Visual Art | Midjourney (Text-to-Image) | Won an art prize at the Colorado State Fair, sparking global debate on AI and authorship. |
| 2022 | Unsupervised | Refik Anadol | Visual Art / Digital Media | Machine Learning on museum archives | Installation at MoMA featuring ever-shifting abstractions derived from the museum’s collection. |
| 2023 | GANStrument | Flow Machines Team (Sony CSL) | Music / Sound Design | Generative Adversarial Networks (GANs) | Application that blends features of two sounds to generate unique soundscapes and instrument tones. |
