“Innovation, like evolution, is a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance—and that happen to be useful.”
Matt Ridley, How Innovation Works
“The higher the creativity component of a profession, the more likely it is to have disconnected inputs and outputs.”
In these early days of AI-based software such as ChatGPT, DALL-E 2, MidJourney, and Stable Diffusion, there’s been much excitement around the technologies’ capacity to generate new texts, images, and music on any topic or theme with a level of sophistication and complexity that is often believably human. This generation is possible, and believable, because AI software is trained on billions of examples culled from the internet. As Stephen Wolfram says in his book What Is ChatGPT Doing…?, ChatGPT operates by trying “to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.'”
A key part of using AI-based software is the text prompt, which points the software in a general thematic direction. A prompt usually takes the form of a question or a directive, and popular prompts include famous author or artist names, cultural references, places and eras, and fantasy scenarios. Nuanced and unconventional prompts can generate nuanced and unconventional results. For example, an entrant in a 2022 digital art competition used a prompt whose words revolved around the theme of “space opera theater” to generate his first prize-winning work, “Théâtre D’Opéra Spatial.” (The entrant never shared his prompt’s exact wording.)

Within a prompt there are ways to further constrain an AI’s search area. A common prompt constraint is the qualifier “in the style of,” which links two disparate ideas. For example, in 2022 a ChatGPT user’s prompt (discussed in the New Yorker) asked for “instructions for removing a peanut-butter sandwich from a VCR, written in the style of the King James Bible.” Or this year, an anonymous music producer (aptly) named Ghostwriter released a track that featured AI-generated voices credibly impersonating the rapping and singing of Drake and The Weeknd. Along the same lines, OpenAI’s Jukebox software can generate a range of new music in the style of established artists as long as users provide it with “genre, artist, and lyrics as input.” In all these examples, the prompt-constrained results aren’t always aesthetically thrilling (Jukebox’s impersonations of Frank Sinatra and Katy Perry are creepy), yet they are uncanny in their realism. Implicit in the technologies we’ve seen so far is a conception of creativity as a fundamentally imitative and derivative process. It’s as if, for computer code to pass the Turing Test of reasonable, human-esque creativity, it’s enough to generate something in the style of something else.
•
These examples of AI software trained on billions of examples to produce reasonable continuations in the style of extant works remind us of how re-creation figures prominently in any artistic practice. Making art is inescapably intertextual: to a degree, everyone is copying everyone else, and we learn by trying (and failing) to make something along the lines of what we previously read, saw, or heard others do. In the electronic music production community, the practice of copying the sound and style of others is considered a route to, if not originality, then at least competence. For example, online there’s a current of articles, discussion threads, and YouTube tutorials on how to (re-)create music in the style of a famous producer or in the style of a popular sound of the moment. Consider videos about how to create pad sounds in the style of Flume, how to create music in the style of Burial, or how to create music in the style of lofi hip hop.
These videos are insightful, educational, and a good way to learn some of the myriad techniques for producing music (though not as good as experimenting yourself, by trial and error). But in-the-style-of tutorials are fundamentally about re-creation, not innovation. There’s a parallel here between AI software trained on the internet as a dataset for generating new art, and videos that dissect well-known tracks into their component parts for the purpose of making new works that sound similar. In both cases, one uses what has already been done as a model text for making more of the same.
In sum, seeing and hearing AI-generated work has me wondering about the mechanisms of innovation in electronic music production, and beyond. Does innovation happen by imitating and remixing existing styles? Or does it happen by way of black swan discoveries, outsider/outlier approaches, naïveté, accident and random findings, and unconventional (mis)use of tools? Is innovation a methodical building and reasonable continuation in the style of something we already know, or a non-linear leap, a surprising output transcending its input?