A day will come when a cheeky first-grader shoves a plastic disc in front of our faces and demands to know what it is. The CD, which heralded the demise of the long-playing record at the beginning of the 1980s with the promise of brilliant digital sound quality – and saw music lovers trade in their entire record collections – will soon be an object of sentimental wonder, like vinyl records and audio cassettes. What does this mean? It means that digitalisation is getting on in years but is certainly not yet “over the hill”.
The music industry has been fighting for survival since 1992 and the arrival of the compressed MP3 format, which made it far easier to store and share music. Meanwhile, consumers are streaming music (legally, for the most part), but at such reduced sound quality that hi-fi freaks can only despair. No one is listening to them, though, because listening to music has never been easier or more convenient – and because translating musical recordings into digital ones and zeroes has triggered a tsunami of other changes that has swept up entire branches of music production and reception.
The digital revolution is not merely a technical revolution – it affects every area of music. Today’s music collections are called playlists; they aren’t purchased but rented, which naturally means the provider is listening in. It means that record shops are closing, record labels are going bankrupt, and artists are losing an important source of income. Yet the changes go much further than that. They are reshaping the compositional process itself, from the digital tools that artists employ to the support they enlist from artificial intelligence. In the end, the music itself is changing.
No technological development in recent memory has had a greater impact on this art form. Composition classes are still held in physical places, and it still matters whether someone studies in Frankfurt, San Francisco or Zagreb. However, national, geographic and aesthetic boundaries have become increasingly surmountable. Musical styles have either vanished or now exist side by side, as in the works of composer Jennifer Walshe, where things that would never have been put together in the pre-digital age collide like an unbridled YouTube vision (The Total Mountain, 2014). It is no coincidence that her volatile aesthetics come on the heels of Web 2.0 and the invention of video platforms and social media. “When I start my computer, millions of things are waiting there every day,” explains the Irish composer, who admits that she can no longer imagine life without the Internet. “I start by listening to ten aesthetically perfect pieces, each of which sounds different and has different aims. Working with these in this historic moment comes very naturally to me.”
In the age of digital networking, it can feel quite “natural” to go a step further and question the individual authorship of the artist at a fundamental level. In April 2018 a website called Wiki-Piano.net went online, and it has since become the venue of a collective composition. The composer Alexander Schubert invites users to participate in this open artwork; anyone can write notes, offer comments or upload videos, images and audio files. How will this group-composed “piano piece” ultimately be performed, and who will assume responsibility for the imaginings of the swarm intelligence, which has already suggested having the piano go up in flames? These are questions that Schubert, like every website administrator, will have to address. His platform is a reflection of the Web.
New media and tools demand new working methods, and digitalisation is changing the romantic idea of the genius with a divine gift for composing music. While the first electronic studios in Europe were still astonishing the analogue music scene – and just ten years after the development of the first computer – Lejaren A. Hiller and Leonard M. Isaacson of the University of Illinois presented the world’s first computer composition in 1957. The score of the Illiac Suite was performed by a string quartet; the contrast could not have been more striking. Digital technology had laid claim to the core ensemble of classical Western music culture. And the artistic intelligence of composers – most of whom were male at the time – was not the only one to feel the heat of competition. Around 1960, Max Mathews at Bell Laboratories set out to teach the computer to speak and sing. While today’s speech synthesis software plays with psycho-acoustic parameters and comes surprisingly close to imitating human speech, the early experiments were rather rudimentary. Yet they made an enormous impression. To today’s ears, the first artificial violin sounds from digital synthesizers are only remotely related to acoustic violins. Nonetheless, digital synthesizers and samplers were enthusiastically received, and the horrendously expensive equipment defined the sound of the 1980s. Michael Jackson, Jean Michel Jarre and Peter Gabriel zoomed up the charts with explosions, musical scales of breaking glass, digital percussion and artificial sounds with no real-world counterpart. Digitally produced synthetics were the material of the day and the promise of a future in which everything could be produced artificially.
The means of digital production were just as popular in the world of “serious” music. While the icons of pop paid for these devices out of their own pockets, composers of classical music were almost entirely dependent on the largesse of the studios until the 1990s. Nowadays hardly anyone composes at a piano, and even fewer transcribe notes onto paper; everything is done on the computer, which allows composers to score entire instrumental works and simulate them acoustically in real time. Today’s tools of the trade include sound analysis programs, apps that process sounds live in concert, and sound and music databases.
With the spread of digital technologies comes the practically limitless accessibility of data and, by extension, music. The materials artists work with today also include the sound files of myriad recordings from music history – a compositional practice which has sparked various legal battles. When the composer Johannes Kreidler parked his pickup truck in front of the GEMA headquarters in Berlin on 12 September 2008, he had a mountain of paper registration forms for his new composition in tow. The 33-second piece entitled product placements contains 70,200 musical quotations from other works. With this “analogue” action, Kreidler aimed to encourage an aesthetic debate on authorship in the digital age and to carry the bureaucratic system ad absurdum. What am I allowed to sample – and, more importantly, for how long and how much? Copyright law has yet to adapt to the changed circumstances of the digital, global era of sharing.
The discussion about artificial intelligence demonstrates just how closely the digital transformation of the music world is tied to moral, legal and ethical issues. In 2018 the pop musician Taryn Southern produced IAMAI, an album composed using artificial intelligence. She accompanied it with several thought-provoking questions which could just as well have been asked after the moon landing: “Who are we? What are we becoming? ... and are we ready for it?” To be fair, the computer did not produce the tracks all by itself. The software still requires a human who plays with it and accepts or rejects its suggestions. However, the program enabled Southern to (collaboratively) compose her songs without any prior knowledge of harmony. “You can change things around as often as you want so that you get what you’re looking for in the end. I think it’s great!” said the singer, explaining that she did not feel constrained by the aesthetic boundaries and stylistic horizon of her tools – or rather, of their programmers. And while she can now compose music without the aid of other people, as long as she uses commercial programs built for the musical mainstream she should not expect any big surprises either. Much as social media users stay within their bubbles, artificial intelligence could keep her from ever coming into contact with all-too-foreign musical sociotopes.
However, artificial intelligence is capable of achieving exactly that if its developers so wish. The Berlin-based artist couple Holly Herndon and Matt Dryhurst have developed an artificial intelligence that walks the fine line between object and subject. Herndon has integrated their artificial baby – whom they named Spawn and declared to be a girl – into her vocal ensemble. “I’m looking for a new sound and a new look,” Herndon explains. “The difference is that we consider Spawn a member of the ensemble and not a composer. Although she improvises like the other performers, she doesn’t compose the piece. I want to write the music!” Although the composer and singer underscores her autonomy on the fringe of avant-garde pop and experimental electronic music, working with Spawn means engaging with an artificial intelligence that not only sings with hybrid voices but also learns from her surroundings. It is this developmental process that Herndon and Dryhurst regard as part of their artwork. Spawn, who resides in the casing of an old gaming computer, learned to recognise and reinterpret unknown sounds in live “call-and-response” training sessions with hundreds of people in 2018. “There’s a pervasive narrative about technology that claims it dehumanises us. We represent a contrasting position. We don’t run away from it, we run toward it, but on our own terms,” says Holly Herndon. And that – beyond her enlightened attitude – is perhaps the single most important lesson we can learn from digitalisation, which still holds the promise of so many technological advances: if we fail to develop our own vision of the musical future, others will do it for us – and right now it is the big corporations that are dreaming very different dreams.
Note: You can access the AR content of this article via the issuu-paper of this magazine.
 Interview with the author, 27 November 2015
 Tacotron 2