What is digitalisation?
At its most fundamental, digitalisation represents the transition from the physical to the virtual, and from the local to the global. I am not so much interested in the true meaning of the term as in how it is applied. The word “digitalisation” – a term used ubiquitously by German media outlets and politicians – is misleading: the expression is applied indiscriminately to a broad range of past and future media technologies. In the 1960s, there were semiconductors and mainframes. There were procedural and functional programming languages. There was the command-line interface. People owned TVs that received just a handful of channels. Then came the 1980s with Windows-based PCs and Macs, graphical interfaces, object-oriented programming languages, cable TV and satellite reception with hundreds of channels. In the 1990s, the Internet introduced us to new opportunities in digital publication and communication. Computer technology advanced automation in the workplace, and robotics accelerated industrial working processes. We became accustomed to global connectivity, hypertext and hyperlinks, and utopian visions of cyber culture.
Since the first decade of the 21st century, we have acquired tablets, mobiles and smartphones. We use social media platforms. Paradoxically, an online existence that initially exploded into millions of channels and promised pluralism has solidified into the hands of a few monopolies. And now we find ourselves confronted with a paradigmatic change: advances in machine learning – in particular what is known as “deep learning” – and neural networks. Data mining and the availability of massive computing power and enormous amounts of data (Big Data) have made AI a significant force that will impact our lives, our society and the economy. A pragmatic definition of AI could be the following: software that learns from experience and moves beyond its original programming.
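This pragmatic definition can be made concrete with a minimal sketch – not a deep-learning system, but the simplest illustration of software whose behaviour is shaped by experience rather than fixed in advance. All names and values here are illustrative; the program is never told the rule (here, logical AND) but infers it from examples.

```python
# A minimal sketch of "software that learns from experience":
# a perceptron that infers the logical AND rule from examples
# instead of having the rule written into its code.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the program's "experience" lives in these weights
rate = 0.1                     # how strongly each error adjusts behaviour

def predict(x1, x2):
    """The current behaviour of the program, given its accumulated experience."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for _ in range(100):           # repeated exposure to the same experiences
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        w1 += rate * error * x1    # adjust in response to the error made
        w2 += rate * error * x2
        bias += rate * error

print([predict(x1, x2) for (x1, x2), _ in examples])  # → [0, 0, 0, 1]
```

After training, the program reproduces the AND rule it was never explicitly given – the behaviour emerged from its exposure to examples, which is the core of the definition above.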
Industry 4.0 media technologies
And now a second wave of digitalisation is upon us, with researchers and programmers busy creating the next generation of advanced digital media technologies. Klaus Schwab of the World Economic Forum calls it the “Fourth Industrial Revolution”, or Industry 4.0. I divide Industry 4.0 technologies into nine categories:
- artificial intelligence (AI), robotics, automated software processes and self-modifying algorithms
- virtual reality (VR) and augmented reality (AR)
- 3D printing and additive manufacturing
- Internet of Things (IoT)
- self-driving cars
- blockchains and other distributed ledger technologies
- virtual assistants (like Siri and Alexa)
- advances in biotechnology
- digital-neural interfaces
Some of the special fields where AI has been increasingly successful include the recognition and classification of patterns (speech, images, facial scans); language processing (generation, translation, conversation); services (hotel, restaurant, cleaning, customer service); online shopping predictions and personalised advertising algorithms; decision-making support; health-care applications; and robotic process automation in hardware and software.
The first three industrial revolutions
The first industrial revolution took place between the late 18th and early 19th century. The population of large cities grew by leaps and bounds. With the invention of the steam engine, the iron and textile industries flourished, and rail connections were expanded. Mechanical production in factories led to ever-growing prosperity and a higher standard of living, although for many, living conditions remained precarious at best. The second industrial revolution occurred in the late 19th and early 20th century. This revolution was largely driven by such branches as steel manufacturing, oil production and electricity generation. Among the most important inventions of this period were the telephone, lightbulb, gramophone and the automobile. Management strategies based on assembly lines (Fordism) and scientific management (Taylorism) were incorporated into working processes. The third industrial revolution, which began in the 1960s, has become synonymous with the digital revolution, or, as I referred to it above, the first wave of digitalisation. This revolution heralded the transition from analogue electronic and mechanical devices to digital technologies.
The need to challenge and renew computer science from within
Most articles written about current trends in AI are either extremely enthusiastic or emphatically critical. Those who are most passionate about it are usually businesspeople, whose primary goal is to earn money, or technical experts who are simply enthused by its engineering or programming aspects. The critics are most often academics from the humanities and social sciences – philosophers and sociologists who foresee numerous moral and social problems arising from AI. As for myself, I take a third position (though I share the same concerns about the moral and human problems that AI could cause): computer science needs to be challenged and fundamentally renewed from within, rather than having moral and legal restrictions and provisions imposed on it from without. How can we consistently form a relationship between ethics and computer science at a more immanent and fundamental level? What might the systematic interface between humans and AI objects look like in the implementation of a dialogical artificial intelligence? I consider this type of interface, known as “creative coding”, to be situated at the boundary between art and computer science. Creative coding marks the first step towards modifying and renewing the field of computer science from within.
What is creative coding?
A rather unrealistic, but basically justified suggestion would be to establish creative coding as the successor to critical social and media theories in the humanities at German universities. It is more feasible for German and European universities of art and design to recognise creative coding as a chance to develop an area of study comprising both theory and practical application. The rigid separation between theory and practice – as is often taught at universities of art and design – should be abandoned. Creative coding should become an important component of the course curriculum; for universities, this would represent an appropriate response to the global situation of digitalisation and Industry 4.0.
So what exactly is creative coding? Everyone knows what computer science is and what programming and writing software code is all about: it’s a technical discipline, an engineering subject, an established practice of learning and knowledge about how we get something to run, how to write a program to do something for us without making an error. Technical universities train their students in computer programming. All kinds of branches employ computer programmers: banks, insurance companies, car manufacturers, telecom providers – the list goes on and on. Every major corporation maintains a gigantic database, transaction system and the required IT know-how.
Since as early as the 1960s, artists have been using video technology to create artworks and art installations that explore the possibilities of technology and/or modify media technology in order to communicate aesthetic and socio-political concerns. These genres include new media art, generative art, code art, virtual reality art, robot art, bio art and ecosystems art. In the past fifteen years, artists and designers have become increasingly interested in learning how to write software code. This trend has been fuelled in part by specialised development environments for creative coding – toolkits for artists and designers such as Processing, openFrameworks, Cinder, Max/MSP and vvvv. So far, however, artists and cultural scholars have rarely, if ever, questioned the conventional understanding of computer programming. It seemed obvious that programming was exactly what it was, and creative coding was just an added category for those who wanted to learn how to program. The idea that programming needs to be fundamentally changed and developed by those involved in the humanities, design, art and cultural studies has only now become more prevalent. There are now calls for reflecting on computer science and reintroducing the ambiguity of poetic language into software code. Creative coding promises to break new ground for changing the design patterns of culture. The aim is to make computer science a hybrid discipline that merges technology and the humanities. Specifically, it should focus on a hybrid of technical and cultural codes.
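The creative-coding idea itself can be illustrated without any of the toolkits named above. The following sketch, in plain Python with arbitrary parameters, generates a drawing from a simple rule perturbed by chance – the elementary gesture of generative art – and writes it out as a self-contained SVG file (the toolkits offer far richer canvases and interaction).

```python
import math
import random

random.seed(7)  # reproducible "chance" for this illustration

def spiral_points(n=200, cx=250.0, cy=250.0):
    """A noisy spiral: each point follows a rule, perturbed by chance."""
    points = []
    for i in range(n):
        angle = i * 0.2                          # the rule: steady rotation...
        radius = 1.0 * i + random.uniform(-5, 5) # ...and growth, plus noise
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

# Emit the drawing as an SVG polyline.
path = " ".join(f"{x:.1f},{y:.1f}" for x, y in spiral_points())
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="500" height="500">'
       f'<polyline points="{path}" fill="none" stroke="black"/></svg>')

with open("spiral.svg", "w") as f:
    f.write(svg)
```

The point is not the spiral but the stance: the code is written to be looked at and varied, not to compute a predetermined result without error.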
The cultural revolutions of programming
Over the years, computer programming has passed through a series of successive and very different paradigms and undergone revolutionary paradigmatic changes. These seemingly technical paradigms are actually “knowledge paradigms” which resemble a genealogy or sequence of cultural-historical stages. We have to view these successive phases or paradigms of computer science in terms of cultural and historical knowledge. Indeed, it is no simple task to see, recognise or define what computer science really is! Computer scientists, trained as experts in technical practices, lack any outside perspective on their own discipline.
Deep-learning algorithms supplement task-specific, rule-based algorithms with a paradigmatically shifted AI. This AI learns from experience, develops itself, and uses patterns and inferential reasoning to extract information from the massive pool of available data to help it make decisions. The “otherness” of neural-net-based artificial intelligence and artificial life is in some way an “alien intelligence” or “posthuman intelligence” that is not identical to human intelligence. It must be regarded as having its own autonomous, “aesthetic” form, its own ontological status and its own claims to rights and recognition. As the philosopher Luciana Parisi points out, deep-learning algorithms underscore uncertainty, the vagueness of exceptions, incalculability and a functionality that operates with coincidences, accidents and errors. They far exceed what was formerly a rational-calculating computer science based on certainty.
I am interested in studying today’s algorithms in terms of the history of automation, of the discipline, of monitoring, simulation, surveillance. My focus is on the development of an alternative concept of “moral algorithms” and their future use. Does AI necessarily have to be a continuation of capitalistic and bureaucratic automation? Is it possible to alter the meaning of automation? I believe automation should make society and commerce less bureaucratic. It should be more sensitive to exceptions and react more flexibly to specific circumstances. How can we build bridges between philosophy and programming?
Artificial life holds more promise than artificial intelligence. Essentially, computer science is based on combinational logic and treats software as an inert “thing” (i.e. software can only do what it’s been programmed to do). AI cannot lead to autonomous reasoning if we think of it merely as a continuation of mainstream computer science of the past. We have to fundamentally rethink computer science. Artificial life is a movement that aims to make software more “alive”. In the 1990s the Santa Fe Institute proposed the “strong a-life” theory for creating software as the basis for biology and cellular machines. Self-replicating computer programs are often said to be “alive” in terms of how they implement biological analogies of complex adaptive behaviour. Software is thus produced according to organic principles of self-organisation. The predominant digital binary computing method is based on the so-called discrete logic of clearly distinct identities and differences. What we need instead is a new logic based on similarities. At present, the relationship between executable software (the whole) and the smallest units of database information (the parts) is a mechanical one based on the metaphor of the machine, a relationship between the whole and its components, like a car engine. What we are trying to achieve with a-life software is to build a relationship of patterns or a resonance between the software and its data elements. A relationship like that between a musical piece and its individual notes, or the ambiguity expressed in poetic word chains.
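The contrast between mechanical and organic software can be glimpsed in a classic a-life example – a cellular automaton, here Conway’s Game of Life. This is a minimal sketch, not the Santa Fe Institute’s research software: no central program choreographs the whole; “lifelike” global patterns emerge from purely local rules applied to each cell and its neighbours.

```python
from collections import Counter

def step(live):
    """One generation: `live` is the set of (x, y) cells currently alive.
    Each cell's fate depends only on its eight immediate neighbours."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth with exactly 3 live neighbours; survival with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # the horizontal row becomes vertical
print(step(step(blinker)) == blinker)  # → True: the pattern repeats
```

Nothing in `step` mentions blinkers, gliders or oscillators; those patterns are emergent – a modest instance of the whole-and-parts “resonance” described above, as opposed to the car-engine relationship of component to mechanism.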
The AI objects or entities should be given more autonomy in matters of design and practice (as envisioned in science-fiction films like Blade Runner, Ex Machina and Bicentennial Man). Such an objective, though, immediately arouses the suspicion that one is promoting the dreaded scenarios played out in science-fiction movies, in which a “super intelligence” or “singularity” far superior to humans takes over the planet with AI machine species (as in The Matrix). The possibility of avoiding this apocalyptic science-fiction scenario lies in writing an alternative scenario, i.e. by carefully defining the specific details of a mutual relationship between human, morally driven institutions/actors and AI. We need a system of partnership, or of careful monitoring and mutual exchange. In the prevailing view, morals and algorithms are ensnared in a dualism which strictly separates one from the other: a moral can only enter as an input to an AI-driven process, and moral consequences can only be produced as the output of AI. This separation of process and goal is reminiscent of the dissociation between media and message, or form and content, which was deftly refuted in Marshall McLuhan’s media theory (“The medium is the message”). Moral considerations should be embedded as an inherent component and not added to the whole as a dualistic, peripheral afterthought.
I am also interested in the following important questions: How can we draft a roadmap for transitioning deep-learning networks to a mutually transforming dialogical relationship between humans and technological entities, one that promotes ethics and environmental sustainability? Who would be in charge of the ethical programming? How can the software be granted a relative degree of autonomy without giving it too much influence or power? How can the ethical behaviour of technological beings be monitored?
Klaus Schwab, The Fourth Industrial Revolution, World Economic Forum, Geneva, 2016.
Luciana Parisi, “AI (Artificial Intelligence)”, in: Rosi Braidotti, Maria Hlavajova (eds.), Posthuman Glossary, New York, 2018.
Christopher Langton (ed.), Artificial Life: An Overview, Cambridge, 1995.
Marshall McLuhan, Quentin Fiore, The Medium is the Massage: An Inventory of Effects, Berkeley, 1968.
Alan Shapiro, Die Software der Zukunft: oder Das Modell geht der Realität voraus, Köln, 2014.