The Google Instinct

Elena Esposito & Laokoon


A bold claim: the collective Laokoon speculated that, with their artistic data experiment “Made to Measure”, they could completely reconstruct a human being based solely on the digital trail she left behind on the Internet over five years. Their attempt reveals a future in which the practice of data collection gains massive political and economic influence. Laokoon’s project was inspired by the research of sociologist Elena Esposito (EE). In the following, Cosima Terrasse of Laokoon (L) speaks with Esposito about the cultural techniques we will have to develop for the coming “age of algorithmic prediction” – one that may well spark a crisis of individual freedom and autonomy.


L Elena Esposito, your work has played a crucial role in the development of our current project “Made to Measure”. In the project, we work with predictions that algorithms derive about specific future developments from the data trails people leave on the Internet. How do you think this will change our idea of the future in the coming years?

EE First things first – something fundamental: our understanding of modern human individuality and self-determination depends almost entirely on how we handle the future. The realisation that we create our own future through our decisions and actions is a deep-seated conviction we have held since the era of European modernity, one that continues to decisively shape human self-awareness to this day. But several underpinnings of this hard-won self-understanding may have to change in the near future, because we may be entering a new cultural era that thinks about the future in a fundamentally different way. I call it the age of prophecy.

L And Big Data plays a major role in this respect?

EE Yes. I study the social consequences of machine learning. Its purpose, among other things, is to precisely predict our individual and collective future. Granted, this is an assumption. But as such, it fundamentally alters how we deal with the uncertainty that an unknown future always represents. I believe we can best understand the scope of this imminent cultural shift through an analogy with the practices of prophecy in ancient mythology.

L Just as the flight paths of birds or a sudden change in the weather were once used to foretell the future, predictions are now made based on our Google Search history?

EE That’s about right, yes. My comparison rests on three aspects that describe the questionable quality of algorithmic predictions and why they fail to meet our modern requirements for prognosis. The first aspect is that we turn a blind eye to the high degree of coincidence inherent in these predictions. Algorithms use all kinds of coincidentally similar data from the Internet to generate predictions of the future. Nonetheless, these predictions are sometimes treated as if they were substantiated by a sound, structured, analytical process. Ancient observations of the natural world likewise tried to draw fundamental conclusions from similarities: “Every time the weather suddenly turns, the gods are angry, and the battle will end in defeat.” Algorithms explain the future to us in much the same way – and we actually take such information seriously!
A second aspect of my comparison touches on the individualisation of future expectations. Algorithmic predictions are supposed to be accurate for one very specific individual – just as mythical visionaries revealed their prophecies. When a character in Greek mythology consults the Oracle of Delphi, the prophecy comes in response to a specific, personal question: “Will my marriage be blessed with children?” or “Will the battle go well for me?” There is never any talk of average values, probabilities or general trends. Nowadays, companies hope algorithms will give such questions individually tailored answers: “Who will purchase a red dress on the 2nd of August in Berlin? That will be you!”

L You might be right, if you say so…

EE Not me – the algorithm says so! The example also shows how uncertain this kind of individualised prediction is, and on how many unconsidered factors its coming true depends. Yet exactly such predictions are being made, and companies pay for them.
And the third aspect is the performative, self-fulfilling character found both in ancient prophecies – think of Oedipus – and in algorithmic predictions. The French sociologist Dominique Cardon argues that the algorithm creates the very future it predicts. Products, political beliefs and social contacts are suggested to you because the algorithm thinks they fit you. You might be unfamiliar with them at first, but little by little they begin to influence your buying habits and even your worldview. The algorithm constructs the future it aims to predict. In the end, you are more likely to buy that red dress because countless images and articles on the web have been planting that thought in your head in advance.
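This self-reinforcing dynamic can be illustrated with a toy simulation – a hypothetical sketch, not Laokoon’s method or any real platform’s model. Each recommendation nudges the user’s measured interest toward whatever was shown, so the predicted preference gradually becomes the dominant one:

```python
def recommend(prefs):
    # Show the category the model currently scores highest.
    return max(prefs, key=prefs.get)

def simulate(steps=50):
    # Hypothetical interest scores a platform might have inferred for a user.
    prefs = {"fitness": 0.34, "careers": 0.33, "fashion": 0.33}
    for _ in range(steps):
        shown = recommend(prefs)
        # Exposure nudges measured interest toward what was shown:
        # the prediction helps create the behaviour it predicts.
        prefs[shown] += 0.05
    return prefs
```

After a few iterations, the initially negligible lead of “fitness” hardens into an apparently dominant interest, while the other categories never get a chance to grow – the model’s forecast validates itself.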

L In our project “Made to Measure”, we were interested in how these new qualities of algorithmic prediction impact the life of an individual. To do this, we set up an experiment: Is it possible to create a perfect doppelgänger based exclusively on the data a person has left behind with Google, Facebook and other Internet companies over a period of five years? Our “data donor” requested her data from these companies and forwarded it to us for analysis. Together with specialists, we analysed this data and created the person’s double – an actress who re-enacted important events from the previous five years of the person’s life – relationships ending, moving to different places, daily routines, emotional lows and new beginnings – everything in as much detail as possible. In the end, our data donor offered her impression of our attempt at reconstructing her life and contributed her own version of her life story.
Based on the data, and especially on her Google Search queries, we could follow how the person’s preferences, attitudes and interests developed and changed quite fundamentally over time. The algorithm initially identified numerous aspects with surprising accuracy. At the beginning of the data set, we saw Google Search queries on strategies for physical self-optimisation and gaining social appreciation. She was shown ads – from nutritional supplements and diet programmes to fitness courses – which she subsequently clicked. These products are financially relevant for Google and were therefore important for algorithmically profiling our data donor. The predictions were initially correct: the person’s interest in exactly this suggested content steadily increased with time and became, to some extent, obsessive.
But as her values and wishes gradually changed – as she began looking into, for example, career changes and professional training opportunities – the recommendations she received continued to be based on her former, more lucrative habits. She was confronted again and again with patterns that were actually detrimental to her. It is impossible to say whether the algorithm was wrong, because when confronted with certain products, the data donor did respond in the predicted manner. This touches on the third aspect of your comparison – the self-fulfilling prophecy – and confirms that the algorithm in our example did in fact exert an enormous influence on her personal life, and continues to do so today. At the same time, our data donor had resources at her disposal to change her life: growing curiosity about a different lifestyle, self-confidence gained from success in her studies. Luckily, every life is more complex and multifaceted than the model the algorithm creates.

EE It’s a fascinating project because it shows a process that sends a person back to certain vulnerable phases of their life. It seems obvious that we’d want to hold the big corporations and their programmers responsible for that. But research has found numerous reasons why placing blame won’t suffice. The conservative element comes from the data we ourselves leave behind on the web. Human behaviour is seldom guided by innovative, socially responsible and self-reflective patterns. Just the opposite – we generally tend to reinforce the same trends over and over. This means that innovation is the exception in digital culture, too.

L Of course we are the ones who leave these data trails behind on the Internet, and we are accountable for the decisions we make. But these bits of data do not fall on neutral ground. We leave them behind in a digital universe full of programmed preliminary decisions based on which of our habits are economically relevant to third parties. Before we can even make a decision online, an algorithm has already limited our possibilities to a manageable number. What’s more, our personal development also depends on the digital environment, which influences us just as much as the analogue one. Sometimes I go back to Google’s search engine despite knowing better. I’ve grown up with it and have basically developed an instinct for how to move around within its familiar system. There’s even a desire to please the system – if you wish to be personally and economically relevant. For example, when YouTubers repeatedly remind their viewers: “And please leave a comment – the algorithm loves that!”

EE In both directions – with humans and with algorithms – information is basically just collected; no learning takes place in the sense of innovation. That can only change by consciously behaving in a certain way on the Internet, by manipulating the technology. We call this reverse engineering, and it is based, for example, on parodies of expected behaviour, on humour and clever tricks – on art in the broadest sense of the word.

L The only way to apply such methods of online behaviour at the moment is through cultural proficiency and personal knowledge. A person with average knowledge cannot even begin to grasp the form of data collection they are consenting to. Our data donor, for instance, granted Google her consent – and later consented to our experiment. But could she have guessed that, based on these five years of data, we would be able to detect the pattern of an eating disorder? In hindsight, it was imprinted in the data and indeed played a role in the data donor’s life – but she couldn’t foresee that five years earlier. In the beginning, this person simply clicked “Accept cookies” and had no idea that a model, a typification of her personality, would be created from this. She couldn’t know how strongly this model would come to reflect her reality through the subsequent personalised product ads and recommended content from the Internet.
How are we supposed to change, how can we escape ourselves, if we behave in self-reinforcing patterns that play first and foremost on the commercially exploitable aspects of our behaviour? Knowing how these models are created won’t necessarily lessen their efficacy – psychologically, it could even strengthen them. Wouldn’t it be better if we simply ignored what the Internet companies know about us?

EE In the field of medicine there is a similar concept – the right not to know. There are cases, for example, in which people would rather not be confronted with the fact that a severe illness will shorten their presumed life expectancy. But based on so-called contextual profiling, third parties like insurance companies may suddenly treat us differently because they presume to know more about our illness than we do. There are also situations in which third parties have a moral right to this information. In our example, that might be relatives who should be notified of the risk of carrying a genetically transmitted illness. Differentiating between the various rights to certain data promises to be one of the major legal challenges facing our society in the future.

L What makes these issues even more complicated is that such personal fates are regarded as the responsibility of individuals alone and not as a shared social project.

EE In the past, many European societies could – or had to – depend on an infrastructure of “shared uncertainty”. In the insurance industry, for example, this was based on general statistics instead of individual prognoses. Here in Europe, we’ve long regarded this system of socially shared risks as a motor for innovation, because being protected can activate one’s willingness to take risks. But what if the consequences of my individual lifestyle could be predicted? Who would be willing to share the financial burden of ensuring my safety? Who would knowingly admit a person with an eating disorder to a collective health insurance scheme if that person’s treatment costs were predictably higher than everyone else’s?
We cannot forbid anyone from preferring individual analysis by Big Data to the shared protection of a community – but the consequences affect the basic assumptions of our co-existence.

L Are there any technical innovations that could help us develop solidarity-based infrastructures within Big Data?

EE There’s a huge research field on “explainable AI”, which basically aims to improve our understanding of algorithms and the way they glean information from data. Building on this expanded knowledge of algorithmic learning processes, international researchers hope to gain sovereignty over algorithms. I believe we should go about it differently and instead create AI that produces datasets in exactly the form we ourselves define as a socially desired goal. We are at the very beginning of programming these kinds of completely new control mechanisms for AI. The method of so-called “co-construction”, which is the focus of our research project at the University of Bielefeld, aims to allow future Internet users to exert more influence on what exactly machines learn about them. In other words, individuals should consciously instruct algorithms to learn something about their preferences and expectations based on self-selected information.
But I’d be interested in learning whether the protagonist in your project found her own personal way of freeing herself from those algorithmic predictions?

L During the course of the project, the person took an intense interest in examining the major changes she had experienced over those five years – and seeing them presented like that, in an artistic experiment, was extremely emotional for her, as it was for us. Just a few days ago, though, the data donor told us that she hadn’t fundamentally changed how she behaves on the Internet. Now, every time she searches for something on Google, she wonders: “How would Laokoon interpret this?” Apparently, she recently asked Google: “How can I become a nun?”

EE And what conclusions do you draw from the project about your task as artists?

L “Made to Measure” takes place on an interactive storytelling website – a digital stage, if you will. The way the website functions embodies the otherwise hidden logic of algorithms. We make this logic visible and appropriate it for our own purposes – through artistic reproduction and by reprogramming certain aspects according to our own rules. It’s impossible to foresee what kind of experience visitors will have on the website and what they will learn about how algorithms function. What I can say is this: it’s an attempt to poke holes in the black box of Big Data by making ourselves the authors of our life story – with nothing but a dataset.

Elena Esposito

Elena Esposito is a professor of sociology at the Universities of Bologna and Bielefeld. She earned her doctorate in 1990 under Niklas Luhmann and, as a systems theorist, has studied the problem of time in social systems. Her research has also addressed topics such as the technologies of remembering, forgetting and predicting, as well as fashion, fictions and the use of unpredictability in the financial system. In recent years, she has focused on the social and political consequences of algorithmic prediction systems – their evolution from mere risk-assessment tools into systems that claim to predict the future, and most recently into autopoietic reproductions of present scenarios that admit no alternative.

Labs of Cohabitation

For its event series “Labs of Cohabitation”, the German Federal Cultural Foundation has presented a compilation of digital dialogues and artistic projects since 2021. Artists, theorists, researchers and cultural practitioners are invited to envision and discuss scenarios of our future cohabitation.

The form of each individual lab is determined by the knowledge it promises to provide. This could be a moderated controversy on the painful discord in cultural practice. It could be a fast-paced brainstorming event to encourage others to share their thoughts and ideas in the forum of public discourse. The central component of the programme consists of artistic projects developed in dialogue with the Federal Cultural Foundation as real-world laboratories. One of these projects is the artistic data experiment “Made to Measure”, initiated by Laokoon and the Federal Cultural Foundation.

The “Labs of Cohabitation” series has staged five digital events so far on the topics of human self-determination, the culture of remembrance in a global context, East German identities, and truth in the information society. The invited guest speakers have included the historian Michael Rothberg, writer Olivia Wenzel, stage director Arne Vogelgesang and choreographer Florentina Holzinger. All past episodes remain available on YouTube.


The artists group Laokoon creates works which combine investigative and scientific research findings with various artistic forms of expression. Their essays, documentary films, stage productions, lectures and radio plays centre around the question of how our views of humanity and society are evolving in the digital age.

The French artist and landscape architect Cosima Terrasse develops participatory performance projects. She has taught at the Social Design Studio at the University of Applied Arts in Vienna since 2018. The documentary film “The Cleaners” by directors and screenwriters Hans Block and Moritz Riesewieck, about the underground industry of digital censorship in Manila, premiered at the Sundance Film Festival in 2018 and has since won the Prix Europa and the Grimme Audience Award. Their essay “Die digitale Seele” (The Digital Soul) was published by Goldmann Verlag in 2020.

Made to Measure

For their artistic data experiment “Made to Measure”, the artists group Laokoon created a real-life double of flesh and blood based solely on the data trail left behind by an anonymous person. Together with an actress and a data analyst, Laokoon reconstructed and filmed momentous events in the life of this person on the main stage of the PACT Zollverein. Several months later, a meeting was arranged between the original data subject and the “datified” double. Since the end of August 2021, online visitors have been able to experience this intriguing experiment on an interactive storytelling website which innovatively presents the kind of conclusions algorithms draw about one’s personality and future behaviour. With its new web experience, Laokoon has effectively created a complex digital narrative in and about the Internet. “Made to Measure”, which includes both the website and a television documentary, impressively reveals the far-reaching insights into our psyche and most intimate secrets which we voluntarily grant Google, Facebook and Co. every day. “Made to Measure” was initiated and jointly developed by Laokoon and the German Federal Cultural Foundation. The project premiered on 29 August 2021 as part of Lab #5 in our series “Labs of Cohabitation”.
