Interview: Trials & Errors

By Foteini Vergidou.


Ferocious Urbanites met the curators Daphne Dragona & Katerina Gkoutziouli for a quick chat on trials & errors in data-driven and algorithmic systems.


Computer algorithms today perform remarkable tasks with high accuracy and at massive scale, demonstrating the accelerating pace of technological development. This growing computing power has turned artificial intelligence into a key technology of the 21st century.


The exhibition ‘Trials & Errors’ explores the social and aesthetic implications of A.I. The participating artists examine, try out and challenge artificial environments, classification systems and ways of thinking that are being shaped by machine learning.


They focus on the machine learning training process, the role of errors in systems’ continuous optimization, and their influence on humans. Adopting a trial-and-error approach, the artists embrace malfunctions, failures and flaws as a means of understanding automation processes.


Using DIY AI tools, shadowy devices, handmade datasets and personal taxonomies, they generate heterogeneous, unstable and unpredictable worlds. As artificial intelligence systems remain largely opaque, the artists’ explorations suggest ways to navigate and interpret them.


Trials and Errors, Romantso, Ferocious Urbanites, Ferocious Athens
Yorgos Papafigos, BoliasmaII, 2021. PH: Mariana Bisti

FU: In the exhibition ‘Trials & Errors’, currently on view at Romantso in Athens, five young Greek artists present new works that explore, through a variety of media, the complexities of automated computing. What does a reality based more and more on artificial intelligence mean? What can we expect from the future?


DD: Applications of artificial intelligence are already being used in many different sectors. Of some we are more aware, of others less. For instance, we know well by now that social media feeds are constantly monitored and regulated by algorithms. We have also grown used to intelligent personal assistants like Amazon’s Alexa, which have become part of many home environments. In countries like Japan, care robots supporting the elderly are also common. AI systems are likewise used to combat climate change, helping scientists capture and process climate data, or in urban environments to combat crime.


A reality based more and more on artificial intelligence means a reality where the prediction, and even pre-emption, of events becomes increasingly possible, and where decisions can be taken faster and more accurately by machines. While one cannot deny the benefits of this emerging reality, one also cannot avoid asking: What are the drawbacks? What happens when wrong decisions are made? How might people’s lives be affected by machines’ errors and the wrong decisions that follow from them?


FU: When facing an error in an algorithmic system, questions of control arise. What makes the ‘Trials & Errors’ approach so relevant right now?


KG: We have already seen many errors arising from algorithmic systems, such as the failure of face recognition software to identify darker-skinned faces, Amazon’s AI hiring tool that discriminated against women, or, more recently, the South Korean chatbot Lee Luda, which lived on Facebook Messenger and used hate speech towards sexual minorities. There is certainly an ongoing debate about control: who controls our data, how our data is manipulated, and what views these technologies wish to propagate. But there are also concerns related to the design, the input (datasets) and the output (applications) of AI systems, given the missing ethical components in an industry that still remains unregulated.


In our everyday life we make use of AI systems that, most of the time, we are unaware of. The trial-and-error approach is a common strategy in most industries, but in the case of AI the “error” component can profoundly affect or harm people’s lives when these technologies are applied, for example, in the criminal justice system, military services, predictive policing, recruiting and so on. The “Trials & Errors” approach in the exhibition references the work ethic of the AI industry, hinting at the repercussions of AI systems for humans, society and art. In art, trials can be liberating and errors can be productive. The exhibition “Trials & Errors” serves as a means to interrogate the uncertainties that the vague field of AI currently brings into the world.


FU: Error can be perceived as a deviation from the norm, but it can also provide space for exploration of all the possibilities and limitations of a system. How can an error become a driving force behind a positive development?


DD: In the AI industry, errors are to a great extent used to optimize systems. The ‘trial and error’ approach, interestingly, captures exactly a form of knowing based on experience, on trying out and failing until a problem is figured out and solved. Just as with humans, machines in a way learn from mistakes. More accurately, they learn to recognize patterns based on the datasets they are given. Their failures are identified by humans, who assist in the training process by distinguishing and naming what a machine fails to properly see or hear.


The existence of errors is important from a critical perspective because it shakes the faith that the AI industry wants us to have in its systems. That is why we often hear or read: Is artificial intelligence intelligent? And can it ever be, or become, conscious of mistakes and decisions? Errors are important because, as you put it, they can make us think of the possibilities and limitations of systems. They offer a ground to study and understand how machines learn, and to reflect upon how close or how far human and artificial intelligence really are. Possible errors related to the ethical dilemmas of self-driving cars are, to a great extent, the reason they are still not on the market. Errors related to biased datasets allow us to seriously question the application, use and credibility of such systems.

FU: The works presented in the exhibition are all new commissions. How did the artists approach the concept?


KG: We approached the artists knowing that they already engage with aspects of machine learning and artificial intelligence in their work. All works in the exhibition have a different story to tell about AI and its influence on people, art and society. The participating artists focus on the relationship of humans to intelligent systems, both as designers and as users. Eva Papamargariti explores the development of the automaton and artificial avatars, hinting at the inability of AI systems to imitate human-like behaviour. Yorgos Papafigos creates uncanny devices emerging from an unusual amalgamation of organic matter, tech and industrial waste, addressing the consequences of big tech for the environment. Theo Triantafyllidis creates his own handmade dataset and generates new hybrid weapon-like objects, implying the blurring boundaries between gamification, fantasy and radicalisation in the post-digital condition. Theodoros Giannakis designs and trains his own DIY AI system, giving it an uncanny human-like form and leaving it unsupervised to evolve in the exhibition space. Maria Varela explores the effects of machine learning and pattern recognition on the female body and fertility, challenging the alleged accuracy of AI’s predictive capacity.


All works touch upon different aspects of machine learning and AI, hinting at the uncertain and controversial narratives these technologies generate.


FU: Final thoughts?


KG: Artificial intelligence is the buzzword of our time, and it will soon transform into something else, probably a new word that will allay the fears and anxieties it creates in people. However, as part of the so-called fourth industrial revolution, AI will evolve in new directions that are still hard to imagine. Through the exhibition, we wish to highlight the rise, the values and the opacity of AI systems, and their various implications for everyday experience.


DD: While AI systems keep evolving at a fast pace, artistic works assist us in imagining and comprehending how the future might unfold, what is and will be at stake, and what role we could play in bringing about change.


Maria Varela, In Vivo In Vitro In Silico, 2021. PH: Mariana Bisti

Theo Triantafyllidis, Radicalization Pipeline Series, 2021. PH: Mariana Bisti

Theodoros Giannakis, How Great Complex, 2021. PH: Mariana Bisti

Eva Papamargariti, Throng, 2021. PH: Mariana Bisti


Exhibition information:


Location: Romantso, Anaxagora 3 – 5, Athens

Duration: Until 22 December 2021

Opening Hours: Tuesday – Sunday, 17:00 – 22:00. Closed on Mondays.

Artists: Theodoros Giannakis, Yorgos Papafigos, Eva Papamargariti, Theo Triantafyllidis, Maria Varela

Curated by Daphne Dragona & Katerina Gkoutziouli


Credits:


Exhibition Design | dragonas architecture studio

Visual Communication Design | NMR Office

Audiovisual Design | Michalis Antonopoulos, Antonis Gkatzougiannis, Makis Faros

Press & Publicity | Fotini Barka

Art Mediator | Lydia Panagou

All artworks are new commissions.

Under the auspices and with the financial support of the Hellenic Ministry of Culture and Sports | With the support of Bios-Romantso

Organised and produced by VEKTOR Athens