
Thinking Heads Reflect on ‘Thinking Machines’

How will ‘thinking machines’ impact our future lives? This is just one of the numerous questions discussed last week at an interdisciplinary AIAS conference, which focused on the challenges, applications and perspectives of neural networks, Artificial Intelligence and Machine Learning.

The international conference ‘Thinking Machine: Interdisciplinary Perspectives on Neural Networks’ was held at the Aarhus Institute of Advanced Studies on 21-23 August 2018 and featured 16 talks by researchers from a wide range of fields, who came together to inspire and challenge each other’s academic research on AI, neural networks and machine learning.

The three keynote speakers themselves represented different fields and views on the conference topic. Patrick Jagoda, from the Department of English and the Department of Cinema & Media Studies at the University of Chicago, explained how the interactive and participatory affordances of video games can serve as a model of systems and networks, exposing players to, and letting them experiment within, the field of digital media, deep learning, algorithms and telecommunication networks.

The second keynote speaker was Maria Schuld, who researches quantum computing at the University of KwaZulu-Natal in South Africa and is part of Xanadu Quantum Computing Inc. in Toronto. Her talk discussed the prospects and shortcomings of quantum mechanical models of the biological brain, and presented her work with photonic quantum computers as excellent linear algebra machines, and thus potentially superior tools for machine learning.

The third and final keynote speaker was José del R. Millán from the Swiss Federal Institute of Technology in Lausanne. He presented examples of how researchers at his institute use Brain-Machine Interfaces (BMIs) in their studies, focusing in particular on the development of BMIs to help people with severe motor disabilities. Millán highlighted the challenge posed by the finding that, for the best results, both the machine’s learning and the human user’s learning need to be stimulated and calibrated.

The organizers of the conference invited two students from Aarhus University, Andreas Bock Michelsen from the Department of Physics and Astronomy and Mathias Holm Sørensen from the Department of Media Studies, to write a report recording their takeaways from the conference. Here is the result of their interdisciplinary effort:

Conference Report (by Andreas Bock Michelsen and Mathias Holm Sørensen)

Neural networks and, by extension, machine learning and AI are excellent tools for analysis and prediction across disciplines – whether it be treatment planning with MRI scans, assistance in quantum mechanical calculations or analysis of texts and rituals. In a society where the constant rise in available computational power is matched only by the immense amounts of data gathered about everyone and everything, the ability of these tools to learn from the past and make inferences about the future renders them indispensable in the constant quest for optimization, and an ever more feasible replacement for humans in many tasks.

Culture shapes itself around unsolved riddles

The conference opened with a keynote by Patrick Jagoda, who argued that media provide ways of exploring the relations between human and machine and the impact networks have on society. At the same time, the production, reception and form of media are themselves shaped by networks. In his talk, Jagoda showed how video games offer ways of engaging experimentally with aspects of new technologies, such as distributed decision-making, networks and AI in a more general sense. The explosive popularity of video games helps us navigate and understand a globally connected world where interactions with human-mimicking machines have become an everyday occurrence. Adding to this, Emanuele Nicolo Andreoli from the School of Communication and Culture at Aarhus University later argued that fictional manifestations of AI are assuming a gradually more human form, thereby breaking down the distance between human and machine. But this cultural change also brings alienation, when we experience ourselves reduced to parameters to be tweaked by a neural network, and fear, as sensationalist media brand peculiarities in technological developments as the dawn of evil AI overlords – a point made by Kristoffer L. Nielbo from the Department of History at the University of Southern Denmark. The interplay is complex, and it requires interdisciplinary collaboration between those who study humans and those who develop the technologies.

Limits of learning

These tools are also often overrated. While clearly superior for many specific tasks, there is a tendency to decide on neural networks as the solution and only afterwards figure out the problem. As Kristoffer L. Nielbo pointed out, this is especially a problem within the Humanities, where viewing neural networks as auxiliary tools rather than as goals in themselves would suit the disciplines far better. In his experience, many new researchers in his field tend to focus on using machine learning before even knowing what to use it for. We are at the peak of a hype cycle around neural networks and AI, which is also a sign that the technology is maturing. And so we must remember that the machines are still far from resembling humans and their skill set – in the eyes of both the philosopher and the engineer. As Christian Grund Sørensen from the Department of Communication and Psychology at Aalborg University pointed out, we must understand what intelligence is, and which parts of it we want and are able to put into a machine, before we can consider the machine intelligent.

While human intuition can help us solve some problems faster than even a machine, as demonstrated by Janet Rafner of the Department of Physics and Astronomy at Aarhus University, how can we implement such a seemingly irrational feature in a logical machine? A machine can achieve incredible precision when issuing medical diagnoses or evaluating loan applications, but is empathy not a critical part of making the “right” decision in many of these cases? There is a tendency to employ these technologies as black boxes in competitive pursuits of profit, which hinders the implementation of seemingly suboptimal and irrational features and makes it impossible to contest the decisions produced. But if we wish to make the inner workings of these neural networks, and the data they learn from, available to the common person, how far can we go without severely infringing on privacy and intellectual property?

We have passed the point of no return and must face a society full of learning machines mining our data. Neural networks therefore pose two very different problems: on one hand, the individual brain versus the individual machine; on the other, how already existing technologies can change us as humans as well as our society. To equip ourselves for this future, we must combine the technical understanding of the engineers developing the machines with the societal understanding of philosophers, humanities scholars, social scientists and policy-makers, so as to lay down road maps to a world with democratic and contestable algorithms. This was a point of focus for both Ansgar Koene from the Horizon Digital Economy Research Institute at the University of Nottingham and Bendert Zevenbergen from the Center for Information Technology Policy at Princeton University. They both argued that ethics should be a prominent focus in the development of these thinking machines, and that this ethical work is seriously lagging behind the technical developments. Simon Enni, a PhD student from the Department of Computer Science at Aarhus University, also touched upon ethics in the context of neural networks. He noted that neural networks are inductive in nature, since their behavior is based on analysis of past data. Because this data is chosen by people of a specific culture, acting within specific fields and markets, biases are inevitable. But these biases have to be justifiable by ethical standards, while at the same time offering meaningful transparency.

Throughout the conference it was made clear that secondary neural networks designed to explain specific machine decisions are already being developed, as are concrete ethical checklists that companies can use to certify their products. Experts are being brought together to create cases inspired by real problems, distilled to their most relevant parts and distributed around the world to prepare industries and researchers for the new problems we will come to face, as exemplified by Bendert Zevenbergen’s work at the Center for Information Technology Policy at Princeton University. But the technologies themselves also face a diversity of problems. In the concluding discussion, it was argued that the ubiquity of parallels to biological brains seems to hinder rather than help the understanding and development of neural networks, and that the ideas on which neural networks are based are many decades old. New technologies, such as quantum computers, are being developed within a framework of intelligent machines, but it is unclear what these syntheses will actually achieve, as pointed out by keynote speaker Maria Schuld.

Neural networks and similar technologies are excellent and necessary helpers in our striving for better lives in a better-understood world, but we must work to unify the paradigms of algorithms, data-gathering and quantitative optimization with a society full of individuals, who all have their own understanding of the world and their own deeply held ethical values. The different aspects of neural networks must be explored further in the years to come, within different disciplines, to ensure a future where intelligent machines interface with our society in ways that fuel progress ethically. Throughout the conference, the desire for interdisciplinary collaboration was evident, and we hope it will prevail in the research to come.

Read more about the conference on
Andreas Bock Michelsen, student
Department of Physics and Astronomy, Aarhus University

Mathias Holm Sørensen, student
Department of Media Studies, Aarhus University