Introduction

Last Thursday we gave a short pitch on the topic of our thesis, "Approaching Beneficial AI in the Context of […]", which led to an interesting and highly productive discussion with professors, students and guests at HfG Gmünd. Each team designed posters presenting their proposal for their master's thesis, which served as a foundation for discussing the topic. In the following, we'll go through some of the most interesting notes from the event. Feel free to add to and discuss these in the comment section!

Our Posters

Stages of AI

To introduce our audience to the field of AI safety research, we first displayed a brief overview of the three stages of AI development, starting with Artificial Narrow Intelligence (ANI), as we are already dealing with systems relying on this type of AI today. We define this "narrow" intelligence as the ability to accomplish specific goals in a specific context. If someday an AI is able to accomplish goals in every context as well as humans can, we will have reached Artificial General Intelligence (AGI). If an AI surpasses our intelligence even further and is able to accomplish goals in ways we humans can't even conceive of, let alone understand, we would reach the stage of Artificial Superintelligence (ASI). It's important to note that a concrete timeframe for this development (or whether we will reach superintelligence at all) cannot be predicted. You can find a more detailed description of the three stages here: https://tecadmin.net/evolution-of-artificial-intelligence/

Views on AI

Of course, there are different opinions on how this development might affect life on earth. Max Tegmark, professor at MIT, sorts these opinions into categories. On one end of this spectrum are the "Luddites", who predict that AI will be devastating for humanity. On the other end are the "Digital Utopians", who are convinced that AI will do so much good for humanity that worrying about negative consequences would simply slow down the development of an AGI; therefore, we should stop worrying and instead put our time and effort into developing ever smarter AI. In between these two extremes you can find the "Beneficial AI Movement", which we consider ourselves part of as well. This movement argues that an AGI can be beneficial for humanity, but it might just as well lead to human extinction – it's up to us. So we should start taking action, thinking about what we want our future to look like, and working towards that.

Possible Scenarios

To make this future a bit more tangible, we took up some existing scenarios for how the world might look in the future. The Future of Life Institute provides a comprehensive overview of these scenarios (https://futureoflife.org/ai-aftermath-scenarios/), and Max Tegmark goes into more detail in Life 3.0 as well. These scenarios proved to be a great starting point for a lively discussion, as they got people thinking about some of the questions we are facing right now.

Key Questions

Our second poster presented the title of our thesis: "Approaching Beneficial Artificial Intelligence in the Context of […]". On this poster, we posed a number of key questions we are facing right now and will be working on over the next couple of months:

The following part digs into these questions and provides some insight into how they were discussed during the event.

The Discussion

The Designer's Approach

One crucial point of discussion, partly owing to the fact that we were presenting at a design university, was the role of the designer in this process. This question can actually be interpreted in two different ways: