[Version 2, Mar 11 2019]

The following document outlines the structure of our master's thesis, titled "Approaching Beneficial Artificial Intelligence in the Context of [...]". The outcome is split into three parts, each augmenting the previous one with further context and detail.

The first section will be an essay based on major insights gained during the creation of the thesis. It will contain a compact overview of our personal insights and of possible measures that we consider useful for the creation of beneficial artificial intelligence on the way to one day creating super-intelligent agents. The essay might, for example, contain a set of "beneficial guidelines", or thoughts on how such guidelines could be developed to ensure a beneficial core for the future goal of developing advanced intelligent systems.

The second part, which is the focus of this document, will augment the essay and explain the proposed concepts by offering a more tangible, in-depth exploration. This is to be realized by crafting different scenarios, each exploring important milestones on the way to artificial superintelligence. Inside each scenario, deep-dives into the constructed future demonstrate the previously proposed vision in more detail.

The third part is a complementary collection documenting the project and its realization. It will explain the process and concepts in more detail, showing how the scenarios came to be and the different paths that were considered. Furthermore, the collection will contain an overview of different viewpoints and opinions on the subject. This part of the thesis also serves as a medium for presenting the research conducted and thematically condenses it for the interested reader.

A broad definition of the term "beneficial" will be a crucial task, as every formulated definition seems to exhibit a certain subjective nature. A general definition is an endeavor many great minds have already undertaken, though arguably not yet concluded in a satisfying way. This raises the question of whether a definitive, universal model of value notions is possible at all, as humankind has not been able to establish one so far. On top of that, ethical models that have been designed and are currently widely accepted as "beneficial" will most likely have to be readjusted due to changing environmental, societal and technological factors, among others. That said, this should not serve as an excuse for not trying, but as an explanation for the subjective approach we will take towards this difficult topic.

The initial question we faced was: How can we make the concept of beneficial AI more tangible? From this, a set of subsequent questions emerged: How can we create a functioning environment that enables framing the big picture as well as detailed concepts? How could those specific artifacts address some of the more concrete challenges and help substantiate the high-level thoughts? How well-founded can those artifacts be as we move conceptually further into a speculative future?

Our approach is as follows: Three different scenarios will be crafted, each covering a different range on the path from narrow artificial intelligence to artificial superintelligence. These scenarios will come to life using speculative design methods like future cones, combined with storytelling techniques and more short-term-oriented design methods, such as user journeys, MVPs or stakeholder maps. They will all be set in the same narrative world and play into one another. Combined, they will outline a larger possible future regarding the creation of, challenges posed by, and co-existence with artificial agents.

During the creation of the scenarios, some critical choices about future developments have to be made in order to tell cohesive stories. Taking a certain point of view on fundamental and controversial questions might diminish the value of further exploring the scenarios for those who disagree with the chosen point of view. This issue will require a sensible solution to keep the developed scenarios interesting for a broad base of readers.

At this point it seems important to specify the target audience of the created work. While still very much open to change, "the interested mind" is too unspecific and incomplete an answer. So let's put it this way for now: These scenarios are supposed to be a working tool for us to explore and generate ideas, as well as an interesting and hopefully thought-provoking read for everyone with a prior interest in future AI developments.

Inside each scenario we will work out speculative touchpoints, for example new jobs and the tools these jobs would require, that emerge when different stakeholders (e.g. companies, governmental institutions or completely new societal structures) work towards ensuring beneficial intelligent agents. Those touchpoints are supposed to take the abstract and make it tangible and, in a sense, more "experienceable". They might, for example, be software applications the scenario's protagonist uses in the given context to fulfill a task related to the domain of beneficial AI. Some of these fictional tools will be conceptualized and partially prototyped using current design processes and hardware solutions, so one can try them out and interact with them to gain a better understanding of the underlying scenario and ideas.

While each scenario tells a particular story, all of them will be centered around designing beneficial agents, culminating in the work towards a benevolent takeoff scenario for super-intelligent entities. The following briefly describes possible areas of focus for those storylines.

Scenario 1, Preparation: Set in the not-too-distant future (2-5 years from 2019), in a world where artificial agents have not yet reached a level of general intelligence, but continuous progress has been made and artificial agents are established as highly effective assistants in our professional and personal lives. AGI seems possible in the future, and a lot of work and investment goes into developing these generally intelligent systems.

Scenario 2, Transition: AGI is right around the corner. What will the transition look like, and how will we co-exist with this powerful entity? Now that AGI is almost cracked, how can it be used in a beneficial way, and how can that beneficial use be ensured?

Scenario 3, Aftermath: AGI was a milestone, but having passed it, ASI is on the horizon. Planning a safe passage over this horizon now has to be an essential area of focus for legislative and oversight institutions, scientists and developers alike, since the takeoff towards a singleton scenario might ultimately decide humanity's future. The duration of this transition phase, and with it the time to take precautions, will depend on the speed of the takeoff and might vary from a few days to a much longer timeframe between scenarios 2 and 3. Furthermore, this chapter will be used to explore the role of humans in this new world, as well as possible effects and outcomes of pathway decisions made in the previous scenarios, now coming into effect.
