The first thoughts about this thesis emerged at the end of 2018, when the initial idea was to extend the work of our previous bachelor's thesis, "flows – rethinking the development of user interfaces". That thesis was concerned with the development of graphical user interfaces – a topic we wanted to build on during our master's. So the first thing we did was to consider all the media one could design for, now and in the future. An interesting point that came up here was that the topic of artificial intelligence held a special standing from the beginning, as it could easily be applied to every other area. An initial question therefore was: "are we talking about designing for AI (in which case AI is the medium), or are we designing with AI (in which case AI is a tool we use for designing for a different medium)?" It quickly became apparent that the field of designing for AI holds far more challenging aspects, which can, in principle, also be applied to topics of creating with AI. We therefore chose to concern ourselves with the "development for AI".
Predictions about the future of artificial intelligence (AI) are as old as the field itself. Such predictions often estimate when an artificial entity will reach some sort of "human-level" intelligence, often referred to as "Artificial General Intelligence". But this "human-level" is rather difficult to define, as there are two vastly different ways to interpret the idea: First, one could interpret it as the ability of an artificial entity to accomplish every goal a human is capable of accomplishing, just as well as any human could (including the best individual human). A second definition goes beyond this idea of accomplishing goals, stating that a "human-level" of intelligence necessarily includes aspects of self-awareness.
The first definition takes a rather scientific, measurable approach. It is based on the idea that performance on a task can be measured, for example the performance of playing a game such as chess. An artificial entity can play a game of chess with a certain performance, and this performance can be higher or lower than that of the best human chess player. Of course, not every task is as easily quantifiable as playing a game, but if every task were, it would be possible to say we have reached "human-level" intelligence once there is no known task that a human can perform better than some artificial entity (this does not necessarily mean that one single artificial entity performs every task at this "human-level"; it could just as well be an individual entity for every task). Since I believe it is quite unrealistic to quantify every task, perhaps a more intuitive approach to evaluation would also be possible, as in "it is obvious that this artificial entity is far better than any human, even though we have no quantitative proof of it". I imagine this being somewhat like watching a professional actor, musician or athlete (assuming you are not one yourself). Suppose you know nothing about this person (so you have no quantities such as awards or medals); you simply see her doing her thing, and you will probably know, even though you cannot compare her to other professionals, that her performance is far beyond your own.
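To make this first definition a bit more precise, it can be sketched as a small formalization. The notation here is my own shorthand, not taken from the literature: let \(T\) be the set of all tasks, \(A\) the set of artificial entities, \(H\) the set of humans, and \(\mathrm{perf}(x, t)\) the measured performance of entity \(x\) on task \(t\). Under the (strong) assumption that such a performance measure exists for every task, the definition reads:

\[
\text{human-level} \iff \forall t \in T \;\; \exists a \in A : \; \mathrm{perf}(a, t) \ge \max_{h \in H} \mathrm{perf}(h, t)
\]

The existential quantifier sits inside the universal one on purpose: a different artificial entity may be responsible for each task, exactly as described above.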
The second definition assumes there are factors such as self-awareness and consciousness that are necessary to reach a "human-level" of intelligence; this means that at least some goals require self-awareness and consciousness to be reached. Artificial entities would therefore need to be able to "feel" themselves in order to accomplish virtually any goal a human can accomplish. This assumption makes the task of creating artificial general intelligence vastly more complicated, as it suggests we need to create something we do not fully understand and may perhaps never fully understand (at least in a scientific sense), since it is still highly debatable whether consciousness can be scientifically proven. One can argue, though, that it is not necessary to understand something in order to replicate it, but with something as debatable and complex as consciousness, I am not sure whether this is actually feasible. It is often suggested that brain emulation (artificially replicating the neuron structure of the human brain) could lead to an artificial entity capable of consciousness and self-awareness, but, quite apart from there being little progress in this area, I question the value of this approach. As Dr. Alexander D. Wissner-Gross points out, replicating occurrences in nature has in the past proven not to be a very feasible way to achieve the observed phenomenon[3]. For example, in order to create the ability to fly, it does not make much sense to study how a bird flies, but rather to study the phenomenology of flying itself. Nature can certainly serve as inspiration in this approach, but simple replication is not always feasible, as the way we develop technology differs vastly from the evolutionary approach of natural selection, not least because the circumstances are very different.
Though there is a lot of disagreement on how AI will develop in the future[5], when it will reach certain milestones, and how these milestones should be defined in the first place, it is obvious that the impact artificial, somewhat intelligent entities have on life is growing by the minute. The milestones we construct are therefore not really the essential point when talking about the beneficial use of intelligent entities. While these milestones might matter in future takeoff scenarios, in which we gradually or suddenly lose control over the decisions artificial entities make, a lot of things can go wrong well before that. It is therefore essential not to lose oneself in discussions about the definitions of general intelligence, as they fail to address the actual challenges we will be facing. It will be of no use to us to one day be able to clearly define general intelligence if we have done nothing to ensure its beneficial use.
Long before people seriously consider humans to no longer be the most intelligent entities on the planet, AI will have impacted us in many areas, as it is already starting to do right now. And even today, challenges arise continuously with software that is more capable than previous generations.
When talking about the capability of an artificial system, it is commonly compared to that of a human, as this is a measure we are all familiar with, but this comparison does not always make a lot of sense. We do not fully understand the human brain; in fact, neuroscience is quite far from such a full understanding[1]. Artificial neural networks may therefore be inspired by nature, but the way they function is actually quite different, and so are their capabilities: simply different. For instance, an artificial entity used for object detection in images can be fooled fairly easily by placing objects out of their usual context[2]. We as humans can still easily identify these objects, because we simply detect them differently than the artificial entity does. Does this make the artificial entity worse at visual object detection? Perhaps not, as context might be something we can exploit as a beneficial factor. In the case of skin cancer detection, for example, artificial entities excel even compared to the humans specifically trained for this task, doctors[4]. So in this context the artificial entity exceeds the performance of a human, though this is not generally applicable to all tasks. But does that matter when you think about how this artificial entity will affect the way skin cancer is diagnosed? I do not believe it does; there are a lot of challenges we will have to face even with this very narrowly intelligent entity.
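To illustrate how such a human-versus-machine comparison is typically quantified, consider the following minimal sketch. It is not the method of the cited study[4], and all numbers in it are purely illustrative, invented for this example; the point is merely that a model producing continuous scores can be summarized by an ROC curve, while a human rater's binary call corresponds to a single sensitivity/specificity point.

```python
# Sketch of a classifier-versus-human comparison via ROC analysis.
# All data below are hypothetical, NOT taken from the cited study [4].
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth for ten skin lesions (1 = malignant).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

# Hypothetical malignancy probabilities predicted by a model.
model_scores = [0.91, 0.20, 0.75, 0.88, 0.35, 0.10, 0.66, 0.42, 0.15, 0.80]

# A human rater usually gives a binary call (biopsy / no biopsy),
# which yields a single point in ROC space rather than a curve.
human_calls = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]

def sensitivity_specificity(y_true, y_pred):
    """Fraction of malignant cases caught, and of benign cases spared."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

print("model AUC:", roc_auc_score(y_true, model_scores))
print("human sensitivity/specificity:", sensitivity_specificity(y_true, human_calls))
```

The asymmetry in this sketch is the interesting part: "exceeding human performance" here means the model's ROC curve passes above the human's single operating point, which says nothing about how the model behaves on inputs outside the distribution it was trained on.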