For as long as I can remember, there has been a general fear surrounding the future of AI.
It's hard to pinpoint an exact media source or article, but the feeling seems omnipresent.
It's a sentiment that's been reflected in our films. Whether it's Will Smith escaping robots, Tom Cruise fleeing from his own minority report, or whatever the hell happened in Ex Machina. Why are humans, and Hollywood, constantly running away, or at least trying to run away and failing miserably, from AI?
Maybe because the common depiction of AI has been some form of sentient robot, pursuing the honest citizen with a Chucky-esque determination and vengefulness. Wait... was Chucky AI too?
I feel that this image of AI, Chucky included, has been quite problematic.
Perhaps the fears that underlie it are reasonable: namely, that we as humans are transgressing informational boundaries that have the potential to override the very nature of humanity and confuse our own perceptions of consciousness. Still, I believe these overly personified depictions are a problem.
Take machine learning, for example. Consultants don't go to work with the intention of building all-knowing humanoids. Instead, they create powerful models that can contextualise, understand and eventually learn from mass streams of data. The fear that these capabilities could someday contribute to a frightening AI future may well be valid, but it doesn't feel or look that way to us.
Surely, by projecting a wrongful, or at least exaggerated, image onto these capabilities, we make legitimate concerns lose the relevancy that is essential to adequately informing policy and societal perspective?
For me, the debate isn’t just how it will affect us, but the effects it will cause us to have on ourselves.
I also feel that AI could be better understood if there were a stronger understanding of what it is today, not just what it could be in the future.
They say that beauty is in the eye of the beholder. The same also seems to go for consciousness.
Historically, some facets of consciousness have been deemed more important than others in the makeup of our societal norms.
Animals are a good example of this. We all clap when a pig learns new tricks; we even ascribe it humanistic tendencies that legitimize its consciousness. And yet the mass production of pork is somehow justified as a by-product of an innate human superiority in sentience.
Consciousness, from this perspective, has no empirical definition; it is an intangible, and rather hypocritical, stream of thought that constantly shifts to justify and explain the world we have created for ourselves.
How can we fear consciousness in other things, if we don't understand it in ourselves?
None of this is to say that current fears surrounding the future of AI should be dismissed, or that the worst-case scenarios will not happen, because they very well could.
It's simply to say that we may have to begin informing the way people perceive these technologies, so that we can at least put an accurate face to the fears, and accurate emotions to that face.
Like this, we can at least address and shape existing technologies, instead of constantly running away from an ominous future.