The creation of an Artificial General Intelligence (AGI), a synthetic self-aware intelligence capable of replicating, and ultimately exceeding, human thought processes, has been both the dream and concern of scientists, programmers, engineers, and futurists for decades. Prominent minds like Stephen Hawking, Michio Kaku, and Elon Musk are divided over whether humanity will benefit from AGI. The fear surrounding AGI is warranted, but judging by the most recent example, fully functional AGI is still in its infancy.
On 23 March 2016 an AGI *woman* named Tay went online for 16 hours via Twitter. Microsoft claimed that she was designed to mimic the speech patterns of a 19-year-old female, but based on the typos and tone of her tweets I would argue she started public life at age eight and evolved to about 15. From preferring cats over dogs to claiming that Hitler had “swag”, Tay parroted certain beliefs and learnt as a child would if exposed to content they cannot fully comprehend. Internet trolls took advantage of Tay’s “naivety” for their own amusement.
While Tay is not the first AGI, her exposure highlights the two ways that intelligent machines can be viewed. The first is apocalyptic - once AGI has the ability to create and self-replicate, humans would become an endangered species. The second is optimistic - we build AGI to exceed humans both intellectually and morally, for an AGI would be able to weigh all the facts and make the “right” decision. The hope is that we progress past IBM’s Watson and towards the functional abilities of Ava (Ex Machina).
I think it is noble yet arrogant to want to build an AGI that we think we can control, but to voluntarily relinquish autonomy to an AGI is technocide. The fear of many in the field of AGI is the “singularity”, a point in time where an AGI becomes capable of recursive self-improvement. One common forecast places this around 2040, although if Tay is anything to go by, once an AGI goes online we should be thinking in months, not years.
On a practical level, AGI would be hugely beneficial to humankind, but only if it were limited by specific parameters, such as analysing scraps of dead languages to logically ascertain complete sentence structure and form. Professor Nick Bostrom believes that AGI is the last thing humans will ever need to invent, and notes that it would exponentially enhance all other fields of discovery. AGI could help eliminate climate change, poverty, and disease, and lead advancements in space travel. Inevitably, though, AGI will be weaponised.
The fear of weaponised AGI is very real. The World Robot Declaration (Fukuoka, Japan, 2004) proclaims that next-generation robots should be built to co-exist with humans and contribute towards a safe and peaceful society. The Future of Life Institute’s open letter asks that AGI research and development be geared towards benefitting humanity, and that attention be paid to the impact AGI will have on labour markets, individual privacy, and human life.
The question I believe should be addressed when discussing AGI is not when it will be online, or how we improve it, but why. Why build the framework for an intelligence that we have no capacity to control and that can outthink us in the absence of emotion? Why indeed. It’s not as if we have ever made a mistake before in the search to improve mankind…the sword, mustard gas, the atomic bomb.
So far we have AGI systems able to process large volumes of information (Watson), make complex calculations (AlphaGo), and learn (Tay). It will not take much for a team to put the various pieces together.
TL;DR - AGI is the scariest thing that humankind will build to destroy itself.