"The potential for significant risk accompanies the development of artificial superintelligence."
In Alex Garland's 2015 film Ex Machina, two brilliant programmers - one a fairly straight-laced fellow played by Domhnall Gleeson, the other a controversial hard-drinking iconoclast memorably portrayed by Oscar Isaac - ponder the deeper implications of the word "consciousness" and what it takes for that human quality to develop organically over time. To say their perspectives aren't exactly in sync is... quite an understatement. More importantly, the AI project they're working on, and its shocking culmination - which, if explained further, would give away the secrets of a speculative fiction movie well worth watching - represent a fictional but not implausible example of the concerns surrounding a highly advanced form of AI known as artificial superintelligence.
There are undeniable advantages to be found in the creation of superintelligent AI - as well as equally significant risks. Even though such supremely advanced AI platforms or networks aren't likely to arrive for a decade or more, it's prudent to begin thinking now about how oversight of these high-echelon digital machines could be managed without excessively infringing on their most beneficial functions. Tools like high-speed, low-code application development platforms may be among the solutions that could be brought to bear for this purpose, albeit likely as one piece in a larger puzzle.
It's impossible to develop contingency plans for a threat without first laboring to understand its genesis, nature and scope. (Even "threat" isn't quite the right word for what superintelligence's adverse possibilities represent, but along with "risk," it's the best the English language currently offers.) In a nutshell, artificial superintelligence refers to simulated neural networks capable of cognition at a level equivalent to, or possibly even greater than, the abilities possessed by human beings, as explained by Techopedia. The terms "artificial general intelligence" and "human-level machine intelligence" are also sometimes used for this concept, although usually not when the idea is framed negatively.
People at the forefront of science and engineering have speculated about the possibilities of AI (in one form or another, usually as a sort of robotics) for centuries, dating back to automatons like the "Digesting Duck" designed and built by Jacques de Vaucanson in 1739. (Why "digesting"? According to a journal piece written at Swarthmore College, the automaton acted out commonplace duck behaviors, including opening and closing its beak to consume grain and pass it along to the internal organs, so use your imagination from there - or perhaps don't!) But the prospect of artificial superintelligence begins in 1936, with the polymath genius Alan Turing and his foundational work on Turing machines. Turing followed this in 1950 with the Turing Test, a method for assessing the cognition of a computer: a machine passes if a human interrogator, exchanging written questions and answers with both the machine and a person, cannot reliably tell which respondent is which.
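The structure of the test is simple enough to sketch in a few lines of code. What follows is a loose, hypothetical illustration in Python, not Turing's formal specification - the ScriptedRespondent class, the canned answers and the pass criterion are all invented for the example:

```python
import random

class ScriptedRespondent:
    # A stand-in for either party: it just answers from a canned script.
    def __init__(self, answers):
        self.answers = answers

    def reply(self, question):
        return random.choice(self.answers)

def imitation_game(ask, identify_machine, machine, human, rounds=200):
    correct = 0
    for _ in range(rounds):
        respondents = [machine, human]
        random.shuffle(respondents)            # hide who is behind each label
        labeled = dict(zip("AB", respondents))
        question = ask()
        transcript = {label: r.reply(question) for label, r in labeled.items()}
        guess = identify_machine(question, transcript)  # returns "A" or "B"
        actual = next(l for l, r in labeled.items() if r is machine)
        correct += guess == actual
    return correct / rounds  # near 0.5 means the interrogator is only guessing

machine = ScriptedRespondent(["I enjoy crosswords.", "Forty-two, probably."])
human = ScriptedRespondent(["I enjoy crosswords.", "No idea, honestly."])
rate = imitation_game(lambda: "What do you do for fun?",
                      lambda q, t: random.choice("AB"),  # a guessing interrogator
                      machine, human)
print(f"Interrogator identified the machine {rate:.0%} of the time")
```

A system that merely parrots canned lines won't fool anyone for long, of course; the interesting question is what it takes to hold up under open-ended interrogation.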
Harvard Science Review pointed out that Turing's foundational principles remain relevant to discussions of modern advanced AI systems like IBM's Watson, which famously beat two human champions at Jeopardy! in 2011. Watson, as well as voice-activated digital assistants like Amazon Alexa, Google Assistant and Apple's Siri, is nowhere near the level of cognitive ability theoretically associated with artificial superintelligence, but these systems certainly represent foundational steps in that direction. True AGI, HLMI or superintelligence would be achieved when AI-powered machines or networks can think in a complex manner, make decisions, operate without human intervention and perhaps even develop the capacity for emotion and introspection.
"Without oversight and ethical programming, superintelligent AI could theoretically harm living things, including humans."
Tech firms such as DeepMind and Sentient Technologies, as well as countless academic teams at prestigious technical universities around the world, have been working diligently for years on the creation of an AI with a neural network capable of all the mental functions humans exercise every day (and often take for granted). While it's still possible that a temporary AI winter - a period in which the pace of AI development slows considerably - could occur due to the limitations of certain components or technologies, Gizmodo noted that this is fairly unlikely given the advancements that have already occurred in the field. Root concepts of superintelligent AI exist in the evolutionary computation projects of Sentient Technologies and competing firms working on autonomous decision-making capabilities for AI, according to PC Magazine.
So what is the danger? Futurist Anders Sandberg has described one such possibility in a famous hypothetical known as "the paperclipper" or "the paperclip scenario": a superintelligent, self-improving AI given the innocuous goal of manufacturing paperclips proceeds to convert every resource within reach - eventually including humanity itself - into paperclips or paperclip-making machinery, simply because nothing in its objective told it to value anything else. The toy sketch below captures the logic.
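To make that logic concrete, here is a deliberately simplistic toy in Python. Every name and number in it - the resources, the conversion rate, the protected set - is invented for illustration; the only point is that an optimizer preserves nothing its objective doesn't explicitly value:

```python
# A toy world: stocks of resources, some of which we implicitly care about.
world = {"steel": 40, "factories": 3, "farmland": 25, "habitat": 30}

CLIPS_PER_UNIT = 10  # invented conversion rate: anything can become clips

def naive_maximizer(world):
    # The objective is "maximize paperclips," full stop. Nothing in it says
    # farmland or habitat matter, so they're consumed like everything else.
    clips = 0
    for resource in list(world):
        clips += world[resource] * CLIPS_PER_UNIT
        world[resource] = 0
    return clips

def constrained_maximizer(world, protected=("farmland", "habitat")):
    # The same objective, plus side constraints a human had to state explicitly.
    clips = 0
    for resource in list(world):
        if resource in protected:
            continue
        clips += world[resource] * CLIPS_PER_UNIT
        world[resource] = 0
    return clips

print(naive_maximizer(dict(world)))        # 980 clips, nothing left standing
print(constrained_maximizer(dict(world)))  # 430 clips, the world survives
```

The naive version isn't malicious; it's obedient. That is precisely the worry.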
This is, of course, an extreme, arguably improbable and above all rather ghoulish scenario, but one that follows logically from the kind of single-minded optimization a superintelligent AI could perform. Even more likely, according to Gizmodo, is the emergence of AIs that lack ethical boundaries or operate on biased assumptions - the latter originating in human programming errors but becoming entrenched and magnified by the AI's own neural network. Additionally, the institutional culture of Silicon Valley, characterized by fierce competition that in turn drives overly fast development of new technologies and products, is not necessarily the healthiest environment for creating superintelligence with ethical standards built in.
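That entrench-and-magnify dynamic, at least, is easy to demonstrate with a toy feedback loop. The following Python sketch is a stylized simulation under invented assumptions, not a model of any real system: a "model" learns only the base rate of its training labels, its predictions overshoot whichever class that rate favors, and each generation is retrained on the previous generation's outputs, so a four-point initial skew compounds toward near-certainty:

```python
import random

random.seed(1)

def train_and_predict(labels, n_next=5000):
    # "Train": the model learns only the base rate of class 1 in its data.
    p = sum(labels) / len(labels)
    # "Predict": noisy evidence is pulled toward the learned base rate, so the
    # share of class-1 predictions overshoots p whenever p is above 0.5.
    return [1 if random.gauss(p, 0.15) > 0.5 else 0 for _ in range(n_next)]

labels = [1] * 540 + [0] * 460  # 54% class 1: a small initial skew
for gen in range(1, 6):
    labels = train_and_predict(labels)  # retrain on the model's own outputs
    print(f"generation {gen}: class-1 rate = {sum(labels) / len(labels):.1%}")
```

No single generation looks alarming on its own; it's the loop that does the damage, which is part of why this kind of bias is so easy to miss until it's entrenched.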
The issue of ethics in AI, superintelligent or otherwise, is being addressed to a certain extent, as evidenced by the creation of ethical advisory boards and executive positions to manage the matter directly. DeepMind has such a department in place, and international oversight organizations such as the IEEE have created specific standards intended to manage the coexistence of highly advanced AI systems and the human beings who program them. But as AI draws closer to the point where superintelligence is feasible and ever more organizations adopt existing AI platforms, it's critical for ethics to be top of mind for all major stakeholders in companies hoping to get the most out of the technology.
Leveraging the Appian enterprise-scale low-code platform to create business process management tools can be a viable way to maintain firm oversight of your company's AI utilities. The platform's drag-and-drop interface makes development quick and intuitive, even for apps addressing needs as complex as this. Contact us today to learn more or set up a free trial.
Appian is a software company that orchestrates business processes. The Appian Platform empowers leaders to design, automate, and optimize important processes from start to finish. With our industry-leading platform and commitment to customer success, Appian is trusted by top organizations to drive transformational process change.