There is no shortage of predictions about the challenges we'll face in the next decade and beyond, and some visionaries say we may be on the brink of the most disruptive decade in human history. The world could see its first trillionaire within 25 years, yet one in nine people go to bed hungry every night, and one in 10 of us still earns less than $2 a day, according to researchers at Oxfam. In the world of business, there are numerous examples of companies and industries that either failed to adapt to, or chose to ignore, the implications of disruptive technology, consumer behavior, and macro trends affecting their environment.
Today, we are at a similar crossroads, says innovation adviser Greg Satell. The choices we make over the next decade, he argues, will have serious repercussions for our future. Will we choose to serve our technologies, or will we put guardrails in place to ensure they serve us? Will we bridge the gap in economic and income inequality or maintain the status quo? Satell advises us to rethink our priorities. (The following article is republished with permission and is copyright of @Digitaltonto):
Take a moment to think about what the world looked like exactly a century ago. By 1920, the disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. For most people, life largely went on as it always had, with little to indicate that much was amiss.
Over the next decade, however, that would change. As ecosystems formed around the new technologies, productivity soared and living standards dramatically improved. However, the news wasn't all good. While technology did much to improve people's lives, it also facilitated war and genocide on an unprecedented scale.
Today, we are likely at a similar point. Nascent technologies have the potential to create a new era of productivity, but also horrific destruction. Too often, we forget that technology should serve humans and not the other way around. Make no mistake. This is not a problem we can innovate our way out of. Technology will not save us. We need to make better choices.
Over the past several decades, innovation has become almost synonymous with digital technology. As we learned to cram more and more transistors onto a silicon wafer, value shifted to things like design and user experience. The speed of business increased and agility became a primary competitive attribute. Strategy and planning gave way to experimentation and iteration.
The success of venture-backed entrepreneurs led to arrogance and eventually the myth that Silicon Valley had somehow hit on a model that could be applied to any problem in any industry or context. With valuations of tech companies exploding, a new sense of technological libertarianism began to emerge, in which many came to value algorithms over human judgment.
Yet today, that narrative is beginning to unravel, for two reasons. First, our ability to cram more transistors onto a silicon wafer, commonly known as Moore's Law, is ending. Second, we're beginning to realize that technology has a dark side. For example, artificial intelligence is vulnerable to bias, and social media can have negative psychological effects.
At the same time, we're beginning to enter a new era of innovation, which will be powered by new computing architectures, such as quantum and neuromorphic computing, as well as by revolutions in synthetic biology, materials science, and machine learning. These will require a much more collaborative, multidisciplinary approach. No one will be able to go it alone.
On July 16th, 1945, the world's first nuclear explosion shook the plains of New Mexico. J. Robert Oppenheimer, who led the scientific team that developed the atomic bomb, chose the occasion to quote from the Bhagavad Gita. "Now I am become Death, the destroyer of worlds," he said. It was clear that we had crossed a moral Rubicon.
Many of the scientists of Oppenheimer's day became activists, preparing a manifesto that highlighted the dangers of nuclear weapons, which helped lead to the Partial Test Ban Treaty. The digital era, on the other hand, has seen little of the same reverence for the power and dangers of technology. In fact, for the most part, Silicon Valley's engineering culture has eschewed moral judgments about its inventions.
Today, however, as our technology becomes almost unimaginably powerful, we increasingly need to confront significant ethical dilemmas. For example, artificial intelligence raises a number of questions, ranging from dilemmas about who is accountable for the decisions a machine makes to how we should decide what and how a machine learns.
Or consider CRISPR, the gene-editing technology that is revolutionizing the life sciences and has the potential to cure terrible diseases such as cancer and multiple sclerosis. We have already seen the problems hackers can create with computer viruses; how would we deal with hackers creating new biological viruses?
There have been some encouraging developments. Most major tech companies have joined with the ACLU, UNICEF, and other stakeholders to form the Partnership on AI, a forum that can develop sensible standards for artificial intelligence. Salesforce has hired a Chief Ethical and Humane Use Officer. CRISPR pioneer Jennifer Doudna has begun a similar process at the Innovative Genomics Institute. But these are little more than first steps.
It seems fitting that the fall of the Berlin Wall happened during the same year, 1989, that Tim Berners-Lee proposed the World Wide Web. What followed was a time of great optimism in which both information and people enjoyed unprecedented freedom. The twin powers of technology and globalization seemed unstoppable.
Across the world, free-market technocrats pushed a brand of market fundamentalism known as the Washington Consensus. To receive loans, developing nations were made to accept harsh economic measures that would never have been accepted in western industrialized nations. Within developed countries, the interests of labor lost ground to those of corporations.
These policies led to genuine achievements. Hundreds of millions were lifted out of poverty. Free trade and free travel increased. Technology enabled even a relatively poor kid in a poor country, armed with an Internet connection, to be able to access the same information as a wealthy scion studying at an Ivy League university.
However, in many ways, technology and globalization have failed us. Income inequality is at its highest level in over 50 years. Across most industries, power is increasingly concentrated in just a handful of firms. In America, social mobility and life expectancy in the white working class are declining, while anxiety and depression are rising to epidemic levels. Clearly, too many people have been left behind.
Perhaps not surprisingly, we've seen a global rise in populist authoritarian movements that have shifted governance dramatically against the type of open policies that fueled globalization and technological advancement in the first place. The pendulum has swung too far. We need to refocus our energy from technology and markets back to the humans they are supposed to serve.
While the problems we have today can seem unprecedented and overwhelming, we've been here before. After World War II, the world teetered between liberal democracy and authoritarianism. New technologies, such as nuclear power, antibiotics, and computers, represented unprecedented possibilities and challenges.
Yet in the wake of destruction, an entirely new international system was created. The United Nations provided a forum to resolve problems peacefully. Bretton Woods stabilized the global financial system. The creation of the welfare state helped mitigate the harsher effects of the market economy, and stronger protections for labor helped build a vibrant middle class. Arms agreements reduced the risk of Armageddon.
Today, we are at a similar crossroads. We are present at the creation of a new technological era in the midst of a pivotal political moment. The choices we make over the next decade will have repercussions that will reverberate throughout the new century. Will we serve our technologies or will they serve us? Will we create a new global middle class or pledge fealty to a global elite?
One thing is clear: These choices are ours to make. Technology will not save us. Markets will not save us. We can, as we did in the 1920s and 30s, choose to ignore the challenges before us or, as we did in the 1940s and 50s, choose to build institutions that can help us overcome them and build a new era of peace and prosperity. The ball is in our court.