Neil Postman, the distinguished cultural critic who died in 2003, wrote prolifically on digital technology and its impact on culture and education. To put it mildly, he was a skeptic of technology and what it could help us achieve. If you read some of his books or listen to some of his lectures, the benefit of hindsight reveals some of the ways in which he was right, and some of the ways in which he was wildly wrong.
Predicting the future is a tough business. But one thing he totally got right—and which both individuals and society should spend orders of magnitude more time thinking about—was his insistence on two questions: “What is the problem to which the technology claims to be the solution?” and “What new problems will be created from solving an old one?”
In The Big Nine: How Tech Titans and Their Thinking Machines Could Warp Humanity, Amy Webb explores how a handful of the world’s biggest tech companies could profoundly shape our future—far more than they already have. As we move into an age of artificial intelligence, the influence of Google, Microsoft, Amazon, Facebook, IBM, and Apple, along with Chinese giants Baidu, Alibaba, and Tencent, will permeate nearly every aspect of our lives. This, she rightly claims, should put us on alert.
Despite the book’s title, Webb is no modern Neil Postman; this book is not in any way a diatribe against technology. Nor does Webb think that giant tech companies are a force of evil.
However, because of their power and influence over our lives, she’s deeply concerned that the milieus in which our cherished tech products are conceived, designed, and developed are bereft of diversity. It’s imperative that this change.
After providing an abridged history of AI, Webb dives right into the matter of diversity and explores the “insular world of AI tribes.” Not only are they male-dominated, but the people working in these areas tend to come from a handful of elite institutions with similar cultures. She points out that there are only half a dozen or so “AI hubs,” among them Carnegie Mellon, UC Berkeley, and MIT. “Those professors,” writes Webb, along with “their labs, and the leadership within AI’s academic units are again overwhelmingly male and lacking in diversity.”
These and related topics are matters of growing public discussion, ones that are taking on an increasing urgency to resolve (or at least improve).
Webb goes on to write perceptively on other matters, like the difference in how China and the United States are approaching AI. China has a “government-centralized model”—one that is incredibly ambitious, detailed, and designed for the long game. (Xi Jinping has proved a very capable leader in bringing these aims to fruition.)
The country’s commitment to AI is evident in its education system. Webb tells us that there’s an “official, government-ordered textbook detailing the history and fundamentals of AI,” and that by 2018, “40 high schools had piloted a compulsory AI course.” Even elementary students engage in activities that are designed to help them become AI savvy. These are indeed signs that China is fostering a laser-focused mindset on becoming the world’s AI superpower.
The United States, while it certainly isn’t lacking in impressive and innovative AI, does suffer from the absence of a national plan for the future. Part of this is owing to the very different systems of governance in the U.S. and China. But if the U.S. is to compete with China in the long game, Silicon Valley, Washington, and other sectors of society will need to start having many discussions about where we want our AI-imbued futures to go. As of right now, the United States has a “lack of AI preparedness within our businesses, schools, and government.”
In the second part of the book, “Our Futures,” Webb lays out three scenarios: “optimistic,” “pragmatic,” and “catastrophic.” Which scenario transpires will depend, in part, on how effectively we deal with the issues mentioned above, along with many others. What happens if we don’t add more diversity, and the AI software used in courts creates greater disparities in the criminal justice system? What happens if issues of privacy, transparency, and control are not sufficiently debated and addressed?
From a geopolitical perspective, what happens if the U.S. falls short of its potential and China becomes the world’s undisputed AI superpower? How might that affect the allies China gains, or those the U.S. loses? These and many other questions are played out.
Though making predictions is Webb’s bread and butter—she’s a professor of strategic foresight at the NYU Stern School of Business—it’s always hard to know what to make of such predictions (from Webb or anyone else). What do they really tell us?
Consider, for instance, a preview of how we might interact with our beloved four-legged “pets”: “With advanced cameras in their eye sockets, haptic fur, and the ability to recognize subtle changes in our voices, robotic pets are significantly more empathetic than our organic ones, even if they are less warm and fuzzy.”
And here’s a preview of how children might learn: “In classrooms and homes, IBM has brought Socrates back to life as an AI agent, which engages us in argumentative dialogue and rigorous question-and-answer sessions to help stimulate critical thinking.”
There’s nothing wild about such predictions (unfortunately). But to what degree do they actually help us? To understand just how limited they can be, imagine how someone might have predicted (and described) a platform akin to Twitter:
“There will be a new way of connecting and communicating with one another. People will be able to interact with people from all over the world. Sure, some people will say mean or false things, but there will be systems in place to prevent that. Because people of all backgrounds and of all viewpoints will share a single platform, our empathy toward each other will dramatically increase. Indeed, we will become smarter, more socially intelligent, and better informed about what’s going on in the world.”
The point is that prognosticating about the future, especially technology, is not only very difficult, but much of it tends to be limited in scope. It’s one thing to make predictions about future technologies; it’s an entirely different thing to make predictions about how said technology will affect us psychologically, cognitively, and culturally (let alone what those influences will amount to over multiple generations).
Clearly, social media has opened up a can of worms. We’re all aware of its unsavory aspects. But what we tend to think far less about, and at our own peril, is that we don’t yet even know the full story. Social media is a mere two decades old; we have relatively little idea what its longer-term effects will be.
Indeed, we’re faced with a rather well-deserved 21st-century dose of irony. In order to make AI fair, effective, and beneficial to us all, we need to engage in deep and sustained thought; analyze the past (not just the present and future); and listen to the ideas of others. These, of course, are precisely the mental capacities that social media has drastically diminished.
We can have discussions about how to make AI work for everyone; we definitely should do that. But without an overall change in public discourse (and probably our educational system as well), the discussion will likely only mirror the messy and unproductive ones we’re already having about myriad other issues.
As we gradually move closer to playing fetch with our robotic border collies and packing our bags for Mars, we should be asking ourselves the questions that Socrates (the non-robotic one) started asking 2,500 years ago.