Taming Silicon Valley: A Fervent Call to Action

In the last decade or so it’s become more apparent than ever that social media companies and other tech giants hold an unhealthy amount of power and influence over our lives. Surely we don’t want the story of AI playing out in similar fashion. A new book offers a helpful roadmap to avoid such a repugnant future. 

                                                                        ______________

Often mentioned in articles about ChatGPT is how popular it became in such a short amount of time. Within a matter of days after OpenAI unveiled its ground-breaking product to the public (late November 2022), roughly a million users had played around with it, eager to see what it could do. Reactions varied, but nearly everyone had an opinion. ChatGPT of course continues to attract enormous attention, and OpenAI remains one of the most talked-about AI companies in the world, perhaps the most.

Unfortunately, despite its name and mission statement, OpenAI is far from transparent. But unless you happen to have been born yesterday, that's not likely to come as a shock.

Nonetheless, the joke is on us if we passively give a company such as OpenAI essentially free rein to do what it wants, just as we collectively did with the likes of Facebook.

We can’t let that happen again—not this time, not with artificial intelligence. The ongoing morass of social media has thoroughly demonstrated the dystopia that results when a handful of infinitely wealthy companies come to hold unparalleled sway over our day-to-day lives. 

As transformative as social media has been, the current and future uses of AI will almost certainly prove to be far more profound.      

Taming Silicon Valley, Gary Marcus. Published by the MIT Press.

Gary Marcus, “out of a sense of urgency and disillusionment,” has recently written Taming Silicon Valley: How We Can Ensure That AI Works for Us. Though it's under two hundred pages, the book is full of cogent ideas about how we can help craft a future in which AI positively affects the lives of many diverse people rather than merely a sliver of Silicon Valley elites.

It's worth saying at the outset that Gary Marcus, a cognitive scientist and professor emeritus at NYU, is hardly a Luddite. Like the vast majority of us, he enjoys technology, appreciates the ways it can enhance our lives, and acknowledges the value and importance of innovation. But (also like most of us) he has serious concerns about how AI might play out: how it could negatively impact the lives of hundreds of millions, if not billions, while only truly benefiting a relative few.

At the heart of Marcus’s book is the following pronouncement: “We don’t have to play along with what’s going on in big tech, and AI.” It’s well past time that we demand far greater transparency than we’ve gotten in the past, and it’s well past time that we demand genuine consequences (not mere slaps on the wrist) when serious violations occur or when laws are blatantly trampled upon.  

Marcus's sharply written book is focused on generative AI, the type of AI that is currently all the rage. (Another recent book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor, offers a trenchant analysis of the ways in which predictive AI can go awry, and the hype behind many predictive AI tools.)

While AI tools like ChatGPT tend to be the subject of many media headlines about AI, there’s a whole lot more at stake than AI-generated student essays. Some of the many issues that Marcus addresses are privacy, cybersecurity, intellectual property, nonconsensual deep fakes, and environmental costs. Given the vast range of areas that generative AI has already affected, and the many more it is very likely to affect, it’s vital that conversations about it are open and diverse.       

Indeed, conversations about AI in general need to be genuinely collective, not unevenly shaped or influenced by a minute percentage of the population—and certainly not by someone such as billionaire venture capitalist Marc Andreessen, whose stance on AI regulations Marcus briefly highlights. 

Marcus quotes from Andreessen’s “The Techno-Optimist Manifesto,” a strangely sophomoric diatribe that can be found on the latter’s website. More often than not, it either belittles or castigates anyone who doesn’t happen to share his particular views: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Andreessen seems to equate the intensity of his convictions with intellectual and moral clarity. If you don’t happen to agree with his stance, you’re apparently complicit in murder.

As off-putting as some of Andreessen's views are, they serve as a compelling example of why all of us should make an effort to voice our opinions rather than passively allow critical discussions about the future—humanity’s future—to be shaped by the deep-pocketed perspectives of a select few.

This is particularly true when it comes to privacy concerns or sectors that can involve matters of actual life and death (like self-driving cars). But it's also important to involve oneself in discussions about how generative AI could potentially impoverish areas like education and the arts: two things that are essential to a vibrant and meaningful world.

Marcus writes with particular zest about the ways in which artists are getting ripped off by tech companies. “The default view in Silicon Valley,” he says, “is that anything that’s out there is there for the taking.” It’s a contemptible mindset, one that merits particular scorn when it comes to companies using scores of artists’ work to train their AI models. With very few exceptions, artists are not compensated. What’s perhaps worse, artists have essentially no control over whether their work is used in the first place. 

If you're an artist, that might well be enough to make your blood boil; if you're not an artist, there are still many reasons to take issue with certain practices emanating from many tech companies. Indeed, “Almost no matter what you do,” Marcus writes, “the AI companies probably want to train on whatever it is you do, with the ultimate aspiration of replacing you.” Not only that, we've seen for many years now that tech companies have few qualms about achieving their ends by means that are highly questionable, both legally and morally. On top of that, a pernicious mindset in Silicon Valley has continued to grow and grow: We are entitled to do virtually whatever we want; we know what's best.

Indeed, too many tech companies have become accustomed to treating the world as if it were their personal playground. Marcus brings readers' attention to Sidewalk Labs, a company dealing in urban design. Around 2017, a plan was put in place to reshape an area of Toronto in order to explore smart city initiatives.

That's all well and good, perhaps, but what about the fine print, as it were? What kinds of data would be collected, and how would they be used? To what extent was Sidewalk Labs (owned by Alphabet, the parent company of Google) committed to respecting people's privacy? Sidewalk Labs had no ready responses when faced with such questions, which is an eye-opening state of affairs. It's as if Sidewalk Labs, funded by one of the world's most powerful companies, felt it was simply entitled to do whatever it wanted, and that everyone would just go along with it.

Fortunately, the project eventually fizzled out. But that was only after, and because of, significant pushback from a vocal body of citizens led by proactive leaders. As Marcus points out, it’s a powerful example of how it is in fact possible—even when up against an extremely powerful and wealthy company—to successfully fight back.        

There's no denying that the world's largest tech companies have done much good. They've conceived, designed, and brought to market scores of products and services that can be very easy to take for granted. But that doesn't mean tech companies should be given carte blanche to do whatever they want, whenever they want, and with virtually no repercussions.

Taming Silicon Valley offers readers an articulate, passionate, and timely message about why and how we need to take action when it comes to AI. Like any transformative general-purpose technology (but perhaps even more so), AI will influence not only our own lives but the lives of many generations to come.