Three Tests for the Societal Integration of AI
- dvollaro
- Jun 26
- 5 min read
Updated: Jul 19
The age of AI has arrived with salvific predictions that artificial intelligence will solve humanity’s biggest problems: making power plants more efficient, optimizing medical diagnostics and supply chains, and creating new jobs. Running just beneath this techno-utopian optimism is an undercurrent of concern and dread over the possible negative outcomes of AI adoption, touching on everything from "brain rot" to a Skynet-style AI apocalypse. Some of the direst warnings come from the very people developing these systems.
It is impossible to accurately predict the negative downstream consequences of big technological breakthroughs. Sometimes the reckoning comes quickly. Facebook rolled out the “like” button in 2009, but by 2018, after mounting evidence of the company’s role in fostering political polarization, spreading disinformation, undermining privacy, and sparking a mental health crisis, Mark Zuckerberg was testifying before Congress, admitting, “We didn’t take a broad enough view of our responsibility.” Sometimes it takes longer. Henry Ford rolled out the Model T in 1908, and sixty years later the world was just beginning to contend with the urban sprawl, tens of thousands of traffic deaths per year, oil dependency, and threat of climate change that came with mass-produced automobiles.
Major disruptions are certain to come from AI. Knowing this, how should we evaluate the social benefits of this technology? How do we balance its positive and negative consequences? What are some concrete metrics we can use to assess the efficacy and success of AI as a force for good in society?
One place to begin is with a “do no harm” ethic inspired by the Hippocratic Oath. At the very least, we should expect that AI will not create additional big problems for humanity or exacerbate existing ones. Is a cure for cancer an absolute good if millions of people have also lost their jobs and cannot afford good health care? If AI makes transportation systems safer and more efficient but enables mass surveillance by corporations and the state, is that a big win for AI? Is a more equitable education system powered by chatbots an improvement if overall intellectual ability and literacy decline because of overexposure to AI systems?
Some will argue for a narrow scope when evaluating the societal benefits of AI—a focus on particular systems affected by the technology—but I take a more holistic view inspired by Neil Postman, who wrote this in his 1992 book Technopoly: The Surrender of Culture to Technology, “Technological change is not additive; it is ecological. A new technology does not merely add something; it changes everything.”
Postman's ecological view of technology adoption is evident everywhere we look in the modern world. In the 1980s and 1990s, for example, cell phone technology was greeted as a big step forward for humanity. Most of this good press revolved around the benefits of phone technology as we then understood it, after a century of landlines in people's homes. Cell phones would make communication more mobile and convenient for everyone. You could carry the phone with you everywhere now, instead of it being tethered to a wall inside a house or office. Business efficiency would improve as real-time communication became more widespread and democratic. Access to technology and information would expand for people all over the world. The cell phone was seen as purely additive: an expansion and democratization of the benefits of phones in general. People who had never had a phone would now have one, and people who had lived with phones their entire lives would now be able to carry them everywhere and do things with them that were impossible before.
It was difficult then to see beyond the phone-ness of cell phones to imagine that they would become powerful handheld computers connected to worldwide information and communication networks, equipped with a camera, easy video and audio recording capability, games, instant access to a breathtaking array of entertainment, and something entirely new: social media. It was also impossible to predict how these devices would re-engineer the societal ecosystem, changing the way we communicate, work, socialize, pursue romance, conduct politics, and position ourselves socially, often not for the better. The cell phone was not merely additive; it changed everything.
Is it fair to evaluate the effects of a technology based on the harm it brings to the entire societal ecosystem? “No,” if the goal is to protect the “right” of moneyed interests to disrupt institutions and social systems in pursuit of profits; “Yes,” if we intend to have a technology revolution that benefits everyone.
Here are a few concrete measures we can use to evaluate the success of integrating AI into our society:
Fewer people are living under overpasses - It is a simple test: if widespread deployment of AI in our workplaces and systems causes spikes in unemployment and homelessness, then it will have failed the social integration test. If those numbers improve, it will have succeeded. Homelessness is an ecosystem indicator of the economic health of America’s most vulnerable citizens. It is also an indicator of changes in the employment market. A 2019 study by the U.S. Government Accountability Office found that with every 1% increase in unemployment, homelessness rises by 0.65%. The data to watch will be the unemployment and homelessness statistics, but overpasses are a powerful visual representation of the problem. We should look there as well.
Screen time has gone down rather than up - If AI is the great efficiency machine we are being promised, then integrating it into our systems should produce a surfeit of free time for all of us. Will that free time come in the form of massive layoffs and job loss, or will it find its way into all our lives as a shared benefit? What will we do with more AI-facilitated free time if we get it? If the new AI-powered society is an extension of the heads-bent-over-screens, entertainment-obsessed, high-tech dystopia we currently inhabit, it will be a failure. Americans are now spending an average of seven hours a day on screens. This is an outrageously unhealthy social reality. AI will have passed the “do no harm” test if it helps create a world in which people are spending less time on screens and more time outdoors, interacting in person with friends and family, and enjoying the fruits of life.
Economic inequality has declined - The current technological "revolution" we are living through is more like another Gilded Age than a genuinely revolutionary moment. Will the next one be different, or will we see trillionaires instead of billionaires stomping across the earth, forcing AI-fueled “disruption” on the rest of us? We hear AI boosters talk about the equalizing effects of AI, promising that it will level the playing field in education, democratize access to financial services, and even make the tax code more equitable. Some suggest that universal basic income will naturally flow from the tax windfall collected through AI-fueled productivity gains. These extremely optimistic projections run against the grain of the history of capitalism, which suggests darker outcomes. The top 1% of Americans measured by net worth currently own 30% of the nation’s wealth. This same group controls around 14% of all U.S. real estate and owns 54% of the U.S. public equity market. If the emerging AI revolution exacerbates these obscene disparities, widening the gap between rich and poor, it will be a failure. Merely leaving the current economic disparities intact will be a failure, too.
These measures are not perfect, but they are a good place to start. I offer them as a corrective to the wildly optimistic rhetoric surrounding AI. This technology will not merely be additive; it will change everything. Knowing this, we should set high standards and expectations for how AI will affect our social systems. Without these standards, we are at the mercy of tech billionaires and their political allies, who will happily disavow any responsibility for the consequences of their technology as long as they are profiting from it.