
Why Moore's Law Isn't Necessarily Progress

Updated: Jun 3





I recently calculated that I have owned seven desktop computers in my lifetime, along with four tablets, at least five laptop computers, and approximately seven cell phones. I thought this was a lot until I asked around. Many of my friends and relatives have burned through considerably more devices than I have. I tend to hold on to things until they disintegrate or stop working altogether. In this regard, I am very bad for the U.S. economy. 


I bought my first personal computer in 1995, a Dell PC with an Intel Pentium 75 MHz CPU and a dial-up modem. This device seems laughably out of date now, worthy of a place in a museum of ‘90s tech and nothing more, but it had its charms. I could write documents on it and make a screechy, not-so-stable connection to the internet. I could send emails and participate in chatroom discussions. It was more than sufficient at the time. I felt like I was participating in something revolutionary, even beautiful. Now that computer is buried in a landfill somewhere, smashed up and forgotten, beginning its centuries-long disintegration as it leaches heavy metals into the soil.


This parade of digital devices through my life tells a story of runaway consumerism, planned obsolescence, long-term environmental damage, and something called Moore’s Law, named for Gordon Moore, co-founder of Intel, the company that made the CPU in my first computer. When I was a year old in 1965, Moore predicted that the number of transistors on an integrated circuit would double roughly every two years while the cost per transistor dropped precipitously. This is the phenomenon that allowed computers to become smaller, cheaper, and much more powerful. His prediction held until recently, when the trend line began to slow and flatten.


At first glance, the decades-long upward trend of Moore’s Law suggests something resembling progress. The iPad Pro I am currently using to write this, for example, has roughly 45,000 times the CPU performance of my museum-piece Dell. The technical specs hint at unquestionable leaps forward in speed and performance. Surely my life has improved alongside this miraculous revolution in microprocessor technology. Surely the society that uses these tools is better because of them. How could it be otherwise?
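For the curious, here is a back-of-the-envelope sketch in Python of what a clean two-year doubling implies over the lifespan of my devices. The 1995 and 2025 endpoints and the tidy doubling schedule are illustrative assumptions, not real chip data, but they show how quickly the compounding piles up.

# Illustrative only: what a clean two-year doubling implies.
# The endpoint years and the tidy schedule are assumptions, not real chip data.
START_YEAR = 1995        # the Pentium 75 desktop described above
END_YEAR = 2025          # roughly the present
DOUBLING_PERIOD = 2      # years per doubling, per the popular form of Moore's Law

doublings = (END_YEAR - START_YEAR) // DOUBLING_PERIOD
growth = 2 ** doublings
print(f"{doublings} doublings -> about {growth:,}x growth")
# Output: 15 doublings -> about 32,768x growth, the same order of magnitude
# as the ~45,000x CPU-performance gap between the Pentium 75 and an iPad Pro.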



Graph of Moore's Law showing the exponential growth of computer power, taken from Wikipedia.


The history of the cell phone reveals how it could be otherwise. Twenty years ago, predictions of the liberatory potential of cell phones were coming fast and furious. In a 2005 PBS interview, author Thomas Friedman prognosticated that “the next great breakthrough in bioscience… could well come from a fifteen-year-old in Romania who downloads the human genome on his $25 Motorola cell phone connection to Google.” That same year, while the rock band U2 was on its Vertigo tour, lead singer Bono would regularly urge audiences to “Take out your cell phones, make this place into a Christmas tree.” He would then command them to text their names to a special number that would sign them up for the ONE Campaign to fight poverty and preventable diseases in Africa—a stunt that was meant to demonstrate the cell phone’s supposedly unlimited potential as an instrument of social justice reform. Two years later the iPhone was released, and I would regularly hear people say that it “has as much computing power as NASA when the U.S. landed a man on the moon,” as if to suggest that smartphones would usher in the next great breakthroughs in scientific and technological achievement. 


Jump ahead to the present. What do most people do with this moonshot-sized computer in their pockets? Average daily cell phone use has risen from about two hours to 4.9 hours over the past fifteen years. Roughly 50-55% of that time is spent streaming entertainment, and another 20-25% is spent socializing—calling, texting, scrolling social media, and online dating. The harms are now well known. Social psychologist Jonathan Haidt has documented the rise in depression, self-harm, and anxiety among teenagers who use social media apps on smartphones, a trend that has hit girls especially hard. Smartphones have also been the main vector for the circulation of misinformation and disinformation, the growth of passive screen time, and the steady fragmentation of our attention.


Of course, there are many healthy, positive things that one can do with a cell phone, but earlier predictions that these devices would lead to a positive net outcome for the human race were quite naive. A person who spends more than five hours a day staring at algorithmically fed content on a small screen is certainly not liberated.  


The question that comes to my mind when I see the Moore’s Law trend line on a graph is this: what happened to the liberatory potential of all that computing power? Anthropologist David Graeber humorously posed the same question in his 2015 book The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. He recalls his childhood in the ‘60s, when predictions of flying cars were made that would never be realized.

Well, all right, not just flying cars. I don’t really care about flying cars – especially because I don’t drive. What I have in mind are all the technological wonders that any child growing up in the mid-to-late twentieth century simply assumed would exist by 2015. We all know the list: Force fields. Teleportation. Antigravity fields. Tricorders. Tractor beams. Immortality drugs. Suspended animation. Androids. Colonies on Mars. What happened to them? Every now and then it’s widely trumpeted that one is about to materialise – clones, for instance, or cryogenics, or invisibility cloaks – but when these don’t prove to be false promises, which they usually are, they emerge hopelessly flawed. 

Graeber’s list is delivered tongue-in-cheek, but his main point lands: Something forestalled the proletarian tech revolution that many predicted was right around the corner in the 1960s. Graeber has an explanation: The confluence of state and corporate power in the U.S. funneled money into technology that would yield short-term profits and supported innovations that would facilitate administrative control, mass surveillance, and compliance. 


Another way of saying it: The U.S. got the tech revolution that powerful elites wanted for us. 


We certainly got disruption, a lot of it. My history of computer devices tells a story of technology that rewired my brain chemistry while driving massive changes in social mores, reading habits, news consumption, voting behavior, and societal expectations for just about everything. Collectively, these changes are a perfect illustration of Marshall McLuhan’s famous mantra, “the medium is the message.” In 2005, I was perfectly content watching videos that occasionally glitched; now, a glitch would be an occasion to throw another device on the trash heap or cancel a service or subscription. The tech kept getting better, retraining my expectations for what is normal along the way. The same subtle indoctrination accompanied nearly every large-scale social, economic, cultural, and political change wrought by this technology, good or bad. We were being normalized to the tech in the same way that large-scale car ownership normalized Americans to urban sprawl, roughly 45,000 traffic deaths per year, and potentially planet-killing carbon emissions.


But the path we are on, the six-lane superhighway that has been paved for this particular brand of digital technology revolution, was a wrong turn to begin with. For one thing, it is profoundly undemocratic. As far as I know, I was never given a vote on this path. It was not arrived at through careful consideration, public debate, or clearly outlined policy changes. Our political class has proven itself woefully ill-equipped to regulate new technology or even discuss it coherently. For decades, well-financed technologists and billionaire entrepreneurs have driven these changes by rushing new technology to market, then showing up later at Congressional hearings to offer weak mea culpas for unintended consequences or to demand that the government fund and support technology that is already out in the world and adopted by millions.


More importantly, this path has not led to a better way of life. If you don’t look at it too closely or ask too many questions, Moore’s Law suggests that we should all be living better lives, but that is far from reality for many people. It would be more accurate to say that we live in an age of anxiety and uncertainty that is at least partially caused by our technology revolution. In December 2024, Gallup reported that only 19% of Americans were satisfied with the way things are going in the country. Gallup has been asking this question since 1979, and the number has been below 50% for almost 20 years. Gallup's "Mood of the Nation" poll indicated that only 44% of Americans were "very satisfied" with their lives, the lowest percentage since the question was first asked in 2001. A 2023 Pew Research Center poll showed that only 37% of Americans expressed confidence in the future. The same poll showed deep pessimism about the country's moral and ethical standards, education system, and the future of racial equality. This is not the portrait of a country with a progressive sense of its present or future.


The recent rush to release AI chatbots has followed America's superhighway of tech adoption unswervingly. The technology was developed with serious ethical problems—namely the massive piracy of artists’ and writers' intellectual property—and then released into the world with no oversight while its creators went on talk shows and recorded TED Talks to admit that they don’t know exactly how it works and warn that it could lead to potentially catastrophic consequences. Last year, a group of AI developers showed up in Washington asking for a major expansion of the country’s energy infrastructure to support their large language models. It was business as usual in the world of Big Tech. 


Just last week, OpenAI announced that it had purchased Jony Ive’s hardware design company, io, for $6.4 billion. Ive is the designer who gave us the iMac, the iPhone, the iPad, and the Apple Watch. OpenAI wants to make the next generation of AI-enabled wearable or portable devices.


Something is different this time—in me at least. I’ve decided I won’t be purchasing any of these new AI devices, whatever form they take. My life is already disrupted enough. Despite what the self-proclaimed disruptors say, there is no right to disruption in American society. If the only vote I get is my consumer choice, I will be choosing a slower, more deliberate life from now on, one that protects my mental environment and prevents a few more devices from ending up in a landfill.

 
 
 
