Historian Mar Hicks on why nothing about AI is inevitable

Some have deemed the use of AI an inevitability. Many recent headlines and reports quote technology leaders and business executives who say AI will replace jobs or must be adopted to stay competitive in a variety of industries. There have already been real ramifications for the labor market; some companies have laid off a substantial number of employees to “go all-in” on AI. In response, a number of colleges are creating AI certificate programs so students can demonstrate at least an “awareness” of the technology to future employers, often with backing from the AI companies themselves.

When we look at the history of technology, however, the pronouncements being made about generative AI and work are better understood as marketing tactics meant to create hype, gain new users, and ultimately deskill jobs. Deskilling reduces the level of competence required to do a job and funnels knowledgeable workers out of their positions, leaving brittle infrastructure behind in their place.

[Photo: Courtesy Mar Hicks]

Mar Hicks, a historian of technology at the University of Virginia, researches and teaches the history of technology, computing, and society, and the larger implications of powerful, widespread digital infrastructures. Hicks’ award-winning book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, traces how Britain lost its lead in electronic computing as the proportion of women in the field declined, leaving a dearth of workers with the expertise, knowledge, and skills required to operate increasingly complex computer systems. Hicks co-edited Your Computer Is on Fire, a collection of essays examining how technologies that centralize power tend to weaken democracy, and is currently working on two projects: a history of resistance to large hegemonic digital systems, and a book on the dot-com boom and bust.

Fast Company recently spoke with Hicks about hype cycles, how tools get framed as “inevitable,” and the relationship between technology, labor, and power. This interview has been condensed and edited for clarity.

There is a lot of AI hype. Previously, there was blockchain hype. In the late 1990s, there was dot-com hype. How has hype played a role in the adoption of and investment in prior technologies?

The new technologies that we’ve been seeing over the past 20 to 30 years in particular depend on a cycle of funding from multiple sources, sometimes really significant funding from venture capital. They almost by necessity have to sell themselves as more than what they feasibly are. They’re selling a fantasy, which in some cases they hope to make real, and in some cases they have no intention of making real.

This fantasy attracts venture capital because venture capital bets on all sorts of different technologies, many of them somewhat outlandish, in hopes of one or a few being billion-dollar technologies. As that cycle has become more tuned and understood by both the investors and those whose companies are being invested in, various ways to game the system have come about.

Sometimes they can be incredibly harmful. One way to game the system is to promise something you know you can’t deliver, simply for very short-term gain. Another way of gaming the system, which I would argue is more dangerous, is to promise something that you either know you can’t deliver or aren’t sure the technology can deliver, and then try to essentially reengineer society around the technology: reengineer consumer expectations, reengineer user behaviors, so that you and your company create an environment—a labor environment, a regulatory environment, a user environment—that will bring that unlikely thing closer to reality.

A lot of dangerous things can happen in the process, especially when the attempt amounts to something like labor arbitrage, where the profit from a particular technology isn’t coming out of the technology’s utility—it’s coming out of the fact that the technology allows employers to pay far less, or nothing, for labor that is somehow seen as equivalent even if it’s much inferior, or to do things like browbeat labor unions with the threat of a certain technology.

We’re seeing that a little bit now with certain technologies that are being marketed not just as labor saving, but as something that can replace even human thought. That might seem very dystopian, but it’s a common thread in the history of technology. With every wave of technologies that automate some part of what we do in the workplace, the benefits are always overpromised and under-delivered. 

Hype cycles seem to be centered on tools. Why are tools historically hyped in a way that invisible systems, practices, and knowledge are not? 

Hype cycles tend to be centered on visible products and tools because it’s much harder to explain and mobilize excitement around processes and infrastructures and knowledge bases. When you start going into that level of complexity, you automatically start talking about things in ways that are a bit more grounded in reality.

If you’re going to try to hype something up, you’re promising radical new change. You really don’t want to get into the details very much about how this fits with existing processes, labor markets, and even business models. The more you get into those details, the more you get into the weeds about how things might not work, or how things quite obviously don’t make sense. Where the profit is coming from not only can start to look unclear, it can start to look very shortsighted.

This focus on tools comes up again and again in a way that’s both very specific but also kind of hand wavy. We see these tools as a thing that we can understand and grasp onto, literally or metaphorically, but also, the tool stands in for a whole bunch of other things that become unsaid or hidden and are left to people to infer. That puts the person or corporation who’s selling the tool in a very powerful position. If you can drum up hype and get people to think the best of what the possible future outcomes might be without telling them directly, then you don’t have to make as many false promises, even as you’re leading people in a very specific direction.

Technology adoption is frequently framed as inevitable by those advocating for it. Why are technologies framed that way? This seems like a technologically deterministic way of thinking—as if it is predetermined that people will adopt a technology just because it exists.

Framing anything—a technology, a historical movement—as inevitable is really a powerful tool for trying to get people and organizations on your side, ideologically. In some cases, when a very powerful person, organization, or set of social interests says that something is inevitable, it becomes much harder for other people who might not see that inevitability, or might not want that thing to come to pass, to push back. Instead of just disagreeing on the level of whether something will work well, the discourse is shifted to arguing whether or not it’s inevitable, and how to mitigate the harm if it is.

Once you, as a critic, fall back to the position that the technology may be inevitable, you’re already arguing from a much weaker position. You’re arguing from a position that assumes the validity of the statement that a technology is just going to come along and there’s nothing that can be done to stop it, to regulate it, to reject it.

This is a technologically deterministic way of thinking, because it produces this idea that technologies shape society when, of course, it’s usually the other way around. It’s society that creates and chooses what those technologies are and should be. Saying a technology is inevitable and that it is going to determine how things historically develop puts so much power in the hands of the people who make the technology. 

I think some of the feeling of inevitability with regard to AI comes from the fact that AI features have already been integrated by engineers into many tools that people rely on, and the makers of these technologies do not provide a way to opt out. How inevitable is widespread AI usage?

The only way we can truly answer that question is in hindsight. If it were inevitable, that would mean that people, and the governments they have elected to supposedly represent them, no longer have a say in the process. Technology corporations are essentially running a massive public beta test of generative AI LLMs right now, at low introductory rates or even for free. I think it’s really premature to say that things have to go the way that the people boosting, profiting from, and funding these technologies want them to go.

It’s not inevitable. History doesn’t just happen. People, organizations, and institutions make it happen. There’s always time to change things. Even once a technology or a set of practices becomes entrenched, it can still be changed. That’s real revolution. To say that the technology can become inevitable and sort of entrenched in these simple terms is, let’s just say, a big oversimplification.

A talented team of women, known as computers, was responsible for the number-crunching of launch windows, trajectories, fuel consumption, and other details that helped make the U.S. space program a success. [Photo: NASA/JPL-Caltech]

How can individuals resist technologically deterministic thinking and AI hype?

There are a couple of things that I would caution people to be on the lookout for. Whenever something is framed as new and exciting, be very wary about just uncritically adopting it or experimenting with it. Likewise, be wary when something is being presented as “free,” even though billions of dollars of investment are going into it and it’s using lots of expensive resources in the form of public utilities like energy or water.

That is sort of a red flag that should cause you to think about not uncritically adopting something into your life, and not even playing around with it with the goal of “checking it out.” That is exactly the behavior and curiosity that companies rely upon to get people hooked, or at least talking about these products, creating positive rhetoric, and generating buzz that helps them spread farther and faster, regardless of their level of utility or readiness.

I would really love to see folks being a little more skeptical of how they use technologies in their own lives, and not saying to themselves, “oh well, it’s here, so nothing I do is going to change that. I guess I’ll just use it anyway.” People do have agency, both as individuals and as larger groups, and I would foreground that if we’re thinking about how we combat technological determinism. I’ve been really heartened to see that so many journalists have changed their approach in the last few years when it comes to pulling back on the breathless, uncritical reporting of new tools and new technologies.

Science and technology reporting frequently focuses on new advances and therefore reporters seek interviews with scientific or technological experts, rather than people who study the broader context. As a historian of science, what do you think is left out when that perspective is not included?

It’s totally reasonable to expect that science and technology journalists will talk to the folks who are experts in a particular technology. But it’s really important to get the context as well, because a technology is only as useful as how it’s applied. While these folks are experts in that technology, they are not, just by the nature of what they do, going to be experts in its application and social propagation, or in the ways it is going to impact things economically or politically or educationally. It’s not their job to have very deep or good answers to those questions.

Domain experts from those fields need to be brought into the conversation as well, and need to be brought into any reporting on a new technology. We’ve gone through a pretty dangerous metamorphosis since the late 20th century, where anybody who is an expert in computing, or can even just be seen as competent in computing, has been given an intellectual cachet: their opinions are considered more important than those of people who aren’t technological experts, and they’re being asked questions that they’re not well equipped to answer.

In the most benign case, that means you get poor answers. In the worst case scenario, it means that people who are trying to manipulate public discourse to help their business interests can do that really easily. 

I have seen AI compared to the calculator, the loom, the Industrial Revolution, and mass production, among other things. Are any of these historically accurate comparisons?

I think that certain aspects can be historically accurate, but the way that people cherry-pick which aspects and which technologies to talk about usually has more to do with how they hope things will go than with explaining how things are likely to go.

As a historian, I think it’s important to use examples from the past, but I prefer to see them used in a way that’s a bit more critical. Instead of just saying “AI is like a calculator, it’s just a new tool, get over it,” maybe we should be comparing it to automated looms and automated weaving, and thinking about how that affected labor, and how frame breakers—Luddites—were coming in and trying to get this technology out of their workplaces, not because they were against technology, but because it was a matter of their survival as individuals and as a community.

These historically accurate comparisons are tricky, and I would just say, be wary of anybody who’s giving a historical comparison that they say is going to 100% map onto what’s happening now, especially if they’re doing it to say “get over it, people were afraid of this technology at the start, too.” 

AI is purported to boost or augment workers’ skills by automating tasks so that they can spend less time on them. This reminds me of Ruth Cowan’s More Work for Mother. Do new technologies tend to save time?

Oftentimes, new technologies do not save time, and they often do not function in the ways that they are supposed to or the ways that they’re expected.

One of the throughlines in the history of technology is that big, new infrastructural technologies often make more uncompensated labor. People may have to do more things to essentially shepherd those technologies along and make sure that they don’t break down, or the technologies create new problems that have to be addressed. Even if it seems like it’s saving time for one person, a lot of the time it is creating a ton of work for other people—maybe not in that immediate moment. Maybe people, days, months, even years down the line, have to come in and fix a mess that was created in the past.

In your book Programmed Inequality, you write about how feminized work, work that was “assumed to be rote, deskilled, and best suited to women,” was critical for early computing systems. This work was anything but unskilled. Now we have work that is assumed to be unskilled—and has historically been done by women—being marketed as replaceable by AI: using chatbots to virtually attend or take notes at meetings, to automate tedious tasks like annotating and organizing material, and to write emails, reports, and code. What do we lose when we let AI do these kinds of tasks rather than letting people accomplish them on their own, if the task is theoretically getting done either way?

In my book, I talk about how early computing, especially in the U.K.—but this was also true in the U.S. to a very large degree—was feminized. In other words, it was done largely by women.

The other thing that the word “feminized” means is work that is seen as—emphasis on seen as—deskilled, and it’s undervalued as a result. It was seen as just another kind of clerical work, or very rote mathematical work, nothing that required any sort of real brilliance, even though it did require a lot of education, skill, and creative thinking to do these early programming jobs at the dawn of the electronic age. Bug tracking software, tools that help people keep their code neat, or even compilers
