Worker protection laws aren’t ready for the automated future of work

Science fiction has long imagined a future in which humans constantly interact with robots and intelligent machines. This future is already happening in warehouses and manufacturing businesses. Other workers use virtual or augmented reality as part of their employment training, to assist them in performing their job, or to interact with clients. And lots of workers are under automated surveillance from their employers.

All that automation yields data that can be used to analyze workers’ performance. Those analyses, whether done by humans or software programs, may affect who is hired, fired, promoted, and given raises. Some artificial intelligence programs can mine and manipulate the data to predict future actions, such as who is likely to quit their job, or to diagnose medical conditions.

If your job doesn’t currently involve these types of technologies, it likely will in the very near future. This worries me. As a labor and employment law scholar who researches the role of technology in the workplace, I know that unless significant changes are made to American workplace laws, these sorts of surveillance and privacy invasions will be perfectly legal.

New technology disrupting old workplace laws

The U.S.’s regulation of the workplace has long been an outlier among much of the world. Especially for private, nonunionized workers, the U.S. largely allows companies and workers to figure out the terms and conditions of work on their own.

In general, for all but the most in-demand workers or those at the highest corporate levels, the lack of regulation means companies can behave however they want—although they are subject to laws preventing discrimination, setting minimum wages, requiring overtime pay, and ensuring worker safety.

But most of those laws are decades old and are rarely updated. They certainly haven’t kept up with technological advances, the increase in temporary or “gig” work, and other changes in the economy.

Faced with these new challenges, the old laws leave many workers without adequate protections against workplace abuses, or even totally exclude some workers from any protections at all. For instance, two Trump administration agencies have recently declared that Uber drivers are not employees and therefore not entitled to minimum wage, overtime, or the right to engage in collective action such as joining a union.

Emerging technologies like artificial intelligence, robotics, virtual reality, and advanced monitoring systems have already begun altering workplaces in fundamental ways that may soon become impossible to ignore. That progress highlights the need for meaningful changes to employment laws.

Consider Uber drivers

Like other companies in what has been called the gig economy, Uber has spent considerable amounts of money and time litigating and lobbying to protect regulations classifying its drivers as independent contractors, rather than employees. Uber set its fifth annual federal lobbying record in 2018, spending $2.3 million on issues such as keeping its drivers from being classified as employees.

The distinction is a crucial one. Uber does not have to pay employment taxes—or unemployment insurance premiums—on independent contractors. In addition, nonemployees are completely excluded from any workplace protection laws. These workers are not entitled to a minimum wage or overtime pay; they can be discriminated against based on their race, sex, religion, color, national origin, age, disability, and military status; they lack the right to unionize; and they are not entitled to a safe working environment.

Companies have tried to classify workers as independent contractors for as long as there have been workplace laws, but technology has greatly expanded companies’ ability to structure work in ways that blur the line between employees and independent contractors.

Employees aren’t protected, either

Even for workers who are considered employees, technology allows employers to take advantage of the gaps in workplace laws like never before. Many workers already use computers, smartphones, and other equipment that allows employers to monitor their activity and location, even when off duty.

And emerging technology permits far greater privacy intrusions. For instance, some employers already have badges that track and monitor workers’ movements and conversations. Japanese employers use technology to monitor workers’ eyelid movements and lower the room temperature if the system identifies signs of drowsiness.

Another company implanted radio-frequency identification (RFID) chips into the arms of employee “volunteers.” The purpose was to make it easier for workers to open doors, log in to their computers, and purchase items from a break room. But a person with an RFID implant can be tracked 24 hours a day. Also, RFID chips are susceptible to unauthorized access or “skimming” by thieves who are merely physically close to the chip.

No privacy protections for workers

The monitoring that’s possible now will seem simplistic compared to what’s coming: a future in which robotics and other technologies capture huge amounts of personal information to feed AI software that learns which metrics are associated with things such as workers’ moods and energy levels, or even diseases like depression.

One healthcare analytics firm, whose clients include some of the biggest employers in the country, already uses workers’ internet search histories and medical insurance claims to predict who is at risk of developing diabetes or considering becoming pregnant. The company says it provides only summary information to clients, such as the number of women in a workplace who are trying to have children, but in most instances it could probably legally identify specific workers.

Aside from some narrow exceptions, such as bathrooms and other specific areas where workers can expect relative privacy, private-sector employees have virtually no way, nor any legal right, to opt out of this sort of monitoring. They may not even be informed that it is occurring. Public-sector employees have more protection, thanks to the Fourth Amendment’s prohibition against unreasonable searches, but in government workplaces the scope of that prohibition is quite narrow.

AI discrimination

In contrast to the almost total lack of privacy laws protecting workers, employment discrimination laws—while far from perfect—can provide some important protections for employees. But those laws have already faced criticism for their overly simplistic and limited view of what constitutes discrimination, which makes it very difficult for victims to file and win lawsuits or obtain meaningful settlements. Emerging technology, particularly AI, will exacerbate this problem.

AI software programs used in the hiring process are marketed as eliminating or reducing biased human decision-making. In fact, they can create more bias because these systems depend on large collections of data, which can be biased themselves.

For instance, Amazon recently abandoned a multiyear project to develop an AI hiring program because it kept discriminating against women. Apparently, the AI program learned from Amazon’s male-dominated workforce that being a man was associated with being a good worker. To its credit, Amazon never used the program for actual hiring decisions, but what about employers who lack the resources, knowledge, or desire to identify biased AI?

The laws about discrimination based on computer algorithms are unclear, just as other technologies stretch employment laws and regulations well beyond their clear applications. Without an update to the rules, more workers will continue to fall outside traditional worker protections—and may even be unaware how vulnerable they really are.


Jeffrey Hirsch is the Geneva Yeargan Rand distinguished professor of law at the University of North Carolina at Chapel Hill.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


https://www.fastcompany.com/90779774/worker-protection-laws-arent-ready-for-the-automated-future-of-work?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 3y | Aug 19, 2022, 5:21:19

