08.10.2024

Your boss might be better as an algorithm

As the Alexas and Siris of the world proliferate, it’s only a matter of time before we all have personal AI assistants to tend to our every whim. But as AI gets smarter, our roles will reverse, and we will inevitably start working for AI instead of it working for us.

This may sound like a distant sci-fi dystopia, but it’s already happening to Uber and Lyft drivers. An algorithm plays the role of manager: it dictates how much they’ll be paid, mandates performance levels like acceptance and cancellation rates, and sets schedules based on demand for rides and deliveries. Our tools once shaped us, but now we serve as their tools.

The truth is that nearly any cognitive task, whether it’s determining a schedule, assigning work, providing feedback, or making a decision, can be broken down into a series of instructions. And that’s all an algorithm really is: a defined set of steps to follow to achieve a given outcome.
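To make that concrete, here is a minimal sketch of a managerial decision written as explicit steps. Everything in it, the function name, the fields, and the thresholds, is invented purely for illustration:

    # A hypothetical managerial decision expressed as explicit steps --
    # the same checklist a human dispatcher would run through informally.

    def assign_shift(driver, demand_forecast, min_acceptance_rate=0.85):
        """Decide whether to offer a driver a shift for the coming hour."""
        # Step 1: check that the driver meets the performance floor.
        if driver["acceptance_rate"] < min_acceptance_rate:
            return {"offer": False, "reason": "acceptance rate below threshold"}
        # Step 2: check that forecast demand in the driver's zone justifies a shift.
        if demand_forecast[driver["zone"]] < 1.0:
            return {"offer": False, "reason": "insufficient demand in zone"}
        # Step 3: otherwise, make the offer.
        return {"offer": True, "reason": "meets standards and demand is high"}

    print(assign_shift({"acceptance_rate": 0.9, "zone": "downtown"},
                       {"downtown": 1.4}))

Nothing about those three steps requires a human to execute them, which is precisely the point.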

If nearly any task can be governed by an algorithm, then nearly any management role can be performed by software. The white-collar jobs that were assumed to be safe from automation may in fact be even more vulnerable, and your boss’s job might be automated before yours is. Before rushing into this brave new world, managers, employees, and AI engineers should consider both the benefits and the failings of AI management.

Scaling at the speed of software

AI management is more popular than ever, partly because humans simply can’t keep up with rapidly scaling workloads. For example, when Uber was expanding in 2014 and 2015, the company was adding up to 50,000 drivers every month. Even the best managers couldn’t schedule that many drivers and ensure they were performing at their best, so some system of virtual management had to be put in place.

Although having an AI boss sounds like something out of a futuristic dystopia, it’s much less HAL 9000 and more Bill Lumbergh from Office Space: a middle manager that handles the mundane, repetitive tasks involved in coordinating a large group of employees. After all, even the best, most conscientious human bosses are still fallible. They make poor decisions when tired; their biases (whether unconscious or not) affect how they treat employees; and their egos and personalities can determine how well they work with others. Algorithmic bosses would theoretically eliminate those problems: they could make decisions constantly and consistently, reduce bias, and effortlessly adapt their management style to the needs of each employee.

So why haven’t we seen more management roles assisted, or replaced entirely, by AI?

Humans are hard to update. Changing a rule in an algorithm is relatively straightforward: push an update, and the entire system refreshes instantly. Human operators aren’t nearly as flexible and can only absorb so much change at once. (And despite their protests otherwise, humans often don’t like change in the first place.) For many organizations, moving to this style of management is a hurdle they simply don’t want to clear.
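This asymmetry is easy to see in code. In the hypothetical sketch below, the management policy lives in a single data structure, so a one-line change takes effect everywhere at once, with no retraining, memos, or meetings required. The rule names and numbers are invented:

    # Hypothetical sketch: when managerial rules live in data rather than
    # in people's heads, a policy change is a one-line update.

    RULES = {"min_acceptance_rate": 0.85, "max_cancellation_rate": 0.10}

    def meets_standards(driver):
        return (driver["acceptance_rate"] >= RULES["min_acceptance_rate"]
                and driver["cancellation_rate"] <= RULES["max_cancellation_rate"])

    driver = {"acceptance_rate": 0.88, "cancellation_rate": 0.12}
    print(meets_standards(driver))         # False under the current policy

    RULES["max_cancellation_rate"] = 0.15  # "push an update": the policy changes
    print(meets_standards(driver))         # True -- everywhere, instantly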

Humans build in bias. As long as humans are writing the algorithms, there’s the possibility of unconscious bias being built into the algorithmic boss itself. For instance, a software program designed to assess how likely defendants were to reoffend was biased against African Americans, even though race was not among the algorithm’s inputs; the bias crept in through inputs that correlate with race.
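That last mechanism, often called proxy bias, is easy to reproduce. In the hypothetical sketch below, the score never consults race, yet an invented per-neighborhood arrest-rate table smuggles the same signal back in wherever housing is segregated:

    # Hypothetical sketch of proxy bias: race is never an input, yet a
    # correlated feature (an invented neighborhood arrest-rate table)
    # reproduces it in the output.

    HISTORICAL_ARREST_RATE = {"neighborhood_a": 0.2, "neighborhood_b": 0.7}

    def reoffense_risk(person):
        # Race is never consulted here -- but neighborhood carries the
        # same signal wherever housing is segregated along racial lines.
        return HISTORICAL_ARREST_RATE[person["neighborhood"]]

    print(reoffense_risk({"neighborhood": "neighborhood_b"}))  # 0.7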

Humans don’t always know what the answer should be. All the steps in an algorithm might be right individually, but they won’t help if they add up to the wrong answer. We don’t even understand what truly makes for effective management: despite management fads coming and going, Gallup engagement scores have been stuck at around 32% for the past 15 years.

Humans dislike inequality. Automation and AI could split work cultures, and society at large, into two unequal camps: those who build and work with software, and those who are managed or replaced by it. In fact, a recent survey indicated that 76% of Americans are concerned that automation will exacerbate economic inequality.

Humans find loopholes in systems. Any incentive structure can be gamed, and if people don’t feel like they’re being treated fairly, they’ll quickly look for ways to even the score. When Uber drivers felt that the algorithm was making it difficult for them to achieve their bonuses, they formed online communities to share information and coordinated practices to trick the system, such as logging out of Uber in order to trigger surge pricing.

Despite these concerns, Silicon Valley runs the risk of pushing forward with the technology first and figuring out the ethics later. Its companies are creating modern-day Frankenstein’s monsters: experiments designed to showcase the best of humanity that could nonetheless turn twisted and terrifying through neglect.

It’s unlikely that any pushback will come from the Zuckerbergs and Kalanicks of the world; they’re the ones who stand to profit most from the shift to greater automation. Instead, if recent mea culpas are any indication, core employees a layer or two down from the CEO are more likely to question the impact of technologies like AI on society. At Facebook, for instance, the employees who built the ad platform are now lamenting how their work may have allowed Russians to spread fake news, while the engineer behind the “Like” button now questions its addictive nature.

For those employees currently engineering the future of automated managers, here’s a basic code of conduct:

  1. Do no harm to the worker. AI bosses should protect the humans they influence from harm, even when the company stands to benefit financially from that harm. For example, algorithms shouldn’t push drivers to log dangerously long hours just to ensure ample rides for customers.
  2. Make the opaque transparent. Employees should be able to see and understand the underlying rules that drive their algorithmic managers, so they can evaluate the reasoning and context behind each request (see the sketch after this list). Employees have spent the last two decades fighting for greater meaning and clarity at work; AI shouldn’t undo that progress by obscuring how decisions are made.
  3. Reject society’s inequities. Algorithm creators must be aware of their own biases, incorporate diverse viewpoints when building the algorithm, and carefully test and monitor it to catch problems early. For example, if African-American ride-share drivers receive lower driver scores on average because of societal conditioning, an automated boss shouldn’t reinforce that prejudice by limiting those drivers’ hours.
  4. Preserve freedom. Studies show that cultures of draconian rules and impossible expectations foster rule-beating and unethical behavior. Algorithmic bosses can seem like a simple way to enforce rules, but human beings will always exercise their agency and free will.
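On the second principle, here is a minimal sketch of what a transparent automated decision might look like: every outcome carries the rule that produced it and the data that rule saw. The rule, threshold, and field names are all hypothetical:

    # Hypothetical sketch of principle 2: every automated decision ships
    # with the rule that produced it, so the employee can audit the logic.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str
        rule: str     # the human-readable rule that fired
        inputs: dict  # the data the rule was evaluated against

    def review_bonus(driver):
        rides = driver["rides_this_week"]
        if rides >= 60:
            return Decision("bonus granted", "rides_this_week >= 60",
                            {"rides_this_week": rides})
        return Decision("bonus denied", "rides_this_week >= 60 not met",
                        {"rides_this_week": rides})

    print(review_bonus({"rides_this_week": 52}))

An employee who disagrees with the outcome at least knows which rule to contest, which is exactly the clarity the principle demands.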
