
The top 5 most important ethical issues facing tech

Cathy Reisenwitz
Content, Clockwise
July 24, 2020

Tech workers are building the future. With that power comes a responsibility to build a future that is more free, just, and prosperous than the present. “Reality is something we create together,” wrote Ruha Benjamin in Race After Technology: Abolitionist Tools for the New Jim Code. At Clockwise, we believe tech has a moral obligation to work toward greater social good.

Many tech workers are taking that responsibility seriously. Since 2018, tech workers at Google, Facebook, and Amazon have publicly protested their companies’ behavior on ethical grounds.

It’s essential that we understand what’s at stake when it comes to who we work for and what we build. Below are five areas within technology that I believe represent forks in the road. Each holds tremendous possibility. Some are helping to usher in a better future. But each has the potential to hasten dystopia. Here's a brief summary of each of these areas and why they matter.


1. Mass surveillance

In a nutshell

“Mass surveillance is a public-private partnership from hell,” author and digital rights activist Cory Doctorow has said. In Race After Technology, Ruha Benjamin describes what the Stop LAPD Spying Coalition calls “the stalker state.” Private companies, including social media sites and cell service providers, collect vast troves of detailed, real-time location and communication metadata and sell or share it with law enforcement, immigration enforcement, and the intelligence community without informing users. “In the United States, data fusion centers are one of the most pernicious sites of the New Jim Code,” Benjamin writes.
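To see why “just metadata” is so revealing, consider how little code it takes to infer someone’s home and workplace from timestamped location pings. This is a toy sketch with invented data and grid IDs, not any carrier’s or agency’s actual pipeline:

```python
from collections import Counter
from datetime import datetime

# Toy location pings: (ISO timestamp, cell-tower/grid ID). In practice a
# carrier or app logs thousands of these per user per week.
pings = [
    ("2020-07-01T02:14:00", "grid_A"), ("2020-07-01T09:30:00", "grid_B"),
    ("2020-07-01T14:05:00", "grid_B"), ("2020-07-01T23:40:00", "grid_A"),
    ("2020-07-02T03:02:00", "grid_A"), ("2020-07-02T10:15:00", "grid_B"),
    ("2020-07-02T19:55:00", "grid_C"), ("2020-07-03T01:12:00", "grid_A"),
]

def most_common_location(pings, hours):
    """Return the grid cell this person visits most during the given hours."""
    visits = Counter(
        grid for ts, grid in pings
        if datetime.fromisoformat(ts).hour in hours
    )
    return visits.most_common(1)[0][0]

home = most_common_location(pings, hours=range(0, 6))   # overnight pings
work = most_common_location(pings, hours=range(9, 17))  # business hours

print(f"Likely home: {home}, likely workplace: {work}")
```

Fuse that with purchase records, license-plate reads, and social media activity, as data fusion centers do, and the picture gets far more detailed.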

What may be at stake

Surveillance by immigration enforcement is literally a matter of life and death. Law enforcement use of surveillance technology to identify and track protestors and journalists threatens First Amendment rights. Amazon Ring and other public/private surveillance tools “streamline police escalation and increase the likelihood of violent interactions,” according to EFF.

Where to learn more

The Intercept and 20 Minutes into the Future are good sources for surveillance reporting. Follow Eva Galperin, featured in our 5 people to follow in tethics (tech ethics) list, on Twitter for updates on surveillance. And check out our post on the pros and cons of employee surveillance.

2. Deepfakes

In a nutshell

In April, State Farm debuted a widely discussed TV commercial that appeared to show a 1998 ESPN analyst making shockingly accurate predictions about the year 2020. It was a deepfake. Deepfakes are media representations of people saying and doing things they didn’t actually say or do. To make one, a creator takes an existing photo, audio clip, or video of a person and uses deep learning (hence the name) to swap in another person’s likeness or voice.
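For the curious, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design behind classic face-swap deepfakes. Real systems add face detection and alignment, adversarial losses, and far larger networks; this assumes PyTorch, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Classic face-swap setup: one encoder learns identity-agnostic face
# structure; each person gets their own decoder. Feeding person A's face
# through person B's decoder renders B's likeness with A's expression/pose.
IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop

encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG))

def train_step(faces, decoder, optimizer):
    """One reconstruction step: each decoder learns to redraw its own person."""
    optimizer.zero_grad()
    recon = decoder(encoder(faces))
    loss = nn.functional.mse_loss(recon, faces)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training both decoders on their own footage, the "swap" is just:
face_a = torch.rand(1, IMG)          # stand-in for a real frame of person A
fake_b = decoder_b(encoder(face_a))  # person B's face, person A's expression
```

The trick is that the encoder learns identity-agnostic structure (pose, expression, lighting) while each decoder learns to render one specific face, so routing person A’s encoding through person B’s decoder produces the swap.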

What may be at stake

“Detecting deepfakes is one of the most important challenges ahead of us,” said Alphabet CEO Sundar Pichai. “Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea,” writes Rob Toews in Forbes. Some of these things we don’t have to imagine. Examples of deepfakes in the wild include a video in which Belgium’s Prime Minister Sophie Wilmès appears to link COVID-19 to climate change. In one particularly frightening example, rumors that a video of Gabon’s President Ali Bongo was a deepfake helped instigate a failed coup. On the other hand, brands are using deepfakes for marketing and advertising to positive effect. Other positive uses include creating “voice skins” for gamers who want realistic-sounding voices that aren’t their own.

Where to learn more

This MIT intro and this CSO intro both do a good job covering how deepfakes are made, use cases, threats, and defenses. The Brookings Institution has a good summary of the potential political and social dangers of deepfakes. These two resources are good primers on how advanced deepfake technology currently is. The videos embedded in this CNN explainer are great for getting up to speed with less reading.

3. Disinformation

In a nutshell

Unlike misinformation, which is false information spread without the intent to deceive, disinformation is propaganda deliberately designed to mislead or misdirect a rival. For example, a 2019 Senate Select Committee on Intelligence (SSCI) report confirmed that Russian-backed online disinformation campaigns exploited systemic racism to support Donald Trump’s candidacy in the 2016 election.

What may be at stake

While disinformation from Chinese and Russian-backed groups is distributed online, it has real-world consequences. Between 2015 and 2017, Russian operatives posing as Americans successfully organized in-person rallies and demonstrations using Facebook. In one instance, Muslim civil rights activists counterprotested anti-Muslim Texas secessionists in Houston who waved Confederate flags and held “White Lives Matter” banners. Russian disinformation operatives organized both rallies. Experts predict more Russian-backed disinformation in the run-up to the 2020 elections. “There is a war happening,” writes disinformation expert Renee DiResta. “We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”

Where to learn more

Dan Harvey’s 20 Minutes into the Future is an excellent newsletter, and his recent edition is a quick read on the latest developments in Russian disinfo. In it he recommends The IRA and Political Polarization in the United States, which he calls “a brilliant analysis of Internet Research Agency (IRA) campaigns published by Oxford University.” The Axios Codebook newsletter is also worth following; its June edition covers Russian disinfo. For a thorough-but-readable longread, I recommend DiResta’s The Digital Maginot Line. For a less readable, more academic analysis, check out Stanford University’s Internet Observatory.

4. Addictive UX

In a nutshell

Product managers, designers, marketers, and startup founders are all trying to build tools that users can’t put down. The benefit of addictive technology is obvious for the builders. But what is the impact on users?

What may be at stake

Habit-forming products aren’t bad in and of themselves. But not all habits turn out to be healthy. “A lot of tech is getting people to do things they might not otherwise make a conscious choice to do,” tech startup founder Vincent Woo said. “The more we optimize for engagement, the more we optimize for addiction.” Multiple studies have linked social media use with anxiety and depression, although the causal relationship isn’t clear. After Robinhood made it free, easy, and fast to trade individual stocks, some users developed an unhealthy relationship with trading. One 20-year-old user died by suicide after seeing a $730,000 negative balance in his account.

Arguably no app is more addictive than TikTok. By showing users the stickiest content regardless of their social graph and using their behavior to constantly refine the algorithm, TikTok has become “indispensable to its users,” Ben Thompson explained in Stratechery. As a Chinese company, TikTok owner ByteDance is required to pass user data to the Chinese government. And going back to the disinformation section, TikTok has little incentive to resist pressure to display content that gives China an advantage over the US. In 2019, Senator Josh Hawley introduced ham-fisted legislation aimed at combating addictive UX.
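To make “optimizing for engagement” concrete, here is a toy version of that feedback loop: the ranker’s only objective is observed watch time, so whatever holds attention, healthy or not, comes to dominate the feed. The categories and numbers are invented, and this bears no resemblance to any real recommender:

```python
from collections import defaultdict
import random

# Toy engagement loop: each video category has an (unknown) average watch
# time; the ranker estimates it from behavior and always serves the top one.
true_watch_time = {"cooking": 12.0, "news": 8.0, "outrage": 45.0}

estimates = defaultdict(lambda: [0.0, 0])  # category -> [total_seconds, views]

def pick_category():
    """Serve whichever category currently looks stickiest (greedy ranker)."""
    unseen = [c for c in true_watch_time if estimates[c][1] == 0]
    if unseen:
        return random.choice(unseen)  # explore each category once
    return max(true_watch_time, key=lambda c: estimates[c][0] / estimates[c][1])

for _ in range(1000):
    cat = pick_category()
    watched = random.gauss(true_watch_time[cat], 3.0)  # simulated user session
    estimates[cat][0] += watched
    estimates[cat][1] += 1

for cat, (total, views) in estimates.items():
    print(f"{cat}: served {views} times")
# The stickiest category dominates the feed; nothing in the objective
# asks whether the user is better off for having watched it.
```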

Where to learn more

This Scientific American piece offers a solid overview of the research on social media’s impact on mental health. The Margins newsletter covers the pros and cons of technology, and its Robinhood edition is worth a read. Ben Thompson’s Stratechery newsletter is usually more nuts-and-bolts, but sometimes delves into useful analysis of the ethical implications of technology.

5. Racist AI

In a nutshell

AI is only as good as the data it’s trained on. Since humans still, by and large, hold meaningful racial biases, it makes sense that the data we produce and use to train our AI will also contain racist ideas and language. The fact that Black and Latino Americans are severely underrepresented in positions of leadership at influential technology companies exacerbates the problem. Nationwide, just 3.1% of tech workers are Black. In Silicon Valley, the figure is closer to 3%, and only around 1% of Silicon Valley tech entrepreneurs are Black.
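The “only as good as its training data” point can be demonstrated in a few lines: train a model on historically biased decisions and it reproduces the bias, even when it never sees race directly, only a correlated proxy. This is a self-contained numpy sketch on synthetic data, not a model of any real hiring system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic hiring history: skill is identical across two groups, but past
# (human) decisions approved group 0 far more often than group 1.
group = rng.integers(0, 2, n)              # 0 or 1; never shown to the model
skill = rng.normal(0, 1, n)                # same distribution for both groups
zip_proxy = group + rng.normal(0, 0.3, n)  # e.g. segregated zip codes
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

# Tiny logistic regression on (skill, zip_proxy) -- race itself is excluded.
X = np.column_stack([skill, zip_proxy, np.ones(n)])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n  # gradient descent on log loss

scores = 1 / (1 + np.exp(-X @ w))
for g in (0, 1):
    print(f"group {g}: mean predicted hire score = {scores[group == g].mean():.2f}")
# Equal skill in, unequal scores out: the proxy feature lets the model
# relearn the historical bias it was never explicitly given.
```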

What may be at stake

After Nextdoor moderators got in hot water for deleting Black Lives Matter content, the company said it would use AI to identify racism on the platform. But racist algorithms are already harming Black Americans. Police departments are using facial recognition software they know misidentifies up to 97% of Black suspects, leading to false arrests. The kind of modeling used in predictive policing is also inaccurate, according to researchers. And judges are using algorithms to assist with setting pre-trial bail that assign Black Americans a higher risk of recidivism based on their race. Amazon scrapped its internal recruitment AI once it came to light that the tool was biased against women. On the other hand, one study showed that a machine learning algorithm led to better hires and lower turnover while increasing diversity among Minneapolis schoolteachers.
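One concrete safeguard these examples suggest is auditing a model’s error rates per group before deployment. The function below shows the core arithmetic on made-up predictions; in a real audit the labels would come from a held-out evaluation set:

```python
def false_positive_rate(y_true, y_pred, groups, group):
    """Share of actual negatives in `group` that the model flagged positive."""
    fp = tn = 0
    for truth, pred, g in zip(y_true, y_pred, groups):
        if g == group and truth == 0:
            if pred == 1:
                fp += 1
            else:
                tn += 1
    return fp / (fp + tn) if (fp + tn) else 0.0

# Made-up audit data: 1 = flagged as a match/high risk, 0 = not.
y_true = [0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

for g in ("a", "b"):
    rate = false_positive_rate(y_true, y_pred, groups, g)
    print(f"group {g}: false positive rate = {rate:.0%}")
```

A model can look accurate overall while one group bears most of the false accusations; only a per-group breakdown surfaces that.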

Where to learn more

The Partnership on AI, a nonprofit coalition committed to the responsible use of AI, is a great follow in this space. Why algorithms can be racist and sexist and Why the left should worry more about AI are good short, readable intros to the topic. Race After Technology is a concise, readable, quotable book on what author Ruha Benjamin calls the New Jim Code: “The employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of the previous era.” These two posts delve a little into the need for diversity, equity, and inclusion in tech.

Read next:

5 people to follow in tethics (tech ethics)

3 ways tech workers can support police reform

Is now the time to outsource engineering overseas?

About the author

Cathy Reisenwitz

Cathy Reisenwitz is the former Head of Content at Clockwise. She has covered business software for six years and has been published in Newsweek, Forbes, the Daily Beast, VICE Motherboard, Reason magazine, Talking Points Memo and other publications.
