
The Next Big Privacy Hurdle? Teaching AI to Forget



But as we continue to grapple with this crucial issue, we’ve largely failed to address one of the most important aspects: how do we control our data once it’s been fed into the artificial intelligence (AI) and machine-learning algorithms that are becoming omnipresent in our lives?

Virtually every modern enterprise is in some way or another collecting data on its customers or users, and that data is stored, sold, brokered, analyzed, and used to train AI systems. Algorithms may not offer this leniency, meaning that data collected on a youthful transgression may be given the same weight (and remembered the same way) as any other data, potentially reinforcing bad behavior or limiting opportunities down the line as this data becomes more embedded in our lives. For instance, today a college admissions counselor may stumble upon incriminating photos of an applicant on a social media platform; in the future, they may be able to hear recordings of that applicant as a 12-year-old, captured by a voice assistant in the child’s home.

The AI Generation needs a right to be forgiven. Historically, we have worked hard to create protections for children, whether through laws about advertising, the expunging of juvenile criminal records, the Children's Online Privacy Protection Act, or other initiatives. The moral panics of the mid-20th century seem quaint in comparison to today’s digital free-for-all. The lack of debate on what data collection and analysis will mean for kids coming of age in an AI-driven world leaves us to imagine its implications for the future.
Now, advanced AI systems can analyze the data they’ve internalized in order to arrive at a solution that humans may not even be able to understand, meaning that many AI systems have become “black boxes,” even to the developers who built them, and it may be impossible to reason about how an algorithm arrived at a certain decision.

On a basic level, people understand that there are trade-offs when they use digital services, but many are oblivious to how much information is captured, how it is used, and whom it is shared with.

By Darren Shou, Wired