
Optimizing Machines Is Perilous. Consider “Creatively Adequate” AI.


By Angus Fletcher and Erik J. Larson

SOURCE: https://www.wired.com/story/artificial-intelligence-data-future-optimization-antifragility
Summary

This means that AI antifragility won’t ever be human, let alone superhuman; it will be a complementary tool with its own strengths and weaknesses. We then must step toward heresy by acknowledging that the root source of AI’s current fragility is the very thing that AI design now venerates as its high ideal: optimization. Optimization is the push to make AI as accurate as possible.

Instead of designing AI to prioritize resolving ambiguous data points, we can program it to perform quick-and-dirty recalls of all possible significations, and then to carry those branching options onto its subsequent tasks, like a human brain that continues reading a poem with multiple potential interpretations held simultaneously in mind. And in any and all cases, the AI won’t break itself, self-destructing (via a digital version of anxiety) into making unnecessary errors because it’s so stressed about being perfect.

The next big contributor to antifragility is creativity. Current AI aspires to be creative via a big-data leverage of divergent thinking, a method conceived 70 years ago by Air Force Colonel J.P. Guilford. The knock-on result of such pseudo-invention is a culture of AI design that categorically misunderstands what innovation is: FaceNet’s “deep convolutional network” is hailed as a breakthrough over previous facial-recognition software when it is more of the same brute-force optimization, like tweaking a car’s torque band to add horsepower and then calling it a revolution in transportation. The antifragile alternative is to flip from using data as a source of inspiration to using it as a source of falsification.
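The idea of carrying branching options forward, rather than committing to a single best reading of ambiguous input, can be illustrated with a minimal sketch. Everything here (the toy interpretations, scores, and helper names) is an illustrative assumption, not a method from the article: the wrapper keeps the top-k candidate interpretations alive and rescores them as later evidence arrives.

```python
# Illustrative sketch only: keep several interpretations of an ambiguous
# input alive instead of picking the argmax, then fold in later evidence.
# All names and the toy data below are assumptions for demonstration.

def keep_branches(candidates, scores, k=3):
    """Return the k highest-scoring interpretations instead of just one."""
    ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

def rescore(branches, update):
    """Fold in new evidence: `update` maps interpretation -> score delta."""
    return sorted(
        ((c, s + update.get(c, 0.0)) for c, s in branches),
        key=lambda p: p[1],
        reverse=True,
    )

# An ambiguous word with three plausible readings, held simultaneously.
branches = keep_branches(
    ["bank: riverside", "bank: finance", "bank: tilt"],
    [0.40, 0.38, 0.22],
)
# A later sentence mentions "loan", boosting the finance reading.
branches = rescore(branches, {"bank: finance": 0.3})
best, _ = branches[0]
```

The point of the sketch is that no branch is discarded early; the initially second-ranked reading wins once downstream context arrives, which a commit-to-the-argmax pipeline could never recover.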
When translated onto AI, this Popperian reframe can invert data’s function from a mass-generator of trivially novel ideas into a mass-destroyer of anything except wildly unprecedented ones. Rather than smudging together billions of existing priors into an endless déjà vu of the mildly new, tomorrow’s antifragile computers could trawl the world’s ever-growing flood of human creations to identify today’s unappreciated van Goghs.

And the latter destroys our independence and renders us passive to a secretive, bean-counting apparatus that reduxes the USSR’s Central Statistics Administration. We can troubleshoot this dystopian union by regearing the collaboration between AI and its human users, starting with three instant fixes.

First, equip AI to identify when it lacks the data required for its computations. That limit can’t be identified in real time by AI’s human users. If the user doesn’t see a good choice on the list, they can redirect the AI or take manual control, maximizing the operational ranges of both computer logic and human initiative.

Third, decentralize AI by modeling it after the human brain. And by enabling AI to view life through multiple epistemologies, decentralization also invests AI-human partnerships with greater antifragility: rather than concentrating monomaniacally on its own internal optimization strategies, AI can look outward to learn from anthropological cues.
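The first two fixes above can be sketched together: a wrapper that reports when it lacks enough evidence to decide at all, and otherwise hands the human a short ranked list of options to accept, redirect, or override. The threshold, function name, and toy probability tables are assumptions made for illustration, not anything specified by the authors.

```python
# Illustrative sketch: an AI advisor that flags its own data gaps and
# otherwise defers the final choice to a human. The evidence floor and
# the toy option tables below are assumed values for demonstration.

def advise(probs, min_evidence=0.2, top_n=3):
    """probs: dict mapping option -> estimated probability.

    Returns ("insufficient-data", []) when no option clears the evidence
    floor, else ("options", ranked_list) for the human to choose from."""
    ranked = sorted(probs.items(), key=lambda p: p[1], reverse=True)
    if not ranked or ranked[0][1] < min_evidence:
        return "insufficient-data", []
    return "options", ranked[:top_n]

# Enough evidence: the human sees a ranked shortlist and stays in control.
status, options = advise({"route A": 0.5, "route B": 0.3, "route C": 0.2})

# Too little evidence: the AI identifies its own blind spot instead of
# guessing, which is the limit a human user could not detect in real time.
status2, _ = advise({"route A": 0.1, "route B": 0.1})
```

The design choice here is that the machine never silently commits: it either surrenders the decision with an explicit "insufficient data" signal or surfaces a list the user can reject, matching the redirect-or-take-manual-control loop described above.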
