
Automation & philosophy

John Ohno
2 min read · Oct 10, 2018

Automation tools don’t obviate abstract discussions about how best to think about decisions (like ‘should translations be precise or should they be accurate?’ or ‘what is the good?’); they make those discussions more important.

AI is a force-multiplier for philosophical positions. The programmer encodes their understanding of the world (or a program extracts a statistical model of the worldview of the guy who decided what goes in the data set), and the running program then mass-produces the decisions made inevitable by that world-model. Because large-scale automated decisions are hard to walk back, it’s vital to examine them critically before any automation occurs.

The paperclip-maximizer model gets a lot of flak because making paperclips is an obviously stupid goal. But if you replace ‘paperclips are good’ with ‘communication is good’, you get Twitter refusing to ban Nazis. If you give it ‘money is good’, you get global neoliberal capitalism. People are really good at making generalizations, at substituting their actual goals with nearly-unrelated but easily-measurable proxy goals, but they’re not great at noticing when their proxy goals have diverged from their real goals, particularly when their measurements are changing rapidly. (Just look at Jim Jones’ trajectory: he allowed himself to become paranoid and myopic, and the endgame was mass suicide, something that was neither inevitable nor anybody’s desired result.)
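Here is a minimal sketch of that proxy-goal divergence, sometimes called Goodhart’s law. The proxy and goal functions below are invented for illustration, not anything from the systems named above:

```python
# Toy model of a proxy goal diverging from a real goal (Goodhart's law).
# All functions and numbers here are hypothetical.

def proxy_metric(volume: float) -> float:
    """The easily-measured stand-in: more volume always scores higher."""
    return volume

def true_goal(volume: float) -> float:
    """The actual goal: value rises with volume at first, then falls as
    noise and abuse crowd out signal. Peaks at volume = 5."""
    return volume - 0.1 * volume ** 2

# Greedy hill-climbing on the proxy alone, the way an automated system
# tuned to a single metric behaves.
volume = 0.0
for step in range(10):
    volume += 1.0  # the proxy always rewards another step up
    print(f"step {step}: proxy={proxy_metric(volume):5.1f}  "
          f"true goal={true_goal(volume):5.2f}")
```

The proxy improves at every step, but the real goal peaks at step 4 and then declines: the measurement keeps getting better long after the thing being measured has started getting worse.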

As a human being, you can make a mistake, recognize it in a vague and intuitive way, and change your…



Written by John Ohno

Resident hypertext crank. Author of Big and Small Computing: Trajectories for the Future of Software. http://www.lord-enki.net
