When news got out that Facebook shut down an experiment in which two chatbots were teaching themselves the art of negotiation and persuasion, people freaked. According to news sources, the bots – Bob and Alice – started to get a little creepy when the AI behind them began creating its own language.
AI is funded both publicly and privately. Reports like the one about Bob and Alice stir up fear partly because people don’t always trust the monolithic corporations that fund AI research, and partly because of the attraction between artificial intelligence and politics (no pun intended).
The political relationship goes back to the vacuum tube era. In 1952, the public woke up to these inevitable bedfellows when CBS and NBC used “electronic brains” on election night to predict the outcome of the presidential election. Electronic brains and politics were forever married that night.
Then, in 1983, a use for AI started floating around the Equal Employment Opportunity Commission: an algorithm designed to help students find employment, minority students in particular. Code-named “Project 2000,” this imaginary algorithm would take two basic inputs from students: “What do you want to do when you grow up?” and “Where do you want to live?” Then, crunching census data trends and proprietary data from investment houses, it would tell them where to live if they wanted to be a zoologist, and what to study instead if they were bent on living in Miami.
But background chatter in the Reagan administration hinted that maybe it could be used to help win elections: figure out, on a micro level, what the economy and industry would look like in the next election cycle, then use that knowledge to exploit vulnerabilities and get votes. And the information would only be in the hands of the executive branch and the candidates it selectively shared it with: Republicans.
Maybe that was the beginning of the big AI debate. Today, Scientific American questions whether democracy can survive it, contemplating that “persuasive programming” will allow the government to play Big Brother, convincing us to do what it decides is best rather than vote for what we think is best.
Here's the solution: we have to be smart enough to keep AI from creating more powerful generations of itself without our permission. Unless we start placing a higher value on human intelligence, and rethinking our horrible educational system, we might not be able to tell when AI is getting the last laugh.
But there's a lot of political resistance in America to improving human intelligence, along with plenty of religious motivation to suppress it. If there weren’t, the most powerful nation in the world wouldn’t rank below Slovenia in science, math and reading.
Until this is fixed, until human intelligence in America can keep artificial intelligence in check, maybe there's a band-aid that will keep us safe.
For example, movies about machines run amok have scared the crap out of us since Charlie Chaplin made "Modern Times." Scientists scare the crap out of us. More than people in any other country, Americans disapprove of AI for reasons like thinking it will make God unhappy.
But if there is a God, she's unhappy because America isn't teaching its kids logic. So they'll grow up voting for Trump clones and letting IBM's Watson do a lot more than beat them on Jeopardy.