The alignment problem

Beth Singler, writing for Aeon:

My stomach sank the moment the young man stood up. I’d observed him from afar during the coffee breaks, and I knew the word ‘Theologian’ was scrawled on the delegate badge pinned to his lapel, as if he’d been a last-minute addition to the conference. He cleared his throat and asked the panel on stage how they’d solve the problem of selecting which moral codes we ought to program into artificially intelligent machines (AI). ‘For example, masturbation is against my religious beliefs,’ he said. ‘So I wonder how we’d go about choosing which of our morals are important?’

The audience of philosophers, technologists, ‘transhumanists’ and AI fans erupted into laughter. Many of them were well-acquainted with the so-called ‘alignment problem’, the knotty philosophical question of how we should bring the goals and objectives of our AI creations into harmony with human values. But the notion that religion might have something to add to the debate seemed risible to them. ‘Obviously we don’t want the AI to be a terrorist,’ a panellist later remarked. Whatever we get our AI to align with, it should be ‘nothing religious’.

