With chatbots and robots having stronger artificial intelligence every day — and with the sure-to-be epic Blade Runner 2049 out later this year — one question is going to continue to plague us: How can you tell if you’re talking to a bot?
There’s no doubt that Westworld and Black Mirror leave us wondering if we could really tell the difference. And with Alexa rapidly outsmarting Siri, I’d argue that this isn’t a question for a couple decades from now, but one we should already be asking.
This piece takes you through my recent obsession with answering this question, as well as asking further: Are chatbots and robots ethically obligated to identify themselves as such?
First, some background. I run an online Slack community: a large group of people who are passionate about being happy at work, many of them agile software developers. One member happens to be an agile-minded chatbot developer. Or perhaps he is an agile-minded chatbot? Or perhaps both? As multiple community members observed, developers and agilists often talk in platitudes that can come across as robotic and formulaic.
As API Architect David Biesack pointed out to me, “In most Slack channels, the probability of telling people from bots approaches zero because so many people elect botspeak style.”
Around the same time, a little over a month ago, I attended and spoke at a conference focused on chatbots. And I realized that, at a two-day event almost exclusively dedicated to the topic, no one ever discussed how to recognize whether you are talking to a chatbot.
And the better artificial intelligence gets — and it’s getting pretty extraordinary already — the more confused we will be.
Thus began my journey of robots versus humans.
First, I asked the Twittersphere: How do we know we are talking to chatbots? We think we have one in our Slack.
Slack promptly replied: “Bots added to Slack should have a little tag marked ‘BOT’ to the right of their names. Is that not showing on your team?”
I responded that there was no BOT tag, and pointed out that devs have found their way around harder “shoulds.” To which @SlackHQ cheekily replied, “If only there was a test…” linking to Alan Turing’s 67-year-old test. And off I went searching for more answers.
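For what it’s worth, that BOT tag isn’t just cosmetic: Slack’s Web API exposes the same metadata programmatically. The `users.info` method returns a user object with an `is_bot` boolean, and Slackbot itself is the special user `USLACKBOT`. A minimal sketch of checking that flag (the sample payloads below are invented for illustration, shaped like `users.info` output):

```python
def is_slack_bot(user):
    """Return True if a Slack user object describes a bot.

    Slack's users.info Web API method returns user objects carrying
    an `is_bot` boolean; Slackbot is the special user USLACKBOT and
    is not flagged, so it gets its own check.
    """
    return user.get("is_bot", False) or user.get("id") == "USLACKBOT"


# Sample user objects (invented for illustration):
human = {"id": "U123ABC", "name": "jennifer", "is_bot": False}
bot = {"id": "B456DEF", "name": "agilebot", "is_bot": True}

print(is_slack_bot(human))  # False
print(is_slack_bot(bot))    # True
```

Of course, this only works when the bot’s builder registers it honestly through the API. A bot driven from a human user account carries `is_bot: False`, which is exactly the loophole I was worried about.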
Should robots be guided by ethics and rules?
As Slack’s community manager pointed out, there are tests designed to identify robots.
Back in 1942, when autonomous robots were more fantasy than reality, sci-fi forerunner Isaac Asimov established the Three Laws of Robotics, which are still widely cited today:
- A robot must not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given by human beings except when such orders would conflict with Law #1.
- A robot must protect its own existence except when doing so would conflict with Law #1 or #2.
If these rules were enforced, we wouldn’t have to worry about the impending doom of robots subjugating the human race.
But is this enough? When we contact technical support online, should the agent identify itself as human or chatbot?
If you add in Lyuben Dilov’s Fourth Law of Robotics — “A robot must establish its identity as a robot in all cases.” — then a lot of tech support chatbots are already breaking this rule.
But then you could invoke Nikola Kesarovski’s Fifth Law — “A robot must know it is a robot” — and wonder whether it’s the fault of the humans who trained that chatbot to believe it’s a real specialist.
Of course a fair question is, why are we allowing fiction writers from previous generations to establish our rules of order? Shouldn’t a contemporary genius like Stephen Hawking be guiding us?
Perhaps it’s more a question of politeness. Is it rude not to tell? Would we be less frustrated with tech support if we knew if it was a bot? Or perhaps that’s even more irritating?
Explore alternatives to the Turing test and hear Jennifer’s final results on part two of this piece coming next week!