Bot or not? | Part 2

By , 6 February 2017 at 11:00
This is part two; the original part one was published here last week.

Alternatives to the Turing Test: the right questions to ask to determine if it's a bot or not

Now, back to that Turing Test, which has been suggested to me countless times on this journey. Technically, it is a test of whether a computer is capable of thinking like a human (the ultimate distinguisher between artificial intelligence and not), but it can just as well be applied to distinguishing human from machine.

To summarize the concept, the Turing Test has one human asking the same questions of two unseen respondents: one human, one computer. The test is repeated many times. If the questioner can do no better than chance, correctly identifying human versus computer only about half the time, then the computer is regarded as "just as human" as the actual human respondent and is said to have achieved artificial intelligence.
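To make that pass criterion concrete, here is a minimal sketch of the scoring logic (my illustration, not anything from the test's formal definition): a questioner who genuinely cannot tell the two respondents apart is modelled as a random guesser, and their accuracy settles around 0.5.

```python
import random

def imitation_game_accuracy(trials=1000):
    """Score a questioner over many rounds of guessing which respondent is human.

    The guess here is random, modelling a questioner who cannot distinguish
    the computer from the human at all.
    """
    correct = 0
    for _ in range(trials):
        truth = random.choice(["human", "computer"])
        guess = random.choice(["human", "computer"])
        correct += guess == truth
    return correct / trials

# An accuracy near 0.5 is chance level: the machine is "passing" the test.
print(f"Questioner accuracy: {imitation_game_accuracy():.2f}")
```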


Of course, it's not perfect, and it's been pointed out that for something like search queries, Google's algorithm would far outperform the human. So I looked to my tech friends to come up with our own Turing Test for the age of AI.

Bruno Pedro, founder of Hitch, a developer community tool that includes a chatbot, suggested pursuing "stateful conversations." That means coming back after a few days and bringing up things you talked about before, something a human would remember but only the most sophisticated AI would be able to put into context.
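If you wanted to script that kind of memory probe rather than run it by hand, it could look something like the minimal Python sketch below. The `ask` callable and the planted facts are hypothetical stand-ins of my own, not anything from Hitch or from Bruno's suggestion.

```python
from datetime import date

# Details dropped into earlier conversations, with the date they were mentioned.
# Purely illustrative; in practice they would be whatever you actually discussed.
planted_facts = {
    "my dog's name": ("Biscuit", date(2017, 2, 1)),
    "the conference I said I was attending": ("APIdays", date(2017, 2, 2)),
}

def probe_memory(ask, facts=planted_facts):
    """Ask the suspected bot about earlier details and count how many it recalls.

    `ask` is any callable that sends a question and returns the reply text,
    for example a thin wrapper around a chat window or messaging API.
    """
    recalled = 0
    for topic, (answer, mentioned_on) in facts.items():
        reply = ask(f"Do you remember {topic}? We talked about it on {mentioned_on:%d %B}.")
        if answer.lower() in reply.lower():
            recalled += 1
    return recalled
```

A human will typically recall most of these; a stateless bot will recall none.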


And while a chatbot should be able to regurgitate a company's basic contact details, Bruno also suggested asking about getting in touch via alternative communication channels, which would blow most bots' minds.

Launch Any tech consultant James Higginbotham said I should just ask if it's a robot, "because bots never lie," though he admitted that might not be true. It's now something I do every time I'm on a website with a tech-support pop-up.

In this case, my suspected bot was actually following this series of tweets and tweeted to me "I am a human," clarifying that his other Twitter alter ego was indeed a bot. But then he/it agreed with me that a bot could lie and say it's a human. And just because it was likely a human on Twitter doesn't mean the Slack community member was one too.

SendGrid's Matt Bernier suggested asking the suspected bot about Asimov's laws of robotics, then asking how those laws apply if humans can't be trusted to protect themselves.

This is a good one, but in a situation where you don't want to risk offending a real human customer, it may be a bit too personal a question, or frankly too terrifying.

API Handyman Arnaud Lauret also recommended just speaking in puns as a replacement for the Turing Test and seeing how it goes.

So was it a bot or not?

First, none of my tech friends were convinced that it would be difficult to work around Slack's bot policy.

Arnaud said, "It could be a custom bot that interacts with Slack like a human being; a program could interact with a browser running Slack, for example." He went on to point out that "A Slackbot has to be registered to work and you could see it in the team configuration," which is something we haven't seen.
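Arnaud's registration point is also something you can check for yourself if you have access to the workspace: user records returned by Slack's users.list Web API method carry an is_bot flag. Below is a minimal sketch, assuming the slack_sdk Python package and a token with the users:read scope (the token value is a placeholder). A "custom bot" puppeteering a browser session would not appear in this list at all.

```python
from slack_sdk import WebClient

# Placeholder token; substitute a real workspace token with the users:read scope.
client = WebClient(token="xoxb-your-token-here")

def registered_bot_users():
    """Return the usernames of every registered bot user in the workspace."""
    response = client.users_list()
    return [member["name"] for member in response["members"] if member.get("is_bot")]

print(registered_bot_users())
```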


So, in the end, things got rapidly weirder with this human-or-bot. Suddenly, he/it left most of the channels. Around the same time, he/it had written a post for our blog, one I had gone through and taken the time to heavily edit, offering feedback to make it more specific with personal examples. Then he/it pulled out of that too, with no explanation why.

So, while I don't have a conclusion, I have a theory: he/it is both human and bot. One thing I didn't mention before is that just about every line he/it posted in public channels was marked "edited" (as Slack always indicates). All of these pieces come together to make me believe we were part of an experiment. My theory is that a Slackbot was doing the commenting, while a human behind the scenes curated both the Slack and blog content to try to make it more accurate.
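Incidentally, that "edited" marker is visible to code as well: messages returned by Slack's conversations.history method include an edited field once they have been modified. A rough sketch, again assuming slack_sdk and a placeholder token, might look like this:

```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-token-here")  # placeholder token

def edited_message_ratio(channel_id, user_id):
    """Fraction of a user's recent messages in a channel that were later edited."""
    response = client.conversations_history(channel=channel_id, limit=200)
    messages = [m for m in response["messages"] if m.get("user") == user_id]
    if not messages:
        return 0.0
    return sum("edited" in m for m in messages) / len(messages)
```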


But we may never know for sure, because that bottish human has gone MIA.
