The hope was to show that Alan Turing’s ideas could be implemented as scientific experiments.
My research has shown that there is something that can be measured and observed through practical Turing tests. The big surprise from the results and findings, however, is that some human judges struggle to distinguish between human and machine dialogue when given only text-based answers to questions and no other input (sight, hearing, touch, etc.).
So my research can now include a focus on improving deception detection, especially in Internet transactions, where you are dealing in text only and have to place your trust in what you’re reading. Think of the number of scam emails we receive and the amount of successful hacking – some of it stems from innocently clicking links in emails that lead to malicious websites, which take control of your computer, steal your personal information and exploit your financial details. As artificial intelligences are deployed more and more across the Internet, including as digital assistants for banks and retail companies, there is a real risk that some of them are bad programmes.
So an unintended consequence of Turing test research was learning how gullible some humans are. Now part of my research mission is to help people avoid becoming victims of digital deception, by staging public experiments and giving public talks that get as many people as possible involved in the discussion on artificial intelligence and social robots.
That’s a great question, in my opinion. The nature of research is that what you start off trying to understand may change over time. In this way, the most interesting things can be studied, because we aren’t forced to keep researching something that may not return useful results.