• Question: What have you found out about AI and how it works?

    Asked by anon-200670 to Oliver on 5 Mar 2019.

      Oliver Gordon answered on 5 Mar 2019:


      I’ve found out a few things, personally.

      1) It’s not particularly “intelligent”. Think less of a grown adult, and more of a smart baby. It can occasionally do things that are quite subtle, but it is very rarely refined, it struggles with all but the simplest tasks, and it has absolutely no common sense. You very much have to “guide” its learning – for example, if you work at the size of atoms, all your numbers are about 0.0000000001 in scale. A machine learning algorithm will get upset and learn NOTHING if you give it numbers like that! If you change your numbers and make them 10 million times bigger, it’ll suddenly go “oh, OK, I get it now” and start to learn (there’s a small code sketch of this rescaling trick at the end of this answer). A lot of my job is figuring out things like this. As a scientific community we don’t know a lot about how best to train neural networks, so we kind of have to try everything until it works (compared to teaching children – there’s generally a pretty tried and tested method for teaching you – it’s called schools, teachers and lesson plans!)

      2) It’s spooky at times. One of the big things in machine learning at the moment is trying to detect breast cancer. On the whole, this can now be done with higher accuracy than trained doctors manage. So it performs better than any of the individual people who gave it the examples that taught it what to do! In our group, we’re finding roughly human-like performance on the tasks we use it for.

      3) It’s naughty, and tries to “cheat” you. Going back to the cancer example, one group thought they had got it to nearly 100% accuracy (medical doctors are only about 80% in agreement with each other). This could have been groundbreaking. Unfortunately, what they didn’t realise was that they had labelled all of their cancer-containing cells with a little dot. So the machine stopped looking for cancer, and just learnt to look for that dot!

      4) There’s a reason why Google, Facebook and so on want your personal data so much. AI learns quite like a lot of people do – it learns by example. You can’t just tell it some facts and expect it to understand them at a deep level. You have to give it hundreds of thousands of examples, if not more. We have about 300GB of data in my group (about 2 million websites’ worth of information). But we are FORCED to use it to “invent” more data, because it’s just not enough (there’s a rough sketch of that data-inventing trick at the end of this answer as well). And all of this data needs to be labelled by hand, too. Facebook can get that data from its users, whereas we rely on ourselves and people who like science to help us out. If you want to get involved, there’s a super website called https://www.zooniverse.org/projects – the data you label gets used in actual science!
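
      Here is a minimal Python sketch of the rescaling trick from point 1. The atom-sized numbers are invented for illustration, and the shift-and-rescale step shown is just one common way of doing it, not necessarily exactly what our own code does.

      ```python
      import numpy as np

      # Invented example: measurements at the size of atoms, all around 0.0000000001.
      # Most neural networks learn very badly from numbers this tiny.
      raw = np.array([1.2e-10, 0.9e-10, 1.5e-10, 1.1e-10, 0.8e-10])

      # One common fix: shift and rescale so the values sit around 0 with a spread
      # of about 1. The information is identical, but learning becomes much easier.
      rescaled = (raw - raw.mean()) / raw.std()

      print(rescaled)  # roughly [ 0.41 -0.82  1.63  0.   -1.22]
      ```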
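
      And here is a rough sketch of what “inventing” more data (data augmentation) can look like for images, as mentioned in point 4. The tiny 3x3 “image” and the particular flips and rotations are my own illustrative choices, not a description of our group’s actual pipeline.

      ```python
      import numpy as np

      # Invented example: one tiny 3x3 "image" standing in for a real labelled image.
      image = np.array([[0, 1, 0],
                        [1, 1, 1],
                        [0, 0, 1]])

      # "Inventing" more data: show the network transformed copies of what we
      # already have. Each copy is a new, equally valid training example.
      augmented = [
          image,             # the original
          np.fliplr(image),  # mirrored left-to-right
          np.flipud(image),  # mirrored top-to-bottom
          np.rot90(image),   # rotated by 90 degrees
      ]

      print(f"1 labelled example became {len(augmented)} training examples")
      ```

      Each transformed copy is still a perfectly valid labelled example, so the network gets more variety to learn from without anyone having to label anything new.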
