In mid-October, MIT announced a $1bn investment in a new computing college devoted to AI[1]. It's the latest in a series of headlines that will have many chief executives wondering whether they should be investing in AI and where it might make a difference for them. Plenty of companies will have gone some way down the path already, but here are five pointers for anyone who wants to get more from AI and doesn't know where to start.
1. Beware the snake oil
Artificial intelligence has become so ubiquitous that, as I've written in previous posts, lots of people are unclear about what it really means. That's a gift to some of the shadier vendors, who can rebrand their software as AI, bump up the price tag a bit and sell it to you as if it's a shiny new toy.
Now, I'm sure your vendor wouldn't do that, but when evaluating an AI product it is important to ask: does it learn and improve? True AI gets better as it crunches more data: it makes better decisions and provides better insights. If it doesn't learn, it might still be useful to you, but it's not really AI. Ask to see evidence that it learns.
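To make that last point concrete, here is a rough sketch of what "evidence that it learns" can look like. It uses plain Python with scikit-learn and a bundled sample dataset (a hypothetical stand-in for your own data and your vendor's model, not a description of any particular product) and simply checks whether accuracy on held-out data improves as the model is trained on more examples.

```python
# A minimal sketch: does the model actually improve as it sees more data?
# Hypothetical example using scikit-learn's bundled digits dataset; swap in
# your own data and your vendor's model to run the same kind of check.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1] so the solver converges cleanly

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger slices of the data and record held-out accuracy.
for fraction in (0.1, 0.25, 0.5, 1.0):
    n = int(len(X_train) * fraction)
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:4d} training examples -> accuracy {model.score(X_test, y_test):.3f}")
```

If a vendor can't show you something along these lines on your own data, that's worth probing.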
2. Is it beneficial?
The AI buzz is exciting. We're an AI company and it's great that people are interested in this area, which we've been fascinated by for years now. However, just because it's trendy and exciting doesn't mean you actually need it. It does us no favours to sell people products that they don't really need – it will always backfire in the long term – and it’s annoying that some companies do it.
When you're shown the fancy new tool, ask yourself what the benefit will be. Will it save money or make money? Will it free your staff for more important tasks? Will it give your customers a better experience? I know that cost/benefit analysis is obvious to most chief execs, but I've seen plenty of people get carried away by the desire for the latest tech.
3. Is it explainable?
AI is often loosely described as anything that looks a bit magical - and who doesn't love a good magic trick? Any magician worth their salt will never tell you how the trick is done; they get sawn in half if they do, I understand. The same is not true of an AI vendor: they should be sawn in half if they won't tell you how it's done.
A recent survey of 5,000 executives found that 60 per cent of them are worried about being able to explain how AI uses data to make decisions[2], because of the regulatory and compliance risks of not knowing.
This is an entirely reasonable concern. If you are using AI for important decisions and a regulator launches an investigation or a customer requests their data under GDPR, then you need access to information that explains what happened. "Our black box told us to do it" is unlikely to be a popular excuse. Make sure your vendor tells you how the trick is done.
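What does "telling you how the trick is done" look like in practice? As a simplified, hypothetical sketch (made-up features and data, and a deliberately simple linear model rather than any vendor's actual system), the example below shows the kind of per-decision breakdown you should be able to ask for: which inputs pushed a particular decision, and by how much.

```python
# A simplified sketch of an "explainable" decision: for one prediction, show
# which inputs pushed the score up or down. Hypothetical features and data;
# a real system would use proper explanation tooling and your own records.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_as_customer", "late_payments", "monthly_spend"]

# Tiny made-up training set: 1 = approve, 0 = decline.
X = np.array([[5, 0, 300], [1, 4, 80], [8, 1, 450], [2, 3, 120],
              [6, 0, 500], [1, 5, 60], [7, 2, 400], [3, 4, 90]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one decision: each feature's term in the model's decision score
# (its contribution to the log-odds, ignoring the intercept).
customer = np.array([4, 3, 200])
contributions = model.coef_[0] * customer
decision = model.predict(customer.reshape(1, -1))[0]

print("decision:", "approve" if decision == 1 else "decline")
for name, value, contrib in zip(feature_names, customer, contributions):
    print(f"  {name}={value}: contribution {contrib:+.2f}")
```

Real systems use more sophisticated explanation tooling, but the principle is the same: for any decision, you should be able to point to the data that drove it.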
4. Make sure you own the data
It's your data that is being fed into the AI, but the algorithm and the code belong to the vendor. The AI uses your data to get smarter and, if you aren't careful, the vendor will then be able to sell the AI to your rivals, who get a smarter tool that you paid to train. When you sign up, make sure that the training data is yours and stays yours throughout the process.
5. If in doubt, call a doctor
As that MIT investment shows, AI is at the cutting edge of computing right now, so it's understandable that a lot of people don't fully grasp how it works. That includes customers and, sometimes, even people who work for vendors. One way to start with a new technology like this is to test it in a small area of your business and evaluate its success before you invest further.
However, because it's hard to see inside the black box, even a trial might not tell you what you need to know. One option, in that case, is to speak to academics. Has the vendor allowed academics to independently evaluate their tech? If not, why not? That could be a red flag. Even if independent studies are not available, you should still be able to find academics who can help you understand the questions that you should ask.
There is a lot to consider when investing in AI – perhaps more than with other IT tools. Hopefully these pointers have highlighted some of the questions that will make the process clearer.
_____________________________________________________________
[1] https://www.theverge.com/2018/10/15/17978056/mit-college-of-computing-ai-interdisciplinary-research
[2] https://blogs.wsj.com/cio/2018/09/26/tech-giants-launch-new-ai-tools-as-worries-mount-about-explainability/