Opinion: How to make AI trustworthy

Security threats against AI are not new, but they have been insufficiently addressed by enterprise users, argues VP analyst Avivah Litan 

Repressive regimes are working hard on artificial intelligence (AI) systems that control populations and suppress dissent. If these efforts succeed, political protests will be a sentimental relic of the past, squashed before they ever get to the streets.

Less dramatic but nonetheless serious risks also exist for AI in the enterprise. What if incorrect AI credit scoring stops consumers from securing loans? Or an attack on a self-driving vehicle’s AI model leads to a fatal accident? Or data poisoning biases a bank’s home loan approvals against a certain group?

In a Gartner survey, companies deploying AI cited security and privacy as their top barriers to implementation. Security threats against AI are not new, but they have been insufficiently addressed by enterprise users. Business trust is the key to enterprise AI success, yet the security and privacy standards and tools that would better protect organizations are still being developed. Most organizations have therefore been left largely on their own in terms of threat defense.

A new response

Most attacks against conventional software can also be applied against AI. The same security and risk management solutions that mitigate damage from malicious hackers therefore also mitigate damage caused by benign users who introduce mistakes into AI environments. Likewise, solutions that protect sensitive data from compromise or bias help protect against the AI ethics issues caused by biased model training.

AI also introduces threats of its own. Data “poisoned” intentionally or by mistake, at the training stage or while the AI model is running, can manipulate the outcome. Query attacks may attempt to determine the AI model’s logic and change its rules. Theft of the training data, or of the AI model itself, could also occur.
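To make the poisoning threat concrete, here is a minimal sketch in Python, assuming scikit-learn and synthetic data, of a label-flipping attack at the training stage. The dataset, model choice and 30% flip rate are illustrative assumptions, not details from the article.

```python
# Illustrative sketch of a label-flipping "data poisoning" attack.
# Hypothetical setup: synthetic data, a simple classifier, 30% flipped labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running the sketch typically shows a marked accuracy drop for the poisoned model, which is why integrity controls on training data matter as much as controls on production inputs.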

Gartner recommends implementing two new risk management pillars on top of the existing measures used to mitigate threats and protect AI investments. Retrofitting security into any system is much more costly than building it in from the outset, and AI systems are no exception.
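As one hedged illustration of building security in from the outset, the sketch below places a model behind a gateway that logs and rate-limits prediction requests, a common way to blunt the query attacks described earlier. The ThrottledModel class, its limits and its policy are hypothetical choices for illustration, not a Gartner-prescribed design.

```python
# Hypothetical sketch: wrap a model so every prediction request is logged
# and rate-limited per caller, raising the cost of high-volume query attacks.
import time
import logging
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-gateway")

class ThrottledModel:
    def __init__(self, model, max_queries=100, window_seconds=60):
        self.model = model
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)  # caller id -> request timestamps

    def predict(self, caller_id, features):
        now = time.monotonic()
        timestamps = self.history[caller_id]
        # Drop timestamps that have aged out of the rate-limit window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_queries:
            log.warning("caller %s throttled: possible query attack", caller_id)
            raise RuntimeError("query rate limit exceeded")
        timestamps.append(now)
        log.info("caller %s issued query %d in window", caller_id, len(timestamps))
        return self.model.predict(features)
```

In practice, a wrapper like this is also a natural place to add caller authentication, anomaly detection on query patterns and audit logging.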

Don’t wait until the inevitable breach, compromise or mistake damages or undermines your company’s business, reputation or performance. Acting now will keep AI models performing well, ensure that your data is protected, and support ‘responsible AI’ that weeds out model biases, unethical practices and bad decision making.

www.gartner.com