Software manages AI transparency and business risks


Too many AI systems are like a black box – it is impossible to know how the algorithm arrived at its decisions | Photo source: CognitiveScale

A new product adds transparency by allowing users to find out what criteria are being used by AI algorithms to make decisions

Spotted: Both software and communications platforms can be offered as services. Now, however, a Texas-based company is developing trust as a service. The company, CognitiveScale, has recently announced the release of Cortex Certifai, which it describes as the first AI vulnerability detection and risk management product. The software is designed to help customers manage AI transparency and business risks.

Over time, machine learning algorithms ‘learn’ to interpret new data. This creates risk, because it is often not clear why an algorithm is making particular decisions. Cortex Certifai uses AI to detect and manage risk in automated decision systems, and provides answers to questions such as: Why did the AI system make the prediction it did? Has the model been unfair to a particular group? How easily can the model be fooled?
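To make the fairness question concrete, the sketch below computes a simple demographic parity gap – the difference in favourable-decision rates between groups. This is a standard illustrative metric, not CognitiveScale's actual method; all names and data here are hypothetical.

```python
def demographic_parity_gap(decisions, groups):
    """Return the largest difference in favourable-decision rate between groups.

    decisions: list of 0/1 outcomes (1 = favourable decision, e.g. loan approved)
    groups: list of group labels, aligned element-by-element with decisions
    """
    counts = {}
    for outcome, group in zip(decisions, groups):
        total, favourable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favourable + outcome)
    rates = {g: fav / total for g, (total, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())


# Hypothetical example: loan decisions for two demographic groups.
# Group A is approved 3 times out of 4 (0.75); group B once out of 4 (0.25).
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# prints "Demographic parity gap: 0.50"
```

A gap of zero would mean both groups receive favourable decisions at the same rate; a large gap is one signal that a model may be treating a group unfairly.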

CognitiveScale has tested Cortex Certifai with systems used in banking, insurance and healthcare. The product was developed to help answer growing calls for greater transparency and control of automated decision-making systems. It includes an AI risk metric called the AI Trust Index, developed in collaboration with AI experts, academic institutions and businesses. The index measures fairness, robustness, data rights and compliance in AI decision-making.

Dave Schubmehl, Research Director of Cognitive/AI Systems at IDC, underscored the role of trust in AI systems: “The need for explainability and detection of potential bias in predictive and prescriptive AI models will be a critical requirement as organisations, governments and consumers demand more accountability from AI-based decisions.”

AI is growing rapidly and is now responsible for decisions in many areas that were once the domain of humans only. At Springwise, we have seen this growth in areas as diverse as ship navigation and clothing design. But AI cannot continue to grow if it lacks accountability.

Explore More: Computing & Tech Innovations

12th February 2020


