Public Description of the Project RaiDOT
According to a UK government report (2019), approximately 50% of SMEs in the UK were using AI technology. The government has called for greater transparency in AI systems to ensure they are used ethically and fairly. We also noted that AI systems can be vulnerable to cyberattacks, which could compromise sensitive data or cause the systems to malfunction. The project therefore needs to address issues of privacy, autonomy, and the role of technology in our lives. PwC recently estimated that up to 30% of UK jobs could be automated by the mid-2030s; therefore, upskilling and reskilling are essential.
This project aims to develop an intelligent tool to evaluate the operational assurance of AI systems. We will develop RaiDOT (Responsible AI, Insightful Data, Operational Trust) based on the operational risks faced by AI-enabled SMEs. We plan to utilise government AI assurance and risk management frameworks, such as the recently released NIST AI RMF 1.0, to ensure best practice.
Working with 20 regional SMEs, we will focus on developing a scalable system whose first phase will be a web-based evaluation tool to identify, analyse and evaluate the level of assurance based on the risk categories set out in the EU AI Act. To date, no intelligent tool exists to conduct operational assurance of AI systems for SMEs.