Businesses have been using artificial intelligence for years, and while machine learning (ML) models have often been taken from open-source repositories and built into business-specific systems, model provenance and assurance have not always been documented or written into company policy. The standards and guidance now being developed are rightly aiming to be risk-focused and flexible, but it is their implementation in the businesses that create and consume AI-enabled products and services that will make the difference. Many practical questions remain open: assurance (what does good look like? who is qualified to perform assessments?), liability (the software supply chain is complex), and even whether the continued development of this new technology is responsible.

When considering third-party risk, it's worth examining which base models an ML model was built on and then tracing the journey of any product that uses it. For example, how many ML models have been derived from previous versions? If a vulnerable ML model was used from the beginning, then it's fair to assume the vulnerability is present in every subsequent version.
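The point about vulnerabilities propagating through model versions can be sketched as a simple lineage walk. This is a minimal illustration only; the registry structure, model names, and vulnerability list below are all hypothetical assumptions, not any real tooling.

```python
# Hypothetical model registry: each entry maps a model to the parent
# it was fine-tuned or derived from (None / absent = no known parent).
LINEAGE = {
    "prod-v3": "prod-v2",
    "prod-v2": "prod-v1",
    "prod-v1": "open-source-base",
}

# Models with a known flaw (e.g. from a vulnerability advisory).
VULNERABLE = {"open-source-base"}

def inherits_vulnerability(model: str) -> bool:
    """Walk the ancestry chain; a flaw in any ancestor taints the model."""
    while model is not None:
        if model in VULNERABLE:
            return True
        model = LINEAGE.get(model)
    return False

print(inherits_vulnerability("prod-v3"))  # True: the original base model is flagged
```

Without recorded provenance, this kind of check is impossible, which is the practical argument for documenting model lineage as a matter of policy.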
Full story: Interpreting regulation and implementing good practice with artificial intelligence.