AI development has major security, privacy and ethical blind spots
A recent survey by O’Reilly sheds light on common risk blind spots among artificial intelligence (AI) and machine learning (ML) developers. The most glaring oversight is security: nearly three in four respondents (73%) acknowledged that they do not test their models for security vulnerabilities during development.
Other commonly neglected areas are privacy, which 65% of respondents ignore, and fairness, bias, or ethical issues, which 59% ignore. Moreover, 45% of developers take no steps to guard against unexpected outcomes or predictions, and 16% said they perform no risk assessment at all on the models they are developing.