The AI Supply Chain Runs on Ignorance
It is no secret that improving the performance of artificial intelligence (AI) systems requires feeding massive amounts of data into their algorithms so they can evolve through "learning." However, experts warn that AI firms are often far from transparent about how they gather and use that data, and that obfuscation is a characteristic feature of the AI supply chain.
So what is happening? Many AI products gather data from users who consent by signing terms-of-service agreements they don't fully understand because the documents are written in dense legalese. To ensure that the collected data is properly labeled for training, companies hire workers in low-wage countries such as India to perform this task, again without telling them what the data will be used for. The AI systems built this way may then be sold to law enforcement agencies or used in military applications, which, according to Jake Laperruque of the Project on Government Oversight, means that users are "effectively being conscripted to help build military and law-enforcement weapons and surveillance systems." Moreover, because many AI firms sell the data they collect, even they don't always know how it will ultimately be used.
Experts such as Liz O'Sullivan of the Surveillance Technology Oversight Project (STOP) are calling for government regulation to address these issues and ensure that companies "tell us what models they're training and whether they intend to sell our data to anybody." In the absence of such regulation, she warns, "we're all just going to continue to be used as food for these algorithms whether we're aware of it or not."
Read more: The AI Supply Chain Runs on Ignorance