Policymakers in the EU are taking the final steps toward the world's first comprehensive A.I. regulation. In the years to come, the A.I. Act will shape how artificial intelligence is built, deployed, and regulated globally. The EU established itself as a trendsetter for tech regulation with the GDPR, which spread around the world in what became known as the "Brussels Effect." Given this track record, policymakers around the world are taking note, as are open-source software developers.

The A.I. Act presents an opportunity to get democratic oversight of A.I. right by encouraging responsible A.I. innovation while mitigating the technology's risks. The need is clear: without responsible development, deployment, and use, A.I. systems can cause real harm, ranging from biased algorithmic decisions that affect access to important life opportunities to the proliferation of misinformation.

The Act takes a risk-based approach to regulating A.I. across all sectors: before systems can be sold or deployed, they must meet a series of risk-management, data, transparency, documentation, oversight, and quality requirements. These requirements vary depending on the system and use case. Uses of A.I. in sensitive areas, such as operating critical infrastructure or determining access to life opportunities in education and employment, are deemed high-risk. Generative A.I., which entered mainstream culture with the launch of ChatGPT, is likely to be subject to regulation too: the EU Parliament has expressly included new provisions to address it.
Full commentary: The EU A.I. Act can get democratic control of artificial intelligence right, but only if open-source developers get a seat at the table.