The Future of AI Policy is Largely Unwritten
Congressman Will Hurd has emerged as one of America’s leading technologist legislators by tackling issues like quantum computing, cybersecurity, AI, and homeland security enabling technologies like smart sensors and drones. He was the first sitting member of Congress to attend the DEF CON security conference and the only member with a blend of experience that includes a computer science degree, a decade in the CIA’s clandestine service, and direct experience in the cybersecurity industry.
This week the Subcommittee on Information Technology released a report entitled “Rise of the Machines: Artificial Intelligence and Its Growing Impact on U.S. Policy,” which argues that the United States cannot maintain its global leadership in AI absent political leadership from Congress and the Executive Branch. The report is an urgent call to action for the U.S. to demonstrate global leadership on the issue.
OODA Loop was able to get some exclusive perspective from Congressman Hurd on some of the more forward-thinking aspects of the United States’ AI strategy.
OODA Loop – It seems that with many technologies, we engage in a rush to market that causes us to deploy new technology without thinking through the security implications. We’ve seen this with the Internet, IoT, etc. How do we ensure innovation in AI without introducing unacceptable risk or entrenching machine learning bias?
Congressman Hurd – One of the most important security issues around artificial intelligence is protecting the data used to train algorithms. We can use the best practices of good digital system hygiene to protect this type of training data, but protecting training data must be a priority from the very beginning of an algorithm’s design. I find it fascinating that in 1950, when Isaac Asimov wrote I, Robot, this issue of manipulating training data was a major point in his book. While protecting training data from destruction or manipulation is critical, the potential use of synthetic training data by AI practitioners will also create a need to focus on data quality and accuracy.
OODA Loop – We are already seeing some less technologically sophisticated cultures being influenced through bad fakes (e.g. Photoshopped images), as their audiences aren’t finely tuned to question images. With emerging deepfake technology, even the most sophisticated technologists will not be able to detect whether videos are real or fake. How do we establish standards or mechanisms for introducing trust into the social graph? Do you envision that this is a role for AI?
Congressman Hurd – Being able to determine whether something is real or not, or accurate or not, is becoming more difficult in an environment of extreme partisanship and global interconnectivity. When I was a kid I learned that it was bad to get into a car with a stranger (now we have to add, unless it’s an Uber or Lyft driver), so why do we think it is OK to share a social media post from someone we know nothing about? The issue of authenticity has been addressed for many other technologies, and the same concepts and principles that we have used to transmit secure documents could be adapted for images and video. While I’m unaware of current research or technology that exhibits the concepts I’ve outlined, I trust the ingenuity of our security research community to find a way. I’m sure my friends at DEF CON will one day soon have a presentation on how to spot and prevent deepfakes.
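The document-authentication concepts Congressman Hurd alludes to can be sketched in a few lines. The example below is purely illustrative, not a scheme he or the report proposes: it tags a media file’s raw bytes with a keyed hash so any alteration is detectable. A real provenance system for images and video would use public-key signatures and certificate chains rather than the shared secret assumed here; the key name and functions are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a publisher's signing key.
# Real systems would use asymmetric signatures (e.g. Ed25519) so anyone
# can verify without holding the key.
SECRET_KEY = b"publisher-signing-key"

def sign_media(data: bytes) -> str:
    """Produce an authenticity tag for raw image/video bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the media bytes match the tag published alongside them."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during comparison
    return hmac.compare_digest(expected, tag)

original = b"...raw video bytes..."
tag = sign_media(original)
print(verify_media(original, tag))             # untampered media verifies
print(verify_media(original + b"x", tag))      # any alteration is detected
```

The point of the sketch is only that integrity and authenticity checks long used for secure documents transfer directly to media bytes; the hard, unsolved parts are key distribution and getting platforms to display verification results to users.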
OODA Loop – What we’ve seen to date is the deployment of narrow AI, but if we encounter a situation where general AI is possible, is that a technology that needs to be regulated by the government? For example, we don’t allow an unregulated entity to build and operate a nuclear power plant. Do we need similar controls for advanced AI technology?
Congressman Hurd – The short answer is I don’t know, but I do know that the United States should be the leader in establishing the ethics around AI. I’m of the opinion that we will not get to general AI until we have fully realized the capabilities of quantum computing. So, while we are in this period of what I like to call “dumb AI,” I think we have the opportunity to debate this topic. The United States needs a national strategy on AI, and a key piece of it should be the ethics of AI.
For more insight into how Congressman Hurd and his colleagues are thinking about AI and advanced technology, we highly recommend you read the Subcommittee report.