The competitive nature of AI development has incentivized companies to move quickly and shroud their efforts in secrecy as AI grows more advanced. While playing things close to the vest is understandable (after all, each company wants to build AI solutions unique to it), it also creates major information gaps across the industry. If AI models continue to go unchecked, the real-world consequences could be tremendous.

To avoid worst-case scenarios such as widespread ethical breakdowns or human bias in AI, the industry should embrace a few strategies that ensure a more transparent, pragmatic approach to AI development moving forward. With such controls in place, the industry can shift the conversation away from AI's potential downfalls and toward the tremendous positive impact AI can have on the world.

Today, no structure or legislation exists that truly enforces information sharing around AI development. The resulting information gaps also raise major ethical concerns. Because industry practitioners, and even everyday people, do not know how companies develop their AI models or how they protect and use personal information along the way, serious ethical issues have arisen that the industry needs to confront as soon as possible.

Alarm bells have been sounded on a number of levels and occasions over a perceived lack of ethics in AI. Generative AI, for example, has sparked fears of racial bias, as current models have shown a tendency to replicate human biases. On a larger scale, industry leaders have warned that AI could become so powerful that worst-case scenarios carry a risk of human extinction.
Full opinion: Playing The Long Game: Making AI Development More Transparent.