Large language models like ChatGPT and Claude offer a wide range of beneficial applications, but they also pose significant risks that demand a coordinated effort among partner nations to forge a solid, integrated defense against the threat of malign information operations. These models can assist in generating creative story plots, crafting marketing campaigns and even creating personalized restaurant recommendations. Yet they often produce text that is confidently wrong, a tendency with profound implications not only for routine use of artificial intelligence but also for U.S. national security.

AI-generated content can exhibit a phenomenon known as “truthiness,” a term coined by television host Stephen Colbert in the early 2000s to describe how information can feel right regardless of whether it is accurate. Content with a highly coherent logical structure, despite lacking factual accuracy, can shape how even smart, sophisticated people judge whether something is true. Our cognitive biases mean that well-written content or compelling visuals have the power to make claims seem more true than they are. As one scholar who has studied “truthiness” describes it: “When things feel easy to process, they feel trustworthy.”
Full opinion: ChatGPT is creating new risks for national security.