Emerging technologies including AI, virtual reality (VR), augmented reality (AR), 5G, and blockchain (and related digital currencies) have all progressed on their own merits and timelines. Each has found a degree of application, though AI has clearly progressed the furthest. Each technology is maturing while overcoming challenges ranging from blockchain’s
Artificial Intelligence (AI) has been saving businesses valuable time and resources for quite some time now. It has made revolutionary impacts across maps and navigation, chatbots, text editors, digital assistants, facial recognition, and more. Individuals and enterprises use these AI-powered systems every day, and
With reliance on AI-based decisions and operations growing by the day, it’s important to take a step back and ask whether everything that can be done to assure fairness and mitigate bias is being done. There needs to be greater awareness of, and training behind, AI deployments, not just for developers
In recent years, many leading companies have fallen victim to high-profile cyberattacks and data breaches. Cybercriminals around the world continually develop new techniques to break into even the most advanced security systems and gain access to sensitive information. This has led to the exposure of
The Federal Trade Commission (FTC) is considering a wide range of options, including new rules and guidelines, to tackle data privacy concerns and algorithmic discrimination. The FTC’s Chair, Lina Khan, in a letter to Senator Richard Blumenthal (D-CT), outlined her goals to “protect Americans from unfair or deceptive practices online” and in
Whether you are overwhelmed with data or just curious about what you might learn, you may be feeling the impulse to jump on the artificial intelligence (AI) bandwagon. Before you go too far down the road, please consider this Top 10 list of the most common mistakes managers make when building an AI project. It comes from long, hard lessons learned across multiple missions and IT clients over the years.
Artificial intelligence is now an integral component of the processes and systems that drive our organizations. As AI practitioners, we must be intentional about developing, deploying and managing responsible AI — minimizing risk and removing bias while working toward our objectives. I recently defined a framework of six essential elements of
Part II of the Center for Security and Emerging Technology (CSET) series is now available; it “examines how AI/ML technologies may shape future disinformation campaigns and offers recommendations for how to mitigate them.” We offered an analysis of Part I of the series (CSET Introduces a “Disinformation Kill Chain”) earlier this month. Disinformation is not new, of course, but its scale and severity seem to have reached a zenith, broadsiding contemporary politics, public health policy, and many other domains. You name it, disinformation is in the mix, scrambling truth and reality.
Artificial intelligence, and more specifically machine learning, is being deployed in the insurance space in some very exciting ways — from assessing underwriting risks to determining pricing to evaluating claims. But with these advances come sizable risks, some of which are already surfacing. Insurers need to take a proactive approach
Although artificial intelligence has been the subject of academic research since the 1950s and has been used commercially in some industries for decades, it is still in its infancy across much of the broader economy. The rapid adoption of this technology, along with the unique privacy, security and liability issues associated