Right around the time you are getting this email, Google finally released its long-awaited powerful AI, which, in the continued tradition of sudden AI name changes, is no longer Bard but rather Gemini Advanced. I have had early access to this LLM for over a month (as a reminder, I take no payments from any AI lab, nor do they see what I write in advance), and I wanted to offer some tasting notes. And, yes, I mean tasting, not testing, notes. In these newsletters, I have been sloppy with spelling (I figure it is a sign that a regular human rather than an AI wrote them), but I am not making a mistake here.

AI testing benchmarks have their place, but they can also mislead. AIs can be trained on the test questions, on purpose or by accident, and many of the benchmarks consist of lists of trivia questions or reasoning puzzles that don't reflect real-world usage. So I wanted to offer a subjective/objective mix of opinions about Gemini Advanced, more like sampling a wine than giving a rigorous review. I am going to avoid a detailed feature-by-feature comparison and focus on the big picture, with plenty of examples.

Let me start with the headline: Gemini Advanced is clearly a GPT-4 class model. The statistics show this, but so does a month of our informal testing. And this is a big deal, because OpenAI's GPT-4 (the paid version of ChatGPT/Microsoft Copilot) has been the dominant AI for well over a year, and no other model has come particularly close. Prior to Gemini, we had only one advanced AI model to look at, and it is hard to draw conclusions from a dataset of one. Now there are two, and we can learn a few things.