In a bid to enhance the reasoning capabilities of large language models (LLMs), researchers from Google DeepMind and the University of Southern California have proposed a new ‘self-discover’ prompting framework.

Published on arXiv and Hugging Face this morning, the approach goes beyond existing prompting techniques and has been found to improve the performance of well-known models, including OpenAI’s GPT-4 and Google’s PaLM 2.

“Self-discover substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning and MATH by as much as 32% compared to Chain of Thought (CoT),” the researchers write in the paper.

The framework revolves around LLMs self-discovering task-intrinsic reasoning structures to solve a problem. The models examine multiple atomic reasoning modules, such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for the LLM to follow during decoding. More interestingly, the approach requires 10 to 40 times less inference compute, which can be a major advantage for enterprises.

LLMs have evolved to handle numerous tasks, thanks to their ability to follow instructions, reason and generate coherent responses. To make this happen, the models, powered by the transformer architecture, use various prompting techniques inspired by cognitive theories of how humans reason and solve problems. These include few-shot and zero-shot chain-of-thought prompting, inspired by how we solve a problem step by step; decomposition prompting, which breaks a problem into multiple subproblems; and step-back prompting, which reflects on the nature of a task to establish general principles.
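The discover-then-solve loop described above can be sketched roughly as follows. This is an illustrative approximation, not the authors’ released code: the `llm` callable is a stand-in for any text-completion API, and `SEED_MODULES` is an abridged sample of the kind of atomic reasoning modules the paper draws on.

```python
# Hedged sketch of a self-discover-style pipeline. All names here
# (llm, SEED_MODULES, echo_llm) are illustrative assumptions.

SEED_MODULES = [
    "How could I devise an experiment to help solve this problem?",
    "Let's think step by step.",
    "How can I break down this problem into smaller parts?",
    "Critical thinking: analyze the problem from different perspectives.",
]

def self_discover_structure(llm, task_examples):
    """Stage 1 (run once per task): select, adapt, and compose
    atomic reasoning modules into an explicit reasoning structure."""
    selected = llm(
        "Select the reasoning modules most useful for these tasks:\n"
        f"{task_examples}\nModules:\n" + "\n".join(SEED_MODULES))
    adapted = llm(
        f"Rephrase each selected module to fit the task at hand:\n{selected}")
    structure = llm(
        "Operationalize the adapted modules into a step-by-step "
        f"reasoning structure:\n{adapted}")
    return structure

def solve(llm, structure, instance):
    """Stage 2 (run per instance): the model follows the discovered
    structure during decoding to produce an answer."""
    return llm(
        "Follow this reasoning structure to solve the task:\n"
        f"{structure}\nTask: {instance}")

# Minimal stub so the sketch runs without an API key; swap in a real
# model call to experiment.
def echo_llm(prompt):
    return f"[model output for: {prompt[:40]}...]"

structure = self_discover_structure(echo_llm, "example task set")
answer = solve(echo_llm, structure, "a sample reasoning problem")
```

Because stage 1 runs only once per task rather than once per instance, the per-instance cost is a single guided decoding pass, which is where the reported inference-compute savings come from.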