Introducing an Enhanced AI Reasoning Technique


Executives using an AI computing simulation.
Image: Envato/DC_Studio

Researchers from AI company DeepSeek and Tsinghua University have introduced a new technique to enhance “reasoning” in large language models (LLMs).

Reasoning capabilities have emerged as a critical benchmark in the race to build top-performing generative AI systems. China and the U.S. are actively competing to develop the most powerful and practical models. According to a Stanford University report in April, China’s LLMs are rapidly closing the gap with their U.S. counterparts. In 2024, China produced 15 notable AI models compared to 40 in the U.S., but it leads in patents and academic publications.

What is DeepSeek’s new technique?

DeepSeek researchers published a paper, titled “Inference-Time Scaling for Generalist Reward Modeling,” on Cornell University’s arXiv, the archive of scientific papers. Note that papers published on arXiv are not necessarily peer-reviewed.

In the paper, the researchers detailed a combination of two AI training methods: generative reward modeling and self-principled critique tuning.

“In this work, we investigate how to improve reward modeling (RM) with more inference compute for general queries, i.e. the inference-time scalability of generalist RM, and further, how to improve the effectiveness of performance-compute scaling with proper learning methods,” the researchers wrote.

Reward modeling is the process of training AI to align more closely with user preferences. With self-principled critique tuning (SPCT), the model generates its own guiding principles and critiques during inference and uses them to refine its answers. The combined approach continues the effort to let LLMs deliver more relevant answers faster.
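To make the idea concrete, here is a minimal, hypothetical sketch of the inference-time pattern the paper describes: a generative reward model drafts evaluation principles for a query, critiques each candidate answer against them, and averages over repeated sampled passes so that spending more inference compute yields a more stable reward. All function names and the keyword-based scoring heuristic below are illustrative stand-ins for real LLM calls, not DeepSeek's implementation.

```python
import random

def generate_principles(query: str) -> list[str]:
    """Stand-in for an LLM drafting query-specific evaluation principles."""
    return ["accuracy", "relevance", "clarity"]

def critique_score(answer: str, principle: str, rng: random.Random) -> int:
    """Stand-in critique: a real generative reward model would write a
    textual critique and extract a score; here, a noisy keyword check."""
    matches = principle in answer.lower()
    return (5 if matches else 2) + rng.randint(0, 1)

def reward(query: str, answer: str, samples: int, seed: int = 0) -> float:
    """Inference-time scaling: run several principle+critique passes and
    average them; more samples -> a more reliable reward signal."""
    rng = random.Random(seed)
    totals = []
    for _ in range(samples):
        principles = generate_principles(query)
        totals.append(sum(critique_score(answer, p, rng) for p in principles))
    return sum(totals) / len(totals)

candidates = [
    "A clear explanation, written with accuracy and clarity in mind.",
    "An off-topic ramble.",
]
scores = {a: reward("explain X", a, samples=8) for a in candidates}
best = max(scores, key=scores.get)
```

In this toy setup the on-topic answer wins because it satisfies more of the generated principles; the `samples` parameter is where "more inference compute" enters, mirroring the performance-compute scaling the researchers study.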

“Empirically, we show that SPCT significantly improves the quality and scalability of GRMs, outperforming existing methods and models in various RM benchmarks without severe biases, and could achieve better performance compared to training-time scaling,” the researchers wrote.

They called the models trained with this method DeepSeek-GRM.

“DeepSeek-GRM still meets challenges in some tasks, which we believe can be addressed by future efforts in generalist reward systems,” the researchers wrote.

What’s next for DeepSeek?

DeepSeek has generated significant buzz around its R1 model, which rivals leading reasoning-focused models such as OpenAI's o1. A second model, DeepSeek-R2, is rumored for release in May. The company also released DeepSeek-V3-0324, an updated model, in late March.

According to the paper, models built with the new method will be open-sourced, though no release date has been specified.
