

RL for Small LLM Reasoning: What Works and What Doesn't

Paper

Another interesting research paper: "RL for Small LLM Reasoning: What Works and What Doesn't."

The paper investigates how reinforcement learning (RL) can improve reasoning capabilities in small language models (LLMs) under strict computational constraints. The researchers experimented with a 1.5-billion-parameter model (DeepSeek-R1-Distill-Qwen-1.5B) on 4 NVIDIA A40 GPUs over 24 hours, adapting the Group Relative Policy Optimization (GRPO) algorithm and curating a dataset of mathematical reasoning problems. The performance gains were achieved with only 7,000 samples at an estimated cost of roughly $42.
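The core idea behind GRPO is to drop the separate value network used in PPO and instead score each sampled response relative to the other responses in its own group. A minimal sketch (illustrative names, not the paper's code):

```python
# Sketch of GRPO's group-relative advantage: sample G responses per prompt,
# score each with a reward function, then normalize every reward against
# the mean and standard deviation of its own group.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage of each sampled response relative to its group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four sampled answers to one prompt, scored 1.0 if correct else 0.0.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct answers end up with positive advantages and incorrect ones with negative advantages, without ever training a critic, which is part of why the method fits on a small GPU budget.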

Through three experiments, they discovered that small LLMs can achieve rapid reasoning improvements within 50-100 training steps using limited high-quality data, but performance degrades with prolonged training under strict length constraints. Mixing easy and hard problems improved training stability, while cosine rewards effectively regulated output length. Their Open-RS variants outperformed models that cost thousands of dollars and required much larger datasets to train.
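For intuition on the length-regulating reward, here is a sketch of one common cosine formulation (parameter values are illustrative, not taken from the paper): correct answers earn more when short, while wrong answers are penalized less when long, nudging the model to keep reasoning rather than guess early.

```python
import math

def cosine_length_reward(correct, length, max_length,
                         r_correct=(2.0, 1.0), r_wrong=(-2.0, -1.0)):
    """Reward annealed by a cosine over generation length.

    r_correct / r_wrong give (reward at length 0, reward at max_length);
    the cosine interpolates smoothly between the two endpoints.
    """
    short_r, long_r = r_correct if correct else r_wrong
    t = min(length, max_length) / max_length          # fraction of budget used
    return long_r + 0.5 * (short_r - long_r) * (1 + math.cos(math.pi * t))
```

With these illustrative endpoints, a correct 0-token answer scores 2.0 and a correct max-length answer scores 1.0, so the gradient pressure against verbosity is gentle rather than a hard cutoff.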

The findings demonstrate that RL-based methods can effectively enhance small LLMs' reasoning capabilities with minimal resources, offering a cost-effective alternative to large-scale approaches. However, challenges remain, including optimization instability, length constraints, and multilingual tendencies. The researchers released their code and datasets as open-source resources to encourage further exploration of resource-efficient approaches to LLM reasoning enhancement.

Thoughts

Not sure what triggers this, but I have seen Qwen models be especially susceptible to language drift during extensive evaluations (basically, responding in Mandarin instead of English).
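A crude heuristic I find useful for flagging this drift during evals (a hypothetical helper, not from the paper): count the fraction of characters falling in the main CJK Unified Ideographs block.

```python
def cjk_ratio(text):
    """Fraction of characters in the CJK Unified Ideographs block (U+4E00-U+9FFF)."""
    if not text:
        return 0.0
    cjk = sum(1 for ch in text if '\u4e00' <= ch <= '\u9fff')
    return cjk / len(text)

def drifted(text, threshold=0.3):
    """Flag a completion as language-drifted if CJK density exceeds threshold."""
    return cjk_ratio(text) > threshold
```

The 0.3 threshold is arbitrary; a stray Chinese token in an otherwise-English chain of thought is usually fine, but a majority-CJK completion to an English prompt is worth logging.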

Stability is always a challenge when training with reinforcement learning. Catastrophic forgetting is something my team is well aware of.

The ideal small-model size still seems to be in the 7B+ parameter range for domain-specific applications, and can go as small as 3B parameters for extremely narrow tasks.

What is the next breakthrough needed for stable long-term reasoning in these constrained settings? Perhaps different reward mechanisms, better management of internal state, architectural changes, or combining RL with other lines of research?

My 2025 focus is still on embeddings/tokenization, and perhaps deeper research into backpropagation.

