
Building Language Models that Learn, Remember and Reason like Experts
Dr. Niket Tandon

As language models become deeply integrated into our daily workflows, our expectations are shifting. We no longer seek systems that merely predict the next word—we want collaborators that can learn from us, remember what matters, and reason with the rigor of a domain expert.

 

This talk introduces a new class of language models by exploring two converging paths toward that future: systems that learn from the world and about the world.

 

First, we’ll examine how models can learn from the world—alongside us—by reflecting on past mistakes and adapting to human feedback. Drawing inspiration from the psychological theory of recursive reminding, we present a memory architecture that enables models to avoid repeating errors and improve through interaction. This is a step toward making language models not just responsive, but reflective—and even self-reflective.

 

Second, we’ll explore how models can learn about the world. Despite their scale, today’s models often fail in high-stakes domains like law, medicine, and finance because their knowledge is static and incomplete. To address this, we’ll discuss emerging memory-based strategies for injecting curated knowledge directly into models, moving beyond retrieval-augmented generation (RAG) to build systems that can robustly and efficiently integrate domain expertise without compromising trust or privacy.

 

Together, these approaches point to a new generation of language models: systems that learn from the world and about the world. Through case studies and early results, we’ll explore how fusing memory and knowledge can create agents that reason better, fail less often, and ultimately serve us more effectively.
 

