The Brain Has Been Solving AI's Hardest Problem for Millions of Years

I recently spent some time reading a PhD thesis about recording electrical signals from a small patch of retinal tissue. The author was trying to understand how neurons in the eye coordinate their firing, how patterns emerge from what are essentially simple on-off switches working together. The thesis is technically impressive, but the deeper question buried underneath all the methodology is what stayed with me: how does something simple, repeated at scale, produce something that looks like intelligence? That question has not gone away. It has just migrated from biology labs into server farms. ...

March 1, 2026 · 6 min · Sankalp Chudmunge

Notes on Predicting IPL Chases After 10 Overs

I started this project with a deceptively simple question: can we predict whether an IPL chase will be successful once 10 overs of the second innings are complete? The IPL datasets I was working with had two very different granularities. The ball-by-ball data was a stream of events, while the match-level data stored outcomes and context. If I naïvely trained on ball-by-ball rows, the model would see multiple rows from the same match, future information would leak backwards, and accuracy would become meaningless. The only way forward was to decide exactly when the prediction is supposed to happen. I fixed that moment at the end of the 10th over of the second innings. ...
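The cutoff idea above can be sketched as a single aggregation step: keep only second-innings deliveries up to over 10, then collapse them to one row per match so no future information reaches the model. The column names (`match_id`, `innings`, `over`, `runs`, `wicket`) and 1-indexed over numbering are assumptions for illustration, not the post's actual schema.

```python
# Sketch of building one leakage-free training row per match.
# Assumed (hypothetical) ball-by-ball columns: match_id, innings,
# over (1-indexed), runs, wicket. Only second-innings events from
# overs 1..10 are aggregated, so nothing after the prediction
# moment can leak into the features.
import pandas as pd

def snapshot_after_10_overs(balls: pd.DataFrame) -> pd.DataFrame:
    """Collapse ball-by-ball rows into one row per match, using only
    second-innings deliveries from overs 1..10."""
    cutoff = balls[(balls["innings"] == 2) & (balls["over"] <= 10)]
    return (
        cutoff.groupby("match_id")
        .agg(runs_at_10=("runs", "sum"), wickets_at_10=("wicket", "sum"))
        .reset_index()
    )
```

The payoff of this shape is that the training table has exactly one row per match, so a plain train/test split by row is automatically a split by match as well.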

January 11, 2026 · 3 min · Sankalp Chudmunge
[Image: a portrait of Ettore Majorana]

Majorana Zero Edge States with Longer-Range Interactions in a Quantum Ising Chain

I’ve always been fascinated by the idea of global protection arising from local rules. In my master’s thesis, I spent a lot of time looking at a 1D quantum Ising chain, specifically the Transverse Field Ising Model (TFIM), to see how it behaves when you push it toward its topological limits. When you transform the TFIM into a Kitaev chain, something remarkable happens: Majorana Zero Modes (MZMs) emerge. These aren’t just particles; they are zero-energy edge states that act as their own antiparticles. What makes them special is their topological protection—they don’t care about local noise or small perturbations. They only care about the “global” state of the system. This makes them a holy grail for quantum computing, as they offer a way to store information that is naturally resistant to decoherence. ...
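The mapping mentioned above can be written out compactly. Up to sign and normalization conventions (which vary between references), the standard form is:

```latex
% Transverse Field Ising Model with coupling J and field h
H_{\mathrm{TFIM}} = -J \sum_{i} \sigma^z_i \sigma^z_{i+1} - h \sum_{i} \sigma^x_i

% A Jordan-Wigner transformation maps it to the Kitaev chain with
% hopping t, pairing \Delta, and chemical potential \mu:
H_{K} = -\mu \sum_i \left( c_i^\dagger c_i - \tfrac{1}{2} \right)
        - \sum_i \left( t\, c_i^\dagger c_{i+1}
        + \Delta\, c_i c_{i+1} + \mathrm{h.c.} \right),
\qquad t = \Delta = J, \quad \mu = 2h

% Splitting each fermion into two Majorana operators,
% c_j = (\gamma_{2j-1} + i\,\gamma_{2j})/2, the sweet spot
% t = \Delta, \mu = 0 leaves \gamma_1 and \gamma_{2N} unpaired:
% these are the zero-energy edge modes.
```

At that sweet spot the two unpaired Majoranas together encode one nonlocal fermionic mode, which is why no local perturbation can lift them away from zero energy.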

January 9, 2026 · 2 min · Sankalp Chudmunge

An Honest Model Is Better Than a Perfect One

Training my first CNN! Today was one of those days where the learning wasn’t about getting a high accuracy number, but about understanding why things behave the way they do when training deep learning models, especially in medical imaging. I started with a simple CNN for brain tumor MRI classification. At first, the instinct was to think that more parameters and more epochs would naturally lead to better performance. That intuition turned out to be wrong. Flattening large feature maps created millions of parameters, which made the model memorize instead of understand. Switching to Global Average Pooling forced the network to focus on meaningful patterns rather than pixel-level noise, drastically reducing parameters and making training more stable. ...
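The parameter explosion from flattening is easy to see with back-of-the-envelope arithmetic. The shapes below (a 7×7×512 final feature map feeding a 256-unit dense layer) are illustrative assumptions, not the post's actual architecture:

```python
# Parameter count of the first dense layer after a conv feature map
# of shape (H, W, C), comparing Flatten vs Global Average Pooling.
# Shapes are illustrative, not the actual model from the post.

def dense_params(in_features: int, out_features: int) -> int:
    """Weights plus biases of a fully connected layer."""
    return in_features * out_features + out_features

H, W, C = 7, 7, 512        # assumed final feature-map shape
units = 256                # assumed first dense layer width

flatten_in = H * W * C     # 25088 inputs after Flatten
gap_in = C                 # 512 inputs after Global Average Pooling

print(dense_params(flatten_in, units))  # 6422784 (~6.4M params)
print(dense_params(gap_in, units))      # 131328 (~131k params)
```

Global Average Pooling cuts this one layer by roughly a factor of H×W (49 here), because each channel is summarized by its spatial mean before the dense head ever sees it.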

January 7, 2026 · 2 min · Sankalp Chudmunge