When Technology Forgets the Human—and How One Founder Refused to Let That Happen

Feb 27, 2026

At some point in every technologist’s life, there’s a quiet, uncomfortable realization. You’re not building for people anymore. You’re building for efficiency, speed, and scale—and the humans affected by your work have become invisible. Shaker Natrajan knows that moment well. He lived it. He optimized systems so effectively that 2,000 people lost their jobs. And at the time, he didn’t see their faces. He didn’t see their children. He didn’t see his own father, who once delivered telegrams for a living, reflected in the code he wrote.

That’s the paradox of modern technology, isn’t it? You’re rewarded for shaving seconds, cutting costs, and removing “inefficiencies.” But those inefficiencies are often people. Real lives. Real families. And when you’re deep inside dashboards, KPIs, and algorithms, it’s dangerously easy to forget that.

Shaker didn’t forget forever. When he returned to India during his mother’s illness, something slowed down. Life forced him to pause. And in that pause, he realized he had been chasing vision, not validation. Success had arrived—Fortune 500 companies, hundreds of patents, global recognition—but fulfillment hadn’t. That’s when the deeper questions showed up. What does technology actually owe humanity? And what kind of world are we leaving for our children?

Today, Shaker wakes up at 4:00 a.m. Not to check emails. Not to code. He paints. Classical Indian Tanjore-style paintings, layered meticulously with gold leaf—22 intricate layers, each requiring patience and stillness. It sounds almost absurd for someone building cutting-edge AI. And that’s exactly the point. In a world obsessed with speed, painting forces him to slow down and simply be human.

Why does that matter? Because the state of mind you’re in when you build something becomes embedded in what you create. If you rush, you build reckless systems. If you’re disconnected, you build indifferent ones. Shaker believes that if technology is meant to support humanity—not replace it—then the builder must stay grounded in human values. That belief led to the idea of “angelic intelligence.”

Unlike conventional AI, which optimizes for efficiency, or even ethical AI, which focuses on “do no harm,” angelic intelligence aims to actively do good. And there’s a big difference. Preventing harm doesn’t automatically create goodness. A system can avoid obvious damage and still deny someone a loan, healthcare, or opportunity because efficiency told it to. Shaker’s argument is simple but unsettling: if machines are going to make decisions about human lives, they must be trained on human goodness—not just data scraped from the internet.

And here’s where his story gets deeply personal. His father earned very little but gave generously. His mother sold the symbol of her marriage so her son could be educated. His grandmother chose not to take medicine so he could prepare for IIT. These aren’t statistics. These are values lived out quietly, without applause. And Shaker knows no algorithm today truly understands that kind of sacrifice.

So how do you code compassion when even humans struggle to feel it universally? You don’t program what you can feel. You program what you should feel. Empathy isn’t about loving everyone equally—it’s about asking, “What would I want if I were in their place?” That shift, from can to should, is what’s missing in most systems today.

It’s also why Shaker draws a hard line. “If you’re a bad human, don’t buy my technology,” he says. It sounds bold, almost risky. But think about it. Who would raise their hand and admit they’re a bad human? The statement forces reflection. It introduces accountability into technology procurement—something we rarely talk about.

Of course, idealism alone doesn’t pay salaries. Shaker admits the fear is real. Payroll keeps founders awake at night. Capital markets still ask about returns before values. But instead of running from capitalism, he chose to work within it—designing structures where profits and purpose can coexist. Investors can participate. Returns can be generated. But the core values? Those are non-negotiable.

What’s striking is that Shaker doesn’t paint big companies as villains. He’s seen extraordinary humanity inside supposedly inhuman corporations—executives pausing profit-driven decisions to protect small businesses, employees risking their lives to save strangers, organizations stepping up during disasters without waiting for PR credit. The problem isn’t companies. It’s what happens when machines replace those human moments entirely.

Technology, at its best, should preserve that warmth—the soft voice on customer support that melts your anger, the delivery driver who brings back your lost dog, the employee who chooses kindness when no one is watching. Those moments are invisible, but they’re everything. And once lost, they’re hard to rebuild.

If Shaker’s mother were alive today and asked, “Beta, what are you building?” his answer wouldn’t mention AI models or patents. He’d say he’s trying to replicate the goodness she lived by. To amplify it. To make sure machines don’t erase what makes us human.

And maybe that’s the real challenge of our time—not building smarter technology, but building wiser humans behind it.
