Issue 153: The Hardest Part of AI Isn't The Models But Keeping Your Team Grounded Ft. Ankur Mathur, CTO at Experiture

Author: Nishant Singh
March 22, 2026

This week, we sat down with Ankur Mathur, a technologist who has been quietly building AI-powered products since before "AI" was a boardroom buzzword, spanning roles as Head of AI at Iterable, enterprise architect at Walmart Canada and Macy's, and now CTO at Experiture. From founding an AI group from scratch and growing ARR 10x, to helping companies escape the trough of AI disillusionment, his insights cut straight to what actually works. If you build, lead, or invest in technology, this one's worth your full attention.

You went from writing Java code at GE and IBM in Bangalore to eventually founding an AI group at a San Francisco SaaS company that grew ARR 10x. What was the single biggest mindset shift you had to make along that journey?

This is such a juicy question!

I was actually an aspiring AI researcher in college, fascinated by Hopfield networks - statistical physics-inspired associative memory models that shared the 2024 Physics Nobel prize. This was the late 90s, deep in the AI winter, when I pulled out of the PhD program at Cornell to join a Silicon Valley startup that had just IPOed. Then the dot-com bust landed me in Bangalore, where I rode the offshoring wave. But the point is, I had tons of latent passion for this space, and when I got the opportunity to enter it professionally almost 15 years later, I was ready to do whatever it took.

The single biggest shift was trading in the straight-line certainty of traditional coding, where 'if I push this button, this exact thing must happen,' for a mindset that accepts and manages a range of possibilities. Now, when we build AI, we design for outcomes where 'if I push this button, there is an 85% chance of a great result and a 15% chance of a good result.' Software design is still about composing outcomes in desirable ways - but now we accept and manage their probabilistic nature. You're not aiming for a single bullseye, but for the most advantageous region on a probability map.
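That 85/15 framing changes how you even define "correct." A minimal sketch of the idea - purely illustrative, with thresholds and names of my own invention, not any company's actual code - is an acceptance check that passes when the *distribution* of outcomes lands in the target region, rather than asserting one exact result:

```python
def classify_outcome(score: float) -> str:
    """Map a model confidence score to an outcome tier (illustrative cutoff)."""
    return "great" if score >= 0.8 else "good"


def accept_feature(sample_scores: list[float], min_great_rate: float = 0.85) -> bool:
    """Probabilistic acceptance: pass if the share of 'great' outcomes
    across many sampled runs meets the target rate."""
    great = sum(1 for s in sample_scores if classify_outcome(s) == "great")
    return great / len(sample_scores) >= min_great_rate
```

A deterministic test asserts `result == expected`; here the unit of correctness is a rate over a sample, so the same feature can pass with 9 great results out of 10 and fail with 8.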

At Iterable, you essentially built a startup within a startup, creating the AI and Experimentation group from scratch over nearly six years. What does it actually take to convince a company to bet on AI before it becomes the obvious thing to do?

I was lucky to have support from the senior-most leadership, but it's easy to erode that with carelessness. Of course, stakeholder management 101 applies: communicate proactively, in language that makes sense to your stakeholders. That becomes a critical skill in a domain as complex as this one.

I think the most important principle remains staying customer focused and delivering value incrementally. We picked a platform that didn't require a lot of setup lead time, started simple, and iterated quickly - 'building the train track only a few miles ahead,' as our then-CFO used to say. For example, we knew we needed a solid feature store to power the future, so we released Brand affinity, a teaser auto-segmentation feature built in less than a quarter. That created demand for Predictive Audiences, a full AutoML product for marketing with a massive feature store. It took almost a year to build, but it enabled Iterable to easily ride the genAI wave later, with features like next best action and now the agent, Nova.

You've worn a lot of hats across your career, from software engineer to enterprise architect to Head of AI to CTO. How do you decide when it's time to leave a role and what signals tell you the next chapter is ready?

It honestly starts after the core problems have been cracked. I try to reserve some personal time for open exploration, and I'm blessed with a diverse and brilliant network, so I often uncover a new passion or spark an idea that way. Sometimes I can channel it in my existing position, but sometimes it leads outside. I am definitely drawn to personal growth and creative impact.

You helped re-engineer Walmart Canada's ecommerce platform and later applied ML to fraud detection and merchandise attribution at Macy's. What did working at that kind of massive retail scale teach you about building technology that actually holds up under pressure?

Scalable, resilient systems have to be designed with explicit failure modelling. Then there is capacity planning and scalability testing, typically for 2x the peak load experienced so far. Our Walmart thesis was all about horizontal scalability, decoupled, stateless architecture, and side-effect-free functional programming. For ML, of course, infrastructure cost is a central concern: one has to be smart about the ROI balance and vigilant about wasteful data processing, training, and inference. Observability tools are important anywhere, but they become critical at high scale.
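Explicit failure modelling can start as simply as giving every remote dependency a bounded retry budget and a declared degraded path, so a failure resolves to a planned outcome instead of propagating upward. A hedged sketch of the pattern (function names are mine, not from any Walmart or Macy's system):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_fallback(primary: Callable[[], T],
                       fallback: Callable[[], T],
                       retries: int = 2,
                       backoff_s: float = 0.1) -> T:
    """Model the failure mode explicitly: retry the primary dependency a
    bounded number of times with exponential backoff, then degrade
    gracefully to the fallback instead of raising to the caller."""
    for attempt in range(retries + 1):
        try:
            return primary()
        except Exception:
            if attempt < retries:
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    return fallback()  # the degraded path is a design decision, not an accident
```

Because the function holds no state between calls, it stays compatible with the stateless, horizontally scalable architecture described above.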

Having built and scaled Data Science, ML Platform, and AI applications teams, what's the hardest part of managing people in AI that no one really talks about openly?

Managing expectations and nurturing growth for some of the smartest, most diverse and ambitious people in the industry.

AI/ML projects carry a lot of uncertainty, are full of hard work that is often invisible to stakeholders, and require extraordinary tenacity and motivation. On top of all that, there is the deafening hype and the inevitable FOMO. Keeping it real and fun amidst all that requires special relationships and culture.

As someone who's been building AI-powered products since before Generative AI became a buzzword, how are you now weaving today's LLM and Generative AI capabilities into what you're building at Experiture, and where do you think most companies are still getting it completely wrong?

Experiture's technology has grown organically over nearly two decades, from a monolithic database application used by our internal marketing agency, into a lavishly featured SaaS platform for powerful customer engagement. It has sophisticated capabilities around audience segmentation, customer journey orchestration, analytics, and experimentation. Those are also the areas where the cognitive load for marketers is highest.

We have been introducing an LLM-enabled layer of APIs designed to distill complexity, reason across the system, explain what is happening, and take care of tactical work where appropriate. The priorities for embedding AI are to:

  1. Free up human bandwidth by safely executing low-level, mundane tasks.
  2. Build trust and progressively execute higher-level tasks.
  3. Continuously learn to provide smarter assistance in steering strategy.
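The three priorities above imply a ladder of autonomy, where the assistant earns the right to act at higher levels as trust accumulates. One hypothetical way to sketch that gating logic (the class, thresholds, and level scheme are illustrative assumptions, not Experiture's actual design):

```python
from dataclasses import dataclass, field


@dataclass
class AssistantPolicy:
    """Gate agent autonomy by task level and earned trust.

    Level 1 (mundane tasks) auto-executes from day one; each higher level
    unlocks only after enough successful completions at the level below.
    """
    unlock_threshold: int = 20
    successes: dict[int, int] = field(default_factory=dict)

    def allowed_level(self) -> int:
        """Highest task level the assistant may execute without approval."""
        level = 1
        while self.successes.get(level, 0) >= self.unlock_threshold:
            level += 1
        return level

    def can_auto_execute(self, task_level: int) -> bool:
        return task_level <= self.allowed_level()

    def record_success(self, task_level: int) -> None:
        """Completed tasks feed the trust loop (priority 2)."""
        self.successes[task_level] = self.successes.get(task_level, 0) + 1
```

Anything above the allowed level would route to a human for approval, which keeps the "build trust progressively" step observable rather than implicit.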

Let me try to address the last part of your question from my vantage point. Many companies still treat AI as a feature or an add-on rather than a foundational part of their system. The most useful applications are those tightly connected to real workflows, grounded in real data, and that genuinely enhance existing product capabilities. When this integration is missing, driving sustained adoption is harder, the feedback loop is weak and the trough of disillusionment is deep. The companies that succeed will treat LLMs the way we eventually learned to treat cloud or distributed systems: as a powerful primitive that needs strong product thinking, data infrastructure, and disciplined engineering around it. I believe “AI native” is not a healthy classification. AI is now foundational to all software.

We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.