This week on Coffee with Calyptus, we sit down with Preetam, Senior AI Lead at Microsoft, whose career spans crafting HDR apps at Ittiam, leading AWS Pinpoint for 20M users, and driving large-scale AI recommendations for Bing and Windows. From pioneering No Cost EMI at Amazon Pay to securing U.S. patents in autonomous cloud workflows, Preetam’s journey is a masterclass in scaling innovation. Dive in for lessons on distributed systems, personalization, and blending AI with legacy infrastructure without missing a beat.


Preetam, your arc from crafting HDR apps at Ittiam in Bangalore to leading AI recommendations for 20M Bing users at Microsoft is epic. How did your IIT Delhi ML internship ignite your passion for scaling AI in real-world finance and sports?
My machine learning internship at IIT Delhi in 2013 came at a time when many aspects of AI and ML were still in their early stages, far from the real-world applicability we see today. During that period, I worked on integrating k-means clustering into traditional heuristic algorithms and applied the resulting models to optimize electrical and electronic grid power systems. This work led to the discovery of several new sub-algorithms and, more importantly, taught me how data-driven models could optimize complex systems and deliver measurable impact. That experience sparked my passion for scaling AI solutions beyond academic research. Later in my career at Microsoft AI, I was able to apply much of that foundational knowledge—particularly in clustering and machine learning—to real-world applications in financial and sports recommendations for Windows and Bing users. In many ways, engineering challenges across domains share underlying patterns, and a successful algorithmic strategy in one area can often be adapted to solve problems in another.
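The clustering-plus-heuristic idea Preetam describes can be sketched in a few lines. This is a toy illustration only, not the actual grid system he worked on: the load profiles, cluster count, and the idea of running one heuristic per regime instead of per node are all invented for the example.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical hourly load profiles (24 readings per node). Clustering nodes
# into a few regimes lets a downstream heuristic tune per regime rather than
# per node -- the kind of data-driven preprocessing described above.
rng = np.random.default_rng(1)
profiles = np.vstack([rng.normal(mu, 0.1, (30, 24)) for mu in (0.3, 0.6, 0.9)])
centroids, labels = kmeans(profiles, k=3)
print(centroids.shape, labels.shape)
```

The point of the pattern is dimensionality reduction of the decision space: the heuristic only has to be solved once per cluster, not once per node.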
As tech lead on AWS Pinpoint, handling 20M daily users and 1000+ clients with Lambda and DynamoDB wizardry, how did that shape your approach to building notification platforms that alert 2M users on key events at Microsoft?
My tenure at AWS Pinpoint was a pivotal period of learning and growth, during which I helped build one of the largest distributed notification services in the industry. AWS Pinpoint delivered messaging, recommendations, and user analytics to hundreds of major enterprises—including Netflix, Disney, Aetna, and Sony. As a tech lead, I had the opportunity to design and develop large-scale, automated notification and feedback workflows using AWS-native services such as Lambda and DynamoDB. These cloud-native, autonomous workflows were unique at the time and led to two U.S. patents, each with over 100 citations since 2019, co-authored by me and my colleagues. Overall, my five years at AWS Pinpoint were marked by deep technical learning and collaborative growth in building scalable, distributed systems. I later applied many of the autonomous workflow patterns and architectural principles from that experience to develop a core notification platform at Microsoft, enabling timely alerts for millions of users across financial and productivity scenarios.
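The feedback-directed workflow pattern Preetam mentions can be sketched without any AWS dependencies. The stand-in table, event fields, and status names below are invented for illustration; in the real service these handlers would be Lambda functions backed by DynamoDB rather than an in-memory dict.

```python
import time

# In-memory stand-in for a DynamoDB table keyed by (user_id, campaign_id).
table = {}

def send_handler(event):
    """First workflow step: record the send, then (pretend to) deliver."""
    key = (event["user_id"], event["campaign_id"])
    table[key] = {"status": "SENT", "ts": time.time()}
    return {"delivered": True}

def feedback_handler(event):
    """Second step: real-time user feedback decides the next workflow action."""
    key = (event["user_id"], event["campaign_id"])
    record = table.get(key)
    if record is None:
        return {"action": "ignore"}  # feedback for an unknown send
    if event["opened"]:
        record["status"] = "ENGAGED"
        return {"action": "schedule_follow_up"}
    record["status"] = "UNREAD"
    return {"action": "try_alternate_channel"}

send_handler({"user_id": "u1", "campaign_id": "c1"})
result = feedback_handler({"user_id": "u1", "campaign_id": "c1", "opened": True})
print(result)
```

The autonomy comes from the second handler: each piece of feedback is itself an event that selects the next branch of the workflow, with no human in the loop.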
Pioneering Amazon Pay's No Cost EMI and contactless payments must have been a high-wire act; what was the biggest distributed systems challenge you tackled there, and what's one tip for engineers optimizing data freshness in massive-scale apps?
Amazon Pay’s No Cost EMI and contactless payment features—still in use across Amazon Go, Fresh, and physical retail stores—impact over 100 million users globally every day. The most significant distributed systems challenge was integrating complex payment and banking systems, designed with strict financial and transactional security requirements, into Amazon’s large-scale delivery infrastructure, which is optimized for speed and customer obsession. This required careful technical trade-offs throughout the software development lifecycle to ensure both security and performance goals were met simultaneously. One key strategy to maintain data freshness at scale was implementing multiple data pipelines with varying speeds. This allowed seamless integration with diverse data sources, each with different refresh rates. Additionally, we categorized data based on its sensitivity to freshness—reserving high-cost, high-speed pipelines for time-critical data (e.g., stock prices), while leveraging slower, more cost-efficient pipelines for less time-sensitive information (e.g., quarterly earnings). This approach ensured both performance and cost-efficiency in large-scale systems.
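The freshness-tiering tip above can be made concrete with a small routing table. The tier names, categories, and refresh intervals here are invented examples, not Amazon's actual configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pipeline:
    name: str
    refresh_seconds: int  # how stale data from this pipeline may get

# Two tiers: an expensive low-latency stream and a cheap batch refresh.
FAST = Pipeline("low-latency stream", 5)
SLOW = Pipeline("batch refresh", 6 * 3600)

# Each data category is routed by its sensitivity to staleness.
ROUTING = {
    "stock_price": FAST,         # time-critical: worth the fast pipeline
    "quarterly_earnings": SLOW,  # slow-moving: cheap batch refresh is fine
    "company_profile": SLOW,
}

def pipeline_for(category: str) -> Pipeline:
    """Pick the cheapest pipeline that still meets the category's freshness need."""
    return ROUTING.get(category, SLOW)  # unknown data defaults to the cheap tier

print(pipeline_for("stock_price").name)
```

Defaulting unknown categories to the slow tier keeps cost bounded; only data that has explicitly justified the expense rides the fast pipeline.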
Securing U.S. patents in AWS messaging while evolving from SDE in India to senior lead in Vancouver screams reinvention. Tell us about this journey.
Certainly. I began my career with Amazon’s Payments Development teams, where the primary focus was integrating complex financial systems with Amazon’s distributed retail architecture. This was a foundational learning experience that gave me deep exposure to large-scale systems and the intricacies of secure financial transactions. After building a strong foundation in distributed systems over four years, I sought to broaden my impact by working on even larger-scale applications, which led me to AWS in Seattle. As one of the founding tech members of AWS Messaging, I designed and developed several autonomous workflows and notification services for AWS, which were onboarded by major tech giants such as Netflix, Sony, and Disney. We delivered billions of personalized messages daily to end users, with real-time user feedback directing the next workflow steps. The novelty of these real-time notification workflows led to two U.S. patents in autonomous cloud workflows, a completely new approach in 2019. While I had the privilege of delivering large-scale architectures across two major tech organizations, I felt a strong pull to return to my original passion: applying machine learning to real-world problems. This brought me to Microsoft’s AI organization in Vancouver, where I initially focused on building ML-based user recommendations in domains like finance and sports. I combined my background in machine learning with over a decade of distributed systems experience to build recommendation and notification platforms that now serve over 20 million daily active users across Microsoft services such as Windows, Bing, and Teams. One guiding principle that has consistently helped me grow as a software professional is continuous learning. I’ve found that the most impactful innovations often come from combining experiences across different domains and technologies, and from being intentional about applying those learnings in new contexts.
At Microsoft AI Org, integrating LLMs into backend for sports and financial queries sounds cutting-edge. How are you adopting these models to boost personalization and relevance for millions, and what's an actionable strategy for teams blending AI with legacy systems?
At Microsoft AI Org, without disclosing any confidential details, my work is two-fold. First, I train models using XGBoost, decision tree algorithms, and traditional LLMs to categorize user interests across financial and sports topics. These models power recommendations and notifications across Windows, Bing, and other Microsoft services. The second part involves clustering users based on interests they have consented to share, and applying LLMs to these segmented datasets. This approach is essential in large-scale organizations like Microsoft, where the user base is vast and globally diverse. These combinations deliver higher accuracy and relevance in personalized recommendations while minimizing bias. For organizations looking to blend AI with legacy systems, an actionable strategy is to begin by identifying key user segments and integrating modular AI components—such as clustering, decision tree models, and LLMs—alongside existing infrastructure. This approach enables incremental improvements in personalization and relevance while preserving system stability and leveraging proven legacy processes. Rather than replacing legacy systems, successful AI integration often comes from thoughtful collaboration between new and existing technologies.
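The segment-then-model strategy can be sketched in miniature. Nothing below reflects Microsoft's actual system: the interest vectors, segment centroids, and topic lists are invented, and a trivial nearest-centroid lookup stands in for the real clustering and LLM components.

```python
import numpy as np

# Columns: a user's affinity for [finance, sports]; rows: users.
interests = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.95], [0.1, 0.85]])

# Pre-computed segment centroids (finance-heavy, sports-heavy).
centroids = np.array([[1.0, 0.0], [0.0, 1.0]])

# Assign each user to the nearest segment.
segments = np.linalg.norm(interests[:, None] - centroids[None], axis=2).argmin(1)

# One lightweight, modular model per segment instead of a single global model;
# in a legacy-blend setup each could sit beside existing serving infrastructure.
per_segment_topics = {0: ["earnings", "markets"], 1: ["scores", "fixtures"]}

def recommend(user_idx: int) -> list:
    """Route the user to the model owned by their segment."""
    return per_segment_topics[int(segments[user_idx])]

print(recommend(0), recommend(2))
```

Because each segment's model is modular, one segment can be upgraded (say, to an LLM-backed ranker) without touching the legacy paths that serve the others, which is the incremental-integration point made above.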
We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.