In this edition of Coffee with Calyptus, we sit down with Vasile, Principal Engineer at PSYKHE AI, to discuss his journey from configuring Windows servers in Moldova to shaping the future of AI-driven e-commerce. Vasile shares his invaluable lessons on scaling microservices, navigating startup growth, and the evolving role of generative AI in personalizing user experiences.

Vasile, your rocket ride from configuring Windows servers at Soricelul Priceput in Moldova to Principal Engineer at PSYKHE AI in London is the stuff of tech dreams. What early hurdle in your junior days at Cedacri pushed you to master scalable microservices so quickly?
At Cedacri, I learned how slow corporate environments can limit experimentation — enterprise processes, slow approvals, very little room to “break things”. Then I jumped to a startup where I suddenly owned critical components end-to-end. If production broke, I had to fix it, fast. Being in near-constant survival mode accelerated my understanding of scalable microservices like nothing else could.
That contrast is what turned me into a scalability-obsessed engineer. I learned that you shouldn’t architect for fantasy traffic — you scale only what proves it deserves to scale. That mindset still drives me at PSYKHE AI: fast learning, real-world validation, and tech with commercial purpose.
At PSYKHE AI, you rebuilt the job-processing system to handle massive user loads and launched an MVP that proved B2B viability with real CTR gains. How did that pivot from B2C to enterprise recommendations reshape your view on validating product-market fit?
I joined PSYKHE AI as a Founding Engineer, inspired by founder Anabel Maldonado’s vision for AI that understands taste through psychology. I transformed our prototype into a scalable enterprise product, and the first integrations were the real test of belief. We weren’t just deploying tech — we were powering revenue for brands where every shopper is unique, and every assortment order should be, too.
That’s how I learned that product-market fit isn’t a singular milestone — it’s a pattern proven across time and systems. When our first handful of clients saw 15-25% gains in revenue, all through clean A/B tests, the pattern became undeniable. Relevance repeatedly driving revenue — that’s when I truly believed we had something the world needed.
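For readers curious what a “clean A/B test” boils down to, here is a minimal Python sketch of the underlying arithmetic: relative lift plus a standard two-proportion z-test. The traffic numbers are made up for illustration; this is the statistical idea, not PSYKHE AI’s actual evaluation code.

```python
import math

def ab_lift(control_conv, control_n, variant_conv, variant_n):
    """Relative lift and two-sided p-value for a conversion A/B test."""
    p_c = control_conv / control_n
    p_v = variant_conv / variant_n
    lift = (p_v - p_c) / p_c  # relative lift, e.g. 0.20 == +20%
    # Pooled two-proportion z-test for significance.
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p_v - p_c) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, p_value

# Hypothetical numbers: 50k sessions per arm, 3.0% vs 3.6% conversion.
lift, p = ab_lift(1500, 50_000, 1800, 50_000)
print(f"lift: {lift:+.1%}, p-value: {p:.4f}")  # lift: +20.0%, p-value ~0.0000
```

A “clean” test in this sense means the only thing separating the two arms is the recommendation logic, so a lift like the one above can be attributed to relevance rather than noise.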
Leading e-commerce teams at R Systems with AWS and Go microservices must have been quite an experience; what's one actionable lesson from those integrations that you think all tech leaders should know?
At R Systems, I had a defining moment when my team lead went on parental leave and recommended me — despite more senior engineers being available — to report directly to the CEO and Product Owners. The difference wasn't raw technical skill; it was that I'd invested early in understanding the business behind the code. I learned to translate product goals into architectural decisions and explain technical trade-offs in commercial terms.
The actionable insight: promote engineers who think like product owners. The ones who understand why performance impacts conversion, why a feature matters for revenue, why a UX detail influences retention. Those engineers don't just build solutions — they build the right solutions. In fast-moving companies, that's the difference between shipping features and driving the business forward.
Designing an ingestion system at PSYKHE to crunch millions of products daily with ML pipelines sounds intense. What was the toughest scalability challenge you faced?
In traditional distributed systems, scaling is well-mapped until you hit Amazon-or-Google scale. But when we scaled PSYKHE AI's ML ingestion pipelines, I learned the rules change: your model framework influences serving strategy, which influences infrastructure design, which influences cloud choice — and in a startup, every new hire can unintentionally shift those constraints. ML scalability isn't just throughput; it's keeping models consistent, latency low, and freshness high simultaneously.
The lesson: in ML systems, performance is meaningless if personalization isn’t driving revenue. Amazon famously reported that every 100ms of latency costs 1% in sales — but I’ve also watched teams over-optimize for speed alone, pushing for practically instantaneous calls while conversion dropped because recommendations weren’t fresh or relevant. Optimize for the intelligence of the system, not just the speed of the API.
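To make that freshness-versus-throughput tension concrete, here is a generic micro-batching worker sketch. All names are hypothetical (`embed_batch` stands in for whatever ML call scores or embeds products), and this is not PSYKHE AI’s actual ingestion code; it just shows the dial: a bigger batch amortizes model calls for throughput, while the wait deadline bounds how stale any product update can get.

```python
import time
from queue import Queue, Empty

def ingest_worker(jobs: Queue, embed_batch, max_batch=256, max_wait_s=0.5):
    """Drain product-update jobs in micro-batches.

    max_batch raises throughput (one model call amortized over the batch);
    max_wait_s bounds staleness so recommendations stay fresh.
    """
    while True:
        batch, deadline = [], time.monotonic() + max_wait_s
        while len(batch) < max_batch and time.monotonic() < deadline:
            try:
                batch.append(jobs.get(timeout=max(deadline - time.monotonic(), 0)))
            except Empty:
                break  # deadline hit with a partial batch: ship it anyway
        if batch:
            embed_batch(batch)  # hypothetical ML scoring/embedding call
```

Tuning `max_batch` and `max_wait_s` is exactly the trade-off described above: crank the first for raw throughput, tighten the second when freshness is what drives conversion.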
How is your team using generative AI both in the product and on a personal level?
We rely on generative AI daily: on a team ops level, instant call summaries so we never lose context, hypothesis exploration before writing code, and AI-assisted development to ship internal tools in minutes instead of days. For a startup testing ideas at high velocity, that acceleration compounds into a real competitive advantage.
But where it gets exciting is our role in the bigger ecosystem. GenAI is powerful, but fundamentally generic. LLMs weren’t built to infer deep preference. They respond to text prompts, but taste isn’t reliably encoded in text. The same prompt can mean different things to different customers, and hundreds or thousands of products from across catalogs could technically fit.
LLMs can generate, summarize, and retrieve, but they don’t have the grounding to know what any one person actually wants, or why.
At PSYKHE AI, we’re building the taste intelligence layer that fills that gap. Our psychographic OS decodes stable personality traits into a data layer that personalizes at the individual level. We sit between large catalogs and generic prompts, providing personal grounding that turns broad intent into pinpointed relevance.
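As a deliberately simplified illustration of that grounding idea (every name, vector, and the blending scheme here is hypothetical, not PSYKHE AI’s actual system), a taste layer can be thought of as re-ranking generic retrieval scores by per-user trait affinity:

```python
import numpy as np

def rerank(products, text_scores, trait_vectors, user_traits, alpha=0.6):
    """Blend generic query relevance with per-user taste affinity.

    text_scores: relevance from any retriever (an LLM, BM25, etc.).
    trait_vectors: one (hypothetical) psychographic vector per product.
    user_traits: the user's (hypothetical) personality-trait vector.
    """
    user = user_traits / np.linalg.norm(user_traits)
    # Cosine similarity between each product's trait vector and the user.
    sims = trait_vectors @ user / np.linalg.norm(trait_vectors, axis=1)
    blended = alpha * np.asarray(text_scores) + (1 - alpha) * sims
    order = np.argsort(-blended)  # best-first
    return [products[i] for i in order]
```

The point of the sketch is the split: the retriever answers “what fits the prompt”, while the trait layer answers “what fits this person”, and the blend turns broad intent into a personally ranked list.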
We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.