In this edition of Coffee with Calyptus, we’re joined by an AI and data strategy expert who’s been advising top institutions like London Business School and The Alan Turing Institute. With deep experience across AI applications, blockchain, and tokenomics, Dr Stylianos Kampakis shares valuable insights on what executives truly need from AI, the role of tokens in business models, and how to bridge the gap between academic theory and real-world business solutions.

You've been running The Tesseract Academy since 2018, advising everyone from London Business School to The Alan Turing Institute on AI and data strategy. What's the biggest gap you see between what academic institutions think businesses need from AI versus what CEOs are actually asking you to solve?
Most executives aren’t asking for cutting-edge research; they want a pragmatic AI strategy tied to data readiness, KPIs and fast time-to-value. The biggest gap: universities teach models; CEOs need operating playbooks (use cases, data quality, integration, change management). Start with a simple portfolio of automation and analytics wins, underpinned by clear metrics and basic data governance, not a lab. In our executive sessions we emphasise the "descriptive → predictive → prescriptive" progression and tackle common blockers: stakeholder expectations, legacy systems, skills gaps and data quality.
Your "AI Case Studies Bible" spans 30 industries in 11,000 words of applications, and you also lead tokenomics audits at Hacken and founded Janus Protocol. What advice do you have for people in crypto who want to adopt AI in their products?
Don’t bolt AI on to "look smart"; make it earn its keep. Add AI where it measurably improves the product: fraud/risk signals, support automation, recommendation/routing, or research copilots for devs and analysts. Begin with a build-vs-buy decision and a narrow, testable use case; only then consider longer-horizon AI IP. And if you’re tokenised, align utility design with real network effects first (governance, access, reputation, incentives), then layer AI where it boosts those loops. Our curriculum reflects this separation of concerns (Module 4: Blockchain & Tokenomics alongside AI modules).
At The Alan Turing Institute, you researched how LLMs can be used for cyber-defense. Now as Partner and CAIO at Xenet AI, how are you using AI agents or automation to scale The Tesseract Academy's consulting work?
We deploy agents for marketing and sales ops: lead enrichment and scoring, webinar funnel orchestration, content repurposing, and follow-ups—simple systems, big leverage. On the learning side, we’re rolling out a Slack-based AI coach with a RAG knowledge base to nudge mastery and answer alumni questions from our materials—automation that compounds human coaching, not replaces it. It’s not rocket science; it’s disciplined workflows that free experts for higher-value work.
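To make the RAG idea concrete, here is a minimal sketch of the retrieval step behind such a coach: index a few knowledge-base snippets, then answer a question by returning the best-matching one. The snippets and the word-overlap scoring are illustrative assumptions; a production system would use embeddings and a vector store rather than term counts.

```python
# Sketch of RAG-style retrieval: score each knowledge-base snippet
# against the question and return the closest match. Hypothetical
# content; word overlap stands in for embedding similarity.
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "Descriptive analytics summarises what happened using historical data.",
    "Predictive analytics forecasts what is likely to happen next.",
    "Prescriptive analytics recommends actions to reach a desired outcome.",
]

def tokenize(text):
    return [w.strip(".,?").lower() for w in text.split()]

def score(query, doc):
    """Cosine similarity over raw term counts (a stand-in for embeddings)."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query):
    """Return the snippet most similar to the query."""
    return max(KNOWLEDGE_BASE, key=lambda doc: score(query, doc))

print(retrieve("Which analytics recommends actions?"))
```

In a real deployment the retrieved snippet would then be passed to an LLM as context for the coach's answer; the retrieval step is what keeps answers grounded in your own materials.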
You have advised many blockchain projects at UCL's Centre for Blockchain Technologies. What are the things founders must get right when they try to tokenize their business model to create network effects?
Have a reason for the token to exist—before airdrops and hype. Tie utility to tangible behaviours (access, staking/locking for governance, reputation, fee rebates), define value accrual/obligations, and map incentives to real network effects—not vanity metrics. Stress-test market design (supply, emissions, liquidity, vesting) and ship a minimal, use-case-driven token—then iterate with on-chain data. A sandboxed, model-first approach keeps teams honest.
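The supply-side stress-test can be sketched in a few lines: project circulating supply month by month from a fixed emission schedule plus a team allocation with a cliff and linear vesting. All figures below are hypothetical placeholders, not a recommended design.

```python
# Illustrative token-supply model: fixed monthly emissions plus a
# team allocation with a 12-month cliff and 24-month linear vest.
# Numbers are made up for the sketch.
TOTAL_SUPPLY = 1_000_000_000
TEAM_ALLOCATION = 200_000_000   # 20% of supply
MONTHLY_EMISSION = 10_000_000   # reward emissions per month

def team_vested(month, cliff=12, vest_months=24):
    """Team tokens unlocked by a given month (0 before the cliff)."""
    if month < cliff:
        return 0
    return min(TEAM_ALLOCATION,
               TEAM_ALLOCATION * (month - cliff + 1) // vest_months)

def circulating(month):
    """Circulating supply: cumulative emissions plus vested team tokens."""
    emitted = min(MONTHLY_EMISSION * month, TOTAL_SUPPLY - TEAM_ALLOCATION)
    return emitted + team_vested(month)

for m in (6, 12, 24, 36):
    print(f"month {m}: {circulating(m):,} circulating")
```

Even a toy model like this surfaces the cliff-unlock spike at month 12 that teams routinely underestimate; the real audit work is layering demand, liquidity and staking sinks on top of this supply curve.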
As a 2017 YTILI Fellow with the U.S. State Department and a Chartered Statistician, your experience bridges policy, academia, and startups. If a government or regulatory body asked you about AI regulation, what would your two cents be?
I worry about innovation being smothered by checkbox compliance; but a risk-based, outcomes-oriented approach is necessary. Practical stance: adopt lightweight governance now—data quality and provenance, model transparency where it matters, human-in-the-loop for high-impact decisions—and map to emerging frameworks (EU AI Act, NIST AI RMF) without over-engineering. Most value comes from clear problem framing and data discipline long before fancy models—exactly where many projects stumble.
We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at



