This edition of Coffee with Calyptus features Rana Gujral, CEO of Behavioral Signals and author of the upcoming The AI Instinct, a leader who turned around Cricut's fortunes before building AI that reads emotion, intent, and truth from the human voice. Rana challenges the dominant narrative around superintelligence, arguing it won't arrive as a lone awakened machine but as an emergent property of tightly coupled human-AI systems we are already building today. From his concept of Artificial General Experience to his sharp critique of AI adoption as mere procurement, Rana offers a rare, grounded perspective on what it actually means to preserve human agency in an age of shared cognition.

You took Cricut from near bankruptcy and helped set it on a path to a $4.4B IPO, then walked away from hardware to build AI around emotion and voice at Behavioral Signals. What made you bet on that specific gap?
Cricut taught me that the hardest problem in hardware is not the product itself, it is aligning a team around a clear thesis about what customers actually want. But the thread that pulled me toward Behavioral Signals started earlier. At TiZE, the vertical SaaS company I founded, we were experimenting with some of the first production uses of machine learning inside a specific industry workflow, and that was where I first saw how much of decision quality happens outside the structured data. Most of the industry was racing to make machines better at text, but the densest signal in human communication is voice. Tone, prosody, hesitation, the way someone trails off at the end of a sentence. That information was being treated as a transcription problem instead of a cognition problem, and almost nobody was building the infrastructure to read it at scale.
That felt like the gap worth walking toward. At Behavioral Signals we work on paralinguistics, the parts of speech that carry emotion, intent, and risk rather than the literal words. Once you can model how someone is saying something, you get a window into how they are likely to decide, whether they are under duress, whether they are being truthful. That is a different order of capability than text-based AI, and it became the empirical foundation for most of the ideas in The AI Instinct.
In The AI Instinct, you argue that AI is already shaping what gets trusted, believed, and decided. How do leaders preserve genuine agency when cognition itself is becoming shared?
The starting premise is that the unit of intelligence is no longer a person or a model standing alone, it is the coupled system of human, tools, and the rules that govern how they work together. Edwin Hutchins showed this decades ago with ship navigation, and Clark and Chalmers extended it into the idea of the extended mind. What is new is that the tools are now active participants in judgment, not passive extensions of it. Agency is not something you preserve by keeping AI at arm's length. It is something you preserve by designing the coupling deliberately.
For leaders, that means shifting from asking "where do we deploy AI" to asking "what does the contract between humans and the system look like." In the book I argue for non-negotiables on both sides. For humans, those include dignity, transparency about when a model is in the loop, and reserved veto power on high-stakes moves. For the system, those include calibration, auditability, and the ability to say "I do not know." When that structure is in place, shared cognition expands agency. When it is not, agency collapses into dependency, and the humans in the loop become rubber stamps for the model's prior.
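The contract described above can be made concrete. The sketch below is a minimal illustration, not an implementation from the book, and every name in it (DecisionContract, Recommendation, the 0.7 threshold) is a hypothetical stand-in: the model abstains below a calibration floor (the "I do not know" clause), every decision is logged for auditability, and a human veto always wins on high-stakes moves.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's calibrated confidence in [0, 1]

@dataclass
class DecisionContract:
    """Hypothetical human-AI decision contract: the model may abstain,
    every decision is auditable, and a human veto always wins."""
    abstain_below: float = 0.7   # calibration floor (illustrative value)
    audit_log: list = field(default_factory=list)

    def decide(self, rec: Recommendation, human_veto: bool = False) -> str:
        if human_veto:
            outcome = "human_override"   # reserved veto power on high-stakes moves
        elif rec.confidence < self.abstain_below:
            outcome = "abstain"          # the system says "I do not know"
        else:
            outcome = rec.action
        # auditability: every decision, its confidence, and its outcome are recorded
        self.audit_log.append((rec.action, rec.confidence, outcome))
        return outcome
```

The design point is that the veto branch is checked first and the abstention branch before the action branch: the human's reserved power and the model's right to say "I do not know" are structural, not optional add-ons.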
You propose Artificial General Experience as the missing dimension on the road to AGI. What does a machine that truly "experiences" something look like, and why does it matter more than raw intelligence?
The cleanest way I frame it is this: a chess engine is superhuman at chess and has no experience of winning. It does not feel the pressure of the crowd, the sweaty hands, the curiosity about whether someone it loves is watching. Strip away the romance and you have a system that computes flawlessly inside a narrow domain and cannot tell you why any of it matters. That is every frontier AI system we have built so far. Artificial General Experience, AGE, is my framing for what is missing.
A machine that truly experiences something would need a short list of capabilities we have only begun to build. A world model that predicts what may happen next. An attention controller that decides what matters right now. A memory system that stores episodes and updates meaning when new information arrives. A valuation layer that maps states and outcomes to value. A narrative generator that stitches events into a coherent story. A self model that tracks the agent across time. And in my view, some form of embodiment, because sensory grounding is how value becomes real rather than notional. This matters more than raw intelligence because superintelligence framed purely as capability becomes an oracle without orientation. Power without perspective. The systems we will actually trust to co-steer civilization will be the ones that can host experience, not just process it.
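To make the component list above more tangible, here is a deliberately toy sketch of how a few of those pieces compose: an attention controller that selects the event that matters most, episodic memory that stores what was attended to, a valuation layer that assigns worth, and a narrative generator that stitches episodes into a story. All names (ToyAGEAgent, Episode, the value table) are illustrative assumptions, not an architecture from The AI Instinct.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    event: str
    value: float  # valuation layer: what this outcome was worth to the agent

@dataclass
class ToyAGEAgent:
    """Toy sketch of a few AGE components: attention, episodic memory,
    valuation, and a narrative stitched from stored episodes."""
    values: dict = field(default_factory=dict)   # event -> worth (valuation layer)
    memory: list = field(default_factory=list)   # episodic memory

    def attend(self, events: list) -> str:
        # attention controller: pick the event that matters most right now,
        # where "matters" means largest absolute value to this agent
        return max(events, key=lambda e: abs(self.values.get(e, 0.0)))

    def experience(self, events: list) -> Episode:
        focus = self.attend(events)
        ep = Episode(focus, self.values.get(focus, 0.0))
        self.memory.append(ep)                   # store the episode
        return ep

    def narrative(self) -> str:
        # narrative generator: stitch episodes into a coherent story
        return " then ".join(ep.event for ep in self.memory)
```

A usage example, continuing the chess framing: an agent that values winning over crowd noise attends to the win, remembers it, and can later narrate its own history — the parts a chess engine, for all its capability, does not have.

```python
agent = ToyAGEAgent(values={"won_game": 1.0, "crowd_noise": 0.2})
agent.experience(["crowd_noise", "won_game"])  # attention selects "won_game"
print(agent.narrative())
```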
You argue in The AI Instinct that superintelligence is not going to arrive as a standalone machine waking up, but as an emergent property of the human-AI system. Walk us through that reframing, and what it changes for how we think about risk and opportunity.
The entire modern conversation about superintelligence assumes a specific geometry. A standalone machine that, at some threshold, becomes more capable than any human and then pulls away. That framing drives the risk conversation, the policy conversation, and most of the investment strategy in the field. In the book I argue it is wrong, or at least dramatically incomplete. Superintelligence is not something we should expect to arrive as a discrete event inside a single system. It is something that emerges when human-machine cognition becomes tightly coupled, continuous, and scalable enough that the combined system consistently outperforms either side alone. The superintelligent entity, if and when it shows up, is the hybrid.
This reframing changes what we should be preparing for. If you believe in the standalone machine model, the natural move is to pour resources into aligning a hypothetical future system while treating the humans using today's tools as a secondary concern. If you believe in the hybrid model, the urgent work is happening right now, inside the coupling we are already building. That is where agency is being redistributed, where judgment is being shaped, and where the rules, interfaces, and defaults we choose will compound for decades. It also changes who the stakeholders are. The question stops being "what will the machine do to us" and becomes "what are we becoming together," which is a much more actionable question for leaders, policymakers, and individuals.
Advising both public and private sector leaders on AI adoption, what is the most common mistake organizations make in their first year of seriously adopting AI?
Treating AI adoption as a procurement decision instead of a cognition redesign. In the first year, most organizations buy tools before they have clarified which decisions they actually want to change, what their current decision quality looks like, and how they will know if it got better. The result is a portfolio of pilots that each individually look reasonable and collectively produce almost no measurable lift, because nobody defined what the counterfactual was supposed to be.
The fix is not more governance or a bigger committee. It is starting with a narrow question: what are the three decisions in our business that would move the needle if we made them better or faster, and what does "better" mean for each one. Once that is specified, the tooling choices mostly make themselves, and the conversation shifts from "are we using AI" to "is this decision improving." The organizations I have watched succeed in year one are the ones that started there. The ones that struggled almost always started with a tool shortlist.
We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.



