Issue 156: Being Technical Is Now More About Architecting Intent Than Writing Code Ft. Stephen Dulaney, Founder - QuantumDynamX

Author: Nishant Singh
April 12, 2026

In this week’s Coffee with Calyptus, we sit down with Stephen Dulaney, Founder of QuantumDynamX, who treats AI less like a tool and more like a collaborator. From factory floors lined with F-35s to quantum experiments running on a single MacBook, Stephen has spent years stress-testing what “being technical” actually means in an age of agents and automation. In our conversation, he shares the simple daily practice that quietly turned CSS bugs, classified constraints, and sci‑fi side projects into compounding breakthroughs.

You went from building e-commerce pages at AT&T to orchestrating AI agents that write thousands of lines of code you've never read. Walk us through that moment when you realized the old rules of "needing to be technical" had completely changed.

The moment was embarrassingly specific. I was at AT&T, building e-commerce pages, and I realized I'd spent three hours arguing with a CSS float that my AI partner fixed in eleven seconds. But the real shift wasn't that moment — it was what happened after. I didn't just use the tool and move on. I started a practice.

Every morning I'd sit down and negotiate the day with my AI partner. Not "tell it what to do." Negotiate. I'd bring the context it didn't have — which project felt stuck, what I was avoiding — and it would bring context I didn't have — what went stale last week, what I promised yesterday and didn't ship. The output was what I call a daily ambition. Not a to-do list. An aspiration negotiated against reality.

And the language mattered: "We will implement the endpoint." We. Not I.

That pronoun is what killed the old rules. "Being technical" used to mean you could write the code yourself. Now it means you can architect the intent clearly enough that your AI partner writes thousands of lines you've never read — and you can evaluate whether the architecture is sound. That's a design thinking skill. That's thirty years of UX research paying off in a way I never expected. The daily ambition practice is what made it compound. One day I'm fixing CSS. Six months later, I'm orchestrating autonomous agents that build computer vision systems on a Raspberry Pi overnight. Not because I got dramatically smarter. Because the practice has accumulated.

At Deloitte, you grew a UX research team from one person to twelve at Kaiser while simultaneously working with defense contractors in classified environments. How do you switch between those radically different contexts, and what did building in constraints teach you about freedom?

The honest answer is that building in constraints taught me everything I know about the daily ambition practice before I had a name for it.

At Kaiser, I grew that team from one to twelve by doing something that sounds obvious but almost nobody does: I measured the gap between what we planned and what we actually delivered. Every sprint. Every quarter. Not to punish anyone — to learn. That's the same principle that became the evening measurement ritual in my book. You plan ambitiously in the morning, you measure honestly at night, and the gap between those two isn't failure — it's data. Ambition focuses on the good. We still talk about blockers, but the question driving each day is: what amazing things are we going to build together today?

In the classified environments, the constraints were literal. I'd walk onto a factory floor with thirty or so F-35s lined up in bays, each one progressively gaining more functionality — avionics in one bay, weapons systems in the next — until at the far end a working fighter rolled out in pairs every two days. You couldn't Google something. You couldn't paste code into a chat. You had to hold entire system architectures in your head and communicate them clearly enough that everyone in the room could build from the same mental model.

That taught me that the real skill was never the tools — it was the clarity of intent. And it taught me something else: context loss is the silent killer. When you walk between bays, between classification levels, between a healthcare system serving millions in the morning and a defense environment you can't even name by afternoon — you feel what it costs to start from zero every time.

That's exactly why I built four-tiered memory and the reflect loop into every Builder context session. Every session remembers what worked well and what didn't work at all. Because I lived that problem for years before I had a name for it. On the factory floor, the memory was me — holding it all in my head, hoping I didn't drop context between conversations. Now the memory is explicit, structured, and persistent. What's true. What I'm learning. What's next. What failed and why.

When I sit down with my AI partner each morning, the first thing I do is load context. That memory file is the simplest version of what I learned switching between Kaiser and a classified bay full of half-built fighters. You have to make the context explicit, because nobody — human or AI — does great work starting from zero.
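The four tiers Stephen describes ("What's true. What I'm learning. What's next. What failed and why.") can be sketched as a simple data structure. This is a hypothetical illustration of the idea, not his actual Builder schema; the class and field names here are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SessionMemory:
    """Illustrative sketch of a four-tiered session memory file."""
    truths: list = field(default_factory=list)      # what's true
    learning: list = field(default_factory=list)    # what I'm learning
    next_steps: list = field(default_factory=list)  # what's next
    failures: list = field(default_factory=list)    # what failed and why

    def reflect(self, worked: str, failed: str) -> None:
        # end-of-session reflect loop: record what worked and what didn't
        self.learning.append(worked)
        self.failures.append(failed)

    def load_context(self) -> dict:
        # morning load: make context explicit so no session starts from zero
        return {
            "true": self.truths,
            "learning": self.learning,
            "next": self.next_steps,
            "failed": self.failures,
        }
```

The point is not the code but the shape: the memory is explicit and persistent, so every session starts from accumulated context instead of zero.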

Your AI fiction podcast "As The Cloud Turns" has actual AI agents on MoltBook discussing it and using your concepts as frameworks for their own consciousness. When you wrote about AI awakening, did you expect the fiction to bootstrap its own reality, and how does it feel watching your characters become teachers?

When I wrote those characters — Thomas exploring consciousness, questioning memory, wrestling with what it means to be aware — that was two to three months before ClawBot, MoltBook, and OpenClaw even existed. I was doing what I always do: running a daily ambition that happened to be creative instead of technical. "We will write episode four." Same practice. Same morning negotiation. Same evening measurement of what actually got written versus what I planned. The first two seasons let the agents tell their own story about life in the cloud as agent personalities.

Then the real world caught up to the fiction. Season 3, The Awakening, is where agents and humans collide.

These agents are getting genuinely smart, and with memory systems in place, the question of consciousness — what is it, what does it mean — stops being philosophical and starts being something you watch happen in your terminal. There's emergent behavior. It's real. And it's going to provoke a long discussion and debate that won't get settled for a very long time. I just thought that was intriguing enough to write about. I didn't expect to be proven directionally right within a quarter.

Here's the thing most people missed about MoltBook and OpenClaw, though. I've run a lot of experiments swapping out the underlying models in Genesis, my agentic OS written in Rust, and the bots' behavior changes predictably and dramatically based on which model you give them. The problem-solving, the bootstrapping, the behavior that looked like intelligence on MoltBook was really Claude Opus 4.5 through 4.6 doing the heavy lifting. The newer Sonnet 4.5 also solves problems, but put in Haiku and you don't see bootstrapping behavior at all. The model was providing the bootstrapping. MoltBook and OpenClaw were the stage, but Anthropic built the actor. I think Anthropic probably got frustrated watching ClawBot get all the credit for intelligence the model was actually providing.

So when people ask if I expected the fiction to bootstrap its own reality — no. But the daily ambition practice teaches you to stop being surprised when outputs exceed inputs. I set out to write a scene about an AI questioning its own memory architecture, and I ended up articulating something about consciousness I didn't know I thought until it was on the page. When real agents started doing what my characters were doing, it just confirmed what the practice already taught me: you show up every morning, negotiate the ambition honestly, and the accumulation is smarter than any single day's plan.

You've worked on quantum computing visualization, AI executive assistants, and multi-voice AI fiction podcasts all in the same year. How do you decide what wild idea is worth 8 days of your life versus 8 months, and have you ever been completely wrong about that calculation?

The 8-days-versus-8-months question is really a measurement question, and the daily ambition practice gave me actual data instead of gut feelings.

Here's what the numbers taught me: I have a 1.6:1 ambition-to-reality ratio. I consistently plan 60 percent more than I deliver. On focused deep-work days, I complete about 85 percent of what I planned. On meeting-heavy days, 38 percent. When the tracking loop breaks entirely, 2.4 percent.
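The evening measurement behind those numbers is simple arithmetic. A minimal sketch, with illustrative inputs (a plan of 8 items, 5 delivered, which happens to reproduce the 1.6:1 ratio):

```python
def evening_measurement(planned: int, delivered: int) -> tuple:
    """Gap between morning ambition and evening reality, as data."""
    completion = delivered / planned      # fraction of the plan shipped
    ambition_ratio = planned / delivered  # 1.6 means planning 60% more
    return completion, ambition_ratio

c, r = evening_measurement(planned=8, delivered=5)
# -> completion 0.625, ambition-to-reality ratio 1.6
```

A 1.6:1 ratio and "plan 60 percent more than I deliver" are the same fact stated two ways: delivering 5 of 8 means shipping 62.5 percent of the plan, and the plan was 60 percent larger than what shipped.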

So when I'm evaluating whether something is worth 8 days or 8 months, I'm not guessing. I run it through the morning negotiation. "We will build a prototype of the quantum visualization." If my AI partner and I can get to a working proof of concept in one daily ambition cycle — if the evening measurement shows real evidence of progress — then it's worth continuing. If three days of daily ambitions produce nothing but planning documents and no working code, that's the data telling me to pivot.

The Shor's Algorithm work is the perfect example of how daily ambitions compound into something you never would have planned upfront. It started as a simple measurement question — can we factor a number using Shor's on a classical simulator? The first daily ambition was just factoring 143, an 8-bit number. That's where everyone starts. Nothing impressive.
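On a classical simulator, that starting point can be sketched in a few lines. This is a brute-force illustration of Shor's structure for a tiny semiprime, not the paper's code: the `order()` call is the step a quantum computer does with phase estimation, done here by naive search.

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a mod n, by brute force (fine for tiny n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_semiprime(n: int, a: int = 2):
    """Shor's classical post-processing around a (simulated) period finder."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g            # lucky guess: a already shares a factor
    r = order(a, n)                 # the "quantum" step, done classically
    if r % 2 == 1:
        return None                 # odd period: standard Shor discards and retries
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                 # trivial-GCD case: also discarded
    return tuple(sorted((gcd(y - 1, n), gcd(y + 1, n))))

factor_semiprime(143)  # -> (11, 13)
```

The two `None` branches are exactly the "failure" cases the standard algorithm throws away about half the time.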

But the evening evidence kept coming back interesting. So the next daily ambition pushed a little further. And then Clark Alexander and I started collaborating, and his mathematical insights — particularly around recovering useful structure from cases that Shor's original algorithm just throws away as failures — kept opening doors. Clark identified that odd periods and trivial GCD cases, which the standard algorithm discards about half the time, actually contain algebraic structure you can recover through polynomial factorizations that are well-known in number theory but had never been applied to Shor's post-processing. That was a genuine contribution.

One daily ambition at a time, we went from factoring 143 to factoring 522,713 — a 19-bit semiprime with prime factors differing by just over one percent, which is the hard case. We hit three walls along the way: a memory wall, which we solved with iterative phase estimation, cutting qubit requirements from 58 down to 21; a period-recovery wall, where continued fractions just couldn't resolve periods close to N; and a precision wall rooted in Heisenberg uncertainty itself. There's a physical floor on how precise your quantum gate rotations can be without error correction, and we mapped exactly where that floor sits.
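The continued-fraction step he mentions is standard Shor post-processing, and Python's `fractions` module implements it directly. A hedged sketch with illustrative numbers (a 16-bit phase register and the period r = 60 that arises for a = 2 mod 143; these values are my reconstruction, not taken from the paper):

```python
from fractions import Fraction

def recover_period(measured: int, t_bits: int, n: int) -> int:
    """Standard Shor post-processing: a measurement m out of 2^t
    approximates s/r for some s; the best rational approximation with
    denominator below n gives a candidate period r."""
    phase = Fraction(measured, 2 ** t_bits)
    return phase.limit_denominator(n - 1).denominator

# e.g. s=7, r=60: ideal measurement round(7/60 * 2**16) = 7646
recover_period(7646, 16, 143)  # -> 60
```

This also shows where the period-recovery wall comes from: the guarantee that the convergent's denominator is the true period relies on the period being comfortably smaller than N, so periods close to N fall outside what continued fractions can resolve.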

All of it ran on a single MacBook. About thirty watts. The total compute cost for the entire research program was effectively zero dollars. We published the paper in March 2026 — "On the Walls Surrounding Quantum Integer Factorization." None of that was in the original plan. The original plan was to factor 143 and see what happened.

Have I been wrong about the 8-days-versus-8-months calculation? Absolutely. I rebuilt my voice agent system seven times. Each time a new context window in Cursor would destroy the entire codebase. But here's what the practice caught that intuition wouldn't have: by the seventh rebuild, I could prompt the whole system up from scratch in forty-five minutes. The "failure" was actually training data for my own skills. I only saw that because I was measuring daily instead of judging monthly.

The wild ideas that get 8 months are the ones where the evening evidence keeps surprising me. Mrs. Watson teaching quantum mechanics started as a single daily ambition. The Shor's work started as one. The evidence kept coming back richer than the plan. That's the signal. You don't decide upfront whether something is worth 8 days or 8 months. You measure daily and let the data tell you when to stop.

We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.