Issue 157: Why 'it works' and 'it ships reliably' are completely different statements. Ft. Louis-François Bouchard, Co-Founder & CTO - Towards AI

Author:
Nishant Singh
April 19, 2026

This edition of Coffee with Calyptus features Louis-François Bouchard, co-founder and CTO of Towards AI and the creator behind the "What's AI" YouTube channel, a brand he started in 2017 when AI was still a niche conversation. Louis-François has spent nearly a decade bridging the gap between AI research and everyday understanding, going from building a community to running the educational infrastructure at one of the field's most recognized learning platforms. Whether you are a builder, a learner, or just AI-curious, his perspective is one of the clearest and most grounded you will find.

You started building the "What's AI" brand back in 2017 as a student, long before AI was a mainstream conversation topic. What was driving you at a time when almost nobody in your circle was paying attention to this space?

Honestly, it came from curiosity first. I was deep in AI academically, and the more I learned, the more I felt there was this weird gap between what people thought AI was and what it actually was. Around me, most people were either not paying attention at all, or they had this sci-fi version of AI in their heads. I kept feeling that if this field was going to matter as much as it clearly would, then it needed better translators, not just more researchers. That was the real motivation behind What’s AI. I wanted to answer the simple question, “what is AI?” in a way normal people could actually use.

At the time, it definitely was not the obvious thing to do. AI was nowhere near as mainstream as it is now, so building content around it felt niche, almost irrational from the outside. But that was also why it felt worth doing. When a field is still early, the people helping shape understanding have a real chance to influence how others enter it. I was learning, building, researching, and explaining at the same time, and that combination made the whole thing feel alive. It was never just content for me. It was a way to make sense of the field in public while helping other people get in earlier and with less confusion than I had. As a side effect, it was a way for me to practice my English as a native French speaker, practice speaking “in front of people” (or at least in front of a camera, haha), and force myself to read more papers and test more techniques, since I was covering them on YouTube. Great motivation!

You've worn so many hats simultaneously: YouTuber, CTO, podcast host, O'Reilly instructor, Discord community builder. How do you decide where your energy goes when everything feels important at once?

I try very hard not to treat all opportunities as equal just because they all sound good. For me, the question is usually: where is the leverage right now? Sometimes that means creating content because one video can clarify an idea for hundreds of thousands of people. Sometimes it means focusing on Towards AI because if we improve the courses, the book, or the product side, that compounds across a lot of builders. And sometimes it means teaching more directly, through workshops or O’Reilly-style material, because some topics need more depth than a video can give. I also have a hard time saying no when it comes to education, to be honest.

The other thing I’ve learned is that “important” is not the same as “urgent” and definitely not the same as “mine to do.” Community building taught me that really well. When you start a Discord, a podcast, a company, and an education platform, you quickly realize you can drown in good ideas. So I usually prioritize based on a mix of impact, compounding value, and personal unfair advantage. Where do I have the clearest signal? Where can I say something useful that not many other people can say in the same way? That tends to cut through the noise pretty fast. I also focus on personal interest, which often results in scattered projects.

You've spent years breaking down complex AI concepts for everyday people, but you also went deep on AI ethics with the Global AI Ethics Institute. Where do you think the biggest gap between public understanding and reality actually lives right now?

I think the biggest gap is that people still talk about AI as if it were one thing. It’s not. They mix together chatbots, foundation models, agents, automation, AGI, recommendation systems, bias, and product hype into one blurry mental model. That creates two bad outcomes at once. On one side, people overestimate what current systems can reliably do. On the other, they underestimate the amount of design, evaluation, constraints, and human judgment required to make them useful in the real world.

That’s also where the ethics side matters much more than people think. Ethical risk is not just some abstract “AI might become dangerous one day” conversation. A lot of it is much more practical and immediate: what data are you using, what biases are you encoding, how much trust are you asking from users, how much human review is in the loop, and what happens when the system is confidently wrong? I’ve spent a lot of time trying to demystify the black box because once people can see the mechanism more clearly, they usually stop being impressed by the wrong things and start asking better questions. That’s where the real public understanding gap is for me.

You built a community from scratch, got it acquired by Towards AI, and then became CTO of that same company. Walk us through what that journey felt like from the inside.

Pretty good research you’ve got there! I don’t even remember discussing this acquisition much, haha.

It felt much less linear from the inside than it probably looks in retrospect. When I started building community, I was not thinking, “this becomes an acquisition and then I become CTO.” I was thinking much more simply: can I build something genuinely useful for people trying to learn AI? That meant content, conversations, resources, Discord, experiments, all of it. Over time, that grew into something with real momentum, and that momentum created opportunities I couldn’t have planned at the start.

The move into Towards AI happened because there was strong alignment, and the founder became a friend of mine and someone I trusted. We both cared about making AI more accessible, but not in a shallow way. Not just hype, not just headlines, but real understanding and practical education. Going from community builder into CTO also changed the game for me. Community teaches you what people are confused about and what they need. The CTO role forces you to turn that understanding into systems, products, teams, courses, and real execution, which I was already doing as Head of AI at a startup, but not in an education-first company. So from the inside, the journey felt like going from explaining the field to helping build the infrastructure around how people learn and apply it. That was a very natural evolution for me, even if the title shift sounds dramatic on paper.

You've been both a researcher at ÉTS and a commercial Head of AI at designstripe, then pivoted heavily into education. What did the industry teach you that academia simply couldn't, and vice versa?

I was actually a researcher at Mila and Polytechnique Montréal during my PhD! ÉTS was for my Master's and my engineering degree.

Industry taught me that “it works” and “it ships reliably” are completely different statements. In research or academic settings, you can spend a lot of time optimizing for novelty, rigor, benchmarks, or a cleaner framing of the problem. In industry, the question becomes much more brutal and much more useful: does this solve the actual problem under real constraints? Budget, latency, bad inputs, messy user behavior, weak data, changing priorities, all of that hits you at once. At designstripe especially, being close to product and applied AI forces you to care about usefulness, not just elegance.

Academia, on the other hand, gave me the depth and discipline that a lot of purely commercial work skips. It trained me to ask whether something is actually true, what assumptions are hiding in the setup, what the limitations are, and whether a nice demo is masking a weak underlying method. My research background through ÉTS, Mila, and Polytechnique Montréal gave me a much stronger foundation for thinking critically, not just building quickly. And education ended up feeling like the place where those two worlds meet. You need the clarity and rigor from academia, but also the practical honesty from industry. Otherwise you either teach theory that never lands, or tactics that don’t generalize.

As someone who has spent nearly a decade making AI accessible to hundreds of thousands of people, how are you personally weaving AI tools into the way you create content, run Towards AI, and teach today?

I use AI constantly now, but probably in a less magical way than people expect. I do not use it as a replacement for thinking. I use it as leverage around thinking. For content, that means research support, structuring, first-pass reframing, pressure-testing explanations, and helping me explore different angles faster. But the judgment layer is still mine. I care a lot about whether something is actually clear, actually true, and actually useful to a skeptical builder or learner. That part does not get outsourced.

Inside Towards AI, it shows up even more operationally. AI helps with internal workflows, course development, experimentation around agents and LLM systems, and generally reducing the overhead between idea and execution. The same is true in teaching. We use it for almost everything, to be honest, but we have very good frameworks around agents and LLMs to ensure high-quality outputs. We even share many, many tips on how to “become AI-first” in one of our courses.

For example, I use AI to help build examples, compare approaches, and show students not just what a model can generate, but how to validate, constrain, and debug what it generates. That’s the big shift for me. A few years ago, a lot of the conversation was “look what AI can do.” Today, my focus is much more “how do you make it reliable enough to be useful?” Prompting is the easy part. Reliability is the real work. That’s the mindset I’m trying to bring into content, products, and education now.
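To make that last point concrete: the "validate, constrain, and debug" pattern Louis-François describes often looks something like the minimal sketch below. This is a generic illustration, not Towards AI's actual tooling; `call_model` is a hypothetical stub standing in for any LLM API, and the schema is invented for the example.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stub standing in for any real LLM API call."""
    return '{"summary": "Ship with checks, not hope.", "confidence": 0.9}'

# The schema we constrain outputs to: field name -> expected Python type.
REQUIRED_FIELDS = {"summary": str, "confidence": float}

def validate(raw: str) -> dict:
    """Parse model output and enforce the schema; raise on anything off-spec."""
    data = json.loads(raw)  # malformed JSON raises a ValueError subclass
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"missing or mistyped field: {field}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

def generate(prompt: str, max_attempts: int = 3) -> dict:
    """Retry until an output passes validation instead of trusting attempt one."""
    for _ in range(max_attempts):
        try:
            return validate(call_model(prompt))
        except ValueError:
            continue  # a real pipeline would log the failure and adapt the prompt
    raise RuntimeError("no valid response within the retry budget")

print(generate("Summarize why reliability is the real work."))
```

The point of the sketch is the mindset shift he describes: the model call is one line, while everything around it exists to catch the cases where the system is confidently wrong.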

We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.