Issue 102: How NFTs Teach You More About Data Pipelines than Any FAANG Job Ft. Matheus Silva, Sr Software Engineer @Yearn Finance

Author: Nishant Singh
May 4, 2025

"Most scalability advice breaks down the moment you touch the blockchain...."

This week on Coffee with Calyptus, we sit down with Matheus Silva, a systems architect who’s scaled everything from NFT marketplaces at Magic Eden to IoT fleets at Ikon and optimized conversion at iFood. He shares hard-won lessons in distributed systems, scaling event-driven pipelines, and how AI is (and isn’t) changing the dev game.

From building NFT marketplaces at Magic Eden to architecting IoT vehicle platforms at Ikon via X-Team, how has your view of “scalability” evolved across such radically different domains?

Despite both involving massive scale, the two problems are extremely different.

At Magic Eden there were two types of pressure on the system: the volume of data being created on the blockchain, plus the number of concurrent users on the platform. At Ikon the pressure came from only one side, i.e. the massive amount of data each IoT device produced per second that had to be consumed. That makes a big difference in architecture, in the system design per se, because you need to scale or improve different parts of your infrastructure to handle each type of pressure, and do it in a way where the end user never suffers from stale data. At Ikon the scalability work was more about consumption throughput plus database improvements, so that such a volume of data could be stored and queried correctly. That involves much more of a data-analytics point of view, if you will, than pure software engineering!

You’ve engineered systems for both blockchain ecosystems and mission-critical hospital platforms, which project stretched your skills the most, and why?

When working with blockchain and distributed systems, you quickly realize that the scalability challenges are fundamentally different from those encountered in more traditional, centralized environments. Mainstream scalability techniques, like those you might see at companies such as Notion or Facebook, are often optimized for high data throughput and traffic, but don’t always translate directly to decentralized or event-driven architectures.

For example, at Magic Eden, one of the most significant challenges we faced was processing Polygon blockchain events at extremely high speeds, ensuring that users always had access to the most up-to-date (non-stale) data. This was especially complex with Polygon NFT collections, which can have a massive number of items. To address this, we had to architect a data pipeline capable of processing incoming data streams within microseconds.

In these scenarios, leveraging technologies like Kafka is almost a given, but the real challenge lies in fine-tuning the system—specifically, optimizing the number of partitions and configuring consumer groups to maximize throughput. However, increasing parallelism in Kafka often leads to another bottleneck: a surge in database connections, which can place significant load on the backend and introduce new performance constraints.
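For illustration, here is a minimal sketch of that kind of consumer setup using kafkajs in TypeScript; the topic name, group id and concurrency number are assumptions for the example, not Magic Eden's actual configuration.

```typescript
import { Kafka } from "kafkajs";

// All names and values here are illustrative, not the real pipeline's settings.
const kafka = new Kafka({
  clientId: "nft-indexer",
  brokers: ["localhost:9092"],
});

const consumer = kafka.consumer({ groupId: "polygon-nft-events" });

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topics: ["polygon.nft.events"], fromBeginning: false });

  await consumer.run({
    // Throughput lever: consume several partitions of the topic in parallel.
    // Raising this (together with the topic's partition count) increases parallelism,
    // but every extra in-flight handler also tends to open more database connections.
    partitionsConsumedConcurrently: 8,
    eachMessage: async ({ partition, message }) => {
      // Decode the on-chain event and hand it to downstream storage so reads stay fresh.
      const event = JSON.parse(message.value?.toString() ?? "{}");
      console.log(`partition ${partition}:`, event);
    },
  });
}

run().catch(console.error);
```

The tuning loop described above is essentially matching the topic's partition count to the size of the consumer group, which is exactly where the database-connection surge mentioned next tends to appear.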

This is where deep knowledge of distributed systems comes into play. It’s not just about scaling one component, but about understanding how each part of the stack interacts under load. For instance, at a certain scale, even sharding the database and using connection poolers like PgBouncer may no longer be sufficient. We reached a point where PgBouncer couldn’t keep up with the number of required connections, and further sharding introduced additional complexity.
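Before (or alongside) PgBouncer and sharding, the first line of defense against that connection surge is usually capping each worker's own pool. A small sketch with node-postgres; the connection string, table and pool size are made up for the example:

```typescript
import { Pool } from "pg";

// Hypothetical connection string, table and limits, purely for illustration.
const pool = new Pool({
  connectionString: "postgres://indexer:secret@db-host:5432/marketplace",
  max: 10,                     // hard cap on connections this worker may open
  idleTimeoutMillis: 30_000,   // return idle connections to the server
  connectionTimeoutMillis: 5_000,
});

export async function upsertOwner(tokenId: string, owner: string): Promise<void> {
  // Each query borrows a connection from the shared pool instead of opening its own,
  // so total connections stay near (workers x max) rather than one per in-flight message.
  await pool.query(
    `INSERT INTO nft_owners (token_id, owner)
       VALUES ($1, $2)
       ON CONFLICT (token_id) DO UPDATE SET owner = EXCLUDED.owner`,
    [tokenId, owner]
  );
}
```

PgBouncer plays the same role one level up, multiplexing many application connections onto fewer real Postgres backends; the point above is that even that layer eventually hits a ceiling.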

Ultimately, the key is knowing how to parallelize processing efficiently, leveraging the right tools for each stage, and always anticipating that each new scalability solution will introduce its own set of challenges. In my experience, true expertise in scaling distributed systems comes from understanding these trade-offs and continuously evolving the architecture as you reach new thresholds of scale.


At iFood, you boosted conversion rates by optimizing for low-tier devices, what’s your playbook for creating tech that performs under pressure and still delights users?

Three words: study the user. If you don’t understand what users are trying to do or how they use your product, you won’t be able to meet their needs or make their lives better. Everything else falls into place when you make the user the main driver of your technical decisions.

From the beginning, POS had the perception that the whole website was the problem. We were using Hotjar at the time, and despite the tool being underwhelming for what it cost, I could see where users were struggling.

The real problem was the ambiguity of the checkout flow: there were several CTAs and users got lost.

I also identified loading/performance issues on a few pages across the food-ordering flow, which helped point the solutions in a single direction.

We also had a few customers we could get in touch with, and that user analysis helped us identify their struggles!

These findings were key to applying the right adjustments!

You’ve worked with everything from Solana and Polygon to AWS Kinesis and EventBridge, what’s been your biggest challenge balancing cutting-edge innovation with real-world production stability?

Having a controlled test environment where you can still experiment, and if everything goes wrong you can fall back to another solution. But most of the time you can draw on other companies' experience to see how much you can get out of a certain technology; it can even turn out better than a much older one you feel far more comfortable with.
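As a rough sketch of that idea, a small slice of traffic can be routed to the new technology behind a flag, with the proven path as the fallback; the names and the percentage below are invented for the example.

```typescript
// Illustrative only: the flag, percentage and handler names are invented.
const EXPERIMENT_TRAFFIC_PERCENT = 5;

interface EventHandler {
  handle(event: unknown): Promise<void>;
}

export async function route(
  event: unknown,
  experimental: EventHandler,
  stable: EventHandler
): Promise<void> {
  const useExperiment = Math.random() * 100 < EXPERIMENT_TRAFFIC_PERCENT;

  if (!useExperiment) {
    return stable.handle(event);
  }

  try {
    // Try the new technology on a small, controlled slice of traffic.
    await experimental.handle(event);
  } catch (err) {
    // If anything goes wrong, fall back to the solution we already trust.
    console.error("experimental path failed, falling back", err);
    await stable.handle(event);
  }
}
```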

With your full-stack and serverless expertise, how are you starting to weave AI into your workflows or products? Any use cases where it’s already adding value or changing the game?

Not entirely yet, but if there’s one thing AI can do, it’s helping refine what users write or making their lives easier. For example: when a user writes X into a report, you can leverage AI to speed them up and rewrite it in a much more detailed manner.
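A minimal sketch of that kind of report-rewriting assist, assuming the official openai Node SDK; the model name and prompt are assumptions, not a product Matheus described:

```typescript
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment; model and prompt are assumptions.
const client = new OpenAI();

export async function expandReportEntry(draft: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Rewrite the user's short report note into a clear, detailed paragraph. " +
          "Do not invent facts that are not in the note.",
      },
      { role: "user", content: draft },
    ],
  });

  // Fall back to the original draft if the model returns nothing useful.
  return response.choices[0]?.message?.content ?? draft;
}
```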

As a user, well, I’m now leveraging it to parallelize things or handle tasks that take a lot of laborious time. That’s the advantage of AI: at the end of the day it’s a prediction system that takes some context and reasons over that context. It won’t replace developers by any means, especially since I’m seeing AI get even dumber as time goes by.

Solidity Challenge 🕵️‍♂️

The Tree Bard wants to find the maximum depth of his verse structure. What's the flaw in his poetic function?


Solidity Challenge Answer ✅

The base case should check if the root is null, not if its value is 0, as 0 could be a valid node value.
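A minimal TypeScript sketch of the fix the answer describes; the node shape and names are assumptions rather than the original Solidity code:

```typescript
interface TreeNode {
  value: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Correct base case: recursion stops when the node itself is null.
// Checking `root.value === 0` instead would cut traversal short at any
// legitimate node that happens to store 0.
function maxDepth(root: TreeNode | null): number {
  if (root === null) return 0;
  return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}
```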