I started working on Neova’s projects at the very beginning of the summer.
I was initially approached by Quentin, Neova's CEO, who asked me to lead the project from end to end. The vision was to build a decentralized drive within the Web3 ecosystem, specifically leveraging IPFS technologies.
When I joined, the project was a patchwork of contributions from successive interns and apprentices. It relied on a 'traditional' stack—microservices, Kubernetes, and JavaScript-based tech like NestJS—but frankly, it was a mystery how it ever managed to run at all. The teams were constantly applying 'band-aid' fixes directly on the servers just to keep things afloat.
With about ten different microservices in place, our goal was to start from scratch and get the foundations right.
I was quickly joined by my best friend, my long-time collaborator over the past five years. Together, we decided to scrap the existing Kubernetes setup, which was poorly configured and failing 90% of the time, and to drastically simplify the system by switching to Coolify. Since we were running on bare-metal servers rather than in a cloud environment (GCP, AWS, etc.), the benefits Kubernetes offered didn't justify the time it would have taken to reconfigure it properly.
We also migrated the entire stack to the appropriate hardware, as the production environment was inexplicably running on the weakest server.
My role involved:
Overhauling the entire CI/CD pipeline.
Mentoring an apprentice.
Implementing unit, E2E, and integration tests (a sketch follows this list).
Refactoring most of the backend services.
Rebuilding the front-end applications that were no longer functional.
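To give an idea of the kind of coverage we added, here is a minimal E2E test sketch using Jest and Supertest, the default testing stack in NestJS. The `HealthModule` and the `GET /health` endpoint are hypothetical stand-ins, not our actual services.

```typescript
// health.e2e-spec.ts — minimal E2E test sketch
// (HealthModule and GET /health are hypothetical examples, not the real services)
import { Test } from '@nestjs/testing';
import { INestApplication } from '@nestjs/common';
import * as request from 'supertest';
import { HealthModule } from '../src/health/health.module';

describe('Health (e2e)', () => {
  let app: INestApplication;

  beforeAll(async () => {
    // Build an isolated Nest application from the module under test.
    const moduleRef = await Test.createTestingModule({
      imports: [HealthModule],
    }).compile();

    app = moduleRef.createNestApplication();
    await app.init();
  });

  afterAll(async () => {
    await app.close();
  });

  it('GET /health returns 200 with a status payload', () => {
    return request(app.getHttpServer())
      .get('/health')
      .expect(200)
      .expect(({ body }) => {
        expect(body.status).toBe('ok');
      });
  });
});
```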
We also took the opportunity to improve our developer experience and internal tools by deploying Sentry, Umami, Kuma, Homer, Vaultwarden, Infisical, RabbitMQ, Keycloak, Metabase, and Nexus (Docker). We established proper production and staging environments, integrated automated testing, and set up Renovate for automated dependency updates.
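On the observability side, wiring a NestJS service to report to Sentry only takes a few lines. The sketch below assumes `@sentry/node` is installed, a `SENTRY_DSN` environment variable, and the standard NestJS scaffold names; it is an illustration rather than our exact configuration.

```typescript
// main.ts — minimal Sentry bootstrap sketch for a NestJS service
// (assumes @sentry/node, a SENTRY_DSN env var, and the default AppModule scaffold)
import { NestFactory } from '@nestjs/core';
import * as Sentry from '@sentry/node';
import { AppModule } from './app.module';

async function bootstrap() {
  // Initialize Sentry before the app starts so early failures are captured too.
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment: process.env.NODE_ENV ?? 'development',
    tracesSampleRate: 0.1, // sample 10% of transactions to keep overhead low
  });

  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}

bootstrap();
```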
The result: In just a few months, we went from a system crashing multiple times a day—requiring constant manual supervision and hitting 90% load with only 100 daily users—to a stable app.
Today, the platform supports peaks of 10k users per day with minimal latency and no need for 24/7 monitoring. All this was achieved at the same infrastructure cost, while ensuring the system is now scalable and maintainable. 👌🏻
It has been a fantastic experience as Lead Tech / CTO, with excellent feedback from both the business side and the clients!
