No measurement, no optimization.
There's a common saying in programming: "Don't optimize what you haven't measured yet." You can only truly appreciate this advice when you experience it yourself. And I definitely did.
I always look forward to projects where I can optimize the code after the first release. It's satisfying to refine code and make it cleaner and more efficient. But I've learned the hard way that optimizing without measuring the impact can be a disaster, especially with my limited experience. Most of our performance issues stem from database queries, not from our program’s architecture.
After our initial release, there were two queries I had written separately because of their complexity. I had planned to optimize them later, and I thought combining them into one would be more efficient. I tested the change, and everything looked fine: the output was identical and there were no errors.
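The real queries are too project-specific to share, but the shape of the change was roughly like this Laravel sketch, which assumes invented `orders` and `refunds` tables and runs inside a Laravel app: two aggregate queries merged into one via joined subqueries.

```php
use Illuminate\Support\Facades\DB;

// Before: two separate aggregate queries, stitched together in PHP afterwards.
$spent = DB::table('orders')
    ->selectRaw('user_id, SUM(total) AS spent')
    ->groupBy('user_id');

$refunded = DB::table('refunds')
    ->selectRaw('user_id, SUM(amount) AS refunded')
    ->groupBy('user_id');

// After: one combined query that joins the two aggregated subqueries,
// so the database does all the work in a single round trip.
$combined = DB::query()
    ->fromSub($spent, 's')
    ->leftJoinSub($refunded, 'r', 'r.user_id', '=', 's.user_id')
    ->select('s.user_id', 's.spent', 'r.refunded')
    ->get();
```

One round trip is not automatically faster: the combined plan can be heavier than the two simple ones, which is exactly what measurement is for.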
Feeling good about the change, I submitted a merge request late that day and waited for feedback. By the next day, no one had reviewed it yet. Curiosity took over, and I decided to measure the performance myself, hoping the numbers would validate my work. They didn't.
I opened Postman and sent 100 requests to calculate the average response time. It wasn't the most rigorous approach, but it would get the job done. The new implementation had a higher average response time: some individual requests were faster than the old version, but there were also significant spikes. It wasn't better at all; I had actually made things worse.
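If you don't want to click through Postman, the same rough measurement takes only a few lines of plain PHP. This is a minimal sketch, assuming a placeholder local endpoint:

```php
<?php
// Rough benchmark: hit the endpoint 100 times and report
// the average, min, and max response times.
$url   = 'http://localhost:8000/api/report'; // placeholder URL
$times = [];

for ($i = 0; $i < 100; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    // CURLINFO_TOTAL_TIME is the full transfer time in seconds.
    $times[] = curl_getinfo($ch, CURLINFO_TOTAL_TIME);
    curl_close($ch);
}

printf(
    "avg: %.0f ms  min: %.0f ms  max: %.0f ms\n",
    array_sum($times) / count($times) * 1000,
    min($times) * 1000,
    max($times) * 1000
);
```

Tracking min and max alongside the average matters here; it's what exposed the spikes that the average alone would have hidden.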
I closed my merge request and noted that the code was flawed and could cause performance issues. After digging into it, I realized I had overlooked how crucial it is to understand the structure of the data and its types, and that oversight had an unforeseen impact on performance.
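To illustrate the kind of trap I mean (this is a hypothetical, not my actual schema): in MySQL, joining a string column against an integer column forces an implicit cast on every row, so an index on the string column is silently ignored.

```php
use Illuminate\Support\Facades\DB;

// Hypothetical schema: orders.user_ref is VARCHAR, users.id is BIGINT.
// The type mismatch makes MySQL cast user_ref for every row, so the
// index on orders.user_ref is never used and the join scans the table.
$slow = DB::select('
    SELECT u.name, SUM(o.total) AS spent
    FROM users u
    JOIN orders o ON o.user_ref = u.id
    GROUP BY u.name
');
```

Running `EXPLAIN` on each version of a query makes this kind of difference visible before it ships.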
Even so, it was a valuable experience that I genuinely enjoyed. I learned more about Laravel performance tools, discovered how useful Postman is for this kind of work, and gained experience researching potential performance regressions.
Ultimately, optimizing based on instinct is just the beginning. You still need to back it up with accurate measurements. As they say, you can only call it optimized if you have the benchmarks to prove it!