
My journey into ML

My first real encounter with Machine Learning happened during my time at Politecnico di Milano, in a course called Model Identification and Data Analysis. I didn’t know it then, but that class opened a door I’ve kept walking through ever since.

The topic immediately fascinated me, and Prof. Formentin shared a few great resources that helped me dive deeper. I started exploring the theory through Yaser Abu-Mostafa’s Learning from Data course from Caltech — a fantastic introduction that helped me understand what was really going on behind all those mysterious algorithms.

After that, I wanted to get my hands dirty. I took the Python for Data Science and Machine Learning Bootcamp on Udemy, which gave me plenty of practical examples to experiment with.

At first, I played around with small projects — things like portfolio optimization or time-series prediction for stock markets. I was convinced that with a simple neural network I could predict stock prices and become rich. Spoiler alert: that’s not how it works in the real world. But that failure taught me one of my first important lessons in ML — the difference between theory, data, and reality.

For a few years, I put ML aside. My focus shifted toward classical control theory, especially MPC (Model Predictive Control), and first-principles modeling — areas where I felt I could better understand and design systems from the ground up.

But then something happened. The rise of LLMs completely reignited my curiosity. It felt like the whole world of AI had suddenly accelerated, and I couldn’t resist jumping back in.

I started small again, this time experimenting with reinforcement learning to get back into the flow. Soon after, I got my first taste of VLMs during a small challenge. In that project, I combined SAM and CLIP to build an open-vocabulary visual grounding system — an experience that showed me how far the field had come since my first neural network days.
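The core idea behind that SAM + CLIP combination is simple: SAM proposes region masks, CLIP embeds each region and the text query into a shared space, and the region whose embedding best matches the text wins. Here is a minimal sketch of just the matching step, using hand-made toy vectors in place of real CLIP features (all names and values below are illustrative assumptions, not the actual project code):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def ground_query(text_embedding, region_embeddings):
    # Score every region embedding (e.g. CLIP features of SAM masks)
    # against the text query embedding and return the best index.
    scores = [cosine_similarity(text_embedding, e) for e in region_embeddings]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores

# Toy embeddings standing in for real CLIP outputs (hypothetical values).
text_emb = [0.9, 0.1, 0.0]   # e.g. the prompt "a red ball"
region_embs = [
    [0.1, 0.9, 0.2],         # mask 1: background
    [0.8, 0.2, 0.1],         # mask 2: the red ball
    [0.0, 0.3, 0.9],         # mask 3: a chair
]

best, scores = ground_query(text_emb, region_embs)
print(best)  # -> 1, the region best matching the prompt
```

In a real pipeline the toy vectors would come from `model.encode_text` and `model.encode_image` of a CLIP model, applied to crops produced by SAM's mask generator — but the ranking logic stays exactly this.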

I was back on track, ready to dig deeper and catch up with the current state of the art in AI.