How Our AI, Upset by New Boards, Forgot How to Count

Imagine this: a client wants an AI model to count boards in a photo.

Easy! We grabbed a bunch of board photos and trained the model to recognize and count them. After weeks of trials, testing, and fine-tuning, the model worked flawlessly: it marked every board and gave the exact count - more neatly than we could have done it ourselves.

The client was happy; everyone was satisfied.

A few months go by… the client gets a new type of board. They take a picture, send it over, and run it through the model.

The result? Our AI reacts as if faced with a foreign language: “Are these… tables? Bricks? Pieces of the neighbor’s fence?” - anything but boards.

Then we saw the photo. The new boards were a completely different shade, wider, with a different texture, and lacking the familiar markings we had trained on.

Instead of accurate counts - total chaos.

Lessons We Learned Along with the AI

  1. Data diversity is critically important.
    If we train a model to recognize objects, we must show it as many real-world variations as possible: different colors, shapes, lighting conditions, and manufacturing designs. In other words - if the model only ever learns from “one type of board,” it will be confused by any other (see the first sketch after this list).

  2. AI is not a “set it and forget it” solution.
    The world changes. Products change. When a new, previously unseen case appears, the model needs additional training. This doesn’t mean the first model was bad - it’s simply a natural part of the system’s evolution. Catching those cases early is much easier if the pipeline flags low-confidence predictions (see the second sketch below).

  3. Plan for model maintenance from the very start.
    When developing an AI project, build model updates into the strategy. AI is not a static app but a living mechanism that must keep learning alongside you.
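
To make the first lesson concrete: a good share of that variation can be simulated at training time with augmentation. Here is a minimal sketch using torchvision transforms - the specific parameter values are illustrative assumptions, not our production settings:

```python
# A minimal augmentation sketch using torchvision; the parameter
# values here are illustrative, not our actual pipeline settings.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # vary framing and apparent board size
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4, hue=0.05),      # vary shade and lighting
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),                 # photos are rarely perfectly straight
    transforms.ToTensor(),
])
```

Augmentation stretches what the model sees, but it cannot invent a genuinely new board design - real photos of new products still have to reach the training set.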
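And for the second and third lessons: a simple guardrail is to check the detector’s confidence before trusting a count. The sketch below assumes the model returns (bounding_box, confidence) pairs; the function name and the 0.6 threshold are hypothetical, made up for illustration:

```python
# A minimal confidence-check sketch; the detection format and the
# threshold value are illustrative assumptions.
def count_with_fallback(detections, review_threshold=0.6):
    """detections: list of (bounding_box, confidence) pairs from the detector."""
    if not detections:
        return None, "no boards found: route to a human"
    mean_conf = sum(conf for _, conf in detections) / len(detections)
    if mean_conf < review_threshold:
        # Low confidence often means an unfamiliar board type.
        # Save the image so it can be labeled and fed into retraining.
        return len(detections), "low confidence: flag for review"
    return len(detections), "ok"

# Usage: three boxes, two of them shaky - this image gets flagged.
print(count_with_fallback([((0, 0, 50, 200), 0.91),
                           ((60, 0, 110, 200), 0.42),
                           ((120, 0, 170, 200), 0.38)]))
```

Images flagged this way become exactly the “new cases” that feed the next round of training.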

Today, our model once again counts all boards perfectly - old ones, new ones, and even ones it sees for the first time. But this story remains a great reminder that AI learning never truly ends.
