Continuous fruit recognition from a live video feed, trained on a very limited set of pictures.
Also made for AWS Summit Paris 2018, since we are lucky enough to have a stand and a dedicated showcase at AWS Innovation's Corner. Patrice Ferlet is there, showing how a model trained on a very limited number of pictures of apples, strawberries and lemons can still give decent results in "live" tracking experiments.
Here's a video, but we also have a working demo where users can play in front of the camera with plastic fruits and see the results live, at a reduced FPS (so you don't have to show up at a trade show or a client's HQ with all your heavy hardware).
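The demo loop itself is simple: grab frames from the webcam, classify each one, overlay the prediction, and throttle the frame rate so it stays usable on modest hardware. Below is a minimal sketch of that idea, assuming a Keras classifier saved as `fruits_model.h5`, a 224x224 input size and an `apple`/`lemon`/`strawberry` label order; all of these names and values are illustrative assumptions, not the project's actual files or settings.

```python
# Minimal sketch: webcam capture -> classify each frame -> show the label,
# with the frame rate capped so it runs on a laptop without a GPU.
import time

import cv2
import numpy as np
from tensorflow.keras.models import load_model

MODEL_PATH = "fruits_model.h5"                  # hypothetical model file
CLASS_NAMES = ["apple", "lemon", "strawberry"]  # assumed label order
INPUT_SIZE = (224, 224)                         # assumed model input size
TARGET_FPS = 5                                  # reduced FPS for the demo

model = load_model(MODEL_PATH)
capture = cv2.VideoCapture(0)                   # default webcam

while True:
    start = time.time()
    ok, frame = capture.read()
    if not ok:
        break

    # Resize and normalize the frame the way the model expects.
    image = cv2.resize(frame, INPUT_SIZE)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) / 255.0
    predictions = model.predict(np.expand_dims(image, axis=0), verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(predictions))]

    # Overlay the predicted class and its score on the live feed.
    cv2.putText(frame, f"{label} ({predictions.max():.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("fruit recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

    # Sleep just enough to cap the effective frame rate.
    elapsed = time.time() - start
    time.sleep(max(0.0, 1.0 / TARGET_FPS - elapsed))

capture.release()
cv2.destroyAllWindows()
```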