Latest release - tons of false positives
EDIT: FYI - opened a ticket following the advice Ben provided to another user.
Hey all. Been running SS for years. I love the new human / vehicle detection feature, which seemed to work great in the last version. Now, with the latest update, I'm getting fairly constant false positives. We're talking stuff like a car identified as a human, a shadow, my dog, things like that. I upped the threshold to 90+ on my cameras and I'm still getting the same deal. Restarts seem to have no effect. I've read about tuning things for detection, and inspected the files where it claims a human was detected... and even for a basic AI there's no way those items could even remotely be considered human-like.
Comments
This is exacerbated by situations that deviate significantly from the average. The AI is trained on hundreds of thousands of real CCTV images, but in these, dogs/animals are present in only a small minority (around 1%). Therefore, when training and optimising the AI, it doesn't have to distinguish accurately between humans and dogs in order to get a very high overall score (e.g. even if it classifies all dogs as humans, because the number of dog samples is so small, this doesn't hurt the overall accuracy too much and you can still end up with 95% accuracy overall).
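To illustrate the arithmetic here, a quick sketch of how a model can score ~95% overall while misclassifying every dog. The ~1% dog proportion and the ~95% overall figure come from the explanation above; the 96% accuracy on non-dog frames is an assumed number purely for the example:

```python
# Hypothetical dataset: 100,000 CCTV frames, ~1% containing dogs/animals
total = 100_000
dog_frames = int(total * 0.01)      # 1,000 frames with dogs
other_frames = total - dog_frames   # 99,000 other frames

# Assume the model gets 96% of non-dog frames right,
# but labels every single dog as a human (0% accuracy on dogs)
correct_other = int(other_frames * 0.96)  # 95,040 correct
correct_dogs = 0

overall_accuracy = (correct_other + correct_dogs) / total
print(f"{overall_accuracy:.1%}")  # prints "95.0%"
```

So the headline accuracy barely moves even though the model fails completely on the dog class, which is why the training process has little pressure to fix it.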
There isn't really a great solution for this, as the dataset used for training should be representative of the frequency of images the AI will encounter in production, which it currently is, on average. Including more dog samples in the training dataset might make the model more accurate for you, but would make it less accurate for someone who doesn't own a dog.
The ultimate solution is to do on-device training so that each user could further train the AI for their own purposes, but this isn't easy to implement, and would probably require a significant amount of user input to achieve. We are however taking a look at this possibility.