
Additional “Debug” Feature Documentation?

msymsy
edited July 2019 in SecuritySpy
In a different thread, Ben mentioned:

“Create a folder on your Desktop called "SS AI Predictions". If present, SecuritySpy will save image files to this folder that are annotated with the areas of motion and AI prediction values.”
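For reference, creating that folder is the only step described; a minimal sketch, assuming the standard macOS Desktop location:

    import os

    # Create the "SS AI Predictions" folder on the Desktop; if it is present,
    # SecuritySpy saves annotated motion/AI prediction images into it.
    os.makedirs(os.path.expanduser("~/Desktop/SS AI Predictions"), exist_ok=True)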

I love this. But I could really use a similar feature for getting info about other motion-detection decisions. Would love a debug option that includes the bounding box(es) and/or the bounding-box coordinates. Is there one that I’m just missing?

Finally, is there a list of other hidden features like the AI Predictions folder?

Comments

  • This is an interesting idea that we would be open to. Could you tell me exactly how you imagine this feature working, and how you would use it in practice?

    There aren't too many other hidden features - just a few that we use for debugging and diagnosing problems when users contact us with issues - none that would really be useful in and of themselves.
  • So as you know, when setting up a motion mask, the UI puts up a big red rectangle around a trigger. But when motion is recorded, the captured pictures and videos are clean.

    I'd love for there to be an option to *also* export an annotated movie or picture that contains, at the very least, the red bounding box that triggers the motion event. Or, in addition to a clean movie/picture, a file containing the coordinates of such a bounding box so that I could overlay my own box afterwards (a rough sketch of that overlay step follows this comment). Or both!

    This way I could very quickly tune my motion triggers after an event happens - something that's difficult now unless I'm watching the feed in real time and happen to catch the motion in the act, which, as I'm sure you can understand, is far from ideal in a real-world situation.
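A minimal sketch of the overlay step proposed above, assuming a hypothetical sidecar file written next to the capture that holds one box as "x y width height" in pixels (the file names and format are assumptions; SecuritySpy does not currently export such a file):

    from PIL import Image, ImageDraw

    def overlay_box(image_path, coords_path, out_path):
        # Read one bounding box as "x y width height" (pixels) from the
        # hypothetical sidecar file written alongside the capture.
        with open(coords_path) as f:
            x, y, w, h = (int(v) for v in f.read().split())

        # Draw the box in red on the clean capture and save a new file.
        img = Image.open(image_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        draw.rectangle([x, y, x + w, y + h], outline=(255, 0, 0), width=3)
        img.save(out_path)

    # e.g. overlay_box("capture.jpg", "capture.txt", "capture_annotated.jpg")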
  • I see how this is useful, but this is exactly what SecuritySpy does already to the "SS AI Predictions" folder (full details here: Optimising SecuritySpy’s AI Object Detection). How is what you are proposing different from this?
  • I'm suggesting doing the bounding-box debug folder for straight-up motion detection too, not solely for the AI predictions.
  • Ah OK I see what you mean now, I'm just not convinced about how useful this will be to users generally - we have to ration our development resources to features that we feel have the widest appeal. When you get a false-positive detection, it's normally obvious what has caused it by looking at the captured file, and you can then take steps to prevent similar events in the future (e.g. mask out the offending area, increase the trigger time, or decrease the sensitivity).

    Plus, with the new AI features, it's not so critical to avoid all false-positive motion detections, as the AI will screen out almost all of them.
  • I'd love to say it's "normally obvious" to me but unfortunately, well, here I am asking for a new feature! :-) Your logic makes sense if I'm using the AI features, and believe me, I wish I was. I'm just not tracking for people or vehicles on this camera.

    For what it's worth, I'm having a difficult time determining when birds are landing close to my target area. When there's no breeze, it's easy. The problem comes when the trees and bushes in the background start getting blown around. Unfortunately there's a lot of potential for movement in the frame and knowing where the movement is being picked up would help a lot.

    Admittedly I'm chasing a thin calibration line that may ultimately end up getting ditched, but having that red box would let me reach my go/no-go conclusion that much quicker.

    Anyway, thanks for lending your ear, I appreciate the prompt responses!
  • Are you trying to capture birds? This may explain why you're having problems - the algorithm is optimised for much larger objects (humans, vehicles). What kind of camera/lens are you using?
  • It prioritizes people & vehicles even in the standard non-AI motion detection?

    For my motion trigger settings, I'm basically ignoring everything in the frame except for a horizontal railing and about 1 foot of space above it. The area I'm looking to trigger on is 2304x350 in a 2304x1296 frame.

    I've got it set to 85% / 0.5s.

    It's an Amcrest 2K PTZ; I'm blanking on the exact model at the moment.
  • Yes, basically the standard (non-AI) motion detection is optimised for human or vehicle movement, at a size and speed that these objects would typically appear in standard CCTV setups (i.e. large enough in the frame that you would be able to make out people's faces or car license plates).

    Movement of small objects is actively filtered out, as in normal contexts this would indicate errant motion. This would include birds, under normal circumstances.

    One way to make this work better would be to zoom in the camera (assuming it has optical zoom) so that the birds appear larger in the frame, and then reduce your motion mask so that SecuritySpy has more pixels to work with. If the motion mask covers most of the frame, there won't be enough pixels left over to achieve accurate motion detection.
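To put rough numbers on that last point, a quick back-of-the-envelope calculation using the frame and detection-area sizes quoted earlier in the thread (an illustration only, not anything SecuritySpy reports):

    # Frame and unmasked-area sizes quoted earlier in the thread.
    frame_w, frame_h = 2304, 1296    # full camera frame
    strip_w, strip_h = 2304, 350     # railing strip left unmasked

    frame_px = frame_w * frame_h     # 2,985,984 pixels
    strip_px = strip_w * strip_h     # 806,400 pixels

    # Roughly 27% of the frame is available for motion detection;
    # zooming in and shrinking the mask raises this fraction.
    print(f"Unmasked area: {strip_px / frame_px:.1%} of the frame")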