Noted, and thanks for the reply. But… be aware that I’m not trying to reduce false positives (non-faces tagged as a face), nor to alter the recognition algorithm in any way - I’m trying to throw away a given subset of the already detected faces: both true and false positives.
I’m suggesting that no changes be made to the “is it a face?” part of the algorithm at all - so the same number of true/false positives/negatives to begin with. Does subsequently throwing away some of those positives skew the percentages, by leaving fewer overall positives? Sure. But I don’t care about percentages; I just want to ignore a (significant) number of alerts that are irrelevant to me.
So maybe it goes from 10 TP, 1 FP, 2 FN before to 2 TP, 1 FP, 2 FN after. Strictly speaking, the eight culled faces count as new misses, so the error rate may be worse - but I’m still happier, because I’ve shed over 70% of my annoying, irrelevant alerts. (And I’m missing no more actual visitors than were undetected before.)
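For what it’s worth, here’s the arithmetic behind that 70%+ figure, using the hypothetical numbers above (just a quick Python check, nothing more):

```python
# Sanity check on the example numbers above (all hypothetical).
before_alerts = 10 + 1   # 10 TP + 1 FP alerts before culling
after_alerts = 2 + 1     # 2 TP + 1 FP alerts survive the size cull
print(f"{1 - after_alerts / before_alerts:.0%} of alerts culled")  # -> 73%
```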
Also: with my current settings, I almost never get false positives - nearly everything I get is a face. And the algorithm is doing a good job of putting tight boxes around them, as evidenced by the photo attachments in the alerts I receive: in almost all cases, the face fills the box to a similar degree in the thumbnail, so the box size tracks the apparent size of the face. So post-culling by the size of the box will indeed target the ones I’m not interested in (the small ones).
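To make concrete what I mean by “post-culling by the size of the box”, here’s a minimal sketch. Everything in it is illustrative, not anything in the current product: I’m assuming each detection carries its face box as (x, y, w, h) in pixels, and the 80 px threshold is just a placeholder for whatever the user would dial in.

```python
# Hypothetical post-detection cull: the detector is untouched, we just drop
# detections whose face box is smaller than a user-set pixel threshold.
def cull_small_faces(detections, min_face_px=80):
    """Keep only detections whose face box is at least min_face_px on its shorter side."""
    kept = []
    for det in detections:
        x, y, w, h = det["box"]
        if min(w, h) >= min_face_px:
            kept.append(det)
    return kept

# Example: only the second detection would generate an alert.
alerts = cull_small_faces([
    {"box": (40, 60, 32, 38)},     # distant passer-by: too small, culled
    {"box": (300, 120, 140, 160)}, # someone at the door: kept
])
```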
“Creating” a new false negative in this scheme would require some small part of a large face to be detected as a whole face - say the face box ends up around just an ear, so the whole face gets culled as too small. I’m simply not seeing that happen in practice.
Again, it’s not for everyone - but for people who have the problem I’m seeing, characteristic of a specifically urban location and vantage point, I believe it would be pretty effective. Nothing’s going to be perfect - as evidenced by the fact that none of the existing controls work for my situation. But it would be super cheap to implement, and potentially very useful as an additional control for some of us.