
Cam Trap CV 2 - Shared screen with speaker view
Ariel Hammond
00:47
Hello everyone! We'll be starting right at 9
saul
02:15
Can you say something so I can test audio?
Tod
16:48
Will speakers share their slides?
lfortson
21:29
Do others have access to Sara’s slides? I don’t seem to ...
Sara
23:34
I'll try to share via a different account, Google has some strict privacy stuff that I think is getting in the way
Jason Parham
01:01:42
BRB
David Russell
01:04:25
https://pjreddie.com/media/files/papers/YOLOv3.pdf section 2.2 has a brief mention of how they deal with hierarchical data.
Dan Morris
01:07:30
This was the tiger ID competition (and associated data set) I was referring to: https://cvwc2019.github.io/challenge.html
Dan Morris
01:08:49
And the new WCS data set I was referring to: http://lila.science/datasets/wcscameratraps
Jason Parham
01:10:16
Sorry everybody, I have to drop off, urgent issue came up
Ariel Hammond
01:10:59
Take care - I'll send you the full video later
Dan Morris
01:12:33
MegaDetector info: https://github.com/microsoft/CameraTraps/blob/master/megadetector.md
Dan Morris
01:17:25
Web conferencing also doesn't quite do justice to how fast you can scroll through these images when 99% of them are the same thing (e.g. empty)...
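As a minimal sketch of the workflow Dan is describing, this is one way MegaDetector's batch output could be used to set aside likely-empty images before human review. The JSON field names ("images", "file", "detections", "conf") follow the output format documented in the CameraTraps repo linked above; the file name and the 0.8 confidence cutoff are assumptions.

import json

# Load MegaDetector batch results; field names follow the format
# documented in the CameraTraps repo (an assumption in this sketch).
with open("megadetector_output.json") as f:  # hypothetical file name
    results = json.load(f)

CONF_THRESHOLD = 0.8  # assumed cutoff for "probably contains something"

empty, nonempty = [], []
for image in results["images"]:
    detections = image.get("detections", [])
    if any(d["conf"] >= CONF_THRESHOLD for d in detections):
        nonempty.append(image["file"])
    else:
        empty.append(image["file"])

print(f"{len(empty)} likely-empty of {len(results['images'])} images")

With most images routed to the "empty" list, a reviewer only needs to skim that list quickly, which is what makes the fast-scrolling review Dan mentions practical.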
Siyu
01:35:44
Classification training pipeline: https://github.com/microsoft/CameraTraps/tree/master/classification
Sue Townsend
01:38:56
Saul/Dan: Can you make a clear request about what type of databases you are looking for?
saul
01:41:45
http://saul.cpsc.ucalgary.ca/timelapse/ to get Timelapse, and to see a PowerPoint deck (click on its videos to play) showing an overview of its features (large file)
Dan Morris
01:55:01
Re: databases... as far as data for training the detector, the most important types of data are the ones where we see systematic problems. We recently realized we do poorly, for example, on both bats and small reptiles, for lack of training data. So what we need most is feedback about systematic problems, and then if you also have data you can share to help us fix those problems, that's a bonus.
Siyu
01:55:04
Hi Sue, I want to understand why there are datasets that have > 97% empty images… Is time-triggering (as opposed to motion-triggering) a necessary component in the analysis?
saul
01:58:13
Some of our collaborators require time mode as they are specifically sampling over time (e.g., fishery folks who want to count the number of anglers every hour to get a sense of angling effort). Others reported that they set their camera to motion detection to record (say) 5 images for every sequence - typically the first and last image are empty. Then there are the motion triggers based on wind effects.
Dan Morris
02:04:37
https://aiforearth.drivendata.org
Ariel Hammond
02:05:03
Zoohackathon: https://drive.google.com/open?id=1foQK-MN_X2QwwTRcR5dEOxoRM3RC1NQE
lesk
02:07:14
Sorry, must leave. Michael. Thanks to everyone.
Sue Townsend
02:10:39
Hi Siyu, I am not exactly clear what you are asking, but the protocols I use rely on the motion trigger (not time sampling) with bursts of 3 and 5 sec separation. The large number of blanks is usually a by-product of habitat type (moving grass and the like). The blanks can gum up the workflow, both in the time to upload the SD card and then when farming out "good" deployments (i.e., without a lot of blanks) to volunteers. We still need to vet empty images. We use the max number of individuals of each species in each event (burst of three), but now folks separate detections by 120 sec (harder to explain) to define events for analysis. I am not sure if this answers your question in any way.
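As a minimal sketch of the 120-sec event rule Sue describes: consecutive detections are merged into one independent event while the gap between them stays within 120 seconds, and each event keeps the max count per species. The record layout, species name, and timestamps here are hypothetical.

from datetime import datetime, timedelta

# Hypothetical per-image records from one camera: (timestamp, species, count).
detections = [
    (datetime(2020, 5, 1, 6, 0, 0), "deer", 1),
    (datetime(2020, 5, 1, 6, 0, 5), "deer", 2),  # same burst
    (datetime(2020, 5, 1, 6, 4, 0), "deer", 1),  # > 120 s gap: new event
]

GAP = timedelta(seconds=120)  # the 120-sec separation rule

events = []  # each event: start/end time plus max count per species
for ts, species, count in sorted(detections):
    if events and ts - events[-1]["end"] <= GAP:
        event = events[-1]  # within 120 s: continue the current event
        event["end"] = ts
        event["max"][species] = max(event["max"].get(species, 0), count)
    else:  # gap too large: open a new event
        events.append({"start": ts, "end": ts, "max": {species: count}})

for event in events:
    print(event["start"], event["max"])
# -> two events; the first keeps the max of 2 deer from its burst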