Collecting and analyzing egocentric videos from children
If you're looking to collect your own dataset with a new camera, please e-mail me at bria [at] stanford.edu. We're finalizing a new high-resolution camera based on the GoPro Hero Bones (see the Research page for a photo); more information is available at:
https://langcog.github.io/babyview/
If you're looking for existing, available egocentric video datasets, you can:
- find the paper on the SAYCam dataset here
- get access to the SAYCam dataset through Databrary
- get access to the in-lab dataset through Databrary
If you're looking for information on how to analyze social information in video datasets, you can:
- First, see the discussion section of Long et al. (in press, Developmental Psychology); there are some limitations to this method!
- If you want to run these models on your data, first check out the OpenPose repository we used
- Look through this repository, which has instructions for following our pipeline; we applied the algorithm without fine-tuning and then analyzed the face/hand keypoints.
- You'll need access to a server with a GPU to run OpenPose (there may be easier algorithms/codebases available; let me know if you find one). Note that this creates a serious data management issue, as OpenPose produces one file per frame of each video, containing keypoints for every person in that frame; see the sketch after this list for one way to consolidate the output.
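To make the file-management problem concrete, here is a minimal Python sketch that collapses OpenPose's per-frame JSON output into a single per-video CSV. It assumes the default JSON output format when OpenPose is run with `--write_json`, `--face`, and `--hand`; the directory layout, file pattern, and confidence threshold are illustrative placeholders, not the exact settings from our pipeline.

```python
# Minimal sketch: collapse OpenPose's per-frame JSON output into one CSV row
# per frame, so you aren't stuck managing millions of tiny files.
# Assumes default output (one *_keypoints.json per frame, run with --face/--hand);
# paths and the confidence threshold are placeholders to adapt.
import csv
import json
from pathlib import Path

CONF_THRESHOLD = 0.1  # OpenPose reports 0 confidence for undetected keypoints

def n_confident(flat_keypoints, threshold=CONF_THRESHOLD):
    """Count keypoints whose confidence (every 3rd value) exceeds threshold."""
    return sum(1 for c in flat_keypoints[2::3] if c > threshold)

def summarize_video(json_dir, out_csv):
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "n_people", "n_faces", "n_hands"])
        for frame_path in sorted(Path(json_dir).glob("*_keypoints.json")):
            people = json.loads(frame_path.read_text())["people"]
            n_faces = sum(
                1 for p in people
                if n_confident(p.get("face_keypoints_2d", [])) > 0
            )
            n_hands = sum(
                (n_confident(p.get("hand_left_keypoints_2d", [])) > 0)
                + (n_confident(p.get("hand_right_keypoints_2d", [])) > 0)
                for p in people
            )
            writer.writerow([frame_path.stem, len(people), n_faces, n_hands])

summarize_video("openpose_output/video_01", "video_01_summary.csv")
```

One summary CSV per video is far easier to store and analyze than the raw per-frame files, and you can delete or archive the JSONs once you've extracted what you need.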
Collecting & analyzing digital children's drawings
If you're looking for our available drawings datasets, please hang tight — we will release the large datasets with publication. Send me an e-mail and I will add you to a notification list.
If you want to analyze your own digital drawing data:
- For model embeddings (see the sketch after this list):
  - I recommend this repository for getting OpenAI's CLIP model embeddings very easily
  - We used custom PyTorch code, but THINGSvision is a great new resource for getting VGG-19 and other DNN model embeddings
- If you want to get stroke annotations, you can browse our codebase, but do note that it is not intended for public use
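As a concrete starting point, here is a minimal sketch of extracting CLIP image embeddings for a folder of drawings with OpenAI's `clip` package (the repository linked above may make this even easier). The folder path, file pattern, and model variant ("ViT-B/32") are illustrative placeholders, not the exact settings we used.

```python
# Minimal sketch: CLIP image embeddings for a folder of drawings.
# Requires: pip install git+https://github.com/openai/CLIP.git
# The folder path and model variant below are placeholders.
import torch
import clip
from pathlib import Path
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

embeddings = {}
for img_path in sorted(Path("drawings/").glob("*.png")):
    image = preprocess(Image.open(img_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        features = model.encode_image(image)
    # L2-normalize so dot products between drawings are cosine similarities
    embeddings[img_path.name] = (features / features.norm(dim=-1, keepdim=True)).cpu()
```

THINGSvision provides analogous dataset and feature-extractor utilities for VGG-19 and other torchvision models; see its documentation for the equivalent loop.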
If you're looking to collect your own drawing data:
- Since we initially collected these data, there is now a sketchpad plugin in jsPsych!
- Images/stroke data are typically sent to a server database over an internet connection as they are generated while the user produces the drawing; we used MongoDB. If you want to run an offline version of your experiment, you can use PouchDB, which caches the data locally and then syncs them to a CouchDB database when connected to the internet.
- For use in museums, we embedded our JavaScript experiment in the Kiosk Enterprise App so that you can implement some controls (e.g., control volume, turn the kiosk on/off at certain times of day, etc.)