Social presence is known to influence eating behavior among people with obesity; however, studying eating behavior and social presence together in real-world settings is challenging because the co-occurrence of these two factors is difficult to confirm reliably. High-resolution video cameras can capture timing while providing visual confirmation of behavior, but their potential for all-day monitoring is limited by short battery life and the lack of autonomous detection. Low-resolution infrared (IR) sensors have shown promise for automating human behavior detection, yet it remains unknown whether IR sensors improve behavior detection when combined with RGB cameras. To address these challenges, we designed and deployed a low-power, low-resolution RGB video camera in conjunction with a low-resolution IR sensor to test a learned model's ability to detect eating and social presence. We evaluated our system in the wild with 10 participants with obesity; compared with a video-only approach, our models showed a slight improvement in detecting eating (5%) and a substantial improvement in detecting social presence (44%). We analyzed device failure scenarios and their implications for future wearable camera design and machine learning pipelines. Lastly, we provide guidance for future studies that use low-cost RGB and IR sensors to validate human behavior in context.
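The abstract describes fusing a low-resolution RGB camera with a low-resolution IR sensor in a learned detection model. The sketch below illustrates one plausible late-fusion approach under stated assumptions: all feature shapes, the synthetic data, and the nearest-centroid classifier are hypothetical stand-ins, not the paper's actual pipeline or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(rgb_frame, ir_grid):
    """Late fusion: concatenate simple summary statistics from each modality.
    Assumes a 32x32x3 low-res RGB frame and an 8x8 IR thermopile grid
    (illustrative shapes only)."""
    rgb_feat = np.array([rgb_frame.mean(), rgb_frame.std()])
    ir_feat = np.array([ir_grid.mean(), ir_grid.std(), ir_grid.max()])
    return np.concatenate([rgb_feat, ir_feat])

class NearestCentroid:
    """Tiny nearest-centroid classifier standing in for the learned model."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each sample to the class whose centroid is nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def make_sample(present):
    """Synthetic sample: social presence raises IR readings (warm body in view)."""
    rgb = rng.normal(0.5, 0.1, (32, 32, 3))
    ir = rng.normal(30.0 + (5.0 if present else 0.0), 0.5, (8, 8))
    return extract_features(rgb, ir)

# Train on synthetic alternating present/absent samples.
X = np.stack([make_sample(p) for p in (0, 1) * 50])
y = np.array([0, 1] * 50)
clf = NearestCentroid().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

In this toy setup, the IR statistics carry most of the discriminative signal for social presence, which is consistent with the abstract's finding that adding IR helps social-presence detection far more than eating detection.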