Over 1 million children under the age of 17 in the US are on the autism spectrum. These children often fail to recognize basic facial emotions, which makes social interactions and friendships even more difficult to develop and sustain. Gaining these skills requires intensive behavioral interventions that are often expensive, difficult to access, and inconsistently administered. Our team at Stanford is researching a solution. We have developed a system that uses machine learning and artificial intelligence to automate facial expression recognition; it runs on wearable glasses and delivers real-time social cues. Our novel system uses the outward-facing camera on the glasses to read facial expressions and provide social cues within the child’s natural environment. It also records the amount and type of eye contact, which adds another layer of behavioral intervention.
After a successful 40-person pilot study, we are embarking on the second phase of our research: a 100-person at-home study of 80 children with Autism Spectrum Disorder and 20 typically developing children. We are seeking participants between the ages of 6 and 16. In this trial, we aim to study long-term behavioral progression over a 4-month period. Following our existing IRB-approved protocol, families of children with autism will use our device at home, with scheduled, periodic in-lab visits. If you are interested in learning more about this study, please consider signing up today.
Professor of Pediatrics
Professor Emeritus of Computer Science, co-PI
Professor Emeritus of Psychiatry & Founder of Autism Center, co-PI
Project Co-Founder & Postdoctoral Fellow
Computer Vision Guru
Machine Learning Guru
Clinical Research Consultant
Deep Learning Collaborator
Image Processing Collaborator