Current Projects

Health and Technology


Together with Dartmouth College, Carnegie Mellon University, and Cornell University, we at Georgia Tech are interested in extending the seminal StudentLife work begun by Dartmouth's Andrew Campbell a few years ago. Campbell sought to determine whether mental health and academic performance could be correlated with, or even predicted from, a student's digital footprint. We propose the CampusLife project as a logical extension, which aims to collect data from relevant subsets of a campus community through their interactions with mobile and wearable technology, social media, and the environment itself using Internet of Things instrumentation. This project is driven both by the human goal of understanding wellness in young adults and by the research question of how to conduct such experimentation while addressing its significant security and privacy challenges. We invite everyone in the Georgia Tech community to join a conversation with the lead researchers from the four universities and help shape the direction of this nascent effort.

For more information, contact Gregory Abowd.

Human-Environment Interactions and Computational Materials


COSMOS is an interdisciplinary collaboration to design, fabricate, and apply COmputational Skins for Multi-functional Objects and Systems (COSMOS). These skins consist of dense, high-performance, seamlessly networked, ambiently powered computational nodes that can process, store, and communicate sensor data. Achieving this vision will redefine the basis of human-environment interaction by creating a world in which everyday objects and information technology become inextricably entangled. COSMOS seeks to rethink the embodiment of computing through the integration of materials science and computation, literally realizing Weiser’s figurative “weaving” of technology into the fabric of everyday life.

For more information, contact Dingtian Zhang.

Computational Behavioral Science and Analysis

A collaboration between developmental psychologists and computer scientists seeks to develop novel computational tools and methods to objectively measure behaviors in natural settings. The goal is to build tools that help us better detect, understand, and ultimately treat autism and other chronic conditions. Current standard practices for extracting useful behavioral information are difficult to replicate and labor intensive: a human coder typically needs extensive training to reliably code a particular behavior or interaction, and manual coding often takes many times the length of the video being coded. The time-intensive nature of this process sharply limits the scalability of studies.

For more information, contact Agata Rozga, Rosa Arriaga, Kaya De Barbaro, Gregory Abowd, and Yi Han.

Activity and Gesture Recognition for Mobile and Wearable Computing

Over the past few years, we have seen a number of wearable devices emerge that do a small number of tasks well (e.g., step counting). As these wrist-worn health trackers gained popularity, commercial devices sought to do ever more around the wrist, to the point where the smartwatch is becoming an all-purpose interaction device. Our group is pursuing a variety of explorations of mobile and on-body activity and gesture recognition, using both commodity sensing in existing devices and new form factors with novel sensing. Our goal is to expand the richness of existing interactions and activity recognition capabilities for everyday mobile and wearable computing users.

For gesture recognition and interaction systems, reach out to Aman Parnami, Cheng Zhang, and Gabriel Reyes.

For activity recognition and passive sensing systems, reach out to Aman Parnami, Dingtian Zhang, and Gabriel Reyes.

Transportation Informatics

A great deal of work in personal informatics has focused on health and well-being, including areas such as fitness, eating, and sleep; relatively little has focused on transportation. According to the Bureau of Labor Statistics, transportation (mostly driving) is the second highest expense for the average American household, ahead of food and healthcare and behind only housing. Transportation planners have long explored strategies (mostly incentive-based) to encourage people to choose alternatives to driving alone, such as ride sharing, transit, walking, and biking. We have developed a personal informatics system that allows individuals to track their driving trips and provides an estimated cost for each trip, including fuel, vehicle depreciation, maintenance, insurance, taxes, and fees. By aggregating these dispersed costs on a per-trip basis, we give drivers a way to directly compare the cost of each driving trip to alternatives such as Uber or transit. We are exploring how revealing this personalized cost information affects individual awareness, including the potential for behavior change in how people make choices about transportation modes and discretionary trips.
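The per-trip cost estimate can be sketched as a simple aggregation of fuel plus amortized per-mile costs; all of the rates below are hypothetical placeholders (the actual system personalizes them per vehicle and driver):

```python
# Sketch of a per-trip driving cost estimate, in the spirit of the
# system described above. All rates are hypothetical placeholders.

FUEL_PRICE_PER_GALLON = 3.00      # hypothetical fuel price
MPG = 25.0                        # hypothetical vehicle fuel economy
PER_MILE_COSTS = {                # hypothetical amortized rates ($/mile)
    "depreciation": 0.20,
    "maintenance": 0.09,
    "insurance": 0.12,
    "taxes_and_fees": 0.04,
}

def trip_cost(miles):
    """Estimated total cost of a single driving trip, in dollars."""
    fuel = miles / MPG * FUEL_PRICE_PER_GALLON
    other = miles * sum(PER_MILE_COSTS.values())
    return round(fuel + other, 2)

# A 10-mile trip: $1.20 fuel + $4.50 amortized costs = $5.70
print(trip_cost(10))  # → 5.7
```

Surfacing the amortized categories (depreciation, insurance, and so on) per trip is what makes the comparison to a one-off Uber or transit fare meaningful.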

For more information, contact Caleb Southern.

Security and Privacy

As the Internet of Things (IoT) and ubiquitous sensing technologies emerge, users are exposed to large attack surfaces. IoT devices (smart home devices, cars, cameras, wearables) collect users’ private data, are deeply involved in users’ daily lives, and can be remotely compromised; compromising devices such as cars can even jeopardize lives. At the same time, as users adopt IoT devices, new usable security mechanisms can be built on top of IoT infrastructure. Our group is exploring a variety of security and privacy issues at the intersection of human beings and ubiquitous computing, building next-generation usable, secure, and privacy-preserving ubiquitous computing systems.

For more information, contact Weiren Wang.

Past Projects

Technologies for Special Needs

Using SMS to support chronic illness management for pediatric patients
T. Yun, Y. Han, K. An, G.D. Abowd, R. Arriaga

We developed an SMS system that sends periodic questions to patients with diabetes or asthma, both to educate them and to communicate the status of their condition to their health care providers. In a randomized controlled trial, adolescent asthma patients improved their pulmonary function within 3-4 months of use.
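The core loop of such a system might look like the following sketch, which cycles through a bank of assessment questions and flags concerning answers for provider follow-up; the questions and flagging rule are invented placeholders, not the deployed system's content:

```python
# Toy sketch of a periodic SMS assessment loop. The question bank and
# the follow-up rule are hypothetical, for illustration only.

QUESTIONS = [
    "Did you use your rescue inhaler today? (yes/no)",
    "Did symptoms wake you up last night? (yes/no)",
    "Did you take your controller medication? (yes/no)",
]

def next_question(day_index):
    """Pick the day's question by cycling through the bank."""
    return QUESTIONS[day_index % len(QUESTIONS)]

def needs_follow_up(question_index, answer):
    """Flag answers that may indicate poorly controlled asthma."""
    concerning = {0: "yes", 1: "yes", 2: "no"}
    return answer.strip().lower() == concerning[question_index]

print(next_question(4))          # cycles back to the second question
print(needs_follow_up(2, "No"))  # missed controller medication → True
```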

T.-J. Yun and R. I. Arriaga, “A text message a day keeps the pulmonologist away,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 1769-1778.

Y. Han, T. Yun, G.D. Abowd, R.I. Arriaga, How to Design Persuasive Health Technologies for Adolescents with Chronic Illness?, CHI Workshop 2012.

T.-J. Yun, H. Y. Jeong, T. D. Hill, B. Lesnick, R. Brown, G. D. Abowd, and R. I. Arriaga, “Using SMS to provide continuous assessment and improve health outcomes for children with asthma,” in Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, 2012, pp. 621-630.

In-home Capture and Sharing of ‘Behavior Specimens’
Nazneen, G.D. Abowd, R. Arriaga, A. Rozga

We designed a smartphone-based video capture system, smartCapture, which supports parents in collecting samples of their child’s problem behaviors in the home, with remote assistance from clinical specialists. The motivation is to support assessment of, and intervention in, behaviors of concern using evidence collected in the natural environment. smartCapture allows the collection and sharing of behavior specimens, analogous to the specimen collection containers given to patients at a lab or clinic: a behavior clinic or school can ship smartCapture, preconfigured with a list of target behaviors, to parents to collect and share specimens of their child’s behavior. The primary goal of the smartCapture system is to increase the number of families that can be effectively managed and to improve access to care in rural and remote communities.

Nazneen, Fatima A. Boujarwah, Agata Rozga, Ron Oberleitner, Suhas Pharkute, Gregory D. Abowd, Rosa I. Arriaga. Towards in-home Collection of Behavior Specimens: within the Cultural Context of Autism in Pakistan. 6th International Conference on Pervasive Computing Technologies for Healthcare, 21-24 May 2012, San Diego, California, USA.

Nazneen, Agata Rozga, Mario Romero, Addie J. Findley, Nathan A. Call, Gregory D. Abowd, Rosa Arriaga. Supporting Parents for in-Home Capture of Problem Behaviors of Children with Developmental Disabilities. Journal of Personal and Ubiquitous Computing 16(2): 193-207 (2012).

A Specialized Social Network for Day-to-day Independence
H. Hong, G.J. Kim, G.D. Abowd, R. Arriaga

Building social support networks is crucial both for less-independent individuals with autism and for their primary caregivers. We investigate the role of a social network service (SNS) that allows young adults with autism to garner support from their family and friends. We explore the unique benefits and challenges of using SNSs to mediate requests for help or advice. In particular, we examine the extent to which specialized features of an SNS can engage users in communicating with their network members to get advice in varied situations. Our findings indicate that technology-supported communication particularly strengthened the relationship between the individual and extended network members, mitigating concerns about over-reliance on primary caregivers. Our work identifies implications for the design of social networking services tailored to meet the needs of this population.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “A Specialized Social Networking Service to Promote the Independence of Young Adults with Autism”, The International Meeting for Autism Research, May 17-19 2012, Toronto, Canada.

Social Mirror
H. Hong, G.J. Kim, G.D. Abowd, R. Arriaga

Independence is key to a successful transition to adulthood for individuals with autism, and social support is a crucial factor in achieving adaptive self-help life skills. We conducted a formative design exercise with young adults with autism and their caregivers to uncover opportunities for social networks to promote independence and facilitate coordination. The results of this study led to the concept of SocialMirror, an interactive mirror connected to an online social network that allows the young adult to seek advice from a trusted and responsive network of family, friends, and professionals. Focus group discussions reveal the potential for SocialMirror to increase young adults’ motivation to learn everyday life skills and to foster collaboration with a distributed care network. We present design considerations for leveraging a small trusted network that balances quick response with safeguards for the privacy and security of young adults with autism.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “Designing a Social Network to Support the Independence of Young Adults with Autism”, Proceedings of the 15th ACM International Conference on Computer-Supported Cooperative Work (CSCW 2012). Seattle, WA.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “SocialMirror: Motivating Young Adults with Autism to Practice Life Skills in a Social World”, CSCW 2012 Videos.

Food Journaling & Eating Detection

Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation
E. Thomaz, A. Parnami, I. Essa, G.D. Abowd

There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to combat obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual’s eating behavior. Currently used dietary assessment and journaling approaches have several limitations: they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach that leverages human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both toward automating dietary assessment and toward building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, in which 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.

E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd. “Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation”, ACM International SenseCam and Pervasive Imaging (SenseCam ’13), 2013.

Technological Approaches for Addressing Privacy Concerns When Recognizing Eating Behaviors with Wearable Cameras
E. Thomaz, A. Parnami, J. Bidwell, I. Essa, G.D. Abowd

First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people’s eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.
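The privacy-saliency idea can be sketched as a 2x2 bookkeeping exercise: each image falls in a cell according to whether it shows eating and whether it contains privacy-sensitive content, and a filter is judged by how it moves images between cells. The labels and the toy filter below are invented for illustration:

```python
# Toy sketch of the privacy-saliency matrix: count images per
# (eating, private) cell and measure what a filter removes.
# Image labels and the example filter are invented.

from collections import Counter

def matrix(images):
    """Count images in each (eating, private) cell of the 2x2 matrix."""
    return Counter((img["eating"], img["private"]) for img in images)

def filter_effect(images, filtered):
    """How many privacy-infringing images a filter removed, and how
    much eating evidence was lost along with them."""
    removed = matrix(images) - matrix(filtered)
    privacy_removed = sum(n for (e, p), n in removed.items() if p)
    eating_lost = sum(n for (e, p), n in removed.items() if e)
    return privacy_removed, eating_lost

images = [
    {"eating": True,  "private": False},
    {"eating": True,  "private": True},
    {"eating": False, "private": True},
]
# Hypothetical filter that drops only the non-eating private image.
filtered = [img for img in images if img["eating"] or not img["private"]]
print(filter_effect(images, filtered))  # → (1, 0)
```

An ideal filter maximizes the first number (privacy-infringing images removed) while keeping the second (eating evidence lost) at zero.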

E. Thomaz, A. Parnami, J. Bidwell, I. Essa, G.D. Abowd “Technological Approaches for Addressing Privacy Concerns When Recognizing Eating Behaviors with Wearable Cameras”, Proceedings of the 15th ACM International Conference on Ubiquitous Computing 2013.

Activity Recognition in the Home and Health Modeling

Hydrostream
E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd

We developed Hydrostream, a server-based platform for the collection, visualization, annotation, and analysis of water pressure data in a home setting. While designed with health applications in mind, Hydrostream can easily be applied to more general-purpose personal informatics applications.

E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd. “Hydrostream: A Platform for Collecting, Annotating and Analyzing Water Pressure for Health Applications”, Personal Informatics Workshop, ACM CHI 2012.

Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing
E. Thomaz, V. Bettadapura, G. Reyes, M. Sandesh, G. Schindler, T. Ploetz, G.D. Abowd, I. Essa

Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge: striking a balance between cost, privacy, ease of installation, and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor, water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can recognize activities with an overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. As far as we know, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
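As a rough illustration of the vector space model idea (not the paper's actual pipeline), one can represent each activity as a vector of water-fixture event counts and classify a new observation by cosine similarity to per-activity prototype vectors; the fixture names and counts below are invented:

```python
# Toy sketch of vector space model classification for water-based
# activity recognition. Fixture vocabulary and counts are invented.

import math

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(observation, prototypes):
    """Assign the observation to the most similar activity prototype."""
    return max(prototypes, key=lambda act: cosine(observation, prototypes[act]))

prototypes = {
    "cooking": {"kitchen_faucet": 6, "dishwasher": 1},
    "shaving": {"bathroom_faucet": 5},
}
obs = {"kitchen_faucet": 4, "dishwasher": 1}
print(classify(obs, prototypes))  # → cooking
```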

E. Thomaz, V. Bettadapura, G. Reyes, M. Sandesh, G. Schindler, T. Ploetz, G.D. Abowd, I. Essa “Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing”, Proceedings of the 14th ACM International Conference on Ubiquitous Computing 2012.

Input Technologies

BeyondTouch
Cheng Zhang, Anhong Guo, Dingtian Zhang, Caleb Southern, Rosa Arriaga, and Gregory Abowd

While most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone), the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional tapping and sliding inputs on the case of and the surface adjacent to the smartphone. We outline the implementation of these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios.
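As a minimal illustration of inferring input from built-in inertial sensors, the sketch below detects taps on the device as spikes in accelerometer magnitude; the thresholds and sample stream are invented, and the actual BeyondTouch pipeline goes much further (e.g., classifying where on the case a tap landed):

```python
# Minimal sketch of tap detection from accelerometer samples.
# Threshold, refractory period, and the sample stream are invented.

def detect_taps(samples, threshold=2.5, refractory=3):
    """Return indices of samples whose acceleration magnitude (in g)
    exceeds `threshold`, ignoring echoes within `refractory` samples."""
    taps, last = [], -refractory - 1
    for i, (x, y, z) in enumerate(samples):
        magnitude = (x * x + y * y + z * z) ** 0.5
        if magnitude > threshold and i - last > refractory:
            taps.append(i)
            last = i
    return taps

# Quiet signal (~1 g of gravity) with two sharp spikes.
stream = [(0, 0, 1.0)] * 5 + [(0.5, 0.2, 3.0)] + [(0, 0, 1.0)] * 5 \
       + [(2.8, 0.1, 1.1)] + [(0, 0, 1.0)] * 3
print(detect_taps(stream))  # → [5, 11]
```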

Cheng Zhang, Anhong Guo, Dingtian Zhang, Caleb Southern, Rosa Arriaga, and Gregory Abowd. 2015. BeyondTouch: Extending the Input Language with Built-in Sensors on Commodity Smartphones. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). ACM, New York, NY, USA, 67-77.

The Tongue and Ear Interface
Himanshu Sahni, Abdelkareem Bedri, Gabriel Reyes, Pavleen Thukral, Zehua Guo, Thad Starner, and Maysam Ghovanloo

We address the problem of performing silent speech recognition when vocalized audio is not available (e.g., due to a user’s medical condition) or is highly noisy (e.g., during firefighting or combat). We describe our wearable system for capturing tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which uses the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user’s tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a data set of 1901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition relies on hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user-dependent recognition.
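To illustrate the model-selection step, here is a toy sketch of choosing among per-phrase discrete HMMs with the forward algorithm; the two-state models, symbol alphabet, and phrase names are invented for illustration (the actual system trains HMMs on real TMI/OEI signals):

```python
# Toy sketch of HMM-based phrase selection: score a discretized
# observation sequence under one discrete HMM per phrase and pick
# the best-scoring phrase. All model parameters here are made up.

import math

def forward_log_prob(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
            for s in range(n)
        ]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

def recognize(obs, phrase_models):
    """Return the phrase whose HMM assigns the sequence the highest
    likelihood."""
    return max(phrase_models,
               key=lambda p: forward_log_prob(obs, *phrase_models[p]))

# Two hypothetical 2-state, 2-symbol phrase models:
# (start probabilities, transition matrix, emission matrix).
models = {
    "call home": ([0.9, 0.1],
                  [[0.7, 0.3], [0.2, 0.8]],
                  [[0.9, 0.1], [0.2, 0.8]]),
    "need help": ([0.5, 0.5],
                  [[0.5, 0.5], [0.5, 0.5]],
                  [[0.1, 0.9], [0.8, 0.2]]),
}
print(recognize([0, 0, 1, 1], models))  # → call home
```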

Himanshu Sahni, Abdelkareem Bedri, Gabriel Reyes, Pavleen Thukral, Zehua Guo, Thad Starner, and Maysam Ghovanloo. 2014. The tongue and ear interface: a wearable system for silent speech recognition. In Proceedings of the 2014 ACM International Symposium on Wearable Computers (ISWC ’14). ACM, New York, NY, USA, 47-54.

BrailleTouch
C. Southern, B. Frey, J. Clawson, G.D. Abowd, M. Romero

BrailleTouch is an eyes-free, chorded text entry technology for mobile devices, based on the standard Perkins Brailler; it lets users touch type Braille on a touchscreen.
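The chorded-input idea can be sketched as a mapping from sets of pressed dot positions to characters; the table below covers only the standard Braille letters a-j for brevity:

```python
# Sketch of the chorded-input idea behind BrailleTouch: six touch
# points (three per hand) correspond to the six Braille dots, and a
# chord maps to a character. Mappings follow standard Braille; only
# a-j are included here for brevity.

BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(dots):
    """Map a set of pressed dot positions (1-6) to a character."""
    return BRAILLE.get(frozenset(dots), "?")

print("".join(decode_chord(c) for c in [{1, 2}, {1}, {1, 4}]))  # → bac
```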

Southern, C., Clawson, J., Frey, B., Abowd, G. D., Romero, M., “An Evaluation of BrailleTouch: Mobile Touchscreen Text Entry for the Visually Impaired.” Mobile HCI 2012. San Francisco, September 2012.

Frey, B., Rosier, K., Southern, C., Romero, M. “From Texting App to Braille Literacy.” Conference on Human Factors in Computing Systems, Extended Abstracts, ACM CHI 2012. Austin, USA: May 2012.

Romero, M., Frey, B., Southern, C., Abowd, G. D., “BrailleTouch: Designing a Mobile Eyes-Free Soft Keyboard.” Mobile HCI 2011, Design Competition. Stockholm, August 2011.

Frey, B., Southern, C., Romero, M., “BrailleTouch: Mobile Texting for the Visually Impaired.” Proceedings of Human-Computer Interaction International, HCII. Orlando: July 2011.

More on BrailleTouch here.

Information Visualization

Behavis: Using Visual Analytics to Explore Social and Communicative Behaviors
Y. Han, A. Rozga, J. Stasko, G.D. Abowd

We developed visual analytics tools to help psychology researchers explore social and communicative behaviors captured by new sensing technologies. The tools are specifically designed to find groups of children that exhibit commonalities in their behaviors.

Y. Han, A. Rozga, N. Dimitrova, G. D. Abowd, and J. Stasko, “Visual Analysis of Proximal Temporal Relationships of Social and Communicative Behaviors,” Computer Graphics Forum, vol. 34, no. 3, pp. 51–60, Jun. 2015.

Han, Y., Rozga, A., Stasko, J., Abowd, G. D., “Visual Exploration of Common Behaviors for Developmental Health.” In Workshop on Visual Analytics in Healthcare. Washington, D.C., Nov 2013.

Han, Y., Rozga, A., Stasko, J. T., Abowd, G. D., “Using Visual Analytics to Understand Social and Communicative Behaviors.” Poster in IEEE VAST ’13. Atlanta, GA, Oct 2013.

More on Behavis here.

Last updated: August 26, 2016 at 15:00