RESEARCH


CURRENT PROJECTS

CAMPUSLIFE
Health and Technology

Together with Dartmouth College, Carnegie Mellon University, and Cornell University, we at Georgia Tech are interested in extending the seminal StudentLife work started by Dartmouth's Andrew Campbell a few years ago. Campbell sought to determine whether mental health and academic performance could be correlated with, or even predicted from, a student's digital footprint. We are proposing the CampusLife project as a logical extension, which aims to collect data from relevant subsets of a campus community through their interactions with mobile and wearable technology, social media, and the environment itself using Internet of Things instrumentation. The project is driven both by the human goal of understanding wellness in young adults and by the methodological question of how to perform such experimentation while addressing its significant security and privacy challenges. We invite everyone in the Georgia Tech community to join a conversation with the lead researchers from the four universities to help shape the direction of this nascent effort.

For more information, contact Vedant Das Swain.


COSMOS
Human-Environment Interactions
Computational Materials

COSMOS is an interdisciplinary collaboration to design, manufacture, fabricate, and apply COmputational Skins for Multi-functional Objects and Systems (COSMOS). These skins consist of dense, high-performance, seamlessly networked, ambiently powered computational nodes that can process, store, and communicate sensor data. Achieving this vision will redefine the basis of human-environment interactions by creating a world in which everyday objects and information technology become inextricably entangled. COSMOS seeks to rethink the embodiment of computing through the integration of materials science and computation, literally realizing Weiser’s figurative “weaving” of technology into the fabric of everyday life.

For more information, please visit https://cosmos.gatech.edu or contact Dingtian Zhang.


PECSS

Prolonged Exposure Collective Sensing System (PECSS)
Mental Health
Post-Traumatic Stress Disorder (PTSD)

We are conducting research at the intersection of mental health management and computer science. Our project, the Prolonged Exposure Collective Sensing System (PECSS), seeks to address some of the pressing clinical challenges inherent in PTSD therapy by leveraging ubiquitous computing, human-computer interaction, and machine learning. PECSS is a novel, user-tailored sensing system that supports patient data transfer and information extraction during therapeutic exercises, along with interfaces (a mobile application and dashboards) for monitoring this information. Over the next two years we will develop, validate, and deploy computational models of heterogeneous, PE-related sensor data to support and improve the delivery and effectiveness of PTSD treatment for clinicians and patients. This work is funded by the National Science Foundation, in collaboration with Emory University and the University of Rochester.

For more information, please contact Rosa Arriaga.


Computational Behavioral Science and Analysis
Autism
Chronic Diseases

A collaboration between developmental psychologists and computer scientists seeks to develop novel computational tools and methods to objectively measure behaviors in natural settings. The goal is to develop tools that can help us better detect, understand, and ultimately treat autism and other chronic diseases. Current standard practices for extracting useful behavioral information are typically difficult to replicate and labor-intensive: extensive training is required before a human coder can reliably code a particular behavior or interaction, and manual coding typically takes far longer than the actual length of the video. The time-intensive nature of this process severely limits the scalability of studies.

For more information, contact Agata Rozga, Rosa Arriaga, and Gregory Abowd.


Activity and Gesture Recognition for Mobile and Wearable Computing
Activity Recognition
Gesture Recognition

Over the past few years, we have seen a number of wearable devices emerge that did a small number of tasks well (e.g., step counting). As these wrist-worn health trackers gained in popularity, commercial devices sought to do even more things around the wrist, to the point where a smartwatch is trying to become an all-purpose interaction device. Our group is working on a variety of explorations of mobile and on-body activity and gesture recognition systems, using both commodity sensing in existing devices and new form factors with novel sensing. Our goal is to expand the richness of existing interactions and activity recognition capabilities for everyday mobile and wearable computing users.

For activity recognition and passive sensing systems, reach out to Hyeokhyen Kwon.


Eating Detection
Health and Technology

Chronic and widespread diseases such as obesity, diabetes, and hypercholesterolemia require patients to monitor their food intake, and food journaling is currently the most common method for doing so. However, food journaling is subject to self-bias and recall errors, and is poorly adhered to by patients. This project explores the different ways eating episodes can be recognized as well as the potential applications of being able to recognize these episodes.

For more information, please refer to this page or contact Mehrab Bin Morshed.

Security and Privacy
Security
Privacy

Security and privacy help realize the full potential of computing in society. Without authentication and encryption, for example, few would use digital wallets, social media or even e-mail. The struggle of security and privacy is to realize this potential without imposing too steep a cost. Yet, for the average non-expert, security and privacy are just that: costly, in terms of things like time, attention and social capital. More specifically, security and privacy tools are misaligned with core human drives: a pursuit of pleasure, social acceptance and hope, and a repudiation of pain, social rejection and fear. It is unsurprising, therefore, that for many people, security and privacy tools are begrudgingly tolerated if not altogether subverted. This cannot continue. As computing encompasses more of our lives, we are tasked with making increasingly more security and privacy decisions. Simultaneously, the cost of every breach is swelling. Today, a security breach might compromise sensitive data about our finances and schedules as well as deeply personal data about our health, communications, and interests. Tomorrow, as we enter the era of pervasive smart things, that breach might compromise access to our homes, vehicles and bodies.

We aim to empower end-users with novel security and privacy systems that connect core human drives with desired security outcomes. We do so by creating systems that mitigate pain, social rejection and fear, and that enhance feelings of hope, social acceptance and pleasure. Ultimately, the goal of the Ubicomp Group/SPUD Lab is to design new, more user-friendly systems that encourage better end-user security and privacy behaviors.

For more information, contact Youngwook Do.


PAST PROJECTS

Transportation Informatics
Transportation

By Caleb Southern and Gregory Abowd
A great deal of work in personal informatics has focused on health and well-being, including areas such as fitness, eating, and sleep. Relatively little work has focused on transportation. According to the Bureau of Labor Statistics, transportation (mostly driving) is the second highest expense for the average American household, ahead of food and healthcare and behind only housing. Transportation planners have long explored various strategies (mostly incentive-based) to encourage people to choose alternatives to driving alone, such as ride sharing, transit, and walking or biking. We have developed a personal informatics system that allows individuals to track their driving trips, and provides the user with an estimated cost for each trip (including fuel, vehicle depreciation, maintenance, insurance, taxes, and fees). By aggregating these dispersed costs on a per-trip basis, we provide drivers with a way to directly compare the cost of each driving trip to alternatives, such as Uber, transit, and so forth. We are exploring how revealing this personalized cost information affects individual awareness, including its potential to change how people make choices about transportation modes and discretionary trips.
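The per-trip aggregation described above can be sketched as follows. This is an illustrative model only, not the project's actual implementation; every rate and price below is a hypothetical placeholder.

```python
# Sketch of aggregating dispersed driving costs on a per-trip basis.
# All constants are hypothetical placeholders, not figures from the project.

FUEL_PRICE_PER_GALLON = 3.50      # hypothetical fuel price
MPG = 25.0                        # hypothetical vehicle fuel economy
PER_MILE_RATES = {                # hypothetical amortized costs per mile
    "depreciation": 0.10,
    "maintenance": 0.06,
    "insurance": 0.05,
    "taxes_and_fees": 0.02,
}

def trip_cost(miles: float) -> float:
    """Estimate the full cost of a single driving trip, in dollars."""
    fuel = miles / MPG * FUEL_PRICE_PER_GALLON
    dispersed = miles * sum(PER_MILE_RATES.values())
    return round(fuel + dispersed, 2)
```

A 10-mile trip under these placeholder rates costs far more than fuel alone, which is exactly the awareness gap the system targets: drivers see only the gas pump, not the amortized costs.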


Using SMS to support chronic illness management for pediatric patients
Technologies for Special Needs

By T. Yun, Y. Han, K. An, G.D. Abowd, R. Arriaga
We developed an SMS system that sends periodic questions to patients with diabetes or asthma to educate them and communicate the status of their condition to their health care providers. Our earlier study, a randomized controlled trial, showed that within 3-4 months of use, adolescent asthma patients were able to improve their pulmonary function.

T.-J. Yun and R. I. Arriaga, “A text message a day keeps the pulmonologist away,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 1769-1778.

Y. Han, T. Yun, G.D. Abowd, R.I. Arriaga, How to Design Persuasive Health Technologies for Adolescents with Chronic Illness?, CHI Workshop 2012.

T.-J. Yun, H. Y. Jeong, T. D. Hill, B. Lesnick, R. Brown, G. D. Abowd, and R. I. Arriaga, “Using SMS to provide continuous assessment and improve health outcomes for children with asthma,” in Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, 2012, pp. 621-630.


In-home Capture and Sharing of ‘Behavior Specimens’
Technologies for Special Needs

By Nazneen, G.D. Abowd, R. Arriaga, A. Rozga
We are designing a smartphone-based video capture system, smartCapture, which can support parents in collecting samples of their child’s problem behaviors in the home with remote assistance from clinical specialists. The motivation is to support assessment and intervention of behaviors of concern with evidence collected in the natural environment. We define smartCapture as a system that allows the collection and sharing of behavior specimens. This is analogous to specimen collection containers given to patients at a lab or a clinic. In this case the behavior clinic or school can ship smartCapture, preconfigured with a required list of behaviors, to parents to collect and share specimens of their child’s behaviors. The primary goal of the smartCapture system is to increase the number of families that can be effectively managed and to improve access to care in rural and remote communities.

Nazneen, Fatima A. Boujarwah, Agata Rozga, Ron Oberleitner, Suhas Pharkute, Gregory D. Abowd, Rosa I. Arriaga. Towards in-home Collection of Behavior Specimens: within the Cultural Context of Autism in Pakistan. 6th International Conference on Pervasive Computing Technologies for Healthcare, 21-24 May 2012, San Diego, California, USA.

Nazneen, Agata Rozga, Mario Romero, Addie J. Findley, Nathan A. Call, Gregory D. Abowd, Rosa Arriaga. Supporting Parents for in-Home Capture of Problem Behaviors of Children with Developmental Disabilities. Journal of Personal and Ubiquitous Computing 16(2): 193-207 (2012).


A Specialized Social Network for Day-to-day Independence
Technologies for Special Needs

By H. Hong, G.J. Kim, G.D. Abowd, R. Arriaga
Building social support networks is crucial both for less-independent individuals with autism and for their primary caregivers. We investigate the role of a social network service (SNS) that allows young adults with autism to garner support from their family and friends. We explore the unique benefits and challenges of using SNSs to mediate requests for help or advice. In particular, we examine the extent to which specialized features of a SNS can engage users in communicating with their network members to get advice in varied situations. Our findings indicate that technology-supported communication particularly strengthened the relationship between the individual and extended network members, mitigating concerns about over-reliance on primary caregivers. Our work identifies implications for the design of social networking services tailored to meet the needs of this special needs population.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “A Specialized Social Networking Service to Promote the Independence of Young Adults with Autism”, The International Meeting for Autism Research, May 17-19 2012, Toronto, Canada.


Social Mirror
Technologies for Special Needs

By H. Hong, G.J. Kim, G.D. Abowd, R. Arriaga
Independence is the key to a successful transition to adulthood by individuals with autism. Social support is a crucial factor for achieving adaptive self-help life skills. We conducted a formative design exercise with young adults with autism and caregivers to uncover opportunities for social networks to promote independence and facilitate coordination. The results of this study led to the concept of SocialMirror, an interactive mirror connected to an online social network that allows the young adult to seek advice from a trusted and responsive network of family, friends and professionals. Focus group discussions reveal the potential for SocialMirror to increase motivation to learn everyday life skills for young adults with autism and foster collaboration with a distributed care network. We present important design considerations to leverage a small trusted network that balances quick response with safeguards for privacy and security of young adults with autism.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “Designing a Social Network to Support the Independence of Young Adults with Autism”, Proceedings of the 15th ACM International Conference on Computer-Supported Cooperative Work (CSCW 2012). Seattle, WA.

Hong, H., Kim, G.J., Abowd, G.D., Arriaga, R.I., “SocialMirror: Motivating Young Adults with Autism to Practice Life Skills in a Social World”, CSCW 2012 Videos.


Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation
Food Journaling
Eating Detection

By E. Thomaz, A. Parnami, I. Essa, G.D. Abowd
There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual’s eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.
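A common way to turn multiple crowd workers' judgments of the same image into a single label is majority voting. The sketch below illustrates that general aggregation pattern; it is an assumption for exposition, not the paper's specific pipeline, and the label names are hypothetical.

```python
from collections import Counter

def label_image(annotations):
    """Aggregate several workers' labels for one image by majority vote.

    annotations: a list of labels, e.g. "eating" / "not_eating".
    Returns the most common label (ties resolved by first-seen order,
    which is how Counter.most_common breaks ties).
    """
    return Counter(annotations).most_common(1)[0][0]

def label_stream(image_annotations):
    """Label every image in a first-person photo stream.

    image_annotations: list of per-image annotation lists.
    """
    return [label_image(a) for a in image_annotations]
```

Applied over a day's worth of FPPOV images, the resulting label sequence marks candidate eating moments for dietary assessment or self-reflection.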

E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd. “Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation”, ACM International SenseCam and Pervasive Imaging (SenseCam ’13), 2013.


Technological Approaches for Addressing Privacy Concerns When Recognizing Eating Behaviors with Wearable Cameras
Food Journaling
Eating Detection

By E. Thomaz, A. Parnami, J. Bidwell, I. Essa, G.D. Abowd
First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people’s eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.
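The core idea of a two-axis matrix like this can be sketched as placing each image in a quadrant by its eating evidence and privacy risk. The scores, threshold, and quadrant names below are illustrative assumptions, not the paper's actual scoring scheme.

```python
def privacy_saliency_quadrant(eating_evidence: float, privacy_risk: float,
                              threshold: float = 0.5) -> str:
    """Place an image in one quadrant of a privacy-saliency matrix.

    Both scores are assumed normalized to [0, 1]; the 0.5 threshold
    and the quadrant names are illustrative choices.
    """
    salient = eating_evidence >= threshold
    sensitive = privacy_risk >= threshold
    if salient and sensitive:
        return "keep-but-filter"   # eating evidence, but privacy-infringing
    if salient:
        return "keep"              # useful and safe to analyze
    if sensitive:
        return "discard"           # no dietary value, real privacy risk
    return "ignore"                # neither informative nor sensitive
```

Automated filters such as face detection or image cropping can then be judged by how many images they move out of the privacy-infringing quadrants without losing the eating-salient ones.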

E. Thomaz, A. Parnami, J. Bidwell, I. Essa, G.D. Abowd “Technological Approaches for Addressing Privacy Concerns When Recognizing Eating Behaviors with Wearable Cameras”, Proceedings of the 15th ACM International Conference on Ubiquitous Computing 2013.


Hydrostream
Activity Recognition
Health Modeling

By E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd
We are developing Hydrostream, a server-based platform for the collection, visualization, annotation and analysis of water pressure in a home setting. While designed with health applications in mind, Hydrostream can be easily applied towards more general-purpose personal informatics applications.

E. Thomaz, T. Ploetz, I. Essa, G.D. Abowd. “Hydrostream: A Platform for Collecting, Annotating and Analyzing Water Pressure for Health Applications”, Personal Informatics Workshop, ACM CHI 2012.


Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing
Activity Recognition
Health Modeling

By E. Thomaz, V. Bettadapura, G. Reyes, M. Sandesh, G. Schindler, T. Ploetz, G.D. Abowd, I. Essa
Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. As far as we know, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
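A vector space model over sensed events can be sketched as follows: represent each time window as a vector of event counts and assign it to the activity whose centroid vector is most similar. This is a generic nearest-centroid illustration under assumed event features, not the paper's actual feature set or classifier.

```python
import math

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(window_vec, centroids):
    """Assign a window of water-fixture events to the activity whose
    centroid it is most similar to in the vector space.

    window_vec: counts of event types (hypothetical features, e.g.
                hot/cold valve openings) observed in one window.
    centroids:  dict mapping activity label -> centroid vector.
    """
    return max(centroids, key=lambda label: cosine(window_vec, centroids[label]))
```

With centroids learned from labeled windows, classification at runtime is a single similarity comparison per activity, which keeps the system cheap enough for a single-sensor deployment.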

E. Thomaz, V. Bettadapura, G. Reyes, M. Sandesh, G. Schindler, T. Ploetz, G.D. Abowd, I. Essa “Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing”, Proceedings of the 14th ACM International Conference on Ubiquitous Computing 2012.


BeyondTouch
Input Technologies

While most smartphones today have a rich set of sensors that could be used to infer input (e.g., accelerometer, gyroscope, microphone), the primary mode of interaction is still limited to the front-facing touchscreen and several physical buttons on the case. To investigate the potential opportunities for interactions supported by built-in sensors, we present the implementation and evaluation of BeyondTouch, a family of interactions to extend and enrich the input experience of a smartphone. Using only existing sensing capabilities on a commodity smartphone, we offer the user a wide variety of additional tapping and sliding inputs on the case of and the surface adjacent to the smartphone. We outline the implementation of these interaction techniques and demonstrate empirical evidence of their effectiveness and usability. We also discuss the practicality of BeyondTouch for a variety of application scenarios.
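The kind of commodity-sensor inference involved can be sketched as detecting off-screen taps as short spikes in accelerometer magnitude. This is a minimal illustrative detector, not BeyondTouch's actual pipeline; the threshold and refractory window are assumed values.

```python
def detect_taps(accel_magnitudes, threshold=2.5, refractory=5):
    """Flag candidate off-screen taps in a stream of accelerometer
    magnitude samples (in g).

    A tap is a sample exceeding `threshold`; `refractory` samples are
    then skipped so one physical tap is not counted twice.
    Both parameters are illustrative, not tuned values.
    """
    taps = []
    cooldown = 0
    for i, magnitude in enumerate(accel_magnitudes):
        if cooldown:
            cooldown -= 1
            continue
        if magnitude > threshold:
            taps.append(i)
            cooldown = refractory
    return taps
```

A real system would add gyroscope and microphone features to distinguish tap locations (e.g., back vs. side of the case), which simple magnitude thresholding cannot do.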

Cheng Zhang, Anhong Guo, Dingtian Zhang, Caleb Southern, Rosa Arriaga, and Gregory Abowd. 2015. BeyondTouch: Extending the Input Language with Built-in Sensors on Commodity Smartphones. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). ACM, New York, NY, USA, 67-77.


BrailleTouch
Input Technologies

BrailleTouch is an eyes-free mobile chorded text entry technology based on the standard Perkins Brailler: a system for touch typing on a touchscreen.
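Chorded entry means each character is produced by a simultaneous combination of key presses. The sketch below decodes a six-key chord using standard Braille dot numbering (dots 1-6); only a handful of letters are shown, and the chord representation is an illustrative assumption rather than BrailleTouch's implementation.

```python
# Standard Braille dot patterns for a few letters (dots numbered 1-6).
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_chord(pressed_dots):
    """Map the set of simultaneously pressed dot keys to a character.

    Unknown chords return "?"; a full keyboard would cover the entire
    Braille code plus space and editing chords.
    """
    return BRAILLE.get(frozenset(pressed_dots), "?")
```

Because each chord is recognized by which keys are down rather than where they are touched, a typist who knows Braille can enter text without looking at the screen.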

Southern, C., Clawson, J., Frey, B., Abowd, G. D., Romero, M., “An Evaluation of BrailleTouch: Mobile Touchscreen Text Entry for the Visually Impaired.” Mobile HCI 2012. San Francisco, September 2012.

Frey, B., Rosier, K., Southern, C., Romero, M. “From Texting App to Braille Literacy.” Conference on Human Factors in Computing Systems, Extended Abstracts, ACM CHI 2012. Austin, USA: May 2012.

Romero, M., Frey, B., Southern, C., Abowd, G. D., “BrailleTouch: Designing a Mobile Eyes-Free Soft Keyboard.” Mobile HCI 2011, Design Competition. Stockholm, August 2011

Frey, B., Southern, C., Romero, M., “BrailleTouch: Mobile Texting for the Visually Impaired.” Proceedings of Human-Computer Interaction International, HCII. Orlando: July 2011


Information Visualization
Visualization

By Y. Han, A. Rozga, J. Stasko, G.D. Abowd
We developed visual analytics tools to help psychology researchers explore social and communicative behaviors captured by new sensing technologies. The tools are specifically designed to find groups of children that exhibit commonalities in their behaviors.

Y. Han, A. Rozga, N. Dimitrova, G. D. Abowd, and J. Stasko, “Visual Analysis of Proximal Temporal Relationships of Social and Communicative Behaviors,” Computer Graphics Forum, vol. 34, no. 3, pp. 51–60, Jun. 2015.

Han, Y., Rozga, A., Stasko, J., Abowd, G. D., “Visual Exploration of Common Behaviors for Developmental Health.” To appear in Workshop on Visual Analytics in Healthcare. Washington, D.C., Nov 2013.

Han, Y., Rozga, A., Stasko, J. T., Abowd, G. D., “Using Visual Analytics to Understand Social and Communicative Behaviors.” Poster at IEEE VAST ’13. Atlanta, GA, Oct 2013.

More on Behavis here.