Articulate+ - A Conversational Interface for Democratizing Visual Analysis

Researchers: Andrew Johnson, Jillian Aurisano, Veronica Grosso

PI: Andrew Johnson (UIC/EVL); co-PIs: Barbara DiEugenio (UIC/CS), Moira Zellner (Northeastern University)
Collaborator: Jason Leigh (University of Hawaii Manoa)

The Internet gives the public, including scientists, access to vast amounts of data. Visualization is a powerful tool for turning that data into a form that is easier to understand, helping people make better decisions. However, creating good visualizations from data is not easy. The Articulate+ Project will generate new knowledge about how to empower people with information visualization by using speech and gesture as inputs in new ways. A goal is to enable users to benefit from imprecise statements about what they want, rather than needing to give specific commands telling the computer what to do. This project will make it easier for citizens to engage in conversations with the computer about economic, medical, demographic, transportation, climate, and sports data; to reach answers that are more meaningful; and so to democratize data literacy.

This research will use speech, gesture, and log data as new input interaction modalities for visual analytics. It will develop new understandings of how the imprecise and vague nature of natural language queries and gestures, as contextualized by work on a specific visual analysis problem, can be modeled as an opportunity for an intelligent software system to learn more about the underlying intent of its users. This model of intent, in turn, will be used to provide contextualized visualizations, which are expected to help those users gain valuable insights from their data. The audio speech data will be used to computationally model overhearing, to help infer users’ current contextualized goals. The gesture data will be used to disambiguate expressions that refer to visual elements of a visualization (e.g., “that pie chart about electricity consumption”). The project will develop new understandings of how to combine the log of visualization state, the computational model of overhearing, and the gesture data in order to generate new visualizations in support of users’ work on tasks. Evaluations will be performed in both controlled laboratory and naturalistic study settings to determine whether effective semantic parsers can be derived for these specific visual analysis domains, and how this contextual interface affects users’ experience of visualization and discovery. The integrated, annotated datasets from the studies will be made available to the research community, satisfying a need for ecologically valid, situated language interaction corpora. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
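
To make the fusion idea above concrete, the following is a minimal, purely illustrative sketch of how a spoken utterance, a pointing gesture, and the logged visualization state might be combined to resolve a vague referring expression and guess the requested action. It is not the project's implementation; all names here (Gesture, VisState, infer_intent) are hypothetical, and the keyword matching merely stands in for the semantic parser and overhearing model described above.

    from dataclasses import dataclass, field
    from typing import Dict, Optional


    @dataclass
    class Gesture:
        target_chart_id: str  # chart the user pointed at, e.g. "pie_electricity"


    @dataclass
    class VisState:
        # chart_id -> short description, taken from the visualization-state log
        charts: Dict[str, str] = field(default_factory=dict)


    def infer_intent(utterance: str, gesture: Optional[Gesture], state: VisState) -> dict:
        """Resolve a vague referring expression ("that pie chart ...") against the
        gesture target and the logged visualization state, then guess the action."""
        # Ground the referring expression: prefer the gestured-at chart if it is
        # present in the current visualization state.
        referent = None
        if gesture is not None and gesture.target_chart_id in state.charts:
            referent = gesture.target_chart_id

        # Naive keyword cues stand in for the semantic parser / overhearing model.
        modify_cues = ("change", "instead", "rather")
        action = "modify" if any(cue in utterance.lower() for cue in modify_cues) else "create"
        return {"action": action, "referent": referent, "utterance": utterance}


    if __name__ == "__main__":
        state = VisState(charts={"pie_electricity": "pie chart of electricity consumption"})
        gesture = Gesture(target_chart_id="pie_electricity")
        print(infer_intent("show that as a bar chart instead", gesture, state))
        # -> {'action': 'modify', 'referent': 'pie_electricity', 'utterance': ...}

In the real system, the grounding and action-selection steps would be driven by the learned models of overhearing and intent rather than by keyword matching; the sketch only shows how the three input streams come together.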

Date: October 1, 2020 - September 30, 2022
