Event Title

Information Salience: Artificial Intelligence Models of Human Attention

Presenter Information

Noel Warford, Oberlin College

Location

Science Center, Bent Corridor

Start Date

10-27-2017 6:00 PM

End Date

10-27-2017 6:40 PM

Research Program

Vanderbilt Undergraduate Summer Research Program (VUSRP), Vanderbilt University

Poster Number

41

Abstract

This work presents a new computational cognitive architecture used to model and understand human visual attention in the specific context of visual search for a spatiotemporal target. This type of search occurs frequently in human experience, from military or aviation staff monitoring complex displays of multiple moving objects to daycare teachers monitoring a group of children on a playground for risky behaviors. What differentiates these spatiotemporal search tasks from more traditional visual search tasks is that the target cannot be identified from a single frame of visual experience; the target is defined as a spatiotemporal pattern that unfolds over time, and so detecting the target is also an activity that must integrate information over time. A previous human-participant study found that humans show interesting attentional capacity limitations in this type of search task. We created a computational cognitive architecture, called the SpatioTemporal Template-based Search (STTS) architecture, that solves the same spatiotemporal search task using a wide variety of parameterized models, each of which represents a different cognitive theory of visual attention from the psychological literature. We present results from initial computational experiments using STTS as a first step toward understanding the computational nature of attentional bottlenecks in this type of search task, and we discuss how continued STTS experiments will help determine which theoretical models best explain the capacity limitations shown by humans. We expect that, in the long run, results from this research will help refine the design of visual information displays so that human operators can perform more efficiently and effectively on difficult, real-world monitoring tasks.
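The core idea above, that a spatiotemporal target cannot be detected from any single frame and must instead be matched against a pattern accumulated over time, can be illustrated with a minimal sketch. This is not the STTS architecture itself; the names (`match_score`, `detect_target`) and the symbolic "movement per frame" representation are hypothetical simplifications for illustration.

```python
# Hypothetical sketch: template matching over time, not over a single frame.
# Each object's history is a list of per-frame observations (here, symbolic
# movement directions); a target is defined by a multi-frame template.

def match_score(history, template):
    """Fraction of the template matched by an object's most recent frames."""
    if len(history) < len(template):
        return 0.0
    recent = history[-len(template):]
    hits = sum(1 for obs, pat in zip(recent, template) if obs == pat)
    return hits / len(template)

def detect_target(object_histories, template, threshold=1.0):
    """Return ids of objects whose recent history matches the template.
    No single frame is sufficient: the decision integrates len(template)
    frames of evidence."""
    return [oid for oid, hist in object_histories.items()
            if match_score(hist, template) >= threshold]

# Example: the target is "three successive rightward moves".
template = ["R", "R", "R"]
histories = {
    "obj1": ["L", "R", "R", "R"],   # last three frames match the template
    "obj2": ["R", "R", "L", "R"],   # no three-frame window matches
}
print(detect_target(histories, template))  # → ['obj1']
```

A parameterized family of such matchers (e.g., limiting how many objects can be tracked at once, or how many frames of history are retained) is one way capacity limitations of the kind described in the abstract could be modeled.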

Major

Organ Performance; Computer Science

Project Mentor(s)

Maithilee Kunda and Adriane Seiffert, Psychology, Vanderbilt University

Document Type

Poster

 