Advancing technological innovation and addressing the data desert that exists for sign language have been areas of focus for the AI for Accessibility program. Toward those goals, in 2019 the team hosted a sign language workshop, soliciting applications from top researchers in the field. Abraham Glasser, a Ph.D. student in Computer Science and a native American Sign Language (ASL) signer, supervised by Professor Matt Huenerfauth, was awarded a three-year fellowship. His work would focus on a very pragmatic need and opportunity: driving inclusion by focusing on and improving common interactions with smart home assistants for people who use sign language as their primary form of communication.
Since then, faculty and students from the Rochester Institute of Technology (RIT) Golisano College of Computing and Information Sciences have conducted this work at the Center for Accessibility and Inclusion Research (CAIR). CAIR publishes research on computing accessibility and includes many deaf and hard of hearing (DHH) students who work bilingually in English and American Sign Language.
To begin this research, the team investigated how DHH users would prefer to interact with their personal assistant devices, whether a smart speaker or another type of device in the home that responds to spoken commands. Traditionally, these devices have used voice-based interaction, and as technology has evolved, newer models now incorporate cameras and display screens. Currently, none of the devices on the market understand commands in ASL or other sign languages, so introducing that capability is an important future technology development to address an untapped customer base and drive inclusion. Abraham explored simulated scenarios in which, through the device's camera, the technology could view a user's signing, process the user's request, and display the output on the device's screen.
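To make that envisioned camera-to-screen interaction concrete, here is a minimal sketch of the three stages described above. The function names (recognize_signed_command, process_request, handle_interaction) and the overall structure are illustrative assumptions, not an implementation from the research or from any existing device.

def recognize_signed_command(camera_frames) -> str:
    # Hypothetical sign-recognition step: camera frames in, English text of the command out.
    # No such capability ships with today's devices; this is only a placeholder.
    raise NotImplementedError

def process_request(command_text: str) -> str:
    # Hypothetical request processing, e.g. "weather tomorrow" becomes a displayable answer.
    return f"Showing results for: {command_text}"

def handle_interaction(camera_frames) -> str:
    # 1. View the user's signing through the device's camera.
    command = recognize_signed_command(camera_frames)
    # 2. Process the user's request.
    answer = process_request(command)
    # 3. Return the output to be shown on the device's screen.
    return answer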
Some previous research had focused on the phases of interaction with a personal assistant device, but few studies included DHH users. Examples of the available research included studies of device activation, including concerns about waking a device, as well as modalities of device output in the form of videos, ASL avatars, and English captioning. The call to action from a research perspective was to collect more data, the key bottleneck for sign language technologies.
To pave the way for technological advancements, it was critical to understand what DHH users would like this interaction with devices to look like and what types of commands they would like to issue. Abraham and the team set up a Wizard-of-Oz videoconferencing setup. An ASL interpreter, the "wizard," had a personal home assistant device in the room with them and joined the call without being seen on camera. The device's display and output were visible in the call's video window, and each participant was guided by a research facilitator. When the deaf participants signed to the personal home assistant device, they were unaware that the ASL interpreter was voicing the commands in spoken English. A team of annotators watched the recordings, identified key segments of the videos, and transcribed each command into English and ASL gloss.
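As an illustration of the kind of record such an annotation pass could produce, here is a minimal sketch. The field names and the example command are hypothetical and are not taken from the study's actual annotation scheme.

from dataclasses import dataclass

@dataclass
class AnnotatedCommand:
    video_id: str      # which session recording the segment comes from
    start_sec: float   # start of the key segment in the video
    end_sec: float     # end of the key segment
    english: str       # English transcription of the signed command
    asl_gloss: str     # ASL gloss transcription (conventionally written in upper case)

# Hypothetical example of one annotated segment:
example = AnnotatedCommand(
    video_id="session_07",
    start_sec=83.2,
    end_sec=86.9,
    english="What's the weather tomorrow?",
    asl_gloss="TOMORROW WEATHER WHAT",
)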
Abraham was able to identify new ways users would interact with the device, such as “wake up” commands that weren’t captured in previous research.