Dragonfly: Sign Language to Voice System
Abstract: Dragonfly is a system we have developed to bridge the communication gap between deaf and hearing individuals. The technology enables deaf and hearing people to communicate with each other in their own languages using mobile devices such as smartphones and tablets. With the application, the deaf person signs, the hearing person speaks, and the system automatically translates each side of the conversation for the other. The tool is designed for impromptu situations when an interpreter is not available and can be used either face-to-face or virtually. To build this capability, we have developed state-of-the-art Artificial Intelligence (AI), Automated Sign Language Recognition (ASLR), Machine Learning (ML), and Machine Translation (MT) algorithms.
Summary: Dragonfly is a technology that enables deaf and hearing individuals to communicate with one another in real time, face-to-face, each in their own language, without the assistance of an interpreter. Dragonfly embodies the ‘power of you’ by enabling deaf users to communicate with non-signers in situations where an interpreter is not available, opening up more kinds of interactions with hearing individuals in everyday life. The system currently translates between American Sign Language (ASL) and English; once it is commercialized, our vision is to extend it to additional languages, such as Japanese Sign Language and French.
At this workshop, we will present the research and development (R&D) we have been conducting on ASL-English MT that runs on mobile devices and demonstrate the Dragonfly system. We will show how the system captures and translates both sides of the conversation: it takes what a person has signed and automatically translates and enunciates it in English for the hearing person, and the avatar we have developed takes what the hearing person has said and signs it for the deaf user to see.
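The two-way pipeline described above can be sketched in miniature. This is purely an illustrative outline of the stages (sign recognition → translation → speech for one direction, speech → translation → avatar for the other); every class name, the toy phrase table, and the gloss-tagged frames are hypothetical stand-ins, not the actual Dragonfly components.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Illustrative stand-in for a captured video frame, pre-tagged with its gloss."""
    gloss: str

class SignRecognizer:
    """Stand-in for the ASLR stage: sign video frames -> ASL gloss sequence."""
    def recognize(self, frames):
        # A production recognizer would run an ML vision model over raw frames;
        # this sketch assumes each frame already carries its gloss label.
        return [f.gloss for f in frames]

class GlossMT:
    """Stand-in for the MT stage, using a toy phrase table in each direction."""
    ASL_TO_EN = {("HELLO",): "Hello.", ("COFFEE", "WANT"): "I would like a coffee."}
    EN_TO_ASL = {v: list(k) for k, v in ASL_TO_EN.items()}

    def to_english(self, glosses):
        return self.ASL_TO_EN.get(tuple(glosses), "[untranslated]")

    def to_glosses(self, english):
        return self.EN_TO_ASL.get(english, ["[untranslated]"])

class Avatar:
    """Stand-in for the signing avatar: glosses -> rendered sign tokens."""
    def render(self, glosses):
        return [f"<sign:{g}>" for g in glosses]

def deaf_to_hearing(frames, recognizer, mt):
    """Signed input -> English text (which the real system would enunciate via TTS)."""
    return mt.to_english(recognizer.recognize(frames))

def hearing_to_deaf(utterance, mt, avatar):
    """Spoken input (already transcribed) -> avatar sign rendering."""
    return avatar.render(mt.to_glosses(utterance))
```

In the real system each stand-in would be a learned model running on the mobile device, but the shape of the loop, two symmetric translation paths sharing an MT core, is the point of the sketch.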
In addition, we will describe our ongoing efforts to improve the accuracy of the system’s automated translations, our plans for commercializing the technology, and our goal of eventually translating between multiple languages and dialects.
Learning Objectives: The audience will learn about this new enabling technology and the commercialization plans we are putting in place to bring it to market in the near future. They will learn how this tool can empower deaf users and facilitate inclusion by increasing the number of conversations between the hearing and deaf communities and fostering relationship building. Attendees will come away informed about a technology they may wish to use when it becomes available.