Human-Computer Interaction Platform for the Hearing Impaired in Health and Finance Applications

The principal goal of this project is the development of a depth-based sign language recognition prototype and related demo applications, which hearing-impaired users will use to aid their communication with healthcare and finance professionals in their daily tasks.

According to the 2000 DIE (State Institute of Statistics) census, there are 109,000 people with total hearing disability in Turkey. In their daily routines, the hearing impaired are forced to rely on either written materials or the aid of an accompanying Turkish Sign Language interpreter to establish basic communication, since they are unable to use speech as a medium of communication. The staggeringly low literacy rate among the hearing impaired greatly hinders the integration of this population, creating both a social and an economic disadvantage.

Sign languages are the main communication medium of hearing-impaired people. They convey meaning through hand movements, facial gestures and upper body postures, and they differ from country to country. Turkish Sign Language (TİD) is used by hearing-impaired people of Turkish origin and has its own unique combinations of hand movements, facial gestures and upper body postures. In 2005, as part of the European Union integration effort, the Turkish Social Services law made it mandatory for all governmental organizations and offices to employ a TİD interpreter. In 2006, the use of TİD and the training of TİD interpreters were introduced into the Turkish Social Services law through regulations.

These legal requirements are among the motivating factors behind this project. Since it is neither practical nor economically feasible to train and employ TİD interpreters for every government office, practical solutions such as sign language translation call centers staffed by TİD interpreters are being considered. The ideal solution to this dilemma is the recognition and translation of sign languages into speech through the use of technology. Such a system would diminish the need for sign language interpreters while fully integrating the hearing impaired. However, the technology needed for such a system is beyond the current state of the art: today’s technology only allows for sign-to-speech translation systems with extremely limited corpora that are heavily user dependent. The goal of this project is the realization of a sign language recognition system that performs successful recognition over an extended vocabulary and is signer independent.

For over half a century, NETAS has provided communications solutions to both governmental and private organizations. The WebRTC platform being developed by NETAS allows for two-way, real-time text, audio and video communication between Internet browsers and classical networks. It is one of many solutions which the company hopes to integrate with sign language recognition and accessibility modules in the future.
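
The internals of the NETAS platform are not described here. Purely as an illustrative sketch of the kind of browser-facing WebRTC exchange mentioned above, the following uses the Python aiortc library (an assumed choice, not the platform's actual stack) to answer a browser's offer and receive its real-time text, audio and video streams.

```python
# Illustrative only: the actual NETAS WebRTC platform is not described in this
# project summary. This sketch answers a browser's SDP offer with aiortc.
import asyncio
from aiortc import RTCPeerConnection, RTCSessionDescription


async def answer_browser_offer(offer_sdp: str) -> str:
    """Accept an SDP offer from a browser and return an SDP answer."""
    pc = RTCPeerConnection()

    @pc.on("datachannel")
    def on_datachannel(channel):
        # Real-time text: echo each message back to the browser.
        @channel.on("message")
        def on_message(message):
            channel.send(f"received: {message}")

    @pc.on("track")
    def on_track(track):
        # Incoming audio/video track from the browser; a recognition
        # module could consume these frames here.
        print(f"Receiving {track.kind} track")

    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=offer_sdp, type="offer")
    )
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return pc.localDescription.sdp
```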

One of the main goals of this project is to develop a computer-vision-based system that recognizes a chosen corpus of significant TİD gestures. The system will aim to provide notification and guidance services in the healthcare and finance domains. Taking the market potential of the project into account, some sample application domains are as follows:

1- Hospital Information Desk: Providing guidance, directions and appointment services to hearing-impaired users.
2- Bank: Providing guidance and service-related information to hearing-impaired users.

The university group of the project is well versed in both the theoretical and applied aspects of sign language recognition. (Detailed information and a full list of publications are available at: http://www.cmpe.boun.edu.tr/pilab/doku.php?id=research:sign_language_rec...) The research can now be considered to have reached the prototyping stage. With this project, a prototype will be developed that can be turned into products in the future.
The scientific goal of this project is to realize state-of-the-art TİD recognition and human-computer interaction software. The software will make use of transfer learning and domain adaptation methods to enhance its sign vocabulary and establish increased signer independence. The TİD depth video dataset that we aim to collect in this project will be a first in the field in terms of its size and scope, and will serve as a valuable resource to researchers working in this area.
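
The concrete transfer learning pipeline is not fixed by this description; the sketch below only illustrates the general idea under assumed choices (a ResNet-18 backbone pretrained on ImageNet and a hypothetical 50-sign TİD vocabulary): the pretrained layers are frozen as a feature extractor and only a new classification head is trained on the depth data, so that knowledge from a large source domain is transferred to the smaller sign corpus.

```python
# Minimal transfer-learning sketch (hypothetical sizes and layer choices):
# reuse a pretrained frame-level CNN and train only a new TİD sign head.
import torch
import torch.nn as nn
from torchvision import models

NUM_TID_SIGNS = 50  # assumed size of the chosen TİD corpus

# Backbone pretrained on a large source domain (here: ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred layers so only the new head adapts to the sign data.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the target sign vocabulary.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TID_SIGNS)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a batch of depth frames
# (depth maps replicated to 3 channels to match the pretrained input format).
depth_frames = torch.randn(8, 3, 224, 224)   # stand-in for real data
labels = torch.randint(0, NUM_TID_SIGNS, (8,))

optimizer.zero_grad()
logits = backbone(depth_frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```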

Funding Institution: 

SANTEZ

Principal Investigator / Project Partner: 

Lale Akarun

Date: 

2014
