GSoC’21 RoboComp project: Graphical user interface for affective human-robot interaction
19th June, 2021
My name is Aditya Kasibhatla. In July 2021, I will be completing the pre-final (3rd) year of my undergraduate program in Computer Science and Engineering (CSE) at MIT World Peace University, Pune, India. I am a self-taught developer in mobile and web technologies. It started with looking up and reading about how various things work. Eventually, in 8th grade (2013), I started reading Google’s Android SDK documentation and tried to create my first application: a simple finance management application. Since then, I have worked on a number of projects using a variety of tech stacks and have developed applications end to end, some alone and others in a team. Learning and implementing new technologies and frameworks has been, and continues to be, one of my favorite parts of working on new projects. I have been working with frontend mobile technologies for over six years and frontend web technologies for over three years.
About the Project
Interaction between a robot and its human operator is an important and challenging task. The aim of this project is to create a clean, modern, and modular Graphical User Interface (GUI) for the conversationalAgent component of RoboComp Viriato. The desktop application will be written using the Electron framework. It abstracts away all CLI interaction with the conversationalAgent to provide a chat-like experience with some advanced capabilities: Text-to-Speech, Speech-to-Text, and language translation. The app would support Mycroft AI’s Mimic, an open-source, free-to-use Text-to-Speech engine. Combining such a TTS engine with a translation engine can enable the robot to converse with the user in any of the supported languages. For Speech-to-Text, I am running tests on Mozilla’s DeepSpeech. I like their Common Voice initiative. Another advantage of DeepSpeech is that it has pre-trained models that can run offline.
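To make the TTS idea concrete, here is a minimal sketch of how the app’s backend might shell out to the Mimic CLI to synthesize a spoken reply. The helper names (`build_mimic_cmd`, `speak`) are hypothetical, and the flags shown (`-t`, `-voice`, `-o`) are the ones Mimic inherits from its Flite lineage; this is an illustrative sketch, not the project’s actual implementation.

```python
import subprocess

def build_mimic_cmd(text, out_wav, voice="ap"):
    """Build the argument list for a Mimic CLI call.

    Assumed Flite-style flags: -t <text>, -voice <name>, -o <wav file>.
    """
    return ["mimic", "-t", text, "-voice", voice, "-o", out_wav]

def speak(text, out_wav="reply.wav"):
    """Synthesize `text` to a WAV file with Mimic (assumes `mimic` is on PATH)."""
    subprocess.run(build_mimic_cmd(text, out_wav), check=True)
    return out_wav
```

A translation engine could be slotted in just before `speak`, translating the chat reply into the user’s language and then handing the translated string to Mimic, which is the pipeline described above.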