I am part of the team at Comcast Applied AI Research (Washington DC) that develops NLP models for X1, the voice-enabled entertainment operating system. As a senior machine learning researcher, I work on deep neural network based solutions to various problems in Natural Language Processing (NLP). One problem I worked on recently is recognizing human intents from multilingual voice queries made through the Xfinity Voice Remote control.
Over the last 10 years, I have applied machine learning across several academic research domains, e.g. affective computing, multimodal signal processing, accessibility, human-computer interaction, and behavioral prediction. I developed a causal model guided deep learning architecture for bias-free prediction of TED Talk ratings. I also invented an unsupervised algorithm to detect repetitive body movements (mannerisms) from MoCap signals. I received my Ph.D. from the University of Rochester, where my academic advisor was M. Ehsan Hoque. During my time at UofR, I had the opportunity to work with several great researchers and reputed professors, including Daniel Gildea, Ji Liu, and Gonzalo Mateos.

Md. Iftekhar Tanveer, http://www.itanveer.com/