Andrew Maas

I work on data-centric deep learning approaches as part of the Special Projects Group at Apple. In Spring 2022, I am teaching CS 224S: Spoken Language Processing. Starting May 30, 2022, I am teaching a new four-week, project-driven course, Data-Centric Deep Learning, on the co:rise platform.

Until 2019, I was building a language extraction platform for healthcare as a co-founder of Roam Analytics (acquired by Parexel). From 2015 to 2016, I was a research scientist working on deep learning for spoken language understanding at Semantic Machines (acquired by Microsoft). In 2015, I completed a PhD in Computer Science at Stanford University, advised by Andrew Ng and Dan Jurafsky. My dissertation explored large-scale deep learning for spoken and written language tasks. My research was supported by a National Science Foundation Graduate Research Fellowship. In May 2009, I completed a Bachelor's degree in Computer Science and Cognitive Science at Carnegie Mellon University with a minor in computational neuroscience.

Email: amaas [at] cs . stanford dot edu

Research Interests

I work at the intersection of machine learning, natural language processing, machine perception, and cognitive science. Human perception and learning are remarkable when we consider the complexity of the data entering our senses. Developing algorithms that automatically find structure in audio, text, images, and other data will enable autonomous systems to integrate into everyday life and positively transform the ways we live and work.


My CV (out of date since 2012)

Publications (Google Scholar Listing)

Mike Wu, Jonathan Nafziger, Anthony Scodary, Andrew Maas. (2021). HarperValleyBank: A Domain-Specific Spoken Dialog Corpus. Workshop on Machine Learning in Speech and Language Processing. [ArXiv Pre-print]

Benjamin J. Lengerich, Andrew L. Maas, Christopher Potts. (2018). Retrofitting Distributional Embeddings to Knowledge Graphs with Functional Relations. COLING 2018. [Pre-print] [Code]

Invited lecture. (2018). Natural Language Processing in Healthcare. Stanford University Seminar in AI in Healthcare (CS 522).

John Semerdjian, Konstantinos Lykopoulos, Andrew Maas, Morgan Harrell, Julie Priest, Pedro Eitz-Ferrer, Connor Wyand, Andrew Zolopa. (2018). Supervised Machine Learning to Predict HIV Outcomes Using Electronic Health Record and Insurance Claims Data. AIDS 2018 Conference. [Poster] [Blog post]

Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Daniel Jurafsky, Andrew Y. Ng. (2017). Building DNN acoustic models for large vocabulary speech recognition. Computer Speech & Language, Volume 41, Pages 195-213. [Earlier ArXiv version]

Andrew L. Maas*, Ziang Xie*, Daniel Jurafsky, and Andrew Y. Ng. (2015). Lexicon-Free Conversational Speech Recognition with Neural Networks. NAACL 2015. (*Shared first author)

Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, and Andrew Y. Ng. (2014). First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs. ArXiv:1408.2873 [cs.CL].

Moritz Sudhof, Andrés Gómez Emilsson, Andrew L. Maas, and Christopher Potts. (2014). Sentiment Expression Conditioned by Affective Transitions and Social Forces. Proceedings of the 20th Conference on Knowledge Discovery and Data Mining (KDD 2014).

Andrew Maas, Chris Heather, Chuong (Tom) Do, Relly Brandman, Daphne Koller, and Andrew Y. Ng. (2014). Offering Verified Credentials in Massive Open Online Courses. ACM Ubiquity Symposium: MOOCs and Technology to Advance Learning and Learning Research. January 2014.

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. (2013). Rectifier Nonlinearities Improve Neural Network Acoustic Models. ICML Workshop on Deep Learning for Audio, Speech, and Language Processing (WDLASL 2013).

Andrew L. Maas, Tyler M. O'Neil, Awni Y. Hannun, and Andrew Y. Ng. (2013). Recurrent Neural Network Feature Enhancement: The 2nd CHiME Challenge. The 2nd International Workshop on Machine Listening in Multisource Environments (CHiME 2013). [DRDAE Code]

Andrew L. Maas, Quoc V. Le, Tyler M. O'Neil, Oriol Vinyals, Patrick Nguyen, and Andrew Y. Ng. (2012). Recurrent Neural Networks for Noise Reduction in Robust ASR. Interspeech 2012. [DRDAE Code]

Andrew L. Maas, Stephen D. Miller, Tyler M. O'Neil, Andrew Y. Ng, and Patrick Nguyen. (2012). Word-level Acoustic Modeling with Convolutional Vector Regression. ICML 2012 Representation Learning Workshop.

Andrew L. Maas, Andrew Y. Ng, and Christopher Potts. (2011). Multi-Dimensional Sentiment Analysis with Learned Representations. Technical Report, April 2011. [Supplementary Figure]

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011). [Dataset]

Richard Socher, Andrew Maas, and Christopher D. Manning. (2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. AISTATS 2011. [Project page]

Andrew L. Maas and Andrew Y. Ng. (2010). A Probabilistic Model for Semantic Word Vectors. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. [Code]

Andrew L. Maas and Charles Kemp. (2009). One-Shot Learning with Bayesian Networks. Proceedings of the 31st Annual Meeting of the Cognitive Science Society.

Brian D. Ziebart, Andrew Maas, Anind K. Dey, and J. Andrew Bagnell. (2009). Human Behavior Modeling with Maximum Entropy Inverse Optimal Control. AAAI Spring Symposium on Human Behavior Modeling.

Brian D. Ziebart, Andrew Maas, Anind K. Dey, and J. Andrew Bagnell. (2008). Navigate Like a Cabbie: Probabilistic Reasoning from Observed Context-Aware Behavior. Proceedings of the 10th International Conference on Ubiquitous Computing.

Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. (2008). Maximum Entropy Inverse Reinforcement Learning. Proceedings of the 23rd AAAI Conference on Artificial Intelligence.

Demos

Demo of a Personalized Navigation Device which Predicts User Behavior. Mapprentice Project. 2009.

Dynamically Adjusting Suggested Route as Hazards Change. Mapprentice Project. 2008.

Destination Prediction. Route so far shown in black; log probability of destination shown in varying red intensities. Mapprentice Project. 2008.

Predicting Route During Travel. Destination is known. Mapprentice Project. 2008.