Location
MIT Media Lab, E14-633
Description
The Echo Nest has collected and analyzed data about over 2 million artists and 35 million songs. We've crawled billions of documents, parsed every phrase written about any band you can think of, and can tell you the pitch and timbre of every note in almost every song ever recorded. All of that data, in one way or another, invisibly shapes the music experience of over 150 million people every month through our friends and partners at MTV, Clear Channel, Rdio, Spotify, MOG, eMusic, and many more. We've done it using scalable approaches to machine learning, natural language processing, and information retrieval, and I'll describe the particular challenges of doing quality machine listening at scale.
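The per-segment pitch and timbre vectors mentioned above are the same representation released in the Million Song Dataset (listed under Readings). As a minimal sketch, assuming the standard MSD HDF5 layout and a hypothetical local track file, they can be inspected like this:

    # Read one track's per-segment chroma and timbre vectors from an
    # MSD-style HDF5 file (file name here is hypothetical).
    import h5py

    with h5py.File("TRAXLZU12903D05F94.h5", "r") as f:
        pitches = f["/analysis/segments_pitches"][:]  # (n_segments, 12) chroma
        timbre = f["/analysis/segments_timbre"][:]    # (n_segments, 12) timbre
        starts = f["/analysis/segments_start"][:]     # segment onsets, seconds
        print(len(starts), "segments; first chroma vector:", pitches[0])

Each segment is roughly one note or onset, so these two 12-dimensional vectors per segment are the "pitch and timbre of every note" the abstract refers to.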
Readings
Music Retrieval from Everything: http://dl.acm.org/citation.cfm?id=1743428
The Million Song Dataset: http://www.columbia.edu/~tb2332/Papers/ismir11.pdf
Echoprint – An Open Music Identification Service: http://static.echonest.com/echoprint_ismir.pdf
Biography
Brian Whitman (The Echo Nest) teaches computers how to make, listen to, and read about music. He received his doctorate from the Machine Listening group at MIT's Media Lab in 2005 and his master's in computer science from Columbia University's Natural Language Processing group in 2000. As the co-founder and CTO of the Echo Nest Corporation, Whitman architects an open platform with billions of data points about the world of music: from the listeners to the musicians to the sounds within the songs.