Different people respond to and query for music in vastly different ways: one listener's favorite song may be barely recognizable to another. This leads us to ask why and how we attach preference and memory to musical information. We are developing an architecture to model a human listener's individual representation of a piece of music. The model can then be applied to common music retrieval tasks such as recommendation and search, which in turn could leverage the immediate delivery of music over networks while helping users discover new and varied music.