
The best definition I've seen for the success of a piece of music is this: "What emotion is the artist trying to convey, and how well does the work convey it?"

Throughout the composing, arranging, recording, mixing, and mastering process, there are thousands of choices to be made, and the correctness of each choice is entirely linked back to that goal: Does the choice help to convey the emotion, or does it detract from it?

To that end, there is no correct choice, no correct or optimal harmony, no correct note, no correct rhythm, no correct timbre. It's all contextual in relation to conveying the desired emotion.

I'm really not sure how you could ever train a NN to make choices in that regard without first teaching it to understand the impact of its choices on the emotions conveyed.

At best, you may be able to train a NN to reproduce emotionally void works in a particular style, and perhaps assign some emotion through the timbres selected (ambient music comes to mind here). Still, this isn't much of an achievement. You could easily codify the rules taught in Music 101 about harmonization and melody composition and have a computer spit out bland but pleasant excerpts, no deep learning required.
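To illustrate the point about rule-based generation: here's a minimal sketch of the kind of "Music 101 rules" generator described above, with assumed rules (stay in the C-major scale, prefer stepwise motion, begin and cadence on the tonic). It's a toy, not a real composition system.

```python
import random

# Hypothetical rule-based melody generator: no learning involved,
# just a few textbook-style constraints hard-coded by hand.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C-major scale degrees

def generate_melody(length=8, seed=None):
    rng = random.Random(seed)
    idx = 0                      # Rule: begin on the tonic.
    melody = [SCALE[idx]]
    for _ in range(length - 2):
        # Rule: mostly stepwise motion, with an occasional leap of a third.
        step = rng.choice([-1, -1, 1, 1, 2, -2])
        idx = max(0, min(len(SCALE) - 1, idx + step))
        melody.append(SCALE[idx])
    melody.append(SCALE[0])      # Rule: cadence back to the tonic.
    return melody

print(generate_melody(seed=42))
```

The output is grammatical in the Music-101 sense but, per the argument above, conveys nothing in particular: every choice satisfies a rule rather than serving an emotion.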


