Other than for research purposes (extending the capabilities of AI), I 100% agree with your sentiment regarding the relative worthlessness of AI composition.
As an aside, it is my understanding that there is no measure of something like consonance/dissonance in complex musical forms. There are models of dissonance in 2- and 3-note chords, but even these have gaps. I'm suggesting that the research on "what sounds good to people" is surprisingly immature in contemporary science. This is surprising because that question played a major role in the history of science. For instance, many of the very first experiments conducted at the Royal Society (c. 1660s) investigated harmony, and arguably the first scientific experiment was designed to evaluate a mathematical model of music (viz., the fifth-century BC Pythagoreans demonstrating their integer-ratio theory of harmony by casting bronze chimes at those ratios).
> You won't get a grasp of either by throwing a corpus into a bucket and fishing things out of it with statistics.
We can agree to disagree on that. Or, in any case, it's an empirical question. If we had a large corpus of music labeled with annotated "feels", I think we'd learn an immense amount about how music evokes feelings. (I'm not sure I feel comfortable with the term "semantics" applied to music.)
Regarding measures of dissonance, I'm only familiar with things like roughness measures and harmonic entropy. If you know of others, I'd appreciate you sharing.
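For concreteness, the roughness measures I have in mind look something like Sethares' parameterization of the Plomp-Levelt curve: sum the pairwise "beating" contribution of every pair of partials. A minimal sketch (constants are Sethares' published fit; the function names are mine):

```python
import itertools
import math

def pair_roughness(f1, a1, f2, a2):
    # Sethares' fit of the Plomp-Levelt sensory-dissonance curve
    # for two pure tones at frequencies f1, f2 with amplitudes a1, a2.
    b1, b2 = 3.5, 5.75
    # Scale factor places the roughness peak relative to critical bandwidth.
    s = 0.24 / (0.0207 * min(f1, f2) + 18.96)
    df = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * df) - math.exp(-b2 * s * df))

def dissonance(partials):
    # partials: list of (frequency_hz, amplitude) tuples; total roughness
    # is the sum over all pairs of partials.
    return sum(pair_roughness(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in itertools.combinations(partials, 2))
```

On this measure a minor second (440 Hz vs 466 Hz) scores far rougher than a perfect fifth (440 Hz vs 660 Hz), matching intuition for pure tones. The gaps you mention show up immediately: the model says nothing about voice leading, register, or context, which is exactly the complaint about applying it to whole pieces.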
We don't have a large corpus of music labelled "feels" because "feels" and "evocations" are not atomic objects.
It's not obvious they're objects at all.
Debussy's La Mer is an excellent evocation of the sea, but you're not going to learn anything useful by throwing it into a bucket with a sea shanty. Or with Britten's Four Sea Interludes.
The absolute best you'll get from this approach is a list of dismally on-the-nose reified cliches - like the music editor who dubs on some accordion music when a thriller has a scene in Paris.
It's also why concepts like harmonic entropy don't really help. You can't parse "dissonance" scientifically in that way, because the measure isn't the amount of dissonance in a chord on some arbitrary scale, even if that measure happens to have multiple dimensions.
It's how the dissonance is used in the context in which it appears. There are certainly loose mappings to information density - too little is bad, too much is also bad - but it's not a very well explored area, and composers work inside it intuitively.
So there is no fitness/winning function you can train your dataset on. Superficial similarity to a corpus is exactly that, and misses the point of what composition is.
Music has invariant temporal forms that reliably communicate feelings, based on the context. A musical cadence versus none, a change in rhythm, lingering on a note... Because these forms are shared, they tend to evoke shared feelings. When two people are open to music, with similar experience, they roughly feel the same thing. Perhaps not exactly, but music as a technology for exchanging non-verbal experiences, feels, is surprisingly consistent — why else does film music have such a common effect on the emotional vibe of a scene?
In the future, if we could gather and annotate people's feelings in response to musical forms (including, but going far beyond, consonance and dissonance), I'm sure this would enable an AI-based model of the emotional resonances of various musical elements (and their multi-level representations in the neural network). Then, compositional models could be trained using real-time aesthetic rating devices (e.g., reporting on pleasure/discomfort and interest/boredom).
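To make the "real-time aesthetic rating" idea a bit more concrete, here's a toy sketch (entirely my own illustration, not an existing system) of turning pairwise listener judgments into a scalar training signal via a Bradley-Terry preference model — the same basic trick used to build reward models from human preferences:

```python
import math

def fit_preference_scores(comparisons, n_items, lr=0.05, epochs=50):
    """Fit Bradley-Terry appeal scores from pairwise listener judgments.

    comparisons: list of (winner, loser) index pairs, e.g. "clip 3 felt
    more hopeful than clip 7". Returns one latent score per clip, which
    could then serve as a reward signal for a compositional model.
    """
    scores = [0.0] * n_items
    for _ in range(epochs):
        for winner, loser in comparisons:
            # Model: P(winner preferred) = sigmoid(score diff)
            p = 1.0 / (1.0 + math.exp(-(scores[winner] - scores[loser])))
            # Gradient ascent on the log-likelihood pushes the scores apart
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    return scores
```

The hard part, of course, isn't the fitting — it's whether "felt more hopeful" is stable enough across listeners and contexts to be worth fitting at all, which is the empirical question above.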
Such a system would hypothetically be able to manipulate emotions, at least to the extent that a composer can today.
Is that useful in anything but a creepy way? Well, maybe you could apply filters to existing compositions to change their vibe — like how the "humanize" function in Logic Pro gives a looser feel. You might be able to apply filters that make a song feel more longing or more hopeful.
So, I'm surprised that there isn't more interest today. That said, there was a recent breakthrough in the science of harmony: an integrated model of vertical and horizontal harmony. https://spj.sciencemag.org/journals/research/2019/2369041/