Chess is a great analogy. Now, thanks to powerful chess engines, folks can play against a “master”-level opponent whenever they like, with or without an internet connection or an official rating.
Being able to jam with an AI group of your favorite jazz greats seems like a great stand-in for when you can’t get a real trio or quartet together, and if it were sufficiently good at attending to the ideas of the “live player,” it would probably “raise all boats,” making for better composers overall.
I agree that what you propose would be better than nothing, but it will still be lacking.
Assuming the software can get there, we also need to add more sensors. The head nods, eye contact, subtle facial gestures, and other body language that are such an important part of a collaborative jazz ensemble will have to be sensed. And even if the computer can be enhanced with sensors, you've still got to communicate from the computer(s) back to the other players. So, either you need advanced robotics, or ensembles must adopt other signaling schemes that the computer player(s) can engage with. It is not a simple problem: even if AI can be made "creative" and "musical," that is not the whole story.
I don’t think “equal or better than human” is the bar we’re trying to beat.
I was more responding to both the claim that “AI will kill human composition” (it won’t) and the claim that “talking about computer-based music composition is framing the question wrong, since it doesn’t provide value” (it does).