The mess is in the "V". RFC 4180 leaves value interpretation to the application. It doesn't tell you, for example, when a string like 12345 should be interpreted as a number. So we end up sending numbers and dates and other meaningful values into the ether, hoping that the receiving application understands what we meant to convey.
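To make the ambiguity concrete, here's a minimal sketch using Python's stdlib `csv` module (the data is made up): every field arrives as text, and any typing is the consumer's guess.

```python
import csv
import io

# A hypothetical CSV payload: is "12345" a number, a zip code, an ID?
raw = "id,joined\n12345,2024-01-02\n"

row = next(csv.DictReader(io.StringIO(raw)))

# RFC 4180 says nothing about types, so everything is a string.
assert row["id"] == "12345"            # text, not the integer 12345
assert row["joined"] == "2024-01-02"   # a date, or just ten characters?
```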
XLSX has a cell type for IEEE754 doubles, so there is no such ambiguity. It also has a special date type!
That's still not a failing of CSV. If knowing whether a V is an IEEE754 double is critical to your application you shouldn't be using CSV. You wouldn't say nails are a mess because sometimes you need screws.
Knowing whether a value is a number or text or a date is critical to myriad applications, and the CSV specs give no guidance on how to interpret the actual values as numbers or text or dates. You don't have control over both endpoints in most cases, especially when you accept files from clients. Furthermore, CSV files mix presentation and values (is `1.23%` a string or a number? And if it's a number, which one?)
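The `1.23%` case can be shown in a few lines of stdlib Python (the payload is invented for illustration): CSV hands you the raw text, and both numeric readings are equally plausible guesses by the application.

```python
import csv
import io

raw = "rate\n1.23%\n"
cell = list(csv.reader(io.StringIO(raw)))[1][0]

assert cell == "1.23%"  # just seven characters of text

# Is the number 1.23 or 0.0123? The format cannot say; each
# interpretation below is the application's guess, not CSV's:
as_percent_points = float(cell.rstrip("%"))        # reads it as 1.23
as_fraction = float(cell.rstrip("%")) / 100        # reads it as 0.0123
```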
XLSX separates the value from presentation and gives every value a clear type. If a value is a number, there's a concrete numeric value that is stored separately from the number format. That way there is no guesswork involved, you can figure out exactly what the value is with zero magic.
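Roughly, that separation looks like this inside the workbook (a simplified, hand-abridged fragment; real files carry more attributes, and the `s` index points into the style table in styles.xml):

```xml
<!-- xl/worksheets/sheet1.xml: the cell stores only the raw number -->
<c r="A1" s="1"><v>0.0123</v></c>

<!-- xl/styles.xml: the presentation ("0.00%") is defined separately -->
<numFmt numFmtId="164" formatCode="0.00%"/>
```

The consumer reads 0.0123 as the value; how it's displayed is a styling concern.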
You're missing the point. You're just listing use cases for which CSV does not fit.
CSV's strengths are simplicity and ubiquity. It existed long before Excel and will probably outlive it. You can't say it's a mess because it doesn't help you parse "1.23%" reliably and consistently -- that's not CSV's job. To try another analogy: you can't say square pegs are poorly designed because you have round holes.
CSV is good because it's simple (easy to read, easy to implement, hard to break) and it's ubiquitous. If you're just trying to dump data without knowing/caring what the consuming application will be it's a fine choice. By using XLSX you've both lost that ubiquity and introduced a world of headache -- it's a lot easier to break XLSX export than CSV export.
For a concrete example: CSV is best when you want to release the data from your system but really have no idea what the client wants to do. Maybe they just want to curl it and display it, maybe they want to process it with R. It's a very easy way to say "here's your data, my job is done".
But you argued before that CSV is not meant for at least one of your examples. If I want to process it in R, I need to know the data types of the columns. And this is indeed a regular pain point in this workflow.
For displaying it _might_ work, if you just want to dump an ugly mess. If you don't want to do that, you need to know the type of the values, so you can e.g. right-align numbers in columns.
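Even that alignment decision forces type-guessing on the consumer. A sketch of the kind of heuristic you end up writing (`looks_numeric` is a made-up helper, not part of any library):

```python
def looks_numeric(values):
    """Guess whether a column holds numbers -- pure heuristic,
    since CSV itself carries no type information."""
    try:
        for v in values:
            float(v)
        return True
    except ValueError:
        return False

col = ["12345", "678", "9.5"]
# Right-align if the guess says "numbers", left-align otherwise.
align = ">" if looks_numeric(col) else "<"
print(f"{col[0]:{align}10}")
```

A single stray value like "N/A" flips the whole column to text, which is exactly the fragility being complained about.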
So CSV clearly fails in your examples. You might argue that we don't have a better format with support in so many applications. That might be true, but it doesn't make CSV good, nor does it make any of these failings not failings.
My take: include a schema whenever it's justified and practical. IMO even JSON is a (very limited, "names only, no types beyond primitives") kind of schema. If it becomes impractical for something you're doing, stop and switch to something less redundant.
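The "very limited schema" point can be seen in a quick round trip with stdlib `json` (sample record invented): field names and primitive number types survive, but anything richer, like a date, collapses to a string.

```python
import json

record = {"id": 12345, "rate": 0.0123, "joined": "2024-01-02"}
round_tripped = json.loads(json.dumps(record))

# Numbers keep their types across the wire...
assert isinstance(round_tripped["id"], int)
assert isinstance(round_tripped["rate"], float)
# ...but the date is still just a string: names and a few
# primitive types, and nothing more.
assert isinstance(round_tripped["joined"], str)
```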