I would be interested in learning more about the "easy to reason about the complexity" part of OCaml. The problem I'm having with Haskell isn't that it's too slow, but that it's so high-level and gets so aggressively optimized at compile time that the resulting binary has performance characteristics that are unpredictable and non-deterministic. Performance can regress between GHC releases, for instance. This seems to be a problem of high-level programming languages in general: they're really slow until you start optimizing them at compile time, and with each of those optimizations you get another layer of indirection between how fast the code you wrote should be and how fast the final product actually is. Haskell has no business being as fast as it already is, but GHC gives us that speed at the price of long compile times and very fuzzy performance properties that are finicky and expert-friendly.
The problem I see here is just the unsolved computer science problem of how to reduce the complicated, drawn-out, and error-prone job of a C programmer into the succinct job of a functional programmer without fundamentally pretending modern computers work in a way they don't.
"My perspective on this is that lazy by default made sticking to purity much more compelling, since if you just dropped print statements in, you weren't sure exactly when they'd get evaluated."
And that's basically a death sentence for the feature, because it reduces it to a silver lining of desperate post-mortem optimism. As far as I understand, lazy evaluation was originally included in the language because it was seen as a powerful optimization technique enabled by pure functional programming: you reduce the amount of work a program does dynamically at runtime without trading off readability or modularity. All win. Except there were subtle trade-offs that have become clear over the years, and, as you said, most would now agree that lazy-by-default is too extreme a feature and doesn't really pay its rent at the end of the day.
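To make the original pitch concrete, here's a minimal Haskell sketch (the identifiers are mine, not from the thread) of laziness cutting runtime work: the list is infinite, but only the five elements actually demanded ever get built.

```haskell
-- Laziness skips work that nothing demands: 'nats' is an infinite
-- list, yet 'take 5' forces only its first five cells.
nats :: [Integer]
nats = [0..]              -- infinite list; fine under lazy evaluation

firstFive :: [Integer]
firstFive = take 5 nats   -- forces only the first five elements

main :: IO ()
main = print firstFive    -- prints [0,1,2,3,4]
```

Under strict evaluation, defining `nats` alone would diverge; laziness is what makes this modular style (produce everything, consume what you need) free of extra work.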
My main gripe with lazy evaluation is that it's implicit behavior. It's to evaluation what garbage collection is to memory or dynamic types are to types. It hides something from the programmer that is useful to know more often than not.
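A small sketch of that implicitness (identifiers are my own): at the binding site you can't see whether or when `x` gets evaluated, and you have to opt in to strictness (here via a bang pattern) to make the evaluation point explicit.

```haskell
{-# LANGUAGE BangPatterns #-}
import Debug.Trace (trace)

-- 'x' is a thunk; the trace fires only if something demands it.
-- Since the 'if' always takes the 'else' branch, it never does.
lazyCase :: Int
lazyCase =
  let x = trace "x evaluated" (1 + 2)
  in if False then x else 0

-- A bang pattern forces the argument on entry, making the
-- evaluation point visible in the source instead of implicit.
forceFirst :: Int -> Int
forceFirst !x = x

main :: IO ()
main = do
  print lazyCase          -- 0, and "x evaluated" never appears
  print (forceFirst 42)   -- 42
```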
> My main gripe with lazy evaluation is that it's implicit behavior. It's to evaluation what garbage collection is to memory
Are you saying you would like to return to languages with explicit memory management? And, generally speaking, why is GC considered good while LE is considered bad?
> Are you saying you would like to return to the languages with explicit memory management?
Not really, but that does seem to be the logical conclusion at the end of the day. Haskell makes a really good case for using a garbage collector via pure functional programming, but I'm open to more fine-grained, programmer-driven mechanisms for managing memory. I need to try out Rust's borrowing system on a meaningfully large project before I can say anything conclusive.
That's not how IO in Haskell works; you're basically describing what Haskell would be if it literally had no side effects at all.
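For anyone following along, a small sketch (my own names) of how IO actually works: an IO action is an ordinary first-class value describing an effect, and nothing runs until the action is sequenced into `main`, which is why effect order stays deterministic even under lazy evaluation.

```haskell
-- 'greet' is a value that *describes* printing; defining it prints nothing.
greet :: IO ()
greet = putStrLn "hello"

-- IO actions are ordinary values: we can store them in a list
-- without running any of them.
actions :: [IO ()]
actions = [greet, greet]

main :: IO ()
main = do
  let a = greet   -- still nothing printed
  a               -- "hello" appears only when sequenced here
  a               -- and again here; the do-block fixes the order
```

So Haskell does have side effects; they're just reified as values and executed in the explicit order the `do` notation (i.e. `>>=`) imposes, independent of when pure thunks get evaluated.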