Nope. It can't learn anything beyond its training data, only within the very narrow context window.
Any novel connections come from randomness, hence hallucinations rather than useful connections grounded in background knowledge of the systems or concepts involved.
As for creativity, see my previous point. If I spit out words that merely go next to each other, that isn't creativity. Creativity implies a goal or purpose (or sometimes chance), combined with systematic thinking and an understanding of the world.
I was considering refuting this point by point, but it seems your mind is already made up.
I feel that many people who deny the current utility and abilities of large language models will continue to do so long after those models have exceeded human intelligence, because the perception that they are fundamentally limited, regardless of whether they actually are or whether the criticisms make any sense, is necessary for some load-bearing part of their sanity.
LLMs can do all those things.