I subscribe to the belief that, for a chat model with a fixed set of parameters, creativity will be proportional to its tendency to hallucinate, and inversely proportional to its factual accuracy. I suspect an unaligned model, without RLHF, wouldn't adhere to this.