This is totally wrong. Fine-tuning techniques, instruction tuning and RLHF in particular, were a prerequisite for turning academically impressive but boring and useless base models into compelling user products like ChatGPT. Fine-tuning complements fundamental model improvements; it doesn't substitute for them.