That's definitely what a lot of people think the choice is, but learned helplessness is not the only option. It ignores the fact that for many, many use cases, small special-purpose models will perform as well as massive models. For most of your business use cases you don't need a model that can tell you a joke, write a poem, recommend a recipe involving a specific list of ingredients, and also describe trig identities in the style of Eminem. You need specific performance for a specific set of user stories, and a small model could well deliver that.
These small models are not expensive to train and are (crucially) much cheaper to run on an ongoing basis.
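To make the running-cost claim concrete, here's a back-of-envelope sketch. Every number in it (API price, traffic, training and hosting costs) is an illustrative assumption, not real pricing — the point is only the shape of the comparison: per-token API costs scale with usage forever, while a small self-hosted model is a one-off training cost plus flat hosting.

```python
# Back-of-envelope cost comparison: pay-per-token API vs. a small
# self-hosted model. ALL figures below are illustrative assumptions.

API_COST_PER_1K_TOKENS = 0.002      # assumed API price, $ per 1K tokens
TOKENS_PER_REQUEST = 500            # assumed average tokens per request
REQUESTS_PER_MONTH = 2_000_000      # assumed monthly traffic

SMALL_MODEL_TRAINING = 5_000        # assumed one-off fine-tuning cost, $
SMALL_MODEL_HOSTING = 800           # assumed flat monthly hosting, $

def api_monthly_cost() -> float:
    """Monthly API spend: tokens used, billed per 1K tokens."""
    tokens = TOKENS_PER_REQUEST * REQUESTS_PER_MONTH
    return tokens / 1000 * API_COST_PER_1K_TOKENS

def small_model_cost(months: int) -> float:
    """Total small-model spend: one-off training amortized over the period,
    plus flat hosting each month."""
    return SMALL_MODEL_TRAINING + SMALL_MODEL_HOSTING * months

months = 12
print(f"API, {months} months:         ${api_monthly_cost() * months:,.0f}")
print(f"Small model, {months} months: ${small_model_cost(months):,.0f}")
```

Under these (assumed) numbers the API run costs $24,000 over a year versus $14,600 for the small model, and the gap widens every additional month because the training cost was already paid. Change the constants to your own traffic and the crossover point moves, but the structure of the argument stays the same.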
I suspect small specific purpose models are actually a better idea for quite a lot of use cases.
However you need a bunch more understanding to train and run one.
So I expect OpenAI will continue to be seen as the default for "how to do LLM things" and some people and/or companies who actually know what they're doing will use small models as a competitive advantage.
Or: OpenAI is going to be 'premium mediocre at lots of things but easy to get started with' ... and hopefully that'll be a gateway drug to people who dislike 'throw stuff at an opaque API' doing the learning.
But I don't have -that- much understanding myself, so while this isn't exactly uninformed guesswork, it certainly isn't as well informed as I'd like and people should take my ability to have an opinion only somewhat seriously.
I have a slightly different take. Not all use cases are narrow use cases. OpenAI crushes the broad and/or poorly defined use cases. On those, if you tried to train your own in-house model it would be very expensive and you would produce a significantly inferior model.
I'm not sure how my "quite a lot of use cases" and your "not all use cases are narrow use cases" are meaningfully (even slightly) different.
This isn't a snipe, mind, it's me being unsure if we even disagree, especially given the latter part of your comment seems entirely correct (so far as my limited understanding goes ;).
That's not the part that's different. The part where I feel we perhaps differ is that, rather than being "premium mediocre", I think OpenAI is really excellent where the problem space is very broad or poorly specified. Then we both agree there are better choices where it is narrow and well specified.
> These small models are not expensive to train and are (crucially) much cheaper to run on an ongoing basis.
Open source really is a viable choice.