"I will save this for the future, when people complain about Chinese open models and tell me: But this Chinese LLM doesn't respond to question about Tianmen square."
But Chinese model releases are treated unfairly all the time: every new model gets judged as if its answer about Tiananmen indicates whether we can use it for coding tasks.
We should understand their situation and not judge them for an obvious political constraint. It's easy to judge the people working hard over there, but they are conforming to the political situation because they don't want to kill their company.
I think more people should spend time talking about this with American models, yeah. If you're interested in that, then maybe that can be you. It doesn't have to be the exact same people talking about everything; that's the nice thing about forums. Find a topic that American models consistently lie about or freeze on that Chinese models don't, and post about it.
I don't want to criticise models for things they weren't trained on, or for constraints the companies operate under. None of these companies has claimed their models don't hallucinate or always get the facts right.
For example,
* I am not expecting Gemini 3 Flash to cure cancer, and I don't constantly criticise them for failing to
* I am not expecting Mistral to outcompete OpenAI/Claude with each release, because talent density and capital are obviously on a different level on OpenAI's side
* I am not expecting GPT 5.3 to say anytime soon: Yes, Israel committed genocide and politicians covered it up
We should set expectations properly and not complain about Tiananmen every time a Chinese company releases a model. We should learn to appreciate that they release these models at all, that they create very good competition, and that they are very hard-working people.
I think most people feel differently about an emergent failure in a model vs one that's been deliberately engineered in for ideological reasons.
It's not that Chinese models just happen to refuse to talk about the topic; it trips guardrails that have been intentionally placed there, just as Claude has guardrails against telling you how to make sarin gas.
E.g. ChatGPT used to have an issue where it steadfastly refused to make any "political" judgments, which led it to genocide denial or minimization: asked "could genocide be justifiable", it would sometimes refuse to say "no." Maybe it still does this, I haven't checked, but it seemed very clearly a product of being strongly biased against being "political", which is itself an ideology and worth talking about.
"I will save this for the future, when people complain about Chinese open models and tell me: But this Chinese LLM doesn't respond to question about Tianmen square."
Please stop using the Tiananmen question as an example to evaluate a company or its models: https://news.ycombinator.com/item?id=46779809