I think that's wishful thinking. Just because someone is a "serious" researcher (careful, that sounds like a No True Scotsman coming up) doesn't mean they care about AI guardrails or safety, or that they think our current administration is immoral.
I don't - idealistic motives seem to be common among leading AI developers and researchers. It's entirely realistic that Anthropic sticking to its principles and taking a hit for it will give it an edge in recruiting those idealistic types.
I've hung out with this crowd, and they are very idealistic: they care deeply about guardrails and safety, and they definitely find the idea of handing the current administration AGI/ASI repulsive.