That’s the thing though, right? How do you optimize for two contradictory objectives? The “simple” answer is that you align incentives and avoid the problem by redefining it.
You mean distinct objectives, and the answer is with weights (sliders, usually in the UI). Align what with what? And how is taking these choices out of the hands of the principal and handing them to the agent avoiding the principal-agent problem?!? But in any case, how does this relate to your previous comment?
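For what it's worth, here is a minimal sketch of what "with weights" means in practice: weighted-sum scalarization of two distinct objectives, with the slider exposed to whoever sets the trade-off. The objective functions, candidate options, and weight value below are all made up for illustration.

```python
# Weighted-sum scalarization: collapse two distinct objectives into one
# score by letting the user set the trade-off weight (the "slider").
# Objectives, candidates, and the weight here are purely illustrative;
# in practice you would also normalize the objectives to comparable scales.

def latency(option):   # objective 1: lower is better
    return option["latency_ms"]

def cost(option):      # objective 2: lower is better
    return option["cost_usd"]

def scalarized(option, w):
    # w in [0, 1] is the slider: 1 = only care about latency, 0 = only cost.
    return w * latency(option) + (1 - w) * cost(option)

candidates = [
    {"name": "A", "latency_ms": 120, "cost_usd": 3.0},
    {"name": "B", "latency_ms": 300, "cost_usd": 1.0},
]

w = 0.7  # chosen by the person making the trade-off, not hard-coded by the system
best = min(candidates, key=lambda o: scalarized(o, w))
print(best["name"])
```

The relevant design point in this framing is that w stays with the principal; the agent just minimizes the combined score.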