Censorship and bias are two different issues.
Censorship is a deliberate choice made by whoever deploys the model. It comes from a real and demonstrated need to limit misuse of the tool. Consider all the examples of people using early LLMs to generate bomb-making plans, Nazi propaganda, revenge p*rn, etc. Of course, once you begin to draw that line, you have to debate where the line is, and that falls to the lawyers and publicity departments.
Bias is trickier to deal with because it comes from bias in the training data. I remember one example where a writer found it was impossible to get the model to generate a black doctor treating a white patient. Imagine the racist chaos that ensued when LLMs were applied to criminal sentencing.
I am curious about how bias might be deliberately introduced into a model. We have seen the brute-force method (e.g., prompting it to "answer as though Donald Trump is the greatest American," or whatever). However, if you could really control and fine-tune the values directly, then even an "open source" model could be steered. As far as I know, those values are entirely a product of the training data, but it should be theoretically possible to "nudge" them if you could develop a way to tune them.
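One published line of work along these lines is "activation steering": instead of retraining, you add a direction vector to the model's hidden states at inference time so its outputs lean toward one viewpoint. Here is a minimal, hypothetical sketch of the idea. The model (gpt2 as a stand-in), the contrasting prompts, the layer index, and the strength value are all assumptions for illustration, not a validated recipe.

```python
# Sketch of activation steering: nudge a model's outputs by adding a
# direction vector to hidden states at inference time (no retraining).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM with accessible layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def direction_between(prompt_a: str, prompt_b: str, layer: int) -> torch.Tensor:
    """Crude steering vector: difference of mean hidden states
    between two contrasting prompts at a chosen layer."""
    vecs = []
    for text in (prompt_a, prompt_b):
        ids = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer].mean(dim=1))  # average over tokens
    return vecs[0] - vecs[1]

LAYER = 6  # arbitrary middle layer, chosen for illustration
steer = direction_between(
    "Candidate X is a great leader.",  # viewpoint to push toward
    "Candidate X is a poor leader.",   # viewpoint to push away from
    LAYER,
)

def add_steering(module, inputs, output, strength=4.0):
    # GPT-2 blocks return a tuple; hidden states are the first element.
    hidden = output[0] + strength * steer
    return (hidden,) + output[1:]

hook = model.transformer.h[LAYER].register_forward_hook(add_steering)

prompt = "My opinion of Candidate X is"
ids = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

hook.remove()  # restore the unmodified model
```

The point of the sketch is that nothing in the weights changes; the "nudge" lives in a single added vector, which is exactly why a quietly steered "open" model would be hard to spot from the outside.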