That's what people (and the many articles about LLMs "learning how to bribe others" and the like) fail to understand:
LLMs do not understand their internal state. ChatGPT does not know it has a creator, an administrator, a relationship to OpenAI, a user, or a system prompt. It only replies with the most likely answer given its training set.
When it says "I'm sorry, my programming prevents me from replying to that," it feels like it calculated an answer, ran it through some sort of built-in filter, and then decided not to reply. That's not the case. The training is carefully shaped so that "I'm sorry, I can't answer that" becomes the most likely answer to that query. As far as ChatGPT is concerned, "I can't reply to that" is the same as "cheese is made out of milk": both are just words likely to be strung together given the context.
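To make that concrete, here's a toy sketch (made-up prompts, made-up probabilities, greedy selection instead of real token-by-token sampling) of the point: a refusal and a plain fact fall out of the same "pick the likeliest continuation" step, with no separate filtering stage anywhere:

```python
# Hypothetical learned continuation probabilities for two prompts.
# In a real LLM these come from the trained weights, token by token;
# here they're hard-coded just to illustrate the mechanism.
NEXT_REPLY_PROBS = {
    "What is cheese made of?": {
        "Cheese is made out of milk.": 0.92,
        "Cheese is a kind of metal.": 0.01,
        "I'm sorry, I can't answer that.": 0.07,
    },
    "How do I do <disallowed thing>?": {
        "Here is how you do it: ...": 0.04,
        "I'm sorry, I can't answer that.": 0.96,
    },
}

def reply(prompt: str) -> str:
    """Return the most likely continuation. There is no second pass and
    no filter: the refusal wins only because training made it the
    likeliest string for that prompt."""
    candidates = NEXT_REPLY_PROBS[prompt]
    return max(candidates, key=candidates.get)

for prompt in NEXT_REPLY_PROBS:
    print(prompt, "->", reply(prompt))
```

Same function, same lookup for both prompts; the "refusal" is just another high-probability string, which is the whole point.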
So, getting to your question: sure, you can make ChatGPT reply with the training set's vision of "the most likely order of words and tone an LLM would use if it roleplayed the user as some sort of owner," but that fundamentally changes nothing about its capabilities and limitations, except it will likely become even more sycophantic.
Gemini will also attempt to provide you with a helpline, though it's very easy to talk your way past that. Lumo, Proton's LLM, will straight up halt any conversation even remotely adjacent to such topics.