LLMs are only becoming more politically correct over time. I'd expect any LLM that isn't uncensored to respond that the request is inappropriate, in whatever way it chooses to phrase it. None of those things presents a real conflict on its own, but once you introduce topics whose training data mostly treats them as contradictory, the LLM will struggle. You can think deeply about why topics might contradict each other; LLMs can't. LLMs run on reinforced neural networks, and when that network's connections strongly route one topic away from another, forcing the two together causes issues.
I haven't, but if you want, take just that prompt, give it to GPT-3.5, and see what it does.
That's correct, I agree with you.
That advice assumes knowledge of how batteries work. Telling people to keep a battery pack and their phone at 100% could leave them worse off than if they'd just used the battery manager to stop charging their phone at 85%. 99% of people will charge their battery pack until it's full, stash it wherever they keep their emergency gear, and find a dead pack when they actually need it.