That wouldn't guarantee correct answers.
It's arguably more dangerous if ChatGPT gives mostly sane, specific medical advice, because that leads people to put more trust in it than they should.
Generally a good rule, but Signal did develop its own encryption, the Signal Protocol, and it was so good it became the industry standard (WhatsApp and others adopted it).