Anthropic was founded by former OpenAI employees who left over concerns about AI safety. Their big thing is "constitutional AI," which, as I understand it, means the model is trained against a written set of principles (the "constitution") and taught to critique and revise its own answers against them, rather than just learning from human feedback. The idea is that this makes it safer and harder to jailbreak.
In terms of performance, it's better than the free ChatGPT (GPT-3.5) but not as good as GPT-4. My wife has come to prefer it for being friendlier and more helpful; I still prefer GPT-4 on ChatGPT. I'll also note that Claude seems to refuse requests far more often, which is in line with its "safety" features. For example, a few weeks ago I told Claude my name was Matt Gaetz and asked it to write me a resolution removing the Speaker of the House. Claude refused, but offered to help me and Kevin McCarthy work through our differences. I think that's pretty illustrative of its play-nice approach.
Also, Claude has a much bigger context window, so you can upload bigger files to work with than you can on ChatGPT. Just today Anthropic announced that the Pro plan gets you a 200k-token context window, equivalent to about 500 pages, which beats the yet-to-be-released GPT-4 Turbo and its 128k context window (roughly 300 pages). I assume the free version of Claude has a much smaller context window, but it's probably still bigger than free ChatGPT's. Claude also just got the ability to search the web and access some other tools today, but that's Pro only.
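If you're wondering where those page counts come from, here's a quick back-of-envelope sketch. The conversion factors (about 0.75 words per token and about 300 words per page) are common rules of thumb I'm assuming, not official figures from Anthropic or OpenAI:

```python
# Rough estimate: how many "pages" of prose fit in a context window.
# Assumptions (mine): ~0.75 English words per token, ~300 words per page,
# i.e. roughly 400 tokens per page.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> float:
    """Estimate pages of ordinary prose for a given token budget."""
    words = tokens * WORDS_PER_TOKEN
    return words / WORDS_PER_PAGE

for name, ctx in [("Claude Pro", 200_000), ("GPT-4 Turbo", 128_000)]:
    print(f"{name}: {ctx:,} tokens ~= {tokens_to_pages(ctx):.0f} pages")
    # Claude Pro: 200,000 tokens ~= 500 pages
    # GPT-4 Turbo: 128,000 tokens ~= 320 pages
```

Actual counts vary a lot with formatting and the tokenizer, so treat these as ballpark numbers, not exact ones.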