danielbln

joined 1 year ago
[–] [email protected] 4 points 8 months ago (1 children)

Microsoft's Phi model was largely trained on synthetic data derived from GPT-4.

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (1 children)

Are you asking why stock of a single company is different from "stock" of the richest country and only superpower on earth?

Also, money is liquid, can be spent immediately. Stock is not liquid, it has to be traded, vested, etc. and given enough stock will tank yje value if too much of it is liquified at once.

[–] [email protected] 10 points 8 months ago (6 children)

I'm German you "hit" a decision.

[–] [email protected] 11 points 9 months ago

They run all of Gamepass as well as all of Sony's PS+ on Azure, I think they'll be fine.

[–] [email protected] 6 points 9 months ago

It's so, so, so much better. GenAI is actually useful, crypto is gambling pretending to be a solution in search of a problem.

[–] [email protected] 18 points 9 months ago (4 children)

In fact, the original script of The Matrix had the machines harvest humans to be used as ultra efficient compute nodes. Executive meddling led to the dumb battery idea .

[–] [email protected] 17 points 9 months ago

Eh, that's not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it some, but we're talking about usecase specific bots, e.g. for customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and the ways it can respond are still very good.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago)

Depends on the model/provider. If you're running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.

[–] [email protected] 100 points 9 months ago (32 children)

I've implemented a few of these and that's about the most lazy implementation possible. That system prompt must be 4 words and a crayon drawing. No jailbreak protection, no conversation alignment, no blocking of conversation atypical requests? Amateur hour, but I bet someone got paid.

[–] [email protected] 8 points 9 months ago

Remember when Internet Explorer/Edge was only used to download Chrome. Well, ironically these days I only use Chrome to make video calls.

[–] [email protected] 34 points 9 months ago (4 children)

I mean, I like a good Google hate train as much as the next guy, but that's kind of a legitimate thing to want.

[–] [email protected] 5 points 10 months ago

Also, one of these is a mere update hugging the tech plateau, the other is a disruptive hockey stick.

view more: next ›