Jamie

joined 1 year ago
[–] [email protected] 34 points 11 months ago (1 children)

Escalate to management as quickly as possible so you're not just annoying some poor front desk worker that had nothing to do with it.

[–] [email protected] 3 points 11 months ago (4 children)

They're probably referring to quantum entanglement, which affects the entangled particles instantly.

[–] [email protected] 3 points 11 months ago (2 children)

By the time we invent any sort of lightspeed travel, we'll have long since conquered quantum entanglement. If you had a signal carried over properly quantum-entangled technology, it would transfer instantaneously.

[–] [email protected] 4 points 11 months ago

It's already beatable right now; there are services in third-world countries where people get paid fractions of a penny to solve captchas for machines.

[–] [email protected] 3 points 11 months ago (1 children)

Fools are easily parted from their money, and I typically view a lot of misinformation as a way to seek out those exact fools. Not all of it, but a lot.

Take a bunch of crazy people that polite society doesn't agree with, make them feel seen, and they throw money at you.

[–] [email protected] 3 points 11 months ago (1 children)

Interesting to find a RyanF9 video here and not in a motorcycle community. But yeah, probably most people here don't have much interest in Gore-Tex unless they ride or do other outdoorsy things.

[–] [email protected] 2 points 11 months ago (2 children)

I would say of the services to give money to, Discord is on the lesser evil side.

Sure, they lock a bunch of stuff behind Nitro, but they're at least only giving people ads for their own stuff and not scams or dong pills. Because if nobody paid for anything, that money would have to come from somewhere.

[–] [email protected] 5 points 11 months ago (1 children)

The only thing more eco-friendly than buying an eco-friendly printer, is to not buy a new printer at all.

Both of my local libraries offer printing at $0.25 a page. For photos, I just go to the photo lab at the store and print them there.

Both are cheaper than owning a printer unless you're doing a ton of it, and in the former case, I get to support a library just a little bit.
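For a rough sense of the break-even point, here's a quick sketch with hypothetical numbers (a $100 printer, roughly $0.05/page in ink and paper at home, versus $0.25/page at the library):

```python
printer_cost = 100.00        # hypothetical up-front printer price
cost_per_page_home = 0.05    # hypothetical ink + paper cost per page
cost_per_page_library = 0.25 # the library's per-page rate

# Pages you'd need to print before owning the printer pays for itself.
breakeven = printer_cost / (cost_per_page_library - cost_per_page_home)
print(breakeven)  # → 500.0 pages
```

Under those assumptions you'd need about 500 pages before the printer wins, and that's before counting replacement cartridges drying out between uses.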

[–] [email protected] 2 points 11 months ago (2 children)

Even though the limitation on TPM is completely arbitrary, anyone sufficiently savvy can bypass it in a few ways.

But most people aren't that savvy, so I guess the Linux crowd will embrace all those computers with open arms.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

Speaking for LLMs: given that they operate on a next-token basis, there will always be some statistical likelihood of spitting out original training data that can't be avoided. The usual counter-argument is that, in theory, the odds of a particular piece of training data coming back out intact for more than a handful of words should be extremely low.

Of course, in this case, Google's researchers took advantage of the repeat-discouragement mechanism to make that unlikely event occur reliably, showing that there are indeed flaws that make it happen.
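The "repeat discouragement" in question is, in most open implementations, a repetition penalty applied at sampling time: logits of tokens that have already appeared are scaled down so the model is pushed off a loop, and what it diverges into can turn out to be memorized text. A minimal numpy sketch of just that mechanism (toy logits, hypothetical penalty value, not the actual attack):

```python
import numpy as np

def apply_repetition_penalty(logits, generated_ids, penalty=1.3):
    """Toy repetition penalty: scale down the logits of tokens that were
    already generated. Positive logits are divided by the penalty and
    negative ones multiplied, so a repeat always becomes less likely."""
    logits = logits.copy()
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits

logits = np.array([2.0, 1.5, -0.5])
penalized = apply_repetition_penalty(logits, generated_ids=[0, 2])
# Token 0 (already emitted) drops below 2.0; token 1 is untouched.
```

Ask the model to repeat one word forever and this penalty eventually forces it to emit *something else*, which is where the extraction started.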

[–] [email protected] 27 points 11 months ago (5 children)

I'm not an expert, but I would say it's much less likely for a diffusion model to spit out training data completely intact. LLMs and diffusion models work in very different ways.

LLMs work by predicting the next statistically likely token: they take all of the previous text, then predict what the next token will be based on it. So if you can trick one into a state where the subsequent tokens are something verbatim from training data, that's what you get.
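As a toy illustration of that trap (a hypothetical one-sentence "corpus" and a bigram model with greedy decoding, nothing like a production LLM), once the context lands on a sequence the model has only ever seen one continuation for, decoding reproduces the training text verbatim:

```python
from collections import Counter, defaultdict

# Hypothetical "training data" containing one memorized sentence.
corpus = "a quick brown fox jumps over one lazy dog".split()

# Count bigram continuations: token -> Counter of observed next tokens.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def greedy_continue(token, length):
    """Always pick the statistically most likely next token."""
    out = [token]
    for _ in range(length):
        if token not in bigrams:
            break
        token = bigrams[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(greedy_continue("a", 8))
# → "a quick brown fox jumps over one lazy dog"
```

With a real model the odds of a long unique continuation are supposed to be tiny; the attack above is about engineering a state where they aren't.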

Diffusion models work by taking a randomly generated latent, combining it with the CLIP interpretation of the user's prompt, then trying to turn the randomly generated information into a new latent which the VAE will then decode into something a human can see, because the latents the model is dealing with are meaningless numbers to humans.

In other words, there's a lot more randomness to deal with in a diffusion model. You could probably get a specific source image back if you specially crafted a latent and a prompt, which one guy did do by basically running img2img on a specific image that was in the training set and giving it a prompt to spit the same image out again. But that required having the original image in the first place, so it's not really a weakness in the same way this was for GPT.
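A toy numpy sketch of why that img2img result isn't surprising (a simplified linear "denoiser" standing in for a real diffusion model; everything here is hypothetical): img2img starts from the original image plus a little noise, and each step pulls the latent toward what the model associates with the prompt, which in that experiment was literally the training image. Start close, end close:

```python
import numpy as np

rng = np.random.default_rng(0)
training_image = rng.normal(size=16)  # stand-in "memorized" latent

def denoise_step(latent, target, strength=0.3):
    """Toy denoiser: nudge the latent toward the target the model
    associates with the prompt (a real model predicts noise instead)."""
    return latent + strength * (target - latent)

# img2img: begin from the ORIGINAL image with mild noise added...
latent = training_image + 0.2 * rng.normal(size=16)
for _ in range(20):
    latent = denoise_step(latent, training_image)

# ...so the output lands right back on the training image.
print(np.abs(latent - training_image).max())
```

The interesting question for diffusion models is whether you can hit a memorized image *without* already holding it, which is what the GPT attack managed for text.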

[–] [email protected] 1 points 11 months ago

I'm not talking strictly about ideas; I'm talking about a human having a vision and taking action to make that vision into something. Copyrightability requires a "human element," which is the reasoning behind why machine- or animal-generated content cannot be copyrighted: they lack it.

So the question is: if someone tweaks an image, even if they're merely selecting things, is that a sufficient human element to say a person had enough of a hand in creating it?

 

cross-posted from: https://jamie.moe/post/113630

There have been users spamming CSAM content in [email protected] causing it to federate to other instances. If your instance is subscribed to this community, you should take action to rectify it immediately. I recommend performing a hard delete via command line on the server.

I deleted every image from the past 24 hours personally, using the following command (note that `shred` overwrites the file contents in place; add `-u` if you also want the file removed afterwards):

```
sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -exec shred {} \;
```

Note: Your local jurisdiction may impose a duty to report or other obligations. Check those, but always prioritize ensuring that the content does not continue to be served.

Update

Apparently the Lemmy Shitpost community has been shut down for now.
