Ashelyn

joined 1 year ago
[–] [email protected] 18 points 2 months ago (2 children)

Aren't there still massive issues with the Colorado River running dry? Hopefully they're not too dependent on that water source for their chips.

[–] [email protected] 0 points 2 months ago

What if it is getting ripped/torn, but there's just more space 'underneath' that instantly fills the gaps as they're created? I guess at that point it's indistinguishable from stretching, but it's interesting to think about.

[–] [email protected] 6 points 2 months ago

Why not just put up a Moonlight Tower?

[–] [email protected] 29 points 6 months ago

The device wouldn't necessarily have to be constantly streaming audio to a central server. If it's capable of hearing a wake word like "OK Google," it's capable of listening for other phrases and using onboard processing to relay the results back in a much more compressed form. Whether or not this is common practice is another matter, and yes, the algorithms are scary good even without eavesdropping.

[–] [email protected] 39 points 6 months ago (3 children)

Any time a news headline asks a question, the answer is almost always "no."

[–] [email protected] 8 points 6 months ago

I was just poking a bit of fun, because there's a good chance it was an autocorrect typo for the original commenter too :p

[–] [email protected] 2 points 6 months ago (2 children)

Vacuuming up days?

Like it sucks time from your life, siphoning away precious hours without you even realizing it? I guess that's one way to frame browsing Polygon, but I'd personally view it as a pretty tame example compared to sites like YouTube or Lemmy.

Or did you mean data, like the site is harvesting information off your device when you click the link?

[–] [email protected] 2 points 7 months ago

That's fair. I think fundamentally a false positive/negative isn't that much different. Pretty much all tests, especially those dealing with real-world conditions, are heuristic, as are all LLMs by the nature of their design. "Hallucination" is a pretty specific term given to AI in an attempt to assign agency to a system that doesn't actually have any (by implying it's crazy and making stuff up, rather than a black box with deterministic inputs and outputs spitting out something factually wrong but similar in format to what it was trained on). I feel like any tool where "you can't trust this to be entirely accurate" should have an umbrella term that encompasses both ways of producing inaccurate info under certain conditions.

I suppose the difference is that AI is a lot more likely to randomly go off the rails, whereas a blood test is likelier to produce repeated false positives for the same person, given their unique biology? There's also the fact that most medical tests boil down to a true/false dichotomy or a lookup table, whereas an LLM is given the entire bounds of language.

Would an AI clustering algorithm (say, K-means for instance) giving an inaccurate diagnosis be a false positive/negative or a hallucination? These models can be programmed on a sliding scale and I feel like there's definitely an area where the line could get pretty blurry.
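The blurry line the comment gestures at can be made concrete with a toy sketch (all numbers and labels here are hypothetical, not from any real diagnostic system): a nearest-centroid assignment, the core step of k-means, quietly mislabels a borderline case, and whether we call that a false negative or a "hallucination" is purely a matter of terminology.

```python
# Toy illustration: the assignment step of k-means (nearest centroid)
# mislabeling a borderline patient. Centroids stand in for clusters
# previously learned from some hypothetical biomarker reading.
centroids = {"healthy": 1.0, "diseased": 5.0}

def assign(reading):
    # Assign the reading to whichever cluster centroid is closest.
    return min(centroids, key=lambda label: abs(reading - centroids[label]))

borderline = 2.9           # a truly diseased patient with a low reading
print(assign(borderline))  # closest centroid is "healthy": a false negative
```

The model did exactly what it was programmed to do with deterministic inputs and outputs, yet the answer is factually wrong, which is the same structure as an LLM confidently emitting a plausible-looking falsehood.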

[–] [email protected] 3 points 7 months ago (2 children)

I mean, AI is used in fraud detection pretty often; when it hits a false positive (which happens frequently at a population level), is that not a hallucination of some sort? Obviously LLMs can go off the rails much further because their output is readable text, but any machine learning model will occasionally spit out really bad guesses that almost any person could improve on. (To be fair, humans are highly capable of really bad guesses too.)

[–] [email protected] 11 points 8 months ago* (last edited 8 months ago)

I think there's a difference between using pre-established characters and settings vs. wholesale copy-pasting someone else's entire work to sell as one's own (or to directly and solely profit off of, regardless of whether credit is given). Whether or not there's a legal distinction between the two in terms of copyright, there's absolutely a line to be drawn at overt plagiarism.

[–] [email protected] 12 points 8 months ago (2 children)

And likewise, the sellers could be polite, ask permission, and potentially settle on some amount of royalty payments, or they could just do it, make their money, and ask for forgiveness afterwards, or simply take down the listing and find another artist's work to repackage.

[–] [email protected] 1 points 8 months ago

Also, I'm not sure if this is the same in Canada as in the US, but I'm pretty sure that in many cases, vandalism is considered a much lesser crime than unauthorized computer tampering/hacking.
