NaibofTabr

joined 1 year ago
[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Yeah, and if you wrote some feedback to a magazine article, the editor might write a response to you and publish both in next month's issue, but that would be the end of it. No one who read your feedback as published in the magazine could respond to you directly - it's not really a conversation; it's slow and limited by the format. You could write another message to the editor responding to their response, but that wouldn't get published in the following issue, so at most it would just be a one-to-one communication.

This is very different from writing a post on an internet message board and getting twenty responses from twenty different people in a span of minutes. The closest past equivalent I can think of is literal soapboxing, where you go stand on a street and talk at people walking by, and they can immediately respond to you if they choose - but then that's in person, face-to-face.

[–] [email protected] 9 points 1 day ago* (last edited 1 day ago) (2 children)

Yes...

It's easier to be an asshole to words than to people.

xkcd #438 (June 18, 2008)

Personally, I think that we (humans) haven't really socially adjusted to digital communications technology, its speed or brevity, or the relatively short attention span it tends to encourage. We spent millennia communicating by talking to each other, face to face, and we're still kind of bad at that, but we do mostly try to avoid directly provoking each other in person. Writing gave us a means to communicate while separated, but in the past that meant writing a letter, a process that is generally slow and thoughtful. In contrast, commenting on social media is usually done so quickly that there isn't much thoughtfulness exhibited.

We've had three-ish decades of exchanging messages on the internet, having conversations with complete strangers, and being exposed to dozens, hundreds, even thousands of other people reading and responding to what we write... less than one human lifetime. We're not equipped for this, mentally, emotionally, historically. Social and cultural norms haven't adapted yet.

[–] [email protected] 3 points 1 day ago

It doesn't do computational photography, true - I don't know of any open source mobile apps that can; it's a very complicated subject.

It does allow switching between the various lenses, at least on my OnePlus 12R.

[–] [email protected] 34 points 2 days ago (2 children)

Open Camera is FOSS (GPLv3) and is available in both Google Play and F-Droid.

[–] [email protected] 249 points 3 days ago (28 children)

Is "dragged" the new "slammed"?

[–] [email protected] 7 points 3 days ago (1 children)
  • a few git repos (pushed and backup in the important stuff) with all docker compose, keys and such (the 5%)

Um, maybe I'm misunderstanding, but you're storing keys in git repositories which are where...?

And remember, if you haven't tested your backups then you don't have backups!
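For what it's worth, "testing" doesn't have to be elaborate. A minimal sketch of the idea (the archive path and checksum manifest below are hypothetical, just for illustration): restore the backup into a throwaway directory and verify the files actually come back intact.

    #!/usr/bin/env python3
    """Hypothetical restore smoke test: unpack a backup archive into a temp
    directory and check file hashes against a manifest written at backup time."""
    import hashlib
    import json
    import tarfile
    import tempfile
    from pathlib import Path

    BACKUP = Path("/mnt/backups/compose-stack.tar.gz")          # assumed archive path
    MANIFEST = Path("/mnt/backups/compose-stack.sha256.json")   # assumed {"rel/path": "hash"} manifest

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(BACKUP) as tar:
            tar.extractall(tmp)  # restore into a throwaway directory
        expected = json.loads(MANIFEST.read_text())
        for rel, digest in expected.items():
            restored = Path(tmp) / rel
            assert restored.exists(), f"missing after restore: {rel}"
            assert sha256(restored) == digest, f"corrupt after restore: {rel}"

    print("restore test passed")

If the restore and the hash checks pass, you at least know the archive is readable and complete; running something like this on a schedule catches silent corruption before you actually need the backup.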

[–] [email protected] 61 points 1 week ago (5 children)

Yeah, but the current build of libvegs has some conflicts with libfruit, so if you need to use both you have to build libvegs in a different directory and then symlink it into /lib.

[–] [email protected] 2 points 2 weeks ago

pics or it didn't happen

[–] [email protected] 0 points 2 weeks ago (3 children)

I see, so your argument is that because the training data is not stored in the model in its original form, it doesn't count as a copy, and therefore it doesn't constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it's a bit clearer now. It's still wrong, but at least it makes some kind of sense.

If the model "has no memory of training data images", then what effect do the images have on the model? Why is the training data necessary, and what is its function?

[–] [email protected] 12 points 2 weeks ago

the presentation and materials viewed by 404 Media include leadership saying AI Hub can be used for "clinical or clinical adjacent" tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients’ personally identifying and protected health information. The demonstration also showed potential capabilities that included “detect pancreas cancer,” and “parse HL7,” a health data standard used to share electronic health records.

Because as everyone knows, LLMs do a great job of getting specific details correct and always produce factually accurate output. I'm sure this will have no long term consequences and benefit all the patients greatly.
