Spedwell

joined 1 year ago
[–] [email protected] 5 points 1 month ago (1 children)

If we're doing short stories, I have two recommendations:

  • Ted Chiang's Stories of Your Life and Others.
  • Kurt Vonnegut's Welcome to the Monkey House.
[–] [email protected] 16 points 3 months ago (1 children)

As the article points out, TSA is using this tech to improve efficiency. Every request for manual verification breaks their flow, requires an agent to come address you, and eats more time. At the very least, you ought not to scan, in the hope that TSA's metrics look poor enough that they decide this tech isn't practical to use.

[–] [email protected] 35 points 4 months ago (5 children)

I'm curious what issue you see with that? It seems like the project is only accepting unrestricted donations, but is there something suspicious about Shopify that makes its involvement concerning (I don't know much about them)?

[–] [email protected] 8 points 4 months ago

404media is doing excellent work tracking the non-consensual porn market and technology. Unfortunately, you don't really see the larger, more mainstream outlets giving it the same attention beyond its effect on Taylor Swift.

[–] [email protected] 6 points 4 months ago (1 children)

Right concept, except you're off in scale. A MULT instruction would exist in both RISC and CISC processors.

The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.

[–] [email protected] 22 points 5 months ago

Huh, thanks for the heads up. Section 4 makes it look like they can close-source whenever they want.

I'm just glad FUTO is still letting Immich use the AGPL instead of this, though.

[–] [email protected] 7 points 5 months ago* (last edited 5 months ago)

There is an episode of Tech Won't Save Us (2024-01-25) discussing how weird the podcasting play was for Spotify. There is essentially no way to monetize podcasts at scale, primarily because podcasts do not have the same degree of platform lock-in as other media types.

Spotify spent the $100 million (or whatever the number was) to make Rogan an exclusive, but for essentially every other podcast you can find a free RSS feed with skippable ads. Also their podcast player just outright sucks :/

[–] [email protected] 2 points 5 months ago (5 children)

Errrrm... No. Don't get your philosophy from LessWrong.

Here's the part of the LessWrong page that cites Simulacra and Simulation:

Like “agent”, “simulation” is a generic term referring to a deep and inevitable idea: that what we think of as the real can be run virtually on machines, “produced from miniaturized units, from matrices, memory banks and command models - and with these it can be reproduced an indefinite number of times.”

This last quote does indeed come from Simulacra (you can find it in the third paragraph here), but it appears to have been quoted solely because, when paired with the definition of simulation the article puts forward:

A simulation is the imitation of the operation of a real-world process or system over time.

it makes it seem as though Baudrillard supports the idea that a computer can just simulate any goddamn thing we want it to.

If you are familiar with the actual arguments Baudrillard makes, or simply read the context around that quote, it is obvious that this is misappropriating the text.

[–] [email protected] 12 points 5 months ago* (last edited 5 months ago)

The reason the article compares to commercial flights is that your everyday reader knows planes' emissions are large. It's a reference point so people can weigh the ecological tradeoff.

"I can emit this much by either (1) operating the global airline network, or (2) running cloud/LLMs." It's a good way to visualize the cost of cloud systems without just citing tons-of-CO2/yr.

Downplaying that by insisting we look at the transportation industry as a whole doesn't strike you as... a little silly? We know transport is expensive; it is moving tons of mass over hundreds of miles. The fact that computer systems even get close is an indication of the sheer scale of energy being poured into them.

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

concepts embedded in them

internal model

You used both phrases in this thread, but those are two very different things. It's a stretch to say this research supports the latter.

Yes, LLMs are still next-token generators. That is a descriptive statement about how they operate. They just have embedded knowledge that allows the text they generate to sometimes be meaningful.
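To make the "next-token generator" point concrete, here is a minimal sketch of the autoregressive loop, assuming a Hugging Face transformers setup with gpt2 purely for illustration (the library and model choice are my assumptions, not anything from this thread): the model only ever produces a distribution over the single next token, and "generation" is just appending the pick and running again.

```python
# Illustrative sketch only: greedy next-token generation with an assumed gpt2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # example model, chosen arbitrarily
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The cat sat on the", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits          # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # pick the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```

Whatever "embedded knowledge" the model has lives in the weights that produce those logits; the loop itself never does anything but pick the next token.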

[–] [email protected] 7 points 6 months ago

It's not really stupid at all. See the matrix code example from this article: https://spectrum.ieee.org/ai-code-generation-ownership

You can't really know whether the genAI is synthesizing from thousands of inputs or just outright reciting copyrighted code. Not kosher if it's the latter.

[–] [email protected] 9 points 6 months ago (6 children)

I get that there are better choices now, but let's not pretend that a straw you blow into is the technological stopping point for limb-free computer control (sorry if that's not actually the best option, it's just the one I'm familiar with). There are plenty of things to trash talk Neuralink about without pretending this technology (or its future form) is meritless.
