chicken

joined 1 year ago
[–] [email protected] 2 points 2 weeks ago

It can also be a solid rubber duck for debugging.

A lot of the time I get 3/4 of the way through writing a prompt and don't bother hitting enter because I already figured it out. Having an incentive to put your thoughts down in writing is a great way to get them organized.

[–] [email protected] 12 points 2 weeks ago

I wrote off political media as hyperbolic, manipulative propaganda in 2016 and I actively distance myself from it, so I've only seen the broad strokes of the current election cycle. Unless you honestly believe you are doing important activism work, give yourself permission to just chill out about politics. If your life is full of problems caused by politics, such that it's impossible for you to chill out about it, you have my sympathy.

[–] [email protected] 0 points 2 weeks ago (1 children)

that is not the ... available outcome.

It demonstrably is already though. Paste a document in, then ask questions about its contents; the answer will typically take what's written there into account. Ask about something you know is in a Wikipedia article that would have been part of its training data, same deal. If you think it can't do this sort of thing, you can just try it yourself.
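If you'd rather try it programmatically than in a chat window, here's a rough sketch of that document test against a local model. It assumes an OpenAI-compatible server (llama.cpp and Ollama both expose one); the URL, filename, and model name are all placeholders, not a real setup:

```python
# Minimal sketch of "paste a document in, then ask questions about its contents".
# Assumes an OpenAI-compatible endpoint at a hypothetical local address.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

document = open("some_article.txt").read()  # any text file you have on hand

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever your server exposes
    messages=[
        {"role": "system",
         "content": "Answer using only the document provided by the user."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: What does the "
                    f"document say about its main topic?"},
    ],
)
print(response.choices[0].message.content)
```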

Obviously it can handle simple sums, this is an illustrative example

I am well aware that LLMs can struggle, especially with reasoning tasks, and that they have a bad habit of making up answers in some situations. That's not the same as being unable to correlate and recall information, which is the relevant task here. Search engines also use machine learning technology and have been able to do that to some extent for years. But with a search engine, even if it's smart enough to figure out what you wanted and give you the correct link, that's useless if the content behind the link is only available to institutions that pay thousands a year for the privilege.

Think about these three things in terms of what information they contain and their capacity to convey it:

  • A search engine

  • A dataset of pirated content from behind academic paywalls

  • An LLM model file that has been trained on that pirated data

The latter two each have their pros and cons and would likely work best in combination with each other, but they both have an advantage over the search engine: they can tell you about the locked-up data, and they can be used to combine it in novel ways.

[–] [email protected] -5 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public to gain the ability to query a system with synthesized expert knowledge scraped from all academically relevant documents. If you're wondering about something and don't know what you don't know, or don't have any idea where to start looking to learn what you want to know, an LLM is an incredible resource even with caveats and limitations.

Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and easier to work with, so in practice the legal barriers to such a thing existing might not be able to stop it for long.
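As a rough illustration, here's what that "answer with the sources attached" idea could look like over a local corpus. The endpoint, model name, corpus path, and question are all assumptions, and the retrieval is deliberately naive:

```python
# Rough sketch of retrieval plus citation: naive retrieval over a local folder
# of text files, then a prompt that asks the model to cite what it used.
# Endpoint, model name, and corpus path are hypothetical, not a real setup.
import pathlib

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def top_passages(question: str, corpus_dir: str, k: int = 3) -> list[tuple[str, str]]:
    """Score each file by crude word overlap with the question."""
    terms = set(question.lower().split())
    scored = []
    for path in pathlib.Path(corpus_dir).glob("*.txt"):
        text = path.read_text(errors="ignore")
        score = sum(text.lower().count(t) for t in terms)
        scored.append((score, path.name, text[:2000]))  # truncate long files
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]

question = "What is known about X?"  # placeholder question
context = "\n\n".join(f"[{name}]\n{text}"
                      for name, text in top_passages(question, "./papers"))

response = client.chat.completions.create(
    model="local-model",  # placeholder
    messages=[
        {"role": "system",
         "content": "Answer from the excerpts below and cite the [filename] "
                    "of every source you use."},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

A real setup would swap the word-overlap scoring for embeddings or BM25, but the shape is the same: retrieve, label the sources, and make the model cite them.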

[–] [email protected] 14 points 3 weeks ago (11 children)

The OP tweet seems to be leaning pretty hard on the "AI bad" sentiment. If LLMs make academic knowledge more accessible to people, that's a good thing for the same reason that what Aaron Swartz was doing was a good thing.

[–] [email protected] 2 points 3 weeks ago

A text message app with a keyword-blocking feature is very useful to have.

[–] [email protected] 6 points 3 weeks ago

I bought a cheap, large-capacity, unknown-brand SD card somewhat recently. It seemed real at first, but after I installed an OS on it and ran it for a few minutes it somehow got bricked. At least I got a refund.

[–] [email protected] 7 points 4 weeks ago* (last edited 4 weeks ago)

thepiratebay still exists, but it's regarded as untrustworthy and infested with malware. I'd say knowing you're getting something from a trustworthy source is harder than it used to be.

[–] [email protected] 2 points 4 weeks ago

I guess there are probably a lot of people trading that stuff who are dumb enough to network on Facebook and Instagram under their real identities.

[–] [email protected] 1 points 4 weeks ago

do they need to? I don’t think so.

Why not? How can you be sure that all these laws are going to cover all the same things and not have many tricky edge cases? What would keep them from being like that? Again, these laws give unique rights to residents of their respective states to make particular demands of websites, and they aren't copy-pastes of each other. There's no documented set of best practices that is guaranteed to encompass all of them.

they don’t want this solution, however, but in my understanding instead to force every state to have weaker privacy laws

I can't speak to what they really want privately, but in the industry letter linked in the article, the explicit request seems to be something like a US equivalent of the GDPR:

A national privacy law that is clear and fair to business and empowering to consumers will foster the digital ecosystem necessary for America to compete.

To me that seems like a pretty sensible thing to ask for: a centrally codified set of practices to avoid confusion and complexity.

[–] [email protected] 12 points 1 month ago (2 children)

The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used.

Any chance that's the real reason and not just a flimsy excuse? What kind of information would you even need a fake identity to gather from a public forum?

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (2 children)

In 2022, industry front groups co-signed a letter to Congress arguing that “[a] growing patchwork of state laws are emerging which threaten innovation and create consumer and business confusion.” In 2024, they were at it again this Congress, using the term four times in five paragraphs.

Big Tobacco did the same thing.

Is this really a fair comparison, though? A variety of local laws about smoking in restaurants makes sense because restaurants are inherently tied to their physical location. A restaurant only has to know and follow the rules of its town, state, and country, and the town can take the time to ensure that its laws are compatible with state and national law.

A website is global. Every local law that can be enforced must be followed, and the burden isn't on legislators to make their rules compatible with everyone else's. Needing to build a subtly different version of a website for every state and country to stay in full compliance with all their different rules, and needing to have lawyers check over every version, would make creating and maintaining a website or other online service prohibitively difficult and expensive. That seems like a legitimate reason to want unified standards.

To be fair, there are plenty of privacy regulations this wouldn't apply to, like the example the article gives of San Francisco banning the use of facial recognition tech by police. But the industry complaint linked in the article references laws like https://www.oag.ca.gov/privacy/ccpa and https://leg.colorado.gov/bills/sb21-190 that obligate websites to fulfill particular demands made by residents of those respective states. Subtle differences in those sorts of laws seem like something that could cause actual problems, unlike differences in smoking laws.
