Aatube

joined 6 months ago
[–] [email protected] -2 points 13 hours ago (2 children)

I would agree with this if they had somehow severely harmed only Hezbollah. That was not the case.

[–] [email protected] 13 points 13 hours ago

At least 12 people were killed after the attacks,[60][1][61] and more than 2,750 were wounded.[5][6] Civilians were also killed,[10][13][14] including four healthcare workers[62] and two children.[63] It is not clear if only Hezbollah members were carrying the pagers.[19] Lebanese Health Minister Firass Abiad said the vast majority of those being treated in emergency rooms were in civilian clothing and their Hezbollah affiliation was unclear.[64] He added the casualties included elderly people as well as young children. According to the Lebanese Health Ministry, healthcare workers were also injured and it advised all healthcare workers to discard their pagers.[64][65]

[–] [email protected] 8 points 14 hours ago

YouTube Music has all the album versions even though some of them are hidden from normal YouTube search

[–] [email protected] 5 points 1 day ago

It’s called a desperate gambit for survival.

[–] [email protected] 3 points 1 day ago (1 children)

Will they approve installing it on the remote machine?

[–] [email protected] 17 points 1 day ago (6 children)

As a user of bionic reading, wtf did you do to your text

[–] [email protected] 2 points 2 days ago

I mean, their goal was readability, and at least they're trying new things.

[–] [email protected] 1 points 3 days ago

But what about their 307 m³ and counting of spent fuel?

[–] [email protected] 1 points 3 days ago (2 children)

Ooooh. Is there a catch? Are the Fukushima spent fuel caskets all fully recycled?

[–] [email protected] 2 points 3 days ago

I live within two hours of a decommissioned nuclear power plant and 10 seconds of a sink.

[–] [email protected] 3 points 3 days ago (2 children)

You either treat them like normal buildings, treat them like tourist attractions, or just sell them to Holtec.

plumbing

Do you really think that out of the millions of demolished buildings, none had toilets‽

 

Identical text was perceived as less credible when presented as a Wikipedia article than as simulated ChatGPT or Alexa output. The researchers note that these results might be influenced by the fact that it is easier to discern factual errors on a static text page like a Wikipedia article than when listening to the spoken audio of Alexa or watching the streaming chat-like presentation of ChatGPT.

However, exploratory analyses yielded an interesting discrepancy between perceived information credibility when being exposed to actual information and global trustworthiness ratings regarding the three information search applications. Here, online encyclopedias were rated as most trustworthy, while no significant differences were observed between voice-based and dynamic text-based agents.

Contrary to our predictions, people felt higher enjoyment [measured using questions like "I found reading the information / listening to the information entertaining"] when information was presented as static or dynamic text compared to the voice-based agent, while the two text-based conditions did not significantly differ. In Experiment 2, we expected to replicate this pattern of results but found that people also felt higher enjoyment with the dynamic text-based agent than the static text.

Edit: Added "for credibility" to title

 

…because we shouldn't be humanizing AI while depersonalizing the actual people who use stuff, according to MIT Technology Review.

111
that's not- (kbin.melroy.org)
 
 
 

It was only in the early 1980s that the first tangible evidence connecting a toxicant with neurodegeneration came to light. In a tragic experiment of nature, a group of drug users in California accidentally injected themselves with a bad batch of designer heroin and began suffering from symptoms closely resembling those of Parkinson’s disease. Investigators traced the batch to a back-alley chemist who had synthesized the drugs and found that he had mistakenly created a neurotoxin precursor known as MPTP as part of the concoction. As it turned out, MPTP also resembled aspects of the chemical makeup of paraquat, a common herbicide, opening the door to the notion that perhaps chronic exposure to synthetic toxins was triggering Parkinson’s in aging patients the same way that the bad batch of heroin had in the users. Since then, advancements in molecular and genetic testing have continued to reinforce the idea. Recent studies have linked brain disorders with chronic exposure to cyanobacterial blooms, pesticides, air pollution and numerous other toxicants. Some researchers have gone so far as to describe Parkinson’s disease in particular as “man-made.”

As summarized by the New York Times.

 

A paper[1] presented in June at the NAACL 2024 conference describes "how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages." A "research prototype" version of the resulting "STORM" system is available online and has already attracted thousands of users. This is the most advanced system for automatically creating Wikipedia-like articles that has been published to date.

The authors hail from Monica S. Lam's group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, "the first few-shot LLM-based chatbot that almost never hallucinates" – a paper that received the Wikimedia Foundation's "Research Award of the Year" some weeks ago).

Please read the article before commenting. Also, coming right up: another paper creates a structural diagram in Comic Sans.

 

[L]ately, Anthropic has been in the headlines for less noble reasons: It’s pushing back on a landmark California bill to regulate AI. It’s taking money from Google and Amazon in a way that’s drawing antitrust scrutiny. And it’s being accused of aggressively scraping data from websites without permission, harming their performance.

It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

 

The TRACTOR program aims to automate the translation of legacy C code to Rust. The goal is to achieve the same quality and style that a skilled Rust developer would produce, thereby eliminating the entire class of memory safety security vulnerabilities present in C programs. This program may involve novel combinations of software analysis, such as static analysis and dynamic analysis, and machine learning techniques like large language models.
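To make the goal concrete, here's a minimal sketch (mine, not from the DARPA announcement) of the kind of rewrite TRACTOR is after: a C loop with an off-by-one out-of-bounds read, and the idiomatic Rust a skilled developer would write instead, where the bounds travel with the data.

```rust
// Illustrative example only; the hypothetical C original is reproduced here:
//
//     int sum(int *buf, size_t n) {
//         int total = 0;
//         for (size_t i = 0; i <= n; i++)   /* off-by-one: reads buf[n] */
//             total += buf[i];
//         return total;
//     }
//
// The Rust version takes a slice, so the length is part of the type and
// an out-of-bounds index can at worst panic instead of silently reading
// adjacent memory.
fn sum(buf: &[i32]) -> i32 {
    buf.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    println!("{}", sum(&data)); // prints 10
}
```

Whether an LLM pipeline can produce rewrites of that quality across billions of lines of legacy C is exactly the long-shot bet being made here.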

Highlights from the forum thread:

There's even a conspiracy theory that the Rust Foundation's 501(c) organization type was chosen so it can conduct lobbying, the implication being that the Rust Foundation is behind government recommendations to move toward memory-safe languages. (Big Borrow-Checker, if you will.)

Assuming a worst case scenario, this could be the worst thing to happen to Rust’s image. We end up with billions of lines of rewritten Rust code that is full of soundness and logic bugs, and that no one understands.

DARPA funds some projects on a "there is an infinitesimal chance of success, but if you succeed, it's a big deal" basis. Silent Talk is an example here - very unlikely to succeed, even at the beginning, but if you could hold a radio conversation without sound, that'd be a huge deal for special operations forces.

 

The company says it’s proof that quality AI models don’t have to include controversial copyrighted content. Adobe trained Firefly on content that had an explicit license allowing AI training, which means the bulk of the training data comes from Adobe’s library of stock photos, says Greenfield. The company offers creators extra compensation when material is used to train AI models, he adds.

This is in contrast to the status quo in AI today, where tech companies scrape the web indiscriminately and have a limited understanding of what the training data includes. Because of these practices, AI datasets inevitably include copyrighted content and personal data, and research has uncovered toxic content, such as child sexual abuse material.

 

QL was our first game and although it was a big milestone for us, it was created at a time before we understood version control software. We do not have access to the source code anymore and cannot make any fixes or changes to the game. Because of this, we have decided to disable the ability for anyone to buy copies of the game. Thank you for your time and feel free to reach out to us.

The trailer looks like an awesome vaporwave freeze tag indie game.

 

Some government employee made the “new logo” in the 90s for NCSA software (the Common Gateway Interface), and government work is public domain.

Just more evidence that big brother shall releaseth thee work and soul /s

 

…according to a Twitter post by the Chief Information Security Officer of Grand Canyon Education.

So, does anyone else find it odd that the file that caused everything running CrowdStrike to freak out, C-00000291-00000000-00000032.sys, was 42KB of blank/null values, while the replacement file C-00000291-00000000-00000033.sys was 35KB and looked like a normal, if obfuscated, sys/.conf file?
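If you want to check a channel file yourself, here's a minimal sketch (the program name and passing the path as an argument are my own choices, not from the post) that reports whether a file really is all null bytes:

```rust
use std::{env, fs};

// Minimal null-byte check: read the whole file into memory and report
// its size and whether every byte is zero, matching the claim above.
fn main() -> std::io::Result<()> {
    let path = env::args().nth(1).expect("usage: nullcheck <file>");
    let bytes = fs::read(&path)?;
    let all_null = bytes.iter().all(|&b| b == 0);
    println!("{}: {} bytes, all null: {}", path, bytes.len(), all_null);
    Ok(())
}
```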

Also, apparently CrowdStrike had at least 5 hours to work on the problem between the time it was discovered and the time it was fixed.
