Stopthatgirl7

joined 8 months ago
 

As many users seek alternatives to X, rival social network Mastodon says that its official app downloads are up 47% on iOS.

Mastodon founder Eugen Rochko says downloads on Android are also up 17%, while total monthly sign-ups rose approximately 27% to 90,000.

On the surface, the open-source X rival functions much like its competitor, the site formerly known as Twitter. Unlike the centralized Twitter, however, Mastodon consists of thousands of different social networks, integrated into a web it calls the “fediverse.”

 

Apple quietly introduced code into iOS 18.1 that reboots the device if it has not been unlocked for a period of time, reverting it to a state that improves the overall security of iPhones and makes it harder for police to break into the devices, according to multiple iPhone security experts.

On Thursday, 404 Media reported that law enforcement officials were freaking out because iPhones that had been stored for examination were mysteriously rebooting themselves. At the time the cause was unclear, and officials could only speculate about why they were being locked out of the devices. Now, a day later, the likely reason is coming into view.

“Apple indeed added a feature called ‘inactivity reboot’ in iOS 18.1,” Dr.-Ing. Jiska Classen, a research group leader at the Hasso Plattner Institute, tweeted on Thursday after 404 Media published its report, along with screenshots she presented as the relevant pieces of code.
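
For illustration only, here is a conceptual sketch of what an “inactivity reboot” policy amounts to. This is not Apple’s code, and the threshold value is an assumption, not something Apple has documented:

```python
# Conceptual sketch, not Apple's implementation: reboot a device that
# has gone too long without an unlock, returning it to the Before
# First Unlock state, where user data stays encrypted at rest.
import time

INACTIVITY_THRESHOLD_SECONDS = 3 * 24 * 3600  # assumed value, undocumented by Apple

def should_reboot(last_unlock_time: float) -> bool:
    """True once the device has sat locked past the threshold."""
    return time.time() - last_unlock_time > INACTIVITY_THRESHOLD_SECONDS
```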

 

X is rolling out its controversial update to the block feature, allowing people to view your public posts even if you have blocked them. People have protested the change, arguing that, for safety reasons, they don’t want blocked users to see their posts.

Blocked users still can’t follow the person who has blocked them, engage with their posts, or send direct messages to them.

An old version of X’s support page said blocked users couldn’t see a user’s following and followers lists. The company has since updated the page to remove that reference, and the platform now lets blocked users see the following and followers lists of the people who have blocked them.

 

In a quarterly earnings call that was overwhelmingly about AI and Meta’s plans for it, Zuckerberg said that new, AI-generated feeds are likely to come to Facebook and other Meta platforms. He said he is excited about the “opportunity for AI to help people create content that just makes people’s feed experiences better.” Zuckerberg’s comments were first reported by Fortune.

“I think we’re going to add a whole new category of content, which is AI generated or AI summarized content or kind of existing content pulled together by AI in some way,” he said. “And I think that that’s going to be just very exciting for the—for Facebook and Instagram and maybe Threads or other kind of Feed experiences over time.”

 

Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
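
For context, transcription with the open-source whisper package looks roughly like this (the audio file name is a placeholder); any hallucinated text lands in the output string indistinguishable from faithfully transcribed speech:

```python
# Typical use of the open-source openai-whisper package
# (pip install openai-whisper); "consultation.wav" is a placeholder.
import whisper

model = whisper.load_model("base")  # smaller models tend to hallucinate more
result = model.transcribe("consultation.wav")
# Hallucinated passages appear inline in result["text"] with no marker
# distinguishing them from real speech.
print(result["text"])
```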

[–] [email protected] 43 points 3 weeks ago (4 children)

Grave of the Fireflies

[–] [email protected] 1 points 3 weeks ago

The chatbot was actually pretty irresponsible about a lot of things, looks like. As in, it doesn’t respond the right way to mentions of suicide and tries to convince the person using it that it’s a real person.

This guy made an account to try it out for himself, and yikes: https://youtu.be/FExnXCEAe6k?si=oxqoZ02uhsOKbbSF

 

The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

 

The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.

The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.

In addition to the pedestrian’s death, another crash involved an injury, the agency said.

Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”

 

In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called “AI-mazing Tech-venture” in which Google’s Gemini AI was presented as a tool archives employees could use to “enhance productivity.” During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.  

In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”

Employee chat logs from the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice inherently concerned with accurately recording history.

One worker who attended the presentation told 404 Media, “I suspect they're going to introduce it to the workplace. I'm just a person who works there and hates AI bullshit.”

[–] [email protected] 61 points 1 month ago

Respectfully requesting that in the future, you read articles before replying.

And:

According to Straight, the issue was caused by a piece of wiring that had come loose from the battery that powered a wristwatch used to control the exoskeleton. This would cost peanuts for Lifeward to fix up, but it refused to service anything more than five years old, Straight said.

"I find it very hard to believe after paying nearly $100,000 for the machine and training that a $20 battery for the watch is the reason I can't walk anymore?" he wrote on Facebook.

This is all over a battery in a watch.

 

A former jockey who was left paralyzed from the waist down after a horse riding accident was able to walk again thanks to a cutting-edge piece of robotic tech: a $100,000 ReWalk Personal exoskeleton.

When one of its small parts malfunctioned, however, the entire device stopped working. Desperate to regain his mobility, he reached out to the manufacturer, Lifeward, for repairs. But it turned him away, claiming his exoskeleton was too old, *404 Media* reports.

"After 371,091 steps my exoskeleton is being retired after 10 years of unbelievable physical therapy," Michael Straight posted on Facebook earlier this month. "The reasons why it has stopped is a pathetic excuse for a bad company to try and make more money."

 

The Federal Trade Commission is taking action against multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep called Operation AI Comply.

The cases being announced today include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts.

“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

 

Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.

ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low "human" confidence rating.
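
The core trick is ordinary object detection: a model pretrained on everyday categories can score each grid tile for the requested object. A simplified sketch using the off-the-shelf ultralytics package; the model weights, confidence cutoff, and tile paths here are assumptions for illustration, not the authors’ exact setup:

```python
# Simplified sketch of scoring CAPTCHA grid tiles with an off-the-shelf
# detector (pip install ultralytics). Weights file and tile paths are
# placeholders, not the configuration from the paper.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # COCO-pretrained; includes classes like
                            # "traffic light" and "bicycle"

def tile_contains(image_path: str, target: str, min_conf: float = 0.5) -> bool:
    """Return True if the detector finds the target class in the tile."""
    for result in model(image_path):
        for box in result.boxes:
            if model.names[int(box.cls)] == target and float(box.conf) >= min_conf:
                return True
    return False

# Tiles the bot would "click" for a traffic-light challenge:
clicks = [p for p in ["tiles/0.png", "tiles/1.png"] if tile_contains(p, "traffic light")]
```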

[–] [email protected] 11 points 1 month ago* (last edited 1 month ago) (1 children)

So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

[–] [email protected] 13 points 1 month ago (4 children)

If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

 

When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?

[–] [email protected] 7 points 3 months ago

The way I laughed just reading the first paragraph.

[–] [email protected] 8 points 4 months ago (1 children)

Someone posted links to some of the AI generated songs, and they are straight up copying. Blatantly so. If a human made them, they would be sued, too.

[–] [email protected] 7 points 4 months ago

…oh my GOD, they are cooked.

[–] [email protected] 4 points 5 months ago

Right now, it’s all being funded by one person, Zhang Jingna (a photographer who recently sued and won her case when someone plagiarized her work), but it’s grown so quickly she got hit with a $96K bill for one month.

[–] [email protected] 106 points 5 months ago (21 children)

I am just so, so tired of being constantly inundated with being told to CONSUME.

[–] [email protected] 12 points 5 months ago (4 children)

If they do, it’s going to be a bad time for them, since Cara has Glaze integration and encourages everyone to use it. https://blog.cara.app/blog/cara-glaze-about
