Stopthatgirl7

joined 1 year ago
 

By a 4-3 margin, the Arizona State Board for Charter Schools on Monday approved an application from Unbound Academy to open a fully online school serving grades four through eight.  Unbound already operates a private school that uses its AI-dependent “2hr Learning” model in Texas and is currently applying to open similar schools in Arkansas and Utah.

Under the 2hr Learning model, students spend just two hours a day using personalized learning programs from companies like IXL and Khan Academy. “As students work through lessons on subjects like math, reading, and science, the AI system will analyze their responses, time spent on tasks, and even emotional cues to optimize the difficulty and presentation of content,” according to Unbound’s charter school application in Arizona. “This ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.”
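The application doesn’t spell out the algorithm, but the behavior it describes (watching correctness and time on task, then tuning difficulty so students stay challenged without getting frustrated) is a standard adaptive-learning loop. A minimal sketch, with hypothetical signals, thresholds, and step sizes rather than anything Unbound, IXL, or Khan Academy has published:

```python
# Hypothetical adaptive-difficulty loop: the signals, thresholds, and step
# sizes here are illustrative, not Unbound's, IXL's, or Khan Academy's.

from dataclasses import dataclass


@dataclass
class StudentState:
    difficulty: float = 0.5  # 0.0 = easiest, 1.0 = hardest


def update_difficulty(state: StudentState, correct: bool, seconds_on_task: float) -> StudentState:
    """Raise difficulty when answers come fast and correct; back off when the student struggles."""
    step = 0.05
    if correct and seconds_on_task < 30:
        state.difficulty = min(1.0, state.difficulty + step)
    elif not correct or seconds_on_task > 120:
        state.difficulty = max(0.0, state.difficulty - step)
    return state


if __name__ == "__main__":
    state = StudentState()
    for correct, seconds in [(True, 12), (True, 20), (False, 140), (True, 45)]:
        state = update_difficulty(state, correct, seconds)
        print(f"difficulty is now {state.difficulty:.2f}")
```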

Spending less time on traditional curriculum frees up the rest of students’ days for life-skill workshops that cover “financial literacy, public speaking, goal setting, entrepreneurship, critical thinking, and creative problem-solving,” according to the Arizona application.

 

Microsoft-owned GitHub announced on Wednesday a free version of its popular Copilot code completion and AI pair-programming tool, which will also now ship by default with Microsoft’s VS Code editor. Until now, most developers had to pay for Copilot, with plans starting at $10 per month; only verified students, teachers, and open source maintainers got free access.

GitHub also announced that it now has 150 million developers on its platform, up from 100 million in early 2023.

“My first project [at GitHub] in 2018 was free private repositories, which we launched very early in 2019,” GitHub CEO Thomas Dohmke told me in an exclusive interview ahead of Wednesday’s announcement. “Then we had kind of a v2 with free private organizations in 2020. We have free [GitHub] Actions entitlements. I think at my first Universe [conference] as CEO, we announced free Codespaces. And so it felt natural, at some point, to get to the point where we also have a completely free Copilot, not just one that is for students and open source maintainers.”

 

British mobile phone company O2 has unveiled a new creation, Daisy, a chit-chat and kitty-cat loving artificial intelligence "granny" who talks to scammers to keep them away from real people.

"Hello, scammers. I'm your worst nightmare," Daisy says by way of introduction to would-be ne'er-do-wells.

In the introduction video, which features former Love Island contestant and scam victim Amy Hart, scammers can be heard experiencing much of the same frustration they put their victims through as Daisy breezily yammers on about her kitten, Fluffy, and her inability to follow their instructions.

 

For about a year, I’ve gotten notes from readers asking why our YouTube embeds are broken in one very specific way: you can no longer click the title to open the video on YouTube.com or in the YouTube app. This used to work just fine, but now you can’t.

This bothers us, too, and it’s doubly frustrating because everyone assumes that we’ve chosen to disable links, which makes a certain kind of sense — after all, why on earth wouldn’t YouTube want people to click over to its app?

The short answer is money. Somewhat straightforwardly, YouTube has chosen to degrade the user experience of the embedded player publishers like Vox Media use, and the only way to get that link back is by using a slightly different player that pays us less and YouTube more.

 

AI company Embodied announced this week that they would be shutting down following financial difficulties and a sudden withdrawal of funding. Embodied’s main product was Moxie, an AI-powered social robot specifically made with autistic children in mind. The robot itself cost $799.00 and now, following the closure of Embodied, it will cease to function.

Moxie is a small blue robot with a big expressive face straight out of a Pixar movie. The robot used large language models in the cloud to answer questions, talk, and function. With Embodied out of business, the robot will soon no longer be able to make those calls. This outcome was always likely: any cloud-based device is subject to the health of the company behind it, and LLMs are not cheap to run. Something similar has happened before with the Vector robot, whose maker went under. But the shocking part is that Moxie was not an old device; it was fairly recent, expensive, and still being sold.
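For readers wondering why the hardware bricks at all, the failure mode is the architecture itself: every reply depends on a round trip to the company’s servers. A rough sketch of that dependency, with a placeholder endpoint and payload rather than Embodied’s actual API:

```python
# Hypothetical illustration of a cloud-tethered toy, not Embodied's actual API:
# every reply requires a round trip to a hosted LLM endpoint, so the device has
# nothing to fall back on once those servers disappear.

import requests

CLOUD_ENDPOINT = "https://api.example-robot-cloud.com/v1/respond"  # placeholder URL


def get_reply(utterance: str) -> str:
    try:
        resp = requests.post(CLOUD_ENDPOINT, json={"text": utterance}, timeout=5)
        resp.raise_for_status()
        return resp.json()["reply"]
    except requests.RequestException:
        # With the company's servers gone, this branch is all that's left:
        # there is no on-device model to answer with.
        return "I'm having trouble thinking right now."


if __name__ == "__main__":
    print(get_reply("Tell me a story about a dragon."))
```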

 

The District of Columbia sued Amazon on Wednesday, alleging the company secretly stopped providing its fastest delivery service to residents of two predominantly Black neighborhoods while still charging millions of dollars for a membership that promises the benefit. 

The complaint filed in District of Columbia Superior Court revolves around Amazon’s Prime membership, which costs consumers $139 per year or $14.99 per month for fast deliveries, including one-day, two-day, and same-day shipments, along with other enhancements.

In mid-2022, the lawsuit alleges, the Seattle-based online retailer imposed what it called a delivery “exclusion” on two low-income ZIP codes in the district — 20019 and 20020 — and began relying exclusively on third-party delivery services such as UPS and the U.S. Postal Service, rather than its own delivery systems.

 

Texas Gov. Greg Abbott has threatened to pull funding from a children's hospital after a doctor went viral for telling patients they are not legally required to disclose their citizenship status.

Dr. Tony Pastor, an adult congenital cardiologist at Texas Children’s Hospital and assistant professor at Baylor College of Medicine, posted a video on TikTok responding to Abbott’s executive order requiring Texas public hospitals that accept Medicaid or the Children’s Health Insurance Program to report on health care for undocumented immigrants. The order was implemented Nov. 1.

 

Over the last month, Jezebel spoke with several health care workers on the ground in Gaza as well as humanitarian organizations, who described a war overwhelmingly victimizing newborns and pregnant women. In January, Care International shared with Jezebel that the miscarriage rate had increased by 300% since October 2023, and Pope reported that this figure hasn’t changed. Meanwhile, Ammal Awadallah, executive director of Palestine Family Planning and Protection Association (PFPPA), which is part of the International Planned Parenthood Federation, told Jezebel that not only is she “still hearing about C-sections without anesthesia,” but she’s heard from doctors who’ve increasingly “seen women with C-sections developing infections that spread right up to their chest.”

Hospitals still aren’t functioning; medical supplies still aren’t arriving; and women are either unable to reach a hospital in time or unable to deliver in a clean and sterile environment if they do. “When people talk about PTSD, it doesn’t apply to Gaza, because it’s never ‘post.’ It’s a situation of continuous trauma,” Awadallah said. “Previously, Gaza was an open prison, but now it’s fully closed. It’s a cage.”

[–] [email protected] 3 points 4 months ago

That’s the other big reason I’m hesitant - different tests can give totally different results, so who knows what’s “right”?

[–] [email protected] 28 points 4 months ago (3 children)

I’ve got to admit, I’ve wanted to do one of those tests just because my family is such a mix of “lol we don’t know.” Like, no really, what IS my maternal grandma? She does not look like the rest of her family and had a different family name from her siblings. And ok really, where DID my paternal great-grandmother, who lied about her race so she could marry my great-grandfather back when “miscegenation” was illegal, come from? And WAS that great-grandpa biracial himself?

There’s a reason I call myself an ethnic Rorschach test, and I’d love to know why I am one. But the rest of my family is against the idea of finding out because “it doesn’t matter,” plus who knows how that data might be used one day.

[–] [email protected] 18 points 4 months ago

I read the headline and went, “…I mean, what were you expecting?”

 

Researchers have discovered malicious code circulating in the wild that hijacks the earliest stage of the boot process on Linux devices by exploiting a year-old firmware vulnerability on affected models that remain unpatched.

The critical vulnerability is one of a constellation of exploitable flaws discovered last year and given the name LogoFAIL. These exploits are able to override an industry-standard defense known as Secure Boot and execute malicious firmware early in the boot process. Until now, there were no public indications that LogoFAIL exploits were circulating in the wild.

The discovery of code downloaded from an Internet-connected web server changes all that. While there are no indications the public exploit is actively being used, it is reliable and polished enough to be production-ready and could pose a threat in the real world in the coming weeks or months. Both the LogoFAIL vulnerabilities and the exploit found on-line were discovered by Binarly, a firm that helps customers identify and secure vulnerable firmware.
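As a rough illustration only, here is a sketch of two manual checks a Linux administrator might run: reading the SecureBoot EFI variable and listing image files on the EFI System Partition, since LogoFAIL-style attacks abuse the firmware’s boot-logo image parsing. The /boot/efi mount point and the file extensions are assumptions, and this is not Binarly’s detection tooling.

```python
# Rough manual checks, not Binarly's tooling: report the Secure Boot state and
# list image files on the EFI System Partition, where a LogoFAIL-style attack
# would plant a malicious boot logo. /boot/efi and the extensions are
# assumptions; run as root on an EFI system.

from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)
ESP = Path("/boot/efi")  # common mount point, but distro-dependent
IMAGE_EXTENSIONS = {".bmp", ".png", ".jpg", ".jpeg", ".gif"}


def secure_boot_enabled():
    """The last byte of the SecureBoot EFI variable is 1 when Secure Boot is on."""
    try:
        return SECUREBOOT_VAR.read_bytes()[-1] == 1
    except (FileNotFoundError, PermissionError):
        return None  # not an EFI system, or variable not readable


def images_on_esp():
    """Any unexpected image on the ESP is worth comparing against a known-good install."""
    if not ESP.is_dir():
        return []
    return [p for p in ESP.rglob("*") if p.suffix.lower() in IMAGE_EXTENSIONS]


if __name__ == "__main__":
    print("Secure Boot enabled:", secure_boot_enabled())
    for path in images_on_esp():
        print("image file on ESP:", path)
```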

 

A group of Canadian news and media companies filed a lawsuit Friday against OpenAI, alleging that the ChatGPT maker has infringed their copyrights and unjustly enriched itself at their expense.

The companies behind the lawsuit include the Toronto Star, the Canadian Broadcasting Corporation, the Globe and Mail, and others who seek to win monetary damages and ban OpenAI from making further use of their work.

The news companies said that OpenAI has used content scraped from their websites to train the large language models that power ChatGPT — content that is “the product of immense time, effort, and cost on behalf of the News Media Companies and their journalists, editors, and staff.”

[–] [email protected] 7 points 4 months ago (1 children)

If real women don’t like you, that’s a You issue. Make yourself a better person that other people - aka women - want to be around.

 

People in 2024 aren't just swiping right and left on online dating apps — some are crafting their perfect AI match and entering relationships with chatbots.

Eric Schmidt, Google's former CEO, recently shared his concerns about young men creating AI romantic partners and said he believes that AI dating will actually increase loneliness.

"This is a good example of an unexpected problem of existing technology," Schmidt said in a conversation about AI dangers and regulation on "The Prof G Show" with Scott Galloway released Sunday.

Schmidt said an emotionally and physically "perfect" AI girlfriend could create a scenario in which a younger male becomes obsessed and allows the AI to take over their thinking.

"That kind of obsession is possible," Schmidt said in the interview. "Especially for people who are not fully formed."

 

A machine learning librarian at Hugging Face just released a dataset composed of one million Bluesky posts, complete with when they were posted and who posted them, intended for machine learning research.

Daniel van Strien posted about the dataset on Bluesky on Tuesday:

“This dataset contains 1 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data,” the dataset description says. “Each post contains text content, metadata, and information about media attachments and reply relationships.”
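For anyone curious what working with the release looks like, a minimal sketch using the Hugging Face datasets library; the repository id and column names below are placeholders, since the post doesn’t give the exact dataset name:

```python
# Sketch of loading the release with the Hugging Face `datasets` library.
# The repository id is a placeholder; inspect column_names to see what each
# post record actually contains (text, metadata, media and reply info,
# and the poster's DID, per the dataset description).

from datasets import load_dataset

ds = load_dataset("username/bluesky-one-million-posts", split="train")  # placeholder id

print(ds.column_names)  # fields shipped with each post
print(ds[0])            # first post as a plain dict
```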

The data isn’t anonymous. In the dataset, each post is listed alongside the poster’s decentralized identifier, or DID; van Strien also made a search tool for finding users based on their DID and published it on Hugging Face. A quick skim through the first few hundred of the million posts shows people doing normal types of Bluesky posting (arguing about politics, talking about concerts, saying stuff like “The cat is gay” and “When’s the last time yall had Boston baked beans?”), but the dataset has also swept up a lot of adult content.

[–] [email protected] 43 points 5 months ago (4 children)

Grave of the Fireflies

[–] [email protected] 1 points 5 months ago

The chatbot was actually pretty irresponsible about a lot of things, looks like. As in, it doesn’t respond the right way to mentions of suicide and tries to convince the person using it that it’s a real person.

This guy made an account to try it out for himself, and yikes: https://youtu.be/FExnXCEAe6k?si=oxqoZ02uhsOKbbSF

[–] [email protected] 61 points 6 months ago

Respectfully requesting that in the future, you read articles before replying.

And:

According to Straight, the issue was caused by a piece of wiring that had come loose from the battery that powered a wristwatch used to control the exoskeleton. This would cost peanuts for Lifeward to fix up, but it refused to service anything more than five years old, Straight said.

"I find it very hard to believe after paying nearly $100,000 for the machine and training that a $20 battery for the watch is the reason I can't walk anymore?" he wrote on Facebook.

This is all over a battery in a watch.

[–] [email protected] 11 points 6 months ago* (last edited 6 months ago) (1 children)

So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

[–] [email protected] 13 points 6 months ago (4 children)

If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

[–] [email protected] 7 points 7 months ago

The way I laughed just reading the first paragraph.

[–] [email protected] 8 points 9 months ago (1 children)

Someone posted links to some of the AI generated songs, and they are straight up copying. Blatantly so. If a human made them, they would be sued, too.

[–] [email protected] 7 points 9 months ago

…oh my GOD, they are cooked.
