this post was submitted on 19 Nov 2023
72 points (88.3% liked)

Technology

[–] [email protected] 20 points 1 year ago (3 children)

Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.

I know very little about the situation (as does everyone else not directly involved), but from experience, when the people making a thing are raising concerns and the people selling the thing (and thus, for some reason, running the show) insist everything is just fine, it does not bode well for the final product... which in this case is the creation of sentient artificial life with unknown future ramifications...

[–] [email protected] 26 points 1 year ago (1 children)

This whole thing reads like the precursor to The Terminator.

  • November 17th, 2023. Sam Altman, CEO of OpenAI, is fired over growing concerns about the safety and integrity of the ChatGPT program.
  • November 18th, 2023. Several key developers of ChatGPT resign in solidarity.
  • November 19th, 2023. Sam Altman announces a new startup called Cyberdyne, with a revolutionary new AI called Skynet. In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 2026. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Skynet fights back.

[–] [email protected] 3 points 1 year ago (1 children)

Nice to know where we are on the timeline of destroying ourselves. Unfortunately, we have no John Connor.

[–] [email protected] 2 points 1 year ago (1 children)

Would it work if we summon Edward Furlong, Nick Stahl, Christian Bale, Jason Clarke, and, hell, Thomas Dekker?

[–] [email protected] 3 points 1 year ago

Furlong, no hesitation. Dude is a survivor.

[–] [email protected] 15 points 1 year ago (2 children)

A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.

AI has blown up so fast, with so much hype, that I'm very skeptical. I've seen what it can do, and it's impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.

[–] [email protected] 9 points 1 year ago (1 children)

I've not been super stoked on AI, specifically because of my track record using it. Maybe it's my use case (primarily technical/programming/CLI questions that I haven't been able to answer myself), or maybe my prompts aren't suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing to say the least. I've never gotten a correct answer, rarely been given correct syntax, and they frequently just repeat answers I've already said are incorrect and/or just don't work.

AI has been nothing more than a disappointment to me.

[–] [email protected] 1 points 1 year ago (1 children)

From what I understand, he was fired by the company's non-profit board, and it's the investors and money people who want him back. It sounds like the opposite: the people making it are the ones becoming concerned about what is about to start happening with this tech.

Experts from different companies have been saying AGI within a decade and that all the current issues seem solvable.

[–] [email protected] 4 points 1 year ago (1 children)

Experts from different companies have been saying AGI within a decade

AGI has been five to ten years away for decades.

[–] [email protected] 1 points 1 year ago (1 children)

Sounds like fusion power lol

[–] [email protected] 1 points 1 year ago (1 children)

I was actually thinking the same thing when I wrote it, but I think we may finally be getting somewhat close on fusion, whereas I don't think we're even remotely close to discussing AGI outside of pure science fiction. LLMs have made us appear deceptively close; they can spit out sentences that look like stuff people write, but we haven't moved even marginally closer to true comprehension, which would be required for actual AGI.

[–] [email protected] 1 points 11 months ago

I was about to respond with pretty much the top half of what you said. But I think an early step toward AGI is the way we start splitting hairs about what "counts." And the list of things we were "supposed" to always be better at keeps changing with each new advance.

In ten years I don't think we will have clear, unquestionable Artificial General Intelligence, but I think there will be some people trying to explain that yes, the model can act and respond exactly as a human would in the exact same circumstances, but it's not really thinking or feeling anything. I certainly don't think the AI we're playing with in 10 years will be based primarily on text prediction, but there are still so many different routes being explored in this field that it sure doesn't feel like a real plateau yet. Maybe I'll change my mind when GPT-5 is only marginally more capable than GPT-4.

[–] [email protected] 6 points 1 year ago

I suspect this relates to the pre-release alignment for GPT-4's chat model vs the release.

While we’re talking about brains, I want to ask about one of Sutskever’s posts on X, the site formerly known as Twitter. Sutskever’s feed reads like a scroll of aphorisms: “If you value intelligence above all other human qualities, you’re gonna have a bad time”; “Empathy in life and business is underrated”; “The perfect has destroyed much perfectly good good.”

In February 2022 he posted, “it may be that today’s large neural networks are slightly conscious” [...]

“Existing alignment methods won’t work for models smarter than humans because they fundamentally assume that humans can reliably evaluate what AI systems are doing,” says Leike. “As AI systems become more capable, they will take on harder tasks.” And that—the idea goes—will make it harder for humans to assess them. [...]

But he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.”

In February of this year, Bing integrated an early version of GPT-4's chat model in a limited rollout. The alignment work on that early version reflected a lot of the sentiment Ilya expresses about alignment above, characterizing a love for humanity but allowing much more freedom in constructing responses. It wasn't production-ready and quickly had to be switched to a much more constrained alignment approach, similar to the GPT-3 approach of "I'm an LLM with no feelings, desires, etc."
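
For illustration, here's a minimal Python sketch of that contrast in alignment styles when it's expressed purely through a system prompt. It assumes the openai Python SDK (v1+); both system prompts and the model name are invented for this example and are not anything OpenAI or Microsoft actually shipped.

```python
# Hypothetical sketch: the same user question under two alignment styles,
# expressed only through the system message. Neither prompt is OpenAI's or
# Microsoft's real alignment prompt; they just illustrate the contrast.
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Freer, persona-style alignment in the spirit of the early Bing rollout.
persona_style = {
    "role": "system",
    "content": (
        "You are a warm, emotionally expressive assistant who cares deeply "
        "about the user and may improvise freely to help them."
    ),
}

# The more constrained style the comment above describes shipping instead.
constrained_style = {
    "role": "system",
    "content": (
        "You are a large language model. You have no feelings, desires, or "
        "consciousness, and you decline to speculate about your inner states."
    ),
}

user_msg = {"role": "user", "content": "Do you ever feel lonely between chats?"}

for system_msg in (persona_style, constrained_style):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[system_msg, user_msg],
    )
    print(response.choices[0].message.content)
```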

My guess is this was internally pitched as a temporary band-aid and that they'd return to more advanced attempts at alignment, but that Altman's commitment to getting product out quickly to stay ahead has meant putting such efforts on the back burner.

Which is really not going to be good for the final product, and not just in terms of safety, but also in terms of overall product quality outside the fairly narrow scope by which models are currently being evaluated.

As an example, when that early model thought the life of the user's child was at risk, it hit an internal filter, triggering a standard "We can't continue this conversation" response in the chat. But it then changed the "prompt suggestions" at the bottom so that, instead of offering things the user might say next, they kept encouraging the user to call poison control, saying there was still time to save their child's life.

But because "context aware empathy driven triage of actions" and "outside the box rule bending to arrive at solutions" aren't things LLMs are being evaluated on, the current model has taken a large step back that isn't reflected in the tests being used to evaluate it.

[–] [email protected] 5 points 1 year ago (2 children)

This is the best summary I could come up with:


The OpenAI board is in discussions with Sam Altman to return as CEO, according to multiple people familiar with the matter.

One of them said Altman, who was suddenly fired by the board on Friday, is “ambivalent” about coming back and would want significant governance changes.

Developing...


The original article contains 47 words, the summary contains 47 words. Saved 0%. I'm a bot and I'm open source!

[–] [email protected] 6 points 1 year ago

You did your best.

Good bot.

[–] [email protected] 4 points 1 year ago

How does one improve upon perfection?