this post was submitted on 15 Jul 2024
72 points (77.3% liked)

Technology

top 28 comments
[–] [email protected] 135 points 4 months ago* (last edited 4 months ago) (2 children)

I am increasingly starting to believe that all these rumors and "hush hush" PR initiatives about "reasoning AI" are an attempt to keep the hype going (and the VC investments coming) until the vesting period for their stock closes out.

I wouldn't be surprised if all these "AI" companies have reached a point where they're basically at the limits of what LLMs can do (due to problems with their fundamental architecture) while being unable to solve their core drawbacks (hallucinations, ridiculously high capex and opex costs).

[–] [email protected] 44 points 4 months ago (2 children)

Yeah, this. It's a weird time though. All of it is hype and marketing, hoping to cover costs by searching for some unseen product down the line ... even the original ChatGPT feels like a basic marketing stunt: "If people can chat with it, they'll think it's miraculous, however useful it actually is."

OTOH, it's easy to forget that genuine progress has happened with this rush of AI, and that it surprised many. Literally the year before AlphaGo beat the world champion, no one thought it was going to happen any time soon. And though I haven't checked in recently, from what I could tell the progress DeepMind made on protein folding was real (however hyped it also was). Whether new things are still coming or not I don't know, but it seems more than possible. But of course, that doesn't mean there isn't a big pile of hype that will blow away in the wind.

What I ultimately find disappointing is the way the mainstream has responded to all of this.

  1. The lack of conversation about what we want this to look like in the end. There's way too much of a passive "let's see where the technology and big-corp capitalism take us, and hope it doesn't lead to some sort of apocalypse" attitude.
  2. The very seamless and reflexive acceptance that an AI chat interface could be an all-knowing authority for everything in life ... was somewhat shocking to me. Obviously, decades of "Googling" to get the answers to things has laid the groundwork for that, but still, there was IMO an unseemly acceptance of a pretty troubling future that indicated just how easily some dark timeline could arise.
[–] [email protected] 25 points 4 months ago* (last edited 4 months ago) (2 children)

Progress is definitely happening. One area that I am somewhat knowledgeable about is image/video upscaling. Neural net enhanced upscaling has been around for a while, but we are increasingly getting to a point where SD (DVD source, older videos from the 90s/2000s) to HD upscaling is working almost like in the science fiction movies. There are still issues of course, but the results are drastically better than simply scaling the source media by x2.
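
For anyone curious what that comparison looks like in practice, here's a minimal sketch (assuming opencv-contrib-python is installed and a pretrained EDSR x2 model file has been downloaded; the file names are just placeholders) of naive bicubic 2x scaling next to a neural super-resolution model:

```python
# Minimal sketch: naive bicubic 2x upscale vs. neural super-resolution.
# Assumes opencv-contrib-python and a downloaded EDSR_x2.pb model file
# (paths/names here are placeholders, not from the original comment).
import cv2

frame = cv2.imread("sd_frame.png")  # hypothetical SD source frame
h, w = frame.shape[:2]

# Baseline: plain bicubic interpolation to double the resolution.
bicubic = cv2.resize(frame, (w * 2, h * 2), interpolation=cv2.INTER_CUBIC)

# Neural net upscaling via OpenCV's dnn_superres module.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x2.pb")   # pretrained EDSR weights for 2x upscaling
sr.setModel("edsr", 2)
upscaled = sr.upsample(frame)

cv2.imwrite("bicubic_x2.png", bicubic)
cv2.imwrite("edsr_x2.png", upscaled)
```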

The framing of LLMs as some sort of techno-utopian "AI oracle" is indeed a damning reflection of our society. Although I think this topic is outside the scope of current "AI" discussions and would likely involve a fundamental reform of our broader social, economic, political and educational models.

Even the term "AI" (and its framing) is extremely misleading. There is no "artificial intelligence" involved in a LLM.

[–] [email protected] 8 points 4 months ago

Sure, but those are specialist models.

Generalist models are stagnant and show little potential for progress.

[–] [email protected] 7 points 4 months ago

One area that I am somewhat knowledgeable about is image/video upscaling

Oh I believe you. I've seen it done on a home machine on old time-lapse photos. It might have been janky for individual photos, but as frames in a movie it easily elevated the footage.

[–] [email protected] 20 points 4 months ago (2 children)

It's the Elon Musk style of narrative-making we've been seeing over and over again. It's hype. They're about to run out of input data because they've sucked up everything they could. The Internet is being fed a bunch of bad results that come from LLM-produced output, which enshittifies the Internet further. These companies are burning cash and grid energy while the world burns. Unless there's a spectacular breakthrough, this can't keep going on much longer.

[–] [email protected] 0 points 4 months ago

It would cost a lot of money, but you can definitely go through and manually sanitize the data.

That would give a good bump in performance, both in output quality and in the resources required to run it.

Quality over quantity.
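
A first pass of that kind of curation could even be automated before humans ever look at it. A minimal sketch (purely illustrative; the file names and thresholds are made up) that drops exact duplicates and obviously low-quality lines from a text corpus:

```python
# Illustrative sketch of a simple data-cleaning pass over a text corpus:
# drop exact duplicates and obviously low-quality lines before training.
# File names and thresholds are hypothetical.
import hashlib

def looks_low_quality(line: str) -> bool:
    """Cheap heuristics: too short, or mostly non-alphabetic characters."""
    stripped = line.strip()
    if len(stripped) < 20:
        return True
    alpha = sum(ch.isalpha() for ch in stripped)
    return alpha / len(stripped) < 0.6

seen_hashes = set()
kept, dropped = 0, 0

with open("raw_corpus.txt", encoding="utf-8") as src, \
     open("clean_corpus.txt", "w", encoding="utf-8") as dst:
    for line in src:
        digest = hashlib.sha256(line.strip().encode("utf-8")).hexdigest()
        if digest in seen_hashes or looks_low_quality(line):
            dropped += 1
            continue
        seen_hashes.add(digest)
        dst.write(line)
        kept += 1

print(f"kept {kept} lines, dropped {dropped}")
```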

[–] [email protected] -1 points 4 months ago

They're not even close to running out of input data; you forget YouTube exists.

[–] [email protected] 42 points 4 months ago* (last edited 4 months ago)

For those not wanting to read the article, note that they revealed (to employees) a progress framework, not any actual progress.

The framework is just a five-tiered classification of potential future AIs: Chatbots (1); Reasoners (2); Agents (3); Innovators (4); and Organizations (5). They characterize their current progress as near level 2, but there's no indication of recent progress that would be newsworthy in its own right.

[–] [email protected] 32 points 4 months ago

Marketers be marketeering.

[–] [email protected] 16 points 4 months ago

Is this where we play "how long can we tease a breakthrough before the market loses interest"?

[–] [email protected] 14 points 4 months ago

They probably just hardcoded some replies to the cabbage/wolf/boat riddle.

[–] [email protected] 12 points 4 months ago

Lying to get the stock to go up again, are we, Altman?

[–] [email protected] 11 points 4 months ago

I would like to know the energy consumption of this one before we open the floodgates yet again, OpenAI.

[–] [email protected] 11 points 4 months ago (1 children)

I highly doubt it. They may be able to simulate the appearance of reasoning, but I won't believe that they've accomplished this goal until their robots start killing humans over ideological differences.

[–] [email protected] 1 points 4 months ago (2 children)

Yeah, wake me up when the murder bots are here.

[–] [email protected] 2 points 4 months ago (1 children)

"Hey! That's just a machine programmed to kill me, it's not making the decision to kill me itself!"

[–] [email protected] 1 points 4 months ago

Yeah, I really care about the motive of the thing that kills me. It's honestly the most important part.

[–] [email protected] 1 points 4 months ago (1 children)

To be fair, it might be too late by then, but it also might be true that it's not just the fairy tales with happy endings that are unrealistic. No sense worrying about T-1000s coming for you in real life when that whole movie was mostly special effects; if the world is about to die, I don't see it coming from machines. We don't know where free will comes from, or even whether it's just a math equation or something truly beyond explanation, but computers don't seem to have it.

Scarily enough, the Quran (with all the things that implies; I am not saying this is actually reality, only that the parallels should not fall into place that way under random chance) points out that this conclusion was engineered in some sense, that electronics were never going to give us godhood due to the limitations of reality. It's kind of blunt in saying it, so I get why the skepticism needs to stay involved, but the idea is that our "household gods" of Siri and Alexa and such are really just basic circuitry compared to a housefly or mosquito, let alone anything larger or capable of emotional attachment.

Sorry if this is preachy, I'm a writer who hasn't done enough writing lately and I'm just at a stage where I feel like it's too late for my writing to matter.

[–] [email protected] 2 points 4 months ago (1 children)

Yeah, no worries, I get it.

I'm a perennial optimist, so I look more at the Star Trek future than any of the dystopias, though dystopia is my favorite type of book (setting? genre?). In every dystopia, we get the same general theme of the human spirit pushing against evil, the difference from other stories being the lack of success.

I think people take these warnings to heart and avoid the worst of it. I don't think we'll get to the Star Trek utopia, but I think we'll get closer than any of the various dystopias people concoct. Humans are late to respond to issues, but we generally do respond.

I think the same is true for AI. It'll start as a helpful piece of tech, transform into a monster, and then we'll correct and control it. We've done that in the past with slavery, nuclear weapons, and fascism, and I think we'll continue to overcome climate change, AI, and other challenges, albeit much later than we should.

[–] [email protected] 1 points 3 months ago (1 children)

Actually, I'm glad to know you're interested in utopian settings. I was mostly depressed because the utopian sci-fi story I published (I won't spam, but DM me if you use Amazon for reading books) had been outright attacked by other writers for being "too optimistic"; for some inexplicable and seemingly irrational reason, the idea of an artificial afterlife built entirely by human hands is outright offensive to atheists. It was, admittedly, an unorthodox utopia: resurrecting 125 billion people at a rate of (iirc) ~2.5 people every four minutes (I did the math, I just no longer have the notes) for 30 million years (Australopithecus to Homo sapiens sapiens) and giving all of them immortality (via respawns with a 9-month timeskip every time you die 3 times in a single week), mental health care, privacy, security, education, water, food, mail and courier service, library membership (they saved the books that were burnt or lost, too), shelter (hey, some people like living outdoors), transit, electricity, television, internet, and recreational drugs, in that order, and without being the only provider.

Basically, it's a constitutional oligarchy with municipal elected officials, with the full intent and obligation to transition to full democracy on a vote, and it strives to balance capitalism with socialist regulation of that capitalism (because yes, outright communism would never actually work, but socialism is the "parent category" of communism and is why we have both TGVs and Interstate Highways in real life; taxes and tax-funded public services are the definition of socialist policy, and I honestly believe they're the best option, seeing as it's worked more or less consistently since the 1950s). The oligarchy are REQUIRED to survive on the smallest income in the entire society (the leadership live entirely on the same Universal Basic Income as the poorest citizens, and thus must raise the UBI to raise their own income), which leads to greater equity without complicated systems of bureaucracy.

To be fair, I don't know if it would work, given all the historical factors involved, but I actually did research on what has and hasn't worked and relied on that over my own opinion as much as possible. So it really hurt for people to outright reject it, because 'I don't want anyone to get inspired to create anything like it, based entirely on my hatred of an unrelated religious philosophy' was/is(?) prominent within the current trend of 'the societal implications of technology (hint: wE hAtE tEcHnOlOgY aNd NeRdS!!!!)' in the sci-fi writing community.

Long story short, thank you, optimistic readers who want optimistic stories are in short supply lately.

[–] [email protected] 1 points 3 months ago (1 children)

That certainly sounds interesting, but I think there are a few issues here:

  • Artificial afterlife - aside from the technical issues, which I'm guessing you addressed, I wonder if this wouldn't devolve into extreme levels of violence and corruption. If you remove the consequences for murder/death, what's to stop you from taking extreme risks to get what you want?
  • Where's the conflict? That's what drives a story in most cases, aside from "slice of life" stories, which I honestly don't understand.
  • Why would elected officials be okay with living off UBI? When you underpay your representatives, they get paid through other means, so surely that would lead to corruption instead? You want your elites feeling like they're at the top so they don't give in to bribes and whatnot.

But personally, when I read a story, I'm not looking to read about how things could be; I'm looking for insight into why things are the way they are and what we need to change to get what we want. Star Trek is interesting to me not because of the utopian setting, but because they explore some facet of humanity in each episode, usually through visiting other planets. The setting is interesting, but I'm there for the story. The Moon is a Harsh Mistress is interesting not because of the "libertarian utopia" setting, but because it's about an underdog pushing against an oppressor. We get just enough insight into the society on the moon to understand the conflict and resolution, and that's it.

So perhaps you didn't get a great reception because the setting took too much of the stage?

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

My point with the setting was that, at least according to verifiable evidence, certain aspects of society have been shown to run better in every implementation, under circumstances that translate to all or many cultures, but we don't use them in most places because they seem strange or because of demonization in the eyes of the more influential demographics.

In short, it's a setting that proposes, in a fantastical way, that "a near utopia would require a lot of planning and transition periods, but the biggest blocker now is greed, arrogance and hatred, not technology", with its basic message still relevant today.

If people want to know the societal implications of technology, I wanted to give someone a reason to trust technology when the people behind it are trustworthy, and to show that governments and corporations can only be trusted as long as that trust is unbroken, but that individual people can change.

It is illegal in the setting for the oligarchs to remain in control if so much as ONE of them is ever caught with non-UBI currency. Everyone gets one residence, period, and it's small because of the huge population size (~100 billion when the novels would have started), and the UBI includes that free residence. Which means the oligarchs are not just on UBI; they can't spend more than that UBI per month, and it's in a special corruption-resistant currency where all transactions are publicly visible. The only security the "council" gets is that they don't take 9 months to respawn if they get killed 3 times in a week.

That's not what drives the story, though. Corruption in business still exists to a degree, but beyond that, the inhabitants have time to heal from trauma. And though certain inanimate objects are made eternal, most are not, because that would make life boring, and the economy (even if it works more like game mechanics than an actual economy) relies on a degree of scarcity.

The characters learn when they're resurrected that immortality is provided for its own sake, and because nobody deserves to stop existing, but not everyone is easily swayed to the idea that there's no room in this setting for hatred. There are a lot of things that cause cynicism, but all of them give a different kind of person a stress reaction to immortality not seen in people without significant mental trauma, and that is what the story would have been about: learning to be okay with the realization that you can never really reach a final destination, and that if the afterlife is a game, then you have to play that game to a degree or you'll just be miserably bored in unnecessary "tribute" to the idea that worth is based on numbers or reputation.

Unfortunately, even when I provided free samples of the stories, I only received blatant disapproval of the setting and outright demands to modify it into something that is dystopian in practice, not just in appearance. A big theme was supposed to be that the setting wasn't built to be beautiful, but because the people in it are not being constantly pushed down, the structure of society resembles the best real life has ever had, and graffiti and personal additions for beautification are both legal and encouraged, even a world of creaking thousand-year-old buildings and standardized apartment modules with solar-panel exteriors feels less like cyberpunk, and more like solarpunk than solarpunk itself ever has.

Eventually I gave up, because people saying "your work should not be about how this society avoids dystopia, but about how I think of it as a dystopia because people I disagree with are there" does not change the fact that if we had to pick restrictions, it would be to ban people like Hitler from running for any political position or keeping their original identity, not to leave them dead entirely, because then everybody starts complaining that, because they dislike Person X, Person X shouldn't even be allowed to state their case. Once you start getting into resurrection and reprogramming reality itself, letting slippery slopes like that begin to crumble is essentially playing god with a game of Russian roulette. But no, people still think their personal standards are the center of morality, and some even defended leaving groups dead based purely on association. I write fantasies, not tragedies. If that's how people think, I'll be writing a much less kind assessment of what we can become, one that we each would actually deserve.

[–] [email protected] 4 points 4 months ago
[–] [email protected] 3 points 4 months ago

Pit of despair, hype cycle, etc.

[–] [email protected] 1 points 4 months ago