this post was submitted on 14 Apr 2024
60 points (84.1% liked)

Technology

top 22 comments
[–] [email protected] 24 points 8 months ago (3 children)
[–] [email protected] 28 points 8 months ago* (last edited 8 months ago) (1 children)

Why 'making "pretend people" with artificial intelligence' is a waste of energy

[–] [email protected] 7 points 8 months ago

Oh, thank you

[–] [email protected] -5 points 8 months ago (1 children)

Simply because the title makes zero sense.

[–] [email protected] 2 points 8 months ago

The article itself doesn't really clear it up, IMO.

[–] [email protected] 19 points 8 months ago (4 children)

The headline was confusing and reading the article doesn't really clear things up. I don't think Gill is imagining the same sort of "pretend person" that I would want out of AGI. What I want is a personal assistant that knows me extremely well, is able to tirelessly work on my behalf, and has a personality tailored to my needs and interests. It should be general enough to understand me on a personal level and do a good job anticipating what I want.

That would not at all be a waste of energy to me.

[–] [email protected] 8 points 8 months ago (1 children)

Depends on how much energy it takes. If it takes more resources than it frees, then I'd say it is not worth it.

[–] [email protected] 2 points 8 months ago (1 children)

I am quite sure it'll cost less than it would to hire a human for the job.

[–] [email protected] 6 points 8 months ago (1 children)

I'm talking about the energy and resources to actually create and provide this service.

[–] [email protected] 2 points 8 months ago
[–] [email protected] 6 points 8 months ago

knows me extremely well, is able to tirelessly work on my behalf, and has a personality tailored to my needs and interests.

Those may still be ANI applications.

Today's LLMs, marketed as the future of AGI, are more focused on knowing a little bit about everything, including a little bit about how MRIs work and a summary of memes floating around a parody subreddit. I fail to see how LLMs as they are trained today will know you extremely well and give you a personality tailored to your needs. I also think the commercial interests of big tech are pitted against your desire for "tirelessly work[ing] on my behalf".

[–] [email protected] 2 points 8 months ago

Like Farnsworth Bentley?

[–] [email protected] 1 points 8 months ago (1 children)

What I want is a personal assistant that knows me extremely well, is able to tirelessly work on my behalf, and has a personality tailored to my needs and interests.

and you're not concerned at all about this information being compromised and used against you?

phew.....

[–] [email protected] 3 points 8 months ago (1 children)

Of course I'm concerned about it. That's why I would take measures to ensure the information is well protected. I already run local LLMs and image generators for most of the stuff I use AI for, both to ensure that I have control over what sort of outputs they generate and to keep any inputs I run through them private. An AGI assistant like what I'm describing is something I would want to run on my own hardware too.
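The local setup described above can be sketched roughly as follows. This assumes an inference server such as llama.cpp or Ollama running on your own hardware and exposing an OpenAI-compatible chat endpoint; the localhost URL and model name are illustrative assumptions, not anything from the thread.

```python
import json
import urllib.request

# Illustrative assumption: a llama.cpp or Ollama server running locally
# typically exposes an OpenAI-compatible chat endpoint like this one,
# so prompts and outputs never leave the machine.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"


def build_payload(prompt: str, model: str = "local-model") -> dict:
    """Build a chat-completion request body for a local inference server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_local_llm(prompt: str) -> str:
    """Send the prompt to the local server; nothing goes to a third party."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of routing through localhost is exactly the control described above: inputs stay private and the operator decides what the model is allowed to produce.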

[–] [email protected] 1 points 8 months ago (1 children)

Do you really think you'll be able to run a full fledged, real-feeling AGI on home hardware?

Perhaps an assistant, maybe...

but good on you for forethought.

[–] [email protected] 2 points 8 months ago

Yes, I do. Perhaps not the current generation of hardware, but the chip manufacturers are currently throwing hundreds of billions of dollars into designing the next generation of AI-specialized hardware so I expect the next generation to be very impressive. The software has also been getting more efficient, making better use of the hardware that already exists. I've been experimenting a lot with it.

[–] [email protected] 4 points 8 months ago (1 children)

What should we make pretend people with?

[–] [email protected] 8 points 8 months ago* (last edited 8 months ago)

Play-Doh

Edit: And candy!

[–] [email protected] 3 points 8 months ago (1 children)

I know quite a lot of people doing research into AI, some from my own background in applied mathematics and some from my partner's background in computational linguistics. They all have very different ideas about the future of AI, but they all laugh at the idea of a generic AGI.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

Humans are generally intelligent, so we know general intelligence exists. Is there something special about meat computers that can't be replicated in silico? Possibly, but probably not, since both are made of physical matter. So what's stopping us from creating AGI? Nothing; it's only a matter of time. We just have no idea how far away we are from it. It might take decades, or it may pop into existence next week, and if we're dismissive of it like your AI research buddies, we'll be completely caught off guard.

[–] [email protected] 2 points 8 months ago

This is the best summary I could come up with:


Interview While the likes of OpenAI and Google DeepMind chase after some fabled artificial general intelligence, not everyone thinks that's the best use of our time and energy in developing AI.

Computer scientist Binny Gill – CEO and co-founder of business automation firm Kognitos, and formerly chief architect and cloud CTO at Nutanix – thinks the push for AGI is entirely the wrong approach to what could be the next industrial revolution.

Rather than trying to replicate humans with some kind of general-purpose artificial intelligence, Gill thinks we should look to the past to see what sort of systems we should be building.

Gill instead hopes we'll see the rise of what he calls artificial narrow intelligence, or ANI.

This isn't a new concept; it's the sort of application-specific machine learning that already exists behind things like self-driving cars.

To learn more about Gill's optimistic vision for the future of AI, watch our full video interview with him by clicking on play above.


The original article contains 279 words, the summary contains 163 words. Saved 42%. I'm a bot and I'm open source!