Hot off the back of its recent leadership rejig, Mozilla has announced users of Firefox will soon be subject to a ‘Terms of Use’ policy — a first for the iconic open source web browser.

This official Terms of Use will, Mozilla argues, offer users ‘more transparency’ over their ‘rights and permissions’ as they use Firefox to browse the information superhighway, as well as Mozilla’s “rights” to help them do it, as this excerpt makes clear:

You give Mozilla all rights necessary to operate Firefox, including processing data as we describe in the Firefox Privacy Notice, as well as acting on your behalf to help you navigate the internet.

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

Also about to go into effect is an updated privacy notice (aka privacy policy). This adds a crop of cushy caveats to cover the company’s planned AI chatbot integrations, cloud-based service features, and more ads and sponsored content on the Firefox New Tab page.

[–] [email protected] 154 points 5 days ago (1 children)

Oh, that last paragraph doesn't give me hope at all. Fucking AI chatbots.

[–] [email protected] 210 points 5 days ago (3 children)

The actual addition to the terms is essentially this:

  1. If you choose to use the optional AI chatbot sidebar feature, you're subject to the ToS and Privacy Policy of the provider you use, just as if you'd gone to their site and used it directly. This is obvious.
  2. Mozilla will collect light data on usage, such as how frequently people use the feature overall, and the length of the text strings being pasted in. That's basically it.

The way this article describes it as "cushy caveats" is completely misleading. It's quite literally just "If you use a feature that integrates with third party services, you're relying on and providing data to those services, also we want to know if the feature is actually being used and how much."

[–] [email protected] 84 points 5 days ago (1 children)

The problem is the inclusion of the feature to begin with. It should be an opt-in add-on install.

[–] [email protected] 51 points 5 days ago (2 children)

I agree to a point, but I look at this similarly to how I'd view any feature in a browser. Sometimes there are features added that I don't use, and thus, I simply won't use them.

This would be a problem for me if it was an "assistant" that automatically popped up over pages I was on to offer "help," but it's not. It's just a sidebar you can click a button in the menu to pop out, or you can never click that button and you'll never have to look at it.

It's not a feature that auto-enables in a way that actually starts sending data to any AI company; it's just an optional interface that you have to click a specific button to open, and that only then interfaces with a given AI model if you choose to use it. If you don't want to use it, you ideally won't even see it open during your use of Firefox.

[–] [email protected] 16 points 5 days ago (1 children)

Please let them not ruin Firefox with some bullshit AI. I can't take much more of this, Firefox is one of the last things I have left.

[–] [email protected] 22 points 5 days ago (1 children)

It's two things:

  1. Sidebar you can open from the hamburger menu that is basically just a tiny chat UI
  2. Right click to paste the selected text into the sidebar

If you don't want it, they don't seem to be pushing it any further than that. Just don't click the option in the menus and you'll be fine. (I believe you can also fully disable the option from appearing in settings too)
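
For anyone who wants it gone entirely: as far as I can tell, the feature hangs off a couple of about:config prefs in current builds. Pref names can change between releases, so treat this as a sketch rather than gospel:

    // user.js: flip the AI chatbot feature off (pref names assumed from current builds)
    user_pref("browser.ml.chat.enabled", false);   // removes the chatbot sidebar option from the menus
    user_pref("browser.ml.chat.shortcuts", false); // removes the text-selection shortcut popup

You can also toggle the same prefs by hand in about:config rather than using a user.js file.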

[–] [email protected] 0 points 5 days ago (1 children)

Yes, I gathered that from the previous comment, but thank you for the additional info.

I just hope it doesn't progress further in the future. AI is quite possibly a more catastrophic technological development than nuclear weapons.

[–] [email protected] 8 points 5 days ago (2 children)

AI is quite possibly a more catastrophic technological development than nuclear weapons.

I wouldn't go that far. A technology that wastes a lot of energy and creates a lot of bad quality content isn't the same as a bomb that directly kills millions.

[–] [email protected] 6 points 5 days ago

Until the tech bros let an AI manage nuclear weapons because "cost savings"

[–] [email protected] 0 points 5 days ago (1 children)

But nuclear weapons have only been used twice in 80 years for military purposes. They have arguably prevented more deaths than they have caused.

And you're drastically underselling the potential impact of AI. If anything, your reaction is a defense mechanism because you can't bear to stomach the potential consequences of AI.

One could have easily reacted the same way to the invention of the printing press, or the automobile, or the analog computer. They all wasted a lot of energy for limited benefit, at first. But if the technology develops enough, it can destroy everything that we hold dear.

Human beings engineering their own obsolescence while cavalierly disregarding the potential consequences. A tale as old as time

[–] [email protected] 1 points 5 days ago (1 children)

But nuclear weapons have only been used twice in 80 years for military purposes. They have arguably prevented more deaths than they have caused.

Nukes only "prevent" deaths by threatening drastically larger numbers of deaths otherwise. If the nukes didn't exist, there wouldn't be the threat of nuclear death in the first place, the very threat that's supposedly being prevented by more people having nukes.

If anything, your reaction is a defense mechanism because you can’t bear to stomach the potential consequences of AI.

"AI" is just more modern machine learning techniques that we've had for decades. Most implementations of it today are things that nobody actually wants, producing worse quality outputs than that of a human. Maybe it will automate some jobs, sure, that can happen. Just like how tons of automation historically has just pushed people from direct labor to management of machine labor.

Heck, if "AI" automated most of the work people did and put us out of a job, that would just accelerate our progress towards pushing for UBI and/or an era of superabundance, which I'd welcome with open arms. It's a lot easier to convince people that centralized ownership of wealth and resources makes no sense if goods can be produced automatically by machines for free.

But sure, seeing matrix multiplication causing statistically probable sentences to be formed really has me unable to stomach the potential consequences. /s

One could have easily reacted the same way to the invention of the printing press, or the automobile, or the analog computer. They all wasted a lot of energy for limited benefit, at first. But if the technology develops enough, it can destroy everything that we hold dear.

And what did the printing press, automobile, and analog computer bring?

A rapid advancement in the spread of information and local news, faster individualized transport that later contributed to additional developments to rail and bus transit solutions, and software solutions that can massively reduce workloads while accelerating human progress.

And all of those things either raised the standard of living without causing equivalent harm from job loss, or actively created substantially more jobs.

Human beings engineering their own obsolescence while cavalierly disregarding the potential consequences. A tale as old as time

Make human work obsolete so we can do what we care about and hang out with people we like instead of spending our days doing labor to produce goods we rely on? Sign me up.

[–] [email protected] -1 points 5 days ago* (last edited 5 days ago) (1 children)

Nukes only “prevent” deaths by threatening drastically larger numbers of deaths otherwise. If the nukes didn’t exist, there wouldn’t be the threat of nuclear death in the first place, the very threat that’s supposedly being prevented by more people having nukes.

Okay? But war existed long before nuclear weapons, and it also causes a large number of deaths. If nukes didn't exist, there would potentially be more wars, and thus more death.

Heck, if “AI” automated most of the work people did and put us out of a job, that would just accelerate our progress towards pushing for UBI and/or an era of superabundance, which I’d welcome with open arms.

I wouldn't be so sure about that. We have already automated essentially everything else, and yet people work more than ever. If goods can be produced automatically by machines for free, what's to stop the owners of the machines from simply eliminating what used to be the working class?

But sure, seeing matrix multiplication causing statistically probable sentences to be formed really has me unable to stomach the potential consequences. /s

Your defensiveness speaks volumes.

And what did the printing press, automobile, and analog computer bring?

An ever more powerful nucleus of mechanization that has resulted in the most devastating wars and the most widespread suffering in all of human history. Genocides, chattel slavery, famine, biochemical and nuclear weapons; mass extinction and the imminent destruction of the very planet on which we live.

Make human work obsolete so we can do what we care about and hang out with people we like instead of spending our days doing labor to produce goods we rely on? Sign me up.

Sweet summer child. Making human work obsolete makes human beings obsolete. I envy your naivety.

[–] [email protected] 2 points 5 days ago (1 children)

If nukes didn’t exist, there would potentially be more wars, and thus more death.

Nukes enable larger amounts of death. They increase the possible death toll, while also increasing the disincentive to go to war, to prevent that death. In a world with no nukes, the threat and preventative force of less deadly weapons would simply match each other, just as they currently do with nukes, and have the same effect on disincentivizing war.

We have already automated essentially everything else, and yet people work more than ever.

Oh no we have not. See:

  • Every single service job that relies on human experience/interaction (robotic replacements are still only ever used as gimmicks that attract customers for that fact, but not as a continual experience in broader society, precisely because we value human connection)
  • Any work environment with arbitrary non-planned variables too far outside the scope of a robot's capabilities
  • Most creative works related jobs (AI generated works are often shunned by the masses because they feel inhuman and more sterile than human made works, at least on average)

Not to mention that when we automate something, and a job goes away because of that, that doesn't mean there's no new work that gets created as a result. Sure, when a machine replaces a human worker in a factory, that job goes away, but then who repairs and maintains the machine, checks that it's doing what's required of it, etc? Thus, more jobs shift to management style roles.

Your defensiveness speaks volumes.

You're defensive over believing AI will actually make humans obsolete, that must mean you're actually unable to stomach the reality that you'll have to keep working the rest of your life. Your defensiveness speaks volumes. /s

Seriously, I welcome automation and the reduction in the amount of labor human beings have to engage in so that people are free to engage in their own interests outside of producing material goods for society. A future where work is entirely optional because we've simply eliminated the need to work to survive is great to me.

An ever more powerful nucleus of mechanization that has resulted in the most devastating wars and the most widespread suffering in all of human history. Genocides, chattel slavery, famine, biochemical and nuclear weapons; mass extinction and the imminent destruction of the very planet on which we live.

Ah yes, the printing press, car, and computer, the cause of all genocides. /s

Seriously man, do you not understand that people will just do bad things regardless of whether a given job/task is automated?

By the way, your logic literally has no end here. The printing press, car, etc. are just an arbitrary starting point. There's nothing about these inventions that makes them inherently the starting point for any other consequences. This argument quite literally goes all the way back to the development of fire.

Fire brought the ability to burn people to death. Guess we should never have used fire for anything because it could possibly lead to something bad on a broader societal scale, maybe, in some minute way, that in no way outweighs the benefits!

Sweet summer child. Making human work obsolete makes human beings obsolete. I envy your naivety.

Were you ever a kid? Y'know, the people across nearly every society on this planet that don't get jobs for years, and have little to no responsibilities, yet are provided for entirely outside of their own will and work ethic? Yet I have a sneaking suspicion you don't believe that children are obsolete because they don't do work.

The assumption that work is what gives humans their value is a complete and utter myth that only serves capitalists who want to convince you that it's good to spend most of your time doing labor, actually.

[–] [email protected] -2 points 5 days ago* (last edited 5 days ago) (2 children)

Hmm, you seem like a relatively intelligent person, so perhaps you're not accustomed to being corrected.

Your arguments contradict themselves and lack logical consistency. They are flimsy at best, and I lack the energy to explicitly demonstrate their triviality at the current moment. It seems that you start with the assumption that humanity is destined for a post scarcity utopia, and haphazardly arrange your arguments to help justify that conclusion.

Or perhaps it's because you refuse to admit to yourself that your original comment was ill-considered, and thus you are forced to spout this nonsense in order to protect yourself from the emotional ramifications of admitting you may have misjudged the relative harm of nuclear weapons as compared to AI.

Regardless, it's frustrating to watch you spin this web of sophistry instead of simply acknowledging that you were mistaken. I sincerely hope that you did not utilize AI to assist in writing that wall of text.

I would recommend that you reflect on my words when you've given yourself some time to calm down. It's not so bad to be wrong sometimes, just think of it as an opportunity to learn and become smarter.

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago) (1 children)

calm down

sweet summer child

I know these tactics, they're designed to goad me into an emotive response so I lose the argument!

They're not a case in themselves and your smugness is distasteful. Your interlocutor is treating you with more respect than you are showing in return.

[–] [email protected] 1 points 1 day ago

Yes, I was admittedly tired when I responded to this thread, and then seeing such long winded responses was quite annoying to me.

But I wasn't trying to goad them, I was just exhausted at having to spend so much time and energy just to make my point, which seemed relatively non-controversial to me when I originally posted it.

[–] [email protected] 3 points 5 days ago (1 children)

It seems that you start with the assumption that humanity is destined for a post scarcity utopia

I'm not. Apologies if I was unclear, but I was specifically referencing the fact that you were saying AI was going to accelerate to the point that it replaces human labor, and I was simply stating that I would prefer a world in which human labor is not required for humans to survive, and we can simply pursue other passions, if such a world were to exist, as a result of what you claim is happening with AI. You claimed AI will get so good it replaces all the jobs. Cool, I would enjoy that, because I don't believe that jobs are what gives human lives meaning, and thus am fine if people are free to do other things with their lives.

Or perhaps it’s because you refuse to admit to yourself that your original comment was ill-considered, and thus you are forced to spout this nonsense in order to protect yourself from the emotional ramifications of admitting you may have misjudged the relative harm of nuclear weapons as compared to AI.

The automation of labor is not even remotely comparable to the creation of a technology whose explicit, sole purpose is to cause the largest amount of destruction possible.

Could there hypothetically be an AI model far in the future, once we secure enough computing power and develop the right architecture, that technically meets the definition of AGI (however subjective that may be) and then decides to do something to harm humans? I suppose, but that's simply not looking likely in any way (and I'd love it if you could actually show any data/evidence proving otherwise instead of saying "it just is" when claiming it's more dangerous), and anyone claiming we're getting close (e.g. Sam Altman) simply has a vested financial interest in saying that AI development is moving quicker and at a higher scale than it actually is.

Regardless, it’s frustrating to watch you spin this web of sophistry instead of simply acknowledging that you were mistaken.

It’s not so bad to be wrong sometimes, just think of it as an opportunity to learn and become smarter.

It's called having a disagreement and refuting your points. Just because I don't instantly agree with you doesn't mean that I'm automatically mistaken. You're not the sole arbiter of truth. Judging from how you, three times now, have assumed that I must be secretly suppressing the fact that AI is actually going to do more damage than nuclear bombs, just because I disagree with you, it's clear that you are the one making post-hoc justifications here.

You are automatically assuming that because I disagree, I actually don't disagree, and must secretly believe the same thing as you, but am just covering it up. Do not approach arguments from the assumption that the other person involved is just feigning disagreement, or you will never be capable of even considering a view other than the one you currently hold.

I sincerely hope that you did not utilize AI to assist in writing that wall of text.

The fact you'd even consider me possibly using AI to write a comment is ridiculous. Why would I do that? What would I gain? I'm here to articulate my views, not my views but only kind of, without any of my personal context, run through a statistical probability machine.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

I’m not. Apologies if I was unclear, but I was specifically referencing the fact that you were saying AI was going to accelerate to the point that it replaces human labor, and I was simply stating that I would prefer a world in which human labor is not required for humans to survive, and we can simply pursue other passions, if such a world were to exist, as a result of what you claim is happening with AI. You claimed AI will get so good it replaces all the jobs.

I'm sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs. I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.

Cool, I would enjoy that, because I don’t believe that jobs are what gives human lives meaning, and thus am fine if people are free to do other things with their lives.

Your conception of labor is limited. You're only taking into account jobs as they exist within a capitalist framework. What if AI was statistically proven to be better at raising children than human parents? What if AI was a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don't give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.

The automation of labor is not even remotely comparable to the creation of a technology whose explicit, sole purpose is to cause the largest amount of destruction possible.

If you actually understood my point, you wouldn't be saying this. The intended purpose of the creation of a technology often turns out to be completely different from the actual consequences. We intended to create fire to keep warm and cook food, but it eventually came to be used to create weapons and explosives. We intended to use the printing press to spread knowledge and understanding, but it ultimately came to spread hatred and fear. This dichotomy is applicable to almost every technological development. Human creators are never wise enough to foresee the negative externalities that will ultimately result from their creations.

Again, you're the one who has been positing some type of AI singularity and simultaneously arguing it would be a good thing. I never said anything of the sort, you simply attached a meaning to my comment that wasn't there.

And again, nuclear weapons have been used twice in wartime. Guns, swords, spears, automobiles, man made famines, aeroplanes, literally hundreds of other technologies have killed more human beings than nuclear weapons have. Nuclear fission has also provided one of the cleanest sources of energy we possess, and probably saved untold amounts of environmental damage and additional warfare over control of fossil fuels.

Just because nuclear weapons make a big boom doesn't make them more destructive than other technologies.

I'm glad that you didn't use AI. I was wrong to assume you were feigning disagreement, but sometimes it just baffles me how things that I consider so obvious can be so difficult to grasp for other people. My apologies for my tone, but I still think you're very naive in your dismissal of my arguments, and quite frankly you come off as somewhat arrogant and close minded by the way you attempt to systematically refute everything that I say, instead of engaging with my ideas in a more constructive way.

As far as I can tell, all three of your initial retorts about the relative danger of nuclear weapons are basically incoherent word salads. Even if I were to concede your arguments regarding the relative dangers of AI (which I am absolutely not going to do, although you did make some good points), you would still be wrong about your initial statement because you clearly overestimated the relative danger of nuclear weapons. I essentially dismantled your position from both sides, and yet you refuse to concede even a single inch of ground, even on the more obvious issue of nuclear weapons only being responsible for a relatively paltry number of deaths.

[–] [email protected] 1 points 22 hours ago (1 children)

I’m sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs. I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.

My apologies, I'm simply quite used to people arguing against AI using specifically the automation of jobs as their primary concern, and assumed that it was a larger concern of yours when it came to the "consequences" of AI as a concept.

If you actually understood my point, you wouldn’t be saying this. The intended purpose of the creation of a technology often turns out to be completely different from the actual consequences.

Obviously, but the statistical probability of a thing being used for bad purposes, especially in a way that outweighs the benefit of the technology itself, is always higher for a thing designed to be harmful from the start, as opposed to something started with good intentions. That doesn't mean a thing created to be harmful can't do or cause a good thing later on, but it's much less likely to than something designed to help people as its original goal.

We intended to create fire to keep warm and cook food, but it eventually came to be used to create weapons and explosives.

Had we not invented our uses of fire, would we have any of the comforts, standard of living, and capabilities that we do now? Would we be able to feed as many people as we do, keep our food safe and prevent it from spoiling, keep ourselves from dying in the winter, etc? Fire has brought a larger benefit than it has harms.

We intended to use the printing press to spread knowledge and understanding, but it ultimately came to spread hatred and fear.

While some media is used to spread hatred and fear, a much worse scenario is one in which no media can be spread at the same scale, and information dissemination is instead entirely reliant on word of mouth. This means extremely delayed knowledge of current events, an overall less informed population, and all the issues that come along with disseminating knowledge through a literal game of telephone. Things get lost, mixed up, falsified, and so on, and the ability to disseminate knowledge quickly can make those things much less likely.

Will they still happen? Sure. But I'd prefer a well-informed world that is sometimes subjected to misinformation, fear, and hate, to a world where all information is spread via ever-changing word of mouth, where information can't be easily fact-checked, shared, or researched, and where rumors can very frequently hold the same validity as fact for extended periods of time without anyone even being capable of checking if they're real.

The printing press has brought a larger benefit than it has harms. Do you see the pattern here?

And again, nuclear weapons have been used twice in wartime. Guns, swords, spears, automobiles, man made famines, aeroplanes, literally hundreds of other technologies have killed more human beings than nuclear weapons have.

Just because nuclear weapons make a big boom doesn’t make them more destructive than other technologies.

Cool, I never once stated that nukes were more deadly than any of these other examples provided. I only stated that I don't believe that AI is more dangerous than nukes, in contrast to your original statement.

Nuclear fission has also provided one of the cleanest sources of energy we possess,

Nuclear fission research was taking place before the idea of using it for a deadly bomb was even a thing. The development of nuclear bombs came afterwards.

What if AI was statistically proven to be better at raising children than human parents? What if AI was a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don’t give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.

A few points on this one. Firstly, just because a technology can be used, I don't necessarily think it should. If a tool is better than humans at something (let's say AI becomes good enough to automate all woodworkers with physical woodworking robots adapted for any task) I'll still support allowing humans to do that thing if it brings them joy. (People could simply still do woodworking, and I could get a table from one of them instead of from the AI, just because I feel like it.) The use of any technology after it's developed is not an inevitability, even if it's an option.

Secondly, I personally believe in doing what I can to maximize overall human happiness. If AI was better at raising children, but people still wanted to enjoy raising children, and we didn't see any demonstrable negative outcomes from having humans raise children instead of AI, then I would support whatever mechanism the parents preferred based on what they think would make them more happy, raising a child, or not.

If AI was a better romantic partner, in the sense that people broadly preferred AI to real people, and there wasn't evidence that such a trend increasing would make people broadly more unhappy, or unsatisfied with life, then I'd support it, because it wouldn't be doing any harm.

Ask yourself why you consider such things to be bad in the first place. Is it because you personally wouldn't enjoy those things? Cool, you wouldn't have to. And if society broadly didn't enjoy those things, then nobody would use them in the first place. You're presupposing both that society would develop and use AI for those purposes, but also not actually prefer using them, in which case they wouldn't be a replacement, because no society would choose to implement them.

This is like saying "what if we gave everyone IV drips that gave them dopamine all the time, but this actually destroyed the fabric of society and everyone was less happy with it?" Great, then nobody will use the IVs because they make them less happy than not using the IVs.

This entire argument assumes two contradictory things: That society will implement a thing to replace people because it's better, and they'd prefer to use it, but also that society will not prefer to use it because it will make them less happy. You can't have both.

As far as I can tell, all three of your initial retorts about the relative danger of nuclear weapons are basically incoherent word salads. Even if I were to concede your arguments regarding the relative dangers of AI (which I am absolutely not going to do, although you did make some good points), you would still be wrong about your initial statement because you clearly overestimated the relative danger of nuclear weapons.

Your only argument here for why AI would be relatively more dangerous is... "it could be." Simply stating that in the future, it may get good enough to do X or Y, and because that's undesirable to you, therefore the technology as it exists now will obviously do those things if allowed to progress.

Do you have any actual evidence or reason to believe that AI will do these things? That it will ever even be possible for it to do X or Y, that society would simultaneously willingly implement it while also not wanting it to be implemented because it harms them, or that the current trajectory of the industry even has a chance of driving the development of technologies that would ever be capable of those things?

Right now, the primary developments in "AI" are just better LLMs, which are just word probability predictors. Sure, they're getting better at predicting the probability of words, but how would that lend itself to practically, say, raising a child?
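
To make "word probability predictor" concrete, here's a deliberately tiny toy sketch of the idea, a bigram counter, nothing like a real LLM (real models replace the count table with learned weights and matrix multiplication):

    import random
    from collections import Counter, defaultdict

    # Toy "language model": count which word follows which,
    # then repeatedly sample a statistically probable next word.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    following = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        following[prev_word][next_word] += 1

    def sample_next(word):
        counts = following[word]
        if not counts:  # dead end: this word was never seen with a successor
            return None
        return random.choices(list(counts), weights=list(counts.values()))[0]

    sentence = ["the"]
    for _ in range(5):
        nxt = sample_next(sentence[-1])
        if nxt is None:
            break
        sentence.append(nxt)
    print(" ".join(sentence))  # e.g. "the dog sat on the mat"

That's the whole trick, scaled up by many orders of magnitude: predict a plausible next token. Nothing in that mechanism knows how to raise a child.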

I essentially dismantled your position from both sides, and yet you refuse to concede even a single inch of ground, even on the more obvious issue of nuclear weapons only being responsible for a relatively paltry number of deaths.

And how many people has AI killed today? Oh wait, fewer than nuclear bombs? Just because nukes haven't yet been responsible for a large number of deaths today, while AI might be in the future, stating that AI is possibly more dangerous than nuclear bombs must be correct!

You're making arguments from two completely different points in time. You're saying that because nukes haven't yet killed as many people as you think that AI will do in the future, they are therefore less dangerous. (Even while nukes still pose a constant threat, that can cause a chain reaction of deaths given the right circumstances, in the future) Unless you can substantiate your claim with some form of evidence that shows AI is likely to do any of these dangerous things on our current trajectory, you're arguing current statistics against a wholly unsubstantiated, imagined future, and then saying you're correct because in what you think the future will be like, AI will actually be doing all these bad things that make it worse than nukes.

Substantiate why you think AI will ever even get to that point, and also be implemented in a way that damages society, instead of just assuming the worst case scenario and assuming it's likely.

[–] [email protected] 1 points 16 hours ago* (last edited 15 hours ago) (1 children)

Obviously, but the statistical probability of a thing being used for bad purposes, especially in a way that outweighs the benefit of the technology itself, is always higher for a thing designed to be harmful from the start, as opposed to something started with good intentions. That doesn’t mean a thing created to be harmful can’t do or cause a good thing later on, but it’s much less likely to than something designed to help people as its original goal.

Citation needed. How did you calculate that statistical probability, my friend?

Had we not invented our uses of fire, would we have any of the comforts, standard of living, and capabilities that we do now? Would we be able to feed as many people as we do, keep our food safe and prevent it from spoiling, keep ourselves from dying in the winter, etc? Fire has brought a larger benefit than it has harms.

While some media is used to spread hatred and fear, a much worse scenario is one in which no media can be spread at the same scale, and information dissemination is instead entirely reliant on word of mouth. This means extremely delayed knowledge of current events, an overall less informed population, and all the issues that come along with disseminating knowledge through a literal game of telephone. Things get lost, mixed up, falsified, and so on, and the ability to disseminate knowledge quickly can make those things much less likely.

Will they still happen? Sure. But I’d prefer a well-informed world that is sometimes subjected to misinformation, fear, and hate, to a world where all information is spread via ever-changing word of mouth, where information can’t be easily fact-checked, shared, or researched, and where rumors can very frequently hold the same validity as fact for extended periods of time without anyone even being capable of checking if they’re real.

The printing press has brought a larger benefit than it has harms. Do you see the pattern here?

According to whom? How are you defining harm and benefit? You're attempting to quantify the unquantifiable.

Cool, I never once stated that nukes were more deadly than any of these other examples provided. I only stated that I don’t believe that AI is more dangerous than nukes, in contrast to your original statement.

So you are open to the possibility that nukes are less dangerous than spears, but more dangerous than AI? Huh.

A few points on this one. Firstly, just because a technology can be used, I don’t necessarily think it should. If a tool is better than humans at something (let’s say AI becomes good enough to automate all woodworkers with physical woodworking robots adapted for any task) I’ll still support allowing humans to do that thing if it brings them joy. (People could simply still do woodworking, and I could get a table from one of them instead of from the AI, just because I feel like it.) The use of any technology after it’s developed is not an inevitability, even if it’s an option.

Secondly, I personally believe in doing what I can to maximize overall human happiness. If AI was better at raising children, but people still wanted to enjoy raising children, and we didn’t see any demonstrable negative outcomes from having humans raise children instead of AI, then I would support whatever mechanism the parents preferred based on what they think would make them more happy, raising a child, or not.

If AI was a better romantic partner, in the sense that people broadly preferred AI to real people, and there wasn’t evidence that such a trend increasing would make people broadly more unhappy, or unsatisfied with life, then I’d support it, because it wouldn’t be doing any harm.

Ask yourself why you consider such things to be bad in the first place. Is it because you personally wouldn’t enjoy those things? Cool, you wouldn’t have to. And if society broadly didn’t enjoy those things, then nobody would use them in the first place. You’re presupposing both that society would develop and use AI for those purposes, but also not actually prefer using them, in which case they wouldn’t be a replacement, because no society would choose to implement them.

This is like saying “what if we gave everyone IV drips that gave them dopamine all the time, but this actually destroyed the fabric of society and everyone was less happy with it?” Great, then nobody will use the IVs because they make them less happy than not using the IVs.

This entire argument assumes two contradictory things: That society will implement a thing to replace people because it’s better, and they’d prefer to use it, but also that society will not prefer to use it because it will make them less happy. You can’t have both.

Ah of course, because human beings famously never use or do anything that makes them less happy. Human societies have famously never implemented anything that makes people less happy. Do we live on the same planet?

Your only argument here for why AI would be relatively more dangerous is… “it could be.” Simply stating that in the future, it may get good enough to do X or Y, and because that’s undesirable to you, therefore the technology as it exists now will obviously do those things if allowed to progress.

Do you have any actual evidence or reason to believe that AI will do these things? That it will ever even be possible for it to do X or Y, that society would simultaneously willingly implement it while also not wanting it to be implemented because it harms them, or that the current trajectory of the industry even has a chance of driving the development of technologies that would ever be capable of those things?

Right now, the primary developments in “AI” are just better LLMs, which are just word probability predictors. Sure, they’re getting better at predicting the probability of words, but how would that lend itself to practically, say, raising a child?

And how many people has AI killed today? Oh wait, fewer than nuclear bombs? Just because nukes haven’t yet been responsible for a large number of deaths today, while AI might be in the future, stating that AI is possibly more dangerous than nuclear bombs must be correct!

You’re making arguments from two completely different points in time. You’re saying that because nukes haven’t yet killed as many people as you think that AI will do in the future, they are therefore less dangerous. (Even while nukes still pose a constant threat, that can cause a chain reaction of deaths given the right circumstances, in the future) Unless you can substantiate your claim with some form of evidence that shows AI is likely to do any of these dangerous things on our current trajectory, you’re arguing current statistics against a wholly unsubstantiated, imagined future, and then saying you’re correct because in what you think the future will be like, AI will actually be doing all these bad things that make it worse than nukes.

Substantiate why you think AI will ever even get to that point, and also be implemented in a way that damages society, instead of just assuming the worst case scenario and assuming it’s likely.

I'm utilizing my intelligence and my knowledge about human nature and human history to make an educated guess about future possible outcomes.

Again, based on your prose, I would expect you to intuitively understand the reasons why I might believe these things, because I believe they should be fairly obvious to most people who are well educated and intelligent. Hence why I suspected you of using AI, because you repeatedly post walls of text that are based on incredibly faulty and idiotic premises. Like really dude, I have to explain to you that human beings have historically used technologies in self-destructive ways? It reminds me of the way that AI will write essays that sound very knowledgeable and cogent to the untrained mind, but an expert on the topic can easily recognize that they make no sense whatsoever.

Cheers mate, have a good one.

[–] [email protected] 1 points 14 hours ago

Citation needed. How did you calculate that statistical probability, my friend?

I don't, because I don't spend all my time calculating the exact probability of every technology to exist harming or not harming people. You also did not provide any direct mathematical evidence when trying to argue the contrary, that these things actually do cause more harm than they provide a benefit even if they're created to do good things. We're arguing on concepts here.

That said, if you really think that things made to be bad, with only a chance at doing something good later will have the same or larger chance of doing bad things as something created to be good, with only a chance of doing something bad later on, then I don't see how it's even possible to continue this conversation. You're presupposing that any technology you view as harmful has automatically done more harm than good, without any reason whatsoever for doing so. My reasoning is simply that harm is more likely to occur from something created to do it from the start, rather than something with only a chance of becoming bad.

Something with a near 100% chance of doing harm, because it was made for that purpose, generally speaking, won't do less harm than something with less than a near 100% chance of doing it from the start, because any harm would only be a possibility rather than a guarantee.

So you are open to the possibility that nukes are less dangerous than spears, but more dangerous than AI? Huh.

I'm open to the idea that they've caused more deaths, historically, since that's the measure you seemed to be going with when you referenced the death toll of nukes, then used other things explicitly created as weapons (guns, spears, swords) as additional arguments.

I don't, however, see any reason for AI being more likely to cause significant harm, death or otherwise, compared to, say, the death toll of spears. And I don't think nukes are less harmful than spears directly, because they're highly likely to cause drastically larger amounts of future death and environmental devastation. I back that up with the fact that countries continue to expand their stockpiles, increasingly threatening nuclear attacks as a "deterrent," while organizations such as the Bulletin of the Atomic Scientists continue to state that the risk of future nuclear war is only growing. If we talk about current death tolls, sure, they've probably done less, but today is not the only time by which we can judge possible risk.

According to whom? How are you defining harm and benefit? You’re attempting to quantify the unquantifiable.

Yes, you've discovered moral subjectivity. Good job. I define harm and benefit based on what causes/prevents the ability of humans to experience the largest amount of happiness and overall well-being, as I'm a Utilitarian.

Ah of course, because human beings famously never use or do anything that makes them less happy. Human societies have famously never implemented anything that makes people less happy. Do we live on the same planet?

Your argument was based on things that are entirely personal, self-driven positions, such as finding AI to be a better partner. If people didn't enjoy that more, then they wouldn't be seeking out AI partners when specifically trying to find someone that will provide them with the most overall happiness. Of course people can do things that make them less happy, all I'm saying is that you're not providing any evidence for why people would do so, in the scenarios you're providing. You're simply assuming not only that AI will develop into something that can harm humans, but that humans will also choose to use those harmful things, without explaining why.

Again, apologies if my wording was unclear, but I'm not saying humans are never self-destructive, just that you've provided no evidence as to why they would choose to be that way, given the circumstances you provided.

I’m utilizing my intelligence and my knowledge about human nature and human history to make an educated guess about future possible outcomes.

I would expect you to intuitively understand the reasons why I might believe these things, because I believe they should be fairly obvious to most people who are well educated and intelligent.

No, I don't intuitively understand, because your own internal intuitive understanding of the world is not the same as mine. We are different people. This answer is not based on anything other than "I feel like it will turn out bad, because humans have used technology badly before." You haven't even shown that it's possible for AI to become that capable in the first place, let alone shown the likelihood of it being developed to do those bad things, and then actually implemented.

This is like arguing that our current weapons will necessarily lead to the development of the Death Star, because we know what a Death Star could be, weapons are improving, and humans sometimes use technology in bad ways. I don't just want your "intelligence and knowledge about human nature and human history" to back up why our weapons will necessarily create the Death Star, I want you to show that it's even possible, and demonstrate why you think it's likely we choose to develop it to that specific point. I hope that analogy makes sense.

Hence why I suspected you of using AI, because you repeatedly post walls of text that are based on incredibly faulty and idiotic premises.

Sorry for trying to explain myself with more nuance than most people on the internet. Sometimes I type a lot, too bad I guess.

Cheers mate, have a good one.

You as well.

[–] [email protected] 19 points 5 days ago

That's good to know actually.

[–] [email protected] -5 points 5 days ago (1 children)

So phone-home telemetry that you can't opt out of. The ghost of Mitchell Baker will haunt us forever.

[–] [email protected] 29 points 5 days ago (1 children)

So phone-home telemetry that you can’t opt out of.

You can opt out of it. You've always been able to opt out of Mozilla's telemetry. Not to mention that if you actually read the Privacy Notice, there's an entire section detailing every single piece of telemetry that Mozilla collects, and if you read the section very clearly titled "To provide AI chatbots," you'll see what's collected:

  • Technical data
  • Location
  • Settings data
  • Unique identifiers
  • Interaction data

The consent required for the collection to even start:

Our lawful basis

Consent, when you choose to enable an AI Chatbot.

And links that lead to the page explaining how to turn off telemetry even if you're using the in-beta AI features.

This page > FAQ > Telemetry Collection & Deletion page
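
And if anyone would rather set it directly than click through Settings > Privacy & Security > Firefox Data Collection and Use, the long-standing master switch for telemetry upload is a single pref. The name below is what recent releases use as far as I know, so double-check it against that page:

    // user.js: equivalent of unchecking the "send technical and interaction data" box
    user_pref("datareporting.healthreport.uploadEnabled", false);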