I hope everybody criticizing the move either does not use products from Mozilla or, if they do, contributes however they can, to the best of their abilities. If you don't, if you ONLY criticize, yet use Firefox (or a derivative, e.g. LibreWolf), or, arguably worse, use something fueled by ads (e.g. Chromium-based browsers), then you are unfortunately contributing precisely to the model you are rejecting.
utopiah
What's driving me nuts is that people will focus on the glasses.
Yes, the glasses ARE a problem: Meta, despite being warned by experts like AccessNow to SHOW when a camera is recording (you know, with a bright red LED, as has been the case with other devices before), kept it "stealthy" because it's... cool, I guess?
Anyway, the glasses themselves are but the tip of the iceberg. They are the endpoint of a surveillance apparatus that people WILLINGLY decide to contribute to. What do I mean? Well, that people who are "shocked" by this kind of demonstration (because that's what it is, not an actual revelation) will be whining about it on Threads or X after sending a WhatsApp message to their friends and a Gmail message to someone else on their Google, I mean Android, phone, and testing the latest version of ChatGPT. Maybe the worst part in all this? They paid to get a Google Nest inside their home and an Amazon Ring video doorbell outside. They ARE part of the surveillance.
Those people are FUELING surveillance capitalism by pouring their private data into large corporations earning money on their usage.
Come on... be shocked, yes, be horrified, yes, but don't pretend that you are not part of the problem. You ARE wearing those "glasses" in another form daily, and you are paying for it with money and usage. Stop, and buy actual products, software and hardware, from companies that do not make money with ads, directly or indirectly. Make sure the products you use do NOT rely on "the cloud" and siphon all your data elsewhere, for profit. Change today.
Indeed, AKA the OpenAI playbook.
As usual, in order to understand what this means we need to see:
- performance benchmarks (A100 level? H100? B100? GB200 setups?)
- energy consumption (A100-level performance at lower wattage than an H100? the other way around?)
- networking scalability (how many cards can be interconnected for distributed compute? NVLink equivalents?)
- software stack (e.g. can it run CUDA, and if not, what alternatives can be used?)
- yield (how many dies are usable, i.e. can it be commercially viable or is it still R&D?)
- price (which, regardless of possible subsidies, would follow from yield)
- volume (how many cards can actually be bought, also dependent on yield)
Still interesting to read after the announcements, as per usual, and especially to see who will actually manufacture them at scale (SMIC? TSMC?).
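On the performance-benchmark point, here is a minimal sketch of the methodology, i.e. deriving throughput from operation count over wall-clock time. It is pure Python with a naive matrix multiply, so the absolute numbers are meaningless compared to real GPU benchmarks (which would use vendor libraries like cuBLAS); it only illustrates how a GFLOP/s figure is computed:

```python
import time

def matmul_gflops(n: int) -> float:
    """Time a naive n x n matrix multiply and report GFLOP/s.

    Illustrative only: real hardware benchmarks use tuned vendor
    kernels, but the throughput formula (op count / elapsed time)
    is the same.
    """
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    flops = 2.0 * n ** 3  # n^3 multiply-add pairs
    return flops / elapsed / 1e9

if __name__ == "__main__":
    print(f"naive matmul: {matmul_gflops(100):.4f} GFLOP/s")
```

Comparing that figure across cards, at a given wattage, is what ties the first two bullet points together.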
PS: full disclosure, I still believe self-hosting AI is interesting, cf. my notes on it https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence but that doesn't mean AGI can be reached, even less that it'd be "soon". IMHO AI itself as a research field is interesting enough that it doesn't need grandiose claims, especially not ones leading to learned helplessness.
Read it a few months ago, warmly recommended. Basically about self-selection bias and sharing "impressive" results while ignoring whatever does not work... then claiming it's just the "beginning".
I haven't seriously read the article yet unfortunately (deadline tomorrow), but if there is one thing that I believe is reliable, it's computational complexity. It's one thing to be creative and ingenious, to find new algorithms and build very efficient processors and datacenters to make things extremely efficient, letting us compute increasingly complex things. It's another, though, to "break" free of complexity. That, as far as we currently know, is impossible. What is counterintuitive is that seemingly "simple" behaviors scale terribly, in the sense that one can compute a few iterations alone, or with a computer, or with a very powerful set of computers... or with every single existing computer... only to realize that the next iteration of that well-understood problem would still NOT be solvable with every computer (even quantum ones) ever made, or that could be made from the resources available in, say, our solar system.
So... yes, it is a "stretch", maybe even counterintuitive, to go as far as saying it is not and NEVER will be possible to realize AGI, but that's what their paper claims. It's at least interesting precisely because it goes against the trend we hear CONSTANTLY pretty much everywhere else.
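To make the scaling point concrete, here is a toy calculation (my own numbers, not the paper's): assume an absurdly generous 10^27 operations per second for every computer on the planet combined, and see how long brute-forcing a search space of 2^n candidates would take as n grows:

```python
# Toy illustration of exponential blow-up: even an absurdly generous
# compute budget is exhausted by modest increases in problem size.
PLANET_OPS_PER_SEC = 1e27  # assumed figure, far above today's total compute
SECONDS_PER_YEAR = 3.15e7

def years_to_brute_force(n: int) -> float:
    """Years needed to enumerate all 2^n candidates at the assumed rate."""
    return (2.0 ** n) / PLANET_OPS_PER_SEC / SECONDS_PER_YEAR

if __name__ == "__main__":
    for n in (100, 200, 300):
        print(f"n={n}: {years_to_brute_force(n):.3e} years")
```

At n=100 it's done in under a day; at n=300 the answer exceeds the age of the universe by dozens of orders of magnitude. Hardware progress shifts n by a handful; it never "breaks" the exponential.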
It's a classic BigTech marketing trick. They are the only ones able to build "it", and it doesn't matter whether we like "it" or not because "it" is coming.
I believed in this BS for longer than I care to admit. I thought "Oh yes, that's progress", so of course it will come, it must come. It's also very complex, so nobody but such large entities with so many resources can do it.
Then... you start to encounter more and more vaporware. Grandiose announcements, and when you try the result you can't help but be disappointed. You compare what was promised with the result, think it's cool, kind of, shrug, and move on with your day. It happens again, and again. Sometimes you see something really impressive, you dig, and realize it's a partnership with a startup or a university doing the actual research. The more time passes, the more you realize that all BigTech companies do it, across technologies. You also realize that your artist friend did something just as cool, and open-source. Their version does not look polished, but it works. You find a Kickstarter for a product that is genuinely novel (say the Oculus DK1) and has no link (initially) with BigTech...
You finally realize, year after year, that you have been brainwashed into believing only BigTech can do it. It's false. It's self-serving BS meant both to keep you from building and to keep you dependent on them.
You can build, we can build and we can build better.
Can we build AGI? Maybe. Can they build AGI? They sure want us to believe it but they have lied through their teeth before so until they do deliver, they can NOT.
TL;DR: BigTech is not as powerful as they claim to be and they benefit from the hype, in this AI hype cycle and otherwise. They can't be trusted.
I'm curious, any advice on that? How does one do "good" telemetry? I'm the first to complain about Microsoft, Apple, (even worse) Google, Meta and now OpenAI collecting data to sell me stuff... but it's true that some data is needed to get some kind of introspection in terms of usage. Developers need to understand what is actually happening with the software they develop.
Now I'm wondering specifically about two sides:
- how to do the data collection correctly (e.g. local-only, only send on crash, only send without PII, store only aggregates)
- how to get informed consent from users (e.g. off by default, UX that supports understanding of why it's done and how)
I'm genuinely glad that the mindset around privacy has changed over the last few years, but I'm wondering how to do it when there's a genuinely positive use case (to truly make better products).
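As a sketch of the first side, here is a toy design of my own (not an established library): keep only aggregate event counters locally, never raw events or timestamps, and gate any hypothetical upload behind an explicit, off-by-default opt-in:

```python
import json
from collections import Counter
from pathlib import Path

class LocalTelemetry:
    """Toy opt-in telemetry: stores only aggregate event counts on disk.

    No raw events, no timestamps, no free-form payloads, nothing
    resembling PII ever leaves memory except the aggregate counters.
    """

    def __init__(self, path: str, opted_in: bool = False):
        self.path = Path(path)
        # Off by default: the user must explicitly consent before any
        # upload step would ever be allowed to run.
        self.opted_in = opted_in
        self.counts = Counter()

    def record(self, event_name: str) -> None:
        # Only a short event name is counted; callers cannot attach data.
        self.counts[event_name] += 1

    def flush(self) -> None:
        # Aggregates stay local. A real upload step would run only if
        # self.opted_in were True, and would send these counts alone.
        self.path.write_text(json.dumps(dict(self.counts)))
```

Usage would be e.g. `t = LocalTelemetry("~/.myapp/telemetry.json"); t.record("open_file")`. The design choice is that the aggregation happens before anything touches disk, so even the local file can't leak individual actions.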
I forgot the exact number, but while installing Debian (Bookworm and Sid) this weekend I was shocked by how small the base install was, even with a window manager ("big" one by your standards, i.e. KDE). Maybe 2 GB, definitely less than 4 GB. It all worked fine: I could browse the Web, print, edit rich text, watch video, etc.
I've installed a ton more stuff since, e.g. Steam, Inkscape, Python libraries for computer vision, etc., and it's still not even 10 GB.
So... my suggestion is the same as I shared earlier in https://lemmy.ml/post/20673461/13899831 namely do NOT install preemptively! Assuming you have a fast and stable connection, I would argue: stick to the bare minimum and add as you need.
In fact... if you want to be minimalist, I would suggest doing another fresh install (it's fast, less than 1 hour, and you can do something else at the same time) and sticking to the bare minimum right away.
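The workflow above can be sketched like this (the package names are just examples, adjust to your own needs; `--no-install-recommends` is what keeps the base lean):

```shell
# Base desktop only, skipping the "recommended" extras Debian
# would otherwise pull in:
sudo apt install --no-install-recommends kde-plasma-desktop

# Later, install each tool only when you actually need it:
sudo apt install inkscape
sudo apt install steam-installer

# Before adding more, check what already takes the most space:
dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -20
```

The last command lists the twenty largest installed packages, which is also a good starting point if you later do want to trim.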
TL;DR: don't get rid of things, just avoid adding them in the first place.
I thought saying
was actually very clear, but it seems I wasn't clear enough, so that means... literally doing ANYTHING except only criticizing. That can mean being an open-source developer, yes, but it can also mean translation, giving literally 1 cent, etc. It means doing anything at all rather than ONLY saying "this is good, but it's not good enough" without doing a single thing to change it, especially while actually using another free-of-charge browser that is funded by advertisement. Honestly, if that's not clear enough I'm not sure what would be... but please, do ask again and I will genuinely try to be clearer.