maegul

joined 2 years ago
[–] [email protected] 4 points 2 months ago

Yea, the "cheaper than droids" line in Andor feels strangely prescient ATM.

[–] [email protected] 22 points 2 months ago (7 children)

Not a stock market person or anything at all ... but NVIDIA's stock has been oscillating since July and has been falling for about 2 weeks (see Yahoo Finance).

What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I've seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capability from those actually trying to implement, deploy and use it.

I'd wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all of whom have different and competing stakes and interests.

[–] [email protected] 1 points 2 months ago

Yea I know, which is why I said it may become a harsh battle. Not being in education myself, it really does seem like a difficult situation. My broader point about the harsh battle was that if it becomes well known that LLMs are bad for a child’s development, then there’ll be a good amount of anxiety from parents etc.

[–] [email protected] 33 points 2 months ago (2 children)

Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

Tech is hyper-focused on removing the "doing" and reproducing the result. Now that it's trying to put itself into the "thinking" part of human work, this tension is making itself unavoidable.

I think we can all take it as a given that we don't want to hand total control to machines, simply because of accountability issues. Which means we want a human "in the loop" to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don't mean that much. In a way, the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect are much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education, and I wouldn't be surprised if banning them outright, in what may become a harshly fought battle, isn't too far away.

All that being said, I also think LLMs raise questions about what it is we're doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn't worth assessing. Of course, as I've said above, that's likely manifestly rubbish ... building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

[–] [email protected] 10 points 2 months ago (1 children)

What difference does it make?

 

While territorial claims are, and will likely remain, heated, what struck me is that the area is right near the Drake Passage, in the Weddell Sea (which is fundamental to the world's ocean currents, AFAIU).

I don't know how oil drilling in the Antarctic could affect the passage, but still, I'm not sure I would trust human oil hunger with a 10ft pole on that one.

Also interestingly, the discovery was made by Russia, which is a somewhat ominous clue about where the current "multi-polar" world and climate change are heading. Antarctica, being an actual continent that thrived with life up until only about 10-30 M yrs ago, is almost certainly full of resources.

[–] [email protected] 3 points 2 months ago

Sure, but IME it is very far from doing the things that good, well written and informed human content could do, especially once we're talking about forums and the like where you can have good conversations with informed people about your problem.

IMO, whatever LLMs are doing that older systems can't isn't greater than what was lost with SEO/ads-driven slop and shitty search.

Moreover, the business interest of LLM companies is clearly in dominating and controlling (as that's just capitalism and the "smart" thing to do), which means the older, human-driven system of information sharing and problem solving is vulnerable to being severely threatened and destroyed ... while we could just as well enjoy some hybridised system. But because profit is the focus, and the means of making profit are problematic, we're in rough waters which I don't think can be trusted to create a net positive (and which haven't been trustworthy for decades now).

[–] [email protected] 2 points 2 months ago (2 children)

I really think it’s mostly about getting a big enough data set to effectively train an LLM.

I mean, yes of course. But I don't think there's any way in which it is just about that, because the business model of having and providing LLM services is to supplant the data they've been trained on and the services that created that data. What other business model could there be?

In the case of Google's AI alongside its search engine, and even chatGPT itself, this is clearly one of the use cases that has emerged and is actually working relatively well: replacing the internet search engine and giving users "answers" directly.

Users like it because it feels more comfortable, natural and useful, and it's probably quicker too. And in some cases it is actually better. But it's important to appreciate how we got here ... by the internet becoming shittier, and by search engines becoming shittier, all in the pursuit of ads revenue and the corresponding tolerance of SEO slop.

IMO, to ignore the "carnivorous" dynamics here, which I think clearly go beyond ordinary capitalism and innovation, is to miss the forest for the trees. Somewhat sadly, this tech era (approx. Windows 95 to now) has taught people that the latest new thing must be a good idea and that we should all get on board before it's too late.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago) (4 children)

I mean, their goal and service is to get you to the actual web page someone else made.

What made Google so desirable when it started was that it did an excellent job of getting you to the desired web page and off of google as quickly as possible. The prevailing model at the time was to keep users on the page for as long as possible by creating big messy "everything portals".

Once Google dropped, with a simple search field and high-quality results, it took off. Of course, they're now more like their original competitors than their original successful self ... but that's a lesson for us about what capitalistic success actually ends up being about.

The whole AI business model of completely replacing the internet by eating it up for free is the complete Sith Lord version of the old portal idea. Whatever you think about copyright, the bottom line is that the deeper phenomenon isn't just about "stealing" content, it's about eating it to feed a bigger creature that no one else can defeat.

[–] [email protected] 2 points 2 months ago

Oops. lol. I’ll leave the typo now!

[–] [email protected] 13 points 2 months ago (2 children)

Some vague stories around my grandfather before they migrated that sound like a Godfather film but which no one really knew anything about.

[–] [email protected] 5 points 2 months ago

Ha. Oops! I got the vibe that the conversation had become more general. But also I’m genuinely tired and not wearing my glasses. Sorry!

 

cross-posted from: https://hachyderm.io/users/maegul/statuses/112442514504667645

Google's play on Search, Ads and AI feels obvious to me.

* They know search is broken.
* And that people use AI in part because it takes the ads and SEO crap out.
* I.e., AI is now what Google was in 2000: a simple window onto the internet.
* Ads/SEO profits will fall with AI.
* But Google will then just insert shit into AI "answers" for money.
* Ads managed + up-to-date AI will be their new moat and golden goose.

@technology

See @caseynewton 's blog post: https://mastodon.social/@caseynewton/112442253435702607

Cont'd (Edit):

That search/SEO is broken seems to be part of the game plan here.

It’s probably like Russia burning Moscow against Napoleon, and a hell of a privilege Google enjoys with their monopoly.

I’ve seen people opt for chatGPT/AI precisely because it’s clean, simple and spam free, because it isn’t Google Search.

And as @caseynewton said … the web is now in managed decline.

For those of us who like it, it’s up to us to build what we need for ourselves. Big tech has moved on.

 

By "augmenting human intellect" we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

Man's population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity. Augmenting man's intellect, in the sense defined above, would warrant full pursuit by an enlightened society if there could be shown a reasonable approach and some plausible benefits.


Quote from Douglas Engelbart provided in this talk by @[email protected] (Bret Victor).

 

Oooff ... I don't think it's like MKBHD to come down so hard on a product. But this thing seemed weird (and probably dumb) when it was launched and so I guess this lines up.

Not that a wearable assistant doesn't make some sense, but some former Apple higher-ups thinking they're good enough to disrupt the smartphone market by ... checks notes ... relying entirely on other companies' new/untested/problematic/maybe-just-shit AI services, and pretending that all of the other "smart" devices we have just don't exist in some sort of volley in the ongoing platform wars ... really does kinda epitomise all of the shittiness of the current tech world.

 

I am ashamed that I hadn’t reasoned this through given all the rubbish digital services have pulled with “purchases” being lies.

 

cross-posted from: https://lemmy.ml/post/6745228

TLDR: Apple wants to keep China happy, Stewart was going after China in some way, Apple said don’t, Stewart walked, the show is dead.

Not surprising at all, but it's sad and shitty and definitely reduces my loyalty to the platform. Hosting Stewart seemed like a real power play from Apple, where conflict like this was inevitable, but they were basically saying: yes, we know, but we believe in things, and, as a big company with deep pockets that can therefore take risks, we’re hosting this show to prove it.

Changing their minds like this is worse than never hosting the show in the first place, as it shows they probably don’t know what they’re doing or believe in at all, like any big company, and are just going for what seems cool, undermining the very idea of a company like Apple running a streaming platform. I wonder if the Morning Show/Wars people are paying close attention.
