bradd

joined 2 years ago
[–] [email protected] 5 points 5 hours ago

Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

[–] [email protected] 2 points 7 hours ago

That makes sense. I plugged in what I think my dad was making in '95 and it was quite a bit more than I'm making now. Explains the big house, kids, etc.

ლ(ಠ益ಠლ)

[–] [email protected] 13 points 8 hours ago (2 children)

Partner and I are millennials: household income ~$200K, one child, excellent credit, no debt. Partner's standards are a tad high, but I'm unusually spartan aside from some minor capital expenditures, so I feel we balance out.

I grew up middle class, and on paper we put my parents to shame. Nevertheless, they built a huge house, had three kids and five cars, and fed the family... while my partner and I struggle to find a home while paying for one kid.

Something doesn't add up.

That said, I do wonder if it would basically be impossible to top the boomers on wealth and cost of living. Think back to before WWII: how hard was it on the average Joe? Probably a lot harder than we want to admit. The boomers may have hit the jackpot, and millennials are stuck with the expectation that we should do just as well while also footing the bill for all of the "progress" they have made since the '60s.

Don't get me wrong, there has been real progress, but there has also been a lot of "progress" in the wrong direction, in some cases a full 180°. Millennials have been paying for it our whole lives, and I don't think we are ever going to really come out ahead. We'll bust our asses just to break even, but honestly I'm okay with that if it sets our children up for a better life.

[–] [email protected] 4 points 12 hours ago

I'd be more inclined to call this a misc utensils drawer. I have one just like it, with many of the same items, but I also have a true "junk drawer", and it holds anything but utensils: batteries, screws, a magnifying glass, fire starters, a deck of cards, etc. All the shit that ends up near the kitchen without a dedicated space of its own finds a home in the junk drawer.

[–] [email protected] 1 points 3 days ago

If I put text into a box and out comes something useful, I could give a shit less whether it has a criterion for truth. LLMs are a tool, like a mannequin: you can put clothes on it without thinking it's a person, but you don't seem to understand that.

I work in IT. I can write a bash script to set up a server, or pivot to an LLM and ask for a Dockerfile that does the same thing, and it gets me very close. Sure, I need to read over it and make changes, but that's just how it works in the tech world. You take something someone else wrote, read over it, and make changes to fit your use case. Sometimes you find that real people make really stupid mistakes, sometimes college-educated people write trash software, and that's a waste of time to look at and adapt... much like working with an LLM. No matter what you're doing, buddy, you still have to use your brain.
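
Just to sketch the kind of thing I mean (nginx standing in for "a server" here, Debian-ish package manager assumed, everything else made up):

```bash
#!/usr/bin/env bash
# Rough sketch: stand up a basic web server on a fresh Debian/Ubuntu box.
# nginx is only a placeholder for "a server"; swap in whatever you actually run.
set -euo pipefail

apt-get update
apt-get install -y nginx

# Drop in a placeholder page so you can confirm it's actually serving.
echo "it works" > /var/www/html/index.html

systemctl enable --now nginx
```

Whether I write that by hand or an LLM drafts the container version, the review step is the same.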

[–] [email protected] 1 points 3 days ago

I understand your skepticism, but I think you're overstating the limitations of LLMs. While it's true that they can generate convincing-sounding text that may not always be accurate, this doesn't mean they're only good at producing noise. In fact, many studies have shown that LLMs can be highly effective at retrieving relevant information and generating text that is contextually relevant, even if not always 100% accurate.

The key point I was making earlier is that LLMs require a different set of skills and critical thinking to use effectively, just like a knife requires more care and attention than a spoon. This doesn't mean they're inherently 'dangerous' or only capable of producing noise. Rather, it means that users need to be aware of their strengths and limitations, and use them in conjunction with other tools and critical evaluation techniques to get the most out of them.

It's also worth noting that search engines are not immune to returning inaccurate or misleading information either. The difference is that we've learned to use search engines critically, evaluating sources and cross-checking information to verify accuracy. We need to develop similar critical thinking skills when using LLMs, rather than simply dismissing them as 'noise generators'.

See these:

[–] [email protected] 1 points 6 days ago

I call myself an "IT systems engineer".

[–] [email protected] 1 points 2 weeks ago

Weird how "a nation of immigrants" wants to know where they are from.

[–] [email protected] 1 points 2 weeks ago

There are alternative on-prem solutions that are now good enough to compete with VMware, at least for the majority of people impacted by VMware's changes. I think the cloud ship has sailed, the stragglers have their reasons for not moving, and in many cases companies move back from the cloud once they realize just how expensive it actually is.

I think one of the biggest drivers for businesses to move to the cloud is that they do not want to invest in talent: the talent leaves, and it's hard to find people who want to run in-house infra for what is being offered. That talent moves on to become SREs for hosting providers, MSPs, ISPs, and so on. The only option smaller companies have left is to buy into the cloud and hire what is essentially an administrator instead of a team of architects, engineers, and admins.

[–] [email protected] 2 points 2 weeks ago

It was a dumb move. They had a niche market cornered: (serious) enterprises with on-prem infrastructure. Sure, hosting virtualization on-prem was the standard back in the late 2000s, but since then the only people who have not outsourced infrastructure hosting to cloud providers have reasons not to, including financial ones. The cloud is not cheaper than self-hosting: serverless applications can be more expensive, storage and bandwidth are more limited, and performance is worse. A good example of this is OpenAI vs. Ollama on-prem. Ollama is 10,000x cheaper, even when you include the initial buy-in.
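
For anyone curious what the on-prem side of that comparison looks like in practice, a minimal sketch (the model name is just an example; pick whatever fits your hardware):

```bash
# Pull a model once, then hit Ollama's local HTTP API instead of paying per token.
ollama pull llama3

curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Write a Dockerfile for nginx", "stream": false}'
```

Past the initial hardware buy-in, the marginal cost per request is basically electricity.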

Let VMware fail. At this point they are worth more as a lesson to the industry: turn on your users and we will turn on you.

[–] [email protected] 1 points 2 weeks ago (2 children)

As a side note, I feel like this take is intellectually lazy. A knife cannot be used or handled like a spoon because it's not a spoon. That doesn't mean the knife is bad; in fact, knives are very good, but they do require more attention and care. LLMs are great at cutting through noise to get you closer to what is contextually relevant, but an LLM is not a search engine, so, like with a knife, you have to be keenly aware of the sharp end when you use it.

[–] [email protected] 1 points 2 weeks ago (2 children)

I guess it depends on your models and toolchain. I don't have this issue, but I have definitely seen it in the past with smaller models, no tools, and legal code.
