Hazzard

joined 1 year ago
[–] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago)

Sounds like a CEO who doesn't have a damn clue how code works. His description reads like he thinks every line of code takes the same amount of time to execute, as if x = 1; takes as long as calling an encryption/decryption function.

"Adding" code to bypass your encryption is obviously going to make things run way faster.

[–] [email protected] 15 points 1 month ago

I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but "formal reasoning" is exactly how this technology is being pitched to the masses. "Take a picture of your homework and OpenAI will solve it", "have it reply to your emails", "have it write code for you". All reasoning-heavy tasks.

On top of that, Google/Bing have it answering user questions directly, it's commonly pitched as a "tutor" or an "assistant", the OpenAI API is being shoved everywhere under the sun for all kinds of tasks, and nobody is attempting to clarify its weaknesses in their marketing.

As it becomes more and more common, more and more users will crop up who don't understand that it's fundamentally incapable of doing these things reliably.

[–] [email protected] 2 points 1 month ago

30% / 0% / 70%

[–] [email protected] 4 points 1 month ago

Yeah, this is the problem with frankensteining two systems together. Giving an LLM a prompt, and giving it a module that can interpret images for it, leads to this.

The image parser goes "a crossword, with the following hints", when what the AI needs to do the job is an actual understanding of the grid. If one singular system understood both images and text, it could hypothetically understand the task well enough to fetch the information it needed from the image. But LLMs aren't really an approach to any true "intelligence", so they'll forever be unable to do that as one piece.
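
As a rough sketch of the hand-off I mean (the function names here are hypothetical, not any real API):

```python
# Hypothetical two-stage pipeline: a vision module flattens the image
# into text, and the LLM only ever sees that text.

def describe_image(image_bytes: bytes) -> str:
    """Vision module: returns a text summary of the image.
    The actual grid layout is lost at this step."""
    return "A crossword puzzle. Hints: 1 Across ..., 2 Down ..."

def complete_text(prompt: str) -> str:
    """LLM: continues the prompt. It never sees the image itself."""
    return "Sure! Here are the answers: ..."

def solve_crossword(image_bytes: bytes) -> str:
    # Everything the LLM can "know" about the puzzle is capped by
    # whatever survived describe_image().
    description = describe_image(image_bytes)
    return complete_text(f"Solve this crossword:\n{description}")
```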

[–] [email protected] 18 points 1 month ago (6 children)

Eh, this is a thing; large companies often have internal rules and maximums on how much they can pay any given job title. For example, on our team, everyone we hire is given the role "senior full stack developer", not because they're particularly senior (in some cases we're literally hiring straight out of college), but because it lets us pay them better within internal company politics.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Ah, he recommends saving $1000, then tackling your debt, then building to 3-6 months of expenses. Which is... fine, I agree with the principle of it, but that number is definitely one of those things I'd consider being more flexible with. The amount I think you should save before tackling your debts depends on a lot of factors.

I also don't necessarily agree with saving that amount in just two blocks; we personally saved $1000, paid the most pressing card off, and then saved another $1000. I think it makes sense to adjust that minimum emergency fund number as your situation evolves.

Just another case where I find he works fine as a starting point, but where most people shouldn't follow his advice to the letter.

[–] [email protected] 2 points 2 months ago (1 children)

Mmm, excellent addendum to my proposed changes. $1000 is better than nothing, but it hasn't really kept up with inflation, and circumstances really change things. For example, if you have a house, the potential for an "emergency", and its likely cost, go up immensely.

But yeah, for us personally we pretty quickly went up to a $2000 emergency fund, despite the relative stability of renting and driving a fairly new car. We'll be working on our 3-6 months of expenses emergency fund soon. I definitely think it's better to view the baby steps as flexible guidance on a starting point, rather than the concrete law they're framed as.

[–] [email protected] 7 points 2 months ago (7 children)

I think I have an interesting perspective here, as someone who did kinda get their finances under control thanks to a Dave Ramsey course, and later had the unpleasant experience of discovering how much of a right-wing idiot he is during COVID.

Something I've noticed is that a lot of his advice seems targeted towards people who are crushingly bad at navigating debt. One of the most viral things they do is called "the debt free scream", where people share their stories on his radio show after getting debt free, and just... do a victory scream, essentially. Kinda fun, not really a bad thing, but it shows that most of the people he deals with directly, and the ones who make the best marketing, are people with hundreds of thousands or millions of dollars of debt despite making very average money. Just absolutely no self-preservation instinct around available credit.

And for these people I think his advice makes sense. Absolutely no debt, debt is the enemy, it will crush you. And stuff like how he pushes you to attack your debt with high intensity, get multiple jobs, etc. Because otherwise it's impossible to even manage to put money towards the principal of a debt that large.

For the average person though? His best advice is basic budgeting, focusing on paying your debts one by one so you can celebrate each victory quickly, and building an emergency fund so you don't need to go backwards as soon as you have a car problem. Also, yeah, ditch the brand new truck, it's burying you in debt you didn't need.

But absolutely, I'd highly recommend modifying his recommendations for most people, and I don't doubt someone out there is doing a better job of teaching this stuff than Ramsey is. My advised tweaks:

  • Find a budget you can live with; paying your debts a couple of months faster isn't worth being miserable, and a budget you can live with makes it more likely you'll be able to stick to it for as long as it takes.
  • Zero-based budgeting (budgeting every dollar at the start of the month) isn't really necessary, leaving a little loose change that you can allocate later once the month is actually happening is pretty helpful. It's ok to shift things around so long as you aren't spending money you don't have.
  • Actually do keep "fun money" or "restaurant money", so long as you're capable of including it in the budget without hamstringing your ability to pay debt. If you're giving more to debt than these things, then you're probably fine.
  • Ultimately just... think for yourself, and make your own decisions, based on your own income and expenses. Ramsey is a decent, if aggressive, starting point (and again, not the best person, he seems to have lost the plot somewhere).

[–] [email protected] 2 points 4 months ago

Ugh, if only. Amazon has done everything in their power to bury and strip that number from the internet. Once upon a time that worked great.

[–] [email protected] 36 points 4 months ago (9 children)

Storytime! Earlier this year, I had an Amazon package stolen. We had reason to be suspicious, so we immediately contacted the landlord and within six hours we had video footage of a woman biking up to the building, taking our packages, and hurriedly leaving.

So of course, I go to Amazon and try to report my package as stolen... which traps me for a whole hour in a loop with Amazon's "chat support" AI, repeatedly insisting that I wait 48 hours "in case my package shows up". I cannot explain to this thing clearly enough that, no, it's not showing up, I literally have video evidence of it being stolen that I'm willing to send you. It cuts off the conversation once it gives its final "solution" and I have to restart the convo over and over.

Takes me hours to wrench a damn phone number out of the thing, and a human being actually understands me and sends me a refund within 5 minutes.

[–] [email protected] 16 points 7 months ago

I don't necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don't think an LLM will actually be any part of an AGI system.

Because fundamentally it doesn't understand the words it's writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect hallucinations, "Waluigis", and "jailbreaks" are fundamental problems for a language model trying to complete a story, in a way they wouldn't be for an actual intelligence with a purpose.
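
To put the "autocomplete" intuition in code form, here's a toy sketch (the score table and function are made up for illustration, nothing like a real model's internals):

```python
# Toy "autocomplete": always emit whichever token scores highest next,
# with no notion of whether the continuation is true.
toy_scores = {
    ("the", "package", "will", "arrive"): {"tomorrow": 0.7, "soon": 0.2, "never": 0.1},
}

def next_token(context: tuple[str, ...]) -> str:
    # Greedy decoding over made-up scores; a hallucination is just the
    # highest-scoring continuation happening to be false.
    scores = toy_scores.get(context, {"...": 1.0})
    return max(scores, key=scores.get)

print(next_token(("the", "package", "will", "arrive")))  # -> "tomorrow"
```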

[–] [email protected] 3 points 7 months ago

Same honestly. And if I ever ask a question that someone might think is a duplicate, I link to that question and say something like "I found X, but the answers here don't reflect Y".
