Poik

joined 1 year ago
[–] [email protected] 14 points 2 weeks ago (4 children)

Check the Vanguard Target Retirement Income Fund (VTINX) and other similar funds. There was a dip in 2021 that absolutely destroyed a number of retirements, my parents' included, despite these being low-risk options. Total bond index funds also suffered for some reason, and those are about as low-risk as you can get. Every other fund I have is doing great, but the ones that are supposed to be safe are not.

[–] [email protected] 1 points 2 months ago

It's a very hard game. I really got into it playing with the Noita Together mod, and the Spell Labs mod when I was playing solo, to really figure out the game. Then once I felt I had a good grasp, I beat it, did the sun quest, eventually beat 33 orb Kolmi... Lost all my progress and had to do it again.

If you can't tell, I love Noita, but I fell in love with the wand building first. Spell Labs has excellent tutorials on improving your wand builds, too. But I have now modded the game, so I'm not a casual player of it either.

[–] [email protected] 3 points 2 months ago

I've seen a few, but it's still kind of controversial. That being said, there is a time and a place where agile works, but it takes a particular team composition and a particular style of agile, and that style tends to piss off micromanaging middle managers, so it's rarely allowed.

I had an article saved in my work Slack before I left that company (for health reasons), but a currently popular one seems to be this: https://johnfarrier.com/agile-failure-what-drives-268-higher-failure-rates/

My take is based on years of interaction with companies and with friends at other companies. The biggest problem isn't necessarily Agile itself, but that Agile is not intended for long-term projects. Agile is fantastic for short-turnaround work such as web dev, and because those short-turnaround shops have such easily visible results, managers take them as gospel. Thus comes Corporate Agile: https://web.archive.org/web/20240524230754/https://bits.danielrothmann.com/corporate-agile Link is from the Internet Archive because I can't find his new site, if he moved.

Long story short, corporate agile is the agile the bosses want, as it lets them be constantly involved through more and more "agile" meetings. You know. Meetings. The antithesis of Agile. The place productivity goes to die. I had to remind our bosses multiple times that Agile dictates stand-ups include the developers and the scrum master ONLY, and I pointed them to the Agile training they gave me. Didn't matter. They're the boss. This is a pretty common breakdown in Agile. So daily standup turned into daily meeting, since the quick status updates now had to be broken down for the boss. Every. Single. Day.

Agile at its most basic is intended to reduce meetings to once a week so the rest of the time can be spent developing. Every company I know of ends up including devs (even junior devs) in at least 300% more meetings within six months of switching to Agile. And on average, it takes a programmer half an hour after any interruption to return to the level of productivity they had before it. This is generally due to the limitations of working memory. (There are many research papers on this if you want them.)

But to get back to the original point. Because agile concentrates on short-term, immediately tangible and verifiable benefits, any progress that takes longer than a sprint isn't allowed. (It actually is, with proper implementation, since Agile is supposed to be adapted on a team-by-team basis to make things work, but companies want everyone on exactly the same page.) Guess what doesn't have immediately tangible and verifiable benefits? That's right, research. Guess what's still in a research phase? Aside from basically anything that isn't on the market yet, self-driving technology is very much research driven. Lots of trial, error, and long development cycles. Longer than a sprint for sure. And anyone who says self-driving is on the market should try the exercise of finding one level 5 self-driving car that hasn't been recalled over false marketing or safety concerns. The technology isn't there yet. It could be getting there, but profits are getting in the way of progress.

[–] [email protected] 16 points 2 months ago

Realistically? Trains would revolutionize transport of goods and people away from roads, if the train industry properly maintained its rails, operated above board (unlike the one responsible for the chemical spill in Ohio, among other issues), and expanded a bit. The largest expense in goods transport is long haul, and no one wants to drive long haul. Last mile will probably need trucks and drivers for at least three to five more decades, and taxi services have similar challenges to last-mile delivery. Personal self-driving systems need even more consideration than taxi services, and will likely take five to ten years after taxi services become recognized as safe.

[–] [email protected] 9 points 2 months ago (2 children)

In my (in the industry) experience: Agile killed safe development by pushing superficial internal deadlines that look good instead of being good. Safety requirements therefore are never met, but people keep looking like they're approaching at least one, while sacrificing other things no one is concentrating on, causing more setbacks than improvements. Self-driving will not be legally commercialized until either someone lobbies bad development onto the roads, or capitalism realizes that quarterly profit isn't as important as ten-year profit and Agile finally burns in a goddamn fire.

[–] [email protected] 5 points 2 months ago

I was told in 2009 "Why optimize? Hardware upgrades will make your efforts obsolete anyway." So... I devoted my time to optimization, because fuck that. I ended up doing algorithm optimization in my first full time job, and loved... That part of the job at least.

Indie games and co-op games are my jam. I feel for all of this comment.

[–] [email protected] 7 points 2 months ago

He got free food and a bed? Jealous.

[–] [email protected] 1 points 3 months ago (1 children)

Then why are you saying it's incorrectly formatted? I'm directly backing its premise.

[–] [email protected] 0 points 3 months ago (3 children)

Except, usage defines language. If it didn't, English wouldn't exist. Therefore, usage is correct when people understand and use it.

[–] [email protected] 8 points 9 months ago

I love Discord, for what it's for: quick synchronous talks you will never refer back to again. So not software development, where indexable logs of information are necessary. I know Discord has search, and now some form of forums. But every Discord I've been to for development (especially modding communities) has a huge corpus of synchronous logs, and people get annoyed if you ask a question that was answered once, a long time ago, in extremely common language that makes it nearly impossible to search for, because the keywords have been used out of the context of your question hundreds of times since it was asked.

If the dev communities used Discord's forum mode more, it wouldn't solve everything, but it'd be much better. There are better places than Discord for these things, but I have been trying to meet people where they're established.

[–] [email protected] 2 points 9 months ago

Recalling data, communication. Two things humans are notoriously bad at...

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

And I wouldn't call a human intelligent if TV is anything to go by. Unfortunately, humans do things they don't understand constantly and confidently. It's commonplace; you could call it fake-it-till-you-make-it, but a lot of the time it's really people thinking they understand something when they don't.

LLMs do things confident that they will satisfy their fitness function, but they do not have the ability to see farther than that at this time. Just sounds like politics to me.

I'm being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. Narrow refers to being designed for one task and one task only, such as LLMs, which are designed to minimize a loss function over people accepting the output as "acceptable" language, which is a highly volatile target. AGI, or Strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. I don't mean a computer-vision neural network that is able to classify anomalies as something the car should stop for. That's out-of-distribution reasoning, sure, but if it can reasonably determine what's in bounds as part of its loss function, then anything that falls significantly outside can easily be flagged. That's not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.
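That "flag anything far outside what the model was fitted on" idea can be sketched with a toy heuristic. To be clear, the softmax-confidence threshold and the 0.6 cutoff below are illustrative assumptions of mine, not how a production safety-critical OOD detector works:

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flag_out_of_distribution(logits, threshold=0.6):
    """Return (predicted_class_index, is_ood).

    Illustrative heuristic: a narrow classifier only "knows" its
    training classes, so when its top softmax confidence falls below
    the threshold we flag the input as outside its domain (domain
    recognition, not true generalization).
    """
    probs = softmax(logits)
    top = max(range(len(probs)), key=lambda i: probs[i])
    return top, probs[top] < threshold

# Confident, in-distribution-looking prediction: no flag.
cls_a, ood_a = flag_out_of_distribution([6.0, 0.5, 0.2])
# Nearly uniform logits: low confidence, flagged for a safety stop.
cls_b, ood_b = flag_out_of_distribution([0.4, 0.5, 0.45])
```

The point of the sketch is just that "easily flagged" part: the model never reasons about the anomaly, it only notices that nothing in its domain fits well.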

This is an important conversation to have, though. The way we use language is highly personal, based on our experiences, and that makes coming to an understanding in natural languages hard. Constructed languages aren't the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That's why I propose splitting the ideas in a way that allows for more nuanced discussion, instead of redefining terms that are present in thousands of groundbreaking research papers spanning a century, which would make reading research a matter of historical linguistics as well as mathematical understanding. Jargon is already hard enough as it is.
