this post was submitted on 22 Jul 2024
267 points (95.6% liked)

[–] [email protected] 181 points 3 months ago (3 children)

If capitalism insists on those higher up getting exorbitantly more money than those doing the work, then we have to hold them to the other thing they claim they believe in: that those higher up also deserve all the blame.

It's a novel concept, I know. Leave the Nobels by the doormat, please.

[–] [email protected] 29 points 3 months ago

Wait, are you trying to say that Risk/Reward is an actual thing?

/s (kinda)

[–] [email protected] 15 points 3 months ago (1 children)

It doesn't seem unfair for executives to earn the vast rewards they take from their business by also taking on total responsibility for that business.

[–] [email protected] 11 points 3 months ago

Moreover, that's the argument you hear when talking about their compensation. "But think of the responsibility and risk they take!"

[–] [email protected] 14 points 3 months ago (2 children)

Was there a process in place to prevent the deployment that caused this?

No: blame the higher up

Yes: blame the dev that didn’t follow process

Of course there are other intricacies, like if they did follow a process and perform testing, and this still occurred, but in general…

[–] [email protected] 32 points 3 months ago

If they didn't follow a procedure, it's still a culture/management issue, and the blame should follow the distribution of wealth in the company 1:1.

[–] [email protected] 23 points 3 months ago (3 children)

How could one dev commit to prod without other devs reviewing the MR? If you're not protecting your prod branch, that's a cultural issue. I don't know where you've worked in the past, or where you're working now, but once there are N+1 engineers in a codebase, there need to be code reviews.
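
For anyone wondering what "protecting your prod branch" looks like in practice: hosted forges expose it as a repo setting, but even a bare Git server can enforce it with a pre-receive hook. Here's a minimal sketch in Python; the `GIT_PUSHER` variable and the `review-bot` account are assumptions for illustration, since how the server identifies the pusher varies by setup.

```python
#!/usr/bin/env python3
"""Pre-receive hook: refuse direct pushes to the protected prod branch."""
import os
import sys

PROTECTED_REFS = {"refs/heads/main"}   # prod branch(es) that require review
ALLOWED_PUSHERS = {"review-bot"}       # only the merge/review bot may update them

# Hypothetical: many setups export the authenticated pusher in an env var.
pusher = os.environ.get("GIT_PUSHER", "unknown")

# Git feeds the hook one "old_sha new_sha refname" line per updated ref.
for line in sys.stdin:
    old_sha, new_sha, refname = line.split()
    if refname in PROTECTED_REFS and pusher not in ALLOWED_PUSHERS:
        print(f"rejected: {refname} only accepts reviewed merge requests",
              file=sys.stderr)
        sys.exit(1)   # non-zero exit makes Git refuse the entire push

sys.exit(0)
```

Point being, "nobody can push straight to prod" is a one-afternoon configuration job, not a heroic engineering effort.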

[–] [email protected] 128 points 3 months ago (2 children)

Git Blame exists for a reason, and that's to find the engineer who pushed the bad commit so everyone can work together to fix it.

Blame the project manager/middle manager/C-level exec/unaware CEO/greedy shareholders who signed off on a CI/CD process that doesn't leave ample time to test and validate changes.

Software needs a union. This shit is getting out of control.

[–] [email protected] 14 points 3 months ago (3 children)

Or it needs to be a profession.

Licensed professional engineers are expected to push back on requests that endanger the public and face legal liability if they don't. Software has hit the point where failure is causing the economic damage of a bridge collapsing.

[–] [email protected] 11 points 3 months ago (1 children)

Sounds like the kind of oversight that tends to come with a union and the representation therein.

[–] [email protected] 11 points 3 months ago (1 children)

Software engineering is too wide and deep for licensing to be feasible without a degree program, which would be a massive slap in the face to the millions of skilled self-taught devs.

[–] [email protected] 2 points 3 months ago (1 children)

Some states let some people get professional licensure through experience alone. It just ends up taking more than a decade of experience to meet the equivalent requirements of a four year degree.

[–] [email protected] 2 points 3 months ago (1 children)

Yeaaa that's not exactly a solution

[–] [email protected] 3 points 3 months ago (1 children)

Why not? It is still valuing the self education of people. It just means having a license to manage the system requires people with significant experience.

And it isn't like a degree alone is required for licensure.

[–] [email protected] 2 points 3 months ago (3 children)

Because a decade of professional experience is a long time, and it doesn't value independent experience. I've been coding for over 11 years, but only a couple of them professionally. Also, software development is very international; how would that even be managed when working with self-taught people across continents?

I agree developers should be responsible, but licensing isn't it, when there are 16 year olds that are better devs than master's graduates.

[–] [email protected] 4 points 3 months ago (3 children)

Do we allow for self-taught doctors or accountants?

Also, these regulations aren't being developed for all servers, just ones that can cause major economic damage if they stop functioning. And you don't need everyone to be qualified to run the service. How many water treatment plants are there where you only have a small set of managers running the plant, but most people aren't licensed to do so?

[–] [email protected] 65 points 3 months ago (1 children)

"George Kurtz, the CEO of CrowdStrike, used to be a CTO at McAfee, back in 2010 when McAfee had a similar global outage. "

[–] [email protected] 11 points 3 months ago (1 children)

Wonder if he partied with John?

[–] [email protected] 5 points 3 months ago

John left McAfee 15 years earlier

[–] [email protected] 37 points 3 months ago (3 children)

I do wonder how frequent it is that an individual developer will raise an important issue and be told by management it's not an issue.

I know of at least one time when that's happened to me. And other times where it's just common knowledge that the central bureaucracy is so viscous that there's no chance of getting such-and-such important thing addressed within the next 15 years. And so no one even bothers to raise the issue.

[–] [email protected] 19 points 3 months ago (1 children)

Reminds me of Microsoft's response when one of their employees kept trying to get them to fix the vulnerability that ultimately led to the SolarWinds hack.

https://www.propublica.org/article/microsoft-solarwinds-golden-saml-data-breach-russian-hackers

[–] [email protected] 7 points 3 months ago (1 children)

And the guy now works for CrowdStrike. That's ironic.

[–] [email protected] 3 points 3 months ago (1 children)

I’m imagining him going on to do the same thing there and just going “why am I the John McClane of cybersecurity? How can this happen AGAIN???”

[–] [email protected] 8 points 3 months ago

Hey man, look, our scrums are supposed to be confidential. Why are you putting me on blast here in public like this?

[–] [email protected] 29 points 3 months ago (3 children)

If you don't test an update before you push it out, you fucked up. Simple as that. The person or persons who decided to send that update out untested absolutely fucked up. They not only pushed it out untested, they didn't even roll it out at offset times from one region to the next or anything. They just went full ham. Absolutely an idiot move.
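
For reference, "roll it out at offset times from one region to the next" is just a staged/canary rollout, and the logic fits in a screenful. A rough sketch in Python; the region list and the `deploy_to`/`healthy` functions are hypothetical placeholders, not anything from CrowdStrike's actual pipeline:

```python
import time

# Hypothetical rollout order: smallest blast radius first.
REGIONS = ["canary-hosts", "apac", "emea", "americas"]
SOAK_MINUTES = 60   # watch each wave this long before moving on

def deploy_to(region: str, build: str) -> None:
    """Placeholder: push the update to a single region."""
    print(f"deploying {build} to {region}")

def healthy(region: str) -> bool:
    """Placeholder: check crash rates / boot loops / agent check-ins."""
    return True

def staged_rollout(build: str) -> None:
    for region in REGIONS:
        deploy_to(region, build)
        time.sleep(SOAK_MINUTES * 60)   # let telemetry accumulate
        if not healthy(region):
            print(f"halting rollout: {region} unhealthy, freezing {build}")
            return   # later regions never receive the bad build
    print(f"{build} fully rolled out")
```

Even with imperfect testing, a gate like this caps the blast radius to the first wave instead of every host on the planet at once.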

[–] [email protected] 21 points 3 months ago* (last edited 3 months ago) (1 children)

The bigger issue is the utterly deranged way in which they push definitions out. They've figured out a way to change kernel driver behavior without actually putting it through any kind of Microsoft testing process. It's an absurd way of doing it. I understand why they're doing it that way, but the better solution would have been to work out an actual proper process with Microsoft, rather than this workaround that seems rather like a hack.

[–] [email protected] 8 points 3 months ago (1 children)

This is the biggest issue. Devs will make mistakes while coding. It's the job of the tester to catch them. I'm sure some mid-level manager said "let's increase the deployment speed by self-signing our drivers" and forced a poor schmuck to do this. They skipped internal testing and bypassed Microsoft testing.

[–] [email protected] 14 points 3 months ago (1 children)

We still don’t know exactly what happened, but we do know that some part of their process failed catastrophically and their customers should all be ready to dump them.

[–] [email protected] 5 points 3 months ago

I'm quite happy to dump them right now. I still don't really understand why we need their product; there are other solutions that seem to work better and don't kill the entire OS when they have a problem.

[–] [email protected] 6 points 3 months ago

The kernel driver devs also fucked up. Their driver couldn't handle reading a file full of zeros.
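
Whatever the exact parsing bug was, the broader lesson is that a driver has to treat its own content files as untrusted input. A minimal user-space sketch in Python, assuming a hypothetical definition-file format with a fixed magic header (the real channel-file format isn't public and the real code is kernel C):

```python
MAGIC = b"DEF1"   # hypothetical magic header for a definition file
MIN_SIZE = 16     # hypothetical minimum plausible size, in bytes

class BadDefinitionFile(Exception):
    """Raised when a content update fails basic sanity checks."""

def load_definitions(path: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()

    # Reject obviously corrupt content up front instead of crashing later.
    if len(data) < MIN_SIZE:
        raise BadDefinitionFile(f"{path}: truncated ({len(data)} bytes)")
    if not data.startswith(MAGIC):
        raise BadDefinitionFile(f"{path}: missing magic header")
    if data.count(0) == len(data):
        raise BadDefinitionFile(f"{path}: file is all zeros")

    # In a real driver, a failure above should mean "keep the last known-good
    # definitions", never "take the whole OS down".
    return data[len(MAGIC):]
```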

[–] [email protected] 23 points 3 months ago

If I'm responsible for the outcome of the business, I want a fair share of the profits of the business.

[–] [email protected] 23 points 3 months ago (2 children)

Wild theory: could it have been malicious compliance? Maybe the dev got a written notice to do it that way from some incompetent manager.

[–] [email protected] 3 points 3 months ago

While that’s always possible, it’s much more likely that pressure to release quickly and cheaply made someone take a shortcut. It likely happens all the time with no consequences, so it’s “expected” in the name of efficiency, but this time the truck ran over grandma.

[–] [email protected] 11 points 3 months ago* (last edited 1 month ago)

I get that it's not the point of the article or really an argument being made but this annoys me:

We could blame United or Delta that decided to run EDR software on a machine that was supposed to display flight details at a check-in counter. Sure, it makes sense to run EDR on a mission-critical machine, but on a dumb display of information?

I mean yea that's like running EDR on your HVAC controllers. Oh no, what's a hacker going to do, turn off the AC? Try asking Target about that one.

You've got displays showing live data, and I haven't seen an army of staff running USB drives to every TV when a flight gets delayed. Those displays have at least some connection into your network, and an unlocked door doesn't care who it lets in. Sure, you can firewall those machines off to only what they need, unless your firewall has a 0-day that lets attackers bypass it, or the system they pull data from does. Or maybe they just hijack all the displays to show porn for a laugh, or falsified gate and time info to cause chaos for the staff.

Security works in layers because, as clearly shown in this incident, individual systems and people are fallible. "It's not like I need to secure this" is the attitude that leads to things like our joke of an IoT ecosystem. And to why things like CrowdStrike are even made in the first place.

[–] [email protected] 8 points 3 months ago (1 children)

As a counterpoint to this article's counterpoint: yes, engineers should still be held responsible, as well as management and the systems that support negligent engineering decisions.

When they bring up structural engineers and anesthesiologists getting “blamed” for a failure: when catastrophic failures occur, it's never about blaming a single person but about investigating the root cause. Software engineers should be held to standards, and the managers above them pressuring for unsafe and rapid changes should also be held responsible.

Education for engineers includes classes like ethics, and at least at my school, graduating engineers take oaths to uphold integrity, standards, and obligations to humanity. For a long time, software engineering has been used for integral human and societal tools and systems; if a fuck-up costs human lives, then the entire field needs to be reevaluated and held to that standard and responsibility.

[–] [email protected] 7 points 3 months ago (1 children)

CTOs that outsourced to software they couldn't and didn't audit are to blame first. Not having a testing pipeline for updates is to blame. Windows having a verification system loophole is to blame. CrowdStrike not testing this patch is to blame. Their building a system to circumvent inspection by MS is their fault.

Now, within each org there is probably some distribution of blame too, but the execs in charge are first and foremost responsible...

Honestly, in some cases this is probably serious enough damage that I expect every org to have to pay some liability for the harms their negligence caused. If our system is just, that is; and if it is not, then we have a duty to correct that as well.

[–] [email protected] 7 points 3 months ago

I've said it before and I'll say it again.

Corporate culture is a malicious bad actor.

Corporate culture, from management books to magazine ads to magic quadrants, is all about profits over people, short term over stability, and massaging statistics over building a trustworthy reputation.

All of it is fully orchestrated from the top down to make the richest folks richer right now at the expense of everything else. All of it. From open floor plans to unlimited PTO to perverting every decent plan, whether it be agile or ITIL or whatever: every idea it lays its hands on turns into a shell of itself with only one goal.

Until we fix that problem, the enshittification, the golden parachutes, and the passing around of horrible execs who prove time and time again they should not be in charge of anything will continue as part of the game where we sacrifice human beings on the Altar of Record Quarterly Profits.
