7heo

joined 1 year ago
[–] [email protected] -1 points 7 months ago* (last edited 7 months ago) (2 children)

This is the way. And I might add, Unix desktop. Let's not start bikeshedding between FOSS Unix distributions for dogmatic reasons (I'm sure you didn't mean to specifically single out "Linux" here, but I wish we would stop opposing "Linux" to other Unixes like BSD, Illumos, etc).

The point is, voting with your data for software that is defending your interests, and respecting your rights.

Edit: Dang, I didn't expect to get so much flak for "Unix as opposed to Unix-like". I absolutely meant "Unix-like", but my point is that it shouldn't matter. Most software is trying to be compatible these days, and Linux isn't (in spite of all that marketing material) an OS. It is a kernel. So, semantics for semantics, can it even be compared to something it is not? I merely tried to be inclusive.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

Maybe they mean it in the sense of "forgery". You know, as in "let people imagine what it is like to have friendships, by letting them make forgeries of their lives, but with friends in it" 🤪

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (1 children)

I think we can all agree on that... But without the entire article, one can only parametrise their answer... I was hoping someone with a full version could do an HTML dump. 😅

Or at the very least a markdown dump in here.

[–] [email protected] 3 points 7 months ago

Is it using XLR?

[–] [email protected] 8 points 7 months ago (5 children)

Is it just me, or is everyone here commenting on a half article, the other half being behind a paywall? 😬

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

I think you're overstating the compute power [...]

I don't actually think so. A100 GPUs in server chassis have a 400 or 500W TDP depending on the configuration. Even assuming 400W, with 4 per watercooled 1U chassis, a 47U rack of those would draw about 75kW from the GPUs alone, call it 100kW once you factor in power supply efficiency and whatnot.

Running that for a single day is already 2.4MWh.

Now, I'm not assuming Amazon would own 100s of those racks at every DC, but they probably would use at least a couple of such racks to train their model (time is money, right?). Training with just two of those for a week is about 34MWh, and that's before extrapolating to the hundreds of racks and months of training that realistic runs involve, plus the inference fleet serving the model afterwards.

So I don't think that going to much larger figures is such an overstatement.
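The back-of-the-envelope numbers above can be sketched out explicitly. The rack density (4 GPUs per 1U chassis, 47U rack), the 400W TDP, and the ~33% overhead for power supply losses and ancillary hardware are the comment's assumptions, not measured figures:

```python
# Back-of-the-envelope training-energy estimate.
# Assumptions (from the comment, not measured): 400 W per A100,
# 4 GPUs per 1U chassis, 47 chassis per rack, ~33% overhead for
# PSU losses, CPUs, cooling, and networking.

GPU_TDP_W = 400
GPUS_PER_CHASSIS = 4
CHASSIS_PER_RACK = 47

gpu_power_w = GPU_TDP_W * GPUS_PER_CHASSIS * CHASSIS_PER_RACK  # 75,200 W
rack_power_w = gpu_power_w * 1.33  # ~100 kW for the whole rack

def energy_mwh(power_w: float, hours: float) -> float:
    """Energy in megawatt-hours for a given power draw and duration."""
    return power_w * hours / 1e6

one_rack_one_day = energy_mwh(rack_power_w, 24)            # ~2.4 MWh
two_racks_one_week = energy_mwh(2 * rack_power_w, 24 * 7)  # ~33.6 MWh

print(f"one rack, one day:    {one_rack_one_day:.1f} MWh")
print(f"two racks, one week:  {two_racks_one_week:.1f} MWh")
```

Note that at this scale the units work out to MWh; GWh and beyond only appear once you multiply across many racks, long training runs, and the inference fleet.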

[...] and understating the amount of cardboard Amazon uses

That, very possibly.

I have seldom used Amazon, maybe 5 times tops, and I can only remember two of those times. Those two times, I ordered a smartphone and a bunch of electronics supplies, and I don't remember the packaging being excessive. But I know from plenty of memes that they regularly overdo it. That, coupled with the insane amount of shit people order online... And yes, I believe you are right on that one.

Even so, as long as it is cardboard, or paper, and not plastic and glue, it isn't a big ecological issue.

However, that makes no difference to Amazon financially, cost is cost, and they only care about that.

But let's not pretend they are doing a good thing, then. It is a cost-effective measure for them that ends up worsening the situation for everyone else, because the tradeoff is good economically and terrible ecologically.

If they wanted to do a good thing, they could use machine learning to optimise the combining of deliveries in the same area, to save on petrol, and by extension, pollution from their vehicles, but that would actually worsen the customer experience, and end up costing them more than it would save them, so that's never gonna happen.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

IMHO the issue is twofold:

  1. Makefiles were never supposed to do more than determine which build tools to call (and how) for a given target. Meaning that in very many cases, makefiles are abused to do way too much. I'd argue that you should try to keep your make targets only one line long. Anything bigger and you're likely doing it wrong (and ought to move it into a shell script that gets called from the makefile).
  2. It is really challenging to write portable makefiles. There's BSD make and GNU make, and then there are different tools on different systems. Different dependencies. Different libs. Etc. Not easy.
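A minimal sketch of point 1, keeping each target to a single recipe line and delegating anything non-trivial to a shell script (the script path and target names here are hypothetical, just to illustrate the shape):

```make
# Targets stay one line each; any real logic lives in scripts/.
.POSIX:

build:
	$(CC) $(CFLAGS) -o app main.c

release: build
	./scripts/package.sh app

clean:
	rm -f app
```

The `.POSIX:` target also helps with point 2: it asks both BSD make and GNU make to stick to standardised behaviour instead of their own extensions.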
[–] [email protected] 6 points 7 months ago* (last edited 7 months ago) (5 children)

Yeah, it is one of the least bad uses for it.

But then again, using literal terawatt-hours of compute power to save on the most easily recyclable material known to man (cardboard)... maybe that's just me, maybe I'm too jaded, but it sounds like a pretty bad overall outcome.

It isn't a bad deal for Amazon, tho, who is likely to save on costs that way, since energy is still orders of magnitude cheaper than it should be[^1], and cardboard is getting pricier.

[^1]: if we were to account for the available supply, the demand, and the future (think sooner rather than later) need for a transition towards new energy sources... Some of which simply do not have the same potential.

[–] [email protected] 2 points 7 months ago

The thing is, devops is pretty complex and pretty diverse. You've got at least 6 different solutions among the popular ones.

Last time I checked only the list of available provisioning software, I counted 22.

Sure, some, like cdist, are pretty niche. Still, when you apply to a company, even tho the platform is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (a simili-S3 based on minio, "cloud LAN", etc), and you are likely going to use Terraform for host provisioning, the most relevant information to check is which software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the "lesser ones"?

And the thing is, even across successive versions of compatible stacks, the DSLs evolved, and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

And that's not even throwing Docker (or rkt, appc) into the mix. Then you have k8s, podman, helm, etc.

The entire ecosystem has considerable overlap too.

So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub gists, etc. So much so that tools built on them emerged... And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

And on the other hand, you have devops. An ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full time job on its own. There's no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

Do bullets kill soldiers?

Infantry soldiers in the open, possibly. Soldiers in an APC? No.

Same applies to companies. A single sufficiently bad review of a small, one-person company can take it out entirely. A single review of a big corporation? Not even one from a big shot like MKBHD.

This headline is dumb.
