@7heo - eviltoast
  • 28 Posts
  • 822 Comments
Joined 1 year ago
Cake day: May 31st, 2023


  • So, OK, I’m willing to learn: please show me good brands then.

    They need to resist mud (thick mud, the kind with a ton of suction that will pull your soles off when you try to move), seawater, rocks and sand, and pretty dense vegetation.

    They also need steel toe caps and good soles (Vibram or equivalent if possible) that don’t slip and aren’t too hard (wet stone is enough of a female dog as it is), and they have to go higher than my ankle.

    The best brand I’ve tried so far was Caterpillar, but they only lasted 3 years. That’s a far cry from “a decade or more”.




  • Yeah so, the number of meals is correct. But that’s about it. I mean, I can’t speak to the taste, to each their own, but one kg of beef takes about two dozen kg of grain to produce.

    That’s about as inefficient as it gets.

    As for the leather, the industry doesn’t like products that last a decade, so it doesn’t actually use the leather that way. Industrial leather boots last a year, tops.

    Finally, pet food is made out of discarded cuts of meat, the uglies, etc., but also lots of cereals and vegetables.

    So we could really afford to eat less meat. It isn’t good for anything: not for us, not for the other species (certainly not for the cows, which often get butchered hastily and half-assedly because of quotas and profit), and absolutely not for the ecosystem.

    But I guess the taste is all that matters.




  • 7heo@lemmy.ml to Technology@lemmy.world · Amazon builds AI model to optimize packaging

    I think you’re overstating the compute power […]

    I don’t actually think so. A100 GPUs in server chassis have a 400 or 500 W TDP depending on the configuration, and even assuming 400 W, with 4 per water-cooled 1U chassis, a 47U rack of those would draw about 100 kW once you account for power supply efficiency and whatnot.

    Running that for just a day would already be 2.4 MWh.

    Now, I’m not assuming Amazon owns hundreds of those racks at every DC, but they would probably use at least a couple of them to train their model (time is money, right?). Training for a week on just two of those racks would be about 34 MWh, and I can only extrapolate from there.

    So, extrapolated to Amazon’s actual scale, I don’t think that getting into GWh territory is such an overstatement.
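
    If you want to check my napkin math, here it is as a quick Python sketch (same assumptions as above; the TDP, rack layout, and overhead factor are all just my guesses):

    ```python
    # Back-of-the-envelope training energy estimate (all inputs assumed).
    GPU_TDP_W = 400     # assumed A100 TDP (SXM configs go up to 500 W)
    GPUS_PER_1U = 4     # assumed water-cooled 1U chassis
    RACK_UNITS = 47
    OVERHEAD = 1.3      # fudge factor for PSU losses, cooling, networking

    rack_power_kw = RACK_UNITS * GPUS_PER_1U * GPU_TDP_W * OVERHEAD / 1000
    print(f"One rack draws about {rack_power_kw:.0f} kW")        # ~98 kW

    one_day_mwh = rack_power_kw * 24 / 1000
    print(f"One rack, one day: {one_day_mwh:.1f} MWh")           # ~2.3 MWh

    two_racks_week_mwh = 2 * rack_power_kw * 24 * 7 / 1000
    print(f"Two racks, one week: {two_racks_week_mwh:.0f} MWh")  # ~33 MWh
    ```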

    […] and understating the amount of cardboard Amazon uses

    That, very possibly.

    I have seldom used Amazon, maybe 5 times tops, and I can only remember two of those orders: a smartphone and a bunch of electronics supplies, and I don’t remember the packaging being excessive either time. But I know from plenty of memes that they regularly overdo it. That, coupled with the insane amount of shit people order online… yes, I believe you are right on that one.

    Even so, as long as it is cardboard or paper, and not plastic and glue, it isn’t a big ecological issue.

    However, that makes no difference to Amazon financially: cost is cost, and that is all they care about.

    But let’s not pretend they are doing a good thing, then. It is a cost-cutting measure for them that ends up worsening the situation for everyone else, because the tradeoff is good economically and terrible ecologically.

    If they wanted to do a good thing, they could use machine learning to optimise the combining of deliveries in the same area, to save on petrol and, by extension, pollution from their vehicles. But that would worsen the customer experience and end up costing them more than it would save, so that’s never gonna happen.
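
    You wouldn’t even need ML for a first pass, by the way. A naive batching heuristic along these lines (a plain Python sketch with made-up order fields) already captures the idea:

    ```python
    from collections import defaultdict

    # Hypothetical pending orders: (order_id, postcode, promised_date)
    orders = [
        ("A1", "10115", "2024-06-03"),
        ("A2", "10115", "2024-06-04"),
        ("A3", "20095", "2024-06-03"),
    ]

    # Naive consolidation: one trip per (postcode, date) instead of one
    # trip per order. Batching across dates (i.e. delaying some orders)
    # would save even more trips, but that is exactly the hit to the
    # customer experience mentioned above.
    trips = defaultdict(list)
    for order_id, postcode, date in orders:
        trips[(postcode, date)].append(order_id)

    for (postcode, date), batch in trips.items():
        print(f"{date} -> {postcode}: one trip for {batch}")
    ```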


  • IMHO the issue is twofold:

    1. Makefiles were never supposed to do more than determine which build tools to call (and how) for a given target, meaning that in very many cases makefiles are abused to do way too much. I’d argue that you should try to keep your make targets only one line long. Anything bigger and you’re likely doing it wrong, and ought to move it into a shell script that gets called from the makefile (see the sketch after this list).
    2. It is really challenging to write portable makefiles. There’s BSD make and GNU make, and then there are different tools on different systems, different dependencies, different libs, etc. Not easy.
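
    To illustrate point 1, here is a minimal sketch of what I mean (the script names are made up): every target stays one line, and the actual logic lives in shell scripts.

    ```make
    # Each target only dispatches; the logic lives in scripts/, not here.
    # (Recipe lines must start with a tab.)
    build:
    	./scripts/build.sh

    test: build
    	./scripts/run-tests.sh

    clean:
    	rm -rf build/
    ```

    As a bonus, a makefile that thin stays trivially portable between BSD make and GNU make, which mostly sidesteps point 2.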

  • 7heo@lemmy.ml to Technology@lemmy.world · Amazon builds AI model to optimize packaging

    Yeah, it is one of the least bad uses for it.

    But then again, using literal gigawatt-hours of compute power to save on the easiest genuinely recyclable material known to man (cardboard)? Maybe that’s just me, maybe I’m too jaded, but it sounds like a pretty bad overall outcome.

    It isn’t a bad deal for Amazon, though: they are likely to save on costs that way, since energy is still orders of magnitude cheaper than it should be[1], and cardboard is getting pricier.


    1. if we were to account for the available supply, the demand, and the (sooner rather than later) need to transition towards new energy sources, some of which simply do not have the same potential. ↩︎


  • The thing is, devops is pretty complex and pretty diverse. You’ve got at least 6 different solutions among the popular ones.

    Last time I checked only the list of available provisioning software, I counted 22.

    Sure, some, like cdist, are pretty niche. But still, when you apply to a company, even though the platform is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (an S3 lookalike based on MinIO, “cloud LAN”, etc.), and you are likely going to use Terraform for host provisioning, the most relevant information to check is which software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the “lesser ones”?

    And the thing is, even across successive versions of compatible stacks, the DSLs evolved and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

    And that’s not even throwing Docker (or rkt, appc) into the mix. Then you have k8s, Podman, Helm, etc.

    The entire ecosystem has considerable overlap too.

    So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub gists, etc. So much so that entire tools emerged around them… And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

    And on the other hand, you have devops: an ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full-time job on its own. There’s no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.