Shell scripts were a mistake. The weirdness you have to remember to safely stop executing when something fails is mind-boggling.
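Even the standard “strict mode” boilerplate doesn’t fully save you — a quick illustration (plain bash, nothing exotic):

```bash
#!/usr/bin/env bash
# the usual incantation: die on errors, unset vars, and pipeline failures
set -euo pipefail

f() { false; echo "f kept running after the failure"; }

# gotcha 1: `set -e` is silently disabled inside an if-condition,
# so the `false` inside f() aborts nothing here
if f; then echo "and f even 'succeeded'"; fi

# gotcha 2: the exit status of `local` (always 0) masks the failing
# command substitution, so this slips through as well
g() { local out=$(false); echo "g survived too"; }
g

echo "reached the end despite two failures"
```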
nushell scripts aren’t shell scripts?
servers rarely see updates. Maybe it happens in larger firms, but not in smaller shops.
*ouch*
adding PPAs or RPM repos, or installing things from source, I’d say that number is a lot higher than 0.
Nothing wrong with that. Unlike Docker, that’s a cryptographically protected toolchain/buildchain/depchain, so a PPA owner is much less likely to get compromised.
Installing things from source in a secure environment is about as safe as you can get, provided you obtain the source securely.
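To illustrate the difference (image name below is a placeholder):

```sh
# apt hard-fails on a bad InRelease/Release.gpg signature before it
# will install anything from that repo
sudo apt update

# a plain pull only verifies that layer digests match the manifest,
# not who published the image
docker pull example/some-image:latest

# signature verification (Docker Content Trust) exists, but is opt-in
DOCKER_CONTENT_TRUST=1 docker pull example/some-image:latest
```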
Docker contains that nonsense in a way that’s easy to update.
Really? Is there already a built-in way to update all installed docker containers?
What’s uneasy about `apt full-upgrade`?
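Side by side, roughly (the Docker half assumes compose-managed containers):

```sh
# the distro way: one built-in command updates every installed package
sudo apt update && sudo apt full-upgrade

# the docker way: no single built-in; for compose-managed services:
docker compose pull && docker compose up -d
# anything started with a bare `docker run` has to be pulled and
# re-created by hand, or via third-party tools like watchtower
```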
Package managers don’t provide a sandbox.
I didn’t say that.
average user who doesn’t run updates consistently, may add sketchy dependencies, and doesn’t audit things would be better off with Docker.
That’s false.
but they’re less likely to cause widespread issues since each is in its own sandbox.
Also false. Sandbox evasion is very easy, and the next local privilege-escalation kernel vulnerability is only weeks away. VM evasion is a thing, too.
Basically, one compromised container giving local execution is enough to pwn your whole host.
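And you don’t even need a kernel bug when the container was started with the convenience flags people love to copy-paste — a sketch:

```sh
# either of these makes "local execution in a container" root on the host
docker run --rm -it -v /:/host alpine chroot /host /bin/sh  # full host filesystem
docker run --rm -it --privileged alpine sh                  # raw device/kernel access
# and even without such flags, the container shares the host kernel,
# so one local privilege-escalation bug is game over
```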
in the same way that installing a malware-laden executable isn’t an OS problem
except no one is doing that. Every major distro has mechanisms for software supply chain security and reproducible builds.
Do your due diligence, especially if you’re not a developer and thus looking at the Dockerfiles is impractical.
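Even without the Dockerfile, the image itself records how it was built — a minimal sketch (image name is a placeholder):

```sh
# show every layer-creating command of an image, untruncated
docker history --no-trunc example/some-image:latest
# dump the full image config, env vars, and layer digests
docker image inspect example/some-image:latest
```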
You’re on to something here. If you automate that process, you end up with something we call a package manager.
it’s likely blog posts and users that are at fault.
Exactly. And since reviewing Dockerfiles is impractical, there’s no way Docker prevents you from shooting yourself in the foot. Distros learned that long ago: insecure default configs or injected dependencies are a thing of the past there. With Docker, those get reintroduced.
What you are saying is not new, but you don’t seem to grasp the difference in risk between running someone else’s configured environment on your system and setting it up manually yourself. You save a lot of time by using Docker images, but it comes at a price.
There’s no docker vulnerability
No need for one. Just like sudo doesn’t need a vulnerability when you let the contributors of some repository use it on your box.
Things like Snyk exist for a reason, but that’s not mitigation, just monitoring.
You should stop telling people that using Docker is no security problem, because that’s wrong: it adds attack surface to even the most secure projects. Sure, it saves time, but things like OP’s news will keep popping up in the future, as they have in the past. It can’t be fixed other than by just not using it in production. At least build your own containers.
Don’t forget various past issues:
This entirely misses the point of Docker.
It’s just pointing out the risk of letting someone you don’t know with no legal obligations setup your complete environment.
How likely
Probably as likely as someone cracking your really secure ssh password. Still, any sane expert will recommend disabling password auth.
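For completeness, the standard way to follow that advice — a sketch assuming a stock OpenSSH server:

```sh
# make sure key-based login works BEFORE this, or you lock yourself out
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh   # the unit may be called sshd on your distro
```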
I only pull containers based on some official project.
How do you know they weren’t compromised?
but I don’t see anything here about Docker itself being a problem
The problem is that rootless docker is a pain and no one does it. Privileged software sideloading other software is a huge risk.
That risk has now become an incident. Even if you’re not affected, the risk still remains.
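For the record, the upstream-documented rootless setup is less painful than its reputation — a sketch assuming a Debian-based host and Docker’s own packages:

```sh
sudo apt install -y uidmap docker-ce-rootless-extras
dockerd-rootless-setuptool.sh install
# point the client at the per-user daemon
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run --rm hello-world   # daemon and container now run unprivileged
```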
Exactly. Forking for any reason is the essence of FOSS.
Scenarios like OP’s were taken care of right from the start. That’s just the legal side, though. Someone still needs to do the actual work, which is why it sometimes fails.
always_has_been.jpg
Public funds.
There actually are lots of initiatives (e.g. https://bigdatastack.eu/european-open-source-initiative ) but it’s still young and there are multiple problems between available public money and contributors actually earning a salary.
Money is not the problem.
either earn a good living being a code monkey, or find a job in a small company that has passion
crazy idea: let’s publicly fund FOSS projects so devs working on stuff they like with a passion can actually make a good living and enable sustainable non-profits to hire expertise, marketing and all the stuff a company needs
the result would be actually good software and happy devs
25 years in the industry here. As I said, there’s nothing against learning something new, but I doubt it’s as easy as “leveling up”.
Both fields profit a lot from experience, and it’s as much of a gain for a scientist to become a software dev as for an architect to become a carpenter. It’s simply not productive.
there is so much time lost in research institutes because of shoddy programming
Well, that’s the way it is. Scientific code and production code have different requirements. To me that sounds like “that machine prototype is inefficient - just skip the prototype next time and build the real thing right away.”
It’s always good to learn new stuff, but in terms of productivity: don’t attempt to be a programmer. Rather attempt to write better research code (clean up code, revision control, better commenting, maybe testing… see the sketch below).
Rather, try to improve cooperation with programmers, if necessary. Close cooperation, asking stupid questions instead of making assumptions, etc., makes the process easy for both of you.
Also, don’t be afraid to consult different programmers, since beyond a certain level, experience and expertise in programming are vastly fragmented.
Experienced programmers mostly suck at your field and vice versa, and that’s a good thing.
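To make “better research code” concrete, a minimal sketch of that baseline — file and directory names are placeholders, assuming a Python-based analysis:

```sh
# revision control: a history you can roll back beats analysis_v2_FINAL.py
git init
git add analysis.py
git commit -m "baseline that reproduces the current results"

# testing: even a couple of sanity asserts catch silent breakage
python -m pytest tests/
```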
air gapping doesn’t really help when basically any interface is an attack vector.
evil maid attacks still work.
Why would using a CDN I don’t control, from a non-contracted 3rd party, and their “PageShield” app reduce my supply-chain attack risk?
Am I not just increasing the attack surface, since my visitors can now be victims not only of my servers being compromised but also of the 3rd party being compromised?
serious question.
if only it were that simple…
“Memory…! We need memory!” – Olli Khan
Consequence:
Software can only be good when enough people WANT to work on it and with it along the complete life-cycle. There’s a critical mass of developers/contributors/testers and (feedback-providing) users that has to be reached.
Hence a lot of critical consumer stuff is based on popular open source.
Also, we’re entering an era where the difference between hardware, firmware, and software gets increasingly blurred. So all of this applies to more and more hardware, too.
Certainly not the unvaccinated, anyway. They’ve got immune system and they spit in their hands…!!
/s
The articles don’t mention mitigation methods.
What do I have to disable in Thunderbird to not be vulnerable to the “obfuscated JavaScript file that is sent to the victim through emails in archive files”, and to prevent that “The JavaScript file drops a self-copy at “C:\Users\<Username>” location with random names like “needlereportcreepy.bat”. The bat file is then executed”?
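As far as I can tell, Thunderbird doesn’t execute the .js itself — the victim has to open the attachment, and then Windows Script Host runs it. So the knob is on the Windows side, not in Thunderbird. A hedged sketch, disabling WSH system-wide via the registry:

```bat
:: not a Thunderbird setting: disable Windows Script Host so a
:: double-clicked .js attachment can't execute via wscript/cscript
reg add "HKLM\SOFTWARE\Microsoft\Windows Script Host\Settings" /v Enabled /t REG_DWORD /d 0 /f
```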