If that’s elephant garlic, then it’s the wrong amount. That shit is 3x the size with 1/10 the flavor. Fuck that, I’ll peel a whole head of normal garlic myself.
If you want to move your containers to a different location, look into configuring docker’s data-root: https://stackoverflow.com/questions/24309526/how-to-change-the-docker-image-installation-directory
You copy /var/lib/docker to a new location and update /etc/docker/daemon.json.
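Roughly something like this (the new path /mnt/storage/docker is just an example, and if you already have a daemon.json you’ll want to merge the key into it rather than overwrite it):

```
sudo systemctl stop docker
sudo rsync -a /var/lib/docker/ /mnt/storage/docker/
echo '{ "data-root": "/mnt/storage/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
```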
I will say: Moving data-root to an NFS mount isn’t going to work well. I’ve tried it, and Docker’s overlay storage driver relies on filesystem features that NFS doesn’t provide, so it falls back to a driver that duplicates the container’s entire filesystem for every layer. This tanks your performance and is basically unusable for anything but trivial examples. Docker’s data-root basically needs to be a “physical” disk.
I’ve had no issues using NFS shares mounted as docker volumes. It’s just the data-root where it’ll fail.
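Something along these lines has worked fine for me (the server address and export path are placeholders):

```
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/export/media \
  media
```

Then you mount `media` in your compose file or `docker run` like any other named volume.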
If you’re doing it from scratch, I’d recommend starting with a filesystem that has checksumming and filesystem scrubs built in, e.g. BTRFS or ZFS.
The benefit of something like BTRFS is that you can always add disks down the line and convert it into a RAID array with a couple of commands.
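With BTRFS, adding a disk and converting to RAID1 is roughly this (device name and mount point are just examples):

```
sudo btrfs device add /dev/sdb /mnt/pool
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```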
Yep, the problem was that docker started before the NFS mount. Adding the dependency to my systemd docker unit did the trick!
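For anyone else who hits this: what I added was a drop-in via sudo systemctl edit docker.service, roughly like the following (the mount path is just an example):

```
[Unit]
After=remote-fs.target
RequiresMountsFor=/mnt/nfs
```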
isn’t it an annoyance having to connect to your home network all the time?
It’s less annoying than the gnawing fear that my network might be an easy target for attackers.
ProLiant Gen9 is an EoL server that hasn’t been sold since 2018. Meanwhile, Debian bookworm released last year. I’d be surprised if the problem were that your installer gave you a kernel that’s too old.
What is the output of ip addr show?
It might also be worth ruling out low-level issues:
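For instance, something like this (assuming the interface is eno1; substitute whatever ip addr actually reports):

```
# Does the NIC have link and a negotiated speed?
sudo ethtool eno1
# Did the driver complain during boot?
dmesg | grep -iE 'eno1|link|firmware'
```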
To be pedantic, Ford’s threat is to “rearrange [the computer’s] memory banks with an axe”
The countdown is until he starts doing it.
How to make a suckless.org contributor cry
A much better idea than when I tried to organize my restaurant with hashtables.
It was too much for the waitstaff, who had to reindex the floor plan every time they added or removed a plate.
On the plus side, delivering the right food was always O(1).
Surely this could be good, right?
If celebrities need to be accessible to their biggest fans, maybe it would induce them to leave the birdsite? And if this is as big a migration as the article suggests, it has the potential to snowball in network effects, giving other influential users one less reason to feel chained to a dumpster fire.
Not that I was ever interested in being military, but I was at a lunch with two older lifelong army retirees. They kept talking about how military service broke their bodies and politicians won’t cover their medical costs. These injuries were independent of any combat: It’s just expected that you sell every part of yourself when you sign up.
Who wants to be 45 years old with a limp, unable to hear a quiet conversation, and saddled with horrible back problems?
Yes, OP, I highly recommend a GL.iNet device. It’s pocket-sized and always does the job.
It’s also great for shitty wifi that tries to limit how many devices you can connect: the network only sees the router’s single MAC address, and all your other devices route their traffic through it.
A story I heard was that it was the poor indigenous farmers who were forced to cultivate coffee for the Dutch. They weren’t allowed to keep any of the beans they grew, but they could collect them from the dung of civets that prowled around the plantation. Of course, once the colonizers learned that it tasted “good”, that got commoditized too.
Might be apocryphal.
Why’d ye spill yer memes, Winslow? Why’d ye spill yer memes?
As someone who has owned enterprise servers for self-hosting, I agree with the previous comment that you should avoid owning one if you can. They might be cheap, but your long-term ownership costs are going to be higher. That’s because as the server breaks down, you’ll be competing with other people for a dwindling supply of compatible parts. Unlike consumer PCs, server hardware is incredibly vendor-locked. Hell, my last ProLiant would keep the fans ramped at 100% because I installed an HDD that the BIOS didn’t like. This was after I spent weeks tracking down a disk that would at least be recognized, and the only drives I could find were already heavily used.
My latest server is built with consumer parts fit into a 2U rack case, and I sleep so much easier knowing I can replace any of the parts myself with brand new alternatives.
Plus, as others have said, a 1U can be really loud. I don’t care about the sound of my gaming computer, but that PowerEdge was so obnoxious that despite being in the basement, I had to smother it with blankets just so the fans didn’t annoy me when I was watching TV upstairs. I still have a 1U Dell PowerEdge, but I specifically sought out the generation that still lets you hack the fan speeds over IPMI. From all my research, no such hack exists for the ProLiant line.
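On the generations where that works, it’s the widely shared ipmitool raw commands; I can’t promise the exact bytes are identical on every iDRAC, so treat this as a sketch:

```
# take fan control away from the iDRAC (manual mode)
ipmitool raw 0x30 0x30 0x01 0x00
# pin the fans at ~20% duty cycle (0x14 = 20)
ipmitool raw 0x30 0x30 0x02 0xff 0x14
# hand control back to automatic
ipmitool raw 0x30 0x30 0x01 0x01
```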
The problem with Chromebooks is that the base specs are pretty shit. A lot of them have 4 GiB of RAM and maybe 16 GiB of disk if you’re lucky.
They were designed to be thin clients that connect students to the internet, and little else. Maybe they can be hacked into something useful, but I don’t think they’ll ever make good PCs. They were always destined for the landfill.
Meanwhile, the best thinkpads were quality machines back when they came out. IMO, that’s why they’re still so versatile today. Free software can’t fix bad fundamentals.
Look up your GPU on these charts to find out which codecs it supports: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
The NVENC columns tell you which codecs your GPU can encode for client devices, and the NVDEC columns tell you which codecs it can decode.
Then compare it with the list of codecs that your Intel can handle natively.
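If you want to sanity-check what your setup actually exposes, something like this works (the grep patterns assume ffmpeg’s usual nvenc/cuvid naming, and vainfo comes from libva-utils on the Intel side):

```
# NVIDIA encoders and decoders your ffmpeg build can use
ffmpeg -hide_banner -encoders | grep nvenc
ffmpeg -hide_banner -decoders | grep -E 'cuvid|nvdec'
# VA-API profiles the Intel iGPU advertises
vainfo
```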