Yes, but it’s unregulated, and like most unregulated TLDs it has become a cesspool of malware and dark dealings. I don’t think anybody would want that to happen to .io.
Normally that would have been the preferred solution, but since IANA has experienced all kinds of shenanigans on similar occasions, they have decided not to allow ccTLDs to survive their former country anymore.
The dev has not made available any means to donate to him directly. He asks that people donate to the maintainers of the block lists instead.
Linux printing is very complex. Before Foomatic came along you got to experience it in all its glory, and setting up a working printing chain was a pain. The Foomatic Wikipedia page has a diagram that will make your head spin.
override the auto driving
I must be tired right now but I don’t see how a remote operator could have driven better in this situation.
You can’t get away from someone blocking your car in traffic without risk of hitting them or other people or vehicles.
You probably meant they ought to drive away regardless of what they hit, if it helps the passenger escape a dire situation? But I have to wonder if a remote operator would agree to be put on the spot like that.
Great trick, I had no idea Flatpak can use an existing install as a repo!
If you end up with resizing /var as the only solution, please post your partition layout first and ask, don’t rush into it. A screenshot from an app like Disk Manager or GParted should do it, and we’ll explain the steps and the risks.
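If you’d rather paste text than a screenshot, something like this covers the essentials (just a generic sketch; your device names and mount points will differ):

```
# Partition layout: disks, sizes, filesystems, mount points
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

# How full /var and /home actually are
df -h /var /home
```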
When you’re ready to resize, you MUST use a bootable stick, not resize from inside the running system. You have to make a stick using something like Ventoy, drop the ISO for GParted Live on the stick, then boot from it and pick GParted Live from the menu. You’ll have to write down the instructions and be careful what you do, and also hope that there’s no power outage during the process.
The safest method, if your /home has enough space, is to use it instead of /var for (some) Flatpak installs. You can force any Flatpak install to go to /home by adding `--user` to the command.
If you look at the output of `flatpak list` it will tell you which package is installed in the user home dir and which in the system (/var). You can also show the size of each package with `flatpak list --columns=name,application,version,size,installation`.
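To get a quick overview of what lives where and how much space it takes, something along these lines should work (the paths are Flatpak’s default locations; adjust if yours differ):

```
# System-wide installs (default location: /var/lib/flatpak)
flatpak list --system --columns=name,application,size

# Per-user installs (default location: ~/.local/share/flatpak)
flatpak list --user --columns=name,application,size

# Total disk usage of each installation location
du -sh /var/lib/flatpak ~/.local/share/flatpak
```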
I don’t think you can move installed apps directly between system/user like Steam can (Flatpak is REALLY overdue for a good package manager) but you can uninstall apps from system, then run `flatpak remove --unused`, then install them again with `--user`.
Please note that apps installed with `--user` are only seen by the user that installed them. Also you’ll have to clean up separately for system and user(s) in the future (`flatpak remove --unused` for system, then `flatpak remove --unused --user` for each user).
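Putting it together, moving one app from system to user would look roughly like this (the app ID is a placeholder, substitute your own; the remote is assumed to be flathub):

```
# Remove the system-wide copy of the app
flatpak uninstall org.example.App

# Drop any runtimes nothing depends on anymore (system side)
flatpak remove --unused

# Reinstall the same app into the per-user installation
flatpak install --user flathub org.example.App

# Future cleanups have to be done per installation:
flatpak remove --unused           # system
flatpak remove --unused --user    # each user
```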
It’s not an issue on Arch & derivatives, due to the simple fact I mentioned above: third-party (AUR) packages are never allowed to use the name of an official package.
If a third-party package was already using a name that a new official package wishes to use, users are required to explicitly uninstall the third-party package before they’re allowed to install the official one, and can never reinstall the third-party package unless it changes its name.
It also helps that there’s only one third-party repo (the AUR) so it prevents name overlaps among third-party packages. Although that’s of secondary importance since it can be bypassed by crafting custom packages locally.
I appreciate the difficulty of enacting such a rule on Debian or Ubuntu now, considering the vast number of already existing, widely established third-party repos, and also the fact that Debian official repos contain 3-4 times as many packages as Arch official repos. Which is why I think there’s no way to fix this aspect of Debian/Ubuntu anymore.
I’m not saying that makes them unusable… but I believe that anybody who uses them should be [made] aware of this caveat. It’s not readily apparent and by the time it bites a new user she’s probably already invested a couple of years in them.
Interesting, I’ll keep it in mind.
Still not sure it would help in all cases. Particularly when 3rd party repos have to override core packages because they need to be patched to support whatever they’re installing. Which is another very bad practice in the Ubuntu/Debian world, granted.
I’m not sure how that would help. First of all, it would still end up blocking proper updates. Secondly, it’s hard to figure out what exactly you’re supposed to pin.
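For context, pinning means writing entries like these under /etc/apt/preferences.d/, and the hard part is knowing which package/origin combinations actually need one (the package name and origin below are placeholders):

```
Explanation: Prefer this repo's build of one specific package
Package: some-third-party-package
Pin: origin "ppa.example.launchpadcontent.net"
Pin-Priority: 600

Explanation: Keep everything else from that origin below the default priority
Package: *
Pin: origin "ppa.example.launchpadcontent.net"
Pin-Priority: 100
```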
The third-party package mechanism is fundamentally broken in Ubuntu (and in Debian).
Third-party repos should never be allowed to use package names from the core repos. But they are, so they pretend they’re core packages, but use different version names, and at upgrade time the updater doesn’t know what to do with those versions or how to resolve dependencies.
That leaves you with a broken system where you can’t upgrade, and eventually there’s nothing you can do except a clean reinstall.
After this happened several times while using Ubuntu I resorted to leaving more and more time between major upgrades, running old versions on extended support or even unsupported.
Eventually I figured that if I’m gonna reinstall from scratch I might as well install a different distro.
I should note I still run Debian on my server, because that’s a basic install with just core packages and everything else runs in Docker.
So if you delegate your package management to a completely different tool, like Flatpak, I guess you can continue to use Ubuntu. But it seems dumb to be required to resort to Flatpak to make Ubuntu usable.
How do you avoid interaction if it’s being done automatically by your machine when you open up a print dialog, and if malicious servers can use the same names as legit printers?
People often think that things like recording your screen or keylogging are the worst but they’re not. These attacks would require you to be targeted by someone looking for something specific.
Meanwhile automated attacks can copy all your files, or encrypt them (ransomware), search for sensitive information, or use your hardware for bad things (crypto mining, spam, DDoS, spreading the malware further), or most likely all of the above.
Automated attacks are much more dangerous and pervasive because they are conducted at massive scale. Bots scan huge numbers of IPs and try all the known exploits and vulnerabilities without getting tired, without caring how daunting it may be, without even caring if they’re trying the right vulnerability against the right kind of OS or app. They just spray everything and see what sticks.
You’re thousands of times more likely to be caught by such malware than you are to be targeted by someone with the skill and motive to record your screen or your keyboard.
Secondly, if someone like that targets you and has access to your user account, Wayland won’t stop them. They can gain access to your root account, they can install elevated spyware, they can patch Wayland and so on.
What Wayland is doing is the equivalent of asking you to wear a motorcycle helmet 24/7, just in case you slip on some spilled juice, or a flower pot falls on your head, or the bus you’re in crashes. All those things are possible and the helmet would come in handy, but are they likely? We don’t do it because they’re not, and it would be a major inconvenience.
You were merely lucky that they didn’t break.
Lucky… over 5 years and with a hundred AUR packages installed at any given time? I should play the lottery.
I’ve noticed you haven’t given me any example of AUR packages that can’t be installed on Manjaro right now, btw.
it wasn’t just a rise in popularity of Arch, it was Manjaro’s PAMAC sending too many requests and DDoSing the AUR.
You do realize that was never conclusively established, right? (1) Manjaro was already using search caching when that occurred so they had no way to spam AUR, (2) there’s more than one distro using pamac, and (3) anybody can use “pamac” as a user agent and there’s no way to tell if it’s coming from an actual Manjaro install.
My money is on someone actually DDoS’ing AUR and using pamac as a convenient scapegoat.
Last but not least, you’re trying to use this to divert from the fact that AUR packages work fine on Manjaro.
That’s exactly the problem. Wayland is a set of standards, more akin to FreeDesktop.Org than to X. It lives and dies by its implementations, and it’s so utterly dependent on them that “KDE Wayland” has started to become its own thing. KDE are pretty much forging ahead alone nowadays and when they make changes it becomes the way to do it. Also what they do can’t be shared with other desktops because they’d have to use KDE’s own subsystems and become dependent on its whims.
It wasn’t supposed to be “Kdeland” and “Gnomeland” but that’s what it’s slowly becoming. We’re looking at major fragmentation of the Linux desktop, because desktop teams stop seeing eye to eye on major issues all the time, and always have. And because there’s no central implementation to keep them working together they’re free to do their own thing.
We need to keep a balance between security and convenience, to avoid systems becoming too awkward to use. Wayland tipped this balance too far on the side of security. Malicious local exploitation of the graphics stack has never been a big issue; consider the fact that someone or something would need to compromise your own account locally, at which point they could do much worse things than moving your windows around. It’s not that the security threat doesn’t exist, it’s that Wayland has approached it at the wrong end and killed a lot of useful functionality in the process.
Also consider that this issue has existed for the entire history of desktop graphics on *nix and nobody has ever deemed it worth destroying automation over. If it were such a grave security hole, surely someone would have raised the alarm and fixed it during all this time.
My opinion is that Wayland has been using this as a red herring, to bolster its value proposition.
Manjaro has no purpose, it’s half-assed at being arch and it’s half-assed at being stable.
My experience with Manjaro and Fedora, OpenSUSE etc. contradicts yours. Manjaro has the best balance between stability and rolling out of the box I’ve seen.
“Out of the box” is key here. You can tweak any distro into doing anything you want, given enough time and effort. Manjaro achieves a good balance without the user having to do anything. I remind you that I’ve tested this with non-experienced users and they have no problem using it without any admin skills (or any admin access).
Debian testing is a rolling release.
It is not.
AUR isn’t a problem in Manjaro because of lack of support, it’s a problem because packages there are made with Arch and 99.999% of its derivatives in mind, aka latest packages, not one-week-old still-broken packages.
And yet I’ve managed to install dozens of AUR packages just fine. How do you explain that?
Matter of fact, I’ve never run into an AUR package I couldn’t install on Manjaro. What package is giving you trouble?
Manjaro literally accidentally DDoSes the AUR every now and then because again they’re incompetent.
You’re confusing things.
AUR had very little bandwidth to begin with and could not cope with the rise in popularity of Arch-based distros. That’s a problem that needs to be solved by the AUR repo first and foremost. Manjaro did what it could when the problem became apparent and added caching wherever it could. Both Manjaro and Arch devs have worked together to improve this.
If Wayland is so fragile as to only work with KDE, and is not responsible for anything, how long until it’s relegated to a KDE internal subsystem?
And let’s not forget Cortana.