No comment on the hosting end of things; however, I switched to Ubuntu on my PC a couple of weekends ago and I still haven't adjusted to the peace and quiet of not getting 87 meaningless Windows notifications every time I boot up.
I'm not exactly a Windows lover, but what are you people doing to your Windows installs to get so many notifications? I get nothing at boot, and the only notifications I see are when Windows Defender blocks something and I need to add an exclusion.
What edition are you using? The home edition straight-up has pop-up ads[1] for random Xbox games, even on a clean install with all notification settings turned off.
One time I had the misfortune of installing the free version of Driver Booster. I then spent the next 15 minutes closing ads and uninstalling all the bloated apps it installed alongside itself.
I actually thought it was a virus until I discovered people actually pay money for it.
That was really common back in the day. Ad-Aware and Spybot Search & Destroy came around just to remove the junk that got bundled with other software when we forgot to uncheck a box. Nobody wanted the 32 browser toolbars they never asked for.
It was never part of the OS until these last few years.
After you get frustrated having to use the terminal to make Ubuntu work, try Fedora.
Ubuntu is outdated Linux; it's part of the Debian family, which goes by the misleading name 'stable'. It's not stable like a table. It's stable in the sense that software versions are frozen, even after their bugs have been fixed upstream.
Fedora isn't Arch; I think most Debian-family users don't realize the two are unrelated. Fedora is just well maintained and generally up to date.
I started using Linux desktops around 2012, and always used Debian-based distros (mostly Debian, Ubuntu, and Mint).
I switched to Fedora this year, and I've been super pleasantly surprised. There are some sharp edges (mostly due to Wayland and Flatpaks), but I don't think I'll be going back to Debian any time soon. Things seem way more stable than on Ubuntu.
That's really interesting. A problem I've been having with Ubuntu is quirky things with Bluetooth devices, and a monitor that doesn't always get recognized when waking the PC from sleep.
In your experience, does Fedora handle these better than Ubuntu?
I've been using Fedora since 2011 and haven't had any monitor or Bluetooth issues.
Originally I had a wifi issue when I first got a Ryzen computer, but it was solved fairly easily and I haven't had an issue since. The upgrade from 42 to 43 borked my local Postgres, but it seems they understand what their mistake was there.
There are certainly a lot of great options. I tried Fedora, Mint, and Cachy recently, and for my machine and use cases Fedora had the most issues. Mint and Cachy basically work perfectly out of the box.
Just bear in mind that the OSS NVIDIA kernel module often causes breakages there with mismatched firmware. The fully proprietary module, or nv, is fine.
Tumbleweed is good for a mostly stable, clean KDE distro, but I wouldn't recommend it for gaming or codec integration. The first-class btrfs snapshots are probably my favorite feature.
Depends whether you go with Tumbleweed, Slowroll, or Leap. I believe the Kernel Of The Day repository is only available for Tumbleweed. By 'latest' kernel you did mean bleeding edge nightly builds, right?
I want VMs that are kind of like Arch but a little more stable, yet with the very latest versions of everything I need and minimal risk (no need for the bleeding edge at all times; Manjaro does this semi-okay with its two-week grace period).
I don't need 100% of my software to be current. Just a tiny fraction, and those are modern tools that are heavily iterated on. Is it possible they have bugs? Very much so!
But "stay on an older version to be safe" is not the panacea many try to pretend that it is. Way too many bugs and security vulnerabilities on old versions as well.
If you're on Debian, there's the backports repository. And stable means stable in terms of features: they still ship patches for bugs and security issues, and quite quickly for the latter.
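For reference, enabling backports is a couple of lines (a sketch assuming the current stable codename, trixie; adjust to yours):

    echo 'deb http://deb.debian.org/debian trixie-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    # -t pins the release, so only the named package (and what it needs) comes from backports:
    sudo apt install -t trixie-backports <package>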
Right, because the latest packages bring only the latest features and bug fixes and never any new bugs. Do you ever wonder how the bugs that the latest packages fix got in?
For people who want things to not change and to not get new bugs with every update, I recommend an LTS distro like Kubuntu, and only getting the latest kernels or drivers from a PPA or upstream if you really have to. I am not running the latest KDE stuff and I feel fine; I am not suffering in pain for the lack of some cool new feature in Plasma and its accompanying new bug. I am comfortable with the existing features and the existing bugs.
I've tried Debian variants many times over the years. However, my actual experience has been one of struggling with outdated software, knowing that the fix I need is just out of reach. Trying to pull in some of those fixes from a PPA often leads to a dependency mess. I'm sure I could deal with it better if I took the time, but I just want to do the thing I set out to do. The other "reason" to use Debian is the supposedly large user base and community support. But I've found more often than not that the solutions to my problems are outdated or don't work for whatever reason.
Ironically, my best experience so far in that regard is an arch variant (CachyOS).
That said, people shouldn't be afraid of experimenting to find the best software for their purposes, and something like Linux Mint is still a great option to recommend to people who are new to Linux.
OK, but stop with the bullshit that you get more stability because of new packages. You get the latest features you read about or watched in some video, but you also get the latest bugs, and to get fixes for those latest bugs you will upgrade again in a few months and receive the latest fixes along with some new bugs.
There are some valid use cases for a rolling or bleeding-edge distro: if you want to contribute to KDE or a similar project, you'd want to track the latest library versions. But for, say, a web dev job and some entertainment, an LTS distro works better. You don't upgrade and discover that GNOME removed yet another feature you were using, or that some stuff in Plasma broke and now you get a ton of notifications about something not working. Or maybe you didn't read the Arch forums before upgrading, and cool package Y conflicts with cool package Z, and now your system is unbootable and you have to fix it instead of doing your actual work. (Arch fans should first Google "Arch upgrade bricked my system" before commenting that this never happens to them.)
Btw, I used Arch in the past too, back when I had more free time and loved tinkering with my system.
Proxmox is probably overkill for beginners. You'll know it when you want it.
I recommend docker-compose based tools, especially dockge [1]. It drastically cuts down on the surface of weird things you have to deal with. Just put a reasonable distro on the box (I recommend Debian here, since Fedora tends to hit SELinux issues that would confuse beginners), install Docker, and run it. You don't have to touch anything else system-wise (except maybe setting up encryption when installing).
Most self-hostable services provide docker-compose files, which you can just paste in with some customizations and run from there.
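To illustrate, a whole "stack" is usually just a directory with one compose file in it; the image name here is a placeholder, not a recommendation:

    mkdir -p /opt/stacks/example && cd /opt/stacks/example
    cat > compose.yaml <<'EOF'
    services:
      app:
        image: example/some-service:latest   # placeholder image
        restart: unless-stopped
        ports:
          - "8080:8080"
        volumes:
          - ./data:/data                     # keep state next to the compose file
    EOF
    docker compose up -d    # start it; `docker compose logs -f` to watch it come up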
Tailscale for external access is probably the easiest solution.
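On the server it really is about two commands (this is Tailscale's official install script; the rest is point-and-click in their admin panel):

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up    # prints a login URL; once authenticated, the box joins your tailnet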
Some self-hosted tools, like Home Assistant, really want to be installed on the host OS. Proxmox just makes it so much easier to blow away and reinstall the OS without having to drag a monitor and keyboard to the server.
TFA's author is using GPU passthrough to get Windows to run games with anti-cheat that won't work under Linux. So you want full hardware virtualization with GPU passthrough for that to work: several hypervisors would do, but Proxmox is Debian + hypervisor + LXC + ZFS, and it is easy to use.
I got my brother, who's not a techie and who lives at the other end of the world, to install Proxmox and get GPU passthrough working.
> Just put up a reasonable distro (I recommend Debian here
Proxmox is basically Debian.
Proxmox does let you do things that are totally overkill for beginners, indeed, but it's still simple to use for simple stuff.
I think we should encourage beginners, like my brother, to use solutions like Proxmox, not discourage them.
Needing to run games on the same machine as your server is IMO not the most common use case. It tends to be expensive if power bills are a concern, and using a VM as your main machine is tricky.
Most games using anti-cheat won't run in VMs without a fight, either. Proxmox supports taking and restoring memory snapshots, which would let most cheats work if they were made convenient enough.
For every person like your brother, there are many more half-serious people who need some kind of reward before committing more mental effort. Wtf is a storage pool? What do I do with all these clusters and the high-availability thing it keeps asking about? Flattening out the learning curve is a nice benefit on its own.
The machine that I use now for my server was my main machine back in the day. Apologies if my wording in the post caused some confusion, but I don't do gaming on the server at all.
And unstable for novices who have no clue what they are working with, OS-wise.
virt-manager is easy enough for most desktop folks looking to drop Win11 into a frozen backing-image sandbox with a local Samba folder loopback mount (which presents a fake network share to the Win11 or macOS guest OS). =3
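A sketch of that frozen backing-image idea with qemu-img (file names are placeholders): the base image stays read-only and the guest writes only to a disposable overlay.

    qemu-img create -f qcow2 -b win11-base.qcow2 -F qcow2 win11-overlay.qcow2
    # Point the VM at win11-overlay.qcow2; delete and recreate it to reset the sandbox.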
virt-manager is the least intuitive program I've ever dealt with (discounting actively anti-user crap). I still don't quite get it, and I've used Linux exclusively for more than 5 years.
The GUI might not be as powerful, but in my experience it's about as non-intuitive as the alternatives, such as VirtualBox, UTM (macOS), or VMware Fusion/Player.
For anything more complex (e.g. GPU passthrough) you will need to drop into manually editing XML files.
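For the curious, it looks roughly like this; the VM name and PCI address are placeholders (find the real address with `lspci -nn`):

    virsh edit win11-vm    # opens the domain XML in $EDITOR
    # ...then add, inside <devices>:
    #   <hostdev mode='subsystem' type='pci' managed='yes'>
    #     <source>
    #       <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    #     </source>
    #   </hostdev>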
The GUI. Random permission errors, Python tracebacks, saving the settings doesn't always work, and the mysterious charade with "storage pools" that causes new permission problems.
I just started using Incus now. It seems way more intuitive. Its remote feature is amazing too.
For those who switched in the past few years: Has Wayland given you trouble? I was on a journey this morning after testing my software. It's a GUI CAD-style program for structural biology. I test it on Linux periodically to make sure it's cross-platform. The checks usually pass, with some subtlety regarding linking Cuda. Today, I observed that mouse + keyboard inputs to the 3D portion stopped working.
Root cause: Ubuntu and some other distros recently switched to a GUI backend called Wayland. I don't remember upgrading, but maybe it happened during a system update? It has disabled low-level device inputs except for mouse movement. You can use window-based events instead, but IMO this is a mistake. An OS-level function shouldn't block hardware access. I want the OS to facilitate software and hardware; not create friction.
Wayland was turned on by default in Ubuntu 21.04; Xorg (the X11 server) was removed in Ubuntu 25.10.
Desktops like GNOME have dropped support for X11, so you can expect Wayland to be the only way to do things from here on out.
There is a compatibility layer called "xwayland" that should work, but there are definitely some rough edges between X11 apps and Wayland apps. X11 gave all apps a pretty large ability to intercept information from across the system; Wayland locks that down pretty significantly.
The interplay between X11 and Wayland is still bad in my experience: if you don't have all of the XDG portal stuff set up, or force all of the constituent drivers and applications to render on Wayland, there will be issues.
More performant is good! Of interest: I was able to insert a workaround (a few lines of `unsafe` that alter local env vars) into the program to make it use Xorg instead.
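For anyone hitting the same thing, the no-recompile equivalent is setting the same variables from outside the program. Which one matters depends on the toolkit, so treat these as a sketch:

    env -u WAYLAND_DISPLAY ./my-app    # most toolkits fall back to X11/XWayland
    GDK_BACKEND=x11 ./my-gtk-app       # GTK apps
    QT_QPA_PLATFORM=xcb ./my-qt-app    # Qt apps
    SDL_VIDEODRIVER=x11 ./my-sdl-app   # SDL apps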
> For those who switched in the past few years: Has Wayland given you trouble?
No, since I'm on Mint, and thus still on X11.
I keep hearing about Wayland problems, and I pray they don't reach me when Mint switches; that the relevant problems will have been fixed by the time Mint does switch.
The fact that I keep hearing people have issues because it's now the default doesn't fill me with confidence. It's probably a minority of people having these issues, and I imagine the problems with screen sharing and the like are gone by this point, but accessibility tools which need the access-to-everything X gave them are never going to be compatible without being completely rewritten with Wayland in mind.
I'm also not filled with confidence by posts like this[1], which confirm that Wayland was still deeply broken in many ways as recently as 2024. Two years is not a short amount of time, but given that it took Wayland 10+ years to get anywhere close to usable for desktop usage, I'm not betting on the pace having picked up so much that long-standing issues are already solved. That post takes a somewhat extreme position, but the kernel of truth is that Wayland breaks a lot of things, many of which apparently still aren't fixed.
This post[2] has a better outlook, I believe: things are broken today (as they are for accessibility), but there is real potential and they can be fixed. It just has to be done. A shame that doing so is hard and that accessibility ends up taking a back seat.
I had a look at that gist, and the majority of it is a result of Wayland's responsibility being smaller. Xorg was a dumping ground for a lot of functionality it shouldn't have been responsible for. The majority of those Xs on the list are things that very much do work on a Linux system running Wayland; in theory you could build a Wayland-spec-compliant DE which doesn't have them, but if you just install GNOME/KDE, it will all work.
For all the noise about the "unix philosophy" and systemd being too all encompassing, Wayland went the other way and people still aren't happy.
The exact reasons for it not working don't matter. It's the "not working" part that annoys people when it comes to Wayland. If X worked and then Wayland doesn't, no matter how small the way in which it doesn't work, that's still a valid reason not to be a fan.
I'm yet to see someone complain about systemd and Wayland in the same comment, or even the same comment chain, which does scan, given how diametrically opposed their philosophies are. The Venn diagram of people who dislike Wayland and people who dislike systemd is probably not quite two separate circles, but I can't imagine the overlap is very large. I actually like systemd a fair bit, since it seems to do its job pretty well. I've seen people have problems with it, but most of the opposition to it is on philosophical grounds rather than matters of functionality, whereas with Wayland, the opposite is the case.
That's the thing though: this stuff does actually work; if you just install Debian or whatever, it works. The complaint here is that the projects or layers that implement the features have changed.
It would be like if Linux used to include a bundled web browser in the kernel and they decided to split it off into a user space app which comes preinstalled on the distro. And then people complain because now the kernel doesn't have a web browser and in theory a distro could exist which doesn't have one.
Xorg was bloated with a kitchen sink of features which have now been broken out into separate projects which are still included out of the box when you install a normal non minified distro.
> That's the thing though: this stuff does actually work; if you just install Debian or whatever, it works.
You say that, but your anecdote is no more true than anyone else's. Or maybe other people just have different needs to you and the others for whom Wayland seems to work fine, and those needs are in conflict with the state of Wayland today.
Not to mention there are people in this very thread telling people not to recommend Debian to newcomers, for fairly good reasons.[1] If I should use Debian because Wayland works fine there, but then I shouldn't use Debian because my new-ish GPU doesn't work well with it, or whatever other reason I'd rather not use Debian, then that translates into "don't use Linux".
> The complaint here is that the projects or layers that implement the features have changed.
Which is a reasonable complaint when that results in missing functionality compared to what was before.
> It would be like if Linux used to include a bundled web browser in the kernel and they decided to split it off into a user space app which comes preinstalled on the distro. And then people complain because now the kernel doesn't have a web browser and in theory a distro could exist which doesn't have one.
People aren't complaining because there could in theory be Wayland implementations which lack functionality. They're complaining because the Wayland implementations used in distros today lack functionality they use which was present when they used X. They don't want to use a Wayland implementation which doesn't have it.
> Xorg was bloated with a kitchen sink of features which have now been broken out into separate projects which are still included out of the box when you install a normal non minified distro.
I'll take a bloated kitchen sink that works over a normal sink that randomly sprays water in my face.
Still waiting for drag-and-drop to not be broken on Plasma.[2] I doubt it's hitting everyone, since then there would be work happening to fix it, but it still seems to make the rounds every now and then.[3] The problem doesn't seem to be the distro having been cut down to the bone, at least to me: I've heard a non-trivial number of Wayland problems from Ubuntu and CachyOS users. Ubuntu is by no means minified, and although CachyOS is, it's one of the more popular distros and one people seem to like, so random Wayland problems should trickle up and generate bug reports and fixes to the point that people shouldn't be having significant issues any more.
Anecdote from just now, from someone who's wanted to like Linux for ~20 years:
I updated the Nvidia drivers from version 580 to 590 on Ubuntu 24, using the Additional Drivers window. Rebooted. Now instead of the OS, it boots into something called "BusyBox (initramfs)" with a shell. No error message. I don't feel like dedicating an indefinite amount of time to fixing it. Maybe now LLMs will have replaced searching forums and Stack Overflow.
It is this class of problems that has kept me from switching. I'm not sure if it would be easier to track this down, or install the OS clean again. I lament having to make this decision.
I switched over 30 years ago and the trick is to buy hardware with Linux in mind. Linux tries to work with as much hardware as possible but it does so with mixed results. If you buy well supported hardware it becomes much easier. Finding out which hardware is best supported is really the main problem. Probably the easiest way to solve this is to buy from a Linux native vendor. System76 is probably the best known (to me anyways) but there are others.
ChatGPT: Ran me in circles, as you describe; suggested a number of things I didn't understand, including commands that don't work in the BusyBox shell. (It's not a normal shell.)
Gemini: Nailed it by suggesting I select an older kernel in the GRUB menu.
This is usually due to DKMS (Dynamic Kernel Module Support). Each time you upgrade your kernel, the NVIDIA portion needs to be recompiled. If this goes wrong, you end up in a state where you cannot boot.
This is why everyone in the Linux world has a very sour taste for NVIDIA. It's not their fault per se, but they did opt to go this route rather than contributing their drivers to the kernel like everyone else.
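When it does go wrong, the usual recovery -- after booting an older kernel from GRUB's "Advanced options" entry -- is to ask DKMS to rebuild. A sketch, with Debian/Ubuntu-flavored commands:

    dkms status                  # shows which modules are built for which kernels
    sudo dkms autoinstall        # rebuilds missing modules for the running kernel
    sudo update-initramfs -u     # refreshes the initramfs so the next boot picks them up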
In the future, or even today with immutable distros, you'll be able to roll back the entire OS the same way you can roll back the kernel, undoing any change that breaks the system.
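Fedora's image-based variants (Silverblue/Kinoite) already work this way: the previous OS image is kept around, so undoing a broken update is, as a sketch:

    rpm-ostree rollback    # make the previous deployment the boot default
    systemctl reboot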
This is the exact class of problems that pushed me into Apple's waiting arms. All (well, most; all of what I care about) of the flexibility, ease of use, and efficiency of Linux, stuffed into a gorgeous, well-engineered product.
I wish I could fix the things, I'll fully cosign the beef the hacker community has for Apple on that front. That being said, I don't see myself buying a Windows laptop anytime soon.
Do you (or anyone else) know if this is related to that famous clip of Linus Torvalds saying "fuck you" to nvidia? I get the impression that nvidia has not prioritized linux at all really over the decades.
I don't know about the anecdote, really. But the kernel ABI is only stable for userspace; anything kernel-side is not. That means recompilation when things change. So most drivers are in source form in the kernel, or at least provide shims to firmware. AMD and Intel drivers are part of the kernel; NVIDIA's are not. So with every update, you're praying that the stars align and your card can be used without crashing the system.
Counter-anecdote: I have an Ubuntu install that never had a problem for -years-. The last time it would not boot, it was my fault for trying to do something stupid (I wanted a new kernel for which the VirtualBox extension does not work, and ended up enabling it without generating the initramfs first).
Why was this posted to HN? There is nothing new or original in the setup presented. People have been self-hosting all kinds of things on commodity hardware for decades, even things said to be "impossible" to self-host like email.
Also, nobody should be buying an (overpriced) Raspberry Pi for self-hosting, when used mini-PCs are faster, more reliable (no SD card, better cooling), and often cheaper.
Finally, I don't think you should use Proxmox in a home setting: too much abstraction, too much overhead (mainly memory). Use Docker where it makes sense, and deploy the rest bare metal.
There's nothing new or original in a lot of things that get posted here. Reading about someone starting a journey provides an interesting catalyst for discussion. What they did right, what they did wrong, other things to try, or even just providing a push to someone else to also try.
I'll take my turn on the soapbox to say I hope people keep posting about their adventures and misadventures in trying something new. I'd much rather be reading that than seeing yet another post on LLM-based agentic startups or pelicans riding bicycles.
Thanks for the recommendation! I am aware of this repo and hope to try some of the projects mentioned when I have the time. For now I am quite happy with my current setup.
>There are still issues with driver and software compatibility but it is getting better in the recent years thanks to projects like Wine and Proton.
OP, stop using outdated Linux. Debian is intentionally outdated. You will never have a good experience with drivers when you are always on 2-year-old kernels and software. 99.9% of humans think the word 'stable' means bug-free, but that isn't what Debian means by it.
I recommend Fedora, which is not Arch; it's just up-to-date Linux.
The current "stable" distribution of Debian is version 13, codenamed trixie. It was initially released as version 13.0 on August 9th, 2025 and its latest update, version 13.3, was released on January 10th, 2026.
So as of today the latest "stable" release of Debian is a month old.
By contrast, the latest stable release of Fedora is Fedora 43, released on October 28, 2025, which is about four months old at this point.
Really, once you have software that works, all of this is pointless anyway: you have working software, and you update once a year or so, or when you find you need to.
When you "need" to update is so personal that it cannot be predicted, but your FUD about Debian being universally old and outdated is clearly misleading at best and deliberately misleading at worst.
You are getting too worked up about this, not to mention cherry-picking.
Debian Trixie, to my knowledge, comes with Linux kernel 6.12 LTS. Many people with more modern hardware want the most modern Linux kernel -- currently 6.18 -- to support their devices. There are also countless stabilization patches (I heard some acquaintances praising their Linux kernel upgrades for finally giving them access to all the features of various Bluetooth peripherals, but I did not ask for details).
Having a modern kernel is important. With Debian, though, it's friction.
Can it still be done? Sure, or at least I hope so, as I want to repurpose my gaming machine as a remote work station and the only viable choice inside WSL2 is Debian. I do hope I can somehow make Debian install a 6.18 kernel.
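What I'm hoping works, as a sketch: with the trixie-backports repo enabled, a newer kernel should be one metapackage away (whether it's 6.18 specifically depends on what the archive carries at the time):

    sudo apt install -t trixie-backports linux-image-amd64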
Furthermore, you putting the word "need" in quotes implies non-determinism or even capriciousness -- both could not be further from the truth.
Arch and Fedora can't come to WSL2 soon enough.
...and none of that even touches on the much older versions of all the software in there. I want the latest Neovim, for example, for objective developer-experience reasons.
Debian stable is for purists or server admins. Not for users.
No. I just see the same person in this discussion making multiple posts saying "Fedora is modern, fedora is good, stable Debian is broken, old, and wrong".
Of course my reply is a little mechanical and biased because I'm refuting a strawman.
Suggesting that Debian's stable release is no good for users, when I'm sitting here using it, as many, many other people do, is crazy hyperbole!
Maybe you can show me that person and their claims so we can work with them?
Because I'm not that person.
Sure I said users and not programmers. Sue me.
I was criticizing Debian's model. I'll be getting Arch or a derivative on my main machine, but for WSL2 (a secondary machine that is for now stuck on Windows) I don't have much choice, so I'll have to work with a distro where I'll be actively working against how it normally operates. I'll handle it, but it doesn't need to be that way.
You don't lose stable. It will only install the package you select, plus its deps.
Also, the terminal is the main interface for Linux and the BSDs. Why is having to learn it a negative? A computer is not a toy. You don't drive a truck with no training.
Or just understand that Debian stable can be moved to Debian testing (or even Debian unstable, if even two weeks is too long) trivially. The best decision Debian ever made was not to distribute or advocate for testing as a rolling distribution, because if you're too ignorant to change your repo to testing, you're really too ignorant to be using testing.
Admitting that getting 6.18 on Debian is some sort of insurmountable mountain is not something I would do in public while trying to show off my expertise. I'm not running it, because I don't need a kernel that's been out for five minutes and offers me nothing that can't wait a month or two. I'm running what's current on testing, which is 6.17.13. It's about a minute of work to switch to testing. I run stable on all my servers and testing on my laptops; it is a triviality. But to all you bleeding-edge software people, it's somehow rocket surgery.
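To spell out that minute of work (assuming the classic one-file sources.list rather than the newer deb822 style):

    sudo sed -i 's/trixie/testing/g' /etc/apt/sources.list
    sudo apt update && sudo apt full-upgrade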
> Many people with more modern hardware want the most modern Linux kernel
To run the latest version of Progress Quest. Need biggest number available.
> Arch and Fedora can't come to WSL2 soon enough.
So, it's really still Windows, then. I assume you've moved on from spending years ranting about how Linux people were purist server admins while Windows was for users and just worked, and have now adopted the same posture after being pushed out of Windows.
> Debian stable is for purists or server admins. Not for users.
You're not a typical user. Most users want a functional computer, not the largest numbers they can find.
>Admitting that getting 6.18 on Debian is some sort of insurmountable mountain is not something I would do in public while trying to show off my expertise.
I genuinely don't care to show off expertise. I just want a distro that works.
I'm really not sure what made you so rude, but I'm not participating. You're intentionally misrepresenting me -- I didn't say a single one of the things you criticize -- yet you have the gall to talk about what you'd say in public.
Just one suggestion: I would put the lab network on a separate VLAN and access it through a VPN (or Tailscale, Netbird, etc.). That way you don't take on any security risk, and only you can access it once you're authenticated to the network. Even if you want to expose a service to the public, you can do so via a reverse proxy or service-specific features like Tailscale's funnel, replacing DDNS and port forwarding while keeping things secure.
The issue isn't WireGuard; WireGuard is secure, and the services listed above are built on top of it, though they make things easier with centralized management. The issue is with DDNS: there are many security problems with it (1), along with port forwarding; you can look them up online, but the short answer comes back to the fact that you are exposing an internal service to the public internet. All it takes is some crawler/mass scanner/etc. to find your running service and poke at it. So, for example, if your home server runs a CCTV or network storage service accessed through DDNS, and an attacker finds an exploit in that service, all your data is now at their mercy. The best risk-management strategy is always avoidance, not mitigation, so if you can avoid the risk entirely by never exposing things online and only accessing them through an internal VPN, maybe plus a reverse proxy, then you are set.
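With Tailscale, for example, the difference between tailnet-only and deliberately public is one word (recent CLI syntax, port illustrative; check `tailscale serve --help` for your version):

    tailscale serve --bg 8096     # HTTPS to localhost:8096, reachable from your tailnet only
    tailscale funnel --bg 8096    # same, but reachable from the public internet via Tailscale's relay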
Like many here, I also run a few things at home, and I've got a variety of machines: a good old Pi as an unbound DNS server that is on 24/7; an N100 modern NUC, running headless, to stream movies/music (wife or kid or I turn it on when needed); and a good old rock-solid HP Z440 workstation which I use as a server (ECC RAM and 14 cores, yummy) for Proxmox/VMs/Docker/ZFS, etc.
The workstation which I use as a server is only powered up when I need it.
> I used Syncthing for file synchronization and PiHole as my local network DNS server to block unwanted incoming traffic.
Nitpicking, but a local DNS resolver doesn't block unwanted incoming traffic: it prevents unwanted domain names from resolving. Arguably, if it blocks any traffic, it's outgoing traffic that it blocks.
Maybe he meant that the machine running PiHole also runs a firewall? I use unbound, not PiHole, so I'm not that familiar with it (maybe PiHole also acts as a firewall?).
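To make the distinction concrete (server IP, domain, and firewall rule are all illustrative):

    dig @192.168.1.2 ads.example.com    # a DNS sinkhole answers 0.0.0.0: the name just stops resolving
    # actually blocking traffic is a firewall's job, e.g. with nftables
    # (assumes the 'inet filter' table and 'input' chain already exist):
    sudo nft add rule inet filter input tcp dport 8080 drop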
Any would-be switchers, note these are two very different things. Self-hosting sucks, period, for a million reasons that have nothing to do with the OS :) Trying both at once is going to be exponentially more painful than doing either alone.
Half-assing self-hosting sucks, regardless of the underlying platform. You tie things together with shoestrings and gum, leaving ticking timebombs and riddles to your future self.
This is the point where I'm supposed to describe my self-hosting solution on my so-called homelab, where my blog lives. I won't, because it's both stupid in smart ways and smart in stupid ways, therefore it sucks all the way.
Self-hosting is like any hobby. Half-ass it and you'll half-like it.
Self-hosting sucks, yes. But I'll be damned if it isn't fun. It's definitely not for everyone. Not only is it not for everyone, but it is not for everyone's families. I'm so grateful that my wife is willing to put up with me experimenting on our home network, trying out different apps, waiting for me to resolve a DNS issue because I forgot to assign a static IP to my pihole.
I feel like it sucks a lot less than it used to. New apps like Immich work incredibly well, new tools like Tailscale make security and access to the home network easy, and LLMs make solving problems and getting ideas a lot easier.
I think it's fair to say it depends on what you are hosting and why. Personal projects and curiosities: fun! Self-hosting business-critical data and customer data: minefield! I'm currently self-hosting some little projects, using Cloudflare Tunnels to serve them to the web, and it is surprisingly fun and efficient!
In my experience, containerization has made self-hosting most software a breeze. The biggest pain points I've come across relate to network architecture and security: I've frequently run into issues with certificates, proxy setups, DNS, etc. Much of that seems to stem from how many modern web concepts were not designed to easily support offline-first environments. Then again, that stuff has never been my area of expertise.
For me, I've decided to just put everything behind a VPN. Tailscale and Cloudflare Tunnels make this quite easy to set up, dealing with DDNS and CGNAT for you.
The upside is that the security risk is massively reduced: an attacker would have to exploit both the VPN and the service behind it, both of which are in theory secure anyway. The downside is obviously that you need a VPN client installed to access your services, but if it's only you using the server this isn't a huge deal.
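If you'd rather self-host the VPN itself, plain WireGuard isn't much more work. A minimal sketch, with keys and addresses as placeholders:

    # /etc/wireguard/wg0.conf on the server:
    #   [Interface]
    #   Address = 10.8.0.1/24
    #   ListenPort = 51820
    #   PrivateKey = <server-private-key>
    #
    #   [Peer]
    #   PublicKey = <laptop-public-key>
    #   AllowedIPs = 10.8.0.2/32
    sudo wg-quick up wg0                  # bring the tunnel up
    sudo systemctl enable wg-quick@wg0    # keep it up across reboots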
I love learning that stuff too, until I learn what it takes to maintain it. At least internet facing stuff. I fully gave up on email a decade ago. That alone crushed all my excitement for the general idea. I have a local media server, but it's also just my main desktop.
Hot take alert! As an avid self-hoster, I'd like to hear why.
Personally, I self-host because the benefits I receive simply aren't available anywhere else at the level of quality I've come to expect: Jellyfin is a great media player, it's free, and I don't want to switch. Pihole provides ad protection and privacy for my whole home network; it's also free. Home Assistant is amazing, and free. Etc., etc.
Only if you don't care about your time or if your media collection is tiny.
Don't get me wrong, I love my 20 TB hard drives full of Linux ISOs, but it's a hard sell for anyone who doesn't have "dicking about with computers" as their hobby. Regular old piracy using torrents has been an easier sell in my experience, once you get over the hurdle of familiarizing someone with a torrent client and the relevant search bar. Popcorn Time back in the day made that hurdle trivial. Getting people to use Jellyfin isn't hard. Getting someone to be the family/friend-group Jellyfin sysadmin is a significantly tougher sell.
Pihole and the like are an easier sell, since they can be mostly set-and-forget, but it's not free unless you already have a computer which isn't doing anything, and even if you do, that computer isn't guaranteed to have near-zero running costs once you factor in electricity.
The same sorts of problems apply to most things you can self-host.
I don't think many people advise non-tech people to get into self-hosting, but there are a lot of people who do enjoy messing with computers, and that's who these articles are marketed to.
The average user will only self host when it's a managed box they plug in and it just works. Like how Apple/Google home automation works. Maybe we will see managed products for photo / file syncing pop up.
> I don't think many people advise non-tech people to get into self-hosting, but there are a lot of people who do enjoy messing with computers, and that's who these articles are marketed to.
I agree, but even I, someone who does have this as a hobby and does self-host a few things, have my limits for the same reasons that the casuals do. Even when I have a computer that I can use for one more purpose, I rarely do that unless I know it will be set and forget, since having one more thing to deal with in my already overburdened life is a hard sell.
> The average user will only self host when it's a managed box they plug in and it just works. Like how Apple/Google home automation works. Maybe we will see managed products for photo / file syncing pop up.
Very true. I do hope products like that appear, but the workflow and UX will have to be damn near perfect, something home automation often isn't (unless you use Home Assistant and thus have it as a hobby; funny how that works).
I have the IKEA home hub and it is basically maintenance-free. There is no need for, or even a UI for, updating/managing/reinstalling the OS. It just works and has been just working for years.
For other systems, like photos and file storage, the main complication would be backups, which fall back on the user. If your home-automation hub dies, you just chuck a new one in and re-pair your lights. If your photo server's drives die, it's a disaster. Realistically you'd want a backup copy in the cloud, which would lead many casual users to wonder what the point of hosting the server is if you still need to pay for the cloud backup.
I can't speak for Jellyfin, as I currently use Plex. But it truly has been "set it and forget it" for me. I've never had an update break things, it just does its job and does it well.
It depends. Do you have the time to maintain a few things, or sometimes more than a few? Do you have the skills, or at least the willingness to learn? Are you willing to do some basic contingency planning in case stuff goes wrong? What kind of services are you replacing? If email, probably not worth it; storage? Definitely. Do you have a reliable internet/power combo so you can reach your lab remotely? Say you are traveling and want to grab a copy of some document from your NAS; the last thing you want is to be unable to, because your power is down.
So it really depends on the use case and many factors. If it works for you, great; otherwise, if you're willing to pay for some subscriptions, then so be it.
As I had zero plans to move to Windows 11, I've spent the past few weeks looking into which distros are popular nowadays. Today I tried CachyOS and Aurora (the non-gaming version of Bazzite) in VirtualBox, and after 5 minutes I knew that after 30 years of using a computer with Windows (DOS, then W95 and onward), Linux is still not there yet on the desktop. I just can't believe how they still can't get the most basic things right. Yet here we are.
And yes, you can game on Linux nowadays, finally! You even get better performance, thanks to Windows bloat. Office, OBS, internet, video... everything is working... yet it still is not there in usability.
To be specific, what irked me today when I tested them was installing new programs. On Cachy, I wanted to test the JetBrains IDE. The last time I tested it was on SUSE and Fedora in VirtualBox last year; it worked, but neither distribution was there just yet in UX. This time, I downloaded the tar version from the JetBrains website. I could not open it (maybe due to it running in live-CD mode in VirtualBox) or extract the content (no option in the file manager or a decompression program) in Cachy. So I wanted to get 7zip, but there was no Linux version. Cachy has its own packages that can be opened (website) via its welcome screen (otherwise there is no program manager -- no snaps, Flatpaks...), and after downloading one with some Arch file extension I could not install it. I could open it and see usr and bin directories, but that helped me fuckall, and I was not willing to tinker with this bs in 2026. Then in Aurora: it has Bazaar for Flatpaks, and before wasting bandwidth downloading the IDE in vain again, I preemptively wanted a zip manager; there was Pea-something. So I clicked install, it did, and... nothing. Nowhere to be found. Tried multiple times, no result. Could not find it anywhere. So I said F that and am sticking with the indian windows spyware. The devil you know and whatnot.
>So I wanted to get 7zip, but there was no Linux version.
Maybe Cachy doesn't have 7z, I don't know. But Arch (its base) has it: https://wiki.archlinux.org/title/7-Zip , and I've never had any trouble opening 7zips on Linux, whether from the console or any graphical tool.
Thanks for your review of five minutes of using Linux after 30 years of Windows use. Sorry to hear that zero effort didn't work out, but what can you do.. it's just not ready for the desktop. Enjoy Windows 11.
Use a standard Linux distro: Kubuntu if you like KDE (Windows-like), or Ubuntu if you like GNOME-like. Stay away from those ultra-customized, time-bomb distros and you will be fine. That's assuming your desire is sincere.
> So I wanted to get 7zip, but there was no Linux version.
Of course 7zip has a Linux version; p7zip has been in Linux package repos for ages, and there are official Linux builds now too. I'm also pretty sure your problem is that you were looking for a 7zip GUI, because I don't even remember installing 7zip the last time I installed.
Stop using weird distros and just install Mint or something basic. If you're not a power user and you don't want to do power-user things, don't pose as one. Mint and Ubuntu are made for handholding people who are afraid to type, and they give you tools to avoid having to do it.
Or, instead, you can realize that if you had learned how to use the command line in Linux 25 years ago, those skills would still be useful. If you learned Unix on a mainframe, you could still figure out what to do. It's not a wasted investment, unlike all the time I wasted getting good at .BAT files.
And typing "apt install 7zip" isn't exactly hard. Or "7zz x [myfile.7z]" (the Debian package installs the binary as 7zz).
Ubuntu Desktop 24.04 LTS: kernel 6.8 will work on older GPU/laptop hardware, but the OS will be deprecated in 2029.
Ubuntu Desktop 26.04 LTS will be out in a few months: it will be supported until 2038, but note that old GPU drivers may not work on more modern Linux kernels (above ~6.15).
The normal Ubuntu Desktop requires a few days to make it usable, and a lot of customization to make it enjoyable. However, network printer and webcam access is usually trivial to set up. Google your equipment's install reports before you buy... YMMV.
Dual-boot from two SSDs if you need to work on the machine. You will swear less when (not if) you break something, and not everything Windows runs in Wine or KVM. =3
I know that. I run my own Linux servers and I know how to use bash. But I specifically do not want to be doing any of these things on the desktop. I was hoping, after such a hype wave for Linux due to W11 being utter crap, that things had gotten better. They have not. And yes, of course, this is just an N=1 experience.
It seems like your experience with Linux may have actually sabotaged your ability to point-and-click install things.
KDE has the Discover app, which does what it looks like you want (including installing IntelliJ with one click). [1]
There's also Bazaar for GNOME, which offers similar things. [2]
Ubuntu also offers the Snap Store, which similarly offers one-click installs of apps. [3]
The mistake you made was going directly to the app distributors for installation. Because there's no unified Linux, it's impossible for app distributors to offer a single way to install their apps; they can't count on your PC having anything in particular. That's why IntelliJ is distributed as a tar.
This, however, is typical in Linux. Using a package manager is how you do things on a standard Linux system; those package managers have just typically been run from the command line.
This is understandable; you want everything to be point-and-click and go. But I doubt your mindset matches that of the community, so unfortunately it may take a while...
Maybe try something more commercial like Zorin OS?
You seem to have missed a couple of things that caused you a bit of a headache here; I'm hoping I can encourage you to try again with a little bit of info. I've been using Linux for as long as it has existed, I'm a backend dev who works on a Linux machine and targets Linux-based platforms for deployment, and even my kids use Linux. Windows went downhill for me after about Windows 2000, and Linux has only gotten better.
> yet it still is not there in usability
I want to wholeheartedly disagree with you. Nothing comes close to Linux in terms of usability for me, but a lot of it is about what you're used to. I've used Windows, I've used Mac; Mac I could live with, but I'll never intentionally use Windows again.
> To be specific, what irked me today when I tested them was installing new programs. On Cachy, I wanted to test the JetBrains IDE
Ok, let's begin; this one is partly JetBrains' fault, and partly yours.
You can open a terminal and type `paru jetbrains-toolbox`, hit enter a couple of times and it's installed.
Don't know what `paru` is? I recommend reading the frankly excellent documentation from CachyOS[0].
> or extract the content (no option in the file manager or a decompression program) in Cachy
You didn't specify which Desktop Environment you chose; this is important when helping newcomers, because each comes with its own set of tools. In GNOME (what I use), the file manager, called Nautilus, lets me right-click almost any archive type and presents "Extract", "Extract to...", and a few other options. I just looked up how KDE does it, in case you're using that: the file manager is called Dolphin, and apparently you might need to install an archive tool first, such as Ark and/or 7zip. Gotta give you that one; I'm a little shocked, that's a pretty shitty OOBE in my opinion. A quick search finds the solution here[1], but they say to use `apt install...`, which you don't have on an Arch-based distro. Once you know which file managers you actually have, it should be easier.
> So I wanted to get 7zip, but there was no Linux version
There certainly _is_ a Linux version. `paru 7zip` gives me at least 3 legit options: the base package, an architecture-optimized package, and a GUI for it, as well as a dozen or two community options. You can also try the standard Arch package manager, aptly named pacman: `sudo pacman -S 7zip` installs it for me after I hit enter to confirm; I don't even need to choose the package. Wtf is `sudo`? That's how Administrator is typically done in Linux.
> Cachy has its own packages that can be opened (website) via its welcome screen (otherwise there is no program manager -- no snaps, Flatpaks
On GNOME there is "Software", which supports Flatpaks as well as other package types (don't worry about snaps, you don't want them), and there's Octopi from CachyOS. In KDE there's a GUI called "Discover". There are a bunch of others, such as Bazaar, which you mentioned.
Usability really isn't an issue in Linux once you know the way of your distro. If you're used to Windows, then it's _different_, sure, and in that case I'd suggest taking an hour to read the CachyOS docs; the Arch Wiki (CachyOS is based on Arch) is also an amazing resource for all things Linux. Learn a little about how software management is different: we don't (usually) pull random crap from websites; we install from package managers, and sometimes compile the source ourselves.
If you didn't choose one of the two DEs I've mentioned (GNOME, KDE), I'd recommend giving them a go; they're both very mature and usable. If you're into Discord, I can suggest hitting up the CachyOS or another distro's Discord server; there are lots of helpful people there willing to help. If you have any other questions, give me a shout.
>I just looked up how KDE does it, in case you're using that: the file manager is called Dolphin, and apparently you might need to install an archive tool first, such as Ark and/or 7zip. Gotta give you that one; I'm a little shocked, that's a pretty shitty OOBE in my opinion [..]
I think that's an artifact of running just the live image and not installing it fully to the VM. I'm 99% certain Ark is included in a default Cachy/KDE install.
This is precisely what it is. I hopped on my CachyOS install that I set up a few days ago, and I can double-click the IntelliJ IDEA tarball from Dolphin and Ark pops up. I didn't install these manually; they came pre-packaged with KDE.
Thanks for the reply. I used KDE. And sure, GNOME's Nautilus might have worked, but I have a huge distaste for it (it's pretty, but omg what a pain to set up to be an actual DE, let alone out of the box).
Anyhow... "You can open a terminal and type"... yeah, no. This is exactly what I or any other Windows/desktop user does not want to be doing on a desktop computer. Linux always promised to get rid of this "just use the terminal, bro" stuff -- that's what being a desktop OS is all about -- but it never got there, it seems.
The premise of my test was to see whether the OS is ready out of the box (the main point of a Linux distribution, after all). Neither was. Again, I am not saying it is not usable. I am just saying it requires more work from the get-go than I am willing to put in, despite having the skill and knowledge to do so.
> Anyhow... "You can open a terminal and type"... yeah, no. This is exactly what I or any other Windows/desktop user does not want to be doing on a desktop computer.
Then stay on Windows. You'll have the same issues with macOS from time to time.
If you're willing to learn things you have plenty of options. If not? You'll be limited. Tradeoffs.
I really don't want to be the RTFM guy, and I (mostly) apologize for the tone here, but I don't want this FUD to go without criticism.
1. There are actually TWO GUI package installers in Cachy.
a. Cachy Package Manager (the one that you get from Hello)
b. Octopi (as shown from their install instructions at their wiki, pasted below).
"Octopi is a graphical package manager for Arch-based distributions that provides a convenient way to manage packages and updates. To update your system with Octopi, follow these steps:
Launch Octopi from the application menu.
In the main window, click on the Check updates button (Top left), now next to it System upgrade.
Octopi will now check for available updates and prompt you to either install them on Octopi itself or in a terminal.
To proceed with the update, click the Apply button.
Octopi will download and install the updates.
It is advised to reboot your computer after a big update (especially if the kernel got an update)."
--------
Why are you not actually using the tools the wiki and install instructions tell you to? You wanted to search for PeaZip, and you downloaded an Arch package file. You don't download the file; you use pacman or paru (or yay).
pacman is the core Arch repo tool (think apt or dnf). It's only for the official repos (IIRC).
For AUR* you need something like yay or paru.
* Arch User Repository: for applications not in the default repos, contributed by end users. It requires a bit more attention, since anyone can add an AUR build script.
paru $PKG_NAME will give you a list of all possible packages matching that name, and usually the one you want is at the top of the list. It tells you which repo each comes from: cachy, extra, base/arch, or aur (non-official builds contributed by users).
Can you at least try to put forth an effort? You're looking in all the wrong places (going to JetBrains instead of Cachy's repos... just use the repos).
This is the laziest attempt at "I know what I'm doing and I don't need to try, therefore it sucks and isn't ready." If you had bothered reading the install instructions and following along, it's not that bad.
I could use your logic against Apple, since I'm not familiar with their software and it doesn't operate the same way as Windows. But I don't blame macOS for not operating the way Windows does. If you want a Linux that's more like Windows, then use Mint or Fedora or Ubuntu or something.
I fail to see how any of these are "failing to get it right" rather than you just wanting Windows and what you're used to, having grown up thinking "this is how it should be" while ignoring the fact that it's that way because you were exposed to it for X number of years.
Do I think Linux is perfect? No of course not, nothing is, no operating system is.
Maybe it's not for you, but saying "it's not ready" is a bit of a joke when you've taken two very specialist distros and used them to act as if it's a Linux issue.
It seems you want convenience, but when you get it you complain about "indian spyware"
First - Convenience is the Enemy. If all you want is convenience, enjoy the "indian windows spyware". Frankly, just use a distro that isn't so specialist/geek/gaming-heavy.
Try Ubuntu, Mint, Fedora, Debian or whatever.
Second - Knock it off with that "indian" bullshit. Sorry, I know I'm supposed to be positive and not pick fights here, but I'm not the one making racist insults/implications.
The "UX" issue seems to be more PEBKAC than actual issues of the software.
Sorry, but this is just FUD from someone who wants the easy way forward instead of adapting to a new system/paradigm and learning how it works. There is nothing wrong with preferring that, but don't blame Linux for your unwillingness to read and follow directions and for not having everything handed to you on a platter. (Frankly, there is SO much more and better documentation about Linux on the Arch Wiki, across all distros, than you'll find trying to solve Windows issues on a million other forums with "Microsoft Ambassadors" or whatever giving you the most basic, useless info.)