• 0 Posts
  • 72 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • My point is that the different levels of "just working" are subjective, not objective. I personally have spent far more time fixing bugs or just reinstalling Ubuntu systems than I have over the same period for Arch systems. So many of my Ubuntu installs just ended up breaking after a while, whereas I have had the same Arch install on systems for 5+ years now. I could never get an Ubuntu system to last more than a year.

    Everyone has different stories about the different OSs. It is all subjective.


  • nous@programming.dev to Linux@lemmy.ml · Windows doesn't "just work" · edited 16 hours ago

    You can cherry-pick examples of problems from every OS - that is my point. They all have issues that you may or may not encounter, and quite a few that would make people from other OSs scratch their heads and wonder what the hell the devs were thinking. Pointing out one issue with one OS does not change any of that.

    Which is proven by the other replies to your comment: others don't find this issue as show-stopping as you do and either live with it or don't use the feature at all. How many issues do you treat the same way on your favorite OS?


  • There is no perfect OS that just works for everyone. They are all software, so they all have bugs. People who say an OS just works have never hit those bugs, or have gotten used to fixing, working around, or flat-out ignoring them.

    This is true of all OSs, including Windows, Linux and macOS. They are all differently buggy messes.

    Linux is the buggy mess that works best for me though.


  • Realtime support matters on fully fledged workstations where timing is critical - which is the case for a lot of professional audio workloads. Linux is now another option for people in that space.

    Not sure Linux can run on microcontrollers. Those tend not to be powerful enough and run simple OSs, if they have any OS at all. Though this might help the embedded world a bit by increasing what you can do with devices that have a full system-on-chip (like the Raspberry Pi).
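
    If you want to check whether the kernel you are running is a realtime build, a quick sketch (the version string format varies by distro, and /sys/kernel/realtime only exists on PREEMPT_RT builds):

    ```bash
    # Look for PREEMPT_RT in the kernel version string
    uname -v | grep -o 'PREEMPT_RT'

    # On PREEMPT_RT builds this file exists and contains 1
    cat /sys/kernel/realtime 2>/dev/null
    ```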



  • And how did you, an advanced Linux user, get to the stage you're at now?

    Incrementally, over time: by reading the documentation and/or manuals of the commands I need to run, and by looking up how others solve the same problems to get different ideas (even, periodically, for things I already know how to do, to see if anyone has found a better way or a new tool has come out that helps). And by trying things out and experimenting with different approaches to find out what works well and what does not.
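
    Concretely, the loop looks something like this (rsync here is just an arbitrary example - substitute whatever command you are learning):

    ```bash
    # Read the full manual for the command you are about to run
    man rsync

    # Most tools also print a quick built-in summary
    rsync --help

    # Then experiment somewhere safe; -n makes it a dry run that changes nothing
    rsync -avn --delete ~/projects/ /tmp/test-backup/
    ```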


  • nous@programming.dev to Linux@lemmy.ml · How to distrohop!? · 2 months ago

    Huh? You seem to be arguing both ways. If the system partition is full you have problems well before you risk losing data, and if the home partition is full you have problems saving data. Both of these things can happen in a split-partition or single-partition setup. A split setup just means you have to predict the sizes correctly up front, or end up doing lengthy resizing operations to juggle space around. A single partition gives you more places to free up space when you do run out.

    Need to save a file but the disk is full? Clean out the package manager cache - which does nothing for you if that cache lives on a separate partition. An update does not have enough space? Delete a Steam game or clear out your downloads folder.
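
    For example, the usual space-recovery commands (which ones apply depends on your distro; these are just common cases):

    ```bash
    # Debian/Ubuntu: drop cached .deb files and unused dependencies
    sudo apt clean
    sudo apt autoremove

    # Arch (paccache is in pacman-contrib): keep only the newest cached
    # version of each package
    sudo paccache -rk1

    # See what is actually eating the disk
    sudo du -xh --max-depth=1 / | sort -h | tail
    ```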

    Ext filesystems also have a reserved-space option: when free space drops below that threshold, writes are refused for everything but the root user. It is meant to stop a user from consuming too much space - there is always a reserved slice so the system can do what it needs to. Though I have never seen this configured thoughtfully on a running system; root can blast past the default 5% on smaller drives with a simple update, or some other process running as root is already consuming that space.
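
    The setting lives in the superblock and can be inspected and changed with tune2fs (replace /dev/sda2 with your actual ext2/3/4 device):

    ```bash
    # Show the current reserved block count
    sudo tune2fs -l /dev/sda2 | grep -i 'reserved block'

    # Reserve 1% for root instead of the default 5%
    sudo tune2fs -m 1 /dev/sda2
    ```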

    Other filesystems like btrfs have proper quotas that can be set per subvolume to prevent this type of issue, and they give you a lot more control over the allocated space without needing to reboot into a live USB to resize partitions.
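
    A minimal btrfs quota sketch, assuming /home is a btrfs subvolume mounted at /home:

    ```bash
    # Enable quota tracking on the filesystem
    sudo btrfs quota enable /home

    # Cap the subvolume at 100 GiB - no repartitioning, no reboot
    sudo btrfs qgroup limit 100G /home

    # Show usage against the limits
    sudo btrfs qgroup show -re /home
    ```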

    People seem to think a split partition helps, but I have generally found it causes more problems than it solves, and there are now better tools that actually solve these problems in more elegant ways.


  • nous@programming.dev to Linux@lemmy.ml · How to distrohop!? · 2 months ago

    You don't actually require a separate partition - you just need to not reformat the current one when reinstalling. Most distros I have seen will delete the system folders if you don't format, but will always leave the home folder intact. Manually deleting the system folders is also an option if the installer does not.

    TBH I am not sure a separate partition actually buys you anything but false confidence (which we do sometimes need ;) ). During the partitioning phase you can easily delete or format the wrong one (hell, if you only have one then it is less error-prone to skip that step altogether). And after that step the drives are mounted and there is nothing protecting your files from the installer deleting them. It is just that installers don't touch the home folder or anything other than the system folders, whether everything is on one partition or 50 different ones - the installer just sees the files in the directory it wants to install to. The only way a separate partition would add protection is if it were mounted after the install - and I do not know of any installer that actually does that.

    As with anything: ALWAYS back up the data you care about before installing a new OS. A separate partition does NOT protect your data from deletion in any way. Leaving your home folder intact is simply a convenience so you don't need to restore all your files after the installation - not a replacement for a backup.
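
    A minimal backup sketch before a reinstall (assumes an external drive mounted at /mnt/backup - adjust the paths to your setup):

    ```bash
    # Copy your home directory, preserving permissions, hardlinks and xattrs
    sudo rsync -aHAX --info=progress2 /home/ /mnt/backup/home/

    # Sanity-check that the sizes roughly match afterwards
    du -sh /home /mnt/backup/home
    ```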


  • nous@programming.dev to Linux@lemmy.ml · How to distrohop!? · 2 months ago

    > helps with issues like running out of diskspace

    Or causes that problem, if you don't manage to predict your usage patterns correctly. I have seen many people run out of space on one partition or the other while having plenty free overall - people who would not have had a problem with a single partition.


  • Linux From Scratch (aka LFS) is a set of documentation and resources that describe one way to build everything on a Linux system yourself. It is not the only way though. Embedded systems are one place you might build every image from scratch, but if you go down that route you are typically using something like yocto or buildroot, which are designed to compile simple embedded distros for specific projects using an existing system for the build process. These are useful because embedded systems are often resource-constrained, so you don't want to include things that are not required, and they often target different architectures than the host system (such as ARM CPUs).

    These days there is very little commercial purpose in creating your own distro from scratch outside of embedded systems. It is a lot of work and generally not worth the effort unless building a distro is the point of your business - and even then you had better have a good reason why using an existing one as a base is not a good idea. Packaging everything for a general-purpose distro is a lot of work with very little benefit for a company. It is vastly easier to use what others have done as the base until you can justify the expense of managing everything yourself (if that ever makes sense).

    So the only real reason to go down the route of building a distro from scratch is if you have a new or different idea about package management. Arch Linux did this with pacman, Gentoo with emerge, Alpine with apk, and NixOS with nix. These types of things typically start out as hobbyist projects and grow from there, rather than with a commercial intent in mind.

    The only other thing that makes sense is a very high threat model, for security reasons - think nation-state-level actors, not your everyday home user. You may want to build everything from scratch if you need to absolutely trust everything on your system and have the time and resources to do so.




  • Generally speaking you shouldn't be poking around inside running containers; it is rare that I have ever needed to do that. If you want to inspect the contents of an image, tools like dive are helpful. If the container produces some useful output that you might need, put that into a volume - you can then mount the volume into a debug/inspect container and read the files without messing around with the rest of the container.
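
    For example, reading a container's output through a shared volume with a throwaway inspect container (the volume and image names here are made up for illustration):

    ```bash
    # The app writes its output into a named volume
    docker run -d --name app -v app-data:/output my-app:latest

    # Mount the same volume read-only into a throwaway container to inspect it
    docker run --rm -it -v app-data:/output:ro alpine ls -l /output
    ```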

    Shell-less containers are a great security feature - it is extremely hard to get a reverse shell on something that does not have any shell. And if you must have a shell to debug something, docker already has a feature for that, docker debug, which works for shell-less containers as well.
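
    docker debug attaches a separate toolbox with a shell to the container, so the image itself can stay shell-less (as far as I know it currently requires a paid Docker subscription):

    ```bash
    # Attach a debug shell to a running container that ships no shell of its own
    docker debug my-container
    ```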



  • Don't have a knee-jerk reaction to every news post that you see. We have yet to see what will happen, and you will have loads of time to decide what to do once we know whether it will get pulled. You will be able to keep using your current kernel version for as long as you need, even if it does get pulled from the next one. So I would wait and see what actually happens.

    The best option is likely a reinstall of your OS to move off it, though there are more involved ways, like copying your rootfs off, reformatting, and copying it back before reinstalling your bootloader. A reinstall is likely going to be quicker though.
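
    The more involved route looks roughly like this from a live USB - a sketch only, since device names and the bootloader step depend entirely on your setup:

    ```bash
    # Copy the old root filesystem off to another disk
    mount /dev/sda2 /mnt/old
    rsync -aHAX /mnt/old/ /mnt/external/rootfs-copy/

    # Reformat with the new filesystem and copy everything back
    mkfs.ext4 /dev/sda2
    mount /dev/sda2 /mnt/new
    rsync -aHAX /mnt/external/rootfs-copy/ /mnt/new/

    # ...then chroot into /mnt/new and reinstall the bootloader
    # (e.g. grub-install - the exact steps are distro specific)
    ```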


  • Not anymore, for all the reasons I mentioned. Has the experience changed in recent years? Not likely. It is the same software as in other distros - just years out of date. That has not changed, because the goals of these projects have not changed. They might be on newer versions than 10 years ago, but they are still way behind more frequently updated distros - or at least will be very shortly. That is fundamentally how these enterprise distros work: their target audience is businesses needing support, not lots of end users.

    The big attraction of these distros is the support that enterprise customers will pay for - which you do not get with the free version. If you don't mind older versions of things then one might be fine for you. If not, I would stay clear of them.


  • Older software is the most noticeable thing. Enterprise does not mean it is better - just that it is supported for a long time, which is achieved by not changing much on it. These distros are designed more for servers than workstations and are generally not a great experience unless you are running hundreds or thousands of them in an enterprise setting.

    Professional just means paid for. What you are paying for is support in managing the systems, not a great user experience.

    For home desktops it is far nicer to be on newer software rather than things that came out 5 to 10 years ago.


  • Um, no. Containers are not just chroot. Chroot is a way to isolate, or namespace, the filesystem, giving the process run inside it access only to those files. Containers do this, but they also isolate the process IDs, the network, and various other system resources.

    Additionally, runtimes like docker bring vastly better tooling around all of this, making containers much easier to work with. They are like chroot on steroids, not simply marketing fluff.
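
    You can see the difference with plain kernel tools: chroot alone only changes the filesystem root, while unshare adds the extra namespaces container runtimes use (this assumes you already have a minimal root filesystem extracted at ./rootfs):

    ```bash
    # chroot alone: only the filesystem is isolated - the process still
    # sees the host's process IDs and network interfaces
    sudo chroot ./rootfs /bin/sh

    # unshare adds new PID, network, UTS and mount namespaces on top,
    # which is much closer to what a container runtime actually sets up
    sudo unshare --fork --pid --net --uts --mount chroot ./rootfs /bin/sh
    ```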


  • > When I change devices or hit file size limits, I'll compress and send things to my NAS.

    Whaaatt!?!!? That sounds like you don't use git? You should use git. It is a requirement for basically any job and there is no reason not to use it on every project. Then you can keep your projects on a server somewhere - on your NAS if you want, or on something like GitHub/GitLab/Bitbucket. That way what is on your local machine does not really matter, only what is on the remote, and with decent backups of that you don't need to constantly archive things from your local machine.
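
    Git needs nothing more than SSH access to the NAS - a bare repository there acts as the remote (the hostname and paths below are made up):

    ```bash
    # One-time, on the NAS: create a bare repository to push to
    ssh nas 'git init --bare /volume/git/myproject.git'

    # In the local project: commit, add the NAS as a remote, and push
    git init
    git add -A && git commit -m "Initial commit"
    git remote add origin nas:/volume/git/myproject.git
    git push -u origin HEAD
    ```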


  • > It doesn't technically have drivers at all or go missing. All supporting kernel modules for hardware are always present at the configuration level.

    This isn't true? The Linux kernel has a lot of drivers in its source tree - but not all of them; notably, the NVIDIA driver has never been included. And even the included drivers may or may not be compiled into the kernel image. They can be, but generally they are compiled alongside the kernel as separate modules that are loaded at runtime. These days few drivers are compiled in; most are dynamically loaded depending on what hardware is present on the system. Distros can also opt to split these drivers into different packages that you may or may not have installed - which is common for less common hardware.
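
    You can see this split on any running system (paths are the usual ones on mainstream distros):

    ```bash
    # Modules currently loaded for the hardware that is actually present
    lsmod | head

    # All modules shipped for the running kernel - usually thousands
    find /lib/modules/$(uname -r) -name '*.ko*' | wc -l

    # Details for a single driver, e.g. a common Intel ethernet module
    modinfo e1000e | head
    ```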

    Though with the way most distros ship drivers, they don't tend to spontaneously stop working. Well, with the exception of Arch Linux, which deletes the old kernel and modules during an upgrade. The currently running kernel can then no longer find its modules and stops dynamically loading them - which often results in hotplug devices like USB drives not working when you plug them in after the old modules have been unloaded (a reboot fixes it, since that boots into the latest kernel with its modules present).
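
    That Arch failure mode is easy to spot after an upgrade - the running kernel's module directory is simply gone (a sketch; Arch keeps modules under /usr/lib/modules):

    ```bash
    # The kernel you are currently running
    uname -r

    # The module directories actually installed - after a kernel upgrade
    # only the new version is listed, so the running kernel finds nothing
    ls /usr/lib/modules/
    ```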