Author: Joe

  • Where’s The Sudo? The Current Portrayals and Betrayals of Permissions

    What’s Up With the Lack of Permissions Discussion?

    When sudo permissions come up, the topic always seems to be glossed over at best, or at worst reduced to seemingly no importance. Many posts and instructional videos will show setting up a user with sudo privileges, but never actually dig into details like the importance of granting only the minimum permissions required. Use cases that go beyond the single-user personal computer need tighter security, so setting up users with full sudo access they don’t need is careless in an instructional setting, let alone careless on your own machines. The way sudo is presented in most public content, it is framed as if a user either does or doesn’t have superuser privileges. Clearly, there is much more to sudo than that.

    Another case of carelessness I’ve seen is forgoing a sudo user entirely and doing every task as root. There is a time and a place where using root might be necessary, but there are many more times when setting up a user with sudo access and locking the root account is the best way to go. Actions taken directly as root lack the per-user audit trail that sudo logging provides, and that is scary if someone is able to gain access to your machine. Running only as root is the equivalent of leaving all the doors in your house wide open while you’re away on vacation: sure, someone has to find their way to your house first (gain remote or physical access to your computer), but once they’re there, what’s stopping them from taking everything inside? This is why root-only usage is not best practice.
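    Locking the root account, once you’ve confirmed a working sudo user, is a one-liner. A quick sketch (verify your sudo account actually works first, or you can lock yourself out of administration entirely):

    ```shell
    # Disable password login for root; sudo users can still escalate as needed.
    # Confirm your sudo account works BEFORE running this.
    sudo passwd -l root
    ```

    `passwd -u root` reverses it later if you ever need direct root logins back.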

    Setting up users with the minimum permissions needed is always the way to go, whether it’s an enterprise environment or your personal computer. There’s no better pain in a malicious actor’s backend than realizing the extent of a user’s sudo privileges is nowhere near what they need to do their damage. Think of the tasks you realistically do on a daily basis. This varies widely from person to person, but let’s say the extent of your need for sudo is package maintenance: updating, installing, and removing packages is all you realistically need to do daily. So what’s the real harm in restricting your daily-driver account to sudo access for only those functions? If that’s truly all you ever need day to day, it won’t be an inconvenience, and you can keep an additional account with broader sudo privileges for when you need to do more. With this method, if your user ever gets infected or compromised, you’ve effectively safeguarded your computer from further damage. Even though this is usually best practice for less knowledgeable users’ permissions, it’s not a bad idea to apply it to yourself as well. It’s taking a Standard User and Admin User style approach to permissions, much like how you would ideally run a Windows machine.
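    As a sketch of what that restriction could look like on a Debian-style system (the account name “daily” and the drop-in filename are my own placeholders), a sudoers drop-in might read:

    ```
    # /etc/sudoers.d/pkg-maint -- edit only with: visudo -f /etc/sudoers.d/pkg-maint
    # "daily" may run package maintenance as root, and nothing else.
    Cmnd_Alias PKG_MAINT = /usr/bin/apt update, /usr/bin/apt upgrade, \
                           /usr/bin/apt install *, /usr/bin/apt remove *
    daily ALL=(root) PKG_MAINT
    ```

    One caveat: wildcards in sudoers are looser than they look (`install *` matches extra arguments and flags too), so the more precisely you can enumerate commands, the better.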

    When setting up sudo permissions for only certain applications, you also have to watch out for the nasty ways someone can still escalate their privileges through the access they do have. A common form of this is executing shell commands through text editors like Vim. If you’re not careful and don’t ensure their entry in the sudoers file includes NOEXEC, they can execute commands from Vim and BOOM, they’ve escalated to the undetectable root user. The same applies if they need to run shell scripts. Suppose NOEXEC now stops them from spawning shell commands, but they have sudo permission to run certain shell scripts while also having write access to those scripts: you’re still in a world of hurt. All they need to do is add escalation commands to the scripts they’re already allowed to run, and once again you’ve got a huge problem. Always be aware of the nuances of sudo permissions, as exploits like these won’t always be apparent, but they’ll leave you no better off than when you started.
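    To make that concrete, here’s a hedged sketch of sudoers entries using the NOEXEC tag (the user name and target file are placeholders):

    ```
    # "editor" can run vim on one file as root, but NOEXEC stops vim's :!
    # shell escape from spawning child processes with root privileges.
    editor ALL=(root) NOEXEC: /usr/bin/vim /etc/motd

    # Often better: sudoedit copies the file to a temp location and runs
    # the user's editor unprivileged, sidestepping editor shell escapes.
    editor ALL=(root) sudoedit /etc/motd
    ```

    When the goal is really just “let this user edit that file,” sudoedit is generally the safer grant.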

    Although this was a mini rant brought on by seeing sudo omitted so often, I can’t encourage creators enough to include more detailed and appropriate sudo setups in their content. As someone who started from no knowledge, just like everyone else, I look back on the videos and media I consumed early on and wish I had seen more examples of the importance of sudo and security hardening as a whole. A lot of these tutorials on setting up a home server exposed to the internet show a surprising lack of detail regarding operating system security, considering the amount of threat exposure they introduce. I guess if getting you hacked was secretly their plan, then it makes sense that they’d let you assume no additions to sshd_config and sudoers were perfectly okay and normal…

  • Are Books In Tech Still Needed?

    In an age with such robust resources accessible right at our fingertips, what role do books and physical media really play in the tech world? To be fair, it is hard to beat online resources: they take up no physical space, they get unlimited revisions without your needing to buy new editions, and, best of all, most are free. That being said, there are still benefits to keeping books and manuals at your disposal. There’s a reason they are still being printed and purchased, even when nearly every title has a digital version for sale simultaneously.


    Here’s my latest read, with 1 billion sticky notes

    Ease of Use

    One thing I always loved back in my music days was having most of my texts in physical formats. Copies of textbooks, sheet music, and any other reading material were always on hand, littered with penciled-in notes and alterations. Having something in front of you with no technology needed was, in my experience, a way of removing obstacles between you and your ultimate goal: having the material and editing whatever you wanted without wasting time. Appending sections with additional notes or markings, or removing sections altogether, was as simple as putting graphite on paper. Not having to figure out how to do these things in a program, and losing your flow altogether, makes a huge difference. Those items from those days are still with me today for reference (to my fiancée’s frustration, since the collection is quite extensive and she doesn’t let me forget it).

    Although this was more practical as a gigging musician, and maybe not as convenient in tech since you’re already at a computer, it still has its place. It provides a distraction-free environment, and the results can always be transferred into a more readily accessible digital format later. A lot of the time, specifically with sheet music, I would print a copy and do my edits on that, then later either scan the paper into my computer or re-edit the sheet music file altogether. The pencil and paper were my literal scratch pad, and once I was home (or had the will to do it), I would take the time to program the edits into Sibelius notation software.

    With regard to tech books, you can always make your notes in the book, then build a reference document from the book’s material with your additions worked in. That way you’re not only writing things out physically but also taking the time to synthesize the book and your own words into a reference for whatever you’re learning. This is learning 101: it’s well studied that these processes make for more effective learning, including quicker retention of the material. Besides, let’s be real, everybody loves good documentation.

    Version Control

    Unless you are purposely copying and storing your own little library of your favorite online resources, online resources are living documents: they can be edited or completely altered at a moment’s notice. Admittedly this isn’t a huge concern, but it is something physical media protects you from. What you have in front of you is yours, and it isn’t subject to total annihilation just because someone forgot to renew their domain. You can depend on that information being there just as you remember it, as long as you don’t cripple the text with a misplaced cup of coffee (don’t ask me how I know this, I’m still bitter).

    Texts, whether in digital or physical form, go through changes and new editions. No one is arguing that. But when a useful paragraph on the Btrfs filesystem you were referencing ceases to exist, it is a pain to dig through archival tools like the Wayback Machine just to retrieve it, all while proclaiming “what was the reason for this?!” With physical media, unless you completely nuke your book with coffee (again, don’t ask), it will always be a reliable source for your information. Keeping multiple editions of a book, although spatially taxing, is also a good way to hold on to information from older volumes that was phased out of newer editions. Tech progresses fast, with new technologies replacing old ones all the time, so these documents have to keep up with the latest and greatest. Depending on your use case, maybe you still need the edition that isn’t completely taken over by systemd knowledge, and the second edition still retains robust chapters on your now technologically prehistoric init system. Well, lucky you: it’s sitting on your bookshelf, more than likely covered in dust.

    Downsides

    Look, it’s not always going to be the best decision to have physical media. There’s something to be said for having texts in digital formats, regardless of whether you also own them physically. Books take up space, they’re often expensive, and most people don’t touch them after a single read-through. This will always be a matter of preference; no two people feel the same way. And let’s be real: in tech, you’re not exactly talking to the right demographic about keeping things in an objectively less convenient format. Why have a book when a PDF is always on my computer and I can jump right to what I need by searching a keyword? Absolutely valid. If I need to take them somewhere, why lug bulky books when I could just bring my laptop instead? You’re not wrong. There will always be a give and take, and this is no exception.

    Final Thoughts

    As I am clearly not unbiased on this topic, I find having physical texts indispensable. It’s easy to make edits to the text, the information will always stay the same with no surprise alterations, and it gives your eyes a much-needed break from the bright glowing screen like the one you’re looking at right now (I can see they’re quite bloodshot, take a break!). Using multiple tools at your disposal is a proven way to fast-track your learning, and writing things down before typing them up later is a great example of deeper learning at work. A multimedia approach is what carried me through my musical studies all those years, and it’s what helps me now as I navigate my travels through the world of Linux. Although these Linux technical books aren’t as cool as my physical copy of The Planets full orchestral score (you best believe that’s all penciled up!), physical media will always be the groundwork for my studies, and an indispensable asset to anyone’s learning toolkit.

  • Netdata Having Trouble Installing On RHEL 10

    Here’s the skinny: running the Netdata install/register script on RHEL 10 didn’t completely work on the two VM servers I recently built for a future post diving into Netdata. I spun up Debian 13 and Ubuntu 24.04 servers as well, and those installed Netdata and its plugins without any issues, so it seems to be a problem with RHEL 10, or RHEL in general. I haven’t tried this on RHEL 9 or a RHEL-compatible like Rocky or Alma Linux, so that might be an additional edit in the future if I test those. All the VMs ran in VirtualBox using a Bridged Network Adapter to access the internet, and all were registered with a Red Hat Developer account. Even after booting up a third RHEL 10 server, it still gave the same error, so I was able to get a screenshot.

    Here’s the script error after spinning up a 3rd RHEL server

    Update Sept 30th 2025:

    I did a test run with Rocky 10 to narrow this issue down further, and it worked with no problem. The only real difference between RHEL and Rocky is the subscription needed to register a device on RHEL; if it’s not registered, it won’t have access to the app repositories. That being said, the issue can now be narrowed down to something in RHEL that prevents the epel-release package from being pulled properly during the Netdata installation via their script. Something in the script causes it to crash while loading the RHEL repositories.


    Update Oct 3rd 2025:

    After a little more digging, and a fruitful post on Reddit, it seems you still have to manually install the EPEL repo using the conventional means. One commenter on Reddit perfectly described it as a “chicken or the egg” scenario: the epel-release command activates the EPEL repo, but that command can only be run after you enable the EPEL repo, because the package only lives in the EPEL repo… Pretty brutal. Rocky 10 circumvents this by including its Rocky Linux 10 - Extras repo by default, which makes epel-release available on its own (hence the screenshot above showing it working with no issues). RHEL should ship something like this by default as well, since it would get rid of the extra commands needed to get EPEL, but it’s not on until you activate it.
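    For reference, the conventional manual route on RHEL looks like this (sketched from the EPEL project’s usual quickstart pattern; the release number in the URL and the repo ID are the parts to double-check against your version):

    ```shell
    # Enable CodeReady Builder first (many EPEL packages depend on it),
    # then install epel-release straight from the Fedora download server.
    sudo subscription-manager repos --enable codeready-builder-for-rhel-10-$(arch)-rpms
    sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
    ```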

    Ultimately, though, this is a Netdata issue with their script: it should have a way to run the EPEL activation commands when it recognizes a RHEL operating system. This could be done when the script checks all active repos; depending on which RHEL product and version it’s on, it could pull down the correct commands before moving on with the rest of the process. That would alleviate any hiccups in the process, and we could’ve avoided this whole blog post in the first place!


    What Exactly Happened?

    When using the Linux script to add a node, it failed to install Netdata with their rpm package on RHEL and notified me that it was falling back to a different install method: grabbing the files directly from the Netdata GitHub. This still installs Netdata and connects your node, but installing this way didn’t include everything that is provided by the former method.

    I noticed this initially after connecting my two RHEL 10 VM instances to my dashboard. Under the logging section for both RHEL servers, logging wasn’t enabled at all, so I couldn’t see any of that data being reported and categorized in the Netdata console. The display was completely absent of data and just contained text noting that logging is not set up, with a hyperlink to learn more about Netdata’s logging features. This is a default feature that shouldn’t require any setup, and obviously seeing system logs in a dashboard is one of the attractive features of a monitoring service like Netdata, so I had to find a way to fix this for my own sanity.

    The Fix

    Skipping the details so as not to bore you with my hour or so of testing different methods, the fix I ultimately found was to pull the netdata.run file straight from the Netdata GitHub and execute it manually, outside the original setup script. This eliminates the need for the EPEL repository, which their kickoff script relies on, since epel-release was part of the original issue. I already had the two RHEL 10 server instances running and attached to Netdata, but running netdata.run didn’t disconnect them during or after execution: it just added the missing packages, restarted the services, and what do you know, the services that were missing now showed up. The script can detect already-installed files and connected nodes, so luckily it just filled in the gaps rather than doing a complete overwrite.

    Before I pulled the stable release of netdata.run from GitHub, I made sure to cd into the /tmp directory to keep the file out of the way, since we only need it this once. And since we’re downloading and running this file manually, you have to make it executable with the chmod command after you download it. I ran this as root, but remember to prefix these with sudo if you’re opting to run them under a user account with sudo privileges instead:

    cd /tmp
    wget https://github.com/netdata/netdata/releases/latest/download/netdata-latest.gz.run -O netdata.run
    chmod +x netdata.run
    sh netdata.run
    
    This is the tail of netdata.run after execution

    Additional Thoughts

    The only thing I wish I’d done differently is use something like tmux in the terminal so I could scroll back and see what it said when it failed, instead of the red blurs I saw fly by during the install. It’s common practice to use tmux when accessing a machine via ssh anyway, but since this was a local install for testing purposes, I skipped it this time. Another option would have been sending the output to a log file, a more permanent record I could go back and review.
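    Logging the output is a one-line change. Here’s the pattern with a stand-in function in place of the real installer, so the snippet is runnable anywhere; swap the stand-in for `sh netdata.run`:

    ```shell
    # 2>&1 folds stderr (where those red errors go) into stdout, and tee both
    # prints the stream live and writes it to a log you can review later.
    fake_installer() {
        echo "installing netdata..."
        echo "error: the red blur you could not scroll back to" >&2
    }
    fake_installer 2>&1 | tee /tmp/netdata-install.log
    ```

    Afterward, something like `grep -i error /tmp/netdata-install.log` pulls the failures back out at your leisure.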

    Here’s the Logs section when it’s operational
  • Is Nala Still Good After Apt ver.3 Update?

    Debian 13 Trixie on Apt ver.3

    With Debian 13 Trixie becoming the new stable branch, apt has been upgraded to version 3. A lot of much-needed and welcome updates to the UI have made their way into this new version, making it the cleanest apt has ever looked. Upgrades from the previous version include:

    • Better terminal output, with columned and colorized display that gives a more pleasing presentation to the user.
    • Warnings moved toward the end of the output, so they are no longer buried.
    • A new dependency resolution engine, “solver3”, with better logic about which packages to install, keep, or remove.

    Nala

    Available in the Debian repository, Nala is a frontend for apt and a fix for some of apt’s longstanding problems. Nala offers a cleaner UI, used colorized output before apt even integrated that feature, and tends to be the better frontend option for desktop users overall. Even though Nala predates this new upgrade to apt, it still offers improvements over the current version, like:

    • Better mirror selection with nala fetch, which pings the Debian repository mirrors and lists them from fastest to slowest, giving you a definitive way to select the fastest choices for your machine.
    • Parallel downloads, like Fedora’s DNF or Arch’s Pacman, so download speeds are much faster than with plain apt, which downloads sequentially.
    • A transaction log, which lets you view recent changes to packages and even roll back updates if needed.
    • A single command, nala update, to update packages instead of the traditional apt update followed by apt upgrade, which is a great quality-of-life feature.
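    For the curious, the commands behind those bullets look roughly like this (a sketch assuming Nala is already installed via `sudo apt install nala`; the transaction ID is a placeholder):

    ```shell
    sudo nala fetch           # benchmark Debian mirrors and write the fastest to your sources
    nala history              # numbered log of recent package transactions
    sudo nala history undo 1  # roll back the transaction with ID 1
    ```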

    Is Nala a No-Brainer then?

    Well, just like anything, it’s not always that cut and dried. Nala isn’t without its own problems, and that’s something you have to consider when adding an additional layer of software: doing so introduces another potential point of failure. This comes into play when considering Nala for other use cases like servers; does it offer enough upside to be warranted?

    Most servers are best advised to get updates from a central machine that pulls its repository updates from the internet, so mirror selection for speed isn’t as crucial; and the visual upgrades matter less when you’re observing servers through a cloud console or a remote terminal over ssh.

    For desktop use, though, the upsides far outweigh the potential downsides: in this environment you benefit more from Nala’s visual changes, and desktop users tend to be more hands-on, running manual updates instead of unattended ones, so you can see in real time if an error occurs.


    This topic has been hashed out online before, so it’s nothing new; see this recent thread on Reddit discussing Nala. The new apt update seems to be bringing the topic up again, but the collective opinion seems equally divided between using Nala and not using it. Some users even report Nala breaking or causing issues, which reaffirms the point about additional points of failure from a few paragraphs back.

    At the end of the day, weigh your options and use what’s best for you, because in the true nature of open source, the choice is ultimately yours! If you have experience with Nala, leave a comment and let me know how it went.

  • Fresh Install Debian 13 Trixie manually with Btrfs, Timeshift, and Grub-Btrfs

    I Made My First Guide on GitHub!

    This goes back a couple of weeks now, but I’ve been updating it with incremental improvements for quality and accuracy since it was completed. It was born from accidentally nuking a system upgrade from Debian 12 to 13, then taking the opportunity to do a fresh install without an ext4 filesystem and use something that can recover a botched system more easily. Lemons to lemonade!

    The main goal of the guide is to give a detailed walkthrough of setting up the new stable branch Debian 13 with:

    • a manually sub-volumed Btrfs filesystem,
    • Timeshift to facilitate automatically scheduled snapshot creation, and
    • Grub-Btrfs so the snapshots are readily available in your boot menu for easy rollback to previous system states.

    There are many more options for facilitating Btrfs snapshot creation, even automatic snapshots taken before every package install/update/removal using scripts for Timeshift or Snapper, so I hope to dig into those further in the future. In my mind, the ultimate setup to cover all bases would use Snapper for the automatic snapshots on every package install/update/removal via the aforementioned scripts, in conjunction with Timeshift using rsync to schedule complete backups to an additional hard drive.
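    As a taste of the manual subvolume step from the guide, here’s a compressed sketch, assuming you’re root in the installer’s shell and /dev/sda2 is your Btrfs partition (both placeholders; the full guide covers the real flow):

    ```shell
    # @ and @home are the subvolume names Timeshift's btrfs mode expects.
    mount /dev/sda2 /mnt
    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home
    umount /mnt
    # Remount the subvolumes where the installer expects / and /home to live.
    mount -o subvol=@ /dev/sda2 /mnt
    mkdir -p /mnt/home
    mount -o subvol=@home /dev/sda2 /mnt/home
    ```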

    The GitHub repository is here: Debian-13-BTRFS-Install-Guide

    If anyone wants to contribute, you’re more than welcome to submit a pull request!

  • Obligatory JosephTSuarez.com Going Live Post

    My First Post: Going Live!

    Welcome to my blog! This is my first post as my site officially goes live.

    I’m starting this blog to share my tech journey with you, as I learn Linux administration and explore all the fun stuff in between.

    My journey began about a year ago when I discovered this thing called Linux. What started as a side project to customize my first Linux distribution quickly turned into a deep dive I never saw coming. Since then, I’ve been hooked.

    This blog will be where I:

    • Document my projects
    • Dive into new (to me) tech topics
    • Share how-to guides based on my experiences

    My Goal

    My main goal is to become proficient in Linux System Administration, with a focus on operating and hardening enterprise Linux operating systems like Debian and RHEL.

    Stick around and follow along as I navigate this journey – with all the bumps, wins, and lessons along the way!