Graylog: Syslog aggregation for free?

Welcome back! Today, I’ll be discussing one of my favorite homelab projects: Graylog.

Graylog is a free-for-personal-use syslog aggregator. What does that mean? Well, we’ll start with syslogs. Syslogs are system log messages that operating systems use to document what is happening on the system. This includes the Linux syslog (/var/log/syslog) and Windows Event Logs on both server and home editions of Windows. These come in handy when troubleshooting issues or when you need to check a specific time frame to see what a system was doing. But this normally requires logging into each system individually to read the logs; if an issue affects multiple systems, you’ll need to check each one and correlate time frames to troubleshoot. Enter Graylog.

Graylog gives us a way to collect syslogs from across one or more networks. In the case of my homelab, I run about 25 virtual machines of both Windows and Linux varieties. Nailing down an issue across multiple servers can be a bit of a pain, but with Graylog I can see everything in one spot. And that’s just scratching the surface.
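To give a sense of the plumbing: on the Linux side, pointing a box at Graylog is usually a one-line rsyslog drop-in. A minimal sketch, assuming a Graylog syslog input listening on UDP port 1514 at 192.168.1.20 (both the address and port are placeholders for your own input):

```shell
# forward all syslog messages to Graylog (@ = UDP; use @@ for TCP)
echo '*.* @192.168.1.20:1514;RSYSLOG_SyslogProtocol23Format' | \
  sudo tee /etc/rsyslog.d/90-graylog.conf
sudo systemctl restart rsyslog
```

Windows hosts generally need an agent (such as Graylog’s Sidecar) instead, since the Event Log doesn’t speak syslog natively.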

One of the most powerful functions of Graylog is the ability to set rules based on the messages it receives from your systems. Want to watch DHCP leases to see if anything is talking on the network that shouldn’t be? Set up a rule on DHCP renewals to email you when something requests one. Want to know if someone escalates to Administrator in Windows? Build a rule on the matching Windows event log to do just that. It’s incredible, really, that this software is free.

I’m currently running Graylog Enterprise, as my daily intake of messages is well below the free license’s 2GB/day maximum, even with ~25 systems sending logs and routine maintenance running across them (which can generate quite a lot of messages if it’s been a while since the last updates). The dashboard I run is a saved search of the past 12 hours, and you can have it refresh automatically to keep an eye on things (I’m a huge sucker for dashboards).

Keeping this up also lets me see if anything is sending a larger-than-normal amount of messages. I once had a situation where my DNS server (Pi-hole) was being bombarded by a smart home device; I spotted it from the dashboard and rebooted the device to fix the issue.

So yeah, I dig it. Graylog is super neat: searches are extremely fast even over large amounts of data, and deploying it on Ubuntu only takes an afternoon’s worth of work for someone with limited Linux knowledge. It’s also nice that the free version includes some Enterprise features if you stay under a set limit of messages per day, which lets me test it further in my homelab before potentially using it in a real production environment. Overall: 9/10, would install and use again.

Thanks for reading! I hope you enjoyed my post and keep checking back for more information and projects. Cheers!

What life is like with Linux as my primary OS

I say this a lot, but something I’ve wanted to do for a very, very long time is run Linux as my primary operating system for day-to-day usage. I took the plunge recently and have learned a lot. One of the biggest lessons? How much easier Linux has become since I last tried this around a decade ago. Here’s what I learned…

Firstly, Linux isn’t nearly the beast it was in 2010. A lot of things just work out of the box now, including third-party drivers for things like graphics cards. The first time I tested this a decade back, I couldn’t get multiple monitors to work, I couldn’t get USB devices to register, and everything was very prone to crashing. So, it’s time to start tinkering. Let’s start with my rig:

  • Ryzen 3600
  • 32GB DDR4 RAM
  • Nvidia RTX 3070

With that, I tossed a spare SATA SSD into my desktop and dove headfirst into the first distro to test: Garuda.

Garuda bills itself as a gaming-focused OS. Most things worked out of the box, but I quickly found that my background in Debian/Ubuntu wasn’t much help with its Arch-based package management and application support. I decided to change it up and moved to Xubuntu, which is based on the XFCE desktop environment. I’ve been a fan of XFCE for some years due to its low resource requirements, which should mean it runs quickly on my gaming rig. I was disappointed, however, with some of the limitations and how the GUI looked and felt. This brought me to my current and hopefully final OS: Kubuntu.

Kubuntu is Ubuntu running the KDE Plasma desktop. As with the other distros, it worked mostly out of the box: speakers, monitors, USB hubs, everything just worked. With a little tinkering I got my monitor positions lined up and dug into getting the software I wanted onto this install.

My goal was to replace as much of my Windows 11 experience as possible. This included Dropbox, Slack, Discord, Spotify, and my VPN provider’s client, all of which were available as .deb packages and easy to get running. After about two days of tinkering, I’m up and running with 90% of my Windows apps on Linux, all working seamlessly. Steam is also installed, but I’m still having some issues with games; I knew gaming would be a challenge going in, so it’s not a big deal.
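For anyone curious, installing those vendor-supplied .deb packages is a one-liner these days, and apt resolves the dependencies for you (the filename below is just an illustrative example, not a specific version I’m recommending):

```shell
# the leading ./ tells apt this is a local file, not a repo package name
sudo apt install ./slack-desktop-amd64.deb
```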

The good: the customization available within Kubuntu is incredible, and I’ve had a ton of fun messing with settings to get everything just the way I want. Things mostly just work; I even got a local Wikipedia mirror to run at boot for offline browsing as a fun project. I also feel a lot more secure knowing this machine runs Linux rather than Windows 11, which has become more and more focused on collecting user data.

The bad: Steam games are crashing quite a lot; I’m not sure what the issue is, but I’m working through it now. I also cannot for the life of me get my Bluetooth headphones to connect without the GUI crashing, though this sounds like a common issue. Windows 11 definitely has a more user-friendly interface; I find myself missing the tabbed File Explorer built into Windows. Nvidia Broadcast is also not available on Linux, and it was an amazing product for Zoom meetings.

In summary: I’m very happy using this as a daily driver. Apart from a couple of crashes, things have run very smoothly. I’m very comfortable working in a Debian/Ubuntu environment, so this has been a treat. I’m looking forward to keeping this as my primary OS for the near future and will continue to post more as I learn and progress.

I hope you enjoyed the read, have a good one!

It’s 2024! Celebrating 10 years of AASullivan.com

Hey all, welcome to my site and blog. I realized recently that I’m coming up on ten years here with this website and wanted to post some of the history and challenges I’ve run into while running this site.

Starting in 2014, AASullivan.com was a small project built on Ubuntu 14.04 LTS. Running off a VM on ESXi, it was a way to get familiar with web hosting on Linux. I didn’t really have a theme at the time beyond wanting to post things I found interesting or helpful to others.

Over the years, I’ve had quite a lot of ups and downs, including multiple hacks and an unintentional DDoS (really a distributed brute-force attempt) that took a couple of months to sort out. The first hack came through an unsecured WordPress install on another site I hosted on the same webserver. Thankfully backups saved the day, and I was able to lock things down to prevent further issues. The DDoS was several different IPs trying to brute-force their way into my primary site, which caused a few issues and took some time to diagnose; the symptom was MySQL crashing after the server ran out of RAM. Once I found the cause, I installed and configured Fail2Ban, which quickly put an end to the problem. Thankfully, things have been relatively quiet since then.
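For reference, a minimal Fail2Ban jail in the spirit of what stopped it might look like the following. The jail name is a stock Fail2Ban filter, but the log path and thresholds are assumptions you’d tune to your own Apache setup:

```shell
# write a jail that bans IPs repeatedly failing auth against Apache
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
```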

My next big project for the site was upgrading to Ubuntu 22.04 LTS. This came with a slew of PHP version issues that took a weekend to iron out, but I got things sorted and felt a lot more comfortable on a newer, actively patched system.

Next up: SSL/HTTPS. I decided to go the Let’s Encrypt route, and with another weekend down I had certs for my site working along with a permanent redirect to HTTPS. Again, another happy sigh as the site became just a little more secure. This, along with the upgrade to Ubuntu 22.04 LTS, left me up to date and running the latest software.
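For anyone following along, the Let’s Encrypt setup largely boils down to two commands with certbot on an Apache box like mine (the domain is a placeholder):

```shell
sudo apt install certbot python3-certbot-apache
# fetch a certificate and set up the permanent HTTP-to-HTTPS redirect
sudo certbot --apache -d example.com -d www.example.com --redirect
```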

The final iteration of the current site was moving everything from one server to another. I ran into some issues with the old system during the Ubuntu upgrade process and ended up needing to build an entirely new server to handle everything. The migration took about a week to iron out, but everything was successfully moved to the new host, which has now been running for about a year without issue. Knock on wood, this is the last big move for the site.

With all of this, I’ve learned more about website security, OS patching, upgrades and migrations. It’s been a blast, albeit quite frustrating at times. But with the frustration comes accomplishment and success. A fair trade in my book.

Happy 10 years, and here’s hoping 10 more!

AI Comprehensive Guide to pfSense

Introduction: In today’s interconnected world, network security is paramount. Whether it’s for home or business use, having a robust firewall solution is essential to safeguarding your network from cyber threats. One such solution that has gained immense popularity is pfSense. In this article, we’ll delve into what pfSense is, how to install and set it up, and weigh its pros and cons.

What is pfSense? pfSense is an open-source firewall and router software distribution based on FreeBSD. It offers advanced features and functionality typically found in expensive commercial firewalls, making it an attractive choice for both small-scale home networks and large enterprise environments.

Installation of pfSense: Installing pfSense is a relatively straightforward process. Here’s a step-by-step guide:

  1. Download the pfSense ISO image from the official website (https://www.pfsense.org/download/).
  2. Create a bootable USB drive or burn the ISO image to a CD/DVD.
  3. Boot your computer from the USB drive or CD/DVD containing the pfSense installer.
  4. Follow the on-screen prompts to begin the installation process.
  5. Choose the installation mode (e.g., Standard Install, Quick/Easy Install).
  6. Select the disk where you want to install pfSense and confirm the installation.
  7. Once the installation is complete, remove the installation media and reboot the system.

Setting up pfSense: After installing pfSense, the next step is to configure it to suit your specific requirements. Here’s an overview of the basic setup process:

  1. Connect to the pfSense web interface by entering the IP address assigned to the LAN interface in a web browser.
  2. Log in using the default credentials (admin/pfsense).
  3. Follow the initial setup wizard to configure basic settings such as WAN and LAN interfaces, IP addressing, and DNS.
  4. Configure firewall rules to control traffic flow between different network segments.
  5. Set up additional services such as DHCP, VPN, NAT, and VLANs as needed.
  6. Install packages/extensions from the pfSense Package Manager to add extra functionality like intrusion detection/prevention, caching, and reporting.

Pros of pfSense:

  1. Cost-effective: Since pfSense is open-source software, it’s free to download and use, making it a cost-effective solution compared to proprietary firewalls.
  2. Flexibility: pfSense offers a high degree of flexibility and customization, allowing users to tailor the firewall to meet their specific needs.
  3. Robust feature set: pfSense comes packed with a wide range of features, including firewalling, VPN support (OpenVPN, IPsec), traffic shaping, load balancing, and more.
  4. Active community support: The pfSense community is large and active, providing extensive documentation, forums, and user-contributed packages to assist users with setup and troubleshooting.
  5. Scalability: pfSense is scalable, capable of handling networks ranging from small home setups to large enterprise environments.

Cons of pfSense:

  1. Learning curve: While pfSense offers powerful features, it has a steep learning curve, especially for users with limited networking experience.
  2. Hardware requirements: While pfSense can run on a variety of hardware platforms, optimal performance may require dedicated hardware or virtualization.
  3. Maintenance and updates: Regular maintenance and updates are necessary to keep pfSense secure and up-to-date, which may require time and effort.
  4. Limited official support: While community support is abundant and free, official commercial support is only available at an additional cost.

Conclusion: pfSense is a powerful and versatile firewall solution that provides robust security features, flexibility, and scalability. By following the installation and setup process outlined in this article, users can harness the full potential of pfSense to protect their networks from cyber threats. While there are some challenges such as the learning curve and maintenance requirements, the benefits of using pfSense often outweigh these drawbacks, making it a popular choice for network administrators worldwide.

Source: OpenAI

State of (my) Network: 2023

Welcome to AASullivan.com! I’ve been meaning to do this for a while, so here it is: The State of My Network. I’ve been planning to do an overview of my entire set up for some time now and I’ve finally kept everything in a consistent state now for a few months after many years of tinkering. A little background…

My home network (also known as my “home lab”) has been a long-term project of mine, both to try out new technologies and to keep my skills sharp in a real server environment. It started in 2014 with a single server in my apartment running ESXi 3.5 on a system with a 6-core AMD CPU and 16GB of RAM. It was a blast to try new things (DNS server, file server, Plex, etc.) and I’ve learned a lot since then. My current home lab is considerably bigger, larger than many small companies’ setups; my poor power bill will never be the same. Let’s take a look at the setup!

To start with the basics, I’m running a pfSense firewall connected to a Verizon Fios gigabit connection. Behind that is a 48-port gigabit switch with 10Gb SFP+ uplinks to a 10Gb SFP+ MikroTik backbone switch. I have 10Gb between all servers and my desktop, allowing regular speeds of around 550MB/s when going SSD to SSD. That’s the core backbone of my network. Time for the fun stuff.
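Those SSD-to-SSD numbers line up with what the raw links can do; if you want to sanity-check a 10Gb path yourself, iperf3 is the usual tool (the hostname is a placeholder):

```shell
iperf3 -s                       # on the receiving machine: start a listener
iperf3 -c fileserver.lan -t 10  # on the sender: push traffic at it for 10 seconds
```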

I run a large number of self hosted services including, but not limited to:

  • PiHole DNS blocking
  • Plex
  • Home Assistant
  • Killing Floor 2 gameserver
  • Tiny Tiny RSS
  • NextCloud
  • Zabbix
  • Uptime Kuma
  • Bookstack
  • Graylog
  • …and many more

Most things run on an ESXi 6.5 server: a 1U box with 192GB RAM, dual 6-core Xeons, and six 1TB SSDs in RAID 10. Alongside it are two Unraid servers (a pair of Dell R510s), mirrored and both running multiple services. RIP power bill.

The original goal of this network and home lab was to learn more about some enterprise solutions and keep my skills sharp. Instead, I’ve built a network more robust than many small business networks with a lot more control and functionality.

Things I’ve learned while doing all of this:

  • Your power bill is going to suffer (worth it!)
  • Servers put out a LOT of heat. I often notice this even upstairs with the floor being hot over the servers in the basement
  • Server hardware gives you a ton of ways to troubleshoot things, often with LEDs and beep codes to help narrow down issues
  • Low power hardware options are out there but are often much more expensive up front
  • Knowing what’s going on in your network is awesome. Knowing everything that’s going on can also drive one nuts when you see how often items are connecting to the internet (Looking at you Windows 10 and IoT devices)
  • If you want to build something, searching the internet can give you a lot of ideas and input. Most of my projects were done in an afternoon after finding a new product and reviewing issues/installation

All in all, I’ve taken control of my network, and as much as the constant maintenance and updates can drive me nuts, it has been very stable for years, and it’s a very good feeling to know what’s going on with everything.

Keep in mind: this can all start with an old desktop computer running Docker or a few services to tinker with. I started all of this with an Ubuntu 6.06 LAMP server in 2008, and it has grown into the passion it is today.

Thanks for sticking through my brain dump, I hope you enjoyed the read and will stop by again. Cheers!

VMware Kubernetes Cluster Build Notes

I built a Kubernetes cluster tonight to try and learn a bit about this technology, here are my notes about building it!

Built on a new ESXi server with the following specs (you’d be surprised how well this rig holds up):

  • Dell Optiplex 7010
  • Core i7-3770
  • 32GB RAM
  • 1TB SSD

Created a virtual network only accessible to VMs:

Used pfSense as the firewall to keep things independent from my main network and to provide DHCP to the internal virtual network, with a single Windows Server 2019 VM for accessing the pfSense WebGUI. Needlessly complicated, but I wanted a self-contained virtual network to test all of this on.

Built the following VMs on Ubuntu 22.04 LTS:

  • Controller: 2 vCPUs, 4GB RAM, 50GB HDD
  • Workers (x2): 2 vCPUs, 4GB RAM, 25GB HDD each

I set static IP assignments for all of them in pfSense to keep their addresses from changing, then followed the guide below, specifically the scripts from the GitHub repo mentioned in the article:

https://devopscube.com/setup-kubernetes-cluster-kubeadm/

Command to download scripts:

git clone https://github.com/techiescamp/kubeadm-scripts

Run chmod +x on both scripts to make them executable. Run the “common.sh” script on all systems, both controller and nodes/workers. Then edit the “master.sh” script: change the “MASTER_IP=” line to your controller’s IP address, and adjust the pod network range under “POD_CIDR” if needed (I left it as the default). Finally, run master.sh on the controller.
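Those steps as a sketch; the controller IP is a placeholder, and the in-repo path may differ from what’s shown:

```shell
cd kubeadm-scripts            # the repo cloned by the command above
chmod +x common.sh master.sh
sudo ./common.sh              # run on EVERY system: controller and workers
# controller only: point the script at the controller's IP, then run it
sed -i 's/^MASTER_IP=.*/MASTER_IP="192.168.1.50"/' master.sh
sudo ./master.sh
```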

Run this command to get the join command:

kubeadm token create --print-join-command

Run the FULL output of that command on any workers/nodes you want to join to the cluster. Then check the connection on the master/controller:

kubectl get nodes

In the output, note that the workers’ roles are set to “<none>”. Fix this by running the following command, substituting in the names of your nodes/workers:

kubectl label node mini-worker-1 node-role.kubernetes.io/worker=worker

Then run kubectl get nodes again to confirm they now show the worker role.

Congrats! You now have a (slightly dirty) Kubernetes cluster running and ready to assign deployments to! This is the bare basics but should get you rolling, good luck and happy tinkering!

Using Uptime Kuma for Network Monitoring

Welcome back!

Today I’m looking into Uptime Kuma, a monitoring tool that in my eyes is perfect: simple, intuitive, and exactly what I need for monitoring my home network. What is Uptime Kuma? I’m glad you asked.

The Background

Uptime Kuma is an open-source monitoring solution, perfect for SOHO (Small Office/Home Office) environments where there isn’t a ton of devices/services to monitor. I’d been looking for something like this for some time, after building a very rudimentary monitoring system for myself (relying solely on ping and notifying via webhooks in Slack). This is one of those pieces of software I wish I had known about years ago.

The setup is exceptionally simple, especially when using Docker. I used a small Dell Optiplex SFF (i3, 4GB RAM, 240GB SSD) running Ubuntu 22.04 LTS, which is more than capable of handling the 40-50 checks I’m running. Using a dedicated machine instead of a virtual machine on one of my hosts also lets it keep monitoring even if those hosts go offline.

The Build

As I mentioned, I installed a base image of Ubuntu 22.04 LTS to have the latest software running. Then, use snap to install Docker:

snap install docker

Then, install the Uptime Kuma docker to run in the background automatically:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1

Fire up a web browser and go to http://YOUR_DEVICE_IP:3001 to set up an admin account and begin working with the software. Note: with --restart=always, as used above, the container automatically comes back up after a reboot.
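A quick way to confirm the container actually came up (port 3001 as used in the run command above):

```shell
docker ps --filter name=uptime-kuma                # should show the container as Up
# the web UI should answer with an HTTP status code
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3001
```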

Putting it together:

Uptime Kuma offers a wide variety of tests and monitors to keep tabs on your network. I’m using a mix of ping, TCP, and DNS checks to make sure my network is working as expected. This also helped me sort out a brief internet outage this morning.

An example of one I use to make sure my internet is actually connected: simply ping google.com every minute to confirm my traffic can reach the WAN.

If this check goes down, my WAN connection to the internet isn’t working, and the page immediately flags it for me. The dashboards are pretty cool too.

Neat, so those are the internet checks that monitor when/if the connection goes down.

One thing to keep in mind is how you name the devices/ports/etc. being monitored. My solution was to name the test first, then the device in question, so at a glance I know what I’m looking at:

From top to bottom: the first two are WAN access checks, then come TCP port checks for the various servers I host for my friends and me. The TCP port check is super handy for catching when a process/service goes offline. The third section is where the ping tests for my home devices begin.

The Wrap Up

All in all, this is the solution I’ve been looking for for a long time. It’s simple, elegant, and easy to use while still being extremely valuable as a monitoring tool. My setup notifies me through a private Discord server when anything goes down, normally in under a minute and almost always within five. Being able to configure retries and timeouts is also very handy for high-latency connections. I can easily recommend this software to anyone wanting more monitoring for their SOHO network, and I could see it being used in enterprise settings as well. Uptime Kuma, you have a new fan.

Tekkit2 Minecraft Build notes on Ubuntu 22.04LTS

Assuming a fresh install of Ubuntu 22.04LTS, accepting default options and no extra packages except OpenSSH server to login.

Install zip/unzip:

apt-get update; apt-get install zip unzip

Download archive for Tekkit2 (as of 20221202):

wget https://servers.technicpack.net/Technic/servers/tekkit-2/Tekkit-2_Server_v1.1.3.zip

Unzip:

unzip Tekkit-2_Server_v1.1.3.zip

Install Java (OpenJDK 8 is in the standard Ubuntu 22.04 repositories, so the old webupd8team/java PPA, which has been discontinued, is no longer needed):

apt install openjdk-8-jdk

Attempt to launch to confirm server is working:

chmod +x LaunchServer.sh
./LaunchServer.sh

It should take several minutes to load, depending on hardware. If this works, set up a cron entry similar to the one below to launch the server at reboot:

root@syr-tekkit:# cat /etc/cron.d/root_gameserver
@reboot root sleep 5 && cd /home/asullivan && /usr/bin/screen -dmS gameserver-screen /home/asullivan/LaunchServer.sh

While the server is running, open port 25565 to the internet to allow connections. Use the Technic Launcher to join.

Notes from installing Nextcloud on Ubuntu 22.04LTS

Today I set up another Nextcloud server after taking the former one offline; the size of its backups was getting a little out of hand. This also lets me run the current LTS version of Ubuntu Server (22.04).

Here’s the guide I followed with a fresh install of Ubuntu:

How to Install Nextcloud on Ubuntu 22.04

Following this guide, I ran into a number of issues; here’s how I worked through them:

Downgrading to PHP 7.4:

apt install php7.4-mysql php7.4-mbstring php7.4-xml php7.4-curl php7.4-gd

Select 7.4 from list:

update-alternatives --config php

Set version:

sudo a2dismod php8.1
sudo a2enmod php7.4
sudo systemctl restart apache2

Install missing php packages:

apt install -y apache2 mariadb-server libapache2-mod-php7.4 \
php7.4-gd php7.4-json php7.4-mysql php7.4-curl \
php7.4-intl php7.4-mcrypt php-imagick \
php7.4-zip php7.4-xml php7.4-mbstring

sudo systemctl restart apache2
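A couple of quick checks to confirm the downgrade took; the exact module name in the second command’s output may vary by install:

```shell
php -v                      # CLI should now report PHP 7.4.x
apachectl -M | grep -i php  # Apache should list a php7 module
```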


Building a non-logging, encrypted DNS server

Welcome back! Today I’m working on a project to make my web surfing as anonymous as possible, using a combination of a software package called “Pi-hole” and a VPN provider.

So, let’s start at the basics: VPN and DNS

DNS, or Domain Name System, is how we access things on the web. Think of it like a pointer: when you go to facebook.com, your request goes to a DNS server, which takes the website name (facebook.com) and converts it into the IP address of a server hosting the website. Example:

euthonis@DESK:~$ nslookup facebook.com 172.31.1.26
Server: 172.31.1.26
Address: 172.31.1.26#53

Non-authoritative answer:
Name: facebook.com
Address: 31.13.67.35

Notice the 31.13.67.35 in the answer: this is the IP address of a Facebook server hosting their website. Neat, eh? This is how most web access happens, except in rare circumstances where you would use an IP directly.

Now, VPNs.

VPNs are marketed as a way to hide your browsing and activity online, and this is true in most cases. VPN stands for “Virtual Private Network”. In a nutshell, a VPN creates an encrypted “tunnel” that all of your traffic goes through, so your ISP cannot see what you’re doing. This offers a great level of privacy, but it doesn’t prevent website tracking cookies, so there are limits. Most VPN services (Nord, Mullvad, TorGuard) claim to keep zero logs on their systems; even if ordered by a court, they have no browsing history to hand over. Yes, that does sound a bit sketchy, but even a normal user can benefit from not being tracked by their ISP and having their data sold to advertising companies.

So what happens if you want the ad-blocking that Pi-Hole offers along with the privacy of a VPN? You build your own DNS (Pi-Hole) server and set it up to be as anonymous as possible.

I followed the guide below for my build, using Ubuntu 22.04 LTS and ignoring the NetData portions (not needed for my use case):

Create DNS-over-TLS bridge with Pi-hole, unbound and stubby on Ubuntu Server

There are a couple of configuration changes to make to keep the Pi-hole from logging any requests:

  1. In the GUI/admin interface, go to Settings > Privacy (tab) and select the “Anonymous mode” option. If an error occurs, go into Settings, click “Disable Query Logging”, then “Flush Logs (last 24 hours)”. This will disable all Pi-hole logging
  2. Modify the file:
    sudo nano /etc/unbound/unbound.conf.d/logs.conf

    Edit it to look like this:

# disable query logging
server:
    # If no logfile is specified, syslog is used
    # logfile: "/var/log/unbound/unbound.log"
    log-time-ascii: no
    log-queries: no
    log-replies: no
    verbosity: 0

Restart the services:

sudo systemctl restart unbound stubby ; systemctl status unbound stubby -l

With these options set, there is no longer any logging on the server.

For the final part of all of this: many VPN providers allow custom DNS servers to be used. Take the IP address of your DNS server, enter it as the custom DNS server in the VPN client, and connect. You should be able to use the internet over the VPN as before, but now with your own controlled ad blocking via Pi-hole and the security of knowing your server keeps no logs or history. Assuming your VPN is trustworthy, you should now be essentially invisible on the internet.

I hope this write up was helpful! I’ve been tinkering with these projects for some time off and on.

One last tip: if you find a website is improperly blocked by Pi-hole, you may need to enable logging again (reversing Step 1 above) to identify and whitelist the problem domain. Don’t forget to turn logging back off afterward!