What life is like with Linux as my primary OS

I say this a lot, but something I’ve wanted to do for a very, very long time is run Linux as my primary operating system for day-to-day usage. I took the plunge recently and have learned a lot. One of the biggest takeaways? How much easier Linux has become since I last tried this around a decade ago. Here’s what I learned…

Firstly, Linux isn’t nearly the beast it was in 2010. A lot of things just work out of the box now, including third-party drivers for things like graphics cards. The first time I tested this a decade back, I couldn’t get multiple monitors to work, I couldn’t get USB devices to register, and everything was very prone to crashing. So, it’s time to start tinkering. Let’s start with my rig:

  • Ryzen 3600, 32GB DDR4 RAM, Nvidia RTX 3070

With that, I tossed an extra SATA SSD into my desktop and dove headfirst into my first distro to test: Garuda.

Garuda bills itself as a gaming-focused OS. Most things worked out of the box, but I quickly found that my background in Debian/Ubuntu wasn’t much help when getting into its package management and application support. I decided to change it up and moved to Xubuntu, based on the Xfce desktop environment, which I’ve been a fan of for some years due to its low resource requirements; that should mean it runs quickly on my gaming rig. I was disappointed, however, with some of its limitations and how the GUI looked and felt. This brought me to my current and hopefully final OS: Kubuntu.

Kubuntu is Ubuntu running KDE Plasma as its desktop environment. As with the other distros, it worked mostly out of the box. Speakers, monitors, USB hubs: everything just worked. With a little tinkering I got my monitor positions lined up and dug into getting the software I wanted running on this install.

My goal was to replace as much of my Windows 11 experience as possible: Dropbox, Slack, Discord, Spotify and my VPN provider were all available as .deb packages and easy to get running. After about two days of tinkering, I’m up and running with 90% of my Windows apps in Linux, all working seamlessly. Steam is also installed, though I’m still having some issues with games; I knew that would be a challenge going into this, so it’s not a big deal.
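Getting those .deb packages running follows the same pattern each time. A quick sketch (the file name here is a hypothetical example; apt resolves the package’s dependencies for you):

# install a downloaded .deb, resolving dependencies (file name is hypothetical)
sudo apt install ./slack-desktop-4.36.140-amd64.deb

# or the older two-step approach:
sudo dpkg -i ./slack-desktop-4.36.140-amd64.deb
sudo apt -f install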

The good: The customization available within Kubuntu is incredible and I’ve had a ton of fun messing with settings to get everything just the way I want. Things just kinda work; I even got a local Wikipedia mirror to run at boot for offline browsing as a fun project. I also feel a lot more secure knowing this is running Linux versus Windows 11, which has become more and more focused on collecting user data.

The bad: Steam games are crashing quite a lot; I’m not sure what the issue is, but I’m working through it now. I also cannot for the life of me get my Bluetooth headphones to connect without the GUI crashing, though this sounds like a common issue. Windows 11 definitely has a more user-friendly interface; I find myself missing the tabbed File Explorer that’s native to Windows. Nvidia Broadcast is also not available on Linux, and it was an amazing product to use for Zoom meetings.

In summary: I’m very happy using this as a daily driver. Apart from a couple of crashes, things have run very smoothly. I’m very comfortable working in a Debian/Ubuntu environment, so this has been a treat. I’m looking forward to using this as my primary OS for the foreseeable future and will continue to post more as I learn and progress.

I hope you enjoyed the read, have a good one!

It’s 2024! Celebrating 10 years of AASullivan.com

Hey all, welcome to my site and blog. I realized recently that I’m coming up on ten years here with this website and wanted to post some of the history and challenges I’ve run into while running this site.

Starting in 2014, AASullivan.com was a small project based on Ubuntu 14.04 LTS. Running off a VM on ESXi, it was a way to get familiar with web hosting using Linux. I didn’t really have a theme at the time apart from just wanting to post things I found interesting or helpful to others.

Over the years, I’ve had quite a lot of ups and downs, including multiple hacks and an unintentional DDoS (a distributed brute-force attempt) which took a couple of months to sort out. The first hack came through another site I hosted on the same web server, via an unsecured WordPress install. Thankfully backups saved the day and I was able to lock things down to prevent further issues. The DDoS was several different IPs trying to brute force their way into my primary site, which caused a few problems and took some time to diagnose; the symptom was MySQL crashing after the server ran out of RAM. Once I found the cause, I installed and configured Fail2Ban, which quickly put a stop to it. Thankfully, things have been relatively quiet since then.
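For anyone fighting the same thing, the fix looks roughly like this. A minimal sketch, not my exact config: it assumes Apache logging to the default access log, and both the filter and the thresholds are illustrative:

# /etc/fail2ban/filter.d/wp-login.conf (illustrative custom filter)
[Definition]
failregex = ^<HOST> .* "POST /wp-login\.php

# /etc/fail2ban/jail.local (illustrative thresholds)
[wp-login]
enabled  = true
port     = http,https
filter   = wp-login
logpath  = /var/log/apache2/access.log
maxretry = 5
findtime = 600
bantime  = 3600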

My next big project for the site was upgrading to Ubuntu 22.04 LTS. This came with a slew of PHP version issues which took a weekend to iron out, but I was able to get things sorted and felt a lot more comfortable on a newer, actively patched system.

Next up: SSL/HTTPS. I decided to go the Let’s Encrypt route, and with another weekend down, I had certs for my site working along with a permanent redirect to HTTPS. Again, another happy sigh as the site was just a little more secure. This, along with the Ubuntu 22.04 LTS upgrade, meant I was up to date and running the latest software.
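If you’re setting up the same thing today, certbot handles most of the Let’s Encrypt legwork. A minimal sketch, assuming Apache and a placeholder domain:

sudo apt install certbot python3-certbot-apache
# request a cert and let certbot set up the HTTPS redirect (example.com is a placeholder)
sudo certbot --apache -d example.com -d www.example.com --redirect
# verify the automatic renewal works
sudo certbot renew --dry-run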

The final iteration of the current site was moving everything from one server to another. I ran into some issues with the former system during the Ubuntu upgrade process and ended up needing to build an entirely new server to handle everything. The migration took about a week to iron out, but everything was successfully moved to the new host, which has been running for about a year now without issue. Knock on wood, this is the last big move for the site.

With all of this, I’ve learned more about website security, OS patching, upgrades and migrations. It’s been a blast, albeit quite frustrating at times. But with the frustration comes accomplishment and success. A fair trade in my book.

Happy 10 years, and here’s hoping for 10 more!

A Comprehensive Guide to pfSense (AI-generated)

Introduction: In today’s interconnected world, network security is paramount. Whether it’s for home or business use, having a robust firewall solution is essential to safeguarding your network from cyber threats. One such solution that has gained immense popularity is pfSense. In this article, we’ll delve into what pfSense is, how to install and set it up, and weigh its pros and cons.

What is pfSense? pfSense is an open-source firewall and router software distribution based on FreeBSD. It offers advanced features and functionality typically found in expensive commercial firewalls, making it an attractive choice for both small-scale home networks and large enterprise environments.

Installation of pfSense: Installing pfSense is a relatively straightforward process. Here’s a step-by-step guide:

  1. Download the pfSense ISO image from the official website (https://www.pfsense.org/download/).
  2. Create a bootable USB drive or burn the ISO image to a CD/DVD.
  3. Boot your computer from the USB drive or CD/DVD containing the pfSense installer.
  4. Follow the on-screen prompts to begin the installation process.
  5. Choose the installation mode (e.g., Standard Install, Quick/Easy Install).
  6. Select the disk where you want to install pfSense and confirm the installation.
  7. Once the installation is complete, remove the installation media and reboot the system.
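For step 2 on a Linux machine, writing the memstick image looks roughly like this (the image file name is a placeholder for whatever you downloaded, and /dev/sdX must be replaced with your actual USB device; dd will overwrite whatever it’s pointed at, so double-check):

gunzip pfSense-CE-memstick.img.gz   # downloaded file name is a placeholder
sudo dd if=pfSense-CE-memstick.img of=/dev/sdX bs=4M status=progress conv=fsync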

Setting up pfSense: After installing pfSense, the next step is to configure it to suit your specific requirements. Here’s an overview of the basic setup process:

  1. Connect to the pfSense web interface by entering the IP address assigned to the LAN interface in a web browser.
  2. Log in using the default credentials (admin/pfsense).
  3. Follow the initial setup wizard to configure basic settings such as WAN and LAN interfaces, IP addressing, and DNS.
  4. Configure firewall rules to control traffic flow between different network segments.
  5. Set up additional services such as DHCP, VPN, NAT, and VLANs as needed.
  6. Install packages/extensions from the pfSense Package Manager to add extra functionality like intrusion detection/prevention, caching, and reporting.

Pros of pfSense:

  1. Cost-effective: Since pfSense is open-source software, it’s free to download and use, making it a cost-effective solution compared to proprietary firewalls.
  2. Flexibility: pfSense offers a high degree of flexibility and customization, allowing users to tailor the firewall to meet their specific needs.
  3. Robust feature set: pfSense comes packed with a wide range of features, including firewalling, VPN support (OpenVPN, IPsec), traffic shaping, load balancing, and more.
  4. Active community support: The pfSense community is large and active, providing extensive documentation, forums, and user-contributed packages to assist users with setup and troubleshooting.
  5. Scalability: pfSense is scalable, capable of handling networks ranging from small home setups to large enterprise environments.

Cons of pfSense:

  1. Learning curve: While pfSense offers powerful features, it has a steep learning curve, especially for users with limited networking experience.
  2. Hardware requirements: While pfSense can run on a variety of hardware platforms, optimal performance may require dedicated hardware or virtualization.
  3. Maintenance and updates: Regular maintenance and updates are necessary to keep pfSense secure and up-to-date, which may require time and effort.
  4. Limited official support: While community support is abundant, official commercial support is only available at an additional cost.

Conclusion: pfSense is a powerful and versatile firewall solution that provides robust security features, flexibility, and scalability. By following the installation and setup process outlined in this article, users can harness the full potential of pfSense to protect their networks from cyber threats. While there are some challenges such as the learning curve and maintenance requirements, the benefits of using pfSense often outweigh these drawbacks, making it a popular choice for network administrators worldwide.

Source: OpenAI

State of (my) Network: 2023

Welcome to AASullivan.com! I’ve been meaning to do this for a while, so here it is: the State of My Network. I’ve been planning an overview of my entire setup for some time, and after many years of tinkering, everything has finally stayed in a consistent state for a few months. A little background…

My home network (also known as my “home lab”) has been a long-term project of mine to both try out new technologies and keep my skills sharp in a real server environment. It started in 2014 with a single server in my apartment running ESXi 3.5 on a system with a 6-core AMD CPU and 16GB of RAM. It was a blast to try new things (DNS server, file server, Plex, etc.) and I’ve learned a lot since then. My current home lab is considerably bigger, larger now than many small companies’ setups; my poor power bill will never be the same. Let’s take a look at some of the setup!

To start with the basics, I’m running a pfSense firewall connecting out to a Verizon Fios gigabit connection. Behind that is a 48-port gigabit switch with 10GB SFP+ uplinks to a backbone 10GB SFP+ MikroTik switch. I have 10GB between all servers and my desktop, allowing regular speeds of around 550MB/s when going SSD to SSD. That’s the core backbone of my network. Time for the fun stuff.
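As an aside, if you want to sanity-check a 10GB link like this without involving disks, iperf3 does the job (the IP is a placeholder for one of your hosts):

iperf3 -s                 # on one host, run the server
iperf3 -c 192.168.1.50    # on another host, connect to it (placeholder IP)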

I run a large number of self-hosted services including, but not limited to:

  • PiHole DNS blocking
  • Plex
  • Home Assistant
  • Killing Floor 2 gameserver
  • Tiny Tiny RSS
  • NextCloud
  • Zabbix
  • Uptime Kuma
  • Bookstack
  • Graylog
  • …and many more

Most things run on an ESXi 6.5 server: a 1U box with 192GB RAM, dual 6-core Xeons and (6) 1TB SSDs in RAID 10. Alongside it are two Unraid servers (dual Dell R510s), mirrored and both running multiple services. RIP power bill.

The original goal of this network and home lab was to learn more about some enterprise solutions and keep my skills sharp. Instead, I’ve built a network more robust than many small business networks with a lot more control and functionality.

Things I’ve learned while doing all of this:

  • Your power bill is going to suffer (worth it!)
  • Servers put out a LOT of heat; I often notice it even upstairs, where the floor is warm above the servers in the basement
  • Server hardware gives you a ton of ways to troubleshoot things, often with LEDs and beep codes to help narrow down issues
  • Low power hardware options are out there but are often much more expensive up front
  • Knowing what’s going on in your network is awesome. Knowing everything that’s going on can also drive one nuts when you see how often items are connecting to the internet (Looking at you Windows 10 and IoT devices)
  • If you want to build something, searching the internet can give you a lot of ideas and input. Most of my projects were done in an afternoon after finding a new product and reviewing issues/installation

All in all, I’ve taken control of my network, and as much as it can drive me nuts doing all the maintenance and updates, it has been very stable for years and it’s a very good feeling to know what’s going on with everything.

Keep in mind: this can all start with an old desktop computer running Docker or a few services to tinker with. I started with an Ubuntu 6.06 LAMP server in 2008, and it has grown into the passion it is today.

Thanks for sticking through my brain dump, I hope you enjoyed the read and will stop by again. Cheers!

VMWare Kubernetes Cluster Build Notes

I built a Kubernetes cluster tonight to try to learn a bit about this technology; here are my notes from building it!

Built on a new ESXi server with the following specs (you’d be surprised how well this rig holds up):

  • Dell Optiplex 7010
  • Core i7-3770
  • 32GB RAM
  • 1TB SSD

Created a virtual network only accessible to the VMs:

I used pfSense as a firewall to keep everything independent from my main network and to provide DHCP to the internal virtual network, with a single Windows Server 2019 VM to access the pfSense WebGUI. Needlessly complicated, but I wanted a self-contained virtual network to test all of this on.

Built the following VMs on Ubuntu 22.04 LTS:

  • Controller: 2 vCPUs, 4GB RAM, 50GB HDD
  • Worker (x2): 2 vCPUs, 4GB RAM, 25GB HDD

I then set static IP assignments for all of them in pfSense to keep their addresses from changing. From there I used the following guide, specifically the scripts from the GitHub repo mentioned in the article:

https://devopscube.com/setup-kubernetes-cluster-kubeadm/

Command to download scripts:

git clone https://github.com/techiescamp/kubeadm-scripts

Run chmod +x on both scripts to make them executable. Run the common.sh script on all systems, both controller and workers. Then edit the master.sh script: change the MASTER_IP= line to your controller’s IP address and adjust the pod network range under POD_CIDR as needed (I left it as default). Finally, run master.sh on the controller.
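Put together, the sequence looks something like this (a sketch, using the script names from the repo above):

cd kubeadm-scripts
chmod +x common.sh master.sh
sudo ./common.sh    # run on every system: controller and workers
# edit master.sh first: set MASTER_IP, adjust POD_CIDR if needed
sudo ./master.sh    # run on the controller only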

Run this command to get the join command:

kubeadm token create --print-join-command

Run the FULL output on any workers/nodes you want to join to the cluster. Then check the connection on the master/controller:

kubectl get nodes

Example output (illustrative; your node names, ages and versions will differ):
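NAME            STATUS   ROLES           AGE     VERSION
controller      Ready    control-plane   10m     v1.27.1
mini-worker-1   Ready    <none>          2m10s   v1.27.1
mini-worker-2   Ready    <none>          2m05s   v1.27.1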

Note that the workers’ roles are set to <none>. Fix this by running the following command, substituting in the names of your nodes/workers:

kubectl label node mini-worker-1 node-role.kubernetes.io/worker=worker

Then run kubectl get nodes again to confirm they now show the worker role.

Congrats! You now have a (slightly dirty) Kubernetes cluster running and ready to assign deployments to! These are the bare basics, but they should get you rolling. Good luck and happy tinkering!

Building a non-logging, encrypted DNS server

Welcome back! Today I’m working on a project to make my web surfing as anonymous as possible using a combination of a software package called Pi-hole and a VPN provider.

So, let’s start at the basics: VPN and DNS

DNS, or the Domain Name System, is how we find things on the web. Think of it like a pointer: when you go to facebook.com, your request goes to a DNS server, which takes the website name (facebook.com) and converts it to the IP address of a server hosting the website. Example:

euthonis@DESK:~$ nslookup facebook.com 172.31.1.26
Server: 172.31.1.26
Address: 172.31.1.26#53

Non-authoritative answer:
Name: facebook.com
Address: 31.13.67.35

Notice the address 31.13.67.35. This is the IP of a Facebook server hosting their website. Neat, eh? This is how most web access works, apart from rare circumstances where you would use an IP directly.

Now, VPNs.

VPNs are marketed as a way to hide your browsing and activity online, and this is true in most cases. VPN stands for Virtual Private Network. In a nutshell, a VPN creates an encrypted “tunnel” that all of your web traffic passes through, so your ISP cannot see what you’re doing. This offers a great level of privacy, but it doesn’t prevent website tracking cookies, so there are limits. Most VPN services (Nord, Mullvad, TorGuard) claim zero logging on their systems; even if ordered by a court, they have no logs of your browsing history to hand over. Yes, this can sound a bit sketchy, but even a normal user benefits from not being tracked by their ISP and having their data sold to advertising companies.

So what happens if you want the ad-blocking that Pi-Hole offers along with the privacy of a VPN? You build your own DNS (Pi-Hole) server and set it up to be as anonymous as possible.

I followed the guide below for my build, using Ubuntu 22.04 LTS and ignoring the NetData portions (not needed for my use case):

Create DNS-over-TLS bridge with Pi-hole, unbound and stubby on Ubuntu Server

There are a couple of configuration changes needed to keep the Pi-hole from logging any requests:

  1. In the GUI/admin interface, go to Settings > Privacy (tab) and select “Anonymous Mode”. If an error occurs, go into Settings and click “Disable Query Logging”, then “Flush Logs (last 24 hours)”. This disables all Pi-hole logging
  2. Modify the file:
    sudo nano /etc/unbound/unbound.conf.d/logs.conf

    Edit it to look like this:

# disable query logging
server:
    # If no logfile is specified, syslog is used
    # logfile: "/var/log/unbound/unbound.log"
    log-time-ascii: no
    log-queries: no
    log-replies: no
    verbosity: 0

Restart the services:

sudo systemctl restart unbound stubby ; systemctl status unbound stubby -l

With these options set, there is no longer any logging on the server.

For the final part of all of this: a lot of VPN providers allow custom DNS servers. Take the IP address of your DNS server, enter it as the custom DNS server in your VPN client, and connect. You should be able to use the internet over the VPN as before, but now you have your own controlled ad blocking via Pi-hole and the security of knowing your server keeps no logs or history. Assuming your VPN is trustworthy, you should now be essentially invisible on the internet.

I hope this write-up was helpful! I’ve been tinkering with these projects off and on for some time.

One last tip: if you find a website is improperly blocked by Pi-hole, you may need to enable logging again (reversing the changes from Step 1, above) to identify and whitelist the problem domain. Don’t forget to turn logging back off after!

Notes from installing Nextcloud on Ubuntu 22.04LTS

Today I set up another Nextcloud server after taking the old one offline, as the size of its backups was getting a little out of hand. This also lets me run the current version of Ubuntu Server (22.04 LTS).

Here’s the guide I followed with a fresh install of Ubuntu:

How to Install Nextcloud on Ubuntu 22.04

Following it, I ran into a number of issues; my fixes are noted here:

Downgrading to PHP 7.4:

apt install php7.4-mysql php7.4-mbstring php7.4-xml php7.4-curl php7.4-gd

Select 7.4 from list:

update-alternatives --config php

Set version:

sudo a2dismod php8.1
sudo a2enmod php7.4
sudo systemctl restart apache2

Install missing php packages:

apt install -y apache2 mariadb-server libapache2-mod-php7.4 \
php7.4-gd php7.4-json php7.4-mysql php7.4-curl \
php7.4-intl php7.4-mcrypt php-imagick \
php7.4-zip php7.4-xml php7.4-mbstring

sudo systemctl restart apache2
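To confirm the downgrade actually took, a couple of quick checks help (assuming Apache with mod_php, as in the guide):

php -v                     # CLI should now report PHP 7.4.x
apache2ctl -M | grep php   # should list php7_module, not php8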


Using Uptime Kuma for Network Monitoring

Welcome back!

Today I’m looking into Uptime Kuma, a monitoring tool that is, in my eyes, perfect: simple, intuitive, and exactly what I need for monitoring my home network. What is Uptime Kuma? I’m glad you asked.

The Background

Uptime Kuma is an open-source monitoring solution, perfect for SOHO (Small Office/Home Office) environments where there aren’t a ton of devices and services to monitor. I’d been looking for something like this for some time, after building a very rudimentary monitoring system of my own (relying solely on ping and notifying via webhooks in Slack). This is one of those pieces of software I wish I had known about years ago.

The setup is exceptionally simple, especially when using Docker. I used a small Dell SFF Optiplex (i3, 4GB RAM, 240GB SSD) running Ubuntu 22.04 LTS, which is more than capable of handling the 40-50 checks I’m running. Using a dedicated box instead of a virtual machine on one of my hosts also means the monitoring stays up if those hosts go offline.

The Build

As mentioned, I installed a base image of Ubuntu 22.04 LTS to have the latest software running. Then use snap to install Docker:

snap install docker

Then, install the Uptime Kuma docker to run in the background automatically:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1

Fire up a web browser and go to http://YOUR_DEVICE_IP:3001 to set up an admin account and begin working with the software. Note: installing this way automatically brings the container back up in the background after a reboot.

Putting it together:

Uptime Kuma offers a wide variety of tests and monitors to keep tabs on your network. I’m using a mix of ping, TCP and DNS checks to make sure my network is working as expected. This also helped me sort out a brief internet outage this morning.

An example of one I use to make sure my internet is actually connected is a simple ping to Google.com every minute, confirming my traffic can reach the WAN. If this check goes down, my WAN connection to the internet isn’t working and the page gives me an immediate pop-up to tell me. The dashboards, with uptime history and response-time graphs for each monitor, are pretty cool too.

One thing to keep in mind is how you name the devices/ports/etc. being monitored. My solution was to name the test type first, then the device in question, so at a glance I know what I’m looking at.

From top to bottom on my list: the first two are WAN access checks, then TCP port checks for the various servers I host for my friends and me; the TCP port check is super handy for catching when a process or service goes offline. The third section is where the ping tests for my home devices begin.

The Wrap Up

All in all, this is the solution I’ve long been looking for. It’s simple, elegant and easy to use while still being extremely valuable as a monitoring tool. It notifies me through a private Discord server if anything goes down, within 5 minutes at worst and normally in less than 1. Being able to configure retries and timeouts is also very handy for high-latency connections. I can easily recommend this software to anyone wanting to set up monitoring for a SOHO network, and I could see it being used in enterprise settings as well. Uptime Kuma, you have a new fan.

Battery backups: maintaining access when there’s no power

A little background: some years back I learned a very hard lesson when I lost power on a RAID array that didn’t have an onboard battery backup. The result was ~7TB of data gone, about 1.5TB of it completely irreplaceable, including old schoolwork and photos. That was a hard pill to swallow, and it pushed me to get better about redundant backups and something else especially important: UPS units, or uninterruptible power supplies.

A UPS is a device which provides AC power to whatever is plugged into it for a short time during a home or business power failure. I have several of these scattered throughout my home, including one for my desktop and a couple powering lights around the house which act as emergency lighting (older, smaller UPS units). My servers have always run on UPS backups, but on a much larger scale.

My company was kind enough in the fall of last year to give away a large amount of hardware. In that haul was a 2U rackmount UPS which can run all of my (4) 2U servers, firewall and backbone switches in my basement for about 30-40 minutes. I also still had my old UPS units: a 1350VA and a 1500VA. I wanted to use these to their maximum potential.

I had to move the server rack around (store-bought metal shelving FTW), which gave me a great opportunity to plan ahead for what I wanted to do. Most servers come with redundant power supplies; that is, they accept multiple power inputs, so if one power supply fails or stops receiving power, the server switches to the other without interruption or power loss. Neat, eh?

I ran all the primary power supplies into the new UPS, then routed all the secondary power supplies and the networking hardware into the older UPS units. This allows a total runtime somewhere in the 40+ minute range during a power outage. Not bad, but there’s a catch: how do I shut down (2) ESXi hosts, (2) unRAID hosts and multiple other smaller systems when the power is out and I need remote access? Simple: a battery backup on my desktop in my office.

I put yet another UPS in my office and ran a dedicated 10GB SFP+ line to my core switch in the basement, which is also on a UPS. This lets my desktop run for somewhere between 25-30 minutes without power, along with one large monitor and some LEDs in my office acting as emergency lights. I had done testing before, but I finally had a real situation pop up.

Last night we had quite a wind storm and the power was out for about 30 minutes. Sometimes my power just flickers for 1-2 minutes, but this was clearly a longer outage. After ten minutes, I logged in and began gracefully shutting down the VMs on the ESXi hosts, then shut down my two unRAID hosts as well. In just a few minutes, my entire network was gracefully shut down without data loss or interruption. Adding to this, I was actually in the middle of a hard drive swap on one of my unRAID hosts, which completed without issue thanks to the UPS backups. Another win for preparedness.

I hope this gives you, the reader, some ideas for a home network or a small business and shows why this is so important; had I suffered a complete power failure without the battery backups, I could have lost multiple tens of terabytes of data and corrupted the disk rebuild on my unRAID system. A little planning goes a long way here. Plus, it’s just cool; why else would we homelab? Cheers!

Raspberry Pi offline Wikipedia

Wikipedia is a vast archive of knowledge we tend to forget is there. An encyclopedia written by users and edited by a community, it has a high accuracy rate and information on just about any subject you could want. You can even download an entire archive of it, weighing in at around 90GB at the time of this writing!

I’ve had the idea for a while now of making an offline version to run locally for myself or friends, maybe something to browse during a flight or road trip. Or, as my prepping instincts say, maybe something easy to access when the power’s out! Enter the Raspberry Pi: a low-cost, low-power computer that can run all of this off a battery pack, accessed from one’s phone, tablet or computer. This is easier than you might think! I’ll be going over the ideas and thought processes at a high level, as the project took some time; reach out to me if you’d like more details.

The goals of this project were as follows:

  • Use a Raspberry Pi to run this off a battery pack for several hours at minimum
  • Must be 100% self-contained; it should be able to boot, run and provide access without user input
  • Access must be simple; in this case, a self-created Wi-Fi network hosted by the Raspberry Pi
  • Small and easy to travel with; for this reason I went with the Raspberry Pi Zero W, one of the smallest Raspberry Pi single-board computers out there (about the size of a large flash drive)

Starting with the basics: the Raspberry Pi Zero W is a single-board computer a little bigger than a flash drive which can be powered by a small USB battery pack via a micro-USB adapter. I installed a 128GB micro-SD card and flashed an image of Raspberry Pi OS onto it (a Debian-based Linux for ARM).

The next step was to download a suite of tools called Kiwix. This neat set of applications lets you host a downloaded archive of Wikipedia (a ZIM file) and serves users a simple web interface just like Wikipedia’s. Once it was running, I could access the archive over my local network.
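Serving the archive boils down to a single command. A minimal sketch, with a hypothetical ZIM file name standing in for whichever Wikipedia dump you download:

# serve the archive on port 8080 (ZIM file name is hypothetical)
kiwix-serve --port=8080 wikipedia_en_all_maxi.zim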

Next up was using hostapd and dnsmasq to build a Wi-Fi network on the onboard wireless chipset, so any device can connect directly to the Raspberry Pi and get DHCP and local name resolution. This let me connect from any device with Wi-Fi; I used a tablet to configure and confirm it.
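The configs for that step are short. A sketch, assuming the wireless interface is wlan0; the SSID, passphrase and address ranges are arbitrary examples:

# /etc/hostapd/hostapd.conf (illustrative values)
interface=wlan0
ssid=OfflineWiki
hw_mode=g
channel=7
wpa=2
wpa_passphrase=changeme123

# /etc/dnsmasq.conf: hand out DHCP leases and resolve every name locally
interface=wlan0
dhcp-range=192.168.4.2,192.168.4.50,255.255.255.0,24h
address=/#/192.168.4.1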

For the last portion, I installed a LAMP stack onto the system (Linux + Apache + MySQL + PHP) and a copy of WordPress. I wanted a way to easily write notes into a webpage for anyone to see when accessing this: information, notes, ideas, etc. It’s surprising how well this runs off the little system, too!
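For reference, the LAMP portion on a Debian-based system is just a handful of packages (MariaDB stands in for MySQL on Raspberry Pi OS), with WordPress unpacked into the web root afterwards:

sudo apt install apache2 mariadb-server php php-mysql libapache2-mod-php
# fetch and unpack WordPress into the web root
wget https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz -C /var/www/html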

In the end, I have a small, ultra-low-power server which gives me access to a vast amount of information on an almost endless supply of topics, along with WordPress for notes and anything further I want to add!

In the future, I may add HTTrack to the system to rip websites for offline viewing, giving me even more information offline as I want it. This was more of a proof-of-concept system, but I’m quite proud of how it turned out. Anyways, I hope you enjoyed my rambling and thoughts. Cheers!