Saturday, March 21, 2026

My Linode Web Server is Now Containerized and Utilizing Both Domains!

I got a bit bored today and felt that I could tackle testing a Docker container to leverage Let's Encrypt SSLs.

As usual, I wanted to leverage AI and, since Gemini gave me major success with few problems last go-around, I used it again.

The First Domain:

I started with wigglit.com.  As I already had a compose file to work with and I'd previously tested running containerized Wordpress on the Linode, it was a rather easy start.  I shared the file with Gemini and asked what needed to be added or changed.  The biggest change was adding an Nginx-based sidecar to handle the SSL termination (the domain was already set up for use).
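The end result looked roughly like the sketch below.  Service names, credentials, and volumes are placeholders, and the proxy shown is https-portal (the Nginx-based image I ended up with) - treat this as a sketch, not my exact file:

```yaml
services:
  db:
    image: mariadb:11
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: changeme          # placeholder credentials
      MYSQL_ROOT_PASSWORD: changeme-too
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html

  https-portal:                         # the Nginx-based SSL-terminating sidecar
    image: steveltn/https-portal:1
    ports:
      - "80:80"
      - "443:443"
    environment:
      DOMAINS: "wigglit.com -> http://wordpress:80"
      STAGE: production                 # use 'staging' while testing, to avoid LE rate limits
    volumes:
      - ssl_certs:/var/lib/https-portal

volumes:
  db_data:
  wp_data:
  ssl_certs:
```

The sidecar requests and renews the Let's Encrypt certificate itself, so neither Wordpress nor the host's Apache has to know anything about SSL.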

Well, when I tried to access the site, I couldn't - I was getting "502 Bad Gateway" errors, and SSL wasn't working either.  I checked the logs and saw many 404 errors, not just from my public IP but from other IPs, too.  I'd copied over a pre-existing copy of the Wordpress site and I think some permissions were borked - the new compose file didn't like something within my new workspace.  I found that the file ownership and permissions were jacked, and even after sorting that out, I still had issues - the gateway errors were resolved and I could now see server code 200s in the logs, but the pages being served lacked content (they were blank).
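For reference, this is the general shape of a permissions reset on a copied Wordpress tree.  WP_DIR and the demo files below are placeholders; on the real server you'd also chown the tree to the web-server user (www-data, uid 33, in the official image), which needs root and is commented out here:

```shell
# Hedged sketch: reset permissions on a copied WordPress tree.
WP_DIR=./wordpress-data
mkdir -p "$WP_DIR/wp-content"          # demo tree so the commands have a target
touch "$WP_DIR/wp-config.php"
chmod 600 "$WP_DIR/wp-config.php"      # simulate the borked permissions

# chown -R www-data:www-data "$WP_DIR" # needs root; uid 33 in the official image
find "$WP_DIR" -type d -exec chmod 755 {} +   # directories: rwxr-xr-x
find "$WP_DIR" -type f -exec chmod 644 {} +   # files: rw-r--r--
```

Whether this fixes a given 502/blank-page problem depends on what actually got mangled in the copy, which is why I eventually gave up on the copied tree and restored from backup instead.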

Instead of waiting for Gemini to further help with this, I decided to just create a new workspace but without the copy of the Wordpress content I was previously using.  I started from scratch, to test.  Well, the test worked. I immediately got the Wordpress setup page, and I set up a quick default page, knowing I was just going to retrieve an offsite backup (yes, I keep backups).  It took maybe 15 minutes to get a recent copy of the site (minus a week of data).  And, on top of that, SSL was working!

So, the Wordpress site is now running, as a container, and using SSL.  It's not like I hadn't done this before, but when I last had things containerized, it wasn't leveraging an SSL.  As well, I was running two domains through Apache and didn't know how to run both using a container.  When I'd run the container, it would get in the way of running the other site using a different domain.  While I'd discovered how to run both outside of Docker, I was still researching if I could do it with Docker.

The Second Domain:

My other domain is unixfool.us.  It is a plain static website with very old content.  I use it mainly to archive and share photos.  As it is just sharing pictures, I don't really need it to leverage SSL.  Eventually, I will enable SSL with that domain, but for now, I just need it running.

I gave Gemini instructions to leverage the same compose file to get unixfool.us to serve pages, via basic HTTP.  Gemini gave me an updated compose file so that I could leverage that Nginx proxy with unixfool.us, even via port 80.  This one was easy, as all I really had to do was point the compose file to where the website content was located on the server.
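In sketch form, the change amounted to adding a static-site service and a second entry to the proxy's DOMAINS list (the content path and service names are placeholders, and how the proxy treats the plain-HTTP domain depends on its per-domain settings):

```yaml
services:
  static-site:
    image: nginx:alpine
    volumes:
      # point this at wherever the site content lives on the host
      - /path/to/unixfool-content:/usr/share/nginx/html:ro

  https-portal:
    environment:
      DOMAINS: >-
        wigglit.com -> http://wordpress:80,
        unixfool.us -> http://static-site:80
```

The proxy routes by hostname, so both domains can share ports 80/443 on the one Linode - which was exactly the thing I couldn't do with my earlier container setup.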

Gemini screwed up at this point, though.  I'd asked it to give me the updates to the existing compose file that would enable unixfool.us to work.  It actually changed the login credentials for Wordpress, in addition to giving me what I asked for.  I had to compare both files to see the discrepancies, but I also had a hint, because I'd tried to run the compose file and saw some DB errors - I suspected credentials had changed.  This is why it's important to always check AI output - never fully trust it.  Trust but also verify.

After I fixed the credentials, I was able to access both websites while both were running as Docker containers.  The sites are fast!

I do have a question about the SSL certificate, though.  It expires in 3 months.  With the old setup, I had a cron job running that was supposed to renew the certificate before it expired....that setup was before the new setup that leverages Docker.  I think all I have to do is restart the container sometime before the 3 months expires, but I'm not sure.  I may have to research further about this (or I can just test this out next week - checking to see if the cert is recreated when I shut down and restart the container, keeping track of the "issued on" and "expiring on" dates).
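To keep track of the "issued on" and "expiring on" dates, openssl can read them straight off a certificate.  The sketch below generates a throwaway 90-day self-signed cert so it's self-contained; for the live site you'd point `openssl s_client -connect wigglit.com:443` at the real thing instead:

```shell
# Generate a throwaway 90-day cert to demonstrate on (stand-in for the real one):
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 90 -nodes -subj "/CN=demo.example" 2>/dev/null

# Print the notBefore ("issued on") and notAfter ("expiring on") dates:
openssl x509 -in /tmp/demo.crt -noout -dates
```

Running that before and after a container restart is an easy way to see whether the dates actually changed.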

This project was fun, though...seriously!

UPDATE:  I researched the certificate renewal and the https-portal (the reverse proxy) container actually runs a cron job that periodically checks the SSL expiry and will renew the certificate when it reaches the 30-day mark.

UPDATE:  I ended up going down a rabbit hole of remediation issues.  The unixfool.us site wasn't allowing viewing of the images (fixed with an .htaccess entry).  I also had an issue where Wordpress wasn't allowing uploads of images bigger than 2 MB, which took a LONG time to remedy, as AI kept giving me the wrong suggestions (both Gemini AND ChatGPT) - I ended up using Google to find the answer.  Again, I had to add directives to the .htaccess file in the folder where wp-config.php resides, but even so, the docker instance of Wordpress doesn't allow large pixel counts - I had to reduce the image dimensions to bring the pixel count down.
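For the record, the upload-limit additions to .htaccess were along these lines.  The values are illustrative, and this only works because the official wordpress:*-apache images run mod_php, which honors php_value directives in .htaccess:

```apache
# Raise PHP's upload limits for WordPress (illustrative values):
php_value upload_max_filesize 64M
php_value post_max_size 64M
php_value memory_limit 256M
php_value max_execution_time 300
```

Note that post_max_size has to be at least as large as upload_max_filesize, or the smaller of the two wins.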

UPDATE:  Annnnnd...I thought I was done but found that the image I was using (wordpress:latest) shipped an older version of PHP (v8.2).  Wordpress was flagging it as a security issue.  Instead of waiting to be hacked, I checked whether the official Wordpress Docker repository had an image with a later version.  I tried wordpress:php8.4-apache, which got rid of the security alert, then I saw that there was a wordpress:php8.5-apache image, which I settled on.  I'll watch to see if this image is problematic.
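In the compose file, that pin is a one-line change (sketch, assuming the service is named wordpress):

```yaml
services:
  wordpress:
    image: wordpress:php8.5-apache   # pinned; wordpress:latest was pulling PHP 8.2
```

Pinning a tag also means PHP upgrades become a deliberate edit rather than a surprise on the next pull.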

UPDATE:  In conducting next-day checks, I noticed that 2FA stopped working (I'm using native 2FA).  I had to disable it and then re-enable it, which meant I had to re-enter the QR code in my authenticator app.  This was a bit annoying to see occur, because it meant that my account didn't have 2FA the past 12+ hours, but it's fixed now.

Saturday, March 14, 2026

Another Win - Redirecting HTTP to HTTPS; Untangling Apache Configuration Files

I posted a while back that I'd enabled SSL on one of my public domains for the first time.  I was running into issues with forcing SSL, though.

The specific issue was that I'm running two websites on the Linode host - https://wigglit.com and http://unixfool.us.  One is using SSL certs while the other is not.  Each website had its own VirtualHost configuration file.  When testing, I was able to reach https://wigglit.com without issue, but when going to http://wigglit.com, to ensure I'd be redirected to https://wigglit.com, the redirect was failing.  That issue caused problems in meeting Google's search/indexing requirements.

I found that I had to create a third VirtualHost config file, specifically for the redirect to port 443.  So, I have a configuration file for unixfool.us:80, wigglit.com:80 (for the redirect to port 443), and wigglit.com:443.
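The redirect-only VirtualHost ended up being tiny.  A sketch of what that third config file can look like (the ServerAlias line and file layout are assumptions on my part):

```apache
# HTTP->HTTPS redirect for wigglit.com only; unixfool.us keeps its own :80 vhost.
<VirtualHost *:80>
    ServerName wigglit.com
    ServerAlias www.wigglit.com
    Redirect permanent / https://wigglit.com/
</VirtualHost>
```

The key point is that this vhost serves no content at all - it exists purely so that plain-HTTP requests for wigglit.com have somewhere to land and get bounced to port 443.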

I'd been struggling with this for days.  It was important that I have both wigglit.com:443 and unixfool.us:80, running on the Linode host, as I use unixfool.us to share photos.

I've been posting quite a bit about using AI/LLMs to assist in solving computing issues.  I'd previously been using ChatGPT.  I'm sometimes frustrated with ChatGPT, as I have to babysit it a lot - sometimes its instructions aren't clear, and I've also caught it sharing bad data more than once.  I know just enough to be able to question the data it offers.  I tried to leverage ChatGPT again with the SSL/Apache issue, but mid-stream, got frustrated and decided to try a different LLM.

I used Gemini (since I already have a Google account and heavily use Google tools).  I immediately noticed that Gemini appears to be the better AI, as its answers were more clearly understandable (I didn't have to ask for clarification).  Also, I didn't catch it sharing bad data or questionable answers to my issue.  Usually, I am very clear in how I present the problem to the LLM, and this case was no different.  I described the issue and shared the architectural layout of the Apache server and how I currently had it configured.  It was a decent paragraph's worth of information.  Gemini took the info and offered a clean solution!

I think I'll be using Gemini going forward.



Friday, March 13, 2026

SSH Local Port Forwarding - Accessing LAN Nextcloud Without VPN Connection

I've been using Nextcloud on my LAN.  It is provisioned to only be used on the LAN.

For the first time in a long while, I'm away visiting relatives.  I wanted to access the Nextcloud app, but since it's only accessible from the LAN, I thought I'd not be able to access it from another state without jumping through some hoops.  I was wrong.

Now, I've done this before but it's been like 15 years and I was initially super rusty with this:  I wanted to try to access the Nextcloud console by establishing an SSH tunnel.

I've one machine that has port 22 exposed to the internet (using SSH key authentication - yeah, I'm not totally aloof).  That machine is a Mac Mini - it is functioning as an SSH jumphost.  The Mac Mini can talk to the machine that is hosting a Docker instance of Nextcloud.  The Nextcloud console is mapped to port 1234 and is accessible using HTTP.

How did I establish a connection?

The public address to my LAN is 203.0.113.25.  The Mac Mini's IP is 192.168.1.200.  The Nextcloud IP is 192.168.1.22 (listening on port 1234).  

All of the IP info is fictional for this exercise.   To get this to work with your systems, change the IPs to match the hosts of your systems.

I ran the following:

ssh -L 1234:192.168.1.22:1234 ron@203.0.113.25

The above runs in the foreground (it establishes an active shell connection).

ssh -f -N -L 1234:192.168.1.22:1234 ron@203.0.113.25

The above runs in the background (it prompts you for login creds or key authentication, and nothing else).

Then, open a browser and type:  http://localhost:1234

Using the above-mentioned steps, I was able to access the Nextcloud console using a browser client.  Not only that, Nextcloud has an agent client.  I pointed that to http://localhost:1234 and it connected!

To better script this process, you can also add the following to your SSH config:

Host home
    HostName YOUR_PUBLIC_IP
    User username
    LocalForward 1234 192.168.1.22:1234

And then run the following command:

ssh home

Needless to say, if you've SSH exposed to the internet, you should use key-based authentication and disable password authentication.  As well, I recommend some type of rate-limiting, as you're going to see a crapload of bots attempting brute force authentication against your exposed SSH port (I use fail2ban).  Using a non-standard port to serve SSH connections is also an option, as most bots tend to only look for port 22 (note that this is obscurity, not security - it doesn't really make anything more secure).
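For reference, a minimal fail2ban jail for sshd looks something like this - the values are illustrative, and on most distros this lives in /etc/fail2ban/jail.local:

```ini
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 10m
bantime  = 1h
```

With that in place, an IP that fails authentication 5 times within 10 minutes gets banned for an hour.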

This example is also for my Nextcloud setup but can be used for anything.  For example, I can use it to access my Portainer console on my Docker host.  I can use it to access other hosts on the LAN besides the Docker host system, as well.  Anything goes.

This isn't really anything super revealing...folks have been doing this for years in corporate IT and home.  I just thought it would be cool to share something that could help some folks that have never done such a thing.  Have fun with it!

Wednesday, February 18, 2026

I Probably Shouldn't Have Bought The AirPods Pro Max

I'm starting to hate my Airpods Pro Max headset.

Most of the time, when I need it, I can't get a good connection.  It's buggy as hell.  I'm so tired of having to reset the headset, only for it to still not get a connection.  I shouldn't have to remove the ear cups from the head harness just to clean the connections EVERY TIME I NEED THEM.

And when I do get a connection, it's like it's a half connection.  When this happens, I can't enable ANC or adjust the volume.

And the ear cushions are a pain in the ass, too.  They need to be cleaned.  A LOT.  If you don't clean them, they start smelling funky.  In fact, I bought a new set, as the smell wouldn't come out of the original cushions.

I've also been seeing a lot of moisture inside the ear cups, under the ear cushions.  I'm not sure if that's causing the connectivity issues, but I've removed the cushions and wiped the insides with a Kleenex and the internals were moist enough to where the Kleenex was soaked.

I'm on the edge of deciding to ask for warranty support (I've not owned them a year yet, and I've AC+).

After they fix it, I think I'm going to sell them and get something nice but non-Apple.

It's a pity, because, when they work, they're outstanding.  WHEN THEY WORK.  They're more broken than used.  So many folks complain on subreddits about the exact same issues.

This is disappointing, because prior to this, I'd bought a set of Beats Studio Pros...they had durability issues - they just started falling apart, but at least they never failed to work.  I still have them, too...last I checked, they were still working.

If you're thinking of buying a Max headset, DON'T DO IT. 

Friday, February 06, 2026

Added Cockpit to Ubuntu 25.10 on the Pi 500+

Today, I wanted to add Cockpit to the Pi 500+ Ubuntu install, but I didn't want to sit at the desk where I'd placed the keyboard.

When I tried to ssh into the Pi, I kept getting connection refusals, which I thought was odd.  I ended up having to spend some time at the Pi keyboard, investigating why I couldn't connect to it on port 22.

I found out that I'd never installed ssh!  I could've sworn I did, but maybe it was on the Pi OS install that I'd installed it.

So, after I installed it, I installed Cockpit (I wanted to try it instead of using WebMin).  I then found that, after the install, when logged into Cockpit, it was only allowing my user limited administrator access.  It gave the option to gain full admin privileges, but when I clicked it, it gave an error that sudo couldn't be leveraged to escalate privileges.  When I googled that error, I found that one of the suggested fixes was to add your user to the wheel group.  My Pi didn't have a wheel group, so I had to create one.  Once I created the wheel group, I had to add my user to that group.  

I then double-checked my research and found a link to a Cockpit bug report of this exact issue from back in October 2025.  The issue was that sudo was recently redeveloped in Rust, and the Rust implementation apparently does not support the --askpass flag, which Cockpit uses.  The fix is to run the following (it switches back to the non-Rust sudo implementation, which is still available):

# update-alternatives --set sudo /usr/bin/sudo.ws

Now, I've Ubuntu all over the house on various machines, most of them running Cockpit.  I've not seen this issue before, and I've done a bunch of recent installs of Ubuntu 25.10.  In fact, I installed Cockpit on my docker container host, today.  It didn't exhibit this issue/bug.  I'm wondering why I'm only now seeing it.  I'm glad there's a workaround, though.


Saturday, January 24, 2026

Google Search Console - Requirements Overkill!

I've always had a not-so-good experience with Google's Search Console.  

I get it - they're trying to ensure that web content is meaningful.  

I get it, but damn, every single page object appears to have arcane criteria, otherwise Google will not crawl the page.

This is problematic for me, because I don't want to end up a slave to the process of getting Google to index my website because, in my opinion, they go overboard with things.

For example, there's a 40% chance that I'll include an embedded video when I submit a new post to my Wordpress-powered web page.  I've always wondered why my videos aren't being indexed by Google.  I'm now discovering that the videos won't be processed if they don't reside on a "watch page".  A blog post that includes an embedded video will not be indexed because the video is complementary to the rest of the content on the page.  This means that I've to create a dedicated video landing page.  WTF.

That's one of many examples.  I've a large batch of pages that aren't being indexed because of the ridiculous criteria that Google requires.

The act of creating a video landing page within Wordpress isn't difficult.  Having to walk backward through all the prior posts that contain embedded videos so that I can add them to a watch page -- that's a lot of work.  And then what?  Do I have to make the prior posts with embedded videos link to the newly built landing page's videos?

Bureaucracy overkill, that's what this is.

What I'll do is create the new watch page and add a few videos a day.  Maybe I'll be done in 6 months?  I'm certainly not going to let this overburden me. 

Sunday, January 18, 2026

I Just Set Up a Web Server to Use an SSL Cert, Using Let's Encrypt!

Yesterday, I was bored and had been contemplating setting up one of my public web sites to use an SSL certificate.

While most business websites use SSL certificates, SSL certs aren't really mandatory for just serving web content for reading purposes.  I've been using Apache to serve web pages for a LONG time and never felt the need to enable HTTPS, as it wasn't required.  That changed when I found that I wanted my website to be more noticeable within search engine results.  To place higher in search results, the web server serving the content needs to use HTTPS.

As I host my own server, my options were to set up my own SSL certificate or to buy an SSL certificate for use with my server.  I decided to set up and deploy my own.

I used this link's instructions (I used CertBot, which uses Let's Encrypt, which I'll reference as LE) to set everything up.  Keep in mind that I'm running Ubuntu 25.10 on a Linode to host my server.

After I built the certificates, I had a difficult time determining how to leverage them.  I initially tried using a WordPress plugin to import the certificate, but I tried like 5 different plugins and none worked.  I then pivoted and tried a different method - I'm running Apache to serve Wordpress, so I set up the Apache config file to use HTTPS and pointed Apache to the LE certs.  I then used an SSL checker to check that everything was working.  It was.
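The Apache side boils down to a 443 VirtualHost pointing at the LE files.  A sketch, using certbot's usual paths on Ubuntu (DocumentRoot and domain adjusted to taste):

```apache
<VirtualHost *:443>
    ServerName wigglit.com
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/wigglit.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/wigglit.com/privkey.pem
</VirtualHost>
```

This also needs mod_ssl enabled (a2enmod ssl on Ubuntu) and an Apache reload before it takes effect.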

Afterward, I then set up a cron job to renew the certs automatically.
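The cron entry looked roughly like this - the schedule and the reload hook are assumptions, and note that certbot on Ubuntu typically also installs its own systemd timer, which can make a manual entry redundant:

```text
# m h dom mon dow  command
0 3 * * 1  certbot renew --quiet --post-hook "systemctl reload apache2"
```

certbot renew only replaces certs that are close to expiry, so running it weekly is harmless.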

Now, when I check the browser for indications that the website is using SSL, there's no lock icon that I can see.  I researched and saw that I also had to ensure the website's prior content wasn't using HTTP links to internal server content, so I used some Wordpress tools to search for HTTP links pointing to my web server and change them to HTTPS.  I also saw that a lot of my plugins and themes are using HTTP links, and that's supposed to be a no-no for HTTPS compliance - I can't control how plugin providers construct their plugins, so I'm not sure what to do about that.

I think I'm going to enable SSL with my other domain, as well (unixfool.us).

Eventually, I plan to replace my Wordpress website with a docker instance.  I'd need to research how to use SSL certs within a docker compose YML file.  I'm thinking it should be pretty straightforward.  The only thing I can think of that might be an issue is the automatic renewal bit (the bit where I added a cron job to renew the certificate).

UPDATE (1/20/2026):  I just checked again and I can now see that the web page (https://wigglit.com) is showing as secure!