Saturday, January 24, 2026

Google Search Console - Requirements Overkill!

I've always had a not-so-good experience with Google's Search Console.  

I get it - they're trying to ensure that web content is meaningful.  

I get it, but damn, every single page object appears to have arcane criteria attached to it, and if those criteria aren't met, Google will not crawl the page.

This is problematic for me, because I don't want to end up a slave to the process of getting Google to index my website - in my opinion, they go overboard with things. 

For example, there's a 40% chance that I'll include an embedded video when I submit a new post to my Wordpress-powered web page.  I've always wondered why my videos aren't being indexed by Google.  I'm now discovering that the videos won't be processed if they don't reside on a "watch page".  A blog post that reviews an embedded video will not be indexed, because the video is complementary to the rest of the content on the page.  This means that I have to create a dedicated video landing page.  WTF.
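For what it's worth, Google's guidance is that a video should live on a watch page where it's the main content, ideally marked up with schema.org VideoObject structured data so the crawler can identify it.  A minimal sketch of what that markup looks like - every value below is a made-up placeholder, not something from my actual site:

    <!-- hypothetical example; all values are placeholders -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "VideoObject",
      "name": "Example video title",
      "description": "A short description of the video.",
      "thumbnailUrl": "https://example.com/thumbs/video1.jpg",
      "uploadDate": "2026-01-24",
      "embedUrl": "https://www.youtube.com/embed/VIDEO_ID"
    }
    </script>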

That's one of many examples.  I've a large batch of pages that aren't being indexed because of the ridiculous criteria that Google requires.

The act of creating a video landing page within Wordpress isn't difficult.  Having to walk backward through all the posts that contain embedded videos so that I can add them to a watch page -- that's a lot of work.  And then what?  Do I have to make the prior posts with embedded videos link to the newly built landing page's videos?

Bureaucracy overkill, that's what this is.

What I'll do is create the new watch page and add a few videos a day.  Maybe I'll be done in 6 months?  I'm certainly not going to let this overburden me. 

Sunday, January 18, 2026

I Just Set Up a Web Server to Use an SSL Cert, Using Let's Encrypt!

Yesterday, I was bored and had been contemplating setting up one of my public web sites to use an SSL certificate.

While most business websites use SSL certificates, HTTPS isn't strictly mandatory for simply serving web content for reading purposes.  I've been using Apache to serve web pages a LONG time and never felt the need to enable HTTPS, as it wasn't required.  That changed when I decided that I wanted my website to be more noticeable within search engine results.  Search engines (Google in particular) treat HTTPS as a ranking signal, so to place higher within search engine results, the web server serving the content needs to use HTTPS.

As I host my own server, my options were to set up a free certificate myself or to buy one from a commercial provider.  I decided to set up and deploy my own.

I used this link's instructions (I used Certbot, which uses Let's Encrypt, which I'll reference as LE) to set everything up.  Keep in mind that I'm using Ubuntu 25.10 to host my server, on a Linode instance.
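For anyone wanting to replicate this, the Certbot portion boils down to just a few commands on Ubuntu.  A sketch, using a placeholder domain (the certs end up under /etc/letsencrypt/live/<domain>/):

    # install Certbot and its Apache plugin
    sudo apt install certbot python3-certbot-apache
    # obtain a cert without letting Certbot modify the Apache config
    sudo certbot certonly --apache -d example.com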

After I built the certificates, I had a difficult time determining how to leverage them.  I initially tried using a Wordpress plugin to import the certificate, but I tried like 5 different plugins and none of them worked.  I then pivoted and tried a different method - I'm running Apache to serve Wordpress, so I set up the Apache config file to use HTTPS and pointed Apache at the LE certs.  I then used an SSL checker to verify that everything was working.  It was.
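The relevant part of the Apache config ends up looking something like the sketch below (the domain is a placeholder; mod_ssl also needs to be enabled, e.g. via 'sudo a2enmod ssl'):

    <VirtualHost *:443>
        ServerName example.com
        DocumentRoot /var/www/html

        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    </VirtualHost>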

Afterward, I set up a cron job to renew the certs automatically.
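Something like the crontab entry below (a sketch - 'certbot renew' only replaces certs that are close to expiring, so running it twice a day is cheap; newer Certbot packages also ship a systemd timer that handles this for you):

    # attempt renewal twice daily; reload Apache if a cert was actually renewed
    0 3,15 * * * certbot renew --quiet --post-hook "systemctl reload apache2"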

Now, when I check the browser for indications that the website is using SSL, there's no lock icon that I can see.  I researched and saw that I also had to ensure the website's prior content wasn't using HTTP links to internal server content, so I used some Wordpress tools to search for HTTP links pointing to my web server and change them to HTTPS.  I also saw that a lot of my plugins and themes are using HTTP links, and that's supposed to be a no-no for HTTPS compliance (browsers treat it as mixed content) - I can't control how plugin providers construct their plugins, so I'm not sure what to do about that.
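For the internal links, WP-CLI can do this kind of search-and-replace from the shell if you have it installed - a sketch with a placeholder domain:

    # dry run first, to see what would change
    wp search-replace 'http://example.com' 'https://example.com' --dry-run
    # then run it for real (rewrites the URLs throughout the Wordpress database)
    wp search-replace 'http://example.com' 'https://example.com'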

I think I'm going to do this with my other domain, as well (unixfool.us).

Eventually, I plan to replace my Wordpress website with a docker instance.  I'd need to research how to use SSL certs within a docker compose YML file.  I'm thinking it should be pretty straightforward.  The only thing I can think of that might be an issue is the automatic renewal bit (the part where I added a cron job to renew the certificate).
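My current thinking is that the compose file would just bind-mount the host's Let's Encrypt directory into the container, so the cron-driven renewals keep working untouched.  A sketch of that idea (the image and paths are assumptions on my part, and configuring the container's web server to actually use the certs would be a separate step):

    services:
      wordpress:
        image: wordpress:latest
        volumes:
          - /etc/letsencrypt:/etc/letsencrypt:ro   # host's LE certs, read-only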

UPDATE (1/20/2026):  I just checked again and I can now see that the web page (https://wigglit.com) is showing as secure!  

Friday, January 02, 2026

Raspberry Pi OS, Begone!

Last night, I tried to run a docker container on the new Pi - a container that I've been able to use on other systems without issue.

This experience was pretty much a nightmare.

I was able to install Docker without issue and the 'hello world' container worked fine.

When I tried to run a Wordpress container, there were cascading issues.  Granted, I know that the Pi runs on the ARM chipset, so I did have to make adjustments for that, which wasn't all that difficult.

The main issue I had related to the Pi OS misconfiguring things.  There were things being blocked by the OS due to bad routing.

While I was able to get the Wordpress container to run, I couldn't connect to it initially.  In fact, I couldn't reach the internet at all, using curl or any other browser client.  Apparently, curl is kinda weird on the Pi OS, as it expects port 80 to be usable, and I'd mapped port 80 as the Wordpress service port.  Since nothing else on the host was configured to use port 80, I initially felt it was safe to use port 80 for the Wordpress container.  NOPE!  When I did, it broke some things relating to curl and routing.  After ChatGPT informed me that it's best not to use port 80 for the Wordpress container, I changed the port to 8888, with no success.  It ended up taking me like 6 hours to determine the issue.  ChatGPT kept repeating repair steps that weren't working, until I forced it to look for other issues.
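For reference, the port change itself is a one-line tweak in the compose file - a sketch (the image tag is illustrative):

    services:
      wordpress:
        image: wordpress:latest
        ports:
          - "8888:80"   # host port 8888 maps to the container's port 80, leaving host port 80 alone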

At 4 AM this morning, I was finally able to reach the container using curl, Chromium, and Firefox, but I was still experiencing connection drops when trying to use DuckDuckGo.  I also noticed several other connection drops (some Wordpress plugins require backend callbacks to 'home' using curl - those started breaking again).

The fix was to remove some default routes that were associated with the containers.  I also had to remove some rules from iptables, remove some IP links, and add additional config context to the wp-config.php and compose.yml files.
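I didn't keep an exact log of that 4 AM session, but the cleanup used the standard tooling.  A sketch of the kinds of commands involved - the bridge name and subnet below are placeholders, not my actual values:

    sudo ip route del 172.18.0.0/16 dev br-xxxxxx   # drop a stale docker bridge route
    sudo ip link delete br-xxxxxx                   # remove the orphaned bridge interface
    sudo iptables -L -n --line-numbers              # inspect the rules...
    sudo iptables -D DOCKER-USER 3                  # ...and delete an offending rule by number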

Later in the day, I checked the container again and noticed that the problem routes I'd removed had been re-added by the Pi OS, reverting my work.

I got fed up and decided to start from scratch with another OS.  

I chose Ubuntu, since I'm already familiar with it.  The only wildcard is that this Pi system is still powered by ARM, so I might still run into some things that are currently unknown to me...I'll just have to be prepared for any chipset-related issues that may occur, but I trust Ubuntu more than Pi OS at this point.

Ubuntu 25.10 is now installed on the system's internal SSD.  I used the Pi boot options to reinstall the OS...that's a cool option, but I wish it also gave the option to use wireless connections instead of ethernet, as I had to jump through hoops to ensure I could use ethernet where the Pi system is currently located.

As well, I wasn't prepared for the new OS install to take 45 minutes.

As frustrated as I was, it's all a learning experience for me.  As well, there's less frustration in reinstalling when I'm using a Pi.

I'd post some of my ChatGPT session, but it was messy and an hours-long chat.

I'll keep you all updated on my progress with Ubuntu on the Pi.

UPDATE (1/2/2026):  Yeah, I already deployed a docker instance of Wordpress in Ubuntu, on the Pi.  I had none of the issues I had last night deploying the same .yml file on the Pi OS, apart from another round of changes to ensure the images being pulled supported ARM.  The two experiences were very different.
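The ARM change amounted to making sure the images being pulled had arm64 variants.  A sketch of the sort of compose tweak involved (image names are illustrative; most official images publish arm64 builds these days):

    services:
      db:
        image: mariadb:latest
        platform: linux/arm64   # be explicit about the architecture being pulled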

UPDATE (1/3/2026):  I've still not noticed any issues.  All is well, I think!

UPDATE (1/10/2026):  One issue I've noticed - I've lost sound (through the HDMI connection).  I'm not able to hear system sounds or anything like Youtube audio or music streaming through Audacious.  I've been working on getting it to work but have yet to see success.  With Audacious, I can actually see the music playing, but I can't hear it at all (the audio devices show as up).
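For anyone following along, these are the sorts of commands I've been poking at - a sketch, assuming PipeWire, which is the default audio stack on recent Ubuntu releases:

    wpctl status                  # list audio devices/sinks and show the current default
    pactl list short sinks        # same info via the PulseAudio-compatibility layer
    wpctl set-default <SINK_ID>   # point the default sink at the HDMI output
    speaker-test -c 2             # quick audible test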

Thursday, December 25, 2025

Christmas Day 2025 - Received Raspberry Pi 500+ As A Gift!

Good day, all!

My family opened Christmas presents last night at 12 AM (most of us didn't want to wake up early to open presents).

I received a Raspberry Pi 500+ for Christmas - it was a gift from my daughter.



What spurred this interest?  I mean, I was never really curious about Raspberry Pi devices, as I've always had very robust computer systems in my household.  Only now am I hating all the systems sprawled about within my basement.  

My son received a Raspberry Pi 5 from my daughter in November and I got to see it (I'd never held or even seen one prior to that).  He was able to configure it as a media server pretty much immediately after he got it.  I loved the small form factor, which spurred my interest.

Once I saw his, I went to the Raspberry Pi site to see the things they had.  I was immediately curious about the 500+ and had planned to get it on my own, but as my family uses Elfster.com to gift each other, I added it to my wish list.

The keyboard is NICE!  I love how it soft-clicks (it's a mechanical keyboard)...I've a Royal Kludge S98 and that thing is noisy AF compared to this.  I also love the 500+'s preconfigured RGB keyboard lighting options.

This Pi seems powerful enough that I'm considering using it as my main docker host.  My current docker host is an Alienware M17x R3, and I don't think the 500+ can match it across the performance spectrum, but it's a great second choice.  The thing about the Alienware is that it's a laptop with a functional battery, so if power hiccups or I lose power, I can gracefully shut that system down.  Plus, that system is quite antiquated for a gaming system, so hosting docker containers is a good use for it.  

Where does this leave me with the 500+?  It means that I can shut down one of my older and less capable systems, which will declutter my office/lab.  

I can actually envision buying several of these to replace old systems.

Everything resides within the keyboard, which is why it's thick.  The SoC itself is small AF, though, so I'm not sure why the keyboard needs to be this thick.  The system has a heatsink to dissipate heat - there are no fans, so the system stays quiet.

As this system comes with a 256 GB SSD, I did not have to muck with micro SD cards, although I've the option if I feel the need.  The SSD comes preloaded with Raspberry Pi OS.  The SSD can also be replaced with something bigger - replacements would need to use the M.2 NVMe format.

The system is BT- and WiFi-capable.  It has two micro-HDMI ports and three USB-A ports (2 x v3 and 1 x v2).  It also has an ethernet port and 16 GB of memory.

I'll be sure to share my Pi journey here.

Thursday, November 13, 2025

Containerized Nextcloud & Owncloud

I've been using Nextcloud for several years.  I prefer Owncloud, but Owncloud, IMO, is pretty arcane.  The con for Nextcloud is that it feels heavy and slow.

Nextcloud is a PITA to maintain via snaps in Ubuntu.  Something is always breaking or not working properly and most of those issues tend to be related to snaps.

I decided to try Nextcloud via containers.  I am very surprised - it feels light and quick compared to the native install, which lived on an HDD (to be fair, the container host has an SSD).  I deployed it via Portainer, but I had to butcher someone else's docker compose YML file.  The file looks ugly, but I've a running system.  This is my second attempt at deploying Nextcloud as a container - the first attempt had DB access issues that I was having a difficult time sorting.
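For anyone wanting a starting point, here's a minimal sketch of what a Nextcloud compose file looks like - passwords, ports, and volume names below are placeholders, and this is the general shape rather than my exact (ugly) file:

    services:
      db:
        image: mariadb:latest
        environment:
          MARIADB_ROOT_PASSWORD: changeme
          MARIADB_DATABASE: nextcloud
          MARIADB_USER: nextcloud
          MARIADB_PASSWORD: changeme
        volumes:
          - db_data:/var/lib/mysql

      app:
        image: nextcloud:latest
        ports:
          - "8080:80"
        environment:
          MYSQL_HOST: db
          MYSQL_DATABASE: nextcloud
          MYSQL_USER: nextcloud
          MYSQL_PASSWORD: changeme
        volumes:
          - nc_data:/var/www/html   # persistent volume, so the data survives the container
        depends_on:
          - db

    volumes:
      db_data:
      nc_data: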

Even when importing files (videos, pictures, and music) into Nextcloud, there was less of a system load.

For now, I'll monitor the system while using it with a small subset of data (it currently has 40 GB of files).  I don't want to spend the effort of moving a massive amount of files only for the instance to die (I do have persistent volumes enabled for the container, though).  The app container is consuming 4 GB of memory, though - that's a bit high, IMO...not sure if it's experiencing a memory leak, as it's using 4 GB while idle.

UPDATE (11/14/2025):

I decided to try to deploy a containerized Owncloud instance.  The compose YML file was a bit beefier.  It was copied from the Owncloud documentation.  

I had to deploy this one from CLI, for now...I ran into an issue that I need to sort out - once I sort it out, I'll redeploy using Portainer.

I did run into an environment setting issue.  OWNCLOUD_TRUSTED_DOMAINS needed an IP value (the IP of the server itself) - the documentation is vague on this, and I found the answer within a bug report.
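In compose terms, the fix looked something like this sketch (the IP below is a placeholder for the server's own address):

    services:
      owncloud:
        image: owncloud/server:latest
        environment:
          OWNCLOUD_DOMAIN: 192.168.1.50:8080
          OWNCLOUD_TRUSTED_DOMAINS: 192.168.1.50   # the server's own IP, per the bug report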

I thought that a containerized Nextcloud instance was quick - this server is even quicker.

I will have a bake-off of these two instances, but I suspect I'll again be adopting Owncloud as my docker cloud app.

UPDATE (11/17/2025):

One thing that is super weird is that Owncloud won't allow uploading of directories.  To upload a directory of MP3s, for example, I've to create a folder named "MP3s" and then upload all the files within the MP3 directory.  WTF?!  

Note that I can move folders if I use the Owncloud client software, but I'm not wanting to install the client software on every system I have.  It's like they're actively fighting against having a directory upload feature.  With Nextcloud (and Google Drive, and OneDrive), I just have to select the folder and the whole folder is treated as an object (meaning the directory and its contents will be uploaded/downloaded).  It's damned silly not to include it.  

I think I ran into the same issue years ago when I used Owncloud (like 7+ years ago!).  I researched it, and someone said, well, it works with Google...blame the browser creators (double-WTF?!).  Nah...I'm blaming Owncloud, because things like that are silly, and if they're doing things like this, what else are they doing within the code?  It looks like Owncloud decided for me which of the two I'll use (and it's not Owncloud).  

I'm glad I didn't manually install it, only to see the lack of directory uploading.  

Sunday, November 09, 2025

Containers Update

I posted awhile back that I was having issues with a containerized deployment of Pihole.

I also posted not long ago that I decided to use Portainer to manage my deployment container stacks.

I thought I could fix the original Pihole container, but after seeing it die again, I immediately began work on using a different system as a host for the containers.  

I have three laptops that weren't being used.  Each was running a Windows desktop OS variant.  I wanted to install Linux on each.

The three candidate replacement systems were:

Dell Latitude E5530:  This system is a very old system that I bought used for $100 - 4 GB of RAM; i5-3380 CPU

Alienware 15 R2:  This system is also an older system, but not quite as old as the E5530 system mentioned above.  This was used by my daughter.  We initially thought the system was broken but I couldn't find anything wrong with it.  It has 8 GB of RAM and uses the i5-6300 CPU.  It has a 1 TB HDD.  It also has a small (256 GB) SSD.

Alienware M17x R3:  This system is also an older system but has the best specs of the three candidate systems:  12 GB RAM, i7-2760QM CPU; 750 GB SSD

I installed Ubuntu 24.04 onto each of the three systems, testing to determine how well that Ubuntu version would operate on those hardware platforms (while also keeping in mind that they were laptops).

I found that the Alienware M17x system was the most robust and ran Ubuntu without issues.  The other two systems ran Ubuntu well enough, but I noticed they were under higher load when idle.

As the systems already had Ubuntu, it was pretty easy for me to install the prerequisite packages for Docker.  It was super easy to get my containers up and running again on the new host.
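For reference, getting Docker onto Ubuntu follows Docker's standard apt-repository procedure - a sketch of the main steps (Docker's own docs have the canonical version):

    # add Docker's official GPG key and apt repository
    sudo apt-get update
    sudo apt-get install -y ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
      https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update

    # install the engine and the compose plugin
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin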

On top of that, I installed Portainer not long after getting the new host sorted.  I then had to re-create the Pihole container from within Portainer so that Portainer would have full control over it - until I did that, I had limited control over Pihole via Portainer.  Note that I've already posted about Portainer.

I've been monitoring the new host and the redeployed containers.  I've noticed no issues.

I also kept the original system running (a Dell XPS 8930 with an i5-8400 CPU, 8 GB RAM, and 1 TB HDD).  A Pihole container is still running on that host and it hasn't thrown errors since I moved to a new hosting system, oddly enough.  

About the only thing I had to change was keeping the M17x system from sleeping/hibernating when I closed the lid.  I found a way to disable that behavior on that host.
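On Ubuntu, the usual mechanism for this is systemd-logind - a sketch of the change (this is the standard approach, though I won't swear it's the exact route I took):

    # /etc/systemd/logind.conf - tell logind to ignore the lid switch
    HandleLidSwitch=ignore
    HandleLidSwitchDocked=ignore

    # then restart logind to apply:
    sudo systemctl restart systemd-logind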

As a server, the M17x runs like a champ, especially when Linux is used.

I'd still be using that system if it weren't for the fact that it tends to eat GPUs.  It's been through two Nvidia GeForce 580M GPUs and those weren't cheap.  I think it was the 580Ms that were fragile.

In fact, all of those Dell systems responded extremely well to Linux, especially the Alienware 15 R2, as Windows was choking it...this was why my daughter stopped using it.  It was running Windows 10 when she stopped using it, and while I couldn't find anything wrong with the hardware, I saw while troubleshooting that the HDD appeared to be what was choking the system - drive utilization was constantly pegged in Task Manager.  That all stopped when I installed Ubuntu.  That's the power of Linux right there!

I'll update the blog if I see anything bad, but I've been monitoring the new host for almost 2 months and I've not seen any issues.

Monday, October 13, 2025

Which Mac Will Be My Next System?

I've always been curious about the Mac Studio and I'd initially had that on my list of must-have systems until the new Mac Mini M4s came out.

I now have tentative plans to buy the Mac Mini M4.  I want to use that system to do the heavy lifting of creating my videos, as well as to maybe house my Docker containers.

I love my current M1 Mini, but it is a base model and I'm somewhat limited in the above-mentioned use cases.  While I can crunch video footage, I usually have to kill all other resource-intensive running processes when doing so.  I've not even tried to use Docker containers on that system, since running multiple containers usually requires a somewhat robust system (plus there's some system overhead on Macs, since containers can't run natively on macOS and need a Linux VM under the hood).

The real difference between the Studio and Mini would be ports and connectivity to peripherals, which would be extremely beneficial.  

As well, you can better spec out a Studio, as the platform is designed to be more open-ended when it comes to performance.  UPDATE:  Nope!!  I was wrong - I can spec a Mini with more RAM and drastically undercut the price of a somewhat similarly spec'd Studio.

So, my dilemma is this: which one would be better for me, a high-spec (lots of memory) Mini or a decently spec'd Studio?