Wednesday, February 18, 2026

I Probably Shouldn't Have Bought The AirPods Max

I'm starting to hate my AirPods Max headphones.

Most of the time, when I need it, I can't get a good connection.  It's buggy as hell.  I'm so tired of having to reset the headset, only for it to still not get a connection.  I shouldn't have to remove the ear cups from the head harness just to clean the connections EVERY TIME I NEED THEM.

And when I do get a connection, it often feels like only a half connection.  When that happens, I can't enable ANC or adjust the volume.

And the ear cushions are a pain in the ass, too.  They need to be cleaned.  A LOT.  If you don't clean them, they start smelling funky.  In fact, I bought a new set, as the smell wouldn't come out of the original cushions.

I've also been seeing a lot of moisture inside the ear cups, under the ear cushions.  I'm not sure if that's what's causing the connectivity issues, but when I removed the cushions and wiped the insides with a Kleenex, there was enough moisture to soak it.

I'm on the edge of deciding to ask for warranty support (I've not owned them a year yet, and I have AppleCare+).

After they fix it, I think I'm going to sell them and get something nice but non-Apple.

It's a pity, because, when they work, they're outstanding.  WHEN THEY WORK.  They spend more time broken than in use.  Plenty of folks on subreddits complain about these exact same issues.

This is disappointing, because prior to this, I'd bought a set of Beats Studio Pros.  They had durability issues - they just started falling apart - but at least they never failed to work.  I still have them, too...last I checked, they were still working.

If you're thinking of buying a set of AirPods Max, DON'T DO IT.

Friday, February 06, 2026

Added Cockpit to Ubuntu 25.10 on the Pi 500+

Today, I wanted to add Cockpit to the Pi 500+ Ubuntu install, but I didn't want to sit at the desk where I'd placed the keyboard.

When I tried to ssh into the Pi, I kept getting connection refusals, which I thought was odd.  I ended up having to spend some time at the Pi keyboard, investigating why I couldn't connect to it on port 22.

I found out that I'd never installed ssh!  I could've sworn I did, but maybe that was on the Pi OS install.
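
For anyone else who forgets, installing and enabling it on Ubuntu is quick (openssh-server is the standard package):

$ sudo apt install openssh-server
$ sudo systemctl enable --now ssh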

So, after I installed it, I installed Cockpit (I wanted to try it instead of Webmin).  I then found that, after the install, when logged into Cockpit, my user was only being given limited administrative access.  Cockpit gave the option to gain full admin privileges, but when I clicked it, it gave an error that sudo couldn't be leveraged to escalate privileges.  When I googled that error, I found that one of the suggested fixes was to add your user to the wheel group.  My Pi didn't have a wheel group, so I had to create one and then add my user to it.
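
Roughly, the steps looked like this (the username is a placeholder - use your own):

$ sudo apt install cockpit
$ sudo groupadd wheel
$ sudo usermod -aG wheel youruser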

I then double-checked my research and found a Cockpit bug report of this exact issue from back in October 2025.  The issue is that Ubuntu's sudo was recently reimplemented in Rust, and the new implementation apparently does not support the --askpass flag, which Cockpit uses.  The fix is to switch back to the non-Rust sudo implementation, which is still available:

# update-alternatives --set sudo /usr/bin/sudo.ws
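
To confirm which implementation is active afterward, you can run:

# update-alternatives --display sudo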

Now, I have Ubuntu all over the house on various machines, most of them running Cockpit.  I've not seen this issue before, and I've done a bunch of recent installs of Ubuntu 25.10.  In fact, I installed Cockpit on my docker container host today, and it didn't exhibit this bug.  I'm wondering why I'm only now seeing it.  I'm glad there's a workaround, though.

Saturday, January 24, 2026

Google Search Console - Requirements Overkill!

I've always had a not-so-good experience with Google's Search Console.  

I get it - they're trying to ensure that web content is meaningful.  

I get it, but damn, every single page object seems to have arcane criteria attached, and if a page doesn't meet them, Google won't index it.

This is problematic for me, because I don't want to become a slave to the process of getting Google to index my website - in my opinion, they go overboard with things.

For example, there's maybe a 40% chance that I'll include an embedded video when I submit a new post to my WordPress-powered site.  I've always wondered why my videos aren't being indexed by Google.  I'm now discovering that videos won't be processed if they don't reside on a "watch page".  A video embedded in a blog post won't be indexed, because Google treats the video as complementary to the rest of the content on the page.  This means that I have to create a dedicated video landing page.  WTF.

That's just one of many examples.  I have a large batch of pages that aren't being indexed because of the ridiculous criteria that Google requires.

The act of creating a video landing page within WordPress isn't difficult.  Having to walk backward through all the posts that contain embedded videos so that I can add them to a watch page - that's a lot of work.  And then what?  Do I have to make the prior posts with embedded videos link to the newly built landing page's videos?

Bureaucracy overkill, that's what this is.

What I'll do is create the new watch page and add a few videos a day.  Maybe I'll be done in 6 months?  I'm certainly not going to let this overburden me. 

Sunday, January 18, 2026

I Just Set Up a Web Server to Use an SSL Cert, Using Let's Encrypt!

Yesterday, I was bored and had been contemplating setting up one of my public web sites to use an SSL certificate.

While most business websites use SSL certificates, they aren't strictly mandatory if you're just serving web content for reading.  I've been using Apache to serve web pages a LONG time and never felt the need to enable HTTPS, as it wasn't required.  That changed when I decided I wanted my website to be more noticeable within search engine results - search engines favor sites served over HTTPS, so to place higher in the results, the web server needs to use it.

As I host my own server, my options were to set up my own SSL certificate or to buy one.  I decided to set up and deploy my own.

I used this link's instructions to set everything up (I used Certbot, which uses Let's Encrypt, which I'll reference as LE).  Keep in mind that I'm using Ubuntu 25.10, hosted on a Linode server.
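
If you're in a similar setup (Ubuntu with Apache), the Certbot portion boils down to something like this - a sketch, assuming the apache plugin handles the domain validation:

$ sudo apt install certbot python3-certbot-apache
$ sudo certbot certonly --apache -d wigglit.com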

After I built the certificates, I had a difficult time determining how to leverage them.  I initially tried using a WordPress plugin to import the certificate, but I tried like 5 different plugins and none of them worked.  I then pivoted to a different method - I'm running Apache to serve WordPress, so I set up the Apache config to use HTTPS and pointed it at the LE certs.  I then used an SSL checker to verify that everything was working.  It was.
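
The relevant portion of the Apache config ends up looking roughly like this (a sketch, assuming the default Let's Encrypt paths; the DocumentRoot is whatever your WordPress root happens to be):

<VirtualHost *:443>
    ServerName wigglit.com
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/wigglit.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/wigglit.com/privkey.pem
</VirtualHost>

You'd also need the SSL module enabled (sudo a2enmod ssl) and Apache reloaded.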

Afterward, I set up a cron job to renew the certs automatically.
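
The cron entry itself is a one-liner (certbot only renews certs that are near expiry, so running it daily is harmless); in root's crontab it looks something like this:

0 3 * * * certbot renew --quiet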

Now, when I check the browser for indications that the website is using SSL, there's no lock icon that I can see.  I researched and saw that I also had to ensure the website's prior content wasn't using HTTP links to internal server content, so I used some WordPress tools to search for HTTP links pointing to my web server and change them to HTTPS.  I also saw that a lot of my plugins and themes are using HTTP links, which is supposed to be a no-no for HTTPS compliance - I can't control how plugin providers construct their plugins, so I'm not sure what to do about that.
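
For what it's worth, WP-CLI can do the same search-and-replace from the command line, if you have it installed (do a --dry-run first and back up the database):

$ wp search-replace 'http://wigglit.com' 'https://wigglit.com' --dry-run
$ wp search-replace 'http://wigglit.com' 'https://wigglit.com'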

I think I'm going to do this with my other domain, as well (unixfool.us).

Eventually, I plan to replace my WordPress website with a docker instance.  I'd need to research how to use SSL certs within a docker compose YML file.  I'm thinking it should be pretty straightforward.  The only thing I can think of that might be an issue is the automatic renewal bit (the part where I added a cron job to renew the certificate).
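
My rough thinking is to just bind-mount the Let's Encrypt directory into the web container - a sketch, untested, and the image and paths are assumptions:

services:
  wordpress:
    image: wordpress
    ports:
      - "443:443"
    volumes:
      # read-only mount of the host's certs; the container's Apache
      # would still need an SSL vhost pointing at these paths
      - /etc/letsencrypt:/etc/letsencrypt:ro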

UPDATE (1/20/2026):  I just checked again and I can now see that the web page (https://wigglit.com) is showing as secure!  

Friday, January 02, 2026

Raspberry Pi OS, Begone!

Last night, I tried to run a docker container on the new Pi system - one that I've been able to use on other systems without issue.

This experience was pretty much a nightmare.

I was able to install Docker without issue and the 'hello world' container worked fine.

When I tried to run a WordPress container, there were cascading issues.  Granted, I know the Pi runs on an ARM chipset, so I did have to make adjustments for that, which wasn't all that difficult.

The main issue I had related to the Pi OS misconfiguring things.  There were things being blocked by the OS due to bad routing.

While I was able to get the WordPress container to run, I couldn't connect to it initially.  In fact, I couldn't reach the internet at all, using curl or any browser client.  Apparently, curl is kinda weird on the Pi OS where port 80 is concerned, and I'd tried to use port 80 as the WordPress service port.  Since nothing else on the system was configured to use port 80, I initially felt it was safe to use it for the WordPress container.  NOPE!  When I did, it broke some things relating to curl and routing.

After ChatGPT informed me that it's best not to use port 80 for the WordPress container, I changed the port to 8888, with no success.  It ended up taking me like 6 hours to determine the issue.  ChatGPT kept repeating repair steps that weren't working, until I forced it to look for other issues.
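
For context, the port change in the compose file was trivial - just the host side of the mapping (a sketch of what mine looked like; the container still listens on 80 internally):

services:
  wordpress:
    image: wordpress
    ports:
      - "8888:80"   # host port 8888 maps to container port 80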

At 4 AM this morning, I was finally able to reach the container using curl, Chromium, and Firefox, but was still experiencing connection drops when trying to use DuckDuckGo.  I also noticed several other connection drops (some WordPress plugins require backend callbacks 'home' using curl) - those started breaking again.

The fix was to remove some default routes that were associated with the containers.  I also had to remove some rules from iptables, remove some IP links, and add additional config context to the wp-config.php and compose.yml files.
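
I won't reproduce the exact routes and rules, since they were specific to my box, but these are the sorts of commands involved (the route argument is a placeholder):

$ ip route show                    # look for stale docker bridge routes
$ sudo ip route del <bad-route>    # placeholder - substitute the offending route
$ sudo iptables -L DOCKER -n -v    # inspect the rules Docker has installed
$ ip link show type bridge         # list bridge interfaces left behind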

Later in the day, I checked the container again and noticed that the problem routes I'd removed had been re-added by Pi OS, reverting my work.

I got fed up and decided to start from scratch with another OS.  

I chose Ubuntu, since I'm already familiar with it.  The only wildcard is that this Pi is still ARM-powered, so I might run into some things that are currently unknown to me...I'll just have to be prepared for any chipset-related issues, but I trust Ubuntu more than Pi OS at this point.

Ubuntu 25.10 is now installed on the system's internal SSD.  I used the Pi boot options to reinstall the OS...that's a cool option, but I wish it also gave the option to use wireless connections instead of ethernet, as I had to jump through hoops to get ethernet to where the Pi is currently located.

As well, I wasn't prepared for the new OS install to take 45 minutes.

As frustrated as I was, it's all a learning experience for me.  As well, there's less frustration in reinstalling when I'm using a Pi.

I'd post some of my ChatGPT session, but it was messy and an hours-long chat.

I'll keep you all updated on my progress with Ubuntu on the Pi.

UPDATE (1/2/2026):  Yeah, I already deployed a docker instance of WordPress in Ubuntu on the Pi.  I had none of the issues I had last night deploying the same .yml file on Pi OS, beyond one more tweak to ensure the images being pulled supported ARM.  The two experiences were very different.
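
For reference, the ARM tweak can be as small as pinning the platform in the compose file (a sketch - the wordpress image does publish arm64 builds, so this is mostly belt-and-suspenders):

services:
  wordpress:
    image: wordpress
    platform: linux/arm64   # force the arm64 variant of a multi-arch image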

UPDATE (1/3/2026):  I've still not noticed any issues.  All is well, I think!

UPDATE (1/10/2026):  One issue I've noticed - I've lost sound (through the HDMI connection).  I'm not able to hear system sounds or anything like YouTube audio or music streaming through Audacious.  I've been working on getting it to work but have yet to see success.  With Audacious, I can actually see the music playing, but can't hear it at all (the audio devices show as up).
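
For anyone chasing something similar, the PipeWire tooling is probably the place to start, since Ubuntu 25.10 ships PipeWire (the sink ID below is a placeholder taken from wpctl status):

$ wpctl status                # lists sinks and marks the current default
$ wpctl set-default 55        # placeholder ID - use your HDMI sink's ID
$ pactl list sinks short      # PulseAudio-compatible view of the same sinks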

Thursday, December 25, 2025

Christmas Day 2025 - Received Raspberry Pi 500+ As A Gift!

Good day, all!

My family opened Christmas presents last night at midnight (most of us didn't want to wake up early to open presents).

I received a Raspberry Pi 500+ for Christmas - it was a gift from my daughter.

What spurred this interest?  I mean, I was never really curious about Raspberry Pi devices, as I've always had very robust computer systems in my household.  Only now am I hating all the systems sprawled about my basement.

My son received a Raspberry Pi 5 from my daughter in November and I got to see it (I'd never held or even seen one prior to that).  He was able to configure it as a media server pretty much immediately after he got it.  I loved the small form factor, which spurred my interest.

Once I saw his, I went to the Raspberry Pi website to see what they had.  I was immediately curious about the 500+ and had planned to get one on my own, but as my family uses Elfster.com to gift each other, I added it to my wish list.

The keyboard is NICE!  I love how it soft-clicks (it's a mechanical keyboard)...I have a Royal Kludge S98 and that thing is noisy AF compared to this.  I also love the 500+'s preconfigured RGB keyboard lighting.

This Pi seems powerful enough that I'm considering using it as my main docker host.  My current docker host is an Alienware M17X R3, which I don't think the 500+ can match across the performance spectrum, but the Pi is a great second choice.  The thing about the Alienware is that it's a laptop with a functional battery, so if power hiccups or I lose power, I can gracefully shut that system down.  Plus, it's quite antiquated for a gaming system, so hosting docker containers is a good use for it.

Where does this leave me with the 500+?  It means that I can shut down one of my older and less capable systems, which will declutter my office/lab.  

I can actually envision buying several of these to replace old systems.

Everything resides within the keyboard, which is why it's thick.  The SoC itself is small AF, though, so I'm not sure why the keyboard needs to be this thick.  The system uses a heatsink to dissipate heat - there are no fans, so it stays quiet.

As this system comes with a 256 GB SSD, I didn't have to muck with micro SD cards, although I have the option if I feel the need.  The SSD comes preconfigured with Raspberry Pi OS.  It can also be replaced with something bigger - a replacement would need to be an M.2 NVMe drive.

The system is Bluetooth- and WiFi-capable.  It has two micro-HDMI ports and three USB-A ports (2 x USB 3.0 and 1 x USB 2.0).  It also has an ethernet port and 16 GB of memory.

I'll be sure to share my Pi journey here.

Thursday, November 13, 2025

Containerized Nextcloud & ownCloud

I've been using Nextcloud for several years.  I actually prefer ownCloud, but ownCloud, IMO, is pretty arcane.  The con for Nextcloud is that it feels heavy and is slow.

Nextcloud is a PITA to maintain via snaps in Ubuntu.  Something is always breaking or not working properly and most of those issues tend to be related to snaps.

I decided to try Nextcloud via containers.  I am very surprised - it feels light and quick compared to the native install on an HDD (the container host has an SSD, to be fair).  I deployed it via Portainer, but I had to butcher someone else's docker compose YML file.  The file looks ugly, but I have a running system.  This is my second attempt at deploying Nextcloud as a container - the first attempt had DB access issues that I had a difficult time sorting out.
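
For reference, the skeleton of the file is roughly this (a trimmed sketch, not my exact file - passwords are placeholders):

services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme    # placeholder
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme         # placeholder
    volumes:
      - db:/var/lib/mysql
  app:
    image: nextcloud
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
    volumes:
      - nextcloud:/var/www/html
volumes:
  db:
  nextcloud: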

Even when importing files (videos, pictures, and music) into Nextcloud, the system load was lower.

For now, I'll monitor the system while using it with a small subset of data (currently 40 GB of files).  I don't want to spend the effort of moving a massive amount of files only for the instance to die (I do have persistent volumes enabled for the container, though).  The app container is consuming 4 GB of memory, though - that's a bit high, IMO, for an idle instance...I'm not sure if it's experiencing a memory leak.

UPDATE (11/14/2025):

I decided to try deploying a containerized ownCloud instance.  The compose YML file was a bit beefier.  It was copied from the ownCloud documentation.

I had to deploy this one from the CLI for now...I ran into an issue that I need to sort out - once I do, I'll redeploy using Portainer.

I did run into an environment setting issue: OWNCLOUD_TRUSTED_DOMAINS needed an IP value (the IP of the server itself) - the documentation is vague on this, and I found the answer within a bug report.
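
In compose terms, the setting ends up looking like this (the image name matches ownCloud's docs; the IP is a placeholder for your server's own address):

services:
  owncloud:
    image: owncloud/server
    environment:
      OWNCLOUD_TRUSTED_DOMAINS: "192.168.1.50"   # placeholder - the server's IP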

I thought the containerized Nextcloud instance was quick - this server is even quicker.

I will have a bake-off of these two instances, but I suspect I'll again be adopting ownCloud as my docker cloud app.

UPDATE (11/17/2025):

One thing that is super weird is that ownCloud won't allow uploading of directories.  To upload a directory of MP3s, for example, I have to create a folder named "MP3s" and then upload all the files within the MP3 directory individually.  WTF?!

Note that I can move folders if I use the ownCloud client software, but I don't want to install the client software on every system I have.  It's like they're actively fighting against having a directory upload feature.  With Nextcloud (and Google Drive, and OneDrive), I just select the folder and the whole folder is treated as an object (meaning the directory and its contents will be uploaded/downloaded).  It's damned silly not to include it.

I think I ran into the same issue years ago when I used ownCloud (like 7+ years ago!).  I researched it then, and someone said, well, it works with Google...blame the browser creators (double-WTF?!).  Nah...I'm blaming ownCloud, because things like that are silly, and if they're doing things like this, what else are they doing within the code?  It looks like ownCloud has decided for me which product I'll use (and it's not ownCloud).

I'm glad I didn't manually install it, only to see the lack of directory uploading.