I’m not all that interested in writing long year in review (YoR) articles anymore, so I’ll just create some bullets and a bunch of photos. The bullets come first.
I had three different managers at my place of employment.
I got on a plane to travel out of my home area a whopping 17 times, 7 for work and 10 of a personal nature. Notable destinations were Israel (1H of the year), Ireland, Las Vegas, Chicago, and of course Seattle & the Bay Area.
I switched my primary vehicle to a non-Tesla EV.
I was promoted at my place of employment.
I attended concerts by Gareth Emery and P!nk.
My wife and I did not travel for Thanksgiving, Christmas, or New Year’s.
I returned to the office. My commute time ranges from 40 to over 90 minutes depending on my mode of transportation and various other timings.
I started drinking espresso shots.
I upgraded my main workstation to an Intel Core i9-13900KF CPU with 128 GiB of RAM and an Nvidia GeForce RTX 3060 GPU.
I had an irrigation system installed for our end unit townhome. The installation resulted in a week of no running water in our home due to a valve that is part of the fire suppression system, which I blame squarely on the builder.
That’s about it, I suppose. In actuality, I got tired of processing photos.
I also started listening to melodic techno instead of a steady stream of trance and house music. I also have a whopping 4x EDM shows scheduled through March 2024 already!
I run this webserver (dax.prolixium.com, a VirtualBox VM running on a bare metal server that I lease from Vultr in Parsippany, NJ) on FreeBSD as well as another test VirtualBox VM at home, trance.prolixium.com. Although it’s a less-than-scientific development environment, I usually try FreeBSD upgrades on trance before upgrading dax.
I did the trance upgrade from 13.2 over the weekend during some evening downtime I had while in Sarasota, FL (visiting a family member). I did a source upgrade of the base system with a binary package upgrade. I used to do a 100% source upgrade, which included FreeBSD ports. However, over time the /build/ dependencies spiraled out of control (think multiple versions of llvm…) so I switched to binary packages.
The only quirks I ran into were that the first installworld failed with an ld-elf.so.1: /usr/bin/make: Undefined symbol "__libc_start1@FBSD_1.7" error. I ran installworld a second time and it worked. I’m guessing there’s some race condition or dependency error (I am using make -j1). Some folks on Lily indicated this is par for the course, although I haven’t encountered it before and I’ve been running and upgrading FreeBSD since 4.8. The other quirk is that openssh-portable seems to be gone from the available binary packages. As a result, pkg upgrade -f didn’t reinstall it to link against the new libs and I ran make delete-old-libs. After the reboot sshd was not running due to a libcrypto mismatch, so I had to VNC in and switch to the OpenSSH from base. The only reason I used openssh-portable originally was because the OpenSSH version in base always lagged behind considerably, but this doesn’t seem to be the case anymore since it’s 9.5. I’m still curious why openssh-portable was removed from the available binary packages—a quick web search didn’t turn up anything interesting. I still see it in ports so I guess I could still build it from source if I wanted.
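For reference, the source upgrade of base roughly follows the standard handbook sequence. This is a sketch from memory rather than my exact session, so treat the details (particularly the etcupdate steps) as assumptions:

```shell
cd /usr/src                      # tracking the releng/13.x branch
make -j1 buildworld buildkernel  # deliberately single-threaded
make installkernel
shutdown -r now                  # boot the new kernel first

# after the reboot:
cd /usr/src
etcupdate -p
make installworld                # re-run if it fails with __libc_start1
etcupdate -B
make delete-old delete-old-libs
pkg upgrade -f                   # relink binary packages against the new libs
```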
The trance VM runs WireGuard and FRR since I’ve had problems with that combination in the past (also OpenVPN and Quagga). I figured it’s a good soak and /fairly/ representative of what I run on dax, which I suppose I’ll upgrade in the next couple of weeks.
I switched over to MPV from MPlayer a year or two ago and haven’t looked back. However, after I upgraded my Intel Core i9-9900K with an RTX 2060 to an i9-13900KF with an RTX 3060 earlier this year, I noticed videos played with MPV were very laggy, to the point where the FPS would be much lower and it’d have to drop frames to keep up with the audio. VLC and browser-based video were unaffected. It was just MPV.
I did all sorts of debugging, which included changing video drivers (--vo), decoders (--vd), and even playing with CPU scaling on the system (to the point where I even tried writing 0 to /dev/cpu_dma_latency) but nothing helped. The odd part is that some video files weren’t affected by this, specifically videos from my iPhone (H.265 in MOV container). I mostly switched to VLC, as a result, and moved on with life.
Today I came across an article where someone was troubleshooting a similar laggy video issue with MPV and even though the problem described in the thread was not related to mine, one of the troubleshooting steps helped me solve mine.
I run PulseAudio because some things like to use it, but given the choice, I tell applications to run directly off the ALSA API. MPV uses PulseAudio by default over ALSA. It turns out that there is a slight difference in the audio controller on the motherboard of my i9-13900KF system (MSI MPG Z790 Carbon Wi-Fi) that causes outputting to PulseAudio to result in horrible latency and lag. The fix was simple: --ao=alsa (or ao=alsa in mpv.conf).
Yes, I still feel like a jive turkey after all of these years:
I have a dozen or so Raspberry Pis on my network that are used as routers, environmental sensors, looking glasses, and lab devices. They vary in model but I perform periodic upgrades to keep them running, which some of them have been doing for over 10 years now. I’ve recently realized I should be using SD cards with higher write endurance in the RPis that run SmokePing, so those are getting upgraded with priority. Many of these RPis are remote and headless so performing upgrades can be tricky. I’ve bricked one or two of them over the years and the fix was usually simple (OpenVPN / WireGuard issues or software bugs), but I’ve recently come across one that I can’t make heads or tails of!
I visited my parents in NJ last weekend and did a replace & clone of the SD card in one of their RPis (a 3 B unit) that acts as a router on a stick (two VLAN-tagged interfaces running off eth0). The clone worked fine and I decided to do some upgrades before heading back to VA. I upgraded *udev*, *dbus*, *systemd*, and *raspberry*, which usually pulls in all the packages that typically require or recommend a reboot, which I figured I should do while I was local. I track the equivalent Debian testing release on Raspbian / Raspberry Pi OS, which is currently bookworm. Unfortunately, this appeared to brick the box upon reboot. What was weird was that networking came up about 2 minutes after the reboot, which is very slow, but that’s it. SSH never started and neither did FRR (Free Range Routing, which runs OSPFv2, OSPFv3, and BGP) or the DHCP relay, so while I could ping the interfaces, the box failed to perform its most basic function. It seemed I had a problem.
I connected the RPi to a TV and USB keyboard to debug further and found that some (actually all, I’d find out later) block devices were not being registered correctly with udev, causing systemd to drop to emergency mode:
At first I thought that fsck was just taking a while and needed more time, so I added x-systemd.device-timeout=300s to /etc/fstab for both of the device nodes, /dev/mmcblk0p1 (boot) and /dev/mmcblk0p2 (root). Both are specified as raw device nodes and not UUIDs in my /etc/fstab file, which seems alright for now but is probably something I should change in the future in case device nodes start moving around or getting renamed. I had to make this change by connecting the SD card to another Linux box in the house and mounting the filesystem.
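For the curious, the resulting fstab entries looked something like this. The filesystem types and other mount options here are assumptions, not a copy of my file:

```
# /etc/fstab on the RPi: raw device nodes, with a longer device timeout
/dev/mmcblk0p1  /boot  vfat  defaults,x-systemd.device-timeout=300s  0  2
/dev/mmcblk0p2  /      ext4  defaults,noatime,x-systemd.device-timeout=300s  0  1
```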
This didn’t do anything. I changed the timeout a few more times and eventually gave up on this approach.
This is where things get annoying. I also tried hitting Control-D as instructed but that just resulted in it sitting there for 90 seconds and returning to emergency mode with the same prompt. I also tried entering my root password (yes, it’s set!) but it wouldn’t take the password. I verified the password hash on another working RPi and could su to root just fine with the same password.
So, I had no way of getting this RPi booted, using hacks or otherwise. The Raspberry Pi bootloader isn’t like GRUB and doesn’t have an interactive menu where one can play with the kernel command line options on the fly. It literally has no menu or interface and just boots one of the /boot/kernel*.img files, depending on the platform. If the boot fails, the RPi will just hang and display a color wheel on the screen. I thought about editing /boot/cmdline.txt and adding init=/bin/sh to the command line but I figured that would just bypass the problem and not allow me to actually figure out what was going on.
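Had I gone that route, it would have been a one-line change. Everything before init= below is just a typical Raspberry Pi OS command line, not my exact file:

```
# /boot/cmdline.txt (must remain a single line)
console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 fsck.repair=yes rootwait init=/bin/sh
```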
Short on time, I re-cloned the SD card from the original (unaltered) image I had saved on another machine (yes, I thought ahead!) and got the router back up & running without doing any upgrades.
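The re-clone itself is just a dd of the saved image back onto the card; the image name and target device below are placeholders:

```shell
# write the saved (unaltered) image back to the SD card
dd if=rpi-router.img of=/dev/sdX bs=4M conv=fsync status=progress
sync
```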
After I returned home I re-created the setup with a spare RPi 3 B+ (similar hardware, although I found that a 1 B exhibited the problem too) and the original SD card image. Instead of upgrading all the reboot-required packages at once I did a few at a time. I started with *udev* and this triggered the issue. Here’s the exact upgrade path:
The following additional packages will be installed:
libblkid1
The following packages will be upgraded:
libblkid1 libudev1 udev
3 upgraded, 0 newly installed, 0 to remove and 491 not upgraded.
Need to get 1,789 kB of archives.
After this operation, 1,041 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://mirror.umd.edu/raspbian/raspbian testing/main armhf libblkid1 armhf 2.38.1-5 [131 kB]
Get:2 http://raspbian.raspberrypi.org/raspbian testing/main armhf udev armhf 252.6-1+rpi1 [1,559 kB]
Get:3 http://raspbian.raspberrypi.org/raspbian testing/main armhf libudev1 armhf 252.6-1+rpi1 [99.1 kB]
Fetched 1,789 kB in 2s (1,137 kB/s)
(Reading database ... 59747 files and directories currently installed.)
Preparing to unpack .../libblkid1_2.38.1-5_armhf.deb ...
Unpacking libblkid1:armhf (2.38.1-5) over (2.36-3) ...
Setting up libblkid1:armhf (2.38.1-5) ...
(Reading database ... 59748 files and directories currently installed.)
Preparing to unpack .../udev_252.6-1+rpi1_armhf.deb ...
Unpacking udev (252.6-1+rpi1) over (247.2-5+rpi1) ...
Preparing to unpack .../libudev1_252.6-1+rpi1_armhf.deb ...
Unpacking libudev1:armhf (252.6-1+rpi1) over (247.2-5+rpi1) ...
Setting up libudev1:armhf (252.6-1+rpi1) ...
Setting up udev (252.6-1+rpi1) ...
Processing triggers for libc-bin (2.36-8+rpi1) ...
Processing triggers for man-db (2.9.3-2) ...
Processing triggers for initramfs-tools (0.139) ...
I was able to view system.journal with journalctl --file on another machine to look at the logs and found some interesting stuff. Mainly, it looks like udev is running into errors with a rule specifying ID_SEAT for every single device node on the machine:
Apr 21 11:45:20 mercuryold systemd-udevd[159]: Using default interface naming scheme 'v252'.
Apr 21 11:45:20 mercuryold systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 11:45:20 mercuryold (udev-worker)[160]: vcs2: /usr/lib/udev/rules.d/73-seat-late.rules:13 Failed to import properties 'ID_SEAT' from parent: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[160]: vcs2: Failed to process device, ignoring: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[161]: vcsu2: /usr/lib/udev/rules.d/73-seat-late.rules:13 Failed to import properties 'ID_SEAT' from parent: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[161]: vcsu2: Failed to process device, ignoring: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[162]: vcsa2: /usr/lib/udev/rules.d/73-seat-late.rules:13 Failed to import properties 'ID_SEAT' from parent: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[162]: vcsa2: Failed to process device, ignoring: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[161]: vcsa3: /usr/lib/udev/rules.d/73-seat-late.rules:13 Failed to import properties 'ID_SEAT' from parent: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[161]: vcsa3: Failed to process device, ignoring: Operation not permitted
[...]
mmcblk0 and friends are included here:
Apr 21 11:45:20 mercuryold (udev-worker)[165]: mmcblk0: /usr/lib/udev/rules.d/73-seat-late.rules:13 Failed to import properties 'ID_SEAT' from parent: Operation not permitted
Apr 21 11:45:20 mercuryold (udev-worker)[165]: mmcblk0: Failed to process device, ignoring: Operation not permitted
And then, of course, this is what makes systemd actually unhappy:
Apr 21 11:48:46 mercuryold systemd[1]: dev-mmcblk0p1.device: Job dev-mmcblk0p1.device/start timed out.
Apr 21 11:48:46 mercuryold systemd[1]: Timed out waiting for device dev-mmcblk0p1.device - /dev/mmcblk0p1.
Apr 21 11:48:46 mercuryold systemd[1]: Dependency failed for boot.mount - /boot.
Apr 21 11:48:46 mercuryold systemd[1]: Dependency failed for local-fs.target - Local File Systems.
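For reference, reading the journal offline from another machine looks roughly like this; the device node and journal path are assumptions:

```shell
# mount the root partition of the SD card and read its journal
mount /dev/sdX2 /mnt
journalctl --file '/mnt/var/log/journal/*/system.journal' --no-pager | tail -100
umount /mnt
```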
So, something was busted in udev. I took a peek at 73-seat-late.rules and it’s pretty stock and the same on all of my systems:
# SPDX-License-Identifier: LGPL-2.1-or-later
#
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
ACTION=="remove", GOTO="seat_late_end"
ENV{ID_SEAT}=="", ENV{ID_AUTOSEAT}=="1", ENV{ID_FOR_SEAT}!="", ENV{ID_SEAT}="seat-$env{ID_FOR_SEAT}"
ENV{ID_SEAT}=="", IMPORT{parent}="ID_SEAT"
ENV{ID_SEAT}!="", TAG+="$env{ID_SEAT}"
TAG=="uaccess", ENV{MAJOR}!="", RUN{builtin}+="uaccess"
LABEL="seat_late_end"
Line 13 is:
ENV{ID_SEAT}=="", IMPORT{parent}="ID_SEAT"
I did some searching and didn’t find any hits on any search engines that reference this type of error or failure. What’s worse is that I poked at the udev versions on some of my other RPis and they’re the same as the supposedly broken version I upgraded to, 252.6-1+rpi1. That is also the most current version in Debian (minus experimental).
I then tried to see if I could get a shell on the machine somehow to troubleshoot it in the bad state. I tried to get sshd to start earlier in the boot process, but after messing with the dependency directives in ssh.service and sshd.service a few times without any change in behavior, I gave up on it for now.
(destiny:19:23:EDT)% head ./system/sshd.service
[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
#After=network.target auditd.service
After=systemd-remount-fs.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run
[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
(the above didn’t do squat but I’m probably doing it wrong or I need to manually run whatever systemctl edit usually does after one changes a unit file)
I’m at a stopping point for this now. I suppose the next thing I might try is re-cloning the SD card again and doing a full upgrade instead of just udev or just reboot-required packages to see if there’s some missed dependency somewhere that doesn’t result in the breakage, but that’s a long shot, especially since any udev dependency would either be kernel or systemd-related, and I upgraded those and it didn’t help. I also need to figure out why entering my root password doesn’t get me into the system—that’s actually more troubling than the other issue.
For those who are reading this post, have you seen this before or have any suggestions to try?
Update on 2023-05-01
I ended up figuring this out last week, mostly. I still don’t know why just upgrading the udev package bricked the RPi, but I do know that the raspberrypi-kernel package will fail to install due to insufficient disk space without returning an error code to dpkg, so if someone isn’t watching the terminal closely, this can be missed.
This was part of my problem. /boot was only 60 MiB on this machine (it was an older image; my newer RPi installs have a 253 MiB /boot) and the kernel install failed, but the module install to /lib/modules succeeded. The existing kernel image was untouched but its modules were no longer present. So, upon reboot, lots of things didn’t work at all since there were no loadable modules.
To avoid the nastiness of resizing /boot, I ended up just unmounting /boot, mounting it at /realboot, rsync’ing the contents over to /boot (which is then temporarily just part of /), doing the kernel upgrade, then rsync’ing /boot back to /realboot, and unmounting & remounting things again. This only works because during the install of raspberrypi-kernel you need much more than 60 MiB in /boot, but once the upgrade is finished, 60 MiB is sufficient (barely):
Filesystem Size Used Avail Use% Mounted on
/dev/mmcblk0p1 60M 56M 4.8M 93% /boot
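As a sketch, the shuffle looks like this (device node assumed; don’t reboot in the middle of it):

```shell
# temporarily back /boot with the root filesystem, which has plenty of space
umount /boot
mkdir -p /realboot
mount /dev/mmcblk0p1 /realboot
rsync -a /realboot/ /boot/            # /boot is now just a directory on /

apt-get install raspberrypi-kernel    # upgrade runs with room to spare

rsync -a --delete /boot/ /realboot/   # copy the results back to the real partition
rm -rf /boot/*                        # clean the temporary copy off the root fs
umount /realboot
mount /dev/mmcblk0p1 /boot
```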
The fix here is to not upgrade udev by itself (I only did that as a test) but to always upgrade it and the kernel at the same time. So, if I had done the /boot trick originally, I wouldn’t have had any issues.
About a week ago I decided to throw in the towel on my Juniper Networks certifications, namely the only one I have left, the JNCIE-SP.
I passed the JNCIE-M lab in Herndon, VA, USA on 2010-09-24 (age: 29) and it was converted to the JNCIE-SP on 2012-05-19. I spent countless hours building and manipulating a Junos (when did they change it from JUNOS?) virtual lab with Olives and tons of logical routers (nowadays called LSYS) and more VLAN tags than a single fxp0 control plane interface was ever meant to carry to learn the ins and outs of things like MPLS TE, multicast, and BGP. I did most of the lab work and studying at the Panera Bread in Ballantyne, which was a short walk from the condominium I owned in Charlotte, NC between 2005 and 2014. No, I didn’t gain any weight during this period because it was typically after work and also after a trip to the Mecklenburg Aquatic Center, where I swam countless laps.
My lab setup started with a bare metal eMachines Celeron-based PC running a version (8.4?) of Junos Olive, which consisted of the Junos control plane and the FreeBSD kernel as a data plane(-ish). It was a horrible hack and the only way to add more nodes was to create logical-routers and connect them together with VLAN tags on the only interfaces that existed on the image, which came from the fxp driver in FreeBSD. I also made use of some MX240s that were set aside for “testing” in the lab at my job at the time (Time Warner Cable) to gain exposure to some data plane things that the Olive didn’t support, like CoS.
I passed the JNCIP-M lab in Sunnyvale, CA, USA on 2009-09-16 and found it fairly easy (I was done in under half the time allocated). The JNCIE-M was quite a bit more complex (IS-IS, MPLS-TE, SONET, and other stuff) and if I remember correctly I took most of the time allocated and then YOLO’ed the submit button. Here’s a photo of the office on the day of the exam:
I ended up passing it on the first try. This was in stark contrast to my experience with the JNCIE-SEC, which I failed twice and decided not to pursue further.
The Junos lab has changed a bit over the years but is still technically running! It’s now powered by vMX and some SRX hardware running in both packet and flow mode and is documented here (yes, it’s up to date as of this writing!). As for the non-Junos systems, VMs changed to LXCs and Dynamips changed to IOSv, NX-OSv, and IOS XRv. No, I never used GNS3 or anything like that. The lab is fully built from my own scripts that call KVM, VirtualBox, and Linux’s networking and container facilities like bridges and LXCs.
For a while I carried both the JNCIE-SP and JNCIP-SEC. I ended up giving up on the JNCIP-SEC in 2019 and that certification expired on 2019-07-16. The last time I really touched firewalls in a production environment was in 2013 so that certification was fairly useless. I kept recertifying the JNCIE-SP until 2023. My scores kept falling since I didn’t really work with any Juniper Networks gear past 2014 or 2015. Although I initially scheduled the JNCIP-SP for May of 2023, I ended up cancelling it because I figured dragging it on and on wasn’t worth the expense (my current employer would still continue to pay the $400/exam) or the studying.
I suppose it’s the end of an era. Although, I still have lots of bare-metal Junos-based boxes at home that I find myself tweaking periodically: 4x EX2200-Cs, 3x SRX210s, and 1x SRX300. I’m not counting the NetScreen-5 that I still keep online, which runs an ancient version of ScreenOS (NetScreen was acquired by Juniper Networks in 2004).
Nowadays my day job consists of some amount of traditional networking, project management, and business development. The days of being a command-line warrior are mostly behind me.
I will still be a fully-certified JNCIE until 2023-08-17, when I will be designated a JNCIE Emeritus, which practically means nothing. Effectively, this date is when the sun sets on my JNCIE certification (AKA good night).
And, I reused the following from my previous build (originally a Core i9-9900K):
Corsair Carbide 200R Case
Corsair RM850x Power Supply
Crucial SATA MX500 4TB SSD
2x Western Digital WDC WD40EZRZ-22GXCB0 4TB HDD via USB
Pioneer SATA XL BD-RW
LG SATA BD-DR
Hauppauge PCI-e 4x DVB/HDTV Tuner
USB 3.0 PCI-e Adapter
I didn’t go for water cooling and also used the NA-RC7 “low noise” adapter to make the CPU fan spin slower and therefore not make as much noise. I wasn’t going to overclock so I figured this would be fine since the NH-D15S is a beast of a heatsink. I don’t game at all but wanted a mid-range GPU in case I decided to do anything more interesting than Google Earth, and I picked the i9-13900KF because it has the best single-thread performance (the criterion for my last build, too):
I still don’t know why the KF is faster than the K variant. K means unlocked multiplier and F means without integrated graphics. Supposedly, since there is no heat & power consumed by the iGPU on the KF series, it can overclock more than the K variant. I’m not sure that fully explains it, though. The i9-12900K slightly edges out the i9-12900KF farther down the list, but that is well within the margin of error of PassMark’s testing, I’d think.
The machine doesn’t actually sit on my desk so I don’t care about any kind of flashy RGB stuff, but it seemed to be impossible to find premium RAM that didn’t have some LEDs on it (in addition to the motherboard). Here’s the Corsair RAM being “blingy”:
The build went fine, overall. I think I could have done a better job applying thermal paste but meh. The MSI BIOS quickly indicated that all my stuff was working as expected. I did notice that the RAM frequency was 4000 MHz even though the RAM itself was spec’ed at 5200 MHz. I found out later that 5200 MHz is a [sanctioned] OC specification (Intel XMP) so I’m fine with 4000 MHz as long as things are stable.
I did end up changing to legacy boot because I didn’t see any reason to change from grub-pc to grub-efi (I have no use for secure boot). The MSI BIOS flipped some other options when I did that:
I initially booted my Debian install from the original NVMe SSD connected by a USB converter, which surprisingly went very well (albeit a bit slow). I then used a Knoppix live DVD/USB to clone the first few MiB of the disk (for GRUB) and then recreated all filesystems on the 2TB SSD and rsync’ed content over (and.. I forgot the -p in rsync, so I had to flip the setuid bit on ping and mtr!).
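For next time: archive mode avoids the setuid surprise, since -a implies -p (and more). The mount points here are placeholders:

```shell
# clone the old root to the new SSD, preserving perms, owners, links, and devices
rsync -aHAX --numeric-ids /mnt/old-root/ /mnt/new-root/
```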
The Noctua cooler works fairly well although things get pretty toasty if I load up 32 processes of burnP6 and let it sit for a few minutes:
There are a few interesting things above that I noticed after the fact. First, the way the 16x E-cores vs. 8x P-cores are enumerated in Linux is interesting. The P-cores are listed first and have core IDs 0,4,8,12,16,20,24,28. The E-cores are 32 through 47. I don’t know why the P-core IDs skip by 4 but the sysfs enumeration is even weirder because it breaks out threads, which are only supported on the P-cores.
(destiny:20:57:EST)% for i in $(seq 0 31); do echo -n "${i}: "; echo -n "Core ID #"; cat /sys/devices/system/cpu/cpu${i}/topology/core_id; done
0: Core ID #0 // P-core
1: Core ID #0 // P-core
2: Core ID #4 // P-core
3: Core ID #4 // P-core
4: Core ID #8 // P-core
5: Core ID #8 // P-core
6: Core ID #12 // P-core
7: Core ID #12 // P-core
8: Core ID #16 // P-core
9: Core ID #16 // P-core
10: Core ID #20 // P-core
11: Core ID #20 // P-core
12: Core ID #24 // P-core
13: Core ID #24 // P-core
14: Core ID #28 // P-core
15: Core ID #28 // P-core
16: Core ID #32 // E-core
17: Core ID #33 // E-core
18: Core ID #34 // E-core
19: Core ID #35 // E-core
20: Core ID #36 // E-core
21: Core ID #37 // E-core
22: Core ID #38 // E-core
23: Core ID #39 // E-core
24: Core ID #40 // E-core
25: Core ID #41 // E-core
26: Core ID #42 // E-core
27: Core ID #43 // E-core
28: Core ID #44 // E-core
29: Core ID #45 // E-core
30: Core ID #46 // E-core
31: Core ID #47 // E-core
I’ve annotated which is a P-core vs. an E-core. I’m still not clear on how the Linux kernel decides what tasks to throw at E-cores vs. P-cores; watching htop as I use the workstation, it seems that everything’s just treated equally. Maybe it’s because the INTEL_HFI stuff is not fully integrated yet. I did notice that the 6.0.12 kernel that’s current in Debian testing at the time of writing does not have INTEL_HFI_THERMAL enabled, which might help (or make things worse since the E-cores run at a lower clock speed?). I’ve played around with turning on / off all of the E-cores and most of the P-cores (minus cpu0, which is a P-core and cannot be disabled) but haven’t really concluded anything concrete about power saving vs. performance.
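Toggling cores goes through sysfs; cpu16 below is one of the E-cores per the enumeration above (cpu0 has no online attribute at all):

```shell
# take an E-core offline, then bring it back (as root)
echo 0 > /sys/devices/system/cpu/cpu16/online
echo 1 > /sys/devices/system/cpu/cpu16/online
```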
Second, this is the first time I’ve seen a core on a desktop PC of mine reach 100°C. I’m guessing that this resulted in some throttling (cpuinfo shows 5478.906 MHz for that core ID so I’m not sure how much). Maybe if I had opted for water cooling (or removed the “low noise” adapter!) it wouldn’t have gotten so hot.
While I’m not going to use this system for gaming, I did notice that the RTX 3060 is crippled and will detect ETH mining:
Apparently only the first RTX 30xx cards produced did not have this restriction but all of the current ones do. I don’t really care but I don’t like the hardware I buy to be encumbered for silly reasons.
All-in-all this feels like a good upgrade and should last 4-5 years like my last i9-9900K build, which was done toward the end of 2018.
I hate new versions of macOS. I try not to call them upgrades anymore. They’re just changes for the sake of change. I finally did a reinstall of my 2017 MacBook from Big Sur (11.x) to Ventura (13.x) and one thing annoys me and another thing is broken. I’ve seen no benefits from the new OS.
The one thing that annoys me is that they replaced System Preferences.app with System Settings.app. Sure, it feels more like iOS but macOS runs on computers and not mobile devices. It requires more scrolling and clicking than System Preferences.app and it feels like browsing settings is more painful than it was previously.
The one thing I’ve noticed that’s outright broken is the ability to disable randomized IPv6 addresses, which I do not want on my network. By default, macOS uses RFC 4941 (privacy extensions) and CGA (cryptographically generated addresses), which is part of SEND. This results in IPv6 addresses being randomized and periodically rotated, and includes randomized link-local addresses, too. The sysctls have changed over the releases but in Monterey and Big Sur these could be disabled by adding the following to /etc/sysctl.conf:
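From memory, the two lines were the following; treat the exact names as an assumption since Apple shuffles these around between releases:

```
# /etc/sysctl.conf
# RFC 4941 temporary addresses:
net.inet6.ip6.use_tempaddr=0
# SEND / CGA "secured" addresses:
net.inet6.send.opmode=0
```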
The first one still works but the second one does not. It is flipped back to 1 on every reboot (it seems like the option in sysctl.conf is ignored) and setting it to 0 once the system has booted does nothing, regardless of turning Wi-Fi off and on. Even twiddling the insecure flag in ifconfig doesn’t help:
ifconfig en0 inet6 insecure
The LL and GUA addresses are still CGA-based and show secured in the ifconfig output:
Somehow my main Linux workstation (non-work equip.) has achieved 384 days of uptime. Sure, that’s fine, but Xorg has been running for 384 days, too, which is fairly impressive:
It’s a pretty beefy machine and I’ve kept things like browsers, libc, and SSH up-to-date while avoiding touching any Xorg-related things like Xfce4 and.. uh oh.. Nvidia drivers.
I’ve also unintentionally racked up a few other impressive uptimes at home, since I suppose power is fairly decent in the area. We’ve had a few 5-6 minute interruptions over the year, though, and all my UPSes have come in very handy.
The Jetway box above is running an Atom D2700 CPU that can run in 64-bit mode, but the Jetway BIOS doesn’t support it, unfortunately. I’ve done about 5 TiB of Wi-Fi in 500 days. That’s not too much, I suppose, but my TVs and other streaming devices don’t use Wi-Fi!
Juniper EX2200-C virtual chassis:
{master:0}
prox@zero> show system uptime
fpc0:
-------------------------------------------------------------------------
Current time: 2022-11-26 21:13:13 EST
Time Source: NTP CLOCK
System booted: 2021-07-10 14:09:30 EDT (72w0d 08:03 ago)
Protocols started: 2022-08-11 09:20:26 EDT (15w2d 12:52 ago)
Last configured: 2022-11-08 21:45:05 EST (2w3d 23:28 ago) by prox
9:13PM up 504 days, 8:04, 1 user, load averages: 0.79, 0.86, 0.79
fpc1:
--------------------------------------------------------------------------
Current time: 2022-11-26 21:13:14 EST
Time Source: LOCAL CLOCK
System booted: 2022-08-11 09:22:34 EDT (15w2d 12:50 ago)
Last configured: 2022-11-08 21:44:43 EST (2w3d 23:28 ago) by prox
9:13PM up 107 days, 12:51, 0 users, load averages: 0.01, 0.08, 0.07
Alright, at least half of that VC has some decent uptime.
This one takes the cake, though. It’s an Atlantic.net VPS that is $0.99/mo in Toronto, Canada:
(tiny:21:11:EST)% uname -a
Linux tiny 4.14.0-3-amd64 #1 SMP Debian 4.14.12-2 (2018-01-06) x86_64 GNU/Linux
(tiny:21:11:EST)% uptime
 21:11:22 up 1777 days, 23:53,  1 user,  load average: 0.00, 0.00, 0.00
(tiny:21:11:EST)%
APT frequently fails to fork() and I have to stop things like SmokePing and snmpd to run any upgrades. It only has 256 MiB of RAM.
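The workaround is just bouncing the memory hogs around the upgrade; the service names are the ones mentioned above:

```shell
# free enough memory for apt to fork on the 256 MiB VPS
systemctl stop smokeping snmpd
apt-get update && apt-get upgrade
systemctl start smokeping snmpd
```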
[I also tweeted this here (my second tweet about the issue) but Twitter is going through a rough time right now so I figured I should blog it, too]
Before moving in June of 2021 I created a list of every service or organization that had my address so I knew what I needed to update after moving. The update process was a huge pain in the butt, as one might expect. It took about two weeks for Loudoun County to process the paperwork for the sale so it was well into July before most 3rd party systems were up-to-date. USPS was updated pretty quickly and I have a feeling that was due to the builder submitting that information ahead of time. There were a couple items that took some extra time and action on my part, though:
Google
A Bank
Best Buy
FedEx Delivery Manager [the topic of this blog entry]
After 6+ weeks Google still did not have my street address in the system. It had the street itself but not the unit number, which was a little weird. After a while I just submitted an update to Google Maps and manually placed my unit number on the map. Within a day it was updated. This unblocked a variety of services that use Google Maps data for address verification. I noticed another builder in the area had this pre-populated on Google Maps even though the homes were still under construction.
A bank (out of many others that I had no issue with), which will remain nameless, required me to call their technical support line to update my address. They kinda blamed me for doing things wrong to begin with but then magically it was updated.
Best Buy, of all things, took a few months to recognize my address. I'm not sure what 3rd party system they use for address verification, but this one remained on the list for a while.
The last one is still broken, but I don't think it's broken just for me. FedEx appears to have a few different uncoordinated systems that can store user profiles, and FedEx Delivery Manager appears to be a standalone one. It's similar to UPS My Choice in that one can select preferences for where packages are held or delivered, along with some other options. It doesn't appear to have all the functionality of UPS My Choice (e.g. the notifications when a label is created for an inbound package, which is really useful) but is still nice to have. It wouldn't accept my address for months. I went back and forth with someone on Twitter who indicated I needed to call technical support, which I did, but I couldn't figure out how to speak to a human. I finally just left it on the list as "broken" and called everything done.
Over a year later, after FedEx randomly delivered a package to my garage (which is kinda stupid since the front of my unit is directly on a street and package thefts are not prevalent in the area), I decided to give it another try. Nope, it still gave me an error. I decided to try it without the suffix (Sq or Square), and it accepted the address! I was able to successfully complete the sign-up. However, this wasn't right, because I know for a fact there's another street with this name in the surrounding area with a different suffix (although the ZIP code is different, too). I decided to try other things, too. It turns out I can make up addresses and it'll let me sign up!
My only conclusion is that Sq or Square (yes, I tried both) is not recognized as a valid suffix in the FedEx Delivery Manager system. I ended up canceling the account for the address without the Sq because I didn't want it to somehow affect my shipments, which seem to work fine (I'm guessing they use FedEx's official address verification system). Square is not too typical of a suffix, I suppose, but it's not as odd as some of the others in NoVA, like Terrace (Ter).
Ultimately, I suppose I'm not really missing out. The bulk of my packages are delivered via USPS, AMZL, and UPS. I'm not really interested in expending the effort outside of this blog entry to try to get this fixed because, at the end of the day, I'm really only a gnat.
[Whoops, this was a draft that I had written up in February of 2022 but never published and then forgot about. Well, I figured I'd just publish it now because even though it's largely irrelevant, it's better late than never?]
Yes, I’m weird. I can’t decide between Android and iOS when it comes to mobile devices so I have both.
My Android phone was a OnePlus 6T up until this week, when I decided to try Samsung's Galaxy line and went with the S22. I was considering the S22+, but its 170 mm screen is larger than the OnePlus 6T's 165 mm, which is about the max phone size for me. The S22 comes in at 150 mm, which feels small but is about the same as my iOS device, an iPhone 13 Pro (155 mm).
This will also be the first Android phone where I don’t enable root access.
After a week with the phone and the Samsung-branded leather case, my first impressions aren’t all that great:
It took me days to shut off and disable/uninstall the Samsung garbage apps and endlessly-annoying notifications & suggestions. The sheer amount of junk made it feel like buying a Windows 98 PC from a shady manufacturer in the late 1990s. I almost threw the phone out the window halfway through this process.
The biometrics (face and fingerprint authentication) are AWFUL. When comparing it to my other phones, iPhone 13 Pro >> OnePlus 6T > Galaxy S22. I’m surprised it’s that bad. It’s gotten to a point where I just don’t expect them to work at all and always start to enter my PIN after turning the screen on. [Update 2022-11-20: The fingerprint authentication got much better after many months of updates but the face authentication is still mostly useless as it does not work most of the time.]
[Update 2022-11-20: The screen is slippery to the point that double tapping (to zoom or zoom out of Google Maps, for example) is a fail most of the time. It’s the most slippery phone I’ve ever owned. I don’t like the idea of screen protectors so I have just gotten used to it over time.]
I’m still able to turn off animations/transitions using developer options without rooting, which makes the phone instantly feel 10x faster (if I couldn’t do this, I would have returned it).
The camera performance is the best I’ve ever seen on a phone. The low-light/night photography blows away anything the iPhone 13 Pro can do. This is the best feature of the phone. However, the inability to have the camera application reset all settings to default (zoom level, night mode, etc.) on exit has me frequently yanking my phone out of my pocket to quickly take a photo and then scowling as I have to reset some setting before taking the shot.
The phone is very light. I'll compare against the iPhone 13 Pro since it has the same dimensions: it's 204 g while the S22 is 167 g. Even though it's light, the build quality seems good.
I’ll get a better idea of battery life over the next month but it seems like it’ll last a day and a half for me. It’s got a bunch of battery/power options, though – I’ve left the setting at “optimized” for now.
Overall, I’m not too impressed with the S22. Maybe I’ll try to re-record the biometrics to see if that improves things. [Update 2022-11-12: Nope, that didn’t do anything, but software updates did help]