Core i9-13900KF Upgrade

I finally decided to upgrade my Linux workstation this past week. I went for the following specifications:

  • Intel Core i9-13900KF CPU
  • MSI GeForce RTX 3060 GPU
  • MSI MPG Z790 Carbon Motherboard
  • 128GiB Corsair Dominator Platinum DDR5 5200MHz RAM
  • Noctua NH-D15S CPU Cooler
  • Samsung NVMe 980 PRO 2TB SSD

And, I reused the following from my previous build (originally a Core i9-9900K):

  • Corsair Carbide 200R Case
  • Corsair RM850x Power Supply
  • Crucial SATA MX500 4TB SSD
  • 2x Western Digital WDC WD40EZRZ-22GXCB0 4TB HDD via USB
  • Pioneer SATA XL BD-RW
  • Hauppauge PCI-e 4x DVB/HDTV Tuner
  • USB 3.0 PCI-e Adapter
Final Build!

I didn’t go for water cooling, and I used the NA-RC7 “low noise” adapter to make the CPU fan spin slower and therefore generate less noise. I wasn’t going to overclock, so I figured this would be fine since the NH-D15S is a beast of a heatsink. I don’t game at all but wanted a mid-range GPU in case I decided to do anything more interesting than Google Earth, and I picked the i9-13900KF because it has the best single-thread performance (the criterion for my last build, too):

Single thread performance on 2023-01-07

I still don’t know why the KF is faster than the K variant. K means unlocked multiplier and F means no integrated graphics. Supposedly, since there is no heat or power consumed by the iGPU on the KF series, it can clock a bit higher than the K variant. I’m not sure that fully explains it, though. The i9-12900K slightly edges out the i9-12900KF farther down the list, but that is well within the margin of error of PassMark’s testing, I’d think.

The machine doesn’t actually sit on my desk, so I don’t care about any kind of flashy RGB stuff, but it seemed to be impossible to find premium RAM that didn’t have some LEDs on it (in addition to the motherboard). Here’s the Corsair RAM being “blingy”:

Who cares, though?

The build went fine, overall. I think I could have done a better job applying thermal paste, but meh. The MSI BIOS quickly indicated that all my stuff was working as expected. I did notice that the RAM frequency was 4000 MHz even though the RAM itself was spec’ed at 5200 MHz. I found out later that 5200 MHz is a [sanctioned] OC specification (Intel XMP), so I’m fine with 4000 MHz as long as things are stable.

MSI BIOS after first power on

I did end up changing to legacy boot because I didn’t see any reason to change from grub-pc to grub-efi (I have no use for secure boot). The MSI BIOS flipped some other options when I did that:

Flipped on legacy boot mode and some other options came with it

I initially booted my Debian install from the original NVMe SSD connected via a USB converter, which surprisingly went very well (albeit a bit slowly). I then used a Knoppix live USB to clone the first few MiB of the disk (for GRUB), recreated all the filesystems on the 2TB SSD, and rsync’ed the content over (forgetting -p in rsync, so I had to restore the setuid bit on ping and mtr!).

neofetch output

The Noctua cooler works fairly well although things get pretty toasty if I load up 32 processes of burnP6 and let it sit for a few minutes:


There are a few interesting things above that I noticed after the fact. First, the way the 16x E-cores vs. 8x P-cores are enumerated in Linux is interesting. The P-cores are listed first, with core IDs 0, 4, 8, 12, 16, 20, 24, and 28. The E-cores are 32 through 47. I don’t know why the P-core IDs skip by 4, and the sysfs enumeration is even weirder because it breaks out threads, which are only supported on the P-cores.

(destiny:20:57:EST)% for i in $(seq 0 31); do echo -n "${i}: "; echo -n "Core ID #"; cat /sys/devices/system/cpu/cpu${i}/topology/core_id; done         
0: Core ID #0 // P-core
1: Core ID #0 // P-core
2: Core ID #4 // P-core
3: Core ID #4 // P-core
4: Core ID #8 // P-core
5: Core ID #8 // P-core
6: Core ID #12 // P-core
7: Core ID #12 // P-core
8: Core ID #16 // P-core
9: Core ID #16 // P-core
10: Core ID #20 // P-core
11: Core ID #20 // P-core
12: Core ID #24 // P-core
13: Core ID #24 // P-core
14: Core ID #28 // P-core
15: Core ID #28 // P-core
16: Core ID #32 // E-core
17: Core ID #33 // E-core
18: Core ID #34 // E-core
19: Core ID #35 // E-core
20: Core ID #36 // E-core
21: Core ID #37 // E-core
22: Core ID #38 // E-core
23: Core ID #39 // E-core
24: Core ID #40 // E-core
25: Core ID #41 // E-core
26: Core ID #42 // E-core
27: Core ID #43 // E-core
28: Core ID #44 // E-core
29: Core ID #45 // E-core
30: Core ID #46 // E-core
31: Core ID #47 // E-core

I’ve annotated which is a P-core vs. an E-core. I’m still not clear on how the Linux kernel decides which tasks to throw at E-cores vs. P-cores; watching htop as I use the workstation, it seems that everything is just treated equally. Maybe that’s because the INTEL_HFI stuff is not fully integrated yet. I did notice that the 6.0.12 kernel that’s current on Debian testing at the time of writing does not have INTEL_HFI_THERMAL enabled, which might help (or make things worse, since the E-cores run at a lower clock speed?). I’ve played around with turning all of the E-cores and most of the P-cores on and off (minus cpu0, which is a P-core and cannot be disabled) but haven’t concluded anything concrete about power saving vs. performance.
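For fun, the logical-CPU-to-core_id mapping observed above can be reproduced with a tiny sketch. The stride of 4 per P-core is just the pattern as enumerated; the function name and parameters are mine, not anything from Intel documentation:

```python
def i9_13900k_core_ids(p_cores=8, e_cores=16):
    """Reproduce the core_id list sysfs reports for 32 logical CPUs:
    8 P-cores x 2 SMT threads, then 16 single-threaded E-cores."""
    ids = []
    for p in range(p_cores):
        ids += [p * 4, p * 4]          # each P-core's 2 threads share a core_id; IDs step by 4
    base = p_cores * 4                 # E-core IDs pick up right after: 32
    for e in range(e_cores):
        ids.append(base + e)           # 32 .. 47, one logical CPU each
    return ids

print(i9_13900k_core_ids())  # [0, 0, 4, 4, ..., 28, 28, 32, 33, ..., 47]
```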

Second, this is the first time I’ve seen a core on a desktop PC of mine reach 100°C. I’m guessing this resulted in some throttling (cpuinfo shows 5478.906 MHz for that core ID, so I’m not sure how much). Maybe if I had opted for water cooling (or removed the “low noise” adapter!) it wouldn’t have gotten so hot.

While I’m not going to use this system for gaming, I did notice that the RTX 3060 is crippled and will detect ETH mining:

(destiny:21:09:EST)% lspci|grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] (rev a1)

Apparently only the first batch of RTX 30xx cards produced did not have this restriction, but all of the current ones do. I don’t really care, but I don’t like the hardware I buy being encumbered for silly reasons.

All in all, this feels like a good upgrade and should last 4-5 years like my last i9-9900K build, which was done toward the end of 2018.

macOS Ventura

I hate new versions of macOS. I try to not call them upgrades anymore. They’re just changes for the sake of changes. I finally did a reinstall of my 2017 MacBook from Big Sur (11.x) to Ventura (13.x) and one thing annoys me and another thing is broken. I’ve seen no benefits from the new OS.

The one thing that annoys me is that they replaced System Preferences with System Settings. Sure, it feels more like iOS, but macOS runs on computers, not mobile devices. It requires more scrolling and clicking than System Preferences did, and browsing settings feels more painful than it was previously.

The one thing I’ve noticed that’s outright broken is the ability to disable randomized IPv6 addresses, which I do not want on my network. By default, macOS uses RFC 4941 (privacy extensions) and CGAs (cryptographically generated addresses, which are part of SEND). This results in IPv6 addresses being randomized and periodically rotated, including randomized link-local addresses. The sysctls have changed over the releases, but on Monterey and Big Sur these could be disabled by adding the following to /etc/sysctl.conf:


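The original snippet is missing from this post, so, from memory, the pair in question was likely the following (treat both names, and which one is “first” vs. “second”, as my assumption rather than the original):

```
# assumed reconstruction -- the original sysctl.conf snippet was lost
net.inet6.ip6.use_tempaddr=0   # disable RFC 4941 temporary (privacy) addresses
net.inet6.send.opmode=0        # disable SEND/CGA "secured" addresses
```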
The first one still works, but the second one does not. It is flipped back to 1 on every reboot (it seems the option in sysctl.conf is ignored), and setting it to 0 once the system has booted does nothing, regardless of toggling Wi-Fi off and on. Even twiddling the insecure flag in ifconfig doesn’t help:

ifconfig en0 inet6 insecure

The LL and GUA addresses are still CGA-based and show secured in the ifconfig output:

(orion:11:06:EST)% ifconfig en0
	ether dc:a9:04:8b:dd:95 
	inet6 fe80::77:a9c8:1948:3dad%en0 prefixlen 64 secured scopeid 0x5 
	inet6 2620:6:2003:106:816:afef:2dbc:262a prefixlen 64 autoconf secured 
	inet netmask 0xffffff00 broadcast
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active

I’ve yet to find a solution to this. There’s no option in the GUI and no option exposed in networksetup, either.

Anyone have any ideas on how to fix this?


Somehow my main Linux workstation (non-work equipment) has achieved 384 days of uptime. Sure, that’s fine, but Xorg has been running for 384 days, too, which is fairly impressive:

(destiny:21:06:EST)% uptime
21:06:58 up 384 days, 22:53, 22 users, load average: 1.77, 1.17, 0.89
(destiny:21:06:EST)% ps -eo pid,lstart,cmd |grep "[X]org"
1947 Sat Nov 6 23:13:49 2021 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch

It’s a pretty beefy machine and I’ve kept things like browsers, libc, and SSH up-to-date while avoiding touching any Xorg-related things like Xfce4 and.. uh oh.. Nvidia drivers.

I’ve also unintentionally accumulated a few other impressive uptimes at home, since I suppose power is fairly reliable in the area. We’ve had a few 5-6 minute interruptions over the year, though, and all my UPSes have come in very handy.

Wi-Fi router (Jetway box running Debian):

(trill:21:14:EST)% uptime
21:14:57 up 508 days, 21:22, 1 user, load average: 0.02, 0.03, 0.00
(trill:21:14:EST)% uname -a
Linux trill 5.3.0-3-686-pae #1 SMP Debian 5.3.15-1 (2019-12-07) i686 GNU/Linux
(trill:21:15:EST)% ifconfig br0
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::c872:cbff:fe3d:6002  prefixlen 64  scopeid 0x20<link>
        inet6 2620:6:2003:106::1  prefixlen 64  scopeid 0x0<global>
        ether ca:72:cb:3d:60:02  txqueuelen 1000  (Ethernet)
        RX packets 3718876922  bytes 1054181901663 (981.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5010257194  bytes 4495091370445 (4.0 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The Jetway box above runs an Atom D2700 CPU that is 64-bit capable, but the Jetway BIOS doesn’t support it, unfortunately. I’ve pushed about 5 TiB over Wi-Fi in 500 days. That’s not too much, I suppose, but my TVs and other streaming devices don’t use Wi-Fi!

Juniper EX2200-C virtual chassis:

prox@zero> show system uptime

Current time: 2022-11-26 21:13:13 EST
Time Source: NTP CLOCK
System booted: 2021-07-10 14:09:30 EDT (72w0d 08:03 ago)
Protocols started: 2022-08-11 09:20:26 EDT (15w2d 12:52 ago)
Last configured: 2022-11-08 21:45:05 EST (2w3d 23:28 ago) by prox
9:13PM up 504 days, 8:04, 1 user, load averages: 0.79, 0.86, 0.79

Current time: 2022-11-26 21:13:14 EST
Time Source: LOCAL CLOCK
System booted: 2022-08-11 09:22:34 EDT (15w2d 12:50 ago)
Last configured: 2022-11-08 21:44:43 EST (2w3d 23:28 ago) by prox
9:13PM up 107 days, 12:51, 0 users, load averages: 0.01, 0.08, 0.07

Alright, at least half of that VC has some decent uptime.

This one takes the cake, though. It’s a $0.99/mo VPS in Toronto, Canada:

(tiny:21:11:EST)% uname -a
Linux tiny 4.14.0-3-amd64 #1 SMP Debian 4.14.12-2 (2018-01-06) x86_64 GNU/Linux
(tiny:21:11:EST)% uptime
21:11:22 up 1777 days, 23:53, 1 user, load average: 0.00, 0.00, 0.00

APT frequently fails to fork() and I have to stop things like SmokePing and snmpd to run any upgrades. It only has 256 MiB of RAM.

FedEx Delivery Manager and Square Suffix

[I also tweeted this here (my second tweet about the issue) but Twitter is going through a rough time right now so I figured I should blog it, too]

Before moving in June of 2021 I created a list of every service or organization that had my address so I knew what I needed to update after moving. The update process was a huge pain in the butt, as one might expect. It took about two weeks for Loudoun County to process the paperwork for the sale so it was well into July before most 3rd party systems were up-to-date. USPS was updated pretty quickly and I have a feeling that was due to the builder submitting that information ahead of time. There were a couple items that took some extra time and action on my part, though:

  • Google
  • A Bank
  • Best Buy
  • FedEx Delivery Manager [the topic of this blog entry]

After 6+ weeks, Google still did not have my street address in its system. It had the street itself but not the unit number, which was a little weird. After a while I just submitted an update to Google Maps and manually placed my unit number on the map. Within a day it was updated. This unblocked a variety of services that use Google Maps data for address verification. I noticed another builder in the area had this pre-populated on Google Maps even though the homes were still under construction.

A bank (out of many others that I had no issue with), which will remain nameless, required me to call their technical support line to update my address. They kinda blamed me for doing things wrong to begin with, but then it was magically updated.

Best Buy, of all things, took a few months to recognize my address. I’m not sure what 3rd party system they use for address verification but this one remained on the list for awhile.

The last one is still broken, and I don’t think it’s just broken for me. FedEx appears to have a few different uncoordinated systems that can store user profiles, and FedEx Delivery Manager appears to be a standalone one. It’s similar to UPS My Choice in that one can select preferences for where packages are held or delivered, among other options. It doesn’t appear to have all the functionality of UPS My Choice (e.g., the notifications when a label is created for an inbound package, which is really useful) but is still nice to have. It wouldn’t accept my address for months. I went back and forth with someone on Twitter about this, who indicated I needed to call technical support; I did, but couldn’t figure out how to reach a human. I finally just left it on the list as “broken” and called everything done.

Over a year later, after FedEx randomly delivered a package to my garage (which is kinda stupid since the front of my unit is directly on a street and package thefts are not prevalent in the area), I decided to give it another try. Nope, it still gave me an error. I decided to try it without the suffix (Sq or Square) and it accepted the address! I was able to successfully complete the sign-up. However, this wasn’t right, because I know for a fact there’s another street with this name in the surrounding area with a different suffix (although the ZIP code is different, too). I decided to try other things as well. It turns out I can make up addresses and it’ll let me sign up!


My only conclusion is that Sq or Square (yes, I tried both) is not recognized as a valid suffix in the FedEx Delivery Manager system. I ended up canceling the account for the address without the Sq because I didn’t want it to somehow affect my shipments, which seem to work fine (I’m guessing they use FedEx’s official address verification system). Square is not too typical a suffix, I suppose, but not as odd as some of the other ones in NoVA like Terrace (Ter).

Ultimately, I suppose I’m not really missing out. The bulk of my packages are delivered via USPS, AMZL, and UPS. I’m not really interested in expending the effort outside of this blog entry to try to get this fixed because, at the end of the day, I’m really only a gnat.

Samsung Galaxy S22

[Whoops, this was a draft that I had written up in February of 2022 but never published and then forgot about. Well, I figured I’d just publish it now, because even though it’s largely irrelevant, it’s better late than never?]

Yes, I’m weird. I can’t decide between Android and iOS when it comes to mobile devices so I have both.

My Android phone was a OnePlus 6T up until this week, when I decided to try Samsung’s Galaxy line and went with the S22. I was considering the S22+, but its screen size at 170 mm is larger than the OnePlus 6T’s 165 mm, which I consider the maximum size of a phone for me. The S22 comes in at 150 mm, which feels small but is about the same as my iOS device, an iPhone 13 Pro (155 mm).

This will also be the first Android phone where I don’t enable root access.

Samsung Galaxy S22

After a week with the phone and the Samsung-branded leather case, my first impressions aren’t all that great:

  1. It took me days to shut off and disable/uninstall the Samsung garbage apps and endlessly-annoying notifications & suggestions. The sheer amount of junk made it feel like a late-1990s Windows 98 PC from a shady manufacturer. I almost threw the phone out the window halfway through this process.
  2. The biometrics (face and fingerprint authentication) are AWFUL. When comparing it to my other phones, iPhone 13 Pro >> OnePlus 6T > Galaxy S22. I’m surprised it’s that bad. It’s gotten to a point where I just don’t expect them to work at all and always start to enter my PIN after turning the screen on. [Update 2022-11-20: The fingerprint authentication got much better after many months of updates but the face authentication is still mostly useless as it does not work most of the time.]
  3. [Update 2022-11-20: The screen is slippery to the point that double tapping (to zoom or zoom out of Google Maps, for example) is a fail most of the time. It’s the most slippery phone I’ve ever owned. I don’t like the idea of screen protectors so I have just gotten used to it over time.]
  4. I’m still able to turn off animations/transitions using developer options without rooting, which makes the phone instantly feel 10x faster (if I couldn’t do this, I would have returned it).
  5. The camera performance is the best I’ve ever seen on a phone. The low-light/night photography blows away anything the iPhone 13 Pro can do. This is the best feature of the phone. However, the inability to have the camera application reset all settings to default (zoom level, night mode, etc.) on exit has me frequently yanking my phone out of my pocket to quickly take a photo and then scowling as I have to reset some setting before taking the shot.
  6. The phone is very light. I’ll compare against the iPhone 13 Pro since it has the same dimensions: it’s 204 g and the S22 is 167 g. Even though it’s light, build quality seems to be good.
  7. I’ll get a better idea of battery life over the next month but it seems like it’ll last a day and a half for me. It’s got a bunch of battery/power options, though – I’ve left the setting at “optimized” for now.

Overall, I’m not too impressed with the S22. Maybe I’ll try to re-record the biometrics to see if that improves things. [Update 2022-11-12: Nope, that didn’t do anything, but software updates did help]

Moving On

After spending 7 years, 4 months, and 1 day in the Emerald City, my wife and I moved back to the east coast during the summer of 2021, to a neighborhood (actually a census-designated place, or CDP) named Brambleton in the Commonwealth of Virginia, within the area considered Northern Virginia (or NoVA). Seattle had its ups and downs for us, and I will sincerely miss the natural beauty of the Pacific Northwest, the WSDOT ferry rides to Bainbridge Island, and the quirky little eateries (and distilleries!) in the area. I won’t miss the broken politics, misguided legislation, and broken bridges. We purchased a new-construction townhouse within walking distance of the Brambleton Town Center and sold our home in the High Point neighborhood of West Seattle (ironically, we used the same great realtor to buy it in 2014 and sell it this year).

We just moved in!

Our new home has Verizon FiOS (now stylized as Fios, for some reason), and I opted for the highest speed plan, which, according to Verizon, is 940 Mbps downstream and 880 Mbps upstream. This is a welcome change from the troubles I had with Internet connectivity while in Seattle. The only downside is that there’s no IPv6, and there’s also no alternative provider due to a multi-year HOA agreement with Verizon. My cellular backup is still through Verizon Wireless, so I should probably change that one of these days. Starlink is always an option, I suppose, but at $100/mo it’s not really a cost-effective backup. Latency on the FiOS connection is fairly low, but the GPON network exhibits more jitter than VDSL2 or even DOCSIS, which is surprising:

SmokePing Plot to

It would be nice if Verizon would finally fix their equipment so it doesn’t keep breaking ICMP traceroutes, though.

Does CenturyLink DSL Block SIP?

tl;dr Yes, SIP INVITEs are blocked via what seems to be a packet size filter but some things are inconsistent.

Back in early 2020 I switched from Comcast Business to CenturyLink DSL for my main Internet connection. I still have Xfinity since my wife uses it for TV, and her package comes with a 300 Mbps plan (which I use for VPNs and some fun downstream-only load balancing.. but that’s another topic). Anyway, when I flipped IPv4 Internet access over to the DSL connection, I noticed that my outbound SIP connections to Vitelity (via my Asterisk server) were failing and timing out. SIP REGISTERs were fine, so I could still receive incoming calls. I did a 30-second debug with tcpdump and saw packets going out but nothing coming back in. PPPoE adds 8 bytes of overhead, and the packets going out were 1492 bytes, so I figured they would fit and that CenturyLink was just blocking SIP because they are “The Phone Company” and probably wanted me to buy a land line or something. I internally advertised Vitelity’s block from the LXC instance attached to my Xfinity connection and called it a day.

I was chatting with folks at work this past week regarding this, and it prompted me to dig a little deeper. What I observed in early 2020 didn’t paint the whole picture. The SIP INVITEs are being sent to a hostname that has a few A RRs, all out of the same netblock. The SIP packets themselves are huge, and even fragment on my local network since the total length of the first packet (including IP headers) is 1634 bytes. So, this means the SIP packets are being fragmented going out my Xfinity connection and still working fine:

(remus:19:27:PST)% sudo tcpdump -i enp3s0f1 -qvn net
tcpdump: listening on enp3s0f1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:27:21.045354 IP (tos 0x0, ttl 62, id 34953, offset 0, flags [+], proto UDP (17), length 1500) > UDP, bad length 1632 > 1472
19:27:21.045360 IP (tos 0x0, ttl 62, id 34953, offset 1480, flags [none], proto UDP (17), length 180) > ip-proto-17
19:27:21.228064 IP (tos 0x20, ttl 48, id 51882, offset 0, flags [none], proto UDP (17), length 567) > UDP, length 539
19:27:21.228368 IP (tos 0x0, ttl 62, id 34986, offset 0, flags [none], proto UDP (17), length 452) > UDP, length 424
19:27:21.228593 IP (tos 0x0, ttl 62, id 34987, offset 0, flags [+], proto UDP (17), length 1500) > UDP, bad length 1824 > 1472
19:27:21.228598 IP (tos 0x0, ttl 62, id 34987, offset 1480, flags [none], proto UDP (17), length 372) > ip-proto-17
19:27:21.311301 IP (tos 0x20, ttl 48, id 51883, offset 0, flags [none], proto UDP (17), length 492) > UDP, length 464
19:27:23.510342 IP (tos 0x20, ttl 48, id 51885, offset 0, flags [none], proto UDP (17), length 882) > UDP, length 854

So, fragmentation and packet size may not be the problem. SIP packets halfway through the outgoing call setup are even larger, past 1800 bytes! Fragmentation usually stinks, but it seems to work fine here.

First, why are these SIP packets so large? It looks like it’s because of the list of advertised available codecs. Here’s a full decode of the first message (my authentication data is temporarily scrambled, FWIW):

I could probably dust off my Asterisk configuration and figure out how to reduce the number of useless codecs that are advertised but since this works via my Xfinity connection I’m not going to focus on that.

Here’s what it looks like out my DSL connection:

(discovery:19:42:PST)% sudo tcpdump -i ppp0 -qvn net
tcpdump: listening on ppp0, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
19:43:13.716582 IP (tos 0x0, ttl 61, id 27388, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:13.716624 IP (tos 0x0, ttl 61, id 27388, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17
19:43:14.216162 IP (tos 0x0, ttl 61, id 27435, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:14.216200 IP (tos 0x0, ttl 61, id 27435, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17
19:43:15.217157 IP (tos 0x0, ttl 61, id 27666, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:15.217197 IP (tos 0x0, ttl 61, id 27666, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17

And, just as a sanity check, I see it out the physical Ethernet interface that the PPP daemon is using, too:

(discovery:19:43:PST)% sudo tcpdump -i eth1 -qvn
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:43:13.716605 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27388, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:13.716661 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27388, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17
19:43:14.216182 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27435, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:14.216208 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27435, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17
19:43:15.217178 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27666, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1632 > 1464
19:43:15.217204 PPPoE [ses 0x9e8a] IP (tos 0x0, ttl 61, id 27666, offset 1472, flags [none], proto UDP (17), length 188) > ip-proto-17

If you look at the packet lengths, the Linux LXC instance attached to my DSL line is re-fragmenting the packet (originally fragmented by my Asterisk server) into something that fits the 1492-byte MTU: it’s now 1492+188 instead of 1500+180. Yes, fragmentation is working as expected, even though it’s kinda horrid!
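The re-fragmentation arithmetic above can be sanity-checked with a small sketch. This models plain IPv4 fragmentation of a UDP datagram (8-byte UDP header, 20-byte IP header, fragment payloads aligned to 8 bytes); the function name is mine:

```python
def fragment_lengths(udp_data, mtu, ihl=20):
    """Return the IP total-length of each fragment of a UDP datagram
    carrying udp_data bytes, sent over a link with the given MTU."""
    remaining = udp_data + 8           # the 8-byte UDP header rides in the first fragment
    per_frag = (mtu - ihl) // 8 * 8    # payload per fragment, rounded down to a multiple of 8
    lengths = []
    while remaining > 0:
        chunk = min(per_frag, remaining)
        lengths.append(chunk + ihl)    # each fragment gets its own IP header
        remaining -= chunk
    return lengths

print(fragment_lengths(1632, 1500))  # 1500-MTU Xfinity path: [1500, 180]
print(fragment_lengths(1632, 1492))  # 1492-MTU PPPoE path:   [1492, 188]
```

Both results match the tcpdump output above, which is reassuring: the 1632-byte SDP payload splits as 1500+180 on a 1500-byte MTU and 1492+188 on PPPoE.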

So, I wondered how CenturyLink might be blocking the SIP traffic. Just blocking UDP/5060 would be an easy choice, so I tested this using netcat to a VM of mine, remembering to set both the source and destination ports to 5060 (discovery is the DSL LXC instance and dax is the remote VM):

(discovery:19:51:PST)% nc -u -p 5060 5060

And this is seen on dax:

(dax:22:51:EST)% sudo tcpdump -qvi vtnet0 -n port 5060
tcpdump: listening on vtnet0, link-type EN10MB (Ethernet), capture size 262144 bytes
22:51:26.612983 IP (tos 0x0, ttl 57, id 30455, offset 0, flags [DF], proto UDP (17), length 32) > UDP, length 4

Okie doke. This is not the way they’re blocking it. How about large UDP packets like the ones we saw with SIP, with some fragmentation? For this we’ll use hping3:

(discovery:19:55:PST)% sudo hping3 -2 -s 5060 -p 5060 -d 1600 -c 2
HPING (ppp0 udp mode set, 28 headers + 1600 data bytes

--- hping statistic ---
2 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

hping3 will automatically fragment if the specified data size results in a packet exceeding the interface MTU; 1,600 bytes does this for us.
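The size arithmetic behind the -d values used in these tests is simple enough to sketch (the helper name is mine; the "28 headers" in hping3’s banner is the 20-byte IP header plus the 8-byte UDP header):

```python
def hping_udp_size(data_len, ip_hdr=20, udp_hdr=8):
    """Total on-wire IPv4 packet size for hping3 -2 -d data_len."""
    return ip_hdr + udp_hdr + data_len

PPPOE_MTU = 1492  # ppp0's MTU
for d in (1600, 1464, 1463):
    size = hping_udp_size(d)
    print(f"-d {d}: {size} bytes ->",
          "fragments" if size > PPPOE_MTU else "fits unfragmented")
```

So -d 1464 lands exactly on the 1492-byte MTU and goes out unfragmented, -d 1463 produces a 1491-byte packet, and -d 1600 (1628 bytes) has to fragment.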

But, I’m only seeing the fragments on my VM:

(dax:23:01:EST)% sudo tcpdump -qvi vtnet0 -n host
tcpdump: listening on vtnet0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:01:39.219056 IP (tos 0x0, ttl 57, id 25, offset 1472, flags [none], proto UDP (17), length 156) > ip-proto-17
23:01:40.219297 IP (tos 0x0, ttl 57, id 25, offset 1472, flags [none], proto UDP (17), length 156) > ip-proto-17

Verifying that my LXC instance is actually sending the packets correctly, I tcpdump ppp0:

(discovery:20:01:PST)% sudo tcpdump -qv -i ppp0 -n host
tcpdump: listening on ppp0, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
20:01:39.051291 IP (tos 0x0, ttl 64, id 25, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1600 > 1464
20:01:39.051362 IP (tos 0x0, ttl 64, id 25, offset 1472, flags [none], proto UDP (17), length 156) > ip-proto-17
20:01:40.051589 IP (tos 0x0, ttl 64, id 25, offset 0, flags [+], proto UDP (17), length 1492) > UDP, bad length 1600 > 1464
20:01:40.051620 IP (tos 0x0, ttl 64, id 25, offset 1472, flags [none], proto UDP (17), length 156) > ip-proto-17

Yep, this is fine. So, it looks like CenturyLink is blocking huge packets. Just to round this out, I tried without a fragmented packet:

(discovery:20:04:PST)% sudo hping3 -2 -s 5060 -p 5060 -d 1464 -c 2
HPING (ppp0 udp mode set, 28 headers + 1464 data bytes

--- hping statistic ---
2 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

Nothing seen on the VM:

(dax:23:04:EST)% sudo tcpdump -qvi vtnet0 -n host
tcpdump: listening on vtnet0, link-type EN10MB (Ethernet), capture size 262144 bytes

Well, of course, I need to know: where are they blocking these packets, and how large a packet can get through? I started with one byte less than the full payload:

(discovery:20:06:PST)% sudo hping3 -2 -s 5060 -p 5060 -d 1463 -c 1
HPING (ppp0 udp mode set, 28 headers + 1463 data bytes

--- hping statistic ---
1 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms


(dax:23:06:EST)% sudo tcpdump -qvi vtnet0 -n host
tcpdump: listening on vtnet0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:06:32.247196 IP (tos 0x0, ttl 57, id 23545, offset 0, flags [none], proto UDP (17), length 1491) > UDP, length 1463

That packet made it through. So, now that that’s answered, where are they blocking it? We can use MTR for this. Here’s the same full-size UDP packet with both source and destination ports of 5060:

(discovery:20:25:PST)% mtr --report --report-cycles=5 --report-wide -s 1492 -L 5060 -P 5060 -u
Start: 2021-01-23T20:25:10-0800
HOST: discovery Loss% Snt Last Avg Best Wrst StDev

Well, normally the first hop would be the BRAS in Tukwila, WA. We see nothing here. Let’s lower it back to 1491:

(discovery:20:28:PST)% mtr --report --report-cycles=5 --report-wide -s 1491 -L 5060 -P 5060 -u
Start: 2021-01-23T20:28:34-0800
HOST: discovery Loss% Snt Last Avg Best Wrst StDev
1.|-- 0.0% 5 8.1 8.0 7.7 8.2 0.2
2.|-- 0.0% 5 32.4 24.2 8.3 43.9 14.0
3.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0
4.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0
5.|-- 0.0% 5 86.1 83.9 81.5 88.7 3.4
6.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0

My VM filters UDP/5060, but we do see the probes in tcpdump, and the traceroute does get through CenturyLink’s network. It looks like it’s the BRAS itself, or something between it and me. But wait: as I was messing around, I up-arrowed and saw 1492-byte packets being passed now, too:

(discovery:20:30:PST)% mtr --report --report-cycles=5 --report-wide -s 1492 -L 5060 -P 5060 -u
Start: 2021-01-23T20:30:07-0800
HOST: discovery Loss% Snt Last Avg Best Wrst StDev
1.|-- 0.0% 5 7.6 16.6 7.6 47.4 17.3
2.|-- 0.0% 5 7.9 11.6 7.8 26.2 8.2
3.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0
4.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0
5.|-- 0.0% 5 82.3 81.8 81.4 82.3 0.3
6.|-- ??? 100.0 5 0.0 0.0 0.0 0.0 0.0

What the heck? A minute ago it was being blocked and now it’s going through? Yep, I just saw those full packets on my VM:

23:30:13.795238 IP (tos 0x0, ttl 9, id 26455, offset 0, flags [none], proto UDP (17), length 1492) > UDP, length 1464
23:30:13.854422 IP (tos 0x0, ttl 10, id 26465, offset 0, flags [none], proto UDP (17), length 1492) > UDP, length 1464

Huh? Everything was consistent up until now. I left it for a few minutes and then tried it again, blocked again:

(discovery:20:32:PST)% mtr --report --report-cycles=5 --report-wide -s 1492 -L 5060 -P 5060 -u
Start: 2021-01-23T20:41:13-0800
HOST: discovery Loss% Snt Last Avg Best Wrst StDev

I went through all the other tests again and made sure they were consistent, and they seemed to be. However, the traceroute probes seemed to introduce some weird behavior. The only thing that differs between successive runs of MTR is the IP identification (ID) field. Linux uses a common counter for UDP, so if MTR is the only application using UDP the IDs will increment sequentially; any other UDP application running in parallel will cause the IDs to skip numbers. However, hping3 uses a random IP ID per packet by default, which is what my tests above used, so the theory that CenturyLink passes only certain IP IDs seems to be a dead end.

So, I’m pretty confused. The only thing that results in inconsistent behavior is traceroute, which starts with a TTL of 1 and then increments the TTL until it reaches the destination. Maybe whatever filter CenturyLink is using does something special with low TTLs? I don’t know.
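One way to poke at the low-TTL theory would be to send probes at a single hop distance at a time instead of letting traceroute walk up from 1. A sketch that only emits the command lines (the host is a placeholder; -f/-m set the first and max TTL in modern Linux traceroute, and -U/-p select UDP to port 5060):

```shell
HOST=""   # destination VM (placeholder)

# Emit one traceroute invocation per TTL; pinning -f and -m to the same value
# means each run sends probes at exactly one hop distance.
ttl_probe() { echo "traceroute -U -p 5060 -f $1 -m $1 $HOST"; }

for ttl in 1 2 3 4 5 6; do
  ttl_probe "$ttl"
done
```

If only the runs that include TTL-1 probes toggle the filter, that would point at something in the first hop or two reacting to expiring packets.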

Regardless, even though we see inconsistent behavior it’s pretty clear that my CenturyLink DSL connection is filtering SIP INVITEs and that’s not good.

I am indeed using a CenturyLink-branded Zyxel C3000Z modem in bridge mode so it’s possible there’s some packet filter embedded into the firmware even though the PPP connection is terminated on my own equipment. Or, it could be filtered at the BRAS or DSLAM.

I haven’t really seen much on the Internet to confirm my findings (maybe nobody cares about SIP on a DSL connection?) and I don’t know if this also happens on CenturyLink’s FTTH/GPON offerings. The same DSL modem I am using is also used for the GPON service, which also requires PPPoE, so maybe it acts the same there.

Do any readers have the same problem? Or, different symptoms?


I decided to try Signal today. The security and privacy features are probably its best selling point as well as it being FOSS. However, it’s got some rough edges that you should probably be aware of before deleting all your other messaging accounts.

First, here’s a little background. I’m kinda old, so I’ve used a fair amount of computer-mediated communication systems over the years. I started with instant messaging on AOL in the early 1990s and continued using the standalone service (AIM) for a while as well. Toward the late 1990s I had fun with ICQ. In the 2000s I picked up IRC and Lily, a chat service created by some RPI developers. I use IRC and Lily primarily in a terminal, which fits my usage patterns for other things. I ran my own XMPP (jabberd) server for a while and was able to talk to folks who used Google Talk, until Google decided to kill their XMPP S2S interface and re-invent the product as Google Hangouts. I also picked up Facebook Messenger because a few of my friends use it. I tried out Telegram in early 2020 but didn’t end up using it for anything. SMS is also there for correspondence with one or two folks, as well as all those insecure 2FA things and a whole lotta spam.

For work, I’ve used Microsoft OCS/Lync at two jobs as well as Amazon Chime and Slack.

I’ve never used WhatsApp or SnapChat.

Here’s my current state of my non-work messaging:

  • IRC – Mostly idling and periodic chats
  • Lily – Mostly idling and periodic chats
  • Facebook Messenger – Daily chats
  • Google Hangouts – Daily chats
  • SMS – Daily spam, some chat, and horrible 2FA

Google has made it clear that they are going to kill Hangouts or at least change it into something I won’t like, so I’ve decided I will use Hangouts until it no longer works. I won’t be using any future chat products by Google.

With the pending (although it now seems delayed?) WhatsApp ToS change and the increasing popularity of Signal, I figured I’d give it a go.

Account creation was easy, but it used my phone number as my identity. This rubbed me the wrong way because I’d like to think phone numbers are ephemeral; changing your phone number is currently not supported. I started with the Android app, created my PIN (4 digits, or optionally alphanumeric), and was off to the races.

The app found a dozen or two contacts from my address book that are already using Signal, including some good friends and family members. This was encouraging.

I was away from my Linux workstation and using my MacBook, so I installed the macOS app and activated it with a QR code read by my phone. Pretty easy so far. Later, I went to install the Linux app on Debian and ran into some issues.

While Signal’s site indicates that it provides binaries for 64-bit Debian-based distributions, I had to add an APT source referencing the xenial distribution, which suggests it’s really Ubuntu-centric. Advertising support for Debian-based distributions while actually centering on Ubuntu is a pet peeve of mine. Anyway, I use Debian testing, which is a rolling snapshot of the next stable distribution (bullseye, at the moment), and I ran into dependency problems when trying to install signal-desktop:

The following packages have unmet dependencies:
signal-desktop : Depends: libappindicator1 but it is not installable

Well, that’s nice. According to a bug report, this is due to libappindicator1 being deprecated. The workaround was to change my APT sources to sid and try the install again, which worked. I, of course, changed my sources back to testing afterward. I’m sure things will break again in the future.
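For anyone hitting the same wall, here’s a sketch of the dance, operating on scratch copies so it doesn’t touch a real system. The Signal repo line is what I recall from their install docs, so double-check it against their current instructions:

```shell
workdir=$(mktemp -d)

# Signal publishes its APT repo against Ubuntu xenial, even for Debian users:
printf 'deb [arch=amd64] xenial main\n' \
  > "$workdir/signal-xenial.list"

# The workaround: temporarily point sources at sid so libappindicator1
# resolves, install, then flip back to testing.
printf 'deb testing main\n' > "$workdir/sources.list"
sed -i 's/\btesting\b/sid/'  "$workdir/sources.list"
# (apt update && apt install signal-desktop would go here)
sed -i 's/\bsid\b/testing/'  "$workdir/sources.list"
```

On a real system you would of course edit /etc/apt/sources.list (or a file under sources.list.d) instead of the scratch copies.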

The Signal app for Linux looks like it’s nothing more than an Electron wrapper running Chromium under the hood:

(destiny:15:56:PST)% ps xf|grep signal-desktop|cut -b -$COLUMNS
1386976 ? SLl 1:43 | _ /opt/Signal/signal-desktop --no-sandbox
1386978 ? S 0:00 | _ /opt/Signal/signal-desktop --type=zygote --no-sandbox
1387002 ? Sl 0:15 | _ /opt/Signal/signal-desktop --type=gpu-process --field-t
1387008 ? Sl 0:00 | _ /opt/Signal/signal-desktop --type=utility --field-trial
1387023 ? Sl 9:21 | _ /opt/Signal/signal-desktop --type=renderer --no-sandbox
1479371 pts/22 S+ 0:00 _ grep --color signal-desktop


(destiny:15:58:PST)% ls -a1 /opt/Signal

Meh, I don’t really care, but I’m a little disappointed, especially because there is no actual web interface offered. It only takes up 332 MiB RSS, which is nice. It could be worse!

There’s a Pidgin plugin that I need to try. It looks like it has a hard dependency on signald to do anything, and I’m not familiar with signald at all. More things to do later, I suppose.

So, everything was mostly peachy, right?

It was, until I decided to fire up Signal on my second phone, an iPhone XS. Yes, I carry two phones because iOS offers some things that Android doesn’t, and vice versa. I can’t decide which platform is best for me, so I have selected both.

My iPhone has a different phone number, of course, so I plugged in my original phone number when starting up the iOS Signal app. Everything seemed to work fine until I realized that the Signal apps on my other devices and Android phone started kicking out API errors. After trying to figure out what was going on, I found the page indicating that more than one phone per account is not supported.

Huhwha? While this is not a deal-breaker, I also realized that Android tablets aren’t supported either. I don’t get it: why can’t Signal on my iPhone be activated the same way the macOS and Linux clients were? I have a feeling the answer is security-related, but I can’t actually figure it out.

Lastly, to finish this up, I was curious who hosts Signal. I tcpdump’ed some DNS requests over a fresh Wi-Fi connection from my Android phone and saw the following:

16:09:00.217072 IP > 7640+ AAAA? (55)
16:09:00.260375 IP > 7640 0/1/0 (140)
16:09:00.261707 IP > 18696+ A? (55)
16:09:00.389951 IP > 18696 2/0/0 A, A (87)
16:09:00.697416 IP > 21129+ AAAA? (36)
16:09:00.843944 IP > 21129 4/0/0 AAAA 2001:4860:4802:34::15, AAAA 2001:4860:4802:38::15, AAAA 2001:4860:4802:32::15, AAAA 2001:4860:4802:36::15 (148)
16:09:00.845337 IP > 41372+ A? (36)
16:09:00.882592 IP > 41372 4/0/0 A, A, A, A (100)

This mostly matches up with what’s detailed here regarding firewall settings for Signal. It looks like * and * are the main domain names. Right now, is not dual-stacked, which means it won’t work in an IPv6-only environment. Also, looking up the addresses that the two names resolve to gives me:

(destiny:16:13:PST)% for i in 2001:4860:4802:34::15 2001:4860:4802:38::15 2001:4860:4802:32::15 2001:4860:4802:36::15; do ipin ${i}; done
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS16509 [AMAZON-02, US]
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS16509 [AMAZON-02, US]
6 Address: 2001:4860:4802:34::15
6 PTR:
6 Prefix: 2001:4860::/32
6 Origin: AS15169 [GOOGLE, US]
6 Address: 2001:4860:4802:38::15
6 PTR:
6 Prefix: 2001:4860::/32
6 Origin: AS15169 [GOOGLE, US]
6 Address: 2001:4860:4802:32::15
6 PTR:
6 Prefix: 2001:4860::/32
6 Origin: AS15169 [GOOGLE, US]
6 Address: 2001:4860:4802:36::15
6 PTR:
6 Prefix: 2001:4860::/32
6 Origin: AS15169 [GOOGLE, US]
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS15169 [GOOGLE, US]
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS15169 [GOOGLE, US]
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS15169 [GOOGLE, US]
4 Address:
4 PTR:
4 Prefix:
4 Origin: AS15169 [GOOGLE, US]

That’s some nice big tech right there! Ah well, at least Signal is end-to-end encrypted so I don’t have to care who or what is in the middle.
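Since I’ve elided the actual hostnames above, here’s a generic little helper for checking whether any given name is dual-stacked. It uses getent rather than dig so it works without extra packages, and localhost stands in for the real service hostname:

```shell
# Report whether a hostname resolves over both IPv4 and IPv6.
dual_stacked() {
  local host="$1" v4 v6
  v4=$(getent ahostsv4 "$host" 2>/dev/null | head -n 1)
  v6=$(getent ahostsv6 "$host" 2>/dev/null | head -n 1)
  if [ -n "$v4" ] && [ -n "$v6" ]; then
    echo "$host: dual-stacked"
  else
    echo "$host: not dual-stacked"
  fi
}

dual_stacked localhost   # stand-in for the real service hostname
```

Handy for spotting which of a service’s endpoints would break in an IPv6-only environment.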

I’ll keep an eye out for multi-phone support as well as IPv6 server support and the ability to change phone numbers in the future. For now, Signal seems like a clear privacy-centric alternative to other things like WhatsApp, FB Messenger, and Google Hangouts (and whatever will replace it).

macOS Big Sur: Ugh

I mentioned that I did the Big Sur upgrade in a previous blog entry and that it was working out alright. However, I’m going to have to change my mind on this one. I don’t think macOS is an operating system that meets my needs anymore. The amount of hackery needed to get certain components of the OS to work the way I desire is adding up and probably compromising the security of the OS in the process, not to mention just annoying the heck out of me.

Sleep Annoyances

On my 2017 MacBook there is no option to prevent the computer from sleeping while on battery power.

I don’t see any “computer sleep” slider here, do you?

This results in active SSH connections dying some time after the display shuts off. When I say active, I mean there is activity on the terminal, for example htop running. The odd thing is that it’s not a complete sleep: SmokePing indicates that my machine will still answer pings over Wi-Fi (albeit with heightened latency due to Wi-Fi power saving that cannot be disabled; yep, another thing!) but TCP communications are cut off, and I cannot SSH into the machine, either. I have to run sudo pmset sleep 0 after every reboot to get around this stupidity, and ultimately that needs to go into some sort of login script since cron is gone.
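One way to make that pmset call survive reboots is a tiny LaunchDaemon. This is only a sketch; the label and filename are my own invention, so adjust to taste:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "">
<plist version="1.0">
<dict>
  <!-- hypothetical: save as /Library/LaunchDaemons/local.nosleep.plist -->
  <key>Label</key>
  <string>local.nosleep</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/pmset</string>
    <string>sleep</string>
    <string>0</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Load it once with launchctl (or reboot) and the setting should be reapplied at every boot without any manual intervention.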

DNS Issues and Launch Daemons

I have to hack up the mDNSResponder (DNS resolver) property list file to make macOS resolve short names with a dot in them. I expect a DNS query for foo.vpn to have my DNS search prefix appended, but macOS doesn’t do that by default (mostly because it’s not appropriate for the general public, but it’s important for me and my particular network setup). Again, there’s no GUI option for this, so in the past I’ve modified the mDNSResponder property list in /System/Library/LaunchDaemons, added -AlwaysAppendSearchDomains to the ProgramArguments key, and re-enabled it through launchctl. Starting in Catalina, though, /System is non-writable without some major hacking: disabling SIP and SSV, the latter of which will break your OS and cause boot loops if you do it wrong. The only middle ground here is to disable SIP and then unload and load a copy of the property list file out of /Library/LaunchDaemons. Unfortunately, none of these launchd changes persist between reboots (I don’t know why; probably some additional security layer working against me), so I have to redo this at every boot or put it in some sort of login script.

IPv6 Issues

I also want to disable RFC 4941 (IPv6 privacy extensions) because I like the original EUI-64 behavior and I think that privacy extensions are privacy theater. Naturally, macOS doesn’t let me change this behavior (same with iOS and iPadOS) so I have to create my own /etc/sysctl.conf and add stuff to it:
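For reference, the knob that disables the temporary addresses is, as best I recall (verify the name and current value with `sysctl -a | grep tempaddr` on your release first):

```shell
# /etc/sysctl.conf -- 0 disables RFC 4941 temporary (privacy) addresses,
# leaving the stable EUI-64 address as the only one used
net.inet6.ip6.use_tempaddr=0
```
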


Just as with the above, these changes will need to be reapplied after any incremental upgrade, but at least macOS will throw the modified file onto your desktop after the upgrade to indicate it doesn’t like you changing things.


Nah, not gonna talk about this. Search the web for both good information and misinformation on this one and make up your own mind on it.

In general, macOS is just doing the opposite of what I want, and it’s getting worse with every release. Maybe the problem here is that I want macOS to be Linux, or at least something resembling a Unix-like operating system, and it just isn’t anymore. It’s being bogged down by heavy-handed mitigations that protect users from their own naivete, which has the effect of restricting power users from doing what they want. Maybe this is due to the departure of Jordan Hubbard, or just an effect of macOS completing its move, over the last two decades, from a niche operating system used by schools and graphic designers to an operating system for the masses.

I’m Still Alive

I haven’t written a blog entry since April of 2020. It’s not that I haven’t had much to say about all the horrible things 2020 has brought; I just haven’t thought the Internet needed more commentary on things that have already been commented on to death. I’m still alive and really don’t have anything interesting to report, but this blog needed activity, so here’s some randomness.

First, because it’s a substring of the title of this blog entry, Still Alive is an epic and moving piece of EDM by Ashley Wallbridge & Evan Henzi. He wrote it after narrowly winning a battle with meningitis. It, along with You’ll Be OK and Elise by Gareth Emery, got quite a bit of runtime this year on pretty much all systems I own that are capable of playing digital audio.

Since some of y’all know I love vintage Unix workstations and servers, I picked up an HP C8000 on eBay halfway through the summer. It was new in box and shipped from Germany. Due to problems with TNT (now owned by FedEx) it took a couple of weeks to arrive. The C8000 is a PA-RISC system and a model in the HP 9000 line of servers and workstations, which was discontinued in 2008. The machine has 2x PA-8900 CPUs @ 1 GHz, 8 GiB of RAM, and 2x Ultra320 SCSI drives. It was a top-of-the-line system in the early 2000s and still feels pretty quick & responsive (at least on the CLI and in CDE).

I spent about two weeks getting HP-UX 11.11 (unfortunately, the last version with support for the C8000) installed and configured to my liking. The machine initially did not boot, and after messing with it a bit I decided to do a fresh install of HP-UX. I have a feeling there was something in the BCH that was wiped out when the CMOS battery died a few years ago. Anyway, after the installation I used LVM to split /var, /home, /usr, etc. across the two SCSI drives, since I didn’t care about redundancy. The HP-UX Porting and Archive Center provided enough familiar FOSS packages for me to feel at home. The only thing I wasn’t able to get working was IPv6 (HP-UX_11i_v1_IPv6NCF11i_B.11.11.0705_HP-UX_B.11.11_32+64.depot) because I couldn’t get the patch installed due to dependency hell. I learned quite a bit about how swinstall works and a little about the hardware in the process. Unfortunately, the C8000 pumped out far too much heat for my home office, so during a warm week without A/C in September I decided to shut it down. I’ll mess with it again once I move some things around in the house.

I am NOT caught up on Star Trek: Discovery. I lost interest toward the end of season 2 but, as with many other TV series, I figured I probably needed to watch it twice to appreciate it. So, I vowed that before season 3 came out I would re-watch seasons 1 and 2. Well, that didn’t happen. I’m on episode 6 of season 2 right now, so I’ve got a few episodes to go before I can start season 3. That being said, I think I need to tell Google News and other sources that I’m temporarily not interested in anything Star Trek, because one of these days I’m going to accidentally read a real spoiler.

I have been watching Lower Decks. It’s entertaining, so far. I think I’m a couple episodes behind on that, too.

My wife got me a Bartesian for my birthday this past August! The machine itself isn’t too fancy and does what it’s supposed to do: make mixed drinks. Similar to a Keurig, it can be a little bit messy, though that might only be a problem for me, due to OCD. Depending on use, it’s a drain on the wallet, between the spirits required and the capsules. I stocked up, though:

Speaking for a second more about R3COH, my wife (wait, there’s a theme here?) got me a Christmas present last year in the form of a variety of mini-bottles leading up to Christmas Day. Most of the spirits I’d encountered before, except for a few, which included Green House Gin. Western Washington has more than a few distilleries producing gin, and I’ve tried many of them (Big Gin is usually my go-to). However, Green House is probably the best gin I’ve had: it’s got a ton of botanicals and the juniper flavor is just the right strength. I highly recommend trying it if you’re into such things.

In other news, I upgraded my 2017 Retina MacBook from Mojave to Big Sur (11.0.2) a few days after it came out. I usually wait until the dust settles but I felt like going for it early this year. I always do a wipe and fresh install because I don’t want to deal with any leftover “garbage” or badly-migrated settings from the previous release. Anyway, most things work and I don’t miss any 32-bit applications. I think the only real applications I run on macOS nowadays are:

  • Microsoft Office 2019
  • VLC
  • Chromium
  • Kindle
  • Intel Power Gadget
  • Xcode (to support MacPorts builds only)
  • MacPorts
  • TunnelBlick

I usually make some minor hacks to macOS to get things to work the way I like, which include things like turning off some animations and making the DNS resolver always append the search domains:

% diff -ur /System/Library/LaunchDaemons/ /Library/LaunchDaemons/  
--- /System/Library/LaunchDaemons/	2020-01-01 00:00:00.000000000 -0800
+++ /Library/LaunchDaemons/	2020-11-16 18:46:08.000000000 -0800
@@ -13,6 +13,7 @@
+		<string>-AlwaysAppendSearchDomains</string>

Most of these hacks still work in Big Sur, which is nice. That being said, net-snmp does not compile correctly due to Xcode adhering to C99 standards, so I submitted a bug report to MacPorts. I use net-snmp with my mrtg-rmt script, which allows me to report some system health information. At least iStats still mostly works:

% istats 
--- CPU Stats ---
CPU temp:               25.63°C     ▁▂▃▅▆▇

--- Fan Stats ---
Total fans in system:   0           

--- Battery Stats ---
Battery health:         unknown     
Cycle count:            107         ▁▂▃▅▆▇  10.7%
Max cycles:             1000        
Current charge:         4851 mAh    ▁▂▃▅▆▇  100%
Maximum charge:         4959 mAh    ▁▂▃▅▆▇  89.4%
Design capacity:        5550 mAh    
Battery temp:           23.8°C      

For more stats run `istats extra` and follow the instructions.

Also, I didn’t see Blink Lite in the App Store anymore, which was my SIP client on macOS. That’s not much of a loss since I probably only used it once or twice a year, max.

Local stuff. Let’s see: the West Seattle Bridge has been closed since March, and only this month did Mayor Durkan decide to pursue the repair option. The expectation is that traffic will return to the bridge in early-to-mid 2022. Also, I bought for no reason. No, I didn’t bother with an SSL certificate because I’m not really using the site for anything at the moment. Well, that’s not true; it’s a bookmark for the official SDOT page, which has a much longer URL.

No blog entry in 2020 would be complete without some mention of COVID-19! My wife, a few co-workers, and I all believe we got a bout of COVID-19 in January. I have asthma and had what might be called a minor-to-moderate attack, which was very unusual. I, along with the others, got over it just fine, though. My wife works at Harborview Medical Center and goes into work every day, but I’ve been working from home since late March with an expected return sometime during the summer of 2021. For many years I was diametrically opposed to WFH, and I still believe the lack of casual human interaction takes a toll on productivity and innovation, but this year has taught me that it’s not as bad as I thought. The team I’m on and I are still wildly productive. We have a virtual lunch once a week where we catch up on things that might otherwise happen in a hallway conversation. It feels forced, but it works out alright. I still visit the office once or twice a month.

Speaking of, since I’m home a lot more lately I’ve taken up walking all over West Seattle. I usually just walk High Point, Gatewood, and Fairmount Park but once a week I’ll make it down to Lowman Beach and periodically will walk Highland Park. The beach is nice:

We put up the Christmas tree a few days after Thanksgiving this year. I didn’t set up the train, though, due to apathy:

I also didn’t do a time lapse as I’ve done in years past.

That’s it for now. Actually, a number of items above are now out-of-date because I left this blog entry in draft mode, untouched, for about two weeks. Sorry about that!

Have a fun and safe holiday season!