T-Mobile Home Internet, Part 2

A couple of weeks ago I wrote a blog post about T-Mobile Home Internet, and I felt like I needed to post a quick follow-up.

Overall, the service works great. I have the “backup Internet” plan that includes 130GiB of data, which I’m probably never going to get close to using. I ended up getting this set up at the right time: we took a trip to Finland for a week, and while we were gone our area had a power outage that nuked the UPS connected to my Verizon Fios connection & Linux router (it went into overload and never recovered). So, for 2.5 days, half of my home (the network became unfortunately bifurcated) was able to stay online through the T-Mobile connection, enough for me to peek at some of the security cameras and SSH in to check on a few things.

The latency seems to spike a bit during the early evening hours every day (mostly weekdays), sometimes resulting in a bunch of packet loss. I’m guessing this is due to folks coming home from work and using the network before their phones connect to Wi-Fi, though maybe there’s another explanation. However, it’s all still much lower than the Verizon Wireless connection, and it didn’t seem to impact speed (for what it’s worth, the VZW connection never showed any obvious time-of-day latency variation).

As has been mentioned in other forums, the MTU is lower on T-Mobile compared to other carriers like VZW. Actually, I’m pretty sure it’s lower than 1500 bytes on VZW as well, but they might employ link-level fragmentation.

The problem with the MTU on T-Mobile is that PMTUD reports the wrong value (shocker, PMTUD broken, I know). Here’s a tracepath:

(sprint:10:16:EST)% tracepath -p 33434 -4 gravity
1?: [LOCALHOST] pmtu 1500
1: 192.168.12.1 0.871ms
1: 192.168.12.1 0.779ms
2: 192.0.0.1 1.239ms
3: 192.0.0.1 1.491ms pmtu 1480
3: no reply
4: no reply
5: no reply
6: no reply
7: no reply
[...snip...]

And tcpdump:

(sprint:10:15:EST)% sudo tcpdump -v -i enp5s6 -n "((portrange 33434-33534 and proto 17) and host 18.100.109.137) or proto 1"
tcpdump: listening on enp5s6, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:16:02.471423 IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33434: UDP, length 1472
10:16:02.472160 IP (tos 0xc0, ttl 64, id 9523, offset 0, flags [none], proto ICMP (1), length 576)
192.168.12.1 > 192.168.12.102: ICMP time exceeded in-transit, length 556
IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33434: UDP, length 1472
10:16:02.473337 IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33435: UDP, length 1472
10:16:02.473962 IP (tos 0xc0, ttl 64, id 9524, offset 0, flags [none], proto ICMP (1), length 576)
192.168.12.1 > 192.168.12.102: ICMP time exceeded in-transit, length 556
IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33435: UDP, length 1472
10:16:02.475246 IP (tos 0x0, ttl 2, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33436: UDP, length 1472
10:16:02.476354 IP (tos 0x0, ttl 63, id 29150, offset 0, flags [DF], proto ICMP (1), length 1240)
192.0.0.1 > 192.168.12.102: ICMP time exceeded in-transit, length 1220
IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33436: UDP, length 1472
10:16:02.477448 IP (tos 0x0, ttl 3, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33437: UDP, length 1472
10:16:02.478408 IP (tos 0x0, ttl 63, id 29406, offset 0, flags [DF], proto ICMP (1), length 1240)
192.0.0.1 > 192.168.12.102: ICMP 18.100.109.137 unreachable - need to frag (mtu 1480), length 1220
IP (tos 0x0, ttl 2, id 0, offset 0, flags [DF], proto UDP (17), length 1500)
192.168.12.102.37692 > 18.100.109.137.33437: UDP, length 1472
10:16:02.479946 IP (tos 0x0, ttl 3, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33438: UDP, length 1452
10:16:03.481089 IP (tos 0x0, ttl 3, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33439: UDP, length 1452
10:16:04.481646 IP (tos 0x0, ttl 3, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33440: UDP, length 1452
10:16:05.482838 IP (tos 0x0, ttl 4, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33441: UDP, length 1452
10:16:06.483986 IP (tos 0x0, ttl 4, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33442: UDP, length 1452
10:16:07.485137 IP (tos 0x0, ttl 4, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33443: UDP, length 1452
10:16:08.485699 IP (tos 0x0, ttl 5, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33444: UDP, length 1452
10:16:09.486857 IP (tos 0x0, ttl 5, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33445: UDP, length 1452
10:16:10.488001 IP (tos 0x0, ttl 5, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33446: UDP, length 1452
10:16:11.489199 IP (tos 0x0, ttl 6, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33447: UDP, length 1452
10:16:12.489641 IP (tos 0x0, ttl 6, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33448: UDP, length 1452
10:16:13.490787 IP (tos 0x0, ttl 6, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33449: UDP, length 1452
10:16:14.491978 IP (tos 0x0, ttl 7, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33450: UDP, length 1452
10:16:15.493124 IP (tos 0x0, ttl 7, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33451: UDP, length 1452
10:16:16.493642 IP (tos 0x0, ttl 7, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33452: UDP, length 1452
10:16:17.494836 IP (tos 0x0, ttl 8, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33453: UDP, length 1452
10:16:18.495978 IP (tos 0x0, ttl 8, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33454: UDP, length 1452
10:16:19.497118 IP (tos 0x0, ttl 8, id 0, offset 0, flags [DF], proto UDP (17), length 1480)
192.168.12.102.37692 > 18.100.109.137.33455: UDP, length 1452
[...snip...]

The 192.0.0.1 hop (internal to the router itself, likely CLAT+464XLAT) advertised a path MTU of 1480, but this is clearly wrong. The rest of the traceroute times out for every subsequent hop because tracepath believes 1480 is correct (you’ll see it lowers the UDP payload to 1452) and the probes never make it past the T-Mobile router to generate a TTL exceeded response.

I did some playing and found that on my connection the MTU is actually 1436:

(sprint:10:21:EST)% ping4 -c 4 -M do -s 1409 gravity
PING gravity (18.100.109.137) 1409(1437) bytes of data.

--- gravity ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3067ms

(sprint:10:21:EST)% ping4 -c 4 -M do -s 1408 gravity
PING gravity (18.100.109.137) 1408(1436) bytes of data.
1416 bytes from ec2-18-100-109-137.eu-south-2.compute.amazonaws.com (18.100.109.137): icmp_seq=1 ttl=43 time=125 ms
1416 bytes from ec2-18-100-109-137.eu-south-2.compute.amazonaws.com (18.100.109.137): icmp_seq=2 ttl=43 time=123 ms
1416 bytes from ec2-18-100-109-137.eu-south-2.compute.amazonaws.com (18.100.109.137): icmp_seq=3 ttl=43 time=122 ms
1416 bytes from ec2-18-100-109-137.eu-south-2.compute.amazonaws.com (18.100.109.137): icmp_seq=4 ttl=43 time=145 ms

--- gravity ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 121.856/128.792/144.883/9.373 ms
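For reference, the arithmetic behind that number: ping’s -s flag sets the ICMP payload size, so the largest working payload of 1408 bytes plus the 8-byte ICMP header and 20-byte IPv4 header gives the actual path MTU, and subtracting the 40 bytes of IPv4 + TCP headers gives the MSS you’d clamp TCP to. A quick sketch:

```shell
payload=1408                 # largest ICMP payload that survives with DF set
mtu=$((payload + 8 + 20))    # + ICMP header + IPv4 header
mss=$((mtu - 40))            # - (IPv4 + TCP headers) for MSS clamping
echo "path MTU: $mtu, TCP MSS: $mss"
# path MTU: 1436, TCP MSS: 1396
```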

I don’t think most folks will notice this because modern stacks assume PMTUD is broken: QUIC (which HTTP/3 runs over, both UDP-based) implements its own path MTU discovery above the kernel, and many CDNs use a lower MTU by default to work around brokenness like this.

However, I’m surprised T-Mobile has this problem in 2026.

T-Mobile Home Internet

I’d been waiting about 2 years for T-Mobile Home Internet to become available in my area (I suppose I was technically waiting on cell tower slots), and recently it did, so I got the “backup Internet” plan for $20/mo (no equipment fee)!

I have Verizon Fios at a deep discount (the technology assessment is $94.78 for gigabit + Fios TV) because my home is part of the Brambleton HoA, so why would I need T-Mobile Home Internet? I like backups. I’ve been using a Verizon Wireless (hello, SPOF) MiFi with a 1GiB legacy data plan for $15/mo for the past few years and ended up paying around $20-25 on average because BGP + SmokePing sometimes chew up more than 1GiB per month! I have the backup connection in a Linux container running a WireGuard tunnel for IPv6 (for my PI allocation) and iptables NAT for IPv4. The failover is seamless for IPv6, but all connections reset for IPv4.
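For the curious, the IPv4 side of that backup path is nothing fancy. A minimal sketch (the interface name and metric are assumptions for illustration, not my actual config):

```shell
# NAT IPv4 out the backup cellular uplink (interface name is hypothetical)
iptables -t nat -A POSTROUTING -o wwan0 -j MASQUERADE

# on failover, swap the default route over to the backup path; because the
# public IPv4 address changes with the NAT, existing connections reset
ip route replace default dev wwan0 metric 100
```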

The T-Mobile backup Internet plan for $20/mo comes with 130GiB of 5G data per month and “3G speeds” thereafter. That’s just fine: my Fios hardly ever goes out, but this gives me peace of mind. The cellular router itself has 2x GigE ports and Wi-Fi and is built by a company called Arcadyan (model code G5AR). It’s got a little display that shows signal strength and a QR code to download the T-Mobile app, which seems utterly useless:

Arcadyan G5AR
It’s got the standard ports but also external antenna connectors!

It’s powered by USB-C (I just used the included power adapter), has a USB-C data port (CDC ACM, not Ethernet over USB), and has 2x LAN ports and 4x external antenna connectors. Luckily my townhome is 4 floors and I have this thing on the top floor, so I have no signal strength issues. RSSI is at -72 dBm and it’s using band n41, which is 2,496 to 2,690 MHz.

So, how does it perform? Pretty much the same as my cellphone on T-Mobile. Here’s a wget and MTR:

There is some jitter, yes.
Around 320 Mbps, not bad.

Now, on a more interesting note: SmokePing was nice & clean for a while, but this evening it got a little bumpy:

SmokePing

What happened around 1500 EST? It doesn’t seem to affect the speed, but there is some latency and, at times, severe loss. You can still see the stark difference from the VZW MiFi (maybe power-saving issues?) yesterday, though. I’ll see how this goes.

Anyway, I’m happy with the setup. I find it funny that they still have to assign a phone number to these devices even though they’ll never be used for voice or SMS.

Something About 2025

This isn’t a year in review article; it’s more of an article I’m writing to encourage myself to blog more often than every 6 months. However, I might as well share a few things about 2025. For the critical reader: I didn’t write this article with any AI assistance, but there are em-dashes because that’s how I write!

I finally visited Sydney, Australia for a week during springtime! I mostly stayed in the city except for a short ferry ride to Manly Beach for a quick walk. It’s a beautiful and clean city. I’d visit again (and may indeed have to for the day job, especially since I was overdue for this trip by a couple years anyway).

Overall, I didn’t travel as much as I did in 2024, but I still managed to get 1K status on United Airlines for a second year in a row. The card spend helped add quite a few PQPs. It looks like this will mostly be my final summary for 2025:

The requirement for 1K in 2024 was 24,000 PQPs and it was raised to 28,000 for 2025, so I just made it. I haven’t heard what the number will be for 2026 yet. I don’t really use most of the 1K (or card) benefits, but preboarding is nice. PlusPoints are good for upgrades, although they’re not guaranteed to go through unless you use the “Skip Waitlist” option, but then you run the risk of losing those PlusPoints with an itinerary change. This did happen to me once, and United’s policy is that they are non-refundable in this situation, but oddly enough I did get a refund about a month later (the Internets say there could be a couple of reasons for this, one of them being a goodwill gesture from United—I think I just got lucky!).

I did travel to Dublin four times (3x with United, and 1x with Aer Lingus, since United doesn’t fly IAD to DUB in January), which was a little unusual.

Summary of personal travel:

  • SFO to Napa, CA, USA
  • LGA to New York, NY, USA
  • EWR to North Brunswick, NJ, USA (2x)
  • CLT to York, SC, USA
  • TPA to Sarasota, FL, USA
  • LAX to Manhattan Beach, CA, USA

Summary of business-related travel:

  • SEA to Seattle, WA, USA (2x)
  • SYD to Sydney, Australia
  • DUB to Dublin, Ireland (4x)

While I flew into SFO for the Napa trip, this was the first year in a while that I didn’t visit the Bay Area for business.

After Henry passed away late in 2024 I wanted to stick with one dog (Wesley) for at least a year but my wife had different plans. We got Percy in July. And, yes, I have too many dog domains now.

As a result, over the last 5 months my body has mostly acclimated to many fewer hours of sleep. There are also still some accidents.

The day job is still going but there have been lots of changes in the 2nd half of 2025 and there will be more in 2026. I won’t comment further.

As far as personal technology goes, I still host most of my own stuff and document it poorly (and only partially). AS395460 is still running strong, but I haven’t added any additional connections or peers this year. I’m on a 2-year cycle for phones and waited until iOS 26 had progressed to being only half-broken (hey Apple, please stop calling whimsical UI changes “upgrades”) before getting the iPhone 17 Pro, which I just set up over this weekend. I’ll upgrade my Galaxy S24 to the S26 in 2026. Since I don’t bother with my dSLR camera anymore, my wife also got me two Moment T-series lenses (the Tele 58mm and Wide 18mm), shown below on the 15 Pro (I need a new case for the 17 Pro):

At 116.5g, it adds some weight to the phone. I was worried that the Moment cases would be annoying in some way, but they actually seem pretty nice. I don’t think I’ll mind using one exclusively so I can pop the lens on & off (it’s an easy 90° rotation to attach & detach). The results are what you’d expect: a little bit of a zoom assist.

iPhone 15 Pro telephoto lens:

iPhone 15 Pro telephoto lens with the T-series 58mm Moment lens attached:

It’s not bad, overall. It’s obviously nothing compared to a dSLR (I have the Canon 400mm L lens, which is super nice), but it’s just annoying to carry those around nowadays, especially since I’m less into photography than I used to be (my old Flickr page still exists, though!).

That’s it for now, I guess. Happy New Year!

Useless Linux Kernel Error Messages

I run lots of Linux-based software routers on my home network to route IPv4 and IPv6. Periodically, they freak out with some IPv6-related errors that seem to indicate a problem but there is no corresponding forwarding impact. Here are two of them:

(trill:18:56:EDT)% dmesg|tail
[40878917.324479] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878920.039800] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878920.875706] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878921.920218] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878924.426656] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878925.471213] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878926.515702] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878929.022420] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878930.066760] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.
[40878931.111439] ICMPv6: Received fragmented ndisc packet. Carefully consider disabling suppress_frag_ndisc.

For a week or so, I started getting this (code link) in bursts every few hours, several messages a few seconds apart, on a router that had 400+ days of uptime. From the sysctl documentation, suppress_frag_ndisc says:

suppress_frag_ndisc - INTEGER
Control RFC 6980 (Security Implications of IPv6 Fragmentation
with IPv6 Neighbor Discovery) behavior:
1 - (default) discard fragmented neighbor discovery packets
0 - allow fragmented neighbor discovery packets

I really shouldn’t have any fragmented ND packets on my network. Everything is 1500 MTU on this specific router, but since it’s the first hop for my general-purpose Wi-Fi network, maybe there is a misbehaving device? I would love to debug further, but the printk does not include the MAC address or link-local source. So, my only option is to tcpdump all ND traffic until it happens again, maybe?
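If I do go the capture route, fragmented IPv6 is at least easy to match: a fragmented packet carries a fragment extension header, i.e. Next Header 44 in the fixed IPv6 header. Something like this (the interface name is whatever your first hop uses) with -e would record the offending device’s MAC:

```shell
# capture all fragmented IPv6 on the Wi-Fi-facing bridge; ip6[6] is the
# Next Header byte, 44 = fragment header; -e prints the Ethernet header
# so the misbehaving device's source MAC shows up in the output
tcpdump -e -n -i br0 'ip6[6] == 44'
```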

And now there’s this one:

(starfire:18:53:EDT)% dmesg|tail
[26897673.366452] neighbour: ndisc_cache: neighbor table overflow!
[26897673.366461] neighbour: ndisc_cache: neighbor table overflow!
[26897673.366475] neighbour: ndisc_cache: neighbor table overflow!
[26897673.366828] neighbour: ndisc_cache: neighbor table overflow!
[26897673.366839] neighbour: ndisc_cache: neighbor table overflow!
[26897673.366850] neighbour: ndisc_cache: neighbor table overflow!
[26897674.390436] neighbour: ndisc_cache: neighbor table overflow!
[26897674.390448] neighbour: ndisc_cache: neighbor table overflow!
[26897674.390460] neighbour: ndisc_cache: neighbor table overflow!
[26897674.390831] neighbour: ndisc_cache: neighbor table overflow!

I am getting this on a core router that is the first hop for a few segments and carries transit traffic as well. I’m pretty sure nothing is overflowing:

(starfire:19:02:EDT)% ip -6 nei|wc -l
43
(starfire:19:03:EDT)% ip -4 nei|wc -l
41

This also happens after a few hundred days of uptime. I would love to debug this, but the log message doesn’t tell me what the limit is or what my current usage is. It also doesn’t tell me the last entry that was added and (presumably?) dropped. So, I can’t really debug this at all. There seems to be no forwarding impact, so I guess I’ll ignore it; searching the web indicates this either hard-locks the CPU (not for me) or is a bug.
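If the message is to be believed, the relevant limits are the neighbor garbage-collection thresholds: the overflow message fires when a new entry would push the table past gc_thresh3. Comparing those sysctls against the live table is at least a sanity check (the values in the comments are the usual kernel defaults, not necessarily yours):

```shell
# current IPv6 neighbor table thresholds; overflow triggers above gc_thresh3
sysctl net.ipv6.neigh.default.gc_thresh1   # e.g. 128
sysctl net.ipv6.neigh.default.gc_thresh2   # e.g. 512
sysctl net.ipv6.neigh.default.gc_thresh3   # e.g. 1024
# compare against what's actually in the table
ip -6 nei | wc -l
```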

It would be really nice if printk messages would provide a little more information here.

ip monitor.. broke?

In both $dayjob and my personal networks, I use iproute2 extensively as it’s the main tool to interface with Linux’s Netlink socket API from the command line. In a pinch, to see live route & IP neighbor (ARP) or IPv6 neighbor (NDP) changes, I instinctively run ip mon and sit back expecting to see a bunch of messages scroll on the screen.

Prior to the latest update, it’d result in something like this for a lightly busy first hop router with constant ARP and NDP updates:

(trill:20:47:EDT)% ip -ts mon
[2025-04-27T20:47:30.873841] 10.3.6.224 dev br0 lladdr c2:b7:a7:94:d8:19 PROBE
[2025-04-27T20:47:30.876208] 10.3.6.224 dev br0 lladdr c2:b7:a7:94:d8:19 REACHABLE
[2025-04-27T20:47:31.113926] 10.3.6.108 dev br0 lladdr 70:61:be:36:58:ed REACHABLE
[2025-04-27T20:47:31.129783] 10.3.6.116 dev br0 lladdr 80:6a:10:18:05:cd PROBE
[2025-04-27T20:47:31.129850] 2620:6:2003:106:2ecf:67ff:fe19:16b4 dev br0 lladdr 2c:cf:67:19:16:b4 PROBE
[2025-04-27T20:47:31.130962] 2620:6:2003:106:2ecf:67ff:fe19:16b4 dev br0 lladdr 2c:cf:67:19:16:b4 REACHABLE
[2025-04-27T20:47:31.135276] 10.3.6.116 dev br0 lladdr 80:6a:10:18:05:cd REACHABLE
[2025-04-27T20:47:31.229838] 10.3.6.115 dev br0 FAILED
[2025-04-27T20:47:31.545864] 10.3.6.138 dev br0 FAILED
[2025-04-27T20:47:31.870086] 2620:6:2003:106:ba27:ebff:fe7c:788b dev br0 FAILED
[2025-04-27T20:47:31.870442] 2620:6:2003:106:eee:99ff:fe22:4b21 dev br0 FAILED
[2025-04-27T20:47:31.870839] 2620:6:2003:106:d272:dcff:febf:261c dev br0 FAILED
[2025-04-27T20:47:31.871257] 2620:6:2003:106:dea9:4ff:fe8b:dd95 dev br0 FAILED
[2025-04-27T20:47:31.871685] 2620:6:2003:106:662:73ff:fe66:9b4 dev br0 FAILED
[2025-04-27T20:47:31.901816] 2620:6:2003:106:52a6:d8ff:feb6:17f7 dev br0 lladdr 50:a6:d8:b6:17:f7 STALE
[2025-04-27T20:47:31.902021] 10.3.6.150 dev br0 lladdr 96:be:a5:e3:4e:c4 STALE
[2025-04-27T20:47:31.902171] 10.3.6.145 dev br0 lladdr c8:ff:77:63:bf:dd STALE
[2025-04-27T20:47:31.902342] 10.3.6.136 dev br0 lladdr 14:91:38:79:c9:c8 STALE
[2025-04-27T20:47:32.121889] 10.3.6.149 dev br0 FAILED
[2025-04-27T20:47:32.157843] 10.3.6.4 dev br0 lladdr b8:27:eb:d8:5f:69 PROBE
[2025-04-27T20:47:32.158468] 10.3.6.4 dev br0 lladdr b8:27:eb:d8:5f:69 REACHABLE
[2025-04-27T20:47:32.925806] 10.3.6.153 dev br0 lladdr 00:24:b1:0b:73:20 PROBE
[2025-04-27T20:47:32.927484] 10.3.6.153 dev br0 lladdr 00:24:b1:0b:73:20 REACHABLE
[2025-04-27T20:47:33.597925] 10.3.6.119 dev br0 FAILED
[2025-04-27T20:47:33.694007] 10.3.6.104 dev br0 lladdr 08:84:9d:d2:58:7c PROBE
[2025-04-27T20:47:33.694199] fe80::a84:9dff:fed2:587c dev br0 lladdr 08:84:9d:d2:58:7c router PROBE
[2025-04-27T20:47:33.696668] 10.3.6.104 dev br0 lladdr 08:84:9d:d2:58:7c REACHABLE
[2025-04-27T20:47:33.697061] fe80::a84:9dff:fed2:587c dev br0 lladdr 08:84:9d:d2:58:7c router REACHABLE
[2025-04-27T20:47:33.949761] fe80::14df:53d6:630:4d33 dev br0 lladdr c2:b7:a7:94:d8:19 STALE
[2025-04-27T20:47:33.949915] 2620:6:2003:106:96ee:f793:c700:7632 dev br0 lladdr 24:11:53:ce:04:2b STALE
[2025-04-27T20:47:33.950042] 10.3.6.113 dev br0 lladdr c8:ff:77:b6:a9:d9 PROBE
[2025-04-27T20:47:33.950142] 10.3.7.197 dev lxcbr0 lladdr 00:50:56:1a:ad:cf PROBE proto zebra
[2025-04-27T20:47:33.950222] 10.3.7.197 dev lxcbr0 lladdr 00:50:56:1a:ad:cf REACHABLE proto zebra
[2025-04-27T20:47:33.950445] 10.3.6.113 dev br0 lladdr c8:ff:77:b6:a9:d9 REACHABLE
[2025-04-27T20:47:34.397842] 2620:6:2003:106:6aff:77ff:feb6:a9d9 dev br0 FAILED
[2025-04-27T20:47:34.397982] 2620:6:2003:106:daa9:4ff:fe8b:dd95 dev br0 FAILED
[2025-04-27T20:47:34.398075] 2620:6:2003:106:f281:73ff:feeb:946e dev br0 FAILED
[2025-04-27T20:47:34.398156] 2620:6:2003:106:f218:98ff:feec:e96a dev br0 FAILED

And this for a router with a view of the IPv6 DFZ:

(daedalus:20:50:EDT)% ip -ts -6 mon
[2025-04-27T20:50:13.332562] 2402:e580:745a::/48 nhid 655447 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
nexthop via fe80::a03:444 dev sit2 weight 1
[2025-04-27T20:50:13.565449] 2a03:eec0:3212::/48 nhid 125 via fe80::a03:444 dev sit2 proto bgp metric 20 pref medium
[2025-04-27T20:50:14.052933] 2a12:a580::/29 nhid 627585 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
[2025-04-27T20:50:14.052991] 2a06:de01:861::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:14.053030] 2605:9cc0:c05::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:14.053162] 2a06:de05:63d6::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:15.155490] fe80::2a7:42ff:fe45:bc73 dev eth0 lladdr 00:a7:42:45:bc:73 router PROBE
[2025-04-27T20:50:15.156345] fe80::2a7:42ff:fe45:bc73 dev eth0 lladdr 00:a7:42:45:bc:73 router REACHABLE
[2025-04-27T20:50:15.683859] 2a06:de05:61c4::/48 nhid 81 via fe80::a03:7ee dev sit4 proto bgp metric 20 pref medium
[2025-04-27T20:50:15.861581] 2a03:eec0:3212::/48 nhid 125 via fe80::a03:444 dev sit2 proto bgp metric 20 pref medium
[2025-04-27T20:50:16.890712] Deleted 2605:9cc0:c05::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:16.890771] Deleted 2a06:de01:861::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:17.004830] 2a03:eec0:3212::/48 nhid 627631 proto bgp metric 20 pref medium
nexthop via fe80::a03:7ee dev sit4 weight 1
[2025-04-27T20:50:17.129794] 2a06:de05:61c4::/48 nhid 387 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
nexthop via fe80::a03:7ee dev sit4 weight 1
[2025-04-27T20:50:17.193829] 2a03:eec0:3212::/48 nhid 81 via fe80::a03:7ee dev sit4 proto bgp metric 20 pref medium
[2025-04-27T20:50:17.247879] 2a03:eec0:3212::/48 nhid 125 via fe80::a03:444 dev sit2 proto bgp metric 20 pref medium
[2025-04-27T20:50:17.489729] 2a12:a580::/29 nhid 88 via fe80::a03:443 dev sit5 proto bgp metric 20 pref medium
[2025-04-27T20:50:17.681168] 2a03:eec0:3212::/48 nhid 125 via fe80::a03:444 dev sit2 proto bgp metric 20 pref medium
[2025-04-27T20:50:17.961734] Deleted 2a06:de05:63d6::/48 nhid 80 via fe80::a03:449 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T20:50:18.357769] 2a03:eec0:3212::/48 nhid 125 via fe80::a03:444 dev sit2 proto bgp metric 20 pref medium
[2025-04-27T20:50:18.357974] 2a0c:b641:302::/47 nhid 441 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
nexthop via fe80::a03:444 dev sit2 weight 1
[2025-04-27T20:50:18.585064] 2a0c:b641:302::/47 nhid 441 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
nexthop via fe80::a03:444 dev sit2 weight 1
[2025-04-27T20:50:18.656761] 2a0c:b641:302::/47 nhid 441 proto bgp metric 20 pref medium
nexthop via fe80::a03:449 dev sit1 weight 1
nexthop via fe80::a03:444 dev sit2 weight 1

So, after the latest Debian package update of iproute2 (6.14.0-3), I was surprised to find that ip mon appeared to not work anymore:

(concorde:19:49:CDT)% ip mon             
Failed to add ipv4 mcaddr group to list
(concorde:19:51:CDT)% ip -4 mon
Failed to add ipv4 mcaddr group to list
(concorde:19:51:CDT)% ip -6 mon
Failed to add ipv6 mcaddr group to list
(concorde:19:51:CDT)% ip mon all
Failed to add ipv4 mcaddr group to list
(concorde:19:51:CDT)% ???

It turns out that one now has to specify the object type (nei, route, etc.) due to a patch that enabled multicast address monitoring in iproute2’s monitor command. I couldn’t get the multicast functionality working, but I can replicate the old behavior with the additional arguments:

(concorde:19:57:CDT)% ip -ts -6 mon r
[2025-04-27T19:57:22.103476] 2a0f:7803:faf7::/48 nhid 135 via fe80::ae88:6002 dev sit1 proto bgp metric 20 pref medium
[2025-04-27T19:57:22.156228] 2001:661:4000::/35 nhid 36 via fe80::fc00:1ff:fe95:fbf8 dev eth0 proto bgp metric 20 pref medium
[2025-04-27T19:57:22.157505] 2a0f:9400:6110::/48 nhid 36 via fe80::fc00:1ff:fe95:fbf8 dev eth0 proto bgp metric 20 pref medium
[2025-04-27T19:57:22.274238] 2a06:de00:de03::/48 nhid 36 via fe80::fc00:1ff:fe95:fbf8 dev eth0 proto bgp metric 20 pref medium
[2025-04-27T19:57:22.385085] 2a0f:7803:faf7::/48 nhid 36 via fe80::fc00:1ff:fe95:fbf8 dev eth0 proto bgp metric 20 pref medium
[2025-04-27T19:57:22.578524] 2804:40:4000::/34 nhid 36 via fe80::fc00:1ff:fe95:fbf8 dev eth0 proto bgp metric 20 pref medium
[2025-04-27T19:57:23.095034] 2a06:de05:6151::/48 nhid 138 via fe80::a03:449 dev sit4 proto bgp metric 20 pref medium
[2025-04-27T19:57:23.095687] 2001:67c:20fc::/48 nhid 138 via fe80::a03:449 dev sit4 proto bgp metric 20 pref medium
[2025-04-27T19:57:23.095991] 2400:fc00:87e0::/44 nhid 138 via fe80::a03:449 dev sit4 proto bgp metric 20 pref medium
[2025-04-27T19:57:23.096256] 2a06:de01:97e::/48 nhid 138 via fe80::a03:449 dev sit4 proto bgp metric 20 pref medium

The fact that even ip mon all doesn’t work with this patch and still spits out the mcaddr error seems like a bug to me, and I might submit a bug report if nobody has done so already.

Or, has everyone always been specifying route or nei, and I’ve been the only idiot expecting results when omitting those objects?

Verizon Fios.. Latency Tiers?

My parents and I are both lucky to be in a Verizon Fios service area and have FTTH Internet, which is generally far superior to DOCSIS or xDSL when it comes to speed and, more importantly, latency and jitter. My parents have had the service for about two decades; I’ve had mine since 2021, when I moved to VA, and I started with the Gigabit (actually 940/880 Mbps) speed tier. My parents, however, have stuck with 75/75 Mbps since they really haven’t felt the need for more bandwidth.

I was visiting them this weekend and, along with installing a new Mac mini M4 for my mom as a late Christmas present, I also helped them install their new Fios TV+ to replace their old (and, I guess, soon-to-be-unsupported?) set top boxes. The Fios TV+ service makes use of Google Stream TV hardware & software on each TV, all of which connect to a main Video Media Server (VMS) through a combination of MoCA and Wi-Fi. The VMS receives ATSC through coax from the ONT. Anyway, this generally requires a Verizon-branded router, even if all it ends up doing is bridging MoCA and Wi-Fi. As I do in my own home, I put the Verizon router behind my own router, since I use home-grown Debian-based routers running various VPNs and iptables (see PCN for more details). Regardless, I replaced their ancient Actiontec MoCA router with the new G3100. The installation was pretty easy, and now my parents have two Wi-Fi networks in their home: the main one and one dedicated to the Stream TV boxes.

When we activated the first Stream TV box and the VMS, their Internet service was automatically upgraded to the 940/880 Mbps service without a price difference (no ONT reboot required). I am not completely clear on what happened on the billing side of things but my dad seemed happy.

Various standard speed tests confirmed the new bandwidth tier was working as expected. I then looked at SmokePing just for kicks, not expecting to see anything different but got a surprise:

All SmokePing targets saw the latency and jitter drop.

This was unexpected for two reasons:

First, the link speed on my router was 1000Mbps/full-duplex before and after the upgrade. Even if it had been 100Mbps/full-duplex before, that would not explain the drastic latency decrease from ~9 ms to ~5 ms, since a 56-byte ICMP packet incurs 4.48 μs of serialization delay at 100 Mbps and 0.448 μs at 1000 Mbps. That’s nowhere near the millisecond range.
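Those serialization numbers are just bits divided by line rate; since Mbps is conveniently bits per microsecond, the math is a one-liner (using the 56-byte figure from above and ignoring headers):

```shell
# serialization delay in microseconds: bytes * 8 bits / rate in Mbps
for mbps in 100 1000; do
  awk -v r="$mbps" 'BEGIN { printf "%4d Mbps: %.3f us\n", r, 56 * 8 / r }'
done
#  100 Mbps: 4.480 us
# 1000 Mbps: 0.448 us
```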

Second, from what I have seen, bandwidth tiers on residential Internet services are implemented as policers (token bucket) or shapers (leaky bucket), neither of which kicks in until traffic hits the limit, which was 75 Mbps symmetric before and 940/880 Mbps after. There certainly wasn’t much of anything even close to 75 Mbps between 1000 and 1200 local time (the graph is cut off at 10 Mbps to highlight this):

So, link speed was always 1000Mbps and we weren’t hitting the limiters prior to the bandwidth upgrade. Why did the latency change?

None of the hardware between my SmokePing node and the Fios network changed during this upgrade, and neither did the software (or the uptime of the operating systems). My only explanation is that Verizon may be using something other than normal policers and shapers for their bandwidth tiers, something that impacts unloaded latency.

If this is all true and there’s no other explanation for the latency change, this indirectly creates latency tiers as well as bandwidth tiers. Latency is especially important for realtime applications such as A/V and gaming, as well as for the time it takes TCP to grow its congestion window. That last point can impact page load times when multiple short-lived TCP connections are used to pull various page components.

Am I the only one who’s seen this?

2024 Review and Stuff

I forgot to do a 2024 review. It’ll be short and sweet and mostly visual, but here goes.

I flew a lot for work and pleasure:

We’re down to 1 dog now (RIP Henry Kamichoff, who passed on Christmas Day):

I doubled down on Intel for my last two “main computer” upgrades:

(destiny:18:49:EST)% ssh vega lscpu|grep Model.name
Model name: Intel(R) Core(TM) i9-14900
(destiny:18:49:EST)% lscpu|grep Model.name
Model name: Intel(R) Core(TM) i9-14900KS

Next time it’ll be AMD-something-3D, I suppose. I’d really like it to be ARM, though. Maybe RISC-V?

I went to a few EDM shows. LSR/CITY v3 in Austin, TX:

Ultra Music Festival in Miami, FL:

Eric Prydz in Washington D.C.:

I already have Gareth Emery’s CYBERPUNK lined up for 2025!

Did I mention travel?

  • Seattle, WA, USA (6x)
  • Cupertino, CA, USA (2x)
  • Dublin, Ireland (2x)
  • Sarasota, FL, USA (2x)
  • North Brunswick, NJ, USA (2x)
  • Sedona, AZ, USA
  • Miami, FL, USA
  • New York, NY, USA
  • Austin, TX, USA
  • Chicago, IL, USA
  • Boston, MA, USA
  • United Kingdom (England and Scotland)

We stayed home for the holidays, though. 2025 may be different.

That’s really about it, though. If I try to add more to this entry I’ll just put off publishing it.

A Tiny 2023 Review

I’m not all that interested in writing long year in review (YiR) articles anymore, so I’ll just create some bullets and a bunch of photos. The bullets come first.

  • I had three different managers at my place of employment.
  • I got on a plane to travel out of my home area a whopping 17 times, 7 for work and 10 of a personal nature. Notable destinations were Israel (first half of the year), Ireland, Las Vegas, Chicago, and of course Seattle & the Bay Area.
  • I switched my primary vehicle to a non-Tesla EV.
  • I was promoted at my place of employment.
  • I attended concerts by Gareth Emery and P!nk.
  • My wife and I did not travel for Thanksgiving, Christmas, or New Years.
  • I returned to the office. My commute time ranges from 40 to over 90 minutes depending on my mode of transportation and various other timings.
  • I started drinking espresso shots.
  • I upgraded my main workstation to an Intel Core i9-13900KF CPU with 128GiB of RAM and a Nvidia GeForce RTX 3060 GPU.
  • I had an irrigation system installed for our end unit townhome. This installation resulted in 1 week of no running water in our home due to a valve that is part of the fire suppression system, which I blame squarely on the builder.
  • I gained 4.9 kg.
  • I saw three movies.

Here are some images:

Temple Bar, Dublin, Ireland
I visited Washington D.C. a couple of times since it’s nearby.
Tel Aviv, Israel
The Sphere had just been turned on when I was in Las Vegas
DC Metro after the P!nk concert at Nationals Park
Empire State Building and Vessel in NYC
“Cask Mates” at the 1608 Bar in Quebec City, Canada
My wife on our New England cruise
Seattle Spheres
Intel Core i9-13900KF with Noctua cooler and Nvidia RTX 3060 GPU
Henry and Wesley, our dogs
The Mediterranean Sea from Carmel Beach in Haifa, Israel
Gareth Emery in Area 15, Las Vegas

That’s about it, I suppose. In actuality I got tired of processing photos.

I also started listening to melodic techno instead of a steady stream of trance and house music. I also have a whopping 4x EDM shows scheduled through March 2024 already!

FreeBSD 14.0

FreeBSD isn’t dead. Nope.

I run this webserver (dax.prolixium.com, a VirtualBox VM running on a bare metal server that I lease from Vultr in Parsippany, NJ) on FreeBSD as well as another test VirtualBox VM at home, trance.prolixium.com. Although it’s a less-than-scientific development environment, I usually try FreeBSD upgrades on trance before upgrading dax.

I did the trance upgrade from 13.2 over the weekend during some evening downtime I had while in Sarasota, FL (visiting a family member). I did a source upgrade of the base system with a binary package upgrade. I used to do a 100% source upgrade, which included FreeBSD ports. However, over time the /build/ dependencies spiraled out of control (think multiple versions of llvm...) so I switched to binary packages.

The first quirk I ran into was that the initial installworld failed with an ld-elf.so.1: /usr/bin/make: Undefined symbol "__libc_start1@FBSD_1.7" error. I ran installworld a second time and it worked. I’m guessing there’s some race condition or dependency error (I am using make -j1). Some folks on Lily indicated this is par for the course, although I haven’t encountered it before and I’ve been running and upgrading FreeBSD since 4.8. The other quirk is that openssh-portable seems to be gone from the available binary packages. As a result, pkg upgrade -f didn’t reinstall it to link against the new libraries, and I had run delete-old-libs. After the reboot sshd was not running due to a libcrypto mismatch; I had to VNC in and switch to the OpenSSH from base. The only reason I used openssh-portable originally was because the OpenSSH version in base always lagged behind considerably, but that doesn’t seem to be the case anymore since it’s at 9.5. I’m still curious why openssh-portable was removed from the available binary packages; a quick web search didn’t turn up anything interesting. I still see it in ports, so I guess I could still build it from source if I wanted.

The trance VM runs WireGuard and FRR since I’ve had problems with that combination in the past (also OpenVPN and Quagga). I figured it’s a good soak and /fairly/ representative of what I run on dax, which I suppose I’ll upgrade in the next couple of weeks.

MPV, PulseAudio, and Laggy Video

I switched over to MPV from MPlayer a year or two ago and haven’t looked back. However, after I upgraded from an Intel Core i9-9900K with an RTX 2060 to an i9-13900KF with an RTX 3060 earlier this year, I noticed videos played with MPV were very laggy, to the point where the frame rate was much lower and it’d have to drop frames to keep up with the audio. VLC and browser-based video were unaffected. It was just MPV.

I did all sorts of debugging, which included changing video drivers (--vo), decoders (--vd), and even playing with CPU scaling on the system (to the point where I even tried writing 0 to /dev/cpu_dma_latency) but nothing helped. The odd part is that some video files weren’t affected by this, specifically videos from my iPhone (H.265 in MOV container). I mostly switched to VLC, as a result, and moved on with life.

Today I came across an article where someone was troubleshooting a similar laggy video issue with MPV and even though the problem described in the thread was not related to mine, one of the troubleshooting steps helped me solve mine.

I run PulseAudio because some things like to use it, but given the choice I tell applications to use the ALSA API directly. MPV uses PulseAudio by default over ALSA. It turns out that there is a slight difference in the audio controller on the motherboard of my i9-13900KF system (MSI MPG Z790 Carbon Wi-Fi) that causes output through PulseAudio to suffer horrible latency and lag. The fix was simple: --ao=alsa (or ao=alsa in mpv.conf).

Yes, I still feel like a jive turkey after all of these years: