I was restoring an entire computer, and restoring from my network share on a NAS wouldn’t work: it would quietly hang, a day in. Copying the backups to an external drive and restoring from that worked.
This is the simpler migration. I’m planning to post the opposite direction separately.
I started writing these for the “external drive to network share” migration, but, uh. Look, doing this still feels a bit cavalier to me, I guess.
These processes were largely worked out with HFS+ backups; I haven’t walked through these with APFS backups.
These (mostly) worked for me, but they are very much not officially supported.
There’s a lot of context, and I’ve likely forgotten details; it’s been a year or two. This is still kind of a rough draft. I’ll try to highlight rough bits and gaps.
Newer versions of the OS might be more protective of letting you access these backups, even as root.
I think the external drive won’t be encrypted, even if the sparse bundle on the network share was. I think I managed to get it to start incrementally encrypting the external drive by adding it as a backup location, with encryption, then having it go “oh! ok! got it”.
On a network share, Macs use sparse bundles to make something that looks like a Mac disk image, on a network share where the file system is relatively unimportant. If the Mac can read and write the files within, things like user IDs, permissions, and encryption don’t have to be coordinated across machines.
On the network share, it will be called something like Bob's MacBook.backupbundle or .sparsebundle.
From the Finder, you can “Connect to the server”, go to the network share, select the appropriate sparse bundle, and either double-click it or right-click and “Open” it.
You’ll probably see “Time Machine Backups” as a mounted volume.
Note: the layout below, with Backups.backupdb, is what an HFS+ backup looks like; an APFS backup bundle is organized differently.
Backups.backupdb
    Bob's MacBook
        2024-04-13-092828
Erase and reformat the external drive. Label it something distinctive, so you don’t confuse source and target for the copy. Mount it.
I’d recommend ethernet instead of wifi.
From a Mac, once the sparse bundle and the external drive are mounted:
# change these as appropriate
input="/Volumes/Time Machine Backups" # sparse bundle
output="/Volumes/Time Machine 2023" # external drive
date; time sudo asr \
--source "$input" \
--target "$output" \
--erase; \
date
It will prompt for your password (for sudo), and for confirmation before erasing the target.
Part of the process unmounts the source; if the run was interrupted, I had to remount it and start over. The output looked like:
Validating target...done
Validating source...done
Erase contents of /dev/disk5 ()? [ny]: y
Validating sizes...done
Restoring ....10....20....30....40....50....60....70....80....90....100
Verifying ....10....20....30....40....50....60....70....80....90....100
Restored target device is /dev/disk5.
This took about 35 hours to process a 4TB drive, 2.6TB used.
]]>this is part 2 – part 1 has an intro and links to the others
I forget where I picked up “forest” as “many files or hardlinks, largely identical”. I hope it’s more useful than confusing. Anyway. Let’s make a thousand thousand thousand files!
Putting even a million files in a single folder is not recommended. For this, the usual structure is nested directories, with roughly a thousand entries apiece.
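A scaled-down sketch of that kind of nested layout, 10 × 10 × 10 = 1,000 files instead of 1000 × 1000 × 1000 = 1 billion (the directory names here are just illustrative):

```shell
# Scaled-down sketch: 10 x 10 x 10 = 1,000 empty files. The fan-out
# keeps any single directory small enough to list and delete quickly.
root=forest-demo
for a in 0 1 2 3 4 5 6 7 8 9; do
  for b in 0 1 2 3 4 5 6 7 8 9; do
    dir="$root/$a/$b"
    mkdir -p "$dir"
    for c in 0 1 2 3 4 5 6 7 8 9; do
      : > "$dir/$c"   # create an empty file
    done
  done
done
find "$root" -type f | wc -l   # counts 1000 files
```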
These are ordered, roughly, slowest to fastest. These times were on an ext4 file system.
Lots more details over in a Gitlab repo, a fork of the Rust program repo.
forest-touch.sh – run touch $file in a loop, 1 billion times
create_files.py – touches a file, 1 billion times. from Lars Wirzenius, take 1, repo
forest-tar.sh – build a tar.gz with a million files, then unpack it, a thousand times. makes an effort for consistent timestamps
forest-multitouch.sh – run touch 0001 ... 1000 in a loop, 1 million times. makes an effort for consistent timestamps

More consistent timestamps can lead to better compression of drive images, later.
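The tar approach is easy to sketch at demo scale; here, a hundred files per tarball unpacked three times, instead of a million and a thousand, with a pinned timestamp for consistency (all names and counts here are illustrative):

```shell
# Scaled-down sketch of the tar method: build one archive of empty
# files with a pinned mtime, then unpack it repeatedly into
# numbered directories. tar preserves the timestamps on extraction.
mkdir -p seed
for i in $(seq -w 1 100); do
  touch -t 202401010000 "seed/$i"   # consistent timestamp: 2024-01-01 00:00
done
tar -czf seed.tar.gz -C seed .
for run in 1 2 3; do
  mkdir -p "forest/$run"
  tar -xzf seed.tar.gz -C "forest/$run"
done
find forest -type f | wc -l   # counts 300 files
```

Building the archive once and unpacking it many times means the per-file work (name generation, timestamp setting) is paid only once.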
A friend, Elliot Grafil, suggested that tar would have the benefits of decades of optimization. It’s not a bad showing! zip didn’t fare as well: it was slower, it took more space, and couldn’t be streamed through a pipe like tar.gz can.
Lars Wirzenius’ create-empty-files, with some modifications, was the fastest method.
Some notes on usage:
For documentation, filed merge request #3, merged 2024-03-17.
mount recognizes them automatically.

The fastest version was the one where I’d commented out all saving of state. If state were saved to a tmpfs in memory, it slowed down by a third. If state were saved to the internal Micro SD card – and this was my starting point – it ran at about 4% of the speed.
The Rust program was documented as making an ext4 file system, but it was really making an ext2 file system. (I corrected this oversight with merge request #2, merged 2024-03-17.) Switching to an ext4 file system sped up the process by about 45%.
With xfs, I didn’t modify the defaults. After 100 min, it estimated 19 days remaining. After hitting ctrl-c, it took 20+ min to get a responsive shell. Unmounting took a few minutes.
By default, btrfs stores two copies of metadata. For speed, my second attempt (“v2”) switched to one copy of metadata:
mkfs.btrfs --metadata single --nodesize 64k -f $image
These are the method timings to create a billion files, slowest to fastest.
| method | clock time | files/second | space |
|---|---|---|---|
| shell script: run touch 1 billion times, ext4 | 31d (estimated) | 375 | |
| Rust program, xfs defaults | 19d (estimated) | 610 | |
| Rust program, ext4, state on Micro SD | 17d (estimated) | 675 | |
| Rust program, btrfs defaults | 38hr 50min | 7510 | 781GB |
| shell script: unzip 1 million files, 1k times, ext4 | 34hr (estimated) | 7960 | |
| Rust program, ext2 | 27hr 5min 57s | 10250 | 276GB |
| Python script, ext4 | 24hr 11min 43s | 11480 | 275GB |
| Rust program, ext4, state on /dev/shm | 23hr (estimated) | 11760 | |
| shell script: untar 1 million files, 1k times, ext4 | 21hr 39min 16s | 12830 | 260GB |
| shell script: touch 1k files, 1 million times, ext4 | 19hr 17min 54s | 14390 | 260GB |
| Rust program, btrfs v2 | 18hr 19min 14s | 15160 | 407GB |
| Rust program, ext4 | 15hr 23min 46s | 18040 | 278GB |
This is a story about benchmarking and optimization.
Lars Wirzenius blogged about making a file system with a billion empty files.
Working on that scale can make ordinarily quick things very slow – like taking minutes to list folder contents, or delete files.
Initially, I was curious about how well general-purpose compression like gzip would fare with the edge case of gigabytes of zeroes, and then I fell down a rabbit hole.
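That curiosity is easy to taste-test at small scale; a sketch, using 10 MB of zeroes instead of gigabytes:

```shell
# How well does gzip handle long runs of zeroes?
# 10 MB of zeroes compresses down to roughly 10 KB,
# near deflate's maximum ratio of about 1000:1.
dd if=/dev/zero bs=1M count=10 2>/dev/null | gzip -9 | wc -c
```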
I found a couple of major speedups, tried a couple of other formats, and tried some other methods for making so many files.
For a brief spoiler: Lars’ best time was about 26 hours. I got their Rust program down to under 16 hours, on a Raspberry Pi. And I managed to get a couple of other methods – shell scripts – to finish in under 24 hours.
I was polishing up a lengthy blog post when I fell into what might be a whole other wing of the rabbit hole, and I realized it might be another blog post; or maybe several posts would be better anyway.
The sections I can see now (I’ll add links as I go):
I worked from a Raspberry Pi 4, with 4 GB RAM, running Debian 12 (bookworm). The media was a Seagate USB drive, which turned out to be SMR (Shingled Magnetic Recording), and non-optimal when writing a lot of data – probably when writing a gigabyte, and definitely when writing a terabyte. This is definitely easy to improve upon! The benefit here: It was handy, and it could crash without inconvenience.
I tried using my Synology NAS, but it never finished a run. Once, it crashed to the point of having to pull the power cord from the wall. I think its 2GB of memory wasn’t enough.
Lars Wirzenius wrote:
Slides from Ric Wheeler’s 2010 presentation, “One Billion Files: Scalability Limits in Linux File Systems”
]]>
I want to keep an eye on domains and their expiration dates without signaling that interest: middlemen watch for such signals, front-run the purchase, and auction the domain off.
This is, to me, surprisingly hard to do.
I kept an eye on various domains I’d like to register if and when they expire, setting reminders on my calendar to check. Grace periods complicate things: I’ve seen domains with expiration dates over a month past that were still blocking registration.
Don’t go to the domain from your browser. If it loads, that could signal interest. If it doesn’t load, that’s not definitive: it might not be registered, the webserver could be down, or the domain might be used only for email, so a webserver was never connected. Checking the whois.com site is a better way to get info like an expiration date.
In a handwaving way, the three stages:
Happily, there’s a solution built for this. domain-check-2 is a shell script that can read a list of domains from a text file, check their expiration dates, and send email if there’s under a certain number of days remaining. It checks using whois, and I think that this method is safe from would-be domain squatters. I give it a list that looks like this, only my domains:
# 2023
pronoiac.org # #me, exp 2023-10-23
mefi.social # #mefi, exp 2023-11-11
# 2027
mefiwiki.com # #mefi, exp 2027-07-05
I’m running it manually, on a weekly basis; I haven’t used the email notification, but looked at the output. The comments, ordering, and exact expiration dates aren’t necessary, but they help me fact-check that it’s working, and they might help my imperfect understanding of the domain lifecycle.
this is much fuzzier
I checked whois (not whois.com) from the command line, and grepped for status or date. If you want to register a domain the day it becomes available, I’d suggest checking the status daily. Knowing when it switches to “pending delete” is important, as that starts a five-day timer. Finding that it’s been renewed is another possibility; in which case, update the expiration date in the text file and go back to step 1.
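The grep step can be sketched like this; a canned sample stands in for a live `whois example.com` call here, so nothing signals interest, and note that the exact field names vary by TLD:

```shell
# Filter whois output for status and expiration lines.
# whois-sample.txt stands in for `whois example.com` output.
cat > whois-sample.txt <<'EOF'
Domain Name: EXAMPLE.COM
Registry Expiry Date: 2025-08-13T04:00:00Z
Domain Status: clientDeleteProhibited
Name Server: A.IANA-SERVERS.NET
EOF
# case-insensitive match on "status" or "expir..."
grep -Ei 'status|expir' whois-sample.txt
```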
| status | days after expiration | renewable? | website could work |
|---|---|---|---|
| ok | before expiration | yes | yes |
| renewal grace period | 0 to 30* days | yes | maybe |
| redemption period / restoration grace period | 30 to 60 days | yes | no |
| pending delete | 5 day duration | ? | no |
| available | 35 to 75, or up to 120 days??? | | |
Notes:
Grace period: probably 0 to 30 days. It could be lengthened, to 40 or 90 days, or shortened.
Redemption period: a recovery fee is required to renew, $100 to $150. The registrar could put the domain up for auction during this.
Available: apparently, usually opens sometime between 11am and 2pm Pacific.
Add grace period: people and registries can cancel a domain purchase within five days of purchase. This can be used for domain tasting and domain kiting. This means: if the domain of interest was picked up by someone else, watch it for the next week; maybe they’ll change their mind and return it.
This timeline can vary by TLD, registrar, and registry.
Don’t rely upon whois.com after the expiration date; aggressive caching could show out-of-date information, such as “pending delete” when other sources show the domain has been registered for days.
Apparently, domains usually open sometime between 11am and 2pm Pacific. Logging into your domain registrar of choice, and having funds available, is a good idea, if you want to act quickly.
Honestly, I haven’t gotten as far as “registering a lapsed domain”. The whois.com caching surprised me. This blog post is partially me gathering context and notes, so that when the next domain of interest nears expiration, I can make exciting new mistakes rather than repeat old ones.
While I was trying to pair an Apple Watch to a phone, an update was required; upon requesting it, the Apple Watch (through the iPhone) would cancel, reporting:
Unable to check for update, not connected to the Internet
I enabled the iPhone’s hotspot, which disabled its wifi, and attempted the update again. The phone started to download the appropriate firmware.
Minutes in, I re-enabled the wifi for a faster download; downloading and updating the watch both succeeded. So apparently, only the initial negotiation required this workaround.
It might be wifi bands, 5GHz (faster, but shorter range) vs 2.4GHz:
So, I might post some of my notes to my blog. They might be helpful to other people. Just having them tagged here might make them easier for me to find.
(to do: switch themes to one that displays tags)
]]>In the movie “Glass Onion”, there’s a recurring background chime: every hour, a resounding “dong!” with some chimes. Turning this into a ringtone sounded like fun.
Here’s the playable mp3 (I didn’t make the mp3, see below for the source):
And here’s the m4r version, usable as a ringtone on iPhones. It’s an aac / mp4 (audio-only) file.
This seems like something that should be easy and built-in, but it’s not. This is using Music, not iTunes, on a Mac.
To transfer the file: Download the m4r file, and open a Finder window for Downloads (or wherever you saved the file).
Plug your device in.
Open the General view in Finder for your device.
Drag and drop the “hourly dong” m4r file over to that Finder window.
You should be good to go!
ffmpeg for audio conversion
]]>I’m a fan of the TV show, “The Good Place.” The makers of it are fond of easter eggs, and some weird fonts made an appearance. Some folks on Reddit decoded them; I thought I’d make a font out of it. I made a quick lofi draft with iFontMaker on an iPad Pro and an Apple Pencil.
I present: Afterlife Wingdings v 0.1, as a Truetype font (15k).
Currently missing: Q, Z, q.
Decoding:
Edit, 2019-02-09: From a behind-the-scenes video, there’s the door to Heaven (about 1:31 in), in Alchemy A:
Shari
Cort
IHOP
Adam
Zdcks
Before that, a sign (around 1:16 in):
Adam is the land of the
small and the mighty
ill powerful bageljohn-
nys should be on the
look out for wonder?
ful cruisesZacks Is a land of slugs
and waterfalls slugs
love waterfalls so
that should be a sur-
prIse I mean really
comoneIHOP be careful what
you touch In thIs mIs-
tIcal realm of danger
and IntrIgue ???
dont eat an???
fInd on the ???
s03e01, “Everything is Bonzer!”
Shawn’s computer earlier in the season used Alchemy C, blurred (see the podcast, around 1:14 in):
aspernitur aut odit out fugit, sed q…
equuntur magni dolores eos qoi r…
voluptatem sequi nesciunt.
Neque porro quisquam est, qui dol …
ipsum quia dolor sit amet,. e porro
consectetur, adipisci velit, sed quia nonre et dolore magnam aliquam qu
“Neque porro quisquam” led me to a page about lorem ipsum’s origin, and I think the translation kinda justifies my spending time on translating this.
s04e04, “Tinker, Tailor, Demon, Spy”
The tub of Glenn did have “1 GLENN” in Afterlife Wingdings.
s04e12, “Patty”
Around 7 minutes in, the contract that Michael signs:
… ed and eIght shall In any Manner affect the
… the fIrst ArtIcle[;?] and that no
…prI … of Its equal Suffrage In the Senate[Michael’s signature]
New Leader of the Good Place
The first part was identified on Reddit, as being part of the Constitution.
I’d thought there was a glimpse, when Eleanor was looking over the green folders, but on rewatch from Hulu, I don’t see it.
Resources:
I’d love to know if you do anything fun with this!
]]>As background: I run a few websites, some for myself, and some for other people. When the sites for other people break, I pay more attention. And when I checked the traffic to see what was being requested, the most frequent request was a page of suicide and depression prevention resources, right after Robin Williams took his own life, when those resources could be timely and helpful. Well, that really lit a fire under me.
So, a MediaWiki installation died after months of puttering along, citing a database error: “A database query error has occurred.” My guess is that the hosting company updated something; usually, a quick software update handles it. Not this time, though. Some googling suggested adding “$wgShowSQLErrors = 1;” at the end of LocalSettings.php – which I should add to the MediaWiki errors page. Following that led to the more informative error: “1267 Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation ‘=’” (I’ve snipped out my database server).
My MediaWiki installation fell over due to a database problem involving mismatched collations: utf8 and, for some reason, latin1_swedish_ci. I followed some steps based on Alex King’s notes on a similar situation, with some modifications.
1. “Export the data as Latin-1.” I usually use phpmyadmin, but ‘Latin-1’ wasn’t a listed export option. So, I used the command line, starting from these directions:
mysqldump -uUSER -p --quick --single-transaction --create-options --skip-set-charset --default-character-set=latin1 -h DB_SERVER TABLE_NAME > db-dump.sql
I opted to enter the database password interactively, so it wouldn’t be saved in the shell history.
2. “Change the character set in the exported data file from ‘latin1’ to ‘utf8’.” I used nano to edit, find and replace: latin1 to utf8, and latin1_bin to utf8_bin. Textmate might have worked, but it asked about character set encodings, and I worried it could screw things up.
3. See #2.
4. “Import your data normally.” phpmyadmin on the usual host balked at the 15 meg bzipped database file, and I’d hit my breaking point with this hosting, so I set up a Linode instead. Based on the Linode import directions: mysql -u USERNAME -p -h DB_SERVER DB_NAME < FILE.sql – again, interactively prompted for the database password.
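Step 2’s find-and-replace can also be done non-interactively with sed instead of an editor; a small self-contained demo (the two-line dump fragment here is a stand-in for the real db-dump.sql):

```shell
# Demo of step 2 with sed: swap the declared character set from
# latin1 to utf8. A tiny stand-in for the real mysqldump output:
cat > db-dump.sql <<'EOF'
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
  `page_title` varbinary(255) COLLATE latin1_bin NOT NULL,
EOF
# replacing latin1 also rewrites latin1_bin to utf8_bin;
# -i.bak keeps a backup copy of the original
sed -i.bak 's/latin1/utf8/g' db-dump.sql
cat db-dump.sql
```

Since “latin1” is a prefix of “latin1_bin”, a single substitution covers both replacements from step 2.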
Lastly: there may be an issue with accented characters, causing truncated pages. I tried to grep the logs, but the lines are too long. I looked at the Google results for each accented letter, and nothing looked amiss. I started looking at hexdumps, and there were a lot of results, from old spam. I’ve punted on this, and asked the wiki mods and the Mefi mods to let me know if anything breaks.
]]>What was encoded? Spoilers follow.
Issue 1, page or panel 21: “Kill them all!”
2.1 – “No mercy!” twice.
2.7 – “By the goddess!”
2.9 – “Die Filth!” / “Nng fff!”
2.12 – “No!” / “Uuuggg”
6.2 – “What’s this bullshit you’re flinging?”
“Well, isn’t that interesting?”
“I’ll forgive your impudence, as my joy at hearing the old tongue softens my mood.”
Edit: 8.19 – “Honor?”
That’s all for now. I haven’t seen what q, x, or z look like yet.
]]>