I honestly don’t understand how DreamHost stays in business.

Dossy tweeting: "@DreamHostCare Is anyone actually watching this Twitter account? You're about to lose a $350/mo customer because a server migration has been going on for 6+ hours now, with no updates from your end. What are you all doing over there?"

Yesterday, a client of mine was having their DreamHost dedicated server migrated to a new dedicated server because the one they are on intermittently becomes unresponsive around 8pm ET, seemingly at random.

DreamHost’s diagnosis is that the server is on a Linux kernel version that is supposedly causing this, and their recommended solution isn’t to just upgrade to a kernel that doesn’t have this problem, which would be trivially simple, but to upgrade the entire operating system and migrate to a new dedicated server.

As a person who manages servers for a living, I get it: it can suck having to support old stuff sometimes. The old server is on Ubuntu 14.04.6 LTS, which is quite old at this point, but isn’t due to reach End of Life until April 2022. The new dedicated server they’re moving us to is only on Ubuntu 18.04.5 LTS, which isn’t even the newest Ubuntu at this point; that would be Ubuntu 20.04.1 LTS. Still, I suppose if you don’t give a shit about your customer, any excuse to force them through a major OS upgrade, even when the real problem is that the service you’re providing keeps failing intermittently, is an excuse you’ll take.

There are only four small sites being hosted on this server.

There is a combined total of 103.3 GB of files and 7.5 GB of MySQL data. These numbers might seem large by year-2000 standards, but in 2020 this is trivially small; it can all fit comfortably in RAM on a modern server, or in the storage of a modern iPhone.

Transferring this from one server to another shouldn’t take more than about 19 minutes over a 1 Gbps link, or under 2 minutes over a 10 Gbps link. Migrating from one server to another shouldn’t take more than 30 minutes, tops, and that’s if you’re a Neanderthal and type with your fucking elbows.
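
For the curious, here’s my back-of-the-envelope math, assuming roughly 100 MB/s of effective transfer throughput on a 1 Gbps link and roughly 1 GB/s on a 10 Gbps link:

$ echo "scale=1; (103.3 + 7.5) * 1024 / 100 / 60" | bc    # 110.8 GB at ~100 MB/s (1 Gbps), in minutes
18.9
$ echo "scale=1; (103.3 + 7.5) * 1024 / 1000 / 60" | bc   # 110.8 GB at ~1 GB/s (10 Gbps), in minutes
1.8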

A woman typing on a Dell keyboard with her left elbow.
Source: odiouswench on YouTube

Thinking that this should be a quick and easy migration, we requested the migration back on October 28th, asking for a date and time when the migration could be done. On October 30th, DreamHost responds saying the data center team would need to do some prep, and that they’d let us know when they could schedule the upgrade, “likely early next week.”

Apparently, “early next week” in DreamHost-speak takes over a month.

Fast-forward a month, and on November 28th, I send a follow-up message asking what the status is with our request. Four days later, on December 2nd, I get a response saying they’re ready to go. I respond on December 3rd, requesting the soonest available time slot, because at this point I just want to get this over with. I get a response late that night saying that they’ll schedule the upgrade for the next day, December 4th, at 11am PT/2pm ET. Fantastic, we’ve got a plan!

The time comes, it’s 2pm ET, and I’m sitting here, with the Cloudflare panel open in one tab, the DreamHost panel open in another, and the sites all lined up in 4 other tabs, ready to pull the trigger on changing the DNS to point everything at the new server to minimize whatever downtime I can. I’m prepared.

At 2:57pm, I get an email from DreamHost saying that they’re only now starting the migration. 🤦 Okay, fine, whatever. The email says I’ll receive an automated email once the upgrade is done. Cool, let’s get this over with!

… time passes …

… and some more time passes …

… I’m starting to wonder if my spam filter ate their automated email …

… and the sites still haven’t been migrated …

At 7:26pm, I send an email pointing out that at least one of the sites is down because it can no longer connect to its database. I point out that I haven’t gotten an email that the migration has completed yet, so either their process has failed or they have seriously taken four and a half hours, so far, to complete a migration that should have been 30 minutes, tops.

At 8:37pm, having gotten no response to my earlier email, and the site still being down, I send another email, asking for an update. How much longer could this possibly take?

Getting no responses to my emails, I decide to give DreamHost support’s “live chat” a shot. I queue up at 9:17pm, and eventually get connected to a person at 9:27pm. I ask for a status update with our migration. I notice that while I was waiting in queue, an email arrived at 9:16pm saying their upgrade process failed and had to be restarted.

Are you fucking kidding me?

I stay on the live chat to try and get progress updates, and see if there’s any chance this is going to actually get done tonight. Sadly, at 10:19pm, I’m told that the migration process has failed again, and that the tech who was doing it will revert part of the migration to point the sites at the databases on the old dedicated server to bring the sites back online, and that they’ll come back to this on Monday.

At 10:42pm, I’m informed that the sites should be back online and that there’s nothing more that will be done this evening. I confirm that the sites are back online, and end the chat.

***

I was a long-time DreamHost customer myself, going back to 2006. But, after they changed their service offering in 2015, I’d had enough and closed my account.

At the time, I was happy just to leave and leave it at that. But now, 5 years later, seeing that the DreamHost experience has only continued to get worse, I’ve decided that not only am I not going to give them my business, I’m not going to have my clients give them their business, either.

If you’re currently hosted at DreamHost, unhappy, and want to move away, but haven’t because you’re either uncomfortable moving your site yourself or you’ve tried hiring someone in the past to do it and they failed, I want to help you move.

Contact me and tell me about your DreamHost experience, and I’ll see to it that you’re moved to better hosting.

Goodbye, DreamHost

I can’t believe that I’m canceling my DreamHost account, just one month shy of my 10 year anniversary with them.

Closing my DreamHost account

I first signed up for my DreamHost account back in January of 2006. For the most part, it’s been a great experience. I originally signed up for the two-year plan for $214.80 ($8.95/month), and added VPS to my account in March 2009, for an additional $18/month.

I was okay spending the extra money in order to have a VPS that I had full control over. DreamHost even tweeted about offering root access with their VPS back in February 2011. It was definitely part of the attraction for many customers.

DreamHost’s tweet about offering root access on their VPS

Then, on November 17, 2015, DreamHost sent out an email informing customers that in two weeks, on November 30, they would be removing everyone’s sudo (root) access.

Wow. Just … wow.

What recourse did we have? Try and sign up for their DreamCompute offering, which is still in public beta, and there’s now a wait-list to even get access to it?

Sell me a product, then take away a key feature but still charge the same price, while suggesting an upsell into a different product offering if I want to get that feature back? That’s called bait-and-switch, and that borders on fraud.

This was the last straw. Several times in the past I’ve wanted to switch away but was too busy to really do it, but this forced my hand: I had to move away, and from the looks of it, I wasn’t the only one.

I ended up moving my stuff over to Amazon AWS. It looks like it’ll cost me around $10-12/month, netting me a savings of close to $15/month compared to the $26.95/month I was spending at DreamHost.

I’m relieved now that I’ve actually gotten around to moving everything off DreamHost. No more wondering if they’re going to change their product offerings again. No more wondering if my sites are going to come back up when they’re down.

Well, DreamHost, it’s been nice knowin’ ya, but I’m officially done. I suppose it was good while it lasted, but like many good things, this too had to come to an end.

Edited to add: And, the account is now fully closed.

DreamHost account closed confirmation

How to migrate a CVS module to Git and GitHub

Since it took me a while to figure out, I figured this might be useful to document: migrating code from CVS to Git. Specifically, I was moving modules in a CVS repository on SourceForge over to GitHub.

Here are the versions of the tools that I used:

$ rsync --version | head -1
rsync version 3.1.0 protocol version 31
$ cvs --version | head -2 | tail -1
Concurrent Versions System (CVS) 1.12.13 (client/server)
$ git --version
git version 2.6.3
$ cvsps -V
cvsps: version 3.13

First, I grabbed a copy of the CVSROOT and checked out the module, so I’d have a reference copy to compare against when done.

$ rsync -av aolserver.cvs.sourceforge.net::cvsroot/aolserver/ aolserver-cvs
$ cvs -d $(pwd)/aolserver-cvs co -d nsmysql-cvs nsmysql

Then, create the working directory for the local Git repo.

$ mkdir nsmysql
$ git init nsmysql

Next, do the actual CVS-to-Git conversion.

$ cvsps --root $(pwd)/aolserver-cvs nsmysql --fast-export | git --git-dir=nsmysql/.git fast-import

Finally, do a diff to compare the two working directories to make sure the import worked correctly.

$ cd nsmysql
$ git checkout master
$ diff -x .git -x CVS -urN . ../nsmysql-cvs

If everything looks good, go ahead and push it up to GitHub.

$ git remote add origin git@github.com:aolserver/nsmysql.git
$ git config remote.origin.push HEAD
$ git push -u origin master
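
If the CVS history also had branches or tags and cvsps carried them over into the conversion, you’ll probably want to push those too; something like this should cover it:

$ git push origin --all     # push any other branches the import created
$ git push origin --tags    # push any tags the import created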

I don’t do this often, but when I do, I always have to figure it out each time, so hopefully next time I’ll find this blog post at the top of my search results and save myself some time.

Debugging a strange MacOS X printing problem

At some point in time, something changed on my system that resulted in my printer no longer printing. (Cue a relevant scene from Office Space here…) A quick Googling of relevant keywords didn’t turn up anyone else complaining about what I was observing, so I did what any lazy person would do: I found another way to print what I needed to print, and forgot all about it.

Now, several months later, the problem still persists, and while I found a suitable workaround (use the “Generic PostScript printer” driver instead of the Lexmark one), the hardcore geek in me felt it necessary to struggle against the injustice of this whole “it doesn’t work” thing. It should work, damn it.

First, here’s what the most obvious symptom looks like:

/Library/Printers/Lexmark/filter/pstopsprinter1 failed

Along with this will be an entry in the system log which you can see in Console.app:

1/25/13 12:19:02.275 PM ReportCrash: Saved crash report for pstopsprinter1[78490] version ??? (???) to /Library/Logs/DiagnosticReports/pstopsprinter1_2013-01-25-121902_localhost.crash

If you look inside the crash report, you’ll see:

*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFArray length]: unrecognized selector sent to instance 0x1006036b0'

Great. Just great. So, we fire up our handy-dandy debugger, and what is the object it’s choking on?

(gdb) po 0x1006036b0
<__NSCFArray 0x1006036b0>(
[redacted],
panoptic.com
)

Aww, damn. I recognize what that is. Those are the domains defined in my /etc/resolv.conf’s “search” parameter.
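
You can see where those come from with a quick look at the resolver config (the first domain below is a made-up stand-in for the one redacted above):

$ grep '^search' /etc/resolv.conf    # example.internal is a placeholder for the redacted domain
search example.internal panoptic.com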

Here’s the whole backtrace:

(gdb) bt
#0  0x00007fff93b09ce2 in __pthread_kill ()
#1  0x00007fff9392a7d2 in pthread_kill ()
#2  0x00007fff9391ba7a in abort ()
#3  0x00007fff8ce037bc in abort_message ()
#4  0x00007fff8ce00fcf in default_terminate ()
#5  0x00007fff8d9931b9 in _objc_terminate ()
#6  0x00007fff8ce01001 in safe_handler_caller ()
#7  0x00007fff8ce0105c in std::terminate ()
#8  0x00007fff8ce02152 in __cxa_throw ()
#9  0x00007fff8d992e7a in objc_exception_throw ()
#10 0x00007fff954c81be in -[NSObject doesNotRecognizeSelector:] ()
#11 0x00007fff95428e23 in ___forwarding___ ()
#12 0x00007fff95428c38 in __forwarding_prep_0___ ()
#13 0x00007fff9539f426 in CFStringGetLength ()
#14 0x00007fff953affac in CFStringFindWithOptionsAndLocale ()
#15 0x00007fff953b67da in CFStringHasSuffix ()
#16 0x00007fff953ed7f6 in -[__NSCFString hasSuffix:] ()
#17 0x00007fff8e1ab6da in __-[NSHost resolveCurrentHostWithHandler:]_block_invoke_7 ()
#18 0x00007fff90497c75 in _dispatch_barrier_sync_f_invoke ()
#19 0x00007fff8e1a9b5d in -[NSHost resolveCurrentHostWithHandler:] ()
#20 0x00007fff8e1aa5a7 in __-[NSHost resolve:]_block_invoke_1 ()
#21 0x00007fff90495a82 in _dispatch_call_block_and_release ()
#22 0x00007fff904972d2 in _dispatch_queue_drain ()
#23 0x00007fff9049712e in _dispatch_queue_invoke ()
#24 0x00007fff90496928 in _dispatch_worker_thread2 ()
#25 0x00007fff9392a3da in _pthread_wqthread ()
#26 0x00007fff9392bb85 in start_wqthread ()

Well, let’s see what it was trying to resolve, maybe that’ll give us a clue:

(gdb) frame 20
#20 0x00007fff8e1aa5a7 in __-[NSHost resolve:]_block_invoke_1 ()
(gdb) po $rdi
[redacted].panoptic.com

Yup, that would be the DNS name of my machine, based on the reverse DNS of my IP address on my local private network. However, it appears that forward DNS for that FQDN isn’t resolvable, due to a recent change in my DNS setup. Let’s correct the DNS so that the FQDN resolves, and test again.
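
A quick way to sanity-check both directions is something like the following (the hostname and IP here are made-up placeholders, since mine is redacted above):

$ dig +short -x 10.0.1.50            # reverse: IP -> FQDN (10.0.1.50 is a placeholder)
myhost.panoptic.com.
$ dig +short myhost.panoptic.com     # forward: FQDN -> IP; empty output means it doesn't resolve
10.0.1.50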

Still fails. Damn, not going to be that easy, huh?

(gdb) frame 13
#13 0x00007fff9539f426 in CFStringGetLength ()
(gdb) info frame
Stack level 13, frame at 0x1003d72c0:
 rip = 0x7fff9539f426 in CFStringGetLength; saved rip 0x7fff953affac
 called by frame at 0x1003d7770, caller of frame at 0x1003d72a0
 Arglist at 0x1003d72b8, args: 
 Locals at 0x1003d72b8, Previous frame's sp is 0x1003d72c0
 Saved registers:
  rbx at 0x1003d72a8, rbp at 0x1003d72b0, rip at 0x1003d72b8
(gdb) po $rbx
<__NSCFArray 0x1006036b0>(
[redacted],
panoptic.com
)

And, here it is. CFStringGetLength is expecting a CFStringRef and is instead getting handed an NSCFArray, and it blows up when CFStringGetLength tries to [str length] it.

I wish I knew someone who was doing OSX development at Apple to pass this bug along to …

Updated: I filed this as a bug in Apple’s Radar bug reporting system. It was assigned bug ID #13089829.

How young is too young … for WordCamp?

WordCamp NYC

I was thinking about attending WordCamp NYC 2012 this coming June, and I thought, “Hey, Charlie has a WordPress blog, maybe she’d like to attend, too.”

I mentioned this to Samantha, and it brought up the question: Is she too young to attend? I’m not so sure. Charlie’s quite bright for her age, she’s occasionally shown an interest in blogging, and seeing the wide range of possibility by meeting and interacting with other bloggers could give her new ideas, inspire her, generate additional interest, etc.

What say you, blogosphere? Are there likely to be too many age-inappropriate topics discussed within earshot? Will she just be ignored and written off as too young to interact with, and therefore be bored and discouraged?

How To Upgrade an AT&T Captivate to Gingerbread with Cognition 5 on a Mac

Disclaimer: Everything you see here is at-your-own-risk. If this doesn’t work for you, or it ends up bricking your phone, etc., it’s your own damn fault. Sorry. All I can say is that it worked great for me.

Prerequisites

You’ll need to download all of these files before you get started. If figuring out how to download them proves too hard for you, then you should not be attempting this project. You probably don’t understand enough of the basics, and will make a mistake that will likely brick your device.

Now, let’s get started …

1. BACK UP EVERYTHING FIRST!

Use Titanium Backup to back up all of your data and apps. Use ClockworkMod ROM Manager to back up your current ROM.

Don’t say I didn’t warn you …

2. Copy Cognition5-v2.zip to your internal SD card

You’ll need to do this first, before you start the following steps. Hook up your phone using your USB cable and use “mass storage mode” to copy the zip file over.
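
If you’d rather use the command line and already have adb set up, something like this should also work as an alternative to mass storage mode (this assumes USB debugging is enabled and that /sdcard/ is your internal SD card):

$ adb push Cognition5-v2.zip /sdcard/           # copy the zip to the internal SD card
$ adb shell ls -l /sdcard/Cognition5-v2.zip     # sanity check that it landed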

3. Reboot into download mode

Throughout this process, you’ll need to keep putting your phone into “download mode.” You’ll know you’re in download mode because the phone will display a screen with a big yellow triangle with the Android robot holding a shovel, the word “Downloading…” underneath it, and the words “Do not turn off Target!!!” at the bottom of the screen.

Android download mode screen

To put your phone into “download mode,” use the following steps:

1. Disconnect the USB cable, remove your battery.
2. Wait 5 seconds.
3. Connect the USB cable.
4. Press and hold the volume up and volume down buttons.
5. Insert the battery.

If you’ve done this right, you should be in download mode.

4. Flash stock Gingerbread using Heimdall

I’m not thrilled with the Heimdall frontend, so I just use the command-line interface in Terminal.app.

First, verify that Heimdall can see your Captivate in download mode. You should see:

$ heimdall detect
Device detected

If, instead, you see this:

$ heimdall detect
Failed to detect compatible download-mode device.

Stop now. Go back, figure out what you did wrong. You cannot proceed until Heimdall can see your phone connected via USB and in download mode.

***

To flash the stock Gingerbread ROM, we’ll need to prepare a little bit, first. Start by extracting the I897UCKF1.rar file, which should contain the following files:

CODE_I897UCKF1_CL273832_REV02_user_low_ship.tar.md5
Kepler_odin_new_part_JE3_S1JE4.pit
MODEM_I897UCKF1_CL1017518_REV02.tar.md5
odin v1.81.zip
SGH-I897-CSC-ATTKF1.tar.md5
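
For the extraction itself you’ll need an unrar tool, which doesn’t ship with the Mac; assuming you’ve installed unrar (e.g. via Homebrew or MacPorts), this does it:

$ unrar x I897UCKF1.rar    # extracts the five files listed above into the current directory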

We will need to extract the files from two of these tar files:

$ tar xf CODE_I897UCKF1_CL273832_REV02_user_low_ship.tar.md5

$ tar xf MODEM_I897UCKF1_CL1017518_REV02.tar.md5

Now we should have the following files: Sbl.bin, boot.bin, cache.rfs, dbdata.rfs, factoryfs.rfs, modem.bin, param.lfs, zImage. With our Captivate in download mode, we will flash these using the following command:

$ heimdall flash --repartition --pit Kepler_odin_new_part_JE3_S1JE4.pit \
    --factoryfs factoryfs.rfs --cache cache.rfs --dbdata dbdata.rfs \
    --param param.lfs --primary-boot boot.bin --secondary-boot Sbl.bin \
    --kernel zImage --modem modem.bin

Heimdall v1.3.0, Copyright (c) 2010-2011, Benjamin Dobell, Glass Echidna
http://www.glassechidna.com.au

This software is provided free of charge. Copying and redistribution is
encouraged.

If you appreciate this software and you would like to support future
development please consider donating:
http://www.glassechidna.com.au/donate/

Initialising connection...
Detecting device...
Claiming interface...
Setting up interface...

Beginning session...
Handshaking with Loke...

Uploading PIT
PIT upload successful
Uploading KERNEL
100%
KERNEL upload successful
Uploading MODEM
100%
MODEM upload successful
Uploading FACTORYFS
100%
FACTORYFS upload successful
Uploading DBDATAFS
100%
DBDATAFS upload successful
Uploading CACHE
100%
CACHE upload successful
Uploading IBL+PBL
100%
IBL+PBL upload successful
Uploading SBL
100%
SBL upload successful
Uploading PARAM
100%
PARAM upload successful
Ending session...
Rebooting device...

After the device reboots, you should have a working Captivate running stock Gingerbread ROM.

5. Flash Cognition 5 v2 using Heimdall

Again, put yourself in download mode using the instructions from the previous step, then flash the Cognition 5 v2 initial kernel:

$ heimdall flash --kernel Cognition5-v2-initial-kernel.tar

[...]
Uploading KERNEL
100%
KERNEL upload successful
Ending session...
Rebooting device...

From here, follow the usual steps to get into ClockworkMod Recovery, and install the Cognition5-v2.zip update that we copied onto the internal SD earlier in step #2.
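
If adb is set up, one possible shortcut into recovery (assuming the phone still boots and USB debugging is on) is:

$ adb reboot recovery    # reboot straight into ClockworkMod Recovery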

If all goes well, you should be able to reboot your phone after the update and be up and running on Cognition 5 v2!

***

If I’ve made a mistake in any of the steps, or have left out important details, feel free to help correct them by leaving a comment below.

Cross-posting from WordPress to LiveJournal woes

WordPress to LiveJournal woes

I hate to do this, but I’m finally fed up with something that’s been bothering me for a while …

I’ve used a series of various WordPress plugins that mirror posts to LiveJournal, and for the most part, they work great. However, there’s been an issue: whenever I edit a post, it appears to delete the LJ post and post it as new. Not a huge problem, except for the fact that any comments left on the old LJ post are lost, which is a real drag.

I’m starting to wonder if it’s not WordPress or the plugins that are causing the problem, but the blog post authoring app that I use: MarsEdit. I don’t think so, but I haven’t ruled it out yet.

So, I’m posting this entry and will be using it as a test entry in which I’ll try to get to the bottom of things, either fixing the plugin that I’m currently using or otherwise figuring out what the problem is. Therefore …

Don’t post comments on this entry, or at least expect them to disappear suddenly as I test.

Thanks.

Update: This post originally was posted to LJ as http://dossy.livejournal.com/68169.html. Here’s the first edit using the WP web interface directly.

Update #2: Great, WP updated the post and updated the LJ post without changing the post ID. Now, I’m editing the post and adding this update using MarsEdit. Let’s see what happens …

Update #3: Aha! After posting the last edit using MarsEdit, the post on LiveJournal disappeared. A new post on LJ was created, with the latest post content, though: http://dossy.livejournal.com/68462.html. Not sure if it’s really MarsEdit’s fault, or a bug in the WordPress XML-RPC interface that MarsEdit uses, or the way that MarsEdit uses it. For completeness, I’m going to note that the WordPress post_id hasn’t changed regardless of how the post is edited.

Update #4: This reminds me, I need to submit an enhancement request for MarsEdit, to refresh an individual post from the server. Having to refresh all posts and pages just to pick up the edits within MarsEdit that I make in the WP interface is quite cumbersome.

Update #5: I’ve also posted a thread on the Red Sweater MarsEdit forum about this issue, to see if I can get any troubleshooting help there.

Update #6: I posted detailed troubleshooting information to the forum, but the summary is that MarsEdit invokes the WP XML-RPC in a way that marks the blog post as unpublished, then published again, and that causes the WP plugin to delete the LJ post and then re-post it. I’ve gone and made some adjustments to the plugin to NOT do this, so hopefully folks commenting on LJ won’t have their comments so unceremoniously deleted. Ideally, MarsEdit shouldn’t be marking posts as unpublished then republished (seriously, what?) but since I can’t fix that, I can fix the plugin to not delete LJ posts in response.

Of course, Google takes G+ very seriously …

So seriously, in fact, that …

Google has just launched games on Google+.

Because, you know, having games on G+ is SO much more important than fixing ALL the horrible usability problems with G+ …

Every time I say something negative about G+, the rabid fanboys say something that goes like this (paraphrasing):

But, but, but, the G+ team is doing all they can to make the service, the experience, etc., better … just cut them some slack and give them time.

… and then, Google goes and does something like this.

Google Mannekin Pis

(credit: Accidental Hedonist on Flickr)

Launching games on G+ now is like pissing on G+ users and calling it rain.

It’s one of the features that users explicitly do not want. In the early days of G+, one of the things most commonly cited by the fanboys as giving Google an edge over Facebook was “the lack of games cluttering up the stream”.

Like I’ve been saying all along, everyone will slowly come out of the “ooh, new and shiny” haze they’re in, and realize how badly G+ sucks in comparison to Facebook.

Google+ will soon join the ranks of Google Wave and Google Buzz. Remember them? Yeah …

Transitioning to a new 4096-bit RSA GPG key

@pleia2 reminded me that I ought to generate a new GPG key, given the recent advances in cryptography, etc. So, I just did. The new key’s fingerprint is:

pub   4096R/8D9740AA 2011-05-18
      Key fingerprint = C535 6302 1171 987D 738E  BFD8 2B1A B2E1 8D97 40AA

My old key’s fingerprint:

pub   1024D/EE812431 2004-08-27
      Key fingerprint = 0B12 F42F 2263 0444 B147  2C66 3587 2D37 EE81 2431

Read the full text of my GPG key transition statement (signed by my old and new keys).
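
For anyone who wants to do the same, the process looks roughly like this (a sketch of the GnuPG commands of that era; transition-statement.txt and the keyserver are just stand-ins for whatever you use):

$ gpg --gen-key                                      # choose "RSA and RSA", 4096 bits
$ gpg --default-key EE812431 --sign-key 8D9740AA     # sign the new key with the old one
$ gpg --clearsign -u EE812431 -u 8D9740AA transition-statement.txt   # sign the statement with both keys
$ gpg --keyserver pool.sks-keyservers.net --send-key 8D9740AA        # publish the new key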

Android USB tethering on Mac OS X

If you’ve got an Android-based phone, and want to do simple USB-based tethering on your Mac, you will find this guide useful. For reference, I performed this with the following equipment:

  • Samsung Captivate on at&t running custom Cognition 4.1.1 ROM.
  • MacOS X 10.6.6

The standard disclaimers apply here: follow these instructions at your own risk. This may void your warranty. Discontinue use if a rash develops.

Getting started: Preparing Android

All we need to do on Android is turn on “USB debugging” – do NOT fiddle with any of the “USB tethering” options or anything else. So, go into Settings > Applications > Development, and check the box next to “USB debugging.”

Android screenshots

After turning on “USB debugging,” connect the device to your Mac using the USB cable.

Next, configure the Mac

After connecting the USB cable, your Mac should pop up a window saying that a device “SAMSUNG_Android” needs to be configured, like this:

Mac screenshot

This is a good sign. Click the “Network Preferences…” button, and find that device in the “Network” section of System Preferences, which should look something like this:

Mac screenshot 1

The first thing to do is enter the values for this screen. Use the following settings:

Mac screenshot 2
  • Telephone Number: *99#
  • Account Name: wap@cingulargprs.com
  • Password: cingular1

Next, click on the “Advanced…” button towards the bottom right of the window. That should bring you to a screen that looks like this:

Mac screenshot 3

First, click on “Generic” and select “Samsung” for the vendor. Next, click on “Dialup” and select “GPRS (GSM/3G)” for the model. Enter “wap.cingular” for the APN. Leave the CID as “1”, which is the default. When everything is done, the window should now look like this:

Mac screenshot 4

Once that’s done, click “OK” which should bring you back to the previous screen. Next, click the “Apply” button to save all these settings.

Let’s try connecting

At this point, you’re ready to try tethering! Go ahead and click on that “Connect” button. You should now see something like this:

Mac screenshot 5

If everything goes well, after 5-10 seconds, it should change to something that looks like this:

Mac screenshot 6
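
If you want to confirm from Terminal that the PPP link is up and carrying your default route, something like this works (the interface name may differ on your machine):

$ ifconfig ppp0                 # should show an inet address once connected
$ netstat -rn | grep default    # the default route should point at the ppp interface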

That’s it, now you’re tethered

Not terribly painful: no goofy software installation needed, and no hoops to jump through. And, here’s a speedtest for folks who are curious:

Speedtest

3.4 Mbit/s down and 330 Kbit/s up with a 400ms ping isn’t fantastic, but it’s more than adequate for getting work done while out and about.

I hope you’ve found this guide useful and are happily tethered now. If you have any questions or comments, feel free to share them in the comments below!

Added on 2012-11-20: A reader named Art emailed me about HoRNDIS: a USB tethering driver for MacOS X. This may be useful for folks who are still interested in USB tethering on OSX.