Indexing Test

You can safely ignore this post. I'm using it as a shortcut to get this page about vistuncular margarine indexed as quickly as possible.

If you're interested in what I'm up to, view source on that page. It's a test to see if search engines will hydrate and index content that comes over the wire as json and javascript. They should, but it's always worth double checking.




S3stat November/December Outage Postmortem

S3stat had an issue that stopped us from being able to run reports during the last 10 days of November.  We successfully backfilled all missing data, and were then hit with a related issue that stopped reports for 3 days at the end of December.  While we're backfilling that second hole, I'd like to take a moment to explain what happened, what we did about it, and why it shouldn't happen again (for a while at least).

What Happened

We run our Nightly Reporting job using Amazon's EC2 service.  We actually use about twelve different AWS services to do our thing, which shouldn't be a surprise considering that we're in the business of processing AWS data.  Specifically, we use EC2 Spot Instances for most of our workload, which is Amazon's way of selling their unused extra computing power on the cheap to customers like us who need "lots of computing, but not necessarily a fixed amount" for short periods of time. 

We need about 200 hours of computing to run a day's worth of reports, so each morning we ask Amazon to rent us up to 50 of their spare machines for a couple hours.  AWS is big, so they usually actually do have 50 extra machines laying around for us to grab, but we make a point of starting up some of their standard, full-price machines as well.  Just in case.

For the most part, Amazon's cloud stuff is pretty solid.  When they deprecate something, they give everybody lots of heads up (which explains why every single one of our customers got scary mails from them in September warning that they were going to deprecate an API endpoint next August, and that we (S3stat) needed to spend 5 minutes sometime in the next year bumping a version of a thing).  

So it was a bit of a surprise to discover that they had suddenly run out of Spot Instances to sell us. Our job started taking longer and longer to run, then eventually started failing entirely. The reason was that Amazon would turn on fewer and fewer of those 50 machines each day, and at first we didn't notice. Then they started turning on Zero extra machines each day, and we did notice. Because suddenly we had a small handful of machines working their little hearts out all day long to churn through as many reports as they could before the next day's job kicked off and added even more work.

For added fun, because they never finished the previous day, they would never give the "done" signal to turn themselves off, so they'd keep running the next day. Each day we would see another six machines fire up to help, and after enough days there would be enough of them to actually finish a whole day's worth of work. So they'd breathe an exhausted sigh and turn themselves off. Then the whole thing would start again the next day.

Anyway, when we figured this out, the quick solution was to stop asking for these (now) flaky Spot Instances and instead just buy full-priced standard machines for the whole workload each day. This got the job running again. We fired up an extra couple hundred of those good machines and pointed them at the backlog of missed work from late November until that was all back up to date. Life was good. Or at least good enough to go on holiday at the end of the year.

The Server Knows When You're On Vacation

One thing you may have noticed about S3stat is that we use the Royal "we" a lot. Because there's not a lot of actual We here. Mostly it's just me, so when We go on holiday, so does the whole company. Normally that's not a big deal because we built this thing to run on its own and not bother we when we's on vacation (which we is a lot). But still, when the thing does break, it does have an uncanny ability to do it when I'm sleeping in a tent in the Sahara desert. Which is exactly what happened this time.

You see, there was a reason that Amazon ran out of Spot Instances of the particular machine type we're using. And if I was smart enough I would have caught it a month ago. Amazon appears to have decided not to buy or maintain any more "m1.medium" machines. They've moved on to newer hardware, and are currently provisioning a fleet of "m6" generation machines. The old ones don't get used much anymore, and they're just sort of letting them die organically.

The reason my Spot Instance requests were going unanswered was that, most of the time, nearly all the machines of that type were already in use. And only a month later, it turned out that you couldn't even get 50 Standard machines of type "m1.medium". We'd ask for 50 machines, and AWS would respond with "ERROR: We don't have 50 machines to sell you right now. Hold tight while we go buy some more." You could ask for 1, and it'd all go well, but ask for 20 and it'd fail in a way that left you with zero running machines.
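
For the curious, the all-or-nothing part of that behavior is avoidable. This isn't how our job is actually wired up, and the AMI id below is made up, but the AWS SDK for .NET lets you ask for a range of machines rather than a fixed count, so a thin day hands you a partial fleet instead of an error:

using System;
using Amazon.EC2;
using Amazon.EC2.Model;

// Sketch only: ask EC2 for up to 50 machines, but accept however many it can find.
var ec2 = new AmazonEC2Client();
var response = await ec2.RunInstancesAsync(new RunInstancesRequest
{
	ImageId = "ami-0123456789abcdef0",    // placeholder, not our real image
	InstanceType = InstanceType.M3Medium,
	MinCount = 1,                         // take anything you can get...
	MaxCount = 50                         // ...up to the full fleet
});
Console.WriteLine($"Started {response.Reservation.Instances.Count} machines.");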

So the job stopped working again. For essentially the same reason as before. Because Amazon ran out of computers and wasn't going to do anything about it.

Moving Forward

That's a shame and all, but it's also good news. It means that we know what happened. And it suggests an easy fix: Ask for newer machines. That's what we did, and it worked nicely. There's still a bit of work to do in the future to get all the way to the latest and greatest hardware that AWS has to rent, but we're on "m3" now, so it'll take a while before those run out. We should have plenty of time to get upgraded the rest of the way before our plug gets pulled again.




Survivorship Bias

Have you ever noticed that nobody is allowed to be successful on the internet?

Woe unto you if you've bootstrapped a successful software company and want to help other people do the same. No matter how straightforward the steps you took were. If you write them down and share them, the Internet will respond by telling you that they had no bearing whatsoever on your success.

The real reason you succeeded was all those unfair advantages you had. You were born into money, you see. Or at least you had some money that you'd saved at some point. And you had a successful blog. And powerful connections pulling strings for you. And you did it in the past when, as everybody knows, everything was way easier. The important thing is that your situation was unique and therefore unfair, so nothing you have to say matters.

Worse, if you do manage to persuade these people that you were in fact not all that rich or successful when you started out, and that the only connections you had were even more broke than you were, well, you still don't get to give advice.

Because the sheer fact that you succeeded in building a business when other people have previously failed at building businesses is proof that your success was nothing but blind luck. What you're really seeing is Survivorship Bias and, again, everything you have to say is worthless.

You see this play out in roughly 100% of the discussions of successful bootstrapped businesses.

Which is annoying.

Because it misses the point.

You see, the biggest reason the business in question was successful was that our hero went out and built it. He actually tried. Chances are he tried lots of times with lots of ideas before he got this particular one to stick, but most importantly, he kept trying until he built a successful business.

The truth is that building a business is hard, and most of the time it won't work. You'll probably get better over time, but even then, you're still looking at maybe a 10% success rate. I personally know this all too well. Just about every software product I have ever built failed, dead. And yet I live comfortably on the profits of one of my products.

There's no contradiction there. And there's not really any Survivorship Bias there either. There's mostly just Math.

If you try something with a 10% chance of success enough times, eventually it will work. (At 10% a try, you're better than even money after seven tries and past 90% after twenty-two.) Think of it as Persistence Bias if you like, and it makes a lot more sense.

And that's really all those "how I built an X" writeups are describing: The distilled wisdom that got the author to the point where he had a 10% chance of succeeding, as it existed on the try that happened to be successful.

It's not something to dismiss. It's something to learn from.

Dismissing the author's wisdom and not trying to start a business will never, ever, in a million years, leave you with a successful business. Reading even the most ludicrously bad advice and using it to try to start a business will at least get you in the game.

You're nearly guaranteed to blow it the first few tries, but you'll learn from each of them. And if you keep trying, eventually you might be lucky enough to have some clown on the internet dismiss all your hard work as Survivorship Bias.

But remember, it's actually Persistence Bias they're accusing you of. And it's their loss.

Good luck!




What if your bootstrapped product dies?

Two things worth mentioning about bootstrapping. First, getting a product off the ground is hard. Second, nearly all of your products will fail.

The first bit is common knowledge. But nobody seems to prepare themselves for that second one. And when it happens, it often hits hard.

Don't give up though. I live primarily on the income of a single SaaS product. It is the fifth complete project that I built with the intention of getting to this point. That's only counting completed products, ready to go out the door or even launched and running. Call it a dozen or more if you add in things that only lasted a month or so.

Now I'm in the process of getting another solid income stream up and running. There are two more complete sites in the can that didn't work, another half dozen false starts, and one site currently about to launch that hopefully will be the one.

Perhaps you see a pattern here. There are entire years of effort written off up above, many with a lot more work put into them than you would ever want to see go to waste. You are virtually guaranteed to fail at least once at building something profitable. If you decide to pick up and try again, be certain that you'll have this same experience several more times before you finally hit on the thing that will pay for your kids' college fund.

But once you have that thing ticking away, it'll make all the effort worth it. It was sunny yesterday, so I ditched work for the day and went out rock climbing. It was sunny again today and the kids had the afternoon off school, so I took that off too. That's the lifestyle you're working towards.

Keep at it, and don't get discouraged by minor setbacks.

Good luck!




Working the System at a BigCo

Way back when I used to wear a tie, I worked as a salaried employee for a large consulting firm.

They made their money by charging their engineers out at an hourly rate on these big Time and Materials contracts, but they paid their employees a regular salary.

I'd record my hours on a timesheet so that they could bill the right clients, but anything over 40 hours a week was ignored on my paycheck. The understanding was that if ever your hours dropped below 40, you could bill an overhead number to make up for the difference, thus the fairness of not getting paid for overtime.

Anyway, after about a year of working there and putting in my share of unpaid overtime, I only managed to find 35 hours of work to do one week. No problem, I thought. I'll just call up HR and get the overhead number to put down for those extra 5 hours. I submitted my 40 hour timesheet and went home for the weekend.

Monday morning, bright and early, I found myself in a meeting with my boss and his boss, learning about the importance of always keeping one's plate full. Consulting, it seems, is all about utilization percentage, and while it may be necessary for, say, an E4 or E5 to spend overhead time on business development, an E1 like myself should always strive for 100% utilization.

To help learn this lesson, I was being put on some form of probationary "hourly" status, working part time until I could get my workload back up to speed. I'd keep my benefits, and I could work as few as 24 hours per week, but I'd only get paid for the hours I worked.

So naturally things picked up and soon I found myself working 50 and 60 hour weeks again, and as advertised, my new "hourly" status meant I was getting paid for all of them. Surprisingly, though, they were paying me time and a half for those overtime hours. Suddenly I was making a lot of money.

HR noticed that my hours were back up to expectations, so they sent up the necessary paperwork to get me back onto "Salaried" mode and I told them I'd get it right back to them.

1 month later, they sent that paperwork again, and I apologized for letting it go on so long.

Next month, my boss delivered it by hand and I promised to "get right on it."

Finally, after 180 days of billing 60 hour weeks and getting paid a hefty premium for all of it, I found myself back in that same conference room with my boss, his boss, and now his boss, all of whom wanted to know why I hadn't filled in that paperwork.

I laid out the math for them. Silence... Then uncontrolled laughter from all hands. Congratulations, son. But how about we fill out that paperwork right now?

Epilogue

Now, people I've told this story to have asked why it wasn't obvious to all those Big Bosses why I hadn't filled in that paperwork.

The answer is that they wouldn't be looking at it from that angle.

Probationary statuses are generally considered a bad thing in big companies, and are often used as a first step in building a case for termination. As a cog who's hoping to make a career in one of those big companies, you're expected to want to get off that bad status and back into "good worker" mode as quickly as possible. It had never occurred to anybody that they might have an employee who didn't care about internal promotion or their "career" at the company.

So no, the only reason they pushed it at all was that it was looking bad for them to have employees on the "underperforming" list. The fact that the penalty for underperformance was essentially a raise was something they had never even considered.




Things Worth Knowing About Your Trial Users

I've been thinking a lot about Funnels lately. You know, that thing where a bunch of people sign up for a Free Trial of your product, but only a handful of them eventually give you any money?

Yeah, that kind of Funnel. The Software as a Service Trial Funnel. It's totally exciting.



The cool thing about these Funnels is that you can kinda adjust either end of the thing. You can pour more Trialers in the top, and a certain percent of them will come out the bottom. More Trialers, more money. But you can also mess with the innards of that Funnel so that a higher percent of Trialers make it through (and again give you money).

Now, any fool can turn the spigot and pour more people into the top of the Funnel. It's as easy as cutting a Big Check to your local Ad Agency. Having no Big Checks on me, however, I tend to advocate first spending a bit of time adjusting the bottom end of that Funnel. All that requires is Hard Work, and I do in fact have some of that lying around.

And besides, if you have your funnel tuned to separate just about everybody who falls in the top from their Corporate Mastercard, you're going to get a lot more bang for your buck when you finally do write that check to turn on the spigot.

Here is one of the things I've been doing to widen that Funnel.

Spying on your users for fun and profit

I've been collecting a lot of data about my Trialers. I first wrote about this idea almost a year ago, and while I initially had grand visions of using Machine Learning on the data, I find that I can actually learn a lot of actionable stuff just by looking at it. Specifically, I can ask some simple questions and bucket users into actionable categories. Then I can act on that action!

The low hanging fruit

Let's start with the low hanging fruit. These are things anybody running a business (meaning you, incidentally) should be able to pull straight out of the database using only the information you need to be tracking anyway:

  • These Trialers just signed up.
  • These Trialers signed up a week ago.
  • These Trialers will expire in a few days.
  • These Trialers just expired without converting to Paid.
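
If your signup table already has the dates on it, those buckets are a couple of one-liners. A minimal sketch, assuming a Trialers list where each record carries SignedUpOn, ExpiresOn and IsPaid fields (your names will differ):

var today = DateTime.UtcNow.Date;

// Assumes a Trialers list where each Trialer has SignedUpOn, ExpiresOn and IsPaid.
var justSignedUp = Trialers.Where(t => t.SignedUpOn.Date == today);
var oneWeekIn    = Trialers.Where(t => t.SignedUpOn.Date == today.AddDays(-7));
var expiringSoon = Trialers.Where(t => !t.IsPaid && t.ExpiresOn.Date > today && t.ExpiresOn.Date <= today.AddDays(3));
var justExpired  = Trialers.Where(t => !t.IsPaid && t.ExpiresOn.Date == today.AddDays(-1));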

Notice how this is all the information you need to send out Lifecycle Mails.

Lifecycle Mails are a series of emails you send your users over the course of their Trial that remind them that they're signed up for your thing and should go log in and play with it. They can offer some quick training and walkthrough type stuff about the product. And, most importantly, they remind the user when it's time to pull out their credit card and actually pay for the thing. Lifecycle Mails are kinda table stakes for Onboarding, so if you're not doing them, you might want to bookmark this article and go get them up and running like right now.

Oh, and incidentally, I wasn't kidding about that "ask them to pay you" thing. Here's a little histogram I put together showing when the people who eventually paid for S3stat did so. Care to guess which days were the ones where I sent out the Lifecycle Mails with the "Don't Forget To Activate" message?



Digging a bit deeper:

You'll need to log user activity to pull these next ones off. I've written in depth about how I collect this data, but there are a lot of ways to do it. Collect it though, because this is where things start to get good:

  • These Trialers have never come back since starting their trial.
  • These Trialers haven't logged in for a week.
  • These Trialers never finished setting their account up.
  • These Trialers haven't yet done $ImportantAction
  • These Trialers were engaging with their Trial, but let it expire without converting to Paid.
  • These Trialers recently expired, without ever really engaging with the product.
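
Again, nothing fancy once the activity log exists. A rough sketch, assuming each Trialer carries a LastSeenOn timestamp and the set of action names they've performed (however you happen to store those; all the names here are stand-ins):

var now = DateTime.UtcNow;

// Field and action names below are placeholders for your own.
var neverCameBack      = Trialers.Where(t => t.LastSeenOn <= t.SignedUpOn.AddHours(1));
var goneQuiet          = Trialers.Where(t => t.LastSeenOn < now.AddDays(-7));
var neverFinishedSetup = Trialers.Where(t => !t.ActionNames.Contains("FinishedSetup"));
var missingTheBigOne   = Trialers.Where(t => !t.ActionNames.Contains("ImportantAction"));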

Oh dear, these folks are straying off. But notice how easy it would be to craft the email we send to each of those different groups that tries to steer them back on course?

"Hey, Ralph, I notice you haven't yet added any Foozles to your account. We can't really give you a good Foozle Tally without any Foozles, now can we? If you want to get the most out of your FoozleCounter.com Trial, here's a link you can follow to add your first Foozle.

Haven't had the time? No problem. Let me know and I'll add an extra couple weeks to your Trial.

And just in case you've forgotten why you signed up for your trial in the first place, here's a link to our Whitepaper on Foozle Tallies and their Impact On Industry.

All the best,"

Don't forget to encourage success

  • These Trialers just finished setup.
  • These Trialers just completed $ImportantAction.
  • These Trialers recently converted to Paid.

Finally, it's worth following up with your Trialers who are cruising along happily. Touch base, congratulate or thank as appropriate, and ask for feedback about how things are going and questions about the product in general.

What we're doing with all this is turbo charging that Lifecycle Mail campaign we talked about above. Instead of a simple timed list of mails to send off on certain days, we can build something that looks more like a State Machine, where each Trialer gets to move along his own path and receive just the exact amount of nudging so that he can successfully end up on your Purchase page ready to buy on day 28 of his Trial.
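
If it helps to picture that State Machine, it doesn't need to be anything fancier than an enum and a lookup. A sketch, with the states and mail names made up for illustration:

enum TrialState { FreshSignup, StuckOnSetup, Idle, Engaged, ExpiringSoon, Expired }

// Pick the next nudge based on where this Trialer sits, not just on what day it is.
static string NextMail(TrialState state) => state switch
{
	TrialState.FreshSignup  => "WelcomeAndFirstSteps",
	TrialState.StuckOnSetup => "FinishYourSetup",
	TrialState.Idle         => "ComeBackAndSeeUs",
	TrialState.ExpiringSoon => "DontForgetToActivate",
	TrialState.Expired      => "WeCanExtendYourTrial",
	_                       => null     // Engaged folks get left alone
};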

Wrapping Up

The point of all this is that you can get a lot of actionable information out of your Trialers if you take the time to collect a bit of data on them. Naturally, those of you who have been following my writing will remember that I'm actually working on a product called Unwaffle that does all of this right out of the box and am therefore probably selling something. But even if you make the gross blunder of not buying my thing, it's still really important that you do all of this.

Because my Trial Conversion Rate has gone from about 9% to just over 13% since I started collecting this data (so like 45% better). That Funnel is getting fatter by the month, and it's really starting to make a difference in terms of revenue.

I'd give it a go if I were you.




How I built a business that lets me live on the beach full time

I take a lot of vacation. I find it more enjoyable than working.

That's not to say that I don't like working. I write software for a living, which is crazy fun in and of itself. If you'd have told little 14 year old Jason, writing games on his Commodore 64, that he'd one day have a job doing basically the same thing, he'd have been pretty stoked.

But over the years I've found a lot of other stuff that I like doing even more than programming computers. I like to climb rocks, surf and travel through interesting parts of the world, and always found it hard to do that for, say, most of the year every year when I had to work for other people. So I set out to build a business with the heuristic of "Maximize Jason's Vacation Time".

This is a rather long and drawn out account of how I did that (and how you can probably do that too.)

Stage One: Contracting

I sat in a lot of felt cubicles in the 90s, riding various venture-backed startups into the ground and generally having a lot of fun doing so. Over time, I noticed the same story coming up again and again. About "the Contractors who had charged $150k to build the prototype" and "the Contractors they brought in for $200/hr to build this other system". The general tone was always sort of "That's unfair. We could have saved the company a ton of money if we'd have done this ourselves." But I came away with a different message:

I gotta figure out how to get into this "Contractor" thing.

So, as always happened in 1998, the company I was working for burned through its $5m over the course of a summer and laid everybody off just before hitting the ground. I had been doing some cool stuff for these guys, so I got a call from the CTO asking if I was interested in sticking around for a few months to mothball the tech so the investors could possibly sell it onward.

"I suppose", I said, "I could stick around on a Contract..."

Stage Two: Travelling Between Contracts

Now the cool thing about Contracting, in case it wasn't clear above, is that they pay you at least twice as much as everybody else to do the same job. Yeah, you have to buy your own health insurance, and they can drop you with no notice whatsoever. But it turns out that any job can and will drop you without notice. And catastrophic health coverage is like $100/month for a healthy guy under 40. The math kinda works out.

Being an Engineer, and thus knowing how to divide, I quickly worked out that a job paying "twice as much in a year" will also pay you "exactly as much in six months". And hey what do you know, they have Contracts that last just 6 months. And flights to Thailand for $800.

So I started taking time off between gigs.

I experimented a bit and found that if I moved to a high cost-of-living spot like Southern California to pick up those contract gigs, I could live cheaply and sock away even more of the higher rate those gigs would pay. After all, while rooms in LA go for $900/month instead of $300/month in Portland, you can make up that difference with a single day's pay at $75/hr.

And since rooms on the beach in Thailand were going for $5/day back then, it'd only take a few months of Contracting to pay for an entire year on the road in places like Southeast Asia and Africa, including occasional flights and beer money. I got pretty good at rock climbing.

As it worked out, though, I never made it a whole year on the road. I missed thinking. As fun and rewarding as it is to travel the world, the intellectual challenges you face trying to work out bus routes in a language you don't understand can never compare to the ones you face writing code. I'd end up emailing my contacts back home somewhere around the 6-9 month mark, and would eventually hop a flight back to LA to work another gig.

Stage Three: Travelling While Contracting

That was all way back in the early 00's. Even then it was apparent that this whole Internet thing was kinda taking off. While the concept of "guy on a beach with a laptop" was still mostly just a cheesy AT&T ad, it was starting to seem like it might not actually be that far off. I had some freelance clients in other cities. Some of them I'd never met in person. I was already working remotely. Why not see how remote I could go?

So I picked a client. Or rather, a client picked himself by dragging out a project when he knew I had already booked a flight. And we tried having me build his thing out of my bungalow on Tonsai beach.

It kinda worked. And it got my internal mathematician working again to determine that hey, I can afford to live here on this beach indefinitely if I can somehow arrange just one day of work per month.

That's good.

Stage Four: Software Product Business

The only downside of all this is that it still required quite a bit of working for Other People. Other People often have silly ideas about what needs working on, and the work is sometimes not as much fun as you could hope. But try as I might, I couldn't find a solution. The problem was, every time I stopped working for Other People, they stopped sending me money. Something had to give.

I started working on building a Software Product in my spare time (which, helpfully, I still had a lot of). But boy, did I suck at it. There are a lot of really good resources online these days to steer a fella clear of all the obvious pitfalls I stumbled across in places like picking product ideas, gauging interest, finding customers, etc. So now I do a lot of nodding along to articles with tons of common sense advice that anybody with half a brain could have figured out, but that had never occurred to me when I designed and built my first several products.

But I stuck with it and eventually came up with Twiddla (which lots of people really like) and S3stat (which people are actually willing to pay for). The second one pays to keep the first alive, which is handy because that one is a lot more impressive to show off in case I ever need to find another one o' those contract gigs in the future.

And the cool thing with Software as a Service (SaaS) products is that people keep sending you money every month whether you're working or not. QED.

It's still hard though. When you first build one of these "earn money in your sleep" SaaS products, it really doesn't make you all that much money. After a few months of being launched and signing customers, S3stat was still only bringing in something like $50/month.

It was tough to keep motivated to tweak, market, A/B Test and otherwise keep moving forward with this product stuff, especially looking at the old consulting rate and comparing it to the effective hourly rate from this f'ng side project.

But here's the thing. After a while, that $50/month started to look more like $500. Still nothing compared to consulting, but it paid me that $500 while I was off surfing in Morocco for the whole month. And the next year it paid me $1,000/month while I was backpacking through South America and building another (sadly failed) product on the laptop. It even kept sending me money at the same time I was doing consulting gigs for other companies.

Notice that it kept growing even when neglected. That's another advantage of SaaS. Until attrition really kicks in, you're going to be signing more customers than you lose. Even if you only sign a few per month, that's revenue that just keeps piling on top of itself.

Stage Five: Chillin'

Having a successful Single Player Software Empire does have one downside: Customers.

Having Customers can sometimes seem an awful lot like working for Other People. Customers often have silly ideas about what features your product needs. They send you emails asking to reset their password, and they don't appreciate it if your thing stops working for an entire week while you're taking a riverboat up the Amazon.

I'm lucky that S3stat has a pretty technical audience that doesn't generally need a lot of hand holding, so support was never something that needed my full time attention. Still, I'm an Engineer, so I like to automate everything that can be automated. That doesn't just include simple things like those password reset emails. It also means that whenever I get a support request, I make sure to Fix It Twice: Once to fix the issue for the customer, and another time in the code, documentation or UI to make sure that I don't ever see another support request for it. So as time goes on, there are fewer things that can interrupt my Days Off. (Days Off being defined as days where the sun is out or the kids are off school and I don't need a rest day from climbing or surfing, so hey, let's polish the product a bit).

Naturally, I still get bored of Chillin' The Most, so it's handy to still be good at all this computer programming stuff. I'm still cranking out new features for S3stat and Twiddla, and I have Unwaffle (the SaaS Customer Lifecycle Metrics thing I've been writing about lately) running in beta mode for a few test customers. Lots of code is getting written, but finally it's all happening on my time.

Which Was What We Wanted.

A footnote.

I don't mean to brag. I don't mean to boast, but I'm intercontinental when I eat French Toast.
Mike D

I see articles like this come past on the Internet every once in a while, and they are invariably received with hostility. "Sampling Bias!" "That guy could only do this because of $X!" "That wouldn't work today in this economy!" "If all this stuff really worked, why tell us about it instead of milking it for millions? He's probably just trying to sell us his book!" Lots of reasons why we should quickly dismiss everything that was said above and carry on the way we were before.

But a good plan instead might be to read those stories and see if there is an idea or two in there that might be applicable to other people. Perhaps, even, to you yourself.

Personally, I don't find any of the above to be particularly remarkable. As software developers, we're in one of the few professions where we actually can move to a tropical paradise and get paid to do our thing.

Consult remotely, build products and charge money for them. Pick a paradise beach that doesn't have tall hotels on it and is therefore just this side of free to live on. Bring some savings and start small.

I know that this works because I (and lots of other people) have done it. And no, I will not charge you money to tell you how to do it. Everything you need to know you can learn from the previous paragraph. It really is doable. Don't go out of your way to find a reason not to try.

Good luck!




Chasing Trucks

I bet you didn't know that I used to wear a tie to work every day.

Yeah, my first job out of school, when I was working as an Environmental Engineer for a Big Consulting Firm. This was way back in the early 90s when Computer Programming was still this low status thing that people did down in some dark basement, updating the modeling software or whatever while the Engineers up on the 14th floor ran the show. It was a good few years before they showed up with that giant sack of cash to give to anybody who could produce Hello World with Angle Brackets.

Anyway, the guy I worked for was one of those ever-smiling business guys who can talk you under the table any day of the week and leave you with a signed contract in your hand for whatever he happened to be selling at the moment. He had an Engineering degree, an MBA, and was working his way through Law school at night. He would have made a good Larry, so we'll call him Larry from here on out.

Larry was a bit of a serial entrepreneur, and always seemed to be setting one of his cousins up with a hot dog cart or something that he hoped to one day spin into an Empire to get out of all these two hour workdays, lunches and golf that comprised his day job at the Firm.

One day he shows up at my apartment with a used pickup truck and a bunch of pumps, tanks and hoses. He's going into the Industrial Truck Washing business and it's going to make him rich. You see, the City's stormwater regulations specify that all wastewater from industrial washing has to be collected and taken to a designated processing facility (we were Environmental Engineers, remember, so these were things a fella needed to know). That meant you couldn't just go out and wash somebody's truck in the parking lot. You needed a big, expensive, facility to deal with all that wastewater collection and disposal. And you needed those trucks to detour all the way to your place every time they needed washing.

That meant that Truck Washing was Expensive.

But Larry had invented The Device.

The Device was nothing more than a manhole-cover-sized aluminum plate with a hose coming out of it. But you could pull up a storm drain cover, drop this thing in its place, and easily pump all the water that collected on it into a tank in that pickup truck. Mobile Truck Washing was born. Larry was gonna be a millionaire.

Larry had a simple plan to find new business. He'd park his rig down in the Industrial District and wait for a dirty Semi or Delivery Van to roll past, then he'd follow it around until it eventually went back to its yard. Then, he'd turn his Schmooze Guy superpowers on to whoever owned that dirty truck and sell him on the virtues of having it (and all its dirty friends here in the yard) cleaned on a bi-monthly basis. This drummed up enough business that he was able to fine-tune the washing procedure and hire a couple guys (of the "guy hanging around in front of Home Depot" variety) to perform the actual day-to-day washing work.

But here's the thing. It didn't make him a millionaire. Not even close. Mostly it just took up his weekends.

The problem was those dirty trucks he was chasing. They were dirty for a reason.

Dirty trucks are dirty because their owners either don't care how they look or don't have enough money to do anything about it. Most of the people who agreed to have them cleaned at all decided that, two weeks later, they were actually still pretty clean, so maybe we'll just skip this time and you can come back later. Today, we'd call this an example of "Customer Churn".

But, but but. There do in fact exist trucks that people will pay good money to have washed. Those trucks are easier than you think to find. Because they're the clean ones.

Once Larry figured out that he should be chasing the clean trucks, things got better in a hurry. He started landing customers that needed washing every week. He started landing fleets. He landed the freakin' Post Office as a customer. By the time I took over his old job of wearing ties and playing golf with clients for The Firm, he already had three more trucks and ten staff.

There are lessons in there about picking markets, the advantages of selling B2B vs. B2C, and half a dozen other things. But all you really need to remember is Larry's quote.

Always chase the Clean Trucks.




SaaS Onboarding Metrics for Developers

This is a technical overview of the techniques you'll use to collect Metrics on your SaaS Trial Users so that you can leverage that information to stop your trial users from churning, which in turn will make your business a lot of money.

But be warned: There Is Math Here. If you'd prefer to step back a notch and learn what this whole Onboarding Analytics thing is all about, here's a much more entertaining overview I wrote about How I Quadrupled My SaaS Trial Conversions (with math).

OK. Just us nerds left? Cool. Let me break out the Entity Relationship Diagrams…

Step One: Collecting Data

First, a bit of background. What we're doing here boils down to collecting data about everything your users do during their Trial. Every interaction with your site, every lifecycle mail they open, even every time they didn't show up for an entire week. If it happens, collect it.

We'll then use that data to find patterns about what sort of things your successful, gonna-be-paying, users do versus what your unsuccessful, churn-bound users are doing. And once we have those patterns, we'll be able to flag up failing users so that we can guide them back onto the happy path.

So again, step one: we need to store everything they do. How about this for a schema:

  • Participant
      • ParticipantID
      • ParticipantIdentifier
      • ParticipantStatusID
  • Label
  • Action
  • ParticipantStatus

Here, a Participant is one of your trial users. You could skip this table if you want to hook actions directly to Users, but this gives us a bit of flexibility and keeps our example self-contained. The Label table will hold the names of the Actions that our Participants can do, and allow us to add extra information, and to link off to any supplemental tables we'll build later. And, of course, Actions holds one record for each thing that happens, ever. ParticipantStatus is just a lookup table for "Trial", "Paid", "Expired" & "Cancelled".
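
If it's easier to read as code, here's the same schema sketched as entity classes. The Participant columns are the ones listed above; the columns on the other tables are my guess at the minimum you'd need, so adjust to taste:

class Participant
{
	public int ParticipantID { get; set; }
	public string ParticipantIdentifier { get; set; }
	public int ParticipantStatusID { get; set; }
}

class ParticipantStatus    // lookup: Trial, Paid, Expired, Cancelled
{
	public int ParticipantStatusID { get; set; }
	public string Name { get; set; }
}

class Label                // the names of the Actions a Participant can do
{
	public int LabelID { get; set; }
	public string Name { get; set; }
}

class Action               // one record for each thing that happens, ever
{
	public int ActionID { get; set; }
	public int ParticipantID { get; set; }
	public int LabelID { get; set; }
	public DateTime OccurredOn { get; set; }
}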

Now we can build a little library in the backend of our app to quickly stash Actions for us. Ideally, we'll want to expose a single function to our developers that is so simple that they can't come up with an excuse not to use it:

Action.Track("Login");

It can really be that simple, as we'll know who our user is (since he's logged in), and we can probably ask somebody for the current time.
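
Behind the scenes, Track() doesn't need to do much. A sketch of roughly what it might look like; CurrentUser, Labels and Database are stand-ins for however your own app exposes those things:

// Stash one Action row: who did it, what they did, and when.
public static void Track(string labelName)
{
	var participantId = CurrentUser.ParticipantID;
	var label = Labels.GetOrCreate(labelName);

	Database.Insert("Actions", new
	{
		ParticipantID = participantId,
		LabelID = label.LabelID,
		OccurredOn = DateTime.UtcNow
	});
}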

Now all that's left is to sprinkle a few hundred of those calls throughout the codebase, every time anything even mildly interesting happens, and to expose a way for your marketing folk to manually add more Actions when they've interacted with your trialers in person. (Which, incidentally, is something that marketing folk actually do!)

Step Two: Finding Patterns

Here we go, straight into the Machine Learning stuff. Ready?

No, we're not ready. Sadly, it'll be months of boring data collection before we have enough for our Machine to Learn anything about. So unless you want to embarrass yourself with a Decision Forest that happily maps "User Logged In" to "Always Churns, 90% Confidence", we'll hold off on that for the moment. First, let's step back and just look at that data with our eyes.


Step 2A: Visualizing the data

Here are two of our Participants, cruising through their trials. Which would you say is more likely to convert?

It's amazing how much you'll learn about your users just by watching them. You'll find out that some people reset their password every single time they log in to your site. You'll find users who log in four times every day and check the same screen to see if anything has changed. And you'll find plenty of users who got stuck on something during the first 10 minutes and never came back.

Those are useful things to know, so it's worth building some simple reports right away to get that information to the folks who can use it. This is our Minimum Viable Product to justify collecting this data in the first place. Even though we know the Big Payoffs will come later on, it's surprising how big a win you can get just from this first step.

Here's a few report ideas to get you started:

  • Actions By User
  • Actions By Label
  • Trial Users (with action counts)
  • Paid Users (with action counts)
  • Expired Trials (with action counts)
  • Who Logged In This Week
  • Who Didn't Log In This Week
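
Most of those fall straight out of the Actions table. "Who Didn't Log In This Week", for instance, might look something like this (assuming a "Login" Label and an OccurredOn timestamp on each Action; adjust names to match your own schema):

var weekAgo = DateTime.UtcNow.AddDays(-7);
var loginLabelId = Labels.Single(l => l.Name == "Login").LabelID;

var wentQuiet = Participants.Where(p =>
	!Actions.Any(a => a.ParticipantID == p.ParticipantID
	               && a.LabelID == loginLabelId
	               && a.OccurredOn >= weekAgo));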

Step 2B: Statistics

    Even though true ML is still a while off, we'll have trialers start converting (or expiring) right away, so we can start building statistics.

    For a given user at the end of his trial, we have a list of things that he has done, and we know his outcome. That gives us enough information to determine which Labels tend to affect outcomes, and (to a lesser extent) by how much. We can at least state something along the lines of

    "38% of participants with the Label 'FilledOutProfile' converted to paid, whereas only 8% of participants without that Label converted."

    We can build little stories like that around all of our Labels, to see which are the most interesting. We can even combine them to generate something like a Score for a given Participant based on which Labels he has seen and which he hasn't.

    So, assuming we can compile a list of Participants that have finished their trials and either paid (which IsGood) or expired or explicitly cancelled (which !IsGood), we can calculate a few properties for our Labels like this:

    foreach (var label in Labels)
    {
    	label.InGoodCount = Participants.Count(p => p.IsGood && p.LabelIDs.Contains(label.LabelID));
    	label.NotInGoodCount = Participants.Count(p => p.IsGood && !p.LabelIDs.Contains(label.LabelID));
    	label.InBadCount = Participants.Count(p => !p.IsGood && p.LabelIDs.Contains(label.LabelID));
    	label.NotInBadCount = Participants.Count(p => !p.IsGood && !p.LabelIDs.Contains(label.LabelID));
    }
    

    And then all we need are a couple helper methods on our Labels:

    public double PercentIfIn()
    {
    	var denominator = InGoodCount + InBadCount;
    	if (denominator == 0)
    	{
    		return 0;
    	}
    	return (double)InGoodCount / denominator;
    }
    
    public double PercentIfNotIn()
    {
    	var denominator = NotInGoodCount + NotInBadCount;
    	if (denominator == 0)
    	{
    		return 0;
    	}
    	return (double)NotInGoodCount / denominator;
    }
    

    ... and we can generate the little snippet of text describing the performance of a given Label. While we're at it, we can keep track of the expected conversion rate:

    var goodCount = Participants.Count(p => p.IsGood);
    var badCount = Participants.Count(p => !p.IsGood);
    var expected = (double)goodCount / (goodCount + badCount);
    

    ... and we can use the variance from the mean to assign an "Interestingness" value to each label that we can sort by when presenting them or using them to "score" Participants:

    // for each Label...
    var up = Math.Max(PercentIfIn() / expected, expected / PercentIfIn());
    var down = Math.Max(PercentIfNotIn() / expected, expected / PercentIfNotIn());
    Interestingness = Math.Abs(up + down);
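
    One naive way to roll those Label numbers up into a per-Participant score (not the only way, but an easy place to start) is to average, across every Label, the conversion rate we observed for people who matched him on that Label:

    // Score a Participant by averaging, over all Labels, the observed conversion
    // rate of past Participants who matched him on each one.
    public double Score(Participant p)
    {
    	return Labels.Average(label =>
    		p.LabelIDs.Contains(label.LabelID)
    			? label.PercentIfIn()
    			: label.PercentIfNotIn());
    }

    Anything that comes back meaningfully below that expected conversion rate is a Participant worth rescuing.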
    

    Part Three: Intervention

    Now that we have all the collection and analysis ticking away, we'll have a bit of Calendar Time on our hands, waiting for enough Participants to stumble through their trials for our conversion statistics to be meaningful. That gives us some time to automate things.

    We're going to want a nightly job of some description to come through and calculate all those statistics for our Labels and apply them to our Participants. We'll want to generate a report showing which of them are likely to convert and which of them are pretty much hopeless (and why.)

    It'd also be cool if that nightly job fired off some webhooks for certain conditions so that we could set up worker tasks to do things like send off rescue mails to problem users, or at the least notify that guy in Marketing so that he can do something drastic like actually call them on the phone or something. You can even build him something that plugs directly into his CRM and Calendar so that he'll have up to date information and action items ready to go each morning.
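
    The webhook half of that is only a few lines. A sketch; the endpoint, the cutoff and the Score() call from the sketch above are all stand-ins for whatever you actually wire up (needs System.Net.Http, System.Text and System.Text.Json):

    var threshold = 0.10;    // whatever cutoff your own numbers suggest
    using var http = new HttpClient();

    foreach (var p in Participants)
    {
    	var score = Score(p);               // from the scoring sketch above
    	if (score >= threshold) continue;   // only bother about the strugglers

    	var payload = JsonSerializer.Serialize(new { participant = p.ParticipantIdentifier, score });

    	await http.PostAsync("https://example.com/hooks/trial-at-risk",
    		new StringContent(payload, Encoding.UTF8, "application/json"));
    }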

    The important thing is that by this point you'll be the hero, having demonstrated that this whole Customer Lifecycle Metrics thing is real, and that it will in fact make a measurable-in-dollars difference to your business by steering problem trialers back onto the happy path.

    It's also worth noting, and I'm almost embarrassed to mention it since we're both software folk and could easily build all this from scratch, but just putting it out there: This is all built already.

    You can sign up for a Free Trial today over at Unwaffle.com, and have it all up and running 20 minutes from now. We have drop-in client libraries for every programming language known to man, so all you need to do is sprinkle those Track() calls around in your code then sit back and watch the data flow in.

    But either way, build or buy, that's how it works. And you really should be doing it. It'll make you a lot of money.




    Developers need to learn to negotiate.

    Computer Programmers make me sad sometimes. We're so good at so many things, and so good at learning whatever we want, but there are certain things we simply refuse to get better at, despite them not being all that hard, to our own detriment.

    Talking to girls and negotiating salary are two things that we're historically bad at. We're actually bad at lots of things, but these two are unique in that rather than finding ways to get better at them, we instead proclaim them to be Impossible Things For Computer Programmers To Learn and refuse to even attempt to get a little bit better at them.

    This is really strange.

    If you were bad at writing Objective-C code, but it was 2008 and knowing Objective-C would let you answer "Yes" to any of those dozen emails a week you were getting offering $150/hr to write Objective-C, what would you do? Would you perhaps spend a few hours learning how to do it?

    Now what if you were bad at negotiating?

    "No way. Negotiating is evil. They should just pay everybody the same. That'd be fair."

    Or dating?

    "No way. Talking to girls seems hard and scary. And I'd actually have to walk over there and talk to one of them."

    But here's the thing. It's only hard because you've never done it. Go do some of it and you'll find it's really not that bad.

    Developers have a reason for all this, of course. They like things to be "fair". They like meritocracies. The idea that somebody off the street could just talk his way into making more money than somebody with more skill just comes off as wrong. An affront to the way the world should work. Immoral.

    Salary Transparency is all the rage these days because it presses those fairness buttons in Engineers. Negotiation is unfair, and I'm no good at it anyway. But no need to bother with all that. Salaries are set. Everything is "Fair". Sign me up!

    Transparency also sounds good from the employer perspective. You get to control the negotiation procedure (by completely eliminating it) and set salaries how you like. Take it or leave it.

    Unions also resonate nicely with Software guys. Again, negotiation goes away and everybody is paid "fairly" according to some scale. Everybody wins.

    But not everybody wins. In fact, both these ideas fall down because roughly 50% of the developer population loses in any arrangement where you set all salaries to the same level. That's how distributions work.

    Imagine you are an engineer who's genuinely good at what he does and knows how to negotiate a rate that reflects that. Why would you leave, say, 100k on the table to go work for a place like Buffer, where even the CEO is only making $150k/year? Why would you join a Union where your bill rate was set based on the average joe with the same number of years experience? In either case, you're worth a lot more on the open market, so you'd opt out.

    That also means that shops with Transparent Salaries may have a hard time attracting the best talent (since only a guy who knew he was in the bottom 50th percentile would seek them out), but that's their problem not ours, and a bit off topic for today.

    We're talking about fairness, and how negotiating is amoral because it places negotiating skill ahead of technical skill in determining how much you get paid, and how companies are ripping developers off by paying them less than they deserve just because they're bad at negotiation.

    But I disagree.

    • I don't agree that negotiating is "amoral", nor that rewarding people for being good at negotiation is "unfair", nor that you can get "ripped off" by receiving a salary that you agreed to.
    • I think it's fine that people can negotiate with one another, and that being good at negotiating has advantages.
    • I think the most productive thing to do in a world where people negotiate is to learn some basic negotiation skills so as to live in that world.
    • I think the least productive thing you can do in that world is to try to make everybody stop negotiating for things. Doing so can only make things worse for you, since everybody else will continue negotiating after you stop.

    Ah, but you counter:

    Negotiation is stressful, a waste of time and not interesting at all for many software developers. It would just be better for the public good for no salary negotiations to take place, because people would worry less about being underpaid. Less stress is almost always good for work productivity. And there will be more justice.

    Setting aside the disputable bit where the author refers to a process that can add millions of dollars to one's total career earnings as a "waste of time", have a quick re-read of that comment and notice how he uses terms like "public good" and "justice." We're back to capturing the in-built developer-brain need for things to be fair.

    As developers, we tend to look for technical solutions to things, so when we see a situation like the one above, we come up with ideas to "fix" it and make everything fair. Transparent Salaries, no-negotiation workplaces, a precise formula translating skill to compensation. Keep building technical solutions until we've coded the problem out of existence. That's how things are done in our world.

    But a more productive approach when confronted with the fact that "negotiation skill seems to have more bearing on how much I can make than technical skill" might be to just take advantage of that fact. If tech folk are as generally bad at negotiating as everybody seems to agree, the best course would seem to be to simply get better at negotiating.

    That's actually quite easy to do. And if you do so, you'll make a lot more money.

    That's a good thing.




    Jason Kester
    @jasonkester

    I run a little company called Expat Software. Right now, the most interesting things we're doing are related to Cloud Storage Analytics, Online Classrooms, and Customer Lifecycle Metrics for SaaS Businesses. I'll leave it to you to figure out how those things tie together.



