Alex Sharp
engineer / designer / cook

I work at Zaarly.

Find me on Twitter and Github.


Side Project Hacks: Don't Sweat the Details June 12 2013

Conserve momentum. Don't sweat the details. Focus on shipping.

Note: This is a cross-post from Medium.

Lately, I’ve been spending a lot of my nights and weekends working on a side project called Octocall, a service to simplify meetings and conference calls for businesses.

Working on a side project is a lot like working on a very early-stage startup: It’s pre-launch, time and resources are extremely limited, and the primary goal is to ship something and learn.

This point is important to reiterate: Side projects are not about perfecting the product, nailing the implementation, besting the competition, being feature-complete, or making money.

The goal of a side project is singular and simple: ship something.

Once you accomplish this critical first step of launching, only then should you worry about revenue, user acquisition, competition, etc.

Details, Time and Momentum

Maintaining momentum is critical in this unique pre-launch stage. Focus on core value and the MVP instead of sweating the details.

Time and momentum are the two most precious resources on a side project. Get caught up tweaking that shade of gray for 15 minutes, and you’ve lost precious time instead of making progress towards shipping. Nailing that shade of gray isn’t real progress, it’s just tinkering.

These types of details are a massive time-suck and they deplete momentum. Build a sense for recognizing these rabbit holes and avoid them like the plague.

As an example, I was recently working on a “quick call” feature for Octocall. This feature is for users who want to jump on a call with others immediately. In order to select the call participants, an autocomplete UI component seemed appropriate.

I’ve always found the existing open-source autocomplete plugins clunky to work with, so from the get-go I ruled out wasting time integrating an existing library, and I decided to write my own.

The mechanics of a basic autocomplete UI component are pretty straightforward:

  1. Take in user input from a form field

  2. Send the user input to the server to search for matches

  3. Display matches in a drop-down selector below the text input field

  4. Allow the user to make a selection

Octocall autocomplete UI
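Step 2 is the only piece that touches the server, and it can be a single, boring endpoint. Here's a minimal sketch of what that might look like in a Rails app -- the controller, model, and column names below are hypothetical stand-ins, not Octocall's actual code:

# Hypothetical Rails controller backing the autocomplete field (step 2 above).
# The text input sends its current value as params[:q]; we return a small
# JSON array of matches for the drop-down to render.
class ContactsController < ApplicationController
  def search
    query = params[:q].to_s.strip

    contacts = if query.empty?
      []
    else
      # A simple case-insensitive prefix match is plenty for an MVP.
      Contact.where("lower(name) LIKE ?", "#{query.downcase}%").limit(10)
    end

    render :json => contacts.map { |c| { :id => c.id, :name => c.name } }
  end
end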

Little Details

After about 1-2 hours, the core functionality of the autocomplete dialog worked. You could type in a few letters, receive results from the server, select someone and they’d be added to the call. At this point, my momentum was high, having made great progress in a short timeframe.

But the part that tripped me up was the selection of one of the participant matches. Personally, when I see an autocomplete component, I like to be able to make the selection without using the mouse.

There are two ways to do this: with the “up” and “down” arrow keys, and with “Ctrl-p” and “Ctrl-n”, if you’re on OS X. I like the latter much better because your fingers don’t leave the home row, and this UI convention is supported throughout OS X whenever text input is accepted. So, I figured that it’d be cool to support that functionality when selecting a contact for an Octocall.

It’s important to note that at this point I had a fully functional UI component. The thing worked. The basic “up” and “down” arrow key navigation worked perfectly. I was riding high on a wave of momentum. But the ctrl-n / ctrl-p thing was irking me. The detail-oriented nut in me was crying out: “keep going!”

So I did. And I got really, really close. And then I gave up.

After about 30-45 minutes of working on the ctrl-p / ctrl-n functionality, I realized I was wasting precious time on the tiniest little detail, one that most people don't even know exists and probably wouldn't use if they did.

I did a quick final test -- does Gmail let you navigate autocomplete results using this technique? Nope. Cut it. Even if they did, it wouldn’t have mattered. I should never even have gone down this road.

The only thing left to do was to cut my losses, try to rebuild my momentum and get back to work. Because my little rabbit hole expedition wasn’t work, it was tinkering with meaningless details, gambling with my momentum. And I busted.

The core of the autocomplete component works really well. It lets you jump on a call with as many people as you want in seconds. That’s what’s important, that’s the core value of the feature.

Momentum: The Hidden Cost

Coming out of one of these tangents, you feel like you were in a trance and you just returned back to your senses. You’re left with this feeling of, “wait, shit, I just burned 30 minutes trying to build this silly little thing, and it was mostly to impress myself to see if I could do it, and I didn’t even finish.” And now you’re a little more bummed than you were before, and you’ll probably take a break to stretch, and then spend another 10 minutes making a cup of coffee, and maybe playing with the dogs for a few minutes while you’re up, and maybe make a PB&J (and God, how I love a PB&J).

The hidden cost of these little rabbit holes isn’t time -- it’s momentum. As precious as time is at this stage, momentum is your fuel, and it’s nearly impossible to replenish. And chasing down rabbit holes drains your momentum faster than anything.

A gut full of momentum is like rocket fuel for progress and productivity, but an empty tank means yet another unfinished side project. So preserve your momentum as if it were an irreplaceable precious metal, the one and only thing that will enable you to ship your side project. Because it is.

A Brief Note on Tools February 16 2013

Lately, I've been doing a lot of home renovation tasks. Hanging cabinet hardware, sanding, cutting and staining wood, hanging lights, and much more. Spending more time building things in the real world has refreshed my perspective on the tools we rely on to accomplish things.

Here's an example: I recently had to make a few modifications to our fence to make sure the dogs don't get out. Through this process, I needed to cut wood -- nothing fancy, just standard cuts of 2x4's. I also needed to drill new holes and remove old screws.

The tools required here are, abstractly:

  • A tool that has the ability to make relatively smooth cuts of wood
  • A tool that can drill holes in wood
  • A tool that can put screws or nails into wood

To make basic cuts of wood, there are two options: a hand saw or a power saw. Both will do an adequate job of cutting the wood, but one will require far more time and effort than the other (hint: it's the hand saw). Luckily, I picked up a decent power saw a few months back, so making those cuts will take seconds, not minutes. And I'll exert almost no effort, compared to manually sawing the wood.

Now, drilling holes. Believe it or not, people did drill holes before power drills, and they did it with a manual hand drill:

Now, obviously, I have a power drill. But at some point, once the use of my drill became second nature, once I had taken it for granted, it hit me: It wasn't so long ago that people were forced to use the manual drill. They had no other tool. Nothing better existed. It had not yet been invented.

We're not really talking about wood, are we?

The question I'd like to pose is: in software engineering, are we in the age of the manual drill, or the power drill?

On the whole, sadly, I think we still live in an era of manual drills. Many of us still write code with text editors that were conceived in the 1970s. In the world's most widely used language, we still have errors like this:


[source]

Obviously, some toolkits are more advanced than others. In my opinion, the iOS SDK and toolkit (Xcode, Interface Builder, etc.) are way ahead of the web application development toolkit.

But it's not about IDE vs. text editor -- that flame war entirely misses the point, even though the editor question is certainly part of it. Just like with my fence: I wasn't cutting wood for the fun of it, I was fixing the fence, and cutting wood was simply part of that process. We should be focused on which tools make us most effective and efficient at creating products.

Personally, I hope we're entering the age of the power drill:

--- Thanks to Brent Dillingham for reviewing this post.

Thoughts On Process August 23 2012

Process is a funny thing. Most of us have some sort of process we go through every day. Even if unintentional. We wake up at roughly the same time, we go to work, maybe we get some exercise a few times a week, we come home, eat dinner, go to bed, and do it again. When we start deviating too much from our day-to-day regimen, we start to feel unfocused and disorganized.

Yet, we view process at work through a much different lens.

Startups, by and large, seem confused about what process should mean for them. For many of us, the idea of "process" evokes fearful notions of bureaucracy and inefficiency. Many of us have worked in jobs where the processes in place were the manifestation of someone who was seeking control more than they were interested in helping the team get actual work done.

The gut reaction many of us have to process is a good thing -- no one wants to work at a company crippled by inefficient bureaucracy, or have pointless obligations that get in the way of our actual work. More importantly, a startup's very existence and sustenance rely on its ability to move quickly and make decisions so it can accomplish its one goal: finding a customer. For a startup to be riddled with bureaucratic process would almost certainly be a company killer.

Un-process

Unfortunately, the result of this fear of inefficient bureaucracy is often a misdirected desire for un-process. Often, young companies with a decentralized decision-making process end up with an overall lack of process and indecision throughout the company.

We look at companies like Amazon, Github and Facebook and sometimes assume that their decentralized decision-making processes mean that they have very little process guiding their day-to-day work. The reality, however, is that these companies are deeply dedicated to efficiency and getting actual work done. They know that this is crucial for a software company to build high-quality products that their users will love. They throw out the bad processes and ways of building software, and strengthen the good.

So on one extreme, there can be way too much process -- bureaucracy. The other extreme, un-process, however, is just as destructive and inefficient. Un-process quickly leads to indecision, internal political battles and infighting, lack of cohesion and communication, and overall chaos. People stop sharing what they're working on, stop collaborating with each other to improve the quality of their work, and eventually, they'll stop caring altogether. The quality of the product suffers, and the speed with which things get shipped slows measurably.

The company won't last long in this state.

So, neither extreme, un-process nor bureaucracy, is a place you want to be. On one extreme, we have pure freedom, a process of un-process. On the other extreme, we have the type of inefficient bureaucracy typically associated with large corporations, universities and governments. A startup's process needs to be somewhere in the middle, probably left-of-center, leaning towards the pure creative extreme.

I believe developing the right process is one of the most important things a young and growing company can do. So here's a simple definition of process:

Process is a structured and repeatable way by which you get work done.

That's a simple enough concept, yet executing on it is so much harder than it reads.

Enough with the Platitudes, Concrete Please

Early-stage startups are typically made up of bright, open, creative people, looking to have an impact and do something big. The idea of "process", even with such a simple definition, sounds constricting and anti-creative. However, the right process, one that evolves from the real problems we have, makes us better at our jobs, not worse.

In most types of creative work, we already have forms of process. On the Zaarly engineering team, we judiciously use pull requests in our day-to-day development workflow. We write code for a feature, we create a pull request, solicit feedback from other team members, iterate on the feedback we receive, and when another team member gives a "+1" to the feature, we land it.

Build, gather feedback, iterate, ship. We repeat this process day-in and day-out. Simple enough, yet this process is both highly intentional and efficient. It serves our goal as an engineering team: shipping high-quality code, quickly. The code review aspect of the process serves as a collaborative sanity check for the person doing the work, and a learning experience for the rest of the team. At some point, as a team we decided that all non-trivial code changes should go through this workflow, and it's the best way I know of to build software.

So, we have a process in place, and it makes us more efficient, not less. It makes our jobs more enjoyable, not less. The benefits of peer review help us grow and progress as engineers. In short, this process helps us do our jobs better.

Some Guidelines

As with most things, the devil is in the details (cliché catch phrase and all). Waxing poetic about platitudes on process in a blog post is pretty damn convenient. Here are some general guiding principles I use when thinking about process.

Things to promote:

  • Process should evolve naturally out of real problems your team has.
  • The aspects of your process should be easily justified, and explainable in one or two sentences.
  • Process should make those participating in it happier.
  • The goal of any process should be to help people do their jobs better.
  • Your process should be flexible, and open for modification.
  • Things that facilitate communication and creativity amongst the members of your team (examples: Basecamp, Github).
  • Process should promote things that provide feedback amongst the team (pull requests, design critiques, retrospectives, etc) and help individual members grow in their craft.
  • Always favor lightweight and asynchronous over hard requirements.

Things to avoid:

  • Process for the purpose of performance evaluation of employees.
  • Process so someone can stay clued into "how things are going".
  • Process motivated by anything other than helping people do great work.
  • Meetings to solve problems (and meetings in general).

At the end of the day, there is one thing that matters at a startup: finding your customer. Your job is to build a product that customers will enjoy, and for which they will hopefully pay. Any process must support this singular goal above all other things.

Build, get feedback, iterate, ship. Rinse, repeat. Everything else is just noise.

Slides from "Refactoring in Practice" - Ruby Hoedown 2010 September 8 2010

Building a Refactoring Talk - Part 1 August 12 2010

Lately, I've been working on a talk I'm doing on refactoring. I'll be giving this talk at Ruby Hoedown 2010 in early September in Nashville, TN and at Sunnyconf in Phoenix, AZ later in the month.

The goals for this talk are extremely aggressive, but it's helpful to lay them out:

  1. Lay out core principles of refactoring
  2. Present a conceptual framework for executing small refactorings
  3. Demonstrate common refactoring techniques
  4. Demonstrate web-app specific anti-patterns and techniques

At its core, this talk will not have a "cookbook" agenda, but rather a "patterns" agenda. My hope is to impart some piece of lasting knowledge to the audience so they can apply it in their own domains. A "recipe" approach wouldn't really work here, as refactoring is inherently a "before and after" process.

Examples, examples

One of the most difficult parts of constructing this talk is selecting appropriate examples through which to demonstrate refactoring techniques. One of the primary considerations I've been wrestling with is that of domain complexity in my examples.

Should I "dumb down" the domain?

I'm hesitant to "dumb down" the domain, mostly because one of the largest barriers in large refactorings is domain complexity itself. Plus, I find overly contrived examples in presentations off-putting; they leave me wanting more, and I don't want to exclude portions of the audience over an oversight like that. And since I already work in a fairly complex domain, constructing these examples would be a fairly simple task.

On the other hand, there's also a good chance that an overly complex domain will work against absorption of the key subject matter, just in a different way. I wouldn't want the audience or reader to struggle to understand the refactoring concepts and techniques simply because they're too busy trying to grasp the dizzying complexities of fault-tolerant aerospace systems, or whatever. Still, while domain complexity indeed adds to the overall difficulty of performing large refactorings, refactoring is a sufficiently difficult topic on its own -- it doesn't need any help in that area.

So, for the example material, I'm hoping to find an adequate middle ground: examples that reside within domains complex enough to effectively demonstrate large-scale refactoring techniques, but that neither put people off with overly contrived scenarios nor paralyze attention spans with the minutiae of some extremely complex domain.

The hard part, of course, is identifying that middle ground. And I am bad at gray areas and middle grounds.

This seems like enough direction to run with. Now, off to find that perfect domain. As always, thoughts and comments are welcome.

Slides for Practical Ruby Projects with MongoDB at Ruby Midwest July 17 2010

Here are my slides for "Practical Ruby Projects with MongoDB", a talk I gave at Ruby Midwest. Enjoy.

Testing rails view helpers July 11 2010

Rails view helpers are easy to overlook in your test suite, but they want your testing love just like your models. I generally use view helpers for one of two reasons: 1.) when I'm trying to output something that's a bit too hairy to leave in a view or 2.) when I want to DRY up some view code. Both of these reasons warrant testing this code.

At first, view helpers can be kinda awkward to test, but in this post I'll show that testing view helpers can be quite simple.

Mongoflow == Rubyflow + Mongoid + Rails 3 + Refactoring Love

Recently @theriffer and I have been hacking on mongoflow, a MongoDB link aggregator heavily inspired by rubyflow. In fact, the super-awesome Peter Cooper was gracious enough to open source the original rubyflow codebase. We've decided to go with mongoid as the Mongo ODM (object-document mapper) and rails 3. Refactoring Peter's original version of Rubyflow to rails 3 has been a fun little project. (To view the code for MongoFlow, check out the github project.)

Rails 3 form error helpers

As I was clicking through the app, I noticed some deprecation warnings flying by in the server output. As it turns out, the error_messages_for and f.error_messages methods have been deprecated in rails 3 beta 3 and moved to plugins. I was never really crazy about the default error message style, so I decided to write my own super-simple helper method to show error messages for an object.

Keep in mind that rails view helpers are simply modules that get included into an ActionView template. Thus, the methods you define in a helper are available to you in templates as instance methods. However, we often use helpers to abstract creating HTML markup, and we often do this by leveraging the rails view helper methods (such as content_tag).

So in order to test view helpers that use the rails helper methods, we need to simulate the scope in which we would otherwise be calling our helper methods - a view template instance.

module ApplicationHelper
  # Renders an object's validation errors as a <ul class="errorExplanation"> list.
  def form_errors(obj)
    content_tag :ul, :class => 'errorExplanation' do
      obj.errors.full_messages.collect { |msg| content_tag :li, msg }.join('')
    end
  end
end
 
describe ApplicationHelper do
  class MockView < ActionView::Base
    include ApplicationHelper
  end
  
  describe '#form_errors' do
    class FakeItem
      include Mongoid::Document
 
      field :title
      validates_presence_of :title
      validates_length_of   :title, :minimum => 8
    end
 
    before(:each) { @template = MockView.new }
    after(:all)   { FakeItem.destroy_all }
 
    it 'should display multiple errors hash in a list' do
      item = FakeItem.create
      msg1 = item.errors.full_messages[0]
      msg2 = item.errors.full_messages[1]
 
      @template.form_errors(item).should == "<ul class=\"errorExplanation\"><li>#{msg1}</li><li>#{msg2}</li></ul>"
    end
 
    it 'should display one error in a list' do
      item = FakeItem.create :title => "2short"
      msg = item.errors.full_messages[0]
 
      @template.form_errors(item).should == "<ul class=\"errorExplanation\"><li>#{msg}</li></ul>"
    end
  end
end

Rails to the rescue!

Luckily, creating an instance of ActionView::Base is absolutely trivial. All we need to do is create a new class that extends ActionView::Base, include our helper module, and instantiate it. That's it. No params, nothing. Boom. Awesome. That's exactly what the MockView class at the top of the spec does.

By extending from ActionView::Base, we get access to all the other helper modules that rails would give you in a view template, in our test suite. If we had just created a plain-vanilla ruby class for MockView, trying to call the content_tag method on it would raise a NoMethodError.

The second class, FakeItem, is an actual valid Mongoid model, with a title field and two validations on that field.

The reason I've created a proper Mongoid class (as opposed to stubbing certain methods or creating a mock object) is that Mongoid leverages the new ActiveModel API, which is where validations and validation errors (among other things) are handled in rails 3. The implementation of the form_errors method depends on the ActiveModel API, and I would rather have access to the real errors objects than attempt to stub out the portion of the API we're using in form_errors, which could get messy. Plus, creating classes in Ruby is trivial, so why not?

In the first example, the model instance will get two errors placed on it by ActiveModel. To test the #form_errors method, I'm simply asserting it outputs the correct html markup around the error messages coming from the model. A bit brute force, yes; but it's the best way that I know of to test the entire #form_errors method, rather than just pieces of it.

An alternative might have been to place message expectations on the various calls to the content_tag method, but that's a lot of magic, and not much benefit. And I hate magic ;)
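For contrast, that message-expectation approach would look roughly like the sketch below (using the RSpec syntax of the day; this isn't code from the project). Notice how it tests how form_errors is written rather than what it renders:

describe ApplicationHelper do
  before(:each) { @template = MockView.new }

  it 'should build the error list via content_tag' do
    item = FakeItem.create
    # Stub content_tag on the template and assert it receives the right args.
    # This couples the test to the helper's internals rather than its output.
    @template.should_receive(:content_tag).with(:ul, :class => 'errorExplanation')
    @template.form_errors(item)
  end
end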

I hope you've found this post useful. I'd love to see how other people are testing view helpers, so please post what you're doing in the comments.

Ruby Will Treat You Like an Adult May 4 2010

One of the things that I love most about Ruby is that it is an "adult's language". It is a very powerful language; as such, it doesn't put many restrictions on what you can do with your objects and classes.

Most of us are familiar with the ability to open up and modify any class in Ruby, including what might be referred to in other languages as "primitives". Take the following example:

1 + 1 # => 2
 
class Fixnum
  def +(num)
    puts "You're playing with fire there buddy!"
  end
end
 
1 + 1 # => "You're playing with fire there buddy!"

Ok, we can open up any class, play around, and be really destructive. So maybe we're not going to get ourselves into much trouble trying to do something contrived like overriding +, but Ruby provides many other ways to subvert object-oriented principles. Probably the most well-known of these is the Object#send method.

send allows you to completely subvert encapsulation without even missing a beat.

class Person  
  private
    def dirty_secrets
      "OMG! You can totally see my secrets!"
    end
end
 
Person.new.send(:dirty_secrets) # => "OMG! You can totally see my secrets!"

Whoa! Aren't private methods supposed to be private!? Well, if we had tried to simply call Person.new.dirty_secrets, our secrets would have been safe; Ruby raises an error in that case. However, when we use send, we get no warning, no slap on the wrist, nothing. Nope, we just move happily along, and our secrets are out.
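For comparison, going through the front door gets you exactly the slap on the wrist you'd expect:

Person.new.dirty_secrets
# => NoMethodError: private method `dirty_secrets' called for #<Person:0x...>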

Ruby truly is an adult's language. While Ruby is extremely object-oriented (at least in the sense that nearly everything in Ruby is an object), we're given the power to completely subvert and override encapsulation, a basic principle of object-oriented programming. Kinda cool, but as I found out the hard way, this can be dangerous if you're not careful.

At this point, there's a decent chance you're thinking to yourself, "Listen GUY, I'm a good developer, I would never do something so arrogant as to use send on a private method. Well...at least not in production code, but I suppose I've done it in my tests before. But that's not a big deal. They're just tests."

So let's drop the contrived examples and look at some real code. Indeed, I got a bit arrogant, and it came back to bite me, and by "bite me" I mean that I introduced a bug into what my tests told me was a stable codebase.

Bunyan

About a month or so ago, I released a project called bunyan. Bunyan is a very simple project that provides a thin wrapper around a MongoDB capped collection. I created it because capped collections are really powerful, and we had a need to begin using them to log additional data on every request. Bunyan sits on top of the mongo ruby driver to keep the API clean, simple and familiar.

Shortly after we began using it, it quickly became a bit of a nuisance: other developers who didn't have Mongo installed on their local dev machines couldn't start up a copy of our app. Essentially, I had created yet another external dependency that was impeding our development process. This is not good. So I decided that Bunyan should fail silently, outputting a message to $stderr that it couldn't connect to Mongo. Cool.

At this point I need to explain a bit about how Bunyan works. Basically, Bunyan defines very few methods of its own. In order to keep it as lightweight as possible, Bunyan uses method_missing to pass nearly all method calls through to a Mongo capped collection. In addition to keeping things simple, this also means that all calls to Mongo, other than initializing the connection, happen right there in the method_missing method.

module Bunyan
  class Logger
    include Singleton
    # ...
    def method_missing(method, *args, &block)
      begin
        # Delegate the call straight to the capped collection,
        # but only if we actually have a usable connection.
        collection.send(method, *args) if database_is_usable?
      rescue
        super(method, *args, &block)
      end
    end
    # ...
  end
end

So, in implementing this silent failure feature, I need to ensure that if there is a problem connecting to Mongo, then method calls do not get passed on to a Mongo collection that doesn't exist. The way I chose to accomplish this was to use a @disabled configuration option I introduced in the initial release.

This is a configuration option that the user can set from the Bunyan configuration block as a way to turn Bunyan off without commenting out the whole configuration block. Naturally, I figured I would set @disabled = true in a rescue block if an error occurred while connecting to Mongo. (In retrospect, I'm not too crazy about "overloading" a configuration attribute like that, as a user-disabled Bunyan is somewhat different from Bunyan needing to be turned off due to connection problems.)
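For context, a configuration block with that option looks roughly like this; it's a sketch patterned after the other options, with made-up database and collection names, so the exact setter may differ slightly from the released gem:

Bunyan::Logger.configure do |c|
  c.database   'myapp_logs'   # made-up database name
  c.collection 'requests'     # made-up capped collection name
  c.disabled   true           # turn bunyan off without deleting the block
end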

Configuration refactor

So here's where things get tricky. Initially, I was handling all the configuration stuff and the logging in one big class. It soon became apparent that I needed a separate configuration class to handle all this logic. Without going into too much detail, it's safe to say this was a fairly major refactor of the configuration logic (commit).

So, when I went about to actually do this, I forgot that I had moved the @disabled variable and, more importantly, the disabled accessor method, to the configuration class. In other words, setting @disabled in the rescue clause below (the line flagged with a comment) had absolutely no effect on anything.

module Bunyan
  class Logger
    include Singleton
    # ...
    private
      def initialize_connection
        begin
          @db = Mongo::Connection.new.db(config.database)
          @connection = @db.connection
          @collection = retrieve_or_initialize_collection(config.collection)
        rescue Mongo::ConnectionFailure => ex
          @disabled = true # <-- the line that had no effect
          $stderr.puts 'An error occurred trying to connect to MongoDB!'
        end
      end
    # ...
  end
end

Even worse, my tests told me so, because they were failing. I should've known better right then. But no, I was arrogant. Here's a re-enactment of the conversation my test suite and I had that day:

Ruby: No go bro! Yer doin it wrong.

Alex: No way man! I'm setting the @disabled variable right there.

Ruby: I'm telling you bro, that mess is not disabled!

Alex: Pfft! It's probably just some shared-state crap between my tests because I'm using the Singleton module to do all of this. Yea, that has to be it.

instance_variable_get should come with a poison label on the box

At this point, I made the biggest mistake of the day -- I changed my tests to make my failing code pass. If you find yourself feeling the need to do this, before you type another stroke, take a five-minute break and rethink things, because you're doing it wrong ;)

The second example below is the one to focus on, specifically the final assertion, where I felt the need to use instance_variable_get to subvert encapsulation and pull the instance variable out of the class directly.

Again, I cannot stress enough how bad of an idea this really is.

describe 'when a mongod instance is not running' do
  before do
    Mongo::Connection.stub!(:new).and_raise(Mongo::ConnectionFailure)
  end
 
  it 'should not blow up' do
    lambda {
      Bunyan::Logger.configure do |c|
        c.database 'doesnt_matter'
        c.collection 'b/c mongod isnt running'
      end
    }.should_not raise_exception(Mongo::ConnectionFailure)
  end
 
  it 'should mark bunyan as disabled' do
    Bunyan::Logger.configure do |c|
      c.database 'doesnt_matter'
      c.collection 'b/c mongod isnt running'
    end
    Bunyan::Logger.instance.instance_variable_get(:@disabled).should == true
  end
end

Of course, the reason the tests were failing is that I should've been setting @config.disabled = true. @disabled was a deprecated property.
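For completeness, the fix against the earlier snippet looks roughly like this (a sketch mirroring the code above, not the literal commit):

module Bunyan
  class Logger
    include Singleton
    # ...
    private
      def initialize_connection
        begin
          @db = Mongo::Connection.new.db(config.database)
          @connection = @db.connection
          @collection = retrieve_or_initialize_collection(config.collection)
        rescue Mongo::ConnectionFailure
          # The disabled flag lives on the configuration object now,
          # so set it there instead of on an ivar nothing reads.
          @config.disabled = true
          $stderr.puts 'An error occurred trying to connect to MongoDB!'
        end
      end
    # ...
  end
end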

Listen to your tests, don't subvert encapsulation, and don't be a jerk

The moral of this story is that you should listen to your test suite when it's trying to tell you something.