Bye 2015

I am really bad at taking vacations – the last time I tried to take a couple days off, I ended up writing most of a short programming book (which I put out at the end of last month) and getting started with my author/publisher relationship with Pluralsight. So instead of relaxing at all, I took on two more projects.

The last two weeks were quite different. No work, lots of books, and a flight to Los Angeles four days ago. My girlfriend and I spent New Year’s there, which marks (as far as I can remember) the first time I’ve spent New Year’s “on vacation”. Los Angeles is truly one-of-a-kind: it felt like we were welcomed the minute we set foot off the plane. We spent a day on the beach, nights in clubs, and a ton of time with some of our greatest friends. After only a short time there, I feel like Los Angeles is my second home. I’m excited to go back as soon as possible; so many incredible things are going on all the time that the entire city is basically its own living thing. It’s incredible to plug into, even for just a bit.

2015 was spent working really hard – it brought a lot of headaches and stress, but the eventual goal was to free myself up to do more interesting things this year. I’ve spent time building complex systems at Simple, teaching at Pluralsight, and experimenting with smaller content on Kindle and YouTube. I think people underestimate their ability to be flexible – it’s great to do one thing and do it really well, but what if you could do a couple of things where you thought you only had room for one? What if the thing you’re doing isn’t the right thing? Why not “taste test” a few things throughout the year?

We’re only a couple of days into 2016, and instead of being home tonight as expected, I’ve ended up in Salt Lake City, with a flight home later this morning. I didn’t expect to be here, but as we landed, I realized that my old self – say, eight months ago – would have handled something like flight plans going completely haywire very differently.

I spent a fair amount of time this year reading the Stoics – primarily Marcus Aurelius and Seneca – as well as Ryan Holiday’s great modern summation, “The Obstacle Is the Way”. I don’t think I was aware of how well I had internalized their work until today: ending up in Utah is amusing and interesting, not stressful, not the end of the world. The obstacle is the way – the present is what matters, not the future. I was supposed to be home, but instead this turned out to be one of the more interesting January 3rds I’ve had. How different would this day have been if I’d been afraid of anything and everything going wrong, like it did?

I’ve had the good fortune to work on a lot of interesting things over the last year. I still love programming, and I love the work I’ve done in tools and paradigms I had no experience with before. Over the next year, I’m going to push myself harder to learn completely new things – for instance, the entire field of machine learning and neural networks has felt out of reach because I never finished calculus in school. That would have seemed insurmountable to me in the past; now it’s an interesting challenge. I’m also going to work on a couple of smaller projects. My last side project was overly ambitious and infinite in scope – now, I’m building things that start really small and are easily verifiable and testable. That decision is no doubt due to the Lean Startup and Scrum books I read over the last year.

This last year has set 2016 up to be great. More travel, more interesting work, more diversification. I have a lot of crazy plans, and I’ve spent the last two weeks not working on them, but allowing myself to dream a bit bigger and figure out how to get from this point in time to where I know I can be in 365 days.

I don’t believe in resolutions – I think ambitious goal-setting because of an arbitrary date change is just a way to procrastinate. Instead, I’m thinking of each project and goal I have as part of one larger “sprint”: this year as a whole.

Move fast, break things, don’t look back. Those are my plans for this year. It’s going to be awesome. As always, you can find out what I’m working on (and keep me accountable) via my Now page.

Integrating Payments with Ruby on Rails

I’m really happy to announce that my first course with Pluralsight, Integrating Payments with Ruby on Rails, is now live.

The course is a three-hour dive into integrating the Stripe API into a Ruby on Rails 4.2 application. Specifically, we integrate the Subscription API by building a digital product subscription site. Tools used include Devise, Stripe.js, the Stripe Ruby gem, the dotenv gem, and Postgres.
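To give a taste of the core of the course, here’s a minimal sketch of the central Subscription API call as it looked in the Stripe Ruby gem of that era. This is illustrative, not the course’s actual sample code – the plan ID is a placeholder, and current_user/params assume a Rails controller with Devise:

# Keys live in .env and are loaded by the dotenv gem.
require 'stripe'
Stripe.api_key = ENV['STRIPE_SECRET_KEY']

# params[:stripeToken] is the card token created in the browser by Stripe.js,
# so raw card numbers never touch the Rails server.
customer = Stripe::Customer.create(
  email: current_user.email,     # current_user comes from Devise
  source: params[:stripeToken],
  plan: 'monthly-subscription'   # a placeholder plan ID
)

# Store customer.id – it's how we reference this subscriber on Stripe later.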

If you’re interested in learning how to begin accepting credit cards in your Rails application, this course will be really helpful. It’s a summation of the work I’ve done around this topic in a couple of Rails applications, simplified down to what you need to know to get started. The application provided as sample code is generic enough that it could pretty easily be adjusted to fit your needs, as well. Providing sample code wasn’t something I had played with much before, but Pluralsight requires it, and now I see how hugely beneficial it can be to viewers.

I had a good time recording the course, and learned a lot about how best to create long-form teaching content like this. I made pretty much every recording mistake possible, especially around the order in which to produce the content. Here’s my preferred workflow for recording courses and programming videos now: record the audio first, run through the code alongside it, then, in a second take, record the video while playing the audio back. This means the audio is essentially mixed and edited in the timeline before any video is recorded – in my first coding session, I recorded video first, which made it really hard to match the audio’s length and pacing after the fact.

2Do is pretty good

Quickly: I play with new to-do apps often. 2Do has been great over the last two months and can “scale” with your needs really well. I recommend it.

In the past, I wrote about the app Due (with the buzzword-y title “My favorite to-do app updates to reach perfection”). Due worked well for me, but my one issue was my need for projects – groups of two or more related tasks. I’ve found that 2Do handles these well. It even behaves like Due in that tasks with a set date and time will continue to alert until completed, unless updated. It isn’t as granular as Due, but it’s a pretty great application that seems to fly under the radar, yet is updated constantly.

Federico Viticci has a good post on MacStories that sums up 2Do and describes the app better than I can here. I highly recommend both the post and the app if you’re looking for a good task manager.

Surface

Earlier this evening, I had a great conversation with one of my best friends about the value of good conversation.

You’ll go through life having a ton of (frankly) useless conversations. The weather, what’s playing in theaters – these conversations pass the time easily, but here’s the problem we all pretend doesn’t exist: all that time only happens once.

When I have a conversation that teaches me to be a better person, or even makes me think of something just a bit differently, I find that infinitely more valuable than an observation about the clouds outside. I love being wrong if, at the end of the conversation, I’ve come to a new conclusion about the topic.

In Buddhism, part of the Eightfold Path is “right speech”. From the Magga-vibhanga Sutta:

“And what is right speech? Abstaining from lying, abstaining from divisive speech, abstaining from abusive speech, abstaining from idle chatter: This, monks, is called right speech.”

Invert that and you have a good set of rules for good conversation: truth, inclusiveness, kindness, and… idle chatter? What’s the opposite of that?

I’ve referred to idle chatter for a while now as “surface-level”. Things like movies and the weather are conversations anyone can have, and at the end of them, there probably isn’t a noticeable change in your perception of what’s around you. The onion metaphor is a bit of a platitude now, but the general idea is sound here: I get a lot of benefit out of pursuing levels of conversation that are “past the skin”.

The truth is that while I’m annoyed by surface-level conversations, I’m guilty of them just as much as anyone else. This month’s goal is to seek out the conversations that are a bit more interesting, more impactful, and more enjoyable.

Back to YouTube

YouTube is the impetus for restarting this blog, and for a lot of my decisions about which projects to work on. In the first post for this site, I talked about a Vim video I created that has been hugely successful. At this point, it’s probably the most popular thing I’ve ever worked on. In that same initial post, I talked about doubling down on that kind of content – I feel comfortable in front of a microphone, and there’s an infinite number of topics to cover.

The decision to work with Pluralsight is a natural extension of this. They have a massive platform that enables focused content (like my first course – Integrating Payments with Ruby on Rails – coming very soon!), and they have an awesome content team that will look at topics you’re interested in and help you pick one that would make for a good course. (By the way, want to be an author for Pluralsight? Contact me and I’ll get you in touch with the right people.)

That being said, there are a lot of smaller topics that I think could have a big impact when made freely available online: the basics of using terminals and shell commands, or getting started with ubiquitous programming tools (think of the difference that simply knowing diff tools exist would make for people who work with data all day).

So I’m (re-)doubling down on making useful, short-form YouTube content in parallel with the Pluralsight content. Want to check it out? Subscribe!

The Now page

I’m a big fan of writing things down. If it’s important, it gets written down somewhere.

The reasoning behind this comes from years of half-baked usage of Getting Things Done. In GTD, David Allen suggests that the cognitive overhead of tracking work that needs to be done (and even work that’s in progress) is a huge contributor to stress and procrastination. This is definitely true for me. Writing things out has always been the best way to break something impossible into manageable, discrete chunks. Of course, in 2015, paper isn’t quite the right solution for some things. Managing long-term “projects” is, for me, best done in something more powerful than a tool like Apple’s Reminders application.

Tangent: want to know something really powerful here? Stop thinking tasks like “Get oil changed in car” are acceptable. “Get oil changed in car” is a project – a collection of multiple tasks. Do you know where you’re getting your oil changed? Have you called and scheduled the oil change? Have you checked your calendar to know the right day and time to schedule it? The list goes on. When you accept that pretty much every task you have is not as basic as it sounds in your head, it can have a massive impact on your quality of life and work.

So now everything in my brain goes into a system. It’s not perfect – software architectures, for instance, land on pads of paper and remain fairly undiscoverable digitally (unless you use Evernote or some other OCR system) – but in general, I’m always pushing towards moving everything I think about into Apple’s Notes application, the wonderful 2Do app, or just a piece of paper.

I came across Derek Sivers’s “Now” page this weekend, and immediately loved it. (Hey, by the way? Go read like every post on Derek’s site. They are all amazing.) The idea behind the Now page (and the directory Derek’s building at nownownow.com) is deceptively simple – an up-to-date list of what you’re doing, right now.

This might become clear by poking around nownownow, but there’s an incredible range of responses to that idea. Some people focus on work, some on their personal life, but the interesting bit buried in the process of writing this out is that some real feelings emerge. Is this the right thing to be working on? Am I working on too much? Not enough? Is my “personal” section lacking, where my “work” section is bursting at the seams?

The obvious result of this is that you can now see my Now page, at /now. I’ve decided to maintain the bulk of it on GitHub’s Gist service. Gists provide an interesting set of features that go nicely with the Now page – revision history lets you look back, at the atomic level, at how your current list has changed. For instance, at time of writing, I chose to include an “Abstract” section – it contains things I’m actively thinking about, plus more high-level in-progress items. At this point, that section seems appropriate and interesting to me; when the more concrete section grows (or shrinks), maybe the abstract section will seem less useful, and it will be removed. Watching these changes over time could be useful to revisit later, and may even be interesting to outside viewers. Additionally, Markdown support lets me be a bit more granular in my formatting, and makes it easy to reference all the projects I’m working on.

Have you built a Now page? I’m always interested to hear about what other people are working on. Have questions about my Now page? I’d love to chat about it.

A Bit of Git coming soon

I’m excited to announce that I’ve been writing a book, titled A Bit of Git.


I use the version control system Git for everything. It’s been crucial in building almost every software project I’ve worked on in the last five years, and with the rise of GitHub, it’s become even more central to my daily workflow.

Git suffers from a steep learning curve. My experience learning Git was mostly trial and error, and it was only when I worked with other people on Git-managed projects that I started to become sure that what I was doing was “correct”.

I think this is avoidable. I’m a big fan of the Pareto principle, summed up as “80% of the effects come from 20% of the causes”. It’s incredibly applicable to Git, where the vast majority of knowledge about how Git works, and the commands that come with it, is just not needed by a beginning Git user. While great books exist on understanding Git from the inside out, I would have loved a book that laid out the primary few tasks I’d be doing in Git and quickly taught me how to accomplish them.

A Bit of Git will be available for $9.99 on Amazon, and includes seven chapters:

  1. Learning the ABO way
  2. Installing Git
  3. Interacting with files, and handling changes
  4. Branching and merging
  5. GitHub and the remote model
  6. Common Git interactions: The pull request
  7. Common Git interactions: Working with a team

The content is very focused – by the end of the book, you’ll feel comfortable using Git on software projects large and small, on a team, and solo. This is that crucial 20% you need to be effective with Git, no more.

As the book writing wraps up, I’ll be including some bonus content here on the site: a screencast with my daily Git workflow, the sample Git project I use in the book, and a blog post about the auxiliary tools I’m using around Git day-to-day.

The book is on Amazon today, with a planned release of December 31, 2015. I’ve enrolled it in Amazon’s KDP program, meaning the book will be available for free for the first three months of publication. Since this is my first book release, I thought I’d experiment a bit and see what readership numbers I can reach in the first couple of months. The hope is to see it pop up in some of the places people learn programming – Reddit, Hacker News, etc.

Random (but useful) data in Paw

I’m a big fan of Paw, a REST client for Mac. I use it primarily for API documentation – with the right plugins, it exports flawless Markdown documentation that fits perfectly in your Git repository.

Most developers have a problem picking good test data. Think of how many times you’ve tested a form by typing “asdfghjk<TAB>asdfghj<TAB>”. When you’re testing various types of input, random text isn’t the best solution – especially for structured input like emails and addresses.

In design, we get around this problem with Lorem Ipsum (though its use is controversial), but in development, there are very practical reasons to use “good” data. Typing a valid email address into a form exercises the backend’s format validation in a way gibberish never will.

This morning, I was looking for a good way to get random data into Paw requests. As a Rubyist, Faker came to mind. Paw has extensions, but they’re written in JavaScript, so I gave Faker.js a shot. Unfortunately, the environment for Paw’s extensions is pretty bare-bones – they run in Apple’s built-in JavaScript interpreter, so things like ES6 and modules are out of the question. Because Faker.js is a fairly popular library, it supports various module systems, but I couldn’t get Paw to pick it up as an old-school “global” JavaScript library.
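For context, here’s the kind of thing Faker gives you in Ruby – a quick illustration of why I reached for it (the example outputs are made up, since Faker’s are random):

require 'faker'

Faker::Name.name              # => "Daisy Lowe"        (random full name)
Faker::Internet.email         # => "daisy@example.org" (well-formed email)
Faker::Address.street_address # => "282 Kevin Brook"   (plausible address)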


Instead, I found a Paw extension for Chance.js. Chance.js can randomize a wide variety of things, including names, addresses, and various social media username formats. Unfortunately, Paw’s “Dynamic Value” extensions offer no way to discover these types. For instance, I tried first_name and last_name a couple of times before realizing I wanted first and last instead.

The solution was to create a Dash docset for Chance.js. I did this by using Dashing, a Go tool for converting HTML into a Dash docset. I hadn’t heard of this tool before today, but it’s super cool – you specify CSS3 selectors that correspond to Dash data types.

"selectors": {
"dt a": "Command",
"title": "Package"
},

The JSON for Chance was pretty similar – the docs are a single page, with each method under the CSS selector article.method. I cloned the gh-pages branch, compiled it with jekyll build, and ran Dashing against the HTML output. The result was a functional Chance.js.docset that I’ve been using happily this morning. You can find it here:

Chance.js.docset
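If you’d rather build it yourself, the dashing.json ends up looking something like this – note that the method-name selector here is my best guess at the Chance docs’ markup, so verify it against the generated HTML before building:

{
  "name": "Chance.js",
  "package": "chancejs",
  "index": "index.html",
  "selectors": {
    "article.method h2": "Method"
  },
  "icon32x32": "icon.png"
}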

Apple Music’s “Intro to” Series

I’ve really been enjoying Apple Music. It combines the great parts of iTunes Match (ubiquitous library, consistent streaming reliability) with the huge selection of a modern streaming service.

One of my favorite discoveries is the “Intro to…” series of playlists. For a huge number of artists on Apple Music, Apple has created a playlist with a good sampling of their “best” (or at least most popular) songs. Interestingly enough, these playlists also serve as a great illustration of some of the most fundamental problems in Apple Music.

[Screenshot: the “Intro to Stars of the Lid” playlist]

I was enjoying this playlist of songs by the ambient group Stars of the Lid – a lot, actually. I added it to my library by clicking the “+” icon at the top-right (after adding, it becomes a check mark, as seen above). The problem is that doing this doesn’t add the songs to your library. It adds just the playlist. Out of habit, I now go back to the predominant album in the playlist and add it too, so some of the songs from the playlist actually end up in my library. The separation of songs and playlists in Apple Music is incredibly weird, and I’m not sure what Apple’s intentions are in requiring this extra step.

Second is the “attribution” of the playlist – “by Apple Music Experimental”. Who? Only by searching for the name do we find that it’s actually a “curator”:

[Screenshot: the Apple Music Experimental curator page]

It turns out that the curator section is incredible. While writing this, I added (checked?) a huge number of the playlists by Apple Music Experimental. The problem is that none of them are discoverable. Unless you’re viewing an artist that appears in one of those playlists, manually searching for the curator is the only way to find this amazing selection. Of course, the “For You” section is supposed to be the main point of exposure for these playlists, but again, the curator names aren’t selectable.

Apple seems to be a bit indecisive – curators are discoverable, kinda, but they aren’t exposed as a focal point of the service. I’m hoping future updates will make this a bit easier. FWIW, much of Apple Music’s interface appears to be served dynamically rather than baked into the OS – here’s hoping that an update to support curator discovery is more trivial than a point release of iOS.

PostgreSQL as default

I found a nice series titled “What PostgreSQL has over other open source SQL databases” via Hacker News. The premise: there are many databases you could pick for a project, based on scaling needs, write consistency, and so on. That said, PostgreSQL is the right choice most of the time. It continues to be fast and easy to use compared to other database tools, and when you start needing more than basic read/write operations, PostgreSQL usually has what you need tucked away. If it doesn’t, there’s likely a reasonable explanation for why it isn’t included.

I’m currently recording a Pluralsight course called “Integrating Payments with Ruby on Rails”. In the course, I use the Stripe API to build a simple subscription service on top of Ruby on Rails. Stripe handles the heavy-duty parts of the payment system; our database tables are just users, subscriptions, and the actual data the subscription provides access to. Pretty straightforward! The first topic in the course is how we architect the application: Rails, of course, but also the database – Postgres.

There are many “hotter” database solutions out there – JSON-based stores like MongoDB, or real-time cloud systems like Firebase. Postgres is a great example of my favorite economics tool, the Pareto principle (also known as the 80/20 rule). In this context, 20% of Postgres’ feature set meets 80% of our requirements for a database. In my course application, we don’t need more than basic tables and relations. Postgres handles this effortlessly, and building it in Rails is trivial, so these architecture decisions are pretty much a no-brainer. When we need to expand our feature set (that last 20%), there’s a good chance the additional features in Postgres will do the trick. Consider the following example:

In Stripe’s subscription API, a request with credit card information is passed to Stripe – they validate it, and respond with JSON containing a “customer” object:
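Here’s an abbreviated sketch of that object – the real response carries many more fields (see Stripe’s API reference for the full shape):

{
  "id": "cus_...",
  "object": "customer",
  "created": 1451606400,
  "email": "customer@example.com",
  "cards": { "object": "list", "data": ["..."] },
  "subscriptions": { "object": "list", "data": ["..."] }
}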

There’s a ton of information in there – some of it more useful than the rest. We could take the time to pull out the relevant pieces: id seems immediately useful, as it’s how we’ll communicate with that user’s instance on Stripe from now on. created could be useful as well, to track the length of the subscription. We’ll take those values and map them to a subscription table. But what happens when it turns out we wanted the cards list as well? Maybe the customer has an existing Stripe account, and by leveraging the fact that Stripe is used all over the web, we could allow our users to change the “active” card for their subscription.

What we need to do, of course, is store everything in this object. If we were on MongoDB, we could dump it into a StripeCustomer collection and be on our way. Luckily, Postgres has a similar json column type that does the trick. To keep things simple, we could even dump the entire object into a stripe_customer column and layer validation on top nearly for free – for instance, treating the customer JSON as invalid if its id field is nil.
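In Rails 4.2 terms, that looks something like the sketch below (the names are illustrative, not the course’s sample code):

# Migration: add a Postgres json column to hold the raw Stripe customer.
class AddStripeCustomerToSubscriptions < ActiveRecord::Migration
  def change
    add_column :subscriptions, :stripe_customer, :json
  end
end

# Model: reject the stored JSON when Stripe's id is missing.
class Subscription < ActiveRecord::Base
  validate :stripe_customer_must_have_id

  private

  def stripe_customer_must_have_id
    if stripe_customer.nil? || stripe_customer['id'].nil?
      errors.add(:stripe_customer, 'must contain a Stripe customer id')
    end
  end
end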

Here’s another short example: I’m building a mobile application focused on real-time communication between people in groups. I could build it in Rails, but I know Rails isn’t the best choice for real-time work. Instead, I took a gamble – I’m building it in Elixir, using the Phoenix framework. Phoenix is Rails-like, but with “channels” (akin to WebSockets) built in, so real-time communication is a first-class citizen. I’m picking something other than the obvious 80/20 choice because I know my requirements. That’s the acceptable reason to pass over the “less hot”, reliable tool: when you know, from day one, that you need something it doesn’t offer.

I’m a big fan of Tesla – I pretty much salivate over every Model S I see drive by on the road. Day to day, however, I really like my car: a Mazda. Why do I love my Mazda (to the point where I’m grinning when I get in the driver’s seat)? Because it does the 80% that I need – it drives well, it has enough “extras” to make traveling easier, and it’s literally about 20% of the price of a Tesla. Funny how that works. There may be a day when I roll up to the Tesla dealership and get the insane $125k version of that car, but for now, my car works really well – and by the same token, so does Postgres. When you’re storing billions of rows, Postgres will probably still be kick-ass, but at that point you can consider alternative options, because at that scale you can actually encounter bottlenecks.

The tl;dr? Don’t over-optimize – it is a very common trap. Use Postgres until they rip it from your cold, dead hands.