Seven tough lessons from ten years in bootstrapped business

When I entered the self-employment world at barely the age of 22, I had only three and a half years of industry experience, albeit in half a dozen job roles. Owing to my academic and Soviet sociocultural roots, I also had vanishingly little received social knowledge of business. Commerce was not a family tradition.

At present, my business retains a mixture of product and consulting revenue, though with a renewed focus on product following a few tumultuous years. It is what one might call a “moderately successful” business; it’s still around ten years later, and it makes a respectable, if hardly ostentatious, living for its owner, but it has not significantly grown beyond that.

Thus, in such ambivalent circumstances, the tenth anniversary of the company’s humble start as a bootstrapped concern passed largely without remark at the beginning of this year. Nevertheless, I’ve had an ample opportunity to ponder the most important things I’ve learned along the way.

There’s a school of thought out there that says speaking with candour about mixed results or outright failures, or in any way admitting or owning liabilities, is bad for marketing and bad for the business’s image. You know what? It is. 99% of business-related messaging out there consists of “success porn” and proclamations of victory. No businessperson in their right mind would stray from the safety of those clichés; the image of success sells, the same way a $3000 suit and a rented Ferrari sell.

I’m out to help promote a new, brutally honest way of writing about life in the trenches. I think the social utility of publicly discussing the realities of entrepreneurship outweighs the risks. If an owner-founder out there reads this and realises they’re not alone in the obstacles they slog through as a one-man band, the magnitude of my happiness will be greater than the magnitude of my worry that someone, somewhere might think I suck at life. In short, I’m not going to take part in “success porn”; my anecdotes may paint a less rosy picture, but I’m candid about the lessons that I really did learn, and I own my many mistakes. I value reality and substance over figments of marketing imagination or self-congratulatory treacle. If you do too, that should give you some confidence in doing business with my company. So I hope.

One last thing: this article is implicitly about bootstrapping a small “information age” business from nothing, or at most a small investment of personal capital. A distinguishing trait of “information age” businesses of this sort is that the primary capital is human and the startup cost is, in principle, quite low. What I say is unlikely to apply to businesses with significant cost of goods sold (COGS) or heavy up-front investments in machinery.

I started out suddenly, with $200 to my name, no revenue, no customers and no business model, and having just blown my meagre savings on a down payment for a condo. Result: two mortgages and a car payment were due the next week.

Wiser people would have avoided that kind of extreme (more on that below), but this is nevertheless possible in the business models I’ve got in mind. I suppose lesson #0 is: don’t start a business like that. If I were doing it over again, I would certainly live cheaply, save money, and capitalise the business with at least a year’s worth of runway.

If you have funding or are starting from a position of nontrivial wealth, or want to open a restaurant, a shoe store, or for that matter a backyard medical devices factory or an oil drilling rig, you might as well skip this article, as your economics and constraints are going to be entirely different.

With all that in mind, here are some other things I’d travel back in time to tell my younger self—if he’d listen, which is far from assured:

1. Consulting is not a scalable business model or a good funding strategy

As mentioned above, I unexpectedly found myself jobless, with no cash, and with two mortgages and a car payment due. That meant the initial conditions of my business were: do whatever generates cash ASAP. In tech, that naturally pushes one into consulting. Of all the things one can possibly do, service work pays the quickest.

Clearly, I managed and paid the bills. To those who are not so lucky in their ventures, that might seem like success. The reality is more complicated.

To make an adequate living as a consultant, you have to be rather good at many things and possess specialised skills and knowledge. Proffering relatively generic IT skills won’t work; at that point, you’ll be competing with volume operations who have made a process of this, and with offshore techies on oDesk, with whom you cannot compete on price while still making a “First World” living. So, for purposes of this discussion, let’s assume you have some unique knowledge of the ways and means of a particular industry, are particularly talented, and possess both wide general knowledge and deeply detailed specialisation in a few valuable areas.

This seems like a blessing, but for anyone trying to build a business bigger than themselves, it’s a curse because it’s not a business model. It’s a glorified job for yourself, with all the downsides of a salary but none of the upsides, including the “steady paycheck” bit.

First, well-paid tech consultants are paid for the fact that the intersection of many different technical skill sets is present in a single person, likely coupled with some niche domain knowledge of a particular industry. It’s near-impossible to hire anyone who is not of an identical profile to help you with that and grow the business beyond yourself. There are only so many hours in the day, and only so many hours you can bill. That is the limit of your business model and your compensation.

Even if you are somehow able to afford to pay a competitive market salary (I wasn’t), the likelihood of finding someone with the mixture of skills you need is very low, and this becomes exponentially more true as you get into rarefied niches. In IP telephony, for example, hiring most IT people off the street is of limited value; in all cases, you’re going to have to educate and train someone in the folk knowledge you use to ply the trade. That can take months, or even years, during which time you’re burning scarce cash and cannibalising your own (ostensibly billable) time.

Another peculiarity of niche consulting work is that you may have to pay these folk quite a lot, as their salaries need to be competitive with what they could otherwise command elsewhere in the market with their legitimately valuable skills. That’s a hard pill to swallow if they don’t have the specific skills you need.

As you move down the talent ladder and toward entry-level people, you can only find people who are kind of okay at one or two of the eight or nine things you need them to know well. In the best possible case, it’s going to be you and some help that will, for the value you are able to extract out of it, be suffocatingly expensive no matter what it costs, and you will spend months or even years trying to groom it. Needless to say, the help can leave any time for greener pastures where its existing skills fetch a higher return.

The second problem with consulting is that it’s custom work. The market has a seemingly bottomless appetite for custom work, but you don’t own it, and it leaves you with no residual intellectual property or, in a broader sense, capital. Yes, charging a lot can compensate for that to some extent, but there’s only so much you can bill. In that sense, it’s a profoundly linear business, no different to a barber shop. Want to make twice as much money? You’ll need to cut twice as much hair. Except that you can, at least in principle, afford to hire people to work at a barber shop.

Yes, there are ways to optimise the process. Most consultants eventually find ways to package what they do, through some sort of reusable assets, to lower the marginal cost of doing it. In some cases, if you’re lucky, you can even get customers to fund part of the research and development cost of a product indirectly. A lot of products’ origin story lies in consulting in that sense; indeed, so does mine.

All businesses with a headcount greater than one have something in common: a real business model requires devising easily replicable workflows and business processes at decreasing marginal cost. 

If consulting in particular is your thing, there are clearly ways to scale out the business of consulting, and indeed, any other service business, as demonstrated by the existence of the professional services majors – KPMG, Deloitte, Boston, and others. But all of these businesses have solved the problem of the replicable business model. At the professional services majors, they’ve figured out how to write a three-ring binder of procedures—so to speak—that a fresh-faced Comparative Literature graduate six weeks out of school can follow just well enough without accruing years of deep expertise in a particular industry vertical. That procedure is then carried out for every Fortune 500 client that requests the services of an “Audit Associate II”. That’s still a business model, but it doesn’t work at a small scale.

That’s why I said above that being in a position to sell specialised work that one does individually is a curse, not a blessing, because it gets one out of having to solve the intellectual problem of how to build that business model. A much more enviable position is that of entrepreneurs who do not have the skills to do most of the work required to build their business themselves: it forces them to solve for how to combine labour and capital in a way that works. From day one, they have to devise a workflow and a process into which they can plug other people—other people they have to somehow afford—with the idea that more people can be added in the same way. Consulting is, entrepreneurially speaking, the laziest possible option; it pays now, but it doesn’t generalise the process of how to get paid repeatably.

Consulting is often seen as a means to an end, a short-term funding strategy until the product gets off the ground. The idea is to do consulting at the same time as developing and marketing a product until the latter becomes self-sustaining. There are indeed thriving businesses and products that have got started this way, though it usually requires that the consulting be at least somewhat complementary to the larger goal.

Still, an incredible landscape of obstacles is arrayed against you in this paradigm: consulting has a way of taking up 85% of your time for 40% of the revenue. In my experience, it takes nothing less than a supernatural Teutonic discipline to properly compartmentalise consulting, and even then, by the very nature of the sort of thing that it is, it has a way of spilling outside whatever neat boxes you try to shove it into. Consulting customers are demanding and have a habit of calling.

Another widespread belief is that one can do consulting intensively for a while to build up cash reserves, then operate off that runway and focus on the real mission. This might be arithmetically possible if you’ve dialed down your personal burn rate to an exceptionally low level, but, if you find a way to make lucrative consulting projects arise spontaneously, exactly when you need them, and disappear precisely when you don’t, with no effort required on your part to get them, please let me know. I’ll make sure you get that Nobel Prize. In the meantime, consulting happens based on the customers’ needs, not yours, and the networking, marketing and engagement required to maintain a pipeline and get any consulting project at all will easily suck up the rest of your time.

For all these kinds of reasons, the result is that your product development proceeds at a glacial pace, or falls by the wayside entirely, because much of your energy is spent hustling for consulting dollars. Relative to the windows of market opportunity in tech, that can be disastrous. Plus, you are likely competing with companies who are not hindered by such ballast, and can afford to focus on product 100%. This might all be surmountable if you can muster the total dedication and raw 24/7 energy of an unattached 23 year-old singleton, and have—somewhat improbably—jettisoned any expectation of a personal life, hobbies or outside interests. However, it is essentially impossible if you have a family or other constraints that box you into a more or less 9-to-5 regime. At that point, you’re an advertising copywriter dreaming of writing the next Great American Novel. Any day now.

Critically, the friction and complexity of doing niche custom work also make it complex and frictional to sell, particularly for someone who is not you. Sales cycles for this sort of thing are long and involve patiently nurturing consultative sales relationships which establish you as an industry expert in the eyes of the prospect. As with business processes for employees, sales needs to traffic in simple, easy-to-understand concepts with as few moving parts as possible. It is hard to package custom work and niche expertise in a form consumable by otherwise capable and motivated salespeople. This problem can also afflict “consultingware” software products; at the very least, products which are complicated to sell require extensive sales training, driving up customer acquisition costs and prices accordingly, and limiting your ability to scale that process out.

Lastly, even if you are not in a business model that specifically contemplates a dichotomy between “consulting” and “software product”, many other models have parallels. For instance, in the IP telephony service provider world, there is a similar choice between building high-expertise, custom deployments versus prefabricated IP PBXs and trunks. Some people I know clear very respectable revenue doing the former and pride themselves in their resistance to the commoditising forces of the latter, but there are serious volume—and therefore, revenue—limits around their business model.

2. Good cash flow is more important than revenue

For my first years in business, I did only project-based consulting work and had neither a product nor significant recurring revenue otherwise. That meant I’d sometimes go more than a month without getting paid. This is the period in which I most substantially wrecked my credit and paid personal bills notoriously late, the consequences of which remain with me now.

Bad cash flow at higher revenue is worse than good cash flow at lower revenue. In fact, bad cash flow can sink even a notionally profitable and viable business. This is a well-known fact in economics, but the usual countermeasures are rarely available at an individual scale.

It is only human to suppose that if one earns $100k (for example), one can afford a commensurate lifestyle. The trouble with that in consulting, for instance, is that a high percentage of that income is volatile (unsteady). If someone divided that $100k by 12 and gave you 1/12th on the first day of every month, you’d be set, but that’s just not how it works; if that’s what you’re after, get a salaried job.

The devil is all in the details of what you have liquid at any given moment. It is likely to be the case that $100k of unsteady income can only support a $35k lifestyle once appropriately discounted for volatility. The same logic can be applied to business expenses, and is something to think about before you commit to cutting someone a paycheck on the 1st and 15th.
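To make that concrete, here is a minimal sketch with entirely hypothetical numbers (not my actual figures): the same $100k of annual revenue arriving in lumps, set against a fixed monthly expense base. The annual totals look comfortable; the month-by-month cash position does not.

```python
# Hypothetical illustration: $100k of lumpy consulting revenue against a
# fixed "commensurate" lifestyle of $6k/month ($72k/year). Annually, income
# comfortably exceeds expenses; month to month, the account still runs dry.

monthly_income = [0, 0, 28_000, 0, 5_000, 0, 0, 32_000, 0, 0, 35_000, 0]
monthly_expenses = 6_000
cash = 5_000                    # starting buffer

for month, income in enumerate(monthly_income, start=1):
    cash += income - monthly_expenses
    flag = "  <-- broke" if cash < 0 else ""
    print(f"Month {month:2d}: cash on hand ${cash:>8,}{flag}")
```

Under these assumed numbers, the account goes negative in three of the twelve months even though the year as a whole is solidly profitable; the sustainable expense base is set by the worst stretches, not the annual average.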

Make adjustments accordingly. I didn’t; by the time I was spirited out of my last job, I had managed to buy the aforementioned condo and otherwise acquire a lifestyle that required a middle-class professional income to sustain, and have generally persisted in maintaining a relatively high personal expense base.

That’s going to create a dynamic that is substantially similar to the one poor people go through because you’re nearly always broke, only with the added irony that, on paper, you might earn a pretty sizable income. Even if that’s the case, being high-income with high expenses is about the same as being low-income. Your decisions and priorities will resemble those of low-income people. Any zeroes in your checking account will represent only a fleeting blip of happiness before they go right back out. Sizable income does nothing for you when it arrives in episodic, unpredictable chunks. You may have money once that payment clears tomorrow, but that does nothing to help you here, now, today. I can’t count how many times that has literally been true.

In principle, one can meticulously set aside cash from high points into a rainy day fund to offset the low points, but the messy timing of reality probably won’t fit whatever savings scheme you’ve devised. The only real way to get ahead of this is to have an expense base that is vastly below your income.

In addition to damaging one’s ever-important personal financial history, the constant stress of poor cash flow is a major distraction from your business, and therefore an existential threat. Don’t do what I did; live cheaply and bide your time. Don’t mortgage your financial future with shortcuts.

3. Everything takes a lot longer and is harder than you think

My company’s product would be morally offensive to a young developer at first glance. It took seven years to write what is, in the grand scheme, a few thousand lines of code? Bro, I could do that in like, a weekend.

Developers are famous for grossly underestimating how long it takes to just, you know, write the code real quick. That’s not a new revelation. The real revelation is in what else you have to do to take it to market.

Setting aside the fact that the product was developed alongside consulting, with all the problems that entails as per point #1, I’d say maybe 15% of the work that has gone into this product had anything to do with writing code, and indeed, that’s not where most of the value lies.

Prototyping a minimally viable iteration of a product is fairly easy. The real work, the dark matter of the entrepreneurial universe, is in what is sometimes called “customer development”: deploying it in the real world, painstakingly learning what works and what doesn’t, incorporating that market feedback and iterating on it. Then there’s the troubleshooting of bugs and fixing of problems which only arise in production and at large scales (and therefore under the pressure of irate customers who must be placated). For enterprise-oriented software like ours, customers expect a streamlined vendor support relationship with well-considered processes. Polishing aspects of customer experience which are unrelated in any overt way to your product is a critical part of selling any product with a service and support dimension—that is, most software for enterprises and service providers.

If you multiply that out across punishingly long sales and adoption cycles which can range from months to, in many cases, years, it’s not so hard to see how it can, for an army of more or less one, take seven years to breed good product stock. Although this is not true of some products for the mass market, it is absolutely true for intra-industrial solutions with numerous moving parts.

For software development work and other intense focus-based tasks, there are also the simple economics of human task-switching. Joel Spolsky and others exposed aspects of this argument to a larger audience some time ago, but it boils down to this: development has a huge cognitive load. While Joel et al focused mainly on the ways developer focus can be derailed by distractions, development is also exceptionally sensitive to mental fatigue and to the vicissitudes of motivation — problems you’re going to struggle with if you’re juggling consulting gigs alongside product development work. “Programmer’s block” is a thing, and I’ve learned that fighting it with brute force and will power alone simply leads to burn-out.

Moreover, we’re simply not robots; one does not simply do consulting from 9 AM to 1 PM, then switch one’s mind to product coding at 1:01 PM and carry on until 5 PM. The mind needs time to unwind, recharge, adapt, and get in gear. As many of my fellow developers know, often by the time that happens, 5 PM rolls around. This is, incidentally, why developers chafe at being scheduled for meetings in the middle of the day or being burdened with errands spliced into the middle of work; the mental focus required for coding requires fairly long, unbounded chunks of time to conjure—a process not altogether reliable even in those cases. A midday meeting or a dental appointment can split your day into two useless chunks in which you can’t do anything useful.

Naive is the person who thinks software developers in large companies work on code anywhere near eight hours per day. Perhaps I am exceptionally feeble, but I don’t think the human brain can sustain that for any significant length of time. Occasional heroics are of course possible, but in general, I’d say 2 to 3 hours of solid coding per day is a banner day for a developer in the enterprise. The rest of the time is taken up with e-mail, meetings, conference calls, lunch breaks, updating internal tickets and bug reports, and just vacantly scrolling through the code, trying to summon the elusive “zone”.

If “knowledge work” were brainless piecework, it wouldn’t pay much. It’s got a lot of parasitic drag in the brain. Factor all that into your estimation of how long things really take and the resource commitments they will require.

4. Figure out what actually motivates you

Succeeding at business is quite hard, actually. If it were easy, everyone would do it. To even stand a fighting chance, you need to figure out how to keep yourself plugging away at it in a sustainable and enduring way. In my experience, generating real drive within yourself comes down to a bit of willful sleight of hand, some mental tomfoolery to trick yourself into doing work even when you don’t really want to.

I think the official party line among the self-employed is that money is supposed to be the motivation. To some extent, it surely is. To be sure, everyone wants to have enough money to solve most problems in their life and have the things, experiences and security they want, and everyone’s sense of justice is offended when they receive less of it than they believe they should. My observations suggest, however, that money is seldom the root of most drive on a long-term basis. The idealised economic free agent (“Economic Man”) is more myth than reality.

At any rate, if money were my sole motivation, I’d take the path of least resistance and find a salaried job; I’d probably make more by this point in my career, it’d be steady pay, I’d have benefits, and best of all, a narrow area of responsibility rather than direct exposure to everyday business risk. If I had stuck to the corporate career track, I’m almost certain I’d have a positive net worth and some trappings of middle-class wealth accumulation, instead of being six figures under.

When I speak of motivation, I’m not referring to sanguine or idealistic conceptions of motivation, nor grand, cosmological ideas. Many people are tempted, upon first examination, to answer this question in terms of how they want to see themselves. Some will say they want to make the world a better place. Some will say it’s all for the money so they can cash out and do what they really want. Some will say it’s all for their children, their sick brother, baby sister, destitute grandmother.

That may all be true, but it probably isn’t what motivates them in a local, everyday sense. I’m referring to the stuff that makes the work itself enticing. There are probably preternaturally disciplined people out there who are animated into everyday action by the notion of executing a long-term goal, but unless you’re one of these Terminators, you need to do some blunt and honest introspection to figure out what makes you tick.

To answer that question for myself, I have to go back to my childhood and adolescence. I started programming at age 9, and from that age until I started working, I put what I reasonably estimate to be somewhere between 8,000 and 10,000 hours into it. If you crunch the arithmetic on that, that means I wrote code almost every day, and to a degree that surely crowded out most normal teenage activities and developmental experiences, including a healthy social life. That level of dedication was certainly not sustained by a singular long-term goal that lasted from primary school through adulthood. Indeed, I had no intention of making a tech career, and entered the University of Georgia as a political science major, later switching to philosophy and entertaining vague aspirations of law school.

I spent much of that time working on multi-user chat servers called talkers, originally a UK-centric phenomenon, with a group of like-minded peers and associated social cohort of people who simply used talkers, called “spods” (verbed, “spodding”). I count some of them among my real-life friends today, but they were all online then. My programming peers were mostly a bit older, and played an important mentoring role. We frequently competed and co-opeted to over-engineer our home-brew talkers for high performance and concurrency, since we needed to support … tens of users.

Making the computer do stuff was quite interesting, but there’s no way a solipsistic interaction with the machine would have sustained my interest to this level. This was, in effect, my primary social life at the time. It was the social dimension that interested me; the collaboration, the teamwork, the feeling of watching users actually use — and enjoy — my inventions. I relished learning processes and workflows (e.g. version control) and “how things are done” as much as doing them. The journey was as important as the destination. The qualitative, human aspect of how people worked interested me. And of course, ego certainly played a big role. As my skills sharpened, my status and acknowledged expertise increased in that social group, and as I got older and demonstrated increasing maturity (in relative terms, anyway), I came to have legitimacy and respectability in that community.

From the perspective of someone self-employed since a rather young age, what I miss most about conventional employment is certainly the teamwork and camaraderie, the feeling of contributing distinctive expertise to an endeavour larger than myself. Self-employment is a mostly solipsistic endeavour for me; my economic incentives point to spending as much time one-on-one with the machine as possible. That’s not easy to sustain for a fairly sociable person.

By and large, nobody cares what I do. I don’t really have direct colleagues or peers around due to my narrow specialisation. Certainly, customers recognise my expertise and even pay me for my work from time to time, and in capitalist society, one might say that this is the highest expression of caring about something. But I’m paid for a bottom-line result, and few people appreciate the intricacies of how I arrived at it. The resulting creative and intellectual control is liberating in one sense, but constricting in another. I have few intermediate responsibilities apart from the all-important delivering.

I’ve tried various coping strategies, such as co-working spaces (which I have written about). But I had little to say to the web economy SEO Superstars, Sales Quarterbacks and Vision Catalysts who tend to inhabit such places, and they had little to say to me. Oddly enough, this is where entry-level employees whose direct economic contributions were otherwise quite limited helped me a lot. I had people I could bounce ideas off of and show things to who also had some folkloric knowledge of our business and could put what I showed them in the right kind of context. I suppose I’m a bit like Dr. House (minus the “genius” bit); I work unsteadily in isolation and need the dynamic of a “team”. There are other people I know who seem more genuinely introverted, and really relish working at home and not having to deal with “people”. I don’t understand them and they don’t understand me, but the differences seem genuine.

I don’t think I’m the first to say that doing small business is lonely. Much has been written by other entrepreneurs about how it can be quite isolating even within a lively social and family life, as nobody really understands. I am not given to pity parties, but psychologically, this has posed challenges. I don’t have a good solution for my case; indeed, not all problems have solutions. Nor am I sure how I could have gone about my self-employment differently without radically altering its nature, apart from perhaps soliciting more on-site engagements. What I do know is that the solipsism of “moderately successful” small business is something the 22 year-old me failed to anticipate or consider, which I think is par for the course for 22 year-olds.

All this to say: it’s not overly self-indulgent to do some honest introspection. If you can more or less accurately deduce what really makes you tick and can find a way to cater to that in your business decisions, the returns to sustained productivity will not disappoint. While all economically useful work involves tedious drudgery, I know some people who have found their own answer to this problem and managed to solve for it, and they really do seem thrilled and energised to go to the office every day, as best as I can tell.

5. Don’t neglect proper tax planning

I have sizable delinquent tax debt that I am still in the process of resolving, and of course, the painfully injurious arithmetic of compound interest and penalties fully applies.

In my undoubtedly tendentious estimate, that’s about 30% failure to pay taxes in the aforementioned context of poor (though steadily improving!) cash flow, and 70% poor tax planning and tax-related decisions. By way of just one example: I have an LLC and did not make an S-Corp tax treatment election until FY 2017. This means that for every prior year, I was assessed 15.3% self-employment tax on top-line net business income, prior to personal deductions, and indeed, this represents a slight majority of my tax debt. That is to say, the income on which I owed self-employment tax was significantly higher than my taxable income for ordinary income tax purposes. That’s tax debt I wouldn’t have if I had structured my compensation differently.
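For illustration only — with made-up round numbers rather than my actual returns, and ignoring the Social Security wage base, the 92.35% net-earnings adjustment and the cost of running payroll — the arithmetic of that election looks roughly like this:

```python
# Simplified, illustrative comparison of an LLC under default (sole
# proprietorship) tax treatment versus an S-Corp election. The income figure,
# the salary split and the flat 15.3% rate are hypothetical; the real
# calculation involves the Social Security wage base, the 92.35% net-earnings
# adjustment, payroll costs and the IRS's "reasonable salary" requirement.

SE_TAX_RATE = 0.153             # combined Social Security + Medicare

net_business_income = 100_000   # hypothetical net income for the year

# Default LLC treatment: the entire net income is subject to self-employment tax.
se_tax_default = net_business_income * SE_TAX_RATE

# S-Corp election: only the salary portion bears payroll (FICA) tax;
# the remainder is taken as a distribution.
reasonable_salary = 60_000      # hypothetical "reasonable salary"
payroll_tax_s_corp = reasonable_salary * SE_TAX_RATE

print(f"Default LLC, SE tax:   ${se_tax_default:,.0f}")                      # $15,300
print(f"S-Corp, payroll tax:   ${payroll_tax_s_corp:,.0f}")                  # $9,180
print(f"Difference, per year:  ${se_tax_default - payroll_tax_s_corp:,.0f}")
```

The specific salary split is hypothetical; the point is simply that under the default treatment, the full net income bears the 15.3% rate, year after year.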

Worse yet, it’s not that I didn’t know the consequences. I was just too busy trundling along with my business to attend to these things diligently.

One of the worst things about bootstrapping is that you’re always living check-to-check, leaving little time or energy for “stepping back” and thinking “deep strategic thoughts”. The strategy is “make money, pay bills”. This is also why most business advice rings hollow in this world; it’s a luxury one cannot afford.

Set aside the time to research and fully understand, with the help of a CPA and perhaps an attorney, every implication of a given business entity/incorporation structure. Don’t be like me; get ahead of it before it’s too late.

6. Lay off the success porn

By now, much has been written about research findings linking social media use to depression and anxiety. This is largely explained by the fact that people selectively curate the portions of their lives they put online, leading to the impression that everyone else’s life is amazing and full of neverending vacations and rich, vibrant experiences, and that you alone are struggling with the drudgery of everyday life.

This is doubly true in business, where the relentless bombardment of announcements related to other people’s success never abates. LinkedIn serves a similar function for the business-minded as Facebook and Instagram do for ordinary people. Few economic struggles or bankruptcies are proclaimed on LinkedIn, except perhaps in the “started from the bottom, now we’re here” epics that salespeople, business coaches and “self-help gurus” seem to love to write.

My earlier years invited frequent ruminations on why I alone couldn’t seem to hack it beyond a certain level of business development, and even some abortive attempts to mimic the declared customs and habits of people who could. Since then, I’ve found general agreement among bootstrapped business founders that it’s integral to one’s mental health to tune this stream out, keep one’s head down, and focus on one’s own work.

7. Only hire the right people, and don’t be reluctant to fire

Well, let me tweak that for my specific case: don’t hire entry-level people unless you’ve got a business model that is specifically set up to extract value from them.

That certainly does not describe my business. Nevertheless, I’ve had a bad predilection for hiring entry-level people. Those aren’t the only people I’ve ever hired, but it’s been a major theme. The first and most immediate reason is that entry level is mostly what I could afford. However, what really drove the decision in those cases was—if I can find a way to say it without making myself out to be some kind of saint on a mission civilisatrice—an altruistic impulse.

I’ve met people who were sincerely interested in learning more about technology and, in many cases, I perceived, rightly or wrongly, that they could have used the job. I have been idealistic over the years about my ability to introduce someone to this industry and “raise the floor” on their skill set, slowly but surely, until they can take over some routine business functions. If I’m being honest, there was probably an egotistical desire to disprove conventional hiring wisdom and demonstrate a civically minded alternative that spoke to the social purpose of employment.

However, I’m simply not set up to put that sort of candidate to use productively for reasons explained in point #1, and in this light, hiring them was ultimately a disservice to myself and to them. In addition to burning cash I simply did not have, I neglected to consider how bummed out people feel when they know they can’t be useful. Far from being lazy, most of these folks were quite motivated to do a good job, and it was no moral failing on their part that they simply didn’t have twenty years of broad-based IT experience that matched my needs. That did not change the reality; an unproductive employee is not a happy employee.

I think the most damage was done not by my choice of who to hire, but by my tendency to keep them on payroll far beyond the point at which it became apparent, if tacitly, to both of us that it wasn’t really going to work. I invariably blamed myself, believing it to be the responsibility of the manager or the entrepreneur to devise workflows and systems in which one could put other people to use. Earlier years were spent labouring under the delusion that, given the right tools, technology, process and training, I could make anyone into a gainful contributor to the company’s work.

It is vital to be brutally honest with oneself about whether someone’s skills can realistically be put to gainful work. Do not allow emotions or generous impulses to override that. Big companies average together the productivity of lots of different people, and can afford a relatively unproductive team member. As a small, bootstrapped company, not only can you not do that, but to achieve any sort of “escape velocity”, your first few team members have to be exceptionally productive. Otherwise, the result will be that you will work three times as hard as you already do to keep the lights on and make sure everyone’s salaries are paid, while your employees feel useless and dejected. This is one I seem to have had to learn over and over.

Finally, letting someone go is one of life’s least pleasant sensations. I have a pathological aversion to it, because my inner narrative about any given employment situation that’s not working out is that I have somehow failed the employee. Nevertheless, if you’ve made a bad choice, stopping your bleeding—and theirs—is an absolutely essential facility to develop as a bootstrapped entrepreneur. You really, quite literally cannot afford not to, and by not doing it, you’re leading the employee on and denying them the opportunity to move on to a more fulfilling role with better prospects for themselves.

Finally

There is a more global reflection that haunts me the most. I saved it as a bonus for last, because it’s intimately related to the conditions of “moderate success” as laid out in the introduction.

Let me start by saying that I don’t subscribe to California startup capitalism and its “go big or go home” philosophy of business, nor to the pejorative distinction its adherents draw between “real business” and “lifestyle business”. I don’t think a small, specialised company with a small team is a worthless aspiration—it’s not a failure of $100M imagination.

I also don’t advocate byzantine, metrics-laden strategy at micro scale. There is such a thing as simply too small.

Nevertheless, the cold, hard truth is that “moderate success” is a vast chasm that is difficult to cross. Unless you explicitly declare bankruptcy or wind down the company, nobody comes along to tell you that you’ve failed and shutter you. If you’re doing service work such as consulting in particular, you can tread water and still pay the bills almost indefinitely. You need to decide where the shuttering point is for yourself.

I’m not suggesting you should aim for Mars, nor that this shuttering point is a static quantity. It will probably need to be calibrated to the market feedback you receive and your sense of the overall prospects. Nevertheless, it is helpful to start with some idea of where you want to be after some period of time that is more specific than “uhhh, grow?”

I’ve got my shuttering point. Fortunately, I am above it. Nevertheless, my focus on the prize over these ten years would have been sharper if I had started out with a notion of what the prize is.

What does it all mean? Should all of this discourage you from going into business for yourself on a self-funded basis, or from doing consulting work? Absolutely not! If you’re studying that option, hopefully this article has taught you something about the obstacles you will face and helped you to plot your master plan in a more pragmatic way. It’s far too easy to fall under the spell of “success porn”, which is the dominant packaging of information out there for the aspiring self-employed. The only comme il faut narrative forms out there seem to be 1) “make it” or 2) “fake it till you make it”, and my aim was to shed some light on what “only kind of making it” looks like, which seems to me to be the vast middle of the bell curve. That is to say, unless you achieve meteoric, hockey-stick break-out success or you straight-up fail, I think you’re probably going to end up dealing with many of the same problems I’ve encountered.

The good news is that these problems are not insurmountable. They’re just a lot easier to surmount if you don’t spend a long time discovering them for yourself. Why play life on hard mode if you don’t have to?

Many thanks to my good friends and fellow entrepreneurs Fred Posner and Kelly Storm for their feedback on drafts of this article.


Georgia’s new “distracted driver” / texting-while-driving law is misguided

House Bill 673, the Hands-Free Georgia Act, took effect today in our state. In essence, it prohibits holding or operating a smartphone/mobile device while driving, especially for the purpose of sending messages or partaking of Internet-enabled applications. Some allowances are made for using devices with “hands-free” kit. A good summation can be had from the Atlanta Journal-Constitution.

I happen to think it’s a fantastically misguided and stupid initiative, but I understand it’s not a popular position to assume. It’s easy to get just about anyone to agree that, in principle, driving whilst distracted with a smartphone is bad—though a great many people are hypocritical about it. All the same, nearly every driver has seen people drift off the road or almost cause an accident while having their head down in a smartphone “like an idiot”. It’s doubtless a sore spot for those who have been involved in such accidents or known people injured by them. Like laws against drunk driving, and indeed, most law-and-order grandstanding, it’s something that politically plays well and doesn’t attract much controversy. Thus, I would not be surprised to be pilloried for opposing it.

The first and most obvious objection comes from my repressed inner (moderate) Libertarian — with thanks to my somewhat-less-moderate Libertarian schoolmate Claire for putting it into greater focus. The public welfare and the legal system don’t really benefit from a large proliferation of laws against doing small things that people are going to do anyway. Drunk driving hasn’t stopped because of stiff criminal penalties (more on that later). It’s already prohibited, and in many cases under penalty of criminal prosecution, to do things with a motor vehicle to which being distracted with a smartphone would give rise.

Yes, I understand that many laws exist to deter dangerous or frowned-upon behaviours regardless of whether they lead to death or injury in any given instance. I also understand the concept of criminal negligence and reckless endangerment: bringing an alligator to a school can expose one to criminal liability regardless of whether it actually bites any children, as it displays casual disregard for human life, and in general we would want to strongly deter people from bringing alligators to public places to begin with — even if they only cause actual harm a small percentage of the time.

Still, there’s a cost-benefit limit to any law. The implementation of any law carries with it enforcement costs, some amount of dedicated bureaucracy, increased load on the court system, and a contribution, in a rather general sense, to the societal management burden and the carrying costs of sustaining an increasingly bloated legal code. Ever notice how the law books never get shorter, only longer? For the most part, the effect of laws about trivial matters is simply to give police another reason to stop us and search our property, extract fines, and, to one degree or another, put our money into the hands of defence attorneys.

This law runs afoul of that limit, especially given the technological specificity with which it is articulated. For committed enthusiasts, its poor formulation gives rise to a fertile field of unsettled case law on issues such as what precisely constitutes a “hands-free” device. For instance, in another AJC article on the subject, it is stated:

You can still control your music through your vehicle’s radio system (if you have the technology to do it).

I have a basic 2007 Honda Accord, which is in no danger of having a modern touchscreen-based or steering wheel-controlled vehicle entertainment system. However, I do have an in-cabin mount for my iPad Mini, and use it to play music through the headphone jack. Is that part of my “vehicle’s radio system”? Another part of the article states that the law:

prohibits motorists from handling their phones and other electronic devices (like MP3 players) while driving. That means no using your hands to play music, switch playlists, etc.

An entertainment system built into the car is no different than an external one such as the aforementioned rig. To borrow the language of the actual bill, what exactly makes such a device a “standalone electronic device”? As with all stupid laws, the legislature wrote some metaphysical checks their weak philosophy budget can’t cash.

Next, while electronic devices are undeniably a very common source of driver distraction, and perhaps the most common, this sort of legal treatment gives them extraordinary weight over other activities which can generate a similarly distracting cognitive and sensory load. Those remain curiously legal. This includes anything from having an involved conversation with a passenger to fiddling with the lid on one’s coffee tumbler to manhandling a leaky taco. For that matter, anyone with a young child knows how easy it is to get into a wreck while contorting oneself to hand them a toy that has fallen on the floor; but then again, it’s almost as easy to get into a wreck when they’re screaming at the top of their lungs about the toy. As a general rule, petty laws which take aim at niche preoccupations are calculated primarily for optics and political points rather than substantive effect. The real problem is not electronic devices, it’s driving while distracted.

But all of this is really nitpicking that can “trouble” any law or regulation; by nature, laws are one-size-fits-all solutions which define arbitrary boundaries, simply because they must, and overlook many nuances. There are people who manage to use their electronic devices rather carefully while driving, all things considered, and there are people who plough their cars into things and people with disturbing regularity in a sober and distraction-free state. Which is the greater “menace to society”? Laws aren’t tailored for outliers.

The real problem I have with this law is that it is yet another attempt to patch a symptom of a much larger social problem: the horrific built environment and deficient public realm of America. The vast majority of the settled areas of the US consist of discardable suburban automobile sprawl. A car is a basic requirement for any social or economic activity in the middle 90% of America, outside of a vanishingly small handful of traditional urban formats such as NYC. This is even true in the vast majority of places we refer to as “cities”. It’s not really much of an exaggeration to say it would be better to be paraplegic with a car in America than to be fully intact without one; you can, at least, have a job with no legs, but you can’t have one without wheels. In this country, where almost everything is built at automobile scale, cars are, by and large, first-class citizens and people are not. Four wheels are a more significant extension of one’s corporeal being than one’s legs are.

And most Americans spend sizable chunks of their waking life driving. For those who endure the punishing commutes of “cities” like Houston, Atlanta, and Los Angeles, more so. From an empirical point of view, it should surprise no one that aspects of the rest of their life bleed into the copious time they spend chained to the steering wheel. People aren’t recklessly indulging their luxurious sensibilities, deigning to send text messages and emails — and in a motor-car no less! I can’t imagine anyone sending text messages while driving really wants to; they’re just stuck there. Given a walkable public realm and adequate public transportation, I’m sure, all other things being equal, they’d much rather play with their phone sat on a train, without being in command of a 3-ton metal behemoth — an “inherently dangerous instrumentality”, as one legally minded author put it. We don’t have a choice!

Yes, yes, I know, we do have a choice; we have the choice to put our phones down and focus on the road. But I suppose I’m of the empirical, behaviorist school of thought, which, truer to the objectivity of the scientific method, is focused on describing how people actually respond to the incentives and cues of their environment, rather than focused on tedious moralising about how they perhaps ought to behave. If you build a society that makes driving a basic requirement of doing absolutely anything at all outside one’s house, these are among the many externalities you’re going to get, because this is how people work. America, you brought this upon yourself.

This has a lot of parallels with DUI (driving under the influence, aka drunk driving) laws. DUI exists everywhere in the world, but not to the scale that it exists in America. It’s an open secret that just about anyone who drinks to any degree here, whether at friends’ houses or at a nearby watering hole, has driven when they shouldn’t. How else are they going to get there and back?

And a massive industry has grown up around it. Lots of paychecks are drawn from the DUI industry: blood testing labs, specialised DUI police units and DUI prosecutors, tow truck companies, ignition interlock manufacturers, private DUI class providers, and of course, lawyers.

How has DUI reached such “epidemic” proportions in America? I have a simple explanation: one has to drive for any social occasion! Most bars, clubs, taverns and pubs in America are located in shopping strips or detached buildings on the side of a road, and they have parking lots. How, do you suppose, did all the patrons get there? Over half the town’s work force would have to be employed as an Uber driver for them to get there by means other than their own car.

To be clear, the individual responsibility for drunk driving is borne by the drunk driver. I don’t dispute that one should not drive after drinking. I’m not saying our built environment forces any individual to drink and drive and endanger others. Nevertheless, DUI is, in the aggregate, inevitable in a country that’s built at automobile scale and, by and large, exclusively for cars, whether you like it or not. You don’t have to like it; it’s still going to be a fact. It takes a uniquely wicked type of sociopathy to build a whole country this way, then declare sanctimoniously that nobody is to drive after drinking. How do you resolve the immense, schizophrenic contradiction in messaging between MADD campaigns and police roadblocks on the one hand, and remote bars with huge parking lots on the other? How do you expect the average citizen to do so? How about a child growing up? It’s just not realistic public policy.

Realistic public policy is, in my view, about recognising that most people are somewhere in the middle of the bell curve of opportunism. Among those who drink with any degree of regularity, there are going to be 10% that just won’t ever drive — the paranoid, the preternaturally principled, those on probation for a DUI. There are also going to be the 10% who will always drive, totally wasted, no matter what, after partying every weeknight; those sloppy repeat offenders are the ones we love to hate with our DUI campaigns. But most people aren’t one or the other; all other things being equal, they generally prefer to be rather law-abiding and make the safe choice, but from time to time they will drive with some degree of intoxication just because it’s far and away the path of least resistance. They probably won’t get pulled over, and if they do, they will probably be fine most of the time. Every once in a while they won’t be, and a few of them will get in trouble with the law, because the DUI laws, as written, are hard to comply with. It’s very hard to gauge one’s blood-alcohol level. And there are all kinds of other tools police have, ranging from subjective perception of impairment to gotchas and entrapping features in US law enforcement interactions.

That middle 90% is who laws should target, and this law doesn’t capture that awareness. Driver-distracting devices work the same way. There are people who will never, ever, ever touch an electronic device whilst behind the wheel of a car. There are pathological texting addicts who are miraculously still alive, despite weaving casually in and out of lanes on an everyday basis as they thumb out WhatsApp and Snapchat messages. But if you make the middle 90% drive for everything, they’re going to do “stuff” behind the wheel some of the time, though they’ll be cautious and, all other things being equal, prefer to avoid it most of the time. Still, they’re going to be bored. They’re going to be put upon by work-related communication. As a practical matter of observation, it’s psychologically inevitable. Life doesn’t stop just because you’re caged in a car.

You can’t make laws prohibiting the normal distribution of externalities. That’s a weak, intellectually feeble cop-out. It’s a shitty band-aid. Sooner or later, we’re going to have to face facts and deal with the hard issue that we have built an overwhelmingly auto-dependent country and public environment, if it can even be properly called that. That’s going to bring car-related negative externalities, as people live out large chunks of their lives in cars. It’s going to bring carnage. It’s going to bring maiming. It’s going to bring obesity and diabetes. It’s going to bring depression, isolation and boredom. And yes, the odd instant messenger conversation.


Problems and their solvers

We live in a cultural moment of what is sometimes called “solutionism” — an intellectual fixation with “solutions” rather than the problems to which they are addressed.

Some of this is ingrained in Anglo-American cultural psychology—and not necessarily all for the worse. Although this vantage point sometimes suffers from a debilitating naiveté about the nature and complexity of problems, its indefeasible optimism about man’s ability to control and master the world has made a contribution to technological and economic progress that cannot be denied.

Nevertheless, a lot of the excesses of solutionism are nowadays driven by unremittingly one-dimensional Silicon Valley groupthink of the Internet Age, a metaphysic in which all problems are reframed as a want of an app or a startup. In venture capital, code and devices lies salvation.

Nowhere is this more evident than in the worldview of TED, where the worst of 1% liberal tone-deafness meets the intellectual fraud of facile technocracy. This has seen some public rebuke from the likes of Evgeny Morozov (who, ironically, gained notoriety through the very same TED over a decade ago for his sceptical take on the hackneyed cliché of the Internet as an instrument of political liberalisation). Although the assaults on the edifice of Apps and Machine Learning Macht Frei are muted and come from a small renegade force, that critique has gained exposure to a wider audience in recent years. There is sufficient prior art that it does not warrant recapitulation here.

The bigger and more troublesome consensus I see relates to the social convention that one must provide solutions alongside one’s contemplation of problems. In America, at least, social criticism is widely deemed “unconstructive” if not accompanied by a plan to fix the ills. It seems to me that one cannot be a public intellectual with a critical vantage point in the US unless one is prepared to offer concrete rectification, whether policy prescriptions for worldly problems or inward-looking attitudinal adjustments for personal ones. Otherwise, one’s a pathetic whiner.

The first and foremost reason this is problematic also manifests itself in the reign of the aforementioned technocracy: it posits something about the nature of problems — that they all have clear, distinct and discrete solutions. Some problems of humanity are timeless and existential, though. Not all problems are solvable, particularly in isolation. The American cultural mythos is cholerically hostile to the notion that some problems simply might not have solutions, alas. Everything’s solvable!

And maybe it is. But where solutions do exist, they are often woven into complex and interdependent systems of simultaneous equations, inextricably bound up in solutions to vast categories of other problems. The presumption of symmetry between the task of describing a problem and devising a solution is unwarranted, but if the critic balks, they are met with: “Oh, all you want to do is rant and complain”.

That explains why the “so what do we do about it?” part of socially critical books often reads like a stilted afterthought, stammered out at 5:49 AM on the day of the editor’s deadline in an eerily silent graveyard of empty latte urns and greasy take-out food caskets. After doing the rather manageable thing of identifying the problem, the writer’s now tasked with the much more cosmological burden of sorting it all out. It’s the thrill of agony and the stinging pain of defeat, all in one manuscript.

Yet the most overlooked problem ought to be the one most glaring: even where discrete solutions are possible, in principle, the people best equipped to identify and describe a problem are not necessarily the best people to solve it, and any correlation between the two is strictly incidental. Observers most sensitive to the consequences of a political problem, for instance, are rarely policy experts. They are not in a position to craft a laboriously articulated fix that is compatible with the internal logic of, for example, the legislative process.

Gun control is a timely, if random example. I can tell you in considerable detail why I think this country has a globally unprecedented mass shooting problem and that it needs to seriously re-examine its interpretation of the Second Amendment, but I don’t have the esoteric knowledge to tell you what kind of response might actually work as a matter of working law or regulation. Ironically, the people who are more qualified to do that mostly don’t seem to think our mass shootings are much of a problem.

I don’t know why we expect a competent description of a problem to signal an ability to solve it, but I do know that the demand to do so is a widely deployed conversation stopper that shuts down a lot of legitimate critical work. Conversation stoppers only work inasmuch as they capture widely accepted notions — we call it “conventional wisdom”. Conventional wisdom has it that everything’s fixable and that one must proffer a fix to get a seat at the table of criticism and dissent.

That’s something we need to solve.


No, I will not make my son a programmer

The world is abuzz with talk of “coding” lately. Lots of people tell me their brother or their cousin is “into coding”; “you know, he does web sites and stuff”. Indeed, I saw a book at Fry’s yesterday, mostly a tome of basic HTML and CSS, that apparently passes for “coding” nowadays (but that’s a rant for another day).

On the shelf below it, there was another title: “Python for Kids”. And lots of tech colleagues tell me they’re teaching their five and six year-olds basic programming.

And as I have a two year-old son, and given what I do—though it has precious little to do with web development per se, in the main—I am asked fairly often: “Are you going to teach Roman to code?” It seems to be almost rhetorical in the minds of many doing the asking, almost a fait accompli.

I’ve always found the question puzzling. I don’t know. Am I going to teach him to code? To me, it sounds rather arbitrary, a bit like, “Are you going to enroll him in karate lessons?” or “Are you going to have him tutored in oil painting?” or “Is he going to play basketball?” It depends on what he’s like as a growing person, and whether he seems interested or appears to have any aptitude for it, I suppose. He’ll doubtless be exposed to it, given his parentage; there’s probably no avoiding that. Beyond that, it’s really a question of whether he’s keen on it.

There’s an important balance to strike; when it comes to specialisations, kids don’t know what they don’t know, and one of the main reasons we have general public education (and general ed/survey course requirements at the university level, in the USA) is to expose growing minds to the range of occupational possibilities, academic disciplines and fields of human endeavour generally. Still, I’m acutely aware of what happens when parents try to remake children to any degree in their own professional or intellectual image. I got this mildly, in the form of being subjected to parental projections of Soviet intelligentsia values: mandatory piano lessons, assigned reading of literary classics, lots of classical musical concerts, ballet performances, etc. In hindsight, it probably did me some good, though my adolescent self rebelled powerfully on the inside. I see much sharper examples in the lives of others, whose parents want them to proceed down some similar track — play football in college, learn the family business, or, as it happens, become a software engineer.

My own interest in IT as a child arose in a particular context, a historical conjuncture of many factors: university environment, emergence of the commercial Internet, supportive academic social community, adolescent quest for identity, efficacy, communication. There’s no reason to think the same motivations will drive others in an era in which all this is long commoditised. A lot of people seem to subject their kids to forcefully projected nostalgia for a different time and place. I know my love for computers came from a different time and place. I am not sure I’d have been lured by them as they are today.

I think the question about “coding” runs deeper, though. There’s a widespread awareness—and perhaps it’s fair to say, anxiety—about software eating the world. There seems to be some consensus that the foreseeable future of gainful employment in the developed world dovetails extensively with machine intelligence. Automation as a reputed killer of low to medium-skilled service jobs is a routine headline. I think what’s really being asked is, “given that we’re going to be a society of computer programmers, will Roman take part?”

I suppose I don’t buy the given. It’s fair to say that use of computer technology has become routine and necessary in most full-time professional jobs. I also think it’s important for kids to have some idea of how software works so that they can make sense of the world around them; it can’t all be “magic”, and indeed, that lack of understanding is an obstacle as we rapidly leap into a very software-driven world.

But it doesn’t follow that everyone needs to learn to speak to computers in code. Indeed, one could convincingly argue that the general arc of software progress and the commoditisation of computers has been to make this less necessary over time; there was a time when everyday uses of computers required speaking in assembler, COBOL or BASIC, while nowadays a substantial portion of the digitally savvy population taps through “apps”, and frankly, so do I. I started writing socket (network-related) code in C on Linux when I was 12, but I only have the broadest idea of how my Galaxy S8 works. I’ve asked younger Millennials for Android help before.

Moreover, people learn what they need to; I know plenty of otherwise technically illiterate accountants who have conquered snow-capped summits of Excel macro wizardry, the likes of which I could not have even conceived.

My undergraduate-aged babysitter is far from a technologist, but her mobile and desktop computer literacy surpasses that of many Baby Boomer and Gen X professionals. Why? She was born in the late 1990s; she’s always known the Internet. I jokingly asked her once if she realised music wasn’t always on iPods or in MP3 format, but based on her matter-of-fact response, I don’t think she really heard the full notes of the humour. It was almost like asking me if I realised history used to be recorded on papyrus.

In short, I don’t see law, medicine, writing, poetry, music, art, or the myriad of skilled professions becoming a fancy, domain-specific branch of computer programming. These fields will—as they do—put computers and the Internet to business use, but why are we talking as if everyone’s sat in front of a PDP-11?

That leads me to the heart of what inflames me about this cultural moment of software mania and metaphysical, cosmological technocracy: technology is a tool, not an end in itself, and we mustn’t forget that. It is a force subordinated to human purpose, not the other way around. It is as lifeless and mechanical as a jackhammer, not an organism in need of care and feeding, nor a capricious god to which we must pay tribute or sacrifice our young. It does not intrinsically solve most timeless sociopolitical problems. It’s not a raison d’être, and neither is “coding”.

Speaking of sacrificing our young, while my own childhood obsession with programming and the Internet got me a well-compensated occupation in an in-demand and growing field, as well as a supportive network of likeminded online cohorts, I’m all too aware of the human costs, physical and psychological. At least ten thousand hours were spent in a sedentary pose as an adolescent and teen. I missed out on almost all social features of high school, since there was always C code to be tinkered with or someone was wrong on Kuro5hin or something. (Though, there’s no particular reason to think it’d have been epic otherwise, for reasons Paul Graham articulated better than I could). The shockingly low amounts of sleep I ran on most school days between grades 6 and 11, bleary-eyed from the blue light-soaked all-nighters of homo computatis, ought to be the subject of some kind of study, I swear. I wear multiple pairs of glasses due to eye strain. I dropped out of college because I cared so much more about my work. The fact that anyone ever dated me seems like a miracle sometimes; I somehow had a girlfriend my senior year of high school, which finally had me looking after myself more, but you, too, would ask “how?”; it didn’t (heh) compute.

I’m not saying I necessarily regret any of it, though of course we’d all tweak a few things with the benefit of hindsight and time travel. What I will say is that I don’t bill my lawyer-ish hourly rate for nothing. I got here at the cost of much of my childhood and adolescence, as we ordinarily understand those stages of life, and at this point I’ve fed more than two-thirds of my life span to the exacting and jealous machine. The road to being pretty good at what I do was long and arduous. Computers are addictive as all hell. It’s no accident I’m finishing this post at 4:45 AM; when you mess up your biorhythms from such an early age, old habits don’t just die hard, they don’t die at all. I’m very mindful of all that as I consider the full list of possible consequences of parentally encouraged geekery for kids.

I suppose there is one way in which Roman will be socialised in the shape of his father: he’s genetically part philosopher, and if he does take up programming, we’re going to spend a lot of time on: “But code what? And why?” In the meantime, I have no plans to plant him in front of a Raspberry Pi or “Python For Kids”.


Review: the failure of my Kinesis Advantage experiment

For the past few years, I have vacillated between a classic unlabeled Das Keyboard Model S and a Microsoft Sculpt. I liked the Sculpt for the ergonomic aspect, as I have a high comfort level with split keyboards from past experience retraining myself to use them, but deep down, I am one of those IBM Model M / Unicomp devotees of The Click—not unlike many of my technical cohorts. Thus, I was always a bit torn.

For the last year in particular, I had been using the Das Keyboard and my laptop. In response to growing wrist and hand soreness of a type I would intuitively describe as “pre-RSI”, I decided to explore other options. The finger stretching had begun to produce discomfort in my hands, along with other faint pains that likely presage RSI. I had also begun to experience an anticipatory aversion to tasks that require lots of typing, which is often discussed as a psychological manifestation of creeping RSI. Never before had I been averse to lots of typing.

RSI is a dead-serious concern for people in our profession; I have heard about people having to change professional roles or exit the profession altogether because of it. And I seemed a likelier candidate than most to eventually succumb to it because, in my estimation, I have put more mileage on my hands than most of my colleagues:

  • I am a fast typist; I can do 140 WPM fairly easily on the right keyboard, and if I concentrate really hard, more.
  • I have historically favoured loud, clicky keyboards that require high-impact, violent typing mannerisms, such as the IBM Model M, and used one or variations thereof for close to a decade and a half. These tendencies have been carried forward even to kinder, gentler keyboards, and I have worn out the keys on many a cheap laptop keyboard.
  • I have been typing fast and a lot since I was 9 years old, the age at which I began to work with computers seriously and program for the first time.
  • I’m a talkative and verbose personality, and I just type a lot in IM conversations, write lengthy e-mails, blog posts, etc.

For all these reasons, I had cause to be especially concerned. The combination of volume, speed and impact in my computer-based life led to a lot of strain. My relationship to the keyboard is an “intense” one.

The Sculpt did a pretty good job of soothing these discomforts as much as a keyboard can. It also comes with an optional component that goes underneath the wrist pad and elevates the keyboard, which I have found to make a big difference. Still, with this emergent discomfort, I began to grow anxious about my hands and the impact in five to ten years, so I was interested in exploring more radical options. They’re my hands. Next to my eyes, they’re the most important professional asset I’ve got.

The Kinesis has been around for a long time, and I have heard evangelism for it from some developer colleagues and friends for at least ten years. In my twenties, I ignored it; this keyboard was just a little too “weird”, would obviously require some retraining, had a $350 price point, and RSI seemed like a distant concern. In light of the shifting situation, however, it seemed quite appealing.

I read and researched; it got generally rave reviews, to the degree that there are several other keyboards (e.g. ErgoDox) that have basically mimicked at least some essential features of the concept. I read Hacker News comments from people who said the Kinesis literally saved them from having to change professions. I wasn’t sure how much of that was hyperbole, but I figured I’d give it a shot.

I had high hopes, and I knew it wasn’t going to be an easy road adapting to a layout where most functional keys (Enter, Space, PgUp/PgDown, Home, End, Alt, Win key, Ctrl, Backspace, Delete) were moved to thumb clusters. Reviews spoke of weeks or months to really retrain and make the switch. But I was really sold on the idea of key wells to reduce finger extension, since, intuitively, and despite my large hands, that seemed to be the biggest pain point for me. And, let’s be real — for $350, anyone’s going to have high hopes.

And thus, it gives me no pleasure to report that I am one of those people for whom the Kinesis isn’t going to work out. One doesn’t hear from them much; the online reviews are generally very positive, but I suspect they suffer from survivorship bias.

Typing words on the Kinesis was an amazingly pleasurable experience. If your job consists primarily of typing natural language text, this might be just the keyboard for you.

There were growing pains during the first day or two, of course. Hitting Backspace with my left thumb took some adjustment. The curvature in the wells had the same effect upon my typing as making some keys very small and difficult to hit. The columnar structure meant my spacing expectations were all off. P and O off to the northeast like that was odd. Getting punctuation right was hard. Nevertheless, after a few days, I had built my typing speed back up to a fairly respectable level with which I was comfortable. It’s hard to say how much of it is real and how much is placebo effect, but my hands felt considerably more relaxed and the stressful finger-stretching motions all but disappeared. It was nice to not have to take my fingers off home row much.

Alas, the primary factor dooming this venture didn’t lie in writing e-mail or chatting, but rather in my utter paralysis in the face of my actual work. I’m paid to be a consultant and a programmer, not a novelist or blogger.

Any programmer will recognise that every proficient “power user” of computers has their “flow”: their specialised use of input device idioms, keyboard shortcuts, and key combinations to get things done. Fast. This “flow” is generally taken for granted by anyone with deep computing experience. You absolutely need it to be effective and get things done, and being without it is crippling. I would liken it to doing higher-order math; to be any good at it, you have to be able to do the easy math at 200 MPH, otherwise you’ll spend all your time struggling through that. The same holds true for people who work with text; bare-minimal literacy is not enough, they have to be truly fluent readers. Or maybe it’s like speaking a foreign language; until and unless understanding and speaking becomes fairly second nature to you, you will spend too many brain cycles struggling with language mechanics, with no room to traffic fluidly in complex ideas. For that matter, you can’t perform Bach without having a second-nature relationship to the instrument and reading sheet music as a subconscious act.

Fluent computer use is like that, too. Here is a small sampling of things I do on my keyboard literally all the time — and I cannot emphasise enough that they figure into almost anything I do at the computer, any time:

  • Use special characters such as tilde (~), angle brackets (<>), curly braces ({}) and square brackets ([]), plus (+), minus (-), underscore (_), parentheses, etc. Can you write code or even use command-line UNIX without using these constantly? (See the contrived C sketch after this list.)
  • Use Ctrl + Backspace to delete entire words in generic text controls (e.g. browser).
  • Use Ctrl + Left/Right to skip around entire words in generic text controls (e.g. browser).
  • Rapidly deploy complex arrangements of tiled windows in i3wm and shuffle windows around within these arrangements. i3wm was the most significant productivity and comfort improvement in my life over the last year in that it virtually vanquished my use of the mouse (except in a browser of course).
  • Skip to beginning and end of text with Home/End.
  • Make use of PgUp/PgDown liberally.
  • Use Shift + arrow keys to highlight text.
  • Use Ctrl + A to select an entire block of text.
  • Use Escape to constantly switch among “insert”, “normal” and “visual” mode in vim.

The Kinesis’s placement of most of these keys is completely beyond the pale. Observe:

  • The arrow keys are split between the panes; up/down arrows are on the right, left/right arrows are on the left.
  • There are two Ctrl keys but only one Windows (Command for Mac layout) key.
  • Alt is at the top of the left thumb cluster.
  • Esc is a tiny and difficult to reach button.
  • The plus/equals (+/=) key is where you would expect to find Esc on most keyboards.
  • The minus/underscore (-/_) key is where you would expect to find Backspace on most keyboards.
  • The curly braces/brackets ({} / []) are awkwardly situated in the southeast corner, requiring use of one’s pinky finger to reach them.
  • The tilde (~) and backtick (`) key is off to the extreme southwest.
  • The pipe symbol is just to the top right of it.

Few of these keys are easy to hit, particularly the arrows, since they are also near the rim of the well curve.

Now, I’m not so naive as to think adaptation to this new regime is quick. The typical figure thrown out for truly making the transition was about a month, and I’ve had the Kinesis for less than a week. However, I drew the conclusion for myself — one clearly not shared by the techies who swear by the Kinesis — that the time and effort required to invent a new kind of flow wouldn’t pay off.

The flow is absolutely critical. Customers don’t pay me to be slow at what I do. They don’t pay me to do what I do at an average pace. They pay for database jiu-jitsu and command line fu. With the tools, shortcuts, idioms and patterns of computer interaction at my disposal, I can nail up an environment in which to debug their critical production environment in seconds. They’re not paying for me to struggle with my keyboard or type like a normal person. When I need to get cracking, it’s a flurry of windows, symbols, connections and output. SSH sessions fly through the screen like deadly weapons. My hands dance through directory structures and command line switches. It’s often enough that I have to soar to the apex, the crescendo of a human-mechanical chorus; I have to be one with the machine. If it’s not too immodest to say so, even savvy fellow nerds have commented that I am too fast on the keyboard for them to follow what I am doing. That’s just how I roll, and I rely on it for maximum effect and commercial advantage.

I don’t write code at 100+ WPM, of course. But programming involves trafficking in many layers of abstraction simultaneously. It happens in fits and starts. When an idea needs to be translated into code, you can’t be slowed down by basic input mechanics. As it is, the character-by-character entry is a bottleneck to the translation of thought process into code stream. The subconscious, second-nature use of those special characters and key combinations is absolutely critical to that. Code is hard enough to entertain without the keyboard or poor muscle memory being in your way.

I approve of the basic concept of the Kinesis. The key wells seem especially therapeutic. But this wholesale rearrangement of symbolic keys just isn’t going to work for me. Perhaps if they had kept more of the keys in conventional places, my outlook would be different. As it stands, however, it’s just fantastically difficult.

As best as I am able to understand the logic of the design, these keys are all considered “infrequently used”, and thus moved off to the sidelines to make room for keys that are frequently used. Well, infrequently used by whom? The semicolon key is faded from wear on my laptop. The brackets/braces keys ({} / []) are slightly depressed from constant smashing. The slash key (/) is perilously close to destroyed. The arrow keys grimace every morning, anticipating another day of unremitting abuse. Letter keys are only the beginning of the story in my keyboards’ brutal existence.

The other major factor that counts against the Kinesis is, of course, the incompatibility with conventional keyboards. I acknowledge that twenty years of motor memory doesn’t go out the door in one day, but it goes quickly enough; after two or three days of nonstop Kinesis use, I tried typing on my laptop keyboard, and pure gibberish came out. It was surreal to realise that I had forgotten how to type on normal keyboards in the space of a few days.

The culprit here wasn’t so much the compulsion to hit the Space Bar with my left thumb to achieve Backspace, but rather the different muscle memory expectations with regard to the spacing of keys. The Kinesis’s rows aren’t staggered like a conventional keyboard’s; they’re columnar. Some of the keys are located in awkward places near the rim of the key wells. Until I regained some of the old muscle memory, I just typed the wrong letters for a half hour. The other thing I noticed is that conventional keyboards, including the split keyboard, felt very cramped after the Kinesis.

All the same, I’m often untethered from my desktop. I do on-site work sometimes. Y’all, I can’t just forget how to use the keyboards the other 99.9% use — and use them well. I’ve heard the comment that routine use of conventional keyboards along with the Kinesis keeps both sets of muscle memory fresh, but in my case, that didn’t seem to be working out so well. It took me a whole evening to work out how to type on my Sculpt again. I shudder to think what this experience would have been like if I had put a few more weeks into the Kinesis.

Fortune favours the bold, experimentation is important, and RSI avoidance is a sufficiently compelling goal that the risk of buying a $350 doorstop was worth a shot. I regret nothing. Perhaps I will revisit the Kinesis when the impetus comes along, but for now, the Sculpt—and ergonomic split keyboards more generally—is an adequate middle ground that doesn’t in any way interfere with my typing on conventional straight keyboards as well.

No, I am not selling the Kinesis, and neither will it be collecting dust on the shelf. Typing prose on it was undeniably pleasant, and I will explore ways to try to adapt to it again in the near future. I will research ways to remap keys in ways conducive to developer “flow”. I may yet end up riding it into battle. However, at first glance, the experiment appears to have been a failure.


In Response to the Cult of Remote Working

Remote working and working from home have become hallowed totems of the progressive side of IT business in recent years. Advocacy for the benefits of working from home and collaborative technologies that bring distributed teams together is widespread on weather vane forums of IT culture like Hacker News. Even in the more formal business press, there’s been a steady drumbeat of analysis with an optimistic view to the possible benefits.

I’ll be the first to admit that I have been a beneficiary. I’ve been self-employed since I was twenty-two, and have spent a good chunk of my adult life working remotely, in coffee shops and coworking spaces, as well as living overseas and working from wherever I happened to be.

Furthermore, the remote work thesis has dragged onto the stage with it several related insights that unquestionably needed exposure to a wider audience. For some time now, we’ve been highlighting the failings of old-fashioned, Twentieth Century “butts in chairs” presumptions of corporate America, namely that if you’re sat in your cube like a good Organisation (Wo)Man, you must be working productively. It’s good to see more mainstream recognition of the fact that people work differently and have different biological and psychological rhythms, from which it follows that the standard 9-to-5 schedule is not the most productive one for a lot of people (I make a crappy 9-to-5er, and so do many developers I know). And more generally, I welcome the pressure to take a more results-based view of productivity that privileges what people are actually getting done over how and when they do it.

Still, remote work has become something of a religion now among Millennial professionals in the “digital realm”, and it’s reached a fever pitch. I’ve heard from multiple people and in various forms the claim that modern tech companies, or software companies, simply do not need offices. It’s trotted out as an incontrovertible fact that trumps all business and people-specific considerations. Among certain segments of affluent Millennial professionals, it’s become a cult.

The world wants for a more sober and equanimous analysis, which, if undertaken, leads to more ambivalent conclusions.

Business model and knowledge

I think the main thing missing from the generic, lyrical encomiums to working at home is an awareness of how knowledge is shared and transmitted. That’s going to be strongly tied up in the nature of the business model and its specific workflows.

Yes, remote can work well in a small team of professionals who work mostly independently on compartmentalised work items. That suitably describes a lot of web startups. Good web developers, for example, can be relied upon to maintain and expand their skill set independently of the concrete work they do. Essentially, they’re freelancers with a W-2 paycheck.

That’s not how a lot of business in the “knowledge economy” works, though. I got my career start at a relatively small-town Internet service provider, rapidly rising from a part-time student tech support employee to the principal system administrator in 1-2 years’ time. I came into the first role at age 18, having good raw technical skills from a childhood of Linux and C programming but with no real-world work background, business experience, or knowledge of industrial equipment. I was an eager knowledge sink and learned a great deal from older colleagues who mentored me. A lot of the gaps that needed filling weren’t so much technical skills as applied experience with how to implement technology to serve real-world business cases, and the trade-offs involved in doing so. I had no exposure to business growing up; I had never dealt with the complexities of real customers or contracts, knew nothing about how to price services or the true cost structure of a company, CAPEX vs. OPEX, etc. Like all over-eager, bright-eyed, bushy-tailed 18 year-old beavers, I had to be slowly disabused of an overwhelming tendency to recommend “build” vs. “buy”. Furthermore, I forged strong relationships with the interesting and eclectic crowd that this employer attracted. These remain my strongest social connections more than a decade later, and have been professionally as well as personally important. And when I left that ISP role at age 20, I was able to successfully leverage the broad experience and parlay it into a rather meteoric rise in professional status at a big-boy corporate job in Atlanta. This process made me into the professional I became.

The small-town ISP wasn’t lucrative. The bargain with employees was, for the most part: we pay low student-type wages, you learn more, and more quickly, than almost anywhere else you could conceivably work. It was a fair trade, and one that exists in a lot of places in the economy. The average 19 year-old, even of the precocious sort, doesn’t get to administer BGP routers or help deploy SANs. This was all socially transmitted knowledge, the organic outcome of shared culture built around the proverbial water cooler.

I saw where the cables ran and how real networks looked. Even in our Cloudy world, where these links are increasingly software-defined, it’s important to see and touch. I paid attention to how my coworkers worked, their mannerisms, how they reacted to difficult situations, and I copied and adapted many of their habits. I came to have similar instincts. In ruminating upon how I learned, I learned how to better teach and train others. I learned to bring a business outlook to bear on many issues as well as a technical one. I learned a lot about common organisational anti-patterns and what not to do. These are the things that made me valuable to future employers as much as any technical skill set I possessed.

I have trouble imagining how this would have worked if I were sat at my home computer, given a bunch of logins to network equipment and told to inquire on something like Slack if I had any questions. I was there in person to pester — and occasionally frustrate — my senior coworkers, and, with time, to teach and mentor my junior team members, and it made all the difference.

Techno-utopian fantasies and the human factor

For the last decade or so, I have been doing SIP and Kamailio consulting for VoIP service providers. VoIP is a weird intersection of the technology universe where telephony meets computers, two worlds that don’t traditionally converse. The business opportunity as a consultant comes largely from the fact that the phone guys traditionally don’t know much about IP packet networks, data and IT, while the IT guys don’t know much about phones.

And although that world is slowly changing, VoIP providers still have to talk to the PSTN (Public Switched Telephone Network), AKA the traditional telephone network. To be a useful vendor to VoIP service providers, you need some rather esoteric domain knowledge about arcane PSTN concepts that go back to 1980s technology. The PSTN is highly regulated, and you need to understand that regulatory environment to be able to understand customer needs as they relate to billing and interconnection.

That sort of thing is called domain knowledge, and exotic domain knowledge is the essence of most commercially viable consulting endeavours. VoIP is an uncommon skill set; there’s a very limited number of people out there who possess it, and you can’t just hire off the street for it. Even when you do find someone with that expertise, it’s almost certainly going to be in an allied, but different subspecialisation of the field. In addition to imparting niche technical skills, you’re going to have to teach them about the industry and the customers.

How do you do that over Slack and Hangouts? Well, I thought I could. I have hired four or five people during the lifetime of my business, all remote, reasoning — rather contemporarily — that working at home is a nice benefit to provide and technology can bridge the gaps.

It can’t. It didn’t work. And I specialise in telecommunications. When it comes down to it, phone, e-mail, chat and video are all directed communications graphs. Any communication is particularised, deliberate, and has a certain cost, even if it’s relatively low. Chat inherently privileges the short sound-bite and the “quick takeaway”, the favoured refuge of people too busy to think. Few people are going to type as much as they speak. And any commitment to do so leads to self-consciousness about using “work time” for that purpose in ways that hallway chats do not.

Of course, it’s not that all “meatspace” workplaces are socially robust, thriving marketplaces of ideas and nexuses of collegial friendship. I’ve worked in plenty of corporate environments where people come in, sit in their cube, type things, have a meeting, and go home. But still, real-world tech work isn’t always about churning out code in a generic, undifferentiated way. Often, it can’t be divorced from deep knowledge of the business domain in which you participate. It’s very important that your employees come to have social knowledge of that business domain, and the inefficiencies of remote communication are a surprisingly strong headwind.

What’s more, any honest entrepreneur can tell you that convincing people to work for you and applying their work in an economically useful way is actually an incredibly hard problem. It’s often harder than getting customers to pay for the product, which is usually the more central preoccupation of business lore. Knowing your (expensive, indispensable) people, what makes them tick, keeping them happy, and maintaining a finger on their pulse is more art than science. Accenture and Deloitte may think of people as “Linux resources”, but in the world of small business, this is your crew, your livelihood, your life-blood. Emojis don’t promote that kind of deep connection to so-called “human capital”.

I think this is all a special case of a more general fallacy that pervades the technocratic bent of Valley thinking: the conceit that technology can solve broad classes of timeless management problems that are essentially human. A lot of the sales pitch behind ticketing systems, project management systems, CRMs, Slack, Basecamp, etc. has the meta-message that if you just had the right tools, you could bridge all work and process gaps, or somehow guarantee or force productivity, or provide browser-based surrogates for the psychological feedback of solidarity and shared purpose. You can’t. Not even with uncompressed 8K video and a million dollar telepresence system. Ask the airlines if anyone still travels to have important business meetings. There are certain categories of problems for which more technology is not the answer.

A related pitfall of technocratic utopianism—that it is in tools and technology that our salvation lies—is that it often leads to solving the wrong problems. For example, metro Atlanta is practically a poster-child for the sprawling suburban dystopia of which I have treated much. It’s an accepted fact that no matter where you locate your company office in Atlanta, you’re dooming a high percentage of your work force to a potentially soul-crushing commute across Atlanta’s unconscionable freeway distances. It’s not news that length of commute correlates inversely with health and happiness. So, what’s our response? Instead of taking a fresh critical look at our crappy infrastructure, lack of public transport, and automobile-centric, sprawling built environment, we flush the positive value of the enterprise of “going to work” out with the bathwater of “the hated car commute”. But they are not one and the same.

Personality and social needs

As I wrote in another post from 2015:

Oh, [working at home] seemed incredibly cool when it was the forbidden fruit. Back when I had to make a bleary-eyed, tedious commute to some cube at 9 AM and put cover sheets on TPS reports or listen to coworkers’ incessant sports talk, working from home was a rare and coveted treat, the stuff of dreams. Imagine, saving the world in my bathrobe, all the fine things in life at my fingertips: refrigerator, snacks, couch, coffee table, a breather on the balcony!

However, after I went out into the reputedly exciting world of self-employment around this time eight years ago, the novelty wore off after a week or two and the bleak reality set in. I’m an extrovert and I don’t handle extended loneliness well. Not leaving the house was depressing and unhealthy. It was not conducive to a routine; I quickly developed a chronically dispirited mood, exquisitely strange and shifty sleep rhythms (even by my nocturnal standards), and eating habits worthy of a bulletin from the Surgeon General.

Oddly enough, this was unrelated to whether I lived alone, with a long-term romantic partner, or family and friends. Certainly, I can’t work at home these days in a small apartment with three young kids, but for most of the eight-year history of this business, I lived alone or with an adult partner no less busy than I. Also, I spent a few years living overseas. In all cases, I was dysfunctional working at home–or whatever place served the role of home–and I hated it. To stay sane and produce consistently, I need some kind of routine, a commute, movement and walking, coworkers, water-cooler talk, lunch meetings, and the overall psychological compartmentalisation that comes with a distinctive work-space. If I don’t have that, things go downhill fast.

Working remotely from one’s residence certainly doesn’t have universal appeal.

An understated but important subtext of the remote work discussion in IT culture is a celebration of the stereotypical techie introvert, who resents being subjected to mandatory social participation in the typical corporate workplace shuffle. In a world where such people feel — to some extent legitimately — crowded out by extroverts, this is fair play.

But not all extroverts are facile schmoozers and gold chain-wearing womanisers from Sales. A question that seems to get drowned out in ecstatic praise of remote working is about the psychological foundations of motivation. Productive relatedness to one’s fellow man is a universal psychological need. We all need, in one measure or another, to be seen, admired, included, valued, recognised and praised for our distinctive contributions to larger endeavours. There’s a lot of unexplored territory around how a chronic state of remote work bends this dynamic and affects long-term job satisfaction. I am not ashamed to admit my investment in my work and my professional identity generates social needs that languishing at home does not fulfill.

There are other wrinkles in the fabric of human psychology, the blunt and unvarnished truths we’re all supposed to learn as we get older, wiser and more savvy to the fragility, equivocality and capriciousness of the human condition. By way of illustration, one possible wrinkle is the role of workplace as refuge, however unconsciously sought, from a difficult and stressful home life, for men and women alike.

We’ve heard a lot lately from people who say, “Now that I work at home and don’t have to commute, I get so much more done in less time while still staying on top of chores and spending more time with my kids! My life is so much better!” Well, good. Everyone should have your idyllic life and happy marriage. Everyone should be young, affluent and healthy instead of old, bankrupt and ill, and they should live in a village full of warm, loving friends and relatives, instead of alone and forgotten. Home life should be easy and cheerful instead of overwhelming and demoralising. Who wouldn’t rather live in Mister Rogers’ Neighborhood, where addiction, abuse and depression are the unintelligible words of a foreign language? Given the choice, instead of dealing with Mom’s methadone withdrawals, bailing their cousin out of jail again, going in for an MRI of a meningioma, or staring at a foreclosure sale notice, I think anyone would sure as hell prefer to crush some P90X, bro down on some Scrum board stories in Awesome.js, and make a fat 401(k) contribution because it’s payday. All without leaving the house! So much winning.

But my life experience suggests there is a not-insignificant number of people for whom escape to a workplace and a single-minded focus on work is what they need to stay sane. It may be all they have. Even for a lot of the affluent professional middle class, as often as not the best case of ordinary American existence is that it’s bleak and offers little to come home to and little to go out to. Or it can be much worse. That’s real life.

Rather generally

Well, that took a turn into a peculiar niche area, you might say. But we’ve got to think about stuff like that before we declare the workplace, as we classically understand the concept, to be unequivocally obsolete. We overzealously declared the pedestrian civic realm and the public plaza to be obsolete half a century ago, and look what happened?

There’s probably a reason why we have evolved an entire cultural vernacular not just around the specific places and facilities in which we work, but also around the idea of being “at work” and not “at home”. Being “at work” isn’t just about where you are located right now—it can also have a more cosmological dimension. It’s a state of affairs. It’s a punctuation mark to many would-be run-on sentences, far from all of them pleasant.

As a software engineer, I’m the first to say that not everything need be expensive and physical. However, humanity cannot be wholly separated from its physicality and “uploaded” to The Cloud. Many of the physical structures and in-person rituals we have built are a necessary manifestation—indeed, a mindful assertion—of inspired productive communion in our short time on Earth. It may be that a place of work—a work-place—is one of them.


Concerning the Heartland

One of the notes that the dog whistle of the Trump political machine hits is rural parochialism and provincialism, a conviction of the inhabitants of the American “heartland” that it is the essential America.

America hardly invented pitchfork-wielding country chauvinism, but it has had a particularly powerful historical current of that kind of know-nothingism, owing to its roots in a medley of dispossessed immigrant political minorities—especially religious minorities. There is ample lore in the heritage to support the popular psychological metaphor of taking a nihilistic wrecking ball to the state. There’s burning the corrupt edifice to the ground, nuking it from orbit, methodically strangling it with “starve the beast” spending legislation, “draining the swamp”—whatever form it takes.

These people don’t want to hear about big city folk and their reputedly elite cosmopolitan problems. No, it is we, on this two-lane road in the middle of southern nowhere, amidst the vestigial shells of Rust Belt industry, in the bucolic crop fields of South Dakota, who are the “real” America! It’s time to take it back! We don’t need no stinking foreigners, we couldn’t give a hoot about “diversity”, and (as was remarked to me recently) come on, show us one legit immigrant from Yemen, we dare you.

As always, the problem is that going full ostrich doesn’t work.

Like it or not, we do live in a highly globalised, interdependent world. Interdependence means complexity, and—to return to thermodynamics, as we all someday do—complexity means fragility. Fragility means that diplomacy and a multilateral approach to human affairs are necessary. Omaha mostly buys Samsung smartphones, too. Banks in Kansas are exposed to the Hong Kong Dollar and the German stock market. There are Caterpillar excavators and Boeing airplanes everywhere, and the manuals are translated into two dozen languages. Ohioans entrust their very lives to talented Japanese embedded software engineers every day, and will again tomorrow and the day after that—isn’t it nice to plug one’s ears and not have to wonder why one’s Honda doesn’t spontaneously explode or run off the road? The working-class enlisted sons of Kentucky are shipped off to places their parents would do well to learn to pronounce. The more economically privileged ones are going to discover a good Jordanian immigrant-owned falafel stand in their college towns.

(My own tech career was launched at a small-town Internet service provider owned by a Syrian Arab and his Pakistani business partner. But everyone knows devout Muslims don’t create jobs in Murrica.)

Or, more cosmologically: events in Yemen, Somalia, Egypt, South Korea, China and India, inter alia, are highly relevant to every American’s existence.

I know a lot of people who would prefer not to venture beyond the county line and like their intellectual preoccupations as they like their beer—domestic. To you I say:

Sorry, you can’t put the milk back in the cow and rewind the clock to 1802 A.D.¹

Moreover, without the Coastal Mind Control Elites, this iconic heartland would have neither the government subsidies on which it extensively relies, nor markets for its products. I won’t patronise you with one of the many charts showing the geographic distribution of net inflows and outflows of federal dollars and the rural-urban trade balance. Y’all are actually pretty good at Google when there’s pervasive liberal bias and insidious Soros funding to be unearthed on the Internet, or some Julius Streicher Breitbart screed in need of scholarly citation.

Big conurbations concentrate economic opportunities and institutions. You don’t have to live in them if you don’t want to, and you don’t have to take part in the so-called Knowledge Work economy, but you can’t just give it all a sociopolitical middle finger and pretend that they, their denizens, or the larger world don’t exist. You guys just want to rap about how you’re screwed by globalisation and immigration, drop the mic, and leave the hard problems of making the world turn ignored and unsolved. It’s facile, it’s petulant, and it makes you sound like an overgrown toddler.

If that’s how it’s going to be, fine, but could you at least let the overedumacated, book-learned cityfolk do their jobs instead of sticking them with a reality TV demagogue and his diabolical alt-right posse?

¹ This is precisely why the [seemingly] intentional deafness of the Libertarian candidates to foreign policy is a non-starter.