Big Silicon Valley capital is largely interested in consumer-facing, high-growth “world domination” plays, in anything that has the potential to become a household name. Naturally, the Valley’s startup-grooming tributaries (e.g. Y Combinator) aim to position tech entrepreneurs at an angle complementary to those criteria, since that’s how they make money.
A lot of the cultivated folklore and intellectual work-product of the cultural leaders of this space, as epitomised by the writing of Paul Graham, speaks the language of upward exponential curves, critical mass, and gargantuan user bases–all things Valley web economy VCs like. As PG says here and elsewhere, startups are, most essentially of all, about going big by building something lots of people want.
But what if you’re like us, making a foray into the “boring” world of intra-industrial business software or a product that is specialised deeply into a vertical-specific niche?
I don’t mean a “lifestyle business”, nor do I specifically mean the long-run, sustainable, bootstrapped approach whose advocacy has distinguished 37Signals and DHH (although nonparticipation in mainstream tech investment is implied); 37Signals still have products that millions of people want. I’m talking about building something relatively expensive that almost nobody wants.
Think of some Byzantine water pump control mechanism for a sewage treatment plant, something you can elevator-pitch in two seconds to a very select audience but that you couldn’t easily explain in ten minutes to anyone else. We build something like that for the VoIP telephony industry.
We’ve been around for eight years, we’re tiny, and we’ve morphed into a product company largely out of a consulting heritage. We’re clearly not a startup as YC and Valley “VC-istan” would have it, and nobody would fund us. We’re in no danger whatsoever of a “rocket-ship trajectory”, do not leverage “network effects”, we’re not “going viral”, and our customer acquisition cost is pretty high. While we too have benefited from the structural decline in the cost of starting a technology company (e.g. cloud servers), we’re largely unable to benefit from some of the biggest shifts to a lower cost basis and barriers to entry: Google web ads don’t do us much good because that’s not how our typical customers find us, and we don’t have anything to put in a mobile app store.
It’s mostly old-fashioned relationship building, personal brand, conferences and trade shows for us. It’s boring, it’s expensive, it’s slow. Imagine an SEO-tweaked conversion funnel with low-touch onboarding. We’ve got whatever the opposite of that is: a trickle of leads that, when we get lucky, spool out into long, drawn-out, consultative sales cycles measured in months or even years. It’s not the stuff of compelling pitch decks.
And so, the question I’ve been pondering for a long time is: what is the size and location of the cultural and methodological intersection between the Hacker News flavour of startup lore and our kind of business model? Do we still have something to learn and apply? Can any useful takeaways be mined from the corpus of essays and “thought leadership” that PG and YC have provided? Are there useful entrepreneurial insights to be captured from Hacker News?
I think the answer is yes. One must simply be careful to cherry-pick the right bits. Here are a few thoughts:
Making Something People Want
In the business software realm, this needs to be recast with a sharp emphasis on “solve problems people have”. The advice to solve one’s own problems, or at least problems directly relatable to one’s industry experience, is, at its core, essential.
I first had an idea for something like our present-day product in 2006. At that time, I was young, inexperienced and new to telecom, and conceived of the problem space in very a priori terms. Had I moved forward and tried to take the concept to market at that stage, it would have been a spectacular flop because it did not provide institutionally acceptable solutions to actually-existing problems.
It’s possible that, had I been in a financial position to commit to it full time and avoided being bogged down in consulting for several years, I would have realised a speed advantage from being able to “fail fast”, “iterate” and/or “pivot” in response to the negative feedback for my initial concept. However, in a small industry where personal brand and reputation plays an important role, I’m not sure the speed advantage would have outweighed the personal brand deterioration resulting from putting something out there that simply doesn’t work.
Thus, I’m moved to say that the emphasis on empiricism and really understanding one’s target user is triply important in business software. In particular, I would add that non-trivial domain expertise in the user’s industry is probably a must, unless you’re building something that is, at heart, rather broad and generic.
There’s a particularly ludicrous current of wishful thinking out there that presupposes “customer discovery” to be a free-floating skill set unrelated to any particular industry or sphere of expertise. There probably are some backward industries where daily workflows consist of mostly disposable paper pushing, and where an application with some commodity CRUD screens could make a meaningful dent. However, in our industry, having a JS/CSS-savvy “UX quarterback” shadow “everyday users” for a few days to “really discover their pain points” would be a hopelessly naive waste of time.
There are no shortcuts here: you come to know a lot about telecom by working in telecom. To make a good telecom product, you have to be deeply conversant with the history of voice and data, the supply chains, the acronyms, and the regulation–oh God, the regulation. This probably holds true of most industries you could build complex solutions for. If you think you can walk into property-casualty reinsurance and “disrupt” the place with a month-long Ruby on Rails bender (how very Agile of you), the vertical niche business market is not for you.
In this light, the market validation provided by an organic, consulting-driven funding strategy–which PG is generally sour on–is highly valuable. It might be worth building your product that way even if you could go the fundraising route instead. You’ll learn a lot. I doubt our product would have any market traction if we had tried to leapfrog the several years of hard lessons learned about our target market from our otherwise tiresome and financially stressful consulting slog.
Making Something Users Love
Having said all that, if you read a lot of PG and Sam Altman, it’s easy to become discouraged by repeated talk of the importance of building something users love. In that telling, good products simply roll off the shelves, like those round-ish late-1990s iMacs. Marketing is just an optimisation for more eyeballs; the product fundamentally sells itself on a powerful wave of early-adopter enthusiasm, and if your product isn’t grabbing most people who come across it, the implication is that it’s just not a good product.
There’s a fine line there. You do have to know when to call it quits if nobody’s buying what you’re selling. Given enough time and effort, it’s possible to sell at least one unit of something to someone, somewhere; that doesn’t imply a good commercial prospect. You should have some way of figuring out whether your product is really taking hold.
However, sales in this area is hard, and you should expect that; the idea that your potential customers are going to just want or love what you’re selling in a self-evident kind of way is complicated by, well, the complexity of what’s being sold. Don’t be fooled into thinking that your product isn’t good just because every sale feels like bruising hand-to-hand combat. You’re fighting against institutional inertia, the customs and habits of the boys at the country club, sclerotic management bureaucracy, combative purchasing departments, and the marketing stranglehold of big-brand competitors on risk-averse management. In our case, we’re selling something that requires the customer to remove core infrastructure in a growing, revenue-generating, and intensely downtime-averse business and replace it with our own. It’s easy to get everyone to agree it’s ultimately a good idea, but it takes political wherewithal. “We’re really in a hurry to do that,” said no executive decision-maker ever.
Many of our most loyal customers of today didn’t know they had a commercial problem our product could solve. The function of marketing becomes very important here: as a general rule for the world of capital goods, customers have to be educated. The “bounce rate” on eyeballs alone is going to be close to 100%.
The popular understanding of what it means to make something users love is often tied up in good user experience and front-end mechanics. This is a siren song. In the world of capital goods and the complex solution sales that go with them, the most important criterion is, “Does it make or save us money?”
That’s not to say a good UI isn’t a competitive advantage, but don’t sweat it. Users will put up with a pretty bad UI on a machine that prints money for their company. More importantly, a good UI won’t do a damn thing for a product that doesn’t have positive bottom-line impact or significant business-level differentiation.
Hiring the Right People
It is indeed critical to hire the right early-stage employees, and hiring mistakes in the early stage will break your company with all the good products and marketing tailwinds in the world. Much of what PG and the YC crowd have to say on the importance of building a great team is directly applicable.
Early-stage hiring is particularly complicated in niche verticals because your customers are buying a vendor service relationship as much as they are buying a software system. Your staff will be engaged closely with customers who expect credibility and expertise from your people, and it’s the warm fuzzies from that collaboration that often close the sale.
That often means that the run-of-the-mill skill sets found in the general IT population you can hire off the street simply won’t do. Entry-level people are an especially pernicious hiring choice here, since there’s so much more background knowledge to impart. Your early-stage employees will need to be both technically strong and essentially fluent in the industry domain to which you are selling, which greatly reduces your hiring pool and makes candidates even more expensive.
Accordingly, your success or failure depends on your ability to get people with some vertical-specific industry background in the door. You should expect to make even more compromises here than is typical in Valley web startup land as far as equity grants and so on. You’ll want to be mindful of the regions and labour markets around the country that concentrate IT people with particular domain knowledge above and beyond table-stakes technical skills: if you’re doing something in energy, get comfortable with Houston, and if you’re doing fintech, think Chicago or New York.
Booking Revenue From Day One
There really are no pre-revenue business models in enterprise software; at least, there shouldn’t be. All talk about building a “critical mass” user base and figuring out how to monetise it or render it profitable later is irrelevant and should be summarily ignored. Your first customer should be paying.
There are, of course, some strategic API and platform plays out there whose primary purpose is to set up for an acquisition. These often don’t have paying users or don’t generate a lot of revenue. However, as a general rule, acquisitions in the business market are more rationalistic and quantitative, so bringing more revenue to the table redounds to the benefit of your valuation and bargaining power. This is a bit different from the voodoo valuation process of mass-market startups, where irrational investor exuberance can sometimes be maximised by removing the constraint of concrete, earth-bound revenue and encouraging the acquirer to “really dream big”. All this to say: book revenue. You’re not going to get more for less by letting a freemium cat out of the bag and into the open market. More revenue is always better.
Otherwise, in a market with a low volume of high-magnitude transactions, every customer counts. Arithmetically, price segmentation of some description is usually required to make a product economically viable. One of your biggest preoccupations early on should be to delineate the needs of your “lite” users at one extreme, versus your “enterprise” or “platinum plan” users on the other, and to tier your product accordingly. Joel Spolsky’s classic Camels and Rubber Duckies, dated though it may be, still comes highly recommended as one of the best introductions I’ve seen on this subject.
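To make the arithmetic of segmentation concrete, here is a toy sketch with entirely hypothetical numbers (the prospect counts, prices and tier names are all invented for illustration): in a market with a handful of prospects whose willingness to pay varies by two orders of magnitude, any single price point leaves money on the table at one end or the other, while a “lite”/“pro”/“enterprise” tier structure captures more of the demand curve.

```python
# Hypothetical monthly willingness-to-pay of five prospects for
# essentially the same core product (illustrative numbers only).
willingness_to_pay = [500, 1000, 5000, 20000, 50000]

def revenue_at_single_price(price, wtp):
    """Everyone whose willingness to pay meets the price buys at that price."""
    return sum(price for w in wtp if w >= price)

# The best achievable single price point: try each prospect's ceiling.
best_single = max(revenue_at_single_price(p, willingness_to_pay)
                  for p in willingness_to_pay)

# Tiered pricing: each buyer takes the most expensive tier they can afford.
tiers = [500, 5000, 50000]  # "lite" / "pro" / "enterprise"

def revenue_with_tiers(tiers, wtp):
    return sum(max((t for t in tiers if t <= w), default=0) for w in wtp)

tiered = revenue_with_tiers(tiers, willingness_to_pay)
print(best_single, tiered)  # → 50000 61000
```

Even in this crude sketch, three tiers out-earn the best possible single price, and the gap widens as the spread in willingness to pay grows–which is exactly the situation in a low-volume market selling to both small shops and large enterprises.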
The homo computatis college drop-out was a cliché whose establishment in folklore predated my departure from the University of Georgia by at least two decades. Nevertheless, I also joined the club. In the first semester of the 2005-2006 academic year, after dabbling half-heartedly in coursework for two years as a philosophy major, I threw open the gates and exiled myself into the great beyond.
Although my actions conformed to a known stereotype, I still feel I was something of an early adopter, virtually alone in my group of peers. I came to count many in my acquaintance who never pursued post-secondary education in the first place, or who floated in and out of community and technical colleges amidst working and financial struggles, but knew of vanishingly few, especially at that time, who straight-up dropped out of a four-year institution–that is, academic majors who unregistered abruptly from their courses mid-semester and skipped town with no intention of returning. That sort of thing seemed to be the province of Larry Page, Sergey Brin and Bill Gates–definitely outliers. And unlike them, I wasn’t at the helm of a world-changing startup on a clear trajectory into the multi-billion dollar stratosphere, so I couldn’t point to an overwhelming and self-evident justification.
A lot has changed since then. By all appearances, we seem to be passing through a watershed moment where existential questions about the value and purpose of college and traditional higher education in America are emerging onto the mass level among Millennials and Generation Z-ers. This discussion has been spurred on by the housing crisis, a growing tower of debilitating student loan debt, tuition rises, and mounting questions about the future of employment in the developed world, especially the ways in which opportunity has become more and less democratic in the context of technological shifts and globalisation. There’s also a growing interest in open courseware and novel forms of technology-enabled correspondence learning–though, I should say, I don’t share in the TEDdies’ conflation of the Khan Academy with higher education. Still, nearly a decade has passed since I made my fateful decision to forsake the path of higher learning, so it seems like a good time to reflect on where it’s taken me and whether it was a good call.
Some aspects of the progression of events will sound familiar to many in IT. I grew up mostly around university environments, and computer programming figured dominantly among my childhood interests. It was an interest easily encouraged by proximity to lots of expensive computer equipment, good Internet connectivity and access to sympathetic graduate student mentors. I had been playing with computers and the Internet since age 8 or so, and wrote my first program at 9. I spent much of my adolescent and teenage years nurturing this hobby, having a strong interest in both software engineering and operational-infrastructural concerns. As most people in IT know, ours is a profession that offers unrivaled self-teaching opportunities, enabled by a highly Googleable open-source software ecosystem and collaborative social dynamic. That’s why so many programmers like me are self-taught.
I also had various other intellectual interests, however, and had no plans to make a tech career. In fact, for most of my life prior to age eighteen or so, I wasn’t even particularly aware that I had a marketable skill set. The desire to get into computing as a child was utterly innocent and non-ulterior, as arbitrary as some kids’ choice to take up the cello or oil painting. I entered UGA in 2004 as a political science major and shortly switched to philosophy, with vague ideas of law school in the future.
It’s also worth remarking that I came from a humanities-oriented academic family and cultural background; my parents were professors of philosophy at a top-tier Soviet university, and my father is a professor of the same at UGA. My extended family background includes a venerable dynasty of musicians, including my great-grandfather, Mikhail Tavrizian, the conductor of the Yerevan State Opera and a National People’s Artist of the USSR, as well as his Russian wife Rassudana, a renowned ballerina in the Bolshoi Theatre. My late grandmother was a philologist and a member of the philosophy faculty of the Russian Academy of Sciences. When my parents emigrated to the US in 1992 (I was six years old), they redid graduate school entirely at the University of Notre Dame, which is where my primary school years were spent. My social and cultural life at that time played out in housing for married graduate students with children, where I ran around with friends from dozens of different nationalities.
All this to say, I was on a strong implicit academic trajectory as a function of my upbringing, a trajectory rooted in the humanities, not hard sciences. In fact, my parents were not especially supportive of my computing hobbies. As they saw it, I think, spending my days holed up in my room modem-ing away interfered with schoolwork and was not especially promotional of cultural development consonant with the mantle I was meant to inherit.
Nevertheless, I began working when I was eighteen (my parents did not let me work prior to that–Russian parents do not, as a rule, share the American faith in the virtues of part-time work for teenagers or students). My first job was technical support at a small Internet Service Provider in our university town of Athens, GA, at first very much part-time and reasonably complementary to college. I earned a 4.0 GPA in the first semester of my freshman year.
However, I was ambitious and precocious, decidedly more interested in work than school. Within a year, after some job-hopping (which included a stint in IT and/or warehouse labour at a Chinese importer of home and garden products–how’s that for division of labour?), I returned to the ISP at twice the pay rate and assumed the role of systems administrator. I was learning a great deal about real-world business and technology operations, getting my hands on industrial infrastructure and technologies, and rapidly assimilating practical knowledge. I had been around theoretical publications and underpaid graduate student assistants hunkered in dimly lit carrels my whole life, but I had never had to learn the basics of business or how to communicate with all kinds of everyday people on the fly. Although the cultural clash was sometimes frustrating, the novelty of learning practical skills and how to run a real-world operation was intoxicating. It occasionally led to an outsized moral superiority complex, too, as I became conscious of the fact that at age 19, I could run circles around most of the job candidates being interviewed, some of whom had master’s degrees. Clearly, I was doing something right!
From that point, my career rapidly evolved in a direction not compatible with school. Formally, I was still part-time and hourly, but it was effectively closer to a full-time gig, and I rapidly took on serious responsibilities that affected service and customers. Small as the company was, in retrospect, I was a senior member of its technical staff and a sought-after authority by people ten to twenty years older. My commitment to school, already a decidedly secondary priority, rapidly deteriorated. I had no semblance of campus life immersion or student social experience. From my sophomore year onward, I was effectively a drop-in commuter, leaving in the middle of the day to go to a class here and a class there, then hurrying back to the office. I neither had time for studying nor made the time. My GPA reflected that. I didn’t care. School droned on tediously; meanwhile, T1 circuits were down and I was busy being somebody!
As my interests in telecom, networking, telephony and VoIP deepened, it became clear that the next logical career step for me was to move to Atlanta; Athens is a small town whose economy would not have supported such niche specialisation. Toward the end of the second semester of my sophomore year, I began looking for jobs in Atlanta. I unconsciously avoided the question of what that meant for my university career; I was simply too engrossed in work and captivated by career advancement. In the first semester of my junior year, by which point my effort at university had deteriorated to decidedly token and symbolic attendance, I finally found a job in Alpharetta (a suburb of Atlanta) at a voice applications provider. In October 2006, at the age of twenty, I announced that I was quitting university and moving to the “big city”.
My parents reacted better than I thought they would. I halfway expected them to disown me. However, in hindsight, I think they were pragmatic enough to have long realised where things were headed. It’s hard for me to say, even now, to what degree they were disappointed or proud. I don’t know if they themselves know. What was most clear at that moment was that I am who I am, and will do as I do, and there’s no stopping me.
That’s not to say that East and West didn’t collide. I remember having a conversation that went something like:
– “But what happens if you get fired in a month?”
– “Well, I suppose that’s possible, but if one performs well, it’s generally unlikely.”
– “But is there any guarantee that you won’t lose your job?”
Guarantee? That was definitely not a concept to which I was habituated in my private sector existence.
– “There are never any guarantees. But my skills are quite portable; if such a thing happened, I could find another job.”
– “It just seems very uncertain.”
– “That’s how it goes in the private sector.”
All the same, it was clear enough that, for all the problems this decision might cause me, I certainly wasn’t going to starve. Even in Athens, I was an exceptionally well-remunerated twenty-year-old. My first salary in Atlanta was twice that. Moreover, it was clear that IT was an exceptionally democratic and meritocratic space; if one had the skills, one got the job. My extensive interviews in Atlanta drove home the point that potential employers did not care about my formal higher education credentials by that point in my career development. The “education” section of my résumé had long since been deleted, replaced by a highly specific employment history and a lengthy repertoire of concrete, demonstrable skills and domain knowledge with software and hardware platforms, programming languages, and so on. The résumés I was sending out to Atlanta companies at age twenty proffered deep and nontrivial experience with IP, firewalls, routers, switches, BGP, OSPF, Cisco IOS, Perl, Bash, C, PHP, TDM, ADSL aggregation, workflow management systems, domain controllers, intrusion detection systems–what didn’t I do at that ISP? There aren’t many companies that would have let someone with my age and experience level touch production installations of all those technologies. I was bright-eyed, bushy-tailed, and soaked it all up like a sponge. And when asked for substantiation by potential employers, I sold it.
Those of you in IT know how this works: formal education is used by employers as signalling about a candidate only in the absence of information about concrete experience or skills. All other things being equal, given two green, inexperienced candidates among whom one has a university diploma and one doesn’t, employers will choose the one who finished university, as it’s a proxy for a certain minimal level of intelligence and ability to complete a non-trivial multi-year endeavour. When concrete experience and skills are present, however, the educational credentials fly out the window for most corporate engineering and operations jobs, and the more one’s career evolves, the less relevant early-stage credentials become. Moreover, there are innumerable people in mainstream IT whose university degrees were not in computer science or affiliated subject matter, but rather in a specialty like literature, history or ecology.
My next three jobs were in Atlanta, within the space of about the next year and a half. I averaged a job change every six months or so, often considerably increasing my income in the process. By the time I was barely twenty-two, I had worked for a voice applications provider, a major local CLEC and data centre operator, and an online mortgage lender.
Of course, certain jobs were off-limits. I couldn’t do research work that required a formal computer science background, nor take jobs in government or certain other large institutions that remained sticklers for credentials. I lacked the formal mathematics and electrical engineering background necessary for low-level hardware design work. It’s also quite likely that if I had tried to climb the corporate ladder into middle to upper management, I would at some point, later in life, bump into a ceiling for degree-less drop-outs. When one gets high enough, it becomes comme il faut to have an alma mater in one’s biography, even if it’s just an airy-fairy management degree from a for-profit correspondence course mill. The only way I know of to get around that is to have had a famous and inscrutable business success (i.e. an acquisition) to overshadow it. Click on “Management Team” under the “About” section of some tech companies’ web sites to get the drift. Appearances are important at that level.
I didn’t stick around long enough to figure out where exactly the limits are (although I didn’t get the impression there were many, as long as one could demonstrably do the work). In early 2008, I was abruptly fired after some political clashes. Also, they don’t take kindly to the habitual morning tardiness of “programmer’s hours” in financial services. Instead of looking for my seventh job in four years, I decided to go out on my own. I had been itching to do it for quite some time, but didn’t quite have the wherewithal to walk away from the steady paycheck. Getting fired has a way of forcing that issue.
And so, on a cold, windy January day in 2008, barely twenty-two, I left the building with my belongings in a box, with nearly zero dollars to my name, having wiped out my savings with a down payment on a downtown Atlanta condo. I had no revenue and no customers. A friend and I went to celebrate. I was determined and hit the ground running, though, and that’s how I started Evariste Systems, the VoIP consultancy turned software vendor that I continue to operate today, nearly eight years later.
Because the US does not have a serious vocational education program and because the focus of the “everyone must go to college” narrative of the last few decades is promised success in the job market (or, more accurately, the threat of flipping burgers for the rest of one’s life), the first and most pertinent question on American students’ minds would be: do I feel that I have suffered professionally because I did not finish my degree?
I didn’t think I would then, and I still don’t think so now. Notwithstanding the above-mentioned limitations, it’s safe to say that I could qualify for almost any mainstream, mid-range IT job I wanted, provided I evolved my skill set in the requisite direction. In that way, IT differs considerably from most white-collar “knowledge work” professions, which are variously guilded (e.g. law, medicine) or have formal disciplinary requirements, whether by the nature of the field (civil engineering) or by custom and inertia (politics, banking). Although politics perverts every profession, IT is still exceptionally meritocratic; by and large, if you can do the job, you’re qualified.
The inextricable connection of modern IT to the history and cultural development of the Internet also moves me to say that it’s still the easiest and most realistic area in which one can receive a complete education through self-teaching. You can learn a lot about almost anything online these days, but the amount of resources available to the aspiring programmer and computer technologist is especially unparalleled.
That doesn’t mean I’d recommend skipping college generically to anyone who wants to enter the profession at roughly the same level. I put in, as a teenager, the requisite ten to twenty thousand hours thought to be necessary to achieve fundamental mastery of a highly specialised domain. However, I can’t take all the credit. I was fortunate to have spent my formative years in a university-centric environment, surrounded by expensive computers and highly educated people (and their children), some of whom became lifelong mentors and friends. Although my parents were not especially thrilled with how I spent my free time (or, more often, just time), they had nevertheless raised a child–as most intelligentsia parents do–to be highly inquisitive, open-minded, literate and expressive, with exposure to classical culture and literature. Undergoing emigration to a foreign land, culture and language at the age of six was challenging and stimulating to my developing mind, and the atmosphere in which I ended up on the other side of our transoceanic voyage was nurturing, welcoming and patient with me. The irony is not lost upon me that I essentially–if unwittingly–arbitraged the privilege associated with an academic cultural background into private sector lucre. A lot owes itself to blind luck, just being in the right place at the right time. I could probably even make a persuasive argument that I lucked out because of the particular era of computing technology in which my most aggressive uptake played out.
This unique intersection of fortuitous circumstances leads me to hesitate to say that nobody needs a computer science degree to enter the IT profession. My general sense is that a computer science curriculum would add useful, necessary formalisation and depth to the patchwork of the average self-taught techie, and this certainly holds true for me as well: my understanding of the formal side of machine science is notoriously impoverished, and stepping through rigorous mathematics and algorithms exercises would doubtless have been beneficial, though I don’t think it would have been especially job-relevant in my particular chosen specialisation.
Still, I’m not committed to any particular verdict. I’m tempted to say to people who ask me this question: “No, you don’t need a degree to work in private industry–but only if you’re really good and somewhat precocious.” Many nerds are. Almost all of the really good programmers I know have programmed and tinkered since childhood. It comes part and parcel, somewhat like in music (as I understand it). In the same vein, I don’t know anyone who wasn’t particularly gifted in IT going in but came out that way after a CS degree.
On the other hand, for the median aspiring IT professional, I would speculate that a CS degree remains highly beneficial and perhaps even essential. For some subspecialisations within the profession, it’s strictly necessary. I do wonder, though, whether a lot of folks whose motive in pursuing a CS degree is entirely employment-related wouldn’t be better off entering industry right out of high school. They’d start off in low entry-level positions, but I would wager that after four years of real-world experience, many of them could run circles around their graduating peers, even if the latter do have a more rigorous theoretical background. If practicality and the job market are the primary concern, there are few substitutes for experience. Back at my ISP job, CS bachelors (and even those with master’s degrees) were commonly rejected; they had a diploma, but they couldn’t configure an IP address on a network interface.
Another reason I don’t have a clear answer is that things have changed since then; a decade is geological in IT terms. I’ve also spent twice as much time self-employed by now as I did in the employed world, and niche self-employment disconnects one from the pulse of the mass market. I know what I want in an employee, but I don’t have a finely calibrated sense of what mainstream corporate IT employers want from graduates these days. When I dropped out, Facebook had just turned the corner from TheFacebook.com, and there were no smartphones, no Ruby on Rails, no Amazon EC2, no “cloud orchestration”, no Node.js, no Docker, no Heroku, no Angular, no MongoDB. The world was still wired up with TDM circuits, MPLS was viewed as next-generation, and VoIP was still for relatively early adopters. The point is, I don’t know whether the increasing specialisation at the application layer, and increasing abstraction more generally, have afforded even more economic privilege to concrete experience over broad disciplinary fundamentals, and if so, how much.
All I can firmly say on the professional side is that it seems to have worked out for me. If I were in some way hindered by the lack of a university diploma, I haven’t noticed. I’ve never been asked about it in any employment interview after my student-era “part-time” jobs. For what I wanted to do, dropping out was the right choice professionally, and I would do it again without hesitation. It’s not a point of much controversy for me.
The bigger and more equivocal issue on which I have ruminated as I near my thirtieth birthday is how dropping out has shaped my life outside of my career.
I don’t mean so much the mental-spiritual benefits of a purportedly well-rounded liberal education–I don’t think I was in any danger of receiving that at UGA. 80% of my courses were taught by overworked graduate teaching assistants of phenomenally varying pedagogical acumen (a common situation in American universities, especially public ones). The median of teaching quality was not great. And so, I’m not inclined to weep for the path to an examined life cut short. It’s not foreclosed access to the minds of megatonic professorial greats that I bemoan–not for the most part, anyway.
However, moving to Atlanta as a twenty-year-old meant leaving my university town and a university-centric atmosphere. My relatively educated environs were replaced with a cross-section of the general population, and in my professional circles, particularly at that time, I had virtually no peers. My median colleague was at least ten years older, if not twenty, and outside of work, like most people living in a desolate and largely suburban moonscape, I had nobody to relate to. At the time I left, I found value in the novelty of learning to work and communicate with the general public, since I had never had to do it before. I thought our college town was quite “insular”. In retrospect, though, it would not be an exaggeration to say that I robbed myself of an essential peer group, and it’s no accident that the vast majority of my enduring friendships to this day are rooted in Athens, in the university, and in the like-minded student personalities that our small ISP there attracted.
As a very serious and ambitious twenty-year-old moving up the career ladder, I also took a disdainful view of the ritualised rite of passage that is the “college social experience” in American folklore. I didn’t think at the time that I was missing out on gratuitous partying, drinking, and revelatory self-discovery in the mayhem of dating and sex. If anything, I had a smug, dismissive view of the much-touted oat-sowing and experimentation; I was leapfrogging all that and actually doing something with my life! Maybe. But I unraveled several years later, anyway, and went through a brief but reckless and self-destructive phase in my mid-twenties that wrought havoc upon a serious romantic relationship with a mature adult. I also at times neglected serious worldly responsibilities. Being a well-remunerated mid-twenties professional didn’t help: it only amplified gross financial mistakes I made during that time, whereas most people in their twenties are limited in the damage they can do to their life by modest funds. I’m still paying for some of those screw-ups. For example, few twenty-one year olds are equipped to properly weigh the wisdom of purchasing a swanky city condo at the top of a housing bubble, and subsequent developments suggest that I was not an exception. Oh, a word of advice: pay your taxes. Some problems eventually disappear if you ignore them long enough. Taxes work the opposite way.
But in hindsight, a bigger problem is that I also missed out on the contemplative coffee dates, discussion panels, talks and deep, intelligent friendships that accompany student life in the undergraduate and post-graduate setting. While the median undergraduate student may not be exceptionally brilliant, universities do densely concentrate smart people with thoughtful values. It’s possible to find such connections in the undifferentiated chaos of the “real world”, but it’s much harder. I situated myself in a cultural frame which, while it undergirds the economy, is not especially affirmative of the combinations of the intellect. To this day, there is an occasionally cantankerous cultural clash between my wordy priorities and the ruthlessly utilitarian exigencies of smartphone-thumbing business. Get to the point, Alex, because business. Bullet points and “key take-aways” are the beloved kin of e-solutions, but rather estranged from philosophy and deep late-night conversations.
This facet of campus life is less about education itself than about proximity and concentration of communities of intelligent people at a similar stage of life. Because I grew up in universities, I didn’t appreciate what I had until I lost it; I traded that proximity to personal growth opportunities for getting ahead materially and economically, and my social life has been running on fumes since I left, powered largely by the remnants of that halcyon era of work and school.
If leaving the university sphere was a major blow, self-employment was perhaps the final nail. Niche self-employment in my chosen market is a largely solipsistic proposition that rewards hermitism and prolific coding, perfect for an energetic, disciplined introvert. I probably would have done better at it in my teenage years, but it didn’t suit my social nature or changed psychological priorities as an adult. A lot of time, money and sacrifice was emitted as wasted light and heat into the coldness of space as I spun my wheels in vain trying to compensate for this problem without fully understanding it.
The essential problem is much clearer in hindsight: in leaving the university and the employment world, with its coworker lunches and water cooler talk, I had robbed myself of any coherent institutional collective, and with it, robbed myself of the implicit life script that comes with having one. I was a man without any script whatsoever. I rapidly sequestered myself away from the features of civilisation that anchor most people’s social, romantic and intellectual lives, with deleterious consequences for myself. I did not value what I had always taken for granted.
There are upsides to being a heterodox renegade, of course. Such persistent solipsism mixed with viable social skills can make one very fluid and adaptable. I took advantage of the lifestyle flexibility afforded by the “non-geographic” character of my work to travel for a few years, and, in wearing numerous cultural hats, found a freedom few will ever experience. I had the incredible fortune to reconnect with my relatives and my grandmother on another continent. For all its many hardships, self-employment in IT has much to recommend it in the dispensation it affords to write the book of one’s life in an original way.
Be that as it may, the foundations of my inner drive, motivation and aspirations are notoriously ill-suited to the cloistered life of a free-floating hermit, yet I had taken great pains to structure such a life as quickly as possible, and to maximal effect. My reaction to this dissonance was to develop a still-greater penchant for radical and grandiose undertakings, a frequent vacillation between extremes, in an effort to compensate for the gaping holes in my life. The results were not always healthy. While there’s nothing wrong with marching to the beat of one’s own drum, I should have perhaps taken it as a warning sign that as I grew older and made more and more “idiosyncratic” life choices, the crowd of kindred spirits in my life drastically thinned out. “Original” is not necessarily “clever and original”.
In sum, I flew too close to the Sun. When I reflect upon the impact that my leaving the university has had upon my life, I mourn not professional dreams deferred, nor economic hardship wrought, but rather the ill-fated conceit that I could skip over certain stages of a young adult’s personal development. Now that the novelty has worn off and the hangover has set in, I know that it would have been profoundly beneficial to me if they had unfolded not within the fast and loose patchwork I cobbled together, but within a mise en scène that captures the actions, attitudes and values of the academy–my cultural home.
In tech, we’re always talking about workplace ergonomics, the fine points of Aeron chairs and standing desks, the wretchedness of open-plan spaces and cubicle farms versus private offices, how many monitors to have, and so on. Usually, I’m an eager participant, very much attuned to the idea that comfort and pleasant aesthetics are essential to output, motivation, and sustained concentration in high-focus, specialised creative labour.
But sometimes it does help to get a little perspective. A glance at the work environments of many world-class musicians, scholars, authors and researchers from the 18th, 19th and early 20th centuries, or at the cramped physical environments in which highly capable, clever IT people work in developing countries, may convey a useful reminder that if you really want to do something, you can do it just about anywhere.
For that matter, my first job, as a technical hand and later sysadmin at a small-town ISP, when I was a bright-eyed, bushy-tailed 18-20 year old, was in office space not altogether Class A; it was a freezing, windowless dungeon in a small 4-room suite whose HVAC controls were shared with a machine room in need of constant, high-intensity cooling, and the rooms were littered with stray computer hardware and accessorised with fairly second-rate furniture. Yet, I’ve never been so productive or so excited to go to work since those days.
In fact, the physical configurations in which I eagerly wrote code for many hours as a teenager are downright sadistic by cushy Silicon Valley standards: a crude wooden desk from Big Lots, a far too low wooden dining room chair, a 7 lb (~3.2 kg) laptop, a shared family PC in the tiny living room of graduate student barracks. And what a PC that was–a 386SX/40 with 2 MB of RAM, at a time when most respectable citizens were packing 60 and 90 MHz Pentiums. We were poor–by American standards, anyway–but I didn’t really notice.
I have a friend and colleague who travels around the world, living his digital life out of a rather clunky Lenovo ThinkPad, sat on a variety of surfaces, whichever are available at the moment. While he’s not usually staying in mosquito-ridden youth hostels in the Congolese jungle, I fully grant, he doesn’t have 30″ IPS displays or wrist rests, to say nothing of a snazzy office with an adjustable-height desk and a foosball table–you know, the bare essentials for Ruby on Rails jockeys in the Bay Area. Yet he’s one of the most disciplined and entrepreneurially successful people I know. I think of him every time someone says that they “literally cannot work” without a chair with proper lumbar support.
More extremely, some of my Armenian colleagues learned to code in a time when Yerevan was largely without grid electricity, during the disastrous Nagorno-Karabakh War and its contemporaneous power crisis. They hooked up their cobbled-together computers to car batteries for a few hours a day, batteries which they improvised some means of charging occasionally, usually by leeching surplus electricity from cables to critical facilities that did have it. They’re some of the most capable and multifarious IT guys (not to mention electrical engineers!) I know.
Yes, one’s eyes, back, wrists, etc. become more fragile and capricious with age, and it would be prudent to afford them some care. As I near 30, I’m acutely aware of that fact, having all sorts of aches and pains I never used to have.
Nevertheless, my conclusion on the psychological purpose of constant twiddling of small features of our environment – desks, monitor sizes and so forth – is that it’s more about tricking yourself into working on stuff you don’t really want to do. It’s to create the illusion that now everything is truly right with the world, and productivity will seamlessly spring forth from one’s fingers. It’s a means of papering over the fact that you’d rather be doing something else. If most of us were actually doing something stimulating, we’d probably be happy enough doing it on a rooftop while it’s snowing.
To drive that point home, a story:
I rented an office in a repurposed Soviet-era building in Yerevan, Armenia for a while. I should pause to say that this was very much a “you get what you pay for” kind of thing, so please don’t think all life in Yerevan looks like this. Anyway, here’s what was going on in the very room next door some of the time (“renovation”), and also a picture of the ice that formed under my doorway some winter days:
The “included” Internet connectivity was sorely lacking; in the end I settled for some WiMax receiver that gave me 384k/128k on a good day. I grumbled about it some of the time, sure. But I also wrote much of our product’s middleware, user interface and API documentation there. I was furiously productive, and it would not be unreasonable to say that our product made a titanic generational leap whilst I was there. Some other parts of this product infrastructure were written, also at a very impressive clip, whilst sat in a stiff chair at a Spartan metal table in the rentable upstairs workspace at Sankt Oberholz, a Berlin coffee shop and coworking enterprise situated in a late 19th century building. I was hunched over uncomfortably, my nose stuck in a 13″ ultrabook with a loathsome keyboard. My eyes burned and my wrists tingled by the end of most days.
Now I’m sitting in my eminently comfortable Class A office in Atlanta, with great connectivity and a 32″ LED display on my desk, a favourite Das Keyboard at my fingertips, and I’m writing blog posts, poking around on Facebook. The difference is pretty clear to me: when I was in Yerevan and Berlin, I wanted to work, and now I don’t.
I won’t end with some hackneyed and nauseating Millennial “thought leadership” about “doing what you love”, nor the facile conclusion that it’s all in your head. I would just offer the modest speculation that an ounce of tweaks to the intellectual content of one’s work, or to the other, more existential life issues that inform one’s inner drive, is probably worth a pound of major adjustments to one’s office furniture, seating, barriers, and computer peripherals.
Armenian economists, diaspora repatriates, and development evangelists offer many idealistic proposals about how Armenia can reverse its inexorable decline and parlay the forces of its gradual disintegration into positive economic growth and regional leadership. Most of these are plainly quixotic, at least to anyone with even cursory insight into everyday Armenian reality and demographic trends. They do not give one the sense of having been united with the probable.
However, on this spectrum of bright-eyed, bushy-tailed pronouncements, the idea of Armenia becoming a major IT centre is one of the less wholly implausible ones. Politicians and self-appointed diaspora luminaries say the darndest, most fantastical things; of all of their grandiose ideas, for Armenia to traffic in virtual goods is probably not the most far-fetched, if only because the messengers of progress have set a high standard with their pompous rhetoric.
I don’t have much experience working within the local IT market, though I do have plenty of colleagues in the IT field in Yerevan. Nevertheless, true to my well-established tendency, I’ll dubiously anoint myself enough of an authority to give an outsider’s impression about the merits of this thesis that Armenia should hitch its wagon to tech. Then, maybe insiders can tell me why I’ve got it all wrong.
The backdrop of strong Soviet-era fundamentals in science and engineering helps in Armenia, too. Armenian engineers are, classically, quite capable. Moreover, the hardships this generation of IT people have had to live through in the bedlam of the 1990s have given them a lot of adaptability, flexibility, and resolve. They’ve definitely got the stereotypical Soviet MacGyver-type knack for improvisation. Among my circle of acquaintances are many people who learned to program during the dark years of the electrical crisis and the Nagorno-Karabakh War, squeezing a few hours a day of power for their third-rate computers out of car batteries, charged by tenuous methods of dubious legality, at a time when Yerevan was plunged into near-total darkness and bitter winter cold. People were burning books and random objects at the time for heating fuel. They pioneered low-baud Internet connectivity through Moscow in extremely inhospitable conditions. They paid obscene rates for Internet and telephone service in the heyday of the ArmenTel monopoly, and still more obscene black market rates for aftermarket mobile devices. As far as the pampered, effete cubicle-dwellers of the “developed world” are concerned, these guys might as well have been working on punchcards by candlelight. They would’ve given a lot to have had mere Office Space problems.
But for the most part, these people left a long time ago. It doesn’t take a genius to recognise that with these kinds of skills, one could make proportionally better money abroad, even offset against higher living costs, while opening greater career development and life opportunities for themselves and their families. Moreover, these guys have an advantage that many other Armenians trying to leave the country don’t. Steeped in the Western-rooted shared culture of the Internet, they have a good command of the English language and soaked up a lot of globalist ideas that make them highly fraternal with their nerdy American and European counterparts. They’ve also got the Russian IT culture and language angle, which is very influential in Armenia as well, so they can migrate to Eastern Europe, too. Economically speaking, it’s a lot easier for foreign companies to plug clever Armenian technologists straight into their workforce, because they’re not so different from the domestic clever technologists. The homogenising force of the Internet definitely offers an efficiency benefit to both labour and management.
Understand, too, that there are ways to eke out a meager living doing IT in developing economies that effectively don’t exist in the developed world anymore, because they’ve been obsoleted, rationalised and optimised away. When’s the last time you saw a general, all-inclusive computer store in a First World country (Chinese importers notwithstanding)? Non-tech people may be forgiven for thinking that the guys at one of Yerevan’s innumerable computer stores who know how to repair PCs, clean spyware, make some simple web sites, and wire up small business LANs are pretty sharp, but their skill set is not globally competitive. They can’t emigrate on that basis. All that stuff is long commoditised. To find work abroad, one has to have specialised, nontrivial and current skills that are intra-industrially useful. However, the same applies to Armenia: this tier of technicians isn’t qualified to hold up the weight of Armenian infrastructure and economic development on their shoulders either. All in all, there may be a fair number of computer-savvy guys in Yerevan, but there are actually very few, if you see what I mean.
So, as far as I can tell, the main limitation on any Armenian aspiration to become an IT major is the severe shortage of qualified people. Everyone prattles on about a shortage of qualified tech people in every market, but as with all other problems, in Armenia the problem is much more acute and concentrated, due to its tiny size. There’s a small skeleton crew of highly competent remnants holding down the fort (i.e. people who didn’t manage to leave for one reason or another, usually family or personal reasons rather than lack of opportunities to do so), but even among them, emigration is a major theme of discussion. Like all other highly qualified specialists, productive workers and capable entrepreneurs in Armenia, they’re getting fed up and leaving. Many developing countries and ex-Soviet republics are bleeding specialists, but other countries have a lot more people to bleed. Armenia is haemorrhaging.
I don’t see a crop of up-and-coming youngsters that stand to viably replace the classic hackers of the 1990s. The few especially capable ones generally take the shortcut of leaving. It’s the same old song of Armenia: everybody’s leaving. Thus, I take no pleasure in elucidating the obvious conclusion to anyone thinking of turning Armenia into a globally competitive IT centre: where’s your globally competitive work force?
Any IT business in Armenia with aspirations of making real money must, by definition, be export-oriented. There’s no money to be made in selling into the local market. The only IT companies in Armenia I’m familiar with that make any money–and I’m not counting outsourced development or engineering divisions of foreign companies here–are ones that service government contracts and foreign orders.
The local market has IT needs, of course, but they’re pretty pedestrian and connected to low-margin products and services–the kinds of things that are, in terms of their global cost structure, only viable at a large scale. The killer is a triple curse:
- Small market size, and therefore, no economies of scale, as well as fierce competition and saturation;
- Poverty; it is possible to sell into a small market, but only if it’s a rich economy. Armenia is basically city-state size, but it’s no Singapore;
- Relatively undeveloped, traditional economy. There aren’t that many businesses in Armenia that have a need for sophisticated technological capital goods.
There are other problems related to the last point as well. Armenian businesses are, as a matter of cultural disposition, cheap quite apart from their relative poverty, as Armenians are historically given to commerce with an Eastern bent. Few proprietors seem to have made the shift to a post-industrial mindset that divorces the subconscious perception of “value” from the idea of “tangible goods” while strongly incorporating the idea of shopping on value rather than price.
Traditional economies have never lent themselves especially well to Western-style economic rationalism and efficiency, either. Say what you will about the humanistic effects of that rationalism (which I would characterise somewhat ambivalently), but the reality is that a strong cultural focus on optimising workflows and business processes drives much of the demand for IT universally.
Yerevan actually has rather good and ubiquitous FTTH-based consumer broadband. However, it’s easy to forget that the country’s connections to the outside world are tenuous and reflect its geopolitically precarious position. Armenia’s only real connection to the greater Internet is through the neighbouring republic of Georgia, and it’s quite easy to take the whole country offline, as the world learned in April 2011 (The Guardian).
I’m told that the fibre paths have got a bit more diverse now, but there’s not that much diversity you can add to a largely mountainous, landlocked country most of whose land borders are closed. Armenia’s border with Turkey has been closed since 1994 and has no cross-border telecommunication connections, and the border with Azerbaijan is ever-so-slightly militarised, you might say. Together with the Azeri exclave of Nakhichevan, that’s about 85% of Armenia’s land borders. As with many other things, Armenia clings to life through Georgia, subject to its whims and caprices, as well as the geophysical realities of doing so. To the south, Iran is connected to some very robust, high-bandwidth Persian Gulf cable systems, but, I’m told that for fairly obvious political reasons, the Internet link through Iran isn’t used much (if at all).
The inability to build redundant, multilateral physical connections to its neighbours makes Armenia quite ill-suited to the operation of any regionally significant Internet interconnection exchange or peering point. Armenian utility power is fairly reliable (as long as the Metsamor reactor keeps running), but definitely at “developing world” levels of redundancy. The power frequency isn’t terribly clean. There is high seismic risk. Wholesale IP bandwidth to the outside world is quite expensive. All these things likely preclude the possibility of Armenia hosting a real data centre or getting into the hosting or “cloud” business in a big way. So you want to operate a network? Who are you going to network with?
IT also depends on strong logistical links to the outside world and benefits from proximity to supply chains. Armenia is landlocked and largely blockaded, and, on account of its small size, constitutes an exotic, high-cost shipping destination. No access to open water means expensive transit through Georgia’s Black Sea ports, or even more expensive air cargo. Slow and unreliable internal logistics, as well as high import duties, are also a killer.
It takes more than just electricity, Internet connectivity and low labour costs to create or sustain a significant IT sector. IT is highly interdependent and horizontally allied with a variety of other inputs, all of which require a critical mass of economic activity and sophistication to sustain.
I hear all kinds of nonsense from diaspora tech people about how Armenia can be an incredible startup hub because of its low costs. However, startups need clean business climates, low barriers to entry, transparent financial institutions, easy access to relatively abundant financing, and a critical mass of other startups that concentrates talented, experienced people in one place. There must be some sort of established and humming growth, exit and/or liquidation track. Armenia doesn’t offer much of that. Cheap labour does not a startup hub make. Without the right factors of production (principally human ones), any spark will quickly fizzle out.
None of this is to denigrate the efforts of the Yerevan tech startup community to do what it can with what it’s got. However, the chances of an Asian Tiger-type economic miracle there are vanishingly slim in my estimation.
It seems to me that government officials haven’t actually caught on to IT as a source of wealth or value yet. For the most part, they are rather sclerotic, stuck in the mindset of twentieth century industrialists and in keeping with Armenia’s largely traditional economic composition: if it’s not a physical good, it’s not a real thing that actually matters. Actually, this is probably a good thing; if technology companies weren’t so “under the radar”, they’d be subject to the same harsh extortion and shakedown racket that the notoriously corrupt bureaucracy, in concert with large business interests, visits upon most businesses in Armenia. It’s only a matter of time until they fully realise that there’s more to IT than just a bunch of guys sat at desks typing or whatnot.
In light of this, talk of crafty government policy incentives to lure startups or foster a more teeming IT investment climate seems like a very distant pipe dream.
IT is a globally competitive field. If you want to compete, you have to answer the fundamental question of just what it is that you can offer that is better than other countries or locales, or at least on par with other countries and locales. Generally-accepted criteria for a market poised to break out in IT include:
- Abundant human capital at a low cost (Armenia’s got the low cost, but not the abundant human capital);
- Adequate physical infrastructure;
- Logistical integration with the outside world — easy to travel to, ship to and do business with (this is particularly important if Armenia’s destiny were to become an “outsourcing centre” rather than a “startup hub”, and the “outsourcing centre” seems like a more practical step);
- Relatively transparent regulatory and legal climate;
- Location that is in some way central or regionally significant;
- Established education pipeline to feed the human talent pool, in some significant volume;
- Concentrated networks of financing resources, advisors, mentors and talent.
Does Armenia have any of this?
Rather than becoming a startup zoo, the more likely emergent development track for a place like Armenia is to work on becoming an offshore development centre, which is a simpler, dumber configuration that doesn’t make such enormous demands on its scarce and ill-prepared ecosystem.
This is the same sort of thing that propelled India to IT-led economic growth, and in principle, it seems possible. I’ve seen a number of American companies move their development offices to Yerevan, or acquire Armenian companies seemingly for the purpose of leveraging their existing engineering talent.
If this trend were to gain any traction, it might help to retain Armenian IT talent in Armenia. However, there is a natural tension between this and the downward wage pressure that gives offshoring its competitive edge from the point of view of the arbitrageur.
Still, I think if there’s any hope of IT taking a real hold in Armenia as an export, it’s probably going to proceed down this route. However, it would benefit a lot from government incentives to nurture it, as well as movement toward greater administrative and financial transparency that is going to be at odds with Armenia’s endemic corruption.
The biggest cause for pessimism is, in my mind, the lack of a critical mass of local talent. I don’t see where these companies are going to find enough local bodies. Seemingly in recognition of this fact, I’ve even heard proposals to convince IT-able diaspora Armenians to move to Armenia and ply their craft there for foreign companies, but when I press for details on how to pull off this feat of psychological alchemy, I get crickets. Diaspora Armenians are sometimes strongly receptive to nationalist-irredentist demagoguery, though, so I suppose one could do it with ideological bombardment. The sorts of people who are easily persuaded by that sort of claptrap don’t tend to make very talented engineers, though; computing work requires good critical thinking.
A side note about Tumo
Tumo has occasionally been trotted out to me as a vanguard of Armenia’s high-technology future. I was fortunate to have the opportunity to take a very in-depth tour once and see it firsthand before passing judgment.
I’m afraid I have little to say in praise of Tumo as a job skills creation engine, at least from the perspective of an engineer. They’ve taken what was fundamentally a rather good idea, backed by very significant money, lots of good hardware, a nice in-house curriculum management and interactive lesson delivery platform, and squandered it on teaching kids “lite” stuff that doesn’t matter. This focus on animation, design and media may be sexy, but if they want to give kids the foundation for skills that will actually help them thrive in critical, high-value roles, they need to put all this fluffy multimedia away and focus on serious software engineering and operations. That would take a rather radical retooling away from what they’re set up for now.
I’m all for artistic endeavours, but if you want to talk about Tumo as a player in some serious future for Armenia through IT, these design-oriented skill sets are not an effective vehicle for that investment. They need to learn a thing or two from the people who went to the 1990s school of hard knocks.
If there are still any left in Armenia by the time this goes to press.
This article on how the FBI is seeking expanded surveillance powers for “cloud”-hosted Internet communication services reiterates the frustration law enforcement agencies feel at the way technological evolution has caused many interception capabilities hitherto taken for granted to slip from their grasp.
First, let’s get something out of the way: people who are at least somewhat technologically aware understand and take for granted that security services and intelligence agencies have the means to intercept almost anything you do on the Internet–at least, if your means of doing it are even remotely conventional. I assume this is not a novel thesis to most readers. The difference is that this type of interception activity is rarely subject to the same kind of restrictions as evidence-gathering operations whose product must be admissible in court. So, we’re not talking about interception here per se; we’re talking about “lawful interception”.
Essentially, what the police are mad about is that CALEA, the last big legislative initiative on this front, passed in 1994 to force technical cooperation from service providers and provide standardised data interfaces for streamlined tapping, simply doesn’t keep pace with modernity in the way they would like. It originally applied only to phone companies. It has been expanded to VoIP providers and ISPs, of course, but it’s still a mechanism that was fundamentally designed with a view to the state of the telecommunications world in the early nineties. CALEA proceeds from a PSTN-oriented view of communications networks, one in which they are hierarchical, highly centralised, despotically controlled by a very limited cadre of individuals and entities, and meticulously standardised.
That paradigm doesn’t begin to cover all the diffuse packet-switched, federated, multi-jurisdictional, and above all, increasingly distributed and peer-to-peer communication transports that the world has diverged into since then. Essentially, they’re upset that there’s no conveyor belt on which your Gtalk (XMPP) messages or your World of Warcraft voice-over conversations can be fed straight into a large-scale FBI tap, in real time, on demand.
Worse yet, this multi-protocol, multi-service, multi-topology landscape is only getting more complex and diverse, not less. Fifteen years ago, how many different ways did you have to communicate person-to-person online? AIM, ICQ, e-mail? I could list dozens of mainstream methods now, and the list is only growing.
Of course, the technically minded among us have done these agencies a huge favour by opting for the convenience of “cloud”-hosted e-mail (Gmail), documents (Google Docs, Dropbox, etc.), messaging (Gtalk, Facebook Messenger, etc.). Those of us who have the knowledge and capability to run our own IM and e-mail servers, as we used to have to do, are, in my opinion, quite irresponsible for opting not to do so in the name of economics. However, this was never even a point of discussion for most people; most ordinary Internet users have always been stuck with whatever services some large company (AOL, Mirabilis, Google, Facebook, Twitter) fed them.
Certainly, the police and the FBI can–and do–subpoena your data from these companies. Their complaint here is really that the process is not streamlined or real-time. It’s not enough that they can, in principle, get at the data. They want to get at it quicker, better, faster. In other words, they want to reap some of the same efficiency increases that you’re reaping. Why should you get to send encrypted IMs through dozens of services straight from a burner phone with a 3G data plan, while they have to chase their tail and jump through antiquated, time-consuming legal hoops to try to piece some of what you’ve just done together?
The only effective strategy for bolstering public support for increased surveillance ventures is scare-mongering: invoking the usual bogeymen of terrorism, identity theft, or whatever the fashion of the hour may be. Law enforcement asks you the rhetorical question: Do we really want IT to provide an unprecedentedly untraceable and profoundly effective mechanism for the next 9/11 hijackers to collaborate?
My view: yes, we do.
The government just needs to get over it. The march of technological progress often brings unpleasant realities for certain institutions. A significant and growing portion of the population of the developed world has an Internet-connected 1+ GHz multi-core computer in their left pocket. That’s going to put unprecedentedly powerful encryption and encapsulation capabilities in the hands of everyday people–capabilities that have increased by several orders of magnitude in just two or three decades.
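To make the point concrete, here is a toy illustration (my own, not anything specific from the article): even the oldest provably unbreakable cipher, the one-time pad, fits in a few lines of standard-library Python and runs instantly on any phone-class processor. Real-world messaging apps use far more practical schemes, but the sketch shows how little stands between an ordinary user and mathematically strong secrecy, provided the key is truly random, as long as the message, used once, and exchanged securely.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each plaintext byte with the corresponding key byte."""
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
decrypt = encrypt

message = b"meet me at the usual place"
key = secrets.token_bytes(len(message))  # fresh cryptographically random key, used once
ciphertext = encrypt(message, key)

print(decrypt(ciphertext, key))  # recovers the original message
```

Without the key, the ciphertext is information-theoretically indistinguishable from random noise, which is precisely the property that makes dragnet interception of such traffic useless.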
For the most part, this is all good news for information privacy, civil rights, freedom of expression, and financial data protection. I am convinced that the net social benefit of being able to send information in ways that are not trivially interceptible greatly outweighs the downside of criminals also doing so. On the whole, I feel much better about having a conversation with a friend now about politics, confidential family problems, and sensitive financial details than I did ten or fifteen years ago.
From a technical point of view, I don’t see a realistic way for government agencies to keep up with the magnitudinal increases in communication complexity. It is starkly at odds with the distributed way the Internet and its constituent layers of abstraction are organised–even if it is more centralised at the physical layer than most of us think or would like to admit. Yes, I’ve heard of the Bluffdale Data Centre, but I cannot imagine how one would need to reorganise the Internet, topologically, or what kind of tentacular monstrosity one would need to build, in order to have a reasonable chance of actually tapping all of its communications. The biggest obstacles are not physical, but rather the sheer diversity of protocols and applications that would need to be understood by The Behemoth in order to make this process anything like scalable, which is the real problem for law enforcement. The resources already exist to spin their wheels on wiretapping someone as an expensive one-off. As I mentioned above, what the government wants is something closer to the economies of scale that have been reaped everywhere else. It stands to reason that in their ideal world, they’d want to force everyone to take their data, in all its heterogeneous patchwork, massage it into a standard format, and spoon-feed it to them. They want wiretapping to be easier, and are upset that instead, it’s gotten harder.
In an adversarial court system with a judicial presumption of innocence, they want you to make their job easier.
I don’t see any reason why it would be socially desirable to allow law enforcement to try to move this immovable object. I can’t see what good results can come from it, even if we completely sidestep the issue of massive abuses of law enforcement power and grant that there is a morally legitimate application of wiretapping by state agencies in at least some cases.
It seems to me that the greatest danger in all this is not that money launderers might use Tor to plot and scheme or that professional clubgoers might buy ecstasy on Silk Road, but rather the drag that costly bureaucratic boondoggles impose on economies and livelihoods. Anyone aware of the true state of CALEA compliance in the VoIP industry–or, to be precise, the elaborate motions of compliance–knows that there is no hope of achieving full compliance with such initiatives, and no hope that they will, in the grand scheme, achieve their ostensible goals.
Impossible boondoggles not only cost money, but have the effect of turning everyone into criminals or civil defendants, since nobody is actually in compliance. And, as usual, small companies suffer the most, since they don’t have the resources of large companies to put on elaborate charades of complying with Byzantine requirements. And, as usual, the big companies buy their way out, while the small companies are shaken down in the great cosmic lottery of selective enforcement. It all starts to be reminiscent of the cynical “we pretend to work and they pretend to pay us” mantra in my native USSR.
As progress moves forward, some institutions adapt, while certain legal and civil artifacts of previous historical frames fall away. Sometimes they just have to. Maybe you can’t trivially tap one or two ubiquitous methods of communication anymore, but so what? You can’t own slaves or torture people anymore, either.