Too cool for school: a retrospective on dropping out of university

The homo computatis college drop-out was a cliché whose establishment in folklore predated my departure from the University of Georgia by at least two decades. Nevertheless, I also joined the club. In the first semester of the 2006-2007 academic year, after dabbling half-heartedly in coursework for two years as a philosophy major, I threw open the gates and exiled myself into the great beyond.

Although my actions conformed to a known stereotype, I still feel I was something of an early adopter, virtually alone in my group of peers. I came to count many among my acquaintance who never pursued post-secondary education in the first place, or who floated in and out of community and technical colleges amidst work and financial struggles, but I knew of vanishingly few, especially at that time, who straight-up dropped out of a four-year institution–that is, academic majors who abruptly withdrew from their courses mid-semester and skipped town with no intention of returning. That sort of thing seemed to be the province of Larry Page, Sergey Brin and Bill Gates–definitely outliers. And unlike them, I wasn’t at the helm of a world-changing startup on a clear trajectory into the multi-billion dollar stratosphere, so I couldn’t point to an overwhelming and self-evident justification.

A lot has changed since then. By all appearances, we seem to be passing through a watershed moment in which existential questions about the value and purpose of college and traditional higher education in America are emerging at the mass level among Millennials and Generation Z-ers. This discussion has been spurred on by the housing crisis, a growing tower of debilitating student loan debt, rising tuition, and mounting questions about the future of employment in the developed world, especially the ways in which opportunity has become more and less democratic in the context of technological shifts and globalisation. There’s also a growing interest in open courseware and novel forms of technology-enabled correspondence learning–though, I should say, I don’t share in the TEDdies’ conflation of the Khan Academy with higher education. Still, nearly a decade has passed since I made my fateful decision to forsake the path of higher learning, so it seems like a good time to reflect on where it’s taken me and whether it was a good call.

Visiting Western Michigan University in Kalamazoo for a conference, somewhere around 4th or 5th grade.

Some aspects of the progression of events will sound familiar to many in IT. I grew up mostly around university environments, and computer programming figured dominantly among my childhood interests. It was an interest easily encouraged by proximity to lots of expensive computer equipment, good Internet connectivity and access to sympathetic graduate student mentors. I had been playing with computers and the Internet since age 8 or so, and wrote my first program at 9. I spent much of my adolescent and teenage years nurturing this hobby, having a strong interest in both software engineering and operational-infrastructural concerns. As most people in IT know, ours is a profession that offers unrivaled self-teaching opportunities, enabled by a highly Googleable open-source software ecosystem and collaborative social dynamic. That’s why so many programmers like me are self-taught.

I also had various other intellectual interests, however, and had no plans to make a tech career. In fact, for most of my life prior to age eighteen or so, I wasn’t even particularly aware that I had a marketable skill set. The desire to get into computing as a child was utterly innocent and non-ulterior, as arbitrary as some kids’ choice to take up the cello or oil painting. I entered UGA in 2004 as a political science major and shortly switched to philosophy, with vague ideas of law school in the future.

It’s also worth remarking that I came from a humanities-oriented academic family and cultural background; my parents were professors of philosophy at a top-tier Soviet university, and my father is a professor of the same at UGA. My extended family background includes a venerable dynasty of musicians, including my great-grandfather, Mikhail Tavrizian, the conductor of the Yerevan State Opera and a National People’s Artist of the USSR, as well as his Russian wife Rassudana, a renowned ballerina in the Bolshoi Theatre. My late grandmother was a philologist and a member of the philosophy faculty of the Russian Academy of Sciences. When my parents emigrated to the US in 1992 (I was six years old), they redid graduate school entirely at the University of Notre Dame, which is where my primary school years were spent. My social and cultural life at that time played out in housing for married graduate students with children, where I ran around with friends from dozens of different nationalities.

All this to say, I was on a strong implicit academic trajectory as a function of my upbringing, a trajectory rooted in the humanities, not hard sciences. In fact, my parents were not especially supportive of my computing hobbies. As they saw it, I think, spending my days holed up in my room modem-ing away interfered with schoolwork and did little to promote the kind of cultural development consonant with the mantle I was meant to inherit.

Nevertheless, I began working when I was eighteen (my parents did not let me work prior to that–Russian parents do not, as a rule, share the American faith in the virtues of part-time work for teenagers or students). My first job was technical support at a small Internet Service Provider in our university town of Athens, GA, at first very much part-time and reasonably complementary to college. I earned a 4.0 GPA in the first semester of my freshman year.

However, I was ambitious and precocious, decidedly more interested in work than school. Within a year, after some job-hopping (which included a stint in IT and warehouse labour at a Chinese importer of home and garden products–how’s that for division of labour?), I returned to the ISP at twice the pay rate and assumed the role of systems administrator. I was learning a great deal about real-world business and technology operations, getting my hands on industrial infrastructure and technologies, and rapidly assimilating practical knowledge. I had been around theoretical publications and underpaid graduate student assistants hunkered in dimly lit carrels my whole life, but I had never had to learn the basics of business, or how to communicate with all kinds of everyday people on the fly. Although the cultural clash was sometimes frustrating, the novelty of learning practical skills and how to run a real-world operation was intoxicating. It occasionally led to an outsized moral superiority complex, too, as I became conscious of the fact that at age 19, I could run circles around most of the job candidates being interviewed, some of whom had master’s degrees. Clearly, I was doing something right!

Fielding customer calls at the ISP. Clearly, I’m thrilled to be doing customer support despite being a sysadmin.

From that point, my career rapidly evolved in a direction not compatible with school. Formally, I was still part-time and hourly, but it was effectively closer to a full-time gig, and I rapidly took on serious responsibilities that affected service and customers. Small as the company was, in retrospect, I was a senior member of its technical staff and an authority sought after by people ten to twenty years older. My commitment to school, already a decidedly secondary priority, rapidly deteriorated. I had no semblance of campus life immersion or student social experience. From my sophomore year onward, I was effectively a drop-in commuter, leaving in the middle of the day to go to a class here and a class there, then hurrying back to the office. I neither had time for studying nor made the time. My GPA reflected that. I didn’t care. School droned on tediously; meanwhile, T1 circuits were down and I was busy being somebody!

As my interests in telecom, networking, telephony and VoIP deepened, it became clear that the next logical career step for me was to move to Atlanta; Athens is a small town whose economy would not have supported such niche specialisation. Toward the end of the second semester of my sophomore year, I began looking for jobs in Atlanta. I unconsciously avoided the question of what that meant for my university education; I was simply too engrossed in work and captivated by career advancement. In the first semester of my junior year, by which point my effort at university had deteriorated to decidedly token and symbolic attendance, I finally found a job in Alpharetta (a suburb of Atlanta) at a voice applications provider. In October 2006, at the age of twenty, I announced that I was quitting university and moving to the “big city”.

My parents reacted better than I thought they would. I halfway expected them to disown me. However, in hindsight, I think they were pragmatic enough to have long realised where things were headed. It’s hard for me to say, even now, to what degree they were disappointed or proud. I don’t know if they themselves know. What was most clear at that moment was that I am who I am, and will do as I do, and there’s no stopping me.

That’s not to say that East and West didn’t collide. I remember having a conversation that went something like:

– “But what happens if you get fired in a month?”

– “Well, I suppose that’s possible, but if one performs well, it’s generally unlikely.”

– “But is there any guarantee that you won’t lose your job?”

Guarantee? That was definitely not a concept to which I was habituated in my private sector existence.

– “There are never any guarantees. But my skills are quite portable; if such a thing happened, I could find another job.”

– “It just seems very uncertain.”

– “That’s how it goes in the private sector.”

All the same, it was clear enough that, for all the problems this decision might cause me, I certainly wasn’t going to starve. Even in Athens, I was an exceptionally well-remunerated twenty-year-old. My first salary in Atlanta was twice that. Moreover, it was clear that IT was an exceptionally democratic and meritocratic space; if one had the skills, one got the job. My extensive interviews in Atlanta drove home the point that, by that point in my career development, potential employers did not care about my formal higher education credentials. The “education” section of my résumé had long since been deleted, replaced by a highly specific employment history and a lengthy repertoire of concrete, demonstrable skills and domain knowledge with software and hardware platforms, programming languages, and so on. The résumés I was sending out to Atlanta companies at age twenty proffered deep and nontrivial experience with IP, firewalls, routers, switches, BGP, OSPF, Cisco IOS, Perl, Bash, C, PHP, TDM, ADSL aggregation, workflow management systems, domain controllers, intrusion detection systems–what didn’t I do at that ISP? There aren’t many companies that would have let someone of my age and experience level touch production installations of all those technologies. I was bright-eyed, bushy-tailed, and soaked it all up like a sponge. And when asked for substantiation by potential employers, I sold it.

Those of you in IT know how this works: formal education is used by employers as signalling about a candidate only in the absence of information about concrete experience or skills. All other things being equal, given two green, inexperienced candidates among whom one has a university diploma and one doesn’t, employers will choose the one who finished university, as it’s a proxy for a certain minimal level of intelligence and ability to complete a non-trivial multi-year endeavour. When concrete experience and skills are present, however, the educational credentials fly out the window for most corporate engineering and operations jobs, and the more one’s career evolves, the less relevant early-stage credentials become. Moreover, there are innumerable people in mainstream IT whose university degrees were not in computer science or affiliated subject matter, but rather in a specialty like literature, history or ecology.

My next three jobs were in Atlanta, all within the space of about a year and a half. I averaged a job change every six months or so, often considerably increasing my income in the process. By the time I was barely twenty-two, I had worked for a voice applications provider, a major local CLEC and data centre operator, and an online mortgage lender.

Of course, certain jobs were off-limits. I couldn’t do research work that required a formal computer science background, nor take jobs in government or certain other large institutions that remained sticklers for credentials. I lacked the formal mathematics and electrical engineering background necessary for low-level hardware design work. It’s also quite likely that if I had tried to climb the corporate ladder into middle to upper management, I would at some point, later in life, have bumped into a ceiling for degree-less drop-outs. When one gets high enough, it becomes comme il faut to have an alma mater in one’s biography, even if it’s just an airy-fairy management degree from a for-profit correspondence course mill. The only way I know of to get around that is to have had a famous and inscrutable business success (e.g. an acquisition) to overshadow it. Click on “Management Team” under the “About” section of some tech companies’ websites to get the drift. Appearances are important at that level.

I didn’t stick around long enough to figure out where exactly the limits are (although I didn’t get the impression there were many, as long as one could demonstrably do the work). In early 2008, I was abruptly fired after some political clashes. Also, they don’t take kindly to the habitual morning tardiness of “programmer’s hours” in financial services. Instead of looking for my seventh job in four years, I decided to go out on my own. I had been itching to do it for quite some time, but didn’t quite have the wherewithal to walk away from the steady paycheck. Getting fired has a way of forcing that issue.

And so, on a cold, windy January day in 2008, barely twenty-two, I left the building with my belongings in a box, with nearly zero dollars to my name, having wiped out my savings with a down payment on a downtown Atlanta condo. I had no revenue and no customers. A friend and I went to celebrate. I was determined and hit the ground running, though, and that’s how I started Evariste Systems, the VoIP consultancy turned software vendor that I continue to operate today, nearly eight years later.

Because the US does not have a serious vocational education program and because the focus of the “everyone must go to college” narrative of the last few decades is reputed success in the job market (or, more accurately, the threat of flipping burgers for the rest of one’s life), the first and most pertinent question on American students’ minds would be: do I feel that I have suffered professionally because I did not finish my degree?

I didn’t think I would then, and I still don’t think so now. Notwithstanding the above-mentioned limitations, it’s safe to say that I could qualify for almost any mainstream, mid-range IT job I wanted, provided I evolved my skill set in the requisite direction. In that way, IT differs considerably from most white-collar “knowledge work” professions, which are variously guilded (e.g. law, medicine) or have formal disciplinary requirements, whether by the nature of the field (civil engineering) or by custom and inertia (politics, banking). Although politics perverts every profession, IT is still exceptionally meritocratic; by and large, if you can do the job, you’re qualified.

The inextricable connection of modern IT to the history and cultural development of the Internet also moves me to say that it’s still the easiest and most realistic area in which one can receive a complete education through self-teaching. You can learn a lot about almost anything online these days, but the resources available to the aspiring programmer and computer technologist are unparalleled.

An insert from a stack of mid-1990s PC Magazines discarded by my neighbour, which I coveted tenaciously.

That doesn’t mean I’d recommend skipping college generically to anyone who wants to enter the profession at roughly the same level. I put in, as a teenager, the requisite ten to twenty thousand hours thought to be necessary to achieve fundamental mastery of a highly specialised domain. However, I can’t take all the credit. I was fortunate to have spent my formative years in a university-centric environment, surrounded by expensive computers and highly educated people (and their children), some of whom became lifelong mentors and friends. Although my parents were not especially thrilled with how I spent my free time (or, more often, just time), they had nevertheless raised a child–as most intelligentsia parents do–to be highly inquisitive, open-minded, literate and expressive, with exposure to classical culture and literature. Undergoing emigration to a foreign land, culture and language at the age of six was challenging and stimulating to my developing mind, and the atmosphere in which I ended up on the other side of our transoceanic voyage was nurturing, welcoming and patient with me. The irony is not lost upon me that I essentially–if unwittingly–arbitraged the privilege associated with an academic cultural background into private sector lucre. A lot owes itself to blind luck, just being in the right place at the right time. I could probably even make a persuasive argument that I lucked out because of the particular era of computing technology in which my most aggressive uptake played out.

This unique intersection of fortuitous circumstances leads me to hesitate to say that nobody needs a computer science degree to enter the IT profession. My general sense is that a computer science curriculum would add useful, necessary formalisation and depth to the patchwork of the average self-taught techie, and this certainly holds true for me as well–my understanding of the formal side of machine science is notoriously impoverished, and stepping through the rigorous mathematics and algorithms exercises would doubtless have been beneficial, though I don’t think it would have been especially job-relevant in my particular chosen specialisation.

Still, I’m not committed to any particular verdict. I’m tempted to say to people who ask me this question: “No, you don’t need a degree to work in private industry–but only if you’re really good and somewhat precocious.” Many nerds are. Almost all of the really good programmers I know have programmed and tinkered since childhood. It comes part and parcel, somewhat like in music (as I understand it). In the same vein, I don’t know anyone who wasn’t particularly gifted in IT going in, but who came out that way after a CS degree.

On the other hand, for the median aspiring IT professional, I would speculate that a CS degree remains highly beneficial and perhaps even essential. For some subspecialisations within the profession, it’s strictly necessary. I do wonder, though, whether a lot of folks whose motive in pursuing a CS degree is entirely employment-related wouldn’t be better off entering industry right out of high school. They’d start off in low entry-level positions, but I would wager that after four years of real-world experience, many of them could run circles around their graduating peers, even if the latter do have a more rigorous theoretical background. If practicality and the job market are the primary concern, there are few substitutes for experience. Back at my ISP job, CS bachelors (and even those with master’s degrees) were commonly rejected; they had a diploma, but they couldn’t configure an IP address on a network interface.
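(For the non-technical reader, the task in question is about as basic as network administration gets. A minimal sketch of it on a Linux machine, using the standard iproute2 tools, looks something like the following; the interface name and addresses are purely illustrative.)

    # assign an address and netmask to the first Ethernet interface
    ip addr add 192.0.2.10/24 dev eth0
    # bring the interface up
    ip link set eth0 up
    # point the default route at the upstream gateway
    ip route add default via 192.0.2.1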

Another reason I don’t have a clear answer is because things have changed since then; a decade is geological in IT terms. I’ve also spent twice as much time self-employed by now as I did in the employed world, and niche self-employment disconnects one from the pulse of the mass market. I know what I want in an employee, but I don’t have a finely calibrated sense of what mainstream corporate IT employers want from graduates these days. When I dropped out, Facebook had just turned the corner from TheFacebook.com, and there were no smartphones, no Ruby on Rails, no Amazon EC2, no “cloud orchestration”, no Node.js, no Docker, no Heroku, no Angular, no MongoDB. The world was still wired up with TDM circuits, MPLS was viewed as next-generation, and VoIP was still for relatively early adopters. The point is, I don’t know whether the increasing specialisation at the application layer, and increasing abstraction more generally, has afforded even more economic privilege to concrete experience over broad disciplinary fundamentals, and if so, how much.

All I can firmly say on the professional side is that it seems to have worked out for me. If I were in some way hindered by the lack of a university diploma, I haven’t noticed. I’ve never been asked about it in any employment interview after my student-era “part-time” jobs. For what I wanted to do, dropping out was the right choice professionally, and I would do it again without hesitation. It’s not a point of much controversy for me.

The bigger and more equivocal issue on which I have ruminated as I near my thirtieth birthday is how dropping out has shaped my life outside of career.

I don’t mean so much the mental-spiritual benefits of a purportedly well-rounded liberal education–I don’t think I was in any danger of receiving that at UGA. 80% of my courses were taught by overworked graduate teaching assistants of phenomenally varying pedagogical acumen (a common situation in American universities, especially public ones). The median of teaching quality was not great. And so, I’m not inclined to weep for the path to an examined life cut short. It’s not foreclosed access to the minds of megatonic professorial greats that I bemoan–not for the most part, anyway.

However, moving to Atlanta as a twenty-year-old meant leaving my university town and a university-centric atmosphere. My relatively educated environs were replaced with a cross-section of the general population, and in my professional circles, particularly at that time, I had virtually no peers. My median colleague was at least ten years older, if not twenty, and outside of work, like most people living in a desolate and largely suburban moonscape, I had nobody to relate to. At the time I left, I found value in the novelty of learning to work and communicate with the general public, since I had never had to do it before. I thought our college town was quite “insular”. In retrospect, though, it would not be an exaggeration to say that I robbed myself of an essential peer group, and it’s no accident that the vast majority of my enduring friendships to this day are rooted in Athens, in the university, and in the likeminded student personalities that our small ISP there attracted.

Beautiful morning view from the balcony of my recently foreclosed condo.

As a very serious and ambitious twenty-year-old moving up the career ladder, I also took a disdainful view of the ritualised rite of passage that is the “college social experience” in American folklore. I didn’t think at the time that I was missing out on gratuitous partying, drinking, and revelatory self-discovery in the mayhem of dating and sex. If anything, I had a smug, dismissive view of the much-touted oat-sowing and experimentation; I was leapfrogging all that and actually doing something with my life! Maybe. But I unraveled several years later, anyway, and went through a brief but reckless and self-destructive phase in my mid-twenties that wrought havoc upon a serious romantic relationship with a mature adult. I also at times neglected serious worldly responsibilities. Being a well-remunerated mid-twenties professional didn’t help: it only amplified the gross financial mistakes I made during that time, whereas most people in their twenties are limited by modest funds in the damage they can do to their lives. I’m still paying for some of those screw-ups. For example, few twenty-one-year-olds are equipped to properly weigh the wisdom of purchasing a swanky city condo at the top of a housing bubble, and subsequent developments suggest that I was no exception. Oh, a word of advice: pay your taxes. Some problems eventually disappear if you ignore them long enough. Taxes work the opposite way.

But in hindsight, a bigger problem is that I also missed out on the contemplative coffee dates, discussion panels, talks and deep, intelligent friendships that accompany student life in the undergraduate and post-graduate setting. While the median undergraduate student may not be exceptionally brilliant, universities do densely concentrate smart people with thoughtful values. It’s possible to find such connections in the undifferentiated chaos of the “real world”, but it’s much harder. I situated myself in a cultural frame which, while it undergirds the economy, is not especially affirming of the life of the intellect. To this day, there is an occasionally cantankerous cultural clash between my wordy priorities and the ruthlessly utilitarian exigencies of smartphone-thumbing business. Get to the point, Alex, because business. Bullet points and “key take-aways” are the beloved kin of e-solutions, but rather estranged from philosophy and deep late-night conversations.

This facet of campus life is less about education itself than about proximity and concentration of communities of intelligent people at a similar stage of life. Because I grew up in universities, I didn’t appreciate what I had until I lost it; I traded that proximity to personal growth opportunities for getting ahead materially and economically, and my social life has been running on fumes since I left, powered largely by the remnants of that halcyon era of work and school.

If leaving the university sphere was a major blow, self-employment was perhaps the final nail. Niche self-employment in my chosen market is a largely solipsistic proposition that rewards hermitism and prolific coding, perfect for an energetic, disciplined introvert. I probably would have done better at it in my teenage years, but it didn’t suit my social nature or changed psychological priorities as an adult. A lot of time, money and sacrifice was emitted as wasted light and heat into the coldness of space as I spun my wheels in vain trying to compensate for this problem without fully understanding it.

The essential problem is much clearer in hindsight: in leaving the university and the employment world, with its coworker lunches and water cooler talk, I had robbed myself of any coherent institutional collective, and with it, robbed myself of the implicit life script that comes with having one. I was a man without any script whatsoever. I rapidly sequestered myself away from the features of civilisation that anchor most people’s social, romantic and intellectual lives, with deleterious consequences for myself. I did not value what I had always taken for granted.

There are upsides to being a heterodox renegade, of course. Such persistent solipsism mixed with viable social skills can make one very fluid and adaptable. I took advantage of the lifestyle flexibility afforded by the “non-geographic” character of my work to travel for a few years, and found, in wearing numerous cultural hats, an unparalleled freedom few will ever experience. I had the incredible fortune to reconnect with my relatives and my grandmother on another continent. For all its many hardships, self-employment in IT has much to recommend it in the dispensation it affords to write the book of one’s life in an original way.

Be that as it may, the foundations of my inner drive, motivation and aspirations are notoriously ill-suited to the cloistered life of a free-floating hermit, yet I had taken great pains to structure such a life as quickly as possible, and to maximal effect. My reaction to this dissonance was to develop a still-greater penchant for radical and grandiose undertakings, a frequent vacillation between extremes, in an effort to compensate for the gaping holes in my life. The results were not always healthy. While there’s nothing wrong with marching to the beat of one’s own drum, I should have perhaps taken it as a warning sign that as I grew older and made more and more “idiosyncratic” life choices, the crowd of kindred spirits in my life drastically thinned out. “Original” is not necessarily “clever and original”.

In sum, I flew too close to the Sun. When I reflect upon the impact that my leaving the university has had upon my life, I mourn not professional dreams deferred, nor economic hardship wrought, but rather the ill-fated conceit that I could skip over certain stages of a young adult’s personal development. Now that the novelty has worn off and the hangover has set in, I know that it would have been profoundly beneficial to me if they had unfolded not within the fast and loose patchwork I cobbled together, but within a mise en scène that captures the actions, attitudes and values of the academy–my cultural home.


On “communication skills” and pedagogy

Here’s a pet peeve: the widespread belief that any two people, regardless of the disparity in their levels of intellectual development, are destined to fruitfully converse, as long as both exhibit “good communication skills”.

First, acknowledgment where it’s due. It is indeed an important life skill to be able to break down complex ideas and make them accessible to nonspecialists.

“If you can’t explain it simply, you don’t understand it well enough” is a remark on this subject often attributed to Einstein (though, as I gather, apocryphally). The idea is that explaining something simply, in ways anyone can understand, is the sign of true mastery of a subject, because only deep knowledge allows you to adroitly navigate up and down the levels of abstraction required to do so.

Those of us in the business world also know about the importance of connecting with diverse personalities–customers, managers, coworkers. In the startup economy, there’s a well-known art of the “elevator pitch”, wherein a nontrivial business model can be packaged into ten-second soundbites that can hold a harried investor’s attention–the given being that investors have the attention spans of an ADHD-afflicted chipmunk.

I would also concur with those who have observed that scholarly interests which don’t lend themselves to ready explanation–that are “too complex” for most mortals to fathom–are often the refuge of academic impostors. There are a lot of unscrupulous careerists and political operators in academia, more interested in what is politely termed “prestige” than in advancement of their discipline and of human understanding. These shysters, along with more innocent (but complicit) graduate students caught up in the pressures of the “publish or perish” economy, are the spammers of house journals, conferences and research publications, often hiding behind the shield of “well, you see, it’s really complicated”. Most legitimate scholarly endeavours can be explained quite straightforwardly, if hardly comprehensively. Complexity is an inscrutable fortress and a conversation-stopper in which people more interested in being cited and operating “schools of thought” (of which they are the headmasters, naturally) hide from accountability for scholarly merit.

All this has been polished into the more general meme that productive interaction is simply a question of “learning to communicate”. With the right effort, anyone can communicate usefully with anyone. It doesn’t matter if someone is speaking from a position of education and intelligence to someone bereft of those gifts. Any failure to achieve necessary and sufficient understanding is postulated as a failure of communication skills, perhaps even social graces (e.g. the stereotypical nerdling).

This is an extreme conclusion fraught with peril. We should tread carefully lest we impale ourselves on the hidden obstacles of our boundless cultural enthusiasm for simplification.

First, there’s a critical distinction between clarity and simplicity. It is quite possible to take an idea simple at heart and meander around it circuitously, taking a scenic journey full of extraneous details. Admittedly, technologists such as programmers can be especially bad about this; their explanations are often vacillatory, uncommitted to any particular level of abstraction or scope, and full of tangents about implementation details which fascinate them immeasurably but are fully lost on their audience. I’ve been guilty of that on more than a few occasions.

However, there is an intellectually destructive alchemy by which the virtues of clarity and succinctness become transformed into the requirement of brevity. Not all concepts are easily reducible or lend themselves to pithy sloganeering–not without considerable trade-offs in intellectual honesty. This is a point lost on marketers and political activists alike. It leads to big ideas and grandiose proclamations that trample well-considered, moderate positions, as the latter are thermodynamically outmatched by simplistic reductions. Brandolini’s Law, or the Bullshit Asymmetry Principle, states: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.” As always, sex sells–a fact of which the TEDdies have a firm grasp, with their peddling of seductive insight porn. As Evgeny Morozov said:

“Brevity may be the soul of wit, or of lingerie, but it is not the soul of analysis. The TED ideal of thought is the ideal of the ‘takeaway’—the shrinkage of thought for people too busy to think.”

Second, the idea that “communication skills” are at the heart of all matters has wormed its way into pedagogy rather disturbingly in the form of group work and so-called collaborative models of learning. As the thinking goes, the diversity of a student body is an asset; students have much to learn from each other, not just the lecturer, and encouraging them to do so prepares them for “the real world”, where they’re ostensibly going to be coworkers, police officer and arrestee, and so on.

It reminds me of an episode recounted by my favourite author William Blum in his memoir about the political upheaval of the 1960s:

“At one point I enrolled for a class in Spanish at the so-called Free University of Washington, and at the first meeting I was flabbergasted to hear the ‘teacher’ announce that he probably didn’t know much more Spanish than the students. And that’s the way it should be, he informed us–no authoritarian hierarchy. He wanted to learn from us as much as we wanted to learn from him.”

The counterculture kids were challenging incumbent hierarchies of authority. I see the same kind of anti-intellectualism recycled today into the putatively more laudable goal of social flattening.

But there’s a limit to the productive fruit of such ventures. It’s best illustrated by an anecdote from my own life.

When I was a freshman at the University of Georgia, I took an obligatory writing and composition course, as part of the infamous “core requirements” (remedial high school) that characterise the first year or two of four-year undergraduate university education in the US. One day in November, our drafts of an expository essay were due, presumably for commentary and feedback on writing mechanics by the English graduate student teaching the course.

Instead, we were paired with a random classmate and told to critique each other’s papers. My partner was an Agriculture major–a farmer’s son, he readily volunteered–who was only at the university because his father insisted that he needed a college degree before taking up his place in the family business. I would estimate his reading level to have been somewhere in the neighbourhood of fifth to eighth grade. I was going to critique his paper, and he was going to critique mine.

Candidly, his paper was largely unintelligible gibberish; it would have taken many improbable megajoules of energy input for it to rise merely to the level of “unpolished”. Were the problems strictly mechanical–paragraphs lacking topic sentences, no discernible thesis in sight, no clear evidentiary relationship between his central claims and the sentences supporting them–I would have earned my keep in a few minutes with a red pen.

The problem was much deeper: his ideas were fundamentally low-quality, benighted in a commonsensically evident kind of way. They were at once trite, obvious, and all but irrelevant to the assigned topic. The few empirical claims made ranged from startling falsehoods to profoundly unfalsifiable arrangements of New Agey words that grated on the ear of someone accustomed to the idea that the purpose of arranging words was to convey meaning. He was hurtling at light speed toward an F. What could I do, rewrite his paper for him? How would I even begin to explain what is wrong with it? There was no room to start small or to evolve toward bigger, more summative problem statements; it was a genuine can of worms: pry it open, and all the worms come out to play at once.

I don’t mean to impugn him as a human being; he just wasn’t suited to the university’s humanities wing, whose business was reputed to be the life of the mind, set in a programme of liberal education. He didn’t know how to argue or how to write — period. He was more of a hero of proletarian labour, as it were, reared in a life script ineffably different to my own, never having crossed paths with me or anyone else in the pampered, effete, bourgeois “knowledge work” setting before, and destined to never cross paths with me in any such setting again. I was utterly paralysed; there just wasn’t much I could do to help him. Plainly, I couldn’t tell him that his thoughts issue forth from a nexus of civilisation unrecognisable to me. There wasn’t much of anything to say, really. I made a few perfunctory remarks and called it a day.

His feedback on my paper, which in turn suffered from organisational and topic transition problems that continue to dog my writing today, was: “Looks good, man!” Verily, his piercing insight knew no bounds. We really learned a lot from each other that day. Along the way, I overheard bits and pieces of a rather erudite peer review by a considerably better-educated classmate. Why couldn’t she review my paper? It would have almost certainly helped. My writing wasn’t stellar, and my devoted readership–I do it all for you, much love!–knows it still isn’t.

Later, I privately enquired of the lecturer how I was supposed to condense a lifetime–however hampered by the limitations of my age and experience–of literacy, intellectual curiosity, familial and cultural academic background, semi-decent public education and informal training in polemic and rhetoric into a functional critique that would realistically benefit my beleaguered counterpart and help him write a better paper. She replied: “That was the whole point; you need to work on your communication skills.”

In defiance not only of the comme il faut tenets of political correctness, but in fact–in some sense–of the national mythos of our putatively classless and democratic melting pot, I brazenly suggest something that is, I think, considered fairly obvious elsewhere: not all categories of people are destined to communicate deeply or productively.

When such discord inevitably manifests, we should not reflexively blame so-called communication skills or processes. People operate in silos that are sometimes “civilisationally incommensurable”, as it were, and sometimes there just isn’t much to communicate. This is the reality of culture, class and education, and the thinking on collaborative learning and teaching methodologies should incorporate that awareness instead of unavailingly denying it. Matching partners in group activities by sociological and educational extraction clearly presents political challenges in the modern classroom, though. Instead, I would encourage teachers to rediscover–“reimagine” is the appropriate neologism, isn’t it?–the tired, hackneyed premise of leadership by example. At the risk of a little methodological authoritarianism and a few droopy eyelids, perhaps the best way to ensure that students leave your course better than you found them is to focus on their communication with you. They’ll have the rest of their lives to figure out how to transact with each other.


On workplace ergonomics and motivation

In tech, we’re always talking about workplace ergonomics, the fine points of Aeron chairs and standing desks, the wretchedness of open-plan spaces and cubicle farms versus private offices, how many monitors to have, and so on. Usually, I’m an eager participant, very much attuned to the idea that comfort and pleasant aesthetics are essential to output, motivation, and sustained concentration in high-focus, specialised creative labour.

But sometimes it does help to get a little perspective. A glance at the work environments of many world-class musicians, scholars, authors and researchers from the 18th, 19th and early 20th centuries, or at the cramped physical environments in which highly capable, clever IT people work in developing countries, may convey a useful reminder that if you really want to do something, you can do it just about anywhere.

For that matter, my first job, as a technical hand and later sysadmin at a small-town ISP, when I was a bright-eyed, bushy-tailed 18-to-20-year-old, was in office space not altogether Class A; it was a freezing, windowless dungeon in a small four-room suite whose HVAC controls were shared with a machine room in need of constant, high-intensity cooling, and the rooms were littered with stray computer hardware and accessorised with fairly second-rate furniture. Yet, I’ve never been so productive or so excited to go to work since those days.

In fact, the physical configurations in which I eagerly wrote code for many hours as a teenager are downright sadistic by cushy Silicon Valley standards: a crude wooden desk from Big Lots, a far too low wooden dining room chair, a 7 lb (~3.2 kg) laptop, a shared family PC in the tiny living room of graduate student barracks. And what a PC that was–a 386SX/40 with 2 MB of RAM, at a time when most respectable citizens were packing 60 and 90 MHz Pentiums. We were poor–by American standards, anyway–but I didn’t really notice.

I have a friend and colleague who travels around the world, living his digital life out of a rather clunky Lenovo ThinkPad, sat on a variety of surfaces, whatever is available at the moment. While he’s not usually staying in mosquito-ridden youth hostels in the Congolese jungle, I fully grant, he doesn’t have 30″ IPS displays or wrist rests, to say nothing of a snazzy office with an adjustable-height desk and a foosball table–you know, the bare essentials for Ruby on Rails jockeys in the Bay Area. Yet he’s one of the most disciplined and entrepreneurially successful people I know. I think of him every time someone says that they “literally cannot work” without a chair with proper lumbar support.

More extremely, some of my Armenian colleagues learned to code in a time when Yerevan was largely without grid electricity, during the disastrous Nagorno-Karabakh War and its contemporaneous power crisis. They hooked up their cobbled-together computers to car batteries for a few hours a day, batteries which they improvised some means of charging occasionally, usually by leeching surplus electricity from cables to critical facilities that did have it. They’re some of the most capable and multifarious IT guys (not to mention electrical engineers!) I know.

Yes, one’s eyes, back, wrists, etc. become more fragile and capricious with age, and it would be prudent to afford them some care. As I near 30, I’m acutely aware of that fact, having all sorts of aches and pains I didn’t use to have.

Nevertheless, my conclusion on the psychological purpose of constant twiddling of small features of our environment – desks, monitor sizes and so forth – is that it’s more about tricking yourself into working on stuff you don’t really want to do. It’s to create the illusion that now everything is truly right with the world, and productivity will seamlessly spring forth from one’s fingers. It’s a means of papering over the fact that you’d rather be doing something else. If most of us were actually doing something stimulating, we’d probably be happy enough doing it on a rooftop while it’s snowing.

To drive that point home, a story:

I rented an office in a repurposed Soviet-era building in Yerevan, Armenia for a while. I should pause to say that this was very much a “you get what you pay for” kind of thing, so please don’t think all life in Yerevan looks like this. Anyway, here’s what was going on in the very room next door some of the time (“renovation”), and also a picture of the ice that formed under my doorway some winter days:

The “renovation” underway in the room next door.

Ice that formed under my office doorway on a winter day.

The “included” Internet connectivity was sorely lacking; I eventually ended up with some WiMAX receiver that gave me 384k/128k on a good day. I grumbled about it some of the time, sure. But I also wrote much of our product’s middleware, user interface and API documentation there. I was furiously productive, and it would not be unreasonable to say that our product made a titanic generational leap whilst I was there. Some other parts of this product infrastructure were written, also at a very impressive clip, whilst sat in a stiff chair at a Spartan metal table in the rentable upstairs workspace at Sankt Oberholz, a Berlin coffee shop and coworking enterprise situated in a late 19th century building. I was hunched over uncomfortably, my nose stuck in a 13″ ultrabook with a loathsome keyboard. My eyes burned and my wrists tingled by the end of most days.

Now I’m sitting in my eminently comfortable Class A office in Atlanta, with great connectivity, a 32″ LED display on my desk, and a favourite Das Keyboard at my fingertips, and I’m writing blog posts and poking around on Facebook. The difference is pretty clear to me: when I was in Yerevan and Berlin, I wanted to work, and now I don’t.

I won’t end with some hackneyed and nauseating Millennial “thought leadership” about “doing what you love”, nor the facile conclusion that it’s all in your head. I would just offer the modest speculation that an ounce of tweaks to the intellectual content of one’s work, or to the other, more existential life issues that inform one’s inner drive, is probably worth a pound of major adjustments to one’s office furniture, seating, barriers, and computer peripherals.


Fond memories of Innsbruck

It’s wonderful to be back amidst the beautiful Alpine scenery of Innsbruck, and I’m overwhelmed by nostalgia. I was last here almost exactly twelve years ago, in 2003, when I was 17, for about two months during the summer before my senior year of high school.

This was before the era of smartphones and ubiquitous WiFi, and we had no Internet access in our rented apartment. I still had to use a real map, and had maybe an hour of Internet access a day. For the first time in many years, I learned to happily do without, and to go outside and enjoy life without anxiety about the torrent of news, information and opinion to which I was not privy.

My father got a rare and coveted opportunity to teach on a summer abroad programme for American students. Practically, classes would let out around noon on Thursday, and the weekend was ours for travelling; this way, I got to see Vienna, Rome, Florence, Berlin and Paris, generally connecting by train through München. The München-Innsbruck EC train got to feel something like a commute home by the end of it all.

But it was in Innsbruck itself that I got my first exposure to Western European life, and it did a lot to mellow me out of my teenage angst, in those times expressed through what we might call “niche” intellectual and ideological fixations. In everyday life in Austria, waking up after dawn to behold this ring of spectacular mountains above and piles of unlocked bicycles below, I found my idea of “capitalism with a human face”. Its attachment to reality is a complex topic, but irrelevant; my teenage mind had learned to stop worrying and love the small things, love the petite bourgeoisie. Apart from a brief exposure to the Rockies, I had never been in mountainous settings. I had only known the stifling humidity and mugginess of the Eastern half of the US, never crisp, cool air. The last time I had seen vestiges of daylight at 10 PM was in the “white nights” of Riga when I was four–a last-hurrah holiday in 1990 preceding the secession of the Baltic Republics.

In many ways, my quotidian walks up and down Maria-Theresien-Straße, ventures west on Anichstraße to the Universität Innsbruck cafeteria for our included lunch of schnitzel, and my hikes to Hungerburg (868 m) did more for my spiritual health than the whirlwind of train travel.

I returned to America in August calmer, thinner, fitter and happier, with very concrete–for once, not theoretical–expansion of horizons. I had forgotten a lot of Spanish grammar just in time for the AP course, but had a bit of German up my sleeve.

My literature teacher from the previous semester asked, dejectedly, “Where is the angry communist Balashov?” I had no answer for him; I was in good spirits, and it was a great time to be alive.


Predictions for 2015

It is challenging to make prognostications that are specific and testable, or, in the thinking of Karl Popper’s brand of empiricism, falsifiable. Many predictions are formulated such that they could be argued, post facto, to be true regardless of what actually happens.

I’ll try to avoid making those. At the same time, prognoses about complex phenomena like global economics and war are often, by their nature, vague and open to interpretation, and they consist of many parts. They are replete with statements about things that aren’t necessarily discrete or quantifiable. At best, they can be viewed as compositions, as systems of many interdependent propositions and variables which must be accepted or dismissed holistically. This is the insight of Thomas Kuhn, who found the Popperian view of how science evolves, through the testing of individual hypotheses, to be naive. He popularised the use of the word “paradigm”, a superset of a hypothesis. It may be that the apparent truth or falsehood of my predictions will depend on whether you buy into certain paradigms.

Without further ado:

1. The price of crude oil will rebound to US $80/bbl or more.

It’s not in the interest of any oil-producing nation to perpetuate the current freefall, which has led, at the time of this writing, to oil trading in the $50-$60/bbl range.

I’m not an industry analyst, but my impression is that the current slide is caused by a combination of:

  • Reduced short-term global demand.
  • Saudi Arabia breaking ranks with its OPEC cohorts and refusing to restrict output.
  • Optimism about the increasing production of the American domestic energy sector (leading to expectations of higher supplies).

I don’t think Saudi Arabia going rogue is going to last. Saudi Arabia can weather low prices better than many rival oil autocracies, but ultimately, these states have common interests. Someone will probably make a deal with the Kingdom to encourage it to return to cartel discipline. The Russian economy suffers particularly heavily from a fall in oil prices, as its state budget is premised almost entirely on high oil price assumptions, and the country exports almost nothing apart from energy and raw materials. It’s possible that Russia, despite not being an OPEC member state, will offer some carrots to get the Saudis to cut output.

As for the optimism about American domestic energy supplies, there’s definitely an underlying reality: undeniably, output has increased profoundly in the last few years, to the extent that it may be one of the most significant structural changes of the early 21st century. But I would wager that some of the corresponding market movements can be attributed to irrational exuberance and speculative trading, too; once the sizzling party cools down and the fast rave music gives way to slower ballads, cracks will emerge, in the form of concerns about the sustainability of yields (at certain levels of EROEI, or energy returned on energy invested) and the environmental impact of this boom. I don’t believe that short-term advances in hydraulic fracturing of shale change the big picture, which is that the EROEI of global hydrocarbon energy supplies is falling.

I hope that the current trend of increasing energy capture efficiency (and therefore EROEI) of photovoltaic cells and wind turbines, as well as ongoing research into hydrogen fuel cells, thorium reactors, and other alternative sources, will continue undistracted by short-term fossil fuel supply bumps. I’m not holding my breath for capital markets to get smart enough to emphasise long-term prospects over short-term speculative opportunities, though. I’ve always thought that energy and environmental issues are key examples of market failure, alongside healthcare.

2. Russia will continue to be mildly depressed.

Most of Russia’s present economic malaise and inflation (a 50% devaluation of the rouble) probably owes itself more to the crash in crude oil prices than to any effect of Western sanctions, in the same way that Russia’s stabilisation in the 2000s owed itself more to the rise in crude oil prices than to Putin’s much-touted economic policies.

So, I think any recovery is likely to be linked to the price of oil. As I expect the gains in crude to be modest (to ~$80/bbl or so, rather than $100+), I don’t expect Russia to see a dramatic reversal of its current fortunes.

3. The Ukraine conflict will stagnate, unresolved.

Neither the West nor Russia has, as an outside influence, the political commitment or the opportunity to drive the Ukraine conflict to any endgame. The only actor with an incentive to take decisive action and definitively consolidate Ukraine is the Kiev government, but it has neither the military capability nor the finances to do so.

What I expect is that the current status quo will coalesce into a kind of uneasy détente, not wholly unlike the aftermath of the Georgian-Russian conflict of 2008, where South Ossetia and Abkhazia became nominally independent and de facto Russian-controlled–and, at the very least, ungovernable for Georgia. This kind of situation has a tendency to ossify over time, unless some dramatic turn of events suddenly reanimates active conflict.

That is to say, the Russian-adhering Eastern Ukraine will continue to be a semi-ungovernable patchwork for the central government. Professional ‘rebels’ from the likes of the Donetsk People’s Republic will continue to carve out careers for themselves, and be variously, depending on what’s going on, nudged by half-hearted Ukrainian army offensives, officially disavowed by Russia, or unofficially utilised by Russia as a tool to coerce Ukraine and the West (with the implicit threat of whipping up pro-Russian nationalist agitation among such groups). More generally, Eastern Ukraine will continue to run nominally as part of Ukrainian territory, but with its fronds tending increasingly toward the Russian sun in terms of economic and political linkage.

This situation will, over time, reach some sort of equilibrium where nobody really “wins”, while everybody claims to have won and trades accusations of banditry. The world will forget about Ukraine and move on; Russian economic relations with the West will slowly and subtly renormalise, particularly with Europe, though the remnants of sanctions will continue to be used by the US to pressure Russia, while the instability of the East–with the implied influence that Russia has over it–will be used by Russia as leverage against the West. Both the US and Russia benefit from having an outside enemy to point to; this is especially true of the Putin regime, which will blame US sanctions for ongoing internal malaise it cannot fix.

4. US-Iran relations will improve and the US will ease sanctions against Iran.

While some recent improvements in US-Iran relations can probably be attributed to the ascendance of more moderate post-Ahmadinejad forces, the more significant incentive for collaboration involves the common enemy of ISIS. It is likely that there will be some quiet, underplayed horse-trading and compromise regarding Iran’s nuclear programme in the service of this awkward alliance.

5. The ISIS-driven partition of Iraq will solidify.

With increasing US commitment to airstrikes against ISIS, which is also perceived as a regional threat by nearly all incumbent regimes, it seems likely that ISIS military gains will be arrested. On the other hand, another US ground war to decisively drive ISIS out of the territories they currently occupy does not seem politically possible.

This will probably lead to a hardening of existing boundaries between ISIS-controlled and non-ISIS-controlled Iraq. Small flare-ups will occur between ISIS and local governments who have more enthusiasm for driving them out decisively than the US does, such as the battle-hardened Peshmerga of Iraqi Kurdistan, though they will not succeed in doing so for lack of supplies and firepower. Other locals will probably come to an accommodation with ISIS, though this is something neither side will publicise because doing so is a losing proposition PR-wise (failure to defeat ISIS on one side, failure to comprehensively consolidate an Islamic caliphate on the other side).

6. The Syrian Civil War will continue without resolution or settlement.

This war will continue and inflict even more destruction upon an already destroyed and war-weary country.

In all likelihood, however, it will come to be increasingly simplified down to a pro vs. anti-ISIS conflict, especially since the American view is that any enemy of hard-line Islamists is its friend. Depending on the overall military successes of ISIS in Iraq and Syria, this may even lead to a delicate, understated thawing or rapprochement with the Assad regime, though neither side will publicise this fact because it’s a lose-lose PR proposition.

7. Chinese (PRC) growth will continue to plateau.

As China undergoes industrialisation without a clear next step beyond its specialisation in manufacturing, its GDP growth will continue to plateau. Combined with a possible pop in its overheated urban real estate market, this may lead to a mild recession.

8. China will become an increasingly attractive IT outsourcing destination over India.

As a result of its growth and increasing (though very unequally distributed) affluence, India has become too expensive for many Western firms’ tastes, and they will increasingly look to China to fill the gap, particularly in low-skill business process outsourcing and back-office functions. But this in itself is unlikely to be China’s ticket out of a looming existential crisis of macroeconomic purpose.

9. The West African Ebola epidemic will peak and fade from public view.

The West African Ebola epidemic is currently projected to peak in April or May of 2015 by the WHO. After that, it will probably fade from public view entirely, punctuated by the occasional incident of an infected individual making it across Western borders.

10. Pope Francis will face political challenges from conservative hardliners.

It is unlikely that Pope Francis’ spate of rapid and theologically radical liberal reforms will be allowed to continue forever by the conservatives within the Vatican inner circles and conservative bishops elsewhere. Such conflicts have already flared up, and thus far, the Pontiff has prevailed.

Canon law has no procedure to impeach or recall a pope, so I don’t think his job is in danger. His influence over certain conservative constituencies and their local leadership will probably be eroded, though, by increasingly conspicuous acts of defiance.

11. The global appreciation of the US Dollar will end.

The increased dollar demand is most likely due to:

  • The Federal Reserve nearing the end of its most recent round of Quantitative Easing (QE), which is expected to pull back the money supply.
  • The strong performance of the S&P 500 segment of the US stock market.
  • Exuberance about American energy supplies and a perceived domestic recovery in jobs and real estate.
  • Increased demand for hard reserve currency in places with high inflation, such as Russia and many of the former Soviet republics (whose currencies’ fates are, in many instances, closely linked to the rouble).

I think this exuberant climate will cool by the end of 2015Q2 and that former-USSR inflation will stabilise in tandem with rebounding oil prices. The stock market is thought to be in need of a correction phase. These things will send the dollar back down to historical norms.


What Armenians should know about life in America

For most Armenian immigrants to the US, it is quite likely that American life offers a higher material standard of living and access to vastly greater opportunities. If that weren’t broadly true, people wouldn’t emigrate there. It is no less true that the relative poverty and ongoing demographic implosion of Armenia can be crushing, and that daily life there for a great many people is closer to the problems of basic survival than the life of most Americans. That’s fairly self-evident.

Still, from the year and a half that I’ve spent in Armenia, if I had a 10 dram coin for every time I’ve heard from native Armenians that America is the promised land of high dollars and low worries, or that I’ve heard righteously indignant gripes about stingy relatives in the US who “only” send a few hundred dollars in monthly remittance, or “you’re American, what’s a hundred dollars to you?”, or (my favourite) “isn’t everything cheaper in America?”, I’d be a billionaire (in dollars). I could get my own faux Sphinx, or Egyptian pyramid, or something on a highway on the outskirts of Yerevan.

As a first-generation immigrant to the US, and an experienced traveler to the so-called “developing world”, I’d like to address some of the myths held by Armenians, be it that life in the US is convenient and comfortable, or that their US-side relatives, with the pocket change they send back to tatik, have crossed to the dark side of unconscionable avarice and forgotten the meaning of family.

I’m not passing judgement on whether anyone should emigrate. However, if you’re going to try to emigrate, it’s better to do it with some realism about what to expect, and some appreciation for the complexity encountered in trying to make meaningful comparisons between life there and life back home. Like many people, Armenians have a tendency to compare the worst aspects of life in Armenia with the best aspects of life in America, or elsewhere abroad. Yes, being young, healthy and rich in America is better than being poor, sick and aging in Armenia, no doubt. But it’s not an accurate or reasonable comparison.

Mind you, this is not some myopic apologia full of First World Problems, to tell native Armenians how hard life is for us in one of the world’s richest countries. I’m not here to share the horror of a broken espresso machine or Banana Republic being out of khakis (with apologies to George Carlin). You can put the world’s tiniest violin away.

The US has always done an excellent job of marketing itself as the promised land, and the global reach of its mass-cultural and media exports to support that narrative is unrivaled. So, I don’t really need to tell you what is potentially good about it. Instead, I’ll speak to the more ambiguous notes.

Jobs and immigration

If you’re not earning well in Armenia because you don’t have any specialised skills or education, you’re going to face the same problem anywhere else; wages for unskilled labour are low everywhere.

As hundreds of thousands of Armenian migrant workers in Russia know, a blue-collar labourer can still fetch superior (relative to Armenia) wages during boom times, particularly in construction and industrial labour. However, this is not a realistic vehicle for emigration to the US for two reasons:

  • The criteria for immigration: the US has plenty of unskilled manual labourers, both domestic and immigrant (legal and illegal). It has absolutely no incentive to let any more of them in, and US employers can’t sponsor a pair of hands and a strong back for a visa.
  • The sheer expense of life in the US, where a great deal of costs, such as healthcare, are borne directly by the consumer. Elsewhere, some of these are covered at least rudimentarily by state infrastructure or held down by free-market pricing. In other words, I would contend that earning little in the US by US standards can be riskier and more problematic than earning little by Russian or Armenian standards in Russia or Armenia. We’ll return to that topic later.

Russia has a (somewhat) common market with Armenia and is politically motivated to offer an inclusive attitude to CIS guest workers. This is not so for the US.

Some Armenians I’ve met seem to be under the impression that if they just had an axperes in California that could hook them up with a job at his auto shop, the visa and immigration issues will solve themselves. This is completely false. While knowing people and having connections is useful anywhere, overall the US economy, along with its immigration system, is fairly transparent and follows the law strictly. Moreover, it is important to remember that the US is very large, very diverse, and not run by Armenians–who represent a tiny and insignificant minority. Nobody there can just “arrange” it all for you unless you meet official immigration criteria. It doesn’t work like that.

If you’re a highly skilled, educated professional, presumably you have or will attempt to solve this problem in advance of emigrating, and you might be successful. Even so, there are a few things to keep in mind. Armenians specialise highly in urbane intellectualism and their diaspora has a high proportion of academics. This should not be confused with a high availability of academic jobs, especially in the liberal arts and humanities. Funding for that sort of thing is quite slim by developed-world standards. While professors are generally compensated well in the US, the number of available tenure-track positions, or even full-time instructorships, is small and shrinking.

The cost-saving approach of American universities is not so much to pay professors little as it is to eliminate their positions and replace them with part-time graduate teaching assistants and part-time instructors–expendable armies of people who are paid measly wages (effectively below minimum wage) a la carte (by the course), with no job security or benefits.

Competition, and the harsh office politics that come with it, is formidable, because even in this sweatshop atmosphere, graduate students need teaching hours. Your formal credentials may not carry over to the American system, which means you’ll be disadvantaged in competing with 26-year-old second-year graduate students for the opportunity to provide essentially free labour to this system.

In the large public university that I attended, easily 80% of my courses were taught by graduate students not much older than I was. Needless to say, teaching quality is not a major preoccupation of the American university system.

That’s all to say that I wouldn’t count on an academic route in. It’s not impossible; in fact, it’s the one my parents took. But they had to step down from top-tier Moscow professorships to repeat graduate school for six years, then beat very low job market and employer sponsorship odds to stay. I wouldn’t have bet on us. We just got lucky. A lot of it probably owes itself to the unique boom times of the nineties.

Objectively speaking, the people in the best position to emigrate to the US, with employer sponsorship, are probably highly-skilled professionals in the private sector. Even so, the quotas on H1B visas are highly restrictive. In the tech sector, at least, there is widespread agreement that the part of the US immigration system that deals with legal immigration of high-skill professionals is badly in need of reform.

The costs of life

It’s fairly obvious that the absolute cost of living in the US, and in Western countries in general, is higher than in Armenia. Native Armenians recognise this in abstract, but many seem to lack the perspective to apply that knowledge concretely to any given situation.

Let’s say an Armenian family of three or four gets by in Yerevan, somehow, on roughly US$900/mo. They hear that their cousin and his wife in Fresno pull in a household income of US$95,000 (roughly US$7900/mo). So, they figure, “How much more expensive could it be? Twice? Three times? How stingy do you have to be to only send back $300 every month while we’re working twelve hours a day, six days per week, to make ends meet with $900?” It seems to be human nature to figure: “If I had that kind of money, I’d find a way. How expensive could it be?”

That’s because, at some level, most people think that if they can get by on $900, then anything over that–or at least half of anything over that–is, in some shape or form, “extra” or “disposable”, even taking into account the theoretical recognition of higher living costs. The higher costs don’t concretely register, mostly because people don’t know what they are.

Cost of living differential is a fluid concept that presents in many different forms. First, there are the exact same or substantially similar things, but which simply cost more in, say, Fresno–maybe a little more, maybe a lot more. Then, there are the things that are of the same category but are qualitatively incommensurable in some way, so it’s difficult to meaningfully compare their price. There are also things that are free or close to free in Armenia but cost money in the States. As well, there are substantial differences in anticipated risk and statistical incidence of certain expenses.

The point is, one cannot simply compare prices straight across. It is important to holistically understand the overall differences in the available lifestyle options, as well as the categories of expenditures that are structurally, legally or culturally particular to the respective locales. In many cases, they are very specific to the exact socioeconomic terms of a given place. In other words, you’re not going to buy the same things in the US that you do in Yerevan; they’ll neither be the same products or services, nor identical categories of outlays. It’s not intelligible to conceptualise a monthly budget in Fresno in everyday Yerevan terms, though that doesn’t stop many people from doing exactly that.

Taxes

I don’t know who seeded this meme that Americans pay low taxes. I’ve heard it repeated a number of times in casual conversations with Armenians.

Average American salary and wage employees pay a tax rate that is typical of the developed world, and is fairly comparable, in the aggregate, to the 35-45% that most Western Europeans pay, though, of course, that varies by country. A rigorous comparative analysis of international taxation could easily consume a whole dissertation in itself, and I don’t have data handy. In some respects, US taxes are indeed lower than European ones: marginal personal income tax rates are unquestionably much lower. On the other hand, marginal US corporate income tax rates are among the highest in the developed world.

The main difference is that the US is, by “First World” standards, very jurisdictionally fragmented, owing to its unique fixation with federalism. The aggregate individual tax burden in the US doesn’t come from one national-level tax authority. There are several different kinds of federal taxes, all assessed at different rates, limits and progressive brackets. There are state-level income taxes. Some counties (administrative divisions of states) have special income taxes. Some cities, such as New York City, have municipal income taxes. In addition to that, there are sales taxes (varying by county and state), property taxes (a very substantial source of local government funding), excise taxes, and gasoline taxes, and for business owners and self-employed individuals, a variety of other kinds of taxes. Most of these are assessed in a complex cascade by a zoo of separate agencies at the state, federal and municipal level, all with their own rules, forms, courts, collection practices, and so on. It’s all quite Byzantine.

This fragmentation makes it difficult to add up one’s aggregate tax burden, but if one does, one will find that it’s not substantially out of line with Western Europe. The difference is that the Western European taxpayer receives an abundance of social benefits and subsidies for it. In the US, there is no free healthcare, free university education, or free housing. From that point of view, the US is one of the most expensive jurisdictions in the “First World”; you have to pay both sides!

It’s virtually impossible to compute exactly how much our hypothetical Fresno family earning $95,000 will pay in taxes, because nobody pays the same amount in taxes. It greatly depends on whether they file jointly or separately, whether they have children, and if so, how many, and many other factors. California state income tax is deductible from the federal taxable income, and so on, but not from the roughly 7.65% that they contribute toward federal payroll taxes (Social Security & Medicare). Without running all those numbers, together with the particulars of their withholding allowances, it’s not easy to arrive at a figure. Nevertheless, a very rough estimation for a single individual making $95,000 and filing as single in California puts total tax at around $33,000, or roughly 35%. This, of course, is just state and federal taxes, and does not include local sales tax (which, in California, varies from 7.5% to 10%), nor any possible property taxes, vehicle registration taxes, and a variety of other taxes that, when added up, will certainly push the tax burden past 40%.

But even with our 35% figure, you can expect take-home income from $95,000 to drop to $62,000 or so, which is not so much $7900/mo as $5100/mo. Not a small difference! It is certainly not outside the realm of possibility for one to pay close to half one’s income in taxes. For a variety of reasons related to the intricacies of tax codes at a variety of levels, most Americans don’t end up paying quite that much, but a lot of the professional middle class comes close. The point is: don’t jump to conclusions from sensational, eye-popping gross figures.
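
To make the arithmetic concrete, here is a minimal sketch in Python of the back-of-the-envelope calculation above, assuming a single flat 35% effective rate (the rough estimate from the preceding paragraphs); the function name and the flat-rate simplification are mine, purely for illustration, and this is obviously not how actual tax liability is computed:

    # Rough illustration of the take-home arithmetic above; not tax advice.
    # The $95,000 gross figure and the ~35% effective rate come from the text;
    # treating the burden as a single flat rate is a simplifying assumption.
    def rough_take_home(gross_annual, effective_rate=0.35):
        net_annual = gross_annual * (1 - effective_rate)
        return net_annual, net_annual / 12

    net_year, net_month = rough_take_home(95_000)
    print(f"gross: ${95_000 / 12:,.0f}/mo; net: about ${net_month:,.0f}/mo (${net_year:,.0f}/yr)")
    # gross: $7,917/mo; net: about $5,146/mo ($61,750/yr)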

Housing, real estate and rents

According to the US Census Bureau, median gross rent in 2013 was US$905. That may not seem like so much, but that’s a median across the country as a whole.

However, the US is a very large and decentralised country, and there are plenty of inexpensive rural areas, as well as blighted post-industrial places, where housing is cheap. That doesn’t mean you want to live there. The jobs are fewer and the incomes are lower, too.

All in all, I think a little Googling will persuade one that housing in the kinds of places where most Armenians would want to live and seem to agglomerate, e.g. southern California, is a matter of at least $1200/mo, and very possibly closer to $2000/mo or beyond.

Armenians, like other ex-Soviet people, are in a globally unique position of benefiting from the privatisation of real estate after the collapse of the USSR. Simply, if you had an apartment at the time the USSR dissolved, you got to keep it; it was gifted to you as private property. It became a source of wealth on which many people rely. It is common for Armenians to own the home in which they live, even if they are likely to be living there with extended family. The fact that most of them don’t have a mortgage or rent to pay explains how they can get by on such low wages.

The fact is, if you have an apartment in Yerevan, you’ve got a roof. It may not be a good roof, but it’s a roof. You can rent it out to other people, as many do to supplement their monthly income. It may not be much, but it’s something. Other Soviet people benefited from privatisation much more; people fortunate enough to own Moscow apartments genuinely came to sit on some wealth, both from rental and liquidation.

Armenians are sometimes under the impression that lots of Americans own houses, too. This is a misapprehension; very few Americans own homes free and clear. Lots of Americans have purchased financed houses, usually on 15 or 30-year mortgage loan terms. While the interest rates are low by present-day Armenian standards, the post-recessionary credit environment has contracted and it has become harder to get a mortgage.

The point is that almost all Americans pay rent or a mortgage in order to have a roof. It’s usually their biggest expense, and frequently a large proportion of their income. The processes of civil law operate quite expediently; if you don’t pay, you will be evicted, or your home will be repossessed by the bank and auctioned off. This is something that many Armenians are unaccustomed to considering, since a good many of them live in apartments inherited through post-Soviet privatisation and/or through family.

Credit and debt is a way of life in America, especially for big-ticket items. Few homes or cars are bought in cash. That may seem like a good thing–it makes things more affordable and at more realistic rates than in Armenia. However, as with any leverage, prices rose to reflect the widespread availability of credit (a consequence that monetary stimulus policy relies upon), which means most people need credit to buy things that they could not conceivably afford in cash. For quite some time now, credit has not been merely a tool to make it easier to afford some things, but rather the only means of obtaining them at all for average-earning people. That means that unless you become wealthy by American standards, you might be able to buy a house, and if you do, you’ll be in enormous debt for what is, by most people’s standards, a close-enough approximation of forever. You can sell a house to get out of that obligation, but clearing what you owe the bank is your problem and your risk. The recent housing recession should serve to remind that housing is not such a dependable store of wealth.

Many would say that against the background of the state of the Armenian economy, it is an enviable luxury to even be in a position to contemplate optimal stores of wealth or weigh the downsides of credit, as a member of a sizable middle class. I don’t disagree. The point of this article is not to convince you that, from a material point of view, life in the US is as bad or worse than life in Armenia. Instead, I want to put emphasis on things that are typically underemphasised by starry-eyed aspiring emigrants who imagine life abroad to be a panacea. Life in wealthier countries brings problems and stresses of its own.

Healthcare

Among Armenians’ chief complaints is the high cost of healthcare relative to local incomes, and understandably so–it’s high. Nevertheless, Armenia has a largely free market in healthcare; payments are direct from patient to medical provider, and so prices are constrained by what the market will directly bear. Healthcare is a price-inelastic service, and so the prices the market will bear–grudgingly–are quite high. Still, there is a quantitative limit to the madness.

I don’t think any immigrant to the US can be fully prepared for the disaster that is the healthcare system. Simply put, it has neither the virtues of prices held down by supply and demand, nor the virtues of a state-operated or single-payer socialised healthcare model that predominates in Western Europe and elsewhere in the developed world. Instead, the US has managed to achieve the worst of all worlds: all downsides, no upside. Astounding inflation in the market is caused by the distortions of an intermediate bureaucracy of private insurers, rendering it ipso facto unaffordable without insurance. At the same time, the system is highly inefficient, having the highest proportion of medical expenditures going to nonmedical purposes (e.g. administration, marketing, legal costs) in the developed world.

Most nontrivial medical procedures and hospitalisations cost tens of thousands of dollars. A serious illness will incur hundreds of thousands of dollars in medical billing, and very serious illnesses likely well into the millions. If you’re fortunate enough to have a health insurance policy through your employer, which is the normal mechanism for obtaining health insurance in the US, these fees are billed to your insurer, who will do anything they can to get out of paying the claim. The Affordable Care Act (“Obamacare”) reforms have seemingly put an end to the most audaciously hostile aspects of this (huge departments in every insurance company dedicated to finding “preexisting conditions” on the basis of which to reject your claim, leaving you responsible for the fees), but it remains to be seen whether these reforms will survive future legal challenges by conservative (i.e. pro-big business) political forces.

Plenty of other caveats that can lead to your insurance claim being rejected remain, the main one at this point being that insurers typically will cover what they consider to be medically necessary, which is often the bare minimum of indicated treatment. Anything beyond that is “elective” (a.k.a. superfluous and unnecessary) and they are not obligated to cover it. It’s true that any system with cost controls imposed by a third party, such as state-operated socialised healthcare systems in Europe, must limit what medical providers will do for a patient in a given scenario. However, I think you’ll find that in the US, the resulting mixture is especially perverse; very often, unnecessary procedures and tests (which pad doctors’ pockets) are easily approved, while procedures that would be deemed medically essential elsewhere are treated as elective and denied.

Regardless of whether you have insurance, you will share in these inflated costs. The sharing doesn’t end with high premiums (which have gotten significantly higher since the ACA came about); it also takes the form of a maze of other mechanisms insurers use to defray some of their financial risk directly onto you (but on the cost basis of the severe inflation they themselves helped create): deductibles, copays and coinsurance ratios, which vary considerably with the type of medical service or procedure.
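
To make these cost-sharing mechanics concrete, here is a hypothetical illustration of how a copay, a deductible and coinsurance can stack up on a single claim. Every figure below is invented for the example; real policy terms vary enormously:

    # Hypothetical illustration of insurance cost-sharing on one claim.
    # All numbers here are invented for the example; real policies vary widely.
    bill = 10_000          # amount billed for the procedure
    copay = 50             # flat fee paid at the time of service
    deductible = 2_000     # you pay the full cost up to this amount first
    coinsurance = 0.20     # after the deductible, you still pay this share

    patient_share = copay + deductible + coinsurance * (bill - deductible)
    print(f"Patient pays ${patient_share:,.0f} of a ${bill:,} bill")
    # Patient pays $3,650 of a $10,000 bill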

Simply having insurance by itself means little; it all depends on what kind of insurance. The vast majority of Americans do have private health insurance of some sort, yet 60% is a well-accepted figure for the percentage of personal bankruptcies attributable to medical bills. 

Analysis of the myriad of pathologies of the American healthcare system would take a lengthy book. Once upon a time, insurance was–as most insurance is in any other sphere–for low-probability, high-magnitude catastrophic events only. Over time, it seems to have evolved into a payment gateway for all medical procedures, period, including the most routine care. Along with a panoply of other factors, such as the absence of heavyhanded government price controls or regulation of the business side of healthcare, this enabled enormous inflation, since insurance spreads the cost around.

The economic aspect important for the potential immigrant to realise is that the only people in the US who have access to good healthcare they can afford are:

  • Affluent white-collar professionals working for large private companies that provide generous nonsalary benefits;
  • Employees in the government sector (state or federal);
  • Those fortunate enough to work in unionised professions (far less common in the US than in many other developed countries, and varies highly by region) who happen to have negotiated good benefits;
  • People over 65 years of age, who receive Medicare (ironically, very functional socialised healthcare);
  • Very low-income people who meet stringent criteria for Medicaid;
  • Military veterans;
  • Miscellaneous wealthy or semi-wealthy individuals.

Most of the American public does not fit into these categories, and this includes many people you would be moved to otherwise describe as middle class. Statistically speaking, medical bills will probably be a problem for you, too.

Yes, I know that the comparatively “inexpensive” medicine in Armenia is equally unaffordable to a population whose median monthly wage is just under US$300. Nevertheless, I would contend that there’s something to be said for owing hundreds or thousands of dollars rather than a quarter million–a very real concern in a place where a short ambulance ride can cost nearly US $2000. American healthcare bills are not in sums you can somehow borrow from family or friends. I suspect one reason the TV series Breaking Bad caught on the way it did is that its premise resonated with a lot of people. It’s not such a stretch for most Americans to imagine that running a methamphetamine drug empire would be the only way for an ordinary schoolteacher to afford treatment of his lung cancer and ensure his family’s financial security.

Suburbia, layout, and transportation

The US economy is highly decentralised and has an excellent roadway network for distribution. Outside of a few older cities such as New York, it is mostly laid out in a monotonous, low-density, suburban architectural pattern. The vast majority of the American population lives in a landscape consisting of freestanding houses, roadways, and utilitarian shopping areas with large parking lots. Because the density is low, driving distances are typically long. Americans spend more time commuting to work than almost anyone on Earth. In many places, there are no footpaths, and in many other places where they do exist, they are a strict formality, with nary a pedestrian in sight. One could not be blamed for coming away with the impression that cars are first-class objects on the American terra firma, while pedestrians are distinctly second.

You might think I’m describing the countryside and rural areas, but no. In fact, this is all true even in places that nominally market themselves as “cities”. All of the major growing “cities” of the Sun Belt, such as Dallas, Phoenix, Houston, Los Angeles, Atlanta and Miami, which developed in the postwar automobile boom, are in fact little more than vast agglomerations of bland suburban tract, separated by long stretches of highway. Sometimes they’re wrapped around a small, central urban core, but in most cases that’s just a small financial district, or a depressed post-industrial nexus of economic decay. Either way, most of the population doesn’t live there, and these cores are neither a central feature of American aspirations nor of life in this country.

This has certain upsides. In theory, it allows one to feel as if one lives in the peaceful retreat of the countryside while having the amenities of a nearby conurbation. This might seem like a refreshing prospect to someone who has lived amidst the constant din and crowds of Yerevan their entire life.

The decentralised infrastructure and penetrating roadway network required to support this also gives rise to equalisation and homogeneity. The material level of life in Evergreen, Alabama isn’t really that different from life in New York City. Sure, it’s definitely different, but the differences are mostly a matter of small nuances. Broadly, you have access to essentially the same supply chains of commoditised groceries, fuel, medicine and services in the countryside as you do in a megalopolis. This is rather different from most of the rest of the world, including Armenia, where everyone knows there’s Yerevan, and there’s not-Yerevan, and the developmental chasm between them can sometimes seem almost measurable in centuries, depending on where exactly you go. Much of the US is very homogeneous in both practical and aesthetic terms. You’ll have to come to New York for world-class neurosurgery, but you can drive on good roads, go bowling, and go to the same supermarket and buy the same Cheetos pretty much anywhere, even in the tiniest hamlet.

So, people are always surprised when I lean on this aspect of the US critically, as a primary reason for low quality of life. It’s a topic that receives so little attention it may as well be categorised as “a problem that has no name”. The reality, however, is that the impact of this way of designing the world goes far beyond mere aesthetics–which, by the way, are terrible; the suburban landscape is unrivaled in its monotony and depressing blandness. The problem is more insidious, though; the way that we build our settlements has deep implications for our civic life, our communities, our patterns of interaction, the relationships we form, the company we keep, and ultimately, the purpose and meaning we find in our lives.

Much of the rest of the world takes for granted architectural principles of how to build life-affirming human settlements. These principles evolved over thousands of years, and it’s no accident that so many cultures reached the same conclusions. Urban Europeans, and indeed Armenians, are accustomed to vertical growth, mixed-use development (shops on first floor, apartments above), sidewalks, plazas, public squares and street cafes. These are the fixtures amidst which your halcyon childhood days played out, where you walked hand in hand with your first love, where you met friends for coffee, and hopped the train to work. It’s the corner with the pastry shop, it’s the supermarket down the street, and the bench in between.

Few people can prepare themselves for the degree to which Americans have, in the last half-century or so, taken this entire corpus of human experience and thrown it completely into the trash, with the exception of a few older cities–not the places where the majority of Americans live. What has replaced it is a surreal moonscape. For those accustomed to the traditional urban civilisation, the primary question in America is: where do I go? What do I do? Looking around leads to an intangible but intense realisation of emptiness. Suburbia is both a cause and an effect of the destruction of civic and community life in America: there’s increasingly little to come home to, and vanishingly little to go out to. This has real effects. Your children will have nowhere to play, as there is no courtyard full of friends; they will depend on your willingness to drive them (sometimes quite far) for prearranged “play dates”. You will not take leisurely strolls to admire the scenery, for there is neither admirable scenery nor anywhere to stroll. It’s likely that you won’t even know your neighbours. You certainly can’t venture downstairs for lettuce or milk; strict zoning codes have ensured that only residential structures can be built where you live, and you’ll have to drive a few miles to reach the commercial zone, where the grocery stores are.

The author and social critic James Howard Kunstler does a good job of anatomising the essential problems of suburbia in this TED talk. I don’t necessarily share all of his ideological accents, but I think he’s summed up the general problem very nicely. The thing you have to realise when watching that video is that he’s not talking about a particular kind of neighbourhood; he’s talking about the overwhelming majority of the US, including places others are accustomed to thinking of as cities. Dallas, for instance, is not a city by global standards. Much of it should probably be reclassified as a rural area.

In this atmosphere, the almighty car–still a matter of social status, prestige and perceived convenience in Armenia–falls from grace. It’s no longer a luxurious way to thumb your nose at the teeming masses. You are one of the teeming masses. A lot of your energy and money will go toward the purchase and upkeep of a rapidly depreciating hunk of metal in which you will spend a significant fraction of your life, all alone. It’s only cool when most people don’t have one; when four wheels have replaced two feet, it’s just a needlessly expensive way to traverse pointlessly large distances of identical-looking road for unclear reasons.

It should go without saying that public transportation doesn’t exist in the US–at least, not by European standards. Unless you live in New York City, or well within the centres of Chicago or one or two other cities, you’ll need a car, and you’ll be spending a lot of time in it. Guaranteed.

For a culture as warm and sociable as that of Armenians, this is all anathema. Truly committed people can maintain friendships and connections across the most hostile landscapes, but so much of how we meet and relate to others is inextricably bound up in the convenience and opportunity afforded by how we are situated. Physical layout cannot, by itself, either make one friends or hinder those who are determined to have them anyway. But it does matter. A lot.

Therefore, I’m moved to say that one of the most important things about the US is that it’s lonely. They built it that way.

Legal system

No list of American peculiarities can be complete without due mention of its legal system.

The US is not the only country to have a Byzantine legal code or statutes both complex and numerous, although both of those things are certainly true. Compared to the rest of the developed world, the US criminal justice system is harsh and metes out severe sentences. It often seems to have more punitive than rehabilitative aims, and this is, in general, politically rewarded, fed by intense “get tough on crime” populism. It’s aggravated by what might be described as a “prison-industrial complex” of legislative machinery and lobbying–one that extends to and encapsulates vested law enforcement institutions. It is a major contributor to the result that the US has the highest incarceration rate in the world. Not the developed world. The whole world.

In addition, there are some eccentricities of Anglo-American common law that people coming from a CIS legal system are sure to find bewildering. One is the enormous amount of personal discretion that is afforded to prosecutors and other individual actors in a criminal case. In many other countries, there are statutes that clearly outline the process a state prosecutor must follow. In the US, a lot more depends on the whims of the concrete personalities involved.

For instance, over 90% of American criminal cases are settled by “plea bargaining”, a technique where the accused and the prosecutor make a deal to exchange the accused’s guilty plea for reduced charges. That is to say, the prosecution is allowed to literally change the accusation being made from the same body of evidence: “Instead of felony armed robbery, we are going to charge you with a misdemeanour of ‘disturbing the peace’.”  The official argument for this practice is that it saves public resources by avoiding the expense and complexity of a trial.

Naturally, this flexibility leads to inflated charges that the prosecution cannot support in a real trial. The prosecution gambles that the defendant will be intimidated by the worst-case possibility of conviction for grandiose charges, and will settle for the certainty of conviction on lesser charges over the uncertainty of conviction on bigger ones. The prosecution gets a higher conviction rate, which is politically beneficial. The effect is class-discriminatory: well-heeled defendants who can afford good legal representation, post a bail-bond and have time on their hands will fight the prosecution, while poorer defendants will fold and settle.

By American lights, this passes for justice, which leads to a larger and more important point: the US criminal justice system is overwhelmingly preoccupied with procedure and process, often at the expense of justice. This myopia is the product of a technocratic bureaucracy. It’s summed up nicely in a New Yorker article called “The Caging of America”:

William J. Stuntz, a professor at Harvard Law School who died shortly before his masterwork, “The Collapse of American Criminal Justice,” was published, last fall, is the most forceful advocate for the view that the scandal of our prisons derives from the Enlightenment-era, “procedural” nature of American justice. He runs through the immediate causes of the incarceration epidemic: the growth of post-Rockefeller drug laws, which punished minor drug offenses with major prison time; “zero tolerance” policing, which added to the group; mandatory-sentencing laws, which prevented judges from exercising judgment. But his search for the ultimate cause leads deeper, all the way to the Bill of Rights. In a society where Constitution worship is still a requisite on right and left alike, Stuntz startlingly suggests that the Bill of Rights is a terrible document with which to start a justice system—much inferior to the exactly contemporary French Declaration of the Rights of Man, which Jefferson, he points out, may have helped shape while his protégé Madison was writing ours.

The trouble with the Bill of Rights, he argues, is that it emphasizes process and procedure rather than principles. The Declaration of the Rights of Man says, Be just! The Bill of Rights says, Be fair! Instead of announcing general principles—no one should be accused of something that wasn’t a crime when he did it; cruel punishments are always wrong; the goal of justice is, above all, that justice be done—it talks procedurally. You can’t search someone without a reason; you can’t accuse him without allowing him to see the evidence; and so on. This emphasis, Stuntz thinks, has led to the current mess, where accused criminals get laboriously articulated protection against procedural errors and no protection at all against outrageous and obvious violations of simple justice. You can get off if the cops looked in the wrong car with the wrong warrant when they found your joint, but you have no recourse if owning the joint gets you locked up for life. You may be spared the death penalty if you can show a problem with your appointed defender, but it is much harder if there is merely enormous accumulated evidence that you weren’t guilty in the first place and the jury got it wrong. 

Plenty of immigrants inadvertently get into legal trouble in the US because they fail to realise how much the system is focused on the correctness of process rather than the holistic propriety–and indeed, the humanity–of the outcome. Such concerns are far too “interpretive” to enter into anyone’s mind. The contemporary incarnation of the peculiar mindset of Anglo-American jurisprudence leads to the question, “Were the rights of all parties, as enumerated by the law, protected?”, eclipsing the much larger issue: “Is this outcome compatible with justice?”

For instance, I have known Soviet immigrants who got into serious legal trouble because of family and child custody-related disputes. In one case, a mother was prosecuted for felony kidnapping for taking her child from a husband who himself had fled with the child, and who obviously had malevolent motives and committed extensive fraud. Common sense should say that she had understandable reasons for doing that. The American system says she interfered with a custody order (that he had supposedly obtained somewhere) and thus committed kidnapping.

In the graduate family housing community where I grew up, I seem to have been one of the few ex-Soviet children whose parents somehow avoided charges of child neglect. All of our parents were busy graduate students who worked all day and all night, and none of them knew that in the State of Indiana, it is illegal to leave a child under 12 years old home alone. In our home countries, schoolchildren of single-digit age routinely commuted to and from school alone. We were in a safe, enclosed community with an abundance of constant adult supervision from stay-at-home mothers; common sense should say that this is not child neglect, in that the parents’ intentions were not negligent, and us latchkey kids were not, in fact, being neglected. The authorities were not concerned with that; leaving a child under 12 home alone meets the statutory definition of child neglect, therefore it’s child neglect–end of story. There was no allowance for mitigating factors.

I don’t think this rigidity should be confused with effectiveness or precision. It’s not the same as a Germanic fastidiousness for law and order or attention to detail. German courts still consider whether a verdict and a sentence are consistent with the spirit or intent of the law. One of the key functions of a judge is to interpret that in a given scenario. American authorities are zealously preoccupied with the much narrower concerns of definition, execution and enforcement.

This shows up in civil law as well. The US is probably the most litigious society on the planet, leading to rather mechanistic approaches to the assignment of liability and risk. You’ll find yourself signing a lot of disclaimers, releases and waivers of liability for things that offend all sentient reason, and you’ll find yourself needing to take peculiar and cumbersome steps to ensure that you yourself are held harmless and indemnified in a variety of scenarios you would not have customarily assumed yourself to carry liability for. McDonald’s Corporation really can be held liable if you spill hot coffee on yourself, and maybe that’s good, but if you employ a mechanic and he spills hot coffee on himself while on the job, you might be held liable. Is that as strange as it sounds?

Social safety net and state services (or lack thereof)

For paying essentially similar tax rates to Western Europeans, Americans do not receive many state benefits, nor are they able to rely on a substantial social safety net.

On the whole, the American body politic is cholerically opposed to perceived “socialism”. This means that extensive taxpayer-funded social benefits are an impossible political sell in one of the richest countries on the globe.

This means that in many cases, you pay (at least) double; you both pay relatively high taxes, and pay out of your own pocket for things you wouldn’t have to pay for in Western Europe and many other developed countries. Healthcare is the most obvious and dire example, as I discussed above, but the same is largely true of child care, housing, university education, and, despite the existence of Social Security, pensions.

There do exist unemployment benefits, disability benefits and government income assistance schemes for the very poor. The problem is rather that these programs provide very little compared to their counterparts in most developed countries, and are very limited in scope.

For instance, Armenians sometimes scoff at Americans for delaying having children until their thirties. Notwithstanding cultural causes, it’s worth noting that the expense of child care is expected to be shouldered by working parents themselves. The government does not provide preschool or daycare at any large scale, as governments in many other developed countries do. These services are available in private form, but are quite expensive, often so expensive as to substantially offset the income realised from liberating a parent to work. Quality cannot be counted upon; many of the cheaper daycare centres are little more than holding pens or warehouses for children, performing little to no useful pedagogical function.

Private university can cost tens of thousands of dollars annually in tuition. Public universities can be considerably cheaper, but the price is still well into the five figures once total cost of attendance, including boarding and meals, is considered. Some scholarships and financial aid from the universities are available, but not enough to realistically provide most students a way to afford attendance. The actual way most American university students afford university is by going into enormous debt, in the form of semi-government-sponsored student loans. It’s not uncommon to pay these loans back for the rest of one’s life, and they are not dischargeable in bankruptcy. All in all, American students owe $1.2 trillion in student loans, and the average debt is $26,000.

My home state of Georgia is rare in that it offers a rather novel form of state scholarship for state residents to attend public university in Georgia. The HOPE scholarship waives tuition if one qualifies on the basis of high school marks. How is it funded? Through state lottery ticket sales. It’s an upward income redistribution scheme; most buyers of lottery tickets are relatively poor and uneducated, and most recipients of the HOPE scholarship are middle-class kids from relatively affluent households. Because HOPE is awarded on the basis of academic performance, income is not a factor. This is a barometer of what is politically possible in America: upward income redistribution is okay, but to even imagine that the state itself could fund tuition directly? That would be socialism!

Federal housing aid for the very poor does exist, in the form of so-called Section 8 housing. But Section 8 housing is not somewhere you’d want to live unless you like gunshots and heroin needles.

Social Security was created in the 1930s to provide income security in old age. It is a mandatory government pension scheme. However, one would be crazy to rely on Social Security income alone in retirement; the payouts are quite low relative to contributions, which means that for most people, it’s not enough to live on. It should not be confused for an actual pension. Moreover, there are some unsettling questions about the long-term solvency of the fund.

Paid holiday in the US, where it exists at all, is typically limited to two weeks per year, and applies mainly to full-time, salaried employees. A great deal of employment in the US is in the form of part-time work, where the employer is not required to provide this or most other benefits. Increasingly, paid holiday is lumped together with paid sick leave into a single allotment.

Bottom line: for their ~40%, the Europeans get more. A lot more.

Individualism

The Anglo-American cultural heritage is uniquely individualistic.

This may be a welcome respite to Armenian denizens who are weary from a lifetime of collective social responsibilities and living for the concerns of others, and who may be eager for some privacy and freedom from gossip and judgement.

My experience has been that the other side of this can lead to a lot of culture shock. With due recognition to the fact that the US is diverse and has many subcultures, including very close-knit ones, it’s fair to say that the American ethos is decidedly more self-centred. The prevailing cultural expectation is that most people will take care of themselves and keep to themselves, and avoid being a burden to others in any way.

As I said above, there’s a lot to appreciate about this. But it also means that the concern you are accustomed to feeling from others for your well-being will fall off sharply. You can’t just take for granted that you can go ask your neighbour for a favour without a second thought; if you don’t know them well, it would be unseemly. In my personal experience, some Americans have even been known to respond in a hostile fashion to strangers knocking on their door. “Trespassing” and “privacy” are terms that get thrown around often.

I’m not saying that Americans are uncaring or unaffectionate people, by the way. It’s hard to make that generalisation about over 300 million people. Some of them are very caring. I’m just saying that in Armenia, if you are found to have signs of a meningioma and go in for an MRI scan, you might expect eight to ten friends and relatives to take time off work to show up with you and weep in anticipation in the waiting room. You will not find that in the US, and you’ll feel a sharp crash and withdrawal.

By and large, you are expected to take care of your own personal business in America and to “manage” your emotions and not allow them to interfere with your work. The inability of many human beings to actually meet this standard might explain why such a large share of the American population is on some sort of psychotropic medication, e.g. antidepressants. If you need the Yerevan standard of personal involvement from others, the Anglo-American culture is about the worst for that.

Speaking of drugs–and I feel this is quite germane to the issue of “individualism” and social psychology, which is why I put it in this section: drugs are a huge and extremely pervasive social problem, and you’re bound to collide with that reality sooner or later, directly or indirectly. A substantial proportion of the GDP depends in some way either on the production, distribution and consumption of drugs, or on enforcing draconian laws against them. There’s quite a lot of what one might call industrial-scale militarisation on both sides. Like any war, it’s damaging even to the victors, there is enormous collateral damage to civilian bystanders, and it’s hard to tell who the real villains are. By Armenian standards, the US can be criminal, gritty and dangerous in unexpected ways.

More interestingly, perhaps, the world of legal drugs bleeds easily into the illegal, as evidenced by widespread illegal abuse of prescription painkillers and the entirely legal overprescription of psychoactive medications such as amphetamines. Americans are probably the most psychoactively medicated people on the planet. The whole legal-illegal distinction is a nebulous, foggy continuum in a place with so much regulatory capture and other corruption driven by mega-pharmaceutical shysterism.

Conclusion

As I tell every Yerevan taxi driver who lyricises America, “like everything else, it’s got its pluses and minuses”.

The US is a good place for the incorrigibly entrepreneurial and the well-paid, and in either case, the young. It’s a good place to be if you’re in a well-remunerated profession that is complementary to machine intelligence and other emergent trends indicative of the future of employment in “post-industrial” economies. It still offers a dynamic business climate–something that is as much a function of culture as of regulation and economics. It’s an intriguingly diverse multicultural “melting pot” where just about anyone can find a social group of likeminded people, which owes much to both its size and its history as a nation of immigrants. If the more collectivist psychology of the East is your vexation, the strong current of individualism and independence in American culture would probably be an ideal antidote. For certain kinds of people, the US has much to recommend it.

However, I hope I have tempered that with some sober realities about the challenges of everyday life. The US lacks many of the socially stabilising factors and policy objectives of Western European countries. If you’re looking for a calm, moderate life and are allergic to extremes, I suggest you set your sights upon another OECD country to romanticise. Either way, I would pause and take a minute before reflexively deeming a visiting American to be enviably rich and happy.

There’s no paradise anywhere.


Journalism, epistemology and conspiracy theories

I have an acquaintance who has tried numerous times to persuade me that Osama bin Laden was not killed in a commando raid in Abbottabad in 2011, but actually died of natural causes back in 2001. Just look at the photos of bin Laden in the press from the last ten years on WhatReallyHappened.com! They are “obviously” doctored!

We all know that one guy, or several. They’re devotees of their favourite “what the government doesn’t want you to know” web site. When you tell them that such claims, while interesting, suffer from a dire lack of peer review and corroboration from credible sources, that just amps them up even more: “I don’t need the megacorporate disinformation machine to know what’s true. I think independently. Free your mind! Look at the photos and judge for yourself; could the World Trade Center towers really have collapsed due to fire?”

There’s no arguing with conspiracy theorists. The conversation is complicated by the fact that established media certainly have, in their history, disseminated their share of outright lies, or, far more often, omissions and skewed narratives. Mainstream media often disseminate Official Truth in any country and political system, and, in more democratic societies, this is sometimes done under the guise of hard-hitting investigational journalism and putative independence. Yet, anyone who’s grown beyond the infantile thumb-sucking stage of critical thinking understands that the idea of media as a professionally neutral conveyor of consistently objective truth is, at a minimum, complicated and caveat-ridden, and perhaps even slightly risible. Furthermore, amidst the noise on WhatReallyHappened.com and tenc.net there are, in bits and pieces, some kernels of truth.

Such a polemic cannot be made intelligible by arguing about “the facts”, as people are wont to do with conspiracy theorists. This is a trap. What does the idea of “the facts” even mean? To my mind, this issue goes back to the very nature of our knowledge about the external world, and is, at heart, a philosophical issue.

In a sense, it’s a fundamentally defeatist stance, because if we’re going to interrogate the pillars of our construction of “the facts”, we have to be very honest about the limits of what we can justifiably claim to know. It’s the only reasonable thing to do. However, our loony friends aren’t going to reciprocate our concessions; you may not be completely sure about anything (because you cannot reasonably be, as I’ll discuss below), but the guy writing at TheNewReichstagFire.com/the-big-lie-always-works-time-after-time/ about how 757s don’t just smash into the Pentagon like that? He’s sure. Very sure. Your vague, slightly noncommittal scepticism is no match for his positive assertions. And yet, there are reasons why it’s more justifiable to believe The New York Times account than his.

For those who slept through their Philosophy 101 course (or an introductory course in scientific methodology), let’s revisit a well-beaten horse. There are two types of reasoning recognised in logical argumentation: deductive reasoning and inductive reasoning.

Deductive reasoning is the kind used to reach conclusions that follow with certainty from their premises, according to ironclad rules of logic. It is used in self-contained systems of well-defined rules, such as mathematics. The kinds of proofs you used to do in geometry class are deductive in nature, because the conclusion follows necessarily from the premises.

The simplest illustration of a deductive claim is a syllogism such as:

  1. All men are yellow.
  2. Alex is a man.
  3. Therefore, Alex is yellow.

Alex’s yellowness follows from the premises (the first two points). It cannot be otherwise. Given the premises that he’s a man and that all men are yellow, he must be yellow.

Note that we haven’t considered the question of whether all men are, in fact, yellow. That is not essential for the argument to be deductively valid. Maybe not all men are yellow. Maybe Alex is not a man (though, I’m pretty sure I am). The claim is simply that, given that all men are yellow and that Alex is a man, he must be yellow.
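For anyone who prefers symbols, the same syllogism can be rendered in first-order logic (my notation, nothing more): it is just universal instantiation followed by modus ponens, and the inference is valid whatever the actual colour of men.

    \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Yellow}(x)\bigr),\quad \mathrm{Man}(\mathrm{alex}) \;\vdash\; \mathrm{Yellow}(\mathrm{alex})

Whether the argument is also sound, that is, whether the premises are actually true, is a separate question from whether it is valid.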

Deductive logic has relatively narrow applications in the grand universe of human endeavours. It’s usable in bounded systems that satisfy something like the closed-world assumption, most notably in mathematics, but also, in a more applied dimension, in finite deterministic systems such as digital electronics.

Inductive reasoning is the other kind of reasoning. Inductive claims are probabilistic in nature. A strong inductive claim is a “good bet”, not a guaranteed or certain conclusion. An example of an inductive claim would be:

  1. The Sun has risen every day.
  2. Tomorrow is a new day.
  3. Therefore, the Sun will rise tomorrow.

There’s no guarantee the Sun will rise tomorrow merely on the basis of the fact that it has done so in the past. It could explode overnight. You never know. But it’s a pretty good bet that it will rise tomorrow; it’s had a pretty consistent track record of doing that. Of course, to flesh out this claim realistically requires quite a few more premises about what we know to be true of the Sun, quite apart from the fact that it rises. But even if we add all the scientific knowledge in the world, there’s still no logical guarantee that the Sun will, in fact, rise tomorrow. Maybe it won’t. But it probably will. And that’s the essence of inductive reasoning.
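If you want to put a number on the bet (a classical illustration on my part, not a justification Hume would accept), Laplace’s rule of succession says that if an event has occurred on each of n past occasions, and you start from a uniform prior over its underlying frequency, the probability that it occurs on the next occasion is:

    P(\text{the Sun rises on day } n+1 \mid \text{it rose on days } 1, \dots, n) = \frac{n+1}{n+2}

Feed it a few billion consecutive sunrises and the probability crowds up against 1 without ever reaching it, which is exactly the character of an inductive claim.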

Pretty much all human knowledge is based on inductive claims. Pretty much anything humans say to each other, ever, is based on a giant cascade of inductive claims. There’s not much that we know about the outside world that could be said to be deductively true. Deductive claims live in artificial, self-contained universes of our own construction. That’s why it’s a bit silly when a person arguing with another about who really shot John F. Kennedy says: “Your logic is terrible!” The other party’s logic is probably not wrong–at least, not in the sense of deductive logic. Most intelligent people are pretty good at basic deductive logic. The disagreement is generally about the truth or falsehood of the premises, or about their evidentiary relationship to the conclusion being advanced. Lee Harvey Oswald may have been on the sixth floor of the Texas School Book Depository with a rifle, but how can you be sure he shot Kennedy?

Even direct human sensory experience can be called into some doubt: just because you’ve seen it with your own eyes doesn’t mean it’s true. The very idea of perceived physical reality as being “true” has been the subject of ongoing philosophical contention. It relies on a lot of metaphysical assumptions, such as that causes resemble their effects (just because you feel like you’re sitting in this chair and it feels very “chair-like” doesn’t mean the underlying object–if it even exists–is essentially “chair-like”). 

Worse yet, inductive logic is not deductively valid. The Scottish philosopher David Hume famously pointed out that the justification for inductive logic uses circular reasoning, because the only real justification for inductive reasoning is that inductive reasoning has worked in the past–itself an inductive claim. “Induction, because induction” isn’t very persuasive, and it certainly isn’t what most people would intuitively call “logical”.

But, this wouldn’t be a very interesting post if I ended it on a note of “so, you really don’t know anything at all, period”. Let’s skip over the metaphysical issues and assume, among other things, that our direct sensory experience of reality is mostly accurate, in the sense that the underlying reality meaningfully resembles our apprehension of it through our five senses.

So, back to journalism. When you open The New York Times, do you really know that anything you see is true at all? Have you ever even been physically present, in a position to directly observe any occurrence that the Times has reported on? I’m not sure that I have. Most people haven’t either. And even if you have, it’s probably something that happens a few times in a lifetime, unless you’re the President.

The metro section says there was a fire on 125th St. It’s got photos of fire engines and a menacing blaze. Were you there? Did you see it? Probably not. Maybe it’s all fake. Maybe the whole story is a fake. Maybe there was no fire.

Maybe. But it’s probably true. It’s true because you can’t think of a compelling reason for the Times to make something like that up, a matter of fact seemingly without any ideological valence or political significance. It’s true because other articles on similar events in the past have been supported by the testimony of people you know. It’s true because the same, or a substantially similar, report is also carried by other local newspapers, television stations, and online sources, and an elaborate conspiracy among them all to convincingly report a fake fire is unparsimonious. It’s true because the Times would not be considered a credible news source if it just made stuff up all the time.
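To see why that stack of “it’s true because…” justifications adds up to a good bet, here is a toy Bayesian sketch. Every number in it is invented purely for illustration; the point is only how quickly independent corroboration compounds.

    # A toy Bayesian sketch of why independent corroboration matters.
    # Every number here is invented purely for illustration.

    def posterior_fire(prior, reliabilities):
        """Posterior probability that the fire happened, given that every
        listed outlet reported it. Naively assumes the outlets report
        independently, and that an outlet with reliability r reports a
        real fire with probability r and a nonexistent one with
        probability 1 - r."""
        p_reports_if_true = 1.0
        p_reports_if_false = 1.0
        for r in reliabilities:
            p_reports_if_true *= r
            p_reports_if_false *= 1.0 - r
        numerator = p_reports_if_true * prior
        return numerator / (numerator + p_reports_if_false * (1.0 - prior))

    # Even with a sceptical 50/50 prior, three independent outlets that are
    # each right 90% of the time push the posterior to roughly 0.999.
    print(posterior_fire(prior=0.5, reliabilities=[0.9, 0.9, 0.9]))

Three outlets independently wrong, or lying in concert, is vastly less likely than one real fire, which is roughly all “unparsimonious” means here.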

And yet, all of those are inductive claims. The last premise is particularly ludicrous, as it is circular: the Times is credible because it’s credible. Certainly, it could be that all of these different sources of information that you’re cross-referencing and integrating elaborately conspired to report on a fire that never happened, motivated by reasons unknown to us. It doesn’t pass Occam’s Razor, but it certainly wouldn’t be the first time they’ve reported something that was subsequently shown to be completely false; Gulf of Tonkin incident, anybody? And yet, when I say “shown”, was it shown to you? No, it was “shown” by other sources that you believe to be credible, because you’ve experienced them as having credibility in the past.

Well, that’s very inductive of you. And none of it is deductively valid: you can’t prove any of it. Your justifications are just other inductive claims, things you reasonably believe to have a high probability of being true but are in no position to directly verify. Indeed, it’s safe to say that ninety-nine percent of human knowledge–which we utilise with great conviction every day–is that way: for all our cosmopolitanism, there’s not a whole lot within the range of an individual human’s direct observation and experience. You’ve probably never experimentally verified that hypothermia truly sets in at a core body temperature at or below 35°C. Have you ever personally tested whether antibiotics really work? You’ve probably taken them, but what if it was just your awesome immune system? Want to give yourself and your friends a menacing infection and really explore it with some rigour? Be sure there’s a control; someone’s got to get the placebos!

Most of the time, when people argue about matters of fact in the outside world, they are arguing about things they have never seen or touched; they just read about them somewhere. I confidently contend that you can safely land a Cirrus SR22 without flaps (given a longer runway), because a pilot told me so, citing the Cirrus operator’s manual. But hell if I’ve ever tried. I don’t even know how to deploy the flaps on a Cirrus. I don’t even know how to get it up in the air. Yet, I’m perfectly comfortable making this claim to you, and I’ll bet you $1000 it’s true.

Have you ever been to Niue? What if Niue doesn’t really exist? But surely it does! And yet, all the reasons you have for supposing so are based, foundationally, on other inductive claims, few or none of which you’ve directly verified. You simply suppose that atlases and maps generally reflect geographic fact, and that if there’s an entry in Wikipedia, it must be true. How gullible of you! (Incidentally, there have been plenty of fake entries on Wikipedia. What if the entry for the planet Jupiter is one of them? Is Jupiter a real thing?)

Even the everyday work of researchers and specialists in scientific fields takes its meaning from inductive assumptions that are widely held in their disciplines. These assumptions are sometimes invalidated and, throughout history, have undergone massive upheaval and revolution; a shift in high-level assumptions can call decades or centuries of scientific labour into question. (There is an entire field, the philosophy of science, that seeks to describe how all this actually happens at a theoretical level, and sometimes to prescribe how it ought to work.)

I think we’ve satisfactorily illustrated that nearly all knowledge about the external world is premised on a complex, interdependent house of cards, where the cards are inductive claims. Where this leaves us with regard to journalistic credibility is this:

It is reasonable to say that The New York Times occupies a higher place in an inductive “hierarchy of believability” than does WhatReallyHappened.com or something Alex Jones said. It is higher in that hierarchy because, on average, the things written in it correlate much more extensively and profoundly with other sources of (inductive) knowledge that we draw upon elsewhere. We believe that the Times is a more professional journalistic enterprise that goes to greater lengths to check its facts, review its sources and report the truth, because of other things that we believe to be true about journalistic organisations that do and don’t do that (the latter are seldom cited as a source of truth across a wide spectrum of human endeavours).
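To make that “hierarchy of believability” slightly more concrete (again with figures I have simply made up), compare how much a single report from a source with a strong track record should move your belief, versus one from a source that is right only half the time:

    # A toy comparison of two sources with different track records.
    # The figures are made up; the point is the shape of the update, not the values.

    def posterior_given_report(prior, accuracy):
        """Posterior that a claim is true, given that a single source with the
        stated accuracy asserts it (the same naive likelihood model as the
        fire example above)."""
        return (accuracy * prior) / (accuracy * prior + (1.0 - accuracy) * (1.0 - prior))

    prior = 0.1  # a claim you initially consider unlikely
    print(posterior_given_report(prior, accuracy=0.95))  # careful outlet: ~0.68
    print(posterior_given_report(prior, accuracy=0.50))  # coin-flip source: stays at 0.10

The hierarchy isn’t mystical; it’s just a prior about track records, and that prior is, of course, yet another inductive claim.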

None of this guarantees that any given thing written in the Times is true. I personally cannot say definitively that the World Trade Center towers did not collapse due to a controlled demolition. But Mr. What-They-Don’t-Want-You-To-Know can’t say that they did, either, and that’s what he’s missing. Like me, he wasn’t there, and he’s no more of a structural engineer than I am. What’s more, my secondhand source is better than his.

Provable? Absolutely not. It’s just a better bet.