Monday, October 27, 2014

Human Things

Disconcerting circumstances can sometimes yield serendipitous benefits – if only because one is so far out of one’s customary comfort zone. Thus I found myself, a native New Yorker, in a wretched little town in southern Utah sitting at a table across from a young woman who would soon break my heart worse than anyone had since I was 18 years old, and nonetheless happy to be living where I was and doing what I was doing there. I had taken a seasonal job as an archaeologist in one of the most archaeologically and environmentally rich places I know of, a place that I already frequented as a hiker and backpacker and general ogler about as often as I could manage. Between bites of small-town Utah diner fare, then, came that oft-asked and almost never satisfactorily answered question: “So why archaeology?”

Why indeed? Answers abound in both the academic and popular literature, ranging from the erudite (“because knowledge of the human past is inherently valuable and furthermore…”); to the practical (“so that we can learn from the triumphs and mistakes of the past”); to the more subjective (“because it’s interesting”); to the extremely subjective (“because it’s sacred”); to the selfish (“because I enjoy it”); to the patently frustrated and, also, somewhat plagiaristic (“because it’s there!”). Among the innumerable possible responses to this question, there are only two in which I have any enduring faith – one of which is tautological and the other paradoxical. But then that’s us, isn’t it?

To begin, a prompting question: why do people do crosswords? Or word-searches? Or Sudoku? Play video games, or board games, or word games and riddles…? Why, in short, do people so often, so thoroughly and passionately, devote themselves to creating and solving puzzles? To begin to understand the answer to this question, we have to delve down toward the deepest tendrils of the roots of humanity itself, to the tiniest gearwheels that precipitate our ticking, some several conceptual hours before the dawn of anatomically modern Homo sapiens sapiens.

So far as we know, the earliest obligate bipedal hominins, the australopithecines, lived their lives entirely within their respective ecological niches in Africa from about 4.4 to about 1.8 million years ago, spanning some two and a half million years. Despite occupying a variety of territories across the African continent, they show surprisingly little morphological change over that considerable timespan. Instead, their proliferation was more likely due to variability in the australopithecine lineage itself, with different species occupying different environments – there are at least seven distinct species of Australopithecus recognized in the literature, nine if you include habilis and rudolfensis. It’s only with the emergence of Homo erectus, the first truly definitive member of our own lineage, that one finds widespread migration and expansion over a tremendous range of habitats – stretching from sub-Saharan Africa to as far north as modern-day Siberia and about as far east as Asia goes – by a single hominin.

Not insignificantly, it was also in the epochs leading up to this time that global climatic conditions shifted from relatively stable and mild to positively chaotic. During the Miocene epoch, which extended up to about 5.3 million years ago, overall conditions were warm and wet without much in the way of seasonality. The succeeding Pliocene epoch, stretching from then to about 2.6 million years ago, was one of gradually drier and cooler conditions with generally increased seasonality. And then came the Pleistocene, an epoch in which factors like orbital forcing and tectonic uplift took a coalescent and colossal toll, when the intensity of both short-term seasonality and less predictable long-term variability became the climatic norm. The effect on local environments was dramatic: a narrow swath of land could go from jungle to desert and back again within a decade or two.

Creatures alive during this era therefore had to deal with an enormous amount of environmental diversity and complexity, leading, in the words of paleoanthropologist Richard Potts, to the “evolution of adaptability.” Simply put, it doesn’t pay to be a specialist in the face of near-constant change. What pays best in those circumstances is to become, in effect, a jack of all environments, one more adapted for adaptability itself than for any particular ecological architecture – to become, in other words, an obligate puzzle-solver.

Pleistocene archaeology underscores this point with considerable definitude. As mentioned, erectus managed to colonize so much of the Earth’s surface that the type specimen was found not in Africa but in the Indonesian province of East Java, with subsequent and more expansive discoveries emerging soon thereafter near Peking. But they weren’t different subspecies occupying different environments, as with the earlier australopithecines and modern analogues like grizzly and polar bears. They were all erectus, a species which had evolved to specialize in non-specialization and to learn, and learn, and learn in order to adapt to nearly any environmental circumstances. So that now, erectus’ descendants – the one writing this, say, or the one(s) reading it – have colonized almost every type of environment on the planet’s surface and are beginning to leer lasciviously at places beyond even that.

That’s why we love to solve puzzles. It’s because we’ve evolved to gain tremendous pleasure from doing so. The experience of pleasure associated with solving a puzzle is the result of the same neurological mechanisms we’ve inherited to signal when we’ve achieved what behavioral scientists call “fitness gains.” In a creature adapted for supreme adaptability, what this means is that we’ve been hard-wired by natural selection to enjoy solving trivial puzzles because those of our ancestors who managed to solve somewhat less trivial puzzles enjoyed greater fitness as a result. All of which is borne out in our anthropology in general, and our archaeology specifically, evidenced by our dizzying array of behavioral and technological innovations throughout the millennia.

In sum: why do we love to solve the puzzles of prehistory? Because the puzzles of prehistory designed us to.

The second, and more concise, of my preferred answers to the perennial question of my choice of vocation centers upon the life lessons derived from – and this phrase will soon strike the reader as ironic – a career in archaeology. It's based on one of the most salient observations I've ever made whilst studying and conducting archaeology, which is this: you can spend your entire professional career plowing as deeply and broadly as possible through human prehistory, and you know what you won't find any of? Professional careers. They're a totally modern phenomenon, along with gluten intolerance, road rage, and Facebook addiction.

Relatively speaking, for all but about the last few moments of the total anatomically modern human experience (that is: the last few hundred out of, by current estimates, as much as 190,000 years) our “jobs” consisted of “whatever has to be done to secure food and shelter for the foreseeable future.” According to ethnologists like Marshall Sahlins, this was usually achievable with about 20 person-hours of labor per calendar week. Presumably, then, the rest of our time was spent on craft specialization, or on chasing lovers, or hanging out, exploring, fighting, learning, playing.... That is to say: all the things we now think of as distractions from what’s really important in our lives. They used to constitute practically the entirety of our lives, at least if the Sahlins camp is to be believed, and again the archaeology bears this out: right about the time we start seeing full anatomical and behavioral modernity, we also start seeing art.

In his 1954 volume Motivation and Personality, famed psychologist A.H. Maslow conducted a thorough review of texts and publications by the recognized authorities on American psychology and discovered that the very concept of fun or “joy” simply never appears. Even pleasurable endeavors like sex and play are perceived as existing, at their core, purely for, respectively, procreation or unwinding before going back to the serious business of life – not, in other words, for their own ends in any way whatsoever. The pleasure associated with such activities is dismissed as adaptive motivation; i.e., if it didn't feel good we apparently wouldn't bother doing it, and would instead crumble into extinction either through overmuch toil or under-abundance of offspring. Thus, as a result of its articulation with our culture, the prevailing concept of our own behavioral processes is “overpragmatic, over-Puritan, and over purposeful,” and by such means we are missing out on “the other – and perhaps more important – half” of life.

Earlier in this same essay I argued for the pleasure associated with puzzle-solving as just such adaptive motivation, and I very much believe that, but said pleasure should not therefore be dismissed on this account in the way that Maslow charged the main body of American psychology with doing. As Voltaire’s Dr. Pangloss famously quipped, “the nose is [adapted to] fit spectacles, therefore we wear spectacles.” Well, no, the nose evolved for entirely other reasons; but that doesn’t mean we should stop wearing spectacles as a show of solidarity with better hypotheses about nasofacial adaptation. Similarly, setting aside, deprioritizing, or altogether missing out on the enormous variety of possible joys inherent to human life seems an abominable sin to commit in solidarity with a cultural mythos that holds pragmatism in the highest possible esteem – regardless of how pragmatically evolutionary history has designed us.

Maslow summed up his review by arguing that a great deal of our behavior is and, in all healthful regard, should be “unmotivated” by the bugbears of practicality and pragmatism. That joy and pleasure should be regarded as ends in themselves, human things in their own right rather than enticing but vagarious stimuli for drudgery, and at least as important as so-called serious struggles like professional careerism. And I agree; not merely because I try to live my own life that way, but because I see the material evidence of myriad such-lived lives every time I go to work. 




Monday, May 26, 2014

Dreadful Selection: How American Culture is Designed to Create Monsters



Fitness is a tricky thing. It’s an abstract term, like “quality” or “beauty,” that can’t be quantified and in fact doesn’t really exist outside of our ideas and concepts. Nevertheless, if held to a consistent standard, fitness can be an extremely useful part of the modeling strategies employed to solve puzzles about how physical and behavioral adaptations emerge. Fitness is simply the ability to survive and produce viable offspring. It’s not an inherent quality; it’s relative to the environment in which it is being considered. Thick, white fur imparts a tremendous amount of fitness in the Arctic, for example, but does exactly the opposite in the desert.

The principal operation of natural selection is fairly straightforward: those individuals who maximize their success in a given local environment will enjoy maximal fitness and are therefore the most likely to pass their genes on to the greatest number of offspring. If the local environment is cold and offers only a few fish for prey items, an individual with a thick coat and excellent fishing abilities will likely spawn the most viable offspring. If the local environment is temperate and contains plentiful resources, an individual who outcompetes his or her rivals for either the best resources or the most favorable attention from the opposite sex will likely spawn the most viable offspring. It’s a simple, elegant economic system that has yielded everything from slime molds to polar bears to peacocks – to ourselves.

But what if the local environment is full of assholes?

Because natural selection is economic in nature, principles of economics play key roles in determining how it applies to living organisms and ecosystems, and this is never more evident than in consideration of economic game theory. Given that there is always a cost [c] to everything you do, the benefits [b] you reap from whatever that thing is must always be higher than the cost [b>c] in order for that thing to be worth doing. If you spend 1,000 kcal hunting and you only succeed in capturing 500 kcal worth of prey, you’d better have a supplementary strategy or you won’t be at it for long. Where game theory comes into it is when organisms compete with each other. Take the classic Hawk-Dove game. Two foxes encounter a chicken, and there are two strategies available to each: the Hawk strategy is to be aggressive; the Dove strategy is to back off. If both players back off, nobody gets anything [b|b = 0] – obviously. If both players act aggressively, nobody gets anything then either; the chicken runs away while they’re fighting, and both opponents limp away with wounds [c|c = 1]. The ideal situation is for one player to be aggressive and the other one to be passive; the Hawk player gets the chicken, while the Dove player walks away unscathed and hopes to encounter his/her next chicken alone or at least opposite a weaker opponent [b = 1|c = 0].
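For the code-minded, here’s a minimal sketch of those payoffs (my own illustration, in Python, and of the simplified variant described above – in which two Doves both walk away empty-pawed – rather than the textbook version, where they split the resource):

# Payoffs for the Hawk-Dove game as described above, written as
# (my payoff, opponent's payoff): a win nets b = 1, a fight costs
# c = 1, and backing off costs nothing.
PAYOFFS = {
    ("hawk", "hawk"): (-1, -1),  # both fight; the chicken escapes; both get hurt
    ("hawk", "dove"): (1, 0),    # I get the chicken; you walk away unscathed
    ("dove", "hawk"): (0, 1),    # you get the chicken; I walk away unscathed
    ("dove", "dove"): (0, 0),    # we both back off; nobody eats
}

def play(me, opponent):
    """Return (my payoff, opponent's payoff) for one encounter."""
    return PAYOFFS[(me, opponent)]

for pair in PAYOFFS:
    print(pair, "->", play(*pair))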

You’ve seen this dozens of times in your life, I assure you.

An even more relevant classic for human society is the Prisoner’s Dilemma, although I prefer to explain it as the Hunter’s Dilemma (in order to keep things consistent). Two foxes encounter a porcupine. One fox cannot easily overtake a porcupine, because it can spin around and present a wall of quills faster than the fox can run around it; meanwhile, two foxes can overtake one much more easily because they can attack from opposite angles. Again, there are two available strategies for each player: Cooperate, which means work with the other fox and share the rewards; and Defect, which means act selfishly. If both players Defect, nobody gets anything because, again, an individual fox can’t easily kill a porcupine [b|b = 0]. If both players Cooperate, they kill the porcupine and each gets half [b|b = .5]. This would be great if “morality” were anything more than a human social convention, but natural selection favors maximization over compromise – it doesn’t care how you get there. So the real winner is the fox that Defects when the other player Cooperates, i.e., the one that convinces the other one to work together and then runs off with the entire porcupine without sharing any. As in the Hawk-Dove game, one player gets a [b] while the other one doesn’t, but in this case it’s a little more nefarious: because the other player participated in the kill and didn’t get any meat out of it, that player actually takes a hit [c] for the wasted effort [b = 1|c = -1]. That’s how you win at evolution.
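A minimal sketch (again mine, using the payoffs just described) makes the nefarious part explicit: whatever the other fox does, Defecting never pays worse than Cooperating, which is exactly why selection keeps rediscovering it:

# Hunter's Dilemma payoffs, written as (my payoff, other fox's payoff).
# Note the classic Prisoner's Dilemma ordering: 1 > 0.5 > 0 > -1.
PAYOFFS = {
    ("cooperate", "cooperate"): (0.5, 0.5),  # share the porcupine
    ("cooperate", "defect"): (-1, 1),        # I helped kill it; you ran off with it
    ("defect", "cooperate"): (1, -1),        # the reverse
    ("defect", "defect"): (0, 0),            # neither fox can take it alone
}

# Whatever the opponent plays, check which of my strategies pays more.
for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: PAYOFFS[(mine, theirs)][0])
    print(f"if the other fox plays {theirs}, my best response is {best}")
# Both lines print "defect" -- defection dominates.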

You’ve also seen this dozens of times in your life, I promise. It’s the operative function of both exploitation and theft, in the first place because others are doing one’s work and in the second because one is taking the product of others’ work without giving anything back. In fact you’ve taken part in this yourself, literally thousands of times, without even realizing it. The reason food is so cheap in the United States is because we don’t pay the actual cost – others plant it, pick it, and process it at near slave-labor pay rates, passing the savings on to you. Ditto gasoline, most consumer products, and basically all of modern technology. Hell, we don’t even pay the costs of disposing of our junk, at least not explicitly; a lot of it – especially the dangerous stuff – is exported at a relatively meager monetary cost as “externalities” to Third World countries, whose residents pay the actual costs (cancer, etc.). But this is all pretty well-known, especially to educated people, and it’s also just plain capital-e Economics. It has nothing to do with evolution. Or so you think.

Culture is another ambiguous thing, one that has proven notoriously tricky to define. But, like fitness, if held to a consistent standard it can be a useful convention for answering questions about human behavior, and the basis of that standard is that culture essentially means “social environment.” Whereas in a physical environment the constraints are entirely physical (e.g., in a pine forest you’re likely to find piñon nuts but unlikely to find jellyfish), in a social environment they’re entirely social (e.g., in a French city you’re likely to find people speaking French and unlikely to find street signs written in Swahili). Of course, any social animal occupies a local environment that contains both physical and social constraints – even bower birds adhere to local “customs” when constructing their elaborate displays. The point isn’t that either one ever exists by itself, but rather that both of them are relevant.

So now think again about natural selection, particularly as it applies to behavioral rather than to physical traits. Surely it must be the case with social animals that those individuals who prove themselves to be of greatest value to the society will enjoy the greatest fitness because everybody likes them so much. This is the line touted by both “for the good of the species” group-selection enthusiasts and advocates of human beings as the Moral Animal, and it’s also entirely wrong. Remember, natural selection doesn’t care whether you’re “good” or “bad” – i.e., whether or not others like you as an individual. It only cares how many viable offspring you’re able to produce, and this, in the human as in the non-human world, comes down to resources. The more resources you’ve got at your disposal, the more you can attract mates and the more you can invest in your offspring (whether directly or indirectly – more on that in a moment). And, as in the game theory examples above, the way to gain the biggest resource advantage is to reap the rewards of others’ hard work. To, in other words, screw people over.

Researchers from both within and outside the social sciences are viciously critical of such simplistic models of behavioral evolution and their applicability to human beings, and rightly so. The thought that natural selection would literally produce human beings who were experts at screwing others over is equivalent to the thought that natural selection would produce, say, chimpanzees that indiscriminately bully others and act like bastards. And yes, before you start to wonder, in the animal kingdom there are naturally-evolved safeguards against that sort of thing. According to the principles of frequency dependence and inclusive fitness, those offspring who receive the greatest amounts of care and doting (and food) will enjoy the greatest health and thus the greatest reproductive fitness in their turn – which, naturally, means the genes that code for “care” and “doting” will be passed along through them. Vicious, selfish, bullying assholes may do just fine among non-social animals, because all they’re expected to contribute are gametes, but among social animals it pays to be a good parent.[1]

And this is true of human animals as well – or at least it used to be. In modern society the worst sorts of people can still turn to either hired help or, depending on their economic status, the state to raise their children to reproductive age with relative ease. I know a 21-year-old stripper who, with the help of Food Stamps and a string of boyfriends, is doing her level best to raise her infant son to be a dispassionate alcoholic. On the other end of the economic spectrum, when I was a country club bartender I knew at least half a dozen upper-class scumbags who got rich through the immeasurable suffering of others, and who’d fathered something like 100 kids, both in and out of wedlock, between the lot of them.

I also know a lot of really, truly great people – generous, hard-working, intelligent, passionate, and moral – who have chosen not to have children because they think the modern world is too terrible a place right now to bring a child into. I love those particular friends, but I don’t think they realize that by withholding their gametes they’re contributing to the problem.

Our culture, simply put, is predicated on greed, and it encourages sociopathy on that account. We heap the greatest rewards on people who win at what is essentially a crooked game: screw the most people over and you’ll come out on top. You’ll become the CEO or the Chief of Resources or the Senator or the President or whatever. You’ll win. Free-market capitalism ensures that corner-cutting, back-stabbing, exporting costs and jobs, exploiting workers, raping the earth of resources in the most cost-effective ways, and outright lying (we call it “advertising”) and victimizing consumers will net you the biggest reward [b] while passing the costs [c] on to your competitors, your workers, your customers, the wilderness, foreigners, and/or future generations [b = 1|c = -1]. But you can’t blame free-market capitalism itself; that’s low-hanging fruit. The real culprit is culture. Ruthless maximization may be the way to succeed in a laissez-faire economy that exists in a vacuum, but in a human society there are supposed to be constraints, in the forms of taboos, rules, and laws, that prevent it from getting out of hand. There just aren’t any.

Picture if you will a society in which the principal explicit rule is:

If you lie, cheat, steal, or harm others for your own gain, you will be punished.

While the principal implicit rule is:

If you perfect the art of lying, cheating, stealing, and harming others for your own gain, you will be heaped with envy, riches, and privilege and your children will want for nothing.

That’s us. That’s the United States. We’ve evolved to act that way because we’re animals, a product of eons of natural selection, and over and above that we’ve created a society in which the people who commit “crimes” in a sloppy way get caught and punished – while the people who commit full-blown atrocities in an efficient way get fancy cars and laid. The effect of that on our gene pool is as straightforward as it is terrifying: whereas genes that code for things like caring and empathy are likely to proliferate in a population in which being a sociopath doesn’t confer differential advantages, the opposite is also likely to be true. And the result?
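(If you want to see how fast that can run away, here’s a toy replicator calculation – my own illustration, not a real population-genetics model – in which carriers of “defector” traits enjoy a modest, assumed 5% fitness edge:)

# Toy replicator dynamics: p is the frequency of "defector" types.
# Each generation, each type reproduces in proportion to its fitness.
W_DEFECT, W_CARE = 1.05, 1.00  # assumed: a modest 5% fitness edge

def next_freq(p):
    mean_fitness = p * W_DEFECT + (1 - p) * W_CARE
    return p * W_DEFECT / mean_fitness

p = 0.01  # defectors start at 1% of the population
for generation in range(301):
    if generation % 100 == 0:
        print(f"generation {generation}: {p:.1%} defectors")
    p = next_freq(p)
# Prints roughly 1%, 57%, 99%, then ~100% -- a rare trait becomes the norm.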

Have a look in the newspaper.

In a famous piece of research, anthropologist Pauline Wiessner noted a practice of “leveling the hunter” among foraging peoples like the Ju/’hoansi, by which sanctions are imposed on hunters to prevent them from exploiting their hunting success as a means to gain status. Richard Lee noted a similar leveling mechanism among the !Kung known as “insulting the meat” (literally telling a hunter that his catch is scrawny and awful and that he shouldn’t have bothered bringing it home at all, especially when it’s a particularly big and fatty kill). And, at least until the dreaded era of contact with the industrialized world, those and similar cultures thrived for tens of thousands of years. In our culture, the closest thing we have to a “leveling mechanism” is encouraging teenagers to think they’re ugly until they either a) buy every fashionable piece of clothing and beautifying product available on the market, or b) kill themselves. We do the same thing to adults, punishing the weak and sensitive through criticism and overwork, straight into impotence, addiction, and an early grave – and rewarding sociopaths at both ends of the economic spectrum with ease and plenty. We’re selectively breeding monsters. How long do you think that’s going to last?



[1] Authors like Barbara Smuts and Kristen Hawkes like to argue that males – who have the cheaper end of sexual reproduction, according to anisogamy, and who maximize their reproductive fitness more by quantity than by quality – practice good parenting not for the benefit of the kids so much as to impress females. That is to say: it’s not a parenting strategy but a mating strategy. Maybe so, maybe not, but for the purposes of this essay it’s a moot point because the end result is the same.






Friday, October 26, 2012

The "Coals to Newcastle" Model of Health and Medicine



The fount of vernacular springs eternal. Even in America, where most people don’t know where or what a “Newcastle” is (outside of brown ale in a yellow-labeled bottle), the term “coals to Newcastle” is still well-worn in many circles, particularly collegiate ones. For those who don’t already know: Newcastle upon Tyne had been England’s coal-exporting port and best-known coal-mining town since deep into the Middle Ages. To carry coals to Newcastle, then, is to do something that is rendered totally pointless by scale – similar to a drop of rain in the ocean, a grain of sand in a desert, or, for that matter, a vote in a presidential election (yes, I said it). There are two ways to employ this phrase: to say directly that something is superfluous because it is already vastly outweighed by an overabundance of the same thing, e.g., Newcastle already has enough damn coal; and to say indirectly that a small gesture is pointless because it is vastly outweighed by other, more powerful factors, e.g., drinking a “Diet” Coke with your extra-large lardburger and grease-sticks. It is with the second that I am concerned in this essay.

Like most people of a scientific bent, I have long suspected that herbal medicines, alternative remedies, and so-called healing foods – their very concepts, in fact – were nearly, if not actually, completely hogwash. I have never taken a dose of ginseng, echinacea, kava kava, rhodiola, ashwagandha, cat’s claw, St. John’s wort, or chamomile and had it do anything other than impart a nasty taste (except that last one, which can be pretty tasty). Nor have I ever experienced any surges in immunological function from mega-doses of vitamin C, any calming effects from magnesium, any beautifying effects from bio-available silicon, or any noticeable healing acceleration from colloidal silver. And as for the currently insanely popular fermented teas, I stand alongside the growing body of researchers who believe those things are potentially more harmful than beneficial.

And yet – and yet… Until recently in the West, and to this day elsewhere, nearly all medicines were taken straight out of the gardens, fields, and forests in which people dwelled. Practitioners in the field of ethnobotany have done a terrific job of tallying and studying the use of plant- and animal-derived medicines and entheogens that continues in some cultures to this very day, and to them and their cultural informants we owe – with allowances for ethical slipperiness in a lot of early cases – a considerable debt for a whole host of modern medicines. According to a 2007 article by the U.S. National Cancer Institute’s David Newman and others, around 70 percent of all new medicines introduced in the United States in the last 25 years were derived from “natural products.” Among the most widely-known and beloved examples are aspirin and quinine, which both come from tree bark; cocaine and its synthetic derivative novocaine, which trace back to a South American shrub; opiate painkillers like codeine and morphine, which come from poppies; and hormonal birth control, the original biochemical source of which was wild Mexican yams.

Granted, in cases of nature-derived cures such as these, the emphasis is on the word derived. The name of the game in modern medicine is to identify, isolate, synthesize, and amplify the specific molecule or molecules in the host sample in order to make it stronger, more chemically durable, more easily manufactured and marketed, and more easily regulated. While there’s no telling how much willow tree bark it would take to clear up a headache – well, I assume someone has done the experiment, but I certainly haven’t – it usually only takes a pair of aspirin to do the trick, each of which is no larger than the nail on my pinky. And therein lies the root of my total lack of faith in alternative medicine, for most Americans, most of the time.

To reiterate: I have never experienced a noticeable effect from any of the alternative remedies and supplements mentioned above, nor any others to speak of. Bear that word “noticeable” in mind. Meanwhile I, like most people in this country, smoked cigarettes for many years, drank and continue to occasionally drink alcohol, wake up in the morning with a fishbowl-sized mug of coffee, and, while I eat as cleanly and healthfully as I can manage, as a matter of course I regularly ingest immeasurable amounts of preservatives, nutrient additives, chemical flavor enhancers, industrial food-like substances, pesticides, herbicides, and so on. I also breathe city air and drink city water, and God only knows what dark forces lurk therein. And that’s me, somebody who eats what most Americans would consider a positively Spartan diet and exercises nearly every day; I can scarcely imagine the total amounts of who-knows-what that make their way into the bodies of average Americans. Against such odds, how could something as subtle as an active compound in its organic, non-isolated, non-magnified form possibly have any noticeable effects?

Note again the first three complicating factors on my list: smoking, booze, and coffee; the primary vehicles for, respectively, nicotine, alcohol, and caffeine. Smoking is falling out of fashion in the US, thankfully, and while alcohol is still very much in fashion that doesn’t mean everyone drinks it regularly and/or to the point of intoxication. But coffee is pretty ubiquitous. Tobacco, by the way, is a plant, as are the bushes from which we get coffee “beans” (actually the dried seeds of berries), and alcohol is produced by yeast fermenting cereal or fruit sugars in an oxygen-starved environment. Moreover, all three of them were originally employed for either ritual or very occasional use, at least in most cases, and were neither manufactured nor consumed in anywhere near the industrially titanic concentrations in which they’re now available (with the one exception of South American tobacco variants, which were, at one time, way stronger than what we grow in Virginia). Add into that refined sugars, saturated fats, preservatives and additives and enhancers and whatever else, and you’ve got the Newcastle to which ginseng is a bucket of coals.

While I’ve mused along this line for many years, I’d never seen it experimentally tested – until now. The point was driven home with some force in a recent federal study headed up by Brown University’s Rena Wing and including a remarkable supporting cast. The study was intended to measure the effects of a change of diet and exercise in obese adults with type 2 diabetes. Which seems like a foregone conclusion, when you think of it, and for that reason little more than an expensive and totally pointless waste of federal dollars – of course diet and exercise will help! Earlier studies had already shown that proper diet and exercise significantly lower blood sugar levels, blood pressure, and cholesterol. At this point I pass the mic to Gina Kolata of The New York Times:

The study randomly assigned 5,145 overweight or obese people with Type 2 diabetes to either a rigorous diet and exercise regimen or to sessions in which they got general health information. The diet involved 1,200 to 1,500 calories a day for those weighing less than 250 pounds and 1,500 to 1,800 calories a day for those weighing more. The exercise program was at least 175 minutes a week of moderate exercise.

But 11 years after the study began, researchers concluded it was futile to continue [because] the two groups had nearly identical rates of heart attacks, strokes and cardiovascular deaths.   

Www... whuh?

Noted blogger and paleoanthropologist John Hawks picked this up in an online commentary, writing, “One expert quoted in the article thinks that the large effects of smoking cessation, statin drugs and blood pressure medications may swamp the small effect of diet and exercise.” To which he added this piquant reaction: “Swamp the small effect of diet and exercise on type 2 diabetes in obese patients? Wow.” (Original italics)

So there you have it. None of the researchers interviewed for the Times article had anything to say about the daily intake of [use your imagination] by Americans, particularly obese ones, throughout their entire lives, although that may come up in the subsequent research articles the team is readying to publish. But they don’t have to. The study involved limiting patients to either 1,200-1,500 or 1,500-1,800 calories a day, but I can find no information about the sources of those calories. Contrary to popular belief – for some reason – a calorie is not a discrete thing, like a vitamin or a nutrient; a calorie is a unit of energy, like a joule or a kilowatt-hour, specifically the amount of energy required to raise the temperature of one gram of water by one degree Celsius at standard atmospheric pressure. Focusing on calories is a handy dietary starting-point, but it leaves out quite a lot – not least the fact that literally anything from which your body can derive energy therefore contains calories, including toothpaste, cardboard, and candle wax. Exactly what does that have to say about the quality of one’s diet?
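(To put numbers on it – my own back-of-the-envelope illustration – dietary “Calories” are actually kilocalories, and a kilocalorie is pure energy; it says nothing about where the energy came from:)

# Dietary "Calories" are kilocalories; a (small) calorie is the energy
# needed to warm 1 gram of water by 1 degree Celsius, about 4.184 joules.
CAL_PER_KCAL = 1000
JOULES_PER_CAL = 4.184

daily_kcal = 1500  # midpoint of the study's stricter diet arm
joules = daily_kcal * CAL_PER_KCAL * JOULES_PER_CAL
print(f"{daily_kcal} kcal/day = {joules / 1e6:.1f} megajoules/day")
# 1500 kcal/day = 6.3 megajoules/day -- whether from kale or candle wax.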

Very little, I’m afraid. Author and dietary guru Michael Pollan likes to lament our nutrient-oriented view of diet, as if getting the Recommended Daily Intake of this and that were the best way to achieve optimal health. (It isn’t – the Inuit eat almost nothing but animal fats, the San eat mostly vegetables and tubers, and the French drink wine and olive oil in almost equal measure, and all three populations boast considerably better average health than even those Americans who only shop at Whole Foods.) If there is any general rule of dietary health, it is derived not from consideration of diets as constellations of nutrients but as behavioral adaptations, and it would sound approximately like this: eat what you, by your specifically ethnic as well as generally human background, have evolved to require for maximum nutritional efficiency, whatever the hell that happens to be. It probably involves fresh vegetables. It probably doesn’t involve drugs like caffeine and alcohol. It definitely doesn’t involve high-fructose corn syrup, monosodium glutamate, potassium benzoate, Yellow #5, or DDT. And it surely isn’t helpful to augment that by breathing carbon monoxide every time you step outside or drinking chlorine every time you pour yourself a glass of water.

All of which converges on one central point: there are simply too many elements going into our bodies every hour of every day to even keep track of, no matter how healthfully we try to live, and the bulk of them speak with much louder voices than do the dried leaves in a small bag of Yogi Tea. This is America, where subtlety is simply not an option. We live large, work large, play large, drive large cars and trucks, create and endure large amounts of stress, eat large and drink large and swallow large handfuls of magnified large-molecule drugs every time we get so much as a hangnail. And we are all sick, tired, and dying, in a very large way, in the face of which herbal and other "traditional" remedies are indeed as useless as carrying coals to Newcastle. You're better off carrying a backpack away from all this. 



Sunday, August 19, 2012

Personae Non Grata



As a teenager, I once wrote an entire essay arguing that the indefinite article a/an is the first sign that you’re dealing with an imbecile. What, for example, is the difference between someone who says “I am Catholic” and someone who says “I am a Catholic”? Or someone who says “I am liberal” and someone who says “I am a liberal”? The difference is that the first person is giving one (of possibly very many) descriptors of him- or herself, while the second is identifying entirely with a social role. The same can be said of someone who says “I am a nerd” versus “I am nerdy,” someone who says “I am a spiritualist” versus “I am spiritual,” someone who says “I’m a Yankees fan” versus “I like/love/support the Yankees,” and so on. Would you call yourself feminist, or would you call yourself a feminist? Only the former is a person. The latter is an artifact, a thing, altogether cobbled from the hearts and minds of others.

I’ve grown up a lot since writing that original hate-speech, and I now consider it rather reductionist, but the principle of the thing is still valid and in fact I now consider it more apt than I ever did before. From Carl Jung – and expounded upon by Joseph Campbell – we learn that the very words person and persona come from the Latin word for the masks used on-stage by actors in Roman drama, through which they “sounded” (per-, through; sonare, to sound) their lines. The reason for this is pretty simple: nobody came to a play with a program featuring a dramatis personae, or list of characters, because there weren’t any printing presses; to remember who was speaking you needed only to look at his mask. If whoever is talking looks like Oedipus, then whatever he’s saying is what Oedipus is supposed to say. In society, according to Campbell, “one has to appear in one mask or another if one is to function socially at all; and even those who reject such masks can only put on others, representing rejection.”

In that one prescient sentence, Campbell predicts the hipster subculture about a half-century before its inception.

So we’ve all got roles to play, and we’re supposed to behave according to the parameters associated with those roles. Idealists like to argue that we should strip off our masks and live in a more natural state – which just underscores the point, since that’s exactly what idealists are supposed to say. In reality, we cannot, in our present civilized state, ever strip away our masks or entirely step out of our roles because of just how deep they go. We pretend we don’t like criminals, for instance, but in fact in a tacit way we mostly feel that criminals have a place in society; they are on the opposite side of the balancing-beam from philanthropists, and they provide paying jobs for cops, judges, security thugs and jailers. What we really don’t like is unpredictability. This is why “crazy” people get locked away in institutions and/or medicated into vegetation, regardless of whether they are actually a threat to themselves or anyone else – it’s because they’re unpredictable. An insane person might walk into a grocery store and start doing a strip-tease for the potato chips, and that scares us – or anyway it scares our social leaders – far more than an armed gunman. At least armed gunmen act like armed gunmen, in a predictably ruthless and naughty way.

It warrants noting that there really is a way to live outside of behaviorally-constrictive social roles, and it’s the way we all live as children before we get shipped off to the factory (elsewhere referred to as a “school”) to have our spontaneity stripped away and replaced with predictability. That’s what schools really teach, incidentally; you can jolly well learn readin’ and writin’ and ‘rithmetic at home with your parents, but only by having both stern adults and a densely-packed peer group referring to you over and over as “the nice boy” or “the mean girl” or “the joker” or “the scholar” or “the jock” or whatever do you get pigeonholed into the role that’s supposedly your identity. Before this process occurs, however, we are as malleable as clay, being totally different people in each and every setting in which we find ourselves. Naughty, nice, friendly, lazy, courteous, quiet, loud, thoughtful, reckless… we respond to our environments the way chameleons respond to colors. And in fact it’s the very spontaneity of pre-adolescence that Zen people and their ilk aim for in their practice. The end game of non-attachment doesn’t mean letting go of your television; it means letting go of your supposed identity and, ultimately, your very self.

But that’s for Zen people. As for the rest of us, we come to adopt our social identity as our actual identity not because we’re lazy or sheep-like but because, contrary to a silly cliché, we only see ourselves in other people’s eyes. Everything we exude, every action we take or appearance we assume creates an impression in others, and it’s that impression that shapes our conceptions of ourselves. Which seems simple enough and is actually pretty self-evident, but for the following: when we focus too much on the impressions we make on others, we appear to be fake, our identity mere affectation. True, in the strictest philosophical sense all identities are fake – if by “fake” one means not organic, not arising of its own accord – but, like the film buff who detests spoilers and behind-the-scenes stills, we loathe actually seeing that fakeness. Nobody goes to a puppet show to appreciate the wires. And the fakeness of identity is nowhere more obvious and grating than in people who identify themselves wholly, emphatically, and exclusively with the details of a single social role.

You don’t have to look far to find examples of this. Leak a story or start an internet meme that associates toughness with eating radishes, and the stores around every frat house will suddenly be radish-free. Plaster pictures or stories all over the place linking rockabilly music with Sailor Jerry-style tattoos, linking Doctor Who with steampunk, linking hippie music with hula-hoops or tightropes (call it a “slack line” all you want; it’s still the same thing I saw in the circus when I was nine), linking night-clubbing with bright things in martini glasses (seriously, as a former bartender, how in the hell do you dance holding onto a cosmo or an appletini? I can’t even carry one of those things without spilling it all over me), linking rap music with sagging jeans, linking snowboarding with silly hats… and those aren’t even hypothetical examples, they’re real ones…. Anyway, spread the idea that thing X is intimately associated with thing Y, and everyone whose identity is solely and obsessively fused to one of them will ravenously snap up the other.

I don’t often quote psychologists – indeed, I don’t often even acknowledge them – but I do have a few favorites, including the abovementioned Jung. Another of my favorites is A. H. Maslow. Known principally for his “hierarchy of needs,” Maslow was also the first prominent psychologist to make the seemingly heretical claim that pleasure might not be a mere by-product of gratified biological urges and may be an end in itself. Being happy for the sake of being happy, instead of as a reward for good works – imagine such a thing! Another of his contributions was the notion that personal identity is real, or that it can be real, but that it’s often hidden under a rolling snowball’s worth of accumulated bullshit. Said Maslow of finding one’s identity:

The loss of illusions and the discovery of identity, though painful at first, can be ultimately exhilarating and strengthening.

Yes, it’s painful to cease being the center of your own attention, the precisely-crafted model of a comfortingly narrow role within a comfortingly narrow subset of the larger and more impersonal culture that surrounds it. You will no longer be an Alpha Nerd if you admit that while you like The Big Bang Theory you don’t like the original Star Trek; no longer be an Alpha Bro if you admit that while you think Axe Body Spray smells okay you also think wearing a ball cap at a 45-degree angle looks ridiculous; no longer be an Alpha Hippie if you admit that while you think getting stoned is fun you also find that eating organic burritos all the time gives you painful gas; no longer be an Alpha Hipster if you admit that while you genuinely enjoy American Spirits you have always thought Pabst Blue Ribbon tastes like someone wrung a gym sock into a can… No, I’m sorry to say, once you start letting go of some of the prescriptive baubles of your selected subculture you can no longer sneer at everyone else like they just limped onto the field, but at least you’ll be one step farther from being obviously and overtly fake. Mix it up enough and you might just pass as no more fake than the rest of us.




Friday, August 3, 2012

Online Dating: A Short Treatise on a Target-Rich Environment


Online dating has evolved from an esoteric practice into a social phenomenon, a cash cow, and, as more and more people turn to it, a legitimate way to find romance (if such a thing exists). Internet-based dating services pop up at a rate of over 100 per year, catering to every niche and nod that tickles people’s fancies along the way. Yet among a goodly number of people the whole idea of online dating still shoulders an unpleasant stigma, conjuring images of pudgy shut-ins with no redeeming physical qualities and no social skills, because – let’s be honest, here – that’s how it all began and – let’s be more honest – in a lot of ways that’s still the case. But that isn’t the case exclusively, and it’s becoming less so with each new year and new flock of people willing to give online dating a try. My purpose in this article is to explore why and how that works.

For anyone who has deigned to try online dating, its appeals are fairly straightforward and obvious. For one thing, in the real world you never know if that person you’re eyeballing from across the room is single or taken, and in fact in most cases (not because of luck but as a consequence of that ultimate buzzkiller: statistics) it’s the latter. On a dating website you have only to read a person’s status. Moreover, because everyone’s there for a stated purpose, there’s no need for awkward fumbling-with-the-laces questioning. The ice, if you will, comes pre-broken.

Accessibility is another boon of online dating. Not only do you not have to squeeze a night at the bar into what for many Americans is already a hectic schedule, you don’t even have to be in the same place. I’ve pre-emptively set up dates in places where I was either moving or planning to spend a period on an archaeological project (what I do when I’m not writing), and with no more or less success than normal. I don’t care how smooth you are, you can’t astrally project smoothness over state lines. I personally didn't find this to be the shiningest of appeals when I tried online dating, because I'm often recklessly confident and a pretty shameless flirt – e.g., I met my second-to-last girlfriend standing in a crowded city bus. But for some people it's a godsend.

And then there’s what social scientists call the “Simon Effect” (a nod to William Golding’s Lord of the Flies), whereby a measure of distance between oneself and one’s audience – in Golding’s novel, this was accomplished with face paint, which liberated its wearer from shame and self-consciousness – makes a person less inhibited. This is probably also behind the popularity of texting; if people can’t hear your voice they can’t hear your hesitance, your nervousness, or your lisp (nor can you hear their boredom or reproach). Behind the safety glass of the internet you can really cut loose, freely admit to being a cat-lover or a pervert or a vegan or a religious fanatic or whatever, without having to meet anyone’s critical gaze. Some people are born with the ability to be wholly socially confident right out of the gate; others, like myself, possess it now but really had to work at it; and for everyone else there’s online dating.

Finally, there’s the geography of the dating game: the internet broadens one’s range. This correlates inversely with a person’s confidence, i.e., the more cripplingly insecure a person is, the better his or her chances of meeting a potential mate online – having nowhere to go from zero but up. But even for the measurably confident it can buffer the stakes. Take me: I’ve got about a 20% success rate in the world of face-to-face dating, “success” here meaning one responds to another’s advances and at least one decent date ensues; and about a 3-7% success rate online (for reasons I’ll get into shortly). That’s about a 25% combined success rate overall if I’m hitting both avenues. The only sure-fire way for me to get more dates is to either a) take a college course in my subject, where I tend to stand out as "that motherfucker who always talks in class;" or b) lower my standards.
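(If you want the arithmetic behind that “combined” figure – my own loose illustration, assuming the two avenues are independent – it’s the chance that at least one of them pans out:)

# Chance that at least one of two independent avenues succeeds.
p_face = 0.20    # face-to-face success rate
p_online = 0.05  # roughly the middle of the 3-7% range

p_combined = 1 - (1 - p_face) * (1 - p_online)
print(f"{p_combined:.0%}")  # 24% -- close to the "about 25%" above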

Naysayers will balk, of course, and rightly so. Dating profiles can at times be so informative that they take away all chance of surprise, and part of the adventure of romance is getting to know somebody new. Then there’s the amount of time it often takes to “feel each other out” – without the benefit of in-person contact it can be very difficult to satisfactorily express oneself and one’s desires, particularly for those who aren’t professional writers (yep - sorry). And of course there are online dating’s biggest and most obvious problems, the participants themselves, the worst of whom fit neatly into one of three separate and discrete groups:

1.      Liars – that is, people who lie about their weight, age, and/or other physical attributes. Beyond the obvious “you don’t look anything like your pictures” that resounds in the online dating world like a tolling bell, I have an especially ugly tale of this sort to tell. When I was 22 I checked out Yahoo’s fledgling Personals service (probably my first mistake) and immediately hit it off with a cute, heavily-tattooed 21-year-old – or so her ID said – skate punk I'll call Hailey, although Pinocchiette the Whore would work just as well. Everything was fun and exciting for about a month, until one day we were at the mall and ran into some pimply-faced teenager at the food court. Hailey showed him her newest tattoo, an enormous piece on her upper back, and he exclaimed, “Wow Hailey, I can’t believe you’ve got so many tattoos already and you’re only 16!” So watch out for that.

2.      Liars – that is, people who misrepresent their personalities or character traits. This is more common than you’d think, even more so than people misrepresenting their looks, and it has to do with our magnificent culture and its tendency to teach people that “putting your best foot forward” means “making up a bunch of fake shit about yourself as if you were bait on a hook.” I can’t think of how I could limit myself to just one specific example of this, so I’ll provide a broad one: while attempting to find dates online I’ve gone on dates with something like 30 women who described themselves as “adventurous,” exactly one of whom owned so much as a pair of boots. One of them called herself "athletic" and was in fact obese, like morbidly obese; she's athletic in that she's "going to the gym every day trying to get in shape." I can only imagine it’s just as bad with guys.

3.      Liars – that is, people who lie about their non-physical, non-psychological facets, the most common of whom are men or women who lie about their relationship status. Nobody likes to be someone’s “other” man or woman (well, okay, some people do, but they prefer to know it), and it’s murderously infuriating to find out that a potential lover is actually married and trolling for an affair. I’ve never had this happen to me, personally, but it has happened to a few of my female friends and they were incensed. Luckily, because all things on the internet tend to move ahead apace, there is a handy workaround for this last one that works about 25% of the time (which is enough): grab a few photos and plug them into Google’s Image Search. If those same images appear on a social networking site, it’ll pop up. Incidentally, this is also a handy way to see if a potential date has gained 50 pounds since that photo was taken, and/or is or isn’t really a billionaire astronaut with an Olympic medal in sex.

Of course, honesty in its own right doesn’t always count for much if it’s poorly packaged and presented, and because of the aforementioned Simon Effect of online dating you see that rather a lot. One woman who contacted me had this as the opening line of her dating profile:

First off, I am a heavier person, and if you’re so shallow that you can’t see past that then don’t even bother contacting me!

Ignoring the obvious (e.g., that it isn’t a case of “see” so much as “feel,” and that anyone who couldn’t see that she was overweight hadn’t bothered to look at her photos), this is an upfront assertion of prejudice. How does not being attracted to overweight people make someone shallow? Being curvaceous or slightly overweight is one thing, but this woman was positively rotund, and that isn’t a matter of shallowness – it’s a serious health risk. Had she instead opened with “I know I’m overweight but I’m working on it” or “I know I’m overweight but I’m totally okay with myself” – that is, had she spun the argument around so that it celebrated her rather than made her seem judgmental – then... well... more likely someone else would have been willing to date her. Someone not me.

And it isn’t just women; I knew a guy in Oregon who pulled a similar stunt. His line was this:

I am a no-nonsense badass looking to get the most out of life. I’m not interested in censoring myself or dividing my time between my passions and some chick. Hit me up if you want to come along for the ride!

He thought it made him come across as a tough guy. It didn’t. It made him come across as the last person in the world any sane-minded woman would want to date.

Anyway, the bottom line is that dating – every form of dating – is a numbers game. You will never meet all of the single people in a given region, city, or even neighborhood, and if you’re serious about wanting to find romance you’d do well to bolster your chances by broadening your range to include online venues. Online dating sites are, after all, places where people go for almost no other reason; they represent what an ecologist might call a “target-rich environment.” Sure, there are risks, but they’re mostly - mostly - harmless and usually easy to evade. Besides, if you aren’t willing to take a few risks then your love life is probably doomed already.