The Longest Average Shot Lengths in Modern Hollywood

Director Alfonso Cuarón likes long
takes, preferring to cut his films as little as he can. His 2006 movie Children of Men features three
relatively long single takes: the scene where Kee gives birth (3:19); the
roadside ambush (4:07); and the final battle (7:34). (Here’s a video that features all of them,
as well as every other take in the film that runs at least 45 seconds.) Now
he’s preparing to release a new film, Gravity, which supposedly opens with a 17-minute-long
take. (The first trailer was recently released, and can be viewed here.) What’s more, the rest of the film apparently
contains only 155 other shots. Assuming that the movie runs 2 hours (the
actual run time hasn’t been announced yet), each shot would run, on
average, slightly longer than 46 seconds.
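
For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python (the 120-minute runtime is, again, only an assumption):

def asl(runtime_minutes, shot_count):
    # Average Shot Length: total running time divided by the number of shots.
    return runtime_minutes * 60 / shot_count

# Gravity, per the advance reports: the 17-minute opening take plus 155
# other shots, over an assumed (unconfirmed) 120-minute runtime.
print(asl(120, 1 + 155))  # about 46.2 seconds per shot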

That’s extremely long for
contemporary Hollywood, where shots typically don’t last longer than a few
seconds each. Michael Bay’s Transformers movies, for instance, are pretty
rapidly cut, with Average Shot Lengths (ASLs) between 3 and 3.4 seconds. But
that’s not altogether unusual: Inception (2010) has an ASL of 3.1. (I
made a video about that, here.) Scholars such as David Bordwell and
Kristin Thompson have documented how, over the past thirty years, cutting in
Hollywood films has gotten faster, resulting in ASLs of under 5 seconds.
Foreign films have remained slower by comparison, and European filmmakers often
bring those habits to Hollywood. Drive, for instance, which was directed
by the Danish filmmaker Nicolas Winding Refn, features pretty long takes, and
an ASL of about 7 seconds. But that’s still much faster than what Cuarón has
just accomplished.

Advance word about Gravity got some friends and me
wondering: what other contemporary Hollywood films have ASLs higher than 46? Or
is Gravity going to set some new record?

To answer that question, I turned to
the Cinemetrics Database, an online repository
of ASLs and other measurements for films. It’s important to note that the data
there is submitted by volunteers, and very prone to errors. Furthermore, the
database is also far from complete. Still, it’s a very useful tool. (The site
also provides free software that anyone can download to measure films and
contribute data of their own.)

Here’s what I did. First, I clicked
“Show all,” so I could sort the films by ASL—simple enough. I saw right away
that Russian Ark was #1, which makes sense.
That 2002 film consists of only a single shot, and thereby yields an ASL of 5496.3. So far, the database appeared
correct.

The next step was harder. I imported
the sorted data into Excel, and began distinguishing the Hollywood films from
the rest. This is important because, as already noted, lots of foreign films contain
longer takes than their US counterparts. But we want to know how remarkable Gravity is going to appear in US
cineplexes this summer. I took out a lot of works by familiar names in
international art cinema: Béla Tarr & Ágnes Hranitzky, Theodoros Angelopoulos,
Chantal Akerman, Hou Hsiao-Hsien, Kim Ki-duk, Pedro Costa, Jean-Marie Straub
& Danièle Huillet,
the Dardenne Brothers, Jafar Panahi, Apichatpong Weerasethakul, Tsai Ming-liang
… (If you’re unfamiliar with their films, you’re missing out on some of the
best movies being made today).
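
(For the curious: this kind of first-pass filtering could also be scripted. Here is a rough sketch in Python/pandas, assuming a hypothetical CSV export with "title", "director", and "asl" columns; I did the actual work by hand in Excel.)

import pandas as pd

# Hypothetical CSV export of the Cinemetrics data; the filename and the
# column names ("title", "director", "asl") are assumptions, not the
# site's actual format.
films = pd.read_csv("cinemetrics.csv")

# A hand-made, necessarily incomplete list of non-Hollywood directors.
non_hollywood = {
    "Béla Tarr", "Theodoros Angelopoulos", "Chantal Akerman",
    "Hou Hsiao-Hsien", "Kim Ki-duk", "Pedro Costa", "Jean-Marie Straub",
    "Jafar Panahi", "Apichatpong Weerasethakul", "Tsai Ming-liang",
}

hollywood = films[~films["director"].isin(non_hollywood)]
print(hollywood.sort_values("asl", ascending=False).head(30))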

The next step was to weed out
experimental/underground directors like Andy Warhol and George Kuchar, and
older Hollywood directors like G.W. Bitzer and D.W. Griffith. Again, we want to
compare Gravity to recent Hollywood
films. Cutting slowed down a lot when sound was introduced, and has been
speeding up over the past eighty-something years. For instance, Howard Hawks’s His Girl Friday (1940) has an ASL of
about 15 seconds. (That said, an ASL of 46 would be remarkable even in Classic
Hollywood.)

And here’s what I found (although
keep in mind I wasn’t able to independently confirm any of this, and I had to
weed out a lot of anomalies—the
database really needs some cleaning up!)

1. Rope (1948, Alfred Hitchcock),
ASL = 433.9

OK, this isn’t a recent film, but it has to be noted, as it’s most likely the highest ASL in
Hollywood. Hitchcock used only 10 shots in making it (the film’s Wikipedia page
lists them). (As you probably know,
Hitchcock designed those shots, then edited them such that the finished film appeared
to be a single take.)

After that, editing speeds up considerably:

3. Down by Law (1986, Jim Jarmusch),
ASL = 51.1

4. Elephant (2003, Gus Van Sant),
ASL = 49.4

5. Bullets Over Broadway (1994,
Woody Allen), ASL = 48.2

6. Last Days (2005, Gus Van Sant),
ASL = 46.5

Actually, we’ve already encountered
an omission. The #2 film isn’t in the database (yet)—that being Gus Van Sant’s Gerry (2002), the first part of a
trilogy that also includes Elephant and Last Days. Gerry is one of my favorite Van Sant
films, and since I’ve seen it many times I know that its footage of Matt Damon
and Casey Affleck wandering through different deserts doesn’t feature much
cutting. The IMDb trivia page for the film claims that it consists of exactly 100
shots, which over 103 minutes would yield an ASL of 61.8. (Subtracting the
credits would put it closer to 60 seconds per shot.)
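
The math is easy enough to verify:

# Gerry, per the IMDb trivia page: 100 shots over 103 minutes.
print(103 * 60 / 100)  # 61.8 seconds per shot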

So, given the data so far, Gravity
looks ready to clock in at #7 in the list of Hollywood movies with the highest
ASLs.

However, like I said, the Cinemetrics
Database contains a lot of anomalous data. One entry that stood out was Blizzard, a 2003 children’s film about
a magic reindeer, directed by Star Trek‘s own LeVar Burton (who played
the blind engineer Geordi La Forge). There are two records for this film: 46.5 and
76.9. One entry I could overlook, but two raised my suspicions (even if their
claims wildly differ). So I obtained a copy of the film and gave it a look.
I didn’t watch the whole thing, but I can report that, unless there’s
some 15-minute-long shot lurking in there somewhere, its ASL is entirely
typical—about 3–4 seconds per shot.

After that, Woody Allen has a lot of
the list locked up:

8. Alice (1990, Woody Allen), ASL =
40.5

9. Sweetgrass (2009, Ilisa Barbash
& Lucien Castaing-Taylor), ASL = 39.5

10.  Mighty Aphrodite (1995, Woody
Allen), ASL = 34.6

11. Redacted (2007, Brian De Palma),
ASL = 34.4

12. Don’t Drink the Water (1994,
Woody Allen), ASL = 33.1

13. Everyone Says I Love You (1996,
Woody Allen), ASL = 32.9 (another entry lists 31.9)

14. Shadows and Fog (1991, Woody
Allen), ASL = 32.7

15. Celebrity (1998, Woody Allen),
ASL = 32.1

16. September (1987, Woody Allen),
ASL = 31.3

17. Slacker (1991, Richard
Linklater), ASL = 31.1

18. Vernon, Florida (1982, Errol
Morris), ASL = 30.5

19. Cloverfield (2008, Matt Reeves),
ASL = 28.9

20. Husbands & Wives (1992, Woody
Allen), ASL = 27.8

21. Manhattan Murder Mystery (1993,
Woody Allen), ASL = 27.7

22. Another Woman (1988, Woody
Allen), ASL = 26.9

23.  My Son, My Son, What Have Ye Done
(2009, Werner Herzog), ASL = 26.9

24. Gates of Heaven (1980, Errol
Morris), ASL = 26

25. Mystery Train (1989, Jim
Jarmusch), ASL = 25

26. Rules of Attraction (2002, Roger
Avary), ASL = 24.9

27. Hannah and Her Sisters  (1986,
Woody Allen), ASL = 24.5

28. Grizzly Man (2005, Werner Herzog),
ASL = 24.4

This isn’t surprising. Woody Allen
has long been noted for his reluctance to cut, and his preference for shooting
whole scenes in single takes. This makes shooting the film more complicated,
but it does allow actors more flexibility in their performances (since they can
move about the set more freely), and greatly speeds up the editing process.

That said, it is odd that the sorted
data didn’t include any Woody Allen film made after 1998. Their absence would
indicate one of two things: that the man has changed his way of working (which
I don’t think is the case), or that his later films have yet to be analyzed and
included. I find the latter possibility more likely. (Also, note that the most
recent film here is four years old, so it’s possible some recent titles are
missing.)

I’ve seen every film on this list
except for Sweetgrass, Redacted, Cloverfield, and My Son, My Son,
so I can’t vouch for them, but the rest looks correct. (Mystery Train
also has two other entries that claim 24.1 and 23.9, respectively;
either way, it probably ranks somewhere around 24.)

That said, Rules of Attraction
has to be a mistake. It is a remarkable film for many reasons, featuring an
extraordinarily wide variety of cinematic techniques: splitscreen, reversed
footage, extensive slow motion, and more. And it does contain many wonderful long
takes—but it also contains a sequence composed of hundreds, if not thousands, of
rapid cuts. My guess is that whoever was measuring the film chose not to count all
the shots in that section, which is of course incorrect. (To get the ASL, you
divide the film’s total running time by its total number of shots—every shot,
including the quick ones.)
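
A toy example (the numbers here are invented for illustration) shows how badly an undercounted montage inflates the figure:

# Suppose a 110-minute film with 200 long takes plus a montage of 1,000
# rapid cuts, and suppose the montage gets miscounted as a single "shot."
runtime_s = 110 * 60
true_asl = runtime_s / (200 + 1000)   # 5.5 seconds: every cut counted
inflated_asl = runtime_s / (200 + 1)  # about 32.8 seconds: montage as one shot
print(true_asl, inflated_asl)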

I stopped analyzing the data at this
point because after this the field starts getting increasingly cluttered,
meaning the inaccuracies in the database render the results less meaningful.

So with Gravity‘s
release, Cuarón looks ready to not only make his most languorous film to
date, but also to take his place alongside long-take masters like Allen, Van
Sant, Jarmusch, Herzog, and Morris.

Seventh place, to be exact.

A.D. Jameson is the author of the prose collection Amazing Adult Fantasy
(Mutable Sound, 2011), in which he tries to come to terms with having been
raised on ’80s pop culture, and the novel Giant Slugs (Lawrence and Gibson,
2011), an absurdist retelling of the Epic of Gilgamesh. He’s taught classes at
the School of the Art Institute of Chicago, Lake Forest College, DePaul
University, Facets Multimedia, and StoryStudio Chicago. He’s also the
nonfiction/reviews editor of the online journal Requited. He recently started
the PhD program in Creative Writing at the University of Illinois at Chicago.
In his spare time, he contributes to the group blogs Big Other and HTMLGIANT.
Follow him on Twitter at @adjameson.

Some Like It Dead: What WEEKEND AT BERNIE’S Owes to Billy Wilder

Procrastination
can bring you to surprising places. Recently, I made the decision to leave
a stack of papers ungraded and watch Weekend
at Bernie’s
, because . . . why not. I’d already gone through a string of Iron Chef repeats, some Ninja Warrior, and the back half of the
astounding, horrendous We Bought a Zoo.
Bernie’s, I thought, was the natural
next step—a movie notorious for its badness, something that would remind me I had
far more important things to do. But what I saw as I watched Bernie’s blindsided me—the kindred soul of a much older, much
more respected film.


What I
saw in Bernie’s was Some Like It Hot.

I hadn’t seen Weekend at Bernie’s since its release in 1989. I was eleven then,
and in the decades since, I’d managed to retain nothing about the movie beyond
its crass, high concept: Richard and Larry, two broke, young accountants for a
Manhattan insurance firm (played by Jonathan Silverman and Andrew McCarthy),
find evidence of millions of dollars in corporate theft. But their high-rolling
boss, Bernie Lomax (Terry Kiser), is the actual thief; in the guise of a
congratulatory gesture, he invites Richard and Larry for a weekend at his
Hamptons home, then arranges for a mafia hit man to meet them there first.

But the mafia don pulls a switcheroo, Lomax gets whacked instead, and when Richard and Larry arrive to find his
body slumped in a chair, they do what any movie worth its weight in farce
would: they use Lomax’s corpse as an all-access pass to infiltrate a world far
beyond their means. Perhaps because in 1989, we weren’t ready for a buddy
comedy built entirely around necro-play, Bernie’s
opened poorly at the box office. It was panned by critics.

Yet, somewhat like the body at the core
of the film, Bernie’s has somehow
stayed alive in our cultural memory. As with the Police Academy films, Summer
School
, or Just One of the Guys, Bernie’s has become a kind of
apologetic, cultural shorthand for a time when our tastes veered toward the
horribly inexplicable. But people seem drawn back to Bernie’s more than any
other schlocky comedy of that era, especially in recent years. In 2011 a
Colorado news team cited Bernie’s to
describe a
real
crime
in which two Denver guys found their buddy dead, then “
took his body — and his credit card — out for a night
of diners, bar hopping, burritos and a strip club.” There are two
Facebook campaigns and an online petition to jumpstart another sequel (Bernie’s 2 hit theaters in 1993), and
at
least one t-shirt
dedicated to the same cause (as of this writing, a
total of 948 people have “liked” this idea). Just weeks ago Bill Maher
lit
into the ancient members of Congress
by calling it a “Weekend at Bernie’s government.”

Something about Bernie’s sticks with us. But what?


For twenty-four years, I thought
it was its ironic value, and when it came on that night I expected to be
transported to a time when I was far too young to understand what “good” comedy
was. But it was too late. Perhaps I’d taken too many “film as literature”
classes in undergrad, or streamed my way through too much of the Criterion
collection, but now all I saw when I looked at Bernie’s were the sensibilities, timing, and even shot makeup of
Billy Wilder’s 1959 classic.

For one, Bernie’s pickpockets Some
Like It Hot’s plot, wholesale: unlucky,
Prohibition-era musicians Joe and Jerry (Tony Curtis and Jack Lemmon) witness
a Chicago mob hit, then hide out by posing as women in an all-girl
band—fronted by Marilyn Monroe—at a Florida resort stuffed with millionaires,
and when one falls for Jerry’s lady version, “Daphne,” things get interesting.
Down-and-out
buddy trope? Check. Mafia-related danger? Check. Taboo as a dramatic hook?
Absolutely. But reputation-wise, Some
Like It Hot
bests Bernie’s on all
fronts. The film sits atop AFI’s list of the greatest American comedies, and any mention of
it conjures Wilder’s golden catalogue (Sunset Boulevard, Double
Indemnity
, The Apartment, etc.)
and talk of its
revolutionary
take on gender roles
. It’s been called the “Great American Comedy,”
while the most Bernie’s
could muster was a “heavy-handed spoof of social life in the Hamptons” that’s
“as sophisticated as a ‘National Lampoon’ romp.” But reviewers of Bernie’s seemed too hung up on the dead
guy, writing it off as a retread of Hitchcock’s The Trouble With Harry or Blake Edwards’ S.O.B. What critics missed was how well Bernie’s harmonizes with Some
Like It Hot
in tone, sensibility, and in the interesting (and maybe even
sophisticated) things it has to say about privilege, wealth, and what it means
to move within those worlds without possessing either.

From the
early club scenes to Joe and Jerry’s arrival at the resort, Wilder layers the
world of Some Like It Hot with dark excess:
rum-running in hearses, police raids, Vassar girls on the hunt for sugar
daddies (Monroe’s character is actually
named Sugar Kane). Wilder puts his heroes on the outside looking in, where
they become their most dangerous. Bernie’s
director Ted Kotcheff (of Fun With Dick
& Jane
and, oddly enough, First
Blood
fame) updates that world to the boom-time eighties with just as much
ingenuity. Bernie’s opens with a
montage of sweltering Manhattan—soundtracked with an eighties-tastic Jermaine
Stewart cut, the chorus of which repeats “
some like it
hot
”— as stodgy Richard and proto-slacker Larry schlep to the office
on a Saturday to number-crunch for Lomax, a stand-in for the sharks and
soulless moneymakers of the Reagan/Bush I era. While both films traffic in
deception, in Bernie’s it begins way
before Lomax is killed, and the deception here is less for the sake of survival
than for that of social preservation. In an early scene, Richard’s first date
with the new company intern, Gwen, goes south once she realizes Richard’s been
lying all night about being the heir to a fortune. When the two meet again that
weekend, in the Hamptons, Richard launches a quest to convince Gwen he’s
trustworthy enough to sleep with—all while passing off a dead guy as alive.

And it’s once both
films reach their moneyed destinations that Kotcheff works his hardest to keep
up with Wilder’s tone and aesthetic, from the pacing to the look. From getaways
to seduction scenes, boats and waterways play huge roles in each film, and
Kotcheff, along with his cinematographer, François Protat, frames Larry’s and
Richard’s Hamptons arrival shots to match the way we see Joe and Jerry arrive as
“Josephine” and “Daphne” in Florida: docks, expansive skies, sand leading to
mansions. And both films waste zero time establishing the natives of these
lands as, at best, absolute idiots. Within seconds of Joe and Jerry’s arrival
in Florida, Some Like It Hot gives us a meet-cute between Lemmon’s
Daphne and rich bachelor Osgood Fielding III that results in an improbable
(and, it must be said, date-rape-ish) bout of elevator grab-ass. In Bernie’s
we get it moments after Richard and Larry grasp their predicament, when the
house is invaded by the now-dead Lomax’s hangers-on, all zombified versions of
rich archetypes and clichés far too self-involved to realize they’re
humble-bragging to a corpse. “He’s dead,” Richard says to a half-in-the-bag
partygoer. “That’s the idea, isn’t it?” the partygoer replies.

Wilder seems more
interested, as his film goes on, in making Monroe the butt of its jokes,
particularly the plotline in which Joe/Josephine tricks Monroe’s Sugar into
sleeping with him by disguising himself, yet again, this time as the heir to
the Shell oil fortune. But in Bernie’s, the wealthy remain Kotcheff’s
mark, even as the joke goes increasingly stale. Every scene with Lomax in
public is an indictment of him and his kind, as when his body is met on the
beach with big hellos from oblivious Hamptonites, or when, in the film’s most
bizarrely sterile scene, Lomax actually gets laid. Even Gwen, ever burned by
Richard’s cover-ups and lies, refuses to acknowledge Lomax is dead until an
exasperated Larry drags his corpse to her feet.

Both films make a
case that to move in wealthy circles is to engage in a certain kind of
self-deception, but each film is only as rich as its choice of taboo, and it’s
here that Kotcheff’s effort gets a bit exposed. Wilder’s
taboo—homosexuality—allowed him to crack open what could have been a
boilerplate crime caper, then push it into thrilling territory that muddles the
way we think about love, money, redemption, and self, leading to one of the
most memorable endings in American film history. And Bernie’s? Bernie’s
has a dead guy at its heart, which presents about as much opportunity for
narrative growth as you’d imagine. The hardest part of re-watching the movie
after so long, after noticing its potential, was how it devolved in its final
act into easy dead-guy jokes. Dead guy falls off a boat. Dead guy as a life
raft. Dead guy as deus ex machina.
I could almost feel Kotcheff realizing
the limits of his ambition for Bernie’s, then, like his protagonists,
deciding to get the most he can out of the conceit and exit the movie as cleanly as
possible.

But maybe the most
important quality of Bernie’s—and why it’s stuck with us so long—is its
inhabiting of the spirit of Some Like It Hot, which presented a
controversial but universal concept to an audience in a digestible,
non-threatening way.
Some Like It Hot was revolutionary because it
was a movie about coming out that skirted all the murky—and, in that era,
legal—complications. Bernie’s performs a similar trick, only with
something as bleak as death, which might be the key to why we still carry
affection for it. Weekend at Bernie’s was
released three years after children my age had huddled excitedly around a TV,
only to see the Space Shuttle Challenger explode, violently, in midair. It came
out two years after news channels broadcast footage of a press conference in
which Pennsylvania State Treasurer Budd Dwyer removed a revolver from a manila
envelope and shot himself through the mouth. It’s not hard to see how that
generation might harbor a soft spot for a movie that starred a corpse, yet
wasn’t about death at all. Instead, Weekend at Bernie’s becomes about
two young people who confront death and, for at least a weekend, find new,
crude ways in which to defy it—which, when you think about it, isn’t the
worst possibility to find yourself revisiting now and then.

Mike Scalise’s essays and
articles have appeared or are forthcoming in
Agni, The Paris Review, PopMatters, The Wall Street Journal, and elsewhere. Follow him on Twitter here.

An Open Letter to Richard Linklater: Let Jesse and Celine Separate. Preserve Cinéma Vérité.

[Warning: The essay below contains spoilers for the Richard Linklater films Before Sunrise, Before Sunset, and Before
Midnight.]

Increasingly,
members of my and succeeding
generations have come to understand that marriage might not be the
sacrosanct institution we once believed. It’s not unusual, of course, for
an individual to decide that their plans for their future—for their own
self-development emotionally, professionally, and spiritually—are not
conducive to the sorts of sacrifices a marriage calls for. But recent
generations seem to have put enough thought into this possibility that,
for the first time, the very institution of marriage is in doubt.

We know from history, experience, and the Richard
Linklater film Before Midnight that the
ambitions of women can be particularly imperiled by marriage, because our culture still considers
the lion’s share of marital sacrifices to be feminine. One not only hopes
but expects this circumstance to end in the near
future; until it does, it will be the responsibility of each man and
woman considering marriage to ask of themselves this question: Have I
developed in my youth and young adulthood the inner resources to war
with myself over the competing demands of self-realization and marital
compromise without permitting this war to
permanently scar
myself, my mate, our prospective children, and other bystanders? This
is not the same as
treating marriage as a series of silent sacrifices: couples can, do,
and should communicate to whatever degree is necessary to navigate shared
and separate hopes and ambitions. But when dialogue invariably escalates
into irreparable verbal aggression, as happens in the denouement of
Linklater’s Before Midnight—a
documentation of several hours of small, unanswered provocations that
predictably explode into a relationship-threatening tilt—the question
becomes not whether a marriage can survive, but whether it should survive.

To this moviegoer, it seemed that Jesse and Celine could, at the conclusion of Before Midnight,
continue in their common-law marriage only if
they developed complex and abiding strategies to turn frustrated
self-realization into productive dialogue. I know from hard experience
that arguments punctuated by “I don’t love you” can happen only a few
times in a relationship—perhaps
not more than once—before cataclysmic damage has been done to the
relationship jointly and to both parties individually. There’s no
evidence either of the partners depicted in Before Midnight
has developed an appropriate strategy for coping with noxious feelings
of entrapment, unless we’re to count the implied infidelity of each
partner as a solution, which of course it isn’t. It’s hard to conclude,
then, from the evidence of the final scenes of
Before Midnight,
that Jesse and Celine’s marriage will continue. One expects, though, that
in nine years of Hollywood and actual time we will discover (in a film
called Before Dying or some such)
that in fact either the well-being of their children, inertia,
couples’ counseling, or a deus ex machina has saved Jesse and Celine from
the dissolution of their union.

The harder question to ask, of course, is whether a relationship such
as the one we witness in the Linklater trilogy should continue. For his part, Jesse makes clear in Before Midnight that he decided, years earlier, that his happiness lay with Celine, for
better or worse, in bad times and good. Celine appears to have drawn no
such conclusion. If nine years of common-law marriage, two
children, and countless shared sacrifices and joys have not
convinced her to either a) choose what happiness she can find with
Jesse, or b) take whatever steps might make such a feeling possible on her part (be it individual and/or couples’ counseling,
substantially more generative dialogues with her partner, or some adjustment of her own or Jesse’s ambitions), there is no particular
reason to think her relationship with Jesse can, will, or should sustain
many more direct hits to its bow. These hits are equally damaging to
Celine and to Jesse, and each year that passes in which Celine believes
her marriage the terminus of all her ambitions is another year those
ambitions are not being realized and she and her mate are suffering the
calamity of being slowly but violently pulled apart.

Relevant to this discussion is a 2012 article in The Guardian,
which featured the reminiscences of a hospice nurse regarding the five
most
common regrets of her dying charges.
The most curious entry to the list—in the nurse’s view, and perhaps in
the minds of many of her readers—was the fact that many of those whose
deaths had been witnessed and memorialized had not realized before
dying that happiness is a choice. The words themselves (“happiness is a
choice”) sound trite, but in fact if there is a significant cinematic
achievement to be found in Before Midnight,
and there is, it is that the movie exhibits better than any before it that happiness is indeed something we either learn to choose during
our lifetimes or do not. 

Of
course, “choosing happiness” is no guarantee of actual
happiness, nor does it prevent isolation, depression, or
self-destruction. What it is, however, is an attitudinal alignment that
says each choice one makes will be made, to the best of one’s ability,
with sufficient self-knowledge to make happiness a marginally more
likely outcome than would have been the case were the decision made
blindly. In other words, to choose happiness, we must
first work diligently at self-knowledge, as those who cannot
or will not know themselves (the good, the bad, the ugly, all of it)
are those who cannot intelligently determine their future likelihood to
produce happiness in themselves or in others. Such individuals only harm
themselves and others in their meanderings, and while we do well to
care deeply about such individuals and to help them on their way, we
also do well to give them wide berth when the time comes to choose a
lifelong partner—at least until they find themselves differently
situated. This is not because such people can’t be vigorously happy, as
they can be; or because they cannot bring joy to others, as depending
upon their circumstances they often will; or because they’re
ill-intentioned, as far more often than not they’re not; but rather
because, of all the institutions human civilization has devised,
marriage most requires as an antecedent the inner
resources to wage productively rather than destructively the war we all,
to some
degree or another,
perpetually wage within ourselves over when and how to sacrifice for
those we love. It is no crime to know oneself an
ill match for the institution of marriage; it is no crime
to not know oneself well enough to protect oneself and another from
an ill-made and ill-fated match; it is, however, a tragedy
to so conjoin and to be so conjoined, and an even worse tragedy to
remain so past the point a change is still possible. And it’s a tragedy
that’s avoidable from the start.

Like
many of my generation, I have both dated individuals facing the same
tough questions as Celine and Jesse and
also wondered about my own suitability for a lifelong commitment. And
like many of my generation, I have suffered at the hands of those
who believed themselves prepared for the sort of long-term union that
was not, in the event, what they really wanted. It will be a poor result
of the remarkable act of filmmaking that is Before Midnight
if the consequent conversations between partners who’ve seen the
movie hinge primarily on whether one or another of the two central characters could have done this or that or avoided doing this
or that to make all well between the film’s two leads. It will be a poor
result because the sort of conflict depicted in the movie, at least in
the lines of dialogue we hear onscreen, is not navigable, and believing
it so only brings more pain and suffering to its participants. 

The conflict between Jesse and Celine is, indeed, impassable, as it was seeded in the identities of both partners when
they first met
eighteen years ago (in Before
Sunrise
) and then reunited nine years later (in Before Sunset).
Both parties made a decision, on those dates, to continue a liaison
with someone whose ambitions and temperament and self-identity were not
compatible with their own; Jesse and Celine, in short, confused lively
conversation with a future. But abiding relationships delve much deeper
into the psyche than mere repartee does, a fact Linklater’s first two
films displayed little enough awareness of to be disconcerting.  

No
blame for any of the above lies at the feet of either Jesse or Celine,
though we could certainly wish that, as a couple, the two had either
seen their initial meetings for what they were—something glorious but
fleeting—or else, in deciding otherwise, developed more resources to
work through what (by the time of the events of Before Midnight) has
clearly become a hardened
disconnect. Anyone who can watch the hotel scene in Before Midnight and not
see a relationship in which this sort of aggression has played out many
times before has never been in a relationship in which this sort of war
of words occurs in the first instance. The accusations and insults
hurled in anger—I hate making love to you; I ruined my life for you;
you’re mentally disturbed; your selfishness makes my happiness a
perpetual impossibility; I cheated on you; I also cheated on you; I
don’t love you anymore—are harrowing and more often than not
relationship-ending. Those who say the movie depicts a couple who’ve
just “grown a bit weary,” or are merely “a little bitter” were clearly watching the movie they’d hoped to see, not the movie they were given by the film’s writers and director.

The central conflict of Before Midnight—the film we actually see, not the film we might wish to see—is one for which an earnestly romanticist
sensibility (as opposed to one of gloomy pragmatism)
can offer only one solution: separation of the parties. It
will hurt their children, likely irreparably, but as Before Midnight takes
great pains to establish with respect to Jesse’s son Henry, such a
wound is survivable. It is, moreover, preferable to a childhood spent
listening to one’s parents arguing (or brooding silently) over acts of
verbal aggression, infidelities, or even (at the extreme terminus of
such a destructive downward spiral) physical aggression waged by one or
both
of the parties against the person or property of the other. It is no gift to seal two characters moviegoers love so much into a coffin of
shared fate neither truly wishes for themselves. It is, in fact, our own
selfishness in wishing for life to be different in the movies than it
is in our own bedrooms and backyards. 

And
so, with the foregoing in mind, I ask—even beseech—Richard Linklater
to divorce these two characters and let each live the life they were
meant to be living. If you want to make a movie that reflects the times
we live in, Mr. Linklater, make a movie in which marriage is not, in
fact, for everyone, and in which no one is forced to spend a lifetime
with someone they see as an obstacle or an albatross rather than a
partner. One wonderful day in Vienna, and another wonderful day in
Paris, do not a lifetime make. Like many my age, I have had such days, I
have even been lucky enough to have many months of such days, and I
know as well as you do, Mr. Linklater,
that
they simply aren’t enough. The day Celine chooses
happiness is the day she leaves Jesse, and the day Jesse chooses
happiness is the day he accepts it and moves on. I don’t want it to be
so, but I know it to be so. I
recognize the bind you’re in—your commitment to cinéma vérité is at
odds with your own (and Ethan Hawke’s and Julie Delpy’s) abiding
attachment to Jesse and Celine—but the obligation you owe to love,
life, and art takes precedence over the obligations you owe to the box
office, the media, and even your audience.

Seth Abramson is the author of three collections of poetry, most recently Thievery (University of Akron Press, 2013). He has published work in numerous magazines and anthologies, including Best New Poets, American Poetry Review, Boston Review, New American Writing, Colorado Review, Denver Quarterly, and The Southern Review.
A graduate of Dartmouth College, Harvard Law School, and the Iowa
Writers’ Workshop, he was a public defender from 2001 to 2007 and is
presently a doctoral candidate in English Literature at University of
Wisconsin-Madison. He runs a contemporary poetry review series for
The Huffington Post and has covered graduate creative writing programs for Poets & Writers magazine since 2008.

Beautiful and Claustrophobic: MAD MEN’s Inferno

When I
started taking classes in creative writing, one of my teachers told our class
that all we had was one story we would spend our entire lives rewriting. At the
time I found the prospect of this frightening. In a home of Cuban-Jewish
refugees I had grown used to two concepts: the impermanence of material things
and the permanence of loss. Both themes were ones I strove to break away from.
I nurtured an intense fascination with born-again Christianity. There seemed
something glorious to me about the idea that you could start again, fresh in
the world, free from the past. 

The longing
for rebirth is a motif that dominates our literary imagination and our
spiritual and emotional lives. The rebirth narrative is often constructed as a
narrative of resolution. We long to read about characters who are constantly
making choices that propel their lives forward and we love reading about heroes
and heroines who are brave enough to make the choices that will ultimately lead
to some kind of change. In real life we are creatures of habit. We love a
routine, because it makes an unruly universe seem manageable and safe. In
fiction we open a box in one scene and in the next we close that box for good.
In real life, we keep—consciously or subconsciously—reopening that box.

Mad Men, which at first glance seems to
be a period drama, has actually proven to be a drama that explores how every
rebirth is a repetition. When I first started watching, I’d feel a deep,
overwhelming sense of dread with every episode. Every swig of a martini, every
suck on a cigarette, every fuck behind a spouse’s back filled me with great
anxiety. On Mad Men, no character
(except, arguably, Peggy Olson) is ever able to change, even as the world is rapidly
changing around them. Our desire to rebuild our lives is shown to be just as
much of an illusion as anything else Don Draper or Peggy or Pete Campbell tries to sell to a
client. Both Don and Betty Draper repeat patterns from their old marriage in their new
ones. The new ad agency may look different from the old ad agency, but the same
ugliness that hid beneath the surface of the old polished veneer is there under
bright lights, mod fashion and art deco design.   

In many
ways, Mad Men’s insistence on denying
us the pleasure of resolution is the secret to its success and the reason so
many of us are hooked on it, despite being frustrated that nothing ever really
changes, time and time again. Repetition of experience is electric. It grounds
us in the past and connects us to the present. We think what we seek is an
experience that is new, but what we really want to feel connected to is an
experience that makes us feel happy and safe, in a way we once felt happy and
safe before. All addictions are nurtured by our love of repetition, a need to
feel as high as we once were, as loved as we once were loved. Don’s continuous
cheating has always had a somewhat addictive quality to it. In every case Don
wants the simultaneous thrill of the new, along with the comfort of the old.           

The
repetition of familiar collective memories and period fashions has always given
Mad Men a kind of warm intimacy,
which is strange because many of its most fervent viewers haven’t personally
experienced the 60s. In an article for Vanity
Fair
, “You Say You Want a Devolution,” Kurt Andersen claims that this
yearning for the past is a peculiar development of the 21st century,
which he claims is a reaction to constant technological newness. In Andersen’s
view we would rather rehash the past than create anything new at all. We
watch television shows that are episodic, where characters continuously revisit
experiences, and we live in the age of the remix, where we borrow snippets
from the past as a way to reinvent the present.

But, in reality, I don’t think that
our desire for repetition is anything new at all. There is something very human
about our love of patterns. Our obsession with the past is more than just
fashion. It is built into our bones. We harvest food according to different
seasons. We pray for different purposes at different times of the day and
different times of the year. Ceremonies like graduation and weddings are built
into the very fabric of our culture, in both religious and secular settings. Poets
and lyricists have long been seduced by repetition. You can find the repeated
word or line in a classic love poem, and you can find it in contemporary songs.
We sing song refrains ranging from “Hey Jude” to “MMMBop.”
The repeated onomatopoeic word can be sing-songy, as in children’s
songs, or visceral and raw. Kanye West’s brutal album, My Beautiful Dark Twisted Fantasy, is often about obsession and
addiction, and its most harrowing lines are repeated words. When Kanye West sings “bang, bang, bang, bang, bang,” so icy and
perfectly metered, on his new album, are these words the sound of a gang-bang
or a gunshot? The more we hear a word repeated, the stranger it sounds and the
more we re-think meaning.

Anyone who
has participated in a writing workshop knows that there is a danger in treating
art as personal therapy. Often, especially for beginning writers, we do repeat
the same story over and over, until we reach the sense that we have finally gotten
it “right”—we’ve made sense of the motifs we were continuously drawn back
to.  My writerly “coming-of-age” was no
different. In grad school most of my writing focused on two relationships: my
relationship with my mother and a romantic relationship that broke my heart in
two.  

One story resolved. For months after
the relationship was over the repetition of words from my ex’s poems would
drift through my brain at odd intervals, like a song I knew all the words to,
until one day, I didn’t remember many of those words at all. At that stage I no
longer loved this person, and it felt like what it had become: a tiny,
tender loss, wholly different from the dramatic poems I wrote when I was still
angry and passionate about a love I didn’t want to see die.

In contrast, the relationship with
my mother evolved. We learned to understand each other. I’m not sentimental by
nature. I don’t obsess over pictures. When I move I throw stuff out. My mother
is the opposite. She takes forever to get rid of anything. Whenever I go back
home, my room is a museum of me, except it isn’t a museum of me at all: it is a
museum of the girl I was when I was 15 years old. Whenever I go home I am
stunned at how much I’ve changed and how I haven’t changed at all.

Repetition reminds us of that gap within each
of us: between that part of us that stays constant and that part of us that is
willing and able to evolve. It reminds
us that if everything is ephemeral, repetition is all we have. It reminds us
there are lovers we will leave behind and mothers we will love forever.

The opening image of Mad Men shows a man falling to his
death; in reality, the path down is a spiral rather than a straight line, which
means it is ultimately going to take a longer time to bottom out.  This season the space between Don’s domination of
Sylvia and his tiny-voiced “please” begging her to stay is getting narrower
and narrower. This season’s first Mad Men
episode opened with a scene on the beach and Don reading The Inferno. It ended with an ad that Don created: the image of
an empty beach, bare tracks in the sand, discarded clothes, the open ocean. For
Don this was an image of escape. For his clients it was an image of a suicide. Escape and suicide have always been
dangerously close throughout the series, but this season, we are reminded
over and over how it is impossible to only love the beginning of things, when
everything that begins is ultimately going to end.

Arielle Bernstein is a writer living in Washington, DC. She teaches
writing at George Washington University and American University and also
freelances. Her work has been published in
The Millions, The Rumpus, St. Petersburg Review, and South Loop Review, and she has twice been listed as a finalist in Glimmertrain‘s Family Matters Short Story Contests. She is Associate Book Reviews Editor at The Nervous Breakdown.

You Are What You Play With: How SESAME STREET and Legos Generated a Generation

For a man or woman of a certain age, it’s hard to imagine a
single commercial or non-profit venture having had more of an impact on one’s
psychological maturation than Legos or Sesame
Street
. Yet even today’s youth might say the same thing: In 2013, we have
Lego-based television shows (Ninjago: Masters of Spinjitzu and Legends
of Chima, 
both on the Cartoon Network), Lego-based video games (more
than forty-six so far, including sequences based on Lego Star Wars, Harry
Potter
, and Indiana Jones sets), and even a forthcoming
feature-length film (The Lego Movie, due out in 2014 and starring the
voice-acting talents of Will Ferrell, Liam Neeson, Morgan Freeman, and Will Arnett).
Meanwhile, Sesame Street, now in its forty-fifth year of broadcasting,
remains ubiquitous in the lives of millions of American children. In short, it
would be difficult to name two cultural touchstones more worthy of being
written about by pop-culture critics, yet less often discussed in the
mainstream media. I’m thirty-six, and like many my age I spent much of my
childhood amongst the friendly monsters of Sesame Street, and another
significant percentage of my child’s play amongst store-bought and self-modeled
creations from Lego’s City, Space, and Castle lines of building bricks. So when
The Lego Group, now in its sixty-fourth year of operation, suddenly sat front
and center in the news last week due to a new report on design changes to its
building blocks, I paid closer attention than I would have anticipated. 

A recent study urges
parents to consider, when purchasing toys for their children, the indisputable
fact that Lego minifigures are substantially more likely today than twenty
years ago to feature angry or otherwise non-smiling plastic faces. Meanwhile,
anxious parents continue fretting publicly today, as they have for decades,
about the entertainment options available for their kids on television and at
the movies, meaning Sesame Street remains ever at the border of
conversations about American child-rearing, just as The Lego Group is right
now. And certainly there’s good reason for parents to worry about both toys and
television: Children are sponges, often noticing stimuli adults don’t. In
internalizing certain stimuli and ignoring others, they decide, by themselves,
the sort of adults they’ll become. The question, then, is a simple one for many
of today’s most anxious parental units: Does the anger painted on the face of a
toy make it more difficult for a child to access happiness? Would the gradual
loss of children’s programming of the caliber of Sesame Street—which is
increasingly likely, as each year it seems a greater and greater percentage of
children’s entertainment is provided by the Disney Channel rather than Jim Henson’s
heirs and successors—contribute to a generation incapable of growing up? And a
larger question: Isn’t one of American culture’s most unsettling blind spots
that it takes us longer to mature emotionally than seems to be the case in
other cultures? And isn’t this at least partially attributable to how we spend
our playtime as children and young adults?

The answer to the above questions may well be
“yes,” but it may also be that these are the wrong questions. When I
was a Lego-obsessed child, the thing about every Lego minifigure featuring the
same smiling, yellow-plastic face—and they did; it wasn’t until 1989 that
additional facial features got added to Lego minifigures, and it took until
2003 for Lego to introduce lifelike skin tones—was that you quickly learned to
ignore your Lego minifigures’ facial expressions in imagining your own
Lego-based melodramas. Children instinctively (and from hard experience) know
that not every moment is a happy one. If their toys seem to be selling a
different story, they opt for empiricism over marketing and ignore the false
positives in their midst. If, however, as is now the case, Lego minifigures are
carefully painted to represent a series of distinct ethnicities, facial
expressions, and emotional attitudes, it’s much more difficult for a child to
impose their imaginative will upon their playthings. The same is true for the
feature of modern-day Legos most children and AFOLs (Adult Fans of Lego)
complain about, which is that increasingly Lego sets feature stickers to
portray complicated bits like engines or headlights or chassis details. This means
that once again children are denied the authority (and discouraged from
exercising their capacity) to imagine these features on their own.

If store-bought Lego sets represent, more and more, a
predetermined endpoint rather than a beginning, it says much about the
opportunities today’s kids do or don’t have to engage in imaginative play. That
said, the fact that Lego now regularly uses flesh-toned hues for its
minifigures rather than stock yellow headpieces is a far more significant
development than the one that made the news last week, at least from the
standpoint of child psychology. What happens when the cartoonishly fantastical
World of Lego begins to look significantly more lifelike, with minifigures that
are (variously) white, black, Latino, pale, tanned, young, old, et cetera? The
study states that what happens—as I’d suggest has been the case with Sesame
Street
from the very beginning—is that children begin to make decisions
about which faces and temperaments are most relatable to their own experiences,
and it’s in those decisions that juvenile psychologies may well get formed, or
so instinct and common sense tell us. It’s all to the good that children can
now play with toys featuring faces that don’t look like their own, and perhaps
it’s even to the good that children can now play with toys whose facial expressions
better match the range of expressions present in kids’ real-time environs; the
question is whether it would be even better if Lego minifigures were configured
abstractly enough to encourage children toward entirely homespun playtime
narratives.

When Sesame Street was testing its pilot episodes
before audiences in 1969, social scientists told Henson and his collaborators
that children would be confused if puppets and human beings appeared on-screen
together. Yet the juxtaposition of Henson’s friendly puppet
“monsters,” who individually represented dramatically different
emotional and intellectual archetypes, and human beings, who generally
exhibited the full range of homo sapiens’ complexity, scored much better
among young test audiences and so—just like that—the social scientists’
objections were pushed aside. The result, of course, is one of the most
celebrated television programs in American history. It’s also a cultural
phenomenon that tells us much about how Generations X and Y learned to understand
themselves.

Each of the “Muppets” featured on early episodes
of Sesame Street could credibly be said to have represented a discrete
set of emotional and intellectual characteristics; some of these were
“positive” traits, some “negative,” though of course this
is a gross over-simplification (one popular theory
holds
that it’s more useful to think in terms of “Chaos Muppets” versus
“Order Muppets”). In the broadest terms, however, each of the
“major” Muppets of the early years of Sesame Street
represented a personality portfolio a child could instinctively choose to
relate to or be repelled by. Because these bundled archetypes were commingled
on-screen with human actors, it seemed reasonable for children to see Henson’s
friendly monsters as worthy not only of sympathy but empathy. Sesame Street
thus featured a pantheon of Muppetry ranging from the generally admirable
(e.g., Big Bird, Elmo, Kermit, and Grover) to the generally undesirable (e.g.,
Oscar, Bert, and Cookie Monster). Yet each Muppet was just three-dimensional
enough for any child to find them at least partially relatable.
 

Given all this, we might posit here a personality test, in
the mold of the Myers-Briggs assessment, that uses Muppets instead of
readily-definable character traits as its primary touchstones. It seems a
worthwhile hypothesis, given that so many of the Muppets of the 1970s and 1980s
simultaneously exhibited positive and negative characteristics that were
essentially symmetrical. That is, each “positive” trait had a
“negative” corollary, and vice versa. For instance, Big Bird, and
later Elmo, were both naive and oversensitive, but also—on the other side of
the same coin—friendly and empathetic. Perennial fan-favorite Grover was unwise
and impetuous, but also courageous and self-confident. Telly was neurotic and
anxious, but also kind-hearted and sympathetic. Ernie was irresponsible and
flippant, but also jovial and extroverted. Cookie Monster, like The Count,
could equally be seen as harrowingly obsessive and admirably passionate. Bert
was often tense, irritable, and impatient, but he was also intelligent,
motivated, and a self-starter. Oscar the Grouch, like Kermit the Frog, sat more
steadfastly at one end of the spectrum than the other: If Oscar was generally
undesirable for his ill temper, pessimism, and reclusiveness, the Kermit of Sesame
Street
was consistently admirable for his intelligence, wisdom, and
emotional acumen. Other high-visibility monsters on Sesame Street also contained
important dichotomies, albeit more subtle ones: Herry Monster, for instance,
was, like so many of our fathers, equal parts imposing/unapproachable and
powerful/comforting.

As a child I most admired Ernie, Grover, and Cookie Monster,
which sounds suspiciously like my own psychological profile. I imagine some readers
will likewise be able to see themselves in some triangulation of Reagan-era
Muppetry. Are you a BCE (Big Bird, Cookie Monster, Ernie)? A COG (Cookie
Monster, Oscar, Grover)? Whatever one’s predilections, the point is that we can
understand, now, why parenting advocates are constantly mindful of what their
children are watching, and why social scientists are so skeptical of Legos’
recent evolution. Still, the question for both parents and social scientists
remains the same: Are we really considering, in our activism and our science,
how children consume entertainment, or do our anxieties merely underscore what
building blocks and puppets mean to us now, as adults? When I consider my own
history with Legos, for instance, I’m reminded that up until the age of
fourteen I wanted to be an architect, as it was somehow kept from me until that
time that architects have to do a lot of math; likewise, up until my
mid-twenties I carried with me the sort of childlike naivety about the ways of
the world that would be familiar to anyone who’s spent any time on Sesame
Street. It wasn’t, in either case, that my toys or my television were
too constricting, but rather that just enough imaginative freedom was provided
me by them to make my playtime either a danger or, depending on my luck and my
instincts, a boon.

Seth Abramson is the author of three collections of poetry, most recently Thievery (University of Akron Press, 2013). He has published work in numerous magazines and anthologies, including Best New Poets, American Poetry Review, Boston Review, New American Writing, Colorado Review, Denver Quarterly, and The Southern Review.
A graduate of Dartmouth College, Harvard Law School, and the Iowa
Writers’ Workshop, he was a public defender from 2001 to 2007 and is
presently a doctoral candidate in English Literature at University of
Wisconsin-Madison. He runs a contemporary poetry review series for
The Huffington Post and has covered graduate creative writing programs for Poets & Writers magazine since 2008.

Not As “Himself”: Three Early Alan Arkin Screen Performances

The
notion of an actor “playing him/herself” is slippery. When expressed, it
implies that we really know the performer when we probably don’t; we just know
their often-employed stage or screen persona. But also, it suggests that there
is something easy, automatic and unskilled about an actor’s “being him/herself”
when, in fact, being one’s self in an artificial and contrived situation or
scenario really isn’t a cakewalk.

Maybe
when we say that an actor “just plays him/herself,” what we mean to say is that
an actor has grown (perhaps too) comfortable in their craft. And under this
description fall many renowned older actors: Robert DeNiro, Al Pacino, Jack
Nicholson, Christopher Walken, and, not least of these, Alan Arkin. Yet what’s
interesting in Arkin’s case is that, unlike those other stars, he seems largely
exempt from being criticized or lampooned for “playing himself,” probably
because many do not mind him doing so (including myself). When he won an Oscar
for his supporting turn in 2006’s Little
Miss Sunshine
—in which he was part of an ensemble cast and not on screen
that much—it was as though he was receiving one of the highest awards in his
profession for doing what he does best: play “Alan Arkin,” and as a
flawed yet lovable grandpa to boot.
And when he was Oscar-nominated for his supporting part in Argo, it was as though he was being recognized for playing “Alan
Arkin” as a gruff, scheming, yet noble movie producer (thereby giving the
archetype of the Hollywood insider—something that many AMPAS members must be—a
somewhat positive spin).

Yet
what’s also interesting is that, like some of the other actors mentioned, Arkin
broke through by giving screen performances that, to various degrees, required
him to be characters that he clearly wasn’t. As evidenced in The Russians Are Coming, the Russians Are
Coming;
Wait Until Dark; and The Heart is a Lonely Hunter, he was once
a chameleon-like new screen talent and not just “himself.”

*******

Before
his first major screen role in The
Russians Are Coming, the Russians Are Coming!
(1966, dir. Norman Jewison),
Arkin had been an early member of the improvisational theater troupe Second
City and acted in Broadway shows like Enter
Laughing
and Luv. But while he
has experience with, for lack of a better term, traditional acting, he
considers himself to be an “improvisatory actor” and
his performance in TRACTRAC is indicative
of that tendency. As Rozanov—a Russian lieutenant who has to lead a “covert”
emergency landing party into a coastal New England town after his captain runs
their submarine vessel aground (which then leads to a panicked community, which
in turn leads to hijinks)—Arkin’s controlled, well-timed and humorous spontaneity
stands out and conveys the character’s professionalism as well as his beleaguered
state (something that would become a hallmark of his general screen persona).
And because much of the Rozanov role is spoken in non-subtitled Russian, the
performance often relies on effective yet subtle facial expressions, gesticulation
and vocal inflections. These acting choices render Rozanov a believable person
as well as a source of comedy.

While
warmly received upon its release by critics and audiences for humanizing and
relativizing the Cold War conflict during a period of Red Scare fatigue, TRACTRAC has become a product of its era
since the dismantling of the U.S.S.R. As a consequence, its flaws are more
apparent. Though the film was intended as both a satire and a farce, many of the other performances come across as only farcical and are reminiscent of the brazen It’s a Mad Mad Mad Mad World, thereby making the overall work of the cast somewhat uneven. And while well meaning, the resolution to a climactic and literal standoff between Russian soldiers and American townsfolk is like something out of D.W. Griffith’s early work. Yet, by
first portraying Rozanov as a relatable and aggrieved man caught in a tough
situation, Arkin’s work in the film preserves some of its universal and
non-jingoistic message. Also, it demonstrates a quality of his acting style that
is evident elsewhere in his early work and that has been attributed to others
who have had similar improvisational training: even as he gets your attention,
he still functions as a team player within an ensemble. Remarkably but
deservedly, he received an Academy Award nomination for Best Actor for this
debut performance.

*******

His
next major role after TRACTRAC was
something more sinister. As the psychotic criminal Harry Roat, the big bad in
the screen adaptation of the Frederick Knott play Wait Until Dark (1967, dir. Terence Young), Arkin is almost
unrecognizable: wearing dark teashade sunglasses, a short bowl-cut and a
leather coat, and speaking “hip” in a creepy staccato, he is an original
nightmare hipster.    

WUD was shot as Arkin was becoming a known quantity, and knowing in retrospect that it’s him only adds an uncanny quality to the performance. Yet Roat is so off-kilter and menacing that it’s easy to overlook that he is a huge source
of exposition. For instance: while entrapping
two con men (Richard Crenna, Jack Weston) into helping him to retrieve a
heroin-filled doll from an apartment in which an innocent and blind housewife
Susy (Audrey Hepburn) lives, Roat explains the story’s set-up in the film’s
first sustained scene. When casting such a part, the wise course is to hire a talented actor who can make a contrived, unreal situation feel believable to an audience, and WUD’s complicated set-up could easily have seemed preposterous when translated from stage to screen. But Arkin makes it work, and with panache.

Some
critics at the time of WUD’s release
considered Arkin’s performance as Roat to be too much: Roger Ebert wrote that
it’s “not particularly convincing”
and Bosley Crowther went as far as to compare it to a Jerry Lewis caricature.
This point of view is fair if WUD is
understood as something approximating realism. But if WUD is understood as something akin to an Alfred Hitchcock thriller,
then the performance—which also uses the actor’s skill of spontaneity not to
get laughs but to unnerve—succeeds: Roat is a big movie villain who would feel
at home in a Tarantino film due to his theatrical, idiosyncratic nature.
Also, the jump-scare in WUD’s climax—which
actually involves both a jump and a scare—must be mentioned; it is one of the
all-time best in film, and the crooked and swift physicality of Arkin’s animalistic
leap during the moment is much of what makes it effective.

*******

Based on the Carson McCullers novel of the same name, The Heart is a Lonely Hunter (1968, dir. Robert Ellis Miller) stars Arkin as John Singer, a deaf-mute who relocates to Jefferson, Georgia to be closer to his institutionalized, developmentally disabled friend Spiros (Chuck McCann). Once there, he helps and befriends a small group of people, including music-loving teenager Mick (Sondra Locke), a resident of the house where he rents a room.

While different from its source material in some ways, THIALH is a straightforward adaptation that is bolstered by a
well-modulated and sensitive dramatic tone. For the most part, the work of the
ensemble cast is solid, but—at the risk of sounding like a broken record—Arkin’s truly understated performance is the standout, and it stands out even though such restraint could easily have rendered it elusive. Relying on realistic pantomime as well as sign language and body language, the performance’s subtlety exemplifies and extends the story’s theme of how the hardships, tragedies, kindnesses and kismets of life tend to happen in discreet ways. Singer is a selfless, decent and almost imperceptible altruist who changes lives for the better, but his natural inconspicuousness makes others oblivious to his problems and loneliness, which ultimately causes him misfortune. In other words, Arkin’s heartfelt work in THIALH personifies the film’s title, and it earned him another Academy Award nomination for Best Actor.

Ostensibly, Arkin’s performance is notable for creating and sustaining Singer’s inner life. Yet upon close examination, it’s clear that the performance isn’t great because he is physically convincing as a mute or because he expresses things in a contained yet clear manner; it’s great because you can tell that he’s genuinely listening to and observing others. Actors will often say that one of the most essential things to master when you’re learning the craft, if not the most essential, is listening to your scene partner or partners. That
may seem simple enough, but if you’ve tried acting, you’ve probably realized
that really listening to others as
you say your lines and hit your marks is a true skill. And if you master it,
then you can react to others authentically, which is what goes into most great
acting, and which is evident in all three of Arkin’s performances in TRACTRAC, WUD and THIALH.

*******

In
his 2011 memoir An Improvised Life,
Arkin wrote that “from the beginning I always thought of myself as a character
actor—someone who transfers himself into other people. I had no interest in
being myself onstage. In fact, because I didn’t know who I was, I didn’t have a clue. I only knew myself as other people.” Yet,
as he describes in the book, when his film career ebbed after that initial breakthrough, he underwent a spiritual shift as a result of studying Eastern philosophy and practicing meditation. His consciousness and self-knowledge changed,
which required him to alter his approach to acting.
By his own account, it became more public and vulnerable
and, as a result of applying the Zen Buddhist concept of Shoshin or “beginner’s
mind” to his work,
less self-controlled and even more spontaneous. In other words, Arkin’s acting
style changed due to a period of self-actualization and, incidentally, his
screen persona became more identifiable and unique to his actual self, and
different from his performances in TRACTRAC,
WUD and THIALH.

This
suggests an interesting notion: maybe, as a result of maturing and becoming more
comfortable with their own selves, some great actors no longer feel a need to “hide
behind a mask” within their work. If such is the case, then whenever a DeNiro, Pacino, Nicholson or Walken gives a mediocre performance while seeming to be “DeNiro”, “Pacino”, “Nicholson” or “Walken”, they’re probably just coasting and failing to meet their earlier, better standard (i.e. Raging Bull, Dog Day Afternoon, Five Easy Pieces, The Deer Hunter).

In
Arkin’s case, however, he has remained an interesting and compelling screen
presence even if the movie he’s in might be nothing to write home about. As he
writes, this consistent quality is deliberate: “for me, every activity I engage
in has to contain the possibility of internal growth; otherwise it ends up as
either ‘making a living’ or ‘passing the time’—two ways of going through life
that feel to me like a living death. I want to know with every passing moment
that I am alive, that I am conscious, that with every breath I take there will
be some possibility of growth, of surprise, and of complete spontaneity.”

So
long live Alan Arkin, as well as “Alan Arkin.”


Holding
degrees in Film and Digital Media studies and Moving Image Archive
Studies, Lincoln Flynn lives in Los Angeles and writes about film on a sporadic
basis at
http://invisibleworkfilmwritings.tumblr.com. His Twitter handle is @Lincoln_Flynn.

RAISED IN FEAR: LET’S SCARE JESSICA TO DEATH and the Perils of Country Living


Most
potential viewers would expect a film made in 1971 with the title Let’s Scare Jessica to Death to be a
teen slasher picture, but in fact, it is a subtle, moody piece of cinema that
explores the fragility of the mind and the persistence of the past, achieving
moments of rich psychological insight. 
It is also one of the most powerful treatments of the dream of getting
away from it all, and the horrors that ensue when we seek refuge in places we
little understand and where, in the end, we may not really belong. 

The story
is told largely from Jessica’s point of view, and creates a disturbing sense of
uncertainty in the gap between her own perceptions and those of the other
characters. This is nicely captured in
the opening scene’s voice-over narration, spoken by Jessica (Zohra Lampert):
“Nightmares or dreams … madness or sanity … I don’t know which is which.”  Her seemingly tenuous grip on reality is
partially explained in the back-story given in the early scenes of the
film. Jessica has just spent several
months in a mental institution, and she and her husband Duncan (Barton Heyman)
have decided to escape from the confines of their Manhattan apartment to try
the curative powers of country living on an apple orchard in rural
Connecticut.  Later, they encounter an
antiques dealer who made the same move, and he recognizes in them fellow
“refugees from urban blight.” But despite this antiques dealer’s idyllic
portrait of the area they’ve just moved into, the newcomers are given many
signs that something is seriously wrong in this superficially bucolic retreat.

In the nearby
small town, they encounter hostility from the native population, which seems to
consist almost exclusively of old men. 
Since the newcomers are all evidently in their thirties, the enmity seems largely to derive from a generation gap, one that is reinforced by the
hippyish appearance of Jessica and Duncan’s friend Woody (Kevin O’Connor). Though their unfriendly encounters appear to
be the expected clash of anti-establishment baby boomers with the so-called
“greatest generation,” these tensions also derive from a more ancient enmity,
one between country folk and city folk. Many great films of the seventies address this theme, notably Deliverance, Straw Dogs, and The Texas
Chain Saw Massacre
, but what makes Jessica’s
treatment unique is the brooding ambiguity that shrouds the true nature of this
rural community. Since portrayed events
are filtered through the protagonist’s melancholia and relentless self-doubt,
it becomes impossible to be certain whether we are witnessing mere uncultured
rudeness and suspicion of newcomers or something much less benign.

My wife
and I moved to the mid-Hudson valley five years ago. At that time, we often felt such
doubts. The demographics of this area
are difficult to read from an outsider’s point of view, and we often felt
uncertain of the nature of our adopted community and its environs. Driving through the countryside on weekend
rambles, we would be mystified by the sudden transitions from quaintly
gentrified little towns with espresso cafes and antique shops to run-down
whistle stops with little more than a gas station and a grain silo, where
locals sip 40-ouncers and stare malevolently as you drive by. While I generally find New Yorkers to be the friendliest people of any state I’ve lived in, I have also walked trails in
the Catskills where people pass by stonily ignoring my hello, or worse, glaring
back silently.  Though I have come to
know my neighbors for the wonderful people they are, when we first moved in,
they frankly gave me the creeps.  Perhaps
this is because one of them introduced himself by saying that he had watched me
carry my wife over the threshold when we first moved in.  Moving into a new place has its perils, in
the city as well as the country, but there’s something especially unsettling
about the country’s unique sense of isolation. If your country neighbors turn out to be monsters, who you gonna call? I’ve seen enough horror movies to be wary of
the local sheriff’s connections. At the
end of the day, one’s doubts and suspicions most often turn out to be
groundless; but then again, what if they’re not?

As with
Roman Polanski’s Rosemary’s Baby,
John Hancock’s film carefully choreographs our doubts by selectively withholding
information and calling its protagonist’s perceptions into question. And yet, as with Rosemary, Jessica’s point of view
is richly, sympathetically rendered, and as the film progresses we begin to
feel that the men in the film are the naïve, deluded ones. Jessica’s world is magical and strange, an
effect largely achieved by Joe Ryan’s complex sound design, in which
non-contextual sounds and voices form a constant countercurrent to the film’s
narrative flow. Wind blows even when the
trees are still, and queasy, seething electronic noises provide an aural
equivalent to the characters’ unease. Jessica’s disembodied voice offers a running disjointed monologue, often
uttered over spare piano or melancholy acoustic guitar figures. The increasing claustrophobia of this
would-be idyll is as much a product of the protagonist’s psychological isolation as of the characters’ physical isolation in the countryside.

With the
entrance into the story of the enigmatic character Emily (Mariclaire Costello),
Jessica’s internal monologue begins to incorporate other voices. Emily appears to be a free-spirited wanderer
squatting in the house newly purchased by the film’s protagonists, but as the
film progresses she seems more deeply connected to the town’s history.  Jessica seems uniquely attuned to this, a
connection furthered by a séance scene in which she declares her receptivity to
“everyone who has ever died in this house.” 
The abiding presence of the dead and their stories is a theme struck
early by the film, when the three main characters (who drive a hearse, by the
way) stop at an old cemetery so that Jessica can take rubbings from
tombstones. These rubbings adorn the
walls of her and Duncan’s bedroom and seem to summon further voices in
Jessica’s head.  In some respects she is
a visionary, attuned to the local spirits.
Yet this potentially empowering receptivity gives way to powerlessness as
the characters begin to reenact the family dramas of those long dead. Let’s
Scare Jessica to Death
moves subtly from being a film about retreating to
an idyllic place to being about the spirits of that place reasserting themselves.

Although
the spirit of place in Jessica is
clearly malevolent, the film’s cinematography, saturated with color and
suffused with shimmering natural light, continues to seduce us into its dark
pastoral world. Like all great horror
films, this one is not only about what horrifies us in our daily lives, but also about what entices us, revealing two seemingly conflicting sides of the same
experience. One of the voices in
Jessica’s mind often repeats the phrase “You’re home now,” but after a certain
point it becomes difficult to tell if this is the incorporated voice of the
mysterious Emily, or Jessica herself; the seemingly malevolent voice of the
rural township or the consoling voice of Jessica’s own city-bred mind, hoping
to reconcile herself to her country retreat. 
At the conclusion of the film we return to where we began, with the
voice-over musing on whether we are living a dream or a nightmare.  Though horror films can show us how easily
one can turn to another, they can also muse upon those paradoxical moments when
our life choices seem to unleash an uneasy combination of both.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

Attention Red Wedding Crashers: Get a Grip, Sit Back, and Enjoy a Best-in-Genre Moment

**Warning: This piece contains spoilers. Read at your own risk.**

If one struggles to name any fantasy-genre
standout on the small screen or silver screen that isn’t a book adaptation, an
animation, or a mawkish cult classic like David Bowie’s Labyrinth, the reason’s simple: American audiences consider
the entire genre frivolous and flippant, and won’t embrace it in new
media unless book-lovers, kids, or hipsters have already given it their stamp
of approval. In other words, outside the context of video games, Americans need
an excuse to love a fantasy-genre production; either it borrows its gravitas
from having sold well in bookstores first, it needs no gravitas
because it’s essentially kiddie-candy, or it operates beyond the reach of
gravitas because it’s pure kitsch. The end result is that no one takes the
genre seriously and, beyond a few hundred thousand mass-market paperbacks sold
annually at brick-and-mortar bookstores, no one really cares much about it.
It’s tangential to American life; it’s a first-world curiosity. The reason?
Fantasy authors, animators, and directors have never found a way to make
readers or audiences feel in their gut the grotesque moral savagery around
which the genre is built, or to see in fantastical morality plays lessons with
timely relevance for modern living, and in consequence no story rendered as a
fantasy ever properly lands with American audiences. It’s simply too removed
from anything that really matters.

The Red Wedding scene from the HBO series
“Game of Thrones” may not have reinvigorated a genre—one could argue that the entire series, which
lights up Twitter and Facebook weekly like few other cultural artifacts do, has
done that—but it may well have reinvented it. Martin’s controversial decision to kill off three major characters in the middle of the series’ seasons-long story arc, unceremoniously ending the two-family feud at the center of that arc seemingly seasons too early, is a best-of-genre moment that has roused much
anger among television-watchers precisely because it changed the ground rules
of an entire genre in mid-stride.

Many Americans, this author included, go to
television generally, and fantasy or fantastical shows specifically, as a means
of escaping time—that is, to watch consequence-free melodrama in a space that
feels entirely removed from anything we really care about. Horror films don’t
meet that standard because they frighten; contemporary dramas, because they make
us cry; comedies, because they make us laugh (and sometimes, when done right,
cry while laughing); and romances because they make us swoon. Fantasy shows and
movies are supposed to be more like documentaries that entertain us in the
absence of any informational content; if they refresh our spirit, they do so
quietly and only with our implicit preapproval.

Enter “The Rains of Castamere,” an
episode of “Game of Thrones” that led fans of the series to take to
Twitter and Facebook to issue death threats to George R.R. Martin, the author of the source novels; many others announced they’d no longer watch the show. Fans of the book
had a similar reaction when the now-infamous Red Wedding scene appeared in the
book on which Season 3 of “Game of Thrones” is based, A Storm of Swords.

In the scene immediately preceding the Red
Wedding in the Robb Stark/Catelyn Stark storyline, the King of the North’s
mother urges him to let his mortal enemies, the Lannisters, know what it feels
like to lose something they love. It’s considered, by both Starks, to be
just about the only thing that will awaken the callous Lannisters from their
complacent wealth and endless political victories (also, a string of de facto
military victories brought on not by their own military prowess but the
weakness and disorder of their enemies). In the very same way, Martin’s killing
off of the two senior Starks has affected a complacent, wealthy, victory-rich
nation—America—by taking from it two characters it loves and admires, and doing
so without any of the advance warning first-world countries implicitly demand
before they’re handed a major defeat. That’s what really gets our goat about
the Red Wedding: It was a sneak attack against our affections and our courage,
launched from a platform (the fantasy genre) which has long been free of
such audience-rattling excursions. It’s no wonder the most successful
fantasy-film franchise in the history of Hollywood, the Lord of the Rings trilogy, was based on a book Britons once voted
the best of the twentieth century and which, consequently, both the English and
their American cousins already know the ending to. The Red Wedding was
something different; it was a nasty surprise that stole from us something we
actually value and made us actually hurt, thereby breaching the contract
fantasy readers and filmgoers have implicitly always had with the genre.

But George R.R. Martin has taken this particular
best-in-genre moment even further, and in doing so has returned fantasy to
real-time cultural consequence for the first time in, well, forever. Fans
mourning the deaths of Robb Stark, Catelyn Stark, and Talisa Stark fail to see
that these are precisely the characters who needed to die. They needed to die
immediately and they needed to die in precisely this way, for what has always
made the fantasy genre the most underrated of all the genres is not only that
(as with the Red Wedding) it carries the capacity to move us as deeply as any
other form of entertainment, but also that it teaches us better than any other
genre about the moral savagery that still endangers us daily. Whatever we may say
of their deaths, the now-deceased Stark trio each had it better than almost
anyone in Westeros, which left viewers with little to learn from them except
the falsehood that in an unpredictable world the emotionally rich are rarely in
peril. 

Robb Stark had a father who not only loved him
but inspired him, a mother who loved him and modeled for him every strength a
man or woman of any time-period could need, a wife with whom he shared true love, a home for which he felt genuine fondness and with which he shared a
genuine spiritual attachment, brothers and bannermen and vassals whom he loved
and who loved him in return. He knew himself, he knew his cause to be just, and
he knew himself to be capable of generative moral audacity and abiding
political courage. The same could be said of his wife and his mother, excepting
that his mother also enjoyed the most loving marriage in Westerosi history for
several decades and was perhaps the first mother in Westerosi history to be
sincerely and justifiably proud of every one of her children (even Sansa). The
tragedy of her last year of life, like the tragedies of Robb’s and Talisa’s
last months together, in no way erases the permanent mark of a life well lived.

In a fantasy book or film, we expect emotional
removal and cultural irrelevance, and so we expect a life well lived to end happily,
as in our own reality it so often does not. In our reality, children are killed
by cluster bombs dropped pursuant to military squabbles they have nothing to do
with; loving mothers are killed in childbirth or by drunk drivers or from
breast cancer; good men are ruined by men with fewer scruples, baser instincts,
and a larger quantity of money. Sometimes, but with precious rarity, what is
true in life is also true in fantasy: We learn from goodness, when we learn
from it at all, only from its downfall. That that’s a lesson we rarely get from
artifacts of the fantasy genre is something we’ve come to live with; in fact,
it’s become something that (ironically) makes fantasy palatable to American
audiences.

We call George R.R. Martin a cretin for killing
off the three most noble Starks this side of Arya—Ned, Robb, and Catelyn—but
look for a moment at the miserable lives of his tale’s supposed
“victors.” Cersei is still alive; she’s a beautiful and intelligent
woman who’s never felt romantic love for anyone but her brother, is afforded a
tenth of the respect her intellect deserves, was married off like a parcel of
property (and is about to be so married again) to a man she doesn’t love or
respect, has no mother and fears rather than loves her father, has no friends,
parented a sociopath into a reign of unfettered derangement, and will never
achieve even a fraction of her life’s ambitions. Her brother and lover Jaime
Lannister has led a life of such self-loathing that the first consequential
interpersonal encounter of his thirty-something years is with a six-foot-tall
virginal pariah who’s charged with his prisoner’s transport; it’s not clear that
he’s ever had sex with anyone but his sister or been loved by anyone but her
and his near-universally-despised little brother. Petyr Baelish has spent his
entire life pining after a woman who doesn’t love him and compensating for a
childhood spent getting the snot beat out of him by stronger, taller,
better-looking, better-armored men. He has not a single friend. Lord Varys is a
eunuch who endured years of penury, torture, rape, and public
humiliation just so he could work harder than anyone in his immediate vicinity
on behalf of a kingdom that does not appear to deserve (or in any sense
appreciate) his efforts to counter Baelish’s Chaos with Order. Let’s put aside
that no one loves him, either, that he loves no one, and that his only
“friend” is Tyrion Lannister—who doesn’t trust him. All of these
people, and the many other Lannisters and assorted baddies who survived the Red
Wedding, are miserable wretches whose lives and loves we do not admire or envy.
The few days and weeks and months we’re permitted to watch their lives
notwithstanding, they’ve suffered substantially more, and lived substantially less
well, than those they have killed or have just heard were killed at the
Red Wedding.

The lesson of the Red Wedding, then, isn’t just
that well-written fantasy takes from us things that are precious to us in a way
that actually hurts us, but that we learn more from the suffering of the bad
than the clean living of the good. This isn’t a lesson we normally associate
with fantasy—in fantasy, or so the casual fantasy-watcher thinks, the evil
ultimately perish and the good ultimately prosper—but it’s a lesson many of us
have been associating with the very best exemplars of the genre for a very long
time. If you’re a Ned Stark-like father-figure who happens to live in a
war-torn Middle Eastern country, all your hard lessons about righteousness and
many years of dedicated love may not keep your children or wife safe; if
you’re a homosexual in the wrong place on Earth, your true love for another may
someday lead to your brutal murder; if the way you live and love is an
inspiration to others, you may have your entire life toppled someday by someone
lacking your stringent codes of honor and various self-restrictions. The only
way to encourage a nation to fight the worst human instincts—whether they arise
from within the nation or without it—is to engender in that nation an abiding
understanding of what it means to lose what one loves and what it means to
watch the devious succeed. By the same token, the only way to encourage a
nation to honor the best human instincts—whether they arise from within the
nation or without it—is to enforce an understanding that goodness sometimes
leads to happiness before it leads to tragedy, and that savagery often leads to
misery before (and even while) it leads to perpetual skin-of-one’s-teeth
survival.

One of the worst things about human history is that
we have often learned the above lessons, when we’ve learned them at all, from
violence and loss of life; one of the best things about human history is its
continual production and reproduction of art, and one of the best things about art is that it teaches us what we need to learn about ourselves and language
and the nature of attachment without any accompanying need for bloodshed.

Don’t hate George R.R. Martin for taking from
you what you love, “Game of Thrones” viewers; thank him. Don’t hate “Game of Thrones” for bending the conventions of fantasy to make you feel something real in real time; be grateful for it. And don’t underestimate the
beauty of something good—whether a life or a love—because it’s ended, nor
overestimate the comforts of something false and miserable because it persists.
Most of all, don’t treat the death of a pregnant woman, her husband, and his
mother as the end of an era for a television program; treat it as what it is:
the rebirth of an entire genre, and a regeneration of the belief all
well-intentioned persons share, which is that living justly and kindly is its
own reward and earns back any subsequent cost a thousand times over.

Seth Abramson is the author of three collections of poetry, most recently Thievery (University of Akron Press, 2013). He has published work in numerous magazines and anthologies, including Best New Poets, American Poetry Review, Boston Review, New American Writing, Colorado Review, Denver Quarterly, and The Southern Review.
A graduate of Dartmouth College, Harvard Law School, and the Iowa
Writers’ Workshop, he was a public defender from 2001 to 2007 and is
presently a doctoral candidate in English Literature at University of
Wisconsin-Madison. He runs a contemporary poetry review series for
The Huffington Post and has covered graduate creative writing programs for Poets & Writers magazine since 2008.

BIG PRESS PLAY NEWS!

Some big changes are happening at Press Play!

For
those of you who haven’t heard, our friend and co-founder, Matt Zoller
Seitz, is now the Editor-in-Chief of RogerEbert.com. Congratulations,
Matt, and the very best of luck! Ebert Digital could not have chosen a
better person to replace Roger.
Matt’s integrity and humanistic, very personal
style of critical writing are qualities Ebert himself possessed, in spades.

Also
in Press Play news, Max Winter will shoulder the (big) job of
Editor-in-Chief. Under his leadership, the site will continue to publish thoughtful and enlightening writing, both critical and
journalistic, as well as video pieces that can be watched over and
over again.

Ken Cancelosi, Publisher
Max Winter, Editor-in-Chief

The Dangers of An Empty Suit: Marvel Comics’ War on War Continues

(Warning: This article contains spoilers for the film Iron Man 3.)

The implicit argument of every comic book and
comic book-inspired movie is that the world outside comic books is a better
place for having no superheroes in it, and a far worse place for having so many
warmongers. Iron Man 3 is Marvel
Comics’ strongest argument yet on both scores. True, the Iron Man films have always been conspicuously anti-war—Stark
removes his privately-funded R&D enterprise from U.S. Defense Department
involvement in the first entry in the now-trilogy—but Iron Man 3 is a uniquely instructive exemplar of Marvel’s war on
war by way of Hollywood.  

In Iron
Man 3
, the United States, in the person of billionaire playboy and
self-described “mechanic” Tony Stark (Robert Downey Jr.), has perfected the
drone as a weapon of mass destruction. Whereas Stark actually had to be in his specially-designed metal-alloy
suit to become “Iron Man” in both Iron
Man
and Iron Man 2, the man is
now superfluous to the machine: Downey’s titular character has a veritable army
of man-shaped drones (a metaphor that ought not be lost on us) ready to do his
bidding at a moment’s notice.

In one particularly charged scene toward the
end of the film, Stark says to his nemesis, of girlfriend Pepper Potts, “she’s
perfect as she is.” As action-flick dialogue goes, this is pretty insignificant
stuff, yet it’s also a good summary of the chief theme of Iron Man 3, which ultimately pits men who believe they’ve perfected
machines against men who believe they’ve perfected humans. It’s no spoiler to
say that neither pipe dream is realized in the end; the question is just how
lifelike both the dream and its perpetual deferral really are.

Stark’s technological breakthroughs don’t fall
very far from our own reality, given that just a couple weeks ago the real-life
United States Navy launched a drone from a nuclear-capable aircraft carrier for
the first time. This means that American drones can now officially drop
cluster-bombs on anyone, anywhere, at any time, as if that weren’t already the
case in practice. Meanwhile, the Obama Administration recently launched an
initiative to map the human brain—in the same way scientists mapped the human
genome several years ago—and in this well-intentioned effort there’s an eerie
echo of the baddies of Iron Man 3,
who believe they’ve perfected the human body by (you guessed it) mapping the
human brain to create an army of super-soldiers. In short, Iron Man 3 asks us to ponder the question: Is the perfect man any
less dangerous than the perfect machine, and isn’t Pepper Potts (Gwyneth
Paltrow) actually perfect just the way she is?

But Marvel Comics’ increasingly cerebral and
interconnected film productions are wont to do much more, now, than simply
throw mud at all corners of the global military-industrial complex. The
presidential administration portrayed in Iron
Man 3
, which appears to be vaguely Republican (much is made of the White
House doing nothing to investigate a major oil spill, an oversight an oilman
president, say, might be apt to make), dresses up its Don Cheadle-cum-War
Machine drone in patriotic colors, redubbing it The Iron Patriot, and it’s this
obsession with re-marketing drones as a nationalistic imperative that nearly
gets Marvel’s imaginary President Ellis blown to Kingdom Come. The message is
clear: The more attractive-looking the drone, the more likely it can be used as
a Trojan Horse for dangerous geopolitical initiatives and even more dangerous
first principles.

Likewise, the villain of Iron Man 3 is not, as it turns out, a gnarly Ben Kingsley—whose
primary job in the film is to look dirty, foreign, asexual, and (worst of all)
old—but rather a blond, perfectly-coiffed Lothario who (as it happens) can
literally breathe fire. Here, too, the message is clear enough: Dress up a
villain in something like the clothes we’d expect a “winner” to wear, and it’s
not much different from dressing up a nation’s foreign policy in those
metaphoric clothes we expect “winner” nations (that means us Americans) to
favor. Each of these premises is equally alluring; each is even—at the risk of
taking the analogy too far—equally sexually intoxicating. Yet both are threats.
The upshot is that we don’t need or want perfect men or women, any more than
we’d want perfect war machines. This isn’t to say we shouldn’t map the human
brain, or strive to perfect certain strains of military-industrial innovation (recent
advances in non-lethal weaponry come to mind), but rather that it’s the perpetual
search for perfection and self-perfection that often leads us to destruction. This
theory can be applied with equal force to men and women who judge others
primarily by their physical appearance and to voters who judge elected officials
by how good a game they talk on anti-terrorism and national defense.

What Tony Stark ultimately learns in Iron Man 3—we’ll see if the lesson
sticks in Iron Man 4—is that he needs
to be more fully human, not more fully superhuman. He finally has the metal
shards lodged in his heart removed so that he can once again function without
the aid of blood-pumping machinery; he turns aside from his “mechanic” identity
by destroying the fruits of his labors in spectacular fashion; he re-dedicates
himself to his relationship with the already-perfect Pepper Potts by increasing
their face-time and decreasing his log-times (after first paying for surgery to
reverse artificial “perfections” performed upon Pepper by the villainous
Mandarin); and he concludes, in a final voiceover, that he’s the “Iron Man”
even if all his high-tech toys are taken away—something many a Marvel fanboy
would dispute. In other words, Stark discovers that it’s not enough to turn
aside from direct complicity with warmongers; what’s required of a strong and
capable human is the ability to turn aside from the fallacy of perfectibility,
too. 

This message is one particularly at odds with contemporary
American culture, which convinces us more easily than we’d like to admit that
there isn’t a single facet of our physical or emotional well-being we can’t
perfect with a crash diet or a brain-boosting iPad app. Likewise, Marvel seems
to take a dim view of the current penchant for political panaceas: The idea
that a single political solution exists (whether in the form of a politician or
a policy) for the complex problems of the nation and the world is one with
little backing in any of the recent Marvel films. Indeed, it’s not too much to
say that Marvel Comics is reminding us anew, with each successive film in the Avengers network of storylines, that
the worst sort of war is the war we wage daily against our own fears of
fallibility and failure, as it’s this sort of windmill-tilting that ultimately
leads us down the path to ruin. Tony Stark’s realization that his desire to
protect Pepper from alien invaders is fueling the destruction of both their
relationship and his psyche—in the same way the fictional United States of Iron Man 3 fuels its own demise by
up-jumping its fear of terrorism to ever more frenzied levels—is just the sort
of thing Yoda always warned us about (“Fear leads to anger, anger leads to
hate, hate to suffering”).

Ironically, it’s a yearning after perfection that
sells untold millions of comic books to young male and female consumers the
world over, so we ought to read Marvel’s Avengers
films as a particularly ingenious bit of reverse psychology. If we actually
took the lesson of Iron Man 3 and its
ilk to heart, we too would blow up our personal anxieties, demand real rather
than Hollywood courage from ourselves and the many empty suits in political
office, and plant a long, lingering kiss on the already-perfect lips of
whichever Gwyneth Paltrow is presently brightening our days.

Seth Abramson is the author of three collections of poetry, most recently Thievery (University of Akron Press, 2013). He has published work in numerous magazines and anthologies, including Best New Poets, American Poetry Review, Boston Review, New American Writing, Colorado Review, Denver Quarterly, and The Southern Review.
A graduate of Dartmouth College, Harvard Law School, and the Iowa
Writers’ Workshop, he was a public defender from 2001 to 2007 and is
presently a doctoral candidate in English Literature at University of
Wisconsin-Madison. He runs a contemporary poetry review series for
The Huffington Post and has covered graduate creative writing programs for Poets & Writers magazine since 2008.