What Mise-en-scène Is and Why It Matters

1. What Is Mise-en-scène?

Any student of the cinema quickly encounters the term mise-en-scène, and often comes away the worse
for wear. The word—or is it words?—is long and funny-looking (to those who
don’t speak French). Making matters worse, the term isn’t always spelled the
same way: sometimes there’s an accent, sometimes there aren’t any hyphens, and sometimes
it’s written in roman type, not italics.

The term’s meaning is similarly complex, having shifted many
times over the years since its creation; it has also gotten bound up in several
different arguments, many of which no longer concern us directly. In this
article, I aim to survey that evolution, paying special attention to how it has
become associated with only particular types of filmmaking—the cinema of the
long take. Finally, I’ll argue against that tendency, and attempt to
demonstrate the relevance of mise-en-scène
to the short take.

First things first. Mise-en-scène
was applied to film in the 1950s by the French critics writing at Cahiers du Cinéma (Notebooks on Cinema). They borrowed it from French theater, where
it essentially referred to everything that appears on the stage (it literally
means “putting in the scene”). The thinking was that a film’s mise-en-scène consisted of everything
that the camera sees: the setting, the lighting, the actors, their performances
(including blocking), costumes, makeup, props. It also referred to how those
elements were arranged within the frame—in other words, it was synonymous with
the shot’s composition.

A few problems sprang up immediately. The first was that the
Cahiers critics never defined their
term all that precisely. Alexandre Astruc famously called mise-en-scène “a song, a rhythm, a dance” (267); in a 1998 interview,
Astruc’s Cahiers colleague Jacques
Rivette claimed, “Here’s a good definition of mise en scène—it’s what’s lacking in the films of Joseph L.
Mankiewicz” (Bonnaud). I thought at the time that Rivette was simply being
cheeky, but there’s a way in which he’s also deadly serious: he means that All About Eve, despite literally having
lighting and staging and props and settings, etc., nonetheless somehow lacks a
certain special quality, which is mise-en-scène.
Delving into the Cahiers writing of
the 1950s makes it apparent that there was, right from the start, a tendency to
define the concept loosely, poetically—which is what led critic Brian Henderson
to later call the term “undefined” (315).

The second problem arises when you consider that people who
make films see different things than those who view them. When you watch a
play, the stage is in front of you, and it’s clear what’s on it and what isn’t.
But films differ from theater in two key aspects. One, the camera frames the
image. Two, cinema includes cuts (edits).

Let’s say you’re making a film, shooting a scene on a busy
street. The camera sees only so much of that street, but you, being there, can
see the whole thing (and the actors can see the whole thing, which presumably
influences their performances). Where does the mise-en-scène begin and where does it end? What’s more, a lot of
what you shoot won’t end up in the film—parts of takes, and perhaps even whole
takes (what we today call “deleted scenes”), will end up on the cutting room
floor, or in some separate portion of a hard drive. What happens to the mise-en-scène of those images?

This is why mise-en-scène
isn’t really a production term—as Astruc had already noted by 1959, it’s not
something that filmmakers talk about when they’re shooting (267). Instead, it’s
a critic’s term, referring to the content of shots that appear in the finished
film. And since it refers to the content of the shot, then it also must refer
to camera movements, since panning and tracking change the shot’s content.
(The famous long take in Goodfellas
that follows Henry Hill and his date as they enter the Copacabana via the
kitchen features more than one setting, as well as numerous actors, props,
costumes, and so on.)

So mise-en-scène
refers to the entirety of any given shot: the stuff that was filmed, as well as
how it is framed (and how that changes). And in many places, the term has more
or less survived into the present day in this form. For instance, here’s how Ed
Sikov’s Film Studies: An Introduction
(2010) defines it:

“Everything—literally everything—in the filmed image is described by the term mise-en-scene: it’s the expressive totality of what you see in a single film image. Mise-en-scene consists of all the elements placed in front of the camera to be photographed: settings, props, lighting, costumes, makeup, and figure behavior (meaning actors, their gestures, and their facial expressions). In addition, mise-en-scene includes the camera’s actions and angles and the cinematography, which simply means photography for motion pictures. Since everything in the filmed image comes under the heading of mise-en-scene, the term’s definition is a mouthful, so a shorter definition is this: Mise-en-scene is the totality of expressive content within the image.” (5–6, italics in the original)

But when one stops to think about this concept, one sees how
even this is problematic. For one thing, how is mise-en-scène any different from the term “shot”? Or “composition”?
Obviously, we’re not dealing with the actual things in the shot—the actual
setting, the actual props—but a two-dimensional record of them, frozen in a
particular arrangement. What’s more, if every shot is essentially its mise-en-scène, and a film is made up entirely
of shots, then isn’t mise-en-scène in
fact synonymous with the entire film? Which is to say, isn’t mise-en-scène synonymous with cinema
itself?

2. Mise-en-scène and the Long Take

Some critics noted straight away that the one thing that mise-en-scène didn’t refer to was editing. As such, they started using mise-en-scène and editing as antonyms.
Here it will help to know that, for the Cahiers
critics, editing was a hotly contested topic. Simply put, certain film theorists
who had gone before them—namely Lev Kuleshov and Sergei Eisenstein—had
emphasized the importance of editing, or montage. To them, the artistry of
cinema lay very much in how a film was assembled from disparate shots. This was
due to their noticing early on how editing could be used to create wholly
artificial relationships between shots. For instance, you could shoot a person
looking up at something on one side of town, then go to the other side of town
and shoot an image of a sign. When you edited them together, the resulting film
gave the impression that the person was looking at the sign, even though that look
was impossible in real life. Similarly, you could film a person walking into a
building in one locale, then film a different interior. And so on.

There proved to be no end to the artificial relationships
that you could create between shots. We partly understand this phenomenon today
as the Kuleshov Effect. If you take a picture of a man’s face, and follow it
with a shot of a bowl of soup, it creates the impression that he’s hungry. But
if you follow it with a shot of a woman reclining on a divan, it makes it look
like he’s ogling her. (See this entire clip for a humorous description by Alfred Hitchcock of how editing changes the way viewers interpret shots.)

To put things very crudely, the critics at Cahiers du Cinéma began questioning the
importance of montage. They were led here by André Bazin, whose background was
in documentaries and Italian Neorealism. As such, he was less interested in how
cinema could artificially warp reality, and more interested in how it could be
used realistically. Accordingly, he devised a theory of cinematic realism in
which he proposed that the history of cinema was one of an increasing capacity
for realism. The way he saw it, improvements in film technology allowed
filmmakers to more faithfully capture reality. Improved film stocks (including
the development of color) allowed for higher resolution images. Sound cinema replaced
silent cinema. Widescreen formats allowed for larger compositions. Cameras got
smaller, enabling filmmakers to leave studios and shoot on real locations. Lenses
improved, allowing for deeper focus shots. And takes could also get longer and
longer, being less limited by the capacities of earlier reels.

Given this, Bazin and his protégés deemphasized the artistic
importance of the cut. They argued that it had less to do with cinema’s “expressive
content” than did the content and composition of the shot itself. So it’s
no wonder that they invented and emphasized the concept of mise-en-scène. And it’s because of this period in film criticism that
the term came to mean something opposed to editing. People began speaking of
two different approaches to filmmaking: editing (or montage) vs. mise-en-scène (which got tied up with other
devices that Bazin favored—long takes and deep focus). According to this line of
thinking, a director necessarily favored one approach over the other. The art
of cinema was either one of cutting or of long takes.

Critics, too, often fell into one camp or the other. Those
who supported montage noted how editing allowed for the manipulation of reality,
and the creation of effects that were impossible in real life. Such arguments,
of course, became the very grounds for dismissal from the long takes / mise-en-scène camp. To them, filmic
artistry depended not on artifice, but on the faithful imitation of reality. According
to this line of thinking, since we experience time and space continuously, a
superior cinema—a primarily realist cinema—should by definition avoid cutting. Returning
to our earlier example from Goodfellas: when we follow Henry Hill
from his car through the kitchen and to a table in front of the stage at the
Copacabana, we see how all those spaces are connected; we aren’t just cutting
from an exterior shot to an interior shot on a back lot or on a soundstage.

If these arguments sound quaint, then I hasten to stress
that I am indeed oversimplifying them here in order to highlight a very
particular historical debate. It’s also worth mentioning that Bazin died quite
young, at the age of 40 in 1958, and as such had no control over the ways in
which his arguments were later transformed by some into clichés. There are of
course complexities and subtleties to this long history of criticism that a general
survey necessarily omits. It is also indisputable that modern film studies is
largely based on the work of Bazin and the Cahiers
critics. Without their contributions, we critics of today would be
significantly impoverished. (We might not even be here!)

That having been said, there is a historical tendency to
oppose mise-en-scène to montage—an
entrenchment that lives on today in various forms. It’s hardly unusual to hear film
buffs claim that long takes are somehow inherently superior to shorter ones.
For instance, cinephiles often celebrate movies like Goodfellas and Russian Ark
and Children of Men and Gravity simply because they feature
long, complicated shots; meanwhile, people dismiss Michael Bay’s Transformers films, or movies like Quantum of Solace, because they feature way
too much cutting. These arguments are heir to the debate between Bazinian mise-en-scène and Eisensteinian montage.
Meanwhile, plenty of critics continue to equate mise-en-scène with long takes—see, for example, the opening line of
Ben Sachs’s recent Chicago Reader review of Gareth Edwards’s Godzilla, as well as this AV Club article by Mike D’Angelo, which directly engages the debate between editing and long takes (and does so by opposing mise-en-scène to montage). And the Wikipedia article on mise-en-scène, while garbled as a whole (and of course always prone to sudden revision), contains some language equating mise-en-scène with long takes.

(Actually, the Wikipedia article is even more restrictive in
its usage, equating mise-en-scène only
with something called “oners,” or scenes that are filmed in single takes, and
that also feature mobile camerawork. This is so selective an association that
it renders the term practically useless. It’s also fairly nonsensical. This
particular line was added by a now-defunct Wikipedia contributor, “StephanDuVal,” who popped into the
conversation for twenty minutes two years ago, then disappeared. Since then,
various users have randomly appended sources that themselves don’t employ the
term, resulting in the kind of hodgepodge so typical of Wikipedia. Instead
of defining the term objectively, the article stakes out a peculiarly small
tradition. A term that was once seemingly synonymous with all of cinema is
there reduced to the point where it refers only to a minuscule number of shots
in a minuscule number of films! Not even the most fervent devotees of Bazin
ever restricted the usage of mise-en-scène
to scenes that were executed in single, mobile takes.)

3. Mise-en-scène and Its Discontents

Keeping this convoluted history in mind, I want to examine
what is overlooked by the historical tendency to associate mise-en-scène with the long take, and to
oppose it to editing. Because I believe that these traditional oppositions and associations
limit our understanding of the richness and artistry of the cinema.

For starters, let’s look more closely at Bazin’s argument
that the long take is better than the short one for representing reality. A
common argument here (one still hears it today) is that whenever a filmmaker
cuts, he or she is guiding the viewer’s attention, and forcing them to look at
particular things in particular ways. By way of contrast, Bazin argued that
long takes allowed viewers more freedom—they could look where they wanted. This
contributed to the idea that long takes are somehow more respectful of film viewers,
and as such require more sophisticated viewers. Over time, this created the
kneejerk assumption that long takes are somehow smarter than shorter ones (an
idea that lives on in the attacks on Michael Bay).

But is this argument necessarily true? There are many
reasons to doubt it. For one thing, all shots, long and short, are equally
artificial. It simply isn’t the case that as a shot gets longer, it somehow gets
truer. To think that way overlooks the artifice of the long take.

For instance, Bazin’s arguments about how long takes were more
respectful or less manipulative than shorter ones don’t always hold up to
scrutiny. As it turns out, there is nothing stopping long takes from being just
as composed and manipulative as shorter ones. Directors have many tools at
their disposal to direct the viewer’s attention through the long take, just
like they do in shorter ones. Composition can be, and often is, a means for directing
attention. So, too, are performances and camera movements. In other words, there’s
no reason to assume that mise-en-scène
is any less “manipulative” than editing.

This point is well made by David Bordwell in his article “Widescreen
Aesthetics and Mise-en-Scène Criticism” (1985, available here as a PDF). In particular, Bordwell observes how Otto Preminger’s use
of widescreen was celebrated by certain critics operating in the Bazinian
tradition. He relates how two of those critics, V.F. Perkins and Charles Barr, praised a scene in River of No Return (1954) in which Marilyn Monroe’s character, Kay, drops her
valise while boarding a raft. As the scene continues, we see the valise drifting
away in the background of other shots. The argument here is that Preminger has
left it up to the viewer to see this detail, even as the action continues in
the foreground. Both Perkins and Barr celebrated Preminger’s employment of long
takes and deep focus, arguing that they gave his films a kind of naturalism,
transparency, and subtlety.

But Bordwell argues that this isn’t the case at all. In his
analysis, he notes how the film actually employs several devices that function
to draw attention to the disappearing valise:

“When Kay drops the valise she glances
frantically toward it and cries out, ‘My things!’ Harry shouts, ‘Let it go!’ At
the same moment, the camera pans sharply to the right to reframe the valise,
and a chord sounds on the musical track. Our attention to the drifting bundle
is just as motivated. For one thing, the bundle is initially centered when Matt
and Henry pass. Furthermore, Preminger has anticipated this camera position a
few shots earlier, when Matt ran to the edge of the bank. It is common for a
classical film to establish a locale in a neutral way and then return to this
already-seen camera setup when we are to notice a fresh element in the space.
We thus identify the new information as significant against a background of
familiarity. As a fresh element in a locale we have already seen from a
comparable vantage point, the bundle becomes noteworthy. In sum, Preminger’s
staging of the scene stands out because it avoids editing, but it uses other
means to draw our attention to the bundle—centering, the return to a familiar
setup, and the repetition of cues for the bundle’s loss.” (22–3).

So much, then, for Preminger’s supposed naturalism and transparency.
His long takes and use of deep focus—hallmarks of Bazinian realism, and supposedly
free of manipulation—turn out to be saturated with artifice, and highly
manipulative. (Preminger is hardly the only example one can find of this—Citizen Kane, for instance, also uses
composition and sound cues to focus its viewers’ attention, in addition to its celebrated
usage of long takes and deep focus.)

Another problem with the critical tendency to oppose long
takes and editing is that it ignores the many ways that those two techniques commonly
work together. This point is well made by Brian Henderson in his 1976 essay “The
Long Take,” which seeks to deconstruct the false binary between Bazin’s long
take and Eisenstein’s montage. For one thing, Henderson points out that long
takes still have duration, beginnings and endings, and as such still employ
editing—they’re edited together. What’s more, even a director like Max Ophuls—truly
a master of the long take if there ever was one—rarely assembled his films out
of nothing but long takes. Thus, a film with many long takes may also feature
shorter ones, and those shorter takes may in fact come between long takes:

“The present article takes its chief
emphasis from the fact that the long take rarely appears in its pure state (as
a sequence filmed in one shot), but almost always in combination with some form
of editing. […] Most analyses of long take directors and styles concentrate on
the long take itself and ignore the mode of cutting unique to it—what we call
below the intra-sequence cut. But such cuts or cutting patterns (one could even
speak of cutting styles) are as essential to the long take sequence as the long
take itself.” (316)

Throughout the essay, Henderson patiently draws attention to
these problems in order to ultimately argue that a film criticism that simply opposes
long takes and editing is bound to overlook the crucial role that editing plays
in defining the long take, and sequences of long takes. His goal is to point
out an area of filmmaking that has largely gone unstudied. Sadly, the tendency
to diametrically oppose mise-en-scène
with cutting prevails nearly forty years later, leaving a fascinating realm of
cinema still largely unstudied. At the present moment in popular film
criticism, the championing of long takes has once again risen to something of a
fetish. It receives a disproportionate amount of attention despite the fact that the long
take is but one element of filmmaking, no better or worse than any other.

A related problem is the tendency to measure the length of a
film’s takes by calculating the Average Shot Length, or ASL. This value is
calculated by dividing a film’s running time by the number of shots it
contains. And ASL is a very useful value in many respects. For one thing, when
one surveys many different films, ASL can give a general sense of how rapidly
films are cut in a given place or time. Thus, one can say that the average rate
of cutting in Hollywood cinema has increased throughout the sound era. Or, one
can attempt to catalog which contemporary Hollywood films feature the longest
ASLs—as I did here, in an earlier article for PressPlay.

But ASL also leaves out a lot of information, especially
when one is analyzing specific films. Alfonso Cuarón’s Gravity,
for instance, has an ASL of roughly 35, since it features 156 shots in 90
minutes. (I’m speaking approximately here; I’m going off reported numbers, and
haven’t performed this analysis myself. Also, I don’t know the actual runtime
of the film, sans credits. The exact ASL, however, is beside the point.) Having
in hand an ASL of 35 doesn’t mean that every shot in Gravity is 35 seconds long—or indeed that any shot in Gravity is 35 seconds long. The opening
shot, for instance, is at least twelve minutes long—meaning that, on average,
the remaining 155 shots have an ASL of 30 seconds. And for every shot longer
than that, there must also be shots shorter than that. What of them?
Are they any good? What is their relationship to the longer shots in the film?
Or is Gravity only good during its
long shots? (And if that’s the case, then why?)
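As an aside, the arithmetic here can be sketched in a few lines. The figures below are the approximate, reported numbers discussed above (a runtime of roughly 90 minutes, 156 shots, a twelve-minute opening shot), not precise measurements of the film:

```python
# Average Shot Length (ASL): running time divided by number of shots.
def asl(runtime_seconds: float, shot_count: int) -> float:
    return runtime_seconds / shot_count

runtime = 90 * 60   # ~90 minutes, in seconds (approximate, reported figure)
shots = 156         # reported shot count for Gravity

overall = asl(runtime, shots)                  # ~34.6 seconds per shot
# Set aside the ~12-minute opening shot, and the average for the
# remaining 155 shots drops to roughly 30 seconds:
remainder = asl(runtime - 12 * 60, shots - 1)  # ~30.2 seconds per shot

print(round(overall, 1), round(remainder, 1))
```

Note how a single very long take skews the figure: the mean says nothing about the distribution of shot lengths, which is exactly the limitation at issue here.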

The fetishizing of long takes is part of a larger,
long-running problem in film criticism, which as a whole is arguably less
critical than it pretends to be. As David Bordwell has expressed it:
 

“Instead of asking how films work or
how spectators understand films, many scholars prefer to offer interpretive
commentary on films. Even what’s called film theory is largely a mixture of
received doctrines, highly selective evidence, and more or less free
association. Which is to say that many humanists treat doing film theory as a
sort of abstract version of doing film criticism. They don’t embrace the
practices of rational inquiry, which includes assessing a wide body of
evidence, seeking out counterexamples, and showing how a line of argument is
more adequate than its rivals.” (“Articles”)

Put another way, the fascination with the long take risks
becoming entirely symptomatic, and uncritical. What makes a movie good? Long
takes! How do you know which movies are the best? Why, just check which ones
feature the longest takes! This is a totally dumbed-down type of film
criticism, where all we need do is calculate ASLs in order to rank all the
movies ever made.

I don’t want to imply that long takes aren’t important, or
don’t feature a special relationship with mise-en-scène.
Certainly we should be sensitive to the unique challenges and properties of
the long take, and to how it presents its content to the viewer. No film better
illustrates this than Aleksandr Sokurov’s feature Russian Ark
(2002), whose 96 minutes of footage consist of a single take. I myself watched
the film twice in a row in the theater, something I’ve rarely done—but Russian Ark is truly an atypical film.

However, is Russian Ark somehow more realistic than films that feature editing? Hardly. It’s worth
remembering, à la Henderson, that the 96-minute-long shot, and the film itself,
still has a beginning and an end. When compared with a person’s life—or even a
single day—it is still but a minuscule slice of time, unable to compete with actual
lived experience.

What’s more, we would do well to remember Bordwell’s
analysis of Preminger. The film as a whole, rather than being some transparent
documentation of reality, is entirely contrived. The single take carries us
from room to room, and from scene to scene. We go where Sokurov takes us. And
the man isn’t just wandering the Winter Palace of the Russian State Hermitage
Museum with a camcorder, capturing whatever reality he finds there. Instead,
he’s organized everything that we see. His camera movements and framing glide
along very differently than people do, being balanced by a Steadicam. And
they continuously direct our attention, focusing it on particular aspects of
the spectacle. Meanwhile, everything that appears on screen is the product of meticulous
design and rehearsal. And we aren’t even seeing the first take, but the fourth!
(Goodfellas’s Steadicam passage
through the Copacabana is similarly no more real or less artificial than any
other shot in any other film ever.)

Along these lines, associating mise-en-scène exclusively with long takes perpetuates the bias
toward long takes, since they then seem to have a special quality (mise-en-scène!) that’s lacking in
shorter ones. Because, I mean, if cutting eliminates mise-en-scène, then aren’t shorter takes inherently worse? But short takes
do have mise-en-scène, and
understanding the connection between the mise-en-scène
and the montage is extremely important. To put it another way, if montage is
the study of the interrelation of shots, and all shots possess a mise-en-scène, then montage is also the
study of the interrelation of mise-en-scènes.
This is a topic just as worthy of serious critical attention as the study of
individual long takes. What is needed, overall, is a critical approach to cinema
that seeks to relate the various parts to the whole (as we find in the works of
critics like Bordwell and Henderson).

4. Mise-en-scène and the Short Take

In order to demonstrate the importance of mise-en-scène in short-take cinema, I’d
like to devote the remainder of this article to analyzing a scene from Scott Pilgrim vs. the World
(2010), looking at how mise-en-scène and
editing work in concert to produce several complicated larger effects. A few
notes first. I chose this scene because the editing in it is very fast. (The
editing in Wright’s films tends to be very fast in general.) Here, we have 24
shots in 57 seconds, yielding an ASL of only about 2.4. That of course doesn’t tell
us how long each shot is, but it’s worth noting that this ASL is lower than in
most contemporary Hollywood films, which tend to hover in the 3–6 second range.
And yet, despite the brisk pace, a great amount of information is communicated in
this minute of film. Let’s see how that is done.

The scene in question occurs roughly 26 minutes into the
film. Scott Pilgrim has just had his first date with Ramona Flowers. Later that
day, his band (Sex Bob-omb) is due to play in a battle of the bands at a club
called the Rockit.

Ramona arrives, surprising Scott; she then meets some of
Scott’s family and friends. She also meets Scott’s current girlfriend, Knives
Chau, who kisses Scott, causing the young man to stammer and flee. Along the
way, we also get the beginnings of a subplot in which Wallace will seduce Jimmy
away from Stacey. In order to understand how Edgar Wright accomplished all of
this (and more!), we need to examine his sophisticated deployment of mise-en-scène.

For one thing, even though the Rockit isn’t the primary
focus of the scene, the setting is still important. The first two shots (of the
club’s sign and the interior, including the stage) function as establishing
shots, after which we catch glimpses of people milling about, and crew members
preparing for the upcoming battle of the bands. The next ten minutes of the
film will take place at the Rockit, and these establishing and background
elements help set the stage (literally) for the coming action. The setting also
figures into the film’s larger plot: its dive-bar atmosphere (“this place is a
toilet”) helps establish the upward progress that Scott and his bandmates are
striving to make, which will be entwined with Scott’s struggle to win Ramona’s
heart. As both Sex Bob-omb and Scott advance, the clubs grow progressively
nicer until they wind up at the final battle, at Gideon Graves’s
state-of-the-art Chaos Theater.

Other background elements are also doing important work.
Edgar Wright sets up a quick joke by using the first few shots not only to
reveal information, but to conceal some as well. Ramona arrives and greets
Scott, and we get some conversation between them done as shot-reverse-shot.
Wright then cuts to reveal that Wallace, Stacey, and Jimmy are also present,
and have been standing there the whole time. The reveal is humorous, and helps
underscore Scott’s obliviousness (he has eyes only for Ramona). (The maneuver
recalls the joke in the opening scene of Shaun of the Dead, where Wright gradually adds in characters.)

Another important function of the mise-en-scène of each shot is how it helps focus our
attention—which is in fact vitally important, given how short these shots are.
Lighting and costuming are used to offset the characters from the background,
drawing our attention to their faces. And it’s worth noting here that, even in
short takes, there’s still room for mobile camerawork. (In other words, changes
in composition and changes in shots through editing are hardly opposed to one
another, but can work in concert.) As Stacey introduces Wallace and Jimmy, the
camera whip-pans to show us each character. Wright then builds another joke out
of this, hand-in-hand with the cutting, as Wallace sets his sights on Jimmy.

As the scene progresses, our attention is gradually shifted
away from the background elements of the setting, and more toward the
characters themselves. Again, numerous elements are working together here to
accomplish this (including tighter framing and a shallower depth of field). The
focus grows increasingly shallow throughout the scene, as our perspective
shrinks to that of Scott Pilgrim and his discomfort. The payoff comes in the
final shot of the scene, where Wright opens the space up once again, returning us
to a larger sense of the club. The pounding of Scott’s heart turns out to be a
drum being used in the sound check. Meanwhile, Scott, unable to handle the
conflict at hand (his basic problem as a protagonist), takes advantage of the deeper
focus of the shot to run off into the distance, and out of sight. (We have here
an illustration of how cinematography often anticipates how the actors are
going to move in the course of a shot.)

Yet other elements of the mise-en-scène work to develop the ongoing conflicts and jokes. When
Knives Chau shows up, her performance calls attention to her new hairstyle,
which is part of her character’s arc: her adoration for Scott is causing her to
become an indie rock fan. In a later scene, she’ll dye her hair blue, in
imitation of Ramona—and already the film is drawing comparisons between their
respective looks, and setting that love triangle in motion.

It’s also worth noting that the scene, despite being rapidly
edited, is hardly incoherent, whether temporally, spatially, or narratively.
Indeed, a great deal is being communicated here in all three of those aspects
of the film. Several of the jokes depend on a consistent sense of space. And,
narratively, the scene introduces many characters to one another, delivering
some exposition to them and to the audience, as well as establishing two
separate love triangles (Scott / Ramona / Knives and Stacey / Wallace / Jimmy).

And this analysis only scratches the surface—we haven’t
considered much how sound functions in the scene, or color, or any of the CGI
elements. But I think we can see how the scene functions due to its complex
interaction of lighting, costuming, setting, character positioning (blocking),
camera movement—and editing. Rather than opposing one another,
all of the elements of the film—including the mise-en-scène and the editing—are working in concert to progress a
wealth of character and plot detail. Indeed, it’s only because those elements
are so carefully arranged in consideration of one another that Wright can
accomplish so much so economically. That complex interplay is the very heart of
the film’s sophistication, and artistry.

A. D. Jameson is the author of three books: Amazing Adult Fantasy (Mutable Sound, 2011), Giant Slugs (Lawrence and Gibson, 2011), and 99 Things to Do When You Have the Time (Compendium Inc., 2013). Other writing has appeared at Big Other and HTMLGIANT, as well as in dozens of literary journals. Since August 2011 he’s been a PhD student at the University of Illinois at Chicago. Follow him on Twitter at @adjameson.

Speak, BATMAN: Tim Burton’s Version, 25 Years Later


In the summer of 1989, I had just completed my first year at
Columbia University, fresh out of the family car from Dallas, Texas. While some might say the winters
in New York have gotten milder, the summers have not changed: it was miserably
hot. I was living in a dorm room in Wien Hall, a block of Soviet-style student
quarters in a tall red brick tower whose most exotic characteristics were its
co-ed bathrooms and the private sink in each room. My diet was terrible: pancakes, hamburgers, coffee, soda, bagels, beer.
I was not in a good place. The academic year had left me spent. I hadn’t slept much,
with all the work, but my grades had nevertheless been poor. Most of my
acquaintances (I had few friends) had left for the summer. The campus was
thoroughly empty. At sunset, the expansive steps of Low Library, full during
the school year, could boast just a few random, out-of-shape young souls
hunched over unusually large slices of pizza (my other choice for dinner). The
view north on Amsterdam Avenue, which seemed like a glittering slope of traffic
lights and taillights leading down into unknown territory from September to May,
now seemed like a shimmering tunnel into a bottomless oven. Dangerous. Out of bounds.
Chaos. I was touchy, every second: the smallest thing could send me into a funk
for days. Love, or anything remotely like it, was very, very far off. My Friday
nights often began and ended with a trip to the Metropolitan Museum, open until
9. That was my life. The city itself wasn’t much better off than I was. The crime
rate, which had been escalating for the past few decades, was at an unusually
high point. That spring, the Central Park Jogger incident had occurred, with
all that event entailed, damage lasting for many years afterwards. The crack
business was thriving: the corner of 94th and West End was known as
“Crack Central.” The homeless population on the Upper West Side was large and
often aggressive. In this climate, along with a bunch of other seemingly harmless
summer movies, Tim Burton’s Batman
opened in 1989, on June 23.

I wasn’t initially drawn to see the film. As a
high school student, I had watched mainly foreign films—Bergman, Fellini,
Truffaut—or older classics—The Wild One,
Streetcar, Psycho
. In fact, I’d studiously stayed away from anything
that didn’t have a fair amount of cultural intellectual endorsement. Due to the
nurturing influence of a number of friends in high school, I’d cautiously added
certain American directors, most notably Martin Scorsese (whose frequent
lunches in the Columbia student center were a high point of the academic year)
and Woody Allen (ah, the pleasure of seeing Radio Days or Hannah and Her Sisters at the time of their release!). Something, though, got me to the theater, to see
Burton’s film: perhaps it was my love of Beetlejuice,
perhaps it was the concept of casting someone as schlubby as Michael Keaton as
a superhero; maybe it was the heat. But there I was. And, at the time, I
probably found the film quite entertaining, and funny: Michael Keaton was still
a relatively new talent to me. Jack Nicholson retained some of the mystery he
held for me after having starred in The Shining, Prizzi’s Honor, and Terms of Endearment, all within one career. And Kim Basinger was, for most 19-year-old
heterosexual males, still carrying the line of credit for titillation she’d earned in 9½ Weeks, however witheringly wrong-headed
that film might seem at this point. Watching Batman today is a bit like watching the 1970s Star Wars: the good parts stand out, the bad parts seem
worse. Jack Nicholson’s Joker is a remarkable figure, the work of an actor
pulling out all the stops, enjoying himself, and possibly scaring himself in
the process. Michael Keaton’s self-consciousness is still amusing, his mouthed
“I’m Batman” still an indication that this is, above and beyond its
summer-comic-book-thriller-blockbuster aspirations, a movie about repressions,
and psychological damage. The rest is a bit of a wash: Kim Basinger’s quite
stiff as photographer Vicki Vale, Robert Wuhl is stumble-footed as reporter Alexander
Knox; the other supporting actors deliver their lines with the awkwardness of Law & Order extras. The onrush of
Danny Elfman’s soundtrack sounds dated, as well, almost like soundtracks from
before the first Batman movie, of the
1960s.

A couple of things about the film, though, do endure. One
is, of course, its design. Burton’s Gotham/New York, as Anton Furst created it,
is a dangerous, gritty place, and at the time, it matched New York all too
well. Although, as with all of Burton’s films, you can practically see the
brushstrokes in his urban tableaux, you can still sense a seething energy in
the frame, as the old (the dilapidated look of the buildings, the pedestrians
in fedoras) brushes up against the new (the shiny look of the taxicabs). In
1989, Times Square was still a dangerous, seedy, unpredictable place; the risk
of being mugged there, if you were alone, was considerable. I remember being
palpably nervous when going there in broad daylight to get a fake ID (so I
could see a show at the long-departed King Tut’s Wah-Wah Hut), so nervous, in
fact, that I gave my dormitory address as my home address for my “Official
Identification Card.” Avenue A, bordering Tompkins Square, was not for lone
travelers after dark, and really not much fun during the daytime either.
Williamsburg was barely a place, it was so dangerous. When I looked at the blue-black
hues of Burton’s Gotham, I saw a reflection of the city I both worshipped and,
from a Texan’s perspective, feared.
 

In addition, its Black-White-and-Gray Morality Play endures. I
identified with this aspect of it partially because of my own mental state at
the time. I was blasted out from a year’s worth of reading everything from
classics to Lolita to Mayakovsky to Marquez to Hobbes to Hume, lonely, freaked
out, psychologically tired from combating the regular pressure New York puts on a novice. The world began to seem like one of extremes to me:
either a day was good, or it was terrible. Either I was sated, or I was
starving. Either I was wide awake, or I was collapsing. Similarly, the movie’s
polarities are dramatic: Rich vs. Poor. Innocent vs. Corrupt. Happy vs. Unhappy.
Past vs. Present. (In other words, it’s a movie based on a comic book.) The
movie isn’t necessarily simple-minded—these qualities dance around each other,
and occasionally disguise themselves, in the film, but the manipulation we
witness is writ large. There’s nothing complex about the way the complexity is expressed.
Bruce Wayne is Batman, but he is tormented about it—and then, on the other
hand, he isn’t. All of these sides of his character are openly stated.
Similarly, the Joker’s complicated stance—a crook out-crooked by his more crooked
boss, with a tremendous sense of humor (remember his sparing of the Francis
Bacon grotesques in the museum? Or “I’m no Picasso”? Or “This town needs an
enema”?)—makes him both malevolent and sympathetic, as with all the great
villains of literature and film. His complexities, as with Batman’s, were
broadcast on such a large scale that you would have had to have been asleep or deeply stupid not to have noticed them. So, my younger self, nursing the
dogmatically snotty should-I-be-here feeling only a 19-year-old can pull off,
sat in the theater, surprised at the degree to which I could relate to the film, and to its warped figures.

Things would improve: for Batman retellings, for Gotham, and
for me. It would be hard to deny, in all honesty, that Christopher Nolan’s
Batman films, based as they are on a more nuanced telling of the superhero’s
story, are more subtle, more multi-layered, more deftly filmed, more atmospheric,
and possibly more profound than Burton’s version, or any of its sad successors;
Batman Returns could boast the gifts
of Michelle Pfeiffer and Danny DeVito, but the series did not progress well
after that point (enough said). New York City looked up after 1989 as well; while David
Dinkins’ mayoralty of New York was problematic on many levels, the crime rate
was reduced, and with each successive leader, the metropolis has continued to change. Today, Times Square is a clean, well-maintained tourist
depository; Avenue A is prime real estate territory and a dining destination;
and many parts of Williamsburg resemble a suburb populated by Ivy-League
educated hipsters who like drinking beer out of the can. And me? Well, my days
became more well-rounded, the summers shorter; my sociability intensified; my
mind grew; my urban environment became, rather than a vast zoo in which I was
wandering without defenses, a complex place with which I would develop a relationship,
much like an interpersonal relationship—and a place in which I would build a
life. Nevertheless, I remember Burton’s film as a document of the summer of
1989, of a particularly odd patch in my own life, and as a film with a
tremendous amount of, for lack of a better word, soul, with all of that word’s
glories and imperfections.

Max Winter is the Editor of Press Play.

How BORGMAN Makes an Ideal Storytelling Lesson



The following contains spoilers, of a sort.

If you were a novelist, or a filmmaker, or a playwright, or
even a scholar, or a critic, and you wanted a primer on how a story might be put
together, you would need to look no further than Alex van Warmerdam’s Borgman. No, it wouldn’t teach you that all stories should contain
humanoids who may or may not come from another planet, or that all stories
should feature disembowelment or, beyond that, social critique. It could,
however, teach you how to start a story, how to make it interesting, and how to
end it. The film, which tells the story of a man’s intrusion into and destruction of a harmonious home, has many wonderful attributes: a strong, mercurial performance by Jan Bijvoet as the title character, coupled with an empathetic performance
by Hadewych Minis as his onetime love interest, now a comfortably married suburban wife and
mother; a gorgeous sense of stillness in its tableaux, often shot from a
distance, so we look both harder and less attentively at the on-screen events;
a remarkable sense of pacing; and above all, an enticing, pervasive spirit of surrealism. But its structure is its most enduring element.

The film begins, persuasively, with a mystifying sequence, intended solely to raise questions in the viewer’s mind. Several men march, in a dogged
group, into a deep forest with weapons; there is a priest among them, and even
the priest is armed. At a certain spot, they stop, and they begin driving poles
into the ground. It so happens that they are poking into the underground home
of Camiel Borgman, the film’s questionable protagonist. At the first sign of siege, he vacates his well-appointed grotto and quickly finds his comrades, all of whom are sleeping in what seem to be pods just underneath the
leaf-covered ground. Who are these men? Why do they live this way? Why are the
other men seeking them out? Have they done something wrong? Are they dangerous?
The film raises these questions and then, smartly, changes course.

A sudden shift in perspective is a common technique in
storytelling; films as varied as The Crying Game, Down By Law, or Mulholland Dr. all make sudden leaps,
the immediate effect being disorientation, the culminating effect being a sense
that a larger world tableau has been examined than might have been expected at
the film’s outset. Here, the shift is to a traditional stage for plot-making:
the happy home. That home, in this case, is a vast house in the suburbs, owned
by Richard, an entertainment executive, and his wife Marina, an artist. They have three children
and a live-in nanny. Into their lives comes shaggy Camiel, claiming both that he
hasn’t bathed in days (probably true, from his bedraggled appearance) and that
he knows Marina, that in fact she was his nurse at one point. Camiel is soon
stealthily ensconced in the family’s life, without Richard’s
knowledge—which means their happiness is about to be disrupted. From the time
of Paradise Lost forward,
storytelling demands that if a situation does not have any readily apparent
problems, its surface must be disrupted. Otherwise, there is no story. In this
case, the disruption is immediate. When Richard first meets Camiel, he beats him
for making inappropriate comments about his wife—and then he and Marina
fight. Marina sneakily finds Camiel a spot in a guest house on their rambling property. Not long after
that, we see Camiel, naked, crouched on top of Marina’s naked, sleeping body, disturbing
her dreams. And not long after that, Camiel kills the family gardener: one
disruption on top of another. Here, van Warmerdam deploys yet another standard
storytelling technique: the use of a crystal-clear, memorable image which
encapsulates all that happens within a story. When Camiel, along with two
besuited associates (we never find out how these malevolent figures are connected),
kills the gardener, he also kills the gardener’s wife. The way in which he
disposes of their bodies tells us quite a bit about what the film is trying to
do, and elegantly: the victims’ heads are buried in buckets of concrete and they
are tossed to the bottom of a lake. We watch the bodies descend, slowly,
gradually coming to rest with their legs pointing directly upwards and their heads
pointing directly downwards: they are turned upside down, just as the lives of
those above them are, increasingly, turned upside down and cast into disorder.

Here the narrative once again doubles upon itself, as if to
demonstrate to a viewer the lengths to which stories must go to reach their
desired destinations—and also to show that within one narrative, constant
revolution may sometimes be necessary to keep it alive. In this story, Camiel
bathes, gets himself a haircut, and shows up once again at the family’s door,
this time in an interview for the gardener’s replacement. Richard, not
realizing that he is talking to the vagrant he pummeled earlier, likes him and gives
him quarters in the house itself. Shortly after his arrival, Camiel marshals a
tractor to tear up the garden, ostensibly in an effort to improve its
appearance—but, of course, also tearing up all the family has cultivated, all
of its peace, here embodied in the carefully landscaped trees. We learn,
gradually, that Camiel and Marina do indeed have a history; she reaches out to
him, and he does not reach back until a stage, of a sort, has been set. The
setting of that stage is gruesome, involving murder, drugging the children,
and, once again, Camiel’s invasion of the family’s dreams. From this point
forward, everything that van Warmerdam has put in place moves quite smoothly towards a neat (and perhaps all-too-neat) conclusion.

For all of its structure, the actual conflict in the story
arises primarily from the alien quality of Camiel’s presence. The question of
what his purpose is in the film rises with beautiful restraint, until finally
he achieves his objective—at which point the film ends. Borgman has been chided
for not providing enough answers to the questions it raises—which may be fair,
given that if those answers were provided, the emotional weight of the film might
increase. For a film of this type, though, the questions are more significant
than any answer the director might provide. Indeed, the absence of such answers
helps to accentuate the film’s ultimate accomplishment, which is to show the
shapes madness and anarchy may take when properly contained—and how every story
is, in a sense, like the house described here, a box with four walls and a
roof, in which nightmares and daily realities compete for our attention.

Max Winter is the Editor of Press Play.

VIDEO ESSAY: White Knights and Bad People


[The text of the video essay follows.]

When I watched Back to
the Future
with my parents as a child, I remember my shock at seeing Marty
McFly’s mom sexually assaulted by the high school bully, Biff, in the backseat
of a car. The assault was confusing. I remember my first viewing of this
relatively tame movie as a garble of images–the backseat, the fluffy curls of
the pink prom dress, the feet poking out, the muffled screams.

Of course, this entire scene is about Marty’s dad having the
guts to punch the rapist in the face, to tell him to “leave her alone.” By the
end Marty’s mother is all smiles, relief, and pride in having chosen a man who
would defend and respect her.

My exposure to cartoon gender relations was similarly
violent. The female cartoon characters in shows like Tiny Toon Adventures and Animaniacs
liked to don skimpy outfits. The male characters’ eyes would pop out of their
skulls, tongues hanging out lecherously. Of course, these shows played on old
cartoon favorites. Betty Boop often had to avoid unwanted male attention, poor
Olive Oyl was constantly placed in supposedly comic situations where she was
being either kidnapped or harassed, and in Tex Avery’s Little Red Riding Hood,
“Red” is a full grown woman who must be careful of the predatory wolf who
stalks her nightclub. 

When I was a child, the images of a female cartoon character
being catcalled, or a woman being assaulted, did not seem especially unusual. I
assumed that most adult women met the task of warding off male attention with a
mixture of pride and mild annoyance. As I got older, I became more and more
concerned about this phenomenon. When even strong, powerful women are victimized
in films and television, a dashing hero saves the day.

Today, in the age of Steubenville, we still worry about the
ways boys and men prey on girls and women. Social organizations often still
rely on the white knight trope when they address this matter. Actors and
musicians who regularly objectify women on screen and in music videos are shown
looking sad as they pose with Real Men Don’t Buy Girls hashtag signs. In the
White House PSA on sexual assault, Daniel Craig and Benicio Del Toro are among the male
participants calling for heroic behavior.

Stepping in when someone is in trouble is certainly
honorable, but the moral lesson in these PSAs provides men with the same
options they had in Back to the Future.
Are you a Marty, or a Biff? Will you defend womanhood, or assault it?

The threat of rape is often used as a device for male
characters to become heroes, which contributes to the idea that sexual assault
is a normal part of growing up female. Rape is still seen as unchecked lust
rather than an expression of violence. 
This myth has far-reaching repercussions, as girls and women live in the
very real shadow of sexual assault constantly. We get inured to sexual violence
on shows like Game of Thrones, where
rape is often presented in the background of a scene, something bad, brutal men
do to helpless women.

It’s exhausting as a woman to constantly see the female body
on the brink of violation. I’m tired of the voicelessness of those bodies, and of
the fact that we still need to spread awareness about how horrible sexual
assault actually is. I know I’m supposed to be grateful when people express
that they are aware, when men seem poised to protect me when I go out, when
someone develops an app designed to help get me home safe by checking in with my
family and friends.

The way rape is portrayed today is not so different from how
it was portrayed in 80s exploitation films, where rape is intended to shock and
titillate in one fell swoop, as it often does in the current series Game of Thrones. A film like Extremities, for example, promises the
sweetest of revenges for a female protagonist, but it is the image of Farrah Fawcett
cowering and sobbing, forced to take off her clothes, while her rapist looks on
and calls her beautiful that has become the ubiquitous Hollywood rape scene,
where a gorgeous woman is exposed and shamed and, despite the fact that we are
told to root for her, we are also given permission to ogle her, to see her
through the rapist’s lens, before we see her own experience.

This is one of the reasons that Joan’s rape scene on Mad Men is so effective: it
portrays her quiet terror without fetishizing her body or her fear. We don’t
see her ample curves illuminated, the way they normally are. Joan’s sexuality
is a point of pride throughout the series, and the camera makes it clear that
what we are witnessing is a power play and violation. There’s nothing sexual
about it. The camera ends not on a close up of her body, but a close up of her
staring at a point just ahead of her in an office that isn’t hers, as she waits
for what is happening to stop.

Arielle Bernstein is a writer living in Washington, DC. She teaches writing at American University and also freelances. Her work has been published in The Millions, The Rumpus, St. Petersburg Review, and The Ilanot Review. She has been listed four times as a finalist in Glimmer Train short story contests. She is currently writing her first book.


Serena Bramble is a film editor whose
montage skills are an end result of accumulated years of movie-watching
and loving. Serena is a graduate of the Teledramatic Arts and
Technology department at Cal State Monterey Bay. In addition to editing,
she also writes on her blog Brief Encounters of the Cinematic Kind.

OUR SCARY SUMMER: PROPHECY and the Toxic Environments of 1979


During the first week of December, 1978, the covers of Time and Newsweek featured horrific images that would haunt me over the ensuing
year.  Both magazines bore the headline
“Cult of Death” superimposed over masses of dead, decaying bodies, victims of
the catastrophe at Jonestown, Guyana. 
Under the direction of their leader, Jim Jones, 909 members of the cult
organization calling itself the Peoples Temple Agricultural Project were
persuaded to commit “revolutionary suicide” by ritually drinking a sweetly-flavored
poisonous mixture in a senseless act of coerced self-destruction that spawned
the phrase “drinking the Kool-Aid.”  At
the age of thirteen, I couldn’t really fathom this event, nor did I know where
Guyana was, but I was deeply impacted by those horrific magazine covers.  I would only begin to make some kind of sense
of these events by watching the horror movies that were released during what Newsweek would call “Hollywood’s Scary
Summer.”

By that summer I had already become a fairly seasoned
watcher of horror films.  More than mere
thrills and escapism, however, horror movies had come to serve as a reflection
of the toxic environments around me.  My
understanding of the world was shaped by violent and disturbing images, not
only in theaters but also on television and in magazines, thanks to the news
media’s increasing tendency to capitalize on the graphic shock value of current
events.  I suppose that after televising
Viet Nam, nothing was taboo, and there was certainly a political significance to
a nation’s being asked by its reporters and photographers to bear witness to
what its military was being ordered to do overseas.  But there’s no denying that
images of massed dead bodies displayed on the family coffee table could have a
dramatic, even traumatic, effect, especially on children and adolescents.  

The poster advertising John Frankenheimer’s Prophecy featured a grotesque image of a
monstrous fetal creature wrapped in its placenta, an image I responded to like
all such images in the media environment of the 70s: with equal
parts fascination and horror.  After seeing
the film, however, I discovered that horror could help me to make social and
political, as well as emotional and imaginative, sense of the era’s disturbing
events.  Several months earlier, on March
28, 1979, the Three Mile Island nuclear plant in Pennsylvania experienced the
worst meltdown in the history of the U.S. nuclear power industry, releasing
radioactive material into the environment and alerting America to the
catastrophic risks courted by the industry. 
Less than two weeks before Prophecy
hit theaters, on June 3 the exploratory oil well Ixtoc 1 blew and began
spilling vast quantities of oil into the Gulf of Mexico, in a summer-long
disaster that would later be disturbingly reenacted in 2010, stage-directed by
British Petroleum.

With Prophecy,
Frankenheimer wanted to create an environmentally-conscious horror film that
would raise the ethical stakes of popcorn fare. 
While it can hardly be said that he succeeded in this goal—the director
has blamed his own alcoholism at the time, as well as production issues, for the
film’s relative failure—it did succeed in presenting images and settings that
managed to distill, at least for one young filmgoer, the toxic environments of
the 1970s. 

Dr. Robert Verne (Robert Foxworth) and his wife Maggie
(Talia Shire) leave their urban life for the Maine woods, in order to carry out
an investigation for the Environmental Protection Agency.  A native tribe has accused a local logging
mill of dumping pollutants into the Androscoggin River, pollutants that are
poisoning their land and people.  Verne
and Maggie find themselves trapped between the interests of Native Americans
and those of loggers, but gradually become advocates for the local tribe and
its environment once they begin to see the monstrous mutations spawned by the
mercury that the mill has been releasing into the environment.  Trout the size of sharks, giant demented
raccoons, and tadpoles the size of overweight bullfrogs are just a few of the
initial signs that something weird is going on. 
Like the three-eyed fish that jumps from the river beneath Burns’
nuclear plant in The Simpsons
opening, these creatures are more a part of radiation lore than of mercury
poisoning. 

And it’s precisely the indeterminate, hybrid nature of the
creatures that stalk, wiggle, and hop through horror movies that allows them
such a broad range of reference.  Those
who don’t get horror movies would cite the implausibility of such monsters as
undermining the film’s environmental message. 
But the indeterminacy of this kind of horror imagery
actually multiplies meanings rather than negating them.  It matters that the creature of Mary
Shelley’s Frankenstein is created
from the dead as well as the living, from humans as well as animals.  In his hybridity, the creature embodies a
plethora of anxieties induced by the rise of scientific culture in the early
nineteenth century, when the novel was first published, including concerns over
the use of human corpses in anatomical research, the use of live animals in
laboratory experiments, and the use of animal-incubated antitoxins in
vaccines.  While it would be rather a
stretch to compare Mary Shelley’s classic novel with John Frankenheimer’s
not-so-classic film, the latter does partake of this rich tradition of the
monstrous that is horror’s enduring legacy.

As a thirteen-year-old, I was captivated by these creatures,
and horrified by the most dramatic of the film’s monsters, a giant mutant she-bear
that the natives regard as an avatar of their totemic nature spirit,
Katahdin.  But I was even more affected
by Katahdin’s grotesque cub, which Maggie tries to rescue, in a harrowing
sequence that remains one of the film’s most effective moments.  Early in the film we learn that Maggie is
pregnant, a fact that she keeps secret from her husband until she discovers
that the fish they’ve been eating from the river have been poisoned by the same
substances that have produced Katahdin and its brood.  She knows her own child will suffer the same fate,
and consequently regards the mutant cub they find with a displaced motherly
affection.  As she swims through the
river, carrying it in one of her arms, the cub grows terrified as it hears its
biological mother howling in the narrowing distance of pursuit, and begins
biting and tearing at Maggie’s throat. 
She drowns the cub, as she will presumably abort the mutant fetus growing
inside her.

This scene stayed with me for several months after seeing
the film, for reasons I couldn’t quite place, until one weekend, bored during a
visit to my grandmother’s, I pulled out a collection of award-winning
photographs from Life magazine that
I had often perused before.  There I
came upon an image that had long horrified and moved me, and the connection
between it and the scene from Frankenheimer’s movie instantly clicked.  The image was taken by W. Eugene Smith as
part of an exposé on the mercury poisoning caused by the Chisso corporation in
Minamata, Japan.  It showed a mother
tenderly bathing her adult child; the young man’s limbs are bent and twisted at
unnatural angles and his face is distorted in an agonized grimace, the result
of mercury exposure.  In this grotesque
pietà, the mother supports him gently in the tub, and gazes upon him with a
look of steadfast love.  The image was
taken in December, 1971, a fitting emblem for the decade to follow.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

OUR SCARY SUMMER: DAWN OF THE DEAD and the New American Malaise


As tag lines go, George Romero’s seminal zombie epic sports
a pretty good one: “When there’s no more room in Hell, the dead will walk the
earth.”  As a thirteen-year-old, I had
repeatedly stared at the lurid poster bearing these ominous words in the front
windows of the Maplewood Mall multiplex in the weeks before the film was
released in the summer of 1979.  But like
most tag-lines, these were grossly misrepresentative of the film they
advertised.  The notion of an overfull
Hell spewing forth its denizens is too mythic, too Dantesque, by comparison with
the abjectly modern and mundane world the film depicts.  A more fitting tag-line might have been taken
from a speech given by President Jimmy Carter later that same summer: “Often
you see paralysis and stagnation and drift. 
What can we do?”

Addressing what he described as a “crisis of confidence” in
America, Carter’s July 15, 1979 address has been called “the malaise speech”
for its focus on the country’s financial woes and lack of direction.  Though neither provides answers to the
dilemmas America experienced at the end of the 1970s, both Carter’s speech and
Romero’s film offer disturbing visions of a world succumbing to “paralysis and
stagnation and drift,” visions that clarified and vitally shaped my own
perception of the world, then and now.

Now that we are inundated with zombies, in the movies and on television, it’s hard to remember
how off-the-wall Romero’s film seemed at the time.  There was something both funny and disturbing
about seeing monsters that looked more or less like ordinary people, though well
past their “sell by” date.  Massed
together in vast hordes, these creatures, stupid and slow-moving on their own,
collectively assumed the contours of a nightmare, one that hadn’t been realized on such an
expansive cinematic canvas before. 

Yet despite all its originality and strangeness, Dawn of the Dead made sense to me,
largely due to the fact that much of its action takes place in an enclosed
shopping mall.  As a Minnesotan, I grew
up in the land of malls.  The Mall of
America may be the most massive example of my home-state’s mall obsession, but
Southdale was the first mall of America, opened in 1956.  Many others followed, including the Maplewood
Mall, where my family and friends experienced their own version of the uniquely
American malaise evoked by Carter, and where I later saw Romero’s film.

“Human identity is no longer defined by what one does, but
by what one owns,” said Carter. “But we’ve discovered that owning things and consuming things
does not satisfy our longing for meaning. We’ve learned that piling up material
goods cannot fill the emptiness of lives which have no confidence or purpose.”  For all his failings, it’s hard to imagine a
President since Carter having the guts to offer such an honest criticism of our
country, verging on sacrilege against major tenets of the American commercial
gospel.  This portrait of vacuous consumption aptly describes zombie
appetites—joyless and never satisfied—as well as the situation in which
the four human protagonists find
themselves in Romero’s film. 

Holed up in Pennsylvania’s Monroeville Mall, an odd
collection of refugees from the zombie apocalypse gradually forms a community
based on escapism and greed.  Sadly, escapism
and greed are also at the core of the uninspired “solution” offered by Carter
to our national dilemma.  Reducing the “growing
doubt about the meaning of our own lives and in the loss of a unity of purpose
for our nation” to our dependence on foreign oil, Carter advocated using more
coal, building a pipeline, and conserving what we’ve got until something better
comes along.  At least Romero had the
insight to foresee what the end result of this shortsighted thinking would be,
as the horizon of possibilities gradually closes in on the film’s protagonists.

It’s easy to forget about the larger world when you’re in a
mall, which offers a virtual environment catering to seemingly every consumer demographic.  The Maplewood Mall had two bookstores, three
record stores, two hobby and gaming stores, eight cinema screens, and two video
arcades: these so fitted my limited needs and consumer choices as a
thirteen-year-old that I could hardly imagine what more the world could offer me.  It took me some time to realize that my
consumer desires were not being catered to so much as created by the Mall
itself.  As with the protagonists of Dawn of the Dead, what began for me and
my family as an escape turned into a lifestyle. 
The walkways were lined with trees and shrubs to create an illusory
natural environment, and the utopian vistas of its vast central court, crossed by gently
rising and falling escalators, resembled the sets of seventies sci-fi films like Logan’s Run, Futureworld and Rollerball.  In my eagerness to live in a virtual reality,
whether through video games, films, or malls, I somehow missed
the point that these visions were meant to be dystopian. 

Watching the film now gives me a strange frisson.
Those muted earth tones, those defunct store-fronts with their period fonts,
those broad lapels and flared pants worn by the mannequins: they resemble the
lost iconography and ambient set-pieces of my youth, brought uncannily to
life.  The film’s soundtrack consists
largely of commercial background music of the period, what came to be called
“library music”—LPs that could serve as a ready source of musical interludes
to be played in the background of low-budget films, commercials, or educational
videos.  The genre has become a popular
one for collectors, largely because these virtually anonymous musical pieces
provided the sonic backdrop of our collective past.  An unofficial soundtrack release collects
many of these from Romero’s film, and for anyone who grew up in the 70s,
listening to it is the aural equivalent of watching a super-8 movie of an
average, anonymous day out of the past.

Dawn of the Dead is
less a horror film to me than it is a distorted snapshot of my youth, one into
which I still sometimes escape.  As the characters
frolic through the Monroeville mall, indulging their consumer whims while
zombies menace them from behind glass doors, I find the premise disturbingly
seductive even as I recognize its abject futility.  It’s a fantasy I could never really experience,
since even if there were some version of a zombie apocalypse, I wouldn’t want to
be holed up in some mall of the twenty-first century: I only want to be alive
in the mall of the 1970s.  The final
irony is that my own response to the New American Malaise has been to retreat
into nostalgia, but what I discover in watching films from the 1970s is an
America hardly dissimilar from the one from which I’d hoped to escape.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

I Spit on Your Fairy Wings, and Your Little Dog, Too!: On MALEFICENT and Other Films


“The woman has
power if she’s a villain.” That’s what my college art professor told me once,
when we were discussing the Disney films we’d grown up with. If you were a
girl, or female-identifying, you were Team Ursula. Team Wicked Queen. Team
Maleficent. These villains resonate with girls like us, who’d grown up knowing
that they’d never be Prince Charming’s type; that all of creation, from the
beasts in the forest to the flowers in the field, would never sing of our
sweetness; that our parents would never be royalty.

The villainesses
offered a new paradigm: If you can’t be beloved, be angry. Reduce the king of
the seas to simpering plankton, poison an apple, will your body to turn into a
dragon’s. But closer reflection, as well as exposure to feminist texts and more
adult film fare, reveals that what may seem like delicious wickedness is, in
fact, not real power: It’s just bullying. These women get back at the kings
(and the kingdoms) that have cast them out and insulted them by attacking
innocent princesses, young girls who haven’t done them wrong. This isn’t real
vengeance; it’s just women sinking their talons into other women. And why?  Just because.

Maleficent,
which reimagines the Sleeping Beauty story from the vantage point of the woman
who cast the curse, embraces the beating black heart of the villain’s
appeal—only to sink its fangs into it. The movie is a Disneyfied exploitation
flick: Maleficent’s curse is her roaring rampage against Stefan, the man who,
once upon a time, promised her true love’s kiss before drugging her and
stripping off her wings, so he could appease a dying king, and be named his
successor. Maleficent roars, and she rampages, but she doesn’t get bloody
satisfaction until she comes to the unsettling truth that she’s deployed her
power against Stefan’s daughter, the innocent Aurora, instead of directly
attacking the man who actually wronged her (and the patriarchal will-to-power
that he represents). Maleficent (and the movie that bears her name) turns the
righteous wrath of the woman wronged from a knife’s edge to a tightrope: She
tiptoes along that fine line between claiming justice and identifying
with her aggressors.


Take Ursula the
Sea Witch, who may rival Maleficent as the most beloved baddie in the magic
kingdom. Ursula once lived in the pearlescent splendor of The Little Mermaid’s aquatic kingdom, only to be cast out (for
reasons unknown) by King Triton; the circuitous route of her revenge—getting
him to sign his soul to her to save his daughter—is designed as a pile-driving
pile-on of pain for the king.  And yet to do this, she literally steals the
voice of another woman. Ariel’s only “crime” is being the wasp-waisted
embodiment of everything that Ursula is not, and Ursula’s grand revenge becomes
an attack on the pretty girl—which, given the dark potency of her spells, is a
waste; it reinforces, instead of breaking open, that tired binary of the
lovely, much-loved “homecoming queen” vs. the ugly outcast whose countenance
matches her soul.

We can shrug this
off as a fairy-tale, a genre where only the purest of the pure-hearted and the
blackest of the black-hearted get starring roles. However, it’s still deeply
problematic to see a powerful woman literally tower over our innocent
heroine—especially when so many women, particularly younger women, believe that
there is no place for them within feminism because they “like men” or wear
make-up or want to be a stay-at-home mom. They believe that feminism isn’t a
movement for equality, it’s a matter of us vs. them—and never, sadly, a matter
of us vs. the real enemy, the Stefans of the world, people who value having
power over respecting the dignity and autonomy of women everywhere.

The in-the-flesh
incarnation of Maleficent is able to get the revenge that eludes her cartoon
counterpart because she realizes that the casting of the curse makes her no
better than her former love. Stefan is the flattest character in the film, a
man defined only by what he wants the most: to be king. His bristling ambition
parallels her blazing rage: It allows him to steal the parts of her that
brought her to the heavens, just so he can wear the crown. It allows her to
condemn a laughing baby to a living death, just so she can hear the king beg.
But she doesn’t truly get the better of him, or at least bring about his richly
deserved end, until she’s reconciled with Aurora.

Aurora liberates
Maleficent’s wings from the glass case where Stefan has entombed them, and
Maleficent drags him out of his castle, lets him fall; in his last moments, he
watches her hover above him as the air rushes around his body, and he knows
what it means to desperately long for wings. Stefan’s death is more than just the extinguishing of an enemy; it’s
the end of an era. The film ends with Maleficent crowning Aurora as a queen
without a king: the arbiter of a new age of matriarchy.

Maleficent now
exists within the archetype of the woman warrior, the righter of wrongs, and
the avenger. This archetype wields her wand and sword, her pistol and Tiger
Crane Kung-Fu, and, above all, her wits, directly against her enemies. She is
Coffy, hiding razor blades in her hair; she is Beatrix Kiddo, crossing names
off her “Death List Five”; she is Arya Stark, whispering her own kill list as a
nightly prayer; she is Carrie, unleashing telekinetic Hell against the high
school sadists and the fundamentalist mother who’ve tormented her; she is
Mystique, the mutant revolutionary out to assassinate the political operatives
who oppress her kind. She is Katniss Everdeen, who must “remember who the real
enemy is” if she’s to escape the ceaseless spiral of violence and use her power
for a purpose. And she is Maleficent, who must learn that cruelty is simply
scratching an itch, not treating the wound that burns clear to the bone.  Every time the woman warrior flexes her
might, she’s defining who she is and who she wants to be: the
victim-turned-avenger, asserting her worth against those who tried to break
her—or the villain, just another abuser who thinks that making someone, anyone, pay, is the same as actual gain.


We see this
dilemma played out directly with two of the younger, though still ethically
complex, examples of the woman warrior archetype: Katniss Everdeen and Arya
Stark. In Catching Fire, Katniss, who’s been stop-lossed back into
the arena, has a choice: shoot an arrow straight into another tribute’s
heart, or take out the heart of the arena itself—and the Capitol that
created it—by aiming her bow at its force-field. She spares the tribute and
sends her arrow whistling toward her oppressors. Arya Stark won’t use her
quickness and cunning to help The Hound steal from a peasant farmer, but she
will spear her sword through the throat of the brigand who’d stolen it from her
years before and used it to murder one of the boys she’d been traveling with.
Arya stares down at the man, who gurgles blood and rasps for air, with an
impervious haughtiness. She parrots back the taunts he’d made as he’d stabbed
her friend; his words are the hammer-strikes sealing his coffin closed: He
brought this on himself the second he raised his blade against Arya and the
people she loves. This is even Steven. This would be about square.

The woman warrior
must choose what—and most significantly, who—merits her lethal gaze, and that
choice reveals everything about her values. Will her capacity for violence imitate
an arrow’s arc, striking with purpose and direction? Or is her rage an engine
revving in a parked car, ceaseless churning and pointless noise? Toward the end
of Maleficent, a now-grown Aurora remarks, “my kingdom wasn’t united by
a hero or a villain, but by one who was both.” Maleficent’s evolution shows how
simple it is to conflate the ability to bring devastation with a snap of her
fingers with actual power, the kind that enables her to stand up for herself and everything she cares
about, the kind that does more than just charge up the same dull machinery of abuse and
degradation. Maleficent must show this
evolution within the confines of a PG rating; however, films like the Kill Bill saga can sift through all the
grit and the spatter for a more nuanced understanding of vengeance, violence,
and the relationships between women who’ve gotten used to the feeling of blood
under their nails.

Despite the Kill Bill movies’ joint titles, our
yellow-haired warrior takes the lion’s share of the narrative as she cuts down
her former teammates on the Deadly Viper Assassination Squad, the women who
battered her swollen, pregnant body almost to the point of death after she
tried to leave the group, to become more than Bill’s woman, a woman who kills
for Bill. Beatrix’s impromptu retirement doesn’t actually hurt any of the DIVAS
(indeed, it allows Elle Driver to slip into her much-coveted role of Bill’s
best girl); they attack her at the behest of Bill. They’re a kung-fu coven of Ursulas:
lashing out because of, or in reaction to, some man.

But no matter how
savagely Beatrix and her former comrades battle, there is always a moment—“Just
between us girls …” or “Silly Rabbit, Trix are for kids”—that recalls the
intimacy they once had. Beatrix was one of them, and her arc toward autonomy is
a transition from deadly viper to righteous avenger. It’s fitting, then, that
the only DIVA who is given any substantive backstory is O-Ren, the character
whose origin tale functions as a parallel and an inverse of our heroine’s.
Beatrix recounts O-Ren’s revenge against Matsumoto, the yakuza boss who
murdered her parents, when Beatrix is at her own lowest point, freshly awakened from
her coma and willing her limbs out of atrophy. O-Ren’s story is rendered in
hyper-stylized anime and scored with a lean yet operatic mournfulness that
evokes the Fistful of Dollars trilogy,
vesting it with a mythic grandeur that does more than simply align the viewer’s
sympathies with her aim—it suggests that claiming her revenge is a vital, even
sacred task.


However, this
anime sequence ends with O-Ren delivering a round-house kick straight to
Beatrix’s pregnant belly, doing Bill’s bidding so he’ll back her quest, Shakespearean
in magnitude, to become the boss of all bosses of the Japanese yakuza. And
then we’re back to live-action, down to earth, and O-Ren is beheading
dissenters and letting her entourage bully the wait-staff of the bar she owns.
Her violence has no purpose, no passion; trafficking in mindless cruelty, she’s
more akin to Matsumoto than to the young girl who looks him in the eye and asks
if she looks like anyone he’s killed as she twists her sword into his gut. That
girl emerges again, however briefly, in that final fight with Beatrix; after
Beatrix draws first blood, O-Ren bows her head, says, “For insulting you
earlier, I apologize.” The sorrow in those six words shows that she can
remember the raw feeling of violation without recourse. The women rush each
other until O-Ren’s blood ribbons the snow: a single red spatter framed against
a pristine whiteness that suggests the purity of Beatrix’s mission.

Maleficent
shares a thematic kinship with Kill Bill
by suggesting that revenge really can be cathartic, and by having its heroine
find peace after vengeance through her bond with another woman: Maleficent has
Aurora, and Beatrix has B.B., her daughter. 
So it’s appropriate that Maleficent’s
final battle scene is set around another purifying force: fire. Dragon’s
breath surges over stone, leaps over the battlements as a re-winged Maleficent
takes flight with her nemesis, Stefan, clinging to her boot. It’s a grand
fuck-yeah moment, akin to Katniss delivering her quiver-full of a middle finger
to the Capitol and Arya scratching one name off her kill list, Coffy gunning down
her first drug dealer and Carrie turning a prom full of bullies into a taffeta
and sequined holocaust. But these are even more than fuck yeah moments—they’re
fuck yeah moments that show the self-affirming power of revenge. Their message
is written in blood and flame: I matter. I know who hurt me, and I’m going to
make them pay.

Laura Bogart’s work has appeared on The Rumpus, Salon, Manifest-Station,
The Nervous Breakdown, RogerEbert.com and JMWW Journal, among other
publications. She is currently at work on a novel.

Louis C.K. On Rape: Why Are We Listening to Him, and No One Else?


Much has been written about this season of Louie, including pieces from Heather Havrilesky on Louie’s manic
bossy nightmare girls
and Kathleen Brennan on “fat,” “not fat,” and holding hands. Last week’s episode was no exception, as it
triggered commentary from Amy Zimmerman at The Daily Beast, and Madeline Davies at Jezebel. Louie
and its subsequent commentary offer poignant insight into a range
of issues, most recently gender relations. But why is it, exactly, that viewers
take so much notice when Louis C.K. says something, and not other times? In
particular, considering last week’s near rape of Pamela, why are we paying so
much attention to Louie’s attitudes towards women and rape while ignoring women
who have expressed the same sentiments for years?

In the last episode, Louie’s perspective was clear as he decided
to try a “guy/girl” thing with Pamela, which consisted essentially of his
taking control in every sense. As I watched him discover Pamela half-asleep on
the couch, and then nearly rape her, I grew increasingly angry because, once
again, I felt silenced. We only see Louie’s point of view as he chases and
repeatedly grabs Pamela. Initially, Pamela is allowed a few refusals before quipping, “This would be rape if you
weren’t so stupid.” Then, once she is cornered in the doorway, she is
effectively silenced as Louie asserts control.

Finally after a resistant kiss, Pamela escapes and Louie shuts
the door on her—and with it, her response. After Louie’s perceived success at
his version of a “guy/girl” thing, women are denied a way to deal
with the experience of watching Pamela being nearly raped. We have no idea what
it’s like on the other side of the door.

During or after an assault, victims are often denied the means of self-preservation:
they cannot run, ignore, seek revenge, or learn from the event.
This episode denied us not only Pamela’s reaction, but also the opportunity
to learn from it. It follows the predominant narratives that offer nothing new
and focus on the assailant. In the recent rape case in Calhoun, Georgia that
is getting attention from local and national news outlets as well as
blogs, we watch the police as they do their jobs by investigating the
case and charging suspects.  And while
local columnist David Cook deserves respect for pointing out that rape
mentality causes rape, it’s problematic that these are the narratives. By not
empowering the woman involved, these narratives make it seem as though the immense amount of
courage it took for her to go to the police was outweighed by praise for people
doing their jobs, or being human.

Certainly when comparing the Calhoun case to a situation like
Steubenville, it can seem like the reproduction of rape myths might have
momentarily lessened, and we might be making some progress in the acknowledgment
of the realities of rape. Sure, we know from press coverage that the students
were drinking in Calhoun, Georgia that night, but we haven’t yet heard about
her prom dress, or how she might have been asking to be assaulted to the point
of hospitalization.  Additional hope that we are making progress might be
found in the formation of the first White House task force to study rape, and now
the subsequent federal investigation into over 50 universities for their sexual
assault policies on campus. But we still have a long way to go.

Rape occurs off campus too, and it’s estimated to happen to one
in five women in their lifetimes. So given the frequency of rape, it’s
consistently disheartening that the male perspective is the dominant
perspective in popular culture. What is possibly most upsetting is that while
we continually see rape from a male perspective, as if it’s something men do
to women (which it is 98% of the time), we don’t seem to address men’s behavior
that leads them to rape. And television episodes like this, like most
rape narratives in popular culture, play into that by
closing the door and turning away from Pamela.

So, again, why are we taking notice when Louie offers commentary
on rape? Perhaps we are sticking with what is safe, or what doesn’t drastically
challenge our power dynamics. However, when we allow men to continue to control
the commentary, they also get to reinforce entitlement over women’s bodies. I
suppose that having men define our experiences prohibits us from incessantly
flicking men’s penises, seeking unlimited abortions, or generally taking
control of our lives.

Or maybe women’s perspectives on rape are too real and ugly for
a mainstream audience. In this episode, Pamela did a fine job of exhibiting
tortured resistance, but it ended there. It has been a long time since Thelma and Louise showed us that when a
woman cries like that, she isn’t having any fun. In popular culture we don’t
often experience, in a non-fetishized way, the complete violation that
accompanies forced penetration with objects or body parts, and the blood and
bruises that may result. Even more messy are the complicated emotions one might
experience: denial, bargaining, fighting, acceptance. While I certainly
wouldn’t want to fetishize rape, the acknowledgment of these
horrific experiences can empower us.

Consistently showing the male perspective of rape also
conveniently absolves us of the consequences. Not only do men often get away
with it—98% are never incarcerated—but women are also forced to navigate a
culture that has historically blamed or not believed them. So when the Louie
episode ends with the door closing, we don’t have to experience what goes
through Pamela’s head, or how she processes the experience. Had things in the
episode gone further, we wouldn’t have to consider whether or not she would report it. And
if she did, we wouldn’t have to feel the shame or fear and consider how she
would deal with it when there might not be any justice.

Ultimately, maybe women’s perspectives on assault aren’t
reflected enough in popular culture because they counter the pervasive
acceptance of everyday violations women endure, such as being groped in public,
having erections pressed on our asses in the subway, and being told to smile on
the street. Perhaps our tacit acceptance of these behaviors makes it easier to
follow the dominant narrative. But after seeing this play out once again,
especially from such a generally excellent show, I’ve had more than enough.
It’s time to stop shutting the door.

Allison Blythe is an urban planner and Chicago native who currently
lives in Brooklyn, NY. She tries to increase equity and improve the
quality of life for New York City residents through her work. She loves
to laugh, and you can have a drink with her at the happy hour for area
planners that she co-founded.

Our Scary Summer: ALIEN, the Energy Crisis and Desperate Consumerism


The cover of the June 1979 issue
of Newsweek featured an image of
Sigourney Weaver from Alien under the
caption: “Hollywood’s Scary Summer.” I was thirteen, and the horror movies
released that summer would form a kind of
grotesque carnival that mirrored my own and the world’s anxieties.
 Earlier in the spring, the disastrous nuclear accident at Three
Mile Island had occurred, and that summer major oil spills
polluted the waters of the Gulf of Mexico and the North Atlantic.  This
was also the year when oil prices doubled, Margaret Thatcher was elected, and
the Ayatollah Khomeini rose to power.  As I became aware of the
political and environmental degradation around me, the films I watched
reflected my awareness, as well as my own desires and fears as a thirteen-year-old, back at me.

I was caught between being a nerdy kid and a nerdy teenager;
the marketing strategies for promoting Ridley Scott’s Alien were similarly split. 
Following on the success of Star
Wars
, the film boasted special effects that would rival its
predecessor.  Reading about its
production in magazines like Starlog and
Heavy Metal, I joined other fanboys
in the building anticipation for its summer release.  The fact that Alien mingled SF with horror elements only further whetted my
appetite, but when the film was released with an R rating, I was consigned to
experiencing it only through its comic book tie-ins and bubble gum card series.  Surely this was the only R-rated film to have
spawned its own action figure, yet the peculiar split in this marketing
campaign seemed to reflect my own divided self, too old to play with toys, too
young to get into adult films.

As Newsweek’s
chosen symbol of a new wave of Hollywood horror films, Alien embodied other split identities.  Formerly considered a disreputable genre,
associated with cheap special effects and lurid story lines, horror seemed to
be emerging into the mainstream, backed by mega-million dollar budgets and
featuring distinguished actors and directors. 
The process that had begun with 1973’s The Exorcist, and continued with 1976’s The Omen, seemed to reach its tipping point in the summer of 1979
with Alien and The Amityville Horror, culminating in Stanley Kubrick’s The Shining the following year.  According to Newsweek: “What Alien proves
is that the B movies of yesterday provide the formulas for the A-movie
blockbusters of today.”

But any admirer of Ridley Scott’s film knows that the story
is anything but B movie fare.  While its
last half-hour seems patterned on the kind of murderous chase sequence
perfected the previous year in John Carpenter’s Halloween (1978), this is only one of the film’s many complex dimensions.  Interestingly, it is likely that this sequence,
along with the infamous “chest-burster” scene, brought the film its R
rating, while in fact its science fiction premise (the element of the film most
ostensibly appealing to younger audiences) more clearly mirrored what was going
on in that adult world I would soon be reluctantly entering.

While Star Wars was
predicated on an escapist premise that used science fiction conventions to
blast us into a galaxy far, far away, in the universe of Alien, space is confined, claustrophobic.  It is a universe very much like our own,
subject to the laws of supply and demand. 
As we watch a complex mass of space-borne metal slide slowly across the
screen, superimposed text tells us this is the commercial towing
spaceship Nostromo, hauling a refinery
and twenty million tons of mineral ore. 
Space, the final frontier, has become, like all frontiers, a resource to
be exploited.  The imposing size of the
ship is in perverse contrast to its seven-member skeleton crew, presumably the
result of corporate downsizing and its technological ally, automation.  It is some time before we encounter any
humans aboard, as the camera explores the ship’s instruments awakening to a
kind of ghostly, simulated life.  When
the crew is finally awakened, they emerge from steel cocoons that resemble both
eggs and coffins, clearly anticipating the deadly alien eggs the crew will
later encounter, but also figuring their grim dependence on the ship’s
technology. 

But of course it is not simply technology itself that
threatens the crew, but the exploitative uses to which it may be put.  The film gradually reveals that the real
villain of the story is not the fierce predator of the title, but the crew’s
employers, mega-corporation Weyland-Yutani, referred to simply as “The Corporation.”  Chief science officer Ash (Ian Holm) is later
revealed to be an android planted by the Corporation to superintend the capture
of the deadly alien for the company’s bio-weapons division.  Like the mineral ore the ship already
carries, the alien is yet one more resource to be commercially exploited, at
whatever cost.

Although I wasn’t yet old enough to have a driver’s license,
like everyone in 1979 I was highly conscious of rising gas prices and their
effects.  I didn’t understand the
relationship between what President Carter and Walter Cronkite repeatedly
referred to as the Oil Crisis and the complex geopolitical issues centering
on the Iranian revolution and the Ayatollah’s return to power.  Regardless, I watched those daily images of
gas station lines, so long they looked like shanty towns, with a grim
fascination, as they so closely resembled the conjoined images of excess and
destitution common to those post-apocalyptic films I loved from that era, films
like The Omega Man, Damnation Alley, and Soylent Green, films that seemed half in
love with the world’s death.  What did
the Earth the Nostromo’s crew were trying to get home to actually look
like?  Probably something very much like
the one depicted in these films, and to which the images I watched on the
nightly news seemed to be offering a disturbing preview.


For kids who wanted to act out scenes from Alien (a film they weren’t allowed to
see without a chaperone), Kenner offered their 18-inch action figure of the monster
created by the disturbed imagination of the late H.R. Giger.  Given the fact that this was ostensibly an
R-rated toy, with sublimated sexual imagery typical of its designer, it is
somewhat odd that this is the first action figure I regarded as too childish to
buy.  Taking the alien to school would be
an invitation to bullying, and playing with it alone at home just seemed
sad.  I still had my extensive collection
of Star Wars figures, but these were
beginning to gather dust on the shelf, reluctant as I might have been to part with
them. 

Looking back now, with my wariness of buying any products
that aren’t ecologically correct, I can retroactively congratulate myself on
not purchasing a large plastic figure made largely of petroleum products.  The smaller, 3 ¾” size of the Star Wars figures, as compared to the
foot-long G.I. Joe of eras past, was a deliberate response to the increased
production costs brought on by the Energy Crisis.  So the foot-and-a-half long Alien figure was actually an avatar of
wretched excess lurking in the toy aisle, a fitting embodiment of the film’s
tacit themes of consumption and exploitation. 

The other reason I couldn’t bring myself to buy the Alien action figure is that there was
something kind of sad about it.  Produced
as a single unit, there were no other figures it could play with: no Kane
action figure to eat, no Ripley for it to chase.  It was over four times the size of my other
action figures, so had I bought it the alien would have stood alone on the
shelf, never fitting in, a large wasteful consumer product good for nothing but
packing away, eventually to be sold on eBay, shipped in a FedEx package, hauled
to its destination aboard a vast commercial aircraft, most likely piloted by a
skeleton crew of seven, reduced by corporate downsizing.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

In-Between Man: An Appreciation of Jon Hamm


Three years ago, a
supercut video
premiered online that compiled
all of the times, up to that point, that actor Jon Hamm had said “what?” as Don
Draper on the AMC show Mad Men. It
did what supercut videos are often meant to do: recognize, decontextualize and
repeat a specific motif in film and television for comedic effect. Yet what is
most striking when you watch it now is all of the slight contextual/emotional
variation that Hamm could get out of one word. It could be construed as a
demonstration of Hamm’s wide-ranging and detailed capability as an actor,
something that usually isn’t required of someone with matinee idol looks.

Likewise, New York Magazine’s
Vulture blog ran a photo slideshow titled “24 Photos of Jon Hamm Making Silly Faces in Nice Clothes,” which functions much like the supercut video, this time
applied to Hamm’s public persona. It demonstrates that a) Hamm is,
ostensibly, a goofball and b) a dynamic physical expressivity is one of his
tools as a performer, just as the “Don Draper Says What” video demonstrates his
expressivity with language.

Yet what is most fascinating about
Hamm, beyond all this,
is that his stature as an actor and celebrity points to the way leading male
actors are often dichotomized. Either they are stoic, austere, and masculine,
or they are dynamic, demonstrative, and possibly funny. Of course such a
dichotomization isn’t clear-cut, but it reveals something about gender roles in
our culture. Call it the “Dad/Uncle” split. Men onscreen may be
pseudo-patriarchs, emblematic of some sort of “traditional” order. Conversely,
they may be lively, with a possibility of being affable and amusing, like
everyone’s favorite uncle. It could be argued that an actor’s longevity depends
on whether he can switch between these modes, or blend them.

While our notions of masculinity
should have room for both types of character, there are still good examples of
this dichotomy in action among current leading male actors. Consider Ryan Gosling. While he is
an undeniable talent, he has made a transition from being a mercurial performer
to a fairly fixed one. During his child-acting days, he was a song-and-dance
lad on The Mickey Mouse Club. Then,
starting in his late teens he became dynamic and often idiosyncratic in films
like The Believer, The Notebook, Half Nelson and Lars and the
Real Girl
. Now, in the wake of Drive,
he gives more restrained (and limited) performances in things like The Place Beyond the Pines, Gangster Squad and Only God Forgives. While there’s still a chance for him to give
performances with more range, it seems as though he has settled into a more controlled phase.

In the opposite direction, consider
William Shatner. At the start of his career, he was a more serious, sometimes
histrionic actor, appearing in Richard Brooks’ adaptation of The Brothers Karamazov, The Intruder, and on TV shows like The Twilight Zone or Dr. Kildare. Then he became Captain Kirk
on Star Trek, which, thanks to
syndication, grew from a cult series into a full-fledged franchise within ten
years of its cancellation. Consequently, Shatner became a household name. Not only that,
his very demeanor became a known, parodied quantity. At some point Shatner
became aware of this and parlayed it into a new phase of his career. Now he’s William Shatner,
an actor who’s in on the joke and more human for it. He has gone from a
vainglorious leader to a vainglorious, barely-aging elder who can take a pie in
the face.

Hamm seems to be right on top of the
“Dad/Uncle” split. Born in St. Louis in 1971, he played a string of bit or
supporting roles in movies and TV shows before Mad Men creator and showrunner Matthew Weiner handpicked him to be
the show’s lead character in 2007.

Weiner must have recognized a
paradoxical quality to Hamm that makes him a near-perfect fit for the role of
Don Draper, a creative director for the 1960s advertising agency Sterling Cooper
who, despite appearing like what was once (and what some still see as) an ideal of
American masculinity, was born Dick Whitman, an illegitimate farm boy who
appropriated the identity of his superior officer Donald Draper while serving
in the Korean War in order to go AWOL. Through a high degree of personality
compartmentalization, Whitman became
“Don Draper” but the character’s deception has slowly and drastically unraveled
throughout the series; in recent episodes, he has recognized the need to be
more transparent to the people in his life.

Hamm has come to embody the role so
well that Weiner has openly said that Draper is a work of collaboration between
the actor and showrunner. But as a result of becoming synonymous with the role,
Hamm’s work can be
disorienting when he doesn’t play Draper. Also, there’s a lack of roles
in TV or film that are a) as rich in character as Draper, b) able to utilize
Hamm’s multifaceted and dual acting style, c) more than “he’s a brilliant
maverick with quirky issues” (which is what so many lead male roles are now on
TV) and d) prolonged enough to allow for the extensive characterization that a
TV character allows. At this point, his success playing one of modern
television’s most iconic antiheroes could be as much a curse as a blessing.

When looking at performances that
Hamm gave in Mad Men episodes at the
beginning, in the middle, and near the end of the show’s run, it becomes clear
that Hamm’s layered portrayal of Draper has quite possibly made him a better
actor. In the series’ inaugural episode, “Smoke Gets in Your Eyes”, Draper is a
smart, slick ad man who’s revealed to be deceitful to his wife and family;
while it’s a calling card performance, it’s still rather straightforward and
un-dynamic. Granted, it was probably designed to be a template performance
within a pilot episode, emphasis being put on Draper’s archetypal façade and
attitude. But it doesn’t give much indication of what Hamm could or would do
later on in the series.

He would show it especially in Season Four’s “The
Suitcase”, which may be the series’ best episode: a poignant portrayal of
initial grief. By that point, it had been revealed that Dick Whitman befriended
Anna Draper, the real Don’s wife, sometime between the Korean War and his
advertising career, and that he
considered her to be the only person who really accepted him. But in 1964 she
succumbs to cancer and, during “The Suitcase,” Don does his daily work at the
agency as he suspects that Anna has passed, dreading making the call in order
to find out. Furthermore, work involves coming up with a pitch for Samsonite,
which involves harshly coaching his protégé Peggy (Elisabeth Moss) as she goes
through her own existential crisis on her birthday.

Mostly a two-hander showcase for
Hamm and Moss, “The Suitcase” runs the gamut of moods, and Hamm (along with
Moss) nails every emotional beat: Draper is stern, reluctant, berating, drunk,
amused, amusing, stupidly heroic, wistful, grieving and, in the end, amicable
towards Peggy. It’s a remarkable single-episode performance that is likely to
be Hamm’s shining moment as Draper if nothing in the upcoming final episodes
matches it.

Speaking of final episodes, [GENERAL SPOILERS FOLLOW] Draper has been
gradually redeeming himself in the most recent and penultimate season of Mad Men, which has required him to
recognize that real personal change comes at the cost of accepting loss and
defeat. His marriage to his second wife Megan (Jessica Paré) has fallen apart
and, having returned to work after a forced hiatus caused by unprofessional
behavior, he has deigned to do copywriting work while being scrutinized by his
colleagues. At the same time, he has become more willing to reveal his true
self to those in his personal life, which implies that he’s trying to dissolve
his double nature. Instead of remaining a poisonous amalgam of different
personae, Draper is attempting to be a whole person. Remarkably, even after
seeing the character behave horribly for the umpteenth time in Mad Men’s sixth season, Draper’s
prolonged, humble pie redemption is believable. This, too, is a result of
Hamm’s well-honed versatility in the role, which is layered enough to allow for
more positive character development.

Hamm’s attempts to become a leading
man for the movies mirror Draper’s transformation somewhat. During Mad Men’s run he has appeared in a
variety of films and TV shows, usually to demonstrate his comedic chops in
things like 30 Rock, Saturday Night Live, Children’s Hospital, A Young Doctor’s Notebook and Bridesmaids. And as the star of last
month’s release, Million Dollar Arm,
he just barely makes an egocentric
sports agent who recruits two Indian kids to be major league pitchers into a
tolerable, decent guy through sheer charm. (On paper, the role is less an
anti-hero than an “anti-protagonist” that could’ve been an insufferable
representation of entitled, insensitive white dudes in the hands of another
actor. As it is, the movie is a passable yet questionable sports tale that,
despite good intentions, privileges the wrong point of view.)

Yet one would hope that Hamm would
take a cue from the arc of his most famous creation and try to find roles that
befit and synthesize his dualistic, complex qualities as an actor. As a
widespread examination of gender politics corrects long-standing issues and
relativizes the concept of masculinity (call me a Pollyanna, but I believe this
is happening more than ever), we need leading men in our movies and TV shows
who can mirror and influence that shift. Don Draper’s characterization can be
interpreted as a deconstruction of traditional manhood, one that acknowledges
it still exists while demonstrating how it can cause interpersonal chaos, just
as sociological and psychological studies have demonstrated that emulating
traditional gender roles (i.e. that men should be tough, emotionless,
unnecessarily callous, entitled, powerful and uninvolved in childcare) most
often leads to interpersonal problems as well as mental and physical health
issues.

Hamm may not be fortunate enough to have other roles as
successful as Don Draper. But if he has any control over the outcome of his
career after Mad Men, he will
hopefully find work that suits his talents but also continues to blur the
“Dad/Uncle” dichotomization of leading men, which might help
redefine cultural notions of masculinity. We need leading men who can
positively destabilize mandated gender roles. So who better than Hamm, the
actor who has helped to complicate and reveal old-school manhood as Don Draper,
to do just that?

But in case he can’t, maybe Hamm
could do what Leslie Nielsen did later in his career: become a full-fledged
silly actor. It may not fulfill any ideals presented above, but he would be
good at that.

Holding
degrees in Film and Digital Media studies and Moving Image Archive
Studies, Lincoln Flynn lives in Los Angeles and writes about film on a sporadic
basis at
http://invisibleworkfilmwritings.tumblr.com. His Twitter handle is @Lincoln_Flynn.