How THE INTERNET’S OWN BOY Raises the Wrong Questions–Or Are They?

The Internet’s Own Boy, a recent documentary about the short
life and suicide of Aaron Swartz, raises a lot of questions, and it
moves forward very swiftly, efficiently, and with a fair amount of
heartbreak—but some of the questions it raises are not the ones you might think it
would raise. The story, oft-told, runs as follows: after helping to develop
RSS, after creating the information-sharing website Reddit, and after hacking
into JSTOR and downloading many rights-bound academic documents, Swartz was
ambushed by the federal government and threatened with an extremely harsh prison
sentence, at which point he hanged himself at age 26, in January 2013. The questions you would
think one might come away with are: how could the government do this? What was
wrong with Swartz’s hacking activity? How can we change society to loosen
corporate control over data? And yet, because the film provides ready answers
to these questions—the government treats citizens unfairly, there was nothing really wrong with Swartz’s activity, and
we must protest individually, each day, respectively—the questions one is left
with, and which the film does not answer, are slightly more pedestrian, more
likely to come from a kindly grandparent than a curious absorber of information
in the 21st century. They run something like this: Was he depressed, even if the film says he
wasn’t, really? Did he not think he would be punished? And what’s the
cumulative effect of spending your life on the Web? Un-sexy questions, all. But
necessary, and ultimately valid, given that the filmmakers seem to have
resolved more thorny debates before the film has even begun. In glazing over these issues, the director only makes them stand out more boldly.

It should be said, at this juncture, that Swartz is a
fascinating, brilliant figure. The footage director Brian Knappenberger displays here reveals a person
with a relentlessly inquisitive mind, inquisitive almost to excess. Swartz made
his first accomplishments at age 14, developing the mechanism of the RSS feed with programming experts far his
senior; even as a teenager, speaking on a stage as part of a professional
panel, he has tremendous charisma. The film shows extensive interview footage of
Swartz, and as with other similarly driven, impish figures (the Bob Dylan of Don’t Look Back comes to mind), the young man
is interesting to watch. At one moment he smirks; at another he seems
wide-eyed; at still another, he seems a million miles away. He seems as if he
might be the sort of person—they’re fairly common—who talks to you without
really talking to you, radiating a certain blankness that is nevertheless
animated enough to be watchable. As he speaks about his goals, and about the
“realization” that the power structures surrounding the protection of
information (on the Web in particular) are flawed and unethical, one has the
strong sense that Swartz is not really “in” the conversation, that the conversation
he, Swartz, is having, is an entirely different one from the exchange he is
having with his interviewers, that the sights Swartz has his eyes on are too
large to be contained, really, within the confines of a documentary. This is as it should be, given that he had a tremendous, expansive mind, and it’s unlikely that any simple question from an interviewer would get a simple answer from him. The
director supplies quite a bit of information about Swartz and his life’s work
through his interviewees, including Cory Doctorow, Lawrence Lessig, Swartz’s
two partners near the end of his life, and others. These individuals are quite
voluble about a couple of things: their intense involvement with Swartz, which
has lasted beyond the grave, and the rightness of all that he was doing,
whether that meant making it more possible for more people to have more access
to information, or, as it happened, getting that information for himself,
without asking or obtaining permission—permission, in this case, being a funny
word, implying that those who “owned” the JSTOR documents Swartz downloaded
could legitimately claim the authority to guard them from the public. The film
indicates, with the moral equivalent of a sledgehammer, that such authority
could not be legitimately claimed by anyone.

Framing Swartz’s moral unimpeachability—as well as that of
hacker groups like Anonymous or Wikileaks—as a certainty causes the mind,
ultimately, to wander to other questions about this hero. These are bad questions to hear
oneself asking. If someone risks their life, essentially, to make
information more broadly available and loosen the chokehold of corporations
over data, should the first question be, But
why did he kill himself?
Well, possibly it should. From the very start of
the film, Swartz appears a very headstrong, some would say beautifully
obstinate child. He reads at age 3. He doesn’t like school because he learns
better by reading by himself. In his early teen years, he only eats white food,
which is, by and large, an unhealthy diet. What he really wants to do is work
on his computer, a machine which will never talk back to him, which he can
control, and which is, essentially, the site of a bottomless project for his
young mind. We’re given no clear indications, in the film, that Swartz was an
unhappy child—and yet we’re also not given any indication that he had any other
interests besides the electronic coursings inside his computer after a certain
age (we see he has a large book collection, but his primary allegiances seem to lie elsewhere, at least as the film portrays it)—and beyond that, an interest in making things right, as a sibling
expresses it, a sense that he had a firm idea of justice and injustice, which
he would spend his life trying to execute, by the use of the Internet. And what of the Internet here? Swartz, and his colleagues interviewed in the film, seem very much under the sway of its importance and strength, as evidenced by their
vocal inflections and their firm belief in Swartz’s work—and yet this tool for
gaining information is never treated as fallible. When one is searching the web
for data, one is not engaging with others; one is completely alone. Regardless
of Swartz’s sociability—he seems to have been quite attractive to women, at
least in his twenties; the film shows him drifting from one relationship to
another fairly fluidly, even at a time when he was being questioned by the
Feds—he projects a personal shield in the film, a certain recessiveness which speaks more loudly, in some ways,
than his accomplishments (even including the legions of Internet publications he helped to begin) or than his justification for committing the acts
which ultimately caused his legal troubles. Nihilistic
is possibly not the word for someone with such a strong moral sense, and
yet one might possibly say he cloaked a certain nihilism, paradoxically enough,
in what he saw as concern for the common good. A concern, indeed, so strong,
that he was shocked when the authorities (the Feds) did not recognize his
activities as harmless. Which raises another bad question: didn’t he see it
coming? Could he have honestly been surprised that the hungry lion of the
federal government, when he presented himself as a piece of red meat, opened its
jaws? It’s terrible to ask this question, possibly stupid, beside the point, wrong-headed, but the film’s one-sidedness doesn’t
leave any choice. The question rises, and we don’t get an answer.

Watching The Internet’s Own Boy: The Story of Aaron Swartz reminded me of something
that happened to me recently. I had just finished reading a fairly long novel,
and, as is my habit, I had chosen another one to begin; I brought the book with
me to read during lunch. I also brought my iPhone. In an idle moment, I checked
my Twitter feed and noticed quite a bit of chatter surrounding one person, or
rather two: a book critic prone to engaging with others in protracted,
occasionally vulgar Internet spats and a newly debuted novelist whose previous
stints included an editorial position at an Internet gossip site. The critic,
after publishing an 11,000-word blog post rant on the novelist’s hypocrisy and
wrong-doing, announced via Twitter that he would be committing suicide shortly,
even Tweeting a picture of the bridge he planned to jump off of. I was quite
fascinated, reading the rant that preceded the threat, reading other Tweets
about the critic, the threat, the novelist, the 11,000-word blog post itself,
and anything else I could find about the event. By the time I had sufficiently
immersed myself in this data, my lunch was done, I had to leave, and I hadn’t
cracked the new novel. Walking out of the sandwich shop into the rather brisk
afternoon, I had to wonder a couple of things: would the same events have
transpired (the critic retracted his threat, but still) without the Internet’s
facility of communication and articulation? Had the two individuals only
interacted in person, would the exchange have headed in the same direction? I
also wondered: why didn’t I just read the novel? Why read about all of this, in tiny lettering, on my phone? My feeling after absorbing all
of this information was sadness, of course, and emptiness, and exhaustion, but I can’t be sure if these feelings
were due to the information itself or due to the obsessive, stoplessly gluttonous
way in which I absorbed it, staring fixedly at a small screen which reflected, however
dully, my own face, my own fixed stare.

Max Winter is the Editor of Press Play.

OUR SCARY SUMMER: David Cronenberg’s THE BROOD and the Weirding of the American Family

I’d never thought of my family as hip, but for a brief time,
in 1979, it seemed as if we were on the cusp of a rising trend.  We were in family therapy, proudly airing our
co-dependencies and dysfunctions, along with so many other American families
caught up in the family therapy movement, reflected in the era’s
pop culture.  The prime-time soap Knots Landing debuted in 1979, setting a
new trend for dramas that favored pseudo-domestic realism and familial
dysfunction.  Oscar-winning films like Kramer vs. Kramer (1979) and Ordinary People (1980) seemed to
underscore an increasing fascination with sifting through the American family’s
dirty emotional laundry.  The narrative
structures of these dramas mirrored that of therapy itself, as dysfunctional
behavior leads to crisis, followed by reflection and self-examination, and
finally healing and self-actualization. 
Seeing these films was like undergoing vicarious family therapy,
creating the illusion that we were facing, and then working through, our
collective neuroses.

Thankfully, the horror films of those years provided an
antidote to the kinds of facile, feel-good narratives that abounded in popular
realist dramas.   While we were being encouraged to work through
our problems, to process and move towards acceptance, a different kind of
advice was offered in the tagline to the summer of ’79’s big horror hit The Amityville Horror: “For God’s sake,
get out!”  While the Lutz family gets
away at the end, the conflicts and tensions that emerge through their harrowing
residence in a haunted house are never really solved.  The resentments and fears linger rather than
being “worked through.”  Growing up in
what I was soon to learn was a classically dysfunctional family, horror films
provided another mode of storytelling that served as an antidote to the vapid,
feel-good narratives of popular dramas and family counseling. 

Few films expose the limitations of therapy narratives more
ruthlessly than David Cronenberg’s The Brood.  After having explored the psychosexual
demons haunting the individual human psyche in Shivers and Rabid, the
Canadian director anatomized the late-seventies zeitgeist by turning his
peculiar attention to the monsters lurking within the fractured family.  The Brood
reads like the rotting underbelly of Kramer vs. Kramer, a divorce/child custody drama in which monsters
proliferate rather than being put to rest. 
After a long and tear-jerking custody battle the Kramers resolve their conflicts
amicably, setting free what they love, while The Brood suggests that there is no such thing as emotional
closure.

Like Meryl Streep’s dissatisfied housewife, Joanna Kramer,
Nola Carveth (Samantha Eggar) is hoping to find herself.  Rather than fulfillment in a career, Nola seeks
self-actualization at the ominously named Somafree Institute, an experimental
therapy center headed by the bearish psycho-patriarch Hal Raglan (Oliver Reed).  Nola’s husband Frank is disturbed to discover
that their five-year-old daughter Candice has a number of bruises on her body
after having recently visited her mother at Somafree.  As he confronts Dr. Raglan, he is told that
Nola is undergoing a critical stage in her therapy, and can’t be disturbed by
accusations of physical abuse. 

We only see Nola in the context of the Somafree Institute, a
narrative choice that frames her identity exclusively in terms of the
therapeutic setting.  The architecture
and interior design resemble a modern-rustic 1970s spa or
ski resort, mingling recreational coziness with institutional chill.  This emotional ambivalence permeates Dr.
Raglan’s therapy sessions, which exhibit a disturbing combination of empathy
and disdain.  Large, A-frame windows
reveal the bleak, late-winter weather, reducing the outside world to an
emotional void, and reinforcing a need for shelter that the Institute only
partly fulfills. 

Dr. Raglan practices a peculiar method of therapy branded as
“Psychoplasmics,” in which the patient re-enacts traumatic emotional events in
order to externalize or actualize them physically as well as psychically. It is a process of self-transformation that
becomes grotesquely real, as patients manifest their mental anguish through
bizarre physical transformations. Psychoplasmics
is an apt word to describe the kinds of special effects Cronenberg would become
notorious for in future films such as Videodrome
and Scanners, which mingle the
organic and the synthetic in the director’s disturbing re-imagining of the
physical body. Cronenberg has become
known as a purveyor of “body horror,” in which the monstrous arises from within
rather than without. The Brood cunningly turns this motif
into a metaphor for psychotherapy itself, which seeks to dredge up and cast out
the monsters haunting the unconscious.  But in The Brood these
monsters don’t simply go away: they seek out our loved ones and prey upon them.

In this respect, the mind’s monsters resemble the practice of
psychotherapy itself, which in Cronenberg’s film seems to foster a parasitic relationship between therapist and subject in which one gains
strength from the other. Oliver Reed perfectly
captures the smugly knowing, seemingly empathetic but oppressively overbearing
quality of the seventies therapist guru. Chest hair spilling from his open shirt, asserting his masculinity while
implicitly inviting his patients to “let it all hang out,” Raglan leads his subjects
through emotionally-fraught role playing games in which the roles seem to
shift, but he is always the one in control. 

Drawn to the film for its sensationalistic elements, I was
disturbed to find in Oliver Reed’s character a dead ringer for William Braun,
the director of family therapy at the Minneapolis Family Center, or MFC, where
my family was undergoing ten weeks of intensive therapy.  My sister and I had renamed it KFC for what
we recognized even then as an artificial, pre-packaged brand of therapy, but
for my mother these ten weeks were going to save our family.  My father was an alcoholic, but we’d learned
that his problem was our problem, in a self-perpetuating cycle of co-dependence
that only MFC could break.  We would all
have to search ourselves and dredge up our psychic demons in order to create a
healthy family environment.

In the mornings we’d all be split up into separate group sessions
organized by age level and mode of substance abuse, which came in two brands,
dependent and co-dependent.  There’s
nothing like putting a bunch of thirteen-year-olds together in a room, overseen
by an adult mental health professional, for getting the kids to open up and
share their most intimate thoughts and feelings.  These sessions dragged on interminably, as
would the various group activities and role-playing games that would fill the
middle part of the day.  Most disturbing,
however, were the group family sessions, in which three or four families were
gathered together, each to address their issues under the shared guidance of a
professional therapist. 

My mother was ecstatic to discover that our group’s
therapist was none other than the actual director of the Center, William Braun,
who was reputed to have done great work for families of alcoholics.  While it took a while for the parents to warm
up to the uncomfortably public nature of these sessions, after a few weeks some
of them really started to get a taste for it, and were soon vying for the burly
therapist’s attentions, especially the mothers. 
The teens in the room studiously avoided eye contact, as their parents
laid their emotions bare in sessions that routinely included crying jags,
shouting matches and tearful reconciliations. 
One session that I will never forget culminated in an impromptu exercise
in primal scream therapy, in which Braun and an emotionally distraught Mrs.
Knutson kneeled together on a large throw pillow, as he squeezed cries of
mingled anguish and ecstasy from the depths of her body.  I’m not sure what Mrs. Knutson got out of it,
but I had to sleep with the light on for several days afterward. 

Though I would go on to seek therapy in subsequent years, occasionally
with some benefit, I can’t imagine what treatment could have been more
effective at the time than seeing The Brood, which allowed me to
watch the same kinds of bizarre rituals I saw
enacted in family therapy, but performed in a way that acknowledged their disturbing
strangeness.  Though my motives in seeing
films like Cronenberg’s might not have been so different from those of other
filmgoers working through their issues vicariously, horror films, at least for
me, have always offered a more honest, less processed form of narrative than
realist family dramas, or, for that matter, institutions like KFC.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

What Mise-en-scène Is and Why It Matters

1. What Is Mise-en-scène?

 

Any student of the cinema quickly encounters the term mise-en-scène, and often comes away the worse
for wear. The word—or is it words?—is long and funny-looking (to those who
don’t speak French). Making matters worse, the term isn’t always spelled the
same way: sometimes there’s an accent, sometimes there aren’t any hyphens, and sometimes
it’s written in roman type, not italics.

The term’s meaning is similarly complex, having shifted many
times over the years since its creation; it has also gotten bound up in several
different arguments, many of which we no longer inhabit directly. In this
article, I aim to survey that evolution, paying special attention to how it has
become associated with only particular types of filmmaking—the cinema of the
long take. Finally, I’ll argue against that tendency, and attempt to
demonstrate the relevance of mise-en-scène
to the short take.

First things first. Mise-en-scène
was applied to film in the 1950s by the French critics writing at Cahiers du Cinéma (Notebooks on Cinema). They borrowed it from French theater, where
it essentially referred to everything that appears on the stage (it literally
means “putting in the scene”). The thinking was that a film’s mise-en-scène consisted of everything
that the camera sees: the setting, the lighting, the actors, their performances
(including blocking), costumes, makeup, props. It also referred to how those
elements were arranged within the frame—in other words, it was synonymous with
the shot’s composition.

A few problems sprang up immediately. The first was that the
Cahiers critics never defined their
term all that precisely. Alexandre Astruc famously called mise-en-scène “a song, a rhythm, a dance” (267); in a 1998 interview,
Astruc’s Cahiers colleague Jacques
Rivette claimed, “Here’s a good definition of mise en scène—it’s what’s lacking in the films of Joseph L.
Mankiewicz” (Bonnaud). I thought at the time that Rivette was simply being
cheeky, but there’s a way in which he’s also deadly serious: he means that All About Eve, despite literally having
lighting and staging and props and settings, etc., nonetheless somehow lacks a
certain special quality, which is mise-en-scène.
Delving into the Cahiers writing of
the 1950s makes it apparent that there was, right from the start, a tendency to
define the concept loosely, poetically—which is what led critic Brian Henderson
to later call the term “undefined” (315).

The second problem occurs when you consider how people who
make films see different things than those who view films. When you watch a
play, the stage is in front of you, and it’s clear what’s on it and what isn’t.
But films differ from theater in two key aspects. One, the camera frames the
image. Two, cinema includes cuts (edits).

Let’s say you’re making a film, shooting a scene on a busy
street. The camera sees only so much of that street, but you, being there, can
see the whole thing (and the actors can see the whole thing, which presumably
influences their performances). Where does the mise-en-scène begin and where does it end? What’s more, a lot of
what you shoot won’t end up in the film—parts of takes, and perhaps even whole
takes (what we today call “deleted scenes”), will end up on the cutting room
floor, or in some separate portion of a hard drive. What happens to the mise-en-scène of those images?

This is why mise-en-scène
isn’t really a production term—as Astruc had already noted by 1959, it’s not
something that filmmakers talk about when they’re shooting (267). Instead, it’s
a critic’s term, referring to the content of shots that appear in the finished
film. And since it refers to the content of the shot, then it also must refer
to camera movements, since panning and tracking change the shot’s content.
(The famous long take in Goodfellas
that follows Henry Hill and his date as they enter the Copacabana via the
kitchen features more than one setting, as well as numerous actors, props,
costumes, and so on.)

So mise-en-scène
refers to the entirety of any given shot: the stuff that was filmed, as well as
how it is framed (and how that changes). And in many places, the term has more
or less survived into the present day in this form. For instance, here’s how Ed
Sikov’s Film Studies: An Introduction
(2010) defines it:

“Everything—literally everything—in the filmed image is described by the term
mise-en-scene: it’s the expressive totality of what you see in a single film
image. Mise-en-scene consists of all the elements placed in front of the camera
to be photographed: settings, props, lighting, costumes, makeup, and figure
behavior (meaning actors, their gestures, and their facial expressions). In
addition, mise-en-scene includes the camera’s actions and angles and the
cinematography, which simply means photography for motion pictures. Since
everything in the filmed image comes under the heading of mise-en-scene, the
term’s definition is a mouthful, so a shorter definition is this: Mise-en-scene
is the totality of expressive content within the image.” (5–6, italics in
the original)

But when one stops to think about this concept, one sees how
even this is problematic. For one thing, how is mise-en-scène any different from the term “shot”? Or “composition”?
Obviously, we’re not dealing with the actual things in the shot—the actual
setting, the actual props—but a two-dimensional record of them, frozen in a
particular arrangement. What’s more, if every shot is essentially its mise-en-scène, and a film is made up entirely
of shots, then isn’t mise-en-scène in
fact synonymous with the entire film? Which is to say, isn’t mise-en-scène synonymous with cinema
itself?


2. Mise-en-scène and the Long Take

Some critics noted straight away that the one thing that mise-en-scène didn’t refer to was editing. As such, they started using mise-en-scène and editing as antonyms.
Here it will help to know that, for the Cahiers
critics, editing was a hotly contested topic. Simply put, certain film theorists
who had gone before them—namely Lev Kuleshov and Sergei Eisenstein—had
emphasized the importance of editing, or montage. To them, the artistry of
cinema lay very much in how a film was assembled from disparate shots. This was
due to their noticing early on how editing could be used to create wholly
artificial relationships between shots. For instance, you could shoot a person
looking up at something on one side of town, then go to the other side of town
and shoot an image of a sign. When you edited them together, the resulting film
gave the impression that the person was looking at the sign, even though that look
was impossible in real life. Similarly, you could film a person walking into a
building in one locale, then film a different interior. And so on.

There proved to be no end to the artificial relationships
that you could create between shots. We partly understand this phenomenon today
as the Kuleshov Effect. If you take a picture of a man’s face, and follow it
with a shot of a bowl of soup, it creates the impression that he’s hungry. But
if you follow it with a shot of a woman reclining on a divan, it makes it look
like he’s ogling her. (See this entire clip for a humorous description by
Alfred Hitchcock of how editing changes the way viewers interpret shots.)

To put things very crudely, the critics at Cahiers du Cinéma began questioning the
importance of montage. They were led here by André Bazin, whose background was
in documentaries and Italian Neorealism. As such, he was less interested in how
cinema could artificially warp reality, and more interested in how it could be
used realistically. Accordingly, he devised an argument of cinematic realism in
which he proposed that the history of cinema was one of an increasing capacity
for realism. The way he saw it, improvements in film technology allowed
filmmakers to more faithfully capture reality. Improved film stocks (including
the development of color) allowed for higher resolution images. Sound cinema replaced
silent cinema. Widescreen formats allowed for larger compositions. Cameras got
smaller, enabling filmmakers to leave studios and shoot on real locations. Lenses
improved, allowing for deeper focus shots. And takes could also get longer and
longer, being less limited by the capacities of earlier reels.

Given this, Bazin and his protégés deemphasized the artistic
importance of the cut. They argued that the cut had less to do with the “expressive
content” of cinema than did the content or composition of the shot itself. So it’s
no wonder that they invented and emphasized the concept of mise-en-scène. And it’s because of this period in film criticism that
the term came to mean something opposed to editing. People began speaking of
two different approaches to filmmaking: editing (or montage) vs. mise-en-scène (which got tied up with other
devices that Bazin favored—long takes and deep focus). According to this line of
thinking, a director necessarily favored one approach over the other. The art
of cinema was either one of cutting or of long takes.

Critics, too, often fell into one camp or the other. Those
who supported montage noted how editing allowed for the manipulation of reality,
and the creation of effects that were impossible in real life. Such arguments,
of course, became the very grounds for dismissal from the long takes / mise-en-scène camp. To them, filmic
artistry depended not on artifice, but on the faithful imitation of reality. According
to this line of thinking, since we experience time and space continuously, a
superior cinema—a primarily realist cinema—should by definition avoid cutting. Returning
to our earlier example from Goodfellas: when we follow Henry Hill
from his car through the kitchen and to a table in front of the stage at the
Copacabana, we see how all those spaces are connected; we aren’t just cutting
from an exterior shot to an interior shot on a back lot or on a soundstage.

If these arguments sound quaint, then I hasten to stress
that I am indeed oversimplifying them here in order to highlight a very
particular historical debate. It’s also worth mentioning that Bazin died quite
young, at the age of 40 in 1958, and as such had no control over the ways in
which his arguments were later transformed by some into clichés. There are of
course complexities and subtleties to this long history of criticism that a general
survey necessarily omits. It is also indisputable that modern film studies is
largely based on the work of Bazin and the Cahiers
critics. Without their contributions, we critics of today would be
significantly impoverished. (We might not even be here!)

That having been said, there is a historical tendency to
oppose mise-en-scène to montage—an
entrenchment that lives on today in various forms. It’s hardly unusual to hear film
buffs claim that long takes are somehow inherently superior to shorter ones.
For instance, cinephiles often celebrate movies like Goodfellas and Russian Ark
and Children of Men and Gravity simply because they feature
long, complicated shots; meanwhile, people dismiss Michael Bay’s Transformers films, or movies like Quantum of Solace, because they feature way
too much cutting. These arguments are heir to the debate between Bazinian mise-en-scène and Eisensteinian montage.
Meanwhile, plenty of critics continue to equate mise-en-scène with long
takes—see, for example, the opening line of Ben Sachs’s recent Chicago Reader
review of Gareth Edwards’s Godzilla, as well as this AV Club article by Mike
D’Angelo, which directly engages the debate between editing and long takes (and
does so by opposing mise-en-scène to montage). And the Wikipedia article
on mise-en-scène, while garbled
as a whole (and of course always prone to sudden revision), contains some
language equating mise-en-scène with
long takes.

(Actually, the Wikipedia article is even more restrictive in
its usage, equating mise-en-scène only
with something called “oners,” or scenes that are filmed in single takes, and
that also feature mobile camerawork. This is so selective an association that
it renders the term practically useless. It’s also fairly nonsensical. This
particular line was added
by a now defunct Wikipedia contributor, “StephanDuVal,” who popped into the
conversation for twenty minutes two years ago, then disappeared. Since then,
various users have randomly appended sources that themselves don’t employ the
term, resulting in the kind of hodgepodge so typical of Wikipedia. Instead
of defining the term objectively, the article stakes out a peculiarly small
tradition. A term that was once seemingly synonymous with all of cinema is
there reduced to the point where it refers only to a minuscule number of shots
in a minuscule number of films! Not even the most fervent devotees of Bazin
ever restricted the usage of mise-en-scène
to scenes that were executed in single, mobile takes.)

3. Mise-en-scène and Its Discontents

Keeping this convoluted history in mind, I want to examine
now what is overlooked by the historical tendency to associate mise-en-scène with the long take, and to
oppose it to editing, because I believe that these traditional oppositions and associations
limit our understanding of the richness and artistry of the cinema.

For starters, let’s look more closely at Bazin’s argument
that the long take is better than the short one for representing reality. A
commonly heard argument here (one still hears it today) is that whenever a filmmaker
cuts, he or she is guiding the viewer’s attention, and forcing them to look at
particular things in particular ways. By way of contrast, Bazin argued that
long takes allowed viewers more freedom—they could look where they wanted. This
contributed to the idea that long takes are somehow more respectful of film viewers,
and as such require more sophisticated viewers. Over time, this created the
kneejerk association that long takes are somehow smarter than shorter ones (an
idea that lives on in the attacks on Michael Bay).

But is this argument necessarily true? There are many
reasons to doubt it. For one thing, all shots, long and short, are equally
artificial. It simply isn’t the case that as a shot gets longer, it somehow gets
truer. To think that way overlooks the artifice of the long take.

For instance, Bazin’s arguments about how long takes were more
respectful or less manipulative than shorter ones don’t always hold up to
scrutiny. As it turns out, there is nothing stopping long takes from being just
as composed and manipulative as shorter ones. Directors have many tools at
their disposal to direct the viewer’s attention through the long take, just
like they do in shorter ones. Composition can be, and often is, a means for directing
attention. So, too, are performances and camera movements. In other words, there’s
no reason to assume that mise-en-scène
is any less “manipulative” than editing.

This point is well made by David Bordwell in his article “Widescreen
Aesthetics and Mise-en-Scène Criticism” (1985, available
here as a PDF
). In particular, Bordwell observes how Otto Preminger’s use
of widescreen was celebrated by certain critics operating in the Bazinian
tradition. He relates how two critics writing in the Bazinian tradition, V.F.
Perkins and Charles Barr, praised a scene in River of No
Return
(1954) in which Marilyn Monroe’s character, Kay, drops her
valise while boarding a raft. As the scene continues, we see the valise drifting
away in the background of other shots. The argument here is that Preminger has
left it up to the viewer to see this detail, even as the action continues in
the foreground. Both Perkins and Barr celebrated Preminger’s employment of long
takes and deep focus, arguing that they gave his films a kind of naturalism,
transparency, and subtlety.

But Bordwell argues that this isn’t the case at all. In his
analysis, he notes how the film actually employs several devices that function
to draw attention to the disappearing valise:

“When Kay drops the valise she glances
frantically toward it and cries out, ‘My things!’ Harry shouts, ‘Let it go!’ At
the same moment, the camera pans sharply to the right to reframe the valise,
and a chord sounds on the musical track. Our attention to the drifting bundle
is just as motivated. For one thing, the bundle is initially centered when Matt
and Henry pass. Furthermore, Preminger has anticipated this camera position a
few shots earlier, when Matt ran to the edge of the bank. It is common for a
classical film to establish a locale in a neutral way and then return to this
already-seen camera setup when we are to notice a fresh element in the space.
We thus identify the new information as significant against a background of
familiarity. As a fresh element in a locale we have already seen from a
comparable vantage point, the bundle becomes noteworthy. In sum, Preminger’s
staging of the scene stands out because it avoids editing, but it uses other
means to draw our attention to the bundle—centering, the return to a familiar
setup, and the repetition of cues for the bundle’s loss.” (22–3).

So much, then, for Preminger’s supposed naturalism and transparency.
His long takes and use of deep focus—hallmarks of Bazinian realism, and supposedly
free of manipulation—turn out to be saturated with artifice, and highly
manipulative. (Preminger is hardly the only example one can find of this—Citizen Kane, for instance, also uses
composition and sound cues to focus its viewers’ attention, in addition to its celebrated
usage of long takes and deep focus.)

Another problem with the critical tendency to oppose long
takes and editing is that it ignores the many ways that those two techniques commonly
work together. This point is well made by Brian Henderson in his 1976 essay “The
Long Take,” which seeks to deconstruct the false binary between Bazin’s long
take and Eisenstein’s montage. For one thing, Henderson points out that long
takes still have duration, beginnings and endings, and as such still employ
editing—they’re edited together. What’s more, even a director like Max Ophuls—truly
a master of the long take if there ever was one—rarely assembled his films out
of nothing but long takes. Thus, a film with many long takes may also feature
shorter ones, and those shorter takes may in fact come between long takes:

“The present article takes its chief
emphasis from the fact that the long take rarely appears in its pure state (as
a sequence filmed in one shot), but almost always in combination with some form
of editing. […] Most analyses of long take directors and styles concentrate on
the long take itself and ignore the mode of cutting unique to it—what we call
below the intra-sequence cut. But such cuts or cutting patterns (one could even
speak of cutting styles) are as essential to the long take sequence as the long
take itself.” (316)

Throughout the essay, Henderson patiently draws attention to
these problems in order to ultimately argue that a film criticism that simply opposes
long takes and editing is bound to overlook the crucial role that editing plays
in defining the long take, and sequences of long takes. His goal is to point
out an area of filmmaking that has largely gone unstudied. Sadly, the tendency
to diametrically oppose mise-en-scène
with cutting prevails nearly forty years later, leaving a fascinating realm of
cinema still largely unstudied. At the present moment in popular film
criticism, the championing of long takes has once again risen to something of a
fetish. It receives a disproportionate amount of attention despite the fact that the long
take is but one element of filmmaking, no better or worse than any other.

A related problem is the tendency to measure the length of a
film’s takes by calculating the Average Shot Length, or ASL. This value is
calculated by dividing a film’s running time by the number of shots it
contains. And ASL is a very useful value in many respects. For one thing, when
one surveys many different films, ASL can give a general sense of how rapidly
films are cut in a given place or time. Thus, one can say that the average rate
of cutting in Hollywood cinema has increased throughout the sound era. Or, one
can attempt to catalog which contemporary Hollywood films feature the longest
ASLs—as
I did here
, in an earlier article for PressPlay.
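The calculation itself is trivial, which is part of its appeal. A minimal sketch (the 90-minute, 156-shot figures are the approximate Gravity numbers discussed below; they are reported values, not my own measurements):

```python
def average_shot_length(runtime_seconds: float, shot_count: int) -> float:
    """ASL: a film's running time divided by its number of shots."""
    return runtime_seconds / shot_count

# A 90-minute film containing 156 shots (the approximate
# figures reported for Gravity):
asl = average_shot_length(90 * 60, 156)
print(round(asl, 1))  # ~34.6 seconds per shot, i.e. "roughly 35"
```

The simplicity is also the limitation: the function returns a single mean and discards the entire distribution of shot lengths.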

But ASL also leaves out a lot of information, especially
when one is analyzing specific films. Alfonso Cuarón’s Gravity,
for instance, has an ASL of roughly 35, since it features 156 shots in 90
minutes. (I’m speaking approximately here; I’m going off reported numbers, and
haven’t performed this analysis myself. Also, I don’t know the actual runtime
of the film, sans credits. The exact ASL, however, is beside the point.) Having
in hand an ASL of 35 doesn’t mean that every shot in Gravity is 35 seconds long—or indeed that any shot in Gravity is 35 seconds long. The opening
shot, for instance, is at least twelve minutes long—meaning that, on average,
the remaining 155 shots have an ASL of roughly 30 seconds. And for every shot longer
than that, there must also be shots shorter than that. What of them?
Are they any good? What is their relationship to the longer shots in the film?
Or is Gravity only good during its
long shots? (And if that’s the case, then why?)
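The skew that a single long take introduces can be made concrete with the same approximate figures; again, a sketch using the reported numbers above rather than measured ones:

```python
runtime = 90 * 60        # reported runtime in seconds (approximate)
shots = 156              # reported shot count
opening_take = 12 * 60   # the roughly twelve-minute opening shot

overall_asl = runtime / shots                           # ~34.6 s
remaining_asl = (runtime - opening_take) / (shots - 1)  # ~30.2 s

# One shot occupying over an eighth of the runtime pulls the average
# well above the typical shot--exactly the information ASL hides.
```

Remove one outlier and the average drops by more than four seconds, which is why a lone ASL value says so little about any individual shot.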

The fetishizing of long takes is part of a larger,
long-running problem in film criticism, which as a whole is arguably less
critical than it pretends to be. As David Bordwell has expressed it:
 

“Instead of asking how films work or
how spectators understand films, many scholars prefer to offer interpretive
commentary on films. Even what’s called film theory is largely a mixture of
received doctrines, highly selective evidence, and more or less free
association. Which is to say that many humanists treat doing film theory as a
sort of abstract version of doing film criticism. They don’t embrace the
practices of rational inquiry, which includes assessing a wide body of
evidence, seeking out counterexamples, and showing how a line of argument is
more adequate than its rivals.” (“Articles“)

Put another way, the fascination with the long take risks
becoming entirely symptomatic, and uncritical. What makes a movie good? Long
takes! How do you know which movies are the best? Why, just check which ones
feature the longest takes! This is a totally dumbed-down type of film
criticism, where all we need do is calculate ASLs in order to rank all the
movies ever made.

I don’t want to imply that long takes aren’t important, or
don’t feature a special relationship with mise-en-scène.
Certainly we should be sensitive to the unique challenges and properties posed
by the long take, and how it presents its content to the viewer. No film better
illustrates this than Aleksandr Sokurov’s feature Russian Ark
(2002), whose 96 minutes of footage consist of a single take. I myself watched
the film twice in a row in the theater, something I’ve rarely done—but Russian Ark is truly an atypical film.

However, is Russian
Ark
somehow more realistic than films that feature editing? Hardly. It’s worth
remembering, a la Henderson, that the 96-minute-long shot, and the film itself,
still has a beginning and an end. When compared with a person’s life—or even a
single day—it is still but a minuscule slice of time, unable to compete with actual
lived experience.

What’s more, we would do well to remember Bordwell’s
analysis of Preminger. The film as a whole, rather than being some transparent
documentation of reality, is entirely contrived. The single take carries us
from room to room, and from scene to scene. We go where Sokurov takes us. And
the man isn’t just wandering the Winter Palace of the Russian State Hermitage
Museum with a camcorder, capturing whatever reality he finds there. Instead,
he’s organized everything that we see. His camera movements and framing glide
along very differently than people do, balanced as they are by a Steadicam. And
they continuously direct our attention, focusing it on particular aspects of
the spectacle. Meanwhile, everything that appears on screen is the product of meticulous
design and rehearsal. And we aren’t even seeing the first take, but the fourth!
(Goodfellas’s Steadicam passage
through the Copacabana is similarly no more real or less artificial than any
other shot in any other film ever.)

Along these lines, associating mise-en-scène exclusively with long takes perpetuates the bias
toward long takes, since they then seem to have a special quality (mise-en-scène!) that’s lacking in
shorter ones. After all, if cutting eliminates mise-en-scène, then aren’t shorter takes inherently worse? But short takes
do have mise-en-scène, and
understanding the connection between the mise-en-scène
and the montage is extremely important. To put it another way, if montage is
the study of the interrelation of shots, and all shots possess a mise-en-scène, then montage is also the
study of the interrelation of mise-en-scènes.
This is a topic just as worthy of serious critical attention as the study of
individual long takes. What is needed, overall, is a critical approach to cinema
that seeks to relate the various parts to the whole (as we find in the works of
critics like Bordwell and Henderson).


4. Mise-en-scène and the Short Take

In order to demonstrate the importance of mise-en-scène in short-take cinema, I’d
like to devote the remainder of this article to analyzing a scene from Scott Pilgrim vs. the World
(2010), looking at how mise-en-scène and
editing work in concert to produce several complicated larger effects. A few
notes first. I chose this scene because the editing in it is very fast. (The
editing in Wright’s films tends to be very fast in general.) Here, we have 24
shots in 57 seconds, yielding an ASL of roughly 2.4. That of course doesn’t tell
us how long each shot is, but it’s worth noting that this ASL is lower than in
most contemporary Hollywood films, which tend to hover in the 3–6 second range.
And yet, despite the brisk pace, a great amount of information is communicated in
this minute of film. Let’s see how that is done.

The scene in question occurs roughly 26 minutes into the
film. Scott Pilgrim has just had his first date with Ramona Flowers. Later that
day, his band (Sex Bob-omb) is due to play in a battle of the bands at a club
called the Rockit.

Ramona arrives, surprising Scott; she then meets some of
Scott’s family and friends. She also meets Scott’s current girlfriend, Knives
Chau, who kisses Scott, causing the young man to stammer and flee. Along the
way, we also get the beginnings of a subplot in which Wallace will seduce Jimmy
away from Stacey. In order to understand how Edgar Wright accomplished all of
this (and more!), we need to examine his sophisticated deployment of mise-en-scène.

For one thing, even though the Rockit isn’t the primary
focus of the scene, the setting is still important. The first two shots (of the
club’s sign and the interior, including the stage) function as establishing
shots, after which we catch glimpses of people milling about, and crew members
preparing for the upcoming battle of the bands. The next ten minutes of the
film will take place at the Rockit, and these establishing and background
elements help set the stage (literally) for the coming action. The setting also
figures into the film’s larger plot: its dive-bar atmosphere (“this place is a
toilet”) helps establish the upward progress that Scott and his band mates are
striving to make, which will be entwined with Scott’s struggle to win Ramona’s
heart. As both Sex Bob-omb and Scott advance, the clubs grow progressively
nicer until they wind up at the final battle, at Gideon Graves’s
state-of-the-art Chaos Theater.

Other background elements are also doing important work.
Edgar Wright sets up a quick joke by using the first few shots not only to
reveal information, but to conceal some as well. Ramona arrives and greets
Scott, and we get some conversation between them done as shot-reverse-shot.
Wright then cuts to reveal that Wallace, Stacey, and Jimmy are also present,
and have been standing there the whole time. The reveal is humorous, and helps
underscore Scott’s obliviousness (he has eyes only for Ramona). (The maneuver
recalls the joke in the opening scene of Shaun
of the Dead
, where Wright gradually adds in characters.)

Another important function of the mise-en-scène of each shot is how it helps focus our
attention—which is in fact vitally important, given how short these shots are.
Lighting and costuming are used to offset the characters from the background,
drawing our attention to their faces. And it’s worth noting here that, even in
short takes, there’s still room for mobile camerawork. (In other words, changes
in composition and changes in shots through editing are hardly opposed to one
another, but can work in concert.) As Stacey introduces Wallace and Jimmy, the
camera whip-pans to show us each character. Wright then builds another joke out
of this, hand-in-hand with the cutting, as Wallace sets his sights on Jimmy.

As the scene progresses, our attention is gradually shifted
away from the background elements of the setting, and more toward the
characters themselves. Again, numerous elements are working together here to
accomplish this (including tighter framing and a shallower depth of field). The
focus grows increasingly shallow throughout the scene, as our perspective
shrinks to that of Scott Pilgrim and his discomfort. The payoff comes in the
final shot of the scene, where Wright opens the space up once again, returning us
to a larger sense of the club. The pounding of Scott’s heart turns out to be a
drum being used in the sound check. Meanwhile, Scott, unable to handle the
conflict at hand (his basic problem as a protagonist), takes advantage of the deeper
focus of the shot to run off into the distance, and out of sight. (We have here
an illustration of how cinematography often anticipates how the actors are
going to move in the course of a shot.)

Yet other elements of the mise-en-scène work to develop the ongoing conflicts and jokes. When
Knives Chau shows up, her performance calls attention to her new hairstyle,
which is part of her character’s arc: her adoration for Scott is causing her to
become an indie rock fan. In a later scene, she’ll dye her hair blue, in
imitation of Ramona—and already the film is drawing comparisons between their
respective looks, and setting that love triangle in motion.

It’s also worth noting that the scene, despite being rapidly
edited, is hardly incoherent, either temporally, spatially, or narratively.
Indeed, a great deal is being communicated here in all three of those aspects
of the film. Several of the jokes depend on a consistent sense of space. And,
narratively, the scene introduces many characters to one another, delivering
some exposition to them and to the audience, as well as establishing two
separate love triangles (Scott / Ramona / Knives and Stacey / Wallace / Jimmy).

And this analysis only scratches the surface—we haven’t
considered much how sound functions in the scene, or color, or any of the CGI
elements. But I think we can see how the scene functions due to its complex
interaction of lighting, costuming, setting, character positioning (blocking),
camera movement—and editing (and camerawork). Rather than opposing one another,
all of the elements of the film—including the mise-en-scène and the editing—are working in concert to convey a
wealth of character and plot detail. Indeed, it’s only because those elements
are so carefully arranged in consideration of one another that Wright can
accomplish so much so economically. That complex interplay is the very heart of
the film’s sophistication, and artistry.

A.D. Jameson is the author
of three books:
  Amazing Adult Fantasy (Mutable Sound, 2011), Giant Slugs (Lawrence and Gibson, 2011), and 99 Things to Do When You Have the Time (Compendium Inc., 2013). Other writing has appeared
at
Big
Other
and HTMLGIANT, as well as in dozens of literary journals. Since August 2011 he’s been a PhD student at the University of Illinois at
Chicago. Follow him on Twitter at
@adjameson.

Speak, BATMAN: Tim Burton’s Version, 25 Years Later

In the summer of 1989, I had just completed my first year at
Columbia University, fresh out of the family car from Dallas, Texas. While some might say the winters
in New York have gotten milder, the summers have not changed: it was miserably
hot. I was living in a dorm room in Wien Hall, a block of Soviet-style student
quarters in a tall red brick tower whose most exotic characteristics were its
co-ed bathrooms and the private sink in each room. My diet was terrible: pancakes, hamburgers, coffee, soda, bagels, beer.
I was not in a good place. The academic year had left me spent. I hadn’t slept much,
with all the work, but my grades had nevertheless been poor. Most of my
acquaintances (I had few friends) had left for the summer. The campus was
thoroughly empty. At sunset, the expansive steps of Low Library, full during
the school year, could boast just a few random, out-of-shape young souls
hunched over unusually large slices of pizza (my other choice for dinner). The
view north on Amsterdam Avenue, which seemed like a glittering slope of traffic
lights and taillights leading down into unknown territory from September to May,
now seemed like a shimmering tunnel into a bottomless oven. Dangerous. Out of bounds.
Chaos. I was touchy, every second: the smallest thing could send me into a funk
for days. Love, or anything remotely like it, was very, very far off. My Friday
nights often began and ended with a trip to the Metropolitan Museum, open until
9. That was my life. The city itself wasn’t much better off than I was. The crime
rate, which had been escalating for the past few decades, was at an unusually
high point. That spring, the Central Park Jogger incident had occurred, with
all that event entailed, damage lasting for many years afterwards. The crack
business was thriving: the corner of 94th and West End was known as
“Crack Central.” The homeless population on the Upper West Side was large and
often aggressive. In this climate, along with a bunch of other seemingly harmless
summer movies, Tim Burton’s Batman
opened in 1989, on June 23.

I wasn’t necessarily initially drawn to see the film. As a
high school student, I had watched mainly foreign films—Bergman, Fellini,
Truffaut—or older classics—The Wild One,
Streetcar, Psycho
. In fact, I’d studiously stayed away from anything
that didn’t have a fair amount of cultural intellectual endorsement. Due to the
nurturing influence of a number of friends in high school, I’d cautiously added
certain American directors, most notably Martin Scorsese (whose frequent
lunches in the Columbia student center were a high point of the academic year)
and Woody Allen (ah, the pleasure of seeing Radio
Days
or Hannah and Her Sisters at
the time of their release!). Something, though, got me to the theater, to see
Burton’s film: perhaps it was my love of Beetlejuice,
perhaps it was the concept of casting someone as schlubby as Michael Keaton as
a superhero; maybe it was the heat. But there I was. And, at the time, I
probably found the film quite entertaining, and funny: Michael Keaton was still
a relatively new talent to me. Jack Nicholson retained some of the mystery he
held for me after having starred in The Shining, Prizzi’s Honor, and Terms of
Endearment,
all within one career. And Kim Basinger, was, for most 19-year-old
heterosexual males, still carrying the line of credit for titillation she’d earned in 9½ Weeks, however witheringly wrong-headed
that film might seem at this point. Watching Batman today is a bit like watching the 1970s Star Wars: the good parts stand out, the bad parts seem
worse. Jack Nicholson’s Joker is a remarkable figure, the work of an actor
pulling out all the stops, enjoying himself, and possibly scaring himself in
the process. Michael Keaton’s self-consciousness is still amusing, his mouthed
“I’m Batman” still an indication that this is, above and beyond its
summer-comic-book-thriller-blockbuster aspirations, a movie about repressions,
and psychological damage. The rest is a bit of a wash: Kim Basinger’s quite
stiff as photographer Vicki Vale, Robert Wuhl is stumble-footed as reporter Alexander
Knox; the other supporting actors deliver their lines with the awkwardness of Law & Order extras. The onrush of
Danny Elfman’s soundtrack sounds dated as well, almost like soundtracks from
the 1960s, from before the first Batman movie.

A couple of things about the film, though, do endure. One
is, of course, its design. Burton’s Gotham/New York, as Anton Furst created it,
is a dangerous, gritty place, and at the time, it matched New York all too
well. Although, as with all of Burton’s films, you can practically see the
brushstrokes in his urban tableaux, you can still sense a seething energy in
the frame, as the old (the dilapidated look of the buildings, the pedestrians
in fedoras) brushes up against the new (the shiny look of the taxicabs). In
1989, Times Square was still a dangerous, seedy, unpredictable place; the risk
of being mugged there, if you were alone, was considerable. I remember being
palpably nervous when going there in broad daylight to get a fake ID (so I
could see a show at the long-departed King Tut’s Wah-Wah Hut), so nervous, in
fact, that I gave my dormitory address as my home address for my “Official
Identification Card.” Avenue A, bordering Tompkins Square, was not for lone
travelers after dark, and really not much fun during the daytime either.
Williamsburg was barely a place, it was so dangerous. When I looked at the blue-black
hues of Burton’s Gotham, I saw a reflection of the city I both worshipped and,
from a Texan’s perspective, feared.
 

In addition, its Black-White-and-Gray Morality Play endures. I
identified with this aspect of it partially because of my own mental state at
the time. I was blasted out from a year’s worth of reading everything from
classics to Lolita to Mayakovsky to Marquez to Hobbes to Hume, lonely, freaked
out, psychologically tired from combating the regular pressure New York puts on a novice. The world began to seem like one of extremes to me:
either a day was good, or it was terrible. Either I was sated, or I was
starving. Either I was wide awake, or I was collapsing. Similarly, the movie’s
polarities are dramatic: Rich vs. Poor. Innocent vs. Corrupt. Happy vs. Unhappy.
Past vs. Present. (In other words, it’s a movie based on a comic book.) The
movie isn’t necessarily simple-minded—these qualities dance around each other,
and occasionally disguise themselves, in the film, but the manipulation we
witness is writ large. There’s nothing complex about the way the complexity is expressed.
Bruce Wayne is Batman, but he is tormented about it—and then, on the other
hand, he isn’t. All of these sides of his character are openly stated.
Similarly, the Joker’s complicated stance—a crook out-crooked by his more crooked
boss, with a tremendous sense of humor (remember his sparing of the Francis
Bacon grotesques in the museum? Or “I’m no Picasso”? Or “This town needs an
enema”?)—makes him both malevolent and sympathetic, as with all the great
villains of literature and film. His complexities, as with Batman’s, were
broadcast on such a large scale that you would have had to have been asleep or deeply stupid not to have noticed them. So, my younger self, nursing the
dogmatically snotty should-I-be-here feeling only a 19-year-old can pull off,
sat in the theater, surprised at the degree to which I could relate to the film, and to its warped figures.

Things would improve: for Batman retellings, for Gotham, and
for me. It would be hard to deny, in all honesty, that Christopher Nolan’s
Batman films, based as they are on a more nuanced telling of the superhero’s
story, are more subtle, more multi-layered, more deftly filmed, more atmospheric,
and possibly more profound than Burton’s version, or any of its sad successors;
Batman Returns could boast the gifts
of Michelle Pfeiffer and Danny DeVito, but the series did not progress well
after that point (enough said). New York City looked up after 1989 as well; while David
Dinkins’ mayoralty of New York was problematic on many levels, the crime rate
was reduced, and with each successive leader, the metropolis has continued to change. Today, Times Square is a clean, well-maintained tourist
depository; Avenue A is prime real estate territory and a dining destination;
and many parts of Williamsburg resemble a suburb populated by Ivy-League
educated hipsters who like drinking beer out of the can. And me? Well, my days
became more well-rounded, the summers shorter; my sociability intensified; my
mind grew; my urban environment became, rather than a vast zoo in which I was
wandering without defenses, a complex place with which I would develop a relationship,
much like an interpersonal relationship—and a place in which I would build a
life. Nevertheless, I remember Burton’s film as a document of the summer of
1989, of a particularly odd patch in my own life, and as a film with a
tremendous amount of, for lack of a better word, soul, with all of that word’s
glories and imperfections.

Max Winter is the Editor of Press Play.

How BORGMAN Makes an Ideal Storytelling Lesson


The following contains spoilers, of a sort.

If you were a novelist, or a filmmaker, or a playwright, or
even a scholar, or a critic, and you wanted a primer on how a story might be put
together, you would need to look no further than Alex van Warmerdam’s Borgman. No, it wouldn’t teach you that all stories should contain
humanoids who may or may not come from another planet, or that all stories
should feature disembowelment or, beyond that, social critique. It could,
however, teach you how to start a story, how to make it interesting, and how to
end it. The film, which tells the story of a man’s intrusion and destruction of
a harmonious home, has many wonderful attributes: a strong, mercurial
performance by Jan Bivjoet as the title character, coupled with an empathetic performance
by Hadewych Minis as his onetime love interest, now a comfortably married suburban wife and
mother; a gorgeous sense of stillness in its tableaux, often shot from a
distance, so we look both harder and less attentively at the on-screen events;
a remarkable sense of pacing; and above all, an enticing, pervasive spirit of surrealism. But its structure is its most enduring element.

The film begins, persuasively, with a mystifying sequence, intended solely to raise questions in the viewer’s mind. Several men march, in a dogged
group, into a deep forest with weapons; there is a priest among them, and even
the priest is armed. At a certain spot, they stop, and they begin driving poles
into the ground. It so happens that they are poking into the underground home
of Camiel Borgman, the film’s questionable protagonist. He vacates his well-appointed
grotto immediately at the first sign of siege. He escapes quickly and finds his
comrades, all of whom are sleeping in what seem to be pods just underneath the
leaf-covered ground. Who are these men? Why do they live this way? Why are the
other men seeking them out? Have they done something wrong? Are they dangerous?
The film raises these questions and then, smartly, changes course.

A sudden shift in perspective is a common technique in
storytelling; films as varied as The
Crying Game,
Down By Law, or Mulholland Dr. all make sudden leaps,
the immediate effect being disorientation, the culminating effect being a sense
that a larger world tableau has been examined than might have been expected at
the film’s outset. Here, the shift is to a traditional stage for plot-making:
the happy home. That home, in this case, is a vast house in the suburbs, owned
by Richard, an entertainment executive, and his wife Marina, an artist. They have three children
and a live-in nanny. Into their lives comes shaggy Camiel, claiming both that he
hasn’t bathed in days (probably true, from his bedraggled appearance) and that
he knows Marina, that in fact she was his nurse at one point. Camiel is soon
stealthily ensconced in the family’s life, without Richard’s
knowledge—which means their happiness is about to be disrupted. From the time
of Paradise Lost forward,
storytelling demands that if a situation does not have any readily apparent
problems, its surface must be disrupted. Otherwise, there is no story. In this
case, the disruption is immediate. When Richard first meets Camiel, he beats him
for making inappropriate comments about his wife—and then he and Marina
fight. Marina sneakily finds Camiel a spot in a guest house on their rambling property. Not long after
that, we see Camiel, naked, crouched on top of Marina’s naked, sleeping body, disturbing
her dreams. And not long after that, Camiel kills the family gardener: one
disruption on top of another. Here, van Warmerdam deploys yet another standard
storytelling technique: the use of a crystal-clear, memorable image which
encapsulates all that happens within a story. When Camiel, along with two
be-suited associates (we never find out how these malevolent figures are connected),
kills the gardener, he also kills the gardener’s wife. The way in which he
disposes of their bodies tells us quite a bit about what the film is trying to
do, and elegantly: the victims’ heads are buried in buckets of concrete and they
are tossed to the bottom of a lake. We watch the bodies descend, slowly,
gradually coming to rest with their legs pointing directly upwards and their heads
pointing directly downwards: they are turned upside down, just as the lives of
those above them are, increasingly, turned upside down and cast into disorder.

Here the narrative once again doubles upon itself, as if to
demonstrate to a viewer the extent to which stories must go to reach their
desired destinations—and also to show that within one narrative, constant
revolution may sometimes be necessary to keep it alive. In this story, Camiel
bathes, gets himself a haircut, and shows up once again at the family’s door,
this time in an interview for the gardener’s replacement. Richard, not
realizing that he is talking to the vagrant he pummeled earlier, likes him and gives
him quarters in the house itself. Shortly after his arrival, Camiel marshals a
tractor to tear up the garden, ostensibly in an effort to improve its
appearance—but, of course, also tearing up all the family has cultivated, all
of its peace, here embodied in the carefully landscaped trees. We learn,
gradually, that Camiel and Marina do indeed have a history; she reaches out to
him, and he does not reach back until a stage, of a sort, has been set. The
setting of that stage is gruesome, involving murder, drugging the children,
and, once again, Camiel’s invasion of the family’s dreams. From this point
forward, everything that van Warmerdam has put in place moves quite smoothly towards a neat (and perhaps all-too-neat) conclusion.

For all of its structure, the actual conflict in the story
arises primarily from the alien quality of Camiel’s presence. The question of
what his purpose is in the film rises with beautiful restraint, until finally
he achieves his objective—at which point the film ends. Borgman has been chided
for not providing enough answers to the questions it raises—which may be fair,
given that if those answers were provided, the emotional weight of the film might
increase. For a film of this type, though, the questions are more significant
than any answer the director might provide. Indeed, the absence of such answers
helps to accentuate the film’s ultimate accomplishment, which is to show the
shapes madness and anarchy may take when properly contained—and how every story
is, in a sense, like the house described here, a box with four walls and a
roof, in which nightmares and daily realities compete for our attention.

Max Winter is the Editor of Press Play.

VIDEO ESSAY: White Knights and Bad People


[The text of the video essay follows.]

When I watched Back to
the Future
with my parents as a child, I remember my shock at seeing Marty
McFly’s mom sexually assaulted by the high school bully, Biff, in the backseat
of a car. The assault was confusing. I remember my first viewing of this
relatively tame movie as a garble of images–the backseat, the fluffy curls of
the pink prom dress, the feet poking out, the muffled screams.

Of course, this entire scene is about Marty’s dad having the
guts to punch the rapist in the face, to tell him to “leave her alone.” By the
end Marty’s mother is all smiles, relief, and pride in having chosen a man who
would defend and respect her.

My exposure to cartoon gender relations was similarly
violent. The female cartoon characters in shows like Tiny Toon Adventures and Animaniacs
liked to don skimpy outfits. The male characters’ eyes would pop out of their
skulls, tongues hanging out lecherously. Of course, these shows played on old
cartoon favorites. Betty Boop often had to avoid unwanted male attention, poor
Olive Oyl was constantly placed in supposedly comic situations where she was
being either kidnapped or harassed, and in Tex Avery’s Red Hot Riding Hood,
“Red” is a full-grown woman who must be careful of the predatory wolf who
stalks her nightclub. 

When I was a child, the images of a female cartoon character
being catcalled, or a woman being assaulted, did not seem especially unusual. I
assumed that most adult women met the task of warding off male attention with a
mixture of pride and mild annoyance. As I got older, I became more and more
concerned about this phenomenon. When even strong, powerful women are victimized
in films and television, it falls to a dashing hero to save the day.

Today, in the age of Steubenville, we still worry about the
ways boys and men prey on girls and women. Social organizations often still
rely on the white knight trope when they address this matter. Actors and
musicians who regularly objectify women on screen and in music videos are shown
looking sad as they pose with Real Men Don’t Buy Girls hashtag signs. In the
White House PSA on sexual assault, Daniel Craig and Benicio Del Toro are among the male
participants calling for heroic behavior.

Stepping in when someone is in trouble is certainly
honorable, but the moral lesson in these PSAs provides men with the same
options they had in Back to the Future.
Are you a Marty, or a Biff? Will you defend womanhood, or assault it?

The threat of rape is often used as a device for male
characters to become heroes, which contributes to the idea that sexual assault
is a normal part of growing up female. Rape is still seen as unchecked lust
rather than an expression of violence. 
This myth has far reaching repercussions, as girls and women live in the
very real shadow of sexual assault constantly. We get inured to sexual violence
on shows like Game of Thrones, where
rape is often presented in the background of a scene, something bad, brutal men
do to helpless women.

It’s exhausting as a woman to constantly see the female body
on the brink of violation. I’m tired of the voicelessness of those bodies, and of
the fact that we still need to spread awareness about how horrible sexual
assault actually is. I know I’m supposed to be grateful when people express
that they are aware, when men seem poised to protect me when I go out, when
someone develops an app designed to help get me home safely by checking in with my
family and friends.

The way rape is portrayed today is not so different from how
it was portrayed in 80s exploitation films, where rape is intended to shock and
titillate in one fell swoop, as it often does in the current series Game of Thrones. A film like Extremities, for example, promises the
sweetest of revenges for a female protagonist, but it is the image of Farrah Fawcett
cowering and sobbing, forced to take off her clothes, while her rapist looks on
and calls her beautiful that has become the ubiquitous Hollywood rape scene,
where a gorgeous woman is exposed and shamed and, despite the fact that we are
told to root for her, we are also given permission to ogle her, to see her
through the rapist’s lens, before we see her own experience.

One of the reasons that Joan’s rape scene on Mad Men is so effective is that it
portrays her quiet terror without fetishizing her body or her fear. We don’t
see her ample curves illuminated, the way they normally are. Joan’s sexuality
is a point of pride throughout the series, and the camera makes it clear that
what we are witnessing is a power play and violation. There’s nothing sexual
about it. The camera ends not on a close-up of her body, but on a close-up of her
staring at a point just ahead of her in an office that isn’t hers, as she waits
for what is happening to stop.

Arielle Bernstein is
a writer living in Washington, DC. She teaches writing at American
University and also freelances. Her work has been published in
The
Millions, The Rumpus, St. Petersburg Review and The Ilanot Review. She
has been listed four times as a finalist in
Glimmer Train short story
contests
. She is currently writing her first book.


Serena Bramble is a film editor whose
montage skills are an end result of accumulated years of movie-watching
and loving. Serena is a graduate from the Teledramatic Arts and
Technology department at Cal State Monterey Bay. In addition to editing,
she also writes on her blog Brief Encounters of the Cinematic Kind.

OUR SCARY SUMMER: PROPHECY and the Toxic Environments of 1979


During the first week of December 1978, the covers of Time and Newsweek featured horrific images that would haunt me over the ensuing
year.  Both magazines bore the headline
“Cult of Death” superimposed over masses of dead, decaying bodies, victims of
the catastrophe at Jonestown, Guyana. 
Under the direction of their leader, Jim Jones, 909 members of the cult
organization calling itself the Peoples Temple Agricultural Project were
persuaded to commit “revolutionary suicide” by ritually drinking a sweetly-flavored
poisonous mixture in a senseless act of coerced self-destruction that spawned
the phrase “drinking the Kool-Aid.”  At
the age of thirteen, I couldn’t really fathom this event, nor did I know where
Guyana was, but I was deeply impacted by those horrific magazine covers.  I would only begin to make some kind of sense
of these events by watching the horror movies that were released during what Newsweek would call “Hollywood’s Scary
Summer.”

By that summer I had already become a fairly seasoned
watcher of horror films.  More than mere
thrills and escapism, however, horror movies had come to serve as a reflection
of the toxic environments around me.  My
understanding of the world was shaped by violent and disturbing images, not
only in theaters but also on television and in magazines, thanks to the news
media’s increasing tendency to capitalize on the graphic shock value of current
events.  I suppose that after televising
Vietnam, nothing was taboo, and there was certainly a political significance to
a nation’s being asked by its reporters and photographers to bear witness to
what its military was being ordered to do overseas.  But there’s no denying that
images of massed dead bodies displayed on the family coffee table could have a
dramatic, even traumatic, effect, especially on children and adolescents.  

The poster advertising John Frankenheimer’s Prophecy featured a grotesque image of a
monstrous fetal creature wrapped in its placenta, an image I responded to like
all such images in the media environment of the 70s: with equal
parts fascination and horror.  After seeing
the film, however, I discovered that horror could help me to make social and
political, as well as emotional and imaginative, sense of the era’s disturbing
events.  Several months earlier, on March
28, 1979, the Three Mile Island nuclear plant in Pennsylvania experienced the
worst meltdown in the history of the U.S. nuclear power industry, releasing
radioactive material into the environment and alerting America to the
catastrophic risks courted by the industry. 
Less than two weeks before Prophecy
hit theaters, on June 3 the exploratory oil well Ixtoc 1 blew and began
spilling vast quantities of oil into the Gulf of Mexico, in a summer-long
disaster that would later be disturbingly reenacted in 2010, stage-directed by
British Petroleum.

With Prophecy,
Frankenheimer wanted to create an environmentally-conscious horror film that
would raise the ethical stakes of popcorn fare. 
While it can hardly be said that Frankenheimer succeeded in this goal—the director
has blamed his own alcoholism at the time, as well as production issues, for the
film’s relative failure—the movie did succeed in presenting images and settings that
managed to distill, at least for one young filmgoer, the toxic environments of
the 1970s. 

Dr. Robert Verne (Robert Foxworth) and his wife Maggie
(Talia Shire) leave their urban life for the Maine woods, in order to carry out
an investigation for the Environmental Protection Agency.  A native tribe has accused a local logging
mill of dumping pollutants into the Androscoggin River, pollutants that are
poisoning their land and people.  Verne
and Maggie find themselves trapped between the interests of Native Americans
and those of loggers, but gradually become advocates for the local tribe and
its environment once they begin to see the monstrous mutations spawned by the
mercury that the mill has been releasing into the environment.  Trout the size of sharks, giant demented
raccoons, and tadpoles the size of overweight bullfrogs are just a few of the
initial signs that something weird is going on. 
Like the three-eyed fish that jumps from the river beneath Burns’
nuclear plant in The Simpsons
opening, these creatures are more a part of radiation lore than of mercury
poisoning. 

And it’s precisely the indeterminate, hybrid nature of the
creatures that stalk, wiggle, and hop through horror movies that allows them
such a broad range of reference.  Those
who don’t get horror movies would cite the implausibility of such monsters as
undermining the film’s environmental message. 
But the indeterminacy of this kind of horror imagery
actually multiplies meanings rather than negating them.  It matters that the creature of Mary
Shelley’s Frankenstein is created
from the dead as well as the living, from humans as well as animals.  In his hybridity, the creature embodies a
plethora of anxieties induced by the rise of scientific culture in the early
nineteenth century, when the novel was first published, including concerns over
the use of human corpses in anatomical research, the use of live animals in
laboratory experiments, and the use of animal-incubated antitoxins in
vaccines.  While it would be rather a
stretch to compare Mary Shelley’s classic novel with John Frankenheimer’s
not-so-classic film, the latter does partake of this rich tradition of the
monstrous that is horror’s enduring legacy.

As a thirteen-year-old, I was captivated by these creatures,
and horrified by the most dramatic of the film’s monsters, a giant mutant she-bear
that the natives regard as an avatar of their totemic nature spirit,
Katahdin.  But I was even more affected
by Katahdin’s grotesque cub, which Maggie tries to rescue, in a harrowing
sequence that remains one of the film’s most effective moments.  Early in the film we learn that Maggie is
pregnant, a fact that she keeps secret from her husband until she discovers
that the fish they’ve been eating from the river have been poisoned by the same
substances that have produced Katahdin and its brood.  She knows her own child will suffer the same fate,
and consequently regards the mutant cub they find with a displaced motherly
affection.  As she swims through the
river, carrying it in one of her arms, the cub grows terrified as it hears its
biological mother howling in the narrowing distance of pursuit, and begins
biting and tearing at Maggie’s throat. 
She drowns the cub, as she will presumably abort the mutant fetus growing
inside her.

This scene stayed with me for several months after seeing
the film, for reasons I couldn’t quite place, until one weekend, bored during a
visit to my grandmother’s, I pulled out a collection of award-winning
photographs from Life magazine that
I had often perused before.  There I
came upon an image that had long horrified and moved me, and the connection
between it and the scene from Frankenheimer’s movie instantly clicked.  The image was taken by W. Eugene Smith as
part of an exposé on the mercury poisoning caused by the Chisso Corporation in
Minamata, Japan.  It showed a mother
tenderly bathing her adult child; the young man’s limbs are bent and twisted at
unnatural angles and his face is distorted in an agonized grimace, the result
of mercury exposure.  In this grotesque
pietà, the mother supports him gently in the tub, and gazes upon him with a
look of steadfast love.  The image was
taken in December, 1971, a fitting emblem for the decade to follow.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

VIDEO ESSAY DIPTYCH: Good Dads/Bad Dads: A Tribute to Cinematic Fathers


I can’t remember the first film I watched with my dad Jim.  However, I do remember what I affectionately
call my “Martin Scorsese summer.”  I
spent three weeks in the hospital following an appendix operation and decided
to tackle the American Film Institute’s 100
Years…100 Movies
from my sickbed. 
My dad was a major presence during this event, only leaving my side to
go to rent the videos from the list.  I
can still remember him personally recommending Fargo (1996).  My eventual
career as a Cinema Studies Professor can be traced back to that hospital bed
and my dad’s trips to Blockbuster Video.

Another course on the informal side of my film education came
from my eventual father-in-law Larry.  At
first, Larry resented me for dating his daughter Nicole (not for any specific
reason, simply because of that natural protective instinct a father feels for
his daughter).  In order to soothe his
unhappiness, I asked Nicole what his hobbies were.  She started to list them off (“Hunting,
fishing…”), and I began to feel my stomach drop.  She added, “But he likes Westerns.”  I had never been a huge fan of the genre, but
I would become one thanks to Larry.  We
finally bonded over our admiration for John Ford’s collaborations with John
Wayne. 

Despite these anecdotes, my two fathers are not cinephiles.  Larry’s tastes begin with The Searchers (1956) and end with Lonesome Dove (1989).  When I watched 2001: A Space Odyssey (1968) for the first time, Jim was quick to
note his distaste.  “If you ever have
difficulty sleeping, turn that movie on. 
You’ll never make it to the part that takes place in space,” he said.  My dad used to like Quentin Tarantino movies,
but I don’t think he has the patience for them anymore. 

One of the last films we watched together is one of his
favorites: Robert Benton’s Nobody’s Fool (1994).  The film stars Paul Newman as a crotchety,
failed father who attempts to redeem himself in the eyes of his son (Dylan Walsh)
and the town he lives in.  I think the
film resonates with me because it reminds me of my two fathers.  Thankfully, neither of my fathers needed to
follow Newman’s trajectory towards absolution. 
We shared many of the experiences outlined in Benjamin Sampson’s video essay
on good dads: the life lessons, the cultural education, the enrichment of an
accomplishment brought by their pride. 

Ironically, if there is a larger lesson to be taken from the diptych
Ben and I have made, it’s that bad dads are far more memorable than good dads.  Many of the most beloved films of cinema
history appear in my contribution:  Citizen Kane (1941), The Godfather (1972), Chinatown (1974), Star Wars: The Empire Strikes Back (1980), and Freddy Got Fingered (2001) to name but a few.  Bad dads from Darth Vader and Michael
Corleone to Aguirre and Jack Torrance emanate a magnetic, horrifying presence
that provides filmmakers with the manifestation of a potent conflict whose
universality stems from its intimate proximity to the homestead.  The
Shining
(1980) continues to terrify not because an anonymous murderer is
wielding an axe in a haunted hotel, but because a father is turning on his
son.  The pessimistic ending to Chinatown hits the viewer like a punch
in the gut because Noah Cross’s bad deeds perpetuate themselves without end or
punishment (a related point:  most of
cinema’s bad dads gain their status because they are aggressive towards their
children, be it in the form of physical and/or sexual violence, and not because
they are neglectful).  Essentially, Sophocles’s
tragedies remain as emotionally potent as they were 2,000 years ago, when they
were first performed.–Drew Morton

Drew Morton is an Assistant Professor of Mass Communication at
Texas A&M University-Texarkana.  His
criticism, articles, and video essays have previously appeared in the
Milwaukee Journal Sentinel, Senses of Cinema, animation: an interdisciplinary journal, Press Play, and RogerEbert.com.  He is the co-founder and co-editor of [in]Transition, the first peer-reviewed
academic journal of videographic film and moving image studies. 

Benjamin Sampson is a Ph.D. candidate in Cinema and Media Studies
at the University of California, Los Angeles. 
His video essays on Steven Spielberg’s
A.I. Artificial Intelligence (2001) and Orson Welles’s F for Fake (1973) have appeared in Press Play and [in]Transition.  He is
currently researching the intersection between Hollywood and religious
institutions.
 

OUR SCARY SUMMER: DAWN OF THE DEAD and the New American Malaise


As tag lines go, George Romero’s seminal zombie epic sports
a pretty good one: “When there’s no more room in Hell, the dead will walk the
earth.”  As a thirteen-year-old, I had
repeatedly stared at the lurid poster bearing these ominous words in the front
windows of the Maplewood Mall multiplex in the weeks before the film was
released in the summer of 1979.  But like
most tag-lines, these were grossly misrepresentative of the film they
advertised.  The notion of an overfull
Hell spewing forth its denizens is too mythic, too Dantesque, by comparison with
the abjectly modern and mundane world the film depicts.  A more fitting tag-line might have been taken
from a speech given by President Jimmy Carter later that same summer: “Often
you see paralysis and stagnation and drift. 
What can we do?”

Addressing what he described as a “crisis of confidence” in
America, Carter’s July 15, 1979 address has been called “the malaise speech”
for its focus on the country’s financial woes and lack of direction.  Though neither provides answers to the
dilemmas America experienced at the end of the 1970s, both Carter’s speech and
Romero’s film offer disturbing visions of a world succumbing to “paralysis and
stagnation and drift,” visions that clarified and vitally shaped my own
perception of the world, then and now.

Now that we are inundated with zombies in the movies and on television, it’s hard to remember
how off-the-wall Romero’s film seemed at the time.  There was something both funny and disturbing
about seeing monsters that looked more or less like ordinary people, though well
past their “sell by” date.  Massed
together in vast hordes, these creatures, stupid and slow-moving on their own,
collectively assumed the contours of a nightmare, one that hadn’t been realized on such an
expansive cinematic canvas before. 

Yet despite all its originality and strangeness, Dawn of the Dead made sense to me,
largely due to the fact that much of its action takes place in an enclosed
shopping mall.  As a Minnesotan, I grew
up in the land of malls.  The Mall of
America may be the most massive example of my home-state’s mall obsession, but
Southdale was the first mall of America, opened in 1956.  Many others followed, including the Maplewood
Mall, where my family and friends experienced their own version of the uniquely
American malaise evoked by Carter, and where I later saw Romero’s film.

“Human identity is no longer defined by what one does, but
by what one owns,” said Carter. “But we’ve discovered that owning things and consuming things
does not satisfy our longing for meaning. We’ve learned that piling up material
goods cannot fill the emptiness of lives which have no confidence or purpose.”  For all his failings, it’s hard to imagine a
President since Carter having the guts to offer such an honest criticism of our
country, verging on sacrilege against major tenets of the American commercial
gospel.  This description of vacuous
consumption is an apt description of zombie appetites—joyless and never
satisfied—as well as of the situation in which the four human protagonists find
themselves in Romero’s film. 

Holed up in the Monroeville Mall in Pennsylvania, an odd
collection of refugees from the zombie apocalypse gradually forms a community
based on escapism and greed.  Sadly, escapism
and greed are also at the core of the uninspired “solution” offered by Carter
to our national dilemma.  Reducing the “growing
doubt about the meaning of our own lives and in the loss of a unity of purpose
for our nation” to our dependence on foreign oil, Carter advocated using more
coal, building a pipeline, and conserving what we’ve got until something better
comes along.  At least Romero had the
insight to foresee what the end result of this shortsighted thinking would be,
as the horizon of possibilities gradually closes in on the film’s protagonists.

It’s easy to forget about the larger world when you’re in a
mall, which offers a virtual environment catering to seemingly every consumer demographic.  The Maplewood Mall had two bookstores, three
record stores, two hobby and gaming stores, eight cinema screens, and two video
arcades: these so fitted my limited needs and consumer choices as a
thirteen-year-old that I could hardly imagine what more the world could offer me.  It took me some time to realize that my
consumer desires were not being catered to so much as created by the Mall
itself.  As with the protagonists of Dawn of the Dead, what began for me and
my family as an escape turned into a lifestyle. 
The walkways were lined with trees and shrubs to create an illusory
natural environment, and the utopian vistas of its vast central court, crossed by gently
rising and falling escalators, resembled the sets of seventies sci-fi films like Logan’s Run, Futureworld and Rollerball.  In my eagerness to live in a virtual reality,
whether through video-games, films, or malls, I somehow missed
the point that these visions were meant to be dystopian. 

Watching the film now gives me a strange frisson.
Those muted earth tones, those defunct store-fronts with their period fonts,
those broad lapels and flared pants worn by the mannequins: they resemble the
lost iconography and ambient set-pieces of my youth, brought uncannily to
life.  The film’s soundtrack consists
largely of commercial background music of the period, what came to be called
“library music”—LPs that could serve as a ready source of musical interludes
to be played in the background of low-budget films, commercials, or educational
videos.  The genre has become a popular
one for collectors, largely because these virtually anonymous musical pieces
provided the sonic backdrop of our collective past.  An unofficial soundtrack release collects
many of these from Romero’s film, and for anyone who grew up in the 70s,
listening to it is the aural equivalent of watching a super-8 movie of an
average, anonymous day out of the past.

Dawn of the Dead is
less a horror film to me than it is a distorted snapshot of my youth, one into
which I still sometimes escape.  As the characters
frolic through the Monroeville mall, indulging their consumer whims while
zombies menace them from behind glass doors, I find the premise disturbingly
seductive even as I recognize its abject futility.  It’s a fantasy I could never really experience,
since even if there were some version of a zombie apocalypse, I wouldn’t want to
be holed up in some mall of the twenty-first century: I only want to be alive
in the mall of the 1970s.  The final
irony is that my own response to the New American Malaise has been to retreat
into nostalgia, but what I discover in watching films from the 1970s is an
America hardly dissimilar from the one from which I’d hoped to escape.

Jed Mayer is an Associate Professor of English at the State University of New York, New Paltz.

I Spit on Your Fairy Wings, and Your Little Dog, Too!: On MALEFICENT and Other Films



“The woman has
power if she’s a villain.” That’s what my college art professor told me once,
when we were discussing the Disney films we’d grown up with. If you were a
girl, or female-identifying, you were Team Ursula. Team Wicked Queen. Team
Maleficent. These villains resonate with girls like us, who’d grown up knowing
that they’d never be Prince Charming’s type; that all of creation, from the
beasts in the forest to the flowers in the field, would never sing of our
sweetness; that our parents would never be royalty.

The villainesses
offered a new paradigm: If you can’t be beloved, be angry. Reduce the king of
the seas to simpering plankton, poison an apple, will your body to turn into a
dragon’s. But closer reflection, as well as exposure to feminist texts and more
adult film fare, reveals that what may seem like delicious wickedness is, in
fact, not real power: It’s just bullying. These women get back at the kings
(and the kingdoms) that have cast them out and insulted them by attacking
innocent princesses, young girls who haven’t done them wrong. This isn’t real
vengeance; it’s just women sinking their talons into other women. And why?  Just because.

Maleficent,
which reimagines the Sleeping Beauty story from the vantage point of the woman
who cast the curse, embraces the beating black heart of the villain’s
appeal—only to sink its fangs into it. The movie is a Disneyfied exploitation
flick: Maleficent’s curse is her roaring rampage against Stefan, the man who,
once upon a time, promised her true love’s kiss before drugging her and
stripping off her wings, so he could appease a dying king, and be named his
successor. Maleficent roars, and she rampages, but she doesn’t get bloody
satisfaction until she comes to the unsettling truth that she’s deployed her
power against Stefan’s daughter, the innocent Aurora, instead of directly
attacking the man who actually wronged her (and the patriarchal will-to-power
that he represents). Maleficent (and the movie that bears her name) turns the
righteous wrath of the woman wronged from a knife’s edge to a tightrope: She
tiptoes along that fine line between claiming justice and identifying
with her aggressors.


Take Ursula the
Sea Witch, who may rival Maleficent as the most beloved baddie in the magic
kingdom. Ursula once lived in the pearlescent splendor of The Little Mermaid’s aquatic kingdom, only to be cast out (for
reasons unknown) by King Triton; the circuitous route of her revenge—getting
him to sign his soul to her to save his daughter—is designed as a pile-driving
pile-on of pain for the king. And yet to do this, she literally steals the
voice of another woman. Ariel’s only “crime” is being the wasp-waisted
embodiment of everything that Ursula is not, and Ursula’s grand revenge becomes
an attack on the pretty girl—which, given the dark potency of her spells, is a
waste; it reinforces, instead of breaking open, that tired binary of the
lovely, much-loved “homecoming queen” vs. the ugly outcast whose countenance
matches her soul.

We can shrug this
off as a fairy-tale, a genre where only the purest of the pure-hearted and the
blackest of the black-hearted get starring roles. However, it’s still deeply
problematic to see a powerful woman literally tower over our innocent
heroine—especially when so many women, particularly younger women, believe that
there is no place for them within feminism because they “like men” or wear
make-up or want to be a stay-at-home mom. They believe that feminism isn’t a
movement for equality, it’s a matter of us vs. them—and never, sadly, a matter
of us vs. the real enemy, the Stefans of the world, people who value having
power over respecting the dignity and autonomy of women everywhere.

The in-the-flesh
incarnation of Maleficent is able to get the revenge that eludes her cartoon
counterpart because she realizes that the casting of the curse makes her no
better than her former love. Stefan is the flattest character in the film, a
man defined only by what he wants the most: to be king. His bristling ambition
parallels her blazing rage: It allows him to steal the parts of her that
brought her to the heavens, just so he can wear the crown. It allows her to
condemn a laughing baby to a living death, just so she can hear the king beg.
But she doesn’t truly get the better of him, or at least bring about his richly
deserved end, until she’s reconciled with Aurora.

Aurora liberates
Maleficent’s wings from the glass case where Stefan has entombed them, and
Maleficent drags him out of his castle, lets him fall; in his last moments, he
watches her hover above him as the air rushes around his body, and he knows
what it means to desperately long for wings. Stefan’s death is more than just the extinguishing of an enemy; it’s
the end of an era. The film ends with Maleficent crowning Aurora as a queen
without a king: the arbiter of a new age of matriarchy.

Maleficent now
exists within the archetype of the woman warrior, the righter of wrongs, and
the avenger. This archetype wields her wand and sword, her pistol and Tiger
Crane Kung-Fu, and, above all, her wits, directly against her enemies. She is
Coffy, hiding razor blades in her hair; she is Beatrix Kiddo, crossing names
off her “Death List Five”; she is Arya Stark, whispering her own kill list as a
nightly prayer; she is Carrie, unleashing telekinetic Hell against the high
school sadists and the fundamentalist mother who’ve tormented her; she is
Mystique, the mutant revolutionary out to assassinate the political operatives
who oppress her kind. She is Katniss Everdeen, who must “remember who the real
enemy is” if she’s to escape the ceaseless spiral of violence and use her power
for a purpose. And she is Maleficent, who must learn that cruelty is simply
scratching an itch, not treating the wound that burns clear to the bone. Every time the woman warrior flexes her
might, she’s defining who she is and who she wants to be: the
victim-turned-avenger, asserting her worth against those who tried to break
her—or the villain, just another abuser who thinks that making someone, anyone, pay is the same as actual gain.
We see this
dilemma played out directly with two of the younger, though still ethically
complex, examples of the woman warrior archetype: Katniss Everdeen and Arya
Stark. In Catching Fire, Katniss, who’s been stop-lossed back into
the arena, has a choice: shoot an arrow straight into another tribute’s
heart, or take out the heart of the arena itself—and the Capitol that
created it—by aiming her bow at its force-field. She spares the tribute and
sends her arrow whistling toward her oppressors. Arya Stark won’t use her
quickness and cunning to help The Hound steal from a peasant farmer, but she
will spear her sword through the throat of the brigand who’d stolen it from her
years before and used it to murder one of the boys she’d been traveling with.
Arya stares down at the man, who gurgles blood and rasps for air, with an
impervious haughtiness. She parrots back the taunts he’d made as he’d stabbed
her friend; his words are the hammer-strikes sealing his coffin closed: He
brought this on himself the second he raised his blade against Arya and the
people she loves. This is even Steven. This would be about square.

The woman warrior
must choose what—and most significantly, who—merits her lethal gaze, and that
choice reveals everything about her values. Will her capacity for violence imitate
an arrow’s arc, striking with purpose and direction? Or is her rage an engine
revving in a parked car, ceaseless churning and pointless noise? Toward the end
of Maleficent, a now-grown Aurora remarks, “my kingdom wasn’t united by
a hero or a villain, but by one who was both.” Maleficent’s evolution shows how
easy it is to conflate the ability to bring devastation with a snap of her
fingers with actual power: the kind that lets her stand up for herself and everything she cares
about, that does more than just charge up the same dull machinery of abuse and
degradation. Maleficent must show this
evolution within the confines of a PG rating; however, films like the Kill Bill saga can sift through all the
grit and the spatter for a more nuanced understanding of vengeance, violence,
and the relationships between women who’ve gotten used to the feeling of blood
under their nails.

Despite the Kill Bill movies’ joint titles, our
yellow-haired warrior takes the lion’s share of the narrative as she cuts down
her former teammates on the Deadly Viper Assassination Squad, the women who
battered her swollen, pregnant body almost to the point of death after she
tries to leave the group, to become more than Bill’s woman, a woman who kills
for Bill. Beatrix’s impromptu retirement doesn’t actually hurt any of the DIVAS
(indeed, it allows Elle Driver to slip into her much-coveted role of Bill’s
best girl); they attack her at the behest of Bill. They’re a kung-fu coven of Ursulas:
lashing out because of, or in reaction to, some man.

But no matter how
savagely Beatrix and her former comrades battle, there is always a moment—“Just
between us girls …” or “Silly Rabbit, Trix are for kids”—that recalls the
intimacy they once had. Beatrix was one of them, and her arc toward autonomy is
a transition from deadly viper to righteous avenger. It’s fitting, then, that
the only DIVA who is given any substantive backstory is O-Ren, the character
whose origin tale functions as a parallel and an inverse of our heroine’s.
At her own lowest point, freshly awakened from her coma and willing her limbs
out of atrophy, Beatrix recounts O-Ren’s revenge against Matsumoto, the yakuza
boss who murdered her parents. O-Ren’s story is rendered in
hyper-stylized anime and scored with a lean yet operatic mournfulness that
evokes the Fistful of Dollars trilogy,
vesting it with a mythic grandeur that does more than simply align the viewer’s
sympathies with her aim—it suggests that claiming her revenge is a vital, even
sacred task.
However, this
anime sequence ends with O-Ren delivering a round-house kick straight to
Beatrix’s pregnant belly, doing Bill’s bidding so he’ll back her quest,
Shakespearean in magnitude, to become the boss of all bosses of the Japanese yakuza. And
then we’re back to live-action, down to earth, and O-Ren is beheading
dissenters and letting her entourage bully the wait-staff of the bar she owns.
Her violence has no purpose, no passion; trafficking in mindless cruelty, she’s
more akin to Matsumoto than to the young girl who looks him in the eye and asks
if she looks like anyone he’s killed as she twists her sword into his gut. That
girl emerges again, however briefly, in that final fight with Beatrix; after
Beatrix draws first blood, O-Ren bows her head and says, “For ridiculing you
earlier, I apologize.” The sorrow in those six words shows that she can
remember the raw feeling of violation without recourse. The women rush each
other until O-Ren’s blood ribbons the snow: a single red spatter framed against
a pristine whiteness that suggests the purity of Beatrix’s mission.

Maleficent
shares a thematic kinship with Kill Bill
by suggesting that revenge really can be cathartic, and by having its heroine
find peace after vengeance through her bond with another woman: Maleficent has
Aurora, and Beatrix has B.B., her daughter. 
So it’s appropriate that Maleficent’s
final battle scene is set around another purifying force: fire. Dragon’s
breath surges over stone, leaps over the battlements as a re-winged Maleficent
takes flight with her nemesis, Stefan, clinging to her boot. It’s a grand
fuck-yeah moment, akin to Katniss delivering her quiver-full of a middle finger
to the Capitol and Arya scratching one name off her kill list, Coffy gunning down
her first drug dealer and Carrie turning a prom full of bullies into a taffeta
and sequined holocaust. But these are even more than fuck-yeah moments—they’re
fuck-yeah moments that show the self-affirming power of revenge. Their message
is written in blood and flame: I matter. I know who hurt me, and I’m going to
make them pay.

Laura Bogart’s work has appeared on The Rumpus, Salon, Manifest-Station,
The Nervous Breakdown, RogerEbert.com and JMWW Journal, among other
publications. She is currently at work on a novel.