23 Jun

On Reading

Yesterday, I heard a brief segment of the NPR program “On the Media,” which included an interview with Ann Kirschner, a woman who set out to read Little Dorrit in four different formats: paperback book, audio book, Kindle, and iPhone.

Now, just to clarify: she didn’t read the entire book four times, which is how I initially envisioned this crazy project. No. Instead, she would start in her house with the paperback version, and then she’d pick up at that point with the audio book while traveling to work on the subway.

“I didn’t set out to be scientific, I set out to be practical,” she explained. Which is to say that it wasn’t so much an experiment as a method of fitting 1,000 pages of reading into an otherwise busy life.

But the experience still provoked questions about various methods of reading (and whether they can all be called reading). I don’t think there was a winner in her estimation, but the Kindle seemed to be the clear loser. She didn’t like “having to make a conscious decision to take it with her.” She also cited the annoying black screen transition that animates every page turn.

I’ve seen this annoying black screen myself. Last week, one of the members of my writing class brought a Kindle to lunch and passed it around. When you “turn” a page, the writing turns white, and the screen turns black just for a split second. It’s jarring.

Ms. Kirschner said she spent the most time listening to the audio book version, and she disagrees with those who say listening isn’t reading. She admits that with an audio book, you’re at the mercy of the narrator; you can’t go backward or forward; you can’t dog-ear pages or underline or write in the margins. So it’s a relatively “passive” experience, she concedes. But is it still reading?

08 Apr

Flutter

On why this is funny and why it isn’t.

First watch the video:

So, since my readership is so huge and diverse (and therefore may not understand all that’s being mocked in the clip), let me begin by explaining some of the humor.

  • Flutter’s fictional founders are Stanford dropouts. It’s typical of web 2.0 shit that the founders were college kids at prestigious universities, who had (sometimes only) one good idea. Google’s founders were Stanford students. Facebook’s founders were at Harvard. And Twitter co-founder Jack Dorsey was an NYU dropout.
  • “A lot of people don’t have time to twitter.” Yeah. The whole concept of microblogging is absurd. Even more absurd than blogging. But it certainly doesn’t require time.
  • Nor does it require thought, really. “You hardly have to think about what you’re posting.” The majority of tweets are — like the majority of things people say — not witty, insightful, or really all that enlightening anyway.
  • “Flaps.” And later in the video, some guy calls tweets “twits.” Perhaps not quite as amusing as how Stephen Colbert conjugates the verb, but funny nonetheless. It’s funny (ha ha) and funny (strange) that a new verb can enter our language so quickly.
  • “FlutterEyes” mocks the kind of people who spend all their time texting, a habit that’s a direct slap in the face to whoever they’re with in the real world.
  • “MySpace, I guess.” Ha. MySpace is really uncool, and so it seems genuine that a hip web 2.0 company would be reluctant to develop easy access to it.
  • Other hip web 2.0 applications have sold out and become more commercial. So the “$Pepsi” thing parodies those.
  • “Shutter without the vowels.” I’ll let you figure that out. It’s also worth noting that twitter.com originally was twttr. No joke.

Okay, now that I’ve killed the humor by analyzing it, let me explain what’s actually somewhat scary about this Flutter concept. In my Science Fiction class this year, I’ve been examining predictions of future technology. Not just the crackpot predictions, mind you. But the well-grounded predictions made by respected academics. And there are a few things hinted at in the Flutter mockumentary that aren’t that far off.

First off, what will really happen to our intelligence? As writer Nicholas Carr points out in his famous article about Google making us stupider, “as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.” This flattening to the artificial might happen sooner than we think.

It’s fairly inevitable, for instance, that our human memories will soon become unnecessary. Have you ever forgotten someone’s name? Ever had an argument about who took out the garbage last? According to Jim Gray of Microsoft Research, “It will soon be possible – in terms of cost and size – to store a complete digital video record of your life.” So you can settle that argument about who last took out the garbage. Eric Horvitz, also of Microsoft Research, takes this stuff a step further: “As more of our lives go digital, we may use a program to sort our data. And it could hook up to software that understands the things people forget.” Facial recognition software + video = never forgetting another name. This supersession of memory is almost a definite. If we, as a race, survive for the next three decades, we’ll see such things happening.

One of the costs, though, will be privacy. The Flutter video jokes about absolute transparency when it describes the iPhone app that will know where you are and “flap automatically.” This sort of thing is also a definite. In the near future, more and more items will be hooked up to the internet. People like Ray Kurzweil and Kevin Kelly have predicted that the internet, which we now access through our desktop and laptop computers, will be all around us. By placing RFID chips in food packaging and in clothing, we’ll literally be living in the web. And it will allow some pretty cool things. We could get customized recipe suggestions from the food items in our cupboard, which would “communicate” with each other. We could find a lost sweater simply by searching for it on Google.

We’re only a year or two away from mobile devices that can report our GPS coordinates every half hour. Actually, many of them could do that right now with the right software. But as more places and objects get hooked into the net with these RFID chips and whatnot, our phones will be able to give more than just our GPS coordinates. They’ll be able to essentially track us throughout the day with identifiers like “Starbucks bathroom.” But the price will be privacy. “If you want total personalization,” Kevin Kelly notes, “you’ll need total transparency.”
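
The software side of that tracking really is nearly trivial already. Here’s a minimal sketch, assuming a phone browser that exposes the W3C Geolocation API; the reporting URL is a made-up placeholder, not any real service, and the half-hour interval just comes from the paragraph above.

```javascript
// A sketch of periodic location reporting from a phone's browser.
// Assumes Geolocation API support; the endpoint below is hypothetical.
var HALF_HOUR = 30 * 60 * 1000; // milliseconds

function reportPosition() {
  navigator.geolocation.getCurrentPosition(function (position) {
    var coords = position.coords.latitude + "," + position.coords.longitude;
    var request = new XMLHttpRequest();
    request.open("POST", "http://example.com/report", true); // placeholder URL
    request.send("coords=" + encodeURIComponent(coords));
  }, function (error) {
    // Give up quietly if the user denies access or no fix is available.
  });
}

setInterval(reportPosition, HALF_HOUR);
reportPosition(); // send one fix right away
```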

If you’re willing to give up some privacy, though, you’ll probably find yourself integrating with technology more and more. That’s not to say you’ll allow a chip to be implanted under your skin, but perhaps you’ll get yourself a pair of FlutterEyes. Or maybe a pair of “active contact lenses,” which would “project words and images into the eye.” And if you do so, that might be the “gateway drug” of sorts to more technological augmentation. We already have some pretty useful augmentation in the form of cochlear implants and visual cortex implants. And there are currently paraplegics whose brains are hooked up to electrodes which allow them to move a cursor on a computer screen. (This was done four years ago to Matthew Nagle, by Dr. John Donoghue, the end goal being to allow those with spinal cord injuries to bypass the damaged neurons altogether.)

Bran Ferren of Walt Disney Imagineering — admittedly not as impressive an employer as others — claims that “the technology needed for an early Internet-connection implant is no more than 25 years off.” But Ray Kurzweil has made some equally bold assertions. Nanotechnology is currently taking off, and since technology develops at exponential rates, we will someday soon have respirocytes, nanotech red blood cell substitutes which are much more efficient than actual red blood cells. A human whose blood was made up of 10% nanotech respirocytes would be able to hold his breath for four hours. “Nanobots capable of entering the bloodstream to ‘feed’ cells and extract waste will exist (though not necessarily be in wide use) by the end of the 2020s. They will make the normal mode of human food consumption obsolete.”

Granted, we’re now delving into some pretty far-fetched stuff that’s not going to happen really soon, but as long as we’re going there, let’s examine the ideas of James Hughes, author of Citizen Cyborg, who speculates, “if we get to the point where we can back up our memories and our feelings, we may be able to then share them with other people.” When you get married, you might “negotiate how much of your personal memory space you’re going to merge. . . . So the boundaries between us will begin to blur.” He also posits (as does Aubrey de Grey) that our life spans will get to be very long — perhaps in the thousands of years. My first reaction to such assertions is to be scared. But Hughes gets philosophical: “I don’t want to be immortal. What I want is to live long enough so that I understand how profoundly illusory the self is and I’ve shared enough of my experiences and thoughts and I’ve stored them up and given them to other people enough that I no longer feel like this particular body — existence — needs to go on. . . . That’s the post-human equivalent of the Buddhist enlightenment.”

Is that where we’re headed? Enlightenment or stupidity? Man or machine?

Google’s founders (Stanford grads Larry Page and Sergey Brin) have claimed that they’re really “trying to build artificial intelligence and do it on a large scale.” Brin has stated, “Certainly if you had all the world’s information attached directly to your brain . . . you’d be better off.”

But Nicholas Carr counters with the following eloquent rebuttal: “their easy assumption that we’d all ‘be better off’ if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.”

Who knows? Maybe some day, humanity will look back on this Age of Human Fallibility fondly and long for a sort of imperfection and lack of understanding no longer possible. Until that day, I still desperately want an iPhone.

For further reading:
Kevin Kelly’s TED Talk
U of Washington Tech Predictions
2057 Video (Includes paraplegic cursor movement)
Is Google Making Us Stupid?
James Hughes’ Citizen Cyborg
Results of Pew Poll of 700 tech experts on potential trends for 2020
Ray Kurzweil’s TED Talk
Ray Kurzweil’s main points from The Singularity is Near
Summary of WIRED UK’s top predictions
To the Best of Our Knowledge “Future Perfect: Our Computers”
Chip Implants for Paraplegics

06 Apr

Wisconsin Film Festival 2009

If there were ever a big street brawl between artists and critics, I’d much prefer to be on the side of the artists. But critics are mean, you say; they live to kick artists’ asses. True. But artists — good ones — know how to use pain. And when the going got tough in this street brawl, the artists would fashion new weapons out of the debris, and the critics’ trash-talking would melt into whining.

Don’t get me wrong. Critics are certainly important; their presence can spur artists to create better work. But the vastly oversimplified explanation that I might someday tell my children goes like this: artists do what they love and critics point out what’s wrong. Now, kids, do you want to be artists or critics?

That said, I’m going to engage in a little bit of criticism, which you should take with a grain of salt. This past weekend, I got to see several movies at the Wisconsin Film Festival. Here are my quick impressions (I don’t spoil plot unless the movie sucks).

500 Days of Summer
My first movie was 500 Days of Summer, directed by Madison native Marc Webb. It was a romantic comedy, actually. But in contrast to most romantic comedies, it was good. It didn’t get its laughs from slapstick humor and references to pot. Instead, it interjected clever nods to other film genres, including, for instance, a choreographed dance number in the street the day after the protagonist, Tom, sleeps with his love interest, Summer. The plot swivels around the fulcrum of day two hundred and something in Tom and Summer’s relationship. On that particular day, they break up. So the film has us jumping back and forth between the happier, more hopeful parts of the relationship and the ugly aftermath, which sees Tom having a hard time accepting that it’s over. The non-linearity is great but a little jarring, since it’s sometimes hard to tell where in the timeline we are. All in all, though, the movie succeeds because it’s a pretty honest portrayal of longed-for relationships and the ways in which we glorify them in our memories.

32A
32A is an Irish film about 14-year-old Maeve, who is navigating her entrance into adolescence. At heart, it’s another story about failed relationships. Maeve, a pretty, innocent, and unassuming girl, somehow gets the attention of 16-year-old heartthrob Brian Powers. It’s a relationship doomed to fail, given the huge age difference between them, and most of the film’s tension comes from that inevitability. Maeve gets in arguments with her girlfriends, skips out of school once, smokes pot, and throws up in the bathroom of a teen dance club (where you have to be at least 16 to enter). But she does all of it while maintaining her innocence, really. As such, she comes off as a very authentic character. A lot of coming-of-age movies have the central protagonist shedding childhood too quickly. Maeve doesn’t. And that’s why I liked the film. But it was a little slow-moving at times, and there’s a subplot involving Maeve’s friend Ruth, whose estranged father has just shown up and wants to meet her. Unfortunately, the Ruth plot really has nothing to do with Maeve’s.

Afterschool
Afterschool is the story of Rob somebody-or-other, who’s a sophomore at a boarding school called Bryton. It’s a co-ed boarding school at that, which is a disaster waiting to happen. (Do such things actually exist? Is anyone really stupid enough to have a co-ed boarding school?) This film is no Catcher in the Rye, that’s for sure. It tries to expose the quiet, unexpressive, YouTube-tinged variety of modern-day teen angst, but its protagonist is pretty unlikeable. He’s pitiable, though, and at one point, he calls home and tells his mom that he doesn’t think anyone likes him; Mom says she doesn’t need the stress of worrying over him and asks him to assure her he’s okay. It’s in this environment that Rob witnesses the school’s popular senior twin girls emerge from a bathroom and fall, bloody and high on drugs/rat poison, onto the floor of an empty hallway. He has been filming the hallway as part of an AV Club project, and so his camera is still running as he walks slowly over to the girls and struggles to help. His back is to us, and it’s clear from the get-go that he’s actually killing one of the girls, but that’s supposed to be a surprise in the final scene of the movie, which “exposes” the reality of the situation.

The film is trying to make some sort of commentary, I’m sure, on how our modern teens seem to be living a series of short video clips rather than life itself. And I suppose the very slow tempo of the movie might further such a message. But it was also really tedious. The film’s attempts at mystery and comedy, which felt inappropriate to the mood, may have played into the overall schizoid quality, but they were tedious too. I found myself looking at my watch repeatedly and growing increasingly impatient with the unrealistically incompetent teachers. When the movie ended, I couldn’t get out of there fast enough. Maybe that’s what the filmmaker was going for. But watching that movie wasn’t an experience I’d wish on anyone else.

Our Beloved Month of August
Our Beloved Month of August was sometimes tedious, too. Actually, the first half of this long film (two and a half hours) wasn’t just sometimes tedious; it was tedious almost throughout. The footage is of various aspects of small-town life in Portugal, including such things as boar hunts, firetrucks ascending mountain roads, religious processions, karaoke performances, and unemployed men drinking wine. None of it seems to relate to anything else, and the only constant is that there are a lot of small-time bands singing at various town festivals. Just when it’s seeming like this is all going nowhere, we get a scene of a producer talking to a director who’s supposed to be making a movie called Our Beloved Month of August. The producer chews him out, saying that none of the work so far has been focused and that the screenplay actually calls for actors. The producer reads the descriptions of the characters, and soon afterward, we get more seemingly unrelated footage — of a high school boy who plays roller hockey and of a young girl who works in a lookout tower, scoping the mountains for fires. But these new additions are more relevant in that the real-life characters are suddenly getting together to play in a band and are beginning to enact the plot of the screenplay. The film’s second half mostly delivers the fictional narrative, with occasional jumps (no warnings given) to real life.

The entire movie really coalesces in the final scene, which has the director arguing with the sound guy about how sometimes in the film there are sounds which don’t exist, like music playing when we’re in the woods. The sound guy responds by saying, “So you don’t hear music right now?” just as a song starts playing in the background. The gist of the argument that continues afterwards is that we don’t want to hear all the sounds that exist when we see a film. We don’t want reality unedited. And the movie certainly makes that point. All in all, it does so quite cleverly, but the documentary half of it is a little too long, and though most of the information we get in that first part resurfaces at some point in the second part (a band in the first half becomes a song playing on the radio in the second half, for instance), not all of it connects.

Who Is KK Downey?
Who Is KK Downey? is a silly film put on by a Canadian sketch comedy group, and it kind of feels like a film put on by a sketch comedy group. Two failed artists — Theo, whose book Truckstop Hustlers just got rejected, and Terrance, whose girlfriend just dumped him — decide to recast Theo’s book as an autobiography, written by KK Downey, the protagonist of Truckstop Hustlers. Terrance puts on a wig and poses as KK, while Theo becomes his manager. The book becomes a hit, and people gobble up KK’s overly provocative life as a transgendered prostitute/druggie. There’s a pretty clear parody of James Frey’s A Million Little Pieces, which people loved until they discovered it was fiction. And there’s some commentary on art here (Terrance’s girlfriend is an artist whose claim is “everything deserves a soul,” so she gives eyes to inanimate objects — by gluing googly eyes onto everything). As such, the film feels not-quite-American. Or not quite United-States-of-American, I should say, since comedies here tend not to have a point other than “love conquers all” or some such clichéd message. The fact that the movie has an additional layer is a good thing. But the film isn’t hilarious. It’s just funny.

Mermaid
Mermaid was my last movie, and definitely one of the best. Last year, my favorites were The Substitute (a Danish film) and Ben X (a Belgian film). Mermaid is a Russian film, and follows the life of Alisa, a withdrawn but whimsical girl who lives in a seaside shack with her single mother and her grandmother. Alisa’s mother is far more interested in men than she is in Alisa’s welfare, and as a result, Alisa is far more interested in her dreams (many of which include meeting her father) than she is in being a normal girl. The film incorporates some magical realism as Alisa discovers she can make apples fall from trees and cause storms to roll in from the sea. When one such storm destroys the beach shack, the three women move to Moscow, where a teenaged Alisa falls for a rich man who sells plots on the moon. Though she’s often forlorn and forsaken, Alisa is fun and imaginative. So when we watch her take a pathetic job where she walks around the city in a cell phone costume, it’s sad but also full of wonder. One reviewer put it well: “Certainly a level of tenable comparison can – and has to – be made to Amélie; however, make no mistake: here Melikian carves a darker tale of whimsy, rippled by a distinct undercurrent of melancholy not seen in its French counterpart.”

29 Mar

Story 2.0

On what new media has to offer to engage us in stories.

Interactivity is the buzz word in the new media realm. But what are we really after when we strive for interactivity? I would argue that the goal is to have the reader/viewer/player/listener engaged. It’s that simple.

Of course, the pinnacle of engagement is in the creative process. I learn much more, for instance, when I teach a class than when I take one. In creating the curriculum, I need to be more invested — mentally and emotionally — in the product of that creation. This sort of logic is, I would argue, behind many of the web 2.0 innovations we’ve seen in recent years. It explains the popularity of sites like YouTube, Facebook, and Flickr. They don’t stop at offering their users content to ingest; they allow the users to create the food they’re eating, too.

But if one of the defining characteristics of web 2.0 is interactivity, does Story 2.0 require the same? As I explained in my last post, I don’t think that the audience of a story can ever co-author it. But I do see the current media environment as doing two important things to engage people in stories. First, they (the proverbial they) are giving us some great new tools to create content. Second, they are giving us some very engaging delivery systems.

The new media environment has been especially friendly to the visual and audio arts, providing plenty of free services and software that actually help people create more interesting and sophisticated content. A site called Aviary has been developing various online software tools to manipulate photos, create vector art, and do other fancy design work. And there are tutorials out there for everything, so even casual Star Wars geeks can make movies with lightsaber effects.

But when it comes to storytelling, there’s not much in terms of software that you can give people to actually help them craft a better story. I suppose free movie editing programs, bundled with most new computers, have had some impact on how many people are producing movies, but as I’ve learned after 7 or 8 years of teaching video production to high school students, the tools don’t guarantee good stories.

Other people can help you craft better stories, though. And I guess if tutorials are tools, so are internet communities. I can’t say whether the various poetry and fiction forums out there in cyberspace have improved storytelling in general. But they are out there. So in addition to software, add communities to the list of tools.

And then add one more: publication. The arts need an audience. And if the web is good at any one thing, it is good at giving people a stage, however small that stage may be. Blogs, web cams, and image/video hosting sites make it possible for everyone to get published.

Combine the software, a community, and publication, and you have a site like xtranormal. Here’s their “about”:

Xtranormal’s mission is to bring movie-making to the people. Everyone watches movies and we believe everyone can make movies. Movie-making, short and long, online and on-screen, private and public, will be the most important communications process of the 21st century.

Our revolutionary approach to movie-making builds on an almost universally held skill—typing. You type something; we turn it into a movie. On the web and on the desktop.

I decided to give it a try and came up with the following:

So there you have it. Clear evidence that the new media environment offers some new tools for story production. And also clear evidence that the story’s only as good as the storyteller.

But the question isn’t about whether Story 2.0 will be better than previous iterations of narratives. The question is whether our new forms of story are increasing the engagement of the readers.

The tools may or may not help the creators of tales make more engaging content, but engaging more people in the creation process can’t hurt the consumption end of that cycle.

If story-building tools aren’t helping Story to evolve, though, the various delivery systems offered in the new media environment are certainly adding layers of audience engagement.

The most obvious enhancement brought about by the web is its multi-media nature. Sites are capable of delivering images and sound along with the text we’re reading. Our minds tend to be captivated by information assaults, which is why TV is so good at lulling us into hypnagogic states. But as such mesmerism proves, engagement isn’t always active engagement.

I think Story 2.0 improves upon television by requiring a little more active navigation than the remote control does. How? Well, all web browsers read HTML and JavaScript, and most can run Flash via a plugin; PHP, meanwhile, runs on the server to generate the pages the browser receives. Together, these technologies are capable of producing dynamic user involvement — to put it simply, they allow the user to click on things.

I’ve already delved into hypertext fiction, which is the simplest form of “dynamic user involvement.” Even noncritical user involvement (i.e., interactivity that has no bearing on the direction the story takes — clicking “next” simply gets you the next page) is still user involvement, and it’s a level higher than television.
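
To see just how little machinery that simplest level involves, here’s a minimal sketch in plain JavaScript. It isn’t taken from any particular hypertext work: it assumes a page containing a div with the id “story” and a link with the id “next,” and the passage text is invented for illustration.

```javascript
// Noncritical hypertext navigation: clicking only fetches the next
// pre-written passage, so the reader's clicks never alter the story.
// Assumes <div id="story"></div> and <a href="#" id="next">next</a>.
var passages = [
  "You wake in a quiet house. A letter waits on the table.",
  "The letter is unsigned. It asks you to come to the harbor at dusk.",
  "At the harbor, the boat has already gone. The end."
];
var current = 0;

function showPassage(index) {
  document.getElementById("story").innerHTML = passages[index];
}

document.getElementById("next").onclick = function () {
  if (current < passages.length - 1) {
    current = current + 1;
  }
  showPassage(current);
  return false; // keep the link from reloading the page
};

showPassage(current);
```

Swap the linear array for a branching graph and you have hypertext fiction proper; either way, every word the reader can reach is already written.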

And then there’s gameplay, which gives us challenges that exist within a context of a story. Certainly, games are capable of getting us to engage with a narrative quite actively. Even if I maintain that they don’t allow any co-authoring, I must grant that games produce active engagement.

And if we’re talking about games, we’ve got to return to communities. Not only are communities tools in the creation of content, they’re also sometimes part of the delivery system. Book clubs, casual discussion of movies, and academic study — such tried and true community engagement with narratives has always been a part of story delivery. But now we can add group gameplay to that list.

And if we’re really going for gold, we can take a look at “alternate reality games,” which might be the current pinnacle of active audience engagement.

ARGs, as they’re called, begin with a narrative hook that describes some sort of mysterious event. Dana’s Aunt Margaret is having trouble with her website — weird trouble. Or some sort of strange red light has been seen in coastal waters worldwide. Or six amnesiacs wake up, blindfolded, in labyrinths around the world with tattoos on their arms that read, “Trovu la ringon perditan” (Esperanto for “Find the lost ring”).

As you investigate these mysteries, you will inevitably stumble upon links to related blogs, email updates, and some sort of forum where you can discuss the details of the mystery with other players/readers. A community of puzzle-solvers forms around the narrative, and more of the plot is revealed as the various players uncover more clues. Typically, the gameplay extends from the virtual world into the real world. So, for instance, the game Perplex City, which began with a magical cube being stolen from another planet, ended with a real person in England finding the cube in a park in Northamptonshire.

The vast majority of ARGs are commissioned by some sort of corporation that wants to build hype for one of its products. The one that starts with Aunt Margaret having trouble with her website was actually a marketing scheme for the release of Halo 2 and was called “I Love Bees.” The strange red lights in the sea are a current tie-in with the video game BioShock’s story and are also about building hype for the upcoming BioShock 2. The six amnesiacs were part of a game called “The Lost Ring,” launched by McDonald’s and the IOC in anticipation of the 2008 Summer Olympics.

More and more television shows are creating ARGs to help expand the universe of the series’ fictional narratives. Dollhouse, The Sarah Connor Chronicles, Heroes, Lost, and a slew of others have all attempted to increase engagement via ARGs. And according to a recent article in the Economist, it’s working.

ARGs may be the best glimpse we can get into what Story 2.0 might end up looking like. They’re marketable, and they lend themselves well to cross-promotion and advertising; as a result, they have some real money behind them, and they get promoted. Most impressively, though, they combine almost everything that the new media environment has to offer: community, interaction, and a multi-media experience.

24 Mar

Video + Interaction

On whether video games are the new media of choice for delivering stories in our digital age.

Are all video games stories? No. Tetris.

But the vast majority of games at the very least have a back story. That is, there’s some story that precedes the interactive game the player partakes in. Even Space Invaders, as Jesper Juul points out, has a back story. He writes, “A prehistory is suggested in [Space] Invaders: An invasion presupposes a situation before the invasion. It is clear from the science fiction we know that these aliens are evil and should be chased away. So the title suggests a simple structure with a positive state broken by an external evil force.” Just to emphasize: a story is suggested by Space Invaders, but not actually stated.

Following Space Invaders, though, there were plenty of games that did state the story that preceded play. Classic arcade games like Double Dragon, Alien Syndrome, and Paperboy usually had some minimal introductory scenario that got the story rolling (click the titles to see the intros). The typical one had some evil mastermind kidnapping your girlfriend. Wasn’t that the premise of most of the Super Mario games? Princess Peach is kidnapped and we’re off.

In a lot of these “back story” games, though, the play doesn’t move the story forward much. The initial computer animated sequence provides a context for gameplay, but what follows is a series of challenges that have little to do with plot.

This situation continues nowadays with online shooters, which dispense with story altogether. Even the ever-popular Halo series, which has a definite narrative thread, throws story out the window for its online play, where a player is usually on a team of marines fighting against another team of marines. Such a scenario actually runs counter to the Halo story, where the player never fights against his own species.

Speaking of the Halo story, though, in it we can see a more sophisticated method of conveying the narrative. It doesn’t just begin with back story. It then proceeds to fill in gaps between the various “levels” or chapters using “cut scenes.” The cut scenes exist to propel the story forward and they alternate with actual gameplay.

But it’s rare (and a fairly recent phenomenon) that gameplay and narrative are actually delivered at the same time. RPGs and action-adventure games sometimes attempt to offer the player various narrative choices, but often those choices take place in interactive cut scenes rather than in the gameplay itself. One of my favorite examples of such an approach comes from the game Deus Ex: Invisible War. Are there any games that don’t pause the gameplay to allow the player to move the story forward through interaction? Prior to reading Jesper Juul, I would have said yes, definitely.

But now I’m not so sure.

He makes the claim that you “cannot have interactivity and narration at the same time,” an argument he bases on a pretty complex discussion of story time, narrative time, and reading time. But I think I have a simpler explanation.

If you’ve ever played a game with a narrative component to it, think about whether the narrative thread of the game would have been different without any of the player’s game challenges like fighting, puzzle solving, platform jumping, etc. In most cases, I think, it’s an either/or situation. Either you’re fighting baddies, collecting coins, doing whatever your character is supposed to be doing, OR you’re watching the story progress through cut scenes and/or choosing pre-packaged lines of dialogue for your character to deliver. Thus, with every video game I can think of, if you were to fast-forward through the gameplay sections, the story would remain completely intact.

Modern games are getting more fluid with this alternating narration/gameplay format (a game like Fallout 3 being a good example of that — see the fifth video here for an example), but that’s all they’re doing — they’re alternating better. They’re not actually making the narrative as interactive as many game developers claim they are. And the claims are pretty hefty: “A growing number [of developers],” according to the “Brainy Gamer,” now believe that “the designer builds a system, but the player authors the story.”

Except they’re wrong. The player cannot co-author the story. As Jonathan Blow notes, “Story is a filtered presentation of events that already happened.” The player’s interaction with a game consists of alternating between navigating the pre-written (sometimes choose-your-own-adventure) story and the challenging situations that don’t really matter to the narrative. There’s no authorship on the player’s part.
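
To make the “no authorship” point concrete, here’s a toy sketch of how a choose-your-own-adventure layer is typically wired up; the node names and lines are my own inventions, not from any actual game. Every line the player can “choose” already exists in the data before anyone plays.

```javascript
// A toy branching story: the player picks among pre-written options,
// but every possible line was authored in advance. Nothing the player
// does adds to this structure.
var story = {
  start: {
    text: "The courier hands you a sealed envelope.",
    choices: [
      { label: "Open it now", next: "open" },
      { label: "Pocket it and walk away", next: "pocket" }
    ]
  },
  open:   { text: "Inside is a single word: RUN.", choices: [] },
  pocket: { text: "You never learn what it said.", choices: [] }
};

function advance(nodeName, choiceIndex) {
  // "Interaction" is just selecting which pre-authored node comes next.
  var node = story[nodeName];
  if (node.choices.length === 0) return null; // story over
  return node.choices[choiceIndex].next;
}

// Example: the player chooses the first option at the start.
var next = advance("start", 0);
console.log(story[next].text); // prints a line that was always there
```

Fast-forward past any gameplay that happens between these nodes and the structure, and the story, would be untouched.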

Steve Gaynor articulates this sentiment well: “Video games are not a traditional storytelling medium per se. The player is an agent of chaos, making the medium ill-equipped to convey a pre-authored narrative with anywhere near the effectiveness of books or film. Rather, a video game is a box of possibilities, and the best stories told are those that arise from the player expressing his own agency within a functional, believable gameworld. These are player stories, not author stories, and hence they belong to the player himself.”

What I think Gaynor means is that there are stories delivered by the video game and stories that arise from the video game. Stories delivered by the game are in no way co-authored by the player, and stories that arise from the player’s experience — from the “box of possibilities” — are, I would argue, experiences, not stories. Sure, when you tell someone about what happened in a game, it’s now a story, but in the game itself, it’s an experience.

Gaynor continues: “Unlike a great film or piece of literature, (video games) don’t give the audience an admiration for the genius in someone else’s work; they instead supply the potential for genuine personal experience, acts attempted and accomplished by the player as an individual, unique memories that are the player’s to own and to pass on.”

The goal of many developers, then, is to provide a rich world that the player can navigate freely; these worlds are often referred to as “sandboxes.” But the purest sandboxes eradicate narrative.

Enter Second Life, the perfect example of a virtual experience that has no narrative element to it. Sure, some stories may arise out of it, but Second Life itself has no more plot than the state of Wisconsin does.

In a figurative sense, sure, we’re all authoring our own life story, but that’s just a metaphor. We’re experiencing our lives. My life isn’t a story until I craft it into a “filtered presentation of events that already happened.” Thus, narrative and experience are somewhat at odds. The listener/reader/viewer/player cannot tell the story she’s receiving. It’s an impossibility by definition. She can choose between various options if the storyteller gives her any, but to say she’s then co-authoring the story is hyperbole.

As for video games, they use story to create an enjoyable context for gameplay. And they are undoubtedly an effective delivery medium for some pretty good stories. But the player’s interactivity isn’t as revolutionary as some would make it out to be. Games do raise some interesting questions, however: What is the storyteller’s goal for his audience, and what is the ideal reception on the part of the story’s receiver? I’ll explore those topics next.