08 Apr

Flutter

On why this is funny and why it isn’t.

First, watch the video:

So, since my readership is so huge and diverse (and therefore may not understand all that’s being mocked in the clip), let me begin by explaining some of the humor.

  • Flutter’s fictional founders are Stanford dropouts. It’s typical of web 2.0 shit that the founders were college kids at prestigious universities who had (sometimes only) one good idea. Google’s founders were Stanford students. Facebook’s founders were at Harvard. And Twitter’s founders dropped out of NYU and the University of Nebraska.
  • “A lot of people don’t have time to twitter.” Yeah. The whole concept of microblogging is absurd. Even more absurd than blogging. But it certainly doesn’t require time.
  • Nor does it require thought, really. “You hardly have to think about what you’re posting.” The majority of tweets are — like the majority of things people say — not witty, insightful, or really all that enlightening anyway.
  • “Flaps.” And later in the video, some guy calls tweets “twits.” Perhaps not quite as amusing as how Stephen Colbert conjugates the verb, but funny nonetheless. It’s funny (ha ha) and funny (strange) that a new verb can enter our language so quickly.
  • “FlutterEyes” mocks the kind of people who spend all their time texting other people, which is a direct slap in the face to those they’re interacting with in the real world.
  • “MySpace, I guess.” Ha. MySpace is really uncool, and so it seems genuine that a hip web 2.0 company would be reluctant to develop easy access to it.
  • Other hip web 2.0 applications have sold out and become more commercial. So the “$Pepsi” thing parodies those.
  • “Shutter without the vowels.” I’ll let you figure that out. It’s also worth noting that twitter.com was originally twttr. No joke.

Okay, now that I’ve killed the humor by analyzing it, let me explain what’s actually somewhat scary about this Flutter concept. In my Science Fiction class this year, I’ve been examining predictions of future technology. Not just the crackpot predictions, mind you. But the well-grounded predictions made by respected academics. And there are a few things hinted at in the Flutter mockumentary that aren’t that far off.

First off, what will really happen to our intelligence? As writer Nicholas Carr points out in his famous article about Google making us stupider, “as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.” This flattening to the artificial might happen sooner than we think.

It’s fairly inevitable, for instance, that our human memories will soon become unnecessary. Have you ever forgotten someone’s name? Ever had an argument about who took out the garbage last? According to Jim Gray of Microsoft Research, “It will soon be possible – in terms of cost and size – to store a complete digital video record of your life.” So you can settle that argument about who last took out the garbage. Eric Horvitz, also of Microsoft Research, takes this stuff a step further: “As more of our lives go digital, we may use a program to sort our data. And it could hook up to software that understands the things people forget.” Facial recognition software + video = never forgetting another name. This supersession of memory is almost a definite. If we, as a race, survive for the next three decades, we’ll see such things happening.

One of the costs, though, will be privacy. The Flutter video jokes about absolute transparency when it describes the iPhone app that will know where you are and “flap automatically.” This sort of thing is also a definite. In the near future, more and more items will be hooked up to the internet. People like Ray Kurzweil and Kevin Kelly have predicted that the internet, which we now access through our desktop and laptop computers, will be all around us. By placing RFID chips in food packaging and in clothing, we’ll literally be living in the web. And it will allow some pretty cool things. We could get customized recipe suggestions from the food items in our cupboard, which would “communicate” with each other. We could find a lost sweater simply by searching for it on Google.

We’re only a year or two away from mobile devices that can update every half hour with our GPS coordinates. Actually, many of them could do that right now with the right software. But as more places and objects get hooked into the net with these RFID chips and whatnot, our phones will be able to give more than raw coordinates. They’ll be able to track us throughout the day with identifiers like “Starbucks bathroom.” But the price will be privacy. “If you want total personalization,” Kevin Kelly notes, “you’ll need total transparency.”
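Nothing exotic is required, either. Here is a minimal sketch (my own toy illustration, not any real app’s code) of a browser “flapping automatically” using the standard Geolocation API; the posting endpoint is invented:

```javascript
// A minimal sketch of "flapping automatically": every half hour, read the
// device's position and post it somewhere. navigator.geolocation and fetch
// are standard browser APIs; the endpoint URL is hypothetical.
const HALF_HOUR = 30 * 60 * 1000;

setInterval(() => {
  navigator.geolocation.getCurrentPosition((position) => {
    const { latitude, longitude } = position.coords;
    fetch("https://example.com/flap", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ latitude, longitude, time: Date.now() }),
    });
  });
}, HALF_HOUR);
```

The only thing standing between us and half-hourly location broadcasts, in other words, is somebody deciding to ship it.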

If you’re willing to give up some privacy, though, you’ll probably find yourself integrating with technology more and more. That’s not to say you’ll allow a chip to be implanted under your skin, but perhaps you’ll get yourself a pair of FlutterEyes. Or maybe a pair of “active contact lenses,” which would “project words and images into the eye.” And if you do so, that might be the “gateway drug” of sorts to more technological augmentation. We already have some pretty useful augmentation in the form of cochlear implants and visual cortex implants. And there are currently paraplegics whose brains are hooked up to electrodes that allow them to move a cursor on a computer screen. (This was done four years ago for Matthew Nagle by Dr. John Donoghue, the end goal being to allow those with spinal cord injuries to bypass the damaged neurons altogether.)

Bran Ferren of Walt Disney Imagineering — admittedly not as impressive an employer as others — claims that “the technology needed for an early Internet-connection implant is no more than 25 years off.” But Ray Kurzweil has made some equally bold assertions. Nanotechnology is currently taking off, and since technology develops at exponential rates, we will someday soon have respirocytes, nanotech red blood cell substitutes which are much more efficient than actual red blood cells. A human whose blood was made up of 10% nanotech respirocytes would be able to hold his breath for four hours. “Nanobots capable of entering the bloodstream to ‘feed’ cells and extract waste will exist (though not necessarily be in wide use) by the end of the 2020s. They will make the normal mode of human food consumption obsolete.”

Granted, we’re now delving into some pretty far-fetched stuff that’s not going to happen really soon, but as long as we’re going there, let’s examine the ideas of James Hughes, author of Citizen Cyborg, who speculates, “if we get to the point where we can back up our memories and our feelings, we may be able to then share them with other people.” When you get married, you might “negotiate how much of your personal memory space you’re going to merge. . . . So the boundaries between us will begin to blur.” He also posits (as does Aubrey de Grey) that our life spans will get to be very long — perhaps thousands of years. My first reaction to such assertions is to be scared. But Hughes gets philosophical: “I don’t want to be immortal. What I want is to live long enough so that I understand how profoundly illusory the self is and I’ve shared enough of my experiences and thoughts and I’ve stored them up and given them to other people enough that I no longer feel like this particular body — existence — needs to go on. . . . That’s the post-human equivalent of the Buddhist enlightenment.”

Is that where we’re headed? Enlightenment or stupidity? Man or machine?

Google’s founders (Stanford grads Larry Page and Sergey Brin) have claimed that they’re really “trying to build artificial intelligence and do it on a large scale.” Brin has stated, “Certainly if you had all the world’s information attached directly to your brain . . . you’d be better off.”

But Nicholas Carr counters with the following eloquent rebuttal: “their easy assumption that we’d all ‘be better off’ if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.”

Who knows? Maybe some day, humanity will look back on this Age of Human Fallibility fondly and long for a sort of imperfection and lack of understanding no longer possible. Until that day, I still desperately want an iPhone.

For further reading:
Kevin Kelly’s TED Talk
U of Washington Tech Predictions
2057 Video (Includes paraplegic cursor movement)
Is Google Making Us Stupid?
James Hughes’ Citizen Cyborg
Results of Pew Poll of 700 tech experts on potential trends for 2020
Ray Kurzweil’s TED Talk
Ray Kurzweil’s main points from The Singularity is Near
Summary of WIRED UK’s top predictions
To the Best of Our Knowledge “Future Perfect: Our Computers”
Chip Implants for Paraplegics

29 Mar

Story 2.0

On what new media has to offer to engage us in stories.

Interactivity is the buzzword in the new media realm. But what are we really after when we strive for interactivity? I would argue that the goal is to have the reader/viewer/player/listener engaged. It’s that simple.

Of course, the pinnacle of engagement is in the creative process. I learn much more, for instance, when I teach a class than when I take one. In creating the curriculum, I need to be more invested — mentally and emotionally — in the product of that creation. This sort of logic is, I would argue, behind many of the web 2.0 innovations we’ve seen in recent years. It explains the popularity of sites like YouTube, Facebook, and Flickr. They don’t stop at offering their users content to ingest; they allow the users to create the food they’re eating, too.

But if one of the defining characteristics of web 2.0 is interactivity, does Story 2.0 require the same? As I explained in my last post, I don’t think that the audience of a story can ever co-author it. But I do see the current media environment as doing two important things to engage people in stories. First, they (the proverbial they) are giving us some great new tools to create content. And second, they are giving us some very engaging delivery systems.

The new media environment has been especially friendly to the visual and audio arts, providing plenty of free services and software that actually help people create more interesting and sophisticated content. A site called Aviary has been developing various online software tools to manipulate photos, create vector art, and do other fancy design work. And there are tutorials out there for everything, so even casual Star Wars geeks can make movies with lightsaber effects.

But when it comes to storytelling, there’s not much in terms of software that you can give people to actually help them craft a better story. I suppose free movie editing programs, bundled with most new computers, have had some impact on how many people are producing movies, but as I’ve learned after 7 or 8 years of teaching video production to high school students, the tools don’t guarantee good stories.

Other people can help you craft better stories, though. And I guess if tutorials are tools, so are internet communities. I can’t say whether the various poetry and fiction forums out there in cyberspace have improved storytelling in general. But they are out there. So in addition to software, add communities to the list of tools.

And then add one more: publication. The arts need an audience. And if the web is good at any one thing, it is good at giving people a stage, however small that stage may be. Blogs, web cams, and image/video hosting sites make it possible for everyone to get published.

Combine the software, a community, and publication, and you have a site like Xtranormal. Here’s their “about”:

Xtranormal’s mission is to bring movie-making to the people. Everyone watches movies and we believe everyone can make movies. Movie-making, short and long, online and on-screen, private and public, will be the most important communications process of the 21st century.

Our revolutionary approach to movie-making builds on an almost universally held skill—typing. You type something; we turn it into a movie. On the web and on the desktop.

I decided to give it a try and came up with the following:

So there you have it. Clear evidence that the new media environment offers some new tools for story production. And also clear evidence that the story’s only as good as the storyteller.

But the question isn’t about whether Story 2.0 will be better than previous iterations of narratives. The question is whether our new forms of story are increasing the engagement of the readers.

The tools may or may not help the creators of tales to make more engaging content, but by pulling more people into the creation process, they certainly can’t hurt the consumption end of that cycle.

If story-building tools aren’t helping Story to evolve, though, the various delivery systems offered in the new media environment are certainly adding layers of audience engagement.

The most obvious enhancement brought about by the web is its multi-media nature. Sites are capable of delivering images and sound along with the text we’re reading. Our minds tend to be captivated by information assaults, which is why TV is so good at lulling us into hypnagogic states. But as such mesmerism proves, engagement isn’t always active engagement.

I think Story 2.0 improves upon television by requiring a little more active navigation than the remote control does. How? Well, every browser reads HTML, nearly all of them run JavaScript, and most can display Flash. (PHP, meanwhile, does its work on the server, assembling pages before they ever reach you.) Together, these technologies are capable of producing dynamic user involvement — to put it simply, they allow for the user to click on things.
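To make “clicking on things” concrete, here is a toy sketch in plain JavaScript (invented passages of my own, not code from any real hypertext work). It assumes a page containing an empty <div id="story"></div>:

```javascript
// A toy branching story: each passage has text plus clickable choices,
// and clicking a choice swaps in the next passage. All passages invented.
const passages = {
  start: {
    text: "You wake in a locked room. A door and a window face you.",
    choices: [
      { label: "Try the door", next: "door" },
      { label: "Open the window", next: "window" },
    ],
  },
  door: { text: "The door creaks open onto a dark hallway.", choices: [] },
  window: { text: "Below the window, waves batter the rocks.", choices: [] },
};

// Render a passage into the page; each choice link loads the next passage.
function show(id) {
  const passage = passages[id];
  const container = document.getElementById("story"); // assumes <div id="story">
  container.innerHTML = "";
  const para = document.createElement("p");
  para.textContent = passage.text;
  container.appendChild(para);
  for (const choice of passage.choices) {
    const link = document.createElement("a");
    link.href = "#";
    link.textContent = choice.label;
    link.onclick = (event) => {
      event.preventDefault();
      show(choice.next); // the click is what drives the narrative forward
    };
    container.appendChild(link);
  }
}

show("start");
```

Trivial as it is, that is the entire technical foundation of hypertext fiction: text, links, and a reader willing to click.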

I’ve already delved into hypertext fiction, which is the simplest form of “dynamic user involvement.” Even noncritical user involvement (i.e., when the user’s interactivity has no bearing on the direction the story takes — clicking “next” just gets you the next page) is still user involvement and is a level higher than television.

And then there’s gameplay, which gives us challenges that exist within a context of a story. Certainly, games are capable of getting us to engage with a narrative quite actively. Even if I maintain that they don’t allow any co-authoring, I must grant that games produce active engagement.

And if we’re talking about games, we’ve got to return to communities. Not only are communities tools in the creation of content, they’re also sometimes part of the delivery system. Book clubs, casual discussion of movies, and academic study — such tried and true community engagement with narratives has always been a part of story delivery. But now we can add group gameplay to that list.

And if we’re really going for gold, we can take a look at “alternate reality games,” which might be the current pinnacle of active audience engagement.

ARGs, as they’re called, begin with a narrative hook that describes some sort of mysterious event. Dana’s Aunt Margaret is having trouble with her website — weird trouble. Or some sort of strange red light has been seen in coastal waters worldwide. Or six amnesiacs wake up, blindfolded, in labyrinths around the world with tattoos on their arms that read, “Trovu la ringon perditan.”

As you investigate these mysteries, you will inevitably stumble upon links to related blogs, email updates, and some sort of forum where you can discuss the details of the mystery with other players/readers. A community of puzzle-solvers forms around the narrative, and more of the plot is revealed as the various players uncover more clues. Typically, the gameplay extends from the virtual world into the real world. So, for instance, the game Perplex City, which began with a magical cube being stolen from some other planet, ended with a real person in England finding the cube in a park in Northamptonshire.

The vast majority of ARGs are commissioned by some sort of corporation that wants to build hype for one of its products. The one that starts with Aunt Margaret having trouble with her website was actually a marketing scheme for the release of Halo 2 and was called “I Love Bees.” The strange red lights in the sea are a current tie-in with the story of the video game BioShock and are also about building hype for the upcoming BioShock 2. The six amnesiacs were part of a game called “The Lost Ring,” launched by McDonald’s and the IOC in anticipation of the 2008 Summer Olympics.

More and more television shows are creating ARGs to help expand the universe of the series’ fictional narratives. Dollhouse, The Sarah Connor Chronicles, Heroes, Lost, and a slew of others have all attempted to increase engagement via ARGs. And according to a recent article in the Economist, it’s working.

ARGs may be the best glimpse we can get into what Story 2.0 might end up looking like. They’re marketable, and they lend themselves well to cross-promotion and advertising; as a result, they have some real money behind them, and they get promoted. Most impressively, though, they combine almost everything that the new media environment has to offer: community, interaction, and a multi-media experience.

24 Mar

Video + Interaction

On whether video games are the new media of choice for delivering stories in our digital age.

Are all video games stories? No. Tetris.

But the vast majority of games at the very least have a back story. That is, there’s some story that precedes the interactive game the player partakes in. Even Space Invaders, as Jesper Juul points out, has a back story. He writes, “A prehistory is suggested in [Space] Invaders: An invasion presupposes a situation before the invasion. It is clear from the science fiction we know that these aliens are evil and should be chased away. So the title suggests a simple structure with a positive state broken by an external evil force.” Just to emphasize: a story is suggested by Space Invaders, but not actually stated.

Following Space Invaders, though, there were plenty of games that did state the story that preceded play. Classic arcade games like Double Dragon, Alien Syndrome, and Paperboy usually had some minimal introductory scenario that got the story rolling (click the titles to see the intros). The typical one had some evil mastermind kidnapping your girlfriend. Wasn’t that the premise of most of the Super Mario games? Princess Peach is kidnapped and we’re off.

In a lot of these “back story” games, though, the play doesn’t move the story forward much. The initial computer animated sequence provides a context for gameplay, but what follows is a series of challenges that have little to do with plot.

This situation continues nowadays with online shooters, which dispense with story altogether. Even the ever-popular Halo series, which has a definite narrative thread, throws story out the window for its online play, where a player is usually on a team of marines fighting against another team of marines. Such a scenario actually runs counter to the Halo story, where the player never fights against his own species.

Speaking of the Halo story, though, in it we can see a more sophisticated method of conveying the narrative. It doesn’t just begin with back story; it proceeds to fill in gaps between the various “levels,” or chapters, using “cut scenes.” The cut scenes exist to propel the story forward, and they alternate with actual gameplay.

But it’s rare (and a fairly recent phenomenon) that gameplay and narrative are actually delivered at the same time. RPGs and action-adventure games sometimes attempt to offer the player various narrative choices, but often those choices take place in interactive cut scenes rather than in the gameplay itself. One of my favorite examples of such an approach comes from the game Deus Ex: Invisible War. Are there any games that don’t pause the gameplay to allow the player to move the story forward through interaction? Prior to reading Jesper Juul, I would have said yes, definitely.

But now I’m not so sure.

He makes the claim that you “cannot have interactivity and narration at the same time,” an argument he bases on a pretty complex discussion of story time, narrative time, and reading time. But I think I have a simpler explanation.

If you’ve ever played a game with a narrative component to it, think about whether the narrative thread of the game would have been different without any of the player’s game challenges like fighting, puzzle solving, platform jumping, etc. In most cases, I think, it’s an either/or situation. Either you’re fighting baddies, collecting coins, doing whatever your character is supposed to be doing OR you’re watching the story progress through cut scenes and/or through choosing pre-packaged lines of dialogue for your character to deliver. Thus, with every video game I can think of, if you were to fast forward through the gameplay sections, the story would remain completely intact.

Modern games are getting more fluid with this alternating narration/gameplay format (a game like Fallout 3 being a good example of that — see the fifth video here for an example), but that’s all they’re doing — they’re alternating better. They’re not actually making the narrative as interactive as many game developers claim they are. Pretty hefty claims, if you ask me. “A growing number [of developers],” according to the “Brainy Gamer,” now believe that “the designer builds a system, but the player authors the story.”

Except they’re wrong. The player cannot co-author the story. As Jonathan Blow notes, “Story is a filtered presentation of events that already happened.” The player’s interaction with a game consists of alternating between navigating the pre-written (sometimes choose-your-own-adventure) story and the challenging situations that don’t really matter to the narrative. There’s no authorship on the player’s part.

Steve Gaynor articulates this sentiment well: “Video games are not a traditional storytelling medium per se. The player is an agent of chaos, making the medium ill-equipped to convey a pre-authored narrative with anywhere near the effectiveness of books or film. Rather, a video game is a box of possibilities, and the best stories told are those that arise from the player expressing his own agency within a functional, believable gameworld. These are player stories, not author stories, and hence they belong to the player himself.”

What I think Gaynor means is that there are stories delivered by the video game and stories that arise from the video game. Stories delivered by the game are in no way co-authored by the player, and stories that arise from the player’s experience — from the “box of possibilities” — are, I would argue, experiences, not stories. Sure, when you tell someone about what happened in a game, it’s now a story, but in the game itself, it’s an experience.

Gaynor continues: “Unlike a great film or piece of literature, (video games) don’t give the audience an admiration for the genius in someone else’s work; they instead supply the potential for genuine personal experience, acts attempted and accomplished by the player as an individual, unique memories that are the player’s to own and to pass on.”

The goal of many developers, then, is to provide a rich world that the player can navigate freely; these worlds are often referred to as “sandboxes.” But the purest sandboxes eradicate narrative.

Enter Second Life, the perfect example of a virtual experience that has no narrative element to it. Sure, some stories may arise out of it, but Second Life itself has no more plot than the state of Wisconsin does.

In a figurative sense, sure, we’re all authoring our own life story, but that’s just a metaphor. We’re experiencing our lives. My life isn’t a story until I craft it into a “filtered presentation of events that already happened.” Thus, narrative and experience are somewhat at odds. The listener/reader/viewer/player cannot tell the story she’s receiving. It’s an impossibility by definition. She can choose between various options if the storyteller gives her any, but to say she’s then co-authoring the story is hyperbole.

As for video games, they use story to create an enjoyable context for gameplay. And they are undoubtedly an effective delivery medium for some pretty good stories. But the player’s interactivity isn’t as revolutionary as some would make it out to be. They do raise some interesting questions, however: what is the storyteller’s goal for his audience and what is the ideal reception on the part of the receiver of the story? I’ll explore those topics next.

07 Mar

Established Storytelling Adapted

On what’s currently happening to established forms of storytelling in the digital age.

Part of the issue in figuring out where fiction will go from here is to determine whether current forms will be improved or whether new forms will be invented. The novel was not a huge departure from its predecessors. It was simply a longer story. So it’s really more of an enhancement of previous fiction than a complete revolution, like film.

What will happen from here on out, though? Will we maintain current forms of fictional stories and adapt them to the web, or will we come up with fundamentally different ways of delivering narrative? That’s the question.

In a talk he has given multiple times, Kevin Kelly of Wired Magazine explains that when internet content was starting to take off in the 90s, the powers that be thought it would be like “TV, only better.” But it surprised them because a) it wasn’t at all like TV, and b) it ended up producing content nobody could have predicted.

I have no illusions here: I don’t claim to know what will happen with the future of storytelling. But we can look at what has happened already in the relatively short period of time that digital media has thrived. And today, I’d like to examine how already-established forms of storytelling are faring in this new media environment.

I want to first mention the novel, even though I know I’ve already said quite a bit about it. The Kindle and various other ebook readers were developed for one primary purpose: to keep the novel alive. Or, if not the novel, then at least novel-length books. That’s one of the reasons I don’t see the novel dying any time soon. It’s still a priority for our society. We’re creating technology for it.

On the other hand, I don’t see the novel faring well online with our current web interfaces. For all the reasons I stated earlier, I think shorter chunks are the key to the current digital media environment. So what about serialized novels, you ask. Good question.

I don’t think they’re doing too well either. That’s not to say there aren’t lots of them. There are. They go by the name “webserials.” And you can find plenty of them at webfictionguide.com. But for now, they remain one of those relatively obscure niches on the web, mostly populated by aspiring authors.

For webserials to really be successful, they’re going to have to be featured on sites that attract readers. This has been done, too. Sites like Salon, boingboing, and Slate have published serialized fiction. But they have some problems. The Salon serial, according to one reader, just kind of faded into obscurity by the 35th installment (I can’t verify that). The boingboing one linked to a PDF file, so you were essentially just downloading one chapter per week of a book that had already been published, robbing the serial of its much-needed what-happens-next suspense. And the Slate one I can’t even get to load.

Ultimately, though, serials just haven’t ever gotten back the popularity they enjoyed in the Victorian era, despite some notable exceptions here and there. As one informative piece on serialization points out, though, serials never actually died; they just changed form.

Two such forms have done well on the web. One is the comic. Webcomics are cheap to put together and some enjoy as wide a readership as print comics. Xkcd is my personal favorite, though it, like Bizarro (my other favorite comic), doesn’t have an ongoing narrative (with a couple notable exceptions here and here: 1, 2, 3, 4, & 5). Others, like Penny Arcade and Weregeek are pretty popular, and there are some very clever ones that have attractive interfaces, making them fairly interactive. The Right Number and Nine Planets without Intelligent Life are my favorites.

Beyond webcomics, video series (or “webisodes”) have done all right, too. A quality webisode usually requires a big monetary investment, though, which is why some really good ones, like 72nd to Canal and The Remnants, have just fizzled out. But Dr. Horrible’s Sing-Along Blog, Chad Vader, Lonelygirl15, Quarterlife, Red vs. Blue, and several others have gotten a significant viewership.

With the development of personal media devices like the iPod, and with the addition of 3G internet access to sophisticated cell phones, I think the video format of storytelling is adapting well to new media. Videos don’t even need to be in serial form. Some of my recent finds include excellent stories like Evol, Ida’s Luck (Part 1 & Part 2), and Glory at Sea.

Ultimately, video on the web is both better than and quite inferior to television. There’s certainly enough quality out there to rival traditional TV. But finding it is a little more difficult. Fledgling programs like Miro, Joost, and Hulu have had some success, but Miro is the only one of the three that is pure internet TV, and I just haven’t found many channels worth subscribing to.

So even video, I would argue, hasn’t achieved its optimal form of propagation through the internet tubes. There remain issues of accessibility and consolidation. Clearly the internet won’t kill video (like video killed the radio star), nor will it kill comics or novels. But I’d say video is in transition. Might it be headed toward something with more interaction?

That’s what I’ll look at next time. Video + interaction = _______. Fill in the blank.

03 Mar

Early Attempts at New Media Stories

Part one of an investigation into the kinds of storytelling that are currently thriving in our digital age.

These days in America, nonfiction outsells fiction by a factor of three to one, I’ve heard. And any avid web surfer can tell you that nonfiction is way more popular on the net. Still, I don’t think fiction will go away. Imaginative narratives are hardwired into us. We live nonfiction; we dream in fiction. And we’ve been telling made-up stories to each other for millennia. It’s part of what it means to be human.

That said, there’s no denying that fiction has changed and evolved over the years. And it will continue to do so. Writing was invented some 5000 years ago, and since then, various new technologies have had huge impacts on the kinds of stories told. For instance, the novel exists primarily because of the printing press, though other factors like a growing middle class — with increased leisure time — come into play. But just think about plays, radio drama, movies, television series — all of these were the results of technological advancements.

There’s no doubt that the computer and the internet are also having their effects on the way stories are told. New media interfaces like web browsers, gaming consoles, mobile devices, embedded video, and, in at least one case, Google Maps are becoming the new stages on which our current tales are being acted out. But what do these current tales look like? And are they really any different from fictions we’ve seen in the past? Today, I’ll look at two of the earlier pioneers of the new fiction frontier.

The first computer-aided attempts at interactive storytelling predated the internet, but you can find plenty of those early experiments online. Interactive fiction began sometime in the 70s, I think. Maybe even earlier. The phrase “interactive fiction” actually refers to a pretty specific type of story, one in which the reader types commands to determine which elements of the narrative will be described next. So, for example, you might encounter a scene like this: “You walk into the room to find a table on which rests an unopened letter and a strange looking box.” You would then type “open letter” or “open box” to expose the next block of text, telling you about the contents of the letter or whatever. I find it a little frustrating since it’s not always completely intuitive and since it ends up delivering plot quite slowly. Take a look at my feeble attempt to navigate one of these things.
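The machinery underneath, for what it’s worth, is simple. Here is a toy sketch of the command loop in JavaScript, run with Node. It’s my own illustration, not code from any real interactive-fiction engine; the scene is the letter-and-box example above, and the responses are invented:

```javascript
// A toy interactive-fiction loop: read a typed command, match it against
// the handful of actions this scene understands, and print the result.
const readline = require("readline");

const scene = {
  description:
    "You walk into the room to find a table on which rests an unopened " +
    "letter and a strange looking box.",
  actions: {
    "open letter": "The letter reads: 'Meet me at midnight.'",
    "open box": "Inside the box is a tarnished brass key.",
  },
};

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

console.log(scene.description);

rl.on("line", (command) => {
  const result = scene.actions[command.trim().toLowerCase()];
  // Unrecognized commands are exactly the frustration described above:
  // the parser only understands phrasings its author anticipated.
  console.log(result ?? "I don't understand that.");
});
```

The slow plot delivery follows directly from the design: each block of text is locked behind a command the reader has to guess.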

Another early foray was hypertext fiction, which basically consisted of a simpler sort of reader involvement than interactive fiction. In hypertext fiction, you just click on links to get to the next section of the story. It’s basically a choose-your-own-adventure novel on the computer. And its heyday was in the 90s. Here’s me navigating the “Starry Pipe Book,” which is typical of simplistic hypertext fiction.

Nowadays, hypertext fiction has gotten a little more sophisticated. Such works are often built in Flash and include some nice graphics and sometimes audio, so they are truly multi-media. The story “Inanimate Alice” is an example. It provides a rich multi-media experience but is still essentially about just clicking to navigate pages.


We can ask the readers to do things other than just clicking on links, though. And many have tackled more experimental/artistic ways of incorporating reader interaction. Ideally, the interaction and the multi-media components have some thematic or narrative purpose. But at this relatively early stage of tweaking story form and structure, you’re bound to get some pretty artsy, postmodern stuff. That’s cool and all, but what you end up with is not really story. Instead, it’s a lot of poetry and experimental writing.

The poem “Cruising” is a good example of a piece of writing that uses its Flash interactivity effectively, even if it’s only a minor supplement to the already-good poem. As the blurb at the ELO Collection states, “Cruising is an excellent example of a Flash poem that, while primarily linear and cinematic, makes use of interactivity in a limited way that complements the subject of the poem, the coming-of-age ritual cruising, with hormones raging, in small town America.”


But then there’s other stuff, like “Soliloquy,” which is “an unedited document of every word [the author] spoke during the week of April 15-21, 1996, from the moment [he] woke up Monday morning to the moment [he] went to sleep on Sunday night.” It took him 8 weeks of working 8 hours a day to transcribe the whole thing. And the end result is something that’s an intriguing postmodern work of art, I suppose. But definitely not something I want to actually read.


And that’s the problem with much of this hypertext stuff. It doesn’t have much mainstream appeal. Not that mainstream appeal is a prerequisite for quality (see American Idol), but it is pretty important in determining what direction fiction will take in the next few decades or centuries. Interactive fiction and hypertext fiction are both intriguing forms of storytelling, but they’re not the next it.