Author: Tom Chatfield
The industry is known as gold farming because ‘gold’ is the most common name for virtual currency in online games, and ‘farming’ is an especially apt metaphor for the set routines that it’s necessary to perform for hour after hour within an online game in order to earn said gold. Statistics are necessarily sketchy, but it’s undoubtedly a booming sector. At its global heart, in China, some estimates put the number of gold farmers at close to a million people and their annual trading at close to $10 billion; certainly, the global figure should be thought of in billions rather than millions.
Twenty years ago, this would have sounded like the most extreme kind of satire, had you attempted to describe it, and yet the economic logic behind gold farming is as sound as that behind any kind of outsourcing – or, indeed, behind operating a sweatshop in Second Life. The amount that consumers will pay for the product, less the effort and expense it takes to make it, equals profit; and it’s neither here nor there, in at least these fundamental terms, that the product in question has no existence beyond bits of data on a number of computers owned and operated by a video games company.
A typical ‘cottage gold farming’ outfit in China involves a team of workers living together in a dormitory and working in a rented room full of computers for an effective wage of 30 cents per hour. In this rented room, they will play a game – usually World of Warcraft, thanks to the size and affluence of its player market – for twelve hours a night, seven days a week. They will battle through precise areas and tasks again and again, pulling in virtual gold which will then be sold by their local employer to an online retailer for around $3 per hundred gold coins. Because the gold cannot exist independently of a game account, both parties to the sale must have their own game characters, and will use the in-game player-to-player mailing system to transfer the gold. These hundred coins will then be sold for up to $20 to a Western player, via one of the numerous websites specialising in such transactions – and the coins will duly arrive in the player’s in-game mailbox. Such transactions are, naturally, explicitly against the rules of the game as defined by its operators.
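The arithmetic behind this chain of margins can be made explicit. What follows is a back-of-the-envelope sketch, not data from any study: the wage and the two sale prices are the figures given above, while the hourly gold yield is an invented assumption for illustration.

```python
# Back-of-the-envelope sketch of the gold-farming margin chain.
# Wage and sale prices are the figures from the text; the hourly
# gold yield is an assumption made up for this illustration.

wage_per_hour = 0.30      # farmer's effective wage, USD per hour
farm_price = 3.00         # employer's sale price per 100 gold coins
retail_price = 20.00      # Western player's purchase price per 100 coins

gold_per_hour = 100       # ASSUMED yield: 100 coins per farmer-hour

employer_revenue = farm_price * gold_per_hour / 100
employer_margin = employer_revenue - wage_per_hour   # per farmer-hour
retailer_margin = retail_price - farm_price          # per 100 coins

print(f"Employer keeps ${employer_margin:.2f} per farmer-hour")
print(f"Retailer keeps ${retailer_margin:.2f} per 100 coins")
```

Under these (partly invented) numbers, the bulk of the mark-up sits with the Western-facing retailer rather than with the employer running the dormitory.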
As you might expect, endless variations exist on this theme, some far more profitable and sophisticated than others. You can pay for someone else to build you a character from scratch. You can buy characters at a particular level and of a particular class for your friends. You can even pay a team of twenty-five virtual mercenaries to take you along as a passenger through the toughest dungeon in the game, and get to pick up every single piece of treasure for yourself along the way. It sounds crazy, and doesn’t come cheap, but given that the world is already collectively spending over forty billion legal dollars on video games annually, this kind of secondary expenditure has a whiff of inevitability about it.
What does it all signify, though, beyond the fact that some people are only too happy to profit from others taking play very seriously? The New York-based author and technology journalist Julian Dibbell has spent more time than most plumbing the depths of virtual and real-world interactions. He has been writing about video gaming since the early 1990s, but it was in 2003 that he took his most radical step into testing where the boundaries between modern work and electronic play might lie, deciding that he would spend a year attempting to earn a living wage, in America, entirely by trading in virtual items found within the game Ultima Online. It was an endeavour he recorded in a book, Play Money (2006), which culminated in the conclusion that he could earn around $3,000 a month, working no more than fifty hours a week, entirely by trading in-game items.
I asked him what it was like to spend so much time in an ephemeral realm, working entirely with objects that had no real-life counterparts. It was, he explained, a bizarrely rooted existence in many ways, compared to the vagaries of the real world’s money markets. ‘The irony is that, compared to the financial derivatives that got puffed into a bubble and burst in 2008, these virtual economies and these virtual monies are much more solidly founded and robust. People say that this virtual trade is the ultimate culmination of high capitalist economics, where all that is solid melts into air and money is based on nothingness. Yet, if you look at what is actually going on in these games, there is something much more solid there than what’s happening in financial markets. This virtual gold has real value because people have real attachments to it: and not just to the games, but to the other people that are in them.’
When someone in America pays $100 to a website to buy some virtual gold that has taken a Chinese person 100 hours to earn, it’s certainly difficult to treat the leisure of the one and the labour of the other as two different orders of activity. The American will even go out and do, for pleasure (and while paying for the privilege), almost exactly the same thing as the Chinese person has just done for a subsistence wage. The only real difference is the element of choice. And yet, even here, the boundaries are far from clear-cut. As Dibbell discovered while investigating the world of Chinese gold farming on location (something that he detailed for the New York Times magazine in June 2007), even in the depths of a twelve-hour daily World of Warcraft grind, a Chinese worker could say that he felt a ‘playful attitude’ towards what he was doing. Or there was the ten-man team of ‘power levellers’ whom Dibbell observed choosing, with only one exception, to spend their few waking hours of free time in the very same game that they were working themselves to the bone playing for money: World of Warcraft.
One key aspect of this staggering motivational capability is what’s known as a ‘reward schedule’ – that is, a carefully tailored timetable governing the rate at which different kinds of rewards are given to players as they progress through a game. At the start, when a lot of basic learning is likely to be going on and a player doesn’t yet have much invested in the game, the rewards will come close together: more powers, graphical effects, new equipment, a higher character level, new areas to explore, and so on. Gradually, then, these rewards will become further and further apart, with a tantalising random element included to keep players guessing (and hopeful) and plenty of distractions and mini-objectives to keep them committed. If it’s sufficiently well designed and, most importantly, thoroughly tested and refined, a player will suddenly discover that it now takes two days rather than two hours to raise their character by one more level – but that they’re quite willing to invest the time needed to progress further.
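The shape of such a schedule can be sketched in a few lines of code. This is a purely illustrative model, not the schedule of any actual game: the curve parameters, XP values and drop chance are all invented for the example, but they capture the two ingredients described above – rewards spaced further and further apart, plus a tantalising random element.

```python
import random

# Illustrative reward schedule (invented numbers, not any real game):
# an exponential XP curve spaces level-ups further and further apart,
# while a small random 'drop' bonus keeps rewards unpredictable.

def xp_required(level, base=100, growth=1.4):
    """XP needed to advance from `level` to `level + 1`."""
    return int(base * growth ** (level - 1))

def reward_for_task(rng, base_xp=25, drop_chance=0.1):
    """XP for one routine task, with an occasional random bonus."""
    bonus = 100 if rng.random() < drop_chance else 0
    return base_xp + bonus

rng = random.Random(0)          # fixed seed so the run is reproducible
level, xp = 1, 0
tasks_per_level = []            # how many tasks each level-up took
tasks = 0
while level < 10:
    xp += reward_for_task(rng)
    tasks += 1
    if xp >= xp_required(level):
        xp -= xp_required(level)
        level += 1
        tasks_per_level.append(tasks)
        tasks = 0

# Early levels take a handful of tasks; later ones take many more.
print(tasks_per_level)
```

Running this shows exactly the pattern the designers rely on: the first few levels arrive within a handful of tasks, while the later ones demand an order of magnitude more repetition, with the random bonuses blurring exactly when the next reward will land.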
There is a certain level of paradox here: that, in a well-made game, the more fun someone is having, the harder they work. It’s almost as if a video game is not only something that delivers fun by satisfying the innate human love of learning, but also a device that trains people to work far harder than they otherwise would by turning work into a series of tangibly rewarded learning challenges.
This motivational power is something Nick Yee has explored in his work, citing in particular the example of a behavioural economics study in which people were given the option of doing ‘a really boring task’ for either half an hour or an hour. They weren’t to be paid for their time as such; instead, half an hour meant earning 30 points, while an hour meant 100 points. These points could then be spent on ice cream, according to two rules: 100 points awarded pistachio flavour and 30 points awarded vanilla. Most of the undergraduates taking part decided to work for an hour in order to get 100 points and thus the pistachio ice cream. Yet, when the experimenters went back and asked what kind of ice cream they preferred, most people turned out to prefer vanilla. It sounds like a trivial enough observation, but it illustrates just how powerful a motivator even a completely arbitrary scoring system can be. It’s a little like magic, after all, to persuade a room full of people to work for twice as long as they need to in order to earn a reward that most of them don’t like as much as the one they would have got had they worked for half that amount of time.
This is the kind of magic at which video games excel beyond anything else. It is a medium in which, quite literally, one kind of value is conjured out of thin air while – almost unnoticed – the certainty of most other kinds is whisked away. It is a kind of magic that has an unusually close relationship with another ultimately arbitrary scoring system that exists only because of human consensus: money itself. What is money, after all, if not a shared fiction maintained to allow the exchange, purchase and valuation of real goods and services independently of their actual nature? As the recent financial crisis has demonstrated, money is a fiction that can all too easily take on many of the characteristics of farce. Yet many of the greatest changes to the way we think about value, effort and exchange in the future are likely to come not from the increasingly abstract activities of lenders and spenders dealing in vast sums, but from a far more basic set of principles relating to what ordinary people actually attach value to and decide to spend their time pursuing. As online games have already begun to demonstrate, millions of people are barely able to put an upper limit on the worth of certain kinds of fun; and it’s this economics of pleasure and leisure rather than of labour that has perhaps the greatest lessons to teach us about the coming century.
CHAPTER 9
Serious play
In 2007, the Serious Games Institute was founded at Coventry University in England. The name, which sounds like a contradiction in terms, signalled the Institute’s commitment to studying video games’ potential uses across a spectrum of ‘serious’ activities: games as learning and training tools, as educational aids, as a means of social and political engagement, and so on. Under the aegis of its founding director David Wortley, however, the Institute has also developed a strong emphasis on understanding some of the larger principles underpinning modern gaming – and on exploring what lessons games might hold for the realms of business and public service, not to mention their broader philosophical and sociological implications.
It’s a field that lies on the periphery of the modern games industry; nevertheless, a gathering body of research into the kind of lessons that might be drawn from video gaming globally is starting to hold out the prospect of genuinely transformative discoveries, both in enterprise and in the more nebulous region of the social and psychological sciences.
Wortley’s own background is far from traditional for the games industry. For a start, there’s his age: at nearly sixty, he has several decades on most of even his more venerable peers. Moreover, although he has always worked with technology, his long career is rooted in the world of business rather than entertainment: first as an electronic engineer, then as a manager for the British Post Office, an IBM marketing executive, an information technology entrepreneur and, now, an academic and researcher.
There are, Wortley believes, two common attributes emerging in the interconnected spheres of social networks, games and virtual worlds that are absolutely crucial to the future of twenty-first-century business. First, there is the notion of personalisation: games and online social networks have between them transformed the expectations of an entire generation in their dealings with technology. Wortley explains, ‘For the first time ever, people interacting with a computer are able to personalise their own space. They genuinely feel they have something individual to them which they can shape.’ Second, there is the related concept of persistence: the fact that people now expect a virtual environment to have many of the properties of a real, miniature world, with its own continuing existence independent of its users logging on or off. ‘With a persistent environment, when you go back in, it remembers where you were before: what you did, the assets and marks you created, your achievements. There is a kind of mirror image of the real world that you can create for yourself.’
Between them, these points represent something that the traditional computing and communications industries – for all the billions of dollars spent every year on research and development – often do badly or not at all. Computers and operating systems within business, for example, are almost entirely static as work environments. Every time you turn a computer off or exit an individual program, while you may save individual files and settings, there is no sense that you are moving into and out of a truly customisable environment that goes on existing in your absence, and that can be fundamentally modified in its appearance and behaviour to suit your preferences and needs. There is no sense that you have a permanent, individual space on your computer where work can be shared and ideas discussed; nor, when you are away, can other people see exactly what you have been working on. Compared to a MySpace or Facebook page, let alone a character or location in a virtual world, most computers are about as dynamic as the fabric of the building they’re sitting in.
In addition to these failings, most non-entertainment programs on computers – from word processing to databases or email clients – remain dauntingly hard for the uninitiated to master. This is because they rely on a set of conventions that seem simple enough when memorised, but that have little intuitive logic to them; for example, the fact that the ‘File’ menu in most Microsoft programs contains the ‘Exit’ command is almost impossible to reason out except by trial and error or the laborious consultation of help documents. Indeed, the whole system of drop-down menus driven by key words like ‘Edit’ and ‘Tools’ is hardly a model of self-explanatory usability, despite years of efforts to improve it.
In an age in which digital literacy is essential within most workplaces, it may sound trivial to worry about such basics; but anyone who has either had to explain how, say, an internet browser works to a non-technical relative, or who has suddenly found themselves required to start using an entirely new suite of software applications, will have a keen understanding of just how impenetrable are the webs of conventions surrounding many computing activities. Even explaining the notion of a ‘double click’ with a mouse – and helping someone who has never done it before to double click on an icon – reveals how surprisingly tricky some of the most basic acts associated with using a computer still are.