Understanding Research
Marianne Franklin
WEB-ANALYSIS: SITES, MAPS, AND HYPERTEXTS

To recapitulate, with the web comes an array of new places to go, relationships to follow, and sorts of digital (computer-mediated) data to collect: from user-log statistics, to hyperlink hubs, to web maps based on density, distribution, or traffic flows between websites. Some of these data are provided by freely available, commercial search engines and service providers, or as part of a website's display. This is a first stop for basic facts; for example, how old a website is and how often it is updated, how many registered users it has, and how many are active at any one time.

Digitally constructed, web-dependent domains have also generated a burgeoning literature on 'digital methods', 'virtual ethnography', or 'hypermedia research', in which their newness predominates. Innovative approaches to gathering and analysing the interactions and subjects populating cyberspace are proliferating, along with new topics and recharged older debates (for example, about privacy and surveillance in social networking sites, editorial control in user-based domains, and peer-to-peer practices). Wikipedia, Second Life, Facebook, and Twitter are, at the time of writing, the research domains and topics of choice for many research students, whether in their own right or for their role in major events of the day.

Whilst online, computer-mediated research terrains are trans-border and ever-expanding, characterized by exponential increases in quantity, in time (compressed, synchronous as well as asynchronous), and in sheer volume, online research also engages tried and true data-gathering methods. Interviews online are conducted more or less following longstanding principles, as is focus-group work, whatever form the participants take (see Chapter 6). Surveys are also increasingly carried out online, if not within communities or user-groups then by using survey tools (as outlined above). Various sorts of 'content analysis' in quantitative terms, as well as visual and 'textual' analysis from interpretative traditions, are also perfectly viable for approaching website content and design, as a whole or in part; for example, changes in layout and editing for web-formats, or themes and debates in discussion forums or user-groups. Chapters 6 and 7 will cover these points for online settings where relevant.

In that respect, a lot of research carried out entirely or partially by means of the internet overlaps with established research traditions. However, the distinct features that web-based practices and production bring with them also affect research methodologies in turn, and several differences follow.

  1. First, text online, whether written, audio, or visual material, is constantly in motion: website content is continually updated, archives are not always accessible or kept up to date, participants come and go or adopt various personae for their online identities, and access and interaction occur at various levels of mediation.
  2. There are differences in the relationship between researchers and research objects/subjects based on non-proximate and non-visible, textual exchanges where computers mediate and frame the encounter. These dynamics affect the ethical rights and obligations for both researcher and subject (see Chapter 3).
  3. There are practicalities peculiar to studying a website, cluster of websites, or web portal as the primary object of research, for these are multimedia and multilevel sorts of domains: visuals, layers, functionalities, populations, and content. This is also the case when an inquiry wants to map or trace particular sorts of hyperlinked relationships (between participants, or websites), or to focus on images and digital texts from web-sources in their own right as cultural or aesthetic artifacts; or when an online community (however defined) of gamers, an ethnic minority, or a larger online group (e.g. a Facebook group) is the main focus.

All in all, web-spaces are multi-user, multimedia, and multilevel domains for researchers. They offer expansive and overlapping sorts of data to collect, create new ways to navigate, sort, and then assess the material, and present a seemingly accessible range of potential interviewees, in ways that are as exciting as they are difficult to contain. For the time being, and making sure to consult more detailed and focused explorations in the literature, note the following analytical distinctions:

  • Those between the sorts of research you may be doing, where on the web you intend to carry out the work, and how these relate to, indeed may well be, the main object of inquiry.
  • Depending on the question and objectives, the internet may be more than a cornucopia of free or accessible information and academic resources. It may well be a key factor in the phenomenon you are studying if not the research field itself.
  • In addition, and this truism bears noting: the internet and the web are not only huge but also moving targets. Any research project in this domain needs to be specific about which part is the main focus; for example, a particular micro-blogging service, or a social networking site for a particular setting (Chinese ones are different from American or European ones).
Web-mapping

This is a developing area in which software tools are coming into their own, designed for consumers (Google Maps and spin-offs, GIS devices) and for researchers (specific tools with licensing arrangements). Like all sorts of mapping, this kind of data-gathering and analysis has its own methodological weighting: it implies a geographical, topological approach whereby area, spatial relationships, and other sorts of distribution are primary. Some researchers have achieved interesting results by manual means; for example, following leads and hyperlinks and creating a representation of these pathways from a clear starting point (for example, from a home portal and back). Others are more interested in investigating a particular, or comprehensive, area based on traffic flows, concentrations, or access points.
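To make the manual approach concrete, the following sketch shows one way to record such pathways programmatically. It is a minimal illustration only, assuming Python with the requests and beautifulsoup4 packages installed; the starting URL is a placeholder rather than a recommendation.

    # A minimal sketch of tracing pathways from a chosen starting point:
    # fetch one page, collect its outgoing hyperlinks, and record each
    # start -> destination pair for later representation or mapping.
    # The starting URL is a placeholder for your own research site.
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://example.org/"  # hypothetical home portal

    def outgoing_links(url):
        """Return the absolute URLs linked from a single page."""
        response = requests.get(url, timeout=10)
        soup = BeautifulSoup(response.text, "html.parser")
        return {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}

    for destination in sorted(outgoing_links(START_URL)):
        print(START_URL, "->", destination)

Repeating this step for each destination page, within whatever boundary the research question sets, yields the raw pathway data that a web map then represents.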

The point here is that this research makes use of our 'digital traces' (see Holmes 2007, Latour 2007) as well as creating online traces of its own (see Rogers 2000); the digital footprint is both the means and the object of the research. In this sense the hyperlink (how texts are linked from one to another) is a key unit of analysis, the way the researcher gets around in digital terms, and an indicator of certain hierarchies (for example, where one website is a central hub, i.e. linked to and from many others).
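If collected links are treated as a directed graph, 'hub-ness' in the sense just described can be gauged by counting how many sites link to a node and how many it links out to. The sketch below assumes the networkx package is installed and uses an invented edge list purely for illustration.

    # A minimal sketch of hub identification in a hyperlink graph, assuming
    # links have already been collected as (source, target) pairs.
    # The example edges are invented for illustration only.
    import networkx as nx

    edges = [
        ("siteA", "hub"), ("siteB", "hub"), ("siteC", "hub"),
        ("hub", "siteA"), ("hub", "siteD"), ("siteD", "siteB"),
    ]
    graph = nx.DiGraph(edges)

    # In-degree: how many sites link to this one; out-degree: how many it links to.
    for node in graph.nodes:
        print(node, "in:", graph.in_degree(node), "out:", graph.out_degree(node))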

Hence, like all cartographic instruments and exploratory projects, with their underlying operating principles, digital hyperlinking or web-mapping research tools and their programmed rules and procedures are not neutral mining or representational devices. Their conception and deployment embody assumptions about what is being looked for.

That said, these tools are generating new ways of looking at cyberspace and of understanding how the internet and web function behind the user interface and content production that preoccupy a large part of online research projects to date. They are also playing a role in the internet's future architecture. Nonetheless, hyperlinking and web-mapping can be put to use for inquiries where web-embedded relationships or flows, rather than traditional content production or producer identities, are the main focus.

Figure 5.5 Map of the internet (Source: xkcd: http://xkcd.com/)

Hypertexts – hyperlink research

The links themselves can also be an object of research. Here the practical issue is how best to isolate and then connect up these relationships, which only make sense as digital relationships in computer-mediated formats; their dynamism, and their ability to define online power hierarchies, are lost in translation onto the written page.

Most researchers rely more than they realize on commercial search engines to do the linking, present the results, and then archive them. As Google dominates at present for navigating the open web, it is difficult for most student researchers even to envisage another way to get around the web; moreover, specialized search engines cost money. Here too, note that no search engine, however effective, is neutral. Search engines do not work by random selection but are designed according to hierarchies of citation, and they generate income through embedded forms of ad-tracking.

If your web-based project is relying on a freely available search engine then bear this in mind in terms of the claims you make, where you store the raw data, and your eventual findings as well (see Lazuly 2003, O'Neill 2009, Rogers 2000). As 'Twitter feeds', i.e. the sorts of traffic and content generated by micro-blogging practices and their web-access points, become a common feature, methods to track, collect, and then analyse them are also being developed, some based on quantitative indicators and others based on signs, symbols, and relationships.
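As one hedged illustration of the quantitative end of this work, the sketch below assumes that a set of micro-blog posts has already been collected into a JSON-lines file, one post per line with a 'text' field; the filename and field name are assumptions for illustration, not part of any platform's own interface. It simply counts the hashtags and @-mentions that appear.

    # A minimal sketch of quantitative indicators for collected micro-blog posts.
    # Assumes posts were already gathered into a JSON-lines file where each line
    # is an object with a "text" field; the filename and format are hypothetical.
    import json
    import re
    from collections import Counter

    hashtags = Counter()
    mentions = Counter()

    with open("collected_posts.jsonl", encoding="utf-8") as handle:
        for line in handle:
            text = json.loads(line)["text"]
            hashtags.update(tag.lower() for tag in re.findall(r"#\w+", text))
            mentions.update(name.lower() for name in re.findall(r"@\w+", text))

    print("Most common hashtags:", hashtags.most_common(10))
    print("Most common mentions:", mentions.most_common(10))

Counts like these say nothing by themselves about signs, symbols, or relationships; they are a starting point to be read alongside the interpretative approaches discussed above.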

Website analysis

This is more than simply creating a screenshot and making some general comments. If websites and wider portals are the central focus then you will need to consider how exactly you will set up criteria to study them, why, and to what ends in terms of your research question. Do you need to study:

  • How a website is designed; layout, colour, images, multimedia operations?
  • How this website is part of a larger conglomeration, and whether it is a 'hub' or more peripheral? Here web-mapping tools may help.
  • What people ('users') are doing when visiting this site? Can you gain enough information from the user statistics provided, or do you need to talk to the moderators? If they are willing, can you access the log-files (records of traffic and 'hits'); and if you do access these, how will you use and then present the information (see the sketch after this list)?
  • The frequency of visits in general ('hits'), or who is visiting and why? Can you do this from the website itself, from your access point? Or do you need to contact these visitors?
  • The content itself, either as keyword frequencies or positions using content-analysis coding and interpretation (quantitative and/or qualitative) or in terms of meaning-making?
  • What people are saying thematically, as an online space for deliberation or debate or in comparison to offline relationships?
  • The more technical features? For example, the layers of software design governing functionality or how users can freely contribute.
  • The role a website or portal plays in a larger setting? For example, for an international or local NGO? As a fund-raising platform or social mobilization tool?

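Where moderators do share log-files, even a simple count of 'hits' over time can anchor how you later present the data. The sketch below is a minimal illustration, assuming a log in the common NCSA format and a placeholder filename.

    # A minimal sketch of counting 'hits' per day from a shared access log.
    # Assumes the log follows the common (NCSA) format, with timestamps such
    # as [10/Oct/2010:13:55:36 +0000]; the filename is a placeholder.
    import re
    from collections import Counter

    hits_per_day = Counter()
    date_pattern = re.compile(r"\[(\d{2}/\w{3}/\d{4})")  # captures the date part

    with open("access.log", encoding="utf-8") as handle:
        for line in handle:
            match = date_pattern.search(line)
            if match:
                hits_per_day[match.group(1)] += 1

    for day, hits in hits_per_day.most_common():
        print(day, hits)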
In other words, web-analysis (a very general term) or website analysis requires a methodical approach that can focus on the interface (what we see onscreen), the traffic coming and going, the material that gets produced (uploaded and accessed) on its own terms, the community using and running the site, or behind-the-screen functions. You could combine any of the above, but as always it depends on the question. You could also simply treat a website as you would newspaper or television content, accessing and coding it accordingly. That said, websites are always in flux, archives are not always consistent or accessible, and user statistics are an amalgam of hits, individuals, and double-ups.
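For the simplest quantitative option, counting keyword frequencies in a page treated as text, a minimal sketch might look like the following; the saved HTML file and the keyword list are hypothetical, and beautifulsoup4 is assumed to be installed for stripping the markup.

    # A minimal sketch of keyword-frequency counting for a saved web page.
    # The filename and keyword list are hypothetical examples; in practice the
    # keywords come from your own coding scheme.
    from collections import Counter

    from bs4 import BeautifulSoup

    KEYWORDS = {"privacy", "surveillance", "community"}  # example coding categories

    with open("saved_page.html", encoding="utf-8") as handle:
        soup = BeautifulSoup(handle.read(), "html.parser")

    words = soup.get_text().lower().split()
    frequencies = Counter(word.strip(".,;:!?\"'()") for word in words)

    for keyword in sorted(KEYWORDS):
        print(keyword, frequencies[keyword])

Frequencies like these only become content analysis once they are tied back to coding categories and interpretation, as noted above.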

Online communities

This is where anthropologists and those using ethnographic approaches have been active from the early days of the web. It is also where traditional ethnographic issues and newer, more digitally inflected issues intersect (see Chapter 6). If your work is based on observation – and so participation, by virtue of being online – of a community, a group, or a discussion forum (often part of a larger community), then your project has particular ethical considerations that are both longstanding and embedded in the 'multi-sited' nature of cyberspatial fieldwork: see Chapter 3 and specialized discussions to help you in the initial planning. As you observe and participate in any online group or larger community by logging in, accessing, or browsing around the site, you too are leaving digital footprints. These in turn can become a 'partial object' in some other research project.

Computer games and virtual worlds, which have existed as long as the internet itself, are an important counterpart to online communities. They comprise all the above features in terms of where to go, how to navigate, and then how to present the material in a traditional dissertation. Not only is there a rich research literature focusing on gaming and virtual worlds in their own right, using emerging and longstanding methodologies, but there is also a fast-growing area in IT law and policy analysis that looks at governance mechanisms such as the terms of use and the end user license agreement. These can be researched in their own right, as can the ways they are being rewritten and contested within user-groups and gaming communities (see e.g. Gaiser and Schreiner 2009, O'Neill 2009).

As these are new areas, student research is often where some of the most exciting and innovative methodological advances and conceptual debates are emerging right now. These technologies, and their everyday, political, and economic impact, are those of today's 'Google generation' (see Franklin and Wilkinson 2011, UCL 2008), and that generation is now the upcoming generation of researchers.
