The Dictionary of Human Geography
rural-urban continuum
A once popular hypothesized continuous gradation of ways of life between rural areas and large cities. It assumed a polarization of attitudes and behaviours between ideal rural situations on the one hand, characterized by communities built around kinship, attachment to place and co-operation (what Tönnies (1955) referred to as Gemeinschaft), and large cities on the other, characterized by societies dominated by impersonal, especially market, relationships (Gesellschaft). Although some empirical studies found patterns consistent with the hypothesis, it was largely demolished by studies such as Pahl's (1965) that identified village-like communities within cities and urban traits spreading into the countryside. rj
rustbelt
A descriptive term for the area of declining manufacturing industries in the north-east of the USA, now more widely applied to any area of industrial decline (cf. sunbelt/snowbelt). rj
sacred and profane space
Sacred spaces are sites imbued with a transcendent spiritual quality. They are characterized by the rituals people practise at the site, or direct towards it. Mircea Eliade, the scholar of comparative religion most closely associated with the concept of sacred space, proposed that it is with sacred experience that sacred space is marked out from profane space. Whereas profane experience maintains the homogeneity of space, the sacred disrupts that, creating non-homogeneous space (which is why sacred space figures so prominently in discussions of urban origins).

Eliade (1959, pp. 26-7) identified three ways in which sacred places are formed: when there is a hierophany (an 'act of manifestation of the sacred', such as a voice proclaiming the sacrality of a place); an unsolicited sign indicating the sacredness of a place (as when something that does not belong to this world manifests itself); or a provoked sign (e.g. using animals to help show what place or orientation to choose in setting up a village). Sacred places may also be made through the relics of holy beings.

Sacred places may occur in bio-physical or in built form. Animists, in particular, believe that some form of the divine exists in nature, though this is true too of some world religions. Thus, rivers, trees and mountains are religiously interpreted, and invested with symbolic meanings. For example, the bodhi tree is a sacred tree for Buddhists, while the Ganges River is venerated by Hindus. In built form, perhaps the most visible and obvious sacred spaces are the 'officially sacred' (Leiris, 1938) religious buildings, such as churches, temples and synagogues, though roadside shrines and home altars constitute other examples of sacred spaces too.
In advanced technological societies, techno-religious spaces such as radio and television broadcasts and internet-based communication also contribute to the making of sacred experiences, through live telecasts of prayers and religious gatherings, for example, thus creating new conceptions of sacred space.

Chidester and Linenthal (1995, p. 6) argue that 'nothing is inherently sacred'. Sacred space is contested space. It is 'not merely discovered, or founded, or constructed; it is claimed, owned, and operated by people advancing specific interests' (ibid., p. 15), involving 'hierarchical power relations of domination and subordination, inclusion and exclusion, appropriation and dispossession' (ibid., p. 17). In these power relations, four kinds of politics are apparent (van der Leeuw, 1986 [1933]): a politics of position, whereby every establishment of a sacred place is a conquest of space; a politics of property, whereby a sacred place is 'appropriated, possessed and owned'; a politics of social exclusion, whereby the sanctity of a sacred place is preserved by maintaining boundaries; and a politics of exile, which takes the form of a modern loss of, or nostalgia for, the sacred.

Sacred spaces need not always be sacred. The same space may be sacred at one time under one set of circumstances, but not sacred at other times and in other circumstances. For example, a house is ordinarily considered functional space, but in its design and the rituals practised within, it may become sacralized. lk

Suggested reading

Dunn (2005); Kong (2001a,b).
sampling
Sampling involves selection from a greater whole and can be contrasted with a full enumeration or a census. However, even when a census has been undertaken, the outcomes can still be regarded as a sample of what could have occurred (another stochastic realization, for example on a day other than census day), so the concept of sampling has even wider applicability. It is an essential part of extensive research designs and survey analysis, but is also important when qualitative data are collected and interpreted (King, Keohane and Verba, 1994). The most common reasons for sampling are the costs of measuring an entire population and the unethical intrusiveness of doing so.

Any sampling strategy is concerned with estimating an unknown parameter (such as the proportion in poverty, the mean income, or the total number in poverty) and its likely error, and involves trade-offs among:

Representation: the ability to generalize from the sample to the (carefully defined) wider population.

Coherence: the degree to which the measured entity conforms to the theoretical construct being studied. This is particularly important in qualitative research, when often necessarily small samples need to be theoretically relevant (Mitchell, 1983); consequently, for example, we need to define and sample the different mechanisms that place people in poverty (exclusion from the job market, divorce and separation, etc.) and make sure that each type is recognized and studied.

Bias: the degree to which the parameter is accurately estimated, without systematic error; for example, an internet-based sampling strategy may seriously underestimate the extent of poverty.

Precision: the degree to which the parameter is reliably measured and random, stochastic, innumerable small errors are controlled.

Efficiency and cost: efficiency is the relative precision of an unbiased sample compared to others of the same size.
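The interplay of bias and precision can be made concrete with a short simulation. This is a minimal sketch using an invented population in which exactly 25 per cent of households are in poverty; it shows precision (the spread of the estimate over repeated samples) improving with absolute sample size, while the unbiased design keeps the estimate centred on the true rate.

```python
import random
import statistics

random.seed(42)

# Hypothetical population (figures invented): 10,000 households,
# 25 per cent of them in poverty (coded 1).
population = [1] * 2500 + [0] * 7500
random.shuffle(population)

def estimate(sample_size, trials=500):
    """Centre and spread of the poverty-rate estimate over repeated
    simple random samples of a given size."""
    estimates = [statistics.mean(random.sample(population, sample_size))
                 for _ in range(trials)]
    return statistics.mean(estimates), statistics.stdev(estimates)

# Precision improves with absolute sample size (the spread shrinks roughly
# as 1/sqrt(n)), while the unbiased design keeps the centre near 0.25.
for n in (100, 400, 1600):
    centre, spread = estimate(n)
    print(f"n={n:5d}  centre={centre:.3f}  spread={spread:.4f}")
```

A biased frame (say, one that systematically misses poor households) would shift the centre away from 0.25 no matter how large the sample; no amount of precision repairs bias.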
Concern with precision can often be overridden by convenience, practicality and cost; moreover, it is generally much better to do a well-designed small-scale study than a botched large-scale one.

A highly convenient sample design (often used by commercial organizations) is the quota method, in which, for example, an interviewer approaches people on a shopping street until the desired quota of 25 each of young and old men and women is reached. This approach is simple but also open to bias, as certain types of people are not generally found in shopping streets; there can also be interviewer bias, as the interviewer may be resistant to approaching certain groups of individuals. Non-response is inevitably ignored in this approach. Snowball sampling is when, having found a key informant (e.g. a drug user), they are asked to recommend others. While useful with hard-to-reach populations, there is again the potential for bias, as isolated drug users may be missed and one circle of users may not intermix with others. A systematic sample is when respondents are chosen in an ordered way, such as every fourth house on a street. Such a design is highly convenient in the field and when the total population is not known, but can produce biased results when the entity being studied has a corresponding systematic patterning, so that all even-numbered houses on one side of a street are social housing, but all odd-numbered houses are owner-occupied.

All the designs discussed so far are non-probabilistic. In probabilistic designs the key feature is that neither the interviewer nor the interviewed can affect the selection mechanism, which is done at random. With such samples, the likelihood of being sampled is knowable and non-zero; consequently, we can use statistical theory (based on the central limit theorem) to guarantee unbiased, representative estimates and to estimate the degree of precision in those estimates.
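The systematic-sampling pitfall described above, where the sampling step lines up with a periodic pattern in the population, can be sketched directly. The street and income figures below are invented for illustration.

```python
# A hypothetical street of 200 houses: even-numbered houses are social
# housing (income 15), odd-numbered are owner-occupied (income 40).
houses = [15 if i % 2 == 0 else 40 for i in range(200)]

def systematic_sample(data, step, start=0):
    """Take every `step`-th unit, beginning at `start`."""
    return data[start::step]

true_mean = sum(houses) / len(houses)       # 27.5

# A step of 4 matches the 2-house periodicity: only even-numbered
# (social) houses are ever selected, badly biasing the estimate.
biased = systematic_sample(houses, step=4)
print(sum(biased) / len(biased))            # 15.0

# An odd step breaks the alignment and both tenures are represented.
ok = systematic_sample(houses, step=5)
print(sum(ok) / len(ok))                    # 27.5
```

The bias is a property of the interaction between the sampling interval and the pattern in the data, not of systematic sampling as such, which is why (as noted below) it can be treated as quasi-random when no such periodicity exists.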
Thus we can say that the proportion in poverty is 25 per cent, and that we can be 95 per cent confident, given our sample of 7,200 households, that the true underlying rate lies between 24 and 26 per cent. Non-probabilistic sampling is not necessarily biased and unrepresentative, but we lack the necessary formal framework for making any judgement.

There are three basic types of probabilistic sample:

Simple random sample (SRS) requires a complete listing of the population (the sampling frame) from which a sample is chosen at random, so that each and every unit has an equal chance of being selected. With such EPSEM sampling (equal probability of selection method), the standard error, which defines the precision of the sample estimate, is inversely proportional to the square root of the absolute sample size. Consequently, the larger the sample (in absolute terms, not as a percentage of the population), the greater the precision, but there are diminishing returns, as the sample size must be quadrupled to halve the standard error. This can be an expensive design, as in a national survey the interviewers will be required to travel the entire country. In practice, quasi-random samples are often used; for example, the British national birth cohorts of 1946, 1958 and 1970 were based on babies born in a particular week, while the ONS Longitudinal Study uses record linkage of individuals born on four days of a year, which equates to some 1 per cent of the national population. Indeed, providing there is no periodicity in the sampled variable, systematic sampling can be treated as quasi-random. With probabilistic sampling, an effective sample size of 10,000 respondents is needed in order to be 95 per cent confident of being within ±1 per cent of the true value, when that is 0.5. Typically, scientifically credible national opinion polls contain around 1,500 respondents.
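The confidence-interval and sample-size figures above follow from the normal approximation to the standard error of a proportion, sqrt(p(1 - p)/n). A minimal sketch:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95 per cent normal-approximation confidence interval for a
    sample proportion p_hat from a simple random sample of size n."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def n_for_margin(margin, p=0.5, z=1.96):
    """Approximate sample size for a given margin of error; p = 0.5 is
    the worst case, giving the largest required sample."""
    return round(z ** 2 * p * (1 - p) / margin ** 2)

low, high = proportion_ci(0.25, 7200)
print(f"{low:.2f} to {high:.2f}")  # 0.24 to 0.26, the text's 24-26 per cent

print(n_for_margin(0.01))  # 9604, i.e. the text's roughly 10,000 respondents
```

Quadrupling n halves the standard error because n appears under a square root, which is the diminishing-returns point made above.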
In student and other small projects, the absolute minimum is 100, and preferably 250 when sub-groups (male and female; young and old) are being analysed. The aim should be a focused questionnaire to a lot of people, rather than a long questionnaire to few, or a recourse to secondary data.

Stratified sampling groups the population into strata so as to maximize similarity within a stratum and maximize between-strata differences. This can considerably increase the sample's efficiency if stratification is based on a variable strongly related to the estimate. If income is strongly related to region, then regions could be used for stratifying and reducing the standard error of the mean income estimate. We can also disproportionately sample from particular strata when there are important groups of the population that are numerically small and so would yield only small numbers if SRS were used within strata, such as ethnic groups (see ethnicity), with the non-indigenous groups over-sampled to get more precise estimates. Such a strategy requires detailed knowledge of the sample frame in terms of an ethnic classification, and the analysis should be weighted to get correct estimates.

Multi-stage designs involve sampling in stages. For example, a sample of constituencies may be selected at random (the so-called primary sampling units), then wards within them, then households within wards and individuals within households. This design is often used for major scientific surveys, as it only requires a sampling frame at each stage; thus at stage one only a list of constituencies is required, while at stage two only ward names are required for those constituencies already selected. Another advantage is the cost reduction resulting from basing a team of interviewers in the higher-level units. A variant is the cluster design, when at some stage all the lower-level units are sampled (everybody in a ward is selected, for example).
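The disproportionate stratified design described above, over-sampling a small stratum and then weighting the analysis back to population shares, can be sketched as follows. The two strata and their income distributions are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical population (figures invented): stratum A holds 90 per cent
# of units with mean income 30; stratum B holds 10 per cent with mean 60.
strata = {
    "A": [random.gauss(30, 5) for _ in range(9000)],
    "B": [random.gauss(60, 5) for _ in range(1000)],
}

# Disproportionate allocation: over-sample the small stratum B by taking
# 200 units from each stratum regardless of its population share.
samples = {name: random.sample(units, 200) for name, units in strata.items()}

# Weighting restores a correct estimate: each stratum's sample mean is
# weighted by its population share, not its sample share.
total = sum(len(units) for units in strata.values())
weighted_mean = sum(
    (len(strata[name]) / total) * (sum(s) / len(s))
    for name, s in samples.items()
)
print(round(weighted_mean, 1))  # close to the true mean, 0.9*30 + 0.1*60 = 33
```

Pooling the 400 sampled units without weights would put stratum B at half the sample instead of a tenth of the population, pulling the estimate far above the true mean; the weights undo exactly that distortion.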
A problem with these designs is that there is a tendency for people living in the same place to be somewhat similar, so that the resultant sample is more alike than a random sample and standard statistical theory gives overly precise results. Clustered data lead to inefficiency, and it is not unknown for an SRS a third of the size to achieve the same standard error. It is clearly vital to measure this dependency (the intra-class correlation) and correct for it. The development of multi-level models allows this even when the sample is unbalanced, with a different number of units in each higher-level unit. Consequently, multi-stage designs are recommended for studying variation simultaneously at a number of different scales, with the population itself seen as having a hierarchical structure, which is itself of substantive interest (Jones, 1997). Indeed, highly clustered designs are needed if survey information is to be gathered on individuals as well as their peers. With such designs, it is necessary to specify the number of units at each level; Raudenbush and Xiaofeng (2000) provide the necessary background, which is put into practice by Stoker and Bowers (2002) in their geographically sensitive designs for surveying American voting behaviour.

These three designs can be used in combination; the UK Millennium Cohort Study, unlike previous birth cohorts, is spatially clustered specifically to study neighbourhood effects. Wards are disproportionately stratified to ensure adequate representation of all four UK countries, deprived areas and areas with high concentrations of particular ethnic groups, and then all babies aged 9 months in selected wards over a 12-month period are included. The resultant sample includes 19,000 infants, who are being followed longitudinally.
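The loss of precision from within-cluster similarity is commonly summarized by the Kish design effect, 1 + (m - 1) * rho, where m is the cluster size and rho the intra-class correlation. A minimal sketch, with figures invented to match the "SRS a third of the size" observation above:

```python
def design_effect(cluster_size, rho):
    """Kish design effect for a clustered sample: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * rho

def effective_sample_size(n, cluster_size, rho):
    """Size of the SRS that would match the precision of a clustered
    sample of n respondents."""
    return n / design_effect(cluster_size, rho)

# 3,000 respondents in clusters of 21, with a modest intra-class
# correlation of 0.1, carry only as much information as an SRS of 1,000:
n_eff = effective_sample_size(3000, cluster_size=21, rho=0.1)
print(round(n_eff))  # 1000, an SRS a third of the size
```

Even a small rho inflates variance substantially when clusters are large, which is why ignoring the dependency yields overly precise (too-narrow) standard errors.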
Other probabilistic designs may be used for different circumstances; they include capture-recapture methods to estimate population size with mobile populations, and response-based sampling (see extensive designs), when a numerically small but important outcome is over-sampled. In geographical studies, the standard procedures may be modified to ensure spatial coverage. Methods of random, systematic and stratified sampling of points on a map have been devised using coordinate systems, for example, as have methods of selecting transects (line samples) across an area (Berry and Baker, 1968). Increasingly, these designs are being used adaptively (Thompson and Seber, 2002), so that the degree of spatial autocorrelation is assessed as the survey proceeds and there is increased sampling in areas where the outcome variable is most varied and least spatially dependent.

When testing a hypothesis, it is crucial to assess and control for two types of error in a probabilistic design. Type I errors (finding an effect when there really is none) are controlled by setting demanding probability levels in confirmatory data analysis. Type II errors (having insufficient information to be able to detect a genuine effect) are managed by conducting a power analysis during the design phase. Conversely, collecting too much information is not only wasteful of resources but can be seen as an unethical intrusion. Statistical power is increased when there is little 'noise' in the system, when the effect is substantial, when probability levels are set leniently and when the sample size is large; it also depends on which statistical test is used (parametric procedures being more powerful). Consequently, researchers should choose settings/contexts that maximize the 'signal' and not, as in a study of the effect of size of house on price, sample areas where all the properties are three-bedroomed ones.
Power formulas and software (such as G*Power) are available, but require an estimate of the size of the effect. Cohen (1988) has defined small, medium and large effects as the ratio of an effect to variation for a very large range of statistical procedures (e.g. a t-test in a multiple regression model). Thus, to be able to detect a small difference of 0.25 of a standard deviation between two sample means with 80 per cent power and 5 per cent significance (both these percentages being the most commonly used conventions) requires 2 × 253 observations in the total sample, while to be able to detect a large difference of 1.0 standard deviation requires only 2 × 17 observations. Unfortunately, academic research has paid too little attention to statistical power, with, for example, Sedlmeier and Gigerenzer (1989) finding that even in a reputed journal statistical power hovered around 50 per cent. If all these studies were replicated, only half would result in an identifiable effect. The problem is actually even more widespread owing to the use of non-probabilistic samples. The way forward is to use simulation to judge effectiveness and efficiency, as pioneered by Snijders (1992) for snowball sampling. Indeed, simulation is a general strategy that permits great flexibility, not only allowing the assessment of power as sample size increases but also catering for missing data, non-linearity, unequal variances and other specifications of an underlying model. kj

Suggested reading

Barnett (2002); Dixon and Leach (1977); Kish; Lenth (2001); Sudman (1976). G*Power for power calculations is available from http://www.psycho.uniduesseldorf.de/aap/projects/gpower/index.html
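Under a normal approximation, the per-group sample size for comparing two means is roughly 2(z_alpha + z_beta)^2 / d^2, where d is the standardized effect size. The sketch below uses the standard quantiles for 5 per cent two-sided significance and 80 per cent power; note that this approximation gives slightly smaller figures (252 and 16 per group) than the exact t-based tables in Cohen (1988), which yield the 253 and 17 cited above.

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size per group for detecting a
    standardized difference `effect_size` between two means, at 5 per
    cent two-sided significance (z = 1.96) and 80 per cent power
    (z = 0.8416)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.25))  # 252 per group: small effects need large samples
print(n_per_group(1.0))   # 16 per group: large effects need far fewer
```

The quadratic dependence on 1/d is the practical point: halving the detectable effect size quadruples the required sample, so an honest prior estimate of the effect matters more than any other input to the calculation.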
