May 30, 1999

To: Faculty, graduate students, Geography
From: Dick Morrill
Subject: "qualitative and quantitative"

Recent conversations, theses, examinations, etc., indicate a degree of confusion about methodology that carries a risk of unhealthy and unnecessary divisiveness in our program. I'm referring to a growing false dichotomization of qualitative and quantitative that is misleading and destructive. I'm not surprised, because the issue is complicated.

Our purpose is to gain knowledge of the human condition by studying the behavior of people and their institutions in space and place, and to use whatever data and methods help in this work. Two basic things we've learned over the years are that data vary in their "measurability" rather continuously, and that methods for gathering information and for drawing insights from the data also vary continuously, not dichotomously, from more 'qualitative' to more 'quantitative'.

Probably the minority of data for human geography are of the sort where the categories are unambiguous (e.g., inside or outside the city limits), and even less common are uncontested interval data (e.g., traffic recorded by a lane counter). More typically we have imprecise interval data (e.g., on income or house value for individuals) or, worse, only crude averages for groups or areas (e.g., median rent per census tract); and probably the majority of data we geographers use are nominal: counts of persons who prefer x to y, or of areas of land in uses p, q, or r, where the assignments were likely arbitrary and inconsistent. So MOST of our data are crude, but this has NOTHING to do with 'quantitative or qualitative'! I can't even imagine any human geographic data that could meet any test of 'laboratory control'.

So how do we get information? We can get secondary, archival data from populations or large samples: e.g., measurements of acreages of various land covers from satellite imagery, or aggregate characteristics (distributions, means) of the populations of places from the census, or even a sample of individuals in households (the Public Use Micro Sample) from the census, or from hundreds of large surveys conducted over the years. All of these data are subject to both known and unknown errors and uncertainties. These are data with large numbers of cases, suitable for statistical procedures designed for handling large numbers. Or we can get secondary data from historical photographs, diaries, and newspapers, typically with rather few cases, requiring more personal interpretation, and thus more qualitative analysis techniques for coding the data and finding patterns, than, say, the PUMS data; but the difference is a matter of degree, not of kind.

Or we can collect primary data in a variety of ways: formal structured questionnaire surveys, informal unstructured interviews and surveys, focus groups, participant observation, unobtrusive observation, etc. Such data could involve thousands of cases (common in health research), or only dozens, or only a few (like my planning PhD student who is looking at 3 stadia as redevelopment tools). The number of cases does not control whether we look at the data as more qualitative or quantitative; rather, the kinds of questions asked and the form the information comes in do. So the same interview can yield some information well suited to quantitative analysis (e.g., sales of shops in the years following the stadium opening) and other information that is not (e.g., the venues and ways in which public-private partnership "deals" were struck), often more particularistic data which give a richer understanding of the individual place or situation.

It's useful to distinguish ethnographic (not necessarily qualitative!) approaches, where we get a lot of information about a few cases, that is, a rich, dense knowledge of them, from the alternative of examining a limited amount of data about a lot of cases, where the sheer number gives us confidence that, say, on average in American cities men commute farther than women. This is useful to know, but tells us nothing about particular men or women. Where the number of cases is small, we can claim unusual confidence in the meaningfulness and validity of our results, and this is perfectly good and defensible science, but we don't try to claim the results as highly generalizable; that was not the purpose. But if we look at the research of our faculty and students (certainly some of mine) we will often find an intermediate number of cases, say 25 to 100, right in the middle between 'generalizability' and 'particularity'. Should we despair and panic? No, we can take advantage of such data to permit some generalizability, while also mining the richness of greater contextual and individual knowledge.

Statistical techniques, however, add yet another dimension to the story. It is simply NOT the case that because we have few cases (say 25), or because of the nature of our data (simple categories, information on attitudes, beliefs, values, preferences, feelings!, etc.), there are no appropriate and useful "quantitative" statistical approaches. That is what psychology has been doing for the last 50 years and more! Convincing inferences from multiple regression may be 'safer' with larger numbers and at least some interval data, but a whole class of multivariate methods, notably factor analysis, discriminant analysis, multi-dimensional scaling and cluster analysis, were in fact DESIGNED and created precisely for data based on simple categorical, or at best crude "more or less than", answers, and aim to derive patterns or dimensions of consistency and meaning from attitudinal data; e.g., in psychology, to assess the degree of 'altruism' in people, or in geography, to scale people's local versus regional versus national orientation. And there are similarly specific qualitative tools to assist in coding and analysis of initially unstructured narratives and conversations.
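To make the point concrete, here is a minimal sketch, not from the memo, of clustering ordinal survey responses with nothing more sophisticated than Manhattan distance. The respondents, items, and Likert scores are all invented for illustration; real work would use a proper k-medoids or hierarchical procedure.

```python
# Illustrative sketch: "crude" ordinal attitude data still supports
# quantitative analysis. All names and numbers below are hypothetical.

def manhattan(a, b):
    """Distance between two ordinal response profiles."""
    return sum(abs(x - y) for x, y in zip(a, b))

def assign_to_medoids(profiles, medoids):
    """Assign each respondent to the nearest medoid (one crude
    assignment step of a k-medoids clustering)."""
    clusters = {m: [] for m in medoids}
    for name, prof in profiles.items():
        nearest = min(medoids, key=lambda m: manhattan(prof, profiles[m]))
        clusters[nearest].append(name)
    return clusters

# Hypothetical Likert responses (1 = strongly local orientation,
# 5 = strongly national orientation) on three attitude items.
profiles = {
    "r1": (1, 2, 1),
    "r2": (2, 1, 2),
    "r3": (5, 4, 5),
    "r4": (4, 5, 4),
    "r5": (1, 1, 2),
}

clusters = assign_to_medoids(profiles, ["r1", "r3"])
print(clusters)  # two groups: locally vs. nationally oriented respondents
```

Even this toy version separates the locally oriented respondents from the nationally oriented ones using only ordinal categories, no interval measurement required.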

So the concept of qualitative as used today seems to have more to do with issues of validation, interpretation, representation and communication, of what we mean by knowledge, than with sample size or how the data were collected; that is, the credibility we grant to claims of knowledge. A personal example: I recently did an analysis of the geography of campaign contributions to candidates for Seattle city council. I mapped these (see the bulletin board in the hall) and did a little regression, relating contributions to income, distance from downtown, and race; a typical 'quantitative' approach. But the folks I worked with down in Rainier Valley looked at the data and maps, and added to the discourse personal knowledge of the fact that they had not been contacted by candidates or approached for contributions, as direct evidence that 'they did not count'. I couldn't plug that information in, but it was also valid, and helped to mobilize their involvement. The point is that methodologies are not in conflict but can be mutually reinforcing and enriching.
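For readers unfamiliar with what a "little regression" of this sort involves, here is a minimal pure-Python sketch with one predictor. The distances and contribution figures are invented for illustration; they are not the Seattle data, which also included income and race as predictors.

```python
# Hypothetical sketch of regressing contributions on distance from
# downtown. All numbers are invented, not the actual Seattle data.

def ols(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Distance from downtown (miles) vs. mean contribution per household ($),
# both hypothetical.
dist = [1, 2, 3, 5, 8]
contrib = [40, 35, 30, 20, 8]

slope, intercept = ols(dist, contrib)
# A negative slope would summarize "contributions fall with distance
# from downtown", the kind of pattern the maps made visible.
print(round(slope, 2), round(intercept, 2))
```

The regression summarizes a spatial pattern across many tracts; the Rainier Valley residents' testimony is exactly the kind of evidence this model cannot absorb, which is the memo's point about the two approaches complementing each other.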

In light of the above, consider this table that "makes itself" when we succumb to a dichotomous view of research as qualitative OR quantitative.


Qualitative | Quantitative
------------|-------------
Thick description | Numerically oriented
Emergent design; no a priori hypotheses | Prefigured design to test hypotheses
Takes place in natural setting | Takes place in controlled setting (lab)
Handles ambiguity | Proceeds methodically
Researcher leaves mark on processes of collection and analysis | Researcher considered objective
Good for answering how and why questions | Good for answering what and how much questions
Data collection methods: interviews, document analysis, direct observation, participant observation, focus groups, open-ended surveys, video-taping, tape-recording | Data collection methods: array of measurement techniques, e.g., surveys, remote sensing
Nominal or ordinal scales of measurement | Ordinal, interval and ratio scales

NONE, I repeat NONE, of the supposed "differences" are accurate representations of what we really do!! They are imposed and divisive constructs. All of the phrases in the left column could apply to the right, and those in the right column are either possible in the left or are not relevant to social science. I'll save the reasons for our later discussion, which I welcome. I will not even dignify absurdities like associating qualitative with the feminine and quantitative with the masculine by including them in the table. [This is not to deny that women scholars have played a major role in developing and encouraging qualitative methods.]

PS There is good science and bad science, but not "positivist" or "post-positivist", or "normal" and "post-normal", science. Science is the pursuit of knowledge that can be mutually comprehended, and is always subject to revision and refinement.

(Posted with permission by the author)
