Our primary goal in the library is to fulfill the information needs of our users, our patrons.
In order to do this we need to be well-informed about our users. This presentation will deal with the methods we use to study our users and their information needs.
To make effective use of this narrative, please print it out. Then read along as you run the PowerPoint show available at http://www2.hawaii.edu/~donnab/user_studies_f10.ppt. When you see a [click] in the narrative, press your right arrow key to make something appear or move on the PowerPoint slide.
In addition to providing you with the information meant to be conveyed in the PowerPoint, the printout of the narrative will assist you in completing the Recommendation for a User Study assignment.
We can divide our users into four categories:
First there are potential users—these are determined by the mandate of our institution. For example, Hamilton Library serves persons throughout the University of Hawai'i system. However, Hamilton is also a government documents repository, so anyone in the state is a potential user of the government documents collection in our library. This is mandated by law.
With the advent of the Internet our potential users have expanded from those who could physically visit our premises to the world of remote users. The person accessing information stored on computers at Hamilton may be sitting at a terminal in Nairobi.
Next there are the expected users—people you would expect to use the materials and services of your library. Here at Hamilton we would expect the students, faculty, and staff to utilize the library.
Third are the actual users—the people who physically walk through the door or access our resources online.
Last there are the beneficiary users—users who actually benefit from our services, who have their information needs fulfilled.
We need to study the user needs of all four categories of users. If there are barriers to access, we need to understand what those barriers are and look for ways in which they can be removed or overcome.
For example, here at Hamilton there may be students (expected users) with physical disabilities who are not able to utilize our services because of barriers such as stairways or narrow aisles.
If a community consists largely of non-English speakers and the commands needed to find information in our online public access catalog, as well as the cataloging records for the materials, are only in English, that's a significant barrier to accessing our non-English resources.
By studying the barriers to access, our hope is to increase the number of patrons who fall in the last category—that of beneficiary users.
This presentation will briefly look at:
In studying user needs we seek to understand what information our users need so that we may try to make that information available.
Some needs are fairly stable over time. [click]
Sometimes our users' needs change rapidly. [click]
This is a picture of Hurricane Katrina.[click]
After the hurricane libraries in the area of destruction offered forms and information about how to get assistance.[click]
Less dramatically, your community may receive an influx of Hmong refugees. This would require you to re-think your collection, adding resources for your new neighbors.
For example, take the subject of fences. [click]
A lawyer defending someone in a lawsuit [click] and a backyard gardener [click] may both be seeking information about fences. But they're going to be using that information differently and thus have different requirements as to the type and format of information needed.
The lawyer may need information about legal requirements as to height and type of fences that surround a hazardous site.
The gardener may be looking for the type of fence that will keep the bunnies out of his lettuce patch.
We need to understand how a user goes about fulfilling his or her information needs—we need to aid and augment that process.
Thus we need to delineate user activities associated with information seeking.
We also need to understand the reasoning process of the user so that we can design our systems to mesh well with those processes.
We need to understand the ways in which information must be presented in order to be intelligible and thus useful to the patron. Does the patron need printouts of data? Downloadable files? Audio tapes of books and articles? At what intellectual level? [click]
A student in the dorm working on her paper for LIS 670, especially if it is late at night, would probably prefer articles that are available electronically and can be downloaded from the library. [click]
But what about that elderly auntie who knits you those lumpy sweaters every Christmas? [click]
And what about our patrons who are blind or who have low vision? They may require audio books that they can check out and listen to in the privacy of their homes.
We need to know what constitutes a trustworthy source of information for the user, then combine that with our own judgments of reliability.
The comparative trustworthiness of the Wall Street Journal versus the National Enquirer may seem like an easy question. But what about peer-reviewed versus non-peer-reviewed journals? The former might not have greater trustworthiness, given the recent controversy concerning the peer-review process. And how do we compare one peer-reviewed journal with another?
The Amnesty International Web page may be a trustworthy source to some patrons while others—because of their political views, affiliations, or citizenship—may not trust or even be allowed to quote such a source in their writings.
The Cornell University Law School Web site provides a more trustworthy source of legal information than a high school student giving advice from his Web site. However, the information on the Cornell site may be unintelligible to a patron with no background in law and thus viewed by the patron with distrust.
An interesting side note: Studies show people tend to trust information coming from people like themselves, people they can identify with. Madison Avenue discovered this long ago. When they want to sell a big, powerful, gas-guzzling truck they do not put someone in a white lab coat on the screen to tell you about the truck's features. Instead they put an actor in a hard hat and jeans, smear some fake mud on the jeans, then have the actor tell the viewer how much he likes his truck.
We need to understand the elements of a user interface that expedite or hinder information retrieval. [click]
In Human-Computer Interaction studies we look at such questions as the relative efficacy of a command-line versus a menu interface [click], or a text-only versus a graphical user interface. We examine the importance of context-sensitive help and, as we discussed earlier, the usefulness of chunking information on the screen. We also compare the utility of various navigation aids in finding information.
This is an example of the reason we need to do user research to determine the utility of our information retrieval systems. Shown is a list of search-limiting options offered to users in a public library. It is highly unlikely that this list would be intelligible to the average, or even the experienced, public library user.
A user study in which subjects were presented with such options and queried about their understanding of the terms would allow the librarians at this institution to realistically evaluate how the options are delineated, and whether such options, regardless of how they're presented, facilitate retrieval of relevant information for patrons or staff.
This is an interesting experiment. Many if not most library workers who have served at a reference desk have had users who can't remember the exact title of a work but remember its color. While some librarians privately roll their eyes at this, we've mentioned before that in human development visual interpretation of the world preceded language development. Thus it is natural that a person would more readily remember the physical characteristics of a work than the exact wording of the title.
The staff of this library decided to augment the searching capabilities of their online catalog by allowing the user to utilize color as a search tool. These librarians looked at the way users approach the information seeking process and incorporated the user's approach to add functionality to the information retrieval system.
So how do we go about studying users and their needs? There are three basic types of user studies:
Let's look at each of these in turn.
User-oriented studies seek to use demographics to predict information use.
These studies look at factors such as age, educational level, economic status, and language in relation to the types of collections or services users utilize. The hope is that this will then be useful in collection development or in selecting services to offer in a given library.
For example, if your community includes many Spanish speakers, you'll want to offer a variety of resources in Spanish, as well as staff members who can speak enough Spanish to assist your Spanish-speaking patrons. You would also want to offer signage in Spanish as well as English and have a bilingual Web site.
You have to be careful, however, in making assumptions about a demographic group.
Let's say your library serves a retirement community. You might think you will need to stock up on Frank Sinatra albums when you actually may be dealing with a bunch of grey-haired fans of the Rolling Stones.
Here's an example of how a library uses demographic information to better serve its patrons.
The Los Angeles Public Library has a special section of its online catalog devoted to kids. [click] [click]
Note the use of colorful icons with information categories like homework help specifically aimed at the information needs of children. [click]
Note also that there is also a Spanish-language link for the kids.
That same library offers a Web interface for adult Spanish-speaking users. Click on the Español link on the main page and you retrieve a page containing information access instructions in Spanish: how to find a library near you, an introduction to the library, rules for visitors, the various services available to users, and Spanish-language information resources.
Systems-use studies, as the name implies, focus on:
Some of the elements we look at in systems-use studies are:
There are ethical considerations to systems-use studies. In general, looking at aggregate information about systems-use is considered to be acceptable. We also may collect some data about individual searches as long as there is no way to identify the individual conducting those searches.
However, if there is a way to identify the individual doing the searching, we must get permission from the individual being studied. Here at the University of Hawai'i there are strict guidelines for studying human subjects. Any such study must get approval from University authorities before being undertaken.
In several of your classes you'll probably read about the FBI's Library Awareness Program, begun in the late 1980s. Librarians were supposed to report users who checked out books like "How to Build a Bomb in the Basement," as well as "suspicious-looking foreigners." ALA fought this, considering it a violation of the individual's right to access information freely, without "fear of government intrusion, intimidation, or reprisal."

Because of the threat of possible intrusion by government into the constitutionally protected information-seeking activities of our patrons, we collect only the information about our patrons necessary to provide our services, and we keep that information only as long as needed. Thus we don't keep track of what our users request or borrow after the items have been returned and the condition of the returned materials verified. Interlibrary loan requests are kept only until materials have been returned to our institution or to the lending library. Then they are destroyed.
User research must also respect patron privacy.
In utility-oriented studies we seek to understand how useful our tools and information products are to our patrons. There are a number of methodologies used to do this.
One is that of critical incidence studies—These are studies in which subjects report on their information needs and the resolution of those needs at each decision-making point in the process of searching for information.
Citation analysis—analyzing citations to particular journals, authors, or articles. We'll learn more about this when we get to the section on bibliometrics. We can look at a publication like the Science Citation Index to determine the relative frequency with which articles by a particular author, or published in a particular journal, are cited in later articles. This gives us some idea of the relative importance of the author or the journal.
However, there are a number of caveats here. First, authors often cite themselves repeatedly in their articles, thus giving themselves inflated citation counts. Second, just because someone cites an article doesn't mean the information in the cited article was valid. For example, you may remember the cold fusion debate. Two authors claimed to have produced energy through nuclear fusion at room temperature. Many articles later cited the article in question, but only to report that the later researchers had not been able to replicate the original authors' results. Thus the citations to the earlier work were not indications of the value of that work. Third, we need to note the depth with which a particular citation publication analyzed the works in question. Were only primary research articles examined? Short communications? Book reviews? Letters to the editor? Citation counts can vary significantly according to the types of materials included in the citation analysis.
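The self-citation caveat above can be sketched with toy data: counting citations with and without self-citations gives noticeably different totals. The author names and records below are invented purely for illustration.

```python
from collections import Counter

# Toy citation records as (citing_author, cited_author) pairs;
# all names and counts are invented for illustration.
citations = [
    ("Lee", "Garfield"),
    ("Garfield", "Garfield"),   # self-citation
    ("Park", "Garfield"),
    ("Garfield", "Garfield"),   # self-citation
    ("Lee", "Park"),
]

# Raw counts include self-citations and so inflate the totals.
raw = Counter(cited for _, cited in citations)

# Excluding self-citations gives a less inflated picture.
external = Counter(cited for citing, cited in citations
                   if citing != cited)

print(raw["Garfield"], external["Garfield"])   # 4 2
```

Half of the toy author's citations vanish once self-citations are excluded, which is exactly why serious citation analyses report both figures.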
There are several major phases to the basic processes in user research: First there is the design of a research plan. Next is the implementation of that plan, followed by implementation of the necessary changes suggested by the results of the research, and lastly, the very important step of evaluating the results of the research and the actions undertaken in response to it.
The LIS course on research methodologies deals with this entire process in depth. What follows is just a brief overview of step one: the design of the research plan.
A fair amount of LIS research is done not in a laboratory setting but in a library setting. Much of this is applied rather than theoretical research. It is done to ameliorate a problem or improve services in a particular institution. Publication of that research helps other institutions deal with the same situations.
In designing a research plan, you first need to identify a problem area or need to study. [click]
Identifying a problem is actually not as easy as it sounds. Let's look at an example.
Let's look at a rather mundane but common problem.
Let's say you have a problem with crowds in the lobby area. At times there's a long line in front of the reference desk with people jostling each other. [click]
So you assign someone to stand there with a little clicker and count people every 15 minutes for several days. You identify the busiest times. [click]
Then you take some of your librarians away from their other tasks and assign them to extra duty at the reference desk during busy periods. Sounds ok, right?
Except maybe your problem isn't insufficient personnel at the reference desk. Maybe your problem is inadequate signage with the result that most of the people going to the reference desk for help are just looking for directions. In such a case, adding more reference librarians is an expensive way to deal with the problem. At every decision point in your library there should be directional signs. [click] Adding signage would help to alleviate crowding while allowing you to better utilize the time and expertise of your staff.
Another factor contributing to the crowding might be that you have a traffic flow problem. The arrangement of the desk and the lobby furniture may funnel everyone through a small area in front of the reference desk. Putting more people on the reference desk would not solve that problem, either. Re-arranging the furniture might.
Once you've determined the problem you wish to study it's time to conduct an initial literature review.
Librarians are very good at publishing articles that say "This was our problem; this was how we attempted to solve it; these were the results."
Your initial search may help you to better identify what your problem is and how to approach studying it.
Once you have a better idea of what the problem actually is, you need to have a specific research question to study. Let's say you continue to have congestion at the reference desk even after adding signage and alleviating your traffic-flow problem. You might wish to study ways to facilitate the reference interview process. Are there time-consuming activities at the reference desk that slow down the interview? For example, if the librarian has to walk 30 feet every time she pulls out a pathfinder this may significantly increase the time it takes her to help her patrons.
In such a case your research question might be: "What is the optimal arrangement of information resources in the reference area?"
You need to take an honest look at the potential benefits of your study versus the costs in money and personnel time. If the study will require a substantial investment in time and funds, and the results are unlikely to be implemented due to financial or policy considerations, or may be implemented only at considerable cost with little benefit, you may decide that your funds would be better put to other purposes.
If you decide that your study would indeed be worth the cost, at this point you will probably want to conduct a second literature review. This will be specific to the research question you've decided to address.
If you are following classical research methodology, at this point you formulate one or more hypotheses to be tested.
A hypothesis is a statement about the relations between variables. This statement should carry clear implications for testing the stated relations. A hypothesis is generally stated as a declarative sentence.
For example, a hypothesis concerning traffic flow in the lobby might be:
"Changing the configuration of the furniture to segregate incoming and outgoing traffic will result in less crowding in the lobby."
This statement relates the variable of furniture configuration to the variable of traffic congestion. It suggests that you will test the relationship by observing the present traffic congestion, then varying the furniture configuration and observing the resultant changes in traffic congestion.
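One common way to test a hypothesis like this is a two-proportion z-test on the fraction of observation intervals classified as "crowded" before and after the furniture change. The function below is a standard textbook calculation, and the counts are hypothetical, not data from any actual study.

```python
import math

def two_proportion_z(crowded_a, total_a, crowded_b, total_b):
    """Two-proportion z-test: compare the fraction of observation
    intervals rated 'crowded' before (a) and after (b) the change.
    Returns the z statistic under the pooled-proportion formula."""
    p1, p2 = crowded_a / total_a, crowded_b / total_b
    pooled = (crowded_a + crowded_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p1 - p2) / se

# Hypothetical counts: 40 of 60 intervals crowded before the
# rearrangement, 22 of 60 crowded afterward.
z = two_proportion_z(40, 60, 22, 60)
print(round(z, 2))  # |z| > 1.96 suggests a real reduction at the 5% level
```

With these made-up numbers the statistic comfortably exceeds 1.96, so you would reject the hypothesis that the crowding rate was unchanged.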
If you're going to query people, you'll need to identify the population you will study. Are you going to send a mail survey to everyone in a particular zip code? Or are you going to target persons of a particular demographic group? For example, you might be interested in how well your library is meeting the information needs of the senior citizens in the community, or the Spanish speakers, or the high-school students.
The type of statistical tests you decide to use will determine the minimum sample size you'll need to have statistically significant results.
At this point in your study you need to decide on your data collection methods. Will you be using a phone survey, mail survey, in-house questionnaire? Or will you capture system use data like databases accessed or keystrokes entered? Or test your hypotheses in a laboratory situation?
Next you'll need to develop your data collection instruments. Here's one of the times when your literature reviews are going to pay off.
You generally don't start from scratch in designing a data collection instrument. If you're going to be using a survey, you look at other surveys or questionnaires that have been reported in the literature, then modify these for your use. In addition to scholarly articles, there are also Web sites with sample survey questions for use in information science.
Good survey questions are actually fairly difficult to compose. A question needs to be worded in such a way that it will be clear and unambiguous to your research subject. If your question is aimed at a particular age group you need to ensure that the vocabulary used will be intelligible to members of that group.
Using current slang or references to the sartorial preferences of Britney Spears might not be decipherable to senior citizens. And polysyllabic words frequently heard in the halls of academia might be unintelligible to fourth graders.
It is also difficult to eliminate ambiguity in questions. Thus it's important to try out your questions first on your co-workers, then on a small group of persons representing the demographic group or groups you wish to study.
To give an example of why this is important, in LIS we periodically survey our students to see how we're doing.
On one survey I was the person compiling the data. We had asked a series of questions in which current students were asked how well they understood particular areas of library and information science. We had used a typical Likert scale of 1-5: strongly disagree to strongly agree with a given statement.
The problem was that we hadn't included a response for students who had not taken a class in a particular area as yet. And we had not given instructions for how to handle that situation. Should the student leave the question blank? Or select the most negative response?
Because of this our data for that section of the questionnaire was nearly meaningless. If we had done a pilot study, asking a small group of current students to answer the questions, the problem would probably have been identified before the questionnaire was sent out to the entire LIS student body.
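The coding problem can be illustrated with a few invented responses: treating "haven't taken the class yet" as the most negative rating distorts the average, while excluding those responses and reporting their count separately does not. The numbers below are made up for illustration.

```python
# Hypothetical Likert responses on a 1-5 scale; None marks students
# who had not yet taken a course in the area.
raw = [5, 4, None, 2, None, 5, 3]

# Wrong: coding "not applicable" as the most negative response (1)
# drags the average down and misrepresents actual opinions.
coded_as_negative = [r if r is not None else 1 for r in raw]
mean_wrong = sum(coded_as_negative) / len(coded_as_negative)

# Better: exclude N/A from the average and report it separately.
valid = [r for r in raw if r is not None]
mean_valid = sum(valid) / len(valid)
n_na = raw.count(None)

print(mean_wrong, mean_valid, n_na)   # 3.0 3.8 2
```

The two means differ by nearly a full scale point on the same data, which is why the pilot study would have flagged the missing response option.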
Of course, if you are working for a public library in a multiethnic community you'll need to have versions of your questions in multiple languages if your results are to reflect the diverse views and information needs of your entire community.
For an online tutorial on survey research, you can go to the Association for Information Systems web site.
You'll need to design a data-analysis plan before you start your research. The statistical tests or comparisons you plan to use will dictate the way you collect and encode data.
For example, if you plan to do a numerical analysis of incomes, don't use a write-in format. We once surveyed employers, asking what the starting pay for librarians at their institution was. Unfortunately, we used a write-in text box. We got a variety of answer formats, from hourly to monthly wages. One person wrote in [click] "Same as DOE." [click]
If you plan to numerically analyze such data, it's better to give your users categories to choose from. Broader categories will result in less precise information but possibly a greater willingness among your respondents to disclose the information. [click]
(Checkmark denotes wishful thinking here.)
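If you do collect incomes as numbers, mapping them into broad categories at analysis time is straightforward. The cutoffs below are arbitrary illustrations, not recommended survey categories.

```python
def income_bracket(annual_salary):
    """Map an annual salary to a coarse reporting category.
    The cutoffs here are arbitrary, purely for illustration."""
    brackets = [(30000, "under $30k"),
                (50000, "$30k to $50k"),
                (75000, "$50k to $75k")]
    for upper, label in brackets:
        if annual_salary < upper:
            return label
    return "over $75k"

print(income_bracket(42000))   # $30k to $50k
```

The reverse, of course, is impossible: once respondents have chosen a bracket you cannot recover the exact figure, which is the precision trade-off mentioned above.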
Once you've decided on your data-analysis plan, you'll need to re-visit your data collection instruments.
You also need to formulate a data-collection plan. If you're doing a statistical study of users, random sampling of a large segment of the target populace gives the best statistical data. But how do you select the persons who will receive the survey or be asked to fill out the in-house questionnaire? If you're going to query persons entering your library, are you going to ask everyone who walks through the door? Flip a coin and request participation if the coin comes up heads? Or use a computer-generated random-number list?
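A computer-generated random selection like the one just mentioned can be sketched in a few lines; patron numbers here stand in for "the next N people through the door," and the seed is fixed only so the example is reproducible.

```python
import random

def select_participants(patron_count, sample_size, seed=42):
    """Randomly choose which of the next `patron_count` patrons
    to ask to participate, sampling without replacement."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, patron_count + 1), sample_size))

# Ask 10 of the next 200 patrons entering the library:
chosen = select_participants(200, 10)
print(chosen)
```

Unlike flipping a coin for every patron, sampling without replacement guarantees exactly the sample size you planned for.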
As mentioned previously, a pilot study is essential. In the long run it can save you a lot of time and money by identifying the problems with your research instruments, methodologies, and analysis procedures before you make a significant investment.
There are three general categories of data collection methods: questioning, observing, and studying information records or documents, both published and unpublished.
Questionnaires, in particular mail questionnaires, are useful in reaching a widely dispersed geographic group of subjects. If you want to survey residents of Hawai`i from South Point to Hanalei a mail questionnaire is probably the only practical means of doing so. However, the return rate for such surveys tends to be quite low. And the persons represented by the returns are a highly self-selected group: persons who are highly motivated to give you their opinions, are able to read the language or languages of the questionnaire, and have time to fill out your survey. You're probably not going to get responses from the busy soccer moms or corporate executives or people who have significant literacy problems.
Interviews can give high-quality data because the interviewer can ask follow-up questions to clarify or get further information. However, interviews are costly and time-consuming and thus can reach only a limited sample size. And there is a problem with inconsistency among interviewers, and even with a particular interviewer over time.
In the diary method the subject is asked to make entries about his or her activities or thoughts over a period of time. For example, a subject might be given a particular information seeking task and asked to write down what she is thinking at every juncture in the search process.
This gives rich informational content, but the users' answers tend to get shorter over time as the user tires of the process. The technique also relies on subjects to accurately and honestly report on their thoughts and activities. Thus the reliability (we'll discuss that term later) of the responses is rather low, and the validity (the applicability of the research results to a larger population) tends to be low as well.
Group interviews are less expensive than one-on-one interviews and often result in a good response rate. One of the methods used in group questioning is the Delphi technique, which we'll discuss in the next slide.
The Delphi technique is used in situations where you'd like to get a consensus of opinion about some aspect of the future in order to do contingency planning. The technique involves gathering together persons with expertise in a particular field. In the first round, each expert is asked the same set of questions concerning the possibility of future economic, social, political, or technological events or trends. For example, a respondent might be given the current population of Hawai`i and asked to estimate the population five years in the future. The responses of the various experts are then collated and summarized, and these summaries are distributed to everyone in the group.
After they have seen the responses from the first round, each member of the group is then asked to submit their opinions again, this time possibly revised in light of the opinions of their colleagues.
Again the answers are collated, summarized, and distributed to everyone in the group.
Through a series of iterations a consensus often emerges.
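That convergence can be caricatured numerically: if each expert revises partway toward the group median after every round, the spread of estimates shrinks. The half-step revision rule and the forecasts below are illustrative assumptions, not part of the formal technique.

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """One caricatured Delphi iteration: each expert moves partway
    toward the group median after seeing the summarized responses."""
    med = statistics.median(estimates)
    return [e + weight * (med - e) for e in estimates]

# Hypothetical five-year population forecasts (millions):
estimates = [1.30, 1.45, 1.55, 1.80]
for _ in range(3):
    estimates = delphi_round(estimates)

spread = max(estimates) - min(estimates)
print(round(spread, 4))
```

The initial spread of 0.5 shrinks by half each round in this toy model; real panels converge far less mechanically, and, as the next slide notes, sometimes for the wrong reasons.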
The advantage of this process is that each expert is given additional input to their opinion-making process during each iteration. Ideally, as everyone provides successive input, revising their opinions in light of the new data or new interpretations of that data, a coherent picture emerges. Proponents of this methodology feel that it is particularly useful in high-risk, turbulent situations in which traditional strategic planning tends to be ineffective.
One criticism of this method is that it produces "forced consensus." Elisabeth Noelle-Neumann has written about the phenomenon she refers to as the "spiral of silence." People are reluctant to voice opinions that contradict the general consensus. As persons who hold minority views are unwilling to voice them, the impression that everyone holds the majority view becomes entrenched, and minority views are increasingly less likely to be articulated.
Observation can be less intrusive than questioning if you're gathering data without the knowledge of your subjects, although, as mentioned previously, there are ethical considerations and we need to be careful about protecting the privacy of our patrons.
Observation is regarded as reliable in that you don't have to worry about the problem of an interviewer asking questions differently each time.
The technique is also considered objective because you're observing actual behavior and recording it, rather than relying on subjective responses by participants to your questioning. This is what your users are actually doing, not what they say they do or intend to do in the future.
Depending on the technique, observation can be expensive if you must pay a team of observers to engage in the observation process.
And your data doesn't reflect user motivation; you don't know what the person is thinking while they're acting.
In addition, if your subjects are aware of being watched there is the phenomenon of the Hawthorne effect.
The Hawthorne effect refers to a series of studies done from 1927 to 1932 at the Western Electric Company's Hawthorne Works plant. This was the era of Frederick Taylor's scientific management. The turn of the century had seen a remarkable set of discoveries in the physical sciences, and the heads of corporate management were eager to utilize scientific methodology to increase the productivity of their workers. Taylor's approach to management reflected a highly mechanistic view of the workers. Management's job was to optimize the tools and dictate the movements of the workers, who were reduced to minute, specialized, fairly mindless tasks and were considered interchangeable, like parts in a machine.
The original purpose of the Hawthorne Works study was to discover the relationship between production and the level of lighting at the plant in order to maximize worker productivity. One group of workers, functioning as the control group, received the same level of lighting throughout the study. The other group had their lighting increased and decreased and their productivity under the different conditions monitored.
The researchers found that when the workers' lighting level was raised [click] [click], productivity rose. [click]
However, when the lighting level was lowered [click] [click], productivity also rose. [click]
In addition, the productivity of the control group rose during the study, even though their lighting level had remained constant. This puzzled the researchers.
Eventually, they determined that increased attention paid to the workers by the researchers caused an upsurge in morale [click] which in turn resulted in the observed increases in productivity.
In terms of our context, we need to be aware that our observation activities will have an effect on the thoughts, emotions, and actions of our research subjects—that the actions we observe during our studies may not accurately reflect the emotions, thoughts, and information searching activities undertaken by subjects when they are not under observation.
By the way, the Hawthorne study led to a new movement in management relations theory termed the humanistic management approach.
In addition to questioning people or observing them, we can look to published and unpublished reports for our studies.
When we examine documentary evidence in our research we look to three different types of materials.
First, there are publications, both those produced by organizations and those found in scholarly journals.
Next, there are statistical reports: those produced by a particular system and those produced at the wider level of our institutions. For example, the University of Hawai'i produces many quarterly and annual reports giving information about student numbers, demographics, student-faculty ratios, student semester hours per department, and so on. These can give us valuable data about our expected users.
As mentioned earlier, we also look to citation indexes as these offer information about which journals or authors are being widely read.
One of the premier journals in information science is the Journal of the American Society for Information Science and Technology. In terms of the dichotomy between basic and applied science, this is a journal that publishes articles dealing with basic research. Standard research protocols are expected to be followed, statistical tests applied, and results expressed in a predominantly scientific format. Bibliometrics is frequently a focus of information science research. Bibliometrics and the newer field of cybermetrics, as you'll see later, look at patterns in language and discipline development in publications, citation matrices, author productivity, and now linkage structures on the World Wide Web. One of the major figures in bibliometric research is Eugene Garfield, the originator of the Science Citation Index.
Journals that focus on applied research are equally of interest to us, especially when we're doing problem-oriented studies in our institutions. Applied research focuses on topics such as the relative utility of various databases in terms of search modalities offered, breadth and depth of coverage, formatting options for output, and pricing. Dr. Jacsó of the UH LIS faculty is one of the preeminent researchers in the field of bibliographic databases. He publishes regular columns as well as individual articles in a variety of library journals and annual review publications, such as the prestigious Annual Review of Information Science and Technology.
One of the resources we turn to in search of citation data is Journal Citation Reports. This gives us a picture—albeit a flawed one, as you'll learn in Dr. Jacsó's courses—of the rates at which articles in a variety of journals are being cited. Later in this course in the bibliometrics presentation you'll learn about the various mathematical laws that have been formulated concerning publication and citation rates.
One of the changes brought about by the advent of the World Wide Web is the ease with which we can now search citation databases such as the Science Citation Index, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. Whereas with the print version we would have to go volume by volume, year by year to amass data for our research, we can now search through multiple years with the click of a button.
As we'll see later in the bibliometrics session, having electronic resources allows us to more readily study the patterns that arise in the publications of various disciplines. This in turn helps us in our collection-development decisions. By the way, the ISI databases are currently available to you free of charge while you are a student at the University of Hawai'i. They can be accessed through Voyager.
In addition to data available from commercial vendors, we can also utilize the statistical reporting functions of our own automated library systems in our research.
This is a sample of the UHM access statistics for databases available through Cambridge Scientific Abstracts.
Notice that there is variation not only among the databases but also over time. [click]
Access rates drop to zero for some databases at the end of the academic year. This may reflect a lack of summer courses on a given topic, but it may also mean that the majority of users of a particular database are undergraduate students who tend not to continue academic research on their own during the summer.
This is a usage report for LexisNexis, a database that concentrates on materials related to law. [click]
Notice that in some cases there is a high ratio of documents retrieved to the number of searches performed. [click]
In other cases the number of documents retrieved is less than the number of searches. This indicates that a number of the searches are yielding no results. In the statistics circled, at least four of the twenty searches reported resulted in a null retrieval set--no documents were retrieved that matched the search query. There could be a number of reasons for this: database content, indexing of the documents, search methodologies available, or a lack of clear search instructions available for users.
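The arithmetic behind that inference can be sketched in a short Python snippet. The figures used here (20 searches, 16 documents retrieved) are hypothetical stand-ins for the circled statistics; the key assumption is that every successful search retrieves at least one document, so the shortfall between searches and documents gives a minimum count of null searches.

```python
# Lower bound on null (zero-result) searches, inferred from aggregate
# usage statistics. Assumption: a non-null search retrieves at least one
# document, so if fewer documents were retrieved than searches were run,
# the difference is a minimum number of searches that returned nothing.

def min_null_searches(searches: int, documents: int) -> int:
    """Smallest possible number of searches that retrieved no documents."""
    return max(0, searches - documents)

# Hypothetical figures standing in for the circled report statistics.
print(min_null_searches(20, 16))  # at least 4 of the 20 searches were null
print(min_null_searches(20, 75))  # 0 -- a high ratio tells us nothing about nulls
```

Note that this is only a lower bound: when documents outnumber searches, the aggregate figures cannot reveal how many null searches occurred, which is one reason the follow-up research described below is needed.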
Further research in which we closely observe or query patrons during their information search processes would be needed to interpret the data.
Sometimes our data collection methods are fairly primitive. [click]
After handling a question at the reference desk the librarian makes a hash mark on a piece of paper.
Some of our data collection methods are far more sophisticated. [click]
This is a sample of the type of reports Voyager is capable of generating.
Notice that the date, type of search, search string, search limit type, limit character string, index, and number of items retrieved can be captured for every search a patron makes.
Here we need to be concerned with user privacy. Even if we capture data about our users' searches, we do not capture identifying data about the searcher. This is a fundamental principle of librarianship. It will be emphasized in your reference courses, this course, management courses, and courses about the library profession in general.
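A minimal sketch of what such a privacy-aware log record might look like, written in Python. The field names follow the list above (date, search type, search string, and so on), but the record structure itself is an illustrative assumption, not Voyager's actual schema. The point to notice is that no patron identifier appears anywhere in the record.

```python
# Hypothetical search-log record modeled on the fields listed above.
# This is an illustrative sketch, NOT Voyager's actual internal schema.
from dataclasses import dataclass, asdict

@dataclass
class SearchLogEntry:
    date: str              # when the search was run
    search_type: str       # e.g. keyword, title, author
    search_string: str     # what the patron typed
    limit_type: str        # any limit applied (e.g. by format or year)
    limit_string: str      # the value of that limit
    index: str             # which index was searched
    items_retrieved: int   # size of the result set
    # Deliberately absent: patron ID, barcode, IP address. Capturing search
    # data without identifying the searcher is the principle at stake here.

entry = SearchLogEntry("2010-10-04", "keyword", "national geographic",
                       "year", "2000-2010", "journal title", 1)
print(asdict(entry))
```

Because the record carries no identifying fields, the log can be analyzed or shared for research without compromising patron confidentiality.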
Sometimes we perform classical experiments in our research.
The classical experimental model is to formulate a hypothesis, then set up a laboratory in which two groups--an experimental group and a control group--participate in an activity. The two groups' reactions, activities, or times to perform certain tasks are then compared.
In the classical experiment, one variable--say the user interface--is allowed to change while all other variables are kept constant. This change in only one variable, along with sophisticated statistical tests, allows the researcher to determine with greater or lesser confidence that any difference between the control and the experimental group is due to the change in the variable under study.
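As a hedged sketch of the statistical step, here is how a researcher might compare task-completion times for a control group (old interface) and an experimental group (new interface) using Welch's two-sample t statistic, computed with only Python's standard library. The timing data are invented for illustration; in practice you would look up the corresponding p-value, or use a statistics package, rather than eyeball the t value.

```python
# Welch's two-sample t statistic for comparing a control group and an
# experimental group. The timing data below are invented for illustration.
from statistics import mean, variance
from math import sqrt

def welch_t(a: list[float], b: list[float]) -> float:
    """t statistic for the difference in means of two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

control      = [54.2, 61.0, 49.8, 58.3, 63.1, 55.7]   # seconds, old interface
experimental = [42.5, 47.1, 39.9, 50.2, 44.8, 46.3]   # seconds, new interface

t = welch_t(control, experimental)
print(f"t = {t:.2f}")  # a large |t| suggests the interface change mattered
```

The design choice mirrors the paragraph above: only the interface varies between the groups, so a sufficiently large t value lets the researcher attribute the difference in completion times to that one variable with a stated level of confidence.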
However, there are problems with utilizing this approach in user studies.
Let's say we're studying the efficacy of a particular interface. [click]
Subject A had a great evening the night before the research session. Her husband surprised her with a long-stemmed rose. They went out to dinner in her favorite restaurant. The next morning he brought her breakfast in bed. She looks at the interface and the flower reminds her of the night before. [click]
She gives it an excellent rating.
Subject B had a terrible time the night before. Her kids were sick. Her husband had to work late. [click] This morning she encountered terrible traffic trying to get to the research lab. [click] And to top it all off she's having a terrible hair day. [click]
She gives the interface a poor rating.
Same interface. Very different reactions that aren't actually an evaluation of the utility of the interface. This brings up the problems in using humans as research subjects.
User research in information science has borrowed many research techniques from the social sciences. In doing so, we also inherited the problems of social science research. These problems are largely the result of the social sciences borrowing their research techniques from the natural sciences. As it turns out, there is often a poor fit between the research techniques of chemistry and physics and the research environment and subjects of the social sciences.
In the natural sciences we can more readily achieve the classical research paradigm discussed in the previous segment of the user studies presentation. We can take a culture of a particular strain of bacteria, put the cells in a nutrient agar solution, divide this mixture of cells and nutrients into a hundred test tubes or petri dishes, then incubate half the samples at one temperature and half at a higher temperature. This can give us data about the effect of temperature on the mortality of a particular type of bacteria.
We can't do that with humans. We certainly can't isolate a particular "strain" and we can't control the environment in which our subjects develop.
Let's say we decided to use the members of our LIS class in an experiment. Although there are some common elements, the members of the class have different occupations, ages, nationalities, educational backgrounds, and personal experiences. Some of these myriad variables are difficult to identify—for example, on any given day there are fluctuations in mood not only between persons but in the same individual over time.
Such variables are also difficult to control.
Another problem in user research is that the link between cause and effect is often difficult to establish. [click]
For example, let's say the library administration makes changes in the library borrowing policy. [click]
Almost immediately afterward there is a decline in circulation. [click] Is the former the cause of the latter? Or was there an extraneous factor? [click] Did the transit authority reduce the number of buses on the route that services a local retirement community, with the result that senior citizens could no longer get to the library? [click]
Another problem is that we often seek to measure intangibles. What constitutes quality of service? What if a patron feels good about a particular library visit but didn't really get all the resources he or she needed?
What about the converse: What if the patron got the resources she needed but feels unhappy because the library was too cold?
An additional problem is that users' articulated demands may not reflect their true information needs. [click]
One time I was approached at the desk by two students asking for National Geographic articles on New Zealand. [click]
When I questioned them about their information needs, it turned out that what they really needed were articles about business opportunities in New Zealand. The National Geographic would not have been the appropriate publication to search for such articles. It was simply the only magazine they were familiar with that carried articles about foreign countries.
If they had gone directly to the online catalog they would have pulled up the call number for the National Geographic. They would have gotten what they requested of the system. But their actual information needs would not have been fulfilled by their request.
If we simply tracked their search activities in our automated system reports we would have seen their search for National Geographic was successful. We would not have known that their information needs were not met.
One problem with user research is that it can be disruptive to the normal operation of our information-provision system.
If I stand there with my clipboard asking people questions as they leave the reference desk—How old are you? What are you looking for? Do you think we have what you need? Are you satisfied with your interaction at the reference desk?—I disrupt the traffic flow and probably the thought processes of our patrons, and I will probably make our reference librarians nervous or annoyed. Or if we employ observers to watch users in their information seeking activities, we tend to alter those activities. Whereas a person might normally switch her attention from information seeking to entertainment, to socializing, and back to information seeking even in a short time span, an individual who knows she is being watched is probably not going to engage in her normal activity patterns. Depending on the level of intrusiveness, our data may not truly represent normal patron behavior.
Another problem, as mentioned previously, is that the library may not be able to implement changes suggested by the results of our studies.
For example, university students, when surveyed, may wish that the reference desk be open until 2 a.m. the night before their papers are due. But the library may not have the staff to keep the reference desk open for such extended hours.
Aside from the previously mentioned problems with user research, there are the traditional problems of validity and reliability of data.
There are two aspects to validity. One is whether the data gathered about a population sample is representative of the entire population. Many studies done at the university use students as their subjects. For example, a group of psychology students are required by their professor to participate in a study. But do the preferences or information searching activities of students at the University of Hawai'i reflect the preferences or information searching activities of a former plantation worker in Ewa?
Another aspect of validity is whether our measuring tools actually yield the desired information. For example, if we seek to discern whether students like school, is attendance a good measure? Maybe not. Myriad health or social-environment factors can contribute to attendance rates.
The other factor that affects quality of data is that of reliability. Are our instruments stable over time? For example, we mentioned earlier that questioning techniques can vary between interviewers or in the same interviewer over time.
You are the director of the Lake Woebegone Public Library, a small library in a residential area. Over the years, the demographics of the area have changed. Years ago the streets were full of children riding bicycles; now the only people you see as you drive to work are senior citizens tending their gardens. The library used to be full of school children doing their homework or preschoolers attending storybook sessions. Now very few patrons of any age visit the library. You decide to do a user study to determine the best course of action to revitalize your library but realize that you will need additional funds for the project.
Write a two-page letter to the Library Board of Directors requesting additional funds. Explain the type of user study you would like to do, justifying the methods you have chosen. Do not list every method we discussed in class. Select only those methods that would be appropriate in this case. For this assignment you do not have to estimate the amount of funds you will need but you do need to demonstrate that cost effectiveness was a consideration in your choice of methodologies.
Hint: Be sure to demonstrate to the Board that you have done your homework before making your request. Lake Woebegone is, of course, a fictional town. If this were a real town, what sort of information could you cite to bolster your claim that a study is needed?