11 Steps to Help Students Conduct Effective Research Surveys

Dr. Jerrod Penn shows his agricultural economics students how to use cold facts and human behavior to craft a survey that provides useful results.

Jerrod Penn, PhD

Educator

Assistant Professor of Agricultural Economics,
Louisiana State University in Baton Rouge

PhD in Agricultural Economics, MS in Economics, MS in Agricultural Economics, BA in Economics, BA in Political Science

Conducting a research survey on any subject can be filled with statistical peril. Is your sample size large and diverse enough? Will answers be tainted by hidden biases and unforeseen variables? Will the responses provide the accurate, relevant data that your project requires?

This is a significant reason why Dr. Jerrod Penn, an assistant professor of agricultural economics at Louisiana State University in Baton Rouge, puts a great deal of thought into the preparation of research surveys when assisting local organizations (and often other university entities) with research into agricultural and environmental issues affecting the campus and the nearby community. Such work is central to his research interest: quantifying, in dollar terms, the often hidden or subtle changes to the environment we live in and their impact on people.

It is also why Penn provides students in his LSU course AGEC 4700 Research-Consulting Practicum (and his similar past University of Kentucky course, AEC 580 Independent Study) with precise instructions for the construction of such surveys. “The quality of a survey can really dictate the quality of the results,” he explains.

Past student research projects have surveyed sustainability measures on the LSU campus, use of the campus arboretum, dining services, pollinator protection (as part of a joint project between LSU, the University of Kentucky, and Ohio State University), butterfly conservation (near the UK campus), and other topics.

The benefits of involving students in research surveys

This exercise, says Penn, creates a sort of symbiotic relationship between the students and local stakeholders, allowing each group to benefit from the interaction. The students, under Penn’s guidance, develop and conduct research surveys (providing them with valuable professional experience)—then they aggregate and examine the data, using it to provide insights and consulting advice to groups involved in agricultural preservation and agribusiness programs and policies. That data is then often used in the creation of new programs—and in the presentations that these groups must conduct to secure funding for them. For students, the benefit is straightforward: real-world research experience they can put on a resume.

“The survey process is a good microcosm,” says Penn, “because it gives students the chance to have ownership, to make choices, to generate a report, and to see the project from planning to data collection and on to troubleshooting.”

Context

Official course name and number: AGEC 4700 Research-Consulting Practicum

Course description: Understand the foundations of agricultural and resource economics and apply them to problems facing related stakeholder groups.

11 steps to an effective research survey

Penn gives his students as long as three weeks to create a meaningful survey. This activity can be intimidating, he notes—particularly when the stakes are high. To do it well (and retain the respect and interest of local stakeholders) requires guidance, so Penn leads students through the process in the classroom: first he divides them into groups, then he allows each group to set its own tone and pace as they work through the steps and strategies below.

1. Select a central question

Before the course even begins, Penn has identified a central issue for the class, one that the university or a local entity is grappling with. But the problem may still be only broadly defined, so Penn asks students to begin the first week by asking themselves, “What is the single most important question that we need to answer well?” All supplementary or subsequent questions will build on or relate to that main question, so it is important not to give this step short shrift.

“I may give the students a copy of the three-year or five-year strategic plan or the central problem that the organization’s leader has described to me,” Penn says. “[Students then] gather feedback from the stakeholder experts during survey development, either individually or in a focus group of these experts.”

2. Describe the process

Penn says he also spends time at the outset describing the process the students will be participating in and how each step is critical to generating a high-quality answer. “We’ll go through established fundamentals on survey design,” he says, “but also, because this is economics, we will distinguish the distinct elements to generating monetary estimates of the central question. This explanation can continue into week two.”


3. Examine the existing data

Once the central question and process are defined, Penn recommends that students do some independent research outside of class. He suggests they immerse themselves in the topic they are researching through additional reading and by looking at existing surveys on similar subjects. He says that textbooks are helpful, and other researchers are usually willing to share information, especially if they know their work will be properly cited by the students.

4. Create a best/worst list

Next, students use their newly acquired insights to identify both the 10 best questions and the 10 most disposable ones. “Usually,” says Penn, “we refer back to survey instruments from other relevant research and ask ourselves, ‘Why did they ask it this way? Is it useful in our context? Can we improve it with respect to adhering more closely to our research question or the consistency with which respondents interpret and answer?’”

Then the groups craft their lists of most valuable and most dispensable questions. “The key is to benchmark [each question’s] relative value against the central question,” Penn explains. “If we spend too much of a respondent’s time in answering a less important question, this is likely at the expense of a more important question that enhances the validity of our result or the central question.”

Penn recommends that in-person surveys take no longer than 10 minutes to complete; online surveys may run slightly longer.
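One simple way to sanity-check a draft against that time budget is to tally rough per-question time estimates before piloting. Below is a minimal sketch in Python; the question types and per-question timings are illustrative assumptions, not figures from Penn’s course.

```python
# Rough survey-length budget check (illustrative numbers only).
SECONDS_PER_TYPE = {"multiple_choice": 20, "likert": 15, "open_ended": 60}

# Hypothetical draft survey: (question type, how many of that type).
questions = [
    ("multiple_choice", 12),
    ("likert", 10),
    ("open_ended", 2),
]

total_seconds = sum(SECONDS_PER_TYPE[qtype] * count for qtype, count in questions)
print(f"Estimated completion time: {total_seconds / 60:.1f} minutes "
      "(aim for 10 or fewer in person)")
```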

These stages of research and formulation can easily mean that it is week four or five of the class before the first draft of the survey is ready.

5. Add some attention-check questions

One concern students will face out in the field is holding survey participants’ attention.

“A standard practice of surveys is to have ‘attention-check’ questions,” says Penn. About two-thirds of the way through the survey, he recommends inserting a question such as, “To verify you’re a human response, please select ‘Somewhat True.’” He does this to see whether participants will follow the instructions. If a respondent misses the attention-check question, Penn notes, their survey is usually removed from the final analysis because the response (or lack thereof) suggests that their attention has wandered and, therefore, the rest of their answers may be suspect.
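Screening out those responses becomes a quick data-cleaning step once results are exported. Here is a minimal sketch in Python, assuming a hypothetical CSV export with one row per respondent and an "attention_check" column; none of these names come from Penn’s materials.

```python
import pandas as pd

# Hypothetical export of survey results, one row per respondent.
responses = pd.read_csv("survey_responses.csv")

# Keep only respondents who followed the embedded instruction
# ("...please select 'Somewhat True'").
passed = responses["attention_check"] == "Somewhat True"
clean = responses[passed]

print(f"Removed {(~passed).sum()} of {len(responses)} responses "
      "for failing the attention check.")
```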

6. Select the question lineup and order

Once the group reconvenes, the students hash out an overall best question lineup, then place those selections in strategic order in the survey.

“You have to think not about just the individual questions but how these questions affect each other,” Penn tells them. The goal is to make all questions as neutral as possible, so that respondents do not feel as though a certain answer is expected from them, or that earlier questions are leading them to certain responses later on.

7. Test the questions on a small group first

Before administering the survey to the intended population, Penn recommends using focus groups to identify ineffective questions and unexpected variables.

“Any survey worth its salt has to have focus groups,” says Penn. “They are in-person trial runs with people who reflect the population you are trying to understand.”

Focus groups are especially useful in helping students learn to identify potential bias in responses they may encounter in the field. After listening to the range of responses that a diverse focus group might present, the students conducting the surveys can fine-tune their questions in a way that avoids both overly negative and overly positive responses.

“You’re really trying to get the best representation of the population,” says Penn. “And you’re trying to avoid having people get pushed in one direction or the other with how they answer and how they participate.”

8. Be general with prospective participants (at first)

When administering the official survey, Penn recommends initially concealing its specific content from participants, especially if the topic is polarizing.

Telling respondents the exact purpose and goals of a survey tends to induce self-selection, he says: people who are especially interested in a project and its outcome are more willing to participate than those who are not, and as a group these volunteers may hold different opinions from the rest of the population you would like to study, which can skew the results.

“We always have to think about what type of bias we’re going to induce,” he says. “Keeping things general helps entice a broader population for participation.”

9. Survey one person at a time

It may take much longer to complete the survey work, but Penn recommends conducting in-person surveys one person at a time rather than posing questions to a cluster of people at once. One-on-one questioning, says Penn, can help prevent “groupthink,” in which participants are swayed by the opinions of those around them rather than stating their own true beliefs.

“The idea is that you want independence across observations,” he says. “For example, if I ask [the same questions of] a husband and wife, or two siblings, they’re more likely to be correlated about how they think about something.” This can happen on an even larger scale when people are in groups and are (consciously or unconsciously) eager to fit in. “We’re hoping to have the most independent responses possible, where their opinions aren’t likely to be correlated,” says Penn.
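If respondents can be tagged with a grouping variable (say, a hypothetical household ID), one crude after-the-fact check on independence is to compare the variance of group means to the overall variance, a rough stand-in for the intraclass correlation. The sketch below is illustrative only, with assumed column and file names; it is not a method Penn describes.

```python
import pandas as pd

# Hypothetical data: a numeric survey item ("rating") plus an
# identifier for the cluster each respondent belongs to.
df = pd.read_csv("survey_responses.csv")  # columns: household_id, rating

# If answers within a household move together, household means will
# vary almost as much as individual answers do overall.
between_var = df.groupby("household_id")["rating"].mean().var()
total_var = df["rating"].var()

print(f"Rough clustering ratio: {between_var / total_var:.2f} "
      "(values near 1 suggest responses are not independent)")
```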

10. Deflect interruptions

Penn notes that it is not uncommon for respondents to interrupt the flow of a survey with questions about its purpose or results.

To keep the survey process streamlined and uninterrupted, Penn recommends that students tell participants that they will be happy to answer any and all questions after the survey has been completed.

“I think of that as sort of an immediate in-person impact of outreach and education,” he says. “If a person is interested in knowing more after they finish the survey, the student can give them more resources and turn the interaction from being solely data collection into also being outreach.”

11. Offer to share the results

Penn shows that he respects and appreciates his participants’ time and effort by offering to email them a brief summary of the survey report once the results are compiled and reviewed.

“People are always asked to give their opinion, but they are rarely ever given the opportunity to see how their opinion is part of that greater process,” says Penn. “To me, it’s a matter of long-term good will. The very least I can do for [participants] is give them some brief feedback.”

Outcomes

Penn says that it is not unusual for these student surveys to produce concrete results beyond the classroom. For example, the first round of a student survey on campus sustainability practices triggered a larger-scale survey of staff and faculty, and the results were reported to the University of Kentucky’s President’s Sustainability Advisory Committee.

Even more clear to Penn, however, is the fact that students gain valuable practical experience in research, surveys, and the troubleshooting process. “I believe this process is particularly useful because employers want employees who can work independently and think critically for themselves,” Penn says. “This practicum serves that purpose.”

In teaching assessments, Penn has received positive responses to this exercise, including a verbatim response from one student who outlined multiple positive impacts:

“This class was an immeasurably valuable experience for me. I gained experience in research design that was incredibly useful to me later on.… The critical thinking skills that I used in this class have stuck with me. This class [also] prepared me professionally. After completing the class, I applied for and received a job with the Undergraduate Research outreach team, where I was able to share my experience with research with other students and encourage them to augment their academic curriculum with research. Completing undergraduate research is also a highly marketable experience.”
