Shortly after starting my first semester as a PhD student, I came across an opportunity to get involved in a community food system research project as a data collector. It was a collaboration between a local food coalition, nutrition scholars, and city officials. This was just the type of experience, and these were just the connections, I felt I needed to begin establishing myself as a researcher within a new community and among faculty in my field. Instead, it became a lesson in establishing my ethical boundaries as an emerging researcher and evaluator.
The lead researcher on the project was an assistant professor from a large university in the South. In this post, I will refer to her as Dr. Susan. I had little background on the project, so I asked Dr. Susan about the research questions guiding the data collection. She stated that she was working with the primary stakeholders and that they just wanted to do an “assessment.” I pressed for a more precise answer, but it seemed that either she didn’t know the purpose of the study or the stakeholder group really didn’t have a clear purpose in mind. This was just the start of my discomfort.
We moved on to discussing the data collection instruments. As we read through the survey items together, there were several that required explanation because of ambiguous wording, so I pressed for clarification. By the third or fourth unclear item, Dr. Susan appeared impatient. She instructed me to let respondents decide what an item meant to them, or to have them “think about it hypothetically” if it did not pertain to their life experience. I was frustrated. If I had trouble interpreting the items as a PhD student with a nutrition background, what did that mean for the reliability of our data among a less-educated population?
Finally, we came to the demographic items on the survey. With the exception of the income questions, none of the demographic items included a response choice of “prefer not to answer.” I asked Dr. Susan if she wanted to make them consistent by adding a “prefer not to answer” response option to the other items. She declined. I followed up and asked if we could include an introduction to the survey informing respondents of their right to decline to answer any question without risk of losing their monetary incentive. Dr. Susan explained that if we let them know they had the option to skip questions, they might choose not to answer some of the items and we would have missing data. I could feel my muscles tightening and panic setting in.
I left the meeting anxious and unsure about continuing with the data collection. On the one hand, I really wanted to build a relationship with the food and nutrition community in this city and worried that backing out of the data collection would make me appear difficult or arrogant. But, the ethical gnawing in my gut was keeping me up at night. Fortunately, my professors had introduced several tools during my nascent PhD career that helped me navigate the situation and make a decision that put my mind at ease.
Reflexivity is a critical evaluator competency. In the qualitative methodology class I took during this first semester, we were required to start a journal to explore our growth as researchers. In one exercise at the beginning of the semester, we were asked to write down things that were ethically important to us in conducting research. There was a moment of clarity when I referred back to key words from that statement while reflecting on my options: self-determination, transparency, and autonomy. I had also described how important it was to me that individuals providing their data were respected, and I felt these methodological choices did not give voice to community members. I wrote that they “reinforced a power structure of researchers and government officials deciding on an intervention based on (flawed) data they collect through an instrument that they designed in isolation.” Two themes emerged from my journal entries over those few days: concerns about data integrity and social justice.
American Evaluation Association Guiding Principles
The concerns I saw emerging in my journal are addressed explicitly in the Guiding Principles of the American Evaluation Association. We were asked to reflect on these principles early on in the semester. Two areas in particular provided the direction I needed to make a decision about moving forward with the data collection.
Integrity/honesty: If evaluators determine that certain procedures or activities are likely to produce misleading evaluative information or conclusions, they have the responsibility to communicate their concerns and the reasons for them. If discussions with the client do not resolve these concerns, the evaluator should decline to conduct the evaluation.
Respect for people: Evaluators should abide by current professional ethics, standards, and regulations regarding risks, harms, and burdens that might befall those participating in the evaluation; regarding informed consent for participation in evaluation; and regarding informing participants and clients about the scope and limits of confidentiality.
One of my concerns was that the poorly constructed survey items would provide misleading information about the population. This could potentially lead to an ineffective and wasteful intervention in this community. Furthermore, without a clear understanding of the purpose of the data collection or provision of informed consent to respondents, I could not confidently conclude that my work would maximize benefit and reduce unnecessary harm. Since Dr. Susan was not receptive to modifying the survey items or establishing informed consent, I felt the right decision was to decline the opportunity to participate in the data collection.
Although I felt fairly confident that I needed to let go of this opportunity and focus on research and evaluation work that aligned with my ethical boundaries, there was a lingering feeling of self-doubt. I was a first-year PhD student with little experience. Who the hell was I to question the protocol of a more experienced assistant professor? What did I know about conducting field research anyway? The AEA Guiding Principles provided direction, but was that how evaluation really worked in practice?
I turned to a faculty member in my department for advice. I knew she had made similar ethical choices as an emerging researcher because she had talked about her experience in class. As I shared my experience with her, I felt vulnerable; my face trembled as though I were about to cry. She reassured me that this protocol was not business as usual in field research and that even though the project was not subject to IRB review, it should still adhere to the ethical treatment of research subjects. She described three courses of action I could consider: 1) walk away from the project with no explanation, 2) walk away from the project and give an exit interview with the researcher to let her know my reason for leaving the team, or 3) move forward with the data collection and use it as a learning experience of what not to do.
My Ethical Boundaries
Later that evening, I went home and wrote an email to the researcher. I explained that I was unable to continue with the project and communicated my concerns about informed consent, coercion, and respect for research participants. As a learning experience, this situation highlighted the importance of actively articulating ethical boundaries as part of one’s research philosophy early in the graduate career. I was fortunate that the faculty in my program provided the tools and mentorship for navigating these situations on several occasions during my first semester.