When I started out in the social sciences, I didn’t know much more than that I would fit there. Until then, I had been a student of computer science and a coder at Microsoft, yet mostly unhappy. Locked in my Hyderabad room, I spent hours writing uninformed commentary about social happenings on my blog, which felt closest to the kind of thing I wanted to do. It was this that led me to recognize my interest in social science research. Driven by an imagination of what it would be like to do research and a belief that the other side possessed the meaning I sought, I left my carpeted Microsoft life to take up research.
I was going to do research, but I was completely unaware of how one actually did it.
Before I moved to Delhi, where I was going to take up the liberal arts program at Ashoka University that would help me transition to the social sciences, I raced against time to finish the thick books stacked next to my bed. I read history and sociology, even fiction. I made extensive notes. How else was I to prepare for research, whatever that was?
One afternoon in my first few months in Delhi, I received an email about Christophe Jaffrelot’s project on syncretism. It was the first time I had come across that word, but the world-renowned professor was looking for a team of research assistants, and I was going to apply. As I understood later, the project aimed to study dargahs, the tombs of Sufi saints that have been thronged primarily by Muslims, but also by Hindus and Sikhs, for over eight centuries. These spaces, born out of ideals of anti-institutionalization of religion, embraced humanistic values such as equality, and have survived as shared religious spaces despite the contemporary culture of riots. The research project sought to understand how religions interact in these spaces today, in order to determine whether mistrust and communalism have seeped into the dargah as well. We were to observe and interview devotees and pirs in dargahs, and shopkeepers in the vicinity. I jumped on board quickly. In the months that followed, my team and I spent many days at dargahs in Delhi, Agra, and Meerut. Fieldwork, I realized, is harder than it seems at the outset. You have to deeply understand the local culture, be sensitive and empathetic to gain subjects’ trust, and adeptly manoeuvre through conversations, all while battling the sun, exhaustion, and the brunt of judgements from random people whom you’d never see again. It is even more difficult when you are an outsider to a community, seeking to study things that are considered controversial within it. To be a non-Muslim, South Indian, female student conducting research about interreligious relationships in a largely Muslim and male-dominated space, in an aggressive city, was often consuming (to say the least).
It was during this period that I met Neil Lutsky, a sharp psychologist, a caring mentor, a wonderful friend, and everyone’s favorite professor. Neil introduced me to quantitative methods, which are typically more specific, more binary, and less exploratory than qualitative methods. In his undergraduate critical thinking class, which he allowed me to attend, he emphasised that example is not evidence. The more I worked on my research, the more I recognized my tendency to develop a memory bias for vivid interviews and stories, which were not necessarily a better depiction of reality. The better, longer interviews swayed my judgements and clouded my perspective. What if people with tolerant views were more willing to participate in interviews than those with hatred for the other religion? That would bias our results. My teammate noticed that interviews with Muslim men took a different direction when he wore a skullcap and opened conversations with the customary Arabic greeting ‘As-Salaam-Alaikum’. When we began writing up our work, we were determined not to be influenced by these biases. It was terribly difficult. How were we to build a narrative and conclude our research if we couldn’t focus on our best data? We tabulated our interviews, categorized them, and counted similar opinions, all to escape bias, and wrote a carefully qualified conclusion.
After all of this, could there still be biases in our work? Definitely. This leads me to ask: Can research ever be completely objective?
After this project, my interests began to incline towards social psychology, a discipline that happens to revere experimental and quantitative methods. Classes on statistics and experimental methods taught me to break down research problems, devise hypotheses, and design simple experiments to test them systematically. Quantitative research typically works with research questions that are narrow and specific. Experiments manipulate a causal variable while controlling other extraneous factors and observe its impact on an outcome variable, thereby efficiently testing causal claims. Data, in a psychologist’s computer, is a table of numbers, and analysis is a set of statistical tests performed on these numbers to determine how likely the observed data would be if there were no real effect.
Most academics would agree that research in the social sciences is easily classifiable into quantitative and qualitative research. This division happens at a macro level; disciplines self-categorize into one of the two paradigms, with their members often staunchly believing in their discipline’s ways and even dismissing other ways of doing research. American political science, for example, no longer thinks much of European political science because the latter still relies on observation and conversation. Psychologists describe sociology as the discipline with important questions but poor methods. There seems to be an invisible hierarchy, with numbers placed higher than all else. I found myself throwing my weight toward the quantitative side of this tug of war. I began to rest my faith in the certainty that numbers seemed to provide.
My master’s thesis was an experimental study aimed at understanding collective experiences of anger. Common understanding of riots tells us that anger amplifies when experienced in groups. Some preliminary empirical evidence supports this claim, but it is largely unknown what conditions are required for anger to amplify. My advisor, Kai Qin Chan, a group of wonderful undergraduate research assistants, and I sought to investigate this. Would anger amplify even if people in the group don’t overtly express their feelings? Must members of the collective perceive themselves as belonging to a group for the phenomenon to occur? I was particularly interested in anger against social injustice, of the kind experienced at protests. We invited college students to participate in our study. Each participant watched a video depicting injustice resulting from monetary influence in psychiatric diagnoses, either alone or in groups of three or five. We manipulated whether participants viewed the others in the room as ingroup (similar) or outgroup (dissimilar) members. We planned to compare participants’ self-reported emotion ratings to see if there was any effect of group size or group identity.
The idea of controlling all but a couple of intended variables sounds neat on paper but is difficult to achieve, especially in experiments involving groups of people. At the end of our experiment, we found no influence of group size or identity on anger. That is, our statistical tests found no evidence that anger amplified when more people were present, at least when participants did not openly express their feelings. But how are we to know whether there is truly no such effect, or whether the execution of our experiment was flawed, i.e., other unintended factors had inadvertently influenced the outcome?
Had our results been consistent with our expectations, I wonder if we would have doubted our method.
It is reasonable that empirical methods are considered important, since they are less prone to researcher bias and lend more mathematical certainty to conclusions. But is that sufficient to justify the superiority of quantitative research over qualitative research? This change in paradigm forces one to ask whether the new hegemony of numbers in social science is warranted.
In my last term, I took a course on field research methods. The instructor, Valentina Zuin, busts the qualitative-quantitative dichotomy in her classes. Are qualitative and quantitative methods equally capable of answering the same research questions? If they are employed to study different questions, she asks, is it even logical to compare them in the first place? Qualitative research typically starts with broad questions, where the researcher may not even know at the outset which variables matter. For example, which characteristics of election candidates do voters give the most importance to? If I were to administer a quantitative survey to study this question, I would need an exhaustive list of the characteristics voters might consider before casting their votes. Without such knowledge, my study would have to depend on assumptions. In Valentina’s class, we read an article about why residents of rural India defecate in the open even when latrines are available to them. Through focus group discussions, the authors learned that women went to the fields in groups to defecate together and chat about their day, suggesting that open defecation was a recreational activity for the rural woman, a perspective that surprised the authors. Had the authors chosen a quantitative method, recreation would never have featured in their survey.
Quantitative research requires at least preliminary knowledge about variables that could matter for the outcome being studied because such research usually starts with a hypothesis or a focused question that examines the effect of one variable on another. For example, the effect of the education level of a candidate on voters’ likelihood to vote for him/her. Where there isn’t sufficient understanding of underlying variables, observation, interviews, and other similar methods could provide useful insight into the subject of study.
Even when a researcher is aware of the important variables influencing a particular outcome, a qualitative understanding of the social context and culture in which the study is conducted is necessary. I am reminded of a recent experience at Ashoka University. After complaints about inefficient handling of waste on campus, efforts focused on designing interventions to increase students’ tendency to segregate waste. My groupmate and I, observing students’ waste behaviour for our class project, noted that janitors on campus emptied both the organic and recyclable bins into the same trash bag, rendering students’ segregation efforts inconsequential. A few minutes of observation were enough to show that the problem wasn’t with the students. Qualitative research can thus help us frame and refine questions for quantitative research.
Some kinds of questions will always be the playground of qualitative methods. Investigations about one-time events placed in a unique historical context will have to be qualitative as it would be impossible to recreate the event or its history as it occurred. Open-ended, exploratory problems are also the arena of observation and interviews.
It seems, then, that qualitative and quantitative methods answer different kinds of questions and serve different purposes. Any hierarchy among them is illogical and unreasonable; comparing the two methods would essentially be comparing research questions. This leads us to ask: are some kinds of research questions always better and more important than others? If not, it might serve us better to respect other disciplines, and to take from them to better our own.