Social Sciences Research: To Count or To Listen?

Carol A. Hand

Once upon a time, I would sit all day running statistical analyses. These were among my least favorite tasks: crosstabs, t-tests, ANOVAs, and occasionally chi-square tests. One didn’t really need to understand why these statistical procedures were supposedly trustworthy explanations and predictors of people’s behavior. One was merely programmed to believe that they were valid and reliable sources of information about groups of people, based on self-reported, yes/no, forced-choice answers to questionnaires.

One job in particular was noteworthy. My task was to construct, run, and interpret statistical analyses to determine the interaction of stressors and health for older men and women who were caring for adult children with disabilities. It was someone else’s job to enter the data correctly. The project director told me not to try to speak to the data entry expert face-to-face because he spoke Korean and struggled with English: “Send him emails if you need something.”

I tried this approach, but he would always ask to meet with me so I could explain my confusing emails. We became friends in the process, and I learned more about his life, studies, and culture. He valued the chance to practice speaking English. Emails were a poor and isolating substitute.

Of course, the data I needed were always entered quickly and impeccably. Still, it was such a boring job. After an hour or two of staring at the computer screen and data printouts, I snuck into the room where the seldom-viewed narrative data were stored. These were hand-recorded replies of “research subjects” to the question, “Is there anything else you would like to add?” I discovered a fascinating (and troubling) project oversight: more than 75 percent of the study participants (calculated in my head) had indicated that occasional respite from continual caregiving responsibilities would significantly improve the quality of their lives.

The project director wasn’t pleased when I asked her whether we had followed up on this information. Respite appeared to be a policy we should advocate for at the state level, and we had research data to support both the need for such a policy and its cost-effective benefits.

Research should, after all, be used to improve people’s lives, right? Not just to count and catalog their deficits and misery?

[Image: Research cartoon]

Yet this project wasn’t the most distressing. That one would come later. It was a project that was supposedly measuring the effectiveness of a particular intervention to improve the academic performance of Native American elementary school students. The researchers who directed the project were more concerned with fancy research methods than with the intrusiveness and cultural dissonance of those methods in tribal contexts.

When those methods failed to produce evidence of significant improvements in grades and attendance, the project directors blamed Native American families, schools, and cultures. Unethical? Yes, but their draft conclusions were subjected to an elegantly argued rebuttal that pointed out the absurdity of the methods: observers and beeping computers in a room with third graders, questionnaires that families refused to complete for a variety of reasons, and student performance forms filled out by too many substitute teachers who didn’t even know the students’ names, let alone how they performed in class.

Most alarming, however, was the failure to think about the data that were collected in human terms. Reviewing and synthesizing the data we did have with the narrative comments, I noticed one young boy whose condition was serious. Every quantitative measure suggested he was severely disturbed and clearly in need of help. The narrative comments, again seldom read, reported violent behavior at home and school and mentioned his repeated threats to harm himself and others. I asked my colleagues and the project director what we planned to do about this. Didn’t we have ethical obligations to make sure the young boy accessed the help he so obviously needed? It had never occurred to them that researchers had obligations to the people they studied.

Even though these experiences soured me on quantitative research, the qualitative approach I chose for my dissertation study, critical ethnography, wasn’t much of an improvement. I had to listen to troubling stories and witness oppressive situations and conditions as a somewhat aloof, objective observer. I could convince myself to some degree that it did help those who chose to share their experiences with an empathetic, nonjudgmental listener. I wouldn’t count or catalog their suffering; I would privilege their words and perspectives.

Yesterday, I discovered one of the flaws inherent in this approach. As I reread and edited an interview with the tribal child welfare director in the community I was studying, I came across the list of child welfare cases she had read off to me. The list had remained buried in my fieldnotes. I wondered what value the information added in terms of the overall purpose of a book I’m writing about the study, and whether I should just delete it, even though there were no names or identifying details. Just to be sure, I decided to reduce the list to numbers – how many children were in each of five intervention categories:

1. returned home,
2. placed in community kinship care with Ojibwe relatives,
3. placed in off-reservation non-Indian foster care homes,
4. placed in an off-reservation non-Indian residential treatment facility, or
5. adopted or in the adoption process by an off-reservation non-Indian family.

I decided to see if I could figure out how to use Microsoft Excel to do a simple calculation. When I looked at the results, I was alarmed that I hadn’t thought to do this before. Despite the intention of the Indian Child Welfare Act of 1978 to end the outflow of Native American children from their communities, an ongoing process of cultural genocide, more than 40 percent of the child welfare placements were in non-Native settings outside the tribal community. The colorful Excel pie chart below illustrates the magnitude of loss.
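For readers curious about the arithmetic behind the chart, here is a minimal Python sketch of the same tally. The per-category counts are hypothetical placeholders, since the post reports only the overall result (more than 40 percent placed outside the community); only the calculation itself is the point.

```python
# A minimal sketch of the simple tally described above, done in Python
# rather than Excel. All counts below are HYPOTHETICAL placeholders --
# the original fieldnotes are not reproduced here.

placements = {
    "returned home": 12,                                    # hypothetical
    "community kinship care with Ojibwe relatives": 10,     # hypothetical
    "off-reservation non-Indian foster care home": 8,       # hypothetical
    "off-reservation non-Indian residential treatment": 4,  # hypothetical
    "off-reservation non-Indian adoptive family": 6,        # hypothetical
}

total = sum(placements.values())

# The last three categories are placements in non-Native settings
# outside the tribal community.
outside = sum(
    count for category, count in placements.items()
    if category.startswith("off-reservation non-Indian")
)

print(f"Total children: {total}")
print(f"Placed outside the tribal community: {outside} of {total} "
      f"({outside / total:.0%})")
```

In Excel, the equivalent is a column of counts, a SUM, and a single division; the script just makes the category logic explicit.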

[Image: “We Remember” pie chart and legend]

Lesson learned. Quantitative approaches do sometimes have merit. Stories, when backed up by numbers, may be far more effective advocacy tools than either approach alone. So the list will stay for now, accompanied by a new graphic.

Copyright Notice: © Carol A. Hand and carolahand, 2013-2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Carol A. Hand and carolahand with appropriate and specific direction to the original content.


12 thoughts on “Social Sciences Research: To Count or To Listen?”

  1. Very interesting, Carol. Tubularsock has always been amused by even the term “social science”.
    As if science has an answer to anything ………. we can study electricity, but we still don’t know what it really is; we can capture it and direct it, but have no clue as to what makes it ………

    Tubularsock has always found “testing” and comparing the “results” to be the wrong direction to go in attempting to understand humanity. Hmmm, maybe just asking and listening would produce a closer picture of the unknown.

    Thanks. Great post.


    1. These are such crucial points about our focus on measuring phenomena versus understanding them, Tubularsock. And I quite agree that “testing” decontextualizes behavior, making what we think we learn rather meaningless in the real world. Thank you for sharing such important critiques!


  2. Numbers don’t lie, but people do, so it’s really hard to believe these statistical analyses sometimes. Statistics has been, and still is, used as a tool to “prove” a point more convincingly, to sway public opinion in a desired direction, etc. Also, in many cases, whoever paid for the research is the one who could control the results. These practices are unethical, yes, but that doesn’t stop certain people from engaging in them…

    Our current statistical analysis techniques are certainly far from perfect, but if done right, they will suffice for many practical cases.

    Anyway, I sometimes joke that the number of journal articles published is inversely proportional to scientific progress! 🙂


    1. Thank you for such thoughtful comments about statistics, Abyssbrain. I love the humour of your conclusion – “the number of journal articles published is inversely proportional to scientific progress!” Sadly, I agree. 🙂


  3. wonderfully diligent effort
    to see the better way of knowing!
    i used to be grateful for the researchers
    from johns hopkins collecting and analyzing data
    in the room next to my public health office;
    grateful they were creating study results
    and that I was free to educate,
    without making studies or publishing 🙂


  4. They have computers now that do this kind of counting and analysis. They’re specifically designed to search databases and construct “readable” narratives. The journalists, researchers (and even NSA spies) who used to do this kind of work are being laid off.


    1. Thank you for sharing this information, Stuart. I’m sure machines can count beans more accurately than people. Sadly, I doubt much of this work is focused on research to improve the lives of people other than those who are among the elite…

