This continues our series of student reflections and analysis authored by our research team.
Inter-Coder Reliability Between Projects
During the week of November 18, many members of the Project were given an assignment focused on “scraping” documents. In scraping, coders search the names of defendants from potential cases against our project database to determine whether we’ve already coded a case or found a new case to code. Coders set up what we called an “assembly line”: several picked out names to be searched, several checked our spreadsheets to see whether those names were already coded, and the last few began coding the cases that weren’t yet in the database. It was a very efficient way of getting new potential cases on the board, even if they couldn’t be finished immediately.
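The de-duplication step in that assembly line can be sketched in a few lines of Python. To be clear, this is an illustrative assumption, not the Project’s actual tooling (the real work was done by hand against spreadsheets), and the normalization rules here are invented for the example:

```python
# Hypothetical sketch of the "is this defendant already coded?" check.
# The normalization rules are assumptions for illustration only.

def normalize(name: str) -> str:
    """Lowercase and reorder name parts so 'SMITH, John' matches 'john smith'."""
    parts = name.replace(",", " ").split()
    return " ".join(sorted(p.lower().strip(".") for p in parts))

def triage(candidates: list[str], already_coded: list[str]) -> tuple[list[str], list[str]]:
    """Split candidate defendant names into new-to-code vs. already-coded."""
    coded = {normalize(n) for n in already_coded}
    new, known = [], []
    for name in candidates:
        (known if normalize(name) in coded else new).append(name)
    return new, known

new, known = triage(["John Smith", "Doe, Jane"], ["SMITH, John"])
```

In this sketch, “John Smith” would be flagged as already coded (matching “SMITH, John”), while “Doe, Jane” would be routed to the coders starting new cases.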
However, rather than drawing potential cases from police reports or news articles, as is common in the Project, this assembly line focused on pulling potential cases from other research projects. The Threat Within provided us with a spreadsheet of cases compiled on foreign terror, and we received several other lists from Homeland Security and other organizations. Being able to compare cases with other projects and organizations working in the same field provided a tremendous opportunity to assess the reliability of our work.
To a certain extent, we can check the reliability of our results within our own project. The practice of checking for inter-coder reliability, or making sure that separate coders reach the same result when coding the same case, provides insight into whether coders are applying the same standards and following the manual in the same way. More reliable coding makes it more likely that the values being coded are valid, since multiple coders arrive at the same end results.
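One common way to quantify inter-coder reliability is Cohen’s kappa, which corrects the raw agreement rate for the agreement two coders would reach by chance. The post doesn’t say which statistic, if any, the Project uses, so the following is only an illustrative sketch:

```python
# Illustrative Cohen's kappa for two coders labeling the same set of cases.
# Not the Project's actual reliability procedure; shown as a general example.
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement implied by each coder's label frequencies."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance; values in between indicate how much of the coders’ consistency goes beyond what their label frequencies alone would produce.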
However, coding correctly by the manual does not necessarily mean that the cases are being represented accurately. One problem with checking the Project’s reliability and validity from within the Project itself is the potential for groupthink. Without consulting outside minds or considering perspectives beyond those in the room with us each day, coders may come to accept variables and values as settled truths rather than as things to be revised to better fit the project as it develops. Therefore, we took this scraping activity as an opportunity to check our results against cases from the provided lists that the Project had already coded. In many cases, we found that our coding matched that of other projects or organizations, giving us reasonable evidence that our methods of coding and the variables and values we are using are adequate.
In a sense, checking results between projects becomes its own method of analysis. Though we used it mainly as a means of completing research already underway, this kind of comparison could be performed as an analysis in its own right, setting the results of tPP against those of other projects to get a grasp of the general ideas behind terrorism research. While not necessarily material for publishing, it could prove a useful tool for further evaluating our coding process.
At the moment, it looks as if we are on the right track, based on the comparisons between our data and others’. However, continuing to scrape new cases from other organizations, and to compare the cases coded by both projects, should and likely will remain a priority for the Project. While we are working with a group of capable individuals who work carefully to produce the best results, stepping away from the group for an outside perspective is the best way to challenge the parts of us that treat our decisions as absolute, with no room for suggestion.
For more information on groupthink as it pertains to team projects: https://pdfs.semanticscholar.org/be0c/9ebe64eb4c77673404f77f08cdc5600f97ef.pdf