Scraping the Violence Project’s Mass Shooter Database

The thoughtful Greg Reece from Miami’s Research Computing Support sent our team a link to a news story today. The email was titled, “Might have relevance to your work” and linked to a story by Vice News, “Nearly All Mass Shooters Since 1966 Have Had 4 Things in Common.”

This article presents a recent data set published by The Violence Project known as the Mass Shooter Database.

Greg was right to send this our way, as there are likely overlaps between tPP’s case criteria and the MSD’s. Vice’s secondary review of the data notes:

“[Mass shootings are] also increasingly motivated by racial, religious, or misogynist hatred, particularly the ones that occurred in the past five years.”

As soon as I saw this, I requested access to the data set and promptly received a link. The meticulous and easy-to-navigate data set provides event data on 171 shootings. From there, I sorted columns to prioritize certain variable values and trim the 171 events down to those likely to meet the inclusion criteria for tPP.

  • I began by eliminating any shootings prior to 1990, as these fall outside tPP’s date range.
  • I then used the “on scene outcome” variable to remove all cases where the shooter died on scene, keeping only those in which the individual was apprehended. Since tPP requires the charging of a crime, only individuals who survived their attack could be included.
  • I then sorted by motive. The data set codes for 13 “grievances” and “motivations.” Using these criteria, I colored all cases displaying the following values:
    • Racial element
    • Interest in white supremacy/Notable racism/Xenophobia
    • Religious hate
    • Homophobia

I also included two cases coded for “Notable misogyny,” as this is a recurrent trend in some cases we have added to our project. I then eliminated all of the cases which displayed other grievances, as these would not likely meet our definition of a socio-political motive. (A sketch of this filtering logic appears below.)
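
For readers who want to reproduce this kind of trimming, here is a minimal sketch of the three filtering steps in Python with pandas. The column names (“year”, “on_scene_outcome”) and the motive-flag labels are hypothetical stand-ins; the actual MSD export uses its own labels.

```python
import pandas as pd

# Hypothetical stand-ins for the MSD's motive/grievance columns,
# assumed here to be stored as True/False flags.
MOTIVE_FLAGS = [
    "racial_element",
    "white_supremacy_racism_xenophobia",
    "religious_hate",
    "homophobia",
    "notable_misogyny",
]

def filter_candidates(path: str) -> pd.DataFrame:
    """Trim the MSD export down to likely tPP candidates."""
    df = pd.read_csv(path)
    # Step 1: drop shootings prior to 1990, outside tPP's date range.
    df = df[df["year"] >= 1990]
    # Step 2: keep only shooters apprehended on scene; tPP requires a
    # criminal charge, so shooters who died on scene cannot be included.
    df = df[df["on_scene_outcome"] == "apprehended"]
    # Step 3: keep cases displaying at least one qualifying motive flag.
    return df[df[MOTIVE_FLAGS].any(axis=1)]

candidates = filter_candidates("msd_export.csv")
print(len(candidates))  # the process described above yielded 13 such cases
```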

This process produced a final set of 13 cases which, according to my interpretations of the coding criteria as provided by The Violence Project, likely meet the criteria for inclusion in tPP. These cases will subsequently be assigned to coders to investigate, and eventually coded for inclusion or exclusion. The cases identified (prior to individual investigation) are:

  1. Kenneth French
  2. Colin Ferguson
  3. Hastings Arthur Wise
  4. Richard Baumhammers
  5. Steven Stagner
  6. Chai Vang
  7. Dylann Roof
  8. Arcan Cetin
  9. Nikolas Cruz
  10. Dimitrios Pagourtzis
  11. Jarrod Ramos
  12. Robert Bowers
  13. Patrick Crusius

Our scraping procedure for data sets requires that we first check whether an incident is already included in the project. This involves searching the final data set as well as a series of ‘in progress’ sheets managed by coding teams. If a case is already included, as with 4 of the defendants listed above, we evaluate our coding choices against the new data for triangulation and possible modification. Since the data provided by The Violence Project is more detailed in certain aspects, we may be able to represent the record within tPP more accurately by exploring the other researchers’ coding decisions.
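
A minimal sketch of that first check, assuming each sheet can be loaded as a DataFrame with a “defendant” column (a hypothetical label, not the actual spreadsheet header):

```python
import pandas as pd

def already_included(name: str, sheets: list[pd.DataFrame]) -> bool:
    """Return True if a defendant already appears in any tPP sheet."""
    needle = name.strip().lower()
    for sheet in sheets:
        # Exact match after normalizing case and whitespace; the real
        # workflow would also need fuzzy matching for aliases and
        # middle names.
        if (sheet["defendant"].str.strip().str.lower() == needle).any():
            return True
    return False

# sheets = [final_dataset, *in_progress_sheets]
# new_starters = [n for n in msd_names if not already_included(n, sheets)]
```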

This search yielded confirmatory information on 4 cases, and 9 likely new case starters. These 9 cases will be investigated by coding teams. They will be worked through the inclusion/exclusion decision tree, and if they pass, entered into the team workflow.

Not Your Stereotypical Terrorism


This continues our series of student reflections and analysis authored by our research team.


Not Your Stereotypical Terrorism

Sarah Carrier

It is a regular day; you are drinking your morning coffee when suddenly your phone receives a news alert. The title reads something like, “Another Terrorist Attack Strikes America”. You think to yourself, “Great, another ISIS attack, when will our government get it together and get rid of them?” However, you are mistaken. The attack was not performed by a member of ISIS, but rather by a member of the Aryan Brotherhood (white supremacists). Or perhaps it was performed by an environmental activist against a construction site. The truth is, many people who receive a news alert like the one mentioned above will only read the title, not the story itself, and assume facts based on prior knowledge or prior prejudices. A recent study done by the Media Insight Project claims that only 25% of participants went in-depth on breaking news, and 30% said they do not go in-depth on news stories. Readers can find out more about the study here.

There is a huge issue of disinformation being spread to American news consumers. Either people watch a news outlet that shares their views and does not give all the facts, or they do not bother to go further than the headlines. As a result, there is a common misconception when it comes to terrorism. Many Americans believe terrorism looks like 9/11: large-scale destruction, loss of life, committed by a Muslim, a member of ISIS, or al-Qaeda. However, the definition of terrorism is broader than that; there is both international and domestic terrorism. Drawing on Title 18 of the U.S. Code, the FBI defines terrorism as follows:

“International terrorism: Violent, criminal acts committed by individuals and/or groups who are inspired by, or associated with, designated foreign terrorist organizations or nations (state-sponsored). Domestic terrorism: Violent, criminal acts committed by individuals and/or groups to further ideological goals stemming from domestic influences, such as those of a political, religious, social, racial, or environmental nature.”

Any violent crime with the goal of influencing politics, or of attacking a particular group defined by, say, religion or race, can be a terrorist act. It is not only Salafi/jihadist/Islamist actors who commit foreign terrorism; left- and right-wing groups commit domestic terrorism as well. However, it is more common to see Alt-Right groups, for example white supremacists, perform an act of terrorism. A study done by the news site Quartz, using the Global Terrorism Database, found that in 2017 around ⅔ of terrorist acts in America were committed by right-leaning groups. The study and a news article about the study can be found here.

In the story of the Three Little Pigs, it is hard to imagine the big bad wolf who huffed and puffed and blew their house down as anything other than a wolf. But what if another animal came along who wanted the pigs? What if it was a bear, or a lion, or even a dog? It is hard to imagine the story we have been hearing since childhood any other way. The same can be said of terrorism. Since childhood, many Americans have been told the big bad wolf is ISIS or al-Qaeda. This was acceptable to Americans: those groups are far away and do not affect everyday life. It is much harder to change the narrative; to believe the wolf is not a wolf at all, but a dog who lives in your backyard. This is a much scarier reality, one many Americans do not want to live with. However, this truth about terrorism must be accepted. For example, in Charlottesville, James Alex Fields, a member of the Alt-Right, drove into a crowd, killing one person and injuring many others. This criminal act was described by the United States as terrorism. This is a large and well-known story, but there are many smaller stories about domestic terrorism that never make national news. So, next time you are drinking your morning coffee, click on the headline and read the full article.

References

Cillizza, Chris. “Americans Read Headlines. And Not Much Else.” Washington Post. Accessed November 1, 2019. https://www.washingtonpost.com/news/the-fix/wp/2014/03/19/americans-read-headlines-and-not-much-else/.

Federal Bureau of Investigation. “Terrorism.” Accessed November 1, 2019. https://www.fbi.gov/investigate/terrorism.

Southern Poverty Law Center. “Study Shows Two-Thirds of U.S. Terrorism Tied to Right-Wing Extremists.” Accessed November 1, 2019. https://www.splcenter.org/hatewatch/2018/09/12/study-shows-two-thirds-us-terrorism-tied-right-wing-extremists.

On tPP’s 3-step Verification Process


This continues our series of student reflections and analysis authored by our research team.


On tPP’s 3-step Verification Process

Izzy Bielamowicz

More so than in the past, this semester I have come to realize the importance of tPP’s verification process. Throughout my time as a member of tPP, I always viewed multi-person verification as useful, but it was not until this semester, when I became a member of the Steering Committee, that I finally understood how crucial the 3-step process is.

Arguably one of tPP’s most critical methodological practices is that before any completed case is moved into the official dataset, it is checked for accuracy by at least 3 separate coders. First, a case is “claimed” by a coding pair in the team spreadsheet, which is accessible to all team members, by coloring its row green. Once the coding pair has claimed a case, they begin the coding process independently of one another. Coding a case individually allows each coder to process and interpret the facts of the case on their own, based on the guidelines of the codebook.

Once the coders have independently coded a case, they meet to discuss their rationale behind each variable. If the coders have arrived at the same conclusions for every variable based on the facts of the case and the guidelines of the codebook, it is likely that they have correctly coded the case. Should the coders run into a variable which they have coded differently, there is opportunity for valuable discussion. In my experience, it is often the case that one of the coders has missed information within the source files or has misinterpreted the codebook. In other instances, however, dissent stems from inconsistent understandings of the material. The variables that most often invite dissent are ideological target, ideological affiliation, and tactic. This may be attributed to different assumptions drawn from the known facts of the case, and to the inevitable ambiguity of certain cases regarding ideology and plot. Should disagreement occur, the coders reanalyze the case and agree upon the most suitable code. Once the coding pair has completed their case, whether dissent arose or not, they move the completed case back into the team spreadsheet, coloring it white.
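
The comparison step lends itself to a simple illustration. Below is a minimal sketch, with made-up variable names and values, of how two coders’ independent codings could be diffed to surface exactly the variables that need discussion:

```python
def variables_in_dissent(coder_a: dict[str, str],
                         coder_b: dict[str, str]) -> list[str]:
    """Return the variables the two coders coded differently."""
    # Assumes both coders filled in the same variable set from the codebook.
    return [var for var in coder_a if coder_a[var] != coder_b.get(var)]

a = {"ideological_target": "religious", "tactic": "firearm", "plea": "guilty"}
b = {"ideological_target": "racial", "tactic": "firearm", "plea": "guilty"}
print(variables_in_dissent(a, b))  # ['ideological_target'] -> discuss, re-code
```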

Once a case is completed by a coding pair and moved back into the team spreadsheet, it is reviewed again by a member of the Steering Committee, the Auditor. Going through the review process for the first time this semester was the experience I needed to fully grasp the importance of tPP’s verification process. Assigned 10 cases to review, I began by opening the source files for each case and cross-checking the information I gleaned against the coding which had already been done. As I checked, I realized that the initial coding pair had made mistakes. I proceeded to correct the errors and transfer the updated cases to the official dataset. Without a third review of the cases I had been assigned, there would have been inaccurate codes in tPP’s dataset, decreasing the validity of the project as a whole.

Ultimately, the 3-step verification process tPP follows ensures the legitimacy of the data. While most cases emerge from pair coding without errors for a third review to correct, it is critical that every case follow the same process to maintain the strength of the data. The use of coding pairs guarantees that variables which necessitate discussion are discussed before a decision is made. Furthermore, because tPP’s membership rotates, a third review by veteran team members on the Steering Committee secures the accuracy of the codes. While not unique to tPP, the multi-person verification process the project follows enhances the credibility and effectiveness of the data.

Bias in Coding: How Precise Variables Lead to Unbiased Results


This continues our series of student reflections and analysis authored by our research team.


Bias in Coding: How Precise Variables Lead to Unbiased Results

Stephanie Sorich

As monotonous as coding can sometimes feel, it is also in these repetitive moments that I end up finding the cases that kick me out of my coding groove. It can feel like inputting meaningless values into a spreadsheet, until a value stops you in your tracks and makes you realize you were making assumptions about a case based on the combination of values before you. Alone, values are nothing more than units on a sheet; together, they spell out the story of a case.

Take, for instance, the case of Farhan Sheikh, a 19-year-old male arrested in Chicago for threatening to attack a women’s clinic in protest of abortion. The action itself is rightist in nature; however, with a non-Germanic name, the narrative does not seem to align with the biases we are conventionally used to about such crimes and who commits them. As I filled out Sheikh’s biographical information (name, case ID, and city and state of the crime), I had created in my mind an idea of what this case was about. It was based on previously coded cases, so to a degree my assumptions were guided by some kind of logic. However, coders getting ahead of themselves poses a danger to the objective nature of coding each case. Gliding over one factor by making an assumption could badly misdirect the reading of a case.

Of course, the idea of stereotyping a situation is nothing new. However, ways around coders’ preconceived notions of crimes and terror can be more complicated. Forcing a coder to break a case down into its smallest possible components compels us to break apart our own assumptions, and therefore notice the very minute details that make this project unique. Terror and crime are complicated; our understanding of a case can only be as complete as our ability to code it accurately.

Since I joined the project just a few short months ago, things have constantly changed. New variables for coding have been created (and old cases re-coded), and variables have grown to include more potential values for coders to select. Changes in coding variables can mean the difference in the type of crime recorded, how the defendant identifies him or herself, and how the government identifies the defendant: all pieces essential to understanding the facts of a case, and even more essential to analyzing it.

Putting a name to a crime (as in Farhan Sheikh’s case) is on the simpler end of the spectrum, and I was still personally taken aback by the idea that it “didn’t fit.” Defining our variables means that I do not define a Christian defendant, the coding manual does; I do not define a State Speech Act, the coding manual does. A consensus this strong keeps each coder on the same path to a concrete understanding of how each case, no matter how different the circumstances, can be compared with the others.

My own goal going forward is to focus on each case one cell in the spreadsheet at a time. There are textbook cases, of course, but there will also continue to be cases that surprise. I also do not want to fear the cases that may not fit the exact mold: in fact, I want to embrace them. Every case that causes us to pause and consider the way we function as a team forces us to be better in our techniques and more critical of our own choices. At the early stage I am in, the more critical I can be of myself and my participation, the better.

For more information on Farhan Sheikh’s case: https://www.washingtonpost.com/nation/2019/08/20/online-violent-threat-meme-site-chicago/

 

Paying for Court Documents – An Infringement of Rights?


This continues our series of student reflections and analysis authored by our research team.


Paying for Court Documents – An Infringement of Rights?

Morgan Demboski

When researching cases for the project, court documents are the jackpot of source files. I always feel a tiny bit of excitement when I am able to find a free indictment or plea document online. However, I think my excitement stems from the sad truth that court records are, for the most part, really difficult to acquire, whether due to outdated and disorganized databases or to required payments for searches and downloads. Whether I am searching for a federal case on a website like PACER, or for a state case on a district or court website, there are often many obstacles I must face in order to collect the records I need. This has made me think about how difficult it really is to access source documents, and about whether we should even be required to pay for court documents at all.

Since most of the cases we add to our database fall under federal jurisdiction, I will begin with the issues and complications of accessing federal records. The tPP team often looks to PACER, or Public Access to Court Electronic Records, to collect court documents; however, only one or two members of our team have access to the site because of the membership costs. PACER holds more than 300 million documents from over 90 district courts and 13 appellate circuits, which is basically a goldmine to an over-excited researcher, but you have to pay “$30 for the search, plus $0.10 per page per document delivered electronically, for up to 5 documents (30 page cap applies)” (Hughes; PACER). Calling the program Public Access to Court Electronic Records is a little misleading, because the public cannot access the documents as they wish, and those who cannot afford the fees are unfairly disadvantaged. What is even worse is that a lot of the money PACER users pay to search names and access documents is not even spent on PACER itself; a good bulk goes into other areas, such as paying for technology for courtrooms (Browdie).
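
To make the quoted fee schedule concrete, here is a back-of-envelope calculator using only the figures quoted above; it is a sketch of those numbers, not of PACER’s full billing rules.

```python
SEARCH_FEE = 30.00   # flat fee per search, per the quoted schedule
PER_PAGE = 0.10      # per page, per document delivered electronically
MAX_DOCS = 5         # documents covered by one request
PAGE_CAP = 30        # pages billed per document are capped at 30

def request_cost(page_counts: list[int]) -> float:
    """Cost of one search plus electronic delivery of the listed documents."""
    if len(page_counts) > MAX_DOCS:
        raise ValueError(f"one request covers at most {MAX_DOCS} documents")
    billable_pages = sum(min(pages, PAGE_CAP) for pages in page_counts)
    return SEARCH_FEE + PER_PAGE * billable_pages

# e.g. a 12-page indictment, a 9-page plea agreement, and a 60-page docket
# (billed at the 30-page cap): 30 + 0.10 * (12 + 9 + 30) = $35.10
print(f"${request_cost([12, 9, 60]):.2f}")
```
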
In a list compiled by Ally Jarmanning of Boston University’s WBUR, I was able to view the multitude of court and district websites containing court records for each state in the U.S. By doing this, I got an idea of just how many states require a fee to search, access, or download court cases. There are many state court cases I can access for free, such as those in Arizona, Connecticut, and New Mexico; however, these states differ in the number of cases or courts available, the amount of information they provide, and the documents they allow users to view. For example, the Superior Court of Arizona in Maricopa County provides case information but does not provide any court documents.

There are also states in which some courts and districts are accessible for free and others are not. In California, different areas have different court sites, and more than five of those courts charge around $1 per page. In Kansas, the district courts charge $1.50 per search, even if the search comes up empty; however, Supreme Court and Court of Appeals records are free. Then there are states in which documents and information are not accessible at all unless a fee is paid. In Alabama, it costs $10 per name or case. In Colorado, a search costs $2 for Denver only, $5 for everywhere but Denver, and $7 for a combined statewide search, charged even if you come up with nothing. Furthermore, the Colorado site does not provide online access to documents, only basic information.

In 1966, President Johnson signed into law the Freedom of Information Act, which protects any person’s right to request access to federal records or information. Though there are some exceptions, such as court information concerning juveniles or mental health commitments, court records generally fall within the realm of information available to the public. Therefore, I am led to ask: why are we spending hundreds of dollars on documents that technically should be free?

Works Cited:
Browdie, B. (2016, October 14). PACER fees for US court documents are facing a legal challenge of their own [News Outlet]. Retrieved October 31, 2019, from Quartz website: https://qz.com/800076/the-cost-of-electronic-access-to-us-court-filings-is-facing-a-major-legal-test-of-its-own/
Court Records and Proceedings: What is Public and Why? (2017, April 18). Retrieved October 30, 2019, from Connor Reporting website: https://connorreporting.com/court-records-proceedings-public/
Criminal Court Case Information—Case History [Government]. (n.d.). Retrieved October 30, 2019, from The Judicial Branch of Arizona Maricopa County website: http://www.superiorcourt.maricopa.gov/docket/CriminalCourtCases/caseInfo.asp?caseNumber=CR2010-142511
Hughes, S. (2019, March 20). The Federal Courts Are Running An Online Scam [News Outlet]. Retrieved October 30, 2019, from POLITICO Magazine website:
PACER – Frequently Asked Questions [Administrative Office of the U.S. Courts]. (n.d.). Retrieved October 30, 2019, from PACER website: https://www.pacer.gov/psc/faq.html

 

Suck an Answer out of your Thumb: A Scientific CIA Analysis of Homegrown Violent Extremism


This continues our series of student reflections and analysis authored by our research team.


Suck an Answer out of your Thumb: A Scientific CIA Analysis of Homegrown Violent Extremism

Zion Miller

tPP’s mission is to explore the relationship between how a crime occurred, who the perpetrator was, and why it was committed. To help us answer these questions, we determine what factors to study, accumulate data, and centralize it within our database for easy access. But a problem arises from having so much data on so many cases: how do we go about utilizing this information to answer our fundamental questions? This can be a difficult task, as explained by McGlynn and Garner in their 2019 book, Intelligence Analysis Fundamentals.

Even if you carefully read through all the collected information, as important as that is, it is not enough. Seldom does the correct answer jump out from the collected data. The answers are fragmented and scattered like parts of a 10,000-piece puzzle with half the pieces missing or damaged. Using the collected information, some of which was to be exploited first by translation or other technical means, to be properly understood, the analyst examines each piece of data individually and as a whole to create a meaningful and accurate assessment.

There are several methods of analysis described by the authors for assessing data, collectively referred to as structured analytic techniques (SATs). These range from the simplistic and obvious, such as reading about a subject and then thinking hard about a problem, to the more systematic and professional, such as social network analysis, SWOT analysis, geospatial analysis, and others. Within all of these methods, checking biases and assumptions is a key part of the process.

One of the most common points of analysis in tPP comes when we check whether a case should be excluded from our database on the basis of ideological affiliation and target. For a case to be included, the crime must be committed in furtherance of terrorism, extremism, or political violence (i.e., have a socio-political motive). This is where careful analysis must occur, because not all crimes carried out by extremists or terrorists are committed in furtherance of their ideology. Sometimes a white nationalist just feels like vandalizing a store, and it happens to be a black-owned business. This would not qualify for inclusion under our definitions. However, if the vandalism occurred because it is a black-owned business, then it likely would be included. The difficulty in the distinction arises from imperfect information. We may not know the exact thoughts running through the criminal’s mind, they may not have explained themselves, or the case may simply be too poorly covered for us to get everything we need. When this problem of imperfect information arises, it is key that we do as McGlynn and Garner recommend and check our biases and assumptions when performing our analysis.
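
One way to make that distinction explicit is sketched below, narrowed to the scenario just described: a case involving a known extremist is included only when the act itself is tied to the ideology, and imperfect information is flagged for further review rather than guessed at. The field names and tri-state logic are illustrative, not tPP’s actual decision tree.

```python
from enum import Enum
from typing import Optional

class Decision(Enum):
    INCLUDE = "include"
    EXCLUDE = "exclude"
    NEEDS_REVIEW = "needs review"

def inclusion_check(offender_is_extremist: bool,
                    act_furthers_ideology: Optional[bool]) -> Decision:
    """act_furthers_ideology is None when sources don't establish motive."""
    if not offender_is_extremist:
        return Decision.EXCLUDE
    if act_furthers_ideology is None:
        # Imperfect information: check biases and assumptions, dig further.
        return Decision.NEEDS_REVIEW
    return Decision.INCLUDE if act_furthers_ideology else Decision.EXCLUDE

# The vandalism example: a store that merely happens to be black-owned vs.
# a store targeted because it is black-owned.
print(inclusion_check(True, False))  # Decision.EXCLUDE
print(inclusion_check(True, True))   # Decision.INCLUDE
```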

A recent set of cases made this obvious to me. “Operation Blackjack” was a long-running narcotics investigation in Florida that resulted in the arrests of 39 white nationalist gang members from the “Unforgiven” and the “United Aryan Brotherhood” on various drug and firearm trafficking charges. One of our qualifications for inclusion is that the crime must support a homegrown violent extremist (HVE) network. The question for these cases was whether the drug and weapons charges related to the support of the gangs, or whether they simply reflected gang members acting for their own benefit. Ultimately, I chose to code the cases as included, concluding that the gang members were acting to support their gangs’ activities. This was based on one of the SATs described by McGlynn and Garner, the “CIA” method, where I read as much of the case as I could, thought about it, sucked an answer out of my thumb, and wrote it down in as crisp a manner as possible. I wish it were a more scientific method, but unfortunately, that is the best we can do sometimes with limited information.

As the authors suggest, I was forced to go through a variety of data sources to piece the puzzle together, namely DOJ reports, news articles, and indictments. As I read through the material, I quickly noticed that the ATF, a federal agency, was involved. It is unlikely that it would be involved in a case of individuals dealing drugs on a non-institutionalized scale. Second, the DOJ report on the case opened by stating that those arrested were white nationalist gang members. It is, once again, unlikely for this information to be so prominent unless it was relevant to the investigation. Third, several members were convicted of intent to distribute over 500 grams of methamphetamine. The scale of the operation makes it, once more, unlikely to have been small-scale, unorganized crime. Together, these factors led me to believe there was sufficient evidence to include these cases in tPP under providing material support to an HVE network.

Works Cited:

McGlynn, Patrick, and Godfrey Garner. Intelligence Analysis Fundamentals. Boca Raton: CRC Press, Taylor & Francis Group, 2019.

On Codes, Code Books and Coding


This continues our series of student reflections and analysis authored by our research team.


On Codes, Code Books and Coding

Margaret Kolozsvary

Within the Prosecution Project, the team goes through many steps and processes to be sure that the information we provide is thorough and accurate, as well as compliant with the rules we hold in our own Code Book and Manual. Our Code Book, written by members of our team, aids us in the process of coding cases into our data set. The code book is approximately eighteen pages long, and all of its information is crucial to coding qualitative research. In his Coding Manual for Qualitative Researchers, Johnny Saldaña explains the three primary purposes of a coding manual. These purposes are as follows:

to discuss the functions of codes, coding, and analytic memo writing during the qualitative data collection and analytic processes; to profile a selected yet diverse repertoire of coding methods generally applied in qualitative data analysis; and to provide readers with sources, descriptions, recommended applications, examples, and exercises for coding and further analyzing qualitative data. (Saldaña, 2015)

Not only is extensive coding necessary for a research team to produce concise and accurate data, but it also allows for optimal understanding of a case when the research is presented. While code books differ depending on the research being undertaken, they all serve the same purposes. This is why understanding qualitative research approaches is important, and why it helps our team to read authors such as Saldaña: we gain a better grasp of the true importance of a clear code.

As for the Prosecution Project’s code book, ours is broken into many different sections, beginning with the date of the criminal charge, continuing through why the case was included in our data set, and ending with a description of the source from which we pulled our data. These are the processes by which we “codify” a crime; according to Saldaña, “to codify is to arrange things in a systematic order, to make something part of a system or classification” (Saldaña, 2015). Understanding this arrangement is crucial for the Prosecution Project: we have a vast spreadsheet with thousands of cells, each holding different information about a specific crime, which lets us easily pinpoint a fact about a case.
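
As an illustration of that arrangement, here is a toy record mirroring the codebook’s order, from charge date through source. The field names and values are hypothetical stand-ins, not the actual codebook variables:

```python
from dataclasses import dataclass

@dataclass
class CodedCase:
    date_of_charge: str        # the codebook's opening section
    reason_for_inclusion: str  # why the case belongs in the data set
    charges: list[str]         # criminal charges filed
    plea: str                  # defendant's plea to those charges
    source: str                # closing section: where the data came from

example = CodedCase(
    date_of_charge="YYYY-MM-DD",
    reason_for_inclusion="furthers political violence",
    charges=["placeholder charge"],
    plea="not guilty",
    source="court records / news coverage",
)
```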

At any point while coding a crime, it is possible to realize that a case must be dropped because it fails to meet all criteria. This often occurs when coding “reason for inclusion” or “charge”, where we can run into problems such as a crime that was not actually motivated by political or ideological aims, or a defendant who was never charged, which can happen when the suspect dies.

In addition to all of the requirements stated in our code book for adding a case to our data set, it is very important that coders building the data set work with another team member, so that there are two sets of eyes on each case; if one coder misses an important fact, the other is likely to catch it. Successful and meaningful qualitative research would not be possible without coding manuals such as the Prosecution Project’s, which guide those working in this form of research and allow for distinct and accurate results.

Works Cited

Saldaña, Johnny. The Coding Manual for Qualitative Researchers. 3rd ed. Los Angeles, CA: SAGE Publications, 2015. [Excerpt, Chapter 1: “An Introduction to Codes and Coding”]

Coding and Starting Cases


This continues our series of student reflections and analysis authored by our research team.


Coding and Starting Cases

Courtney Faber

This week our class took some steps toward learning how to code cases for the project. A few of the students in our class have done this before, but many have not, including myself. The very minimal experience I have with coding comes from one assignment in a different class with Dr. Loadenthal, where I coded and analyzed the language in a Nike ad. In sum, project members have various levels of experience with this kind of work.

In one of our weekly readings, it was explained: “In larger and complete datasets you will find that several to many of the same codes will be used throughout. This is both natural and deliberate because one of the coder’s main goals is to find these repetitive patterns of action and consistency in human affairs as documented in the data.” (Saldana 2015, 6)

The project has a team spreadsheet where we code certain information about a case that the project has, over time, deemed relevant to the dataset. One thing I found interesting is that we code for the veteran status of the crime’s perpetrator(s), and I actually found a couple of cases within a short period where the assailants were American veterans in some fashion. Relevant to the Saldana piece is that, along the way, the project identified socio-political violence committed by veterans as a repetitive and consistent pattern, and therefore one we continually capture in the data set. This is especially interesting to me because I tend to think of veterans as a group that would defend against evils like violence in the name of a socio-political motive. However, the patterns in the data must argue against this logic, or else veteran status would not be recognized by the project as an important variable or coded in the team spreadsheet.

This week we also looked at how to start new cases, that is, how to add a new row of data to our team spreadsheet that will later be coded. This was my first time doing this task as well, though it proved to be fun. To start a new case, you go into the team drive and read one of the many files containing information on a certain case. Once you have a good idea of the substance of the case, you do more research, using court documents and news sources to find the specific variables we code for in the team spreadsheet, of which there are many: the charges facing the defendant, the defendant’s plea to those charges, the defendant’s known aliases, and so on. I find this activity to sometimes be challenging for various reasons.

In my opinion, some variables are particularly easy to locate, like the defendant’s plea, because available court documents or other resources often list them in a straightforward manner. However, finding the information you need for other codes can be difficult: depending on the type of case and the variable you are searching for, you may have to take a holistic approach and analyze all the sources and data you have compiled together. An example of this is the coded variable ‘reason for inclusion’.

One of the major critical lenses through which to look at a case, when determining whether it should be included in our data set, is whether it conforms to one of two major prototypes: 1) the act that occurred in the case furthers political violence, extremism, or terrorism; or 2) the state (the US government) in some fashion explicitly associated the actions in the case with extremism, or with pushing a message, usually a violent one, in pursuit of a distinctive ideology. The reason I believe this can be hard to assess is that it is up to the team member to determine whether something was a state speech act, or whether something was really in furtherance of political violence; these are often subjective or disputed calls among various interpretations. With practice, though, this gets easier to decipher.
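
Stated as code, the two prototypes might look like the sketch below. The boolean inputs stand in for exactly the holistic, sometimes disputed judgment calls described above, so the function only makes the structure of the test explicit, not the hard part:

```python
from typing import Optional

def reason_for_inclusion(furthers_political_violence: bool,
                         state_speech_act: bool) -> Optional[str]:
    """Return the prototype a case satisfies, or None if it fits neither."""
    if furthers_political_violence:
        return "furthers political violence, extremism, or terrorism"
    if state_speech_act:
        return "state explicitly tied the act to extremism or ideology"
    return None  # fits neither prototype: a candidate for exclusion

print(reason_for_inclusion(False, True))
```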

Relatedly, in Rich, Brians, Manheim, and Willnat’s chapter in Empirical Political Analysis, there is discussion of the fact that some data is harder to find and takes more extensive work to access than other data (2018, 211). I have found this to be true when searching for our data: high-profile cases (like a 9/11-related case I analyzed previously) often have far more sources, which are easier to access, than, say, a lower-profile or older case, whose records are harder to dig for and may not turn up any results at all. For instance, I had a significantly easier time finding data on a 9/11-related attack than on an abortion-related extremist case that occurred in Wisconsin in 1999. Little things like these can make some data harder to get to than other data, as Rich et al. noted.

Works Cited

Saldana, Johnny. “Chapter 1: An Introduction to Codes and Coding.” The Coding Manual for Qualitative Researchers, 3rd ed., Sage Publications, 2015.

Brians, Craig, et al. “Chapter 12: Comparative Methods: Research Across Populations.” Empirical Political Analysis: Quantitative and Qualitative Research Methods, 9th ed., Routledge, 2018.

New Variable: Hate Crimes

Since tPP began, we have noticed a rapid increase in defendants being charged with hate crimes. In some cases, hate crimes are used as ‘enhancements’ to other charges, while in other cases, such a designation represents a rhetorical attempt to label the crime as bias-motivated.

In order to help capture this emerging reality, the tPP team has added a new variable recording whether or not a case carries the hate crime designation. After much discussion, debate, and consultation with scholars and Advisory Board members, we decided that if a case has a hate crime designation, then that designation automatically establishes, as far as the government is concerned, that the crime was motivated by a socio-political aim.
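
Stated as code, the rule amounts to the small sketch below; the variable names are illustrative, not the Code Book’s actual labels:

```python
def satisfies_motive_requirement(hate_crime_designation: bool,
                                 other_motive_evidence: bool) -> bool:
    # A hate crime charge or enhancement is treated as the government's own
    # confirmation of a socio-political (bias) motive, so it suffices alone.
    return hate_crime_designation or other_motive_evidence
```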

These changes have been included in the latest release of our Code Book.

Many thanks to tPP Steering Team member Katie Blowers for work on this.