Suck an Answer out of your Thumb: A Scientific CIA Analysis of Homegrown Violent Extremism


This continues our series of student reflections and analysis authored by our research team.


Zion Miller

tPP’s mission is to explore the relationship between how a crime occurred, who the perpetrator was, and why it was committed. To help answer these questions, we determine which factors to study, accumulate data, and centralize it within our database for easy access. But having so much data on so many cases creates a problem: how do we actually use this information to answer our fundamental questions? This can be a difficult task, as McGlynn and Garner explain in their book, “Intelligence Analysis Fundamentals” (2019).

Even if you carefully read through all the collected information, as important as that is, it is not enough. Seldom does the correct answer jump out from the collected data. The answers are fragmented and scattered like parts of a 10,000-piece puzzle with half the pieces missing or damaged. Using the collected information, some of which must first be exploited by translation or other technical means to be properly understood, the analyst examines each piece of data individually and as a whole to create a meaningful and accurate assessment.

The authors describe several methods for assessing data, collectively referred to as structured analytic techniques (SATs). These range from the simple and intuitive, such as reading about a subject and then thinking hard about the problem, to the more systematic and professional, such as social network analysis, SWOT analysis, and geospatial analysis. Across all of these methods, checking biases and assumptions is a key part of the process.

One of the most common points of analysis in tPP is checking whether a case should be excluded from our database on the basis of ideological affiliation and target. For a case to be included, the crime must be committed in furtherance of terrorism, extremism, or political violence (i.e., have a socio-political motive). This is where careful analysis must occur, because not all crimes carried out by extremists or terrorists are committed in furtherance of their ideology. Sometimes a white nationalist just feels like vandalizing a store, and it happens to be a black-owned business. This would not qualify for inclusion under our definitions. However, if the vandalism occurred because it is a black-owned business, then it likely would be included. The difficulty in the distinction arises from imperfect information. We may not know the exact thoughts running through the criminal’s mind, they may not have explained themselves, or the case may simply be too poorly covered for us to find everything we need. When this problem of imperfect information arises, it is key that we follow McGlynn and Garner’s recommendation and check our biases and assumptions when performing our analysis.

A recent set of cases made this obvious to me. “Operation Blackjack” was a long-running narcotics investigation in Florida that resulted in the arrests of 39 white nationalist gang members from the “Unforgiven” and the “United Aryan Brotherhood” on various drug and firearm trafficking charges. One of our qualifications for inclusion is that the crime must support a homegrown violent extremist (HVE) network. The question for this case was whether the drug and weapons charges were related to the support of the gangs, or whether they simply reflected gang members acting for their own benefit. Ultimately, I coded the case as included, judging that the gang members were acting to support their gangs’ activities. This was based on the SAT method described by McGlynn and Garner as the “CIA” method: I read as much about the case as I could, thought about it, sucked an answer out of my thumb, and wrote it down in as crisp a manner as possible. I wish it were a more scientific method, but unfortunately, that’s sometimes the best we can do with limited information.

As the authors advise, I had to go through a variety of data sources to piece the puzzle together, namely DOJ reports, news articles, and indictments. As I read through the material, I quickly noticed that the ATF, a federal agency, was involved. It is unlikely that it would be involved in a case of individuals dealing drugs on a small, non-institutional scale. Secondly, the DOJ report on the case led by stating that those arrested were white nationalist gang members; it is, once again, unlikely for this information to be so prominent unless it was relevant to the investigation. Third, several members were convicted of intending to distribute over 500 grams of methamphetamine; the scale of the operation makes it, a final time, unlikely that this was small-scale, unorganized crime. Together, these factors led me to conclude that there was sufficient evidence to include these cases in tPP as providing material support for an HVE network.
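My three “unlikely” judgments amount to an informal indicator checklist, which can be made explicit. Below is a minimal Python sketch of that reasoning; the indicator names and the two-of-three threshold are my own hypothetical illustration, not part of tPP’s actual tooling.

```python
# Hypothetical checklist mirroring the indicators discussed above.
# Each indicator suggests organized, network-supporting crime rather
# than individuals dealing for their own benefit.
INDICATORS = {
    "federal_agency_involved": "ATF participation implies institutional scale",
    "ideology_led_in_doj_report": "DOJ foregrounded white nationalist affiliation",
    "large_quantity_trafficked": "500+ grams of methamphetamine implies organization",
}

def supports_hve_network(observed: set[str], threshold: int = 2) -> bool:
    """Return True when enough indicators point toward the crime
    supporting an HVE network rather than individual benefit."""
    hits = [name for name in INDICATORS if name in observed]
    return len(hits) >= threshold

# All three indicators were present in the Operation Blackjack cases.
print(supports_hve_network(
    {"federal_agency_involved", "ideology_led_in_doj_report",
     "large_quantity_trafficked"}
))  # True
```

Writing the checklist down this way is one small hedge against the biases and assumptions McGlynn and Garner warn about: each “unlikely” judgment is exposed and can be challenged by another coder.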

Works Cited:

McGlynn, Patrick, and Godfrey Garner. Intelligence Analysis Fundamentals. Boca Raton: CRC Press, Taylor & Francis Group, 2019.

On Codes, Code Books and Coding


This continues our series of student reflections and analysis authored by our research team.


Margaret Kolozsvary

Within the Prosecution Project, the team goes through many steps and processes to make sure that the information we provide is thorough, accurate, and compliant with the rules set out in our own Code Book and Manual. The Code Book, written by members of our team, aids us in the process of coding cases into our data set. It is approximately eighteen pages long, and all of its information is crucial to coding qualitative research. In his Coding Manual for Qualitative Researchers, Johnny Saldaña explains the three primary purposes of a coding manual, which are as follows:

to discuss the functions of codes, coding, and analytic memo writing during the qualitative data collection and analytic processes; to profile a selected yet diverse repertoire of coding methods generally applied in qualitative data analysis; and to provide readers with sources, descriptions, recommended applications, examples, and exercises for coding and further analyzing qualitative data. (Saldaña, 2015)

Not only is extensive coding necessary for a research team to produce concise and accurate data, it also allows for optimal understanding of a case when the research is presented. While code books differ depending on the research being undertaken, they all serve the same purposes. This is why understanding qualitative research approaches is important, and why it helps our team to read authors such as Saldaña: we gain a better grasp of the true importance of a clear code.

As for the Prosecution Project’s code book, ours is broken up into many different sections, beginning with the date of the criminal charge, continuing through why the case was included within our data set, and ending with a description of the source from which we pulled our data. These are the processes by which we “codify” a crime; according to Saldaña, “to codify is to arrange things in a systematic order, to make something part of a system or classification” (Saldaña, 2015). Understanding this arrangement of specific data is crucial for the Prosecution Project. We have a vast spreadsheet with thousands of cells, each holding different information about a specific crime, which allows us to easily pinpoint a fact about a case.
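To make this arrangement concrete, here is a minimal sketch of what one coded row might look like as a Python structure. The field names are illustrative stand-ins; the actual Code Book defines the authoritative variable list.

```python
from dataclasses import dataclass

@dataclass
class CodedCase:
    """One row of the tPP spreadsheet (illustrative fields only)."""
    date_of_charge: str        # the first section of the Code Book
    defendant_name: str
    charges: list[str]
    reason_for_inclusion: str  # why the case belongs in the data set
    source: str                # the last section: where the data came from

case = CodedCase(
    date_of_charge="1999-05-01",
    defendant_name="Jane Doe",
    charges=["arson"],
    reason_for_inclusion="crime in furtherance of political violence",
    source="DOJ press release",
)
```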

At any point while coding a crime, it is possible to realize that a case must be dropped because it fails to meet all criteria. This often occurs when coding “reason for inclusion” or “charge,” where we can run into problems such as the crime not being motivated by political or ideological violence, or the defendant never having been charged, which can happen when the suspect dies.
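That drop decision can also be stated as a simple rule. The sketch below reuses the hypothetical CodedCase fields from the previous sketch and mirrors the two failure modes just described; it is an illustration, not our actual workflow.

```python
def should_drop(case: CodedCase) -> tuple[bool, str]:
    """Flag a case for removal if it fails a core inclusion criterion."""
    if not case.reason_for_inclusion:
        # The crime was not motivated by political or ideological violence.
        return True, "no socio-political motive established"
    if not case.charges:
        # The defendant was never charged, e.g. because the suspect died.
        return True, "defendant was never charged"
    return False, ""
```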

In addition to all of the requirements stated in our code book for adding a case to our data set, it is very important that our coders work in pairs when building the data set, so that there are two sets of eyes on every case; if one coder misses an important fact, the other is likely to catch it. Successful and meaningful qualitative research would not be possible without code manuals such as the Prosecution Project’s, which guide those working within this form of research and allow for distinct and accurate results.
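The two-coder safeguard can even be mechanized as a field-by-field comparison. A minimal sketch, assuming (purely for illustration) that each coder’s work is stored as a dictionary of variable/value pairs:

```python
def coding_disagreements(coder_a: dict, coder_b: dict) -> dict:
    """Return every variable the two coders filled in differently,
    so a reviewer can resolve the conflicts."""
    shared = coder_a.keys() & coder_b.keys()
    return {var: (coder_a[var], coder_b[var])
            for var in sorted(shared) if coder_a[var] != coder_b[var]}

coder_1 = {"plea": "guilty", "veteran_status": "veteran"}
coder_2 = {"plea": "guilty", "veteran_status": "non-veteran"}
print(coding_disagreements(coder_1, coder_2))
# {'veteran_status': ('veteran', 'non-veteran')}
```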

Works Cited

Saldaña, Johnny. The Coding Manual for Qualitative Researchers. 3rd ed. Los Angeles, CA: SAGE Publications, 2015. [Excerpt, Chapter 1: “An Introduction to Codes and Coding”]

Coding and Starting Cases


This continues our series of student reflections and analysis authored by our research team.


Courtney Faber

This week our class took some first steps toward learning how to code cases for the project. A few of the students in our class have done this before, but many, including myself, have not. The very minimal experience I have with coding comes from one assignment in a different class with Dr. Loadenthal, where I coded and analyzed the language in a Nike ad. In sum, project members have various levels of experience with this kind of work.

In one of our weekly readings, it was explained: “In larger and complete datasets you will find that several to many of the same codes will be used throughout. This is both natural and deliberate because one of the coder’s main goals is to find these repetitive patterns of action and consistency in human affairs as documented in the data” (Saldana 2015, 6).

The project has a team spreadsheet where we code information about a case that the project has, over time, deemed relevant to the dataset. One thing I found interesting is that we code for the veteran status of the crime’s perpetrator(s), and I actually found a couple of cases within a short period of time where the assailants were American veterans in some fashion. This connects to the Saldana piece: along the way, the project identified socio-political violence committed by veterans as a repetitive and consistent pattern, and therefore one we continually capture in the data set. This is especially interesting to me because I tend to think of veterans as a group that would defend against evils like violence in the name of a socio-political motive. However, the patterns in the data must argue against this logic, or else the veteran status variable would not be recognized by the project as important or coded in the team spreadsheet.
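Saldana’s “repetitive patterns” become visible when a code is tallied across the data set. A toy sketch with invented rows shows the idea:

```python
from collections import Counter

# Invented example rows; each dictionary is one coded case.
rows = [
    {"case": "A", "veteran_status": "veteran"},
    {"case": "B", "veteran_status": "non-veteran"},
    {"case": "C", "veteran_status": "veteran"},
]

# Tallying a single code across the data set is how a repetitive,
# consistent pattern becomes visible to the team.
print(Counter(row["veteran_status"] for row in rows))
# Counter({'veteran': 2, 'non-veteran': 1})
```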

This week we also looked at how to start new cases, that is, how to add a new row of data to our team spreadsheet that will later be coded. This was my first time doing this task as well, though it proved to be fun. To start a new case, you go into the team drive and read one of the many files containing information on a certain case. Then, once you have a good idea of the substance of the case, you do more research, using court documents and news sources to find the specific variables we code for on the team spreadsheet, of which there are many, such as the charges facing the defendant, the defendant’s plea to those charges, and the defendant’s known aliases. I find this activity challenging at times, for various reasons.

In my opinion, some variables are particularly easy to locate, like the defendant’s plea, because available court documents or other resources will often list it in a straightforward manner. Other codes are harder: depending on the type of case and the variable you are searching for, you may have to take a holistic approach and analyze all the sources and data you have compiled together. An example of this is the coded variable ‘reason for inclusion’.

One of the major critical lenses for determining whether a case should be included in our data set is whether it conforms to one of two major prototypes: 1) the act in question furthers political violence, extremism, or terrorism, or 2) the state (the US government) explicitly associated the actions in the case with extremism, or with pushing a message, usually a violent one, in pursuit of a distinctive ideology. The reason I believe this can be hard to assess is that it falls to the team member to determine whether something was a state speech act, or whether something is really in furtherance of political violence; these are often subjective or disputed calls among various interpretations. With practice, though, this gets easier to decipher.
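Written as a rule, the two prototypes reduce to a simple disjunction. The sketch below merely restates the criterion in code; deciding whether each condition is actually true remains the subjective judgment described above.

```python
def qualifies_for_inclusion(furthers_political_violence: bool,
                            state_speech_act: bool) -> bool:
    """A case is included if it fits either prototype:
    1) the act furthers political violence, extremism, or terrorism, or
    2) the state explicitly tied the act to extremism or an
       ideological message (a 'state speech act')."""
    return furthers_political_violence or state_speech_act
```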

Relatedly, in Rich, Brians, Manheim, and Willnat’s chapter in Empirical Political Analysis, there is discussion of the fact that some data takes far more extensive work to find and access than other data (2018, 211). I have found this to be true when searching for our data: high-profile cases (like a 9/11-related case I analyzed previously) often have far more sources, which are easier to access, than lower-profile or older cases, whose records are harder to dig for and may not turn up any results at all. For instance, I had a significantly easier time finding data on a 9/11-related attack than on an abortion-related extremist case that occurred in Wisconsin in 1999. Little things like these can make some data harder to get to than other data, as Rich et al. noted.

Works Cited

Saldaña, Johnny. “Chapter 1: An Introduction to Codes and Coding.” The Coding Manual for Qualitative Researchers, 3rd ed., SAGE Publications, 2015.

Brians, Craig, et al. “Chapter 12: Comparative Methods: Research Across Populations.” Empirical Political Analysis: Quantitative and Qualitative Research Methods, 9th ed., Routledge, 2018.

New Variable: Hate Crimes

Since tPP began, we have noticed a rapid increase in defendants being charged with hate crimes. In some cases, hate crimes are used as ‘enhancements’ to other charges, while in other cases, such a designation represents a rhetorical attempt to label the crime as bias-motivated.

In order to help capture this emerging reality, the tPP team has added a new variable recording whether or not a case carries a hate crime designation. After much discussion, debate, and consultation with scholars and Advisory Board members, we decided that if a case has a hate crime designation, then, as far as the government is concerned, the crime has been proven to be motivated by a socio-political aim.
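In coding terms, the new variable acts as a sufficient condition for inclusion. A minimal sketch of the rule, with hypothetical field and value names:

```python
from typing import Optional

def reason_for_inclusion(case: dict) -> Optional[str]:
    """If the government applied a hate crime designation, the
    socio-political motive is taken as proven, so inclusion
    follows automatically (per the decision described above)."""
    if case.get("hate_crime_designation"):
        return "hate crime designation (government-established bias motive)"
    # Otherwise, the motive must be established through ordinary analysis.
    return None
```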

These changes have been incorporated into the latest release of our Code Book.

Many thanks to tPP Steering Team member Katie Blowers for work on this.

Problems with Pacer and How it Affects Our Team


This continues our series of student reflections and analysis authored by our research team.


Sara Godfrey

Access to electronic court documents is crucial to the Prosecution Project (tPP). Our team relies on numerous platforms while collecting case information. Typically, our case coders start with a simple Google search to get a briefing on the selected case, continuing on to more specific and advanced Google searches in hopes of finding court document PDFs. Next, our coders look to the Department of Justice, and then to local or regional news sources. Finally, our coders search library databases. The collection of court documents and case information is a long and tedious process. As a new member of our team, I was alarmed to see how difficult this process can be. I was especially shocked to come across cases in which our team struggles to find any information or sources at all.

Given that the United States has the highest incarceration rate per capita in the entire world, and prides itself on its constant striving for innovation and technological advancement, I was appalled to encounter the outdated and inefficient system called PACER.

PACER, “Public Access to Court Electronic Records,” should be a logical solution to many of our team members’ struggles. PACER provides electronic dockets, summaries, and filed documents for federal cases. These dockets often contain crucial information for multiple variables in our data set that other resources cannot provide. However, access to these documents is far from free.

PACER comes at a cost, and that cost is 10 cents per page (and 10 cents per search) for each document accessed. Although the price of any single document is capped at three dollars, one can only imagine how quickly PACER fees accumulate (Carver, 2015). As a new member of the team, I have found collecting source files to be much more difficult than I could have imagined. It is shocking to have to struggle for access to court documents that are supposed to be public information. As James B. Haines Jr., a Maine bankruptcy judge, explains: “the information is free at the courthouse, as it’s always been… What you’re paying for is the delivery system and maintaining the delivery system. It’s not a price for the law. It’s a price to have it handed to you on your desktop at your convenience at your command” (Browdie, 2018).
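Those figures make the cost model easy to state precisely: 10 cents per page, capped at three dollars per document, plus 10 cents per search. A small sketch of the arithmetic:

```python
PER_PAGE = 0.10       # 10 cents per page viewed
PER_SEARCH = 0.10     # 10 cents per search, successful or not
PER_DOC_CAP = 3.00    # single-document charge is capped at $3.00

def pacer_cost(doc_page_counts: list[int], searches: int) -> float:
    """Estimate PACER fees for a set of documents plus searches."""
    doc_fees = sum(min(pages * PER_PAGE, PER_DOC_CAP)
                   for pages in doc_page_counts)
    return round(doc_fees + searches * PER_SEARCH, 2)

# Three documents (4, 12, and 45 pages) and five searches:
print(pacer_cost([4, 12, 45], searches=5))
# 5.1 -> 0.40 + 1.20 + 3.00 (capped) + 0.50 in search fees
```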

However, PACER is far from convenient, and the cost is not only monetary. PACER is an outdated system that takes time, practice, and patience to navigate, and it is nothing like a modern search engine. To use PACER you need to know exactly what you are looking for, as PACER cannot search by any variable besides the litigant’s name or the docket number (Browdie, 2018). There is no way to search for a word or phrase related to a case, making it extremely inefficient for research projects like the Prosecution Project. Imagine how effective it would be to search keywords such as “terrorism,” “extremism,” “bias-motivated crime,” and so on. Unfortunately, this is not currently possible with PACER.
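Because PACER itself cannot keyword-search, one workaround is to search documents after downloading them. A minimal sketch, assuming (hypothetically) that dockets have already been saved locally as plain-text files:

```python
from pathlib import Path

def keyword_hits(directory: str, keywords: list[str]) -> dict[str, list[str]]:
    """Scan locally saved docket text files for the keywords that
    PACER itself cannot search for, e.g. 'terrorism' or 'extremism'."""
    hits: dict[str, list[str]] = {}
    for path in Path(directory).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        found = [kw for kw in keywords if kw.lower() in text]
        if found:
            hits[path.name] = found
    return hits

# Example: keyword_hits("dockets", ["terrorism", "extremism", "bias-motivated"])
```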

Adding to our team’s frustrations with collecting source documents, federal court documents come from ninety-four district-level courts, all with varying filing processes. The small discrepancies in each district’s filing process can cause mixed, and sometimes failed, search results. This compounds our frustrations with PACER, as an unsuccessful search still results in a charge (Hughes, 2019).
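One concrete quirk is that districts format case numbers differently, so normalizing a docket number before searching can head off some failed (and billed) lookups. The pattern below is a hypothetical illustration and certainly does not capture every district’s variation:

```python
import re

def normalize_docket(raw: str) -> str:
    """Reduce common federal case-number variants, e.g.
    '1:19-cr-00123-ABC' or '19 CR 123', to one consistent form.
    Purely illustrative; districts vary more than this captures."""
    m = re.search(r"(?:(\d+):)?(\d{2})[-\s]?(cr|cv)[-\s]?0*(\d+)", raw, re.I)
    if not m:
        return raw  # leave unrecognized formats untouched
    office, year, case_type, number = m.groups()
    prefix = f"{office}:" if office else ""
    return f"{prefix}{year}-{case_type.lower()}-{int(number):05d}"

print(normalize_docket("1:19-cr-00123-ABC"))  # 1:19-cr-00123
print(normalize_docket("19 CR 123"))          # 19-cr-00123
```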

As terrorism researcher (and tPP Advisory Board member) Seamus Hughes explains in regard to PACER, “one must know the quirks in the system,” and this could not be truer (Hughes, 2019). After just weeks on the tPP team, I, along with many of my peers, am quickly realizing that, like most things in life, PACER will take some time to learn to navigate successfully.

Works Cited:

Browdie, Brian. “The Cost of Electronic Access to US Court Filings Is Facing a Major Legal Test of Its Own.” Quartz, Quartz, 10 Aug. 2018, qz.com/800076/the-cost-of-electronic-access-to-us-court-filings-is-facing-a-major-legal-test-of-its-own/.

Carver, Brian. “What Is the ‘PACER Problem’?” Free Law Project, 20 Mar. 2015, free.law/2015/03/20/what-is-the-pacer-problem/.

Hughes, Seamus, et al. “The Federal Courts Are Running An Online Scam.” POLITICO Magazine, 20 Mar. 2019, www.politico.com/magazine/story/2019/03/20/pacer-court-records-225821.