So what do tPP team members do?

As we begin recruiting for the next class of tPP students, I have been receiving a lot of emails asking what exactly being part of the team entails. Well, in the fall, tPP will be run through SJS497, where we will learn data science and methodology skill sets in the classroom each Tuesday, and then practice them each Thursday. For example, on a Tuesday we may learn how to verify a newspaper story by locating and interpreting a criminal indictment, and on Thursday, use that approach to verify and complete various cases under analysis.

Throughout the semester we plan to cover a wide range of tasks, including but not limited to:

  • Coding cases: This is one of the main tasks of tPP. It involves studying a particular criminal case, collecting the necessary source documents (e.g. Case Docket, Indictment, Criminal Complaint, Plea Agreement, Sentencing Memorandum, newspaper article) and then translating these texts into codes from our code book. For a bizarre cartoon explaining Qualitative Coding, check this out. Like all tPP skills, this will be taught in class and then practiced in a workshop style.
  • Checking, improving, and verifying cases already in our system. This is especially important as cases change: defendants are sentenced, fugitives are captured and tried, and arrests continue to occur.
  • Helping to identify new cases for inclusion through reviewing and monitoring services of the Department of Justice, US Attorney’s Office, FBI and others.
  • ‘Scraping’ and ‘mining’ texts from large documents to help locate new cases for inclusion and to ensure all appropriate cases are counted
  • Evaluating cases marked for exclusion through investigating the facts of the cases and working them through a decision tree
  • Evaluating documents for accuracy, authenticity, and reliability; replacing poorly scoring sources with better sources
  • Reviewing the work of your fellow coders, providing peer review and intercoder reliability checks, and helping to refine the code book
  • Refining the data for analysis, which involves ‘cleaning’ the data, shifting its format, exporting/importing, and learning how to work with the materials in SPSS, R, Tableau, GIS, and a variety of other tool suites (see the sketch just after this list)
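To make that last bullet concrete, here is a minimal sketch of a cleaning-and-reshaping step in Python/pandas. The file and column names here are purely illustrative stand-ins, not the project’s actual export or variable names:

```python
import pandas as pd

# Load a hypothetical CSV export of coded cases.
cases = pd.read_csv("tpp_cases_export.csv")

# 'Clean' the data: normalize a categorical column and coerce sentence
# lengths to numeric so statistical tools can work with them.
cases["ideological_affiliation"] = (
    cases["ideological_affiliation"].str.strip().str.lower()
)
cases["sentence_months"] = pd.to_numeric(
    cases["sentence_months"], errors="coerce"  # non-numeric entries become NaN
)

# Shift the format from 'wide' (one column per variable) to 'long'
# (one row per case-variable pair), which tools like Tableau and R often prefer.
long_form = cases.melt(id_vars=["case_id"], var_name="variable", value_name="value")

# Export for work in SPSS, R, or other tool suites.
long_form.to_csv("tpp_cases_long.csv", index=False)
```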

So if this sounds like you, get in touch with us. Check out this post for information on SJS497 and the application process.

tPP forms its Fall 2019 team!

Hello current & future tPP team members!
We are excited to announce that we will continue to build, refine and analyze the tPP data set this fall through a new course, SJS497, which Miami University students are welcome and encouraged to enroll in to serve on the project for the Fall 2019 semester.
This is a very exciting time to join the project as our completed case count nears 1,700, our first publications are about to come out, our Advisory Board takes shape, and our social media presence is getting more and more attention.
SJS497 (CRN: 75594), the class through which we’ll be running the Prosecution Project this fall, will be held Tuesdays and Thursdays, 10:05-11:25, in Upham Hall. You will need to register for the course to participate as part of the central coding, research, and analysis team. If you plan to register for the class, you MUST get in contact with tPP’s Director, Dr. Loadenthal, and let him know. A few points of clarification:
  1. The class will be limited to 25 students, and with 20 students (as of 5 April) already asking to join, we are very encouraged. Soon we will be reaching out to invite applicants from Sociology/Criminology, pre-law, Political Science, International Studies, Global and Inter-Cultural Studies, and other programs. We expect these efforts to fill the remaining seats in the class. So if you are interested in the class, please let us know ASAP.
  2. If you have not been a part of the team in the past, you will need to complete the application online so we can see where best to place you in the project. The form should take less than 10 minutes and is available here: https://tpp.lib.miamioh.edu/want-to-join-the-team/. After completing the form, you’ll need to email your resume/CV to Dr. Loadenthal.
  3. We are also looking to recruit a small number of students for specific project roles. These students would not be expected to enroll in SJS497 but would instead work alongside the project Director via an Independent Study. If you have experience in any of the following areas and would like to take part in the project, contact Dr. Loadenthal:
    • machine learning/Python
    • grant writing
    • mapping/GIS
    • database design (e.g. FileMaker, SQL)
  4. If you use Twitter, please follow us (https://twitter.com/ProsecutionThe) so you can begin to see what types of cases make up the project. Casually following these updates between now and August will serve you well for engaging with tPP in the fall.
(our Spring 2019 team)

Data Validity Issues


This continues our series of student reflections and analysis authored by our research team.


The Prosecution Project is a multi-year research initiative run by Dr. Michael Loadenthal of Miami University’s Department of Sociology and a cohort of students. Our team has built a “code book” to help us turn the prose of court documents and news articles into consistent data values. While the code book seeks to mitigate subjective interpretations of case details, some variables, like ideological affiliation, are less black and white and require an amalgamation of context clues from multiple sources to define definitively. As the project’s scope and team have expanded over the last two years, different variables and values have been interpreted in increasingly varying ways. That is okay, as the existing understanding of a given variable may not always be the best one. Furthermore, discussions of our coding process have engaged my critical thinking skills, taught me how to articulate my research process to teammates, and given me a clearer grasp of the Prosecution Project’s potential impact on the public and the law enforcement community. However, discordance in variable interpretations, while beneficial to my overall personal growth, has made it much harder to build samples in the analysis stage of our project.

For the last month and a half, I have been analyzing our dataset in order to understand domestic terrorism motivated by anti-government belief systems. I narrowed the 1,194 coded cases of political violence down to a sample of 162 cases that were ideologically aligned with anti-government extremism.

Of course, “anti-government,” to those whose research focus lies elsewhere, can be interpreted in many ways. Anti-government could be used to describe an anarchist who rejects the legitimacy of any government’s exercise of authority. Or, anti-government might be used to label a Jihadist who seeks to terrorize the American government and people by attacking a federal institution. These examples show the many ways in which “anti-government” ideological affiliation can be incorrectly but understandably assigned to defendants who clearly do not possess a rightist anti-government belief system. This is significant because any defendants incorrectly included in my sample would have skewed my data for key variables like jail sentence, ethnicity, and tactic.

I individually assessed each of the 26 cases in the database ideologically coded as “rightist unspecified” or “unclear” (Loadenthal et al. 2018). For each case, I read the narratives briefly describing the case and assessed the following variables: group affiliation, foreign affiliation, and motivation for choosing the target (labeled in our database as “target: why”). I included defendants affiliated with right-wing, anti-government groups (e.g. the Oklahoma Constitutional Militia) even if their ideological affiliation for a specific attack was unclear. I excluded defendants who possessed unspecified rightist ideologies but exclusively chose their targets based on the religion, race, or foreign nationality of the person or property targeted. Ultimately, this case-by-case filtering process enabled me to verify that each case included in my sample possesses the fundamental characteristics of homegrown anti-government extremism.
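To illustrate, here is a rough sketch of that filtering logic expressed in Python/pandas. The column names and the group-keyword list are my own stand-ins approximating the codebook’s fields, not the project’s real schema:

```python
import pandas as pd

def build_antigov_sample(cases: pd.DataFrame) -> pd.DataFrame:
    """Filter coded cases down to a rightist anti-government sample."""
    # Start from cases whose ideology was coded rightist-unspecified or unclear.
    candidates = cases[
        cases["ideological_affiliation"].isin(["rightist unspecified", "unclear"])
    ]

    # Include defendants affiliated with right-wing anti-government groups,
    # even when the ideology behind a specific attack is unclear.
    # (The keywords here are illustrative assumptions.)
    affiliated = candidates["group_affiliation"].str.contains(
        "militia|sovereign|patriot", case=False, na=False
    )

    # Exclude defendants who chose targets solely for religion, race, or
    # foreign nationality rather than anti-government motives.
    bias_targeting = candidates["target_why"].isin(
        ["religion", "race", "foreign nationality"]
    )

    return candidates[affiliated & ~bias_targeting]
```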

This blog post demonstrates one of the many ways in which our research cohort is democratically creating a communal research process: making mistakes, reacting to them positively, and iterating on and improving our database and processes. The Prosecution Project is a labor of love built through a culture of collaboration, complementary skills, and continuous learning.

– Nikki Gundimeda

Reexamining the Codebook


This continues our series of student reflections and analysis authored by our research team.


In class the past few weeks, we have decided to reassess and rework our team’s Codebook. Up until now, the Codebook has been a relatively straightforward manual that we all follow while coding each case. Personally, my partner and I leave the Codebook open in another tab while we research a defendant – referring to it when we have specific questions about what codes make up a certain variable or how exactly a variable is defined.

For the most part, we have the Codebook memorized. Since the majority of us joined the team as the Codebook was being created and fine-tuned, we understand what each variable is looking to assess. This is not to say the Project has not come across some interpretation issues as we have worked through coding. A few weeks ago, we had a lengthy discussion about how to code cases in which the defendant was a minor at the time of the crime. Since portions of these cases, including the age of the defendant, are often sealed from the public, it can be difficult to know what the actual age was. Some members of the team had been coding this variable as unknown, while others had been using 17 as a fill-in number, as our Codebook had not specified what to do in this situation. We talked in class and decided to use 17 as our age for all minors, unless the actual age was known, in which case we would use it.
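Written out as a rule, the decision looks something like this minimal sketch (the function name and structure are just for illustration, not part of our Codebook):

```python
def code_minor_age(actual_age=None):
    """Return the age to code for a defendant who was a minor at the time of the crime."""
    if actual_age is not None:
        return actual_age  # use the documented age when records reveal it
    return 17              # agreed-upon fill-in when the age is sealed

print(code_minor_age())    # 17 -> age sealed, use the fill-in
print(code_minor_age(16))  # 16 -> actual age known, use it
```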

Differing interpretations of the Codebook occur regularly, and we do our best to address them and amend the document to make each variable description even clearer. However, next semester we are adding several new members to the Prosecution Project. These new additions will have the guidance of senior members of the team as they learn how to code, but they will still be heavily reliant on the exact wording of the Codebook to understand each variable and its codes. Because of this, the team has spent the last two classes going through each individual variable in the Codebook in extreme detail. Our goal is to phrase the Codebook so that there is almost no room for misinterpretation. We begin by phrasing each variable as a question.

This sounds relatively straightforward and self-explanatory, but as the team and I can assure you, it is not. We have dedicated at least ten minutes to discussing how to phrase the questions for each variable, and some variables have required conversations lasting over an hour. One particularly challenging and divisive conversation occurred surrounding the variable of “previous similar crime”. We struggled with how to conceptualize what constituted a similar crime, and by the end of the discussion, we had probably run through thirty different variations of the question. Our first suggestion was, “Has the defendant been charged with a previous similar crime?” and by the end, we decided on, “Has the defendant been charged or convicted of a previous crime motivated by the same belief system?” We felt that this phrasing was the best way to encompass what we wanted to assess with this variable.

Eventually, we will have worked through this process for every variable in our Codebook. Ideally, we will have a user-friendly manual for next semester’s additions, but we are ready and willing to further adjust our Codebook as new issues arise.

– Zoe Belford

A Codebook 2.0?


This continues our series of student reflections and analysis authored by our research team.


As the semester winds down, here is an update on the current status and goals of tPP! Over the past two months, everyone has worked to construct mini-analysis papers on a chosen topic surrounding our database. Some members worked in pairs while others worked individually to assess trends that may be appearing within the database. The papers addressed several different factors we have coded for, such as gender prevalence in terrorism, foreign affiliation and fatality, military/veteran status and its role in attacks, and location prevalence. We plan to start next semester by presenting our papers and findings to the entire team as a reminder of all the great work we have achieved so far!

Speaking of the team, we are also excited to welcome some new recruits next semester! We have spent time recently reviewing our meeting agenda and drafting not a new codebook, but a more explicit and more concise one that will be extremely beneficial when bringing the new members up to speed! Kicking off the new year, we will ultimately finish adding and coding cases so we can continue to draw final analyses of patterns of taxonomy within our dataset. As we move toward the final stages of our project, we aim to draft more literature publicizing the information presented in our data, and to present our findings at relevant conferences!

As a member of tPP from the start, and a soon-graduating senior, this experience has been as eye-opening as it has been informative. Working with Dr. Loadenthal and the rest of the team has sparked an interest in continuing research around the justice system and helped prepare me to continue on to higher education in ways I never would have managed without them. With tPP being one of the largest datasets of its kind, it has offered so many undergraduate students the chance to participate in research that, while taxing, has been extremely rewarding. The project is mostly student-led, which has allowed us to learn and improve our skills in leadership, collaboration, research, statistical analysis, technical writing, and much more. Many members have been on the team since the start, found their niche through this project, and enjoy the chance to collaborate on a regular basis, adopting roles and goals as needed within our own mini projects and the larger project as a whole. This spring will be an exciting time for everyone as we move into our final stages, but the time cannot come soon enough for our eager current and new members.

– Tia Turner

Excluded Cases and Why They Remain Important


This continues our series of student reflections and analysis authored by our research team.


The tPP data set[1] has an extensive process of selecting cases that fit the criteria for the database. This process is called the decision tree and has been described in other blog posts. While the data set currently holds 1,202 coded cases, some cases did not meet the qualifications of the decision tree at some point in time and ended up being excluded. These cases appear to be ones that would be relevant to the set, but they fall short of particular qualifications. When a case is excluded, it is placed into a document of excluded cases, where the case is briefly described and its exclusion subsequently explained. Some may wonder why we bother to record cases that are not matches for us; the answer is that many of these excluded cases can reveal information about the tPP data set itself.

Some excluded cases are straightforward to explain, such as the case of William Rodgers.[2]  William Rodgers was an environmental activist and major leader of an act of arson at a Vail Ski Resort in Colorado.  He is excluded from the data set because he committed suicide in jail shortly after he was arrested.  Since he was not able to be charged and prosecuted, he is excluded from the tPP data set.

Other cases in the excluded cases file deal with more complicated issues such as intent.  Intent becomes crucial in determining whether or not to omit particular cases from the dataset.  Does the individual committing the act of terrorism or political violence truly possess a political motive?  Are their crimes attempting to further a particular terrorist organization or movement?
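To make those intent questions concrete, here is a simplified sketch of how they might look as one branch of the decision tree. The field names are illustrative assumptions, and the real decision tree has many more steps than this:

```python
def passes_intent_branch(case: dict) -> bool:
    """Return True if a case clears the intent questions of this branch."""
    # A genuine political motive, or an attempt to further a terrorist
    # organization or movement, keeps the case under consideration.
    return case.get("political_motive", False) or case.get("furthers_movement", False)

# The same act with different intent, echoing the brick-through-a-window
# example discussed below:
angry_vandal = {"political_motive": False, "furthers_movement": False}
eco_saboteur = {"political_motive": True, "furthers_movement": True}
print(passes_intent_branch(angry_vandal))  # False -> excluded
print(passes_intent_branch(eco_saboteur))  # True  -> continues down the tree
```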

The tPP dataset contains many variables that are coded with very precise language to ensure that intent is the primary focus of the coding. Some of these variables include ‘people versus property’ and ‘ideological affiliation’. People versus property outright asks, “Did this crime intend to target human beings, material property, both or neither?”[3] This seeks to determine whom the crime was intended to harm. Ideological affiliation is defined by the codebook as “What belief system, if any, motivated the defendant to commit the crime?”[4] This variable focuses on the defendant’s core value system, which can shape the intention of their crime. If one throws a brick through a McDonald’s window out of anger, it would not be considered terrorism. However, if they held an ideology that opposed the consumption of animals and committed the same crime, the same act could be considered an act of political violence, and likely be termed by the government as ‘eco-terrorism’.

These variables show the emphasis that the data set places on intent. The excluded cases offer a variety of examples where the acts may be heinous, or may present rhetoric similar to what one might consider terrorism; however, this data set takes intent into careful consideration, and every case must fully pass through the decision tree before it qualifies to be coded into the data set. These excluded cases are still valuable, as they show the weight the tPP data set places on intent.

– Hannah Hendricks


References

[1] Loadenthal, Michael, Zoe Belford, Izzy Bielamowicz, Jacob Bishop, Athena Chapekis, Morgan Demboski, Bridget Dickens, Lauren Donahoe, Alexandria Doty, Megan Drown, Jessica Enhelder, Angela Famera, Kayla Groneck, Nikki Gundimeda, Hannah Hendricks, Isabella Jackson, Taylor Maddox, Sarah Moore, Katie Reilly, Elizabeth Springer, Michael Thompson, Tia Turner, Brenda Uriona, Brendan Newman, Jenn Peters, Rachel Faraci, Maggie McCutcheon, and Megan Zimmerer. 2018. “The Prosecution Project (tPP) October 2018.” Miami University Sociology Department. https://tpp.lib.miamioh.edu. See also Loadenthal, Michael. 2018. “The Prosecution Project (Decision Tree).”

[2] (Loadenthal et al. 2018)

[3] (Loadenthal et al. 2018)

[4] (Loadenthal et al. 2018)

 

On Additional Sentencing and Deportation

 


This continues our series of student reflections and analysis authored by our research team.


The Prosecution Project (tPP) analyzes a wide variety of variables to uncover and assess patterns in the prosecution and punishment of domestic terrorism in the United States, and my research has led me to take a particular interest in one variable: additional sentencing.

In our codebook, additional sentencing is an open-ended, quasi-catch-all variable. tPP explicitly measures the length of jail/prison sentences and the presence or absence of life/death sentences. The additional sentencing variable, however, records other punishments, as well as any special enhancements specified in a defendant’s sentencing or notable acts they were charged under. Common codes under the additional sentencing variable include probation, time served, and hate crime or firearm enhancements. This variable holds a treasure trove of information for future analysis, but one specific code under this umbrella caught my interest from the beginning of my time working with tPP: deportation.

After joining tPP this past winter and beginning to code cases, I began to notice the repetition of deportation as an added punishment on top of, or in lieu of, the standard jail sentencing or probation. According to USA.gov, the United States “may deport foreign nationals who participate in criminal acts, are a threat to public safety, or violate their visa” (USA.gov). This broad operationalization of the criterion allowing for deportation gives the United States government the power to deport most of the foreign-born nationals in our data set. The only factor that should determine whether deportation occurs is whether the foreign defendant in question is found guilty or innocent of a criminal act, but the United States does not seem to apply this policy consistently.

As of October 2018, tPP has a completed data set of nearly 1,200 cases, 32 of which involve deportation. However, 296 of the cases in our data set involve foreign-born, non-naturalized defendants. Many foreign-born, non-naturalized individuals were found guilty but were still allowed to remain in the country. After spending nearly a year working with this project, my question is: why? Are there consistent, measurable differences in the characteristics of foreign-born, deported defendants compared to the characteristics of foreign-born, non-deported defendants? How does this compare to our tPP data set as a whole? My future research using the data tPP has gathered will aim to uncover whether there is a clear answer to these questions.
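As a starting point, the raw proportions from the figures above can be computed directly (this assumes, as seems reasonable, that all deportation cases fall within the foreign-born, non-naturalized group):

```python
total_cases = 1200   # approximate completed cases as of October 2018
foreign_born = 296   # foreign-born, non-naturalized defendants
deported = 32        # cases involving deportation

# Share of foreign-born, non-naturalized defendants who were deported.
print(f"{deported / foreign_born:.1%} of foreign-born defendants deported")  # ~10.8%

# Share of the full data set that involves deportation at all.
print(f"{deported / total_cases:.1%} of all cases involve deportation")      # ~2.7%
```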

– Zoe


References

USA.gov. (2018). Deportation. Retrieved from https://www.usa.gov/deportation

The Evolution of tPP


This continues our series of student reflections and analysis authored by our research team.


In April of 2017, a group of approximately twenty undergraduates from various backgrounds slowly trickled into an empty classroom at Miami University in the late afternoon. That afternoon was the first meeting of the Prosecution Project. Of those twenty-some undergraduate students, four remain with the Project now, in the fall of 2018. Since then, nearly 45 Miami University students have contributed in some way to the project, whether for a few weeks, a semester, or a year.

As the Prosecution Project approaches its second year, the outcome is slightly different than what was imagined at the start; I can say this with certainty, since I was in the room for that very first meeting nearly two years ago. The Prosecution Project was initially conceived as a one-year endeavor, wherein researchers would gather and code data over the upcoming summer and throughout the fall semester, then analyze and report on that data in the spring semester, with the outcome of a publication in a year’s time. We soon realized that this project was going to become much larger than that. More and more cases continued to be input, more detailed and intricate coding questions began to come up, and the scope of the project expanded exponentially.

Data collecting and coding continued well into the spring of 2018, with monumental questions of methodology, coding, theoretical framework, and other fundamental aspects of the Project being brought up at each weekly team meeting. It became quite clear that there would be no publishable output by the end of the academic year, but we did begin to expand the “deliverables” of the Project into mediums not yet considered. The blog was created in the spring as a way to show people outside of our small team and its supporters what we had been doing for the past year. Preliminary analysis began on the data, even if it was nothing more than a framework for future analysis. We began to systematize the process of coding: adding drop-down menus in our dataset for faster and more uniform variable-level assignment; creating teams of coders who work in tandem to code independently and doubly verify that cases were coded correctly; organizing the different files that accompany the Project, including source files for each case; and assigning a number of people to scrape new cases, and cases we may have previously missed, from large databases and other sources. The new plan was to have a complete dataset by the end of the summer of 2018.

When the fall rolled around, we soon realized that this plan just wasn’t feasible. We were adding hundreds of new cases each week, and of the then-nearly-2000 cases we had added, only about 800 had been coded and verified. Our new plan became intensive coding weeks in which we could complete as many cases as possible, followed by a semester of analysis and producing reports.

Now, in October of 2018, the goal has shifted again and is, in truth, still evolving. The current plan we are working with is a multi-layered analysis. As of mid-October, we have over 2000 cases added, and of those cases, over 1000 complete and verified. Part of the team is currently continuing to code, while other subgroups of our 20 undergraduate researchers are doing analysis in multiple forms. We have a team of researchers running descriptive statistics and generating data visualizations, a team working on inferential statistics to generate correlations, regressions, and other statistical results, and a team working on the beginning stages of Qualitative Comparative Analysis.

While the Prosecution Project has evolved significantly over the past 18 months and become a much bigger feat than we could’ve imagined on that April afternoon in that first little meeting, it has not lost sight of its goal. We hope not only to begin publications and mini-reports of our own findings on the prosecution of acts of political violence/extremism and terrorism, but also to make our database accessible to students as a tool for conducting research, and to the public in the pursuit of open and accessible knowledge.

 

Athena Chapekis is a senior sociology major at Miami University and a senior team member and data analyst at the Prosecution Project.

Evaluating “Success” in Terrorism and Counterterrorism


This continues our series of student reflections and analysis authored by our research team.


Some of the great opportunities the Prosecution Project (tPP) has provided me include finding an interesting niche within the field of terrorism studies, employing the tools within the project to enhance my understanding of certain topics, and trying to tackle challenging analytical problems through critical thinking. How the state responds to terrorism is a leading subject within the field, and as my interest in the topic developed, I’ve found that one puzzling issue is central to understanding counterterrorism initiatives: how do we best evaluate the success of terrorist organizations? As the war on terror persists through the century, it can be helpful for academics, policymakers, officials, and citizens alike to produce a more grounded dialectic by reflecting on the successes and failures of groups past and present. But measuring success requires us to define and quantify it appropriately, and my contribution will utilize tPP to help frame and measure organizational success over time.

The first decision that must be made when answering this question involves choosing a perspective from among the various actors. Do we define success according to the perspective of the terrorist group? The government? The individual suspect(s)? Each actor will judge a certain operation differently from the others. Fortunately, the cases in tPP can represent all of these perspectives, though to varying degrees. Since the goal of the project is to understand how the United States prosecutes individuals for crimes related to political violence, the government’s perspective will be front and center, and it is the primary lens I will employ. From the state’s view, and often from each actor’s view, success is multifaceted; that is, there is no single method to calculate success, but rather a combination of equally important values. Here, Daniel Byman’s five measures of success (two of which are discussed below) provide a mostly functional, if imprecise, foundation for understanding the state’s perspective:

Freedom of operation. The first measure looks at how secure a group may be in a certain location. According to the state, part of a group’s success means having “secure areas in which they can organize and plan with little fear…[and] can wait to strike at their pleasure” (Byman 2003). Certainly, an organization’s network will never be completely removed, but single operations can be shut down. The better an organization is able to operate out of an area, the more successful it is at perpetuating terrorist activity. At least some of this freedom can be operationalized through the city, state, and country variables in the tPP dataset. Unfortunately, we cannot discriminate between crimes for which a group claims responsibility and isolated “lone wolf” crimes, a distinction that would keep us from counting individuals with no attachment whatsoever to a group as part of an organization’s “footprint.” Even so, by focusing on one group’s activities in specific cities over time, we may gain insight into how well the government can curtail its operations.

Success in recruitment. The second measure understands a group’s success in terms of the size of its recruitment base. Byman points to leadership structures (either centralized or decentralized) as key to this method of approximation. Some groups, he states, rapidly decline as soon as their top leaders are killed or captured, like the Kurdistan Workers’ Party in Turkey or Sendero Luminoso in Peru, while other groups remain virulent and active despite losing many leaders, such as Hezbollah in Lebanon. Decentralized leadership means that “for a group like Al-Qaeda, disrupting recruitment is a vital…measure of success” (Byman 2003).

This variable is notoriously tricky to account for, though, since we can only rely on estimates of the number of recruits. To some extent, the tPP dataset can approximate this amount of involvement, although with a great deal of imprecision. Many organizations with fewer members and/or more centralized leadership may engage in plots alongside their operators, who will thus appear as co-defendants. If we pair this with the hypothesis that most recruits are not repeat offenders, this could be tied to our “Previous Similar Crime” variable, which tracks individuals with multiple convictions. Accumulating both results could help us determine whether organizations utilize recruits or existing members for their operations, with the sense that the more recruits they gain, the more momentum and “success” they may have.
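Here is a rough sketch of what pairing those two measures might look like, with made-up stand-in data and column names (not the dataset’s actual fields):

```python
import pandas as pd

# Invented stand-in data; real analysis would draw on the tPP dataset.
cases = pd.DataFrame({
    "group_affiliation":      ["Group A", "Group A", "Group B", "Group B", "Group B"],
    "codefendant_count":      [3, 2, 0, 1, 0],
    "previous_similar_crime": ["no", "no", "yes", "yes", "no"],
})

# Per group: average co-defendants (a proxy for plots involving recruits)
# and the share of repeat offenders (a proxy for reliance on existing members).
summary = cases.groupby("group_affiliation").agg(
    avg_codefendants=("codefendant_count", "mean"),
    repeat_offender_share=("previous_similar_crime", lambda s: (s == "yes").mean()),
)
print(summary)
```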

These are just a couple of the ways in which the tPP dataset can reflect measurements of success. To be sure, there are clear disadvantages to adopting certain methods, but I believe that by wrestling with these imprecisions and finding better ways to make more accurate inferences, the data will better our understanding.

– Michael Thompson


Source:

Byman, Daniel. “Scoring the War on Terrorism.” The National Interest, no. 72 (2003): 75-84. http://www.jstor.org/stable/42897485.

On the Coding of Foreign Affiliation


This continues our series of student reflections and analysis authored by our research team.


While the Prosecution Project (tPP) codes cases of domestic political violence for forty different variables, one that deserves specific attention is “foreign affiliation.” This variable can best be defined as a defendant’s affiliation with a foreign terrorist organization (FTO). An FTO is defined as “a non-US organization that engages in terrorist activity that threatens US nationals or national security.” A list of FTOs, as catalogued by the US Department of State, can be found here.

The only options when coding this variable are “no,” “yes,” or “unknown.” During the process of researching any given case, foreign affiliation is usually apparent almost immediately. Because involvement with an FTO connotes an individual’s terroristic threat, documentation of affiliation is both critical and consistent throughout legal proceedings. If there is any mention of, or allusion to, any group on the State Department’s list of FTOs, the case receives a code of “yes” for foreign affiliation. If there is no mention of any FTO, the case is coded “no.”
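As a minimal sketch, that mention-based rule might look like the following. The FTO list here is a tiny illustrative subset of the State Department’s list, and genuinely ambiguous cases, like the one discussed next, are resolved by hand as “unknown” rather than by this first pass:

```python
FTO_LIST = {"al-qaeda", "isis", "hezbollah"}  # tiny stand-in for the full list

def code_foreign_affiliation(case_text: str) -> str:
    """Return a first-pass 'yes' or 'no' for the foreign affiliation variable."""
    text = case_text.lower()
    if any(fto in text for fto in FTO_LIST):
        return "yes"  # mention of, or allusion to, a listed FTO
    return "no"       # no FTO appears in the source documents

print(code_foreign_affiliation("conspired to provide material support to al-Qaeda"))  # yes
print(code_foreign_affiliation("defendant attacked a county courthouse"))             # no
```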

Within our dataset, only a handful of cases have been coded “unknown” for foreign affiliation. The ambiguity in these cases can best be explained by confusion surrounding the events of the crime itself. A good example of this phenomenon is the 2016 case of Michelle Marie Bastian. Bastian sent ISIS propaganda to her incarcerated husband, but this exchange of terroristic material does not in itself insinuate collusion with an FTO. In this case, there was likely no direct contact between the defendant and the FTO; however, because there is no evidence clearly establishing that Bastian obtained the propaganda from an indirect rather than direct source, the foreign affiliation variable must be coded “unknown” rather than “no.”

Following the coding for the variable of foreign affiliation, members of tPP code for group affiliation. It is important to recognize that a “no” for foreign affiliation does not mean that there is no group affiliation. While this may seem to be an obvious statement, our “group affiliation” variable is inclusive of domestic, as well as foreign, terrorist organizations.

As we continue with individual analyses of tPP’s dataset, I plan to examine the implications of foreign affiliation and its interaction with other variables. One relationship worth analyzing is that between foreign affiliation and citizenship. An interesting comparison may be made by pooling cases where the defendant has a foreign affiliation and comparing, within those cases, whether the individual is an American citizen or not. Furthermore, disparities in sentencing within the foreign-affiliation pool may be juxtaposed against the aforementioned variable of citizenship.
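A sketch of what that pooling might look like in Python/pandas, using invented stand-in data and column names rather than the dataset’s actual fields:

```python
import pandas as pd

# Invented stand-in data; the real comparison would draw on tPP's variables.
cases = pd.DataFrame({
    "foreign_affiliation": ["yes", "yes", "no", "yes", "no"],
    "us_citizen":          [True, False, True, False, True],
    "sentence_months":     [120, 240, 36, 300, 60],
})

# How does foreign affiliation break down by citizenship?
print(pd.crosstab(cases["foreign_affiliation"], cases["us_citizen"]))

# Within the foreign-affiliated pool, do sentences differ by citizenship?
pool = cases[cases["foreign_affiliation"] == "yes"]
print(pool.groupby("us_citizen")["sentence_months"].mean())
```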

Foreign affiliation, upon initial inspection, does not appear to be an overly significant variable relative to the others within our dataset; however, its presence, or lack thereof, poses considerable influence over the interpretation of the prosecution of political violence cases in general. The relationships between foreign affiliation and other variables (particularly citizenship, tactic, and sentencing) likely hold valuable information regarding the factors governing an individual’s decision to participate in political violence, and how they choose to do so.

Affiliation with an FTO likely determines, or at least partially shapes, the tactic a defendant utilizes in offending. These relationships also likely reveal connections between foreign affiliation and defendants’ resulting convictions and sentencing, with notable regard to the proceedings of the US judicial system. Ultimately, tPP’s coding of foreign affiliation deserves to be analyzed in greater depth. When we consider the impact of FTOs on individual perpetrators, we reveal the severity of their danger to national security.

I look forward to studying foreign affiliation as tPP moves forward into statistical and analytical research and presentation of our finalized dataset for the semester!

 

Izzy Bielamowicz is a Junior pursuing a degree in Political Science with a double minor in Criminology and Philosophy and Law. Izzy has been with tPP since August 2018.