Rational Misbehavior? Evaluating an Integrated Dual-Process Model of Criminal Decision Making
Jean-Louis van Gelder & Reinout E. de Vries
Objectives: Test the hypothesis that dispositional self-control and morality relate to criminal decision making via different mental processing modes, a ‘hot’ affective mode and a ‘cool’ cognitive one. Methods: Structural equation modeling in two studies with separate samples of undergraduate students, using scenarios describing two different types of crime: illegal downloading and insurance fraud. Both self-control and morality are operationalized through the HEXACO model of personality (Lee and Ashton in Multivariate Behav Res 39(2):329–358, 2004). Results: In Study 1, negative state affect, i.e., feelings of fear and worry evoked by a criminal prospect, and perceived risk of sanction were found to mediate the relations between both dispositions and criminal choice. In Study 2, processing mode was manipulated by having participants rely on either their thinking or their feelings prior to deciding whether or not to make a criminal choice. Activating a cognitive mode strengthened the relation between perceived risk and criminal choice, whereas activating an affective mode strengthened the relation between negative affect and criminal choice. Conclusion: In conjunction, these results extend research that links stable individual dispositions to proximal states that operate in the moment of decision making. The results also add to dispositional perspectives of crime by using a structure of personality that incorporates both self-control and morality. Contributions to the proximal (state) perspective reside in the use of a new hot/cool perspective of criminal decision making that extends rational choice frameworks.
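For readers who want a concrete sense of the mediation logic described above (dispositions relating to criminal choice through proximal states), here is a minimal Python sketch using simple path regressions in statsmodels rather than the authors' full structural equation models; the data file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant; column names are assumptions,
# not the authors' actual variables.
df = pd.read_csv("scenario_responses.csv")  # self_control, morality,
                                            # negative_affect, perceived_risk,
                                            # criminal_choice

# Paths from dispositions to the proximal states ('hot' and 'cool' mediators).
m_affect = smf.ols("negative_affect ~ self_control + morality", data=df).fit()
m_risk = smf.ols("perceived_risk ~ self_control + morality", data=df).fit()

# Outcome model: criminal choice regressed on dispositions plus both mediators.
m_choice = smf.ols(
    "criminal_choice ~ self_control + morality + negative_affect + perceived_risk",
    data=df,
).fit()

# Indirect effect of self-control via negative affect (product of coefficients).
indirect = m_affect.params["self_control"] * m_choice.params["negative_affect"]
print(m_choice.summary())
print("Indirect effect via negative affect:", indirect)
```

In an actual SEM the two mediator equations and the outcome equation would be estimated simultaneously; the separate regressions above only illustrate the direction of the hypothesized paths.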
“Can’t Stop, Won’t Stop”: Self-Control, Risky Lifestyles, and Repeat Victimization
Jillian J. Turanovic & Travis C. Pratt
Objectives: Drawing from lifestyle-routine activity and self-control perspectives, the causal mechanisms responsible for repeat victimization are explored. Specifically, the present study investigates: (1) the extent to which self-control influences the changes victims make to their risky lifestyles following victimization, and (2) whether the failure to make such changes predicts repeat victimization. Methods: Two waves of panel data from the Gang Resistance Education and Training program are used (N = 1,370) and direct measures of change to various risky lifestyles are included. Two-stage maximum likelihood models are estimated to explore the effects of self-control and changes in risky lifestyles on repeat victimization for a subsample of victims (n = 521). Results: Self-control significantly influences whether victims make changes to their risky lifestyles post-victimization, and these changes in risky lifestyles determine whether victims are repeatedly victimized. These changes in risky lifestyles are also found to fully mediate the effects of self-control on repeat victimization. Conclusions: Findings suggest that future research should continue to measure directly the intervening mechanisms between self-control and negative life outcomes, and to conceptualize lifestyles-routine activities as dynamic processes.
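A rough sketch of the two-step logic (self-control predicting lifestyle change, and lifestyle change predicting repeat victimization) follows; it fits the two equations separately with ordinary logit models rather than the jointly estimated two-stage maximum likelihood models used in the paper, and the file and variable names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical victim subsample from wave 2; column names are illustrative.
victims = pd.read_csv("great_wave2_victims.csv")

# Stage 1: does low self-control predict failing to change risky lifestyles
# after victimization?
stage1 = smf.logit("lifestyle_change ~ self_control + age + male",
                   data=victims).fit()

# Stage 2: does (lack of) lifestyle change predict repeat victimization,
# with self-control retained to check whether its effect is mediated?
stage2 = smf.logit(
    "repeat_victimization ~ lifestyle_change + self_control + age + male",
    data=victims,
).fit()

print(stage1.summary())
print(stage2.summary())
```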
Bayesian Spatio-Temporal Modeling for Analysing Local Patterns of Crime Over Time at the Small-Area Level
Jane Law, Matthew Quick & Ping Chan
Objectives: Explore Bayesian spatio-temporal methods to analyse local patterns of crime change over time at the small-area level through an application to property crime data in the Regional Municipality of York, Ontario, Canada. Methods: This research represents the first application of Bayesian spatio-temporal modeling to crime trend analysis at a large map scale. The Bayesian model, fitted by Markov chain Monte Carlo simulation using WinBUGS, stabilized risk estimates in small (census dissemination) areas with scarce data and controlled for spatial autocorrelation (through spatial random effects modeling) and deprivation. It estimated (1) the (linear) mean trend; (2) area-specific differential trends; and (3) (posterior) probabilities of area-specific differential trends differing from zero (i.e. away from the mean trend) for revealing locations of hot and cold spots. Results: Property crime exhibited a declining mean trend across the study region from 2006 to 2007. Variation of area-specific trends was statistically significant, which was apparent from the map of (95 % credible interval) differential trends. Hot spots in the north and south west, and cold spots in the middle and east of the region were identified. Conclusions: Bayesian spatio-temporal analysis contributes to a detailed understanding of small-area crime trends and risks. It estimates a crime trend for each area as well as an overall mean trend. The new approach of identifying hot/cold spots through analysing and mapping probabilities of area-specific crime trends differing from the mean trend highlights specific locations where the crime situation is deteriorating or improving over time. Future research should analyse trends over three or more periods (allowing for non-linear time trends) and associated (changing) local risk factors.
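The sketch below illustrates the general model structure described here, a Poisson model with an overall linear trend, area-specific differential trends, and area random effects, written in PyMC rather than WinBUGS. It substitutes exchangeable random effects for the spatially structured (CAR-type) effects of the actual model, and all inputs are hypothetical.

```python
import numpy as np
import pymc as pm

# Hypothetical inputs: counts[i, t] of property crimes in area i at time t,
# expected[i, t] as an offset, for two periods (2006, 2007).
counts = np.load("property_counts.npy")    # shape (n_areas, 2)
expected = np.load("expected_counts.npy")  # shape (n_areas, 2)
n_areas, n_periods = counts.shape
t = np.arange(n_periods) - (n_periods - 1) / 2.0  # centred time index

with pm.Model() as model:
    alpha = pm.Normal("alpha", 0.0, 10.0)             # overall log-risk level
    beta = pm.Normal("beta", 0.0, 10.0)               # overall (mean) linear trend
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    sigma_d = pm.HalfNormal("sigma_d", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_areas)   # area random intercepts
    delta = pm.Normal("delta", 0.0, sigma_d, shape=n_areas)  # differential trends

    log_risk = alpha + u[:, None] + (beta + delta[:, None]) * t[None, :]
    pm.Poisson("y", mu=expected * pm.math.exp(log_risk), observed=counts)

    idata = pm.sample(2000, tune=2000, target_accept=0.9)

# Posterior probability that each area's trend departs upward from the mean
# trend, used to flag hot spots (and 1 minus this for cold spots).
p_hot = (idata.posterior["delta"] > 0).mean(dim=("chain", "draw"))
```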
Forecasts of Violence to Inform Sentencing Decisions
Richard Berk & Justin Bleich
Objectives: Recent legislation in Pennsylvania mandates that forecasts of "future dangerousness" be provided to judges when sentences are given. Similar requirements already exist in other jurisdictions. Research has shown that machine learning can lead to usefully accurate forecasts of criminal behavior in such settings. But there are settings in which there is insufficient IT infrastructure to support machine learning. The intent of this paper is to provide a prototype procedure for making forecasts of future dangerousness that could be used to inform sentencing decisions when machine learning is not practical. We consider how classification trees can be improved so that they may provide an acceptable second choice. Methods: We apply a version of classification trees available in R, with some technical enhancements to improve tree stability. Our approach is illustrated with real data that could be used to inform sentencing decisions. Results: Modest-sized trees grown from large samples can forecast well and in a stable fashion, especially if the small fraction of indecisive classifications is identified and accounted for in a systematic manner. But machine learning is still to be preferred when practical. Conclusions: Our enhanced version of classification trees may well provide a viable alternative to machine learning when machine learning is beyond local IT capabilities.
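As a rough illustration of the general approach (a modest-sized tree with an explicit band of indecisive classifications), here is a scikit-learn sketch; the paper itself works in R with its own stability enhancements, so the data, variable names, cost weighting, and probability band below are illustrative assumptions only.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sentencing data: priors, age, current charge, etc., with a
# binary indicator of a subsequent violent offense.
data = pd.read_csv("sentencing_records.csv")
X = data.drop(columns=["violent_recidivism"])
y = data["violent_recidivism"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A modest-sized tree grown from a large sample: limit depth and require
# large leaves so the structure stays stable across samples. The asymmetric
# class weights stand in for unequal costs of forecasting errors (an
# illustrative choice, not taken from the paper).
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=200,
                              class_weight={0: 1, 1: 5}, random_state=0)
tree.fit(X_train, y_train)

# Flag 'indecisive' classifications: cases whose predicted probability falls
# near the decision threshold are set aside for closer review.
proba = tree.predict_proba(X_test)[:, 1]
indecisive = (proba > 0.4) & (proba < 0.6)
print(f"{indecisive.mean():.1%} of test cases flagged as indecisive")
```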
Block Randomized Trials at Places: Rethinking the Limitations of Small N Experiments
David Weisburd & Charlotte Gill
Objectives: Place-based policing experiments have led to encouraging findings regarding the ability of the police to prevent crime, but sample sizes in many of the key studies in this area are small. Farrington and colleagues argue that experiments with fewer than 50 cases per group are not likely to achieve realistic pre-test balance and have excluded such studies from their influential systematic reviews of experimental research. A related criticism of such studies is that their statistical power under traditional assumptions is also likely to be low. In this paper, we show that block randomization can overcome these design limitations. Methods: Using data from the Jersey City Drug Market Analysis Experiment (N = 28 per group) we conduct simulations on three key outcome measures. Simulations of simple randomization with 28 and 50 cases per group are compared to simulations of block randomization with 28 cases. We illustrate the statistical modeling benefits of the block randomization approach through examination of sums of squares in GLM models and by estimating minimum detectable effects in a power analysis. Results: The block randomization simulation is found to produce many fewer significantly unbalanced samples than the naïve randomization approaches with both 28 and 50 cases per group. Block randomization also produced similar or smaller absolute mean differences across the simulations. Illustrations using sums of squares show that error variance in the block randomization model is reduced for each of the three outcomes. Power estimates are comparable or higher using block randomization with 28 cases per group as opposed to naïve randomization with 50 cases per group. Conclusions: Block randomization provides a solution to the small N problem in place-based experiments that addresses concerns about both equivalence and statistical power. The authors also argue that a 50 case rule should not be applied to block randomized place-based trials for inclusion in key reviews.
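A small simulation sketch of the balance comparison described above, pairing places into blocks on a baseline measure and randomizing within pairs, is shown below; the baseline data are simulated and the pair-matching rule is an illustrative assumption, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_places = 56                                          # 28 per group
baseline = rng.poisson(20, n_places).astype(float)     # hypothetical baseline counts

def simple_assign(n):
    """Simple (naive) randomization: pick half the places at random."""
    g = np.zeros(n, dtype=int)
    g[rng.choice(n, n // 2, replace=False)] = 1
    return g

def block_assign(baseline):
    """Block randomization: sort by baseline, pair, randomize within pairs."""
    order = np.argsort(baseline)
    g = np.zeros(len(baseline), dtype=int)
    for pair in order.reshape(-1, 2):
        g[rng.permutation(pair)[0]] = 1
    return g

def mean_imbalance(assign_fn, reps=5000):
    """Average absolute treatment-control difference in the baseline measure."""
    diffs = []
    for _ in range(reps):
        g = assign_fn()
        diffs.append(abs(baseline[g == 1].mean() - baseline[g == 0].mean()))
    return np.mean(diffs)

print("simple randomization:", mean_imbalance(lambda: simple_assign(n_places)))
print("block randomization: ", mean_imbalance(lambda: block_assign(baseline)))
```

Under this kind of simulation the blocked assignments should show smaller average baseline imbalance than simple randomization, which is the pattern the abstract reports.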
Deterring Gang-Involved Gun Violence: Measuring the Impact of Boston’s Operation Ceasefire on Street Gang Behavior
Anthony A. Braga, David M. Hureau & Andrew V. Papachristos
Objectives: The relatively weak quasi-experimental evaluation design of the original Boston Operation Ceasefire left some uncertainty about the size of the program’s effect on Boston gang violence in the 1990s and did not provide any direct evidence that Boston gangs subjected to the Ceasefire intervention actually changed their offending behaviors. Given the policy influence of the Boston Ceasefire experience, a closer examination of the intervention’s direct effects on street gang violence is needed. Methods: A more rigorous quasi-experimental evaluation of a reconstituted Boston Ceasefire program used propensity score matching techniques to develop matched treatment gangs and comparison gangs. Growth-curve regression models were then used to estimate the impact of Ceasefire on gun violence trends for the treatment gangs relative to comparison gangs. Results: This quasi-experimental evaluation revealed that total shootings involving Boston gangs subjected to the Operation Ceasefire treatment were reduced by a statistically significant 31 % when compared to total shootings involving matched comparison Boston gangs. Supplementary analyses found that the timing of gun violence reductions for treatment gangs followed the application of the Ceasefire treatment. Conclusions: This evaluation provides some much needed evidence on street gang behavioral change that was lacking in the original Ceasefire evaluation. A growing body of scientific evidence suggests that jurisdictions should adopt focused deterrence strategies to control street gang violence problems.
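The propensity score matching step might look roughly like the following sketch (a logistic model of treatment assignment followed by one-to-one nearest-neighbour matching within a caliper); the gang-level covariates and the caliper value are assumptions, and the subsequent growth-curve regression models are not shown.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical gang-level data: one row per gang, pre-intervention covariates
# and an indicator of whether the gang received the Ceasefire treatment.
gangs = pd.read_csv("boston_gangs.csv")
covs = ["size", "prior_shootings", "rivalries", "territory_overlap"]

# Estimate propensity scores from pre-treatment covariates.
ps_model = LogisticRegression(max_iter=1000).fit(gangs[covs], gangs["treated"])
gangs["pscore"] = ps_model.predict_proba(gangs[covs])[:, 1]

# One-to-one nearest-neighbour matching on the propensity score, without
# replacement, keeping only matches within a caliper.
treated = gangs[gangs["treated"] == 1]
controls = gangs[gangs["treated"] == 0].copy()
matches = []
for _, row in treated.iterrows():
    dist = (controls["pscore"] - row["pscore"]).abs()
    best = dist.idxmin()
    if dist[best] < 0.1:                      # caliper, an illustrative choice
        matches.append((row.name, best))
        controls = controls.drop(index=best)  # match without replacement

print(f"{len(matches)} matched treatment-comparison gang pairs")
```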
A Bi-level Framework for Understanding Prisoner Victimization
John Wooldredge & Benjamin Steiner
Objectives: To present and test an opportunity perspective on prison inmate victimization. Methods: Stratified random samples of inmates (n₁ = 5,640) were selected from Ohio and Kentucky prisons (n₂ = 46). Bi-level models of the prevalence of assaults and thefts were estimated. Predictors included indicators of inmate routines/guardianship, target antagonism, and target vulnerability at the individual level, and several indicators of guardianship at the facility level. Results: Assaults were more common among inmates with certain routines and characteristics that might have increased their odds of being victimized (e.g., less time spent in recreation; committed violence themselves during incarceration), and higher levels of assaults characterized environments with lower levels of guardianship (e.g., architectural designs with more “blind spots”, larger populations, and less rigorous rule enforcement as perceived by correctional officers). Similar findings emerged for thefts in addition to stronger individual level effects in prisons with weaker guardianship (e.g., ethnic group differences in the risk of theft were greater in facilities with larger populations and less rigorous rule enforcement). Conclusions: The study produced evidence favoring a bi-level opportunity perspective of inmate victimization, with some unique differences in the relevance of particular concepts between prison and non-prison contexts.
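A minimal sketch of a two-level specification with inmates nested in facilities appears below, using a linear mixed model from statsmodels as a simple stand-in for the hierarchical (logit) models of victimization prevalence estimated in the paper; the data file and variable names are assumptions, and a multilevel logit would be more faithful for a binary outcome.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inmate-level records with a facility identifier attached.
inmates = pd.read_csv("inmate_survey.csv")

# Level-1 predictors: routines/guardianship, target antagonism/vulnerability;
# level-2 predictor: facility guardianship (e.g., perceived rule enforcement).
# Random intercepts capture between-facility variation in assault prevalence.
model = smf.mixedlm(
    "assaulted ~ recreation_hours + violent_during_term + age "
    "+ facility_rule_enforcement",
    data=inmates,
    groups=inmates["facility_id"],
)
result = model.fit()
print(result.summary())
```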