Chapter 4

Crackdowns, Censorship, Curfews, Cure-Alls

 

               The motto of American social policy seems to be: if it didn’t work, try more of it. In reading the histories of drug policies (e.g., Craig Reinarman et al., Crack in America, or David J. Musto, Drugs in America), sex education (Jeffrey Moran, Teaching Sex), crime (Stephen Jay Gould, The Mismeasure of Man), or juvenile delinquency (Richard J. Lundman, Prevention and Control of Juvenile Delinquency), one is struck by the absolute lack of new ideas. Virulent debates over youth and other domestic issues today remain mired in the same ruts already well worn in 1900. Sex education will either save or corrupt our kids; drugs are a scourge of dark-skinned populations menacing white women and kids; crime is caused by bad people and crime-prone populations; juvenile delinquency can be reduced with therapeutic fixes and educational inoculations absent fundamental changes in social structure or youth environments.

               Researchers and evaluators consistently concluded that efforts to prevent juvenile delinquency by means of individual psychiatric, educational, and social interventions, such as the Cambridge-Somerville Youth Study (1940s) and the Youth Consultation Services, Chicago Area Project, and Midcity Project (1950s), failed miserably because they did not change the chief factors driving crime: socioeconomic status, education and job opportunity, and family environment. What have we learned? The 1980s and 1990s Gang Resistance Education and Training (GREAT), Juvenile Mentoring Program (JUMP), and Drug Abuse Resistance Education (DARE) all attempt to cure delinquency through individual, educational, and social interventions that make no fundamental changes in youths’ socioeconomic status, education and job opportunities, and family environments. American ideology that blames individuals and refrains from scrutinizing social structure has spawned entrenched institutions that are indifferent to failure because they profit more from perpetuating social problems than from reducing them.

Ronald Reagan’s beaming 1980s presidency assured Americans that, yes, there are simple answers to the tough social problems plaguing America. Then, the Clinton era’s panaceas for youth crime made the Reagan years look erudite. Because Americans are willing to embrace simpler answers and impose harsher, more sweeping social controls when punishing kids (usually in the name of protecting them), the conversion of most major social problems into youth attitude and behavior flaws has dumbed down America’s discussion of social issues alarmingly.

               Off the table go the vexing conundrums surrounding poverty, racism, urban joblessness, wealth concentration, household violence, and deteriorating baby boom behaviors. Now being served are the repressive platitudes. According to today’s top experts, if only we could

…Keep all teenagers at home, in school, and adult-supervised—all the time!

…Censor what kids see and hear—all the time!

…Achieve “zero tolerance” for teenage drinking, drugs, raves, guns, gangs, goths!

…Keep them controlled by adults and away from peers—all the time!

…Lock them up more! And longer!

…Execute them more! And younger!

…Dress them in uniforms!

…Lecture them endlessly on morals and character!

…what a wonderful country this would be.

 

               Clinton-era cure-alls have two characteristics: they treat teenagers as a mass scourge whose very existence constitutes a public danger, and they flatter and indulge adults. More important, they handsomely profit many adult interests, from programs to prisons, creating powerful constituencies vested in drumming the press and policy makers with a steady roll of anti-youth alarmism in hopes of expanding profits still further. Thus, Clinton-era teen crackdowns rose in number and intensity even as juvenile crime continued to fall.

               So extreme has the attack on youth become that even official sources have begun to demur. “It is clear that national crime and arrest statistics provide no evidence for a new breed of juvenile superpredator,” the U.S. Department of Justice’s Office of Juvenile Justice and Delinquency Prevention (OJJDP) concluded in February 2000. Nevertheless, “in the early 1990s, this (superpredator) myth caused a panic that changed the juvenile justice system and its response to the nation’s youth.”

 

 

Panacea #1: Get Tough

 

               The flawed scholarship and extraordinarily inflammatory language employed by quotable experts such as James Q. Wilson, James Alan Fox, and John DiIulio Jr. captivated a press never able to resist sensation and gratified politicians never able to resist scapegoating. The media hunt was on, first to make inner-city gang killings appear an imminent menace to the suburbs, then to make suburban kids the demons. Legislative panic followed. From 1992 through 1997, 47 states and the District of Columbia toughened laws applying to juveniles: 45 made it easier to transfer youthful offenders to adult court, 31 allowed or mandated tougher sentences, 47 weakened the traditional confidentiality of juvenile courts, and 22 gave victims greater roles in the juvenile justice process. Twenty-eight states toughened all of these.

               Young age is not a mitigating factor, as it was in the old days of the juvenile court. Today, it’s an aggravating factor leading to harsher sentences than adults receive. “Juveniles who ‘do the adult crime’ may do more than the ‘adult time,’” OJJDP found in a 1999 study of average maximum prison sentences handed youths compared to adults who committed the same crimes (Table 12).

 

 

Table 12. Juveniles (under age 18) get longer sentences on average from adult courts than adults do for equivalent offenses

 

Average maximum sentence (in months) given

juveniles and adults sentenced by adult courts

 

Offense                      Juveniles   Adults

All felonies                       111       69
Violent offenses                   139      115
  Murder                           287      258
  Rape                             200      149
  Robbery                          139      112
  Aggravated assault                75       81
  Other violent offenses           130       70
Property offenses                   50       56
Drug offenses                       80       60
  Possession                        66       48
  Trafficking                       83       66
Weapons offenses                    66       46
Other offenses                      61       40

 

Source: Office of Juvenile Justice and Delinquency Prevention (1999). Juvenile Offenders and Victims: 1999 National Report. Washington, DC: US Department of Justice, p. 178.

 

               Juveniles received sentences four years longer than adults for rape, two and a half years longer for murder, and two years longer for robbery, drug, and weapons offenses. Juveniles received slightly shorter terms for aggravated assault and property offenses. For all felonies, juveniles sentenced by adult courts received an average maximum sentence of 111 months, much longer than the average of 69 months handed adults. Given that violent adult offenders tend to have twice as many victims as violent juvenile offenders and arrive in court with longer criminal records, the longer sentences given youths are puzzling—and outrageous.
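
To make those gaps concrete, here is a quick back-of-envelope check of the differences in Table 12 (a minimal sketch in Python; the figures are simply the table’s own numbers):

```python
# Sentencing gaps from Table 12: average maximum sentences (months)
# handed youths versus adults by adult courts.
sentences = {
    # offense: (juveniles, adults)
    "Murder": (287, 258),
    "Rape": (200, 149),
    "Robbery": (139, 112),
    "Drug offenses": (80, 60),
    "Weapons offenses": (66, 46),
    "All felonies": (111, 69),
}

for offense, (juvenile, adult) in sentences.items():
    gap = juvenile - adult  # extra months handed juveniles
    print(f"{offense}: +{gap} months ({gap / 12:.1f} years longer)")
```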

               Even more bizarre, California juveniles consistently serve longer sentences for equivalent offenses than do adults, and they serve especially long sentences if sentenced by a juvenile court! For the 15-year 1987-2001 period, the California Youth Authority and California Department of Corrections released annual reports comparing the time served on sentences by 600,000 paroled adults, 26,000 paroled juveniles sentenced by juvenile courts, and 2,000 paroled juveniles sentenced by adult courts. These reports consistently show that, for more than three-fourths of equivalent offenses, juveniles on average served longer sentences than adults.

               Further, juveniles sentenced by juvenile courts generally serve longer sentences for equivalent offenses than juveniles sentenced by adult criminal courts, including for homicide. Compared to juveniles sentenced by adult courts, juveniles sentenced by juvenile courts served longer sentences for homicide (all categories), sex offenses other than rape, kidnapping, burglary, theft, auto theft, and drugs, and shorter sentences for rape, robbery, assault, forgery, and other offenses. Youths face a 60% chance of serving a longer sentence for a given offense when sentenced by a juvenile court than when transferred to adult court.

Finally, the juvenile-adult sentencing gap widened as California juvenile courts became harsher relative to adult courts over the last decade, particularly for violent offenses. As a result—a completely unexpected one—prosecutors who have pushed laws to try more youths as adults are, in practice, trying more youths as juveniles. For example, in the 1980s, prosecutors tried 43% of the juveniles accused of murder, and 22% of those accused of rape, in adult courts; in 2000 and 2001, those percentages were 19% and 13%. Other offenses show similar trends. The facts that convictions can be won more easily in juvenile court (youthful defendants have fewer rights there) and that juvenile judges mete out, if anything, harsher sentences than adult criminal court judges leave little incentive to transfer youths to adult court.

               The “get tough” wave in California and elsewhere did not cause the decline in juvenile crime, much as the crackdown lobbies are falling over themselves to take credit. As reported earlier, youths show the largest declines in serious offending of any age group except the elderly (or the smallest increases, depending on the crime chosen) over the last 25 years. The 1990s decline in homicide and other violent crime among juveniles began years before the crackdowns emerged, especially in Los Angeles, New York, and other urban centers.

               The few studies of the effects of adult-court and other tougher sentencing measures find they do not deter youth crime. A 1996 Justice Policy Institute study of 3,000 juveniles sentenced in Florida in 1987, matched for similarity of offenses and prior record, found those handled by adult courts were as likely to reoffend within six years as those handled by juvenile courts, but the adult-court sentencees reoffended more quickly and more often. Contradicting its own administration’s and Congress’s enthusiasm for forcing federal and state authorities to push more youths into adult court, the OJJDP reported in 2000 that “the imperfect evidence to date supports the conclusion that transfers [juveniles sentenced by adult courts] are more likely to recidivate.” (This conclusion might seem obvious from the high rates of adults who recidivate after going through the adult court system.) The agency recommended further study with more careful matching of youths sentenced by adult and juvenile courts for weapons use, injury to victim, drug-use history, and similar factors courts evaluate in imposing sentences.

               My argument against trying youths in adult courts, regardless of offense, is somewhat different from that of other juvenile justice system defenders. First, the practical reason: adult-court handling does not deter crime, as the rapid growth in serious offending by middle-agers shows. Why use the failed adult system as a model for juvenile justice?

               Second, it’s unfair—perhaps an “adolescent” sort of objection, but it seems to me an unfair approach ultimately proves unworkable. True, youths ages 13 or so and older are capable of planning criminal acts and understanding their consequences—at least as much as adult criminals are. (Crime-prone individuals are not known for long-term thinking regardless of age.) My experience working with youths matches what is revealed by cognitive development studies in that regard. Despite widespread adult prejudices about adolescents voiced by “experts” who have not consulted their own literature, large-scale research reviews do not show that adolescents are any more prone than adults to delusions of immortality, poor impulse control, irrational behavior, inability to understand the consequences of actions, hormonal influences, or depression.

               So if teenagers are cognitively similar to adults, why not try youths as adults? Because American society does not afford youths the adult rights necessary to ensure that young people control their own lives. Young people are not allowed to choose where they will live, who they will be supervised by, what adults they must associate with, what they can do with their time, or where they must be. The Supreme Court has justified abrogating juvenile rights by contending that juveniles are always considered to be in custody. Further, youths are not allowed to participate in their society’s governance through voting or holding office.

               In short, youths are not allowed elemental adult freedoms. A society that treats adolescents as children when it comes to rights is logically and morally bound to treat them as children when it comes to responsibility. America says to youths, “if you do the adult crime, you do the adult time,” but we do not say, “if you display adult maturity, you deserve adult rights.” The transparent excuses lawmakers and courts make to avoid facing the moral implications of stripping rights away from the same youths upon whom they impose super-adult responsibility and punishment (as noted, youths serve harsher sentences than adults for equivalent offenses) represent expressions of hostility, not a mature balancing of rights and accountability.

               Amnesty International reports that the United States now stands alone with Somalia (a nation with no central government), and against every other Western and Third World nation, in executing persons for crimes committed as juveniles. Prior to 1990, 281 Americans had been executed for crimes committed before age 18, including 126 for crimes committed at age 16 or younger. In the 1990s, 10 of the 19 juvenile offender executions worldwide were by the U.S., and 73 more juvenile offenders (two-thirds of them nonwhite) now wait on America’s death rows. Both Democratic and Republican leaders have advocated reducing the age for executing juveniles to 15 or younger.

               Further, the same raw, visceral emotion that is the chief predictor of when Americans apply the death penalty operates at the level of age as well as race. Just as a black person who kills a white is far more likely to be sentenced to death than a black person who kills a black, a youth who kills an adult is 2.4 times more likely to receive capital punishment than a youth who kills a youth. Conversely, whites who kill blacks, and adults who kill youths, are among the least likely to be put to death.

               That appalling, worsening record should bring sober reflection as to why no other peer society seems even remotely on the same path as ours. The Clinton administration’s response to Amnesty and United Nations Commission on Human Rights complaints that America’s death penalty embodied racism and violations of international standards? The juvenile death penalty is popular with the public (the only standard Clinton’s presidency understands).

               So what if it’s unfair, crackdown supporters argue? Youths who don’t want to get chewed up by the newer, meaner juvenile justice machine can avoid it by freely choosing not to commit serious crimes. It would be interesting to see today’s conservative Republicans and New Democrats apply that same logic to enact draconian punishments for corrupt politicians and corporate criminals. For example, Singapore (whose medieval penal system is adored by California Governor Gray Davis and other hardliners) viciously canes corrupt politicians and crooked businessmen—think a good horsewhipping might make white-collar criminals think twice?

               Further, the don’t-want-time-don’t-do-crime argument is undermined by the crackdowners’ parallel crusade to punish juveniles who commit no crimes through curfews, censorship, “zero-tolerance” standards for minor infractions and non-infractions, increased surveillance, and de-funding of jobs and other services in favor of mandatory behavior modification programs and incarceration. These approaches clearly have had little effect—at least not a positive one (see the subsection on curfews, below). The most repressive measures aimed at controlling “youth crime” by controlling youths fail because they are based on the wrong premises.

 

 

Panacea #2: Censorship

 

               One of the reasons America remains such an intractably high-risk society, suffering from rampant dangers other Western nations better control, is the habit of American politicians and institutions of blaming social problems on easy scapegoats and specious “cultural” influences rather than facing difficult issues like poverty and adult norms. Just as the mass shootings of the late 1990s promised to bring about a true national debate over gun violence, in stepped the president and quotable authorities to throw the debate off track, trivializing the real menaces facing kids at the same time they demonized kids and their supposed cultural influences. As of this writing, the most salient political result of the turmoil over school shootings is that teenagers now have to get fake ID’s to see R-rated movies, after presidential and congressional uproar pushed theaters to demand proof of age for such films.

               In late July 2000, the American Academy of Pediatrics, American Medical Association, American Psychological Association, and American Academy of Child and Adolescent Psychiatry joined in a two-day conference called by Senator Sam Brownback (R-Kansas) for the explicit purpose of blaming “youth violence” on the media. “I think this is an important turning point,” said Brownback, calling for a media code of conduct (that is, self-censorship). “Among the professional community, there’s no longer any doubt about this. For the first time, you have the four major medical and psychiatric associations coming together and stating flatly that violence in entertainment has a direct effect on violence in our children.”

               But, while Brownback compared the statement that media violence causes “youth violence” to the medical finding that cigarettes cause cancer, the associations were not nearly so sweeping: “We in no way mean to imply that entertainment violence is the sole, or even necessarily the most important factor contributing to youth aggression, anti-social attitudes, and violence. Family breakdown, peer influences, the availability of weapons, and numerous other factors contribute to these problems.”

               Why, then, weren’t these associations and the senator beginning with the “most important” factors linked to violence: poverty and household abuse? “It is inaccurate to imply that the published work strongly indicates a causal link between virtual and actual violence,” declared the international medical journal The Lancet in an August 14, 1999, editorial criticizing the American Academy of Pediatrics for continually blaming the media. “Contrast this knowledge vacuum with the data—and the daily presentations in hospital emergency departments—which point toward the proven and more immediate risk to babies and toddlers of living in poverty, with inadequate access to health care, or at risk of sexual and physical abuse.”

               It may be argued that even if violent media is only a minuscule contributor to violence in society and doesn’t affect most people at all, restricting it would still be of benefit—after all, what’s wrong with reducing violence a little bit? Aside from the fact that such an argument could be used to ban just about anything from cars to candlesticks that most people use beneficially and a few use to slaughter, what if the opposite is true: what if violent media helps reduce violence in society? The evidence indicates a lot more reason for ambivalence on this question than the popular debate has yet allowed.

               What does the hard evidence show? Those who insist that “more than 1,000 scientific studies” show media, video-game, and other “cultural” expressions of violence are major causes of real-life violence have three enormous objections to overcome. The first is fundamental: different groups of youths exposed to the same media express vastly different levels of violence. European and Japanese youths patronize similar violent media but display violent crime and murder rates only a small fraction of America’s. In the U.S., as we have seen, rates of gun killings by poorer youth of color are more than 10 times higher than among more affluent white youth. This is a particularly telling result, given that more affluent white kids have more access to violent video games and other media, as well as to guns in their homes, than do poorer youths.

               If the murder difference between high- and low-risk youth populations were, say, 30%, some plausible alternatives to socioeconomic inequality would make sense to explore. But a difference of 1,000% means that fixation on causal factors unrelated to economic disadvantage is escapist and diversionary. The second problem is one of timing: the advent and proliferation of violent games, R-rated movies, gangsta rap music, and other media accused of causing youth violence coincided not with more violence in society but less—especially by youths. If media violence causes real violence among youth, then more media violence should cause more youth violence. An inverse correlation suggests the one doesn’t cause the other. This issue is discussed later.

               The third problem is methodological: as media violence studies move from the laboratory to the real world, their validity drops drastically. What seems a sure thing in a white-coat setting—children beating on Bobo clown dolls after seeing a violent film, or college men expressing more aggression on pencil-and-paper surveys after viewing rough pornography—all but disappears when studied in the outside world.

               Moving out from the artificially controlled laboratory environment, large-scale studies in the real world find that affinity for violent media is surprisingly weakly correlated with real aggressive tendencies. Even taken at face value, violent media explains very little of society’s violence levels. For example, studies of thousands of children and teens typically produce correlations of less than 0.15 (on a scale of 0 to 1.00) between youths’ patronage of violent media and ratings or records of their real violence. In contrast, as discussed in Chapters 2 and 3, poverty and the level of adult violence are correlated with the level of “youth violence” by era and locale at levels of 0.60 to 0.90 (and in combination, more than 0.90)—effects vastly larger than even the strongest effect advocates claim for violent media.
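
Squaring a correlation approximates the share of variation it explains, which makes the gap between these figures concrete (a back-of-envelope sketch using the correlations just cited):

```python
# Share of variation "explained" is roughly the correlation squared (r^2).
media_r = 0.15   # typical violent-media correlation in large-scale studies
social_r = 0.90  # combined poverty/adult-violence correlation (Chapters 2-3)

print(f"violent media: r^2 = {media_r ** 2:.1%}")            # about 2%
print(f"poverty/adult violence: r^2 = {social_r ** 2:.1%}")  # about 81%
print(f"ratio: {social_r ** 2 / media_r ** 2:.0f} to 1")     # about 36 to 1
```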

               Temple University psychology professor Laurence Steinberg points out this important nuance: “There is good evidence that watching violent television and film increases aggression in children. But there is a far cry between ‘aggression’ (which in most of the studies that have been done could include things like pushing peers on the playground or acting aggressively in a lab situation) and serious violence. I know of no research to support the fact that watching violent TV or film makes children behave in ways that the general public thinks of when the word ‘violence’ is used.” The following examples illustrate why censoring media violence and other expressions is unlikely to reduce violence and may increase it.

 

 

The Game-Blamers

 

               The catch, of course, is that it is far easier for politicians, experts, and culture critics to talk about violent films and games than about poverty, racial disadvantage, household violence, adult behavior, and their own derelictions in addressing them. The chief example in the year 2000 is West Point psychologist and Lieutenant Colonel Dave Grossman’s claim (advanced in congressional testimony, among other venues) that violent media, especially video games, are pivotal in molding violent youths.

               In his two popular books, On Killing: The Psychological Costs of Learning to Kill in War and Society (1996) and Stop Teaching Our Kids to Kill: A Call to Action Against TV, Movie, and Video Game Violence (1999), Grossman claims the growth in aggravated assault offenses from the 1950s to the early 1990s, following the advent of television, proves media violence causes real violence. In particular, Grossman argues that the debut of bloody, interactive video games (those in which the action on the screen is shown from the point of view of the player, often sighting down a gun barrel) imbues players with both the physical skills and the psychological stimulation to commit the mayhem he insists is sweeping modern teendom. His theories have been cited by President Clinton and other top officials and quoted lavishly in the press, including hagiographic treatment by Rolling Stone’s Randall Sullivan.

               Evaluating Grossman’s cause-effect assertions first requires determining when the violent games he deplores appeared, then what followed according to the best measures of real-life violence. The precursor to the violent video games of the 1990s was the mass-marketing of action-oriented games, beginning with Nintendo’s introduction of Super Mario Brothers in 1985 and Street Fighter in 1990. These games were primitive and not interactive. Super Mario Brothers involved cartoon characters rather than human simulations, and Street Fighter’s characters were locked into fixed fighting styles which could be manipulated by the player in only limited ways to defeat caricatured gangsters. Grossman declares in On Killing that these earlier games are not the problem:

 

When I speak of violence enabling, I am not talking about video games in which the player defeats creatures by bopping them on the head. Nor am I talking about games where you maneuver swordsmen and archers to defeat monsters. On the borderline in violence enabling are games where you use a joystick to maneuver a gunsight around the screen to kill gangsters who pop up and fire at you. The kinds of games that are very definitely enabling violence are the ones in which you actually hold a weapon in your hand and fire it at human-shaped targets on the screen…There is a direct relationship between realism and degree of violence enabling, and the most realistic of these are games in which great bloody chunks fly off as you fire at the enemy.

 

That would describe the first interactive violent game, Mortal Kombat, marketed by Nintendo beginning in 1992, which featured an array of killing techniques and graphic results. Gamester historian J.C. Herz’s Joystick Nation (1997) called Mortal Kombat “Street Fighter squared”:

               Not only did Mortal Kombat have an astounding array of strikes, kicks, and balletic combination moves, but it added a frisson of Fatality Moves, which allowed you to kick your opponent while he was down. “Finish him!” was your cue to put the loser out of his misery, by, say, tearing out his heart or ripping off his head and holding the severed cranium, spinal cord dangling, aloft. Mortal Kombat gave players not only the primal thrill of vanquishing an opponent but also the theatrical chest-beating aftermath.

               Sound like the trigger for white-kid killing sprees? It gets worse. Mortal Kombat’s crude mayhem simulations paled beside the explosion in violent interactive gaming which occurred with Sega’s 1994 release of its carnographic, ultra-realistic version of Mortal Kombat.

               If the violent video game era has an inauguration date, Sega’s “Mortal Friday” (September 9, 1994, kicking off a record $50 million sales weekend) is it. Before then, video game sales had been rising only modestly, from $10.49 per American age 12 and older in 1990 to $11.79 in 1994 (constant 2001 dollars). But from 1994 to 2001, per-capita video game spending jumped to $27.96. The average number of hours per year spent playing video games, which had risen slowly from 19 in 1990 to 22 in 1994, leaped to 78 hours in 1999.

               Not only was video game patronage firing up in the mid- and late 1990s, each new edition was bloodier than its predecessor. “Mortal Kombat 2 is more violent than Mortal Kombat 1, the Mortal Kombat series is more violent than the Street Fighter series…and the Sega version of Mortal Kombat is more violent than the Nintendo version,” media/gender specialist Marsha Kinder wrote in Interacting with Video. As games advanced through the Quake and Doom series, they became more realistic, graphic, bloody, and complex. At the 1999 video game industry exposition at the Los Angeles Civic Center, I watched marketers display the latest versions, in which the player could fix his crosshairs on several dozen body parts, hits upon which produced agonized shrieking, blood spewing, wound-clutching, staggering, and twitching preparatory to the bullet-hail putaway.

               But no matter how “realistic” the game, the villains remained aracial caricatures of evil, popping around dark corners and bellowing cursed threats. It’s clearly a game, and no halfway balanced kid would confuse it with shooting up the neighborhood. Those who claim most of today’s kids are “desensitized” to violence have been seduced by National Enquirer-type anecdotes—and I can supply plenty of counter-anecdotes from years of working with kids to go with the statistics showing this is not a sadistic generation.

               Still, reasonable people could conclude the effects of gratuitous video-game carnage on players (especially already unbalanced ones such as Columbine’s Klebold and Harris) couldn’t be good. But since evil has been attributed to every 20th century cultural trend from jazz to heavy metal rock music, the question is not what non-gamers might guess would happen, but whether violent games actually do have bad effects. After all, the only ethnographic study (in which the researcher studies the subjects in their familiar environs and asks them to characterize experiences in their own words, as opposed to laboratory observation or interpreting answers to standard questionnaires) of heavy-metalheads found the boys describing their headbanging tunes as calming and violence-deterring.

               Interestingly, research on video game violence points to the same sublimating effect—aggressive kids and adults (game popularity extends well into the 40s) discharge their testosterone blowing away screen villains and lay off live humans. Since none of the interests raising the video game bogeyman would benefit from exploring that avenue, real life is left out.

               The first (and, as of this writing, only) study of video game effects appeared in the April 2000 Journal of Personality and Social Psychology, co-authored by Lenoir-Rhyne College psychologist Karen Dill and University of Missouri-Columbia psychologist Craig Anderson. The researchers asked 227 college undergraduates averaging 18.5 years old whether they had played violent video games and whether they had been aggressive or delinquent. They found a “strong association” (0.46 on a scale of 0 to 1.00) between playing violent video games and subjects’ self-reported “aggressive delinquent behavior,” and a lesser association (0.22) between the games and “aggressive personality” traits shown on tests. These are not as high as the correlations between poverty or adult violence rates and real violent crime rates among youth (typically 0.60 to 0.90), but considerably stronger than those found in most media-violence research.

               The researchers then recruited 210 different undergraduates to play video games in a laboratory. They found students who played the violent games were more likely to administer harsher punishments (measured by the loudness and duration of a static-like “blast of noise”) to competitors than those who played non-violent games. The authors declared that theirs, like previous media violence studies, showed those who played violent games were more aggressive both on a short-term basis (they administered harsher punishments) and a long-term basis (they were more delinquent).

               What did this study (typical of media violence studies) really find? First, that those who say they are aggressive in a paper-and-pencil survey also say they like violent media. But “correlation does not equal causation”: it could be that being aggressive to start with causes people to like violent media, perhaps to help them relieve aggressive tendencies in acceptable ways. Second, the study found that in a laboratory setting, students exposed to violent media temporarily are more willing to administer a non-violent “punishment” to subjects they know will not be harmed. That subjects incline their behaviors to meet researcher expectations (called “demand characteristics” or the “experimenter expectation” effect) is a further problem that 40 years of research shows seriously hampers laboratory studies, particularly ones of this type that seek to provoke artificial subject responses.

               Given those limitations, a scholarly stance would dictate conservatism in interpreting results for the news media and public. Instead, Dr. Dill indulged maximum sensation: “I’m not saying that all little boys who play violent video games are going to pick up guns and shoot someone,” Dill told ABC News’ Health Scout. “But some of them will.” This statement combines meaninglessness with academic malpractice. In any population of four million Americans (the number of boys who play violent games), the number who shoot someone will be larger than zero but will fall well short of four million. None of the 500 subjects in Dill’s study (game players or otherwise) reported ever firing a gun at anyone, nor did the study examine that question beyond how subjects would punish each other with “noise.” (Analogy: drivers angrily honk horns and flip fingers at each other all the time, but very few shoot other motorists.) The correct answer, as Dill easily could have ascertained from comparing the statistics on the rarity of juvenile gun crime to the millions of boys who play violent video games, is: “practically none of them will ever shoot anyone.”

               After all, Dill’s own study found 97% of the boys and 88% of the girls played video games, and the players played an average of four different games each for an average of 2.1 hours per week (down from 5.5 hours in junior high school). It’s not clear what proportion of these games the authors classified as violent or what proportion of their subjects played violent games, which makes their analysis hard to follow. They concluded that about one-fifth of the individual violence in violent game-players is associated with playing the games (squaring the study’s 0.46 correlation yields roughly 21% of the variation).

               One fifth? That’s a lot. When multiplied by the millions of youths who play these games, this would predict a considerable leap in violence among youths directly associated with the proliferation of violent games. Indeed, unless her statement to the media above is completely pointless, Dill meant to convey that playing violent games will cause some of their players to shoot someone.

               Grossman makes similar assertions. His 1999 book, Stop Teaching Our Kids to Kill, begins with a curious statement:

 

In Paducah, Kentucky, Michael Carneal, a fourteen-year-old boy who stole a gun from a neighbor’s house, brought it to school and fired eight shots at a student prayer group as they were breaking up. Prior to this event, he had never shot a real gun before. Of the eight shots he fired, he had eight hits on eight different kids. Five were head shots, the other three upper torso. The result was three dead, one paralyzed for life. The FBI says that the average, experienced, qualified law enforcement officer, in the average shootout, at an average range of seven yards, hits with less than one bullet in five. How does a child acquire such killing ability? What would lead him to go out and commit such a horrific act?

 

This sounds “chilling,” and it has been solemnly quoted by those who never checked the record or thought about what he was saying. If Grossman is correct that a 14-year-old gun novice self-taught with a video-game joystick is a better marksman than firing-range-trained, seasoned officers, the FBI and police agencies are monstrous buffoons; there must be millions of grade school video gamers who could win a shootout against police veterans! The facts aren’t that dramatic. Carneal previously had target practice with real firearms. He was 20 feet from the victims he shot, the distance across an average living room. He was not using a handgun in a “shootout” with active, aware opponents, but opened fire with a rifle on a cluster of passive, surprised victims. His marksmanship is not miraculous.

               Nor does Grossman assess, beyond speculation, why Carneal did what he did. Instead, Grossman argues circumstantially that television violence and violent video games must be behind the increases in real violence in society and by youths. In On Killing, he claims the “astounding” growth in America’s rate of aggravated assaults reported to police from 1957 to 1992 is due mainly to television violence. Such a “correlation equals causation” notion has been rejected from ancient Roman proverbs to modern statistical texts as a classic fallacy, even backwards logic (e.g., if a fire truck is present at most fires, does that prove fire trucks cause fires?). It further ignores a wealth of nuances, such as the greater tendency of police to make assault arrests in the 1990s for domestic and street violence incidents that would have drawn a warning or no response in prior decades. But put aside those logical objections for a minute: even granting the premise that violent games cause crime, Grossman’s argument is thoroughly defective. This becomes clear when the post-1992 violence trends are charted below.

               Grossman’s and Dill’s work flatly predicts that the rapid proliferation of millions of violent video games would cause more murders and violent crimes (school shootings are explicitly cited in this context) centering on the most voracious players (eight-to-14-year-old boys) and including older teens. This videogame-promoted violence should have begun in the early 1990s with the marketing of Mortal Kombat and should escalate sharply from 1994 into the late 1990s, paralleling rapidly escalating video-game sales, numbers of players, average playing time, and violence content. After all, if games cause real-life violence in direct proportion to their violent realism, and if the number of teenagers playing violent, interactive games rose from zero in 1990 to four million-plus by 2001, we should see increases in teenage violence in the late 1990s.

 

 

The Body Count

 

               By every real-life measure, what transpired was the opposite of what Dill’s and Grossman’s assertions about violent video games predicted. The 1992 debut of the violent, interactive video games they finger for “enabling violence” coincided with a reversal of the previous rise in aggravated assault rates, followed by a steep 27% decline from 1992 through 2002.

               The most reliable measure of violence, the Department of Justice’s annual National Crime Victimization Survey of 50,000 families, shows violent victimization in the late 1990s at its lowest level since the survey began in 1973. The second most reliable measure, the FBI’s annual tabulation of violent crimes reported to police, also stood considerably lower in 1999 than in 1973 (see Chapter 2).

               The crime decline in the late 1990s was particularly precipitous. From 1992 through 2002—the decade in which ultra-violent video games such as Quake and Doom sold 4.2 million copies and tens of millions played them—the FBI’s Uniform Crime Report shows violent crimes declined 35%, led by drops in robbery (–45%), murder (–40%), rape (–23%), and aggravated assault (–30%).

               So much for violence in society; what about violence by kids? Homicide arrest rates declined even more steeply among youths ages 10-17 (–66%) than among adults. Further, young white males, the group that patronizes violent games more than anyone else, showed bigger crime declines than any other group!

               White kids’ most dramatic improvements of all kinds coincided with the proliferation of the very influences (interactive video games, movie mayhem, Internet corruptions, gangsta rap music, consumerist pressures) Grossman warns are mass-warping them. (Equally dramatic, and against much steeper socioeconomic odds, murder and crime rates also plunged among black, Hispanic, and Asian youths in the 1990s.) California statistics clearly show white Anglo kids of the 1990s are dramatically healthier and safer on nearly every count than their parents were as teenagers. So far, Grossman has not explained why, if video-game violence affects real life, real-life violence trends are the opposite of what he predicts.

               Trends among boys ages 8-14, the group Dill cites as playing violent games the most, also contradict her conclusions. From 1985 to 1991, murder rates by boys ages 8-14 more than doubled before leveling off to a 1993 peak. Then, from 1993 through 2002, the period of most intensive sales and playing of violent video games, murder rates by boys ages 8-14 declined for 10 straight years—by an astounding 80% during the period—reaching their lowest level in three decades in 2002.

               Further, from 1993 to 2002, the supposedly violence-prone male population ages 14-17 increased by 1.1 million. Meanwhile, the number of murder arrests among boys that age declined from 3,500 in 1993 to 1,300 in 2002, and the number of violence arrests likewise dropped from 90,000 in 1993 to 80,000 in 2002. In other words, violence and murder by young teenage boys were skyrocketing until violent video games proliferated, then plummeted! Murder and violent crime fell fastest among exactly the demographic—younger teenage boys, particularly whites—that most patronized violent video games!

               How do we explain these trends using all of the facts (something, by the way, research scientists are supposed to attempt)? The most plausible conclusion is that video games have nothing to do with real-life violence; other factors such as economic and employment trends drove the 1980s rise and the 1990s decline. In that case, the whole video game furor is unimportant, and politicians and politically-attuned social scientists once again have thrown the debate off track.

               Alternatively, if we hypothesize that video-game violence affects real behaviors, the debate is even farther off the rails. In that case, the declining murder and violence rate among teenage males as video games multiplied indicates that aggressive individuals use violent video games (both personally and in laboratory experiments) to sublimate their violence harmlessly so that they become less likely to express it in real life. That counter-hypothesis remains to be proven. But if we’re going to accept the idea that correlation implies some kind of causality, as others seem to have erroneously done, then it does fit all the facts, not just the anecdotes Dill and other media-violence critics choose to cite.

 

 

Unreal: More Important Than Real

 

               Likewise ignoring real-life trends, Mother Jones’ November-December 1999 issue published a leaky article by Paul Keegan asserting that kids and society are becoming more violent due to violent video games and other media. Keegan didn’t state this baldly but implied it through a series of direly worded questions.

               While violent games “probably won’t turn your son into a killer,” Mother Jones’ bold-faced lead demurs, the article promises to answer the question: “What is happening to kids raised on the most violent interactive mass-media entertainment ever devised?” The answer, never stated or discussed in the article, is clear from the FBI, Department of Justice, and homicide records cited in the previous section: kids and their society became dramatically less violent, and the kids most likely to patronize violent games and other media showed the fastest-declining and lowest levels of real-world violent behavior.

               But the fundamental, unexamined assumption implicit in Keegan’s article and the culture critics he quotes is that America is becoming more violent—indeed, “culture critics,” by definition, must always pronounce society unhealthy and getting worse. Focusing on the effects of video games on “the subculture of young, white American males who make up the industry’s technological vanguard,” Keegan features scary assertions by experts (Grossman, Kansas State University child psychologist John Murray, and University of Miami education professor Eugene Provenzo) that violent games were making Americans “more willing to tolerate ever-increasing levels of violence in our society.” In Keegan’s style of conclusion-by-question: “Is the recent rash of school shootings being caused, at least in part, by the exponential increase in technology’s ability to numb pain by drawing kids into an isolated world where violence and aggression have no consequences?”

               Well, is it? Keegan offers no evidence other than Provenzo’s it-must-be assertion:

 

It’s an alienation we’ve created in suburbs and small towns, and it’s aggravated by a whole series of media formats. Our kids are losing their handle on reality because of everything from malls to video games. That makes people think they can shoot someone and it doesn’t hurt, that they can recover…Cultural critics like Provenzo see evidence that the damaging effects of this phenomenon are hardly limited to a few crackpot shooters in remote places like Jonesboro and [West] Paducah. And there is something chilling in the number of kids across the country who reacted deeply not only to the isolation and alienation of the Columbine killers, but to the way they vented their anger.

 

What “phenomenon”? Keegan and Provenzo posit an unspecified “number” of kids who identified not just with the Columbine shooters’ alienation, but also with their murderous manifestation of it. Reality check (a Mother Jones aphorism unfortunately not applied to this article): If that is true, why aren’t there large numbers of school shootings? Why, in particular, weren’t there commemorative shootings on the anniversary of Columbine, as the press and experts eagerly predicted? If even 1% of teenaged boys identified with the Columbine killers, and 1% of that 1% acted on their homicidal alienation, there would be dozens of school massacres every week, not two or three a year. And if we’re talking even that large a fraction—1 in 10,000 boys—it’s hard to argue that such a tiny proportion proves video games played by millions of teenage boys are having a mass effect.
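
The arithmetic behind that reality check is easy to run (a minimal sketch; the 15 million figure for U.S. teenage boys is an assumed round number for illustration):

```python
teen_boys = 15_000_000         # assumed U.S. teenage-boy population
identified = teen_boys * 0.01  # 1% who "identified" with the killers
acted = identified * 0.01      # 1% of that 1% acting on the alienation

print(f"{acted:,.0f} shooters a year")             # 1,500
print(f"about {acted / 52:.0f} massacres a week")  # about 29 a week --
# dozens, versus the actual two or three a year
```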

               But, all right—suppose a violent video game does influence a disturbed individual to commit violence. Suppose Carneal, Klebold, and Harris would not have committed their shootings if these games did not exist (isn’t that what the culture critics are saying)? Assuming we could prove cause and effect, what do we censor and ban? Several of the school shooters, most notably the ones in Georgia and Oklahoma, were active churchgoers. At least three of the shooters had been on psychiatric medication. Charles Joseph Whitman, the University of Texas Tower gunner, was an Eagle Scout who had the Boy Scout Handbook with him. Charles Manson took murderous cues from Beatles music. Obsessed fan Robert John Bardo murdered actress Rebecca Schaeffer after hearing messages in U2 ballads. The 13-year-old school gunner who wounded four in Fort Gibson, Oklahoma, modeled himself after General George Patton’s “cool under fire” persona, a defense psychiatrist testified in a June 11, 2000, hearing. The problem with censoring supposedly provocative media is that psychopaths respond to things most people don’t find violence-provoking. Asks Keegan:

 

Is there direct proof of cause and effect? “Not only isn’t there proof, but there may never be proof,” says Kansas State’s Murray. But, he continues, “At some point, you have to say that if exposure to violence is related to aggressive attitudes and values, and if [the latter] are related to shooting classmates or acting aggressively—all of which we know to be true—then it stands to reason that there is probably a link between exposure to violence and aggressive actions.”

 

And if there is a link, Murray could have continued, it is a very weak one. The “proof” from real life is that correlational studies show only tenuous associations, very few video game players commit mayhem, and the rapid expansion of violent video games in American culture coincided with plummeting violence by the young men most likely to play them.

               Further, as noted, the links Murray postulates could go in reverse—violent people may be attracted to violent games and media as a means of sublimating their aggression. Suppose violent video games did have a real-world effect on young men, and the effect was to make them less violent? That would explain both the weak statistical link between aggressive people and patronage of violent media as well as the decline in violence as violent media proliferated. Granted, in an age in which violent media are under intense political attack, we might not expect social scientists who want to remain prominent in the debate to “go there,” but that is a more logical reading of the trends than the empty alarmisms that fill Keegan’s article and much of the culture-war’s claims about the “corrosive effects” of the media and popular culture.

               These questions, so obvious they make or break an article on media violence, are nowhere engaged. Instead, here is Keegan’s question-conclusion as to whether video gaming affects real-life behavior: “But how could it not? If media didn’t affect real-world behavior, there would be no such thing as advertising.” I hear this kind of logic from cultural critics all the time, and it baffles me. Are they contending there is no difference between saying that media expressions might cause a person to buy a Pepsi instead of a Coke versus causing a person to plant bombs and open fire on crowds? I sent Mother Jones the following letter:

 

               Paul Keegan and the experts he quotes insinuate that “young white males,” the group most likely to play violent video games, are big contributors to the “ever increasing levels of violence in our society.” This basic premise appears untrue. White youths show reduced homicide, felony crime, suicide, drug abuse and other unhealthy behaviors over the last quarter century when media violence supposedly was inciting them. In California, the annual rate of murders per 100,000 white, non-Latino teens dropped sharply, from 4.7 in the mid-1970s to 4.1 in 1990 to 2.5 in the late 1990s.

                   Improvements in white-teen behaviors have been especially dramatic in the 1990s, when violent games such as Mortal Kombat (1991), Doom (1992), Quake (1996), and other supposedly corrupting media proliferated. During this time, California white teens showed dramatic DECLINES in murder arrest (down 39%), rape (down 42%), felony arrest (down 26%), gun deaths (down 32%), suicide (down 37%), drug deaths (down 15%), and all violent deaths (down 34%). The recent school shootings are not harbingers of rising white-teen violence, but rare anomalies in an improving generation—improvements all the more surprising given rapid increases in violence, crime, and drug abuse among white grownups.

                   It’s not kids, but the experts Keegan quotes who “are losing their handle on reality.” In 1998, the murder rate was 10 times higher among black teens, and six times higher among Latino teens, than among white teens. Poverty, joblessness, racism, and inequality are far more menacing than the chimerical threats posed by video games and pop culture.

 

Before publishing my letter in the January–February 2000 issue, Mother Jones fact-checkers meticulously verified my figures. They called a half-dozen times to pin down citations for my crime and vital statistics numbers, then looked up the sources and matched the data against my statements. (They found and corrected one small error due to my use of an inconsistent formula. Good job.) But when I asked if they likewise had fact-checked the statements by the authorities in Keegan’s article, they replied these statements were too vague to check!

               Finally, it remains curious that the mass media and liberal publications such as Mother Jones and Rolling Stone that have fixated on a few school shooters remain indifferent to the much more alarming rash of rage murders and family violence by adults that kills many more people, including kids. Is today’s liberal-left media so conformist to culture-war dogma that it lets established interests dictate its priorities? What purpose is served by another repetitive left-media rehash of Columbine or violent-TV/games-and-teenagers, quoting the same “experts” that mainstream and right-wing publications feature—especially when their statements are “too vague to check” for factual validity?

 

 

Panacea #3: “Getting Guns Out of the Hands of Kids”

 

               What could be more sensible? If we don’t want kids shooting people, take away their guns. Aside from being a mice-belling-the-cat homily—how are we going to “take away their guns” in a society with a mere quarter-billion firearms scattered about?—the “kids and guns” furor contains several deadly paradoxes. As shown in Chapter 3, there are only two important issues in “youth gun violence”: adult gun violence and poverty. Together, rates of poverty and adult gun violence predict child gun death rates for each state and year to within 90% of their actual values.
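
A two-factor prediction of this kind is typically done with ordinary multiple regression; a minimal sketch of the idea (with invented state-level numbers, not the chapter’s actual data) might look like this:

```python
import numpy as np

# Invented illustrative figures for five states (NOT the real data):
poverty = np.array([12.0, 18.5, 22.0, 15.0, 25.5])  # child poverty rate (%)
adult_gun = np.array([6.1, 9.4, 11.8, 7.3, 13.2])   # adult gun deaths /100k
child_gun = np.array([3.1, 5.2, 6.8, 3.9, 7.6])     # child gun deaths /100k

# Fit: child rate = b0 + b1 * poverty + b2 * adult gun rate
X = np.column_stack([np.ones_like(poverty), poverty, adult_gun])
coef, *_ = np.linalg.lstsq(X, child_gun, rcond=None)

predicted = X @ coef
r = np.corrcoef(predicted, child_gun)[0, 1]
print(f"multiple correlation = {r:.2f}")  # the chapter reports above 0.90
```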

               Ignoring these crucial factors, major gun-rights and gun-control lobbies took aim at the meaningless issue of “youth access to guns.” The most prominent example is the Children’s Defense Fund’s 1999 report, Children and Guns, which is riddled with misinformation, illogic, and deceptive statistics. All of the CDF’s recommended “action steps” to “protect kids from guns” are aimed at restricting and reforming youth behavior. CDF proposes more laws and policies to criminalize ownership or acquisition of guns by persons under age 21 and to implement educational and control schemes to deter kids from getting guns, all while preserving “legitimate” adult rights to have guns for “sporting purpose.” Because the CDF restricts itself to superficial issues that do not address the most crucial correlates of gun violence by and toward children, its proposed remedies to “get guns out of the hands of children” would not be expected to have much life-saving effect. In fact, the data show they do not, a reality that the CDF should have frankly admitted en route to sober reanalysis.

               Instead, the CDF resorts to crudely deceptive statistical tactics to shore up its claim that the “10 best states in child gun safety laws” are improving child safety faster than the “worst states in child gun safety laws.” The CDF states its major conclusion in bold caps: “The total decrease in child gun deaths among the 10 states with the greatest number of child gun safety measures is two times greater than the states with the fewest…from 1996 to 1997.”

               But that’s no achievement! The 10 “best” states had more than twice the total population, and more than twice the total child gun deaths, of the “worst” states to begin with, so of course their “total decrease” in gun deaths would be expected to be twice as large as well! This deceit is alarming: it suggests that, like the gun lobby, child advocacy groups are more interested in protecting agendas than kids.

               A college undergraduate who pulled such mathematical chicanery would get an F and a lecture, yet the CDF applies it to a life-and-death issue. This kind of comparison is useful only when done on a per capita basis to account for varying population sizes, something the CDF failed to do. When standard statistical measures (changes in gun-death rates per 100,000 persons ages 0-19, both absolutely and relative to adults in each state) are substituted, a completely different picture emerges (Table 13). From 1996 to 1997, the CDF’s 10 “best” gun-law states experienced a 13.0% decline in their combined rate of child gun deaths—but the “worst” states did slightly better (down 13.4%).
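               For readers who want to check the arithmetic, here is a minimal sketch in Python of the per capita correction. The counts are invented in roughly the CDF’s stated proportions; its raw state totals are not reproduced here.

def pct_change(new, old):
    # Percent change from old to new.
    return 100.0 * (new - old) / old

# Invented counts: the "best" states have about twice the child population
# and twice the child gun deaths of the "worst" states.
best = {"deaths_96": 1000, "deaths_97": 870, "pop_0_19": 16_000_000}
worst = {"deaths_96": 500, "deaths_97": 433, "pop_0_19": 8_000_000}

for name, g in (("best", best), ("worst", worst)):
    total_drop = g["deaths_96"] - g["deaths_97"]
    rate_96 = 100_000 * g["deaths_96"] / g["pop_0_19"]
    rate_97 = 100_000 * g["deaths_97"] / g["pop_0_19"]
    print(name, "total drop:", total_drop,
          "per capita change: %.1f%%" % pct_change(rate_97, rate_96))

# The "best" group's total drop (130) is double the "worst" group's (67), yet
# the per capita declines are essentially the same (-13.0% vs. -13.4%):
# the CDF's "two times greater" is an artifact of group size.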

 

Table 13. Child gun-death rates and trends are worse in states the Children's Defense Fund rated as “best” than in the states they rated “worst”!

 

                         Child gun-death rate          Child gun-death rate
                          (net of adult rate)*              (absolute)*

Year              “Best” states   “Worst” states   “Best” states   “Worst” states

1990                   0.448           0.402             7.2             8.0
1991                   0.492           0.396             7.9             8.1
1992                   0.457           0.418             7.3             8.1
1993                   0.493           0.429             8.1             8.7
1994                   0.514           0.466             8.0             8.8
1995                   0.491           0.404             7.2             6.8
1996                   0.462           0.398             6.2             6.3
1997                   0.427           0.359             5.4             5.8
1998                   0.410           0.323             4.8             4.6

Change, 1998 rate vs. rate in:
1990-91               -12.8%          -19.1%           -37.0%          -43.1%
1994                  -20.3           -30.7            -40.4           -47.8

Average child poverty rate, 1995:
                       21.5%           24.6%

 

*Net rate is the gun-death rate per 100,000 persons ages 0-19 divided by the gun-death rate per 100,000 persons ages 20 and older; a value of 0.448 means the gun-death rate among persons ages 0-19 is 44.8% of the gun-death rate among persons 20 and older. Absolute rate is gun deaths per 100,000 persons ages 0-19. Sources: National Center for Health Statistics, U.S. Mortality Detail File, 1990-98. Children’s Defense Fund, Children and Guns, 1999, Washington, DC.

 

And that’s only one year. From the decade’s initial years (1990–91) to the latest year (1998), the “best” states (child gun death rates down 37%) didn’t fare as well as the “worst” states (down 43%). In fact, the “worst” states experienced bigger drops in their rates of child gun fatality than the “best” states in seven of the eight years of the decade!

               Bad enough, but it gets worse. A better method considers whether factors unrelated to child gun laws (such as differing poverty or adult gun-death rates or trends) might have caused child gun-death rates or trends to differ in the two categories of states. One way to factor out these enormous influences is to compare rates of gun deaths for adults 20 and older for each state and year to corresponding rates for youths under age 20 (left-hand columns, Table 13).

               These better-controlled “net rates” (child gun-death rates divided by adult gun-death rates by state and year) cast the states the CDF rated as “best” in an even more unfavorable light. From 1996 to 1997, the net rate of child gun deaths fell more slowly in the “best” states (down 7.5%) than in the “worst” states (down 9.8%). Comparing 1998 to 1994, the peak year for child gun deaths, the “best” states experienced a 20.3% decline in child gun fatalities, and the “worst” states a reduction of 30.7%.
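               As a check, Table 13’s summary row can be reproduced, within rounding, from its annual net rates; a sketch:

# Reproduce Table 13's "Change, 1998 rate vs. 1990-91" row for the net rates.
# The table's entries are themselves rounded, so results can differ from the
# text by a tenth of a percentage point.
net = {  # child gun-death rate divided by adult gun-death rate
    "best":  {1990: 0.448, 1991: 0.492, 1998: 0.410},
    "worst": {1990: 0.402, 1991: 0.396, 1998: 0.323},
}

for group, r in net.items():
    base = (r[1990] + r[1991]) / 2  # 1990-91 average
    print(group, "%+.1f%%" % (100.0 * (r[1998] - base) / base))
# -> best -12.8%, worst -19.0% (the table's -19.1% reflects unrounded data).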

               Finally, in the 10 “best” states in 1998, both the number of child gun deaths as a percentage of total gun deaths (14.4%) and ratio of child gun-death rates to adult gun-death rates (0.410) were considerably higher than in the “worst” states (11.2% and 0.323, respectively). This dismal result occurs despite the fact that youths in the “best” states enjoy lower poverty rates (21.5%) and lower adult gun-death rates (11.6 per 100,000 population) than youths in the “worst” states (24.6% and 14.1, respectively). Since these results are not statistically significant, there is no mathematically provable difference between the “best” and “worst” states. That would be devastating enough to CDF’s claims. More stringent analysis, however, does produce a significant and disturbing result.

               By standard mathematical analysis (called “multiple regression”), the adult gun-death rate and the child poverty rate can be used to create a formula to predict what the child gun death rate should be for each state. The results are disastrous for CDF claims that the gun controls it advocates really protect kids. When the year (1997) the CDF itself picks is examined, the states the CDF rated “best” suffered actual child gun-death rates 17% HIGHER than would be predicted, while the states rated “worst” experienced slightly LOWER child gun death rates than predicted.

               The results are statistically significant: the “best” states had 180 more children die from guns than they should have in 1997, while the “worst” states had no more child gun fatalities than predicted from social factors. Summary: children in the “best” states, which enacted the CDF’s favored restrictions on juvenile gun use, are equally at risk (by crude rate) and substantially more at risk (by net rate) of dying by guns, and showed smaller reductions in gun-death rates through 1998, than children in the “worst” states, which allow youths to have guns.
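               For readers who want the mechanics, here is a minimal sketch of that regression-and-residuals test. The state data below are invented stand-ins; the actual dataset is not reproduced in the text.

import numpy as np

# Predict each state's child gun-death rate from its adult gun-death rate and
# child poverty rate, then compare actual rates with predicted ones.
rng = np.random.default_rng(0)
n = 50
adult_rate = rng.uniform(5, 25, n)        # adult gun deaths per 100,000
child_poverty = rng.uniform(10, 35, n)    # percent of children in poverty
child_rate = 0.3 * adult_rate + 0.15 * child_poverty + rng.normal(0, 0.8, n)

# Ordinary least squares: child_rate ~ b0 + b1*adult_rate + b2*child_poverty
X = np.column_stack([np.ones(n), adult_rate, child_poverty])
coef, *_ = np.linalg.lstsq(X, child_rate, rcond=None)
predicted = X @ coef

# A state with a large positive residual has MORE child gun deaths than its
# poverty and adult gun-death rates predict -- the test applied above to the
# CDF's "best" (17% above prediction) and "worst" (slightly below) groups.
residual_pct = 100.0 * (child_rate - predicted) / predicted
print(np.round(residual_pct[:5], 1))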

               Though it might seem so, this is not an illogical result. Trying to stop teenagers from getting guns while continuing to permit widespread adult gun ownership is not gun control. If adults are allowed to legally acquire guns, but youths are strictly prohibited, youths are forced outside the system and are more likely to turn to gangs and other illegal sources to obtain firearms. This establishes ties between youths and illicit gun sources that promote greater chances of violence.

               As a final example of the CDF’s Beltway-biased unreality, one state it bafflingly cites as among the “best” at “protecting children, not guns”—Illinois—displays a gun death level among children that is staggeringly worse than any other state’s. In no other state do persons under age 20 have even half the odds of dying by guns as adults 20 or older; in Illinois, that ratio is two-thirds. In 1998, Illinois’ net rate of child gun fatality was 74% above the national average, making it a laboratory for study into youth gun risk. Yet the CDF does not discuss this or any other disturbing complexities that challenge its claims.

               Two dismaying conclusions are evident: either the CDF calculated gun-death trends in conventional ways, found they didn’t support its case, and dredged up a superficial, misleading measure to hide that fact; or the CDF never bothered to analyze the trends beyond second-grade arithmetic standards once it got an answer it liked. Whether deception or negligence is at work, this traditional lobby’s deceit is alarming. Are child advocacy and gun-control lobbies so indifferent to real gun violence and child fatalities that they can’t be bothered to perform even a minimally rigorous analysis and reholster their pet agendas when analysis does not support them?

               At what point, then, does pushing popular but ineffective, possibly harmful remedies to “protect children” cease being simply diversionary and start to represent a danger to kids? The CDF’s vital omissions, mathematical subterfuge, and inflammatory media campaign surrounding “children and guns” are a “wake-up call” as to how far child advocacy groups have strayed from the goal of “protecting children.” Both the NRA’s “Eddie Eagle” gun safety campaign and gun-control lobbies’ clichés about “getting guns out of the hands of children” are meaningless without serious measures to get guns out of the hands of the adults, who kill most teens and nearly all of the children and adults who die from gunfire.

 

 

Beyond Gun Control

 

               At present, state legislatures, Congress, and at least one major manufacturer, Smith & Wesson, are considering trigger locks on firearms, and lawmakers are mulling mandating secure gun storage. These are good ideas; they would probably save a few small kids from shootings and a few teenagers and grownups from gunplay involving impulsive passion combined with a handy firearm, and it’s hard for anyone other than gun makers and the National Rifle Association to argue with that.

               But campaigns for stronger gun controls should be tempered by acknowledgment that these would have only small effects on gun violence. In preliminary state-by-state analyses for the Justice Policy Institute of gun-control measures (as tabulated by the Open Society Institute’s detailed 1999 report, Gun Control in the United States [www.soros.org/crime/gunreport.htm]), I found that rates of adult gun fatality and of poverty were by far the most powerful predictors of youth gun-death rates. So powerful, in fact, that these two variables had to be removed from the analysis before the effects of gun-control laws could be glimpsed.

               Still, states with tough gun controls—particularly gun safety training and waiting periods prior to gun acquisition, gun storage requirements, and registration and licensing of gun owners—had significantly lower rates of teenage and adult gun fatality during the 1990s even when other variables were held constant. Whether stronger gun controls are the cause of lower rates of gun ownership, or the result of them (less gun ownership meaning less political resistance to gun restrictions), remains to be determined. Notably, states with minimum age limits for purchase, possession, or acquisition of firearms did not have lower rates of youthful gun death. As is usually the case, the policies that would do the most good are the hardest to enact.

               A 1997 study in the Journal of the American Medical Association linked state laws mandating safe storage of guns to a 23% reduction in accidental shootings of younger children (and of adults, though this result received little attention). No significant effect was found on suicides and homicides, indicating such measures are weak defenses against those (such as the school shooters) determined to acquire a gun. (Contrary to repeated claims, research does not find teenage suicides any more “impulsive” than adult suicides.)

               The impetus for these laws was the Michigan school shooting, which is bizarre; the gun obtained by the six-year-old was lying around a crack house, whose residents would be singularly unlikely to concern themselves with safe storage and trigger locks. This is a major difficulty with the halfway measures gun-control lobbies are proposing. Ordinarily, we’d expect that gun storage and safety requirements would hamper or delay access to firearms, cutting down on the impulsive “crimes of passion” that comprise many of America’s shootings—and this probably would occur in a fraction of cases. However, the people most likely to commit gun homicides or accidents are exactly those least likely to comply with gun safety laws.

               In particular, studies indicate that those who obtain a gun to protect themselves are especially likely to shoot someone, because their fear motivates them to keep it loaded and handy—not locked up. Trigger locks “could reliably be expected to deter only children under the age of six,” the General Accounting Office reported. Assuming trigger locks work perfectly, there would still remain two enormous problems: the 240 million guns already loose in the country that don’t have trigger locks, and the fact that their owners could still fire the guns. Even with these laws in effect, the school and middle-aged shootings detailed in Chapter 3 would still have occurred. The array of gun-safety proposals from trigger locks to “smart guns” (programmed by computer chips to work only when fired by their owners) to safe storage would effectively reduce only accidental firearms deaths among the youngest children (50 to 60 per year), and only some of these deaths. Their maximum effect—if they work at all—would be to reduce America’s gun toll by less than 1%.

 

 

Shooting Down Superficialities

 

               Nevertheless, even weak measures such as gun storage laws, and perhaps trigger locks, may be good ideas that would save a few lives even if they address only a small fraction of the problem. However, it’s important to recognize that these proposals are founded in faith that regulations and technology can compensate for the basic failure of gun owners to handle and store firearms safely.

               The larger concern is whether gun-control groups are squandering valuable effort touting small-scale technical fixes at the expense of more fundamental challenges to gun culture. There are serious reasons why “kiddy gun control”—that is, age-targeted measures as proposed by the Children’s Defense Fund, Handgun Control Inc., and other gun-control lobbies—doesn’t work and may be counterproductive. Worse, it is not clear that gun lobbies judge their gun control proposals by whether they save lives or not. Both sides in today’s gun debate represent deadly validations of the observation by generational historians Neil Howe and Bill Strauss that today’s “older Americans tolerate policies that don’t work” so long as they “invoke the proper symbols” and provide opportunities for “moralistic lectures on ‘values’.”

               One group that tried to lend complexity to the debate was immediately pilloried by both gun-rights and gun-control lobbies. The Colorado Trust, a Denver philanthropic firm, issued an evaluation in July 2000 which found only three of the 163 programs established to “reduce youth gun violence” had any effect, and these only in limited ways. Among the diverse focus groups the Trust assembled, one opinion was unanimous: youths had so many avenues to acquire guns in a society in which “adults can buy guns with impunity” that taking on “youth access to guns” is pointless.

               Immediately, several youth-gun-access lobbies blasted the Colorado Trust for taking “the easy way out” in a gun-happy state where teenagers legally can buy and own firearms. Marian Wright Edelman, of the Washington, DC-based Children’s Defense Fund (see above), declared that “youth access to guns” is the “root cause of youth violence.” Candice Francis, whose California-based Resources for Youth administers a program called Prevent Handgun Violence Against Kids to stop youths from getting guns, accused the Trust of kowtowing to the gun lobby. California groups are boldly confronting the “youth access” issue, Francis boasted.

               Yet, the bottom line the youth and gun lobbies were supposed to be worrying about—kids actually dying from gunfire—got lost. Edelman did not puzzle as to why Washington, DC, which has the toughest gun-control laws in the nation, has a youth gun-death rate eight times higher than freewheeling Colorado’s and (comparing a city to a city) four times higher than similarly-sized Denver. In 1997-98, Washington had 95 gun deaths in a child-youth population of just 118,000, while Colorado had 117 in a child-youth population of 1.13 million. Further, Denver, which has the same youth population as DC, had just 35. For teenage firearms homicides, the gun violence getting the headlines, the difference was even more staggering: Washington DC, 84; Colorado, 45; Denver, 23.
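               The “eight times higher” figure can be checked directly from the counts just given:

# Gun-death rates per 100,000 children and youths, 1997-98, from the counts
# in the text.
def rate_per_100k(deaths, population):
    return 100_000 * deaths / population

dc = rate_per_100k(95, 118_000)           # about 80.5
colorado = rate_per_100k(117, 1_130_000)  # about 10.4
print("ratio: %.1f" % (dc / colorado))    # -> 7.8, roughly eight times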

               Edelman would have rendered a much greater public service if she had analyzed why black kids suffer gun-death rates seven times higher than white kids, given that white kids have easier access to guns. And if California has such a better idea, as Francis claimed, why is California’s youth gun-fatality rate consistently double, and its rate of teenage death from gun homicide four times higher, than Colorado’s?

 

 

Social Conditions Kill People

 

               I saw the answer when I worked with kids in a small Montana town during the 1980s. I regularly encountered youths as young as 11 or 12 who owned hunting rifles and pistols, kept them in their rooms, and had ready access to ammunition. Montana had no age limits for purchase or possession of firearms of any kind. Yet, Montana’s teenage homicide rate refutes the knee-jerk belief that “kids and guns” is inevitably a lethal mix. According to a Bureau of Justice Statistics tabulation, for two decades (1976–97), Montana’s youth homicide and murder arrest rate consistently has been the lowest in the country.

               Montana is rated by the Children’s Defense Fund as one of the “worst” states for safety laws to protect children from guns; DC is rated as one of the “best,” banning all purchase or possession of guns by anyone under 21. However, Montana had only 33 gun deaths among 250,000 children and teens in 1997-98; DC had 95 gun deaths among 120,000 children and teens, a rate six times higher. Gun murders? DC children and teens, 84; Montana children and teens, eight.

               Further, Montana’s ratio of child gun-death to adult gun-death rates (the best measure to account for unique gun risks to youths) is one of the lowest in the country (only 12% of the Montanans killed by guns are under age 20)—lower than for nine of the 10 states the CDF rated as “best”! Meanwhile, DC has the worst ratio of child gun death rates to adult gun death rates of any locale (25% of the Washingtonians killed by guns are under age 20). Obviously, Montana and Washington, DC, embody radically different environments that laws alone cannot hope to equalize. This only reinforces my point that social conditions, not “kids and guns,” are the pivotal factors creating DC’s high gun fatality rate and Montana’s low one.

               Changing America’s deadly gun culture requires long-term efforts to change social norms. Gun ownership should be socially unacceptable for all but the small fraction of the population willing to undergo rigorous training, licensing, and inspection regimes. Changing social norms in major ways (as the anti-drunken-driving movements of the early 1980s, anti-smoking campaigns of the 1970s and 1980s, and seat-belt promotion campaigns of the 1980s did) requires broad-based challenge to risky behaviors.

               Each of these movements enjoyed great success when it attacked the behavior itself; each floundered in the 1990s after switching to attacking youths. Americans’ crucial failing—a major reason why this remains such a high-risk society—is our fixation on age. As shown in this chapter, campaigns focusing on youths fail because they do not challenge the adult behaviors and conditions that establish the social norms that form the foundation of youth behaviors. And they fail because, increasingly, their motive is not to change behavior, but to promote the interests of the politicians and groups intent on blaming vexing social problems on powerless, unpopular segments of society.

               Safer, saner nations focus on guns. Every other Western nation recognizes that where adults kill with guns, kids will kill with guns. Most nations permit only a tiny minority of citizens to possess guns, and then only under circumstances of strict qualifications and tight regulation. Those few nations such as Switzerland in which many households harbor guns as part of a civilian militia requirement enforce strict regulations that prevent bloodletting.

               Laws should be aimed at gun behaviors, not teenagers. Millions of teenagers have access to guns today and do not shoot anyone. If an age limit is set for gun ownership, it should be low—certainly no higher than 14. Not only are laws that attempt to regulate behaviors of youths above the age of general cognitive maturity unenforceable, they force youths into more dangerous criminal markets to obtain desired items. It is time for the U.S. to get away from addressing social problems by setting age limits and instead to adopt the European perspective of focusing on the social problem itself. The vital first step is to stop converting every American social problem into a teenage attitude and behavior flaw. When vice presidents, mayors, and lobbies become as distraught and demanding of action when a 40-year-old commits a gun murder as when a 16-year-old does, when a child murdered by an adult provokes as much outrage as when a child is murdered by another child, when the shooting of an inner-city youth produces as much hand-wringing as a suburban school shooting, the country will make progress toward reducing violence and firearms tragedies.

 

 

Panacea #4: “Prevention/Intervention” Programs

 

               School and community programs to instruct, occupy, supervise, reform, and entertain youths constitute the liberal alternative to the centrist-conservative “prison lobby” and typically embody more benign remedies. Some are excellent ideas and clearly beneficial. But just as there is no “prison solution” to crime by youths, neither is there a “program solution.”

               A common complaint by liberal groups against California’s lock-up-more-kids Proposition 21 and similarly harsh federal juvenile justice policies is that get-tough measures do not provide funds for “prevention” of “youth violence.” That is true, but neither do most of the measures liberals would fund. The only real, sustained “prevention” of violence and crime by young people involves comprehensive efforts to reduce youth poverty and unemployment; increase access to quality education and jobs; prevent adult drug addiction, violence, gun tragedy, and household abuse; and integrate young people as legitimate participants in adult society rather than as a vilified, custodialized minority. Nothing has reduced homicide, violent crime, and gun deaths in the late 1990s so effectively as a booming economy and greater job availability, even if those jobs are far from optimal.

               However, few liberal groups who talk about “prevention” mean these major factors. Instead, “prevention” has come to mean funding a large array of programs to provide “mentoring to at-risk youth,” after-school and weekend programs, midnight basketball, and the like. These are laudable, much needed services so long as they remain voluntary rather than mandatory. Because they are laudable, they should be promoted as the services a beneficent society provides for young people as valued citizens, not by fear-based campaigns that kids will run amok if they’re not funded and dubious claims that if we keep kids busy, “they” won’t kill. After all, we do not fund senior citizens’ centers by raising negative images of elders.

               A cogent, often revealing salvo (with one glaring weakness) by the “program lobby” is the American Youth Policy Forum’s June 2000 Less Hype, More Help: Reducing Juvenile Crime, What Works—and What Doesn’t. The report reviews a variety of programs for juvenile offenders and for families deemed at risk to generate new little hoodlums. It finds “dozens of youth development programs with proven results,” along with another bunch that don’t work but vacuum up wads of money.

               “Over-reliance” on trying kids as adults and on prisons doesn’t work except for a few recalcitrants, the Forum concludes. In fact, get-tough policing strategies usually make things worse. (This finding is no surprise; these are the program folks, but their research presentation is far more impressive than the prison lobby’s.) Even the Office of Juvenile Justice and Delinquency Prevention, whose 1996 report lauded curfews (see next section), admitted in its February 2000 report that “after-school programs have more crime reduction potential than juvenile curfews.”

               However, a lot of programs don’t work, either, as noted from the history of failed interventions that began this chapter. “The most successful strategies, under ideal laboratory conditions, reduce future offending rates by about 50 percent—and then only through effective delivery of complex, multi-dimensional, sustained, and resource-intensive intervention methods,” the Forum concludes of programs serving youth offenders. Translation: good programs have to provide a wide variety of services tailored to kids’ circumstances over a long period of time—that is, effectively change their environments. This costs a lot.

               Even so, the AYPF argues that the programs that work don’t cost as much as either prison or the programs that don’t work. An example of the latter: “Even with their high costs, hospitalization and other out-of-home [institutional] treatments have not proven highly effective”—especially with violent youths. (Well, it just depends on your definition of “effective.” A major study found that 75% of the youths stuck in psychiatric hospitals and other behavior-changing institutions at costs of hundreds of dollars per day reoffend, assuring a steady stream of repeat business. Most of the treatment failures who reoffend are re-institutionalized [the remainder, presumably those without insurance, are imprisoned]. The stockholders of America’s $25-billion, profits-growing-at-45%-per-year youth treatment industry—exemplified by national chains such as the Kentucky-based Res-Care Inc., a $300 million per year corporation with 17,000 inpatient clients in 1997—probably consider this situation highly “effective”).

               The American Youth Policy Forum study delineates more examples of America’s perverse penchant for blowing big bucks on flashy fizzle. The federal government spends half a billion per year on “school-based violence and substance-abuse prevention,” making these “a major growth industry” even though they “lack evidence of effectiveness.” The two chief bucks moppers, Drug Abuse Resistance Education (DARE) and the Student Assistance Program, either display negative evaluations or none at all. Schools, the Forum concluded, flock to high-profile programs that are the least effective and consistently ignore research even when available.

               Why are America’s federal, state, and local decision makers systematically committed to pouring good money into prisons, institutions, and feel-good malarkey that not only fail, but worsen matters and drain resources from more promising concepts? One only need consider the size of the lobbies supporting the bad ideas—the prison construction industry and guards, the Association of Psychiatric Hospitals, the vast anti-drug interests, and lavishly funded federal grantees.

               The American Youth Policy Forum report illustrates how promotion of “prevention” programs is preferable to conservative/Clinton curfew and lock-’em-up strategies. But it also contains dangers. The worst is that neither the “prison lobby” nor the “program lobby” has any interest in flatly declaring that poverty, joblessness, and inequality are the major factors driving the cycles of youth homicide, gun violence, and criminal arrest and imprisonment. Nowhere does the Forum argue that reducing poverty is a major way to reduce crime by youths and adults. Its recommendations—expansion, funding, evaluation, community cooperation, planning, linkage—boil down to two: programs, and more programs.

               A second problem is that over-promotion of programs requires the same stigmatizing efforts by liberals to depict young people as naturally and increasingly violent. That stigma, in turn, interferes with advocacy of tougher, structural changes to reduce poverty and confront destructive adult behaviors. A particularly egregious example emerged in my county, Santa Cruz. In June 2000, the county Board of Supervisors voted to seek $250,000 in federal funds to station two armed sheriff’s deputies at three local middle schools. The Sheriff’s Department’s spokesman at first raised the specter of rampant school violence, weapons, and drugs, which school officials and law enforcement figures promptly refuted. Santa Cruz County harbors 10,000 middle-schoolers ages 13–15. Of these, 40 to 50 are arrested for violent offenses every year (very few in schools). Santa Cruz city police reports showed youths under age 18 committed zero homicides, zero rapes, three robberies, and 19 aggravated assaults in all of 1999, and the newest police reports show even lower totals for 2000, 2001, and 2002. There are a few mean kids here, but no “youth violence” problem worth mentioning.

               Nor is the problem getting worse. Teenage felony rates dropped 30% in the last 25 years; today, a Santa Cruz teen is less likely to commit serious crime than his/her parent. Of the county’s 100 murder arrestees in the last decade, only five were juveniles (compared to four in the decade of the 1980s and eight in the 1970s)—a rate lower than Canada’s. In 1999, 2000, and 2001, no youths were arrested for murder anywhere in Santa Cruz County (population 250,000). The local school districts to which deputies would be assigned report low and declining rates of drugs, violence, and crime through 2000. One had one assault and one minor weapons case in 1998–2000. “I don’t see the real need in having the sheriffs on campuses,” a principal whose hallways were slated for a deputy puzzled. “We’ve never had a gun” at school, she added.

               So the Sheriff’s Department switched gears: “It’s not as if they (local schools) are crime-ridden places,” its spokesman told the press. “But there are issues in the surrounding communities.” School-stationed deputies will deal “not necessarily with crime” but will provide “better mentoring,” he added.

               So, in a time when schools face pressing funding needs, government is going to spend $125,000 each to hire gun-carrying deputies as middle-school mentors to address “issues in the surrounding community”? No, it doesn’t make sense. The real reason, the Sheriff’s Department finally conceded, was lucre: the cash-spewing federal government “wants to put more cops in schools,” police agencies were slandering local schools as dope and gang denizens to get it, and the editors of Santa Cruz’s largest newspaper, The Sentinel, went off the deep end:

 

What has this world come to…Cops at middle schools?…Campus violence has worsened to the point that parents actually think twice before dropping the kids off at school in the morning…It all comes as a shock to those who remember the days when there was no need for the police on campus, but those days, unfortunately, are probably gone forever.

 

To review this tragicomedy: ill-motivated feds advertise pork-barrel largesse to bankroll cops at schools, the local sheriff wants a piece of it, suddenly local schools sprout a “school violence” problem where none was in evidence, and the press balloons the non-problem to shocking, parent-scaring proportions. As the editorial said, what has this world come to?

               When liberal programs hype “youth violence” and circulate unwarranted, negative images of children and youths to win attention and funding, they betray their commitment to serve young people and instead exploit young people to serve their interests. They contribute to the larger fear of youth, particularly youth of color, that ultimately feeds public panic and leads to drastic measures like California’s Proposition 21.

               Every major town and city in the U.S. has experienced grotesque examples of the creating-fear-of-kids-for-profits campaign. Another example from my own city is the Dominican Hospital’s mass-mailed magazine to all county residents, which blared “big problems” with local teens in its Spring 2000 issue:

 

Each year, four or five Santa Cruz County teenagers take their own lives, while hundreds of other area teens kill themselves a little each day with illegal drugs…While teen drug use is high, it’s hard, too. Along with marijuana and alcohol, heroin and methamphetamines are big players on the local scene…Police departments are reporting sizable increases in thefts as a result of the increased use of hard, addictive drugs. “The habit soon needs feeding,” says Tim Sinnott, a certified drug and alcohol counselor and Dominican’s director of Behavioral Health. “It’s our very own local youth, participating in some very bold activity because of their drug dependence.”

 

Like the “school violence” problem, this was complete malarkey. Santa Cruz’s teen suicide rate is very low and has dropped by 60%, along with an 80% decline in accidents and a 25% decline in murder, over the last 30 years. In a county where 20,000 high school students and 12,000 teen-aged college students dwell, vital and health records show there are one or two teen suicides a year (Dominican simply inflated the number), teens comprise only 2% (and falling) of hospital treatments for drug-related emergencies, and (through 2002) only two teenagers have died of drug overdoses since 1976.

               Far from reporting “sizable increases” in crime, tables in the United Way’s 1999 and 2000 Community Assessment Project appendixes and the Santa Cruz Police Department’s 1999, 2000, 2001, and 2002 annual reports consistently showed crime had declined to a 25-year low, especially for thefts (3,200 in Santa Cruz in 1994, 1,400 in 2000, for example). As for youths, Santa Cruz police reported youths were responsible for only 12% of the city’s thefts and 10% of its other crime in 1999-2002, a low and declining rate. When I called the hospital’s public relations office to obtain their response for an article I was writing on the issue for Metro Santa Cruz, an alternative newspaper, Sinnott and other officials refused to comment.

               Dominican’s article made its purpose clear: to scare parents into referring more youths to its treatment services. “Dominican Hospital can be a resource,” the article said, urging parents to contact its experts if their child has “zero social life” or says things like, “I don’t want to do this any more.” Clearly, well-designed programs are necessary, even though California’s large improvement in teen behavior began on its own a decade before the current wave of treatment arrived. Dominican could have advertised its services truthfully by noting that very few Santa Cruz teens have drug or crime problems, but those few who do can be helped. Instead, it wildly falsified figures to whip up a thoroughly phony fear salvo against local youth, a tactic the press inevitably hypes even further rather than exposing it for what it is: fraudulent consumer advertising.

               The corporatization of treatment and other youth services has exacerbated the trend toward massive teen-scare campaigns, but many smaller entities also have joined in. One of the saddest developments of the 1990s has been the increasing number of once-good programs joining in the trashing of young people and becoming increasingly invested in bad news about kids. In that light, the American Youth Policy Forum report is a step forward in that it largely (though not entirely) avoids stigmatizing language. Its major shortcoming remains the failure to place violence and crime issues in the context of poverty.

               Finally, programs promoted by fear can become as repressive as prison-lobby proposals—potentially, even more so. The most disturbing trend among many juvenile justice interests is promotion of the liberal equivalents of conservative James Q. Wilson’s “broken windows” crime-fighting theory: imposing strict interventions and coercive controls on youths who commit even the slightest infraction to head off presumably worse behaviors. “Zero tolerance” is an example of programmatic zeal gone wrong. The most extreme interventions are based on assertions that if allowed any free time and/or unsupervised interaction with peers at all, adolescents will commit crimes and other untoward acts.

               Recent reports by the Carnegie Corporation, Centers for Disease Control, and National Longitudinal Study on Adolescent Health (for examples) highlight the supposed risks of peers and afternoon free time. “Unsupervised after-school hours represent a period of significant risk for engaging in substance abuse and sexual activity,” Carnegie’s Council on Adolescent Development warns in typical negativism, arguing that all youths should be “engaged in activities with adult supervision.” These fears have spawned growing sentiment for efforts to abolish adolescence itself: the parent delivers the teenager to the school, the school transfers the teenager to the mandatory after-school program, the program returns the teenager to the parent. These proposals to shackle all adolescents are more generally repressive than prison-oriented measures applied to youths convicted of crimes.

               There is no programmatic remedy for gangs, crime, or violence. The manifest eagerness of various interests to move into the potentially lucrative after-school programming arena threatens yet more for-profit lobbies pushing diversionary remedies. “Prevention,” as now framed and advocated, is only a small part of the solution, most effective when carefully targeted and not allowed to become self-perpetuating. Without changes in many program lobbies’ basic attitudes toward young people, prevention and intervention efforts will be a growing part of the problem.

 

 

Panacea #5: Curfews

 

               Hundreds of cities nationwide have instituted and enforced strict curfews on youths being in public at night or during schooldays. Accolades for curfews cascade from politicians and police absent evidence they accomplish anything other than making adults feel better.

               The federal government’s only document drawn up to support the White House’s pre-cast political campaign for both daytime and nighttime curfews was the embarrassing Curfew: An Answer to Juvenile Delinquency and Victimization?, by the U.S. Department of Justice’s Office of Juvenile Justice and Delinquency Prevention (OJJDP). It declared: “Comprehensive, community-based curfew programs are helping to reduce juvenile delinquency and victimization.”

               The only evidence the report provides is police assertions from six cities designated by OJJDP for having “established comprehensive, community-based curfew programs.” The report mixes up crime reported to police with juvenile arrests. It cites only the types of crimes that support its argument. It picks and chooses wildly varying time periods to compare. It includes no follow-up to see if the reported crime declines persisted. It includes no comparisons with cities that did not enforce curfews, the minimum requirement to reach a conclusion as to effect. The report’s shoddy quality is suspicious, since OJJDP—with the numbers-stuffed Bureau of Justice Statistics next door—easily could have put together consistent, comprehensive statistics to analyze the issue scientifically.

               Examples from the OJJDP report: after three months of enforcement, Dallas, Texas, police reported that “juvenile victimization during curfew hours dropped 17.7% and juvenile arrests declined 14.6%” (why three months, especially since crime displays seasonal variations?). Phoenix police reported “a 10% decrease in juvenile arrests for violent crimes” during the 11 months (why 11 months?) following the curfew’s advent. Chicago police reported that juvenile burglaries, vehicle theft, and theft declined (measured by arrests over varying time periods). New Orleans police reported “a 27% reduction in juvenile crime during curfew hours in 1994 compared with 1993” (measured by arrests, a claim later research demolished—see below). Denver police reported that “serious crime dropped 11% during each of the first 2 years of the program” and “motor vehicle theft dropped 17% in 1994 and 23% in 1995” (measured by crimes reported to police). North Little Rock police reported a reduction in violent crimes reported to police of 12% and a burglary decline of 10% from 1991 to 1992. (OJJDP didn’t bother to follow up to see if this crime decline persisted; FBI reports show crime was back up again in 1993–95 to levels higher than in 1991.) Jacksonville, Florida, did not report results but was praised anyway.

               As will be shown from the Monrovia case examined below, and as sociologist William Chambliss points out in Power, Politics & Crime (Westview Press, 1999), police have been known to diddle with statistics to produce desired results. In cities where juvenile arrests went up after curfews took effect, police claimed the increase resulted from more cop contact with youths that uncovered more crime; when juvenile arrests declined, police claimed it reflected a real crime decline. Even assuming all the above police reports are honest, some types of crime will decline in any given community and time period. In fact, crime declined in nearly all cities during the 1990s; what about those cities (including others claiming spectacular crime declines, such as Boston and New York) that did not adopt juvenile curfews?

               As one of the few researchers to date to study this question on a comprehensive basis, I think the evidence is overwhelming: curfews are nothing more than an expression of modern adults’ unwarranted fears of adolescents, and they probably make the streets less safe. In an exhaustive study with the Justice Policy Institute published in the Winter 1998/99 Western Criminology Review, co-author Dan Macallair and I found that whether measured over time, by county, by city, or by specific case study, juvenile curfews had no effect on crime, youth crime, or youth safety.

               Using annual Criminal Justice Statistics Center figures and Department of Finance demographic data for California and its 12 largest counties for 1980 through 1997, we compared periods of high curfew enforcement with reported crime rates, juvenile arrest rates, and juvenile violent death rates. We found no effect. We examined reported crime, juvenile arrests, and juvenile violent death rates lagged one year behind years of high curfew enforcement to see if there was a delayed effect. No effect. We analyzed crime rates in all 21 cities of 100,000 population or more in Los Angeles and Orange counties for 1990 through 1997 to see whether crime was reduced in cities with high levels of curfew enforcement. It wasn’t. We performed two detailed case studies comparing crime, juvenile arrests, and juvenile violent deaths in cities that had received national attention for strongly enforcing juvenile curfews with similar-sized, nearby cities that did not enforce curfews. If anything, the cities that did not enforce curfews enjoyed better results.

               One of our most interesting findings was that after the Los Angeles suburb of Monrovia imposed its famous curfew banning youths from being in public during school hours (heartily endorsed by President Clinton and trumpeted as a “success” by a fawning press), police tabulations showed crime declined considerably faster during the hours the curfew was not in effect (the summer, and school-year evenings, weekends, and holidays) than when it was enforced (school hours). Comparing the three years after the curfew’s advent in October 1994 with the corresponding periods before, we found that crime dropped 29% during school-day hours when the curfew was in effect—but it fell by an even more impressive 34% during school-year evenings and weekends, and by 43% during summer months, when youths were allowed to be in public! Monrovia police later admitted their initial claims of great curfew benefits, recycled by Clinton and the press, resulted from mathematical errors. These revisions drew no media or White House attention.
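               The logic of the Monrovia comparison is simply a percent change computed separately for curfewed and non-curfewed periods; a sketch with hypothetical counts (only the percent declines appear in the police tabulations):

# Percent change in crimes, three years after the October 1994 curfew vs. the
# corresponding periods before. Counts are hypothetical; only the percent
# declines are reported in the text.
periods = {
    "school hours (curfew enforced)":   (1000, 710),  # -29%
    "school-year evenings/weekends":    (1000, 660),  # -34%
    "summer months (no curfew)":        (1000, 570),  # -43%
}
for label, (before, after) in periods.items():
    print(label, "%+.0f%%" % (100.0 * (after - before) / before))
# Crime fell fastest precisely when the curfew was NOT in effect.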

               At the same time, the Los Angeles Police Department released two studies of curfew enforcement. The first, in February 1998, reported that a spectacularly intense curfew enforcement effort (4,800 curfew citations in one district) “has not significantly reduced” juvenile crime or victimization (emphasis original). In fact, these increased compared to districts where there was no curfew enforcement! The second report, issued in July 1998, found that sharply reduced efforts yielded better results.

In the March 2000 Justice Quarterly, University of Central Florida and New Orleans criminologists released their large-scale study of New Orleans’ curfew, funded by a U.S. Department of Justice grant. The study examined 120,000 victim records and nearly 20,000 juvenile arrest records. It found the curfew, though vigorously enforced with 3,500 arrests and $600,000 in police overtime in the first year, did not reduce crime, juvenile victimizations, or juvenile arrests. Temporary decreases in violent and property victimizations after the curfew took effect evaporated; the “more permanent” effect was an increase in victimizations over time. The conclusion as to why “juvenile curfews are ineffective” was straightforward:

 

Delinquent behavior does not occur in isolation, but in a social context consisting of an individual’s peers, school, and family…These factors are complex and cannot be addressed simply by passing a law requiring youths to be off the streets during particular hours.

 

Even given these findings, it might seem counter-intuitive that police removal of youths from public wouldn’t at least cut thefts, burglaries, and other public crimes during curfewed periods. In a 1999 follow-up study, I gained some insight into why curfews don’t work (for details, see Males, M., 2000. Vernon, Connecticut’s, youth curfew: The situations of youths cited and effects on crime. Criminal Justice Policy Review, 11:3, pp. 254-267).

               The occasion was a challenge by the Connecticut Civil Liberties Union to the juvenile curfew in Vernon, a suburb of Hartford. After a juvenile was shot to death by an adult during the daytime, the city imposed a nighttime curfew on youths to fight “gangs and drugs.” About as logical as most 1990s crime-busting panaceas.

               The Vernon curfew took effect in September 1994 and banned youths from being in public between 11 p.m. and 6 a.m. The initial effects were dismal. In its first six months, crime in Vernon was sharply higher than in the corresponding months of 1993–94 before the curfew began. Over the next three years, serious crimes reported to police fell by 11%. This was also not impressive. Vernon’s decrease was considerably less than the average crime decline over the same period in the dozen other Connecticut towns of similar size (–14%), the state as a whole (–15%), and 600 similar-sized cities nationwide (–13%). More to the point, crime declined the most rapidly in Connecticut’s two cities of similar population that did not enforce curfews, Wallingford (–17%) and Middletown (–24%).

               Further, the major (Part I) crime that declined the most after Vernon imposed its curfew, aggravated assault, was the one least likely to be committed by juveniles, while crimes more common to youths, such as burglary and robbery, did not decline as much. A simple correlation analysis showed slightly (though not significantly) more Part I crimes in months with more curfew arrests. Finally, crime declined faster in the two years before Vernon imposed the curfew than in the two years after. Add it up: police figures showed no reason to credit Vernon’s curfew with cutting crime in general or youth crime in particular. Why didn’t it?
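               The “simple correlation analysis” mentioned above is nothing exotic; here is a sketch with invented monthly series (the study’s actual data are not reproduced here):

import numpy as np

# Monthly Part I crime counts vs. monthly curfew arrests in Vernon,
# Jan 1995 - Jun 1998 (42 months). Both series below are invented; with
# unrelated random data, r will simply be near zero.
rng = np.random.default_rng(1)
months = 42
curfew_arrests = rng.poisson(10, months)
part1_crimes = rng.poisson(60, months)

r = np.corrcoef(curfew_arrests, part1_crimes)[0, 1]
print("Pearson r = %+.2f" % r)
# The Vernon analysis found a slightly positive (though non-significant) r:
# months with more curfew arrests had, if anything, MORE Part I crimes.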

 

 

Major Shocker: Most Kids Aren’t Criminals!

 

               As part of the CCLU case, police turned over all 410 individual curfew citations handed to 16- and 17-year-old violators from January 1995 through June 1998. The citations provided a wealth of detail concerning what youths were doing at the time of the stop. The snapshots of the nighttime lives of several hundred Vernon youths constitute an excellent random survey challenging not only the official notion of a generation out of control, but the avalanche of popular books asserting today’s teens are a “tribe apart,” lost en masse to secret lives of drugs, drinking, crime, and violence.

               What were the kids up to when the cops cruised up? The large majority were with friends: walking, sitting on park benches, in cars at the drive-in, walking or driving between friends’ houses and home. Police specifically were looking for evidence of juvenile endangerment, crime, and gang activity. They found, in 410 curfew stops, only seven cases evidencing other crimes (two outstanding warrants, one illegal weapon, two auto thefts, and two suspects with burglary tools), plus one runaway. Police reported zero instances of juvenile alcohol or drug intoxication, zero evidence of gang activity, and zero cases of youths being in danger (though several were escaping discord at home). If you stopped 400 adults at random—say, members of Congress—you’d find more wrongdoing than that.

               What the curfew accomplished, then, was to occupy police time removing law-abiding youths from public places. This left more vacant, less-policed streets that provided more opportunities for the criminally inclined. As a raft of urban experts, including Jane Jacobs and William H. Whyte, point out, public places emptied of average citizens are more dangerous and crime-plagued. This may explain why Vernon, Monrovia, and other cities experienced greater crime declines during periods when curfews were not enforced—a pattern affecting larger cities as well.

               Curfews, in contrast, are founded in the 1990s prejudice that all youths are criminals and ignore the ability of the majority of teenagers to deter crime. Cities that took a more positive stance toward young people had better results, as the examples of New Haven and San Francisco show.

 

 

The New Haven “Miracle”

 

               As mentioned, the OJJDP study lauding curfews failed to compare crime changes in curfewed cities with those in cities such as New Haven, Connecticut, and San Francisco that refused to impose juvenile curfews. Nicholas Pastore, New Haven’s recently retired police chief, testified for the CCLU in the Vernon case that “curfew laws tend to reinforce an ‘us vs. them’ attitude between police and young people that is ultimately counterproductive and possibly dangerous.” While community policing seeks to “create a nurturing relationship between police and young people,” Pastore testified, a “curfew law presumes all kids are bad.” Because curfews lend the illusion that “something is being done,” they “have the effect of ignoring rather than dealing with issues affecting young people.”

               Pastore took over as chief of one of the nation’s bleakest, most impoverished small cities in 1990. New Haven was plagued with shut-down factories, high unemployment, intensive gang warfare, and sky-high crime rates. Rejecting the repressive “quick fixes” adopted in other cities, Pastore testified, he pushed New Haven police to develop positive relationships with youths through community-based programs. Instead of “simply issuing an infraction,” New Haven deployed officers to acquaint themselves personally with youths they met on foot patrols and to connect those engaging in troubled behavior to counseling and community services.

               Sounds nice, but does it cut crime? I checked New Haven crime trends during Pastore’s tenure as chief (see Table 15). When he took over in 1990, the FBI’s Uniform Crime Reports showed 21,090 index (felony violent and property) offenses, including 31 homicides, 168 rapes, 1,784 robberies, and 2,008 aggravated assaults in New Haven. When Pastore retired in 1998, the FBI reported 13,341 index offenses, including 15 murders, 66 rapes, 825 robberies, and 1,195 aggravated assaults. During his tenure, New Haven’s total crime index (adjusted for population changes) fell 34%, all violent crime declined 45%, and homicide dropped by 49%. The decline was steady throughout the eight-year period: a violent crime index of 3,059 in 1990, 2,483 in 1992, 1,864 in 1995, and 1,684 in 1998.
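               A sketch of the population adjustment, assuming census populations of roughly 130,000 in 1990 falling to about 124,000 by the late 1990s—figures that are my assumption here, not stated in the text:

# Population-adjusted crime decline for New Haven, from the offense counts in
# the text. The populations are assumptions (roughly the census figures).
def rate_change(count_90, count_98, pop_90=130_000, pop_98=124_000):
    r90 = count_90 / pop_90
    r98 = count_98 / pop_98
    return 100.0 * (r98 - r90) / r90

print("total index: %.0f%%" % rate_change(21_090, 13_341))  # -> about -34%
print("homicide:    %.0f%%" % rate_change(31, 15))          # -> about -49%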

               Crime, to repeat, is down everywhere. Whether Pastore’s more benign approach toward youths caused or contributed to New Haven’s larger-than-average decline in serious crime, particularly violence, would require more sophisticated analysis to prove. Clearly, New Haven had a larger drop in crime than more repressive (and economically better-off) cities such as Vernon as well as the cities cited by OJJDP for model curfew laws (Table 15). At the very least, a blighted city showed crime can be reduced in conjunction with positive police approaches toward youth and rejection of curfews and other youth roundup measures.

               And what about a large, gang-infested city which abolished its juvenile curfew in the 1990s and adopted a panoply of liberal anti-crime policies?

 

 

The San Francisco “Miracle”

 

               According to the national media, crime’s down because cops cracked down. Once-mean streets are now policed with “zero tolerance” vengeance, suspicious characters are cuffed on the slightest pretext, casual dopers are busted en masse, kids are banished from public, bad guys are packed off to prison for 25-to-life. If someone’s civil liberties got bruised amid the sweeps, if a few innocent black men were murdered in a hail of police bullets, too bad; you can’t fumigate the bad bugs without snuffing a few good ones. From Boston and New York to Dallas and San Jose, the press lauded police, conservative politicians, and crime authorities such as James Q. Wilson for “cleaning up the Big Apple,” the “Boston Miracle,” and “taming gangs” with curfews, sweeps, and injunctions. The evidence? Police said so. Establishment scholars said so.

               But when the progressive Center on Juvenile and Criminal Justice (www.cjcj.org/jpi) sent the national media a careful, journal-quality report on what may be the biggest (and least reported) big-city crime miracle of all, the national press couldn’t get interested. Not a single Big Media reporter flocked to the Bay Area to report on why San Francisco’s violent crime rate, led by an 85% decline in juvenile homicide and gun murder, plummeted faster than anywhere else.

               “Since 1992, San Francisco achieved greater declines in violent crime than ten major cities,” including New York, Dallas, San Jose, and Boston, the CJCJ reported, citing FBI figures. The cities CJCJ chose for comparison (Boston, Charlotte, Chicago, Dallas, Denver, Jacksonville, New Orleans, New York, North Little Rock, Phoenix, and Washington) were a tough lineup, singled out by the U.S. Department of Justice for model policies to fight youth crime.

               Yet, no matter which sets of years were chosen to compare, San Francisco’s crime declines (down 42% overall, 52% for violent offenses, and 44% for property crime from 1992 to 2000, for example) topped those of all nine cities the feds had cited as exemplary. San Francisco also showed the biggest declines in all four major violent crimes (murder, rape, robbery, and felony assault) and two of the three major property crimes (theft and motor vehicle theft) chosen by the FBI as key “index” offenses. Of the 10 cities, San Francisco ranked first in violence decline and second in property crime decline.

               Whether San Francisco’s crime plummet beats New York’s fabled record depends on which years or crimes are chosen to compare. Statistics for the year 2000 posted on police department websites for both cities show San Francisco’s and New York City’s murder rate declines from the 1990-94 average were identical (down 64%), while San Francisco’s violent crime drop (-52%) exceeded New York’s (-49%).

               When it came to reductions in youth homicide, no other city even came close to San Francisco. Remember when police in Boston (where 40,000 teenage juveniles dwell) won national acclaim for having only two youth gun murders from July 1995 through July 1997? San Francisco (which has 200,000 more people, including 10,000 more kids) did even better: only two juvenile gun murders from July 1995 through December 1997. From their early 1990s levels to 1997-2000, juvenile gun homicides were down 85% and youth murder arrests dropped from an average of 20 per year to two.

               How did San Francisco do it? By doing nothing right, according to 1990s get-tough anti-crime dogma. Resisting the national stampede, San Francisco has no juvenile curfew; police stopped enforcing it in 1992, and an effort to reinstate it was dumped by voters in a 1995 referendum after high school students vigorously campaigned against it. Contrary to James Q. Wilson’s “Broken Windows” religion, which urges immediate crackdowns on tiny infractions (especially by kids), San Francisco’s policing creed has been “don’t sweat the small stuff.” During the 1990s, arrests for simple marijuana possession and juvenile “status” offenses (such as curfew or truancy violations) declined sharply in San Francisco even as they skyrocketed in other cities.

               Further, the city’s liberal prosecutors refer fewer adult felons for lengthy Three Strikes sentences, and fewer juvenile felons to adult court, than those in any other major city or county, reserving big sentences for the worst of the worst. As a result, San Francisco’s rate of packing youths and adults off to prison dropped faster, and now stands at lower per-capita levels, than that of any other major urban county in California, saving taxpayers millions of dollars.

               San Francisco’s story challenges conservative crime dogma at every turn. In a city that let its youths come and go at all hours as they and parents pleased, Lord of the Flies was hardly the result (Table 14).

 

Table 14. San Francisco crime trends and youth arrests, 1990-2002

Rates per 100,000 population (all crime) and per 100,000 ages 10-17 (youth arrests)

              Crimes reported to police            Youth arrests
Year       All    Violent   Hom.    Prop.          Hom.     All
1990     9,894     1,723    13.9    8,172          25.9   3,654
1991     9,786     1,682    13.1    8,104          36.2   3,464
1992    10,660     1,869    15.8    8,791          34.1   3,313
1993     9,676     1,803    17.2    7,872          66.9   3,587
1994     8,473     1,452    12.2    7,020          27.2   3,296
1995     8,309     1,464    13.2    6,846          15.4   3,199
1996     7,540     1,307    10.8    6,233          14.7   3,211
1997     6,884     1,123     7.7    5,761           1.8   2,542
1998     6,064       957     7.4    5,108           5.3   1,924
1999     5,791       836     7.0    4,955           5.1   1,689
2000     5,462       837     7.6    4,625           4.0   1,500
2001     4,405       860     6.3    3,545           4.0   1,326
2002*    4,071       750     7.8    3,321          16.1   1,008

Change, 2000-02 rate vs.:
1990-92   -54%      -54%    -49%     -54%          -75%    -63%

Sources: Criminal Justice Statistics Center, California Criminal Justice Profile, San Francisco County, 1990-2002, California Department of Justice.

*2002 rate prorated to an annual estimate from January-November figures.
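               For readers who want to verify the table, a minimal sketch of its arithmetic follows. It assumes simple three-year averages, which reproduce the published figures; the offense count and population used to illustrate the starred proration are hypothetical, for illustration only.

    # Minimal sketch (Python) of Table 14's arithmetic, not the state DOJ's method.

    def pct_change(base_rates, recent_rates):
        """Percent change between the 1990-92 average rate and the
        2000-02 average rate, rounded as in the table's Change row."""
        base = sum(base_rates) / len(base_rates)
        recent = sum(recent_rates) / len(recent_rates)
        return round(100 * (recent - base) / base)

    # Youth homicide arrest rates per 100,000 ages 10-17, from Table 14:
    print(pct_change([25.9, 36.2, 34.1], [4.0, 4.0, 16.1]))  # prints -75

    # The starred 2002 row prorates January-November counts to a full year
    # before converting to a rate; count and population here are hypothetical:
    jan_nov_offenses = 29_000                      # illustrative count only
    population = 777_000                           # illustrative population only
    annual_estimate = jan_nov_offenses * 12 / 11   # scale 11 months to 12
    rate_per_100k = annual_estimate * 100_000 / population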

 

 

The rate of violent crime reported to police declined by 50% in San Francisco from 1992 to 1998 and another 10% from 1998 to 2001, with a further large decline (except for homicide) evident in 2002. Juvenile homicide arrests fell by 75%, from 65 in 1990-92 to 12 in 2000-2002. This achievement is even more impressive given that San Francisco has much larger, more active gangs than Eastern cities, has the highest rate of poverty among black and Hispanic youths in urban California, and sits in a state whose gun death rate is triple that of Eastern states. Further, as Table 15 shows, San Francisco’s violent and other crime declines consistently exceed those of all nine major cities cited by OJJDP as having model juvenile curfew programs!

               Crime in all of the cities peaked during 1990-92, so the 1990-92 average is used as the base period. All the cities then experienced varying crime declines through 2000 (the particular years picked do not change the overall results). When all violent and other Part I crime, consistent time periods, and consistent measures of crime (not just crimes, times, and measures selected to produce a desired result) are compared between the nine cities that vigorously enforced curfews and the two that did not, the results are very different from OJJDP’s haphazard report: San Francisco and New Haven experienced larger declines in seven of the eight Part I offenses (including murder, rape, robbery, assault, theft, and motor vehicle theft) than the nine model cities that subjected their youths to nighttime house arrest.

               But what about imposing curfews in the interest of juvenile safety? In 1990-92, when the curfew was being enforced, 17 teenagers were murdered in San Francisco. In 1993-95, after curfew enforcement ceased, 13 were. In 1996-98, eight (in a larger youth population); in 1999-2001, seven. Total youth gun fatalities of all types: 17 in 1990-92, 16 in 1993-95, six in 1996-98, and six in 1999-2001. What Boston and New York have that San Francisco doesn’t is an aggressive public relations team.

 

Table 15. San Francisco and New Haven show bigger declines in crime and violence without curfews than all nine cities cited by U.S. Dept. of Justice for having model juvenile curfew programs

 

Change in rate             All Part I offenses   Violent offenses

No curfew
San Francisco                     -46%                -52%
New Haven                         -49                 -56

Curfew
Avg., nine model cities           -32%                -42%
Charlotte                         -37                 -47
Chicago                           -30                 -42
Dallas                            -35                 -39
Denver                            -37                 -46
Jacksonville                      -34                 -36
New Orleans                       -35                 -47
North Little Rock                 -23                 -45
Phoenix                           -25                 -31
Washington                        -34                 -42

Source: FBI, Uniform Crime Reports, 1990-2000, U.S. Department of Justice. Declines compare the crime index (Part I offenses reported to police divided by population) for 2000 with the average for 1990-92.
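               The same check works for Table 15, whose note defines declines as the single-year 2000 index against the 1990-92 average. A minimal sketch follows; Table 15 draws on FBI figures while Table 14 uses state data, but the arithmetic is identical.

    # Sketch of Table 15's decline measure: the year-2000 crime index
    # compared with the average index for 1990-92.

    def decline(rate_2000, rates_1990_92):
        base = sum(rates_1990_92) / len(rates_1990_92)
        return round(100 * (rate_2000 - base) / base)

    # Fed San Francisco's all-crime rates from Table 14, the function
    # reproduces the -46% shown above:
    print(decline(5_462, [9_894, 9_786, 10_660]))  # prints -46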

 

               Most remarkable of all, San Francisco’s young people, given more freedom than other urban youths, have improved their safety even as adults lag behind. In the early 1990s, 80 to 90 San Francisco youths and young adults ages 10-24 died every year from violent causes. In the late 1990s, the toll fell to around 50 per year, reaching 36 in 2002. The declines are staggering: firearms deaths down 70%, suicides down 60%, accidental deaths down 50%, murders down 80%, drug fatalities down to near zero.

               Whether or not San Francisco’s liberal strategies caused the crime decline, its experience reiterates that severe repression is not needed to improve youth behavior or safety. This anti-’90s result stands in stark contrast to San Jose, a similarly populated suburban city 50 miles to the south. After San Jose instituted and strongly enforced curfews, gang injunctions, and other youth-control policies, Santa Clara County’s conservative district attorney, George Kennedy, bragged that tough policing would send criminals fleeing to the soft-hearted hippie burg up north. Instead, San Jose was one of the few cities to suffer rising violent and other crime during the mid-1990s.

 

 

Panaceas Dethroned

 

As research accumulated in the late 1990s and 2000, the easy panaceas promoted by President Clinton and police interests began to fall by the wayside. Boot camps for youth offenders accomplished nil, a Youth Today analysis reported. Three Strikes laws, trying juveniles in adult courts, and tough drug-law enforcement stuffed juvenile detention facilities and adult prisons but failed to cut crime, a series of Justice Policy Institute studies reported.

               In a March 4, 2000, feature, the New York Times’ veteran crime reporter, Fox Butterfield, assessed crime trends in cities around the country. Even though politicians, law enforcement, and crime experts had broken both legs getting to microphones to credit their get-tough policies for the late-1990s crime plunge, Butterfield found that cities around the country had seen pretty much the same declines in crime (see Table 15 for examples) regardless of their police policies—including cities that had no coherent anti-crime policy at all. Also that month, government-funded researchers writing in the staid Justice Quarterly seemed startled at how “ineffective, quick-fix, and piecemeal” their study found juvenile curfews such as New Orleans’ to be. University of Central Florida criminal justice professor K. Michael Reynolds and University of New Orleans sociologists Ruth Seydlitz and Pamela Jenkins called for more research on “why ineffective laws are popular, the functions served by these laws, and the climate that enables such laws to be enacted.”

               The big 1990s anti-crime policies did not cut crime; perhaps they were never meant to. After all, emerging prison, drug, and treatment interests profit from more, not fewer, offenders. The politicians whose campaigns they bankroll gain more by sounding tough than by doing good. The damage done by the Clinton presidency’s sophisticated promotion of simplistic, absurdly fruitless, sound-bite-friendly nostrums is incalculable; the only surety is that his reign represented one of the most pointlessly cruel eight years in American politics. The conservative ideologues of the new George W. Bush administration seem poised to do still more damage.