The United States is coming full circle in its deliberate misperception of, and failure to address, violence, especially by youths. In the early 1900s, sincerely motivated scholars in the University of Chicago Institute of Juvenile Research (sociology’s first “Chicago school”) carefully mapped their city’s highly disparate patterns of juvenile delinquency, compared them with neighborhood conditions, and collected hundreds of case studies of delinquent youth. They found juvenile arrest rates varied 25-fold across Chicago neighborhoods, identifying the “social disorganization” of impoverished, transient neighborhoods as the incubator of crime. They documented that the race or ethnicity of high-crime neighborhoods was irrelevant; it was the neighborhood itself that generated crime, regardless of whether its occupants were Italian, Irish, Polish, African, Latino, or Jewish. Their conclusion, published in Juvenile Delinquency in Urban Areas (1942), stands unrefuted today: “Reduction in the volume of delinquency in large cities probably will not occur except as general changes take place which effect improvements in the economic and social conditions surrounding children in those areas in which delinquency rates are relatively high.”
A half-century later, University of Southern California social psychologist and gang scholar emeritus Malcolm Klein concluded his four-decade study, The American Street Gang (1995), with identical, if more refined, words:
Suppression programs have shown no evidence of success. Uncontrolled forces will determine gang growth and decline.
Failure to share both social power and social responsibility yields a surfeit of social ills. Street gangs are one of these… an amalgam of racism, of urban underclass poverty, of minority and youth cultures, of fatalism in the face of rampant deprivation, of political insensitivity, and the gross ignorance of inner-city (and inner-town) America on the part of most of us who don’t have to survive there.
The American response has been remarkable in its denial of these findings and of the failures of those who ignore them. As noted in the last chapter, formal efforts have focused on every conceivable way to prevent crime by youths except changing the economic and social conditions of high-violence neighborhoods. Scores of projects—Cambridge-Somerville, Boston Midcity, Youth Consultation Services, and a myriad of replications and permutations—have flooded inner-city neighborhoods with psychologists, social workers, sociology graduate students, gang workers, mentors, family counselors, every sort of “expert” and “detached worker” seeking to reform the supposedly flawed individual, family, and culture. Nothing has worked. In the 1990s, a new set of youth mentoring (now volunteer), gang and drug education, and programming initiatives has followed its predecessors without a shred of recognition of their futility, but with a new element of cold punitiveness that cares little whether crime can be prevented or remediated.
Completing the circle, American authorities are returning to discredited, century-old biological explanations of crime. Instead of flawed African, Oriental, or female brains, the cause of bad behavior can now be found in adolescents’ supposedly undeveloped cerebral cortices. Exactly how this could explain a murder rate 50 times higher among the poorest inner-city youth than among the most affluent suburban youth—all equipped with adolescent brains—is answered with one word: Columbine. Harris and Klebold, a handful of youths in 25 million, proved all teens regardless of neighborhood or class are alike: innately murderous. This is a finding the New Biodeterminists (those who believe human behavior is fated by inherent biologically-based traits) embrace with wild enthusiasm—including liberal and leftwing crime experts who betray no awareness of the damage the charge of innate violence has done.
Are teenagers brain-flawed? In a word, no. The biological argument is circular: its conclusion rests on its assumption. Neurobiologist Richard Restak, a leading biodeterminist, doesn’t like adolescents. He invites readers to choose their own adjectives—all negative—to describe adolescents: “difficult,” “unpredictable,” “moody.” Only when they reach adulthood, “the culmination of human brain development,” do young people become “mature,” “likable” and “courteous,” he adds. Other experts he quotes say teens suffer “biological tumult,” “impulsiveness” and “disregard for consequences.” This is a direct return to the long-debunked 1904 adolescent “sturm und drang” (“storm and stress”) theories of psychologist G. Stanley Hall.
Restak’s 2001 book, The Secret Life of the Brain, recently serialized by PBS, is one of many recent works crediting neuroscience with confirming popular stereotypes of teenagers as rash and unthinking. PBS’ promo for “The Teenage Brain” segment read: “As the brain begins teeming with hormones, the prefrontal cortex, the center of reasoning and impulse control, is still a work in progress. For the first time, scientists can offer an explanation for what parents already know—adolescence is a time of roiling emotions, and poor judgment.”
Today’s authoritative-sounding statements about adolescents are similar to those eminent scientists once confidently issued regarding the supposedly flawed cerebrums of women and “inferior races,” as Harvard University scientist Stephen Jay Gould writes in The Mismeasure of Man. Researchers a century ago asserted that African Americans’ “less developed posterior lobe” explained their supposed impulsivity, irrationality and violent behavior. Women’s underdeveloped brains, wrote psychologist and sociologist Gustave Le Bon, produced “fickleness, inconstancy, absence of thought and logic, and incapacity to reason.” America’s preeminent early-1900s psychologist, G. Stanley Hall, argued that “savage” races, biologically, are “adolescents.” Today’s scientists reverse the sentence: Adolescents, biologically, are savages.
Modern science is more sophisticated, but skepticism remains warranted about its misuse to uphold popular stereotypes and official needs. Youth have been a favorite target. A 1987 University of Wisconsin analysis in the Journal of Youth and Adolescence of scientific theories on teen behavior published in decades of journal articles found strong evidence of “ideological purpose.” When politicians and business interests need more youths for wars and employment booms, scientists pronounce adolescents “capable and adult-like”; during peacetime and economic downturns, adolescents are “psychologically incapacitated, immature, and slow to develop.” The latest discovery of adolescent biological deficiency follows a decade of blaming youths for every social ill.
To test a theory, compare what it predicts to real life. Decades of psychological studies have exposed negative typecasts of teens as a “stubborn, fixed set of falsehoods,” concluded University of Michigan psychologist Joseph B. Adelson in a February 1979 summation in Psychology Today. In truth, “adolescents are not in turmoil, not deeply disturbed, not at the mercy of their impulses, not resistant to parental values and not rebellious.” UC San Francisco medical-psychologist Nancy Adler’s testing, reported in the 1994 Annual Review of Psychology, similarly found that “adolescents are no less rational than adults.” A Carnegie Mellon University team reviewed 100 scholarly studies and reported in the February 1993 American Psychologist that the “perception of relative invulnerability was no more pronounced for adolescents than for adults.” Northwestern University psychiatrist Daniel Offer’s studies of 30,000 youths spanning three decades found virtually “no support for adolescent turmoil” theories or hormonal debilities. Repeated surveys show only 10% to 15% of teens report dissatisfaction with themselves, their lives or their relationship with parents. “Decision-making for teenagers is no different than decision-making for adults,” Offer concluded in a June 26, 1987, Journal of the American Medical Association piece. A typical research review in Child Development reported that “minors aged 14 were found to demonstrate a level of competency equivalent to that of adults” in standard measures of reasoning.
The similarity of teenage and adult thinking is further shown by crime and health statistics, as cited throughout this book. If teen brains are inherently flawed, we would expect teenagers to engage in far riskier behavior than adults. This is not the case, however. Compared with adults, teenagers have higher rates of water-sports and traffic accidents but lower rates of other major mishaps like falls, drug and alcohol overdoses, drunken driving and suicides. The risks of teens committing a crime, contracting HIV/AIDS or becoming pregnant out of wedlock vary radically by socioeconomic status and typically parallel those of adults of their cultures, not teens of other cultures. “Our youth are no healthier or sicker than we, their parents,” Offer concluded. His 1992 review of 150 studies of teenage development and cognition, published in the Journal of the American Academy of Child and Adolescent Psychiatry, is summed up in its title: “Debunking the myths of adolescents.” Many more examples could be cited.
While brain function is difficult to interpret, practical reality is not: teenagers, when compared to adults, do not show the range of debilities that the New Biodeterminists predict. Despite its dubious validity, however, various interests have embraced the myth of teenagers’ biological inferiority because it allows them to abrogate teen rights and impose harsh punishments. Conservatives cite adolescent immaturity to justify zero-tolerance and abstinence-only policies, far more demanding standards than adults face. Liberal lobbies have been, in many respects, far worse in their sweeping condemnations of teenagers as irrational, immoral, and prone to unthinking, hair-trigger violence. Wrote Kim Taylor-Thompson, Academic Director of the Criminal Justice Program at the Brennan Center for Justice at New York University School of Law, in an April 9, 2001, editorial in the Los Angeles Times that invokes, yes, Columbine:
Adolescents don't think like adults. Just ask any parent. We don't let adolescents drive largely because kids perceive and calculate risk differently than do mature adults. We don't let adolescents enter into binding contracts because kids have difficulty contemplating the meaning of a consequence, particularly with respect to long-term implications. We don't let adolescents make major decisions without guidance because kids have less capacity to anticipate harm as an unintended result of their actions.
Adolescents tend to be less aware of--and less alert to--information, ordinarily using the little they do know less effectively. They fixate on some initial possibility and fail to adjust their decision-making as new information becomes available. Yet both the rhetoric and rules of modern politics assume that an adolescent's violent conduct somehow transforms him into an adult.
Strong as that assumption's grip may be, it is misleading. Cognitive and developmental psychologists confirm that adolescents display immature thought processes even late into their teenage years that may help to explain involvement in criminal activity. A recent report from the New York Academy of Science suggests an organic explanation: The prefrontal cortex of the brain--which enables us to anticipate the future rationally, to appreciate cause and effect and to control impulses--may not fully develop until we reach our 20s.
This is not meant to excuse adolescent conduct. Rather it suggests good news. In time and with guidance, adolescents will grow out of this. But given current trends in criminal justice policy, too many will find themselves confined in adult prisons just as they develop the maturity to control their behavior.
Let's be clear. The adult justice system does not adequately differentiate between adults and adolescents. When a teenager confronts charges in criminal court, he will not face a jury of his peers. Adults--most of whom have long since forgotten that they narrowly survived their own teenage years--will sit in judgment. We will judge that teenager according to standards of responsibility that do not even approximate his thought processes.
Courts routinely instruct jurors that they can presume that an individual "ordinarily intends the natural and probable consequences of his acts." But teenagers don't even think about the consequences that an adult would otherwise consider natural and probable. Adolescent decision-making bears little resemblance to the mental operation that adults and adult courts treat as typical.
On the second anniversary this month of the Columbine shootings, perhaps the best way to ensure our children's future safety is to confront the underlying causes of youth violence rather than trying to force adolescents into an ill-fitting adult paradigm. It is time to recognize that kids have unique needs and require unique justice.
More absurd still is Justice Policy Institute president Vincent Schiraldi, who wrote in another Los Angeles Times opinion column on September 16, 2002:
In the landmark Atkins decision in June, the high court barred execution of the mentally retarded, ruling that "those mentally retarded persons who meet the law's requirements for criminal responsibility should be tried and punished when they commit crimes. Because of their disabilities in the areas of reasoning, judgment and control of their impulses, however, they do not act with the level of moral culpability that characterizes the most serious adult criminal conduct."
Obviously, that reasoning is analogous to youthful offenders. Do juveniles have the same moral reasoning as adults? Of course not, that's why we don't let them vote until they are 18.
Do young people have the same level of impulse control? Again, an obvious no, which is why they are forbidden to drink alcohol until age 21.
“Ask any parent”? Teenage thinking is inferior to adult thinking, as proven by the fact that adults strip adolescents of rights? Imagine the argument Schiraldi could have made in 1860 in favor of slavery. As to the extremes to which such an argument can lead, witness liberal juvenile court defender Steven Drizin, whose October 5, 2002, Milwaukee Journal column concerned the deadly beating of a black man by a mob of black youths and adults ranging in age from 10 to 31 in the city’s poorest section. Fixating only on the teenagers, Drizin wrote:
Many teenagers, in the heat of the moment, would have had a hard time refraining from joining the beating. This is not to excuse such behavior - it is reprehensible - but to place it in the context of what we know about teen violence and how the way in which teenagers think can lead to disastrous consequences when violence erupts and they are among their peers. Juveniles tend to commit crimes in groups more frequently than do adults. This is because during the adolescent years, the peer group is often the most important influence on a teen's behavior. Identity formation takes place during adolescence and for teens, this consists of gaining acceptance among their peers…
Teen violence also tends to be more spontaneous and less premeditated than adult violence, in part because teenagers are more impulsive and less reflective decision-makers than are adults. This is as much a matter of biology as it is due to psychosocial factors such as susceptibility to peer pressure, being risk averse and unable to foresee the long-term consequences of one's actions.
New brain scan technologies now show that teenage brains are undeveloped in the prefrontal cortex, the very area of the brain that governs planning and judgment and acts as a sort of braking mechanism on more impulsive and instinctual behaviors. It's as if teenagers are hard wired to act first and think second and are less able to remove themselves from a dangerous situation once it develops.
One more element might help explain the behavior of the boys (and such violence is almost an exclusively male phenomenon). It is what the philosopher Friedrich Nietzsche meant when he wrote that "madness is the exception in individuals but the rule in groups." Teenagers from all kinds of backgrounds may be more prone to do things in groups that they would never even consider doing alone.
This idea of a mob mentality was hammered home to me by (another) incident… In November 1994, a group of mostly white suburban Philadelphia youths, believing a rumor that a group of city kids had assaulted a girl from their neighborhood, drove into Philadelphia's Fox Chase neighborhood for some payback. About twenty or so youths chased and eventually caught 16-year-old Eddie Polec and beat him to death with fists and baseball bats. Seven boys, aged 16 to 19, were charged with causing Polec's death. When asked why he beat to death a stranger, one of the boys told police, "I don't know. I just got caught up in the excitement. I swear, I never meant for him to die."
…As Milwaukee tries to make sense of this tragedy, it is important not to speak in broad-sweeping terms about "morally-bankrupt youth" or to look for answers in the fact that some of these kids come from broken homes, have relatives in prison or are affiliated with gangs. These cases are evidence that such spontaneous eruptions of violence can happen anywhere and involve anyone.
If any group other than adolescents were being demeaned (including sympathetically), Taylor-Thompson, Schiraldi, and Drizin would be guilty not just of the very kind of simplistic, short-term logic they accuse teenagers of, but of hate speech. If I believed these liberal authors’ emotion-laden statements about brain-defective youths mobbing and murdering in crazed rages, I would want every teenaged boy imprisoned from age 11 to 20.
Note that Drizin has to dredge up two incidents eight years apart to constitute his “evidence” that all male teenagers (unlike adults) are prone to sudden, unreasoning “mob mentality.” He ignores that adults conduct mob beatings as well (at youth soccer matches, for example!), that adults were involved in the Milwaukee beating, and that victimization surveys show poorer people of all ages are more likely to commit crimes in groups than are richer ones. He ignores circumstances that don’t fit his argument. In the Milwaukee case, the 36-year-old beating victim triggered the attack by responding to an egg thrown at him, thrower unknown, by grabbing a 14-year-old boy at random from the group and punching him in the face hard enough to knock out a tooth. Exactly the kind of “spontaneous…impulsive…payback” by a grown man that Drizin and the other authors attribute to adolescents.
As noted, there is no practical evidence—as the rarity of the Columbine, Milwaukee, and Philadelphia cases cited by the above authors attests—that male teenagers are prone to irrational violence. It may come as a shock to leftist biodeterminists that I regularly encounter groups of teenage boys (mostly Latino, but often mixed race) in my poorer Santa Cruz Beach Flats neighborhood, as I have many thousands of times over the years, and I have not been savagely beaten. Not even once.
Biodeterminist arguments are pernicious. Taylor-Thompson, Schiraldi, Drizin, and Amnesty International, in noble efforts to abolish the death penalty, display no awareness of the long-term consequences inherent in the quick-and-easy claims of teenage savagery they advance. If teens are incapable of thinking and innately violent, then such conservative crackdowns as curfews, requiring parental consent for abortion, mandatory drug testing, and long-term imprisonment (to protect society) are justified. And they ignore a more important and hopeful possibility.
While brain scans have revealed modest differences in teenage and adult brain processing—teens use more areas of the brain than adults do, on average—these are no more determinative of inferior or irrational behavior than similar findings regarding differences in male and female brain activity. Rather, biodeterminists such as Restak, his colleagues, and their disciples arbitrarily define the adult brain as the ideal—just as their forebears defined the white male Northern European brain as the ideal. Biodeterminists then define any deviation in cranial structure or neuroactivity from the “ideal”—which, by coincidence, just happens to be the brain of their own demographic group—as proof of flawed cognition.
However, the reverse assumption is equally valid. Restak and others report that the teenage brain is more “flexible” and “fluid” in its thinking than the adult brain. This they brand as evidence of teenage limitations. But since when is the ability to think flexibly a flaw? It could justifiably be said that teens are better able than adults to change bad behaviors and see new possibilities where adults spin in the same ruts, both superior qualities. That wayward teens are more amenable to rehabilitation than adult miscreants, as experts agree, could well be seen as proof of the adult brain’s hard-wired debilitation. That adults seem unable to conceive of any new ideas after decades of failure to address major social issues—and are even retreating back to the mire of 19th century biological arguments today—evidences the need to respect adolescents’ capacity for innovative thinking. If “a practical man can be counted upon to perpetuate the mistakes of his ancestors,” as Disraeli said, perhaps America needs a healthy dose of youthful thinking. Instead of seeing the teenage brain as an undeveloped adult brain, this reverse argument would go, see the adult brain as an atrophied adolescent brain.
A better argument would demean neither adult nor adolescent thinking, but would weigh the meaning of any small group differences against the far larger individual differences. Deploring “unsubstantiated claims about the incompetence of adolescents,” Carnegie Mellon researchers suggest reversing the lens and scrutinizing adults’ “cognitive and motivational factors that promote this harsh view of adolescents.” Could generic flaws in grown-up thinking explain the reckless compulsion of authorities who hurl simplistic stereotypes at powerless groups, the servile conformity of experts who cloak them in science and the smugness of the rest of us who swallow them?
The most impressive recent research as of this writing, a 2002 MacArthur Foundation-sponsored study of hundreds of teens and adults led by Temple University psychologist Laurence Steinberg, assessed whether juveniles are competent to stand trial as adults. The study found:
· Overall competence: 88% of young adults (age 18-24), 89% of 16-17 year-olds, 80% of 14-15 year-olds, and two-thirds of 11-13 year-olds were found to meet adult standards of responsibility for criminal acts. Note that 12% of adults failed the competency standard.
· Youths age 16-17 were as competent as adults when tested for abilities in understanding of rights, reasoning, appreciation of risk, future orientation (i.e., ability to anticipate consequences of acts), compliance with authority, and resistance to peer influences.
· Youths ages 14-15 were found nearly as competent as adults on all of these measures as well (a 20% versus 12% incompetence rate, on average). Interestingly, 14-15 year-olds scored as high as adults on future orientation—an area in which younger teens are incessantly berated for their supposed failings.
· Youths age 11-13 were generally competent, though not as much as older ages.
· Intelligence heavily impacts competence, regardless of age--40% of those in the lowest IQ range (60-74) were incompetent, compared to just 5% of those with IQs of 90 and above. Because IQ reflects accumulated knowledge and experience, younger teens are at a natural disadvantage here.
· The most interesting age-based finding was that younger teens are more trusting in authority than older teens and adults (and whites are more trusting than nonwhites)--that is, youths and whites are more likely to confess, be honest, and “trust the system.” Interestingly, the study authors judge (correctly, in my estimation) that trust in the system evidences incompetence! This, ironically, is a very damaging finding for the juvenile justice system, which relies heavily on the willingness of youths to confess their offenses (Grisso et al 2003).
Bottom line: 16-17 year-olds are as competent as adults, flatly so, and in a few measures superior. There is no reason not to follow the rest of the Western and Latin world in permitting youths age 16 and older entry into adulthood. Younger teens are surprisingly competent, and where they fall short, it appears due more to inexperience than to cognitive deficiency.
What is the upshot of such research? The average 14 year-old is capable of adult reasoning, and half are able to apply it situationally. The average 18 year-old is a bit more so; the average 30 year-old shows little improvement. Adolescents’ cognitive capabilities are similar to those of adults, but teens lack experience in applying principles. Whether the experience that comes with age improves moral reasoning and its application to real-life situations depends on how it is perceived. When placed in real-life situations, teenagers beginning around age 13, and nearly all by age 16-17, display reasoning and behavior similar to that of adults.
America suffers a high rate of violence by youths because it has a high rate of violence by adults. In Canada, Australia, and Western Europe, youths account for 7% to 10% of homicides. The same is true of the United States. While the percentage of America’s total violence committed by youths is similar to that of peer societies, our per-person rate of murder and gun violence of and by youths is much higher than in other countries because our rate of murder and gun violence by adults is much higher.
Yet, we just don’t get it. Panaceas like the National Campaign to Prevent Youth Violence and the “kids and guns” crusades have not and will not succeed—or, in fact, serve as anything but costly diversions—because they represent continuation of this country’s recipe for failure rooted in blaming demographic scapegoats. In the early 1900s, leading “experts” and institutions strove to help politicians “scientifically” prove that Eastern and Southern European immigrants, Japanese, Africans, and other “inferior stock” menaced white Northern-European American culture in rising numbers. The 1930s found California panicked at the invasion of a criminal, overbreeding, welfare-sucking horde of destitute migrant workers who barely spoke English—Okies. “Demographic scapegoat” scares of the type now targeting adolescents serve as little more than agents to translate vexing social problems into simplistic moralisms easily exploited by politicians and other interests. These and today’s other “kiddy” campaigns are impediments to confronting the true causes of American violence. They should be shut down.
The kiddy crusades should be abolished because they represent the worst of what has worsened in America’s relationship to its youth over the last three decades: the search for the “easy solution” that will allow adults to continue to drink, own guns, and generally subject their own behaviors to lax standards while demanding that government, laws, schools, and programs enforce zero tolerance regimes on kids. The “kiddy” campaigns conduct an endless search for an external explanation— “pop culture” Pied Pipers of sex, dope, and violence—that can be safely condemned and easily redressed, when in truth the source of what problems exist among young people lies in the values and behaviors of American grownups.
No one is a better example than Connecticut Democratic senator and former vice presidential candidate Joseph Lieberman. Lieberman condemns “the breakdown of the family,” yet he is divorced, which (statistically) multiplies the odds that the children affected will suffer school failure, drug and alcohol abuse, early pregnancy, criminal arrest, and later marital instability. Lieberman’s inconsistency (personally divorced but politically critical of family breakup) is not the problem; the danger is that the private failings of today’s leaders seem to fire up their public fervor to punish scapegoats. Rather than examining his own behavior, Lieberman stridently declares “crude, rude, and lewd” television, movies, music, and afternoon talk shows the fundamental cause of misbehavior among kids. The enterprise that created the Lieberman family fortune—a liquor store chain—is far more firmly linked to crime, violence, addiction, family chaos, and a host of other social ills than the “pop culture” images he condemns.
Such contradictions abound among the major parties’ presidential candidates, whose backgrounds (and sometimes foregrounds) variously include drunken driving, drunken vandalism, marijuana smoking, hard-drug use, heavy partying, criminal arrest, ongoing fraud investigation, and marital breakup, accompanied by pious self-affirmations that these leaders embody American family values and will enforce strict character standards on young people. The human frailties exhibited by today’s powerful seem to render them not more tolerant of similar failings by the less powerful, but more harshly judgmental and punishing.
In short, the kiddy campaigns should be done away with because they are designed not to confront the basic cultural assumptions that generate violence and other social ills. These include Americans’ penchant for blaming powerless scapegoats as a substitute for redressing grotesquely large economic inequalities, indulging moral rhetoric as a substitute for actually behaving responsibly, and rallying to panaceas that invoke empty symbolism as a substitute for making painful choices. Lieberman and his Campaign 2000 colleagues exemplify this American meanness of spirit and its record of denial and failure.
Reducing violence will require fundamental changes to the structure of American society. These changes include trade-offs most Americans may not be willing to accept. The sense of responsibility to larger society that underlies lower levels of European murder and gun violence implies curbs on many free-wheeling individual rights Americans take for granted. Further, unlike most other Western nations, the United States is a multiculture, and a sense of social responsibility cannot be forged here on the premise that Americans share a common race, religion, or experience.
Thus, when it comes to facing a complicated social problem like violence, Americans gravitate toward special interests that prosper from reassuring us that because the problem lies outside mainstream culture, no major changes or sacrifices are necessary. The Columbine shooting provoked scores of remedies, representing every religious, psychological, law enforcement, behavior education, and other interests’ efforts to exploit the tragedy to push their wares.
Whatever the culture doctors thought was wrong with American society, Columbine confirmed it, and their own special prescriptions would fix it. Although the sillier school-safety hokum got a lot of attention—mandating the posting of the Ten Commandments in classrooms, requiring students to call teachers “sir” or “ma’am,” arming teachers, keeping teenagers out of R-rated movies—these differed only in degree from the supposedly serious steps such as more security guards, metal detectors, and counselors. The more various interests benefit from advancing their diagnoses and solutions, the less interest they have in actually solving the social problem it now profits them more to maintain.
The most important thing America can do to reduce violence is to reduce social inequality—in particular, by preventing concentrated poverty. Our peer Western nations have shown how this is done by eliminating widespread poverty through social welfare systems (Western Europe, Canada, Oceania) or enforced corporate responsibility (Japan). Social disadvantage is the big reason rates of homicide and gun violence grade along income lines regardless of race or age.
But most Americans seem to regard even the minimal social welfare system necessary to eliminate concentrated poverty as a “giveaway” to the undeserving, even when government subsidizes business and other big lobbies by the billions, and even when a majority of America’s poor work full time in an appallingly stratified economy. That is why fundamental changes in American society must precede the kind of social reforms that would effectively reduce violence. The reform discussed in the next section for preventing the bullying of unpopular students by school hierarchies is also the kind that would promote attitudes favorable to reducing the bullying of dispossessed social groups by the larger American hierarchies our high schools mirror.
Because American politicians and institutions are unwilling to consider fundamental changes in American society, effective solutions have to come from the “bottom up”—in this case, from (or involving) youths themselves. The bright spot is the improving behavior of American young people. Why today’s younger generation is less inclined to drug and alcohol abuse, serious crime, suicide, and violent demise than its parents’ generation was is neither acknowledged nor understood. As shown in Chapter 4, these improvements do not coincide with crackdowns and programmatic interventions but appear to result from quieter, long-term changes in young people themselves.
Generally, over the last generation, kids have gotten better and adults have gotten worse. Behind the isolated but spectacular suburban and rural high-school-student shootings lies the backdrop of the lowest murder and gun fatality rate among suburban and rural kids in three decades. Behind the distressing
image of Michigan’s recent first-grade murderer is the backdrop of the lowest homicide arrest rate among grade schoolers since statistics were first compiled in 1964. Behind the spike in murder, robbery, and assault among poorer youths of the late 1980s and early 1990s stands the larger reality of falling felony rates among youths as a generation. Behind a few celebrity drug overdoses lies the lowest drug and alcohol abuse problem among teens in at least three decades. Behind the “generation at risk” image sown by self-interested politicians and institutions lies a younger generation less likely to die violently than any in 35 years. The journalists headline horrifying anomalies, but they and the authorities they quote grossly distort reality when they claim these are iceberg tips of masses of troubled youth. The reverse is true. The large-scale, long-term improvements in teenage behavior are all the more remarkable given the deterioration in environments adults provide and the behaviors adults model. This volume’s last table, showing figures from the California Department of Justice’s annual crime report, dramatically illustrates these counter-trends (Table 16).
It’s difficult to find words to sum up these astonishing trends. In the late 1970s, 52,000 California white youths were arrested for felonies each year, nearly twice the number (29,000) and seven times the per-person rate of white adults old enough to be their parents. By 2002, after 20 years of steady decline, the number of felony arrests among white youths had fallen by half (to 17,000) but had tripled among white, over-30 adults (92,000), so that parent felony rates now exceed youth arrest rates! Youth and adults of color show similar, though cyclical, trends. Neither changes in law enforcement procedures nor changes in statistical reporting account for these trends; in fact, new get-tough laws and policing should have resulted in disproportionately more youth arrests.
Another example of unexplained youth improvement is drug abuse. While drug-war and drug-policy reform interests squabble over increasingly irrelevant points such as whether the War on Drugs or drug decriminalization causes more teenage dope use, teenagers designed their own drug policy: moderate use of milder substances and avoidance of hard drugs and addiction. It’s hard to argue with success. In 1970, 136 New York City teenagers died from drug overdoses; in 1997, eight. In 1970, 223 California teens died from drugs; in 2000, 33. Regions betwixt showed similar, if less dramatic, trends.
“Teenage drug policy” appears to be a reaction to the immediate, escalating crisis of adult drug abuse that America’s prominent drug debaters have chosen to ignore. A “teenage policy” that arises during a period of political and institutional default need not come from youths themselves; it can come from adults, mixed-age groups, programs, or combinations thereof, so long as it addresses a genuine problem confronting young people and uses their particular strengths. An example is the rapid decline in murder and violent crime among Los Angeles youths in the 1990s, the result of gang leaders, churches, and other community institutions cooperating to forge truces. Similar forces in other cities produced similar benefits.
Yet political, institutional, and crime authorities continue to debate and promulgate policies as if these surprisingly optimistic trends in youth behavior never occurred. This is a social science dereliction equivalent to California engineers building skyscrapers as if earthquakes didn’t happen. Even though I’ve presented these standard statistics in many law enforcement, academic, media, and institutional forums over the last five years without encountering serious challenge to the numbers, authorities continue publicly to depict youth crime and drug abuse as menacing a peaceable, baffled adult generation.
A particular strength of young people (and one also found in adults) left out of policy discussion is adaptability. Today’s second generation of youth exposed to widespread drug availability is not acting like baby boomers, the first drug-exposed generation. Late-1990s and post-2000 teens display far lower rates of homicide and violence than late-1980s teens. In both cases, younger teenagers learned from seeing first-hand the problems drug abuse and violence caused among adults in their families and communities.
Effective solutions, then, must confront real problems facing young people (as opposed to whatever “problem” politicians and interest groups find convenient to depict youth as having). They also have to work with the considerable strengths this generation has shown. Instead, self-interested groups have commodified youth to serve their purposes, rigidly defining young people as an undifferentiated mass of risk, issuing appallingly inaccurate assertions, and imposing simplistic one-size-fits-all solutions without regard to fairness or effectiveness.
The grotesque “post-Columbine” display challenged the right of a large array of American interests to claim grownup maturity and genuine concern for kids. It was a national disgrace founded in the panicky, provably ridiculous premise that legions of teenagers were poised to shoot up their schools. Therefore, the solutions emphasized drastic reorientations of the entire generation to solve the imagined new problem of Why Johnny Packs Heat.
The “difficulty” (in reality, the blessing) in preventing school shootings is exactly the opposite: Student gunners, literally, are one in a million. Like middle-aged and elderly mass killers, school gunboys symbolize no one but themselves. So, preventing the rise of the one Klebold or Kinkel or Carneal in 25 million teenage students, or even among the unknown millions who could be described as “alienated” at some time, requires an approach that is both inclusive (because school shooters are so rare they are hard to identify in advance) and non-oppressive (because the vast majority of students don’t need to be deterred from gunning down classmates). That is, a solution that will benefit all students and the education process itself in larger ways while affecting the murderously alienated in particular.
“Different” Kids Are Not the Enemy
Of the mass of notions about how to prevent school massacres, I only encountered one that made sense. Nobody Left to Hate: Teaching Compassion After Columbine, by social psychologist emeritus Eliot Aronson, is a potentially revolutionary book whose self-imposed limitations render it merely a sensible analysis and prescription for “school violence.” And that’s welcome enough.
Aronson’s theory is not original but is stated with particularly compelling clarity: although the very few outcast students who go so far as to shoot guns in schools are extremely disturbed, the most worrisome pathologies lie with the popular kids who exclude and taunt them. His premise is sharply at odds with the push by federal agencies and school districts to deploy psychological and law enforcement (including Secret Service) “threat assessment” profiling to identify potential student rampagers. The profiles themselves, aside from being futile, represent a dangerous attitude. Atlanta psychotherapist Joyce Divinyi has been training educators in her copyrighted “E-T-A” acronym to identify potentially dangerous students who operate “purely on emotion (E), bypass the thinking (T) process, and act (A) without awareness of the ultimate consequences.” I can’t imagine a concept less likely to have fingered methodical, long-term philosopher-killers like Klebold and Harris.
Similar programs include the Los Angeles Police Department’s “Mosaic for Assessment of Student Threats” (if such assessments work, how come the LAPD doesn’t use them to screen out rogue cops?) and extensive questionnaires used by school districts in Dallas, Cincinnati, and other cities. These ask students to admit, on forms which identify the respondent, whether they ever abused animals, are depressed, or can get a gun. The wrong answers bring referrals to parents, police, and therapists, which is certain to make students wary of answering honestly. Again, it’s hard to imagine how such crude approaches would have stopped the Columbine killers, who were bright and adept at concealing their plan to the point that even Harris’s psychiatrist didn’t see his rampage coming.
Temple University psychologist and expert on adolescents, Laurence Steinberg, lent sanity to the stampede toward profiles: “They’re bogus, a complete waste of money. The only people who will profit from them are the people peddling the threat assessment programs. They’re capitalizing on extremely rare national tragedies.” Worse than bogus, the “threat assessment” attitude behind student profiling quickly degenerates into “blame assignment,” contributing to the very problem it is supposed to prevent. Cyber-journalist Jon Katz’s book, Geeks, includes hundreds of accounts from students who, in the “post-Columbine” orgy of fear, have been “harassed and brutalized and excluded for being perceived as being different, or geeky, or non-jock, or ugly.” School administrators multiplied the persecution by forcing harassed students into counseling, notifying police about the danger they supposedly presented, or suspending or expelling them for developing understandably angry, antisocial outlooks. The undeclared strategy seems to be to maximize the chances that a “different” kid will become an enraged, anti-social one with a genuine reason to hate peers and officials.
Aronson agrees that the campaign by authorities to identify potential school shooters by scrutinizing misfit kids is “shining (the) spotlight on the wrong part of the equation.” This is the crux of revolutionary work: it’s not the rejected “loser” minority that’s the problem, but the rejecting “winner” elite and the in-between students who conform to the winner-students’ bullying.
Had Aronson gone one step further and described how the high school’s winner-loser-conformity atmosphere reproduces that of adult society, he would have (in my humble judgment) penned the classic diagnosis of American social ills. But here the book is curiously muted. Aronson seems to regard the cruelty of the high school hierarchy as unique to adolescence and its supposed insecurities and testosterone. He recalls his own painful high school experience as a kicked-around kid and generalizes it to posit a singular climate of peer torture students inflict on each other.
I disagree that high school students are uniquely prone to cruelty. My experience as an unpopular junior high student and a more accepted senior high student is that school administrators and teachers were every bit as apt as students were to reward popular kids and to reject, even bully, “different” kids. The gym coach openly sided with the senior high jocks (his own players and student aides) and stood aside as they beat up unpopular seventh graders and favored the elite ones. The faculty sponsor of the student newspaper permanently banished me after I submitted a pro-integration column (in 1963) challenging her hand-picked editor’s column supporting Oklahoma City Board of Education efforts to maintain racial segregation. Teachers (with a few exceptions) also picked on the loner students and made fun of them. Later, when I got a bit more popular in senior high, won election to the class council and student recognition, teacher and administrator attitudes warmed to me considerably.
Indeed, Aronson points out the authoritative reaction to the school shootings was to deputize the “normal students” to help identify potential weirdoes. The adults in charge, no surprise, sided with the students in charge in “implicitly sanctioning the rejection and exclusion of a sizable group of students whose only sin is unpopularity.” Aronson views this “self-serving response” as the most justifiable from a narrow bureaucratic standpoint, likely to win support of parents and larger society and cover administrative butt if a shooting happened to occur in their school.
In other words, the grownups agreed with the high school soshes that it’s the oddball kids who are scary. This reflexive administrative favoritism for the upper crust students cannot be excused as a rational school safety strategy. The fact that even among the oddball kids, school shooters are so extremely rare that their “personality type” is almost impossible to identify should tell us that school administrators armed with surveys, and even a network of “normal-student” informants, aren’t going to unearth the one-in-a-million freshman gunslinger. Yet, Aronson does not pursue these points, which have powerful implications for why the reasonable remedy he proposes has not been widely implemented.
The unfortunate result is that Aronson winds up reinforcing the current prejudice that adolescents themselves are an outcast group separate from and unequal to adults. He implies that adult environments are not as abusive as high school and asks, “what is it about being an adolescent boy” that makes them inflict “verbal and physical abuse, bullying, and shame”?
Nothing they didn’t learn from their elders. In years of working with mixed adult and adolescent groups, I saw only one age-related difference in the style of bullying: teenagers are somewhat more likely to taunt their subjects to their faces (which is what makes them seem “cruel”), while adults are more likely to gossip behind their target’s back. (College faculties are notorious for malicious backbiting!) A major transition from childhood to adulthood involves learning how personal conflicts are best handled in a society that sorts winners from losers. In cases in which the target of abuse has any ability to retaliate, teenagers learn from adults that gossip beats face-to-face confrontation hands down. In cases in which the target of abuse can’t fight back, children learn from adult modeling that direct confrontation wins approval. Grownups in authority routinely bully children and adolescents with yelling, personal insults, and even physical punishments, methods that win plaudits as “tough love” when directed at kids even though they would be unacceptable if used to shame or discipline other adults.
The mass shootings by adults detailed in Chapter 3, particularly those targeting co-workers and bosses, demonstrate that the winner-loser mentality of bullying and rejection is also very much a part of grownup culture. Its expression may be less direct, but it is no less cruel. Long before I decided to study or write about teenagers, I learned to prefer younger adolescents’ direct expressions of displeasure toward me and each other over older adolescents’ and adults’ preference for destructive gossip.
Aronson displays the same curious reluctance as most other authors on youth topics to compare theories of what adolescents should be like to real-world behaviors that contradict them. Unlike most authors, however, he acknowledges from the beginning that schools are about the safest places in society from murder. Yet how can that be, if schools are stuffed with dangerous testosterone-driven teenage boys? He depicts adolescence as a life stage highly prone to stress and suicide (another logical conclusion from the theories of adolescence he presents), yet he nowhere discusses the counter-truth that teenagers are far less likely than adults to take their own lives. He points to a recent survey showing that 63% of rural and suburban teenagers, compared to 32% of urban teenagers, report that they own guns or live in homes where guns are readily accessible—yet he doesn’t discuss why, if twice as many rural/suburban youths have guns handy, they display firearms violence rates only one third those of urban kids! If we are to approach problem-solving in the scientific manner Aronson proposes, these massive contradictions between the theory of adolescent fragility and savagery versus the reality of generally low rates of tragic outcomes require serious contemplation.
The Enemy Is Us
Even with these limitations, Aronson’s book remains a remedial classic. His discovery from decades of research in hundreds of schools was that high school hierarchies thrive in an “academic environment… designed to encourage students to compete against each other.” The atmosphere of intense competition and the destructive echelons it spawns can be broken down by simple “cooperative learning techniques.”
Aronson and his graduate students designed what he dubbed the “jigsaw” technique in 1971 to defuse racial conflict in the newly integrated schools of Austin, Texas. Students were arbitrarily assigned to groups whose members’ individual grades depended on how well each cooperated with the group to accomplish an educational project. Each student was assigned a part of the project to study and report on to the group. As an example, one student in a group studying World War II might be assigned the bombing of Pearl Harbor, another the Battle of the Bulge, and so on. Each student researched his or her own topic, sometimes meeting with students from other groups who had been assigned the same topic to check information. The grades of individual students in the group depended on how well they absorbed each other’s information in order to pass a test on the project. Inevitably, working together, the popular, alienated, and in-between students formed strong attachments to each other, especially when individual skills helpful to the group’s success (and therefore its grade) emerged.
The “jigsaw” method’s success resulted from shuffling the groups. As soon as the project was completed, the student groups were broken up and their members reassigned to form another set of groups with new projects. After much grumbling about how much better their old winning teams were, the students set about cooperating with their new colleagues—and again, strong bonds developed. “About the third time they go through this group process, the students start to get it,” Aronson said in an interview on “Talkabout” on KZSC, the UC Santa Cruz student radio station. “They realize it isn’t any particular group of students that is special; they can work with any group.” The result of the “jigsaw” approach to breaking up and reforming classroom groups was that outcasts were included and students eventually wound up with “no one left to hate.”
Aronson contended that if Klebold and Harris had been valued members of a changing, ongoing group process in their classes—as even the most unpopular students in the hundreds of jigsaw groups he observed became—they would not have been isolated, harassed, and filled with murderous rage. Both, after all, had skills that would have made them valuable to groups. (Aronson didn’t claim such a method would reach true sociopaths. However, students who committed mayhem didn’t seem to begin as detached, cold-blooded iceblocks anyway, but as kids who cared too much about what their classmates thought of them.) Aronson backed up his conclusions with a raft of evaluations showing that at schools where his jigsaw process was tried, sharp declines in fights, taunting, cliquishness, and other generators of uncool-kid misery ensued and academic performance was enhanced.
Regardless of evaluations, Aronson’s ideas rang true to me because I’d seen such a process work in summers as a crew leader for the Youth Conservation Corps in Olympic, Yellowstone, and Yosemite National Parks. YCC brings diverse high school students ages 15 to 18 together to work in crews on hard labor wilderness projects. Inevitably, the crews turn high school social orders upside down—the cheerleaders and jocks are hated, the geeks adored. The massive readjustments that took place in kids in these crews over the summer, usually after a rocky beginning, were astonishing. I never saw a real fight (or even seriously threatened one) among the dozens of YCC kids I worked with, and the taunting usually took on an affectionate tone a couple of weeks into the summer.
I can think of a couple of seriously misfit boys and girls who I thought had the potential to do some damage one fine day; all, by the end of the summer, were among the most popular kids in the camp. One summer one of my crews teamed up with a sheltered workshop for mentally handicapped youths from a nearby town to replace a half mile of board walkway across a swamp—a grueling, muddy project during which the “normal” kids became so fond of the workshop kids that the two groups socialized closely the rest of the week. I found other YCC leaders had stories similar to mine.
Readers may already perceive the problem with this heartwarming little tale, but let me presage it. While the kids were getting along beautifully, insulting each other with X-rated personal gibes and hammering through their conflicts in the face-to-face group reconciliations the crew leaders were forbidden to attend (except when we were the subjects of complaint), the adult staff suffered massive quarrels. We were the ones who couldn’t get along. The reason was the same one Aronson encountered—while the kids profited from figuring out ways to cooperate at close range and get the work done, the staff profited from competing. Our National Park Service evaluations ranked us against each other. Our superiors praised, criticized, and compared us in pointed ways as to whether my crew did a better job or was better behaved than another crew leader’s. So bitter did staff conflicts become that by my last year in the YCC, vicious gossip, personnel complaints, suspensions, firings, administrative litigation between the program executives, and other infighting wasted vast time and morale.
It was only after my teenage crew and I hiked many miles into the park’s backcountry for a week-long revegetation project at a remote alpine lake that the answer dawned on me. Granted, an 11,000-foot ridgeline was an ideal spot to ponder why my volatile, multiracial, multi-class, two-sex crew of adolescents got along so warmly when forced together seven days a week in grinding, 14th-century stoop labor all summer while the adult staff could barely spend an hour in the same ranger district without ganging up and screaming. The kids saw each member of their crew as vital to their own well-being. The hard-core, knife-wielding satanist whom the terrified girls on my crew had begged me to send home two days into the summer became the crewmate they missed and cried over when he had to spend a couple of days in the infirmary.
Yet, Aronson’s and other practical, cooperative learning techniques have not caught on in American schools over the last 30 years. Why not? Aronson’s answer: “I’m not sure.” I suspect the difficulty in implementing Aronson’s plan on a large scale is that ultimately, it implies a fundamental reworking of American culture. The high school is not a uniquely cruel institution; it is just the initial sorting process in a larger society that demands winners and losers. While middle and high school students, due to their less entrenched thinking, may work idealistically to everyone’s benefit in cooperative situations imposed on them, the larger society’s winners have no reason to change and no intention of submitting to cooperation schemes just because it would make America a better place.
Like the high school elites, the president and Congress spent most of the last 20 years isolating and blaming the allegedly irresponsible personal behaviors of unpopular groups (youths, poor people, welfare recipients, illegal immigrants, gays) for American social problems. Just as the mass of average students accept the popular kids’ verdict and join in picking on the outcast students, so mainstream American institutions, the news media, and “experts” tamely fall in line behind national leaders’ crusades to stigmatize outcast demographic groups as the menace to “traditional American values.”
In the 1990s apex of sophisticated demonization of outcasts, the Clinton administration blamed and bullied pregnant teenagers and welfare mothers because polls and focus groups showed such a “Democratic family values” crusade would win votes—that is, would make the president more popular. The press and political experts lauded Clinton’s skillful stomping on the powerless as a brilliant “New Democratic” strategy that stole Republicans’ moral thunder. Voters ratified it in the 1996 election.
And we wonder why high school students strive to sort themselves into winner categories by making other students losers? Who do we think they learned that from? Adolescents would have to be self-sacrificing rebels indeed to pursue any other strategy. They can see the larger society in front of them, in which American hierarchies allow the winners to accumulate massive wealth at the same time they build vast police forces and prisons to contain the losers. Just as the exclusion and pain inflicted by high school hierarchies provoke a few of the excluded to extreme violence, so the isolation and stigma inflicted by centrist political and institutional hierarchies provoke a few of the “underclass” to higher levels of murder and gunplay than found in the privileged classes these hierarchies reward. The “teen panel syndrome”—in which teens criticize their “peers’” supposedly rampant bad behaviors for the benefit of adult audiences—is just practice for adult-level sycophancy, in which social scientists, biologists, and institutional experts gain popularity by reliably telling leaders what they want to hear.
Unfortunately, the forces that create America’s winner-loser system are not ones to produce leadership that will reverse it. Just as a high school’s hierarchy is rarely composed of the school’s most thoughtful and public-spirited students, so our political system does not pick the most high-minded candidates for high office nor business the most socially conscious CEOs. The skills required to rise in America’s echelons from high school to Pennsylvania Avenue and Wall Street are not the skills needed to render statesmanlike service once at the top.
Still, things are not going as dismally as our dinosaur-eat-dinosaur institutional structure would have them go. In Framing Youth, I speculated that the startling improvements in youth behavior over the last 25 years result from the very thing our leading authorities most fear and seek to suppress—peer values and culture. Studies by the National Association of Secretaries of State show today’s young people volunteer more for community service than those of past decades. Further, youths today (and adults who know how to work with them) readily form their own groups around common interests and mutual defense, aided by the ubiquity of Internet connections among the middle and upper classes and by gangs and other community groupings among the less privileged. That ability is the most plausible reason the kids we’d expect to mess up the worst are doing better.
My criticisms of Aronson’s theories about adolescents in no way diminish his proposal for a more cooperative classroom atmosphere. In fact, his recommendation would be stronger if justified by the healthy trends and positive behaviors among today’s teenagers. The record of today’s youth in volunteerism and in forming their own alliances indicates they are likely to respond well to cooperative classroom approaches. Just as including outcast youths as valued members of their schools is vital to curbing school bullying and violence, so including this “outcast generation” of youth as valued members of American society is vital to mending the social divisions that contribute to excessive violence throughout our culture.