by Frank Furedi, First Things, January 2021

Students clamor for “safe spaces.” Protesters smash windows, and political leaders insist that it is important to hear their “voices.” Parents speak of children’s behavior as “unhealthy,” avoiding moral terms. We tend to think of these as recent developments, which have come upon us because of “cultural Marxism” or some other alien invasion or academic virus. But this is not so. In truth, we are witnessing the fruition of a long history of antipathy toward moral authority that has been sponsored for decades by social scientists, intellectuals, professors, and activists. It’s not something new that is disorienting and demoralizing the West: It is a venerable set of attitudes encouraged by generations of psychologists, educators, child-rearing experts, and even religious leaders.
Since the early 1920s, Western culture in general and American society in particular have become increasingly hostile to the conscious act of judgment. It was at this point that the term nonjudgmentalism was coined; progressive educators, social workers, and therapeutic professionals adopted it as a core axiom of their work. They believed that the establishment of a relationship with students, welfare cases, and patients that was free from judgment would encourage people to open up and confide their problems. This professional ideal of establishing a relationship of open and neutral nonjudgment would in the decades to come mutate into the contemporary idealization of the “safe space.”

In its early version, nonjudgmentalism was justified on instrumental grounds. The disposition was seen as a way to gain the trust of individuals in need. Advice books directed at parents urged this technique, warning mothers and fathers to refrain from imposing judgment on their children. “As loving parents, we try to stay in touch with our wayward children, keeping lines of communication open, being nonjudgmental,” counselled one commentator in 1925. In 1934, a supporter of the National Council of Parent Education urged nonjudgmentalism on the promise that it would create an environment wherein “emotional conflicts may be worked out and a better balance of emotional stresses achieved.”

During the 1930s, therapeutic professionals and sections of the helping professions expanded the meaning of nonjudgmentalism from a method of support to assume the general status of a positive personality trait for those seeking to help others. In their 1935 Newsletter, for example, the Family Services of America noted that social workers’ desirable “personal qualities would include, among others, sensitivity, freedom from prejudice, non-judgmental and non-managing attitudes, a genuine feeling for people, maturity, capacity for self-development.” It was not long before these qualities became ideals for everyone.

At the same time that nonjudgmentalism attained this new status, a parallel sentiment associated judgment with negative outcomes. In some cases, the transmission of moral values to children was presented as undesirable or even dangerous, for it was claimed that the time-honored methods of moral formation suffocated their creativity and healthy development. By the late 1930s and especially the 1940s, judgmentalism was linked to authoritarian behaviors and deemed the cause of totalitarianism. Judgment, that is, was at the root of conflict and war. Moral truths firmly stated and the acceptance of moral authority were depicted by progressive pedagogues and social scientists as symptoms of a sick personality.

This devaluation of judgment contributed to the steady unravelling of moral authority in the cultural imagination of the West in the mid-twentieth century. From the interwar period onwards, but especially since the 1940s, moral authority shifted from a necessity many regarded in a positive light to something increasingly ominous. In this shift, the very term authority lost much of its legitimacy. Indeed, in the 1940s and 1950s, numerous commentators increasingly used the terms authority and authoritarian interchangeably. They often dismissed moral authority as an outdated ideal, favored by those who wished to impose their dogma on impressionable people. They derided the act of obeying authority as a marker of psychological malaise. One influential text published in 1941 by psychologist Erich Fromm—Escape from Freedom—was devoted to the study of this malaise. Fromm asserted that acceptance of moral authority was the result of a “psychological mechanism of escape” based on a “simultaneous love for authority and the hatred against those who are powerless.” Such attitudes were supposedly typical traits of the “authoritarian character.”

In the postwar years, Anglo-American liberal and leftist intellectuals outside the realm of psychology echoed the diagnosis. A person’s need for authority was understood as a form of psychological deficit, while the act of judgment was associated with the exercise of authoritarian impulses. One scholarly account of the time echoed the increasingly popular view that authority is “necessarily oppressive and distorts personality development.”

Fromm was a Marxist-inspired intellectual associated with the Frankfurt School. But these judgments about judgment and authority were not limited to leftist intellectuals or radical thinkers. The English biologist and philosopher Julian Huxley expressed similar sentiments in his 1950 lecture to the Royal Anthropological Institute of Great Britain and Ireland. The need for authority, he said, indicated a psychological problem inflicted by a “burden of guilt” which was “a result of having to repress the impulses of aggression in the primal infantile conflict of hate and love for the parents.” Huxley advised that the search for the certainty of a moral truth was in any case a futile one, since “the provision of dogma, whose absolute truth is buttressed by authority or guaranteed by revelation,” is likely to break down “in the face of the accumulation of new facts and new knowledge.”

Along with other leaders of the newly established post–World War II international organizations, Huxley was a scion of the British intellectual establishment. As the first Director General of UNESCO, he played an important role in gaining institutional support for the delegitimation of moral authority. Huxley and his colleagues dismissed morality as dogma while depicting the exercise of judgment as the unthinking application of archaic values. Some went so far as to suggest that moral judgment, particularly its idealization during the process of the socialization of young people, was responsible for both world wars.

Brock Chisholm, the Canadian psychiatrist and first Director General of the World Health Organization (1948–1953), adopted an aggressive stance against the exercise of judgment. Chisholm believed that people had to be reeducated to reject their old-fashioned moral outlook. This reform would begin by rejecting moral formation of children that relied on the “concept of right and wrong.” In a widely publicized lecture, delivered in 1946, he asked, “What drives people to war?” His answer: “Only one common factor—morality.”

He proceeded to call for the “substitution of intelligent and rational thinking for faith in the certainties of the old people.” His program had the earmarks of a revolutionary movement, sweeping in its verdict and daring in its proposals. Liberation from the crushing burden of moral distinctions was essential in order to overcome the “poisonous certainties fed us by our parents, our Sunday and day school teachers, our politicians, our priests, our newspapers and others with a vested interest in controlling us.”
Although Chisholm was more vocal in his hostility to moral judgments than most, his outlook matched the views of cultural elites during the postwar era. The published version of his lecture was introduced by Abe Fortas, at the time the Under Secretary of the Interior, who praised Chisholm’s thesis because “freedom from moralities means freedom to think and behave sensibly.” This newfound “freedom from moralities” was hailed by parenting experts, educators, and a growing body of therapeutic professionals as the foundation of an open-minded personality. By definition, such a personality was nonjudgmental and regarded distinctions between right and wrong as simplistic. This open-minded ideal was widespread. In 1948, literary critic Lionel Trilling wrote of a “characteristically” American impulse not to judge people too harshly, noting the assumption that the consequences of judgment “will turn out to be ‘undemocratic.’”

Two years after Trilling’s statement, the supposed link between moral norms and undemocratic attitudes and behaviors was underscored by the widely discussed findings of The Authoritarian Personality. One of the most influential social science texts of the twentieth century, this study claimed that individuals who exhibited authoritarian and fascistic personality traits in their personal life rely “upon authority, conventionalized values, church dogma, public opinion, and prestige figures.” The main author of the study, Theodor Adorno, concluded that “it is authority more than anything else that structures or pre-structures the world of the prejudiced individual.” This link between authority and a dysfunctional personality continues to exercise a dominant influence over political psychology to this day.

The polemics against authority inevitably eroded confidence in moral judgment. As Chantal Delsol observes, “Thoughtful moral judgment establishes a relationship between a situation and certain points of reference.” Individuals justify their judgments by referring to a moral source whose authority is recognized by their community. But this was precisely what the emerging consensus deemed dangerous and “anti-democratic.” As one account of the changing status of moral authority explained in 1966, putting “right” in scare quotes:

Traditionally man points beyond the limitation of his own judgments of value to some ultimate source that he expects will authenticate and give validity to the conviction that these decisions are universally “right.”

Without an authoritative moral source, judgment loses cultural currency. Not surprisingly, when confronted with the claim that moral judgments are merely an expression of the personal opinion of a closed-minded individual, many citizens in the mid-twentieth-century West decided to keep their views of what is right and wrong to themselves. With so many experts challenging the value of moral absolutes, the very idea of obedience to normative authorities became suspect. Social scientists adduced powerful historical reasons for warning against (presumed) moral authorities: The willingness of civilian populations to obey totalitarian leaders leading up to World War II proved to many that obedience was inconsistent with a democratic form of public life.

In the 1950s, the word obedience was frequently qualified by the term unquestioned, as if this was an act carried out by unthinking and uncritical people. Such sentiments were particularly influential in intellectual life and the sphere of education. Even a child’s obedience to parent and teacher were sometimes presented as the precursor to more dangerous forms of deference to authority, the first steps on the road to an “authoritarian personality.”

One American study of public opinion relative to traits desired in children, published in 1988, shows how much things had changed. Between the years 1924 and 1978, polling revealed a marked decline in the valuation of obedience to family and church and an increase in the affirmation of individual independence. Preferences for “strict obedience” fell from 45 percent in 1924 to 17 percent in 1978, while preferences for “loyalty to the church” fell from 50 percent to 22 percent. By 1978, the “most important traits desired in children were independence and tolerance.”

Trends in Britain closely paralleled those in America. One educator declared in 1957 that “nothing is so characteristic of twentieth-century man as his critical and questioning approach to all traditional forms of authority.” What brought this about was not simply a revolt against authority, a phenomenon that many imagine burst onto the scene only in the 1960s. That was just one side of the phenomenon. There was also the other side: Those in authority lost their will to exercise it. After all, the people casting authority as treacherous and inhumane were themselves in positions of authority! As one prescient account written in 1957 observed, “The alteration of attitude has taken possession not only of those who take the orders but of those who give them: the one side has become more hesitant as the other has become more clamorous.” In effect, authority had become a sort of embarrassment to those who were called upon to exercise it, a subject best avoided or disavowed.

The most striking expression of society’s reluctance to judge was the estrangement of religious institutions from their own traditions. As the priest and pastor lost their authority, so did the dogma that they were appointed to spread. British commentator Noel Annan pointed out in 1990 that there was a “transformation of the Christian message in the post-war era”:

The liver and lungs were torn out of the old theology, leaving the heart still beating. Compassion came from the heart, judgment disappeared. Personal evil and wickedness were no longer so sinful.
According to this new theology of the heart, charity is opposed to judgment. Compassion means that “we don’t judge others.”

By the time Stanley Milgram published his classic polemic against authority, Obedience to Authority: An Experimental View, in 1974, the idea that obedience was a dangerous and dysfunctional form of behavior enjoyed considerable cultural support. Milgram based his book on a famous experiment in which subjects were asked to inflict pain upon others at the command of a presumed expert scientist. The readiness of Milgram’s study participants to obey an authority figure who commanded them to deliver electrical shocks for “scientific reasons” became a widely disseminated moral tale about the dangers of deference. Milgram concluded that his experiment demonstrated that the rise of Nazism stemmed from people’s willingness to obey authority. Although the experiment has been widely discredited—sharp criticisms have been raised about Milgram’s methods and his distortion of results—it remains popular as a cautionary story about the perils of authority.

When considering the history of discussion of moral authority in the 1950s and 1960s, one is struck by the evasive and confused thinking. During the Cold War, economic prosperity and the legitimacy enjoyed by the Free World allowed Western societies to avoid facing up to the consequences of the unravelling of moral authority, while the pressure of the Soviet threat injected moral seriousness into civic life. In 1953, Jesuit writer Douglas Copland explicitly connected the “great progress in production, in living standards, in security against many of the hazards of personal life” that the modern world has witnessed with “the growth of authority and control in free societies.” He imagined that prosperity would be taken as a sign of the legitimacy of the moral foundations of the postwar settlement. But this didn’t make sense, not with so many people defining a free society as one that loosens authority and control. Copland, like many others, assumed that the spread of prosperity and democracy in the West after World War II would ensure respect for the authorities leading the way.

As it turned out, a long time before the end of the Cold War it became evident that moral authority was fast losing its hold over a significant section of society. In November 1968, LIFE Magazine published an editorial with the title, “When Authority Is Challenged.” The editorial noted:

All major sources of moral authority seem to have rebellions on their hands—universities, parents, U.S. law, the Russian Communist party, and now even the Roman Catholic Church, to name a few. A general question arises: once such an authority is seriously challenged or even undermined, how can it reestablish itself?

The answer offered by most commentaries on the subject was a resounding, “Don’t bother!”

To be sure, during that era there were arguments on behalf of the “need for authority,” but they often had a tentative and abstract character. Participants in a 1956 meeting of the American Society for Political and Legal Philosophy were convinced that this topic was of “vital importance,” but could not find the words with which to justify the rehabilitation of the ideal of authority. “Whenever philosophy even glances at this question nowadays, it seems to have eyes only for freedom and ignores authority,” complained one participant. While most participants struggled to rescue a version of meaningful authority, Hannah Arendt observed that it had become “almost a lost cause.” Arendt’s paper spoke to the past and was self-consciously titled, “What Was Authority?” She noted, “authority has vanished from the modern world, and . . . if we raise the question what authority is, we can no longer fall back upon authentic and undisputable experiences common to all.” Arendt’s narrative of loss left little room for thinking that authority in its classical form could survive.

Neither, then, could the act of judgment in its classical form. With the loss of moral authority, the very idea of judgment has a different meaning. Its decay is illustrated by the word’s rhetorical mutation as it has expanded into the pejorative term judgmentalism. The Oxford English Dictionary defines judgmentalism as a form of “overly critical or moralistic behavior,” and suggests that the term was virtually unknown until the 1950s, when it came into use in conjunction with the term moralism. My investigation into the genealogy of the term indicates that the use of the word took off in the 1970s, when it entered everyday speech. Since then, judgmentalism has increasingly signified an act of narrow-minded prejudice, and all acts of judgment share in that stigma to some degree or other (save, perhaps, for the small class of judgments that fall under political correctness, which amounts to a plenary requirement of nonjudgmentalism).

Since the 1980s, this diseasing of judgment has become a powerful dynamic. Today, many hold that people, especially children, lack the resilience to deal with judgment, a belief widely advocated by parenting experts and early-childhood educators. This claim that we are vulnerable to judgment is paired with a positive ideal, “self-esteem,” which is presumed to cure the fragility of individuals who lack it. Nonjudgmentalism fosters self-esteem, we are told, and there is no better outcome than for a person to be comfortable with himself. People with any authority, therefore, must be very careful how they exert it. Schoolteachers avoid explicit criticism of their pupils and practice techniques that validate all members of the classroom. Corporate voices insist, “Diversity is our strength,” which is another way of saying, “We will not judge you.” The axiom “Criticism is violence” has gained significant influence on university campuses and among society’s cultural elites. Judgment is sometimes depicted as a form of psychic violence, especially if applied to children. The distinguished sociologist Richard Sennett implied as much in a 2003 book with the pertinent title Respect when he posited the “devastating implications of rendering judgement on someone’s future.”

By the 1980s, then, the meaning of the word “nonjudgmental” had broadened from its original use among the social scientists as a method for gaining trust. Older adults who occupy positions of authority in hierarchical institutions have absorbed the lessons of the antiauthoritarians who came to dominate educational theory and therapeutic disciplines. They have no confidence in their own grounds, and so they run their decisions through bureaucratic machinery while mouthing the nonjudgmental watchwords tolerance and inclusion and diversity. For them, nonjudgmentalism is a philosophy, a social outlook, a worldview, and a professional ethic. It is assumed to be the disposition that all good and responsible people in charge should cultivate.

For young adults, nonjudgmentalism is likewise a worldview, a paradoxical one of absolute relativism. An important study from 2011, Lost in Transition: The Dark Side of Emerging Adulthood, demonstrated the widespread prevalence of moral indifference among young people. The majority of subjects interviewed believed that since morality is a matter of individual choice, it was inappropriate to judge others. Those who insisted on doing so were perceived as legitimate targets of condemnation. The prevailing sentiment among these young interviewees was that judging was associated with “condemning, castigating, disparaging, or executing.” One young adult thought people who sought to impose their moral beliefs on others were “sick.”

The fact that nonjudgmentalism does not endorse any positive qualities did not seem to trouble the interviewees. Nonjudgmentalism leaves a vacuum in the consciences of human beings, a moral indifference that provides no satisfaction other than the fleeting virtue of showing one’s liberal forbearance, but that was enough for them. They would never think that their enlightened tolerance is a species of moral cowardice. This is the way of Nietzsche’s Last Man, the one who lives without deep convictions and strong gods, happily trading them for a relaxed, autonomous lifestyle.
The other paradox of nonjudgmentalism leads not to relativism but to progressive moralizing. In some situations, opponents of judgment have no problem condemning those who adhere to traditional values and moral norms. On the contrary, they are eager to do so. But it is telling that this condemnation is often framed in medical or sociological terms. Those who exercise moral judgment in traditional ways are deemed homophobic or transphobic; they are deluded by their white privilege or captive to a patriarchal mentality. In this way of talking, debates about right and wrong (a feature of every culture, even those with settled sources of moral authority) are translated into talk of pathologies and social dynamics. One is not offering a counter-judgment based on moral reasoning; rather, the work is being done by “critical analysis.” In this way, the larger ideal of nonjudgmentalism is preserved.

The 1619 Project of the New York Times is one example. It is an attempt to deprive the founding of the United States of moral authority by showing that its foundational norms lack legitimacy. In introducing its doctored history, the Times web page points to the role of slavery and declares, “Our democracy’s founding ideals were false when they were written.” It asserts an alternative interpretation based on its own authority: “Black Americans have fought to make them true.”

The Times thus implies that it possesses an authentic substitute for the false ideals of the old founding, one that identifies a new set of founders, “Black Americans.” Their voices are to provide moral orientation, not those of John Winthrop and William Penn, Thomas Jefferson and James Madison, or other canonical figures in American history.

But true to form, The 1619 Project is an exercise in nihilistic critique, not the consolidation of a substantive moral alternative. The revisionist history promoted by the Times cannot offer the normative foundation for its “true” ideals that are supposed to supersede the false and discredited ones long taught in American schools. That’s because the sharp dichotomy between black and white is ideological, not historical. Figures such as Frederick Douglass left posterity a rich trove of moral reflection and exhortation. But his voice, though often fiercely critical, draws upon the same biblical and Enlightenment traditions that so strongly influenced the Founders whom The 1619 Project seeks to discredit. Douglass corrects, supplements, and reinforces 1776, rather than replacing it.

The polemical and ideological character of The 1619 Project demonstrates that the simple act of negating the past and its legacy does not provide the foundation for authority; it merely draws attention to its absence. This is perhaps why the project has won accolades. The lead author of the project, Nikole Hannah-Jones, won the 2020 Pulitzer Prize for Commentary. She contributes to the already well-established cultural enterprise of eroding the bases of authority, and in so doing diminishing the power of judgment. The notion of American values is “problematized,” not clarified or strengthened.

The outburst of protests associated with Black Lives Matter highlights the importance of justice in society, but determining what constitutes just or unjust conduct requires judgment. This moment might have been an occasion for renewal. But in a culture estranged from living sources of authority and untutored in the exercise of judgment, justice becomes a slogan without moral content. Its rhetoric serves as a political weapon rather than an invitation to deliberation, debate, and moral discernment. To avoid this fate, we must find a way to restore judgment to our ways of thinking about ourselves, others, and society at large.

Frank Furedi is emeritus professor of sociology at the University of Kent.