Talking to learn

I graduated from Berkeley in 1969 with a degree in history. But, as the song says, I don’t know much about history. I suspect that my failure to fully understand the dynamics of the Hapsburg Empire or the rise of the Robber Barons had much to do with the university system.

Imagine that you are one of a thousand students sitting in a vast lecture hall. The professor stands before a lectern at the front of the room. If you’re lucky, the professor is mildly entertaining, bringing some life to the past. For the most part, or at least for me, the lectures were, well, boring. Lots of words with little relevance, as we used to say. In my four years at one of the best universities in the world (and at the time I attended, Berkeley was ranked at the top) I never got to know a professor, save one, who volunteered to be my undergraduate advisor.

To be sure, I was an athlete first and a student second or third. My Big C jacket was more important to me than the stacks of books on my desk, which I rarely opened. That I graduated at all was a small marvel. But graduate I did.

Then, a few years later, circumstances led me to California State University at Hayward, as it was called back then. Having taken a philosophy of education course for a teaching certificate, I became hooked on philosophy. And I learned, perhaps for the first time in my post-secondary academic career. What was the difference?

My philosophy classes were much, much smaller than my Berkeley lectures. Often there were just a half-dozen students studying logic or Bertrand Russell or existentialism. Also, the professors were more engaged with their students, perhaps because there were so few of us. And, of course, I was a bit more mature, having exhausted my baseball-playing days, and better disposed to absorb the subject matter.

Yet, I think the most important element in my learning was my interactions with both my fellow students and my professors during and, especially, after classes. There were lots of discussions, and philosophy invites thinking and talking and writing.

Harry Brighouse is a philosophy professor who blogs on the Crooked Timber site. He recently posted a piece on the importance of classroom discussions, in which he limits his own speaking to a quarter of the period, encouraging and facilitating conversations about what he’s said, what the textbooks say, and what students are thinking. He writes:

So they need to discuss intellectual issues in class, both to do the learning of the discipline-specific content and skills that can only occur through discussion – through practical application if you like – and to get habituated to doing the same outside of class. They need, I think, to be told explicitly why classroom discussion is such an important part of the class, and that they should discuss the material with friends or classmates outside of class – not just when they have a test, but all the time, instead of discussing the much less interesting things that make up small talk. (And, just as a general matter, I have become much more explicit over time about everything I want them to do.)

Most of his students, he reports, experienced the “factory model” of learning, what I encountered at Berkeley. As he describes it:

[They] have been taught, since middle school, on a kind of factory model – you go to class, you learn things, you regurgitate them on tests (or, very occasionally, in papers) and then you either (if you are poor or working class) go to your job, or (if you are middle or upper middle class) devote yourself to being a semi-professional athlete, or musician, or actor, or debater, or whatever.

Though I attended one of the elite universities in the world, I learned a great deal more at Hayward, a few miles down the road. My grades were certainly better at the latter. And I’ve kept many of my philosophy books and discarded nearly every history text, of which there were hundreds, all told.

I should close with an anecdote from my philosophy days. There was a fellow student by the name of Ira. We began a discussion, really an argument, one day. It lasted through a couple of quarters! The topic, as best as I can recall, was: Which comes first: thought or language? I argued the former, he the latter. Our months-long conversation took place over coffee, in between classes, in our respective apartments, or at any time we accidentally bumped into each other. As “philosophers” we could pull this off, respecting each other’s person and point of view.

I wish that more of us philosophized as Ira and I did. I think the world would be a better place.

Gunning for philosophy

I generally become speechless, if not also apoplectic, when confronted with the gun mongers and their rhetoric. But this is America, so my tongue is often tied. Case in point: Texas.

The benighted legislators of the Lone Star State passed a law that allows students and faculty to carry “concealed” guns on college campuses. It becomes effective this August 1. Now, I find it impossible to see the logic of commingling guns with schools. It’s like trying to discover an intelligent office holder in Texas. Oxymorons, while rare in civilized society, are evidently abundant in entire states, especially in the South, where all of the species’ worst aspects inhere.

A graduate student in philosophy at the University of Texas at Austin, the state capital, tried to be reasonable in the face of the new law. She penned an objection as her initial reaction. Today, she writes on the subject for the New York Times.

In general, we do not feel apprehension about the presence of strong people in spaces reserved for intellectual debate (although we might in other contexts — a boxing ring, say, or a darkened alley), but we do feel apprehension about the presence of a gun. This is because the gun is not there to contribute to the debate. It exists primarily as a tool for killing and maiming. Its presence tacitly relays the threat of physical harm.

As for debate, the University of Houston promulgated some pedagogical guidelines:

“be careful in discussing sensitive topics; to drop certain topics from curriculum; [to] not ‘go there’ if you sense anger…”.

Conservatives loathe critical thinking and liberal minds in general (liberal in the very literal sense). They have therefore sought to exclude or limit topics to which they object. In the South, state boards of education work hard to purge curricula of facts, choosing instead to promote a kinder, gentler approach to slavery, lynchings, and the whole concept of the Confederacy. Further, the reactionary mind rejects “political correctness,” believing that men should dominate women, whites should rule, and people of color should live elsewhere.

One consequence of the campus carry law is to stifle intellectual debate. The bible may be safe. But beware the evolutionists and philosophers.


Super intelligence and morality

Philosophers and scientists have long posited an artificial intelligence created by humans. The philosophers, more so than the scientists, perhaps, wonder if such an intelligence would necessarily be moral. Would it be kind and merciful or, like HAL in 2001, turn on the humans it was thought to protect?

It seems to me that this question has its origins in the heuristic nature of the algorithms developed by engineers or computer scientists. That is, the robot, shall we say, is programmed to discover and learn on its own.

We may recall Walt Disney’s illustrations of the Sorcerer’s Apprentice in the movie Fantasia. The apprentice is played by Mickey Mouse. (Who else?) He has observed the sorcerer performing magic. Mickey surmises that the magic resides in the sorcerer’s hat. In the sorcerer’s absence Mickey dons the hat, moves his arms, and, as if by magic, the mops and brooms clean the rooms, a task assigned to Mickey, the apprentice. Of course, things get out of hand, and before long the rooms flood and the apprentice comes close to drowning—until the sorcerer returns and puts a halt to Mickey’s shenanigans. There is clearly more to the sorcerer’s magic than his hat.

The Swedish philosopher Nick Bostrom posits a paper-clip-maximizing robot in his book Superintelligence: Paths, Dangers, Strategies. From the Wikipedia link:

Regardless of the initial timescale, once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

Bostrom imagines an AI-robot programmed to build paper clips. Indeed, it is programmed to make as many paper clips as possible. It will need raw material. After exhausting readily available sources, it starts transforming human bodies into, well, paper clips. Oops.

But we need not stop with paper clips. Bostrom also suggested the possibility that we humans and everything around us are really the products of a computer simulation.

We know that computational power has grown exponentially (Moore’s Law) since the first computers. While scientists argue about physical limitations to unending growth (e.g., the materials used in integrated circuits constrain further increases), we can posit an infinite expansion of computing power, for the sake of argument.
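The arithmetic behind that exponential growth is easy to sketch. The snippet below is a toy illustration only, not anything from this essay: it assumes the commonly cited two-year doubling period and uses Intel’s 4004 chip of 1971 (roughly 2,300 transistors) as a baseline.

```python
# Toy illustration of Moore's Law: transistor counts doubling
# roughly every two years from a 1971 baseline (Intel 4004,
# ~2,300 transistors). Assumed figures, for illustration only.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistor count under steady exponential doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
```

Twenty years of doubling multiplies the count by about a thousand; fifty years, by tens of millions — which is why “posit an infinite expansion, for the sake of argument” is not as outlandish a premise as it sounds.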

How, then, can we be certain that we are not merely 1’s and 0’s in an incredibly elaborate computer simulation? I think of the strides made in computer-generated imagery on movie and television screens. The stuff looks very real, so much so that the line between actors and their surroundings disappears.

In attempting to explain evolution and natural selection, Richard Dawkins created a simple software program consisting of a few premises, rules, and randomness. After several cycles, the computer-generated structures become increasingly elaborate, expanding incrementally according to the basic algorithms.
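Dawkins’s best-known minimal demonstration of this idea is the “weasel” program from The Blind Watchmaker: random mutation plus a simple selection rule turns noise into an elaborate target in surprisingly few generations. The sketch below is a reconstruction in that spirit, not Dawkins’s original code; the population size, mutation rate, and seed are arbitrary choices.

```python
import random

# A sketch in the spirit of Dawkins's "weasel" demonstration of
# cumulative selection. Parameters are illustrative assumptions.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Count characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(pop_size=100, mutation_rate=0.05, seed=42):
    rng = random.Random(seed)
    # Start from pure randomness.
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Each child copies the parent, with occasional random mutations;
        # the best-scoring child becomes the next parent (selection).
        children = [
            "".join(c if rng.random() > mutation_rate else rng.choice(ALPHABET)
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(children, key=score)
        generations += 1
    return generations

print("reached target in", evolve(), "generations")
```

The point, as Dawkins stressed, is that cumulative selection — keeping each small improvement — reaches order vastly faster than single-step chance ever could.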

Our universe is over a dozen billion years old. It began, we’ll assume, with a few initial conditions, gradually, or perhaps in fits and starts, yielding our current situation: an earth with several billion people, thousands of cities, electricity, scores of scientists working on artificial intelligence, and wars, famine, pestilence and widespread cruelty. There is no good reason to believe that the universe itself is inherently moral. It just is.

Or maybe the universe began as a video game for someone’s entertainment. If so, given humankind’s abominable acts, that super programmer did not include a code of ethics to limit our behaviors. He, she, or it was likely a sadist.

Whether we are real or simulated, the evidence seems clear: morality was not part of the equation.

Inequality and democracy

Thomas Piketty, author of Capital in the 21st Century, believes that inequality undermines democracy in the literal sense—that is government of, by, and for the people. Those who struggle to make ends meet have scant resources or time to change the political and economic systems for their benefit. The rich, on the other hand, have surpluses galore, enough to buy politicians. So, instead of democracy we have a plutocracy.

For Crooked Timber, philosopher Chris Bertram writes:

Piketty fears that given rising levels of wealth inequality, democracy is doomed. People will not tolerate high levels of inequality forever, and repressing their resistance to an unequal social order will eventually require dispensing with democratic forms. I’m not so sure. A highly unequal society in wealth and income is certainly incompatible with a society of equal citizens, standing in relations of equal respect to one another and satisfying their amour propre, their craving for recognition through a sense of shared citizenship. (This benign outcome roughly corresponds to the Rawlsian ideal of a well-ordered society where the social bases of self-respect are in place.) But the outward form of democracy, its procedures, are surely compatible with great inequality, just so long as the wealthy can construct a large enough electoral coalition to win or can ensure that the median voter is the kind of “aspirational” person who identifies with the one per cent, even though they are not of it. In an unequal society such people are very common. They may be very poor compared to the super-rich, but they have just enough to take pride in their status as members of “hard working families” and to hope for the lucky break that will elevate them. At the same time they can look down with contempt on the welfare claimant and the “illegal” immigrant, nurturing their own amour propre by taking satisfaction in what they are not. Here we have, in another guise, the phenomenon of the “poor white” who looks down on poorer blacks and is thereby impelled to sustain a hierarchical social order. Procedural democracy limping on against a background of inequality, disdain and humiliation is not an attractive prospect, but it is already a big part of our present and may be the whole of our future unless egalitarian politics can be revived.

A few years ago I commented, albeit indirectly, on Rousseau’s notion of amour propre, in particular the “aspirational person,” mentioned by Bertram. In my sometimes provocative tone, I wrote:

Republicans, by the way, and especially their most recent incarnation, are generally selfish bastards who operate behind the “veil of opulence.” They may not all be One-percenters, but they certainly aspire to be. As I recall Archie Bunker saying, taxing the rich removes his incentive to one day join their ranks.

The “veil of opulence,” a term used by philosopher Benjamin Hale, prevents many of us from recognizing that we are all products of accidental circumstances and that fortune, or fate, can dictate good and bad outcomes. Hale:

Those who don the veil of opulence may imagine themselves to be fantastically wealthy movie stars or extremely successful business entrepreneurs. They vote and set policies according to this fantasy. “If I were such and such a wealthy person,” they ask, “how would I feel about giving X percentage of my income, or Y real dollars per year, to pay for services that I will never see nor use?”

In his essay, Bertram invokes John Rawls’s A Theory of Justice, as I have on several occasions (e.g., here). Rawls spoke of an “original position” and a “veil of ignorance.” He asked how we might decide to, among other things, distribute precious goods and services if we were ignorant of our original position, whether we were born rich or poor, black or white, tall or short, and so on. Given the possibility that our original position was comparatively worse than others, would we not, for example, ensure that economic security was guaranteed for all? Here’s how I put it back in 2012:

Americans, in particular, approach the fundamental challenge in an entirely different fashion, behind a “veil of opulence.” Instead of exploring an “original position” in which we are ignorant of our own circumstances, we imagine ourselves to be the next Bill Gates or a rich hedge-fund manager. We replace ignorance with aspiration. If the goal is to become that wealthy individual, how might we go about achieving it? Moreover, if we assume that we are privileged, how do we answer questions about efficient or equitable taxation? Would I prefer a smaller government, one less intrusive and, therefore, less capable of impeding my personal aggrandizement? What would I think of the poor, the infirm, the “lesser mortals” who demand much but contribute little?

Democracy, beyond its procedural aspects, demands that a sufficient percentage of the populace feel that we are all in this life together, that my needs and wants and even dreams are broadly shared. Just reciting the Pledge of Allegiance or singing the Star Spangled Banner in a group does not make us a community. Rousseau, in his Social Contract, put it thusly:

…the general will, if it be deserving of its name, must be general, not in its origins only, but in its objects [my emphasis], applicable to all as well as operated by all, and that it loses its natural validity as soon as it is concerned to achieve a merely individual and limited end, since, in that case, we, pronouncing judgment on something outside ourselves, cease to be possessed of that true principle of equity which is our guide.

We’ve got a lot of work to do.

Don’t confuse me with the facts

Let’s say that Bob believes that the Seattle Mariners won the World Series this year, 2015. When we point out to Bob that the Mariners did not win and that in all their years of existence they’ve never come close to being in the Fall Classic, Bob asks for proof. It would not be difficult to provide a mountain of evidence—including videos, newspaper clippings, testimonials from the Royals’ players—proving that the Mariners got an early vacation. Should Bob continue to hold to his belief, all evidence to the contrary, we are likely to think that Bob is a bit odd, someone unhinged from reality.

Take another example, again from baseball. In the Series, Alex Gordon hit a home run off the Mets’ closer, estimated to have traveled 438 feet to deep center field. John rejects the estimate. He concedes that the ball sailed over the fence, but that the distance is disputable. We tell John that the estimate is based on a number of factors, including the spot where the ball landed, the density of the air, and ballistics. John suggests that there is uncertainty surrounding the estimate, so that a range of possible or probable values should be used. Okay.

Now we turn to Republicans. They believe, for the most part, that there is a god, that prayers make a difference, and that when we die our souls go to heaven or hell. They do so, despite there being no evidence of god’s existence.

At the same time, many of these same Republicans deny both the science and the fact of global warming. To those who accept both, the Republicans are inclined to say anything from “it’s a hoax” to “there is too much uncertainty” to give a credible account of a changing climate. And even if such Republicans concede that the earth is warming, they are as likely to argue that natural variability is to blame, but surely not humans.

In today’s New York Times, research fellow Lee McIntyre writes:

We hear a lot of folks in Washington claiming to be “skeptics” about climate change. They start off by saying something like, “Well, I’m no scientist, but …” and then proceed to rattle off a series of evidential demands so strict that they would make Newton blush. What normally comes along for the ride, however, is a telltale sign of denialism: that these alleged skeptics usually have different standards of evidence for those theories that they want to believe (which have cherry picked a few pieces of heavily massaged data against climate change) versus those they are opposing.

And speaking of “different standards of evidence,” the aforementioned Republicans have no doubt whatsoever about believing that which they cannot see, feel, hear, or smell (i.e., god) but have all the doubts in the world about things that are manifestly evident, like the Mariners never having made it to the World Series or planetary warming.

Going way back

As a result, major racial inequalities have been deeply institutionalized over about 20 generations. One key feature of systemic racism is how it has been socially reproduced by individuals, groups and institutions for generations. Most whites think racial inequalities reflect differences they see as real — superior work ethic, greater intelligence, or other meritorious abilities of whites. Social science research is clear that white-black inequalities today are substantially the result of a majority of whites socially inheriting unjust enrichments (money, land, home equities, social capital, etc.) from numerous previous white generations — the majority of whom benefited from the racialized slavery system and/or the de jure (Jim Crow) and de facto overt racial oppression that followed slavery for nearly a century, indeed until the late 1960s.

— Joe Feagin, New York Times