What is it about Hopper? Every once in a while an artist comes along who articulates an experience, not necessarily consciously or willingly, but with such prescience and intensity that the association becomes indelible. He never much liked the idea that his paintings could be pinned down, or that loneliness was his metier, his central theme. “The loneliness thing is overdone,” he once told his friend Brian O’Doherty, in one of the very few long interviews to which he submitted.
Why, then, do we persist in ascribing loneliness to his work? The obvious answer is that his paintings tend to be populated by people alone, or in uneasy, uncommunicative groupings of twos and threes, fastened into poses that seem indicative of distress. But there’s something else too; something about the way he contrives his city streets. What Hopper’s urban scenes replicate is one of the central experiences of being lonely: the way a feeling of separation, of being walled off or penned in, combines with a sense of near unbearable exposure.
This tension is present in even the most benign of his New York paintings, the ones that testify to a more pleasurable, more equanimous kind of solitude. Morning in a City, say, in which a naked woman stands at a window, holding just a towel, relaxed and at ease with herself, her body composed of lovely flecks of lavender and rose and pale green. The mood is peaceful, and yet the faintest tremor of unease is discernible at the far left of the painting, where the open casement gives way to the buildings beyond, lit by the flannel-pink of a morning sky.
In the tenement opposite there are three more windows, their green blinds half-drawn, their interiors rough squares of total black. If windows are to be thought analogous to eyes, as both etymology, wind-eye, and function suggest, then there exists around this blockage, this plug of paint, an uncertainty about being seen – looked over, maybe; but maybe also overlooked, as in ignored, unseen, unregarded, undesired.
In the sinister Night Windows, these worries bloom into acute disquiet. The painting centres on the upper portion of a building, with three apertures, three slits, giving into a lighted chamber. At the first window a curtain billows outward, and in the second a woman in a pinkish slip bends over a green carpet, her haunches taut. In the third, a lamp is glowing through a layer of fabric, though what it actually looks like is a wall of flames.
There’s something odd, too, about the vantage point. It’s clearly from above – we see the floor, not the ceiling – but the windows are on at least the second storey, making it seem as if whoever’s doing the looking is hanging suspended in the air. The more likely answer is that they’re stealing a glimpse from the window of the ‘El’, the elevated train, which Hopper liked to ride at night, armed with his pads, his fabricated chalk, gazing avidly through the glass for instances of brightness, moments that fix, unfinished, in the mind’s eye. Either way, the viewer – me, I mean, or you – has been co-opted into an estranging act. Privacy has been breached, but it doesn’t make the woman any less alone, exposed in her burning chamber.
KAKISTOCRACY (n.)
Government by the least qualified or most unprincipled citizens; a form of government in which the worst people are in power.
[Origin: Greek “kakistos” or “worst”, the superlative form of “kakos” or “bad”. “Kakos” is closely related to “caco” or “defecate”.]

By DAVID Z. HAMBRICK and ALEXANDER P. BURGOYNE
NY Times: Sept 16, 2016
Are you intelligent — or rational? The question may sound redundant, but in recent years researchers have demonstrated just how distinct those two cognitive attributes actually are.
It all started in the early 1970s, when the psychologists Daniel Kahneman and Amos Tversky conducted an influential series of experiments showing that all of us, even highly intelligent people, are prone to irrationality. Across a wide range of scenarios, the experiments revealed, people tend to make decisions based on intuition rather than reason.
In one study, Professors Kahneman and Tversky had people read the following personality sketch for a woman named Linda: “Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.” Then they asked the subjects which was more probable: (A) Linda is a bank teller or (B) Linda is a bank teller and is active in the feminist movement. Eighty-five percent of the subjects chose B, even though logically speaking, A is more probable. (All feminist bank tellers are bank tellers, though some bank tellers may not be feminists.)
In the Linda problem, we fall prey to the conjunction fallacy — the belief that the co-occurrence of two events is more likely than the occurrence of one of the events. In other cases, we ignore information about the prevalence of events when judging their likelihood. We fail to consider alternative explanations. We evaluate evidence in a manner consistent with our prior beliefs. And so on. Humans, it seems, are fundamentally irrational.
But starting in the late 1990s, researchers began to add a significant wrinkle to that view. As the psychologist Keith Stanovich and others observed, even the Kahneman and Tversky data show that some people are highly rational. In other words, there are individual differences in rationality, even if we all face cognitive challenges in being rational. So who are these more rational people? Presumably, the more intelligent people, right?
Wrong. In a series of studies, Professor Stanovich and colleagues had large samples of subjects (usually several hundred) complete judgment tests like the Linda problem, as well as an I.Q. test. The major finding was that irrationality — or what Professor Stanovich called “dysrationalia” — correlates relatively weakly with I.Q. A person with a high I.Q. is about as likely to suffer from dysrationalia as a person with a low I.Q. In a 2008 study, Professor Stanovich and colleagues gave subjects the Linda problem and found that those with a high I.Q. were, if anything, more prone to the conjunction fallacy.
Based on this evidence, Professor Stanovich and colleagues have introduced the concept of the rationality quotient, or R.Q. If an I.Q. test measures something like raw intellectual horsepower (abstract reasoning and verbal ability), a test of R.Q. would measure the propensity for reflective thought — stepping back from your own thinking and correcting its faulty tendencies.
There is also now evidence that rationality, unlike intelligence, can be improved through training. In a pair of studies published last year in Policy Insights From the Behavioral and Brain Sciences, the psychologist Carey Morewedge and colleagues had subjects (more than 200 in each study) complete a test to assess their susceptibility to various decision-making biases. Then, some of the subjects watched a video about decision-making bias, while others played an interactive computer game designed to decrease bias via simulations of real-world decision making.
In the interactive games, following each simulation, a review gave the subjects instruction on specific decision-making biases and individualized feedback on their performance. Immediately after watching the video or receiving the computer training, and then again after two months, the subjects took a different version of the decision-making test.
Professor Morewedge and colleagues found that the computer training led to statistically large and enduring decreases in decision-making bias. In other words, the subjects were considerably less biased after training, even after two months. The decreases were larger for the subjects who received the computer training than for those who received the video training (though decreases were also sizable for the latter group). While there is scant evidence that any sort of “brain training” has any real-world impact on intelligence, it may well be possible to train people to be more rational in their decision making.
It is, of course, unrealistic to think that we will ever live in a world where everyone is completely rational. But by developing tests to identify the most rational among us, and by offering training programs to decrease irrationality in the rest of us, scientific researchers can nudge society in that direction.
Terrence Malick doing what he does best. This time no pesky actors to distract him from the breathtaking vistas.
by Mick Stute
I was hired by a psychologist to fix a program, written by one of his ex-grad students, that seemed to have “strange output”. The program reads a data file, asks about 50 questions, does some calculations, and comes up with a score based on this PhD’s research. It lives on a research 3B2 at the university. He demonstrates the program, and sure enough there seem to be strange flashing words on the screen when it moves from question to question, and they don’t seem nice. I agree to do it; it should be pretty straightforward, so he’ll pay me by the hour to determine how big the fix is and then we’ll agree on a fee.
Day 1
I sit down at the 3B2 and log in to the ex-grad student’s account that has been given to me. This is where the code resides. I examine the C code. It is written to be hard to read. The code is spread over 15 files, with about 3 functions per file, and each file’s code is squished onto a single line. All variable names are just three seemingly random letters. I talk to the guy and agree to go with hourly on this (great decision). I untangle all the code and format it nicely so I can see it.
It was done on purpose. The program uses the curses library to move to a point on the screen, print a question and the answers, and wait for a response. But first it goes to the first line of the question, prints some white supremacy message, waits ½ a second, and then overwrites it with the question. This ought to be simple. There are only about five places it could output anything, and all of them had this subliminal flash of a message. Each one was hard-coded. No problem. Delete the offending mvprintw() calls and all is well. Or should be. I compile, thinking I’m done. But when I run it, there it is again: the subliminal messages. This time with different text, still the same subject, just different messages.
I check my code and, believe it or not, it’s back to the initial state I found it in: 15 files, mangled, 3-letter variables, the whole thing right back where I started. I want to shoot myself for not making a copy of my code. I unmangle it again, this time putting it in three files, named differently. I make a copy of the whole directory, and I mark the files read-only. I compile it. All looks good. I run the program. There’s now a copy of the original 15 files in the directory along with mine, and the subliminal messages are back.
Okay, so somewhere on the disk is the source code necessary to keep doing this, and he’s set the program up to pull in that code when you compile it. I do a full disk search of the include areas (/usr/include), and since this is a research version we have source for just about everything but the kernel itself. That’s a lot of header files, and this takes some time on the 3B2, so that’s day 1.
Day 2
The disk search turned up nothing. The strings are apparently either encrypted or buried in a library somewhere. Because I don’t have checksums of all the original executable objects, I decide to search all the libraries for the text. This takes even longer than before, so day two is over.
Day 3
No results. The strings are encrypted. That means I’m going to have to follow all the header files from each #include, and each one they #include, to find where this is. And that will take some time. We do alert the campus computing department that we believe someone has gained root-level access to Dr. Phelps’s research computer, which is just a shared lab computer in the science building. They’re understandably not convinced.
I start unwinding the #include files. I do that, and nowhere do I find the code. So now I know it’s compiled into a library. No problem at all. Why not just recompile all those libraries? We have the source, after all.
Days 4-6
The hardest part is convincing the campus nerds they have an issue. But we finally do, and Mark, the Unix admin who was hired because he married the Dean’s daughter, gets busy learning how to do this. In the end, he agrees to let me handle it, because he just doesn’t really know how to get all that stuff compiled. End of Day 6: all standard libraries are recompiled. Woo hoo!
I whip out my modified, cleaned-up source and start the compile. All looks good. I run it. O M G. It did it again: 15 messed-up source files, and the subliminal messages are back. This is suddenly like magic. I investigate very, very carefully, though I am stumped. This code doesn’t exist in source code. I think I might be beaten. Dr. Phelps isn’t happy with the hours involved and thinks maybe we ought to just rewrite the program from scratch. “Sure,” I say, staring at the terminal like a lost puppy, too deep in my thoughts to snap out of it. “I think you’re right. That will be quicker.” “Good,” he says, “we can start tomorrow.”
Day 7
To hell with that. This guy isn’t beating me. We are compiling it from his stinking code or not at all! “You don’t have to pay me anymore, Dr. Phelps, I just want lab time.” This is nerd war.
Days 8-14
I get smart: I’m thinking he somehow modified the curses library. I compile the curses code to assembly, and though I don’t know 3B2 assembly (yet!), I start learning. I read manuals for 6 days, piecing together that assembly code. Waste of time; nothing seems unusual.
Day 15
I suddenly realize it’s in the compiler. It was the compiler all along. Every time you compile the original program, the compiler puts the subliminal-message code back into the source. I’d heard of this kind of thing before.
Ah ha! I’ve got him! We have the source code for the compiler as well. I search through it looking for a reference. Lo and behold, I find it. Indeed, there is source code in the compiler/linker that does this:
1) It examines any call to fopen() and searches the opened file for Dr. Phelps’s questions; if it finds them, then
2) it rewrites the 15 files to the current directory when compiling that specific program.
3) It then compiles Dr. Phelps’s program using the 15 files and outputs to the -o name in the link phase.
The compiler had been modified to put that code into Dr. Phelps’s program, and the program was written by the same man who modified the compiler.
Several days later, an AT&T tech shows up with a disk and loads the proper compiler and linker source, and we recompile the compiler from that source. That solves it. All the bad source in the compiler is gone and we’ve got a new clean copy of the compiler.
Except it didn’t. The compiler was poisoned with other source code that we didn’t have. That code, which now existed only in the executable compiler, put the changes back into the compiler source before compiling it. But this time it didn’t modify the /usr/src copy: it copied the source to a hidden directory, modified the compiler source there, compiled itself from that copy, and deleted the hidden directory. It took an AT&T tech to find this. The ex-grad student had poisoned the compiler to re-poison itself whenever it was recompiled. We had to install a new binary of the compiler from another 3B2 running the same revision before the problem went away.
We also found that if /sbin/login is compiled, the compiler puts in a backdoor allowing anyone who uses a specific password to log in as the root user. This computer is accessible by modem and Tymnet. Finally, this gets the computing center’s attention.
Genius! But put to a horrible cause.