[[Cy Twombly, Roman Notes (1970)]]
Woke up this morning with my brain all click-clack from too much computer time last night trying to fine-tune Jewish Philosophy Place (JPP) and setting up links to Facebook. All of this seems apropos of Jaron Lanier’s You Are Not a Gadget, which I read over winter break.
Lanier is a Silicon Valley and Wired magazine guy. His You Are Not a Gadget is a popular intervention into debates re: technology and consciousness. A critical one. You won’t be surprised to find that JPP was very interested in the modest “spiritual” impulse that informs his critique, not of technology per se, but of technological hyper-culture and hive consciousness.
One of the points that I remember most clearly (sorry, my copy of the book is at home) is the incredibly brittle and anti-innovative nature of large computer programming systems. These perpetuate themselves because they get too big to fail, so we get stuck with lousy technologies.
The idea of brittleness is what remains with me now. I have noticed for a long time now how much of my intellectual life as an academic has been reduced to cursor-pointing and button-pushing.
The bigger issue is how our cognitive psychologists and Artificial Intelligence researchers are modeling consciousness and intelligence on the computer. Namely, the brain is understood to be a complex computing device. You can see this in Pinker’s How the Mind Works.
While I am sure you can get lots of interesting applications out of this kind of research, I also wonder about its basic limits. Perhaps the end result will be gadgets and prosthetics, many of which will actually improve our daily lives. But I’m not sure how much insight it will yield into human cognition, intelligence, and consciousness. Perhaps a lot about functionalist forms of psychological brain processing. But about what David Chalmers calls the “phenomenal” quality of subjectively experienced states of consciousness, we may get nothing or next to nothing.
I’ll defer to the experts, but I suspect that computers are a bad model upon which to imagine a larger-bore account of brain function and phenomenal consciousness, particularly as these work themselves out outside the highly controlled confines of a research lab. Sure, we can pretend if we want to live in the matrix, to be a cyborg, to act like a computer. But do brain function and felt consciousness come down to cursor-pointing, button-pushing data crunching?
I can’t help but think that woolly-headed humanists are on to something. The old models of speech-thinking (Buber, Rosenzweig) or writing (Derrida) provide perhaps the germ of better models upon which to understand brain function and consciousness. Regardless of the difference between speech and writing, both forms of expression offer more fluid, motile models of brain function and consciousness than do the more staccato button-pushing activities based on quick, discrete motions and moments. Perhaps brain function and consciousness are more like speech flowing across a duration of time, or like writing out a line in cursive longhand across a physical page or canvas. Maybe consciousness is as wet as the brain is!
I don’t know the literature and would be interested to know if the psychologists and AI “guys” have already been saying something like this. Please tell me how much of this literature I’m getting wrong.
Setting aside the science, one might still prefer a type of thinking less fixed than what our current technologies demand: high-speed eye-hand coordination and brain firings based on symbolic systems of (buttons) (and bytes) (and 0’s and 1’s) (tap, tap, tap, tap). My brain-hand needs a break from the buttons. It wants the more cursive motions of spooling and unspooling in time across space. My brain-hand wants a break from my eye. And my brain-eye needs a break from my hand, to gaze out past the immediate register of my hand and that which is present to hand.
In this technological moment in the culture, every phenomenon or value is always vetted by the ultimate authority of a neurologist or the just-so stories of evolutionary psychologists.
The point here is not to get rid of technology, but rather to mix new, old, and non-technologies: computers, social networks, pens, paper, ink, crayons, books, cell phones without smart functions, religion, parchment, used cars, DVD players, CDs, and PCs. To me, this lends itself to a more textured technological environment, one more in sync with the human body. Is this the problem that propels Henri Atlan in The Sparks of Randomness to find models of thought that are random, not plot- or system-driven, models of thought based upon scientific and archaic combinations? I’ll probably say more about Atlan as I get into the text.
For right now, all I can say is that one of the things I like about writing JPP is the occasion it provides me to stop, sit down or lean up against a wall, and develop a line of thought, sketching out ideas longhand in a paper notebook when I’m away from the computer. So no, I really don’t want that iPad, even if it would make my life a little easier or quicker. I spend enough time as it is button-pushing.