Gregory Chaitin
Wednesday, October 28, 2009
It is very Turing-esque, and very, very geekish I would say. More of Robinson's 'Facing Reality' essay is floating through my mind. I can't imagine why...
To be fair, if Chaitin is saying this in the context of trying to programme computers to mimic human thinking then it could be quite harmless. But my experience of the AI crowd is that it doesn't stop there at all. From McCarthy on they strike me as a somewhat confused lot.
Not as confused as the people who read them. Are we entering the realms of digital philosophy here? In which case, good luck Bryan. If you can explain it without mentioning backwards-running one-dimensional anti-matter clocks and whatnot then I'll definitely buy your book.
Oh alright then. I suppose there's no way of telling the difference between understanding something and thinking you understand it. And you can always define 'understand' so that you do understand it (and thinking you do is sufficient) or, alternatively, so that you can never understand anything.
Suppose you created a program to do x, and it did x, but unbeknownst to you, you had made two errors in the programming, which flukily cancelled each other out so that it did x anyway.
Then you could repeat it, and you could program x, but could you be said to understand how to program x?
Well, could you?
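The two-cancelling-errors scenario can be made concrete with a small sketch (a hypothetical Python example, not anything from the thread — the function name and the particular bugs are invented for illustration):

```python
# Hypothetical illustration of two programming errors that cancel out.
# Intended behaviour: return the sum 1 + 2 + ... + n.
def sum_to_n(n):
    total = n                 # error 1: the total should start at 0
    for i in range(1, n):     # error 2: range stops at n - 1, missing n
        total += i
    return total              # the two errors cancel: the result is right

print(sum_to_n(10))  # 55, exactly as if the code were correct
```

The program "does x" for every input, yet the programmer's mental model of why it works is wrong twice over.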
Not even close, Brit. Much more interesting. A biography of Cheryl Cole. Obvious when you think about it.
The reduction of knowledge to control is a gross impoverishment and shows why scientists are sometimes led by their technical enthusiasms into support for fascism.
In the above I meant:
Then you could repeat it, and you could program x, but could you be said to understand x?
Well, could you?
'To me, you understand something only if you can program it.... Otherwise you don't really understand it, you only think you understand it.'
- This is a computer programmer's oblique way of excusing himself for being a total failure with women
A biography of Cheryl Cole.
As it happens I can well believe that.
Appleyard: I want to do a book about the implications of AI and digital philosophy for the free will/determinism compatibility debate.
Publishers: Like it, but can you do it as a biography of Cheryl Cole?
Appleyard: I'm not aware of the lady but I expect I can hang it on that peg, yes.
Your insight into the publishing mind is downright scary, Brit.
The first thing I was told when I signed up as a junior economist was "If you can't explain it to your grandma, you don't understand it." Grandmas have been around for ages longer than computers so I don't think anything's really changed here.
Are grandmas like computers? Well, for starters, neither can be reasoned with. Both consider anything that they/it doesn't immediately understand to be utter nonsense. I think the thing here is that the test of understanding lies in the ability to translate the information, displaying an ability to view the subject from all angles.
Amusing quote if you’ve ever spent more than ten minutes working as a computer programmer.
Programmers, as a rule, like predictability. Without it, they/we are doomed. The reality of any piece of code, however, is that it rapidly exceeds what the brain can predict. Once you've nested a loop within a loop, made a few function calls, and attempted to trace how a variable changes, the brain can't cope. They've tried to make things more manageable by moving to object-oriented code, but the problems don't go away. I've read figures of 0.1 bugs per 1,000 lines of code, which is impressive. Except most products have far more than 1,000 or even 10,000 lines of code.
Going back to your previous quotation, you might even say that computer code is like a very large and very predictable metaphor – a cliché almost – which produces the same meaning, time and time again. However, eventually, it will produce some other meaning that nobody anticipated. In a poem, it can be a thing of beauty. In Windows Vista, it's a day spent formatting a drive and reinstalling drivers.
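The point about code outrunning the brain's ability to trace it shows up even at toy scale. A made-up Python fragment (invented for illustration, not from the comment):

```python
# A deliberately tiny example: two nested loops and one mutated
# variable are already tedious to trace by eye.
def tally(limit):
    x = 0
    for i in range(limit):
        for j in range(i):
            x += i - j
    return x

print(tally(4))  # 10 — and even this takes a moment to verify by hand
```

Scale that up by a few orders of magnitude and "predicting" the program gives way to testing it.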
I don't understand. I mean, I don't understand understand. Not now I've thought about it a bit.
I wonder if you can only understand something you made up - even then, you may not be aware of all the implications of the process you have started (i.e. your brain can't model it sufficiently well), as in a complex computer programme.
This, so far as I'm concerned, is the central problem of existence; it's not our game. We didn't make the rules.
We can make guesses (predictive models) & if they work well enough we call them laws - which we can understand. Possibly. But that's as far as it goes.
To a man with a hammer the whole world looks like a nail.
Before we had computers, we had geniuses. If you said that you had a proof of the four-square theorem or Fermat's last theorem, you would not publish it in the local newspaper for everyone who could not understand it to read, but present it before known mathematical experts. They'd look at your work and say "yea" or "nay". Once they said you were a genius like them, then everyone could read about how they said so in your local paper. You could become a known genius.
So on one hand, it's a political thing. You want to be known as a genius nowadays? Join the computer geek club.
We can easily understand problems within the old system of knowledge-knowing. What if the known geniuses could not understand that you indeed had made an astounding proof? Or what if they thought you were of the wrong stock to even read your work? Politics, damn politics. The person who is closest to understanding life, the universe and everything is probably nowhere near a computer lab.
And if you think these computers don't have pre-programmed biases built in, when we speak of cosmic models, there must be starting points. These starting points can have biases. Probably do.
Now, Gregory Chaitin would want to prop his specialty up as THE way of knowing anything. Personally, I don't think you can say you know beans until you have written a column for The Sunday Times.
De Selby lives on.
There are degrees of understanding: if you read or hear something, you understand at a basic level; if you then use the knowledge in some practical way, you understand at a higher level. When you try to teach something, you must understand it at an even higher level.
I guess what he is saying is that putting something into a computer programme takes understanding one level further than teaching another human.
Not sure I agree with this. It certainly takes extra skill, but is the skill one of epistemological understanding of the thing one is trying to explain to the computer, or is it a skill of understanding the language of computers?
Nice one James!