AN OLDER BROTHER LIVING IN CLEVELAND
Two interpretations:
- Tom cannot locate any belief proposition associated with "I have an older brother living in Cleveland." Since the word "brother" is not "stored" in the brain as such (see Pinker, How the Mind Works, chapter 2), Tom cannot find the associated information.
- By persisting in this "false belief," Tom forfeits the marks of rationality: the ability to take stock of his actual situation and surroundings.
Self-translation
Still, meaning is not an utterly mysterious property. One way or another, structures in our brains somehow "store" our beliefs. When you learn that pudus are mammals, something has to change in your brain; something must become relatively fixed in a way it was not fixed before you learned this, and whatever it is must have enough aboutness (intentionality), one way or another, to account for your newfound ability to identify pudus as closer kin to buffalos than to barracudas. So it is indeed tempting to imagine that beliefs are "stored in the brain" rather the way data files are stored on a hard disk, in some systematic code, which might differ from person to person, as different as fingerprints. Jacques's beliefs would be written in his brain in "Jacquish," and Sherlock's in "Sherlockish." But there are problems with this attractive idea.
Suppose we have entered the golden age of neurocryptography, and it becomes possible for a "cognitive micro-neurosurgeon" to do a bit of tinkering and insert a belief into a person's brain, writing the relevant proposition in the person's neurons in the brain's default language. (If we can learn to read brain-writing, and if our tools are delicate enough, presumably we can write brain-writing too.) Suppose we are going to insert into Tom's brain the following false belief: I have an older brother living in Cleveland. And suppose the cognitive micro-neurosurgeon can do the requisite rewiring, as much and as delicately as you please. This rewiring will either impair Tom's basic rationality or it will not. Consider the two outcomes. Tom is sitting in a bar and a friend asks, "Do you have any brothers or sisters?" Tom says, "Yes, I have an older brother living in Cleveland." "What's his name?" Now what happens? Tom may reply, "Name? Whose name? Oh my gosh, what was I saying? I don't have an older brother! For a moment there, it seemed to me that I had an older brother living in Cleveland!" Alternatively, he may say, "I don't know his name," and when pressed he will deny all knowledge of this brother while asserting things like "I am an only child and have an older brother living in Cleveland." In neither case have we succeeded in generating a new belief in Tom's brain. In the first case, the new belief is wiped out as a lone, unsupported intruder by Tom's intact rationality the moment it appears: an evanescent disposition to say "I have an older brother living in Cleveland" is not really a belief; it is more in the nature of a tic, a momentary spasm of Tourette's syndrome. And if poor Tom persists in this pathology, as in the second case, his frank irrationality on the topic (stubbornly holding his belief to be right without regard to the facts) disqualifies him as a believer. Anyone who does not understand that you cannot both be an only child and have an older brother living in Cleveland really does not understand the sentence the surgeon implanted in Tom's brain, and what you do not understand you may "parrot," but you cannot believe. (If one has genuinely internalized a piece of knowledge, it forms associations with one's other knowledge.)
This science-fiction example highlights the tacit presumption of mental competence that underlies all belief attributions: unless you have an indefinitely extensible repertoire of ways to use your candidate belief in different contexts, the "belief to be implanted" is not a belief in any recognizable sense. If the surgeon has done the work delicately, preserving the competence of the brain, that brain will undo the handiwork as soon as the issue arises; otherwise, pathologically, the brain will surround it with pearly layers of confabulation, such as "His name is Sebastian, and he's a circus acrobat who lives in a balloon." Such confabulation is not unknown in the world: people suffering from Korsakoff's syndrome (the amnesia that often afflicts alcoholics) can be astonishingly convincing in spinning tales of their "remembered" pasts that never happened. But this very elaboration is clear evidence that the person does not just have an isolated "proposition" stored in her brain: even a delusional belief requires the support of a host of non-delusional beliefs, together with the ability to acknowledge the implications of them all (the core of the whole passage). If she does not believe her older brother is also male, breathing, west of Boston, north of Panama, and so on, then it would be worse than misleading to say that the surgeon's "feat" was inserting a belief.
What this intuition pump shows is that nobody can have just one isolated belief. (You cannot believe a dog has four legs without also believing that legs are limbs, that four is greater than three, and so on.) It is useful in other ways as well; interested readers may reflect on these in advance. Later sections will first put other thinking tools on display.
Original text
Still, meaning is not an utterly mysterious property. One way or another, structures in our brains somehow “store” our beliefs. When you learn that pudus are mammals, something has to change in your brain; something has to become relatively fixed in a way it wasn’t fixed before you learned this, and whatever it is must have enough aboutness, one way or another, to account for your newfound ability to identify pudus as closer kin to buffalos than to barracudas. So it is indeed
tempting to imagine that beliefs are “stored in the brain” rather the way data files are stored on your hard disk, in some systematic code—which might be different in each individual, as different as fingerprints. Jacques’s beliefs would be written in his brain in Jacquish, and Sherlock’s in Sherlockish. But there are problems with this attractive idea.
Suppose we have entered the golden age of neurocryptography, and it becomes possible for a “cognitive micro-neurosurgeon” to do a bit of tinkering and insert a belief into a person’s brain, writing the relevant proposition in the person’s neurons, using the local brain language, of course. (If we can learn to read brain-writing, presumably we can write brain-writing, if our tools are delicate enough.) Let us suppose we are going to insert into Tom’s brain the following false belief: I have an older brother living in Cleveland. Let us suppose the cognitive micro-neurosurgeon can do the requisite rewiring, as much and as delicate as you please. This rewiring will either impair Tom’s basic rationality or not. Consider the two outcomes. Tom is sitting in a bar and a friend asks, “Do you have any brothers or sisters?” Tom says, “Yes, I have an older brother living in Cleveland.” “What’s his name?” Now what is going to happen? Tom may reply, “Name? Whose name?
Oh my gosh, what was I saying? I don’t have an older brother! For a moment, there, it seemed to me that I had an older brother living in Cleveland!” Alternatively, he may say, “I don’t know his name,” and when pressed he will deny all knowledge of this brother and assert things like “I am an only child and have an older brother living in Cleveland.” In neither case has our cognitive micro-neurosurgeon succeeded in wiring in a new belief. In the first case, Tom’s intact rationality wipes out the (lone, unsupported) intruder as soon as it makes an appearance. An evanescent disposition to say, “I have an older brother living in Cleveland” isn’t really a belief—it’s more in the nature of a tic, like a manifestation of Tourette’s syndrome. And if poor Tom persists with this pathology, as in the second alternative, his frank irrationality on the topic of older brothers disqualifies him as a believer. Anybody who doesn’t understand that you can’t be an only child and have an older brother living in Cleveland really doesn’t understand the sentence he asserted, and what you really don’t understand you may “parrot” but you can’t believe.
This science-fiction example highlights the tacit presumption of mental competence that underlies all belief attributions; unless you have an indefinitely extensible repertoire of ways to use your candidate belief (if that is what it is) in different contexts, it is not a belief in any remotely recognizable sense. If the surgeon has done the work delicately, preserving the competence of the brain, that brain will undo this handiwork as soon as the issue arises—or else, pathologically, the brain will surround the handiwork with layers of pearly confabulation (“His name is Sebastian, and he’s a circus acrobat who lives in a balloon”). Such confabulation is not unknown; people suffering from Korsakoff’s syndrome (the amnesia that often afflicts alcoholics) can be astonishingly convincing in spinning tales of their “remembered” pasts that have not a shred of truth in them. But this very elaboration is clear evidence that the person doesn’t just have an isolated “proposition” stored in her brain; even a delusional belief requires the support of a host of non-delusional beliefs and the ability to acknowledge the implications of all this. If she doesn’t believe her older brother also is male, breathing, west of Boston, north of Panama, and so on and so forth, then it would be worse than misleading to say that the surgeon’s feat was inserting a belief.
What this intuition pump shows is that nobody can have just one belief. (You can’t believe a dog has four legs without believing that legs are limbs and four is greater than three, etc.)1 It shows other things as well, but I won’t pause to enumerate them. Nor will I try to say now how one might use a variation on this very specific thinking tool for other purposes—though you are invited to turn the knobs yourself, to see what you come up with. I want to get a varied assortment of such thinking tools on display before we reflect more on their features.
1 This conclusion is often called the holism of the mental (or the intentional); holism has been staunchly denied by Jerry Fodor, who claims to have no trouble imagining a creature with exactly one belief (Fodor and Lepore, 1992).