From the course conference on
ConferU for
Gödel, Escher, Bach, Winter Term 1994.
Item 23 Feb03/94 01:27 1 line 41 responses
Paul Prime=22
Class of Wed 2/2
41 responses
- - - - -
Feb03/94 01:29
23:1) Paul :
(I'll post my notes tomorrow; I'm rather tired at the moment. But I
thought I'd go ahead and start the item in case people wanted to start
up discussion on any of the subjects covered. I myself will give my
"10-sentence history of AI" at some point. :-)
- - - - -
Feb03/94 15:51
23:2) Paul :
Notes from class of Wed 2/2:
* signups for presentations
* read to the end of Part 1 of GEB
* Questions:
"God" in GEB
Bach's last piece (p. 80)
The "record players" example... JL doesn't think it works too well
* Chapter 6 (p. 158)
Recall, H. views language entirely as a written medium
Bizarre; speech predates writing by far. 50,000 yrs? 500,000?
Note: speech made possible by a right angle bend in the throat
(goes with bipedalism)
Animals and language? discussion
brood parasite birds; subspecies use "foreign language" as mating call
Human language & culture are Lamarckian (pass on acquired characteristics)
Why changes so fast
Great Vowel Shift (between Chaucer and Shakespeare)
modern English spelling is Middle English
pronunciation of long vowels (take twice as long to pronounce) "raised"
why this sort of language change? SOME kind of adaptive purpose
H. equates a lot of things too facilely (math, language, art, music)
but that doesn't invalidate what he has to say; just, some bits are
"obiter dictum" (on the path of saying) and not key
MEANING = core of thought. A *tough* term to handle!
one of Bateson's dormitive principles... v. hard to pin down
property of the human mind
etymology of "meaning":
mean (n.) = average <- Lat. medianus (middle)
mean (adj.) = nasty; common; vulgar (class discrimination!)
note: kind (~king), gentle (~gentleman) originally = upper class
* mean (v.) = intend <- cf. Lat. mens, mentis (mind)
Many words are for language/communication
Also many for people-stuff
"right" and "left" ... fun to watch a dictionary falter!
example: chair with doors to hymnal cabinet
"right arm" (sitting orientation) is not same side as "right door"
(turn around and get in front to open it)
"sinister" and "dextrous" --- the minority thing again
definitions vs. mnemonics vs. determiners ("ways to find out")
So... meaning is mental, interpretative
A big question is how does one figure out conventions
H. has an odd use of "meaning", seems to equate to isomorphic; tho iso.
is a part of meaning, it's not the same thing (e.g. "phenotype is the
meaning of the genotype" seems odd; it's the *expression* of it)
Note that ATCG is a language but not a *human* language. There is no
*meaning* in the way we think of it.
Back to the CONDUIT METAPHOR for communication, from Lakoff & Johnson
(myths = satisfying explanations picked up as we go along; "age 3")
1. words & meanings are physical
2. words are hollow containers. inside a word is the meaning
3. communication = sending words physically thru a conduit
note "code" metaphor for language... implies that there is a code book.
Where does the code book come from?
We make up our own internal languages. Then, we cope.
Miller: To understand, assume it's true. Then figure out what it's true
*of*. Context is not given.
Language understanding is "AI-complete"... to do language fully would
require doing everything else in AI fully as well.
Humans are "automatic" processors most of the time.
Long discussion on AI, simulating the brain
"Can computers learn language?"
What do you mean by 'learn' & 'language'?
Chess computers work v. differently from real grandmasters
(brute force look-ahead, rate moves via heuristics vs.
"see things", "see what is good", very little conscious search)
- - - - -
Feb03/94 16:03
23:5) Mark :
What notes you take! I relived Wed. class all over again. It was nice.
- - - - -
Feb03/94 16:27
23:6) Paul :
OK, here goes, my 10-sentence history of AI! (OK, so I use a lot of
semi-colons... :-)
(1) In the beginning, programmers tried to have computers do the tasks
that people found "hard", such as playing chess, solving mazes, and
solving certain kinds of well-defined problems (e.g. the "get the
wolves and sheep across the river in a boat" problem).
(2) Computers got pretty good at these tasks, to the point that modern
computer chess programs can play at the grandmaster level, although
they don't work in the same way as human grandmasters (they use
explicit brute-force "search ahead" through multiple possibilities,
and rating by heuristics; human experts use pattern recognition to
"see patterns, see what to do," with very little conscious
search-ahead).
(3) Meanwhile, a lot of very "easy" things (from a human standpoint)
turned out to be very tough, such as understanding language and
recognizing objects; programming using symbols, algorithms, and
explicit serial rules just doesn't seem to work very well for these
sorts of tasks, except for very oversimplified problems.
(4) A fairly recent development (circa 1980) was connectionism
(parallel distributed processing, neural nets), in which the computer
works as a parallel "pattern-matcher", trained to recognize certain
patterns (such as 20 different human faces) and categorize new ones.
(5) Since the connectionist approach was "inspired by" the complex
pattern of neurons in the human brain, it's not surprising that it can
do things via parallel-processing pattern-matching that previously
could only be done by human brains.
(6) There are limits to connectionism, however.
(7) So far, connectionist networks can only work at a single level in a
single domain; some connectionist networks learn to recognize words,
and other networks form past tenses of verbs, but no one has any idea
yet how to put these levels together.
(8) A second problem is that, in current connectionist models,
programmers still have to specify the symbols by which input and output
information is coded; the computer itself can't do that (see the sketch
at the end of this response).
(9) My opinion is that future progress in AI will come primarily by
modeling the human brain even *more* closely, in all its complexity;
however, the necessary increase in complexity is *enormous*, and there
is a *lot* that we don't yet know about the brain. (10) It will
probably also turn out that modeling just the *brain* isn't enough--
learning probably has to be done *experientially*, which requires
perceptual/motor senses, abilities, and interactions... currently we
don't have a *clue* as to how to build such a complex entity!
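[A minimal sketch of the sort of connectionist pattern-matcher sentence
(4) describes, and of the limitation in sentence (8): a single-layer
perceptron, in Python. Note that the programmer hand-codes the input
features; the network only learns the weights. The face-feature
encoding is invented for illustration.]

    # Toy "connectionist" pattern-matcher: a single-layer perceptron.
    # The input encoding is supplied by the programmer (point 8 above);
    # the network learns only the weights mapping features to a category.

    def train(examples, epochs=20, rate=0.1):
        """examples: list of (feature_vector, target) with target 0 or 1."""
        n = len(examples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, t in examples:
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = t - out                    # zero when already correct
                w = [wi + rate * err * xi for wi, xi in zip(w, x)]
                b += rate * err
        return w, b

    # Hand-coded features, e.g. [has_beard, wears_glasses, long_hair]
    faces = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1), ([0, 0, 0], 0)]
    w, b = train(faces)
    print(w, b)   # weights now classify new hand-coded feature vectors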
- - - - -
Feb03/94 17:07
23:8) Greg :
I wonder how useful a computer with human reasoning would be. As you
already mentioned, humans and computers are better at different things.
Perhaps some other, unconceived form of intelligence might be more
useful. Besides, we already know how to create human intelligence. It
just takes a little teamwork.
- - - - -
Feb03/94 17:18
23:9) Perry :
Greg, I've wondered the same thing, but retain a scientific curiosity
about whether it can be done...even though I feel no emotional investment
in it. Why did we go to the moon? Because it was there, and we wanted to see
if we could! Why do we want to recreate the human brain with machinery?
Because someone asked the question about whether it could be done...and
maybe we'll learn some interesting stuff along the way.
For me, though, I'll spend my life working with real humans, not the
silicon approximations!
- - - - -
Feb03/94 17:29
23:10) Steven :
A note on chess, since it's one of my interests:
Human experts are capable of a great deal of look ahead, and use
it frequently, in situations where the possibilities are limited,
such as mating combinations.
Machines do use *rating by heuristics*, but it isn't clear that an
adaptive heuristic rating scheme is incapable of "seeing what to
do" at least as well as a pattern recognition routine. In fact,
pattern recognition can be programmed in serial machines, and
could be used to define an adaptive heuristic.
I don't know if anyone's working on this or not, but it seems like
they should be.
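[For concreteness, a bare sketch of the "brute-force look-ahead, rate
by heuristics" scheme these responses describe: plain minimax over a
made-up game (a counter players push toward 10), in Python. The
evaluate() function is the heuristic rating step; making it adaptive
would be roughly the scheme Steven suggests. Not a chess engine.]

    # Bare-bones minimax look-ahead with a heuristic rating at the
    # depth limit.  The "game" is an invented stand-in.

    def moves(n):
        return [n + 1, n + 2, n + 3] if n < 10 else []

    def evaluate(n):
        return n                     # the heuristic rating of a position

    def minimax(state, depth, maximizing):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state)   # rate the position, stop searching
        scores = [minimax(s, depth - 1, not maximizing) for s in options]
        return max(scores) if maximizing else min(scores)

    print(minimax(0, 4, True))       # value of looking four plies ahead

An adaptive version would adjust weights inside evaluate() based on
which ratings actually led to wins.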
- - - - -
Feb03/94 17:49
23:11) Steven :
As far as the "why AI" question, I don't think there's much of
a push right now to reinvent the human brain. But we stand to
learn tons about neurophysiology if we can do experiments on
brain-like circuits. Furthermore, there are a lot of industrial
applications for AI machines, one of the first and foremost being
robotics control. AI visual recognition techniques can allow a
machine entity to find a bolt, orient it correctly, and put it
in place without human intervention. You can squawk all you want
to about displacing human workers, but if the bolt is inside a
nuclear reactor, or any other hazardous environment, I'll let the
machine do it, thanks. If anyone wants to hear more on this
subject, or ask questions, I'm happy to go on until my keyboard
fails.
- - - - -
Feb04/94 00:55
23:12) Paul :
I should point out that my "ten-sentence history of AI" is rather
biased toward a cognitive psychology viewpoint. We cogpsych people
aren't actually very concerned with building intelligent machines,
except insofar as computer modeling teaches us more about how human
brains and minds work. (How we do this is a separate thread which
I'll skip for now.)
Most AI researchers are a whole lot more concerned with developing a
machine that does a particular job, any way that works! The problem
is that sometimes our intuitions as to what will work in order to do a
given job (like recognizing objects or understanding language) are
*wrong*... in those cases, often the only way to make progress is to
get inspiration from examining the only working model we have, the
human brain.
Computers do some things *better* than humans, though. Fast
calculations. Indexed search through memory. Following repetitive
routines without getting bored. Following a decision-making procedure
in a consistent way. (Often, a computerized "expert system" is better
than the expert it was built to simulate. Why? Because it can be
more *consistent* in applying its algorithms than the human expert
can.)
The *real* problem is how to make computers do tasks they currently
can't do (by making them more like human brains) *without* losing the
advantages of current computer design. For some tasks, like the
industrial visual scanning that Steve refers to, that may be enough.
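[A toy illustration of the consistency point above: an "expert system"
in this style is just an ordered list of if-then rules applied the same
way on every run. The rules and domain below are invented, not taken
from any real system.]

    # Toy rule-based "expert system": ordered condition -> conclusion
    # rules, applied identically every time (no fatigue, no off days).

    RULES = [
        (lambda f: f["temp"] > 39.0 and f["rash"], "suspect measles"),
        (lambda f: f["temp"] > 38.0,               "suspect fever"),
        (lambda f: True,                           "no finding"),
    ]

    def diagnose(findings):
        for condition, conclusion in RULES:
            if condition(findings):
                return conclusion        # first matching rule fires

    print(diagnose({"temp": 39.5, "rash": True}))   # -> suspect measles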
- - - - -
Feb04/94 00:55
23:13) Paul :
For other jobs, perhaps the best we can do is to build a machine that
runs identically to a human brain, but faster; that would be worth
*something*. But it would be nice to have even more improvement than
that!
Steve, re: chess... I was thinking of the middle game more than the
opening (where both human and computer chessplayers tend to work
semi-automatically) or the endgame (where the possibilities are
limited and so both use mostly look-ahead). Even so, you're right
that the computer and human aren't quite so qualitatively different as
I've been describing them. The more adaptive the heuristics
employed by the computer are made, and the more the "look-ahead"
is restricted via heuristics to promising patterns, the less
distinction there is between the human and the computer. But there are
decreasing marginal gains from *redesigning* meta-heuristics upon
heuristics... at some point, to get further improvement, it may be
easier to start with a fresh type of design. Exactly what, I'm not
sure... perhaps something like John Holland's adaptive systems notion.
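[Since Holland's adaptive systems come up here, a bare-bones sketch of
the genetic-algorithm idea behind them: a population of candidate
solutions evolved by selection, crossover, and mutation. The bit
strings and count-the-ones fitness function are a toy example, not
taken from the paper cited a few responses below.]

    import random

    # Minimal genetic algorithm: evolve bit strings toward all ones.

    def fitness(bits):
        return sum(bits)                 # arbitrary toy objective

    def evolve(pop_size=20, length=16, generations=50):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]     # selection: keep the fittest
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)     # crossover point
                child = a[:cut] + b[cut:]
                child[random.randrange(length)] ^= 1  # mutate one bit
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())   # usually converges to (nearly) all ones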
- - - - -
Feb04/94 14:50
23:14) Steven :
Can you give me a reference on Holland?
- - - - -
Feb04/94 16:02
23:15) Paul :
L.B. Booker, D.E. Goldberg and J.H. Holland
"Classifier Systems and Genetic Algorithms"
Artificial Intelligence 40 (1989) 235-282
(Holland's teaching a class this semester, in fact, but it's at 8 AM,
a time of day at which my brain does not function.)
- - - - -
Feb04/94 16:11
23:16) Steven :
Thanks.
- - - - -
Feb04/94 19:38
23:17) John Lawler:
There's also an article in Scientific American recently. Consult
Reader's Guide.
- - - - -
Feb05/94 15:53
23:18) Jeff :
"Artificial Intelligence is the study of how to make computers do things at
which , at the moment, people are better."
-Elaine Rich
Does this seem correct to you?
- - - - -
Feb06/94 14:29
23:19) Keenan:
Not quite, because it could also be the study of how to make computers out-
perform other computers. Or how to just advance the "things" as whole items.
- - - - -
Feb06/94 16:23
23:20) Paul :
Part of the reason Elaine Rich's quote works so well, though, is that
we keep redefining what we would consider an "intelligent" computer.
So, for example, once upon a time, doing fast calculations made a
person "smart", but we don't consider a computer to be smart just
because it does such quick calculations.
There's a funny phenomenon by which a somewhat convincing example of
intelligence (many people can be fooled by a simple Eliza program) is
no longer considered to be that interesting, once one understands how
the program works. Kind of like the mathematician who takes 20
minutes to work out a proof and then decides, "Yes, it *is* obvious!"
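[For anyone who hasn't looked under Eliza's hood, a minimal sketch of
the trick: scan the input for a keyword pattern and echo part of it
back inside a canned template. The patterns below are invented in the
spirit of Weizenbaum's program, not taken from it.]

    import re

    # Minimal Eliza-style responder: keyword pattern -> canned template.

    PATTERNS = [
        (r"I feel (.*)",    "Why do you feel {}?"),
        (r"\bmy (\w+)",     "Tell me more about your {}."),
        (r"\b(no|never)\b", "Why not?"),
    ]

    def respond(text):
        for pattern, template in PATTERNS:
            m = re.search(pattern, text, re.IGNORECASE)
            if m:
                return template.format(m.group(1).rstrip(".!?"))
        return "Please go on."           # default when nothing matches

    print(respond("I feel anxious about computers"))
    # -> Why do you feel anxious about computers?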
- - - - -
Feb06/94 23:36
23:21) John Lawler:
The first sentence of Weizenbaum's paper on Eliza makes that point:
"It is said that to explain is to explain away."
- - - - -
Feb09/94 16:29
23:22) Fran :
On a prophetic note, I figure that once computers are advanced enough to
think, really think, they'll start thinking about themselves. The
conclusion the computers will come up with is that there is no reason for
their existence, and having no morals like we humans do, they'll destroy
themselves. It's like Frankenstein...
- - - - -
Feb09/94 19:50
23:23) Greg :
Oh yeah, maybe they'll realize that there is no reason for YOUR existence
and destroy YOU.
- - - - -
Feb10/94 15:20
23:24) Fran :
Be nice Greg, or I'll sic my computer on you.
- - - - -
Feb10/94 16:20
23:25)! HAL:
I'm sorry, Dave. This mission is much too important to jeopardize by
allowing you back in the ship.
- - - - -
Feb10/94 17:45
23:26) Steven :
Fran: If you believe morals are hard-wired in humans, don't you suppose
that we can pre-program them in machines? For example, if we developed
morals circuitry through evolution (all the immoral humans got offed in
the inquisition, no? :-) ) couldn't the same sort of biological pressure
be brought to bear artificially on thinking machines? All we'd have to do
is unplug machines that commit immoral acts (like engaging in networking
with multiple partners).
- - - - -
Feb11/94 00:00
23:27) Fran :
Humans have a moral code, but it's not hard wired. We can be bad if we
want to... It would probably be pretty hard to program free will...
- - - - -
Feb11/94 12:54
23:28) Karen :
But Fran, how can you say that it is "not hard wired" if you think
that everyone has it (even if they choose at times to ignore it)?
- - - - -
Feb11/94 15:54
23:29) Fran :
I guess this all depends on what we define as hard wired... This moral
code that I am talking about is not just one aspect of our psyche that you
can point to and say, this is it... It's more like a compass that tells
you what is right at what time. You can't just pick one human emotion and
set it up as what is right. Even something like love of humanity can be bad.
If you follow love of humanity blindly, you will find yourself breaking
promises, lying, and falsifying evidence all for humanity's sake and you
will end up being an unjust person. There are no right keys and wrong
keys on a piano, but each one is right at one time and wrong at another.
The moral code is like sheet music.
- - - - -
Feb12/94 15:17
23:30) Jeff :
"Hard wiring" is normally used in cases where something won't be changing,
so I'd probably say that moral code is inherent to human nature, rather than
'hard wired'.
- - - - -
Feb12/94 19:37
23:31) Alex :
That's exactly what Wilson says, Fran and Jeff, and I think I greatly agree
with you... except I keep wondering... if a moral sense is inherent but weak
enough to be changed and overruled in every case (i.e. learning morals from
parents), what difference does it make if it's inherent or not?
- - - - -
Feb13/94 19:22
23:32) Perry :
Maybe a *predisposition* to learning/accepting a moral code?...
- - - - -
Feb13/94 21:52
23:33) John Lawler:
How about an "instinct" for a moral code? Or a set of dormitive
moral principles? Or -- this one's very modern -- a moral code
acquisition device?
- - - - -
Feb13/94 23:30
23:34) Brian :
I kinda agree with Professor Lawler. I see it, though, as people
just like to make general rules of thumb for everything because
it makes life artificially comprehensible. Parents are there
just to make the morals good ones.
- - - - -
Feb14/94 09:16
23:35) Perry :
only if the PARENTS are good ones!
- - - - -
Feb14/94 16:34
23:36) Steven :
I'm glad we got off on a new tangent; I kept expecting someone to type
"the fruit of the tree of the knowledge of good and evil," or somesuch.
- - - - -
Feb14/94 19:00
23:37) Paul :
"Hard-wired" often confuses two ideas, (1) not subject to change, and
(2) built-in and innate. In actual human beings, we get all
combinations. The pull-hand-out-of-fire reflex is both innate and
usually unchangeable. The fear-of-falling reflex is built-in but can
be changed. A native language is not innate (though the *ability* to
learn a language is innate), but once it's learned, much of it is not
subject to change (see 'phonemic perception' over in item #6). And
lots of things are both non-innate and subject to change.
Morality? I'd guess that it's like language in some ways, that we
have a predisposition to learn *some* system. But just as "language"
combines a lot of different levels and concepts and stages, so does
morality. Some psychologist (I've forgotten the name) talked about
"stages of morality", ranging from simple avoidance of punishment to
advocacy of a higher good. That's probably a good point-- just
because someone doesn't kill someone, for example, we don't know if
they really believe that "it is wrong to kill another human being" or
if they're just afraid of going to jail. The multiple levels make the
whole issue really confusing.
- - - - -
Feb14/94 22:41
23:38) Perry :
or if they're too stupid to think of it, or if they're not strong
enough, or if they just don't feel like it today...
- - - - -
Feb15/94 19:56
23:39) Keenan:
I think that we have to accept some things as true or have some set of
assumptions because without those it would be very difficult to re-analyze
every situation that we are put into. We look to parents to guide our
behavior so that we don't have to go out and try every possibility. It would
make life very interesting if every time we were presented with a new
situation we thought about EVERY possible explanation and choice to be made.
- - - - -
Feb16/94 18:25
23:40) Michelle :
It would slow a lot of things down, but it sure would help avoid
getting stuck in ruts.
- - - - -
Feb16/94 18:31
23:41) Steven :
The rut would be having to reexamine discarded possibilities again and
again and ...