Thursday, December 30, 2004

Last DUH moment of 2004

At least, I hope so.

Reading an article about an AI technology labelled 'Piquant' by IBM. Found a news thread on Slashdot that discussed it, and the concepts behind and related to it, in some depth.

About halfway through, realized that most AI is actually language parsing and inference.

Gee, really?


2 comments:

Nan said...

Hi Bill, I went to bed last night after reading your post (and the related sites) with my head spinning. In your posting the key word to me was "most" as in, "most AI is actually language parsing and inference." After reading about the Disciple COG, and words like "intelligent agent" and "ontology", and going to the dictionary more often than I have in years, I have come to the conclusion that the Army is teaching an intelligent agent to "learn" and "reason." This brings to mind movies like "2001" and "Wargames". The meaning for "ontology", used over and over again in their description of Disciple COG, is "the branch of metaphysics that deals with the nature of being." How far are they going? I assume that if the intelligent agent is the disciple, then they, the Army, are the master. How far do they go before these roles change?
I didn't read the article on 'Piquant'. I may search for that today if I get a chance.

P.S. The "chain mail" comment was priceless.

Cerulean Bill said...

I'm not alarmed by it, but perhaps that's because, as the saying goes, if you can keep your head when all about you are losing theirs, you don't truly understand the situation. I agree that the use of the phrase 'Disciple' is a bit off-putting, though.

For the longest time, I have felt that I *ought* to be able to understand AI quite well. Now, what you have to understand is that I have very little reason for this attitude. Though I work as a systems programmer (and my current job is just barely that), I am *not* an AI guy, or anything even remotely close. Nevertheless, I have this feeling that I ought to understand it, because it's something that interests me. I dislike the thought that there might be something that interests me where I can't understand it right down to its bones. Why I have this thought, I have no idea. It's my little slice of intellectual arrogance, I guess. Maybe not so little, either.

Anyway, when I read the article about Piquant, and the SlashDot stuff (I did a Google search with the phrase 'piquant slashdot' just now, to find that forum; it's http://slashdot.org/article.pl?sid=04/12/26/0240232&from=rss), it suddenly occurred to me that all of what I'd just read didn't have to do with understanding; it had to do with inference. Think of it as your dog not understanding your words, but understanding the tone of the words, your posture when you say them, all of that. Now, this is not, I admit, a great insight. I expect that people in the field had this insight some time in the mid-seventies. But it startled me, because, even though I knew that AI wasn't literally thought, somehow I think that I *did* think of it as thought. This kind of knocked it down a register, from the sublime to the 'damn, this stuff can actually be made to work'. Again, I already knew that it works -- I just didn't have that knowledge internalized. Yesterday, I internalized it.

I heard a story years ago about an AI system that would respond to you -- the Turing test idea. The story goes that this particular system was set up as a psychotherapist. One researcher asked a secretary to try out the system, just to get a layman's viewpoint. After a couple of exchanges, the secretary asked the researcher to leave the room, as 'this is getting a bit personal'. Well, okay, it's easy to laugh at, but the idea that a system with AI could *appear* to be intelligent can be a scary one. (Almost as scary as the thought that a senator could appear to be intelligent, when all he or she really has is a damn good speechwriter.) Your WarGames reference is a good one. I don't think that AI systems will be online soon that you can't tell are machines, but that's not important (to me). If the AI system allows a service to be delivered as well as, or better than, a human, with the same accuracy, then I think it's a good thing, and that's the kind of thing I want to know about and understand. I think my little insight might help me do that.
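That psychotherapist story is usually told about 1960s-era chat programs that worked by exactly the kind of inference-over-form described above: no understanding at all, just matching the shape of a sentence and reflecting part of it back. A minimal sketch of the idea (the rules and phrases here are invented for illustration, not taken from any real system):

```python
import re

# Invented illustration: a few reflective rules in the style of early
# chat programs. Each rule pairs a regular expression with a canned
# response template; a captured fragment of the user's sentence is
# substituted back in. Nothing is "understood" -- the program only
# infers from the surface form of the words.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.IGNORECASE),
     "You seem quite sure."),
]

def respond(sentence):
    """Return the first matching rule's response, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("It's about my boss"))         # Tell me more about your boss.
```

A few dozen rules like these are enough to sustain a conversation that *appears* attentive, which is why the secretary in the story found it getting personal so quickly.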