The more I look at examples of AI (and it's not as if I'm doing an in-depth study, I'm just hokin' around), the more I think well, heck, that's not AI, that's just.... Like the scene in that Pixar film, Toy Story, where one character says, "That's not flying, that's... falling with style!"
I'm trying to get the ability to look at a problem -- and that's a trick all by itself: what kind of problem? That we're out of cereal? That the bathroom is dirty again? That traffic lights stay on too long sometimes? -- and see how AI could be used to resolve it. I know, this is stupid, because I will never ever have the time, money, motivation, energy, ability, or anything else to actually apply AI, so why bother? It's like reading up on Buddhist laundry practices. But bear with me; my natural lack of commitment will keep me from wasting much more time on it.
But until then -- it's really weird. AI really does deconstruct into common concepts. That's just categorization... that's just classification... that's just keywords... that's just a neural-net-based expert system.
There's no there there.
5 comments:
Interesting...and I almost understood what you said! Except for the Buddhist laundry practices.
That's pretty mystical stuff, I agree. There's some wax on, wax off, too, but I'm not exactly sure how.
Basically, I'm trying to get into the mindset of people who've lived with the capabilities of AI all their lives, so it's no big deal to them... so that maybe I could apply it in ways that wouldn't occur to most of the rest of us. And make money doing so.
Not likely to happen. But it's fun.
There isn't much money in AI, Bill. No one can explain how it works, so no one wants to buy it!
I led a project, a long time ago, that was perfect for a primitive AI system. It was a "frequently asked questions" thing for the mortgage group of a bank. I tailored the RFP (it was a high-ceremony place) to primitive intelligence. A bit more than a classification system, but not a keyword/categorization problem.
The biggest problem I kept running into was something I took to calling "the paint brush issue". To paint a picture, an artist goes to the store and buys the paint, brushes, canvas, and other bits and pieces. It used to be that the artist had to make his (they were all men, back then) own materials - which cut down the number of paintings he could do. The problem I had was that every single vendor (6, if memory serves) had to rebuild their neural networks to cope with the FAQ problem. (The analogy doesn't work quite as well as it did then! Mostly because I'm leaving out some important details for brevity. Or the fact that I'm still drinking my morning coffee and am not coherent yet.) The vendors couldn't explain how they would adapt their products - each type of FAQ (we identified 10, I think) needed a new development effort, basically. Architectural rigidity was built into the products.
The biggest problem with AI isn't its capabilities - it's the understanding needed to implement it! I've often thought the designers of a product need to think in Zen-like ways to see the possibilities. Which might explain why some washing machines are the most "intelligent" devices in a house! (They use primitive AI systems, not rule-based "expert" systems, to measure the dirt and adapt the wash cycle on a semi-continuous basis.)
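The usual name for that washing-machine trick is fuzzy logic. Here's a minimal sketch of the idea in Python - the sensor name, thresholds, and cycle times are made up for illustration, no real appliance was consulted:

# Minimal sketch of a fuzzy-logic wash controller, the kind of "primitive AI"
# usually credited to smart washing machines. All names and numbers here are
# illustrative, not taken from any real appliance.

def membership(turbidity: float) -> dict:
    """Map a 0..1 turbidity (dirt) reading to fuzzy degrees of 'clean' and 'dirty'."""
    dirty = max(0.0, min(1.0, turbidity))   # more turbid -> more 'dirty'
    clean = 1.0 - dirty                     # complementary degree
    return {"clean": clean, "dirty": dirty}

def wash_minutes(turbidity: float) -> float:
    """Defuzzify: blend a short and a long cycle by how dirty the water looks."""
    m = membership(turbidity)
    short_cycle, long_cycle = 20.0, 60.0    # minutes, illustrative values
    return m["clean"] * short_cycle + m["dirty"] * long_cycle

# Re-evaluating every few minutes gives the "semi-continuous" adaptation:
for reading in (0.8, 0.5, 0.2):             # water clearing over time
    print(f"turbidity {reading:.1f} -> wash {wash_minutes(reading):.0f} more minutes")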
Neat stuff, artificial intelligence. I like neural networks, myself. NN's are superb at finding patterns, if their architecture is correct. I even wrote one (they really are simple!) in Excel Visual Basic, some time ago. I pitched it against lottery numbers - they're truly random, and I didn't get instantly rich. Bummer. :-(
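And since they really are simple, here's roughly what such a net boils down to, sketched in Python rather than Excel Visual Basic - the two-input XOR data, layer sizes, and learning rate are just a toy illustration, not the lottery experiment:

# A tiny feed-forward neural network with one hidden layer, trained by
# backpropagation on the XOR pattern. Pure Python, no libraries; the sizes,
# learning rate, and data are only a toy illustration.
import math, random

random.seed(1)
N_IN, N_HID = 2, 3
LR = 0.5

# weights[j][i]: weight from input i to hidden unit j (last slot is the bias)
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hid = [sigmoid(sum(w[i] * v for i, v in enumerate(x)) + w[-1]) for w in w_hid]
    out = sigmoid(sum(w_out[j] * h for j, h in enumerate(hid)) + w_out[-1])
    return hid, out

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

for epoch in range(10000):
    for x, target in data:
        hid, out = forward(x)
        d_out = (out - target) * out * (1 - out)      # output-layer delta
        for j, h in enumerate(hid):
            d_hid = d_out * w_out[j] * h * (1 - h)    # hidden delta (pre-update weight)
            w_out[j] -= LR * d_out * h
            for i, v in enumerate(x):
                w_hid[j][i] -= LR * d_hid * v
            w_hid[j][-1] -= LR * d_hid                # hidden bias
        w_out[-1] -= LR * d_out                       # output bias

for x, target in data:
    # typically ends up near 0/1; an unlucky seed may need more epochs
    print(x, target, round(forward(x)[1], 2))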
:-) Carolyn Ann
After reading Carolyn Ann's comment, I remembered how I was involved in one of the very first AI-type projects (as the customer, not the developer) over 10 years ago, and it was like an FAQ thing. I remember inputting stuff into the index base. Seems so primitive in structure now, but back then it even made it into one of the major computer magazines as something new!
I tried writing an expert system once for things I knew about one piece of software, but I could never figure out how to get it to tell me things that I didn't already know. Now I see that it's because I was giving it information, not ways to derive it. Not to say I could do that now!
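That distinction - facts versus ways to derive new facts - is the whole trick, I think. Something like this little forward-chaining sketch in Python is what was missing; the "rules" about an imaginary piece of software are invented purely for illustration:

# Minimal forward-chaining sketch: the facts are what you already know;
# the rules are the "ways to derive" new facts. The software being
# diagnosed, and its rules, are made up for illustration.

facts = {"error_code_7", "network_down"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"error_code_7"}, "config_file_corrupt"),
    ({"config_file_corrupt"}, "reinstall_needed"),
    ({"network_down", "reinstall_needed"}, "call_support"),
]

changed = True
while changed:                      # keep applying rules until nothing new fires
    changed = False
    for needed, conclusion in rules:
        if needed <= facts and conclusion not in facts:
            facts.add(conclusion)   # a derived fact the user never typed in
            changed = True

print(sorted(facts))
# ['call_support', 'config_file_corrupt', 'error_code_7', 'network_down', 'reinstall_needed']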
This is an interesting presentation on AI and computer vision. I had to keep stopping it because the guy talks so damn fast, and knows so damn much. Pretty good, though.