You know, I had a silly little thought about AI the other day … People talk about software that can write music, hold a conversation (the Turing Test), or perform some other creative task as the "real" test for AI. But really, a person wrote the program that wrote the music, so you really want to say AI has succeeded when it writes a program that can write programs that make music. So, at this level, programming becomes the "real" test. Of course, we know programming is creative, but it's not really seen that way. But even if we write a program that writes music-writing programs, we're still acting at one level above that program. So, we need to write a program that can write programs that will generate arbitrary levels of other programs and eventually spit out a music-writing program. Of course, then we're the ones writing the programs that do this arbitrary nesting, so we have to write programs that write programs that do arbitrary nesting to show that the AI is real. After this, it gets a bit recursive …
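Just to make the "arbitrary levels of program-writers" idea concrete, here's a toy Python sketch. Everything in it is invented for illustration (the `make_generator` name, the stub "music" program); it just shows how each level is a program whose only job is to emit the level below it:

```python
def make_generator(depth):
    """Return the source code of a depth-level program-writer.

    depth == 0 is the 'music-writing' program itself (a stub);
    depth == n emits a program that, when run, prints the
    depth n-1 source, and so on down to the music writer.
    """
    if depth == 0:
        # the bottom of the tower: the "creative" program
        return 'def make_music():\n    return "la la la"'
    # every higher level just wraps the level below it
    inner = make_generator(depth - 1)
    return f'def generate():\n    return {inner!r}'

# Peeling one layer off a depth-1 generator recovers the
# depth-0 music writer:
namespace = {}
exec(make_generator(1), namespace)
print(namespace["generate"]())  # source of the music-writing stub
```

The point of the sketch is exactly the post's complaint: however tall you make the tower, *we* wrote `make_generator`, so we're still one level above it.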
But wouldn't a music-writing program that spits out really original, quality music be considered to have passed an AI test? Well, sure, but a chess-playing computer is also considered to have passed an AI test, and there's still a huge gap between that AI and our intelligence. A difference in kind.
Am I saying it's impossible for a computer to emulate creativity completely, because of the infinite recursion? Not really; the difference between Real Intelligence and AI approaches zero. I'm just trying to skip all the steps where people say "this is an AI test" or "that's an AI test" and get right to the fact that it has to be a program-writing test, at some arbitrary level of nesting. Get rid of the obviously false stages in between.
That was my post-shower thinking.
Of course, even though the distance between Real Intelligence and AI approaches zero, this seems like a rather roundabout way of getting there. Unless you subscribe to the "theory" of Intelligent Design, you don't think of this as the way we came into existence. A better approach seems to be to develop a base set of self-organizing rules and let them develop the way we did. Of course, there's time to think about, and also the millions of failures that come with such an approach. Then we get back into recursion. When do we recognize the AI? When it manages to develop a self-organizing system as well. And then we wait to see if its self-organizing system becomes an AI (which means having that second AI develop a self-organizing system and having it become an AI). Of course, this loop has a way out … if an AI develops a self-organizing system identical to any of the previous ones, we have a cycle, and we know it can continue.
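That escape hatch is just cycle detection over the sequence of generated systems. A minimal Python sketch, where `run_until_cycle`, `evolve`, and the states themselves are hypothetical stand-ins for "self-organizing systems" (any hashable fingerprint of a system would do):

```python
def run_until_cycle(seed, evolve, max_steps=1000):
    """Apply evolve repeatedly, remembering every state seen.

    Returns (step where the repeated state first appeared,
    step where the repeat was found), or None if no cycle
    shows up within max_steps -- the 'we have a cycle, and
    know it can continue' check from the post.
    """
    seen = {seed: 0}          # state -> step at which it first appeared
    state = seed
    for step in range(1, max_steps + 1):
        state = evolve(state)
        if state in seen:
            return seen[state], step   # cycle detected
        seen[state] = step
    return None                        # no repeat within the budget

# Toy usage: a deterministic 'evolution' over five states cycles
# back to its seed after five steps.
print(run_until_cycle(0, lambda x: (x + 3) % 5))
```

The design choice matches the post's logic: you can never *prove* the chain of AIs-making-AIs goes on forever by watching it, but one exact repeat of an earlier system is enough to know the loop closes.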