Aug. 6th, 2003

alexpgp: (Aura)
The sim today ended up being fairly interesting. The simulated station took a simulated debris hit in the middle of some mundane checkout work, and everyone ended up practicing procedures from the so-called "red book" of emergency procedures.

During a lull in the excitement, I had a sort of Xanadu-like train of thought (albeit without the benefit of the opium that Coleridge used) regarding the nature of intelligence, God, and machine translation. I call it Xanadu-like because, for just a moment, I thought I had a very clear picture of the whole thing. Then the capcom keyed his microphone to talk to the crew, and my reverie was broken.

Or maybe I dreamt the whole thing, I don't know.

The whole thing started when my thoughts turned to the late, lamented Sasha, who has a small shrine here in the Pearland house. Natalie has a couple of photos pinned to the wall above a pencil sketch she made of her dog and a little incense burner on a table. (At least I think it could be used for incense... it hasn't been used since my arrival.) There is also a small glass of water on the table, which I wondered about until Natalie told me it was fresh water for her ever-loving dog. Naturally, I left it alone. I think we're both a little batty in this regard.

Anyway, whenever I think of dogs, I almost invariably start to think about the nature of intelligence and how beings having markedly different levels of intelligence can interact. Obviously, humans get along with canines, despite the fact that we do myriad things that dogs cannot imagine in their wildest dreams. (On the other hand, dogs are naturally capable of some kinds of processing, e.g., of olfactory data, that we can hardly duplicate even with our vast toolmaking ability.)

So what would it be like - might it be like - to deal with a being whose level of intelligence is as far in advance of ours as ours is in advance of, say, your typical dog? (What a classic science fiction theme!)

There is no answer, of course, as we are not aware of any such intelligence (unless you want to call it "God"), and as a result, we have no limits on imagining how such an intelligence might manifest itself, or behave, or what goals beings of such intelligence might pursue. (It would, for example, be dangerous to assume that a superior intelligence would necessarily be benevolent toward humans or other "lower" life forms.)

The manifestation of such an intelligence would, I think, be a combination of That Which Is Seen and That Which Is Not Seen. Just as a dog can interact with us when we play fetch, a superior intelligence would certainly be able to interact with us on a physical basis and, one presumes, on a mental level as well. However, just as there are some things that a dog can't understand (the need to refuel a car from time to time, to pick something out of the air), there would be things that a superior intelligence would do that would completely escape us, and we wouldn't even be aware of it.

From our human perspective, would the hidden nature of a superior intelligence be learnable? measurable? Or would it involve something that we simply wouldn't have the biological or psychological capability to absorb? It's a tough call and one worthy of a fiction writer. I suspect the answer might lie in the middle: we might be able to hack into a higher level of intelligence, but there would likely also be areas as "alien" to our minds as magnetic resonance imaging is to a squirrel.

Here, of course, I am thinking of another favorite science fiction theme: the rise of intelligent machines... I seem to recall one writer referring to this event as the Singularity.

What might be the consequences of building a self-aware machine? What would be the nature of its intelligence?

Whatever the answers to those questions might be, it is almost certain that present speculation will fall far wide of the mark.

That's because it's hard to think that far outside the box, given the fairly large chunk of real estate represented by what's not in the box. As a result, there is a tendency to return to homo sapiens as the prototype of all that is intelligent in the universe. In fact, we can see this quite often in religious discussions, where gods are endowed by their creators with certain characteristics that make them, basically, humans with all of the options installed (and with many of the limitations, too, though they are not typically identified as such by the priesthood; read Twain's Letters From the Earth).

* * *
As far as machine translation is concerned, it occurred to me that the successful breaking of the Enigma cipher during WW II is yet another example (in addition to chess) of solving a cognitive problem thought to be unsolvable because some key assumption was wrong. In theory, you see, the Enigma was thought to be uncrackable because of its complexity: if you tried all combinations, it'd take you a long time to find the key. This had been proven mathematically, and the Germans therefore placed full faith and confidence in the system.

It turned out, however, that between using "cribs," or likely combinations of words, and not playing fair (raiding German weather ships, which were equipped with Enigmas and settings schedules), the British and Americans were able to crack the system. Cribs massively pared down the mathematical "space" in which a message could lurk, which made the employment of early "bombe" data processors feasible, and of course, if you steal the keys themselves, the rest is a piece of cake.
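(Just to make the crib idea concrete for myself, here's a toy sketch in Python. The cipher is a simple Caesar shift standing in for Enigma, which of course worked nothing like this, and the bombes were electromechanical beasts, not scripts; the point is only how a likely word throws out most candidate keys before any heavy lifting gets done.)

    import string

    ALPHABET = string.ascii_uppercase

    def caesar(text, shift):
        # Shift each letter of `text` by `shift` positions; a toy cipher, not Enigma.
        return "".join(
            ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
            for c in text
        )

    ciphertext = caesar("WEATHER REPORT FOR THE NORTH ATLANTIC", 17)
    crib = "WEATHER"  # the sort of word a message was likely to contain

    all_keys = range(26)  # in theory, every key is possible...

    # ...but only keys whose decryption contains the crib survive.
    surviving = [k for k in all_keys if crib in caesar(ciphertext, -k)]

    print("keys before crib:", len(all_keys))  # 26
    print("keys after crib: ", surviving)      # [17]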

In the early years of computer chess, by the way, it was commonly thought that a really successful program would have to emulate the way humans think in order to achieve good results. (Human grandmasters invariably consider no more than two or three different moves in any given position, and rarely calculate exhaustive variations to any great depth.) It turned out, however, that massive brute-force searches and refinements in evaluation functions were able to make up for the lack of imagination, and in fact, research into finding ways to select likely "candidate moves" didn't advance very far.
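(Here's a rough sketch of what I mean by brute force, again in Python, using a trivial take-away game rather than chess, since a real engine runs to thousands of lines. Every legal move gets searched to a fixed depth, and a crude numerical evaluation does the judging at the leaves; there's no attempt to guess two or three candidate moves up front.)

    def legal_moves(state):
        # Every move available in a position: here, take 1, 2, or 3 tokens from a pile.
        return [n for n in (1, 2, 3) if n <= state]

    def evaluate(state):
        # Crude static evaluation of a leaf: the side to move with no tokens left has lost.
        return -1.0 if state == 0 else 0.0

    def negamax(state, depth):
        # Brute force: examine EVERY legal move to the given depth; no "candidate moves."
        if depth == 0 or not legal_moves(state):
            return evaluate(state)
        return max(-negamax(state - m, depth - 1) for m in legal_moves(state))

    def best_move(state, depth=6):
        # Score all legal moves and keep the best one.
        return max(legal_moves(state), key=lambda m: -negamax(state - m, depth - 1))

    print(best_move(6))  # -> 2 (leaves a multiple of 4, the losing positions in this game)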

So it may very well be that we're merely awaiting a spectacular paradigm shift in the field of machine translation that will usher in an era of acceptable machine translations. The best information and smart money indicate that statistical methods are only slightly better than traditional methods, so the elusive silver bullet has apparently yet to be found.
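(For what it's worth, my understanding of the statistical approach is the "noisy channel" idea: among candidate translations e of a foreign sentence f, pick the one that maximizes P(e) times P(f|e), a fluency model times a faithfulness model. The numbers in this little sketch are made up purely for illustration.)

    # Candidate English renderings of some foreign sentence f, each with a made-up
    # language-model probability P(e) and translation-model probability P(f|e).
    candidates = {
        "the station took a hit": (0.020, 0.30),
        "the station a hit took": (0.001, 0.35),  # faithful but not fluent English
        "the train took a hit":   (0.015, 0.02),  # fluent but unfaithful to f
    }

    def score(p_e, p_f_given_e):
        # Noisy-channel scoring: fluency times faithfulness.
        return p_e * p_f_given_e

    best = max(candidates, key=lambda e: score(*candidates[e]))
    print(best)  # -> "the station took a hit"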

Enough rambling and freewheeling. I get the feeling that Coleridge certainly did a better job in his 54 lines than I've done here, but I've had distractions. Time to go to sleep. Even though I have a late day tomorrow (the sim starts at 2:30 pm), I still have things to do.

Cheers...
