Rain and no work...
Nov. 16th, 2011 06:57 pm

The rain yesterday was just enough to dampen the driveway and patio. Today's version was quite a bit heavier.
I embarked on my once-every-couple-of-years-or-so evaluation of machine translation, mostly because the buzz at the recently held ATA conference in Boston was heavily oriented in that direction. It would appear, you see, that clients are being targeted by marketing that promotes the idea that translation costs can be trimmed by feeding documents to MT apps and then finding "editors" (who I would describe more as "rewrite specialists") to turn the output into a finished product. On the flip side, translators are being targeted by marketing that promotes the idea that they can improve their own productivity by using MT apps as source text preprocessors.
As always, there are those who maintain steadfastly that machines will never produce high-quality translations the way an experienced human can. During one such conversation in Boston, when I pointed out that it took only about 20 years for chess-playing programs to progress from where rank beginners could beat them to where a computer beat the reigning World Chess Champion in match play, the rebuttal was, basically, that "translation is a much harder problem," etc., machines will never etc.
Which seems to me to miss my point. "Much harder problems" are what computer scientists are really good at addressing, generally speaking. Not necessarily so much so that you can plan your breakthroughs, but often enough not to be terribly surprised at outcomes, such as when, this past September, a group of volunteer video-gamers determined the structure of a key protein-cutting enzyme from a virus that causes a monkey form of AIDS. In three weeks. (After scientists had spent more than a decade trying to figure this out.)
Which is not machine translation, I'll grant you, and was accomplished because humans can do spatial reasoning, which computers currently cannot, but the bottom line is that it got the job done. This is making me take another look at crowdsourcing of translations, but that's another topic, and a sensitive one.
The last time I did this MT evaluation, the output I evaluated (from PROMT) was little better than horse puckey. As English, it was awful; the only decent thing to do was to delete the file and retranslate from scratch.
And today's output? I'm still going through it, and while the text is far from ready for prime time, the quality has markedly improved, in my opinion.
Cheers...