PacLing 2007

Today I attended PacLing 2007, the 10th Conference of the Pacific Association for Computational Linguistics. I attended sessions on Named Entities, Lexical Semantics, Machine Translation, and Terminology. There was also an invited talk by Ann Copestake on applying robust semantics. She had a neat example of how underspecification works, using Sudoku, and how you can make inferences from something underspecified. Well, it's easy with Sudoku; I wonder how easy it is with language. :)

There were two main points of interest for me. The first is that Francis Bond, the Program Chair, asked all the presenters to license their papers under the Creative Commons Attribution 3.0 license, and they did. All of the papers from the conference program are available under this liberal license. (The webpage doesn't say so, but each paper's PDF has this as a footnote on the first page.) I think this is a fantastic, forward-thinking, and commendable move on the part of PacLing. It acknowledges that all human knowledge builds on what came before.

The second thing that was interesting was the session Bridging the Gap: Thai – Thai Sign Language Machine Translation, although in the end it was perhaps not a terribly exciting MT system. I was curious about how TSL was represented. Apparently they have a big dictionary mapping each Thai word to a photograph of someone making the equivalent TSL sign(s). Given that movement is a meaningful part of sign language, I wonder how well this works. I am not sure now if the presenter told me that they slice up a video of the movement into frames to represent it, or if I imagined that. :)

I spoke to the presenter (I think it was Srisavakon Dangsaart) afterwards about signwriting, which she had heard of. She seemed to indicate it isn't used for TSL. I asked whether it couldn't be useful for TSL 'speakers' to be able to write using it. Her MT system is definitely useful and cool, but it's basically one-way: it's not really possible for TSL 'speakers' to create sentences using photographs of people making signs. She said it would mean they would have to learn three languages: TSL, signwriting, and written Thai (to communicate with the rest of the population). I don't disagree, but I imagine it would be easier to learn to write Thai given literacy first in signwriting, which I presume would be an order of magnitude easier to acquire than any phonemic representation of a language (such as an alphabet-based script, which Thai is). That would be a fertile area for research, I imagine.

20 September, 2007

