Integrating Joint n-gram Features into a Discriminative Training Framework

Conference: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), June 1-6, 2010, Los Angeles, California
Abstract: Phonetic string transduction problems, such as letter-to-phoneme conversion and name transliteration, have recently received much attention in the NLP community. In the past few years, two methods have come to dominate as solutions to supervised string transduction: generative joint n-gram models, and discriminative sequence models. Both approaches benefit from their ability to consider large, flexible spans of source context when making transduction decisions. However, they encode this context in different ways, providing their respective models with different information. To combine the strengths of these two systems, we include joint n-gram features inside a state-of-the-art discriminative sequence model. We evaluate our approach on several letter-to-phoneme and transliteration data sets. Our results indicate an improvement in overall performance with respect to both the joint n-gram approach and traditional feature sets for discriminative models.
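The joint n-gram features described in the abstract can be illustrated with a minimal sketch: given an aligned (grapheme, phoneme) sequence, each feature records the last n aligned operation pairs ending at the current position. The function name, the alignment format, and the example alignment below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def joint_ngram_features(aligned_pairs, max_n=3):
    """Count joint n-gram features over an aligned (source, target) sequence.

    aligned_pairs: list of (grapheme, phoneme) operation pairs, e.g. the
    output of a one-to-one aligner. Each feature is the tuple of the last
    n operations ending at a position, mirroring how a joint n-gram model
    conditions on preceding (source, target) pairs.
    """
    features = Counter()
    for i in range(len(aligned_pairs)):
        for n in range(1, max_n + 1):
            if i - n + 1 >= 0:
                ngram = tuple(aligned_pairs[i - n + 1 : i + 1])
                features[("joint", n, ngram)] += 1
    return features

# Hypothetical aligned spelling/pronunciation for "phone" -> /f oʊ n/,
# with "_" marking an epsilon (deleted) phoneme.
alignment = [("ph", "f"), ("o", "oʊ"), ("n", "n"), ("e", "_")]
feats = joint_ngram_features(alignment, max_n=2)
```

In a discriminative sequence model, counts like these would be scored by learned weights alongside traditional source-context features, rather than multiplied as conditional probabilities as in the generative joint n-gram model.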
Affiliation: NRC Institute for Information Technology; National Research Council Canada
Peer reviewed: No
NPARC number: 16885295
Record identifier: 97f210a1-11d9-4ef9-a203-1dafb00f3a07
Record created: 2011-02-22
Record modified: 2016-05-09