S. Padó, D. Cer, M. Galley, D. Jurafsky, and C. D. Manning: Measuring Machine Translation Quality as Semantic Equivalence: A Metric Based on Entailment Features. Machine Translation 23(2–3), 181–193, 2009. The original publication is available at www.springerlink.com, doi:10.1007/s10590-009-9060-y

Current evaluation metrics for machine translation have increasing difficulty in distinguishing good from merely fair translations. We believe the main problem to be their inability to properly capture _meaning_: A good translation candidate _means_ the same thing as the reference translation, regardless of formulation. We propose a metric that assesses the quality of MT output through its semantic equivalence to the reference translation, based on a rich set of match and mismatch features motivated by textual entailment.

We first evaluate this metric against a combination metric built from four state-of-the-art scores. Our metric predicts human judgments better than the combination metric, and combining the entailment features with the traditional features yields further improvements. We then demonstrate that the entailment metric can also serve as the learning criterion in minimum error rate training (MERT), improving parameter estimation in MT system training. A manual evaluation of the resulting translations indicates that the new model achieves a significant improvement in translation quality.
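The core idea of combining feature scores into a single quality prediction can be sketched as follows. This is an illustrative toy, not the authors' implementation: the feature names, the toy data, and the gradient-descent fit are all invented for exposition; the paper's actual feature set and regression model differ.

```python
# Toy sketch: fit linear weights that combine hypothetical entailment
# match/mismatch features with a traditional overlap score, so that the
# combined score predicts human adequacy judgments. Invented data.

def fit_weights(features, judgments, lr=0.01, epochs=2000):
    """Fit linear weights by stochastic gradient descent on squared error."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, judgments):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def score(w, b, x):
    """Combined metric score for one candidate's feature vector."""
    return b + sum(wi * xi for wi, xi in zip(w, x))

# Each row: [entailment_match, entailment_mismatch, ngram_overlap]
X = [[0.9, 0.1, 0.8], [0.4, 0.6, 0.5], [0.2, 0.9, 0.3], [0.8, 0.2, 0.6]]
y = [4.5, 2.5, 1.0, 4.0]  # invented human adequacy judgments (1-5 scale)

w, b = fit_weights(X, y)
```

After fitting, `score(w, b, x)` ranks candidates: stronger entailment-match and overlap features push the score up, mismatch features push it down, mirroring the match/mismatch intuition described above.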

@article{pado2009measuring,
  author =  {Sebastian Pad\'o and Daniel Cer and Michel Galley and
             Daniel Jurafsky and Christopher D. Manning},
  title =   {Measuring Machine Translation Quality as Semantic Equivalence:
             A Metric Based on Entailment Features},
  journal = {Machine Translation},
  volume =  {23},
  number =  {2--3},
  pages =   {181--193},
  year =    {2009},
  doi =     {10.1007/s10590-009-9060-y}
}