
Identifying the Machine Translation Error Types with the Greatest Impact on Post-editing Effort

Overview of attention for an article published in Frontiers in Psychology, August 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (83rd percentile)
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

X (Twitter): 14 X users

Citations

Dimensions: 40 citations

Readers on

Mendeley: 148 readers
Title
Identifying the Machine Translation Error Types with the Greatest Impact on Post-editing Effort
Published in
Frontiers in Psychology, August 2017
DOI 10.3389/fpsyg.2017.01282
Pubmed ID
Authors

Joke Daems, Sonia Vandepitte, Robert J. Hartsuiker, Lieve Macken

Abstract

Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools automatically predict whether it is more effortful to post-edit than to translate from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems rely heavily on automatic metrics, even though such metrics do not accurately capture actual post-editing effort. In addition, these systems do not take translator experience into account, even though novices' translation processes differ from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with that on various process effort indicators. The translation and post-editing processes of student translators and professional translators were logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected.
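For readers unfamiliar with HTER, the product effort indicator named in the abstract: it is the Translation Edit Rate between the raw MT output and its human post-edited version, i.e. the minimum number of edit operations divided by the number of words in the post-edited text. Below is a minimal Python sketch of that idea, not the paper's own tooling: it substitutes plain word-level Levenshtein distance (insertions, deletions, substitutions) for full TER, which also counts block shifts, so the hypothetical hter_approx helper will overestimate effort on segments the post-editor merely reordered.

# Rough HTER approximation: word-level edit distance between raw MT output
# and its post-edited version, normalized by the post-edit length.
# NOTE: hter_approx is an illustrative helper, not from the paper; real TER
# also allows block shifts, which this Levenshtein-only sketch ignores.

def hter_approx(mt_output: str, post_edited: str) -> float:
    hyp = mt_output.split()
    ref = post_edited.split()
    # dp[i][j] = minimum edits to turn hyp[:i] into ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(hyp)][len(ref)] / max(len(ref), 1)

if __name__ == "__main__":
    mt = "the cat sat in the mat"
    pe = "the cat sat on the mat"
    print(f"approx. HTER: {hter_approx(mt, pe):.3f}")  # 1 edit / 6 words = 0.167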

X Demographics

The data shown below were collected from the profiles of 14 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 148 Mendeley readers of this research output.

Geographical breakdown

Country                          Count   As %
Unknown                            148   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph. D. Student            22    15%
Student > Master                    21    14%
Researcher                          13     9%
Lecturer                            11     7%
Student > Doctoral Student          11     7%
Other                               22    15%
Unknown                             48    32%
Readers by discipline            Count   As %
Linguistics                         52    35%
Social Sciences                     12     8%
Computer Science                    11     7%
Arts and Humanities                 10     7%
Psychology                           3     2%
Other                               12     8%
Unknown                             48    32%
Attention Score in Context

This research output has an Altmetric Attention Score of 11. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 August 2019.
All research outputs: #2,738,676 of 23,577,761 outputs
Outputs from Frontiers in Psychology: #5,270 of 31,442 outputs
Outputs of similar age: #51,771 of 318,658 outputs
Outputs of similar age from Frontiers in Psychology: #142 of 584 outputs
Altmetric has tracked 23,577,761 research outputs across all sources so far. Compared to these, this one has done well and is in the 88th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 31,442 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.6. This one has done well, scoring higher than 83% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 318,658 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 83% of its contemporaries.
We're also able to compare this research output to 584 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 75% of its contemporaries.
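The percentile figures quoted in this section follow from simple rank arithmetic: percentile ≈ (1 − rank / total) × 100, the share of tracked outputs scoring below this one. The short Python sketch below reproduces them from the rankings listed above; percentile_from_rank is an illustrative helper, not an Altmetric API, and Altmetric's exact rounding may differ slightly.

# Percentile from a rank among N outputs, using the figures quoted above.
# (percentile_from_rank is a hypothetical helper for illustration only.)

def percentile_from_rank(rank: int, total: int) -> float:
    """Percentage of outputs ranked below this one (rank 1 = highest score)."""
    return (1 - rank / total) * 100

rankings = {
    "all research outputs": (2_738_676, 23_577_761),  # -> ~88th percentile
    "same source": (5_270, 31_442),                   # -> ~83rd percentile
    "similar age": (51_771, 318_658),                 # -> ~83rd percentile
    "similar age, same source": (142, 584),           # -> ~75th percentile
}

for label, (rank, total) in rankings.items():
    print(f"{label}: percentile ≈ {percentile_from_rank(rank, total):.1f}")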