The thing with any QA testing is that catching every last error is prohibitively time-consuming. I’m talking from a proofreader’s perspective here, but the process is much the same for games.
Let’s say a proofreader catches 97% of the errors in a single reading of a text, and the original text has 10,000 errors. The proofreader catches 9,700 of these, leaving 300. On the next reading he catches 291, leaving 9. On the third reading he catches 8, leaving one. Each reading takes the same amount of time and costs the contractor the same amount of money. Most contractors are willing to pay for the first run, and maybe the second. Few are willing to pay for a third, accepting instead the handful of errors still remaining. Basically no one would hire the proofreader to find the one remaining error.
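For the curious, here’s the same arithmetic as a quick Python sketch. It’s just a toy model that assumes the same flat 97% catch rate on every reading (integer math, rounding down, since you can’t catch a fraction of an error):

```python
# Toy model: repeated proofreading passes, each catching 97% of the
# errors still left in the text.
errors = 10_000

for reading in range(1, 5):
    caught = errors * 97 // 100   # round down: no fractional errors
    errors -= caught
    print(f"Reading {reading}: caught {caught}, {errors} left")
```

Note the fourth reading: at these rates it catches nothing at all, which is exactly why nobody pays for it.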
All QA is an investment with diminishing returns. The publisher must choose how much to put into it, but the thing to understand is that doubling the resources spent on QA won’t double the results; it yields maybe 5 to 10% better results.
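To put a rough number on that: with a flat per-reading catch rate `p`, the share of errors caught after `n` readings is `1 - (1 - p)**n`, so coverage flattens out fast. Using the 97% figure from the example above:

```python
# Cumulative share of errors caught after n readings at per-pass rate p.
p = 0.97
for n in (1, 2, 4):
    print(f"{n} reading(s): {1 - (1 - p) ** n:.4%} of errors caught")
```

Doubling the budget from one reading to two buys less than three percentage points here. And whatever the catch rate, two readings catch `p * (2 - p)` of the errors versus `p` for one, a factor of `2 - p`: always less than double.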
Now, I’m not saying that Funcom couldn’t, or shouldn’t, do a better job. The number of glitches slipping through the cracks is making people unhappy, so obviously they haven’t found the sweet spot where the investment in QA yields an acceptable number of remaining errors. But demanding perfection every time is unrealistic; no company can afford that. (I wish they could; I’d love to get paid more for proofreading.)