If you haven’t yet, now’s the time to acknowledge the hard truth: We’re way past the point of no return, at least when it comes to AI usage in localization. Machine translation, artificial intelligence, and AI-powered automation are transforming the way companies localize, and buyers are clambering over each other to get on the tech train.
And really, why shouldn’t they?
For starters, these latest technologies are already saving companies millions, and passing on that opportunity makes no business sense. If there’s a revolution, the ones who’ll benefit most are the ones at the forefront of it. But it’s not really all about money.
All that newfangled tech not only helps cut costs and save time. It also makes localization an achievable goal. As the entry barriers lower, we're seeing an increase in global accessibility, which can only be a positive development.
To put it simply: In a few years, even tiny, niche products will localize their UI into multiple languages, because it’ll be so easy to do (not to mention, cheap). And that means people all over the world will have more access to new tech, new opportunities, new tools, and ideas. Think about the social implications: increased access to information, equal opportunities, and cultural exchange. Who knows what global growth this revolution will lead to? I certainly don’t, but personally, I’m here for that future.
But while the AI revolution is taking over the loc industry with staggering speed, it does beg a big question: How can we maintain great quality in this new age of automation?
In this article, I'm going to suggest a framework designed specifically for this digital landscape we find ourselves in. In a nutshell, I’m proposing we use this change as an opportunity. Not to invest less in localization (less time, less money, less effort) – but to invest in other aspects, instead.
By shifting our focus away from manual processes and file transfers, we can finally zero in on what really matters: providing mind-blowingly good experiences for users across the globe.
So, grab a coffee, and let’s start by addressing a very important question.
In this guide
The very important question: What is quality in UX?
Quality is a term we throw around a lot, but when it comes down to it, we often struggle to define it.
How this was done in the past: The translation quality matrix
Traditionally, translation companies used a quality matrix to measure translation quality. The metrics it included gave companies a standardized framework for comparing translations and translators, giving each of them a numeric quality score.
For example, companies would often ask testers to rate a translation for:
1. Error severity
A higher severity rating was given to more significant errors - those that impacted the meaning or the readability of the text. A common classification system could be, for example:
Critical: Major errors that significantly change the meaning of the text or make it incomprehensible.
Major: Errors that lead to a partial loss of meaning or significant confusion, even if users would still manage to understand the text eventually.
Minor: Errors that impact the meaning or readability a bit, but nothing too critical - like minor inconsistencies or awkward phrasing.
2. Error category
On top of severity, testers were asked to classify each error into categories, so that companies could do a more detailed analysis of the translation quality. Common error categories include:
Mistranslations: Incorrect translation of words, phrases, or concepts.
Omissions: Missing words, phrases, or content from the source text.
Additions: Unnecessary words or content not present in the source text.
Grammar: Errors related to syntax, morphology, punctuation, or other grammatical aspects.
Style: Inconsistencies in tone, register, or terminology, as well as inappropriate use of idiomatic expressions or cultural references.
Generating the quality rating
Once they got the results, companies would organize them in something called a quality matrix. They’d use this to get a clear and comprehensive overview of translation quality and calculate the translation’s quality score.
Basically, each error got a specific point value based on its severity level. The total number of points was then divided by the total number of words or segments in the translation, resulting in a final quality score that could be compared across different translations or projects.
Translators and reviewers would often get a rating scale. They could assess translations based on the quality score and the error matrix, and give them a final “grade”, such as:
Excellent: Minimal errors, with no impact on meaning or readability. A translation of outstanding quality.
Good: Some minor errors, but overall a high-quality translation that conveys the intended meaning effectively.
Fair: A moderate number of errors, with some impact on meaning or readability. The translation may require further revision.
Poor: A high number of errors or significant issues with meaning or readability. The translation is likely to require extensive revision or retranslation.
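The scoring logic described above can be sketched in a few lines. This is a hypothetical illustration only - the point values and grade thresholds are my own assumptions, not any industry standard:

```python
# Illustrative translation quality score: weighted error points,
# normalized per 1,000 words (lower is better). Point values and
# grade thresholds are invented for this example.
SEVERITY_POINTS = {"critical": 10, "major": 5, "minor": 1}

def quality_score(errors, word_count):
    """Return weighted error points per 1,000 words."""
    total = sum(SEVERITY_POINTS[e["severity"]] for e in errors)
    return total / word_count * 1000

def grade(score):
    """Map a score to a grade like the scale above (thresholds illustrative)."""
    if score <= 2:
        return "Excellent"
    if score <= 8:
        return "Good"
    if score <= 20:
        return "Fair"
    return "Poor"

errors = [
    {"severity": "minor", "category": "style"},
    {"severity": "major", "category": "mistranslation"},
]
score = quality_score(errors, word_count=1500)  # 6 points / 1,500 words
print(round(score, 1), grade(score))  # → 4.0 Good
```

Notice how tidy and comparable this looks - which is exactly why, as we’ll see next, it was so widely adopted, and also why it misses so much.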
Quality in UX: The quality matrix doesn’t cut it anymore
The quality matrix was widely adopted because it was, first and foremost, easy to use. It’s so much easier to evaluate how things are going when you can just slap a grade onto each piece of copy or every linguist. When buyers asked about how quality was managed, vendors could say they “only work with the highest-rated linguists” and wax poetic about high standards and processes that maximize quality.
In reality, it’s incredibly hard to quantify and measure good UX copy. Because of how subjective it is, and how fluid audiences can get, trying to define high-quality copy in numbers only is a bit like grasping at clouds with a pair of tweezers.
For this reason, companies dealing with UX localization have to develop a more comprehensive understanding of translation quality. It’s simply essential if we want to have any real control over the quality of the localized microcopy results we get.
Sure, a product with subpar UX might still work – people could understand the text and figure out how to use the product. But the experience would be, well... not so great. 😕 And who wants that?
Quality for UX: A new framework
When we talk about quality in UX localization, we're not talking about the number of errors in a translation or how well date formats were converted (though those are important too). The components of good UX writing are more like the users themselves – flexible, widespread, and varied. You can't easily put a number on them.
However, some sort of framework is definitely needed if we want to be able to methodically evaluate the quality of our localized UX copy. This is especially true since we’re almost never able to actually read the localized UX copy (we rarely speak the language). We’re putting our trust in other consultants, reviewers, and proofreaders, basing our efforts and evaluations on their feedback. A joint frame of reference is even more critical in that case. It evens the field and creates mutual ground for discussions.
The framework I’m offering for quality in UX localization focuses on three key dimensions:
Fluency (or "naturalness")
Usability (or "helpfulness")
Personality (or "uniqueness")
Let’s break each of these down to see exactly what they mean.
Fluency (or "naturalness")
Fluency refers to how well the localized copy reads in the target language. And yes, fluent text should be grammatically correct, free of spelling and punctuation errors, and follow the conventions of the target language. But it goes way beyond those basic technical requirements.
In essence, fluent text reads as if it was originally written in the target language. It doesn’t feel like a translation at all - which means it often drifts far from the structures of the source copy.
Fluent localized copy is also often full of interesting and unique language structures, idiomatic expressions, cultural references, and colloquialisms. Of course, these are adapted appropriately to ensure that the text feels natural and engaging to the target audience, and not just a pale imitation of the source.
It’s hard to describe how fluent text feels to those who only speak one language, especially if their native language is English. But the multilinguals reading this can surely imagine. Copy can be 100% technically correct and still completely non-fluent - it would feel stiff and alienating, while still getting a perfect score in the traditional quality matrix.
Usability (or "helpfulness")
Usability is all about how effectively the localized copy helps users navigate and interact with the product. Good usability means that the copy should be clear, concise, and informative, guiding users through the product's features and functionality with ease. The text should be easy to understand and follow, avoiding ambiguity or confusion.
It’s useful to start by following the source copy, but often, making localized copy as helpful as possible requires some adjustments. Each culture approaches challenges and tasks differently, so naturally, the instructions and help texts accompanying those should be different as well. From the way information is organized to the phrasing chosen.
Evaluating copy for usability is hard to do without testing it with users. Even if you’re lucky enough to work with testers who match your user persona, they naturally know much more about your product than the everyday user. Therefore, to truly measure copy usability, you’ll need to run user testing - just like you would with your source copy.
Personality (or "uniqueness")
Personality is the distinctive voice of the brand, designed to reflect the brand identity and appeal to the target audience. A strong and consistent personality helps create an emotional connection with users, making the product more engaging and memorable. A strong brand voice can be a wonderful differentiator and an overall significant asset for brands.
Despite that, brand voices are rarely used in localized copy. Getting the brand voice to be reflected in all languages requires significant prep and tight collaboration with the linguists themselves. Most companies aren’t equipped for that, as they follow protocols that prioritize speed and cost over quality of experience.
On top of that, personality is rarely taken into account in the quality matrix - though it’s sometimes mentioned under “fluency” as an afterthought. Since it’s not highlighted as a priority, linguists don’t invest as much time to make sure the voice is reflected in their localized copy. This means that the brand voice is usually one of the first ones to go, resulting in a poorer experience for those localized languages.
Implementing the quality framework for UX localization
Evaluating our localized results based on this framework would be a big step forward. Instead of checking box after box of technical data, we’ll be checking that the copy actually supports a great user experience. But even with a clear framework in place, measuring quality is tricky, for several reasons. To make sure our efforts actually lead to success, we’ll start by preparing ourselves and understanding what can go wrong.
Step 1: Understanding the potential breaking points
If we understand where things might go downhill, we can proactively tackle the problems before they get too big - for an overall smoother localization journey.
Mapping the localization process
To kick things off, we need to have a clear end-to-end understanding of our localization process. This means identifying all the people, vendors, and departments involved, and figuring out who and what is impacted by their work. You can ask yourself these questions:
Who are the key people in the process (e.g., translators, writers, developers, designers, project managers)?
Who is dependent on whom, and how do they interact?
Where could things potentially break down, and what would be the consequences of these breakdowns?
Once you've mapped out your localization process, it's time to take a look at each point and assess the impact it has on the overall quality.
Identifying quality impact points
Each point of data transfer, and each person involved, has the potential to impact the quality of the results. Now that we have a framework to work with, we can think about what can be done to improve quality at each of these crossroads. We can also consider what shouldn’t be done - what practices can break things or damage quality in other ways.
Not all breaking points will have a direct impact on quality, but some may have a ripple effect that ultimately influences the experience for the end user. For example, developers may not write copy, but their work can impact quality in other ways, such as:
Ensuring that the product can handle different languages, scripts, and text directionality
Properly implementing localization tools, such as translation management systems
Providing ample support to translators and other localization professionals during QA
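To make the first developer-side point a bit more concrete, here’s a minimal, hypothetical sketch of externalized UI strings with a language fallback. The catalog layout, keys, and locale codes are invented for illustration - real products would typically use an i18n library rather than hand-rolled lookups:

```python
# Hypothetical per-locale string catalogs. Keeping copy out of the code
# (and tracking text direction per locale) is part of what lets a
# product handle different languages and scripts at all.
CATALOGS = {
    "en": {"save_button": "Save", "dir": "ltr"},
    "he": {"save_button": "שמירה", "dir": "rtl"},  # Hebrew, right-to-left
}

def t(locale, key, fallback_locale="en"):
    """Look up a UI string, falling back to the source language if missing."""
    catalog = CATALOGS.get(locale, {})
    return catalog.get(key, CATALOGS[fallback_locale].get(key, key))

print(t("he", "save_button"))  # → שמירה
print(t("fr", "save_button"))  # no French catalog, falls back to "Save"
```

The point isn’t this exact mechanism - it’s that decisions like these, made by people who never write a word of copy, directly shape what linguists and testers can later achieve.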
Similarly, designers can have an impact on copy quality by:
Creating layouts that allow for text expansion
Providing context-rich information including screenshots and mockups
Choosing the right typography for each market
Adapting visual elements to align with cultural preferences and norms
These are just some examples. Get creative and try to list all potential pitfalls. You can even work with an AI brainstorming tool to mine for things you haven’t thought of yourself.
Understanding your markets
Before you go ahead, make sure you have a deep understanding of the markets your product is serving. Don’t just count on your linguists or vendors - as professional as they are, it can’t replace having first-hand knowledge of the culture, social circumstances, and other elements that impact user behavior.
Having that knowledge will prove infinitely helpful when you need to anticipate potential breaking points and adapt your localization strategy accordingly. Consider the following questions as a starting point:
What languages and cultures are you targeting?
Are there any unique challenges or opportunities associated with these markets (e.g., legal requirements, cultural sensitivities, technological constraints)?
Who can give you more information? How can you leverage local expertise to improve the quality of your localized UX?
You can use the Localization Station market analysis template to help guide you through this step.
After thoroughly analyzing your localization process, identifying potential breaking points, and understanding your target markets, you can go ahead to the next step: Prepping your team to create the foundations for higher-quality UX.
Step 2: Prep your team and process
When you’re better prepared, you’ll notice that problems don’t get a chance to grow. Your team identifies them early and figures out exactly how to solve them. Here’s how you get everything ready.
Empowering your team members
You want to begin by helping each person on the team understand their role – as well as the impact they have on quality. Those insights you got in the previous step? This is where they come into play.
Often, team members who are not directly involved in localization don’t even know how much of an impact they have. They’re not familiar with potential issues, so they don’t know to look out for them as they work. They assume localization quality is only determined by the skill of the translator, but we already know this isn’t the case.
The first thing you need to do, in that case, is to help them learn. This means you want to clearly communicate each team member's responsibilities when it comes to localization, as well as the expectations around quality. You want them to understand what quality means for your company and for the product, and what parts of their job have an impact on those results.
If needed, provide training and resources that will help them improve their skills and contribute to a better user experience. This can be training on localization management, UX writing, user experience design, communication practices, or anything else that you feel could be helpful.
Overall – and this is true for any endeavor – strive to create a culture of open communication. Team members should feel comfortable discussing their challenges when it comes to localization. And they should be encouraged to share their ideas for improvement. Who knows, their unique skills could actually lead to significant breakthroughs in your process.
Establishing periodic checkpoints
Now that everyone is engaged and ready to make localization magic, you want to keep that energy alive. Bake periodic checkpoints into the localization process, so that you can all work together for better localized UX. These checkpoints can look different based on your company culture, the way you’re used to working, and the preferences of your team. For example:
Have multi-team meetings to discuss progress, challenges, and opportunities for improvement
Create metric review sessions to analyze performance data and identify areas of concern
Set QA sessions with cultural consultants from your target markets, to ensure that localized content is accurate, relevant, and culturally sensitive
Run user tests to gather real-world feedback, then review that with your team to fine-tune the user experience
Whichever format you choose, these checkpoints can help you create a more structured, collaborative environment that’s truly committed to (the right kind of) quality.
Learning more about your local market
Lastly, understanding what constitutes a great user experience in each of your target markets is crucial. Don’t assume you know this. The assumptions you make based on your own culture may be way off base, and you’ll find yourself wasting time and money.
To gain those valuable insights, you can have exploratory sessions with consultants or local experts in each of the markets you’re localizing into. These sessions can help you:
Identify what potential pain points you can address for your users in those markets (they may be the same as those in your original market, or slightly different)
Understand the cultural preferences and norms that could shape users' expectations and experiences
Discover opportunities to delight users and make your product stand out in each of these markets
Once you and your team are ready, you can start running through your localization process.
Step 3: Design a process that brings better results
As you go forward, you want to make sure that the process you’re using supports good localized UX - i.e., that it helps create translated copy that’s fluent, usable, and on-brand. These days, with machine translation and automation, this means something a bit different than before. A lot of manual labor is being phased out, and new, unique challenges arise instead.
To set yourself up for success, there are a few things you should keep in mind:
Keep humans in the loop
Yes, tech has come a long way. Yes, nowadays we’re using ChatGPT for anything from planning trips to creating recipes. But human expertise still plays a critical role in great UX writing. Machines can translate text quickly and accurately, but they can't fully grasp the nuances and cultural sensitivities involved in crafting a truly localized experience.
The quality of copy you can get out of your MT depends on several factors: the language pair, the subject matter, the type of MT you’re using… You can try machine-translating your UX copy and evaluating the results, but even in the highest-resource language pairs, you’ll still need a human eye to tweak the copy, for a few reasons:
For now, MT can’t take into account any visual context. Your AI translator won’t be able to tell where the copy you’re asking it for will be placed, or if there are any supporting visuals. This means poor usability.
MT also doesn’t consider the knowledge level of the audience, nor what they did before they reached this specific UI point. It has trouble nailing copy that fits within the flow of the product. Again, bad for usability.
MT has a tendency toward literal translation, since it can’t really take in any of the surrounding context. This is a significant issue in UX translations, where copy is split into small strings that are seemingly distinct from each other. It also happens because the corpora used to train MT engines were often written in stiff, formal language - while UX copy tends to be more plain and straightforward. And, of course, none of this is ideal for fluency.
And, for the moment, MT fails spectacularly on the personality front. ChatGPT’s been getting more flexible when it comes to voice, but it has a hard time nailing the exact brand voice - and that’s when it’s given exact voice guidelines or plenty of examples. For localized copy, MT almost always defaults to that generic bot-like voice. Snooze.
Working with MTPErs who understand UX
The humans you work with? They should not only have a strong command of languages — but also understand UX and know how to write UX copy. Especially in MTPE, it’s less about the grammatical correctness of the copy, which MT can handle fine. It’s more about taking that raw MT output and turning it into a useful, unique, natural user experience.
Once you work with the right MTPErs and have enough confidence in their abilities, you can give them a bit more freedom and flexibility. Creating fluent copy under stifling, strict rules is nearly impossible, because language is flexible on its own. Letting your linguist veer off the course of the source copy is crucial if you want their localized texts to feel fluent and natural.
Skipping proof and adopting a two-stage QA approach
Traditionally, translation and localization projects have always included a proofreading step. When projects reach QA, linguists are asked to avoid any unnecessary changes. They’re expected to flag only the most critical issues, like blatant errors and potentially offensive copy.
I would like to argue that proofreading in a contextless environment, or proofing based on screenshots and mockups alone, is far less effective than proofing during QA. A lot can change in the final version, and you want to give linguists the option of changing things around once they actually see the copy live.
Instead of the traditional proof-and-QA process, consider skipping proof and implementing a two-stage QA process.
In the first QA round:
Allow testers to make any necessary changes to the copy. This stage is all about identifying and fixing issues that may have been missed during the initial MTPE process.
After testers have completed their revisions, let the original MTPErs/linguists review the changes to ensure they work well within the context of the product and maintain the intended meaning.
Finalize the changes and implement the copy to get things ready for the second (final) round of QA.
In the second QA round:
At this stage, most of the copy should be good. Now you can focus on high-level comments that either highlight definite errors or have a significant impact on the user experience.
Require testers to justify each comment they make during this stage. This approach encourages critical thinking and helps ensure there are no unnecessary preferential changes being made.
Step 4: Test for quality
The tests you run during QA should also prioritize the user experience, focusing on the three dimensions of localized UX copy quality. The texts are just one part of the overall experience, and all the pieces are tied together. You want to give your testers the ideal conditions to assess the quality of the final experience. If you’re running two QA rounds as suggested above, you’ll want these conditions to support testers as they improve the experience - through changes and adaptations of the copy.
Providing enough context
Conducting QA in a contextual environment can significantly improve the results, as you can probably imagine. Of course, no QA is performed entirely without context - usually, testers are given either screenshots or a testing environment to use. The more context you can provide, the better placed they’ll be to analyze and improve your localized experience.
Based on that logic, performing QA within the actual app – live or through a testing environment – is the ideal way to go. It lets testers understand exactly how well the copy will fit within the overall design and layout.
That being said, combining this with in-context editing in your localization tool can be a game changer. Not only can testers evaluate how the copy looks now - they can see exactly how it’ll look after each fix. This can dramatically decrease the number of iterations you’ll need after QA.
Either way, make sure you also provide testers with the full context of the product, including its purpose, target users, and the state of mind of the users as they use the product. Those are critical to understanding the experience and addressing potential UX issues in the localized product.
Gathering both qualitative and quantitative results
Quantitative data is easy to analyze and simple to visualize. And it can help inform your decisions about which markets need work and which linguists you want to keep working with. To collect it, you want to ask your testers to rate the copy in terms of fluency, usability, and personality. Ask them pointed questions like:
Does the copy feel like it was written in your language?
Is the copy clear and easy to understand?
Do you feel like the users would be able to navigate this product easily?
Is the brand voice reflected in the copy?
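As a toy illustration of how those ratings might be rolled up - the data structure and field names here are assumptions, not a standard tool - per-dimension averages give you a quick, comparable snapshot per language:

```python
# Hypothetical tester ratings (1-5) on the three framework dimensions,
# averaged per dimension for one language.
from statistics import mean

ratings = [
    {"tester": "A", "fluency": 4, "usability": 5, "personality": 2},
    {"tester": "B", "fluency": 5, "usability": 4, "personality": 3},
]

summary = {
    dim: round(mean(r[dim] for r in ratings), 2)
    for dim in ("fluency", "usability", "personality")
}
print(summary)  # → {'fluency': 4.5, 'usability': 4.5, 'personality': 2.5}
```

A summary like this would flag personality as the weak spot here - but as the next section argues, the numbers alone can’t tell you why.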
To get a true understanding of the quality of your localized UX copy, you can’t stop at numbers. The information you get from quantitative data is painfully lean. Not only that, but quantitative data alone can lead to plenty of misunderstandings, especially since you and your testers come from different cultures.
If you have testers explain – in their own words - how they feel about the copy’s fluency, usability, and personality, it’ll help you make sure you’re on the same page. You can even ask follow-up questions and gain more insights into their thought process and reasoning.
Plus, qualitative insights will help you get a deeper understanding of the three dimensions, and how well they’re implemented in the localized copy. A fluency rating between 1 and 5 is significantly less detailed than a long-form free-text answer explaining how fluent the copy feels.
Scheduling out-of-sprint tests
Often, companies only perform QA when a new feature is launched or some new copy is added. This puts the spotlight on new copy only, without looking holistically at the entire product. Older copy doesn’t get tested after its initial release, especially if it’s not placed near the new copy in the product.
Adding out-of-sprint checkpoints can help with that. By reviewing the entire product from time to time, you can identify and address issues that were missed earlier or that emerged as new copy was incorporated.
In these checkpoints, you can have testers or linguists review the product, just like they would in a regular QA task. Only this time, the QA script doesn’t focus on a new feature, but on key touchpoints in the product as a whole. Alternatively, you can test the localized product with real users, watching them as they experience the UX themselves. These checkpoints will help maintain a consistently great experience in all languages.
Testing with real users, too
QA testing with linguists and language testers is a critical step, and one that can help you weed out the critical mistakes. But your linguists are not your users. If they’re professional, they can give you valuable insights into the copy’s personality and fluency. And they can make educated guesses about usability, too. But they can’t know for sure if your copy is truly usable. The only way to get this information is through user testing.
As with QA testing, you want to combine both quantitative and qualitative data in your research, to get your users’ real feelings about your product. You also want to try and use a wide variety of UX research methods. For example, you can do:
Usability testing: Observe users as they interact with your localized product, identifying any issues or areas for improvement.
Surveys: Collect feedback from users on their overall experience and specific aspects of your localized product.
Focus groups: Bring together small groups of users to discuss their experiences, preferences, and needs in relation to your localized product.
These methods will help you get detailed insights that’ll impact the paths you choose to take later.
Step 5: Put the data to work
Alright, so you've put in the effort, gathering all that valuable feedback, running tests, and keeping an eye on your UX localization game. Now it's time to make that data work for you. Here's how to take all those great insights and use them to make your localization even better:
Analyzing the data
First things first, take a good look at all the data you've collected - from both QA and user testing. You want to look at all types of feedback at this point:
Specific, pointed feedback ("this is missing a comma") - This includes comments that refer to objective errors in specific strings or pieces of copy. If there’s too much of this, you’ll want to try and understand why that is. Talk with your loc team to try and pinpoint what went wrong and how you can prevent that from happening in the future.
General framework feedback - These are comments that have to do with the fluency, usability, and personality of the copy. Here you want to keep an eye out for any patterns and trends – since those can help you understand what's working and what needs improving.
Making a to-do list
Once you've got a handle on your data, it's time to figure out what needs fixing and how it can be done. Prioritize your to-do list based on what will make the biggest difference to your users' experience.
Once you know what you need to do, share your list with the team. Tell them what you’ve learned and ask for their ideas on how to make things better. Ideally, you want your entire team to be present – or at least go over the insights later. It'll help everyone understand how their work affects the user experience, and might just give them the motivation they need to keep improving the localization process.
Using your data to make decisions
Finally, after any issues are fixed, use the relevant data to make smart decisions about your localization strategy and process. Maybe you need to focus more on a particular language or invest in better translation tools. Maybe your UI in a specific language needs some work, or the team in one of your languages needs additional investment.
Whatever the case, making data-driven choices will help you put your efforts where they count. Keep using your data to make your localization better. Update your guidelines, chat with your team, and make sure everyone's working together to keep the experience the best it can be.
Do you do this for every localization task?
It depends on several factors, like the size of the task, the amount of time you have, and budget, of course. I’d do some level of QA after any localization task to weed out the issues. You can then analyze the data periodically to keep improving the fluency, usability, and personality of your product’s UX copy in that language.
Who should test your copy?
Ideally, you want to work with professional LQA testers - people who have both linguistic capability and the technical skills needed to go over your UX copy. If that’s not possible, I would recommend running QA step #1 with linguists, providing them with easy-to-use assets like screenshots and videos. Then, run QA step #2 with QA testers to find additional usability issues.
Got any more questions? Get in touch!
p.s. Did you know we have a Slack community for UX localization? Join right here – and don't forget to join the relevant language-specific channels, too. Can't wait to see you there!
World-class UX: A comprehensive guide to quality in UX localization
Is your localized copy good enough? A detailed guide with everything you need to know about managing QA for localized User experiences.