
javascript - What is the best way of writing a test for testing a multilingual website? - Stack Overflow


I've just written this code for testing our login process for the German version of our website:

describe('login', () => {

    context('Language: DE', () =>{

        beforeEach(() => {
            ...
        })

        it('links to #/passwordforgotten', () => {
            ...
        })

        it('links to #/register', () => {
            ...
        })

        it('links to login further options', () => {
            ...
        })

        it('requires username', () => {
            ...
        })

        it('requires password', () => {
            ...
        })

        it('requires valid username and password', () => {
            ...
        })

        it('navigates to #/ on successful login', () => {
            ...
        }) 
    })
})

Our website is offered in nine languages. Should I replicate this piece of code for each language:

describe('login', () => {

    context('Language: DE', () =>{
        ...
    })

    context('Language: EN', () =>{
        ...
    })

    context('Language: ES', () =>{
        ...
    })

    context('Language: IT', () =>{
        ...
    })

        ...
})

Or should I implement logical structures when it comes to checking the content?

it('requires password', () => {

            cy.get('#UserName').type('username{enter}')

            // cy.hash() yields its value asynchronously, so it must be read in a callback
            cy.hash().then((url) => {

                if (url.includes('de'))
                    cy.contains('Bitte geben Sie Ihr Passwort ein!')
                if (url.includes('en'))
                    cy.contains('Please enter your password!')
                if (url.includes('es'))
                    cy.contains('Por favor introduzca su contraseña')
                if (url.includes('it'))
                    cy.contains('Il nome utente o password non è corretto.')

                ...
            })
        })

What's more efficient? And what if this battery of tests is intended to be run regularly?


asked Feb 6, 2020 at 8:42 by Noob_Number_1
  • Probably won't help you as much because I'm not used to Cypress, but my approach would look something like this: create a JSON structure for each language, using the same keys and simply replacing the values. Then load the right JSON file and store the user's language decision, maybe in a cookie or localStorage. This way, somebody not used to programming can also add another language or edit existing ones. – Aaron Commented Feb 6, 2020 at 8:54
  • What are you actually testing? You can test the working of the site separately from its UI basically, by concentrating on non-changing structural elements like button ids. If you explicitly want to test the wording, then I’d first question what exactly you’re testing. You probably don’t want to test whether the word on the button is indeed named as it should be named, that would be a lot of duplication and redundant changes when you rename stuff. So… what are you testing and how is it language dependent? – deceze Commented Feb 6, 2020 at 8:59

3 Answers


I implemented testing for a multi-lingual site that supported 40 language locales by using this trick:

  1. Define one dictionary (JSON file) for each locale. In the locale file, define the mapping between element IDs and text.

  2. Write the test suite to take 'locale' as a parameter / environment variable. Test cases refer to the appropriate dictionary file based on the locale parameter, and do their assertions against the mappings defined in the locale dictionary.

  3. Run the test suite with the locale as a parameter. This helped me avoid a lot of if-else conditions and switch blocks!

It also made it possible to run tests for only specific languages in a release (for the scenario where not all language sites were updated in every release).
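A minimal sketch of the dictionary idea might look like this. All names (dictionary keys, the LOCALE variable) are illustrative; only the German and English messages come from the question, and the commented Cypress usage is an assumption, not the answerer's actual code:

```javascript
// Sketch of the per-locale dictionary idea. Keys are stable identifiers;
// values are the expected UI strings for that locale.
const dictionaries = {
  de: { passwordRequired: 'Bitte geben Sie Ihr Passwort ein!' },
  en: { passwordRequired: 'Please enter your password!' },
};

// The suite picks the locale once, e.g. from an environment variable
// (in Cypress this could be Cypress.env('locale')).
const locale = process.env.LOCALE || 'de';
const t = dictionaries[locale];

// A spec file can then stay locale-agnostic:
//   it('requires password', () => {
//     cy.get('#UserName').type('username{enter}');
//     cy.contains(t.passwordRequired);
//   });

console.log(t.passwordRequired);
```

Running the same suite with LOCALE=de, LOCALE=en, etc. then reuses every test case unchanged; only the dictionary lookup differs.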

First of all, recognise the redundancy of testing the exact wording of each particular UI element. That means you have a localisation file somewhere that does the actual localisation (e.g. a gettext PO file), and you have the exact same information again in your test files, just spread around more. In my experience, slight details of text may change every once in a while, so every time you update a wording, you also need to update the tests. In essence, you're just keeping your tests in sync with the PO file to keep your tests passing. This is a lot of redundant work without much upside.

What are you actually testing by doing this? Your top priority is probably to test whether your l10n system is working. If there's a bug somewhere in your i18n/l10n system, typically all localisations will be broken. So it's redundant to test everything. You can simply write one test which does spot checks of certain well known strings and checks if they're properly localised into different languages. That should be enough to catch failures in the i18n system as such.

If you're using placeholders like LOGIN_BUTTON_TEXT, you may want to test whether your button texts are not "LOGIN_BUTTON_TEXT" and not empty, to see whether localisations are working in general, without needing to confirm what each button text is exactly.
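Such a sanity check could be sketched as follows; the helper name, the key 'LOGIN_BUTTON_TEXT', and the commented selector are all hypothetical:

```javascript
// Hypothetical helper: a rendered label counts as "translated" if it is
// non-empty and is not the raw i18n key leaking through untranslated.
function looksTranslated(text, key) {
  const trimmed = text.trim();
  return trimmed.length > 0 && trimmed !== key;
}

// Possible Cypress usage (selector and key are assumptions):
//   cy.get('#loginButton').invoke('text').should((text) => {
//     expect(looksTranslated(text, 'LOGIN_BUTTON_TEXT')).to.be.true;
//   });

console.log(looksTranslated('Anmelden', 'LOGIN_BUTTON_TEXT'));          // true
console.log(looksTranslated('LOGIN_BUTTON_TEXT', 'LOGIN_BUTTON_TEXT')); // false
console.log(looksTranslated('   ', 'LOGIN_BUTTON_TEXT'));               // false
```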

Perhaps load the PO catalog you use to localise your text and check that the string appearing in the UI is the one translated from the PO file. Again, this would catch bugs where somehow the l10n isn't working, without you needing to copy all the strings from the PO file to your tests.

You can do your functional tests without caring about exact texts; address UI elements by button ids and such, check whether inputs are generally "in error state" or not, check whether elements which are supposed to display error messages generally exist and aren't empty. Again, perhaps do spot checks for certain specific localised strings, but not necessarily for all.
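As a sketch of such a wording-free check, the pure "is this field in an error state" logic can be separated from the Cypress plumbing. The class name 'has-error' and the selectors in the comments are assumptions about the markup, not something from the question:

```javascript
// Assumption: the app marks invalid inputs with a 'has-error' CSS class.
function isInErrorState(classAttr) {
  return classAttr.split(/\s+/).includes('has-error');
}

// Possible Cypress usage, independent of any localized wording:
//   cy.get('#Password').type('{enter}');
//   cy.get('#Password').should('have.class', 'has-error');
//   cy.get('.validation-message').invoke('text').should('not.be.empty');

console.log(isInErrorState('form-control has-error')); // true
console.log(isInErrorState('form-control'));           // false
```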

You should quality audit your PO files separately from this. i18n systems like gettext have a huge ecosystem around quality checking translations, and you should use them for that purpose. E2E tests aren't necessarily the right place for that.

The only time it would make sense to hardcode the exact string which is supposed to appear in the UI in your tests is if you work with a TDD approach, and you have 100% pixel perfect mockups with 100% accurate text predefined by your design/UX team and it's 100% paramount that these mockups be 100% implemented as-is. In that case, the tests are the spec and the implementation is following the tests. This also requires that for any change to the text, you go through a mockup → spec → test → implementation cycle. Then this makes sense.
In any other case, your tests will just be chasing the implementation, and that's rarely useful.

It is a late answer, but I hope it helps someone who needs this type of solution.

To create dynamic tests from a JSON file for a multi-language website:

In general I use one fixture file for all languages, holding an array per page, with the following structure.

{
  "common": {
    "variable": "value"
  },
  "lang": [
    {
      "name": "tr",
      "variable": "value"
    },
    {
      "name": "en",
      "variable": "value"
    }
  ]
}

This is the way I prefer to create a language loop. In the test file I require the fixture right after describe(), as follows.

describe('some description', () => {
  const fxData = require('../fixtures/foldername/fxpagedata');

  fxData.lang.forEach((lng) => {
    it(lng.name + ' test case definition', () => {
      cy.log('lng.name', lng.name);
      cy.log('fxData.common.variable', fxData.common.variable);

      // ..
    });
  });
});

With this approach, we can dynamically create tests from a JSON file.
I also think the most important point is that you get separate tests per language in the Cypress Test Runner.

Footnote:
The reference below links to a working example repo with a neat explanation.

We cannot load the JSON file using "cy.fixture", because by that point the test is already running. The same goes for the "before" hook - new tests cannot be created from a "before" hook. Instead, we need to load the JSON file with "require" at load time and generate the tests from it.

ref: https://github.com/cypress-io/cypress-example-recipes/tree/master/examples/fundamentals__dynamic-tests
