Is there a way to make expect() mandatory for all test cases? If the programmer doesn't include an expect, it should throw some warning/error. I am using Jest and @testing-library/react.
For example:

Should work fine:

test('Should work fine', () => { ...code... expect(value).toBe(expectedValue) })

Should throw an error/warning:

test('Should fail', () => { ...code... })
There's expect.hasAssertions, although you have to add that to each test, or the ESLint plugin's expect-expect rule. – jonrsharpe Commented Dec 14, 2020 at 11:21

Thank you for replying, the ESLint rule is the solution. I was referring to npmjs.com/package/@testing-library/react – Pramod Mali Commented Dec 14, 2020 at 13:18
2 Answers
There are two ways I'm aware of to automatically do this:

1. The ESLint Jest plugin's jest/expect-expect rule, which will ensure that every test calls expect at least once. This can also be configured to allow other assertion functions/methods, like Cypress's "cy.**.should" or Supertest's "request.**.assert" (see the config sketch after this list).

2. Use expect.assertions / expect.hasAssertions inside the tests, so that they will fail if the required number of expectations isn't met. This is a bit less automated, so it may not solve the problem of people forgetting to include assertions, but it will also catch cases where assertions exist but aren't reached (e.g. due to asynchronous code or no error being thrown). There's another rule in the ESLint plugin, jest/prefer-expect-assertions, which checks that you always use it, giving you the best of both worlds.
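For reference, a minimal sketch of what that ESLint setup might look like; it assumes eslint-plugin-jest is already installed, and the extra assertFunctionNames patterns are only illustrative:

// .eslintrc.js — minimal sketch, assuming eslint-plugin-jest is installed
module.exports = {
  plugins: ['jest'],
  rules: {
    // Fail any test that never calls an assertion function
    'jest/expect-expect': [
      'error',
      { assertFunctionNames: ['expect', 'cy.**.should', 'request.**.assert'] },
    ],
    // Nudge tests towards declaring expect.assertions / expect.hasAssertions
    'jest/prefer-expect-assertions': 'warn',
  },
};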
There's also a process-based approach: test-driven development. If you're always running the tests before you expect them to pass, you make sure they do indeed fail (and give useful feedback when they do so).
Linting rules such as jest/expect-expect have been very useful, but I've seen expect.hasAssertions, placed in the shared Jest setup file, catch a few things the linter hasn't:
beforeEach(() => {
expect.hasAssertions();
});
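For completeness, a sketch of how that shared setup file could be wired into the Jest config; the jest.setup.js file name is only an example:

// jest.config.js (sketch; the setup file name is only an example)
module.exports = {
  // Loaded once per test file, after Jest's globals (expect, beforeEach, ...)
  // are available, so the beforeEach above can live in jest.setup.js
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};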
The linter does static analysis, while hasAssertions tracks the statements actually executed during a test:
it('should detect whether an expectation is actually run', () => {
// Oops, we forgot the await!
waitFor(() => {
expect(someElement).toBeVisible();
});
  // The waitFor essentially becomes a no-op from the point of view of
// the test. The test moves on and quietly passes without running
// an expectation.
});
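For contrast, a sketch of the fixed test; it assumes render, screen and waitFor come from @testing-library/react, that @testing-library/jest-dom supplies the toBeVisible matcher, and the 'Loaded' text is just a stand-in for whatever the real component renders:

import React from 'react';
import { render, screen, waitFor } from '@testing-library/react';
import '@testing-library/jest-dom'; // assumed: provides the toBeVisible matcher

it('only passes once the expectation inside waitFor has run', async () => {
  // A trivial stand-in for the component under test
  render(<p>Loaded</p>);

  // Awaiting waitFor means the test can't finish (and quietly pass) until
  // this expectation has actually executed, which also satisfies the
  // expect.hasAssertions() registered in the setup file
  await waitFor(() => {
    expect(screen.getByText('Loaded')).toBeVisible();
  });
});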
Using expect.hasAssertions() also allows jest/expect-expect to be reconfigured slightly to allow more flexibility when abstracting expectations into helper functions, with no loss of rigour, if that's something you're interested in.
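As a rough sketch of that idea (the helper name and the file split are hypothetical): a shared assertion helper can be allowed via assertFunctionNames, while the expect.hasAssertions() call in the setup file still guarantees the helper actually ran:

// testHelpers.js — hypothetical shared assertion helper
export function expectVisible(element) {
  // Relies on the global expect and the jest-dom toBeVisible matcher
  expect(element).toBeVisible();
}

// .eslintrc.js — relevant fragment: count calls to the helper as assertions
// 'jest/expect-expect': ['error', { assertFunctionNames: ['expect', 'expectVisible'] }]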