Effective testing in CI/CD pipelines requires balancing speed, coverage, and reliability. This guide covers strategies to optimize your testing approach for continuous integration and deployment.
Testing pyramid for CI/CD
The testing pyramid applies to CI/CD with some modifications:
        /\
       /  \      E2E Tests (few, slow, expensive)
      /____\
     /      \    Integration Tests (some, moderate)
    /________\
   /          \  Unit Tests (many, fast, cheap)
  /____________\
CI/CD testing layers
- Unit tests: Fast, run on every commit
- Integration tests: Moderate speed, run on PRs
- E2E tests: Slow, run on main branch or scheduled
- Performance tests: Run periodically or on release candidates
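The non-obvious part is wiring the slow layers to the right triggers. A sketch of the trigger block for a dedicated E2E/performance workflow (file name and schedule are illustrative; the unit and integration workflows would simply use on: [push] and on: [pull_request]):

# e2e-tests.yml: run on merges to main and on a nightly schedule
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 2 * * *'  # nightly at 02:00 UTC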
Pipeline testing strategy
Stage 1: Pre-commit (local)
Run fast checks before committing:
# package.json "scripts" entry (wire it to a Git hook via husky or .pre-commit-config.yaml)
"scripts": {
  "pre-commit": "lint-staged && npm run test:unit"
}
What to include:
- Linting and formatting
- Unit tests (fast subset)
- Type checking
- Security scanning (basic)
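The lint-staged half of that hook needs its own config mapping staged-file globs to commands. A minimal .lintstagedrc.json sketch (globs and commands are illustrative):

{
  "*.{js,ts}": ["eslint --fix", "prettier --write"],
  "*.{css,md}": "prettier --write"
}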
Stage 2: Pull request (CI)
Run comprehensive checks on PRs:
# .github/workflows/pr-checks.yml
name: PR Checks
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: npm
      - name: Install Dependencies
        run: npm ci
      - name: Unit Tests
        run: npm run test:unit -- --coverage
      - name: Integration Tests
        run: npm run test:integration
      - name: Lint
        run: npm run lint
      - name: Build
        run: npm run build
What to include:
- All unit tests with coverage
- Integration tests
- Build verification
- Code quality checks
- Dependency scanning
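Dependency scanning can be a one-step addition to the PR workflow above; a sketch using GitHub's dependency-review action (the pinned version and severity threshold are assumptions):

- name: Dependency Review
  uses: actions/dependency-review-action@v3
  with:
    fail-on-severity: high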
Stage 3: Merge to main (pre-deploy)
Run before deploying to staging:
jobs:
  pre-deploy:
    runs-on: ubuntu-latest
    steps:
      # checkout and setup steps omitted for brevity
      - name: E2E Tests
        run: npm run test:e2e
      - name: Security Scan
        run: npm audit --audit-level=high
      - name: Performance Tests
        run: npm run test:performance
What to include:
- E2E tests (critical paths)
- Security scanning
- Performance benchmarks
- Contract testing (if microservices)
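Contract testing normally means a dedicated tool such as Pact; as a minimal illustration of the idea, a consumer-side test can pin the response shape it depends on (a hand-rolled sketch; the service URL and field names are hypothetical, and global fetch assumes Node 18+):

// The consumer pins the fields it actually uses from the provider's response,
// so a breaking provider change fails here instead of in production.
it('user-service response satisfies the consumer contract', async () => {
  const res = await fetch(`${process.env.USER_SERVICE_URL}/users/1`);
  const user = await res.json();

  expect(res.status).toBe(200);
  expect(user).toEqual(
    expect.objectContaining({
      id: expect.any(Number),
      email: expect.any(String),
    })
  );
});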
Stage 4: Post-deploy (validation)
Verify deployment success:
jobs:
  smoke-tests:
    runs-on: ubuntu-latest
    steps:
      # checkout and setup steps omitted for brevity
      - name: Health Check
        run: curl -f ${{ env.STAGING_URL }}/health
      - name: Smoke Tests
        run: npm run test:smoke -- --baseUrl=${{ env.STAGING_URL }}
What to include:
- Health checks
- Smoke tests (critical user flows)
- API availability checks
- Database connectivity
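A smoke suite is just a very small set of requests pointed at the freshly deployed URL. A sketch (here the base URL is read from an environment variable rather than the --baseUrl flag shown above; endpoints are illustrative, and global fetch assumes Node 18+):

// smoke.test.js: a handful of requests against the deployed environment
const baseUrl = process.env.BASE_URL; // set from the pipeline's STAGING_URL

describe('smoke', () => {
  it('health endpoint responds', async () => {
    const res = await fetch(`${baseUrl}/health`);
    expect(res.status).toBe(200);
  });

  it('API serves the critical resource', async () => {
    const res = await fetch(`${baseUrl}/api/users?limit=1`);
    expect(res.status).toBe(200);
  });
});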
Test execution strategies
1. Parallel execution
Run tests in parallel to reduce total time:
// jest.config.js: Jest already parallelizes by default; maxWorkers caps the pool
module.exports = {
  maxWorkers: 4,
  testMatch: ['**/*.test.js']
};

# GitHub Actions matrix strategy (assumes a custom script that splits tests
# into groups; --group is not a built-in Jest flag)
strategy:
  matrix:
    test-group: [1, 2, 3, 4]
steps:
  - run: npm run test -- --group=${{ matrix.test-group }}
2. Test sharding
Split test suite into smaller chunks:
# Split the suite across multiple jobs (Jest 28+ supports --shard natively)
npm run test -- --shard=1/4  # Run shard 1 of 4
npm run test -- --shard=2/4  # Run shard 2 of 4
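Sharding pays off when each shard gets its own CI job; wiring the flag into a GitHub Actions matrix might look like this (a sketch; setup steps are abbreviated):

strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - uses: actions/checkout@v3
  - run: npm ci
  - run: npm run test -- --shard=${{ matrix.shard }}/4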
3. Selective testing
Run only relevant tests based on changes:
# Only run tests for changed files
- name: Changed Files
  uses: dorny/paths-filter@v2
  id: changes
  with:
    filters: |
      frontend:
        - 'frontend/**'
      backend:
        - 'backend/**'
- name: Frontend Tests
  if: steps.changes.outputs.frontend == 'true'
  run: npm run test:frontend
4. Caching
Cache dependencies and test results:
- name: Cache dependencies
  uses: actions/cache@v3
  with:
    path: node_modules
    key: ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }}
- name: Cache test results
  uses: actions/cache@v3
  with:
    path: .jest-cache
    key: ${{ runner.os }}-jest-${{ hashFiles('**/*.test.js') }}
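The .jest-cache path only exists if Jest is told to write its cache there; by default it uses a temporary directory the cache step never sees:

// jest.config.js: point Jest's cache at a path the CI cache step can restore
module.exports = {
  cacheDirectory: '.jest-cache'
};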
Test types and when to run them
Unit tests
When: Every commit, every PR
Characteristics:
- Fast (< 1 second per test)
- Isolated (no external dependencies)
- High coverage target (80%+)
Example:
describe('UserService', () => {
  it('should validate email format', () => {
    expect(validateEmail('user@example.com')).toBe(true);
    expect(validateEmail('invalid')).toBe(false);
  });
});
Integration tests
When: On PR, before merge
Characteristics:
- Moderate speed (seconds to minutes)
- Test component interactions
- Use test databases/containers (see the container sketch after the example)
Example:
describe('User API Integration', () => {
  it('should create and retrieve user', async () => {
    const user = await createUser({ email: 'user@example.com' });
    const retrieved = await getUser(user.id);
    expect(retrieved.email).toBe('user@example.com');
  });
});
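The example above assumes a database is already running. In CI it is common to start a throwaway one per suite, for instance with Testcontainers (an assumed dependency; API per recent testcontainers-node versions, connection wiring illustrative):

const { GenericContainer } = require('testcontainers');

let container;

beforeAll(async () => {
  // Start a disposable Postgres for this suite
  container = await new GenericContainer('postgres:15')
    .withEnvironment({ POSTGRES_PASSWORD: 'test' })
    .withExposedPorts(5432)
    .start();
  process.env.DATABASE_URL =
    `postgres://postgres:test@${container.getHost()}:${container.getMappedPort(5432)}/postgres`;
}, 60000);

afterAll(async () => {
  await container.stop();
});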
E2E tests
When: On main branch, before production deploy
Characteristics:
- Slow (minutes to hours)
- Test complete user flows
- Run against staging environment
Example:
describe('User Registration Flow', () => {
  it('should complete registration', async () => {
    await page.goto('/register');
    await page.fill('#email', 'user@example.com');
    await page.fill('#password', 'password123');
    await page.click('button[type="submit"]');
    await expect(page).toHaveURL('/dashboard');
  });
});
Performance tests
When: Periodically, on release candidates
Characteristics:
- Measure response times, throughput
- Identify regressions
- Run in isolated environment
Example:
describe('API Performance', () => {
  it('should respond within 200ms', async () => {
    const start = Date.now();
    await api.get('/users');
    const duration = Date.now() - start;
    expect(duration).toBeLessThan(200);
  });
});
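Timing a single request like this is noisy; a dedicated load tool gives percentile-based thresholds over many requests. A minimal sketch with k6 (an assumed tool; the target URL comes from an environment variable):

// load-test.js: run with `k6 run load-test.js`
import http from 'k6/http';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',
  thresholds: {
    http_req_duration: ['p(95)<200'], // fail the run if p95 latency exceeds 200ms
  },
};

export default function () {
  http.get(`${__ENV.BASE_URL}/users`);
}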
Optimizing test execution
1. Fail fast
Stop on first failure for faster feedback:
npm run test -- --bail
2. Test prioritization
Run critical tests first:
Note that Jest's test.only is the wrong tool here: it skips every other test in the file rather than reordering anything. A simple alternative is to run the critical suite in its own step first:

# Run critical-path specs first for fast signal, then everything else
npm run test -- --testPathPattern='critical'
npm run test -- --testPathIgnorePatterns='critical'
3. Skip flaky tests in CI
// Skip known flaky tests in CI, but keep running them locally
const isCI = process.env.CI === 'true';
const maybeTest = isCI ? test.skip : test;

maybeTest('flaky test', () => {
  // Skipped in CI, runs everywhere else
});
4. Use test tags
Tag tests by importance:
describe('Payment Processing', () => {
  it('processes payment @critical', () => {
    // Always runs
  });

  it('handles edge case @low-priority', () => {
    // Runs less frequently
  });
});
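Because the tag lives in the test name, Jest's name filter can select on it:

# Run only tests tagged @critical (-t matches against the full test name)
npx jest -t '@critical'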
Monitoring and metrics
Key metrics to track
- Test execution time: Total pipeline duration
- Test pass rate: Percentage of passing tests
- Flaky test rate: Tests that fail intermittently
- Coverage trends: Code coverage over time
- Failure patterns: Common failure causes
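Most of these metrics can be derived from standard JUnit XML output. One approach is to emit it with the jest-junit reporter (an assumed dependency) and keep it as a build artifact for trend analysis:

- name: Run tests with JUnit output
  run: npm test -- --reporters=default --reporters=jest-junit
- name: Upload test report
  if: always()
  uses: actions/upload-artifact@v3
  with:
    name: junit-report
    path: junit.xml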
Setting up alerts
# Alert on test failures (the action reads the webhook from SLACK_WEBHOOK_URL)
- name: Notify on failure
  if: failure()
  uses: 8398a7/action-slack@v3
  with:
    status: ${{ job.status }}
    text: 'Tests failed in CI pipeline'
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Best practices
1. Keep tests fast
- Mock external dependencies
- Use in-memory databases
- Parallelize when possible
- Cache aggressively
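Mocking external dependencies is usually the biggest single speed win. A Jest sketch (the module path and its charge function are hypothetical):

// Replace the real payment client so unit tests never leave the process
jest.mock('./paymentGateway', () => ({
  charge: jest.fn().mockResolvedValue({ status: 'approved' }),
}));

const { charge } = require('./paymentGateway');

it('approves a valid payment without network I/O', async () => {
  const result = await charge({ amount: 100 });
  expect(result.status).toBe('approved');
});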
2. Maintain test stability
- Avoid time-dependent tests
- Use fixed test data
- Clean up after tests
- Isolate test environments
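Time-dependent behavior can be pinned with Jest's fake timers instead of reading the real clock:

beforeAll(() => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date('2024-01-15T00:00:00Z')); // freeze "now"
});

afterAll(() => {
  jest.useRealTimers();
});

it('computes expiry relative to the frozen clock', () => {
  const expiry = Date.now() + 24 * 60 * 60 * 1000; // 24 hours from "now"
  expect(new Date(expiry).toISOString()).toBe('2024-01-16T00:00:00.000Z');
});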
3. Balance coverage and speed
- Aim for 80%+ unit test coverage
- Focus integration tests on critical paths
- Limit E2E tests to happy paths
- Use code coverage to identify gaps
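The 80% target is more useful enforced than merely reported; Jest can fail the run when coverage drops below a threshold (the exact numbers are illustrative):

// jest.config.js: fail CI if coverage falls below the target
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, branches: 70 }
  }
};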
4. Automate everything
- No manual test steps in CI/CD
- Self-healing tests where possible
- Automatic retry for transient failures
- Automatic rollback on test failure
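Automatic retry for transient failures is built into Jest's default runner (jest-circus, Jest 27+); the endpoint below is illustrative:

// Retry each test in this file up to 2 extra times before reporting failure
jest.retryTimes(2);

it('tolerates a transient network blip', async () => {
  const res = await fetch(`${process.env.BASE_URL}/health`);
  expect(res.status).toBe(200);
});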
5. Continuous improvement
- Review test execution times regularly
- Remove obsolete tests
- Refactor slow tests
- Update test strategies based on metrics
Conclusion
Effective CI/CD testing requires careful planning and continuous optimization. Balance speed with coverage, prioritize critical tests, and use parallel execution and caching to keep pipelines fast. Remember: Fast feedback is better than perfect coverage.
The goal is not to test everything in every pipeline run, but to catch issues early while maintaining deployment velocity.
