Automated Unit Testing
Codity automatically generates comprehensive unit tests for your code changes, helping you maintain test coverage and catch bugs early.
Overview
Codity's automated unit testing feature:
- Analyzes your code changes in pull requests and merge requests
- Generates test files using industry-standard test frameworks
- Creates a pull request with the generated tests
- Runs tests in CI automatically (if configured)
- Auto-fixes failing tests (up to 3 attempts) to get them passing
This helps you maintain high test coverage without the manual effort of writing tests for every change.
How It Works
Step 1: Trigger Test Generation
Test generation is triggered by commenting in a pull request or merge request:
GitHub:
- Comment `/generate-tests` in any pull request

GitLab:
- Comment `/generate-tests` in any merge request

Azure DevOps:
- Comment `/generate-tests` in any pull request
Step 2: Code Analysis
Codity analyzes your code changes (a rough sketch of the detection step follows this list):
- Detects changed files in the pull request/merge request
- Identifies functions and methods that need testing
- Analyzes code structure to understand dependencies
- Determines test framework based on language
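The first two bullets can be pictured as a diff-driven filter. The sketch below is a minimal, hypothetical illustration in Python; the function name and extension list are ours, not Codity's internals:

```python
import subprocess

# Rough illustration only; Codity's analysis is internal and richer than this.
def changed_source_files(base_branch: str = "main") -> list[str]:
    """List files changed relative to the base branch, filtered to source code."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    source_extensions = (".go", ".py", ".rb", ".js", ".ts", ".java")
    return [f for f in diff.stdout.splitlines() if f.endswith(source_extensions)]
```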
Step 3: Test Generation
Codity generates comprehensive test files:
- Creates test files in the appropriate test framework
- Generates test cases covering:
  - Happy path scenarios
  - Edge cases (null, empty, boundary values)
  - Error handling
  - Multiple input variations
- Uses best practices for each language and framework
Step 4: Pull Request Creation
Codity creates a new pull request with the generated tests (an illustrative command-line equivalent follows this list):
- Creates a new branch (e.g., `codity-tests-pr-123`)
- Commits test files to the branch
- Opens a pull request with the tests
- Posts a comment in the original PR with a link to the test PR
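For intuition, the same flow could be reproduced by hand with git and the GitHub CLI. This sketch is illustrative only; Codity performs these steps through the provider's API, and the commit message and PR text here are our own:

```python
import subprocess

# Illustrative command-line equivalent of the flow above; not Codity's code.
# The branch name follows the documented codity-tests-pr-<number> pattern.
def open_test_pr(pr_number: int) -> None:
    branch = f"codity-tests-pr-{pr_number}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    subprocess.run(["git", "add", "."], check=True)
    subprocess.run(
        ["git", "commit", "-m", f"Add generated tests for PR #{pr_number}"],
        check=True,
    )
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    # Requires the GitHub CLI (gh) to be installed and authenticated.
    subprocess.run(
        ["gh", "pr", "create",
         "--title", f"Generated tests for PR #{pr_number}",
         "--body", f"Automated unit tests for #{pr_number}."],
        check=True,
    )
```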
Step 5: CI Execution (Optional)
If you have CI configured:
- CI runs automatically when the test PR is created
- Tests execute in your CI environment
- Results are reported in the test PR
Step 6: Auto-Fix (Optional)
If tests fail in CI, Codity enters a bounded auto-fix loop (sketched after this list):
- Codity analyzes failures and identifies issues
- Automatically fixes test code (up to 3 attempts)
- Pushes fixes to the test PR branch
- Re-runs CI to verify fixes
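Conceptually this is a bounded retry loop. A minimal sketch, assuming hypothetical run_ci and fix_tests helpers that are not real Codity APIs:

```python
MAX_FIX_ATTEMPTS = 3  # matches the documented limit

# run_ci() -> (passed, logs) and fix_tests(logs) are hypothetical stand-ins;
# they exist only to show the control flow.
def auto_fix_loop(initial_logs: str, fix_tests, run_ci) -> bool:
    logs = initial_logs
    for _ in range(MAX_FIX_ATTEMPTS):
        fix_tests(logs)          # rewrite failing tests, push to the test PR branch
        passed, logs = run_ci()  # CI re-runs against the pushed fix
        if passed:
            return True
    return False                 # after 3 attempts, manual review is needed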
Step 7: Review and Merge
Once tests pass:
- Review the generated tests in the test PR
- Verify test coverage meets your needs
- Merge the test PR to add tests to your codebase
- Merge the original PR with confidence
Supported Languages
Go
Test Framework: Standard testing package with testify/assert
Test File Pattern: *_test.go (e.g., calculator_test.go)
Features:
- Table-driven tests (Go best practice)
- Comprehensive error handling
- Context cancellation testing
- Goroutine cleanup testing
Example:
```go
package calculator

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// Add is the function under test in this package.
func TestAdd(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{"positive numbers", 2, 3, 5},
		{"with zero", 0, 5, 5},
		{"negative numbers", -1, 1, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, tt.want, Add(tt.a, tt.b))
		})
	}
}
```
Python
Test Framework: pytest
Test File Pattern: test_*.py (e.g., test_calculator.py)
Features:
- Pytest fixtures
- Parameterized tests
- Exception testing
- Mock support
Example:
```python
import pytest

from calculator import add  # module under test


def test_add():
    assert add(2, 3) == 5


@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (0, 5, 5),
    (-1, 1, 0),
])
def test_add_multiple(a, b, expected):
    assert add(a, b) == expected
```
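The features list above also mentions exception testing and mock support. The following sketch shows what such generated cases can look like, assuming a hypothetical calculator module with divide, fetch_rate, and convert functions:

```python
import pytest
from unittest.mock import patch

import calculator  # hypothetical module under test


def test_divide_by_zero_raises():
    # Assumes divide() performs plain division, so dividing by zero raises.
    with pytest.raises(ZeroDivisionError):
        calculator.divide(1, 0)


def test_convert_with_mocked_dependency():
    # patch.object swaps a hypothetical external lookup for a fixed value,
    # keeping the test deterministic; convert() is assumed to multiply by it.
    with patch.object(calculator, "fetch_rate", return_value=2.0):
        assert calculator.convert(10) == 20.0
```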
Ruby
Test Framework: RSpec
Test File Pattern: *_spec.rb (e.g., calculator_spec.rb)
Features:
- RSpec describe/context blocks
- Shared examples
- Mock and stub support
- Custom matchers
Example:
```ruby
RSpec.describe Calculator do
  describe "#add" do
    it "adds two positive numbers" do
      expect(Calculator.new.add(2, 3)).to eq(5)
    end

    context "with edge cases" do
      it "handles zero" do
        expect(Calculator.new.add(0, 5)).to eq(5)
      end
    end
  end
end
```
JavaScript/TypeScript
Test Framework: Jest
Test File Pattern: *.test.ts or *.test.js (e.g., calculator.test.ts)
Features:
- Jest matchers
- Mock functions
- Async/await testing
- Snapshot testing
Example:
```typescript
describe('Calculator', () => {
  it('adds two numbers', () => {
    expect(add(2, 3)).toBe(5);
  });

  it.each([
    [2, 3, 5],
    [0, 5, 5],
    [-1, 1, 0],
  ])('adds %d and %d to get %d', (a, b, expected) => {
    expect(add(a, b)).toBe(expected);
  });
});
```
Java
Test Framework: JUnit 5 with Mockito
Test File Pattern: *Test.java (e.g., CalculatorTest.java)
Features:
- JUnit 5 annotations
- Parameterized tests
- Mockito mocking
- Exception testing
Example:
```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.params.provider.Arguments.arguments;

import java.util.stream.Stream;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class CalculatorTest {

    @Test
    @DisplayName("Should add two numbers")
    void testAdd() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
    }

    @ParameterizedTest
    @MethodSource("additionTestCases")
    void testAddMultiple(int a, int b, int expected) {
        Calculator calc = new Calculator();
        assertEquals(expected, calc.add(a, b));
    }

    static Stream<Arguments> additionTestCases() {
        return Stream.of(
            arguments(2, 3, 5),
            arguments(0, 5, 5),
            arguments(-1, 1, 0)
        );
    }
}
```
Test Quality
Comprehensive Coverage
Generated tests cover:
- Happy path: Normal operation with valid inputs
- Edge cases: Null, empty, boundary values
- Error handling: Exception scenarios and error conditions
- Multiple scenarios: Parameterized tests with various inputs
Best Practices
Tests follow language-specific best practices:
- Go: Table-driven tests, proper error handling, nil checks
- Python: Pytest fixtures, parameterized tests, proper assertions
- Ruby: RSpec conventions, shared examples, proper mocking
- JavaScript/TypeScript: Jest matchers, async handling, proper mocks
- Java: JUnit 5 annotations, parameterized tests, Mockito integration
Maintainable Code
Generated tests are:
- Well-structured: Clear organization and naming
- Readable: Self-documenting test names and assertions
- Maintainable: Easy to update as code changes
- Consistent: Follows framework conventions
CI Integration
Setting Up CI
To enable automatic test execution:
- Configure a CI workflow for your language (see examples below)
- Ensure Codity has permission to create pull requests and push commits
- Tests run automatically once the test PR is created
CI Auto-Fix
If tests fail in CI:
- Codity detects failures from CI status
- Analyzes failure reasons (compilation errors, assertion failures, etc.)
- Automatically fixes test code (up to 3 attempts)
- Pushes fixes to the test PR branch
- Re-runs CI to verify fixes
Auto-fix capabilities (a toy classification sketch follows this list):
- Fixes compilation errors
- Corrects assertion failures
- Adjusts test expectations
- Removes problematic tests if unfixable
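For intuition, failure classification can be thought of as pattern-matching CI output. A toy sketch, with heuristics and labels that are illustrative rather than Codity's actual detection logic:

```python
# Toy heuristic over raw CI log text; real detection is internal and far
# more robust than substring matching.
def classify_failure(log: str) -> str:
    if "cannot find symbol" in log or "undefined:" in log:
        return "compilation_error"
    if "AssertionError" in log or "Expected:" in log:
        return "assertion_failure"
    if "ModuleNotFoundError" in log or "ImportError" in log:
        return "missing_dependency"
    return "unknown"
```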
CI Workflow Examples
GitHub Actions (Go):
```yaml
name: Go Tests
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - run: go test ./... -v
```
GitHub Actions (Python):
```yaml
name: Python Tests
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: pytest
```
What Gets Tested
Functions and Methods
Codity generates tests for:
- Public functions: All public functions and methods
- Changed code: Only code that was modified in the PR/MR
- Dependencies: Functions that depend on changed code
Test Scenarios
Each function gets multiple test scenarios, combined in the example after this list:
- Basic test: Simple happy path with valid inputs
- Table-driven test: Multiple inputs and expected outputs
- Edge case test: Null, empty, boundary values
- Error test: Exception scenarios (if applicable)
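Put together, the four scenario types might look like this in pytest for a hypothetical parse_age function (its name and behavior are assumed for illustration):

```python
import pytest

from users import parse_age  # hypothetical function under test


def test_parse_age_basic():
    # Basic test: happy path with a valid input.
    assert parse_age("42") == 42


@pytest.mark.parametrize("raw,expected", [
    ("0", 0),
    ("7", 7),
    ("120", 120),
])
def test_parse_age_variations(raw, expected):
    # Table-driven test: several inputs and expected outputs.
    assert parse_age(raw) == expected


def test_parse_age_empty_string():
    # Edge case: empty input (assumed to map to None here).
    assert parse_age("") is None


def test_parse_age_invalid_raises():
    # Error test: non-numeric input (assumed to raise ValueError).
    with pytest.raises(ValueError):
        parse_age("not-a-number")
```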
What Doesn't Get Tested
Excluded Code
Codity does not generate tests for:
- Generated code: Auto-generated files
- Third-party code: Vendor dependencies
- Binary files: Images, executables, etc.
- Configuration files: YAML, JSON configs (unless they contain logic)
Limitations
- Integration tests: Only unit tests are generated
- E2E tests: End-to-end tests are not generated
- Performance tests: Performance benchmarks are not generated
- UI tests: Frontend UI tests are not generated
Best Practices
When to Use
Good use cases:
- Adding new functions or methods
- Modifying existing logic
- Refactoring code
- Adding new features
Less suitable:
- Very simple changes (single-line fixes)
- Configuration-only changes
- Documentation-only changes
Review Generated Tests
Always review generated tests:
- Verify coverage: Ensure all important scenarios are covered
- Check assertions: Verify assertions match expected behavior
- Review edge cases: Ensure edge cases are properly handled
- Add missing tests: Add any additional tests you need
Maintain Tests
After merging generated tests:
- Keep tests updated: Update tests when code changes
- Add new tests: Add tests for new functionality
- Remove obsolete tests: Remove tests for deleted code
- Refactor as needed: Improve tests over time
Troubleshooting
Tests Not Generated
Possible causes:
- Language not supported
- No code changes detected
- Trigger command not recognized
How to fix:
- Verify language is supported (Go, Python, Ruby, JavaScript/TypeScript, Java)
- Ensure code changes exist in PR/MR
- Check trigger command spelling (`/generate-tests`)
Tests Fail in CI
Possible causes:
- Compilation errors
- Missing dependencies
- Incorrect assertions
How to fix:
- Codity will auto-fix up to 3 times
- Review CI logs for specific errors
- Fix tests manually if auto-fix doesn't resolve the issue
Tests Don't Cover Everything
Possible causes:
- Complex code structure
- Missing edge cases
- Incomplete analysis
How to fix:
- Review generated tests
- Add additional tests manually
- Provide feedback to improve generation
Feedback
Improving Test Generation
Your feedback helps improve test generation:
- Like tests: Click "Like" on test generation comments for helpful tests
- Dislike tests: Click "Dislike" for tests that aren't useful
- Report issues: Contact support with specific issues or suggestions
Next Steps
- What is Analyzed - Understand what Codity analyzes
- Policy Evaluation - Customize analysis behavior
- Troubleshooting - Resolve issues