# Code Improvements by Claude

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Code Improvements by Claude](#code-improvements-by-claude)
  - [General Code Organization Principles](#general-code-organization-principles)
    - [1. Package Structure](#1-package-structure)
    - [2. Code Style and Formatting](#2-code-style-and-formatting)
    - [3. Error Handling](#3-error-handling)
    - [4. API Design](#4-api-design)
    - [5. Documentation Practices](#5-documentation-practices)
      - [Code Documentation](#code-documentation)
      - [Project Documentation](#project-documentation)
    - [6. Testing Strategy](#6-testing-strategy)
      - [Types of Tests](#types-of-tests)
      - [Test Coverage Strategy](#test-coverage-strategy)
    - [7. Security Best Practices](#7-security-best-practices)
      - [Input Validation](#input-validation)
      - [Secure Coding](#secure-coding)
      - [Secrets Management](#secrets-management)
    - [8. Performance Considerations](#8-performance-considerations)
    - [9. Profiling and Benchmarking](#9-profiling-and-benchmarking)
      - [CPU Profiling](#cpu-profiling)
      - [Memory Profiling](#memory-profiling)
      - [Benchmarking](#benchmarking)
      - [Trace Profiling](#trace-profiling)
      - [Common Profiling Tasks](#common-profiling-tasks)
      - [pprof Web Interface](#pprof-web-interface)
      - [Key Metrics to Watch](#key-metrics-to-watch)
    - [10. Concurrency Patterns](#10-concurrency-patterns)
    - [11. Configuration Management](#11-configuration-management)
    - [12. Logging and Observability](#12-logging-and-observability)
  - [Non-Go Files](#non-go-files)
    - [GitHub Actions](#github-actions)
      - [Action File Formatting](#action-file-formatting)
        - [Release Management](#release-management)
        - [Create a README File](#create-a-readme-file)
        - [Testing and Automation](#testing-and-automation)
        - [Community Engagement](#community-engagement)
        - [Further Guidance](#further-guidance)
    - [Bash Scripts](#bash-scripts)
      - [Script Testing](#script-testing)
  - [Testing Principles](#testing-principles)
    - [1. Test Organization Strategy](#1-test-organization-strategy)
    - [2. Code Structure](#2-code-structure)
      - [Constants and Variables](#constants-and-variables)
      - [Helper Functions](#helper-functions)
    - [3. Test Case Patterns](#3-test-case-patterns)
      - [Table-Driven Tests (for simple cases)](#table-driven-tests-for-simple-cases)
      - [Individual Tests (for complex cases)](#individual-tests-for-complex-cases)
    - [4. Best Practices Applied](#4-best-practices-applied)
    - [5. Examples of Improvements](#5-examples-of-improvements)
      - [Before](#before)
      - [After](#after)
  - [Key Benefits](#key-benefits)
  - [Conclusion](#conclusion)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## General Code Organization Principles

### 1. Package Structure

- Keep packages focused and single-purpose
- Use internal packages for code not meant to be imported
- Organize by feature/domain rather than by type
- Follow Go standard layout conventions
### 2. Code Style and Formatting

- Use consistent naming conventions (e.g., CamelCase for exported names)
- Keep functions small and focused
- Use meaningful variable names
- Follow standard Go formatting guidelines (`gofmt`)
- Use comments to explain "why", not "what"
### 3. Error Handling

- Return errors rather than using panics
- Wrap errors with context when crossing package boundaries (see the sketch below)
- Create custom error types only when needed for client handling
- Use sentinel errors sparingly
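
A minimal sketch of context wrapping with `fmt.Errorf` and the `%w` verb; `loadManifest` is a hypothetical helper, and the point is that `errors.Is` still reaches the underlying cause without string matching:

```go
package manifest

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// loadManifest is a hypothetical helper; the wrapping pattern is the point.
func loadManifest(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Wrap with context when crossing a package boundary; %w keeps
		// the original error available to errors.Is / errors.As.
		return nil, fmt.Errorf("loading manifest %q: %w", path, err)
	}
	return data, nil
}

func example() {
	if _, err := loadManifest("missing.json"); errors.Is(err, fs.ErrNotExist) {
		// The caller can still detect the underlying cause.
	}
}
```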
### 4. API Design

- Make zero values useful
- Keep interfaces small and focused, observing the [single responsibility principle](https://en.wikipedia.org/wiki/Single-responsibility_principle)
- Observe the [open-closed principle](https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle): open for extension but closed to modification
- Observe the [dependency inversion principle](https://en.wikipedia.org/wiki/Dependency_inversion_principle) to keep interfaces loosely coupled
- Design for composition over inheritance
- Use option patterns for complex configurations (see the sketch after this list)
- Make dependencies explicit
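
A minimal sketch of the functional options pattern for a hypothetical `Client`; the names (`NewClient`, `WithTimeout`, `WithRetries`) are illustrative, not an existing API in this repository:

```go
package client

import "time"

type Client struct {
	timeout time.Duration
	retries int
}

// Option mutates a Client during construction.
type Option func(*Client)

func WithTimeout(d time.Duration) Option {
	return func(c *Client) { c.timeout = d }
}

func WithRetries(n int) Option {
	return func(c *Client) { c.retries = n }
}

// NewClient starts from useful defaults and applies only the options given.
func NewClient(opts ...Option) *Client {
	c := &Client{timeout: 30 * time.Second, retries: 3}
	for _, opt := range opts {
		opt(c)
	}
	return c
}
```

Callers only name the settings they care about, e.g. `NewClient(WithTimeout(5 * time.Second))`, and new options can be added later without breaking existing call sites.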
### 5. Documentation Practices

#### Code Documentation

- Write package documentation with examples
- Document exported symbols comprehensively
- Include usage examples in doc comments
- Document concurrency safety
- Add links to related functions/types

Example:
```go
// Package cache provides a caching mechanism for apt packages.
// It supports both saving and restoring package states, making it
// useful for CI/CD environments where package installation is expensive.
//
// Example usage:
//
//	cache := NewCache()
//	err := cache.SavePackages(packages)
//	if err != nil {
//		// handle error
//	}
package cache
```
#### Project Documentation

- Maintain a comprehensive README
- Include a getting started guide
- Document all configuration options
- Add troubleshooting guides
- Keep the changelog updated
- Include contribution guidelines
### 6. Testing Strategy

#### Types of Tests

1. **Unit Tests**
   - Test individual components
   - Mock dependencies
   - Focus on behavior, not implementation

2. **Integration Tests**
   - Test component interactions
   - Use real dependencies
   - Test complete workflows

3. **End-to-End Tests**
   - Test the full system
   - Use real external services
   - Verify key user scenarios
#### Test Coverage Strategy

- Aim for high but meaningful coverage
- Focus on critical paths
- Test edge cases and error conditions
- Balance the cost and benefit of each test
- Document untested scenarios
### 7. Security Best Practices

#### Input Validation

- Validate all external input
- Prefer strong types over raw strings
- Sanitize input properly
- Check array and slice bounds
- Validate file paths (see the sketch below)
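
A minimal sketch of file path validation that confines access to a base directory; `safeJoin` is a hypothetical helper, not part of this repository:

```go
package security

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins userPath onto baseDir and rejects any result that
// escapes baseDir (e.g. via "../" segments).
func safeJoin(baseDir, userPath string) (string, error) {
	// Cleaning against "/" first resolves ".." before the join.
	joined := filepath.Join(baseDir, filepath.Clean("/"+userPath))
	base := filepath.Clean(baseDir)
	if joined != base && !strings.HasPrefix(joined, base+string(filepath.Separator)) {
		return "", fmt.Errorf("path %q escapes base directory %q", userPath, baseDir)
	}
	return joined, nil
}
```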
#### Secure Coding

- Keep dependencies up to date
- Implement proper error handling
- Avoid command injection (see the sketch below)
- Use cryptographically secure random numbers (`crypto/rand`)
- Follow the principle of least privilege
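
A minimal sketch of two of these points, assuming a package name arrives as untrusted input: pass it as a discrete argument to `exec.Command` (never interpolate it into a shell string), and draw tokens from `crypto/rand` rather than `math/rand`. The validation regex is an illustrative approximation of Debian package-name rules:

```go
package security

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os/exec"
	"regexp"
)

var validPkg = regexp.MustCompile(`^[a-z0-9][a-z0-9+.-]*$`)

// installPackage validates the name and passes it as a discrete argv
// element, so shell metacharacters in pkg are never interpreted.
func installPackage(pkg string) error {
	if !validPkg.MatchString(pkg) {
		return fmt.Errorf("invalid package name: %q", pkg)
	}
	return exec.Command("apt-get", "install", "-y", pkg).Run()
}

// newToken draws randomness from crypto/rand, not math/rand.
func newToken() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}
```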
#### Secrets Management

- Never commit secrets
- Use environment variables (see the sketch below)
- Implement secure config loading
- Rotate credentials regularly
- Log access to sensitive operations
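
A minimal sketch of loading a required secret from the environment; the variable name is illustrative, and note that the value itself never appears in errors or logs:

```go
package secrets

import (
	"fmt"
	"os"
)

// requireSecret fails fast when a required secret is missing and
// deliberately keeps the value out of the error message and logs.
func requireSecret(name string) (string, error) {
	value, ok := os.LookupEnv(name)
	if !ok || value == "" {
		return "", fmt.Errorf("required secret %s is not set", name)
	}
	return value, nil
}

func example() error {
	token, err := requireSecret("GITHUB_TOKEN")
	if err != nil {
		return err
	}
	_ = token // use the token; never write it to logs
	return nil
}
```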
### 8. Performance Considerations

- Minimize allocations in hot paths
- Use `sync.Pool` for frequently allocated objects (see the sketch below)
- Consider memory usage in data structures
- Profile before optimizing
- Document performance characteristics
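
A minimal sketch of `sync.Pool` reuse for a hot formatting path; the `render` function is illustrative:

```go
package perf

import (
	"bytes"
	"sync"
)

// bufPool reuses buffers across calls instead of allocating per call.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // pooled objects keep old contents; always reset
	defer bufPool.Put(buf)

	buf.WriteString("package: ")
	buf.WriteString(name)
	return buf.String()
}
```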
### 9. Profiling and Benchmarking

#### CPU Profiling

```go
import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// Create CPU profile
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// Your code here
}
```

View with:

```bash
go tool pprof cpu.prof
```
#### Memory Profiling

```go
import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Create memory profile
	f, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Run your code

	runtime.GC() // get up-to-date heap statistics before writing
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}
```

View with:

```bash
go tool pprof -alloc_objects mem.prof
```
#### Benchmarking

Create benchmark tests with the naming pattern `Benchmark<Function>`:

```go
func BenchmarkMyFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		MyFunction()
	}
}
```

Run with:

```bash
go test -bench=. -benchmem
```
#### Trace Profiling

```go
import (
	"log"
	"os"
	"runtime/trace"
)

func main() {
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	defer trace.Stop()

	// Your code here
}
```

View with:

```bash
go tool trace trace.out
```
#### Common Profiling Tasks

1. **CPU Usage**

   ```bash
   # Capture a CPU profile while running benchmarks
   go test -cpuprofile=cpu.prof -bench=.
   go tool pprof cpu.prof
   ```

2. **Memory Allocations**

   ```bash
   # Track allocations
   go test -memprofile=mem.prof -bench=.
   go tool pprof -alloc_objects mem.prof
   ```

3. **Goroutine Blocking**

   ```bash
   # Track goroutine blocks
   go test -blockprofile=block.prof -bench=.
   go tool pprof block.prof
   ```

4. **Mutex Contention**

   ```bash
   # Track mutex contention
   go test -mutexprofile=mutex.prof -bench=.
   go tool pprof mutex.prof
   ```
#### pprof Web Interface

For visual analysis:

```bash
go tool pprof -http=:8080 cpu.prof
```
#### Key Metrics to Watch

1. **CPU Profile**
   - Hot functions
   - Call graph
   - Time per call
   - Call count

2. **Memory Profile**
   - Allocation count
   - Allocation size
   - Temporary allocations
   - Leak suspects

3. **Goroutine Profile** (see the sketch after this list)
   - Active goroutines
   - Blocked goroutines
   - Scheduling latency
   - Stack traces

4. **Trace Analysis**
   - GC frequency
   - GC duration
   - Goroutine scheduling
   - Network/syscall blocking
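
A minimal sketch of dumping the goroutine profile on demand, useful when chasing leaked or blocked goroutines; the debug level and output writer are illustrative choices:

```go
package diag

import (
	"os"
	"runtime/pprof"
)

// dumpGoroutines writes every goroutine's stack to stderr.
// debug=2 prints full stacks in a human-readable form.
func dumpGoroutines() error {
	return pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
}
```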
### 10. Concurrency Patterns

- Use channels for coordination, mutexes for state
- Keep critical sections small
- Document concurrency safety
- Use context for cancellation (see the sketch below)
- Consider rate limiting and backpressure
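
A minimal sketch combining channel coordination with context cancellation; `poll` and its parameters are illustrative:

```go
package worker

import (
	"context"
	"time"
)

// poll runs work on a fixed interval until the context is cancelled.
// Cancellation (or a context timeout) is the only way the loop exits.
func poll(ctx context.Context, interval time.Duration, work func()) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			work()
		}
	}
}
```

A caller wires in cancellation with `ctx, cancel := context.WithTimeout(context.Background(), time.Minute)` and `defer cancel()`.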
### 11. Configuration Management

- Use environment variables for deployment-specific values
- Validate configuration at startup (see the sketch below)
- Provide sensible defaults
- Support multiple configuration sources
- Document all configuration options
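
A minimal sketch of environment-driven configuration with defaults and startup validation; the variable names and fields are illustrative:

```go
package config

import (
	"fmt"
	"os"
	"strconv"
)

type Config struct {
	CacheDir string
	Workers  int
}

// Load reads deployment-specific values from the environment,
// falls back to sensible defaults, and validates at startup.
func Load() (*Config, error) {
	cfg := &Config{CacheDir: "/tmp/cache", Workers: 4}
	if dir := os.Getenv("CACHE_DIR"); dir != "" {
		cfg.CacheDir = dir
	}
	if raw := os.Getenv("CACHE_WORKERS"); raw != "" {
		n, err := strconv.Atoi(raw)
		if err != nil {
			return nil, fmt.Errorf("CACHE_WORKERS must be an integer: %w", err)
		}
		cfg.Workers = n
	}
	if cfg.Workers < 1 {
		return nil, fmt.Errorf("workers must be >= 1, got %d", cfg.Workers)
	}
	return cfg, nil
}
```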
### 12. Logging and Observability

- Use structured logging (see the sketch below)
- Include relevant context in logs
- Define log levels appropriately
- Add tracing for complex operations
- Include metrics for important operations
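
A minimal sketch of structured logging with the standard library's `log/slog` (Go 1.21+); the field names are illustrative:

```go
package main

import (
	"log/slog"
	"os"
	"time"
)

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: slog.LevelInfo,
	}))

	// Key/value context makes logs queryable instead of grep-only.
	logger.Info("cache restored",
		slog.String("key", "apt-pkgs-v2"),
		slog.Int("packages", 12),
		slog.Duration("elapsed", 340*time.Millisecond),
	)
}
```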
## Non-Go Files

### GitHub Actions

#### Action File Formatting

- Minimize the amount of shell code; put complex logic in the Go code
- Use clear step `id` names with dashes between words and active verbs
- Avoid hard-coded API URLs such as `https://api.github.com`. Use the environment variables (`GITHUB_API_URL` for the REST API, `GITHUB_GRAPHQL_URL` for GraphQL) or the `@actions/github` toolkit for dynamic URL handling
##### Release Management

- Use semantic versioning for releases (e.g., v1.0.0)
- Recommend that users reference major version tags (v1) instead of the default branch for stability
- Update major version tags to point to the latest release
##### Create a README File

Include a detailed description, required/optional inputs and outputs, secrets, environment variables, and usage examples.
##### Testing and Automation

- Add workflows to test your action on feature branches and pull requests
- Automate releases using workflows triggered by publishing or editing a release
##### Community Engagement

- Maintain a clear README with examples
- Add community health files like CODE_OF_CONDUCT and CONTRIBUTING
- Use badges to display workflow status
##### Further Guidance

For more details, visit:

- https://docs.github.com/en/actions/how-tos/create-and-publish-actions/manage-custom-actions
- https://docs.github.com/en/actions/how-tos/create-and-publish-actions/release-and-maintain-actions
### Bash Scripts

Project scripts should follow these guidelines:

- Create scripts in the `scripts` directory (not `tools`)
- Add new functionality to the `scripts/menu.sh` script for easy access
- Use the imperative verb form for script names:
  - Good: `export_version.sh`, `build_package.sh`, `run_tests.sh`
  - Bad: `version_export.sh`, `package_builder.sh`, `test_runner.sh`
- Follow consistent naming conventions:
  - Use lowercase with underscores
  - Start with a verb in imperative form
  - Use clear, descriptive names
- Make scripts executable (`chmod +x`)
- Include proper error handling and exit codes
- Add usage information (viewable with `-h` or `--help`)

Script Header Format:
```bash
#==============================================================================
# script_name.sh
#==============================================================================
#
# DESCRIPTION:
#   Brief description of what the script does.
#   Additional details if needed.
#
# USAGE:
#   ./scripts/script_name.sh [options]
#
# OPTIONS: (if applicable)
#   List of command-line options and their descriptions
#
# DEPENDENCIES:
#   - List of required tools and commands
#==============================================================================
```
Every script should include this header at the top, with all sections filled out appropriately. The header provides:

- Clear identification of the script
- A description of its purpose and functionality
- Usage instructions and examples
- Documentation of command-line options (if any)
- A list of required dependencies
#### Script Testing

All scripts must have corresponding tests in the `scripts/tests` directory using the common test library:

1. **Test File Structure**
   - Name test files as `<script_name>_test.sh`
   - Place them in the `scripts/tests` directory
   - Make test files executable (`chmod +x`)
   - Source the common test library (`test_lib.sh`)
2. **Common Test Library**

   The `test_lib.sh` library provides a standard test framework:

   ```bash
   # Source the test library
   source "$(dirname "$0")/test_lib.sh"
   ```

   The library provides:

   - Color output (GREEN, RED, BLUE, NC, BOLD)
   - Test counting (PASS, FAIL)
   - Temporary directory management
   - Standard argument parsing
   - Common test functions
   Key functions:

   - `test_case "name" "command" "expected_output" "should_succeed"` - Run a test case
   - `print_header "text"` - Print bold header
   - `print_section "text"` - Print section header in blue
   - `print_info "text"` - Print verbose info
   - `setup_test_env` - Create temp directory
   - `cleanup_test_env` - Clean up resources
   - `create_test_file "path" "content" "mode"` - Create test file
   - `is_command_available "cmd"` - Check if command exists
   - `wait_for_condition "cmd" timeout interval` - Wait for condition
   - `report_results` - Print test summary
3. **Test Organization**
   - Group related test cases into sections
   - Test each command/flag combination
   - Test error conditions explicitly
   - Include setup and teardown if needed
   - Use temporary directories for file operations
   - Clean up resources in trap handlers
4. **Test Coverage**
   - Test main functionality
   - Test error conditions
   - Test input validation
   - Test edge cases
   - Test each supported flag/option
Standard test framework:

```bash
#!/bin/bash

# Colors for test output
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Test counters
PASS=0
FAIL=0

# Main test case function
function test_case() {
    local name=$1
    local cmd=$2
    local expected_output=$3
    local should_succeed=${4:-true}

    echo -n "Testing $name... "

    # Run command and capture output
    local output
    if [[ $should_succeed == "true" ]]; then
        output=$($cmd 2>&1)
        local status=$?
        if [[ $status -eq 0 && $output == *"$expected_output"* ]]; then
            echo -e "${GREEN}PASS${NC}"
            ((PASS++))
            return 0
        fi
    else
        output=$($cmd 2>&1) || true
        if [[ $output == *"$expected_output"* ]]; then
            echo -e "${GREEN}PASS${NC}"
            ((PASS++))
            return 0
        fi
    fi

    echo -e "${RED}FAIL${NC}"
    echo "  Expected output to contain: '$expected_output'"
    echo "  Got: '$output'"
    ((FAIL++))
    return 0
}

# Create a temporary directory for test files
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT

# Test sections should be organized like this:
echo "Running script_name.sh tests..."
echo "------------------------------"

# Section 1: Input Validation
test_case "no arguments provided" \
    "./script_name.sh" \
    "error: arguments required" \
    false

# Section 2: Main Functionality
test_case "basic operation" \
    "./script_name.sh arg1" \
    "success" \
    true

# Report results
echo
echo "Test Results:"
echo "Passed: $PASS"
echo "Failed: $FAIL"
exit $FAIL
```
Example test file structure:

```bash
#!/bin/bash

# Test script for example_script.sh
set -e

# Get the directory containing this script
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
PROJECT_ROOT="$(dirname "$(dirname "$SCRIPT_DIR")")"

# Create a temporary directory for test files
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT

# Test helper functions
setup_test_env() {
    : # Setup code here (bash functions need at least one command)
}

teardown_test_env() {
    : # Cleanup code here
}

# Individual test cases
test_main_functionality() {
    echo "Testing main functionality..."
    # Test code here
}

test_error_handling() {
    echo "Testing error handling..."
    # Test code here
}

# Run all tests
echo "Running example_script.sh tests..."
setup_test_env
test_main_functionality
test_error_handling
teardown_test_env

echo "All tests passed!"
```
5. **CI Integration**
   - Tests run automatically in CI
   - Tests must pass before merge
   - Test execution is part of the validate-scripts job
   - Test failures block PR merges
Example script structure:

```bash
#!/bin/bash

# Script Name: example_script.sh
# Description: Brief description of what the script does
# Usage: ./example_script.sh [options] <arguments>
# Author: Your Name
# Date: YYYY-MM-DD

set -e # Exit on error

# Help function
show_help() {
    cat << EOF
Usage: $(basename "$0") [options] <arguments>

Options:
  -h, --help     Show this help message
  -v, --version  Show version information

Arguments:
  <input>        Description of input argument
EOF
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            show_help
            exit 0
            ;;
        *)
            # Handle other arguments
            ;;
    esac
    shift
done

# Main script logic here
```
## Testing Principles

### 1. Test Organization Strategy

We established a balanced approach to test organization:

- Use table-driven tests for simple, repetitive cases without introducing logic
- Use individual test functions for cases that require specific Arrange, Act, Assert steps that cannot be shared among other cases
- Group related test cases that operate on the same API method/function
### 2. Code Structure

#### Constants and Variables

```go
const (
	manifestVersion   = "1.0.0"
	manifestGlobalVer = "v2"
)

var (
	fixedTime  = time.Date(2025, 8, 28, 10, 0, 0, 0, time.UTC)
	sampleData = NewTestData()
)
```
- Define constants for fixed values where only the presence and format matter and the value itself does not affect the behavior under test
- Use variables for reusable test data
- Group related constants and variables together
- Do not prefix constants or variables with `test`
#### Helper Functions

Simple examples of factory and assert functions:

```go
func createTestFile(t *testing.T, dir string, content string) string {
	t.Helper()
	path := filepath.Join(dir, "test_file.txt")
	if err := os.WriteFile(path, []byte(content), 0o644); err != nil {
		t.Fatalf("failed to create test file: %v", err)
	}
	return path
}

func assertValidJSON(t *testing.T, data string) {
	t.Helper()
	if !json.Valid([]byte(data)) {
		t.Fatalf("expected valid JSON, got: %q", data)
	}
}
```
Example of using functions to abstract away details not relevant to the behavior under test:

```go
type Item struct {
	Name         string
	Description  string
	Version      string
	LastModified time.Time
}

// BAD: Mixed concerns, unclear test name, magic values
func TestItem_Description(t *testing.T) {
	item := Item{
		Name:         "test item",
		Description:  "original description",
		Version:      "1.0.0",
		LastModified: time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC),
	}

	AddPrefixToDescription(&item, "prefix: ")

	if item.Description != "prefix: original description" {
		t.Errorf("got %q, want %q", item.Description, "prefix: original description")
	}
}

// GOOD: Clear focus, reusable arrangement, proper assertions
const (
	defaultName    = "test item"
	defaultVersion = "1.0.0"
	defaultTimeStr = "2025-01-01T00:00:00Z"
)

func createTestItem(t *testing.T, description string) *Item {
	t.Helper()
	defaultTime, err := time.Parse(time.RFC3339, defaultTimeStr)
	if err != nil {
		t.Fatalf("failed to parse default time: %v", err)
	}

	return &Item{
		Name:         defaultName,
		Description:  description,
		Version:      defaultVersion,
		LastModified: defaultTime,
	}
}

func TestAddPrefixToDescription_WithValidInput_AddsPrefix(t *testing.T) {
	// Arrange
	item := createTestItem(t, "original description")
	const want = "prefix: original description"

	// Act
	AddPrefixToDescription(item, "prefix: ")

	// Assert
	assert.Equal(t, want, item.Description, "description should have prefix")
}
```
- Create helper functions to reduce duplication and keep tests focused on how the arranged inputs correspond to the expected output
- Use `t.Helper()` for proper test failure reporting
- Keep helpers focused and single-purpose
- Helper functions that require logic should go into their own file and have tests
### 3. Test Case Patterns

#### Table-Driven Tests (for simple cases)

Simple, repetitive cases share arrangement and assertion helpers. Written out per case they look like this; a table-driven variant that collapses them into one function follows the block.

```go
// Each test case is its own function - no loops or conditionals in test body
func TestFormatMessage_WithEmptyString_ReturnsError(t *testing.T) {
	// Arrange
	input := ""

	// Act
	actual, err := FormatMessage(input)

	// Assert
	assertFormatError(t, actual, err, "input cannot be empty")
}

func TestFormatMessage_WithValidInput_ReturnsUpperCase(t *testing.T) {
	// Arrange
	input := "test message"
	expected := "TEST MESSAGE"

	// Act
	actual, err := FormatMessage(input)

	// Assert
	assertFormatSuccess(t, actual, err, expected)
}

func TestFormatMessage_WithMultipleSpaces_PreservesSpacing(t *testing.T) {
	// Arrange
	input := "hello world"
	expected := "HELLO WORLD"

	// Act
	actual, err := FormatMessage(input)

	// Assert
	assertFormatSuccess(t, actual, err, expected)
}

// Helper functions for common assertions
func assertFormatSuccess(t *testing.T, actual string, err error, expected string) {
	t.Helper()
	assert.NoError(t, err)
	assert.Equal(t, expected, actual, "formatted message should match")
}

func assertFormatError(t *testing.T, actual string, err error, expectedErrMsg string) {
	t.Helper()
	assert.Error(t, err)
	assert.Contains(t, err.Error(), expectedErrMsg)
	assert.Empty(t, actual)
}
```
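
A minimal sketch of the table-driven form for the same `FormatMessage` cases, reusing the assertion helpers above; `t.Run` gives each row its own subtest name:

```go
func TestFormatMessage(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		expected string
		wantErr  string // empty means success is expected
	}{
		{name: "empty string returns error", input: "", wantErr: "input cannot be empty"},
		{name: "valid input returns upper case", input: "test message", expected: "TEST MESSAGE"},
		{name: "multiple spaces preserved", input: "hello world", expected: "HELLO WORLD"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			actual, err := FormatMessage(tt.input)
			if tt.wantErr != "" {
				assertFormatError(t, actual, err, tt.wantErr)
				return
			}
			assertFormatSuccess(t, actual, err, tt.expected)
		})
	}
}
```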
#### Individual Tests (for complex cases)

```go
func TestProcessTransaction_WithConcurrentUpdates_PreservesConsistency(t *testing.T) {
	// Arrange
	store := NewTestStore(t)
	defer store.Close()

	const accountID = "test-account"
	initialBalance := decimal.NewFromInt(1000)
	arrangeErr := arrangeTestAccount(t, store, accountID, initialBalance)
	require.NoError(t, arrangeErr)

	// Act
	actualBalance, err := executeConcurrentTransactions(t, store, accountID)

	// Assert
	require.NoError(t, err)
	expected := initialBalance.Add(decimal.NewFromInt(100)) // 100 transactions of 1 unit each
	assertBalanceEquals(t, expected, actualBalance)
}

// Helper functions to keep test body clean and linear
func arrangeTestAccount(t *testing.T, store *Store, accountID string, balance decimal.Decimal) error {
	t.Helper()
	return store.SetBalance(accountID, balance)
}

func executeConcurrentTransactions(t *testing.T, store *Store, accountID string) (decimal.Decimal, error) {
	t.Helper()
	const numTransactions = 100
	var wg sync.WaitGroup
	wg.Add(numTransactions)

	for i := 0; i < numTransactions; i++ {
		go func() {
			defer wg.Done()
			amount := decimal.NewFromInt(1)
			_, err := store.ProcessTransaction(accountID, amount)
			assert.NoError(t, err)
		}()
	}
	wg.Wait()

	return store.GetBalance(accountID)
}

func assertBalanceEquals(t *testing.T, expected, actual decimal.Decimal) {
	t.Helper()
	assert.True(t, expected.Equal(actual),
		"balance should be %s, actual was %s", expected, actual)
}
```
### 4. Best Practices Applied

1. **Clear Naming**
   - Use descriptive test names
   - Use standard test name formats:
     - `Test<function>_<arrangement>_<expectation>` for free functions, and
     - `Test<interface><function>_<arrangement>_<expectation>` for interface functions
   - Name test data clearly and meaningfully
   - Name by abstraction, not implementation
   - Use `expected` for expected values
   - Use `actual` for function results
   - Keep test variables consistent across all tests
   - Always use "Arrange", "Act", "Assert" as step comments in tests

2. **Test Structure**
   - Keep the test body simple and linear
   - No loops or conditionals in the test body
   - Move complex arrangement to helper functions
   - Use table tests for multiple cases, not ad hoc loops in the test body
   - Extract complex assertions into helper functions

3. **Code Organization**
   - Group related constants and variables
   - Place helper functions at the bottom
   - Organize tests by function under test
   - Follow the arrange-act-assert pattern

4. **Test Data Management**
   - Centralize test data definitions
   - Use `<function>_<arrangement>_<artifact>` naming
   - Use constants for fixed values
   - Abstract complex data arrangement into helpers

5. **Error Handling**
   - Test both success and error cases
   - Use clear error messages
   - Validate error types and messages
   - Handle expected and unexpected errors

6. **Assertions**
   - Use consistent assertion patterns
   - Include helpful failure messages
   - Group related assertions logically
   - Test one concept per assertion
### 5. Examples of Improvements

#### Before

```go
func TestFeature(t *testing.T) {
	// Mixed arrangement and assertions
	// Duplicated code
	// Magic values
}
```
#### After

```go
// Before: Mixed concerns, unclear naming, magic values
func TestValidateConfig(t *testing.T) {
	c := &Config{
		Path:       "./testdata",
		Port:       8080,
		MaxRetries: 3,
	}

	if err := c.Validate(); err != nil {
		t.Error("validation failed")
	}

	c.Path = ""
	if err := c.Validate(); err == nil {
		t.Error("expected error for empty path")
	}
}

// After: Clear structure, meaningful constants, proper test naming
const (
	testConfigPath    = "./testdata"
	defaultPort       = 8080
	defaultMaxRetries = 3
)

func TestValidateConfig_WithValidInputs_Succeeds(t *testing.T) {
	// Arrange
	config := &Config{
		Path:       testConfigPath,
		Port:       defaultPort,
		MaxRetries: defaultMaxRetries,
	}

	// Act
	err := config.Validate()

	// Assert
	assert.NoError(t, err, "valid config should pass validation")
}

func TestValidateConfig_WithEmptyPath_ReturnsError(t *testing.T) {
	// Arrange
	config := &Config{
		Path:       "", // Invalid
		Port:       defaultPort,
		MaxRetries: defaultMaxRetries,
	}

	// Act
	err := config.Validate()

	// Assert
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "path cannot be empty")
}
```
## Key Benefits

1. **Maintainability**
   - Easier to update and modify tests
   - Clear structure for adding new tests
   - Reduced code duplication

2. **Readability**
   - Clear test intentions
   - Well-organized code
   - Consistent patterns

3. **Reliability**
   - Thorough error testing
   - Consistent assertions
   - Proper test isolation

4. **Efficiency**
   - Reusable test components
   - Reduced boilerplate
   - Faster test writing
## Conclusion

These improvements make the test code:

- More maintainable
- Easier to understand
- More reliable
- More efficient to extend

The patterns and principles can be applied across different types of tests to create a consistent and effective testing strategy.