Code improvements by Claude
- General Code Organization Principles
- Non-Go Files
- Testing Principles
- Key Benefits
- Conclusion
General code organization principles
1. Package structure
- Keep packages focused and single-purpose
- Use internal packages for code not meant to be imported
- Organize by feature/domain rather than by type
- Follow Go standard layout conventions
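For illustration only, a feature-oriented layout with an internal package (the names myapp, billing, and inventory are hypothetical) might look like:
myapp/
├── cmd/
│   └── myapp/        (main package: wiring and startup only)
├── internal/
│   ├── billing/      (domain logic, not importable from other modules)
│   └── inventory/
└── go.mod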
2. Code style and formatting
- Let gofmt handle indentation and layout (Go source uses tabs)
- Consistent naming conventions (for example, CamelCase for exported names)
- Keep functions small and focused
- Use meaningful variable names
- Follow standard Go formatting guidelines
- Use comments to explain "why" not "what"
3. Error handling
- Return errors rather than using panics
- Wrap errors with context when crossing package boundaries
- Create custom error types only when needed for client handling
- Use sentinel errors sparingly
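A minimal sketch of the error-handling conventions above (ErrNotFound, LoadProfile, and readProfile are illustrative names, not an existing API):
import (
    "errors"
    "fmt"
)

// ErrNotFound is a sentinel error callers can match with errors.Is.
var ErrNotFound = errors.New("profile not found")

// readProfile is a stub standing in for a lower-level call.
func readProfile(id string) (string, error) {
    return "", ErrNotFound
}

// LoadProfile returns a wrapped error with context instead of panicking.
func LoadProfile(id string) (string, error) {
    data, err := readProfile(id)
    if err != nil {
        return "", fmt.Errorf("load profile %q: %w", id, err)
    }
    return data, nil
}
Callers can then test errors.Is(err, ErrNotFound) without depending on the wrapped message text.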
4. API design
- Make zero values useful
- Keep interfaces small and focused, observing the single responsibility principle
- Observe the open-closed principle: types should be open for extension but closed to modification
- Observe the dependency inversion principle to keep interfaces loosely coupled
- Design for composition over inheritance
- Use option patterns for complex configurations
- Make dependencies explicit
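A sketch of the functional options pattern with explicit defaults (Server, WithTimeout, and NewServer are hypothetical names):
import "time"

// Option configures a Server without a long constructor signature.
type Option func(*Server)

type Server struct {
    addr    string
    timeout time.Duration
}

// WithTimeout overrides the default timeout.
func WithTimeout(d time.Duration) Option {
    return func(s *Server) { s.timeout = d }
}

// NewServer applies options on top of useful defaults.
func NewServer(addr string, opts ...Option) *Server {
    s := &Server{addr: addr, timeout: 30 * time.Second}
    for _, opt := range opts {
        opt(s)
    }
    return s
}
A caller then writes NewServer(":8080", WithTimeout(5*time.Second)) and omits any options it does not need.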
5. Documentation practices
Go code documentation standards
Following the official Go Documentation Guidelines:
Package Documentation
- Every package must have a doc comment immediately before the package statement
- Format: // Package xyz ... (first sentence), followed by a detailed description
- First sentence should be a summary beginning with "Package xyz"
- Follow with a blank line and detailed documentation
- Include package-level examples if helpful
Exported Items Documentation
- Document all exported (capitalized) names
- Comments must begin with the name being declared
- First sentence should be a summary
- Omit the subject when it's the thing being documented
- Use article "a" for types that could be one of many, "the" for singletons
Examples:
// List represents a singly-linked list.
// A zero List is valid and represents an empty list.
type List struct{}

// NewRing creates a new ring buffer with the given size.
func NewRing(size int) *Ring {}

// Append adds the elements to the list.
// Blocks if buffer is full.
func (l *List) Append(elems ...interface{}) {}
Documentation Style
- Write clear, complete sentences
- Begin comments with a capital letter
- End sentences with punctuation
- Keep comments up to date with code changes
- Focus on behavior users can rely on, not implementation
- Document synchronization assumptions for concurrent access
- Document any special error conditions or panics
Examples
- Add examples for complex types or functions using Example functions
- Include examples in package docs for important usage patterns
- Make examples self-contained and runnable
- Use realistic data and common use cases
- Show output in comments when examples print output:

func ExampleHello() {
    fmt.Println("Hello")
    // Output: Hello
}
Doc Comments Format
- Use complete sentences and proper punctuation
- Add a blank line between paragraphs
- Use lists and code snippets for clarity
- Include links to related functions/types where helpful
- Document parameters and return values implicitly in the description
- Break long lines at 80 characters
Quality Control
- Run go doc to verify how documentation will appear
- Review documentation during code reviews
- Keep examples up to date and passing
- Update docs when changing behavior
Code documentation
- Write package documentation with examples
- Document exported symbols comprehensively
- Include usage examples in doc comments
- Document concurrency safety
- Add links to related functions/types
Example:
// key.go
//
// Description:
//
// Provides types and functions for managing cache keys, including serialization, deserialization,
// and validation of package metadata.
//
// Package: cache
//
// Example usage:
//
// // Create a new cache key
// key := cache.NewKey(packages, "v1.0", "v2", "amd64")
//
// // Get the hash of the key
// hash := key.Hash()
// fmt.Printf("Key hash: %x\n", hash)
package cache
Project documentation
- Maintain a comprehensive README
- Include getting started guide
- Document all configuration options
- Add troubleshooting guides
- Keep changelog updated
- Include contribution guidelines
6. Testing strategy
Types of tests
Unit Tests
- Test individual components
- Mock dependencies
- Focus on behavior, not implementation

Integration Tests
- Test component interactions
- Use real dependencies
- Test complete workflows

End-to-End Tests
- Test full system
- Use real external services
- Verify key user scenarios
Test coverage strategy
- Aim for high but meaningful coverage
- Focus on critical paths
- Test edge cases and error conditions
- Balance cost vs benefit of testing
- Document untested scenarios
7. Security best practices
Input validation
- Validate all external input
- Use strong types over strings
- Sanitize and normalize input before use
- Check slice and array bounds before indexing
- Validate file paths
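A hedged sketch of validating a file path against a trusted base directory (safeJoin is an illustrative helper, not an existing function):
import (
    "fmt"
    "path/filepath"
    "strings"
)

// safeJoin resolves name under baseDir and rejects path traversal.
func safeJoin(baseDir, name string) (string, error) {
    // Cleaning with a leading "/" strips any ".." segments before joining.
    cleaned := filepath.Join(baseDir, filepath.Clean("/"+name))
    if cleaned != baseDir && !strings.HasPrefix(cleaned, baseDir+string(filepath.Separator)) {
        return "", fmt.Errorf("invalid path %q: escapes base directory", name)
    }
    return cleaned, nil
}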
Secure coding
- Use latest dependencies
- Implement proper error handling
- Avoid command injection
- Use secure random numbers
- Follow principle of least privilege
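A minimal sketch of generating a secure random token with crypto/rand (the 32-byte length is an arbitrary choice):
import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
)

// newToken returns a hex-encoded 32-byte token from a cryptographic RNG.
func newToken() (string, error) {
    buf := make([]byte, 32)
    if _, err := rand.Read(buf); err != nil {
        return "", fmt.Errorf("generate token: %w", err)
    }
    return hex.EncodeToString(buf), nil
}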
Secrets management
- Never commit secrets
- Use environment variables
- Implement secure configuration loading
- Rotate credentials regularly
- Log access to sensitive operations
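A small sketch of reading a secret from the environment rather than source code (SERVICE_API_TOKEN is a hypothetical variable name):
import (
    "fmt"
    "os"
)

// apiTokenFromEnv fails fast when the secret is missing instead of
// falling back to a hard-coded value.
func apiTokenFromEnv() (string, error) {
    token := os.Getenv("SERVICE_API_TOKEN")
    if token == "" {
        return "", fmt.Errorf("SERVICE_API_TOKEN is not set")
    }
    return token, nil
}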
8. Performance considerations
- Minimize allocations in hot paths
- Use sync.Pool for frequently allocated objects
- Consider memory usage in data structures
- Profile before optimizing
- Document performance characteristics
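A sketch of reusing frequently allocated buffers with sync.Pool (render is an illustrative hot-path function):
import (
    "bytes"
    "sync"
)

var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

// render borrows a buffer from the pool and returns it when done,
// reducing allocations on the hot path.
func render(msg string) string {
    buf := bufPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()
        bufPool.Put(buf)
    }()
    buf.WriteString("msg=")
    buf.WriteString(msg)
    return buf.String()
}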
9. Profiling and benchmarking
CPU profiling
import (
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    // Create CPU profile file
    f, err := os.Create("cpu.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    pprof.StartCPUProfile(f)
    defer pprof.StopCPUProfile()
    // Your code here
}
View with:
go tool pprof cpu.prof
Memory profiling
import (
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    // Create memory profile file
    f, err := os.Create("mem.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    // Run your code, then write the heap profile
    pprof.WriteHeapProfile(f)
}
View with:
go tool pprof -alloc_objects mem.prof
Benchmarking
Create benchmark tests with naming pattern Benchmark<Function>:
func BenchmarkMyFunction(b *testing.B) {
for i := 0; i < b.N; i++ {
MyFunction()
}
}
Run with:
go test -bench=. -benchmem
Trace profiling
import (
    "log"
    "os"
    "runtime/trace"
)

func main() {
    f, err := os.Create("trace.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    trace.Start(f)
    defer trace.Stop()
    // Your code here
}
View with:
go tool trace trace.out
Common profiling tasks
CPU Usage
# Profile CPU usage during benchmarks
go test -cpuprofile=cpu.prof -bench=.
go tool pprof cpu.prof

Memory Allocations
# Track allocations
go test -memprofile=mem.prof -bench=.
go tool pprof -alloc_objects mem.prof

Goroutine Block Profiling
# Track goroutine blocks
go test -blockprofile=block.prof -bench=.
go tool pprof block.prof

Mutex Contention
# Track mutex contention
go test -mutexprofile=mutex.prof -bench=.
go tool pprof mutex.prof
pprof web interface
For visual analysis:
go tool pprof -http=:8080 cpu.prof
Key metrics to watch
CPU Profile
- Hot functions
- Call graph
- Time per call
- Call count

Memory Profile
- Allocation count
- Allocation size
- Temporary allocations
- Leak suspects

Goroutine Profile
- Active goroutines
- Blocked goroutines
- Scheduling latency
- Stack traces

Trace Analysis
- GC frequency
- GC duration
- Goroutine scheduling
- Network/syscall blocking
10. Concurrency patterns
- Use channels for coordination, mutexes for state
- Keep critical sections small
- Document concurrency safety
- Use context for cancellation
- Consider rate limiting and load shedding
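A sketch combining several of these points: a channel for coordination, a mutex guarding a small critical section, and context for cancellation (counter and consume are illustrative names):
import (
    "context"
    "sync"
)

// counter keeps its critical section small: lock, mutate, unlock.
type counter struct {
    mu sync.Mutex
    n  int
}

func (c *counter) add(delta int) {
    c.mu.Lock()
    c.n += delta
    c.mu.Unlock()
}

// consume drains jobs until the channel is closed or the context is cancelled.
func consume(ctx context.Context, jobs <-chan int, c *counter) {
    for {
        select {
        case <-ctx.Done():
            return
        case j, ok := <-jobs:
            if !ok {
                return
            }
            c.add(j)
        }
    }
}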
11. Configuration management
- Use environment variables for deployment-specific values
- Validate configuration at startup
- Provide sensible defaults
- Support multiple configuration sources
- Document all configuration options
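A minimal sketch of startup-time configuration loading with defaults and validation (the APP_PORT and APP_LOG_LEVEL variable names are hypothetical):
import (
    "fmt"
    "os"
)

type Config struct {
    Port     string
    LogLevel string
}

// LoadConfig reads deployment-specific values from the environment,
// applies sensible defaults, and validates at startup.
func LoadConfig() (Config, error) {
    cfg := Config{
        Port:     os.Getenv("APP_PORT"),
        LogLevel: os.Getenv("APP_LOG_LEVEL"),
    }
    if cfg.Port == "" {
        cfg.Port = "8080"
    }
    if cfg.LogLevel == "" {
        cfg.LogLevel = "info"
    }
    switch cfg.LogLevel {
    case "debug", "info", "warn", "error":
    default:
        return Config{}, fmt.Errorf("invalid APP_LOG_LEVEL %q", cfg.LogLevel)
    }
    return cfg, nil
}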
12. Logging and observability
- Use structured logging
- Include relevant context in logs
- Define log levels appropriately
- Add tracing for complex operations
- Include metrics for important operations
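A brief sketch of structured logging with the standard library's log/slog (available since Go 1.21); the field names are illustrative:
import (
    "log/slog"
    "os"
)

func main() {
    // JSON handler emits machine-parseable, structured records.
    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: slog.LevelInfo,
    }))
    // Key/value attributes carry the relevant context with each event.
    logger.Info("cache refresh complete",
        "component", "cache",
        "keys", 42,
    )
}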
Non-Go files
GitHub Actions
Action file formatting
- Minimize the amount of shell code and put complex logic in the Go code
- Use clear step id names that use dashes between words and active verbs
- Avoid hard-coded API URLs like https://api.github.com. Use environment variables (GITHUB_API_URL for REST API, GITHUB_GRAPHQL_URL for GraphQL) or the @actions/github toolkit for dynamic URL handling
Release management
- Use semantic versioning for releases (for example, v1.0.0)
- Recommend users reference major version tags (v1) instead of the default branch for stability.
- Update major version tags to point to the latest release
Create a README file
Include a detailed description, required/optional inputs and outputs, secrets, environment variables, and usage examples
Testing and automation
- Add workflows to test your action on feature branches and pull requests
- Automate releases using workflows triggered by publishing or editing a release.
Community engagement
- Maintain a clear README with examples.
- Add community health files like CODE_OF_CONDUCT and CONTRIBUTING.
- Use badges to display workflow status.
Further guidance
For more details, visit:
- https://docs.github.com/en/actions/how-tos/create-and-publish-actions/manage-custom-actions
- https://docs.github.com/en/actions/how-tos/create-and-publish-actions/release-and-maintain-actions
YAML formatting
Quoting guidelines
Follow these rules for consistent YAML formatting:
DO quote when required:
# Strings with special characters or spaces
version: "test version with spaces"
name: "app-v1.2.3"
message: "Value contains: colons, commas, quotes"
# Empty strings
packages: ""
input: ""
# Values that could be interpreted as other types
id: "123" # Prevents interpretation as number
flag: "true" # Prevents interpretation as boolean
version: "1.0" # Prevents interpretation as number
# YAML special values that should be strings
value: "null" # String "null", not null value
enable: "false" # String "false", not boolean false
DO NOT quote simple values:
# Booleans
debug: false
enabled: true
# Numbers
count: 42
version: 1.2
# Simple strings without special characters
name: ubuntu-latest
step: checkout
action: setup-node
# GitHub Actions expressions (never quote these)
if: github.event_name == 'push'
with:
  token: ${{ secrets.TOKEN }}
GitHub Actions specific guidelines:
# Action references - never quote
uses: actions/checkout@v4
uses: ./path/to/local/action
# Boolean inputs - don't quote
debug: false
cache: true
# Version strings with special chars - quote if needed
version: "v1.2.3-beta"
# Expressions - never quote
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "${{ github.actor }}"
Formatting standards
- Use 2 spaces for indentation
- Use - for list items with proper indentation
- Keep consistent spacing around colons
- Use block scalar | for multiline strings
- Use folded scalar > for wrapped text
Example of well-formatted YAML:
name: CI Pipeline
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          npm install
          npm test
        env:
          NODE_ENV: test
          DEBUG: false
Bash scripts
Project scripts should follow these guidelines:
File and directory structure
All scripts must go into the project's script directory with specific guidance below
scripts
├── dev (directory for local development)
│ ├── *.sh (scripts used for development)
│ ├── lib.sh (library to use for common development functionality)
│ └── tests
│ ├── *.sh (corresponding tests for all scripts in the parent)
│ ├── template_test.sh (example template for all tests)
│ └── test_lib.sh (library to use for common testing functionality)
├── *.sh (files used for integration and deployment operations)
├── lib.sh (general library for common functionality applying to all script types)
├── lib_test.sh (testing framework that all script tests must use)
├── template_test.sh (example template for all tests)
└── template.sh (example template for all development scripts)
- scripts is where all scripts go, either at root or in sub-folders
- scripts/*.sh is where non-development functionality, such as integration and deployment functionality, goes
- dev is for local development functionality only
- dev/lib.sh is where common functionality for local development scripts goes
- dev/tests is where local development script tests go
- lib_test.sh is the testing framework that all script tests must use
- lib.sh is the common functionality applying to both root scripts and dev scripts
- template_test.sh is the baseline template that all script tests must be created from
- template.sh is the baseline template that all scripts must be created from
- Make scripts executable (chmod +x)
- Add new development script functionality to the scripts/dev/menu.sh script for easy access
- Always use template.sh when creating a script
Style and format rules
- MANDATORY: All Bash scripts must strictly follow the Google Bash Style Guide for naming, formatting, comments, and best practices.
- MANDATORY: All Bash scripts must pass ShellCheck with no warnings or errors.
- MANDATORY: All script comments and header blocks must wrap at a maximum line length of 80 characters.
- Use the function keyword before all function definitions: function my_function() {
- Use imperative verb form for script names:
  - Good: export_version.sh, build_package.sh, run_tests.sh
  - Bad: version_export.sh, package_builder.sh, test_runner.sh
Scripts that do not comply with these external standards (the Google Bash Style Guide and ShellCheck) will be flagged in code review and CI.
Comments
For functions:
- All functions greater than 5 lines must have a comment
- Always follow the format described in Google Bash Style Guide: Function Comments
Script header requirements (MANDATORY)
Every Bash script must begin with a standardized header block, formatted as follows:
#!/bin/bash
#==============================================================================
# <script_name>.sh
#==============================================================================
#
# DESCRIPTION:
#   <detailed description of the script's purpose and functionality>
#
# USAGE:
#   <script_name>.sh <command> [args]
#
# COMMANDS:
#   <command_1>    <description of command_1>
#   <command_2>    <description of command_2>
#   ...
#
# OPTIONS:
#   -h, --help     Show this help message
#   ...            <other options and their descriptions>
#
# DEPENDENCIES:
#   <list required dependencies, e.g. external tools, environment variables>
#==============================================================================
Checklist for script headers:
- Script name and clear identification
- Detailed description
- Usage instructions and examples
- Command and option documentation
- Required dependencies
All new and updated scripts must comply with this format. Non-compliant scripts will be flagged in code review and CI.
Script testing
All scripts must have corresponding tests in the tests sub-directory using the common test library:
Test File Structure
- Name test files as <script_name>_test.sh
- Place them in a tests sub-directory directly under the script under test
- Make test files executable (chmod +x)
- Source the common test library (test_lib.sh)
Test Organization
- Group related test cases into sections
- Test each command/flag combination
- Test error conditions explicitly
Test Coverage
- Test error conditions
- Test input validation
- Test edge cases
- Test each supported flag/option
CI Integration
- Tests run automatically in CI
- Tests must pass before merge
- Test execution is part of the validate-scripts job
- Test failures block PR merges
Test framework architecture pattern
All tests must be created from scripts/template_test.sh
Key Framework Features:
- SCRIPT_PATH Setup: Test files must set SCRIPT_PATH before sourcing test_lib.sh to avoid variable conflicts
- Function-based Test Organization: Tests are organized in a run_tests() function called after framework initialization
- Consistent Test Sections: Use test_section to group related tests with descriptive headers
- Standard Test Case Pattern: test_case "name" "args" "expected_output" "should_succeed"
- Framework Integration: Call start_tests "$@" before running tests to handle argument parsing and setup
Centralized configuration management
The project implements centralized version management using the .env file as a single source of truth:
Configuration Structure:
# .env file contents
GO_VERSION=1.23.4
GO_TOOLCHAIN=go1.23.4
GitHub Actions Integration:
# .github/workflows/ci.yml pattern
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      go-version: ${{ steps.env.outputs.go-version }}
    steps:
      - uses: actions/checkout@v4
      - id: env
        run: |
          source .env
          echo "go-version=$GO_VERSION" >> $GITHUB_OUTPUT
  dependent-job:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ needs.setup.outputs.go-version }}
Synchronization Script Pattern:
- scripts/dev/sync_go_version.sh reads .env and updates go.mod accordingly
- Ensures consistency between environment configuration and Go module requirements
- Can be extended for other configuration synchronization needs
Testing principles
1. Test organization strategy
We established a balanced approach to test organization:
- Use table-driven tests for simple, repetitive cases without introducing logic
- Use individual test functions for cases that require specific Arrange, Act, Assert steps that cannot be shared amongst other cases
- Group related test cases that operate on the same API method/function
2. Code structure
Constants and variables
const (
manifestVersion = "1.0.0"
manifestGlobalVer = "v2"
)
var (
fixedTime = time.Date(2025, 8, 28, 10, 0, 0, 0, time.UTC)
sampleData = NewTestData()
)
- Define constants for fixed values where only the presence and format matter and the value content itself does not affect the behavior under test
- Use variables for reusable test data
- Group related constants and variables together
- Do not prefix constants or variables with test
Helper functions
Simple examples of factory and assert functions.
func createTestFile(t *testing.T, dir string, content string) string {
t.Helper()
// ... helper implementation
}
func assertValidJSON(t *testing.T, data string) {
t.Helper()
// ... assertion implementation
}
Example of using functions to abstract away details not relevant to the behavior under test:
type Item struct {
    Name string
    Description string
    Version string
    LastModified time.Time
}
// BAD: Mixed concerns, unclear test name, magic values
func TestItem_Description(t *testing.T) {
item := Item{
Name: "test item",
Description: "original description",
Version: "1.0.0",
LastModified: time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC),
}
AddPrefixToDescription(&item, "prefix: ")
if item.Description != "prefix: original description" {
t.Errorf("got %q, want %q", item.Description, "prefix: original description")
}
}
// GOOD: Clear focus, reusable arrangement, proper assertions
const (
defaultName = "test item"
defaultVersion = "1.0.0"
defaultTimeStr = "2025-01-01T00:00:00Z"
)
func createTestItem(t *testing.T, description string) *Item {
t.Helper()
defaultTime, err := time.Parse(time.RFC3339, defaultTimeStr)
if err != nil {
t.Fatalf("failed to parse default time: %v", err)
}
return &Item{
Name: defaultName,
Description: description,
Version: defaultVersion,
LastModified: defaultTime,
}
}
func TestAddPrefixToDescription_WithValidInput_AddsPrefix(t *testing.T) {
// Arrange
item := createTestItem(t, "original description")
const want = "prefix: original description"
// Act
AddPrefixToDescription(item, "prefix: ")
// Assert
assert.Equal(t, want, item.Description, "description should have prefix")
}
- Create helper functions to reduce duplication and keep tests focused on the arrangement inputs and how they correspond to the expected output
- Use t.Helper() for proper test failure reporting
- Keep helpers focused and single-purpose
- Helper functions that require logic should go into their own file and have tests
3. Test case patterns
Table-driven tests (for simple cases)
// Each test case is its own function - no loops or conditionals in test body
func TestFormatMessage_WithEmptyString_ReturnsError(t *testing.T) {
// Arrange
input := ""
// Act
actual, err := FormatMessage(input)
// Assert
assertFormatError(t, actual, err, "input cannot be empty")
}
func TestFormatMessage_WithValidInput_ReturnsUpperCase(t *testing.T) {
// Arrange
input := "test message"
expected := "TEST MESSAGE"
// Act
actual, err := FormatMessage(input)
// Assert
assertFormatSuccess(t, actual, err, expected)
}
func TestFormatMessage_WithMultipleSpaces_PreservesSpacing(t *testing.T) {
// Arrange
input := "hello world"
expected := "HELLO WORLD"
// Act
actual, err := FormatMessage(input)
// Assert
assertFormatSuccess(t, actual, err, expected)
}
// Helper functions for common assertions
func assertFormatSuccess(t *testing.T, actual string, err error, expected string) {
t.Helper()
assert.NoError(t, err)
assert.Equal(t, expected, actual, "formatted message should match")
}
func assertFormatError(t *testing.T, actual string, err error, expectedErrMsg string) {
t.Helper()
assert.Error(t, err)
assert.Contains(t, err.Error(), expectedErrMsg)
assert.Empty(t, actual)
}
Individual tests (for complex cases)
func TestProcessTransaction_WithConcurrentUpdates_PreservesConsistency(t *testing.T) {
// Arrange
store := NewTestStore(t)
defer store.Close()
const accountID = "test-account"
initialBalance := decimal.NewFromInt(1000)
arrangeErr := arrangeTestAccount(t, store, accountID, initialBalance)
require.NoError(t, arrangeErr)
// Act
actualBalance, err := executeConcurrentTransactions(t, store, accountID)
require.NoError(t, err)
// Assert
expected := initialBalance.Add(decimal.NewFromInt(100)) // 100 transactions of 1 unit each
assertBalanceEquals(t, expected, actualBalance)
}
// Helper functions to keep test body clean and linear
func arrangeTestAccount(t *testing.T, store *Store, accountID string, balance decimal.Decimal) error {
t.Helper()
return store.SetBalance(accountID, balance)
}
func executeConcurrentTransactions(t *testing.T, store *Store, accountID string) (decimal.Decimal, error) {
t.Helper()
const numTransactions = 100
var wg sync.WaitGroup
wg.Add(numTransactions)
for i := 0; i < numTransactions; i++ {
go func() {
defer wg.Done()
amount := decimal.NewFromInt(1)
_, err := store.ProcessTransaction(accountID, amount)
assert.NoError(t, err)
}()
}
wg.Wait()
return store.GetBalance(accountID)
}
func assertBalanceEquals(t *testing.T, expected, actual decimal.Decimal) {
t.Helper()
assert.True(t, expected.Equal(actual),
"balance should be %s, actual was %s", expected, actual)
}
4. Best practices applied
Clear Naming
- Name test data clearly and meaningfully
- Name by abstraction, not implementation
- Use expected for expected values
- Use actual for function results
- Keep test variables consistent across all tests
- Always use "Arrange," "Act," "Assert" as step comments in tests
- Use descriptive test name arrangement and expectation parts
- Use test names in a three-part structure: Test<function>_<arrangement>_<expectation> for free functions, and Test<interface><function>_<arrangement>_<expectation> for interface functions
- The module name is inferred
- Treat the first part as either the type function or the free function under test

func Test<[type]<function>>_<arrangement>_<expectation>(t *testing.T) {
    // Test body
}

// Implementation
type Logger struct {
    debug bool
}

var logger Logger // debug defaults to false

func (l *Logger) Log(msg string) {
    // ...
}

func SetDebug(v bool) {
    logger.debug = v
}

// Test
func TestLoggerLog_EmptyMessage_NothingLogged(t *testing.T) {
    // Test body
}

func TestSetDebug_PassFalseValue_DebugMessageNotLogged(t *testing.T) {
    // Test body
}
Test Structure
- Keep test body simple and linear
- No loops or conditionals in test body
- Move complex arrangement to helper functions
- Use table tests for multiple cases, not loops in test body
- Extract complex assertions into helper functions
Code Organization
- Group related constants and variables
- Place helper functions at the bottom
- Organize tests by function under test
- Follow arrange-act-assert pattern
Test Data Management
- Centralize test data definitions
- Use constants for fixed values
- Abstract complex data arrangement into helpers
Error Handling
- Test both success and error cases
- Use clear error messages
- Validate error types and messages
- Handle expected and unexpected errors
Assertions
- Use consistent assertion patterns
- Include helpful failure messages
- Group related assertions logically
- Test one concept per assertion
5. Examples of improvements
Before
func TestFeature_MixedArrangements_ExpectAlotOfDifferentThings(t *testing.T) {
// Mixed arrangement and assertions
// Duplicated code
// Magic values
}
After
// Before: Mixed concerns, unclear naming, magic values
func TestValidateConfig_MissingFileAndEmptyPaths_ValidationFails(t *testing.T) {
c := &Config{
Path: "./testdata",
Port: 8080,
MaxRetries: 3,
}
if err := c.Validate(); err != nil {
t.Error("validation failed")
}
c.Path = ""
if err := c.Validate(); err == nil {
t.Error("expected error for empty path")
}
}
// After: Clear structure, meaningful constants, proper test naming
const (
testConfigPath = "./testdata"
defaultPort = 8080
defaultMaxRetries = 3
)
func TestValidateConfig_WithValidInputs_Succeeds(t *testing.T) {
// Arrange
config := &Config{
Path: testConfigPath,
Port: defaultPort,
MaxRetries: defaultMaxRetries,
}
// Act
err := config.Validate()
// Assert
assert.NoError(t, err, "valid config should pass validation")
}
func TestValidateConfig_WithEmptyPath_ReturnsError(t *testing.T) {
// Arrange
config := &Config{
Path: "", // Invalid
Port: defaultPort,
MaxRetries: defaultMaxRetries,
}
// Act
err := config.Validate()
// Assert
assert.Error(t, err)
assert.Contains(t, err.Error(), "path cannot be empty")
}
Key benefits
Maintainability
- Easier to update and modify tests
- Clear structure for adding new tests
- Reduced code duplication
Readability
- Clear test intentions
- Well-organized code
- Consistent patterns
Reliability
- Thorough error testing
- Consistent assertions
- Proper test isolation
Efficiency
- Reusable test components
- Reduced boilerplate
- Faster test writing
Conclusion
These improvements make the test code:
- More maintainable
- Easier to understand
- More reliable
- More efficient to extend
The patterns and principles can be applied across different types of tests to create a consistent and effective testing strategy.