Adding a New Validator
This guide walks you through adding a new validator type to envcheck. We'll use real examples from the codebase to illustrate the process.
Overview
Validators in envcheck are modular components that check specific aspects of your development environment. Each validator implements the Validator trait and returns validation results.
Understanding the Validator Trait
All validators implement this trait (defined in src/validators/mod.rs:52):
```rust
pub trait Validator {
    fn validate(&self) -> Result<Vec<ValidationResult>>;
}
```
ValidationResult Structure
Validators return ValidationResult objects (defined in src/validators/mod.rs:20):
```rust
pub struct ValidationResult {
    pub status: ValidationStatus,
    pub message: String,
    pub suggestion: Option<String>,
}
```
With three status types:
- ValidationStatus::Success: the check passed
- ValidationStatus::Warning: an optional check failed
- ValidationStatus::Error: a required check failed
Helper Methods
Use these helper methods to create results:
```rust
ValidationResult::success("Message")
ValidationResult::warning("Message", Some("Suggestion"))
ValidationResult::error("Message", Some("Suggestion"))
```
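If it helps to see what these helpers likely expand to, here is a self-contained sketch. The real definitions live in src/validators/mod.rs; the `impl Into<String>` signatures here are an assumption, chosen so the helpers accept both string literals and formatted strings as the examples in this guide do:

```rust
// Simplified stand-ins for the real types, to show what the helpers produce.
#[derive(Debug, PartialEq)]
pub enum ValidationStatus { Success, Warning, Error }

#[derive(Debug)]
pub struct ValidationResult {
    pub status: ValidationStatus,
    pub message: String,
    pub suggestion: Option<String>,
}

impl ValidationResult {
    pub fn success(message: impl Into<String>) -> Self {
        Self { status: ValidationStatus::Success, message: message.into(), suggestion: None }
    }
    pub fn warning(message: impl Into<String>, suggestion: Option<String>) -> Self {
        Self { status: ValidationStatus::Warning, message: message.into(), suggestion }
    }
    pub fn error(message: impl Into<String>, suggestion: Option<String>) -> Self {
        Self { status: ValidationStatus::Error, message: message.into(), suggestion }
    }
}

fn main() {
    let r = ValidationResult::warning("PATH not set", Some("Export PATH".to_string()));
    println!("{:?}: {}", r.status, r.message);
}
```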
Step-by-Step Guide
Step 1: Create the Validator File
Create a new file in src/validators/ for your validator. For example, to add a database validator:
```bash
touch src/validators/database.rs
```
Step 2: Define the Configuration Structure
First, add a configuration struct in src/config.rs. This defines what users can configure:
```rust
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct DatabaseCheck {
    pub connection_string: String,
    #[serde(default = "default_true")]
    pub required: bool,
    #[serde(default)]
    pub timeout_ms: Option<u64>,
}
```
Then add it to the main Config struct:
```rust
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct Config {
    pub version: String,
    #[serde(default)]
    pub tools: Vec<ToolCheck>,
    // ... other fields ...
    #[serde(default)]
    pub databases: Vec<DatabaseCheck>, // Add your new field
}
```
Step 3: Implement the Validator
Here’s a complete example based on the EnvValidator pattern (src/validators/env.rs:1):
```rust
use crate::config::DatabaseCheck;
use crate::validators::{ValidationResult, Validator};
use anyhow::Result;

pub struct DatabaseValidator {
    check: DatabaseCheck,
}

impl DatabaseValidator {
    pub fn new(check: DatabaseCheck) -> Self {
        Self { check }
    }

    fn test_connection(&self) -> Result<()> {
        // Implementation of connection test
        // This is where your specific validation logic goes
        Ok(())
    }
}

impl Validator for DatabaseValidator {
    fn validate(&self) -> Result<Vec<ValidationResult>> {
        let mut results = Vec::new();

        // Your validation logic here
        match self.test_connection() {
            Ok(_) => {
                results.push(ValidationResult::success(
                    "Database connection successful".to_string(),
                ));
            }
            Err(e) => {
                if self.check.required {
                    results.push(ValidationResult::error(
                        format!("Database connection failed: {}", e),
                        Some("Check your connection string and ensure the database is running".to_string()),
                    ));
                } else {
                    results.push(ValidationResult::warning(
                        format!("Database connection failed (optional): {}", e),
                        None,
                    ));
                }
            }
        }

        Ok(results)
    }
}
```
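The test_connection stub is where real logic would go. One plausible approach, shown here as a standalone sketch rather than envcheck code, is a TCP reachability check with the configured timeout. It assumes the target can be reduced to a plain host:port socket address, which a real database driver would handle more robustly:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Hypothetical helper: parse the target as a socket address and verify
// it accepts TCP connections within the timeout.
fn tcp_reachable(addr: &str, timeout_ms: u64) -> std::io::Result<()> {
    let addr: SocketAddr = addr
        .parse()
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidInput, format!("bad address: {e}")))?;
    TcpStream::connect_timeout(&addr, Duration::from_millis(timeout_ms))?;
    Ok(())
}

fn main() {
    // Port 1 is almost never listening, so this should fail quickly
    // instead of hanging for the full OS connect timeout.
    match tcp_reachable("127.0.0.1:1", 200) {
        Ok(()) => println!("reachable"),
        Err(e) => println!("unreachable: {e}"),
    }
}
```

Using connect_timeout instead of connect keeps the validator responsive when the host is unreachable, which matters for a CLI tool that runs many checks in sequence.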
Step 4: Register the Validator Module
Add your validator to src/validators/mod.rs:5:
```rust
pub mod tool;
pub mod env;
pub mod port;
pub mod file;
pub mod network;
pub mod database; // Add your new validator
```
Step 5: Integrate Into Validation Runner
Add your validator to the run_all_validations function in src/validators/mod.rs:56:
```rust
pub fn run_all_validations(config: &Config) -> Result<Vec<ValidationResult>> {
    let mut results = Vec::new();

    // ... existing validators ...

    // Validate databases
    for db_check in &config.databases {
        let validator = database::DatabaseValidator::new(db_check.clone());
        results.extend(validator.validate()?);
    }

    Ok(results)
}
```
Real-World Example: EnvValidator
Let’s examine the EnvValidator (src/validators/env.rs:1) as a complete example:
```rust
use crate::config::EnvVarCheck;
use crate::validators::{ValidationResult, Validator};
use anyhow::Result;
use std::env;

pub struct EnvValidator {
    check: EnvVarCheck,
}

impl EnvValidator {
    pub fn new(check: EnvVarCheck) -> Self {
        Self { check }
    }
}

impl Validator for EnvValidator {
    fn validate(&self) -> Result<Vec<ValidationResult>> {
        let mut results = Vec::new();

        match env::var(&self.check.name) {
            Ok(value) => {
                if let Some(pattern) = &self.check.pattern {
                    // Fall back to a substring check if the pattern is not valid regex
                    let is_match = match regex::Regex::new(pattern) {
                        Ok(re) => re.is_match(&value),
                        Err(_) => value.contains(pattern),
                    };
                    if is_match {
                        results.push(ValidationResult::success(
                            format!("{} is set and matches pattern", self.check.name),
                        ));
                    } else {
                        results.push(ValidationResult::error(
                            format!("{} is set but does not match pattern", self.check.name),
                            Some(format!("Ensure {} matches pattern: {}", self.check.name, pattern)),
                        ));
                    }
                } else {
                    results.push(ValidationResult::success(
                        format!("{} is set", self.check.name),
                    ));
                }
            }
            Err(_) => {
                if self.check.required {
                    results.push(ValidationResult::error(
                        format!("{} is not set", self.check.name),
                        Some(format!("Set the {} environment variable", self.check.name)),
                    ));
                } else {
                    results.push(ValidationResult::warning(
                        format!("{} is not set (optional)", self.check.name),
                        None,
                    ));
                }
            }
        }

        Ok(results)
    }
}
```
Key patterns to note:
- Required vs. optional: check self.check.required to decide between errors and warnings
- Helpful suggestions: always provide actionable suggestions in error and warning messages
- Multiple results: you can push multiple ValidationResult objects for complex checks
Real-World Example: FileValidator
The FileValidator (src/validators/file.rs:1) shows more complex validation:
```rust
impl Validator for FileValidator {
    fn validate(&self) -> Result<Vec<ValidationResult>> {
        let mut results = Vec::new();
        let path = Path::new(&self.check.path);

        if path.exists() {
            let mut item_passed = true;

            // Check if it's a directory when required
            if self.check.is_directory {
                if path.is_dir() {
                    results.push(ValidationResult::success(
                        format!("Directory {} exists", self.check.path),
                    ));
                } else {
                    item_passed = false;
                    results.push(ValidationResult::error(
                        format!("{} exists but is not a directory", self.check.path),
                        Some(format!("Ensure {} is a directory", self.check.path)),
                    ));
                }
            }

            // Only check permissions if the item passed previous checks
            if item_passed {
                if let Some(required_perms) = self.check.permissions {
                    #[cfg(unix)]
                    {
                        use std::os::unix::fs::PermissionsExt;
                        if let Ok(metadata) = path.metadata() {
                            let actual_perms = metadata.permissions().mode() & 0o777;
                            if actual_perms == required_perms {
                                results.push(ValidationResult::success(
                                    format!("{} has correct permissions ({:o})",
                                        self.check.path, actual_perms),
                                ));
                            } else {
                                results.push(ValidationResult::error(
                                    format!("{} has permissions {:o}, but {:o} is required",
                                        self.check.path, actual_perms, required_perms),
                                    Some(format!("Run 'chmod {:o} {}' to fix",
                                        required_perms, self.check.path)),
                                ));
                            }
                        }
                    }
                }
            }
        } else {
            // Handle missing file/directory
            let item_type = if self.check.is_directory { "Directory" } else { "File" };
            if self.check.required {
                results.push(ValidationResult::error(
                    format!("{} {} does not exist", item_type, self.check.path),
                    Some(format!("Create {} {}", self.check.path, item_type.to_lowercase())),
                ));
            }
        }

        Ok(results)
    }
}
```
This example demonstrates:
- Multiple validation steps: first check existence, then type, then permissions
- Conditional checks: only check permissions if the file exists and is the right type
- Platform-specific code: using #[cfg(unix)] for Unix-only features
- Specific error messages: different messages for different failure modes
Adding Tests
Always add tests for your validator. Here’s an example pattern:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use crate::config::DatabaseCheck;

    #[test]
    fn test_database_validator_success() {
        let check = DatabaseCheck {
            connection_string: "postgresql://localhost".to_string(),
            required: true,
            timeout_ms: Some(5000),
        };
        let validator = DatabaseValidator::new(check);
        let results = validator.validate().unwrap();
        // Add assertions based on expected behavior
        assert!(!results.is_empty());
    }

    #[test]
    fn test_database_validator_failure() {
        let check = DatabaseCheck {
            connection_string: "invalid://connection".to_string(),
            required: true,
            timeout_ms: Some(5000),
        };
        let validator = DatabaseValidator::new(check);
        let results = validator.validate().unwrap();
        assert!(!results.is_empty());
        // Verify error message contains helpful information
    }
}
```
Best Practices
1. Clear Error Messages
Always provide clear, actionable error messages:
```rust
// Good
ValidationResult::error(
    "Node.js version 12.0.0 does not meet requirement >=14.0.0",
    Some("Update Node.js to version 14.0.0 or higher"),
)

// Bad
ValidationResult::error("Node.js version mismatch", None)
```
2. Respect the Required Flag
Always check the required field and return appropriate status:
```rust
if self.check.required {
    results.push(ValidationResult::error(msg, suggestion));
} else {
    results.push(ValidationResult::warning(msg, None));
}
```
3. Return Multiple Results When Appropriate
Don’t hesitate to return multiple validation results for complex checks:
```rust
// Check existence
results.push(ValidationResult::success("File exists"));
// Check permissions
results.push(ValidationResult::success("Permissions are correct"));
// Check content
results.push(ValidationResult::warning("File is empty", Some("Add content")));
```
4. Use Rust Idioms
- Use pattern matching instead of if-else chains
- Leverage the ? operator for error propagation
- Use descriptive variable names
- Keep functions focused and small
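As a small illustration of the ? idiom (a standalone sketch, not envcheck code), a file-based check can propagate I/O errors to the caller instead of nesting match arms at every step:

```rust
use std::fs;

// `?` returns early with the error, so the caller decides how to
// report it (for example, as a ValidationResult::error).
fn first_line(path: &str) -> std::io::Result<String> {
    let contents = fs::read_to_string(path)?;
    Ok(contents.lines().next().unwrap_or("").to_string())
}

fn main() {
    match first_line("Cargo.toml") {
        Ok(line) => println!("first line: {line}"),
        Err(e) => println!("could not read file: {e}"),
    }
}
```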
5. Handle Edge Cases
Consider edge cases in your validation:
- Missing dependencies
- Permission issues
- Platform differences (Windows vs Unix)
- Network timeouts
- Malformed input
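For platform differences in particular, cfg gating with a graceful fallback keeps a validator compiling everywhere. This is a standalone sketch, not code from the crate:

```rust
use std::path::Path;

// On Unix, read the permission bits; elsewhere, report that the check
// does not apply instead of failing the validation outright.
#[cfg(unix)]
fn permission_bits(path: &Path) -> Option<u32> {
    use std::os::unix::fs::PermissionsExt;
    path.metadata().ok().map(|m| m.permissions().mode() & 0o777)
}

#[cfg(not(unix))]
fn permission_bits(_path: &Path) -> Option<u32> {
    None // permission bits are not meaningful on this platform
}

fn main() {
    match permission_bits(Path::new(".")) {
        Some(mode) => println!("mode: {mode:o}"),
        None => println!("permission check skipped on this platform"),
    }
}
```

Returning None rather than an error lets the caller emit a warning (or nothing) on unsupported platforms, matching the required-vs-optional pattern used throughout the validators.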
Documentation
After implementing your validator, update the documentation:
- Add examples to README.md showing how to use your validator
- Document the configuration options
- Provide real-world use cases
Checklist
Before submitting your validator:
- Configuration struct added to src/config.rs and wired into Config
- Validator implemented in its own file under src/validators/
- Module registered in src/validators/mod.rs
- Validator invoked from run_all_validations
- Tests added covering both success and failure cases
- Documentation updated with configuration options and examples
Getting Help
If you’re stuck:
- Look at existing validators for patterns
- Open a draft PR and ask for feedback
- Create an issue with your questions
- Check for good first issue labels for simpler starting points