Automated Log Rotation with Cron: Complete Shell Script Solution
Learn how to create a comprehensive log rotation system with shell scripting and cron automation. Master log compression, retention policies, and scheduled maintenance.
In production environments, applications can generate massive amounts of log data that quickly consume disk space. Without proper log rotation, your servers can run out of disk space, causing applications to fail and systems to become unstable. A well-designed log rotation system with automated compression and retention policies is essential for maintaining healthy, efficient systems.
Understanding Log Rotation
Why Log Rotation is Critical
Log rotation serves multiple purposes:
- Disk space management - Prevents logs from consuming all available disk space
- Performance optimization - Smaller log files are easier to process and search
- Compliance requirements - Many regulations require log retention policies
- Operational efficiency - Automated rotation reduces manual maintenance
Log Rotation Strategy
A typical log rotation strategy includes:
- Recent logs (0-7 days) - Keep uncompressed for active monitoring
- Older logs (7-30 days) - Compress to save space while maintaining accessibility
- Very old logs (30+ days) - Delete to free disk space
Complete Log Rotation Script
Basic Log Rotation Script
#!/bin/bash
# log_cleanup.sh

# Configuration
LOG_DIR="/var/log/myapp"
LOG_FILE="/var/log/myapp/log_rotation.log"
COMPRESS_DAYS=7
DELETE_DAYS=30

# Export so the sh -c subshells spawned by find below can see it;
# without this, $LOG_FILE expands to empty inside those subshells
export LOG_FILE

# Ensure the log directory exists
if [ ! -d "$LOG_DIR" ]; then
    echo "[$(date)] ERROR: Log directory $LOG_DIR does not exist!" >> "$LOG_FILE"
    exit 1
fi

# Function to log actions
log_action() {
    echo "[$(date)] $1" >> "$LOG_FILE"
}

log_action "Starting log rotation process"

# Compress logs older than COMPRESS_DAYS days (but newer than DELETE_DAYS)
log_action "Compressing logs older than $COMPRESS_DAYS days"
find "$LOG_DIR" -type f -name "*.log" -mtime +"$COMPRESS_DAYS" -mtime -"$DELETE_DAYS" ! -name "*.gz" -exec sh -c '
    file="$1"
    size=$(du -h "$file" | cut -f1)
    gzip "$file"
    echo "[$(date)] Compressed: $file ($size -> $(du -h "$file.gz" | cut -f1))" >> "$LOG_FILE"
' _ {} \;

# Delete compressed logs older than DELETE_DAYS days
log_action "Deleting compressed logs older than $DELETE_DAYS days"
find "$LOG_DIR" -type f -name "*.gz" -mtime +"$DELETE_DAYS" -exec sh -c '
    file="$1"
    size=$(du -h "$file" | cut -f1)
    rm -f "$file"
    echo "[$(date)] Deleted: $file ($size)" >> "$LOG_FILE"
' _ {} \;

# Optional: delete uncompressed logs older than DELETE_DAYS days
log_action "Deleting uncompressed logs older than $DELETE_DAYS days"
find "$LOG_DIR" -type f -name "*.log" -mtime +"$DELETE_DAYS" -exec sh -c '
    file="$1"
    size=$(du -h "$file" | cut -f1)
    rm -f "$file"
    echo "[$(date)] Deleted (uncompressed): $file ($size)" >> "$LOG_FILE"
' _ {} \;

# Clean up empty directories
find "$LOG_DIR" -type d -empty -delete

log_action "Log rotation completed successfully"
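Before wiring the script into cron, the find expressions can be verified against files with back-dated timestamps. The scratch paths below are illustrative, and `touch -d` is a GNU extension:

```shell
# Build a scratch directory with logs of three ages
mkdir -p /tmp/logtest
touch /tmp/logtest/recent.log                    # today: left alone
touch -d "10 days ago" /tmp/logtest/mid.log      # 7-30 days old: would be compressed
touch -d "40 days ago" /tmp/logtest/ancient.log  # 30+ days old: would be deleted

# Same expressions the script uses, with -print instead of -exec
find /tmp/logtest -type f -name "*.log" -mtime +7 -mtime -30 ! -name "*.gz" -print
find /tmp/logtest -type f -name "*.log" -mtime +30 -print
```

The first command should list only mid.log and the second only ancient.log; if not, adjust the thresholds before letting the script delete anything.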
Cron Job Configuration
Basic Cron Setup
# Make script executable
chmod +x /usr/local/bin/log_cleanup.sh
# Add to crontab
sudo crontab -e
# Run daily at 2 AM
0 2 * * * /usr/local/bin/log_cleanup.sh >> /var/log/cron.log 2>&1
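After saving the crontab, it is worth confirming the entry was actually installed:

```shell
# List root's crontab and confirm the job is present
sudo crontab -l | grep log_cleanup.sh
```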
Advanced Cron Configuration
# Multiple cron jobs for different purposes
# Daily log rotation
0 2 * * * /usr/local/bin/log_cleanup.sh >> /var/log/cron.log 2>&1
# Weekly archive cleanup
0 3 * * 0 /usr/local/bin/archive_cleanup.sh >> /var/log/cron.log 2>&1
# Monthly report generation
0 4 1 * * /usr/local/bin/generate_log_report.sh >> /var/log/cron.log 2>&1
# Emergency cleanup if disk usage > 90%
*/30 * * * * /usr/local/bin/emergency_cleanup.sh >> /var/log/cron.log 2>&1
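The emergency_cleanup.sh referenced above is not defined in this article. A minimal sketch, assuming the same LOG_DIR and a 90% trigger, might look like the following; the threshold and the aggressive 1-day/7-day windows are placeholders to adjust:

```shell
#!/bin/bash
# emergency_cleanup.sh - hypothetical sketch; tune paths and thresholds
LOG_DIR="/var/log/myapp"
THRESHOLD=90

# Only act when the filesystem holding LOG_DIR is above the threshold
# (-P keeps df output on one line even for long device names)
USAGE=$(df -P "$LOG_DIR" | awk 'NR==2 {print $5}' | tr -d '%')
[ "$USAGE" -le "$THRESHOLD" ] && exit 0

# Escalate: compress anything older than a day, drop archives older than a week
find "$LOG_DIR" -type f -name "*.log" -mtime +1 ! -name "*.gz" -exec gzip {} \;
find "$LOG_DIR" -type f -name "*.gz" -mtime +7 -delete
```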
Monitoring and Alerting
Disk Space Monitoring
#!/bin/bash
# disk_monitor.sh

LOG_DIR="/var/log/myapp"
THRESHOLD=80
ALERT_EMAIL="admin@example.com"

# Check disk usage (-P keeps df's output on one line per filesystem,
# so the awk field positions stay stable even for long device names)
USAGE=$(df -P "$LOG_DIR" | awk 'NR==2 {print $5}' | sed 's/%//')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    # Send alert
    echo "Warning: Disk usage in $LOG_DIR is ${USAGE}%" | mail -s "Disk Space Alert" "$ALERT_EMAIL"

    # Trigger emergency cleanup
    /usr/local/bin/emergency_cleanup.sh
fi
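The extraction pipeline can be checked on its own; it should print a bare integer so the numeric -gt comparison works:

```shell
# Field 5 of df's second line is the use percentage, e.g. "42%"
df -P /var/log | awk 'NR==2 {print $5}' | sed 's/%//'
```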
Best Practices and Safety
Safety Checks
#!/bin/bash
# safe_log_rotation.sh

# Configuration
LOG_DIR="/var/log/myapp"
BACKUP_DIR="/var/log/backup"
DRY_RUN=false

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        --backup)
            BACKUP_DIR="$2"
            shift 2
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

# Function to safely process files
safe_process() {
    local file="$1"
    local action="$2"

    # Check if file exists
    if [ ! -f "$file" ]; then
        echo "File not found: $file"
        return 1
    fi

    # Skip files that a process still has open
    if lsof "$file" >/dev/null 2>&1; then
        echo "Skipping open file: $file"
        return 1
    fi

    # Create backup if requested
    if [ -n "$BACKUP_DIR" ]; then
        mkdir -p "$BACKUP_DIR"
        cp "$file" "$BACKUP_DIR/"
    fi

    # Perform action
    if [ "$DRY_RUN" = true ]; then
        echo "Would $action: $file"
    else
        case "$action" in
            compress)
                gzip "$file"
                echo "Compressed: $file"
                ;;
            delete)
                rm -f "$file"
                echo "Deleted: $file"
                ;;
        esac
    fi
}

# Export the function and its settings so the subshells spawned by find
# can see them; export -f is a bash feature, hence bash -c below
export -f safe_process
export BACKUP_DIR DRY_RUN

# Main processing
find "$LOG_DIR" -type f -name "*.log" -mtime +7 -exec bash -c 'safe_process "$1" compress' _ {} \;
find "$LOG_DIR" -type f -name "*.gz" -mtime +30 -exec bash -c 'safe_process "$1" delete' _ {} \;
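A --dry-run pass, which only reports what the script would do, is the safest way to validate retention settings before the first real run. The same preview can be reproduced in isolation against a scratch directory (paths illustrative, `touch -d` is GNU-specific):

```shell
# Simulate the dry-run branch against a scratch directory
LOG_DIR=$(mktemp -d)
touch -d "10 days ago" "$LOG_DIR/app.log"

# Report instead of act, mirroring the DRY_RUN=true path
find "$LOG_DIR" -type f -name "*.log" -mtime +7 \
    -exec sh -c 'echo "Would compress: $1"' _ {} \;

rm -rf "$LOG_DIR"
```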
Conclusion
Automated log rotation with cron is essential for maintaining healthy Linux systems. A well-designed log rotation system includes:
- Comprehensive scripting - Handle compression, archiving, and deletion
- Safety checks - Verify files aren't in use before processing
- Monitoring and alerting - Track disk usage and rotation success
- Testing and validation - Ensure scripts work correctly before deployment
- Documentation and reporting - Keep records of all operations
Key takeaways:
- Implement tiered retention - Different policies for different log ages
- Use safety checks - Always verify files aren't in use
- Monitor disk usage - Set up alerts for high usage
- Test thoroughly - Validate scripts before production use
- Document everything - Keep records of all operations