Addressing Capacity Issues in an Avamar System
Resolving capacity issues in an Avamar system starts with understanding the root cause, which requires a series of steps beginning with data collection for a thorough investigation.
Avamar systems have several types of capacity limits. A comprehensive understanding of these limits, along with their historical context, can clarify both current and past capacity issues experienced by the system.
The system generates specific events, warnings, or errors in the User Interface (UI) when certain capacity thresholds are crossed:
- 80%: Capacity Warning issued
- 95%: Health Check Limit reached
- 100%: Server Read-Only Limit reached; the grid switches to admin mode
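A quick way to see where each data partition stands relative to these thresholds is to parse the fs-percent-full values that avmaint nodelist reports (the same output used in step 8 below). The following is a minimal sketch: it assumes the percentages appear as fs-percent-full="NN.N" attributes in the XML output, and it prints only the raw percentages without the owning node:
avmaint nodelist | grep -o 'fs-percent-full="[0-9.]*"' | cut -d '"' -f2 | \
awk '{ if ($1 >= 100) s="read-only limit"; else if ($1 >= 95) s="health check limit"; else if ($1 >= 80) s="capacity warning"; else s="ok"; printf "%6.1f%%  %s\n", $1, s }'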
When an Avamar system is full, it may exhibit the following symptoms or errors:
- Garbage collection fails, resulting in MSG_ERR_DISKFULL or MSG_ERR_STRIPECREATE errors.
- Checkpoints fail due to MSG_ERR_DISKFULL error.
- Backups cannot run or fail due to full capacity.
- Backups fail with MSG_ERR_STRIPECREATE errors or messages indicating that the target server is full.
- The access state switches to admin mode (unless maintenance is running).
- The backup scheduler is disabled and cannot be resumed due to metadata capacity limits.
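To confirm which of these symptoms a given system has hit, search the recent maintenance logs for the error strings above. A minimal check using dumpmaintlogs, the same tool used in the steps below (the cp log type for checkpoint logs is an assumption and may differ by release):
dumpmaintlogs --types=gc --days=30 | egrep 'MSG_ERR_DISKFULL|MSG_ERR_STRIPECREATE'
dumpmaintlogs --types=cp --days=30 | grep 'MSG_ERR_DISKFULL'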
Understanding these aspects can help in managing and resolving capacity issues in an Avamar system.
Gathering information:
Log in to the Avamar server (single-node server or utility node) and run the following commands. These commands only collect information and do not make any changes:
1. If not already known, obtain the Avamar server's full name or Fully Qualified Domain Name (FQDN):
hostname -f
2. Verify that all services are enabled, including the maintenance scheduler:
dpnctl status
3. Check the overall system state:
status.dpn
4. Run the capacity.sh script to collect 60 days' worth of data and the top 10 contributing clients:
capacity.sh --days=60 --top=10
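Since this report is often needed again later (for example, when working with support), consider keeping a copy in a file; the path below is only an example:
capacity.sh --days=60 --top=10 | tee /tmp/capacity-report.txt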
5. Logs showing basic garbage collection behavior over the last 30 days:
dumpmaintlogs --types=gc --days=30 | grep "4202"
6. The amount of data that garbage collection removed, how many passes it completed, and how long it ran:
For Avamar v5.x and v6.x, run:
dumpmaintlogs --types=gc --days=30 | grep passes | cut -d ' ' -f1,12,13,15
For Avamar v7.x onwards, run:
dumpmaintlogs --types=gc --days=30 | grep passes | cut -d ' ' -f1,10,14,15,17
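The field positions in these cut commands depend on the exact log format of the installed release. If the extracted columns look wrong, drop the cut and read the full summary lines instead:
dumpmaintlogs --types=gc --days=30 | grep passes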
7. Check how long hfscheck runs for:
dumpmaintlogs --types=hfscheck --days=30 | grep -i elapsed | cut -d ' ' -f1,12 | grep -v check
8. Details of file system capacity usage per node and per partition:
avmaint nodelist | egrep 'nodetag|fs-percent-full'
9. A list of checkpoints available on the system:
cplist
10. Maintenance job scheduled start/stop times:
avmaint sched status --ava | egrep -A 2 "maintenance-window|backup-window" | tail -16
11. Collect all disk settings:
avmaint config --ava | egrep -i 'disk|crunching|balance'
Never change these values unless advised to do so by an Avamar Subject Matter Expert (SME); non-default values may be in place for a good reason, so make sure the situation is thoroughly understood first.
12. Collect counts of different types of stripes per node per data partition:
avmaint nodelist --xmlperline=99 | grep 'comp='
13. Check the amount of memory (and swap) in use on each node:
free -m
On a multi-node grid, run the same command on every node (for example, through the mapall utility).