EVT-SPACE-00004: Space usage in metadata storage has exceeded 100% threshold.
1 - Confirm the local-metadata usage on the active tier:
# filesys show space tier active local-metadata
Active Tier: local-metadata usage
Size GiB   Used GiB   Avail GiB   Use%
--------   --------   ---------   ------
  1293.0     1291.5         1.5   100.0%   -> We can see Metadata space is full.
--------   --------   ---------   ------
2 - The process of adding more metadata disks differs depending on the Cloud Provider; see the respective Cloud Provider manuals referenced at the bottom.
# disk show hardware
Disk   Slot        Manufacturer/Model     Firmware   Serial No.   Capacity    Type
       (pci/idx)
----   ---------   --------------------   --------   ----------   ---------   -----
dev1   -/a         Virtual BLOCK Device   n/a        (unknown)    250.0 GiB   BLOCK   -> DDOS Disk
dev2   -/b         Virtual BLOCK Device   n/a        (unknown)     10.0 GiB   BLOCK   -> NVRAM disk
dev3   -/c         Virtual BLOCK Device   n/a        (unknown)      1.0 TiB   BLOCK   -> Currently used for Metadata
dev4   -/d         Virtual BLOCK Device   n/a        (unknown)      1.0 TiB   BLOCK   -> Currently used for Metadata
----   ---------   --------------------   --------   ----------   ---------   -----
4 drives present.
2.1 - Go to the AWS/GCP/Azure console and add storage to the DDVE. In this case a 1 TiB disk was added, which should then be seen as dev5. Remember not to expand or otherwise modify any of the existing disks.
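The exact procedure depends on the cloud provider. Purely as an illustration for AWS, an equivalent volume could also be created and attached with the AWS CLI; the availability zone, volume type, volume ID, instance ID and device name below are placeholders and must be replaced with the values of your own DDVE deployment:

$ aws ec2 create-volume --availability-zone us-east-1a --size 1024 --volume-type gp3
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf

The --size value is in GiB, so 1024 corresponds to the 1 TiB disk used in this example.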
2.2 - Confirm the new disk is now visible to DDOS:
# disk show hardware
Disk   Slot        Manufacturer/Model     Firmware   Serial No.   Capacity    Type
       (pci/idx)
----   ---------   --------------------   --------   ----------   ---------   -----
dev1   -/a         Virtual BLOCK Device   n/a        (unknown)    250.0 GiB   BLOCK   -> DDOS Disk
dev2   -/b         Virtual BLOCK Device   n/a        (unknown)     10.0 GiB   BLOCK   -> NVRAM disk
dev3   -/c         Virtual BLOCK Device   n/a        (unknown)      1.0 TiB   BLOCK   -> Currently used for Metadata
dev4   -/d         Virtual BLOCK Device   n/a        (unknown)      1.0 TiB   BLOCK   -> Currently used for Metadata
dev5   -/e         Virtual BLOCK Device   n/a        (unknown)      1.0 TiB   BLOCK   -> Newly added disk for metadata, unused
----   ---------   --------------------   --------   ----------   ---------   -----
2.3 - Add the new disk to the active tier; in this case it is dev5:
# storage add tier active dev5
ATTENTION: When adding more local-metadata disks to a DDVE ATOS system from the CLI, the message "Local storage size exceeds the maximum required metadata capacity for this configuration" may be received, as shown below:
** Local storage size exceeds the maximum required metadata capacity for this configuration.
Do you want to continue? (yes|no) [no]:
2.4 - Expand the file system so that the newly added storage is put into use:
# filesys expand
3 - Confirm that the file system, which was read-only due to the lack of metadata space, can now be written to.
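As a quick check (the values will differ on each system), the same command used in step 1 can be run again; after the expansion, the local-metadata tier should report available space and a Use% below 100%:

# filesys show space tier active local-metadata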
- The Filesystem expand can be done online.
- Metadata usage is directly proportional to the dedup factor: the better a backup workload deduplicates, the larger the DDVE index becomes, and therefore the higher the metadata requirements.
- Old and large snapshots not only hold data but also increase the dedup factor, which leads the system to run out of metadata space sooner.
- Metadata requirement guidelines exist only for deployment: they call for metadata disks totaling 10% of capacity and assume a 10x dedup ratio. Higher dedup ratios will eventually require more metadata disks (see the quick sizing example after this list).
- DD Support does not have local-metadata storage requirement guidelines for dedup ratios higher than 10x. Metadata disks should be added in 1 TiB increments until a balance is reached between file system usage and metadata usage.
- There is currently no known method of predicting future metadata usage based on potential dedup factors.
- Index imbalance can also occur across disks of the same size. The most common reason is that new disks were added after the metadata was nearly full, so the older disks still hold most of the metadata structures and receive more I/O. As older data expires, the metadata should balance itself across the disks.
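As a rough, hypothetical illustration of the 10% deployment guideline above (the 96 TiB capacity below is a made-up example, not a value from this system):

capacity_tib=96                                    # example active-tier capacity (assumption)
metadata_tib=$(echo "$capacity_tib * 0.10" | bc)   # 10% guideline, assumes 10x dedup -> 9.60
echo "Deploy roughly ${metadata_tib} TiB of local-metadata, e.g. ten 1 TiB disks"

With a higher effective dedup ratio the real metadata need can exceed this estimate, which is why the 1 TiB incremental approach described above is recommended.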