
Data Domain: Cloud-deployed Data Domain Virtual Edition (DDVE) instances (ATOS) may run out of local-metadata storage

Summary: Cloud-deployed Data Domain Virtual Edition (DDVE) instances (ATOS) may run out of local-metadata storage.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

Data Domain Virtual Edition (DDVE) is a software-only storage appliance that can be deployed either on-premises or in the cloud. DDVEs deployed in the cloud are also referred to as ATOS (Active Tier on Object Storage).
On-premises, DDVE supports VMware, Hyper-V, KVM, and VxRail. In the cloud, DDVE also runs on the Amazon Web Services (AWS) (cloud and GovCloud), Azure (cloud and Government cloud), VMware Cloud on AWS, and Google Cloud Platform (GCP) cloud platforms.
Note that DDVE in the cloud is not the same as Cloud Tier. Cloud Tier is a different product, which can only be deployed on on-premises hardware DDs and DDVEs.
If a DDVE deployed in the cloud fills up its local-metadata storage, the following alert is issued:
EVT-SPACE-00004: Space usage in metadata storage has exceeded 100% threshold.
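To confirm that the alert is active on a system, the outstanding alerts can be listed from the CLI. This is a generic DD OS command; the exact output format varies by release:
# alerts show current    -> Look for EVT-SPACE-00004 raised against the local-metadata storage.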

Cause

A DDVE deployed in the cloud (ATOS) supports two types of data storage:
- Block storage (used for data and metadata, or for metadata only on object-storage-enabled DDVEs, aka ATOS).
- Hot blob storage, also known as object storage (used only for data).
Metadata disks on an ATOS-deployed DDVE are populated with: index, LP segment references, CMETA (metadata) containers, and DM (directory manager) data.
Most deployments use hot blob (object) storage, as that is the current recommendation for cloud deployments. This means that all data is written to object storage, but 10% of the capacity must be defined as block storage for the DD filesystem metadata. The local-metadata size is an estimate based on the licensed capacity; at deployment time it assumes 10% metadata usage for a 10x dedup ratio. A DDVE at 100% metadata storage usage cannot ingest backups.
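As a worked example with hypothetical figures: a DDVE licensed for 96 TiB of active-tier capacity would be deployed with roughly 96 TiB x 10% = 9.6 TiB of block storage for local metadata, sized on the assumption of a ~10x dedup ratio. If the actual dedup ratio turns out higher, the index and other metadata structures outgrow that estimate even though the licensed data capacity is not exhausted.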
The DDVE may run out of local-metadata storage space in the following scenarios:
- The dedup ratio is higher than 10x, so the metadata requirements are higher than the deployed metadata size (default 10%). See the example after this list for checking the current dedup ratio.
- A large number of old snapshots can increase the metadata size.
A larger amount of metadata may also be noticed on ATOS DDVEs acting as destinations for file replication (Controlled Clone Replication, CCR / Managed File Replication, MFR):
- If not all base files are being replicated, the destination ATOS DDVE has higher metadata usage.
- In this scenario, replication is NOT taking full advantage of the virtual synthetics optimizations, and as such needs more space to store metadata.
- This type of problem was noticed with NetWorker CCR, where it appears not all base files were being replicated, causing the destination to have higher metadata usage.
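To check whether the dedup ratio exceeds the 10x sizing assumption (first bullet above), the cumulative compression statistics can be displayed from the CLI. This is a standard DD OS command; the output layout varies by release:
# filesys show compression    -> The total compression (reduction) factor reported is the overall dedup ratio; values well above 10x indicate the default 10% metadata sizing is likely undersized.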

Resolution

The solution for this issue is to add further disks as local-metadata storage. In the case of Azure, and only when the Azure VM already has the maximum allowed number of disks allocated, it is instead possible to expand the metadata capacity by making the existing metadata disks (except for the first one) larger in 1 TiB increments. See the "Expand metadata storage" section in the "PowerProtect DD Virtual Edition 7.x in the Azure Cloud Installation and Administration Guide" for details.
IMPORTANT INFORMATION WHEN ADDING DISKS FOR METADATA:

- Using different-sized metadata disks may create an index imbalance, leaving the DDVE I/O bound on the largest disk(s), as these hold most of the metadata.
- Each metadata disk added to the active tier should have its own spindle group; this is done automatically by the software. The maximum known number of spindle groups is 16. See the example after this list for displaying the storage layout.
- If the maximum number of spindle groups (16) is exceeded, some metadata disks eventually share the same spindle group. I/O against disks in the same spindle group is sequential.
- See the manual for the recommended sizes; most recommend 1 TiB increments, and larger sizes are possible depending on the cloud provider.
- Existing disks CANNOT be expanded; you can only ADD disks. Do not expand any existing disks, as this may leave the DDVE unusable with corruption, or leave the expanded space unused.
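To review how the disks are laid out before and after adding metadata disks, the storage configuration can be displayed from the CLI. This is a generic DD OS command; whether spindle group assignments are shown, and the exact layout, varies by release and platform:
# storage show all    -> On DDVE, the active tier details typically list each device and its spindle group.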
1 - Review the metadata usage; if it reached 100%, more disks must be assigned.
- An example is provided below; some outputs may differ depending on the cloud provider, see the Installation and Administration Guide:
- Display the metadata usage:
# filesys show space tier active local-metadata
--------------------------
Active Tier: local-metadata usage
Size GiB   Used GiB   Avail GiB     Use%
--------   --------   ---------   ------
  1293.0     1291.5         1.5   100.0%             -> We can see Metadata space is full.
--------   --------   ---------   ------
2 - The process of adding more metadata disks differs depending on the cloud provider; see each of the cloud provider manuals at the bottom.
- Display which disks are used:
# disk show hardware
------------------
Disk   Slot        Manufacturer/Model     Firmware   Serial No.   Capacity    Type
       (pci/idx)                                                                  
----   ---------   --------------------   --------   ----------   ---------   -----
dev1   -/a         Virtual BLOCK Device   n/a        (unknown)    250.0 GiB   BLOCK  -> DDOS Disk
dev2   -/b         Virtual BLOCK Device   n/a        (unknown)    10.0 GiB    BLOCK  -> NVRAM disk
dev3   -/c         Virtual BLOCK Device   n/a        (unknown)    1.0 TiB     BLOCK  -> Currently used for Metadata
dev4   -/d         Virtual BLOCK Device   n/a        (unknown)    1.0 TiB     BLOCK  -> Currently used for Metadata
----   ---------   --------------------   --------   ----------   ---------   -----
4 drives present.
2.1 - Go to the AWS/GCP/Azure console and add storage to the DDVE. In this case, a 1 TiB disk was added, which should then be seen as dev5. Remember not to expand or touch any of the existing disks.
MANUALS for DDVE on Cloud (only posting links for DDVE V4 below):
- DDVE V4 on Microsoft Azure Installation and Administration Guide
- DDVE V4 on Google Cloud Platform (GCP) Installation and Administration Guide: https://support.emc.com/docu91982_Data-Domain-Virtual-Edition-4.0-with-DD-OS-6.2.0.10-in-Google-Cloud-Platform-(GCP)-Installation-and-Administration-Guide.pdf?language=en_US
- DDVE V4 on Amazon Web Services (AWS) Installation and Administration Guide: https://support.emc.com/docu91980_Data_Domain_Virtual_Edition_4.0_with_DD_OS_6.2.0.10_in_Amazon_Web_Services_(AWS)_Installation_and_Administration_Guide.pdf?language=en_US

ATTENTION: New and Updated Install/Admin Manuals may exist at the time of reading this article. 
2.2 - At this stage, assuming a new 1 TiB disk was added, it is seen in the output as disk dev5.
# disk show hardware
------------------
Disk   Slot        Manufacturer/Model     Firmware   Serial No.   Capacity    Type
       (pci/idx)                                                                  
----   ---------   --------------------   --------   ----------   ---------   -----
dev1   -/a         Virtual BLOCK Device   n/a        (unknown)    250.0 GiB   BLOCK  -> DDOS Disk
dev2   -/b         Virtual BLOCK Device   n/a        (unknown)    10.0 GiB    BLOCK  -> NVRAM disk
dev3   -/c         Virtual BLOCK Device   n/a        (unknown)    1.0 TiB     BLOCK  -> Currently used for Metadata
dev4   -/d         Virtual BLOCK Device   n/a        (unknown)    1.0 TiB     BLOCK  -> Currently used for Metadata
dev5   -/e         Virtual BLOCK Device   n/a        (unknown)    1.0 TiB     BLOCK  -> Newly added disk for metadata, unused.
----   ---------   --------------------   --------   ----------   ---------   -----
2.3 - Add the new disk to the active tier. In this case it is dev5:
# storage add tier active dev5
ATTENTION: When adding more disks to a DDVE ATOS local-metadata storage from the CLI, the following message may be received:
** Local storage size exceeds the maximum required metadata capacity for this configuration.
Do you want to continue? (yes|no) [no]:
Answer yes to proceed with adding the metadata disk.
2.4 - Expand the file system
# filesys expand
3 - Confirm that the filesystem, which was read-only due to lack of metadata space, can now be written to.
This may be achieved in a number of ways, from confirming that backups are now working fine, to checking that incoming replication has resumed and traffic can be seen. An example check is shown below.
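For instance, the metadata usage can be rechecked and the filesystem state confirmed from the CLI (generic DD OS commands; output varies by release):
# filesys show space tier active local-metadata    -> Use% should now be below 100%.
# filesys status    -> The filesystem should report as enabled and running.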

Additional Information

This content is translated into other languages:
https://downloads.dell.com/TranslatedPDF/PT-BR_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/ZH-CN_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/ES_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/DE_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/FR_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/IT_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/JA_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/NL_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/KO_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/RU_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/PT_KB537488.pdf
https://downloads.dell.com/TranslatedPDF/SV_KB537488.pdf

- The filesystem expansion can be done online.
- Metadata usage is directly proportional to the dedup factor. If the backup workload dedups very well, the DDVE index is larger, and therefore the metadata needs are higher.
- Old and large snapshots not only hold data but also increase the dedup factor, leading the system to run out of metadata sooner. See the example after this list for reviewing snapshots.
- There are only metadata requirement guidelines for deployment: 10% of capacity in metadata disks, assuming a 10x dedup ratio. Higher dedup ratios eventually require more metadata disks.
- DD Support does not have local-metadata storage requirement guidelines for dedup ratios higher than 10x. Metadata disks are recommended to be added in 1 TiB increments until a balance is reached between filesystem usage and metadata usage.
- There is currently no known method of predicting future metadata usage based on potential dedup factors.
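To review which snapshots exist and how old they are (see the snapshot bullet above), they can be listed per MTree. The MTree path below is a placeholder; substitute your own. These are standard DD OS commands:
# mtree list    -> Display the MTrees on the system.
# snapshot list mtree /data/col1/<mtree-name>    -> List the snapshots with their creation and retention times.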

- Index imbalance can also occur on disks of the same size. The most common reason is that new disks were added after the metadata was nearly full, so the older disks still hold most of the metadata structures and receive more I/O. As older data expires, the metadata should balance itself across the disks.

Affected Products

Data Domain, Data Domain Virtual Edition
Article Properties
Article Number: 000055464
Article Type: Solution
Last Modified: 18 Dec 2023
Version:  6