
VNX: What is the difference between Thick LUNs and Thin LUNs?

Summary: This article explains the difference between Thick LUNs and Thin LUNs.

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

What is the difference between Thick LUNs and Thin LUNs created from Pools?

Two different types of LUN may be created on Pools: Thick LUNs and Thin LUNs. While both allocate space on demand, there are significant differences between them in terms of both operation and performance.

Thick LUN:

When a Thick LUN is created, the entire space that will be used for the LUN is reserved; if there is insufficient space in the Pool, the Thick LUN will not be created. An initial allocation of 3 GiB holds metadata and user data; as users save additional data to the LUN, 1 GiB slices are allocated as needed. Each slice contains 1 GiB of contiguous Logical Block Addresses (LBAs), and a slice is allocated to the Thick LUN when it is first written. Because tracking happens at a granularity of 1 GiB, the amount of metadata is relatively low, and the lookups required to find the location of a slice in the Pool are fast. Because lookups are still required, however, Thick LUN accesses will be slower than accesses to Traditional LUNs.

Thin LUN:

Thin LUNs also allocate 1 GiB slices when space is needed, but the granularity inside those slices is at the 8 KiB block level. Any 1 GiB slice is allocated to only one Thin LUN, but the 8 KiB blocks will not necessarily come from contiguous LBAs. Oversubscription is allowed, so the total size of the Thin LUNs in a Pool can exceed the size of the available physical data space; monitoring is required to ensure that out-of-space conditions do not occur. There is appreciably more overhead associated with Thin LUNs than with Thick LUNs and Traditional LUNs, and performance is substantially reduced as a result.
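The difference in tracking granularity explains most of the metadata gap. As an illustrative sketch (these function names are not part of any VNX tooling), a Thick LUN needs one tracking entry per 1 GiB slice, while a Thin LUN needs one per 8 KiB block:

```python
GIB = 2**30
KIB = 2**10

def tracking_entries_thick(capacity_gib: int) -> int:
    """Thick LUNs track allocation per 1 GiB slice: one entry per slice."""
    return capacity_gib

def tracking_entries_thin(capacity_gib: int) -> int:
    """Thin LUNs track allocation per 8 KiB block inside the slices."""
    return capacity_gib * GIB // (8 * KIB)

print(tracking_entries_thick(100))  # 100 entries
print(tracking_entries_thin(100))   # 13,107,200 entries
```

For the same 100 GiB of capacity, the Thin LUN must track over 13 million 8 KiB blocks versus 100 slices, which is why Thin LUN metadata is larger and its lookups are slower.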

How to calculate the metadata of Pool LUNs?

Metadata is associated with the use of both Thick LUNs and Thin LUNs. The metadata is used to locate the data on the private LUNs used in the Pool structure; the lookups required to find the location of a slice in the Pool are what slow performance. The amount of metadata depends on the type and size of the LUN.

For a Thick LUN: metadata (GB) = 0.001 × capacity (GB) + 3 GB
For a Thin LUN:  metadata (GB) = 0.02 × capacity (GB) + 3 GB
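These formulas can be expressed as a short Python sketch (the function names are illustrative only, not part of any VNX tooling):

```python
def thick_lun_metadata_gb(capacity_gb: float) -> float:
    """Approximate Thick LUN metadata: 0.1% of capacity plus 3 GB."""
    return 0.001 * capacity_gb + 3.0

def thin_lun_metadata_gb(capacity_gb: float) -> float:
    """Approximate Thin LUN metadata: 2% of capacity plus 3 GB."""
    return 0.02 * capacity_gb + 3.0

# Example: a 500 GB Pool LUN
print(thick_lun_metadata_gb(500))  # 3.5 GB
print(thin_lun_metadata_gb(500))   # approximately 13 GB
```

For the same capacity, a Thin LUN carries roughly 20 times the per-GB metadata overhead of a Thick LUN, plus the same fixed 3 GB allocation.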

In summary, note that:

The use of Thin LUNs is not supported in some environments: for example, in VNX File storage groups.
Thin LUNs should never be used where high performance is an important goal (see article 335002).
Pool space should be monitored carefully (Thin LUNs allow Pool oversubscription, whereas Thick LUNs do not). The system issues an alert when the consumption of any Pool reaches a user-selectable limit. By default, this limit is 70%, allowing ample time for the user to take any corrective action required (see article 78223).
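The monitoring points above can be sketched in Python (an illustrative model only; the function names and parameters are assumptions, not a VNX API):

```python
def pool_alert(consumed_gb: float, usable_gb: float,
               threshold_pct: float = 70.0) -> bool:
    """True when Pool consumption reaches the user-selectable alert
    threshold (the system default is 70%)."""
    return (consumed_gb / usable_gb) * 100 >= threshold_pct

def oversubscription_ratio(total_thin_lun_gb: float, usable_gb: float) -> float:
    """Subscribed Thin LUN capacity versus physical Pool capacity;
    a value above 1.0 means the Pool is oversubscribed."""
    return total_thin_lun_gb / usable_gb

# Example: 750 GB consumed in a 1000 GB Pool, 2000 GB of Thin LUNs subscribed
print(pool_alert(750, 1000))               # True  (75% has reached the 70% limit)
print(oversubscription_ratio(2000, 1000))  # 2.0
```

In an oversubscribed Pool, the alert threshold is what buys the administrator time to add capacity or migrate LUNs before an out-of-space condition occurs.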

Additional Information

Thin LUNs should be positioned in Block environments where space saving and storage efficiency outweigh performance as the main goals. Areas where storage space is traditionally over-allocated, and where the Thin LUN allocate-on-demand functionality is an advantage, include user home directories and shared data space.

If FAST VP is a requirement, and Pool LUNs are being proposed for that reason, it is important to remember that Thick LUNs achieve better performance than Thin LUNs.
Be aware that Thin LUNs are not recommended in certain environments. Among these are Exchange 2010, and file systems on VNX.


Space is assigned to Thin LUNs at a granularity of 8 KiB (inside a 1 GiB slice). The implication here is that tracking is required for each 8 KiB piece of data saved on a Thin LUN, and that tracking involves capacity overhead in the form of metadata. In addition, since the location of any 8 KiB piece of data cannot be predicted, each data access to a Thin LUN requires a lookup to determine the data location. If the metadata is not currently memory-resident, a disk access will be required, and an extended response time will result. This makes Thin LUNs appreciably slower than Traditional LUNs, and slower than Thick LUNs.

Since Thin LUNs make use of this additional metadata, recovery of Thin LUNs after certain types of failure (e.g., cache dirty faults) will take appreciably longer than recovery for Thick LUNs or Traditional LUNs. A strong recommendation, therefore, is to place mission-critical applications on Thick LUNs or Traditional LUNs.

In some environments (those with a high locality of data reference), FAST Cache may help to reduce the performance impact of the metadata lookup.


Units

* Decimal - all are powers of 10

** KB, MB, GB, TB = 10^3, 10^6, 10^9, 10^12
** 1,000; 1,000,000; 1,000,000,000; 1,000,000,000,000

* Binary - all are powers of 2

** KiB, MiB, GiB, TiB = 2^10, 2^20, 2^30, 2^40
** 1,024; 1,048,576; 1,073,741,824; 1,099,511,627,776

* Disk manufacturers use decimal units to list capacity
* Link speeds are measured in decimal units
* I/O sizes, cache page sizes, and LUN sizes are binary units

Units used in the design of storage environments can be confusing. In the SI system of measurement, multipliers are decimal, and mega, for example, means 10^6, or 1,000,000.
Some measurements used in IT, though, are based on the binary equivalents, which are somewhat larger than the decimal units. Note, for example, that at the TB/TiB level the binary unit is almost 10% larger than the decimal unit.
Disk manufacturers specify disk sizes in decimal units, and use a sector size of 512 bytes when discussing formatted sizes. The VNX Block systems, and their CLARiiON predecessors, use 520-byte sectors, and this must be taken into account as well. Note that cache sizes, LUN sizes, and file system sizes are specified in binary units.
The binary units are a relatively new standard, established by the IEC in 2000. The standard has been accepted by all major standards organizations including the IEEE.
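The decimal/binary gap can be checked with a short Python sketch:

```python
# Decimal (SI) multipliers versus binary (IEC) multipliers
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

# At the TB/TiB level, the binary unit is almost 10% larger
ratio = BINARY["TiB"] / DECIMAL["TB"]
print(f"TiB/TB = {ratio:.4f}")  # TiB/TB = 1.0995

# A "1 TB" disk (decimal, as specified by manufacturers) in binary units:
print(10**12 / 2**30)  # about 931.3 GiB
```

This is why a disk sold as 1 TB shows up as roughly 931 GiB once capacities are reported in binary units.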



In FLARE Release 32, when you create a new Thick LUN, all the slices are pre-assigned. This keeps all the slices on the same SP when the LUN is created ("the array LBAs are assigned sequentially") and prevents some slices from being assigned to the peer SP, which can occur under certain circumstances.
 

Thick LUNs in Release 32:

With the introduction of the EMC VNX Operating Environment (OE) for Block Release 5.32 (File version 7.1), there is no change in behavior for Thin LUNs, but Thick LUNs are fully allocated upon creation. This means that all 1 GiB slices are allocated when the LUN is first created. Writes are still organized according to Logical Block Address range, as was originally the case. Thick LUNs and Thin LUNs can share the same Pool.

When a Thick LUN is created, all of its capacity is reserved and allocated in the Pool for use by that LUN; therefore, a Thick LUN will never run out of capacity. Any new writes are located and distributed in the pre-allocated area, so the host-reported capacity is roughly equal to the consumed capacity. Thick LUNs are higher performing than Thin LUNs because of the direct addressing, and should be used for applications where performance is more important than space savings.

This enhancement has beneficial implications for FAST VP: by fully allocating the Thick Pool LUN upon creation, users can better control which tier the slices are written to. As Pools are initially being created, and there is still sufficient space in the highest tier, users can be assured that when they create a LUN with either Highest Available Tier or Start High, then Auto-Tier, data will be written to the highest tier because the LUN is allocated immediately.


Thick LUN consumption per tier for Release 31:

When you create a Thick LUN, the Pool storage required for that Thick LUN is not actually allocated, but rather reserved. Since these reservations are based on the Pool rather than the tier, this reserved storage is not reflected in the tier breakdown at the Thick LUN level until the Thick LUN is written to and the storage is actually allocated.

Additionally, when you set a tiering preference for a Thick LUN, the storage is only reserved for the LUN, even if the Thick LUN appears to be fully provisioned. Since these reservations are not made on a per-tier level, by the time the data is actually allocated to the Thick LUN as the result of a write, the originally requested tier of storage may no longer be available. If you enable FAST, this problem is resolved during subsequent relocations.

Affected Products

VNX/VNXe

Products

CLARiiON CX4 Series, VNX Family Monitoring and Reporting, VNX1 Series, VNX2 Series, VNXe1 Series, VNX/VNXe
Article Properties
Article Number: 000010830
Article Type: How To
Last Modified: 06 Feb 2024
Version:  4