68 Posts
0
3857
ScaleIO Licensing - RAW Storage calculation with device limit
Hello,
Right now we are evaluating ScaleIO; we are planning to use it for a new Hyper-V deployment that will scale in the future.
I'd like to understand exactly how licensing works.
We are planning to start with a cluster of three all-flash nodes with 1.92 TB SSDs and a ScaleIO "starter" license (RAW capacity: 12 TB).
Since a higher number of disks helps achieve better performance, we would like to install more than 2 SSDs per node and limit the capacity so that we stay at 12 TB of RAW storage. I saw a function in the "Backend" view that lets us limit capacity at the physical-disk level, called "Set Device Capacity Limit". When it is applied, the "Capacity" applet in the dashboard shows the limited space as "Decreased", but the total is still the sum of the raw space of the disks installed in every system. Obviously the "Decreased" space is unavailable for creating volumes.
So if we install 6 SSDs per node, we have a total RAW physical space of 1.92 TB * 18 = 34.56 TB, but if we limit every single disk to 666 GB using the "Set Device Capacity Limit" function, we have 666 GB * 18 = 11,988 GB. In this case, are we within the terms of use of the 12 TB license?
In the image you can see an example: a ScaleIO deployment of 3 nodes. Every node has a disk of 127 GB, so we have a total of 381 GB. I set a limit of 100 GB on the disk installed in every node, so 81 GB are marked as "Decreased" and aren't available for volume creation. In this specific case, do we have to license ScaleIO for 381 GB, or is it sufficient to license it for 300 GB? (I know that the minimum license is 12 TB, but the example is useful for understanding how licensing works.)
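To make the arithmetic explicit, here is a minimal sketch of the two totals I'm talking about (the numbers are just my planned values; the question is which of the two totals the license is counted against):

```python
# Capacity math for the planned deployment above (values are just my examples).
nodes = 3
ssds_per_node = 6
ssd_size_gb = 1920           # 1.92 TB SSDs
device_limit_gb = 666        # value used with "Set Device Capacity Limit"

raw_total_gb = nodes * ssds_per_node * ssd_size_gb           # 34,560 GB = 34.56 TB
limited_total_gb = nodes * ssds_per_node * device_limit_gb   # 11,988 GB, just under 12 TB
decreased_gb = raw_total_gb - limited_total_gb               # shown as "Decreased" in the GUI

print(f"raw: {raw_total_gb} GB, limited: {limited_total_gb} GB, decreased: {decreased_gb} GB")
```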
Thanks in Advance,
Davide
SanjeevMalhotra
138 Posts
0
September 4th, 2016 06:00
From the 2.0 User Guide:
Setting device capacity limits
In circumstances when you need to replace a device in your system with a device of a smaller capacity, you should first set the capacity limit of the device to be removed to less than its full capacity. In such a case, capacity will be decreased, but the size of the disk remains unchanged. The capacity assigned to the SDS device must be smaller than its actual physical size.
So modifying the SDS device capacity was not intended to "short stroke" the devices, and licensing ignores it. It was designed for device replacement, for the case where the replacement disk is smaller.
A few options to consider:
1. Purchase a license with the capacity increased to match the total available.
2. Create partitions on each SDS device to match the smaller size and use the partitions instead of the full devices. This allows for the maximum number of spindles and is really the best option (see the rough sketch after this list).
3. Remove some devices from each SDS.
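For option 2, the idea is roughly the following (a sketch only; the device names and the partition size are placeholders you would adapt to your nodes):

```python
# Rough sketch for option 2: create one fixed-size partition per SSD and add
# the partitions, not the whole disks, to the SDS. Device names and the
# partition size below are placeholders.
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # raw SSDs on one SDS node
PART_END = "666GiB"                              # capacity ScaleIO should see per disk

for dev in DEVICES:
    # Fresh GPT label, then a single partition from 1MiB up to the chosen size.
    subprocess.run(["parted", "-s", dev, "mklabel", "gpt"], check=True)
    subprocess.run(["parted", "-s", dev, "mkpart", "primary", "1MiB", PART_END], check=True)
    print(f"add {dev}1 to the SDS instead of {dev}")
```

ScaleIO then only sees the partition capacity, so you stay within the licensed amount while still using all the spindles.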
pawelw1
306 Posts
1
September 4th, 2016 01:00
Hi Davide,
I believe you need a license for 381 GB; the "Decreased" capacity is taken into account for licensing.
Alternatively, you can partition the devices and add the partitions to the SDS instead of the full disks, so you won't exceed your licensed capacity.
Many thanks,
Pawel
c0redump
68 Posts
0
September 4th, 2016 14:00
Hello SanjeevMalhotra,
thank you for clearing up my confusion about licensing. However, I have some considerations about the suggested options for working around this:
1. A lot of companies have a fixed, separate budget for hardware and for software licensing, so increasing the licensed capacity may not be an option in every case. Furthermore, increasing the license limit only to gain performance, when right now we don't need more capacity, could be a point that is difficult to explain to investors and executive board members.
2. Creating partitions is the best option, but there are two different approaches, and each leads to its own kind of limit:
- Static partition size: during setup we can choose a static partition size to add to the SDS, but this way we limit future scalability.
- Dynamic partition size (Pawel's way): that is a good idea, but partitions in Linux, for example, aren't hot-resizable: the disk has to be taken offline and the partition deleted and recreated. As long as the start offset is kept, the data is preserved, but such operations are delicate and management becomes more complex (sketched just after this list).
3. Since our whole goal is to increase performance with more spindles, that solution can't be used.
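For clarity, the delete-and-recreate operation I mean in the dynamic approach looks roughly like this (a sketch with sgdisk, using a placeholder device and size; the disk would be out of the SDS while this runs):

```python
# Sketch of the "dynamic partition size" approach: with the disk removed from
# the SDS, delete partition 1 and recreate it larger at the SAME start sector
# so the data stays in place. Device name and new size are placeholders.
import subprocess

DEV = "/dev/sdb"
NEW_END = "+800GiB"           # new end, relative to the preserved start sector

# Read the current start sector of partition 1 (it must be reused exactly).
info = subprocess.run(["sgdisk", "--info=1", DEV],
                      capture_output=True, text=True, check=True)
start_sector = next(line.split()[2] for line in info.stdout.splitlines()
                    if line.startswith("First sector"))

# Delete and recreate partition 1 with the same first sector but a bigger end.
subprocess.run(["sgdisk", "--delete=1", DEV], check=True)
subprocess.run(["sgdisk", f"--new=1:{start_sector}:{NEW_END}", DEV], check=True)
```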
There is a fourth possible solution, but I don't know whether it is compatible with ScaleIO and supported by EMC:
The LVM way, but with a ScaleIO-specific setup:
We can create a volume group and a logical volume (sized as desired) for every single physical disk installed in every SIO node. Having a different volume group on each physical volume avoids striping of blocks across multiple disks at the LVM layer, so it is like working with physical disks at the SIO level. This strategy permits hot-resizing a logical volume when needed. Obviously I plan to expand all the logical volumes by the same amount every time a resize is needed. This solution adds a layer of complexity, but it is more flexible and could eventually be used in conjunction with lvmcache to work around ScaleIO's missing "write cache" feature, using two NVMe cards replicated in RAID 1.
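To be concrete, the layout I have in mind looks roughly like this (only a sketch; the disk names, VG/LV names and sizes are just from my test setup, and I don't know yet whether EMC would support it):

```python
# Sketch of the per-disk LVM layout: one VG and one LV per physical disk, so
# there is no LVM striping across disks and each LV can be grown later.
# Disk names, VG/LV names and sizes are just from my test setup.
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # physical SSDs on one SIO node
LV_SIZE = "666G"                               # initial size handed to ScaleIO

def run(*cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

for disk in DISKS:
    name = disk.rsplit("/", 1)[-1]             # sdb, sdc, ...
    run("pvcreate", disk)
    run("vgcreate", f"vg_{name}", disk)        # one VG per disk -> no cross-disk striping
    run("lvcreate", "-L", LV_SIZE, "-n", "lv_sio", f"vg_{name}")
    # /dev/vg_sdb/lv_sio, /dev/vg_sdc/lv_sio, ... are then added to the SDS.

# Later, when more licensed capacity is available, every LV would be grown by
# the same amount, e.g.:
#   lvextend -L +100G /dev/vg_sdb/lv_sio
```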
As I said, I don't know whether this solution works technically and whether it is supported by EMC, but I'm interested to know what you think. Could we work in this direction?
Thanks in Advance,
Davide
c0redump
68 Posts
0
September 5th, 2016 11:00
Hi all,
just an update: technically, working with LVM in the way described in my previous post works perfectly. I didn't notice any performance hit due to the LVM layer. Unfortunately, I found a limitation: if I expand a logical volume, the disk size shown by the SDS remains unchanged. It seems that the disk size is taken into account only when the disk is added. I didn't find any command to refresh the disk and pick up the updated size. The only way to see the additional space is to remove and re-add the disk, so we have to deal with a rebuild/rebalance operation anyway. Is there any command to force an update of the disk size on the SDS?
Anyway, I don't know if this is a supported scenario. In the next days I want to add lvmcache, using two replicated NVMe flash disks, to boost write performance; I will keep this post updated.
Thanks,
Davide