(Not applicable to thin clones) Type the ID of a consistency group to which to associate the new LUN.
View consistency groups explains how to view information about consistency groups.
NOTE: If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
-groupName
(Not applicable to thin clones) Type the name of a consistency group to which to associate the new LUN.
NOTE: If no consistency group is specified with -group or -groupName, the LUN is not assigned to a consistency group.
-size
Type the quantity of storage to allocate for the LUN.
-standalone
(Not applicable to thin clones) Remove the LUN from the consistency group.
-sched
Type the ID of the schedule to apply to the LUN.
View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Pause the schedule specified with the -sched qualifier. Valid values are:
yes
no
-noSched
Remove the protection schedule from the LUN.
-spOwner
(Not applicable to thin clones) Specify the default owner of the LUN. Valid values are:
spa
spb
-fastvpPolicy
(Not applicable to thin clones) Specify the FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
startHighThenAuto (default)—Sets the initial data placement to the highest-performing drives with available space, and then relocates portions of the storage resource's data based on I/O activity.
auto—Sets the initial data placement to an optimum, system-determined setting, and then relocates the storage resource's data among tiers according to I/O activity.
highest—Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing drives with available space.
lowest—Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective drives with available space.
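As a brief illustration of setting this qualifier, pinning a LUN's data to the highest-performing tier might look like the following sketch (the address, credentials, and LUN ID are placeholders matching the examples below):

```shell
# Sketch: pin LUN lun_1's data to the highest-performing drives (placeholder address and credentials).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -fastvpPolicy highest
```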
-lunHosts
Specify a comma-separated list of hosts with access to the LUN.
NOTE: This list must include all hosts that should have access to the LUN. When adding access for new hosts, include the existing hosts in the comma-separated list and append the new hosts. When removing access, include the existing hosts in the list but omit the hosts whose access you want to remove.
-hlus
Specifies the comma-separated list of Host LUN identifiers to be used by the corresponding hosts specified in the -lunHosts option. The number of items in the two lists must match. However, an empty string is a valid value for any element of the Host LUN identifier list, as long as commas separate the list elements. An empty element signifies that the system should automatically assign the Host LUN identifier by which the corresponding host will access the LUN.
If not specified, the system automatically assigns the Host LUN identifier for every host specified in the -lunHosts argument list.
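For instance, a hypothetical command combining the two qualifiers (host names, address, and credentials are placeholders) could assign explicit identifiers to two hosts while letting the system choose the third:

```shell
# Sketch: Host_1 gets Host LUN identifier 0, Host_2 is auto-assigned (empty element), Host_3 gets 2.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -lunHosts "Host_1,Host_2,Host_3" -hlus "0,,2"
```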
-replDest
Specifies whether the resource is a replication destination. Valid values are:
yes
no
NOTE: This value must be no for a thin clone.
-ioLimit
Specify the name of the host I/O limit policy to be applied.
-noIoLimit
Remove an applied host I/O limit policy from the LUN.
-dataReduction
(Not applicable to thin clones) Specify whether data reduction is enabled for the thin LUN. Valid values are:
yes
no
-advancedDedup
Specify whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
yes
no (default)
NOTE: Advanced deduplication is available only on:
Dynamic or Traditional pools in Unity 380F, 480F, 680F, and 880F systems
Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
All-Flash pools and Hybrid Flash pools in Unity Hybrid 380, 480, 680, and 880 systems
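On a supported system, enabling both features might look like the following sketch (placeholder address, credentials, and LUN ID; combining both qualifiers in a single set command is an assumption):

```shell
# Sketch: enable data reduction and advanced deduplication on lun_1 (placeholders throughout).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -dataReduction yes -advancedDedup yes
```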
-addHosts
Specify the host or hosts to be granted access to the LUN, separated by commas. This option incrementally adds hosts that can access the LUN; it does not overwrite the existing list of hosts with LUN access.
-removeHosts
Specify the host or hosts whose access to the LUN you want to remove, separated by commas. This option incrementally removes hosts from LUN access; it does not overwrite the existing list of hosts with access to the LUN.
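Unlike -lunHosts, these qualifiers do not require restating the existing host list. A hypothetical sketch (host names, address, and credentials are placeholders):

```shell
# Sketch: grant host13 and host14 access without listing the hosts that already have it.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_2 set -addHosts "host13,host14"
```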
-addSnapHosts
Specify the host or hosts to be granted access to the LUN snapshots, separated by commas. This option incrementally adds hosts that can access the LUN snapshots; it does not overwrite the existing list of hosts with access to the LUN snapshots.
-removeSnapHosts
Specify the host or hosts whose access to the LUN snapshots you want to remove, separated by commas.
Example 1
The following command updates LUN lun_1 with these settings:
Name is NewName.
Description is “My new description.”
The primary storage size is 150 MB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -name NewName -descr "My new description" -size 150M
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_1
Operation completed successfully.
Example 2
The following command adds access for two new hosts to LUN lun_2 in addition to its existing hosts:
host13
host14
NOTE: Although host1, host2, and host11 already have access to the LUN, the complete list of hosts that should have access to the LUN must be specified whenever host access is changed with -lunHosts.
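Given that note, the update specifies the full host list with the two new hosts appended; a sketch inferred from the note, with placeholder address and credentials:

```shell
# Sketch: full host list with host13 and host14 appended (inferred from the note above).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_2 set -lunHosts "host1,host2,host11,host13,host14"
```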
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_2
Operation completed successfully.
Example 3
The following command gives Host_2 and Host_3 access to LUN sv_1 with host LUN identifiers 8 and 9, respectively, and removes Host_1's access to the LUN.