atomari77 (October 29th, 2006 05:00):
Thanks a lot for the immediate reply. I'm sorry, but I didn't understand your "abandon the approach" suggestion; network connectivity between all the storage nodes, the server, and the clients is OK. I guess what you mean by scripting is to change the affinity in the client's storage node attribute before the groups run, right?
And if you'll excuse me, what about the pools? Sorry, I posted this before I saw your reply...
Thanks
ble1 (October 29th, 2006 05:00):
You can't: the storage node field is shared by all instances of a client. You can only change it, which would require some scripting, or abandon the approach of having the same client write to different storage nodes (the client-to-storage-node relationship should be based on network connectivity).
atomari77 (October 29th, 2006 05:00):
And if I can't do that, what about the pool's devices attribute? Can I make the groups containing these weekly instances, for example, write to pools that are restricted to mount on the remote storage node's devices? Would that be possible without changing the storage node affinity at the client level?
Thanks again
ble1 (October 29th, 2006 08:00):
I was more referring to the approach: why would you have the same client writing to different storage nodes? I can't figure out, off the top of my head, any advantage of such an approach. Perhaps if you described your setup it would make sense.
As for the pool thing, yes, I believe that would work... actually, that should work, you are right.
cfaller (October 29th, 2006 20:00):
Regardless of the devices selected/"restricted" into the pools, how do you get the client to use a different storage node than the one configured in the client resource? You can't do it.
If you have a group selected into a specified pool, and the pool has Storage Node 3 and all of its devices selected, the client will fail if it is configured to use Storage Node 2 and none of its devices are allowed to use this pool.
You can't get one client to back up to two different storage nodes without a script that performs nsradmin updates on the fly.
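The "nsradmin updates on the fly" idea could be sketched roughly like this. A hedged example only: the server, client, and storage node names are placeholders, and the actual nsradmin call is left commented out since it needs a live NetWorker server.

```shell
#!/bin/sh
# Flip a client's storage node affinity before a group runs.
# "backupsrv", "client-a" and "remote-sn" are placeholder names.
NSR_SERVER="backupsrv"
CLIENT="client-a"
TARGET_SN="remote-sn"

# Build an nsradmin input script that updates the client's
# "storage nodes" attribute (shared by all instances of the client).
cat > /tmp/set_sn.nsri <<EOF
. type: NSR client; name: ${CLIENT}
update storage nodes: ${TARGET_SN}, nsrserverhost
EOF

# The real calls would be (commented out here):
# nsradmin -s "${NSR_SERVER}" -i /tmp/set_sn.nsri
# savegrp -G weekly_full

cat /tmp/set_sn.nsri
```

A cron job could run this just before the weekly group and a second script could restore the original attribute afterwards.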
ble1 (October 29th, 2006 22:00):
You could do that with a device restriction where the client would need both storage nodes listed (but it also means you would need a pool just to accommodate such a setup, and I'm not a fan of that approach at all). Actually, with different libraries you wouldn't even need any device restrictions if you had different pools...
DPCDSB (October 30th, 2006 16:00):
I'm using Networker v7.3.1 on a Server 2003 box with no other storage nodes and all of my clients have the default entry of "nsrserverhost" in their storage node attribute boxes.
Here's what I think would work for you:
Put "nsrserverhost" in each client definition's storage node attribute box. Use two different groups, one for each client. In the properties window of each group, you should see all devices available on all storage nodes. Select only the devices that are in the particular storage node where you want the data for a particular save set of your client to go. If you can't see all the devices of each storage node, make sure the server's storage nodes attribute box has both of the storage nodes listed in it. So now you are selecting different devices in two different groups, but you have the same storage node entry in each client definition.
Also, you shouldn't require more than one media pool with this setup.
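If it helps to verify which devices belong to which storage node before selecting them in the groups, a query like the following could be fed to nsradmin. The server name is a placeholder; remote storage node devices show up with names of the form rd=<node>:<path>.

```shell
#!/bin/sh
# Write an nsradmin query that lists every device resource by name.
cat > /tmp/list_devs.nsri <<'EOF'
show name
print type: NSR device
EOF

# Needs a live NetWorker server, so commented out here:
# nsradmin -s backupsrv -i /tmp/list_devs.nsri

cat /tmp/list_devs.nsri
```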
--------
From the field help in one of my client's property boxes:
Storage nodes - This is an ordered list of storage nodes for the client to use when saving its data. Its saves are directed to the first storage node that has an enabled device and a functional media daemon (nsrmmd). A default value of nsrserverhost represents the server.
Clone storage nodes - This attribute specifies the hostnames of the storage nodes that are to be selected for the `save' side of clone operations. Cloned data originating from the storage node will be directed to the first storage node that has an enabled device and a functional media daemon (nsrmmd). There is no default value. If this attribute has no value, the server's 'clone storage nodes' will be consulted. If this attribute also has no value, then the server's 'storage nodes' attribute will be used to select a target node for the clone.
Recover storage nodes - This is an ordered list of storage nodes for the client to use when recovering data. Data will be recovered from the first storage node that has an enabled device on which we can mount the source volume and a functional media daemon (nsrmmd). If this attribute has no value, the client's `storage nodes' will be consulted. If this attribute also has no value, then the server's 'storage nodes' attribute will be used to select a target node for the recover operation.
--------
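The three attribute descriptions quoted above share one selection rule: walk an ordered list and take the first node that has an enabled device and a functional nsrmmd. As a toy model (plain shell, not NetWorker code; the node names and their states are made-up examples):

```shell
#!/bin/sh
# Hard-coded example states: both nodes have an enabled device,
# but only the server's nsrmmd is running.
has_enabled_device() { case "$1" in remote-sn|nsrserverhost) return 0;; *) return 1;; esac; }
nsrmmd_running()     { case "$1" in nsrserverhost) return 0;; *) return 1;; esac; }

# Walk the ordered "storage nodes" list; print the first usable node.
pick_storage_node() {
    for node in "$@"; do
        if has_enabled_device "$node" && nsrmmd_running "$node"; then
            echo "$node"
            return 0
        fi
    done
    return 1
}

# remote-sn is listed first, but its nsrmmd is down,
# so the save falls through to the server:
pick_storage_node remote-sn nsrserverhost
```

The clone and recover attributes add one more layer: an empty client-level list defers to the server's list before this same walk is applied.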
When I have different instances of the same client, say a weekly instance and a daily instance, each with its own save sets, and I have another storage node in addition to the default backup server's storage node:
How can I make each of these client instances back up to a different storage node? Because if I choose to change the storage node attribute in one of them, it also changes in the others, i.e. in the rest of the instances of the same client.
Really appreciate any help on this.
Thanks
Jason20 (October 31st, 2006 06:00):
I was more referring to the approach: why would you have the same client writing to different storage nodes? I can't figure out, off the top of my head, any advantage of such an approach.
I have a similar requirement where I need to periodically archive some clients to a geographically remote storage node/site. I had considered using storage node affinity and pool device restrictions to control the destination.
One other option I thought of would be to create a second instance of the client under a different name (say, with a prefix) and specify a backup command of save -c to keep the indexes in one place.
Overall I think the first option seems much cleaner though.
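The save -c idea could look something like this as a custom backup command. A sketch only: the wrapper name, client name, and dry-run echo are illustrative; what is real is that NetWorker custom backup commands must be named starting with "save" or "nsr", and that save -c files the index entries under the given client name.

```shell
#!/bin/sh
# Hypothetical wrapper, e.g. installed as "savewk" and set as the
# Backup command of the renamed client instance. "client-a" is a
# placeholder for the real client whose index should get the entries.
REAL_CLIENT="client-a"

# Dry run for illustration; a real wrapper would exec the command
# instead of echoing it.
echo save -c "${REAL_CLIENT}" "$@" > /tmp/savewk_demo.txt
cat /tmp/savewk_demo.txt
```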
atomari77 (October 31st, 2006 06:00):
Sorry, I've just read your post. I'm not sure, but is there a devices selection in the group properties? I'll check and see, because if so, then of course your approach will be much neater and cleaner...
Thanks
ble1 (October 31st, 2006 06:00):
Why not simply clone (or stage) the already existing backup from location A to location B? In the case of archiving (versus backup) you need a different pool anyway, so I see no issues there.
atomari77 (October 31st, 2006 06:00):
Mega thanks to you all, folks. I really appreciate your input.
In fact I need this because we wanted to keep the full weekly backup on that remote storage node, which has enough space and capacity. I didn't go with the clone option because we don't have enough tapes on the local backup server's node for that purpose; it's full of daily incremental tapes.
So, actually, restricting the devices attribute in the pool resource to the remote storage node's devices does not work; it keeps reporting a "no matching device found" error at group startup (although the node is properly configured and appears fine in the devices and autoloaders tab). You can only do that for the backup server's own storage node devices, i.e. you can restrict to those in the pool without any errors.
So what I did is check the local backup server's devices in the pool where the daily group writes, so it overrides the storage node affinity sequence defined on the clients included in that group.
Then I cleared all the device checkboxes for the pool where the weekly full backup group writes, so the storage node affinity sequence takes effect for the clients included in that group, and it sent the backup to the remote storage node.
Yet it still needs a volume from that remote pool to be present at the local storage node for the purpose of writing the index data to it... I guess I can live with that, unless someone has a suggestion on how to force the group to write the index at the remote node too?
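For reference, the working arrangement described above corresponds roughly to this nsradmin-style sketch (pool names and device paths are placeholders):

```
# Daily pool: restricted to the local backup server's devices,
# overriding client storage node affinity.
. type: NSR pool; name: DailyIncr
update devices: /dev/rmt/0cbn, /dev/rmt/1cbn

# Weekly pool: devices left unrestricted, so the clients'
# storage node affinity sends the data to the remote node.
. type: NSR pool; name: WeeklyFull
update devices:
```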
Thanks again everybody
DPCDSB (October 31st, 2006 09:00):
Sorry about that, it was late and I was getting a bit punchy.
You can select which group(s) can use a particular pool by enabling them in the groups selection area of the pool properties box, and you can select which device(s) a pool uses by enabling them in the devices selection area of the same box.
Does that make more sense? : )
Remember, a lot of the advanced features are hidden unless you select Diagnostic Mode from the view menu of NMC...
Jason20 (November 1st, 2006 03:00):
Why not simply clone (or stage) the already existing backup from location A to location B?
There is a requirement that the backup be a full so that it can be independently restored. Consolidation would appear to fit the bill here, but I have always been warned off using it.
ble1 (November 1st, 2006 04:00):
OK, but I guess you have full backups at location A anyway? Or does it have to be on a specific day? In such a case, I would expect you to have different pools for A and B, and such a setup would be easy to accomplish.