December 12th, 2014 08:00

Failover and retrieving retention class contexts

Hi,

I have a method storeContent that stores my documents.

In this method I use the following Java code:

retentionClassList = thePool.getRetentionClassContext();

retentionClass = retentionClassList.getNamedClass(MyParameter);

theClip.setRetentionClass(retentionClass);

I want to use failover between clusters, but the problem is that, as mentioned in the EMC Centera documentation:

"Other operations, such as monitoring and retrieving retention class contexts, always fail when the primary cluster is unavailable"

so my storeContent will fail if the primary cluster is unavailable!

Any help?

Thanks for your replies.

208 Posts

December 12th, 2014 10:00

Hello y.labchiri -

I assume you are following Centera best practice and creating a single FPPool instance to share across all of your threads and transactions.  As long as you do this, you can retrieve the retention class context once right after a successful connection, and then use it repeatedly throughout the life of your process.  There's no need to look up the RC for each clip you create.
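
For illustration, here is a minimal sketch of that pattern, reusing the FPLibrary calls from your own snippet and assuming the usual com.filepool.fplibrary package layout; the connection string and the "MyRetentionClass" name are placeholders, not anything specific to your setup:

import com.filepool.fplibrary.*;

// Sketch: open one shared FPPool and resolve the retention class once,
// right after a successful connect, then reuse it for every clip.
// The connection string and "MyRetentionClass" are placeholders.
public class RetentionClassCache {
    public static void main(String[] args) throws FPLibraryException {
        FPPool thePool = new FPPool("PRIP1,PRIP2?myPEAfilePathName.pea");

        FPRetentionClassContext retentionClassList = thePool.getRetentionClassContext();
        FPRetentionClass retentionClass = retentionClassList.getNamedClass("MyRetentionClass");

        // Later, for every clip you create (no per-clip context lookup needed):
        //   theClip.setRetentionClass(retentionClass);
    }
}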

Good Luck,

Mike Horgan

15 Posts

December 12th, 2014 11:00

Thanks for your reply.

But if I define my primary and secondary cluster, and the primary crashes and we switch to the secondary, will retrieving the RC succeed? Because the Centera documentation says:

"Other operations, such as monitoring and retrieving retention class contexts, always fail when the primary cluster is unavailable"

15 Posts

December 12th, 2014 11:00

Yes, I agree with you, but consider the same code in a read operation.

208 Posts

December 12th, 2014 11:00

There generally is no need for a retention class context in a read operation, as the retention APIs handle calculating the expiration time for both fixed and class-based retention periods.    But yes, this could conceivably be an issue. However, I would not hold my breath waiting for the API to change (because it most likely will not); I would just code around it as best I could for my use case.

Good Luck,

Mike Horgan

208 Posts

December 12th, 2014 11:00

Remember that writes do not fail over automatically with the Centera SDK. So if your primary goes down you would have to reconnect in order to make the 'old' secondary into the new primary (assuming you have bi-directional replication as well). At which point you would have an opportunity to re-fetch the retention class context immediately after connection.
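
To make that concrete, here is a rough sketch of the reconnect-and-refresh step, reusing the FPLibrary calls from your snippet; the PRIMARY/REPLICA connection strings and the class structure are illustrative assumptions, not SDK requirements:

import com.filepool.fplibrary.*;

// Sketch: writes do not fail over automatically, so on a write-side failure
// we reopen the pool against the replica IPs (making the old secondary the
// temporary primary) and re-fetch the retention class context straight away.
// The connection strings and retention class name are placeholders.
public class FailoverAwareStore {

    private static final String PRIMARY = "PRIP1,PRIP2?myPEAfilePathName.pea";
    private static final String REPLICA = "REPIP1,REPIP2?myPEAfilePathName.pea";

    private FPPool thePool;
    private FPRetentionClass retentionClass;
    private final String retentionClassName;

    public FailoverAwareStore(String retentionClassName) throws FPLibraryException {
        this.retentionClassName = retentionClassName;
        connect(PRIMARY);
    }

    // (Re)open the pool and refresh the retention class context immediately,
    // while the cluster we just connected to is known to be reachable.
    private void connect(String connectionString) throws FPLibraryException {
        thePool = new FPPool(connectionString);
        retentionClass = thePool.getRetentionClassContext().getNamedClass(retentionClassName);
    }

    public void storeContent(FPClip theClip) throws FPLibraryException {
        try {
            theClip.setRetentionClass(retentionClass);
            // ... write the clip here with your existing storeContent logic ...
        } catch (FPLibraryException writeError) {
            // Primary unreachable (or another write-side error): log it, switch
            // to the replica as the temporary primary (bi-directional
            // replication assumed), then retry or queue the write.
            connect(REPLICA);
            throw writeError;
        }
    }
}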

Regards,

Mike Horgan

15 Posts

December 13th, 2014 12:00

Thank you very much for your reply. I will do as you suggest.

Question: if the pool is opened for each transaction, will failover not work?

Another question: I have 2 clusters for failover, and each cluster has 2 IP addresses. In the pool open, must I specify just the 2 IPs of the primary cluster, or must I use all 4 IPs?

I want that if the first IP of the primary cluster fails, we move to the second IP of the primary cluster, and if that fails, we move to the first and then the second IP of the secondary cluster.

Thanks for your help.

409 Posts

December 14th, 2014 06:00

Hi

You should perform a pool connection only once and use the pool reference returned in all your IO operations.  There is no need to continually open and close pool connections.  Pool Opens are expensive operations and will therefore add unnecessarily to the time to do IO operations.

If you have two Centera clusters then you only need to supply the primary IP addresses in the connection string. We recommend supplying at least 2 primary IPs, and probably no more than 4 (if they exist), although you can specify all primary IP addresses if you want, e.g.


"PRIP1,PRIP2?myPEAfilePathName.pea"

When the SDK makes the connection it will parse the IP list from left to right, and from the first IP address that succeeds the SDK will be returned the IPs of all available primary and replica nodes. So if the node PRIP1 is down the SDK will try PRIP2, and if that succeeds the SDK will be given all available node IPs. So in your case it would discover REPIP1 and REPIP2 (say) to add to the list of PRIP1 (which it knows is down) and PRIP2.

If during the lifetime of your application the primary cluster becomes unavailable, the SDK will fail any write IO but will fail over to the replica for any clip or blob READs.

In this case your application will detect that the primary cluster is down because the write gets an error; you should log that and try again later when, hopefully, the primary is accessible again. For an archive this behaviour is typically acceptable.

You could at this point decide to close your current pool connection and reopen it, but this time specifying the replica IP addresses in the connection string

"REPIP1,REPIP2?myPEAfilePathName.pea"

and use this as your temporary primary.  If you have bidirectional replication enabled between the clusters then at some point you can switch back to your "original" primary when convenient.

Some customers actually preempt this in a way by specifying the following connection string

"PRIP1, PRIP2, REPIP1,REPIP2?myPEAfilePathName.pea"

If the primary is up then the SDK will use it. If it's down then the attempts to open PRIP1 and then PRIP2 will time out and the SDK will succeed with REPIP1. This all happens under the covers; your application will not see it.
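
For completeness, a small sketch of that preemptive open, using the same placeholder IPs and PEA file path as above and the FPPool constructor from the earlier snippets:

import com.filepool.fplibrary.FPLibraryException;
import com.filepool.fplibrary.FPPool;

// Sketch: a single pool open that lists primary IPs first and replica IPs last.
// If PRIP1 and PRIP2 time out, the open falls through to REPIP1/REPIP2;
// the application just sees one successful connection.
public class PreemptivePoolOpen {
    public static void main(String[] args) throws FPLibraryException {
        FPPool thePool = new FPPool("PRIP1,PRIP2,REPIP1,REPIP2?myPEAfilePathName.pea");
        // Reuse this single pool reference for all subsequent IO operations.
    }
}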

I would recommend reading https://community.emc.com/docs/DOC-11625

208 Posts

December 14th, 2014 07:00

As you might expect, Paul has this absolutely clear and correct.  Thanks Paul.

You sound like you might be new to Centera SDK programming, in which case you could probably benefit from these two recorded presentations (about 45 mins each):

EMC Centera API Overview

EMC Centera API Best Practices

These are a few years old but they still apply and should help you create a better CAS integration.

Best Regards,

Mike Horgan

15 Posts

December 14th, 2014 10:00

Hi

Thank you very much for all your very clear replies.

I will open my pool connection once, and I will use the "PRIP1,PRIP2,REPIP1,REPIP2?..." connection string so that an available cluster is chosen. I should mention that I have bidirectional replication.
