ECS 3.6.2 Data Access Guide

Relocate the default file system from HDFS to an ECS bucket

Although the system is now usable and may appear to work well, a configuration with HDFS as the default file system is not supported. You must therefore relocate the default file system from HDFS to the root ECS bucket. This procedure copies all files from the HDFS file system to an ECS bucket and then sets the ECS bucket as the default file system.
  1. Use Ambari to stop all services except HDFS, YARN, and Zookeeper.
  2. Copy all existing files on the DAS HDFS file system to the ECS bucket. Even for a new installation of Hadoop, there are critical directories that must exist in the default Hadoop file system. Use DistCp to perform the file copy.
    [hdfs@mycluster1-master-0~]$ hadoop distcp -skipcrccheck -update -pugp -i / viprfs://mycluster1-root.ns1.federation/
  3. Use Ambari to configure the following settings.
    Table 1. Hadoop configuration to enable Hive concurrency and ACID transactions
    Hadoop location | Property | Value (example)
    HDFS > Advanced core-site | fs.defaultFS | viprfs://<bucket_name>.<namespace>.<federation_name> (for example: viprfs://mycluster1-root.ns1.federation1)
    Spark > Advanced spark-defaults | spark.eventLog.dir | viprfs://<bucket_name>.<namespace>.<federation>/<spark-history> (for example: viprfs://mycluster1-root.ns1.federation1/spark-history)
    Spark > Advanced spark-defaults | spark.history.fs.logDirectory | viprfs://<bucket_name>.<namespace>.<federation>/<spark-history> (for example: viprfs://mycluster1-root.ns1.federation1/spark-history)
  4. Use Ambari to stop and start all services.
  5. Ensure proper directory permissions. If DistCp reported errors during the copy in step 2, the necessary permissions may not have been applied to critical directories. The following commands set the correct permissions.
    [hdfs@mycluster1-master-0~]$
    hadoop fs -chmod 777 /apps/hive/warehouse
    hadoop fs -chown hive:hdfs /apps/hive/warehouse
    hadoop fs -chmod -R 770 /user/ambari-qa
    hadoop fs -chown -R ambari-qa:hdfs /user/ambari-qa
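
The fs.defaultFS setting from Table 1 corresponds to the following core-site.xml entry. This is only an illustrative sketch using the example names from the table; in practice, make the change through Ambari as described in step 3 so that it propagates to every node in the cluster.

```xml
<!-- core-site.xml: set the root ECS bucket as the default file system.
     The bucket, namespace, and federation names below are the examples
     from Table 1; substitute your own values. -->
<property>
  <name>fs.defaultFS</name>
  <value>viprfs://mycluster1-root.ns1.federation1</value>
</property>
```

After restarting services in step 4, unqualified paths such as /apps/hive/warehouse resolve against this viprfs:// URI rather than the original HDFS file system.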
