
October 18th, 2017 05:00

ECS CE fails step1 with 'The deployment file at /opt/emc/ecs-install/deploy.yml is not valid. Schema validation failed.'

I'm unable to get step1 to run, even though bootstrap.sh appeared to complete without error. After the reboot I got the error in the subject line. Here is the last line of the error output:

- Key 'ssh_crypto' was not defined. Path: '/facts/ssh_defaults'.


The deploy.yml looks fine (included below), so I am at a loss to figure out what happened.
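In case it helps with debugging, a quick way to see whether the file even parses, and which keys the validator actually sees at that path, is something like the following (a minimal sketch, assuming Python with PyYAML available on the install node; the path comes from the error message):

import yaml  # PyYAML

with open('/opt/emc/ecs-install/deploy.yml') as f:
    deploy = yaml.safe_load(f)

# The error path was /facts/ssh_defaults, so print exactly what lives there.
print(sorted(deploy['facts']['ssh_defaults']))

If 'ssh_crypto' shows up in that list but the installer still complains it was not defined, the installer's schema, not the file, is the more likely culprit.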


One thing I noticed: when I start with CentOS 7.3 as my base install, bootstrap.sh pulls down CentOS 7.4 and upgrades the system. At that point the later checks (in step2, I believe) fail with 'OS not supported'.
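A pre-flight guard along these lines could catch that version drift before step1 runs (illustrative only, not part of the installer; the expected release string is an assumption, so set it to whatever your installer build actually supports):

# Fail fast if bootstrap's yum update moved the node off the expected release.
EXPECTED_RELEASE = '7.3'  # assumption: the release this installer build supports

with open('/etc/centos-release') as f:
    release = f.read().strip()

if EXPECTED_RELEASE not in release:
    raise SystemExit('Unexpected OS release for this installer: ' + release)
print('OS release looks OK: ' + release)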


Here are the contents of my deploy.yml (with passwords intentionally removed):

# deploy.yml reference implementation v2.2.0

# [Optional]

# By changing the license_accepted boolean value to "true" you are

# declaring your agreement to the terms of the license agreement

# contained in the license.txt file included with this software

# distribution.

licensing:

  license_accepted: true

# [Required]

# Deployment facts reference

facts:

  # [Required]

  # Node IP or resolvable hostname from which installations will be launched

  # The only supported configuration is to install from the same node as the

  # bootstrap.sh script is run.

  # NOTE: if the install node is to be migrated into an island environment,

  #       the hostname or IP address listed here should be the one in the

  #       island environment.

  install_node: 192.168.1.17

  # [Required]

  # IPs of machines that will be whitelisted in the firewall and allowed

  # to access management ports of all nodes. If this is set to the

  # wildcard (0.0.0.0/0) then anyone can access management ports.

  management_clients:

    - 0.0.0.0/0

  # [Required]

  # These credentials must be the same across all nodes. Ansible uses these credentials to

  # gain initial access to each node in the deployment and set up ssh public key authentication.

  # If these are not correct, the deployment will fail.

  ssh_defaults:

    # [Required]

    # Username to use when logging in to nodes

    ssh_username: admin

    # [Required]

    # Password to use with SSH login

    # *** Set to same value as ssh_username to enable SSH public key authentication ***

    ssh_password:

    # [Required when enabling SSH public key authentication]

    # Password to give to sudo when gaining root access.

    ansible_become_pass:

    # [Required]

    # Select the type of crypto to use when dealing with ssh public key

    # authentication. Valid values here are:

    #  - "rsa" (Default)

    #  - "ed25519"

    ssh_crypto: rsa

  # [Required]

  # Environment configuration for this deployment.

  node_defaults:

    dns_domain: betacloud.local

    dns_servers:

      - 192.168.1.10

    ntp_servers:

      - 192.168.1.10

    #

    # [Optional]

    # VFS path to source of randomness

    # Defaults to /dev/urandom for speed considerations.  If you prefer /dev/random, put that here.

    # If you have a /dev/srandom implementation or special entropy hardware, you may use that too

    # so long as it implements a /dev/random type device abstraction.

    entropy_source: /dev/urandom

    #

    # [Optional]

    # Picklist for node names.

    # Available options:

    #  - "moons" (ECS CE default)

    #  - "cities" (ECS SKU-flavored)

    autonaming: moons

    #

    # [Optional]

    # If your ECS comes with differing default credentials, you can specify those here

    # ecs_root_user: root

    # ecs_root_pass:

  # [Optional]

  # Storage pool defaults. Configure to your liking.

  # All block devices that will be consumed by ECS on ALL nodes must be listed under the

  # ecs_block_devices option. This can be overridden by the storage pool configuration.

  # At least ONE (1) block device is REQUIRED for a successful install. More is better.

  storage_pool_defaults:

    is_cold_storage_enabled: false

    is_protected: false

    description: Default storage pool description

    ecs_block_devices:

      - /dev/sdb

      - /dev/sdc

  # [Required]

  # Storage pool layout. You MUST have at least ONE (1) storage pool for a successful install.

  storage_pools:

    - name: sp1

      members:

        - 192.168.1.17

      options:

        is_protected: false

        is_cold_storage_enabled: false

        description: My First SP

        ecs_block_devices:

          - /dev/sdb

          - /dev/sdc

  # [Optional]

  # VDC defaults. Configure to your liking.

  virtual_data_center_defaults:

    description: Default virtual data center description

  # [Required]

  # Virtual data center layout. You MUST have at least ONE (1) VDC for a successful install.

  # Multi-VDC deployments are not yet implemented

  virtual_data_centers:

    - name: vdc1

      members:

        - sp1

      options:

        description: My First VDC

  # [Optional]

  # Replication group defaults. Configure to your liking.

  replication_group_defaults:

    description: Default replication group description

    enable_rebalancing: true

    allow_all_namespaces: true

    is_full_rep: false

  # [Optional, required for namespaces]

  # Replication group layout. You MUST have at least ONE (1) RG to provision namespaces.

  replication_groups:

    - name: rg1

      members:

        - vdc1

      options:

        description: My First RG

        enable_rebalancing: true

        allow_all_namespaces: true

        is_full_rep: false

  # [Optional]

  # Management User defaults

  management_user_defaults:

    is_system_admin: false

    is_system_monitor: false

  # [Optional]

  # Management Users

  management_users:

    - username: admin1

      password: ChangeMe

      options:

        is_system_admin: true

    - username: monitor1

      password: ChangeMe

      options:

        is_system_monitor: true

  # [Optional]

  # Namespace defaults

  namespace_defaults:

    is_stale_allowed: false

    is_compliance_enabled: false

  # [Optional]

  # Namespace layout

  namespaces:

    - name: ns1

      replication_group: rg1

      administrators:

        - root

      options:

        is_stale_allowed: false

        is_compliance_enabled: false

  # [Optional]

  # Object User defaults

  object_user_defaults:

    # Comma-separated list of Swift authorization groups

    swift_groups_list:

      - users

    # Lifetime of S3 secret key in minutes

    s3_expiry_time: 2592000

  # [Optional]

  # Object Users

  object_users:

    - username: object_admin1

      namespace: ns1

      options:

        swift_password: ChangeMe

        swift_groups_list:

          - admin

          - users

        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe

        s3_expiry_time: 2592000

    - username: object_user1

      namespace: ns1

      options:

        swift_password: ChangeMe

        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe

October 25th, 2017 07:00

Hi Ben,

Commenting that line out in the deploy.yml resolved the problem. There were a couple of other small bumps with the ntpd service and the NTP server, but after a few reboots they seemed to sort themselves out. I was able to run step1 successfully and am now waiting on step2, which appears to be working.
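For anyone who hits the same NTP bumps, one quick way to confirm ntpd is actually syncing against the server listed in deploy.yml is to dump its peer table (a small sketch, assuming the ntpq utility from the ntp package is installed on the node):

import subprocess

# 'ntpq -p' lists the peers ntpd is polling; a '*' in the first column
# marks the peer the daemon has actually synchronized with.
print(subprocess.check_output(['ntpq', '-p']).decode())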

I appreciate your help. In the future, is there a quicker way to get help on something like this? Maybe a keyword to put in the title of the discussion thread? I ask because I lost a week or so waiting and trying things.

-Don

October 24th, 2017 05:00

Bump.

Is there anyone who has successfully gotten CE 3.x running and can provide some guidance here? If there is a base OS version or suggested methodology that works, please pass along the info.

October 25th, 2017 06:00

We are looking into this issue, but for now, can you start over and try running step1 with the line 'ssh_crypto: rsa' commented out?
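To be explicit, the change is just prefixing that one line with '#' so it reads '# ssh_crypto: rsa'. If you would rather script the edit, something like this should do it (a sketch, assuming the file is at the path from your error message; it keeps a backup first):

import re
import shutil

path = '/opt/emc/ecs-install/deploy.yml'  # path from the error message
shutil.copy(path, path + '.bak')          # keep a backup of the original

with open(path) as f:
    text = f.read()

# Prefix the ssh_crypto line with '#', preserving its indentation.
text = re.sub(r'^(\s*)ssh_crypto:', r'\1# ssh_crypto:', text, flags=re.M)

with open(path, 'w') as f:
    f.write(text)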

Ben

October 25th, 2017 08:00

Glad to hear it, Don! I would recommend posting questions/issues on the GitHub page in the future. Those issues are actively monitored by multiple team members.

https://github.com/EMCECS/ECS-CommunityEdition/issues
