
Dell ObjectScale Application 1.3.x Installation Guide for Red Hat OpenShift


Create and prepare the necessary namespaces in Red Hat OpenShift

ObjectScale and certain CSI components require their own namespaces (OpenShift projects) to function properly. You can also create a separate namespace for each object store.

Prerequisites

  • If the ObjectScale Qualification Tool precheck report showed another scheduler extender on the cluster, follow the OpenShift steps in Manual Kubernetes Scheduler Configuration at https://github.com/dell/csi-baremetal-operator/blob/master/docs/MANUAL_SCHEDULER_CONFIGURATION.md. These steps ensure that the current scheduler extender is NOT overwritten. Complete these steps before you deploy the Bare-Metal CSI Driver.

About this task

Complete these steps to create and prepare the required namespaces.

Steps

  1. Set these environment variables:
    export CSI_NS=<CSI_NAMESPACE>
    export SSO_NS=openshift-secondary-scheduler-operator
    export OBJECTSCALE_NS=<OBJECTSCALE_PROJECT_NAMESPACE>
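    For example, with hypothetical namespace names (substitute values appropriate to your environment):
    export CSI_NS=csi-baremetal
    export SSO_NS=openshift-secondary-scheduler-operator
    export OBJECTSCALE_NS=objectscale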
  2. Create a namespace where you can install the secondary scheduler operator.

    Red Hat provides this secondary scheduler operator, which is Kubernetes-level software like the default scheduler. You must install the secondary scheduler in its own namespace.

    kubectl create ns $SSO_NS
  3. Create a namespace where you can install the CSI Bare-Metal components.
    kubectl create ns $CSI_NS
  4. Create a namespace where you can install ObjectScale.
    kubectl create ns $OBJECTSCALE_NS
  5. Optional: Create a namespace for one or more object stores.
    You can deploy object stores within the same namespace as ObjectScale or within their own separate namespaces.
    kubectl create ns <OBJECT_STORE_NAMESPACE>
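    To confirm that the namespaces were created, you can list them. This is an optional check, not part of the documented procedure:
    kubectl get ns $SSO_NS $CSI_NS $OBJECTSCALE_NS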
  6. Add pod security privileges to the namespaces.
    1. Apply the privileges for pod security to the secondary-scheduler operator namespace.
      kubectl label --overwrite ns $SSO_NS pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged security.openshift.io/scc.podSecurityLabelSync="false"
    2. Apply the privileges for pod security to the CSI namespace.
      kubectl label --overwrite ns $CSI_NS pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged security.openshift.io/scc.podSecurityLabelSync="false"
    3. Apply the privileges for pod security to the ObjectScale namespace.
      OpenShift 4.12:
      kubectl label --overwrite ns $OBJECTSCALE_NS pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged security.openshift.io/scc.podSecurityLabelSync="false"
       
      OpenShift 4.13:
      kubectl label --overwrite ns $OBJECTSCALE_NS pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged security.openshift.io/scc.podSecurityLabelSync="false"
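    To verify that the labels were applied, you can display each namespace with its labels. This is an optional check, not part of the documented procedure:
    kubectl get ns $SSO_NS $CSI_NS $OBJECTSCALE_NS --show-labels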
  7. Create the following role and rolebinding in the $CSI_NS namespace.
    1. Set the context to the $CSI_NS namespace.
      kubectl config set-context --current --namespace=$CSI_NS
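      Optionally, confirm that the current context now points at the CSI namespace. This check is not part of the documented procedure:
      kubectl config view --minify --output 'jsonpath={..namespace}'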
    2. Create the role.yaml for CSI in this namespace.
      Contents of role.yaml. Ensure that this YAML file is properly formatted and contains the values that are shown here:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: pod-csi
        namespace: <CSI_NAMESPACE> 
      rules:
      - apiGroups:
        - security.openshift.io
        resourceNames:
        - privileged
        resources:
        - securitycontextconstraints
        verbs:
        - use

      Where: <CSI_NAMESPACE> is the name of your CSI namespace.

    3. Create the rolebinding.yaml for CSI in this namespace.
      Contents of rolebinding.yaml. Ensure that this YAML file is properly formatted and contains the values that are shown here:
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: role-binding
        namespace: <CSI_NAMESPACE>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: pod-csi
      subjects:
      - kind: ServiceAccount
        name: csi-baremetal-extender-sa
        namespace: <CSI_NAMESPACE>
      - kind: ServiceAccount
        name: csi-node-sa
        namespace: <CSI_NAMESPACE>

      Where: <CSI_NAMESPACE> is the name of your CSI namespace.

  8. Apply these YAML files.
    1. Apply the role.yaml.
      kubectl apply -f role.yaml -n $CSI_NS
    2. Apply the rolebinding.yaml.
      kubectl apply -f rolebinding.yaml -n $CSI_NS
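    To confirm that the role and rolebinding are in place, and that the CSI service accounts are allowed to use the privileged SCC, you can run these optional checks (not part of the documented procedure):
    kubectl get role,rolebinding -n $CSI_NS
    kubectl auth can-i use securitycontextconstraints/privileged --as=system:serviceaccount:$CSI_NS:csi-node-sa -n $CSI_NS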
  9. Ensure that the OCP cluster global registry pull secret includes the registry pull secret for the docker.io registry server. If the registry pull secret for docker.io is already in the global cluster pull secret, skip the following substeps. Otherwise, complete the following substeps to add or update it.
    1. Download the current OCP global pull secrets to a temporary file.
      oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > ocp_cluster_pull_secret
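      Optionally, list the registry servers that already have credentials in the downloaded file to see whether docker.io is present. This check assumes that the jq utility is installed; it is not part of the documented procedure:
      jq -r '.auths | keys[]' ocp_cluster_pull_secret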
    2. Enter the following command to add or update, in the temporary file, the registry secret for the registry server that hosts the ObjectScale images.
      oc registry login --registry="docker.io/objectscale" --auth-basic="<username>:<password>" --to=ocp_cluster_pull_secret

      Where: <username> and <password> are the registry user name and password.

    3. Enter the following command to update the global registry pull secret.
      oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=ocp_cluster_pull_secret
    4. Delete the temporary file.
      rm -f ocp_cluster_pull_secret
    The update to the registry pull secret is rolled out to all nodes in the cluster. This update can take some time, depending on the size of the cluster.
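    One way to watch the rollout is to monitor the machine config pools until they finish updating. This is an optional check, not part of the documented procedure:
    oc get machineconfigpool
    Wait until the UPDATING column reports False for all pools before continuing.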
