
Dell EMC Data Mover Admin Guide

Review logs and system status

Each Data Mover component logs warnings, errors, and additional information. These logs help with debugging or discovering operational errors.

Job log

The job log file shows transfer job details, including:

  • User name
  • User IP
  • DataIQ login name
  • Source path
  • Destination path
  • Checksum algorithm
  • Delete source
  • Path overlap allowance

The View Transfers screen shows a list of transfer jobs and whether each job is Active, Canceled, Completed, or Failed. Each job listing has an associated button at the far right of the window. The button text shows the Data Mover Service name for the job. Select this button to display job-specific log messages.

At the bottom of View Transfers and near the top of the Transfer dialog, you can see:

  • The number of workers across all worker nodes.
  • The number of busy worker processes.
  • The number of active transfer jobs.

To view a job: From the View Transfers window, select the ID of the transfer job.

To copy log entries for a job to your clipboard: Select Copy all to clipboard. You can paste the output into a spreadsheet program like Microsoft Excel.

NOTE When there are more than 2500 log entries, the button label changes to Copy first 2500 logs to clipboard. In addition, the full path of the log entry file appears in the line directly below the job ID. To see all of the log entries, a DataIQ administrator can use this path to open the log file on the DataIQ host.
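For example, assuming the file path displayed below the job ID were /var/log/claritynow/jobs/job_1234.log (a hypothetical path used only for illustration; use the path shown in your own View Transfers window), a DataIQ administrator could open it on the DataIQ host with a pager:

less /var/log/claritynow/jobs/job_1234.log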

Service log

The Service log shows details about Data Mover operations and communications.

To see logs for the Data Mover host, from the DataIQ command line, type:

kubectl logs -ndataiq <container name>
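For example, you can first list the pods in the dataiq namespace to find the name to use, and then view recent log output. The pod name below is hypothetical; substitute the name reported on your system:

kubectl get pods -ndataiq
kubectl logs -ndataiq dataiq-datamover-0 --tail=200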

Check status for workers

To check the status of the Data Mover worker container on the DataIQ host:

kubectl get pods
kubectl exec -it -ndataiq <data mover worker pod name> bash
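As a hypothetical end-to-end example, assuming the worker pod is named data-mover-workers-1a2b3c4d5e-xyz12 (the actual name differs on your system):

kubectl get pods -ndataiq | grep data-mover-worker
kubectl exec -it -ndataiq data-mover-workers-1a2b3c4d5e-xyz12 bash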

To check the status of an external worker node, run the following script:

/usr/local/data_mover_workers/watch_statuses

Get detailed logs for workers

To get detailed logs for the Data Mover worker container on the DataIQ host:

kubectl logs -ndataiq <data-mover-workers pod full name>
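For example, assuming the same hypothetical pod name as above, you can limit output to recent entries or save the full log to a file for later analysis:

kubectl logs -ndataiq data-mover-workers-1a2b3c4d5e-xyz12 --tail=500
kubectl logs -ndataiq data-mover-workers-1a2b3c4d5e-xyz12 > /tmp/data_mover_workers.log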

To get detailed logs for an external worker node, open files in:

/var/log/claritynow/data_mover_workers.log

(In some cases, log messages may be shown in /var/log/messages instead.)

To search for a job-specific error, search for lines where the beginning of the log message contains the Job ID, followed by 'ERROR' (a grep example is sketched below).
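For example, assuming a hypothetical job ID of 1234, a grep along these lines narrows the worker log to that job's error entries:

grep '1234' /var/log/claritynow/data_mover_workers.log | grep 'ERROR'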

Name conventions for worker processes

Each Data Mover Worker process has its own name, which appears both in the logs shown in the View Transfers dialog and in the logs on the Data Mover Worker nodes. The name pattern is <node hostname>.<L or H><unique identifier>, for example, data-mover-workers-12345abcde12ab34cd.H4:

  • L indicates the message came from a light worker process.
  • H indicates the message came from a heavy worker process.
  • The final number is the unique identifier on the Data Mover Worker node.
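For example, reading the name above: data-mover-workers-12345abcde12ab34cd.H4 identifies heavy worker process 4 on the worker node whose hostname is data-mover-workers-12345abcde12ab34cd, while a name ending in .L2 would identify light worker process 2 on that same node.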
