The MCS process appears to stop responding when the AVI workflow tries to stop MCS.
A root cause is still under investigation.
See the resolution section for a temporary workaround to bypass this issue.
1. Log in to the Avamar Utility Node.
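For example, you can connect over SSH as the admin user (the hostname below is a placeholder for your environment):
ssh admin@<<utility-node-hostname>>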
2. Confirm that MCS is down by using the dpnctl and mcserver.sh commands:
dpnctl status mcs ; mcserver.sh --test
Both outputs should report that MCS is down.
3. Switch to the root user:
su -
4. Check the AVI UI or the workflow.log file and confirm that it reports a failure to stop mcserver.sh:
tail -20 /data01/avamar/repo/temp/<<MCS package name>>/tmp/workflow.log
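If you do not know the MCS package directory name, a generic find command such as the following can locate recently modified workflow.log files under the repo temp directory (standard Linux tooling, shown only as a convenience):
find /data01/avamar/repo/temp -name workflow.log -mmin -1440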
5. Check for running MCS processes (usually three or four processes appear in the output):
ps -elf | grep mcserver | grep -v grep
Expected output:
0 S admin 6754 6743 0 80 0 - 40725 - 12:10 ? 00:00:00 /usr/bin/perl /usr/local/avamar/bin/mcserver.sh --stop --force
0 S admin 7466 6754 0 80 0 - 1594176 - 12:11 ? 00:00:12 /usr/java/latest//bin/java -Xmx3G -XX:+HeapDumpOnOutOfMemoryError -X
0 S admin <<PID>> 1 99 80 0 - 2116593 - Dec13 ? 28-02:55:00 /usr/java/latest//bin/java -Xmx3G -XX:+HeapDumpOnOutOfMemoryError
The output typically includes two processes started by the AVI to stop MCS (both running for the duration of the workflow) and an older MCS process, which is the one of interest.
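If it is unclear which process is the oldest, a standard ps format that shows start and elapsed times can help (generic ps usage, not an Avamar-specific command):
ps -eo pid,ppid,lstart,etimes,cmd | grep mcserver | grep -v grep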
6. From the output, kill only the MCS process itself, not the two processes that the AVI started to stop it.
(The MCS process is usually the oldest and has +HeapDumpOnOutOfMemoryError in its command line.)
kill <<PID>>
Where <<PID>> is the Process ID as shown in step 5.
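Before killing it, you can optionally verify that the PID belongs to the old MCS java process (generic ps usage):
ps -o pid,lstart,cmd -p <<PID>>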
7. All MCS processes should disappear. Confirm this by rerunning the following command:
ps -elf | grep mcserver | grep -v grep
If the process is still running after a few seconds, repeat the kill command, this time with the -9 flag:
kill -9 <<PID>>
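Optionally, you can monitor the process list while it clears by using standard Linux tooling, for example:
watch -n 2 'ps -elf | grep mcserver | grep -v grep'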
8. Once no MCS processes remain, retry the workflow; it should now proceed and complete successfully.