Windows Process Activation Service does not work inside containers: "System error 6801 has occurred"

Refers to:

  • Plesk 11.0 for Windows
  • Plesk 12.0 for Windows
  • Plesk 12.5 for Windows

Created:

2016-11-16 12:59:48 UTC

Modified:

2016-12-21 19:43:13 UTC


Symptoms

  1. Windows Process Activation Service (WAS) and dependent services (IIS and Parallels Plesk) do not work inside containers.

  2. The following error is shown when WAS is started:

    C:\Users\Administrator>net start was
    The Windows Process Activation Service service is starting.
    The Windows Process Activation Service service could not be started.

    A system error has occurred.

    System error 6801 has occurred.

    Transaction support within the specified resource manager is not started or was
    shut down due to an error.

Cause

The issue occurs when the system transaction log becomes corrupted.

Resolution

Several resolutions are available. It is recommended that you apply them in the order shown below: if the first solution does not work, continue to the second, then the third, and so on.

First

Try the resolution described in the following Microsoft article:

IIS services fail to start: "Windows could not start the Windows Process Activation Service - Error 6801: Transaction support within the specified resource manager is not started or was shut down due to an error" when WAS service is started

  1. Issue the following command in the container's command prompt:

    fsutil resource setautoreset true c:\

    Note: This assumes the system drive is "C:".

  2. Reboot the container after running the command.
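
The step above can be sketched in a small script. This is a minimal Python sketch (the helper names are hypothetical); it builds the documented fsutil command for an arbitrary system drive and, by default, only prints it instead of executing it, since the real command must run in an elevated prompt inside the container and requires a reboot afterwards:

```python
import subprocess

def build_autoreset_cmd(drive="C:\\"):
    """Build the fsutil command that marks the NTFS transactional
    resource manager log for reset on the next mount (reboot)."""
    return ["fsutil", "resource", "setautoreset", "true", drive]

def reset_transaction_log(drive="C:\\", dry_run=True):
    cmd = build_autoreset_cmd(drive)
    if dry_run:
        # Only show what would run; execute it manually inside the
        # container, then reboot the container.
        return " ".join(cmd)
    return subprocess.run(cmd, check=True)

print(reset_transaction_log())  # fsutil resource setautoreset true C:\
```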

Second

If the issue persists after following the above steps, cloning the container should help.

NOTE: Substitute the actual container IDs and VZ folder location into the commands below, and run them in the command prompt:

  1. Stop the container:

    vzctl stop 101
  2. Back up the original container configuration files:

    copy E:\vz\private\101\.vza\eid.conf 101.eid
    copy E:\vz\conf\101.conf 101.conf
  3. Clone the original container to another one:

    vzmlocal -C 101:202
  4. Change the container ID for the old container to another ID:

    vzmlocal 101:100000
  5. Disable autoboot for the original container:

    vzctl set 100000 --save --onboot no
  6. Delete IP address from the original container:

    vzctl set 100000 --save --ipdel all
  7. Change the container ID for the new container to the ID of the old container:

    vzmlocal 202:101
  8. Stop PVA agent:

    net stop pvaagent

    (Any operation with EID replacement should be done while PVA Agent is stopped.)

  9. Remove the EID cache file:

    del E:\vz\PVA\Agent\Data\etc\configs\EID

    (Use the correct disk letter for the PVA Agent files.)

  10. Replace the EIDs:

    type E:\vz\private\101\.vza\eid.conf > E:\vz\private\100000\.vza\eid.conf
    type 101.eid > E:\vz\private\101\.vza\eid.conf
  11. Start PVA Agent to regenerate the EID <-> CTID bindings:

    net start pvaagent
  12. Start the resulting container:

    vzctl start 101

After you have confirmed the new container is working correctly, it is safe to delete the old one.
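
The twelve steps above can be collected into one ordered command list, which makes it harder to run them out of order. The following is a minimal Python sketch (the function name is hypothetical; CTIDs 101/202/100000 and the E:\vz paths are the example values from the steps above). It only prints the commands; nothing is executed:

```python
def clone_and_swap_commands(old=101, tmp=202, parked=100000, vz=r"E:\vz"):
    """Return the command sequence from the Second resolution, in order."""
    def eid(ct):
        # Path to a container's EID configuration file.
        return rf"{vz}\private\{ct}\.vza\eid.conf"
    return [
        f"vzctl stop {old}",                          # 1. stop the container
        f"copy {eid(old)} {old}.eid",                 # 2. back up config files
        rf"copy {vz}\conf\{old}.conf {old}.conf",
        f"vzmlocal -C {old}:{tmp}",                   # 3. clone the container
        f"vzmlocal {old}:{parked}",                   # 4. park the old CTID
        f"vzctl set {parked} --save --onboot no",     # 5. disable autoboot
        f"vzctl set {parked} --save --ipdel all",     # 6. drop IP addresses
        f"vzmlocal {tmp}:{old}",                      # 7. give the clone the old CTID
        "net stop pvaagent",                          # 8. stop PVA Agent first
        rf"del {vz}\PVA\Agent\Data\etc\configs\EID",  # 9. remove the EID cache
        rf"type {eid(old)} > {eid(parked)}",          # 10. swap the EIDs
        rf"type {old}.eid > {eid(old)}",
        "net start pvaagent",                         # 11. regenerate bindings
        f"vzctl start {old}",                         # 12. start the container
    ]

for cmd in clone_and_swap_commands():
    print(cmd)
```

Note that the two EID swap commands stay between "net stop pvaagent" and "net start pvaagent", matching the requirement that EID replacement happens only while PVA Agent is stopped.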

Third

This solution should be applied if the Second solution (container cloning) did not help, or if the issue recurred after a short period. This usually happens when the container's disk is fragmented and the container always starts in sharing violation mode:

C:\Users\Administrator>vzctl start 101

Starting container...

WARNING: (C:\vz\Private\101\root.efd, {971890c5-833d-4857-86f7-17cc762bfda3}) sharing violation, trying nonpaged mount
Container is mounted
Container was started
  • As a first step, install the fix mentioned in article #112842. It will help to deal with the "sharing violation" issue caused by high fragmentation of the disk. Once it has been installed and the node has been rebooted, it might be necessary to re-apply the Second solution above.

  • Then check whether an antivirus is installed on the Hardware Node and is properly configured. A sharing violation can be caused by antivirus software on the Hardware Node if the container's private area (X:\vz\private\) is not excluded from its activities. The antivirus locks the container's disk (the root.efd file), so exclusive locking becomes impossible, and containers are forced to start in sharing mode. To avoid such issues, exclude X:\vz\private\ from all antivirus activities.

  • Sometimes, when the container's disk is too fragmented, only defragmentation can fix the inability to start WAS. Check the fragmentation level inside the container and perform a defragmentation.
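
To check the fragmentation level without modifying the disk, the Windows defrag tool can be run in analysis-only mode (defrag <drive> /A). A minimal Python sketch (the helper name is hypothetical; dry-run by default, since the real command needs an elevated prompt inside the container):

```python
import subprocess

def analyze_fragmentation(drive="C:", dry_run=True):
    """Build (and optionally run) a defrag analysis-only command (/A),
    which reports the fragmentation level without defragmenting."""
    cmd = ["defrag", drive, "/A"]
    if dry_run:
        return " ".join(cmd)
    # Run inside the container from an elevated prompt.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(analyze_fragmentation())  # defrag C: /A
```

If the report shows heavy fragmentation, run a full defragmentation and then retry starting WAS.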

Additional information

IMPORTANT:

  • If the container is running MSSQL Server, see:

116218 MSSQL does not work in a cloned container or after c2v migration

  • If the container is a member of Active Directory (AD), see:

119018 Trust relationship error on domain clients after Domain Controller migration/restore
