SCCM 2012 R2 All Clients with Status Unknown

We recently had an issue where the clients assigned to one of our primary site servers stopped functioning: they no longer received Windows updates or SCEP definition updates, and they stopped reporting hardware/software inventory. Anything we deployed came back with a status of Unknown, which undermines the whole point of centralized management.

We spent a good amount of time assuming it was an issue with our CAS, until we found that another of our primary site servers was working without issue. That led us on a time-consuming hunt through the logs on our management points, where entries in MP_GetPolicy.log indicated something was wrong with a client policy file:

10/20/2014 7:26:34 AM MP_GetPolicy_ISAPI 2768 (0x0AD0) MP IP: Policy ID={0d32ecc8-69a3-4b16-ad3f-332051da458b} Version=387.00 not found
10/20/2014 7:26:34 AM MP_GetPolicy_ISAPI 2768 (0x0AD0) MP IP: Policy ID=ScopeId_A73DDC0F-F592-4D78-84DC-4976AFB0782D/AuthList_2c3228cb-96c6-4aa0-80a1-88860fad95af/VI Version=SHA256:6E4DB6DFEB5CC2BAE925D4F98F1CBEE33DF7D1581F7DED23B930ADF8498D97E0 not found
10/20/2014 7:30:32 AM MP_GetPolicy_ISAPI 2080 (0x0820) MP IP: Policy ID={7e89490f-b101-4174-93e6-68bbafa6a827} Version=1.00 not found
10/20/2014 7:30:33 AM MP_GetPolicy_ISAPI 1172 (0x0494) MP IP: Policy ID={5a355e05-a129-4054-849a-7a1c02e9e631} Version=90.00 not found
10/20/2014 7:30:33 AM MP_GetPolicy_ISAPI 1172 (0x0494) MP IP: Policy ID={d0b27429-73b9-4dc5-b997-5a7b20215735} Version=1.00 not found
10/20/2014 7:30:34 AM MP_GetPolicy_ISAPI 4048 (0x0FD0) MP IP: Policy ID={2ade517e-68d8-4561-87cd-ac40d85eae6a} Version=1.00 not found
10/20/2014 7:30:34 AM MP_GetPolicy_ISAPI 1172 (0x0494) MP IP: Policy ID={00f94329-f22e-4d92-98c6-9d56dfc78347} Version=1.00 not found

In addition, in MP_Policy.log we could see CRC cookie changes and policy assignment rows being rejected for missing signatures:

10/20/2014 3:13:56 PM MP_PolicyManager 2636 (0x0A4C) CalculateCRCFullAssignments cookie has changed, old cookie 2014-10-03 19:13:07.627, new cookie 2014-10-20 18:09:38.257
10/20/2014 3:13:56 PM MP_PolicyManager 5720 (0x1658) Detected at least one row in the result set from PolicyAssignment table which does not have a Signature, rejecting all rows.
10/20/2014 3:13:56 PM MP_PolicyManager 5720 (0x1658) CalculateCRCFullAssignments cookie has changed, old cookie 2014-10-03 19:13:07.627, new cookie 2014-10-20 18:09:38.257
10/20/2014 3:13:56 PM MP_PolicyManager 5696 (0x1640) Detected at least one row in the result set from PolicyAssignment table which does not have a Signature, rejecting all rows.
10/20/2014 3:13:56 PM MP_PolicyManager 5696 (0x1640) CalculateCRCFullAssignments cookie has changed, old cookie 2014-10-03 19:13:07.627, new cookie 2014-10-20 18:09:38.257

Note: these log files are generally located under:

  • <Install Drive>\Program files\SMS_CCM\Logs\
  • <OS Drive>\Windows\CCM\Logs\

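With several management points and large logs, picking the failing policy IDs out of MP_GetPolicy.log by hand gets tedious. A minimal sketch of a helper that does it (hypothetical, not part of SCCM; it assumes the log line format shown above):

```python
import re

# Matches the "Policy ID=... Version=... not found" entries
# seen in MP_GetPolicy.log excerpts like those above.
NOT_FOUND = re.compile(r"Policy ID=(?P<id>\S+)\s+Version=(?P<version>\S+)\s+not found")

def missing_policies(lines):
    """Yield (policy_id, version) for every 'not found' entry in the log lines."""
    for line in lines:
        match = NOT_FOUND.search(line)
        if match:
            yield match.group("id"), match.group("version")
```

Feed it the lines of each MP_GetPolicy.log (for example via `open(path)`) and deduplicate the results to see which policy bodies the management point cannot resolve.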
Once the issue was identified we needed a cure, and in this case it was removing the corrupted policy stored in SQL. This was done in two steps: first, confirm something is wrong by running the following query against that primary site database.

SELECT * FROM ResPolicyMap WHERE machineid = 0 AND PADBID IN (SELECT PADBID FROM PolicyAssignment WHERE BodyHash IS NULL)

If the query above returns any records, remove them with the following query.

DELETE FROM ResPolicyMap WHERE machineid = 0 AND PADBID IN (SELECT PADBID FROM PolicyAssignment WHERE BodyHash IS NULL)
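The two-step check-then-delete can also be scripted against any DB-API cursor (for example pyodbc connected to the primary site database). The SQL comes straight from the queries above, but `clean_orphaned_policies` itself is a hypothetical helper, and you should back up the site database before deleting anything:

```python
# SQL from the cleanup steps above: rows in ResPolicyMap that point at
# policy assignments with no BodyHash (i.e. a missing/corrupted policy body).
SELECT_ORPHANED = (
    "SELECT * FROM ResPolicyMap WHERE machineid = 0 "
    "AND PADBID IN (SELECT PADBID FROM PolicyAssignment WHERE BodyHash IS NULL)"
)
DELETE_ORPHANED = (
    "DELETE FROM ResPolicyMap WHERE machineid = 0 "
    "AND PADBID IN (SELECT PADBID FROM PolicyAssignment WHERE BodyHash IS NULL)"
)

def clean_orphaned_policies(cursor):
    """Confirm orphaned policy rows exist, delete them, return rows removed."""
    cursor.execute(SELECT_ORPHANED)
    if not cursor.fetchall():
        return 0  # nothing to clean up
    cursor.execute(DELETE_ORPHANED)
    return cursor.rowcount
```

The verification step is deliberately kept in front of the delete so the script is a no-op on a healthy site database.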

Once this runs, you should see a message with the number of rows affected; in our case it was a single row, and the cleanup completed almost instantly. We then jumped back onto our management points to monitor activity, as the servers soon picked up the change and clients began storming every management point and software update point.

Note: Depending on the size of the environment, thousands of clients may storm the software update point at once. This can crash the IIS application pool if its memory limits have not already been raised appropriately for the environment's size.

As for root cause, there is no way to tell for every situation, but in our case we believe a backup agent caused the database to stall temporarily, possibly corrupting the policy reference mid-transfer.
