Introduction
Ivanti EPM 2018.1 introduced a new addition to Self-Electing Subnet Services: Agent State. This changes how device availability is determined when targeting devices in a scheduled task. Self-Electing Subnet Services must be enabled, and at least one device on the subnet must have Agent State enabled in the Client Connectivity agent settings for managed devices:
![]()
![]()
Note: The Agent State setting only controls whether the device can be elected as the Agent State representative on its network. As long as one machine on the network has that service enabled, the other machines will send their Agent State status to the representative, which then forwards it to the core server.
If you want to use Agent State, make sure the Agent State service is enabled in the agent settings on at least one device on the network, and that the network shows as enabled in the SESS - Agent State tool.
![]()
Warning: For Agent State to function, both the deployed Client Connectivity agent setting and the desired network state in the Self-Electing Subnet Services tool must be enabled.
Agent State: What It Does And How To Enable It
When Agent State-based targeting is enabled on the core, devices are no longer pinged from the core when a scheduled task starts to see if the agent responds. Instead, the core looks up the value of the "Agent State - Available" inventory attribute in the database:
![]()
Even though this value is viewable in the device's inventory, it is not sent up with an inventory scan. Instead, the Agent State representative on the device's network collects the status of the EPM managed devices around it and relays that information to the core server, which updates the database. The Agent State inventory menu only appears on machines running agent version 2017.3 or later.
The three possible values of an agent's state are:
0 - Offline and not available:
If a device is listed as Offline, the core will not attempt to push the scheduled task to the machine. It will immediately set the task to "Policy has been made available" and wait for the machine to come online and perform a policy sync.
1 - Online and available:
If a device is listed as Online, the core immediately begins the scheduled task process on the device. Since the core knows the device is available, it does not need to ping it first.
Unknown - Status of the agent is unknown.
If the status is Unknown, the scheduler reverts to the old method of determining device availability (pinging the machine and waiting for a response). This status indicates that there may be an issue with the Agent State representative on the network. If many of your devices show as "Unknown", check the status of Agent State in the Self-Electing Subnet Services window to see if the network has Agent State enabled.
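The decision logic described above can be sketched as a small Python function. This is purely illustrative (the names, values, and return strings are assumptions for the sketch, not Ivanti's actual implementation):

```python
# Hypothetical sketch of how the scheduler chooses a start method based on
# the "Agent State - Available" value. Not Ivanti code; names are illustrative.

OFFLINE = 0   # device reported offline by the Agent State representative
ONLINE = 1    # device reported online and available

def start_method(agent_state):
    """Return the action the scheduler takes for a device with this state."""
    if agent_state == ONLINE:
        # Device is known to be up: start the task without pinging first.
        return "push task immediately"
    if agent_state == OFFLINE:
        # Device is known to be down: mark policy available, wait for sync.
        return "policy has been made available"
    # Unknown (or missing) state: fall back to the legacy availability check.
    return "ping device and wait for response"

print(start_method(1))
print(start_method(0))
print(start_method(None))
```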
Troubleshooting Agent State Issues
While Agent State is a great addition to EPM (it lowers the time it takes to start a task on a group of devices by roughly 3-7 seconds per device!), it also introduces network considerations. Agent State uses multicast to communicate with devices on the network; if multicast is being filtered or is not working correctly, you will run into issues. Some common issues are:
Devices in a scheduled task cannot be pushed and revert to "Policy has been made available".
If you are receiving that message for your devices when starting a task, spot check the "Agent State - Available" value on some devices. If they show "0", verify the status of the network in the Self-Electing Subnet Services window. If it is disabled, enable it by right-clicking the network and selecting "Enable". Refresh the window and wait a few minutes for a device to be elected as the Agent State representative and for the "Current State" to show as enabled.
![]()
Once it shows as enabled and a device has been successfully elected, check the availability status in the device's inventory. When it shows as "Available - 1", push the task to the machine again and verify that it works.
If devices are showing as available in inventory but are immediately reverting to "Policy has been made available", open a support case with Ivanti for further troubleshooting and support.
Machines show as offline ("Agent State - Available" = 0) but are online and can be pinged from the core.
If the machine is not showing as available in inventory but you can ping it from the core, there may be an issue with the Agent State representative or with multicast communication on that network. Ping the DNS name of the machine and make sure the IP that responds is the same one listed in inventory. If the IP is accurate, check for errors in C:\ProgramData\LANDesk\Log\TMCSVC.log on the Agent State representative. If you see messages similar to the errors below, multicast is most likely being filtered on that network, which causes problems for Agent State:
Error: Calculated numComputers 4, offCount 9 - Size of map 13 and offCount -1 invalid, walking list
![]()
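The "ping the DNS name and compare the IP" step can also be scripted. A minimal sketch in Python, where the hostname and inventory IP are placeholder values you would replace with the device's DNS name and the IP recorded in its inventory:

```python
import socket

def dns_matches_inventory(hostname, inventory_ip):
    """Resolve a device's DNS name and compare the result to its inventory IP."""
    resolved = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {resolved} (inventory says {inventory_ip})")
    return resolved == inventory_ip

if __name__ == "__main__":
    # Placeholder values -- substitute your device's DNS name and inventory IP.
    try:
        dns_matches_inventory("client01.example.com", "10.0.0.25")
    except socket.gaierror:
        # The name did not resolve at all -- a stale or missing DNS record.
        print("DNS name did not resolve")
```

A mismatch here usually means a stale DNS record, in which case the core may be pinging a different machine than the one in inventory.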
To test multicast functionality, use either of the following two methods:
Method One:
Create a policy-supported push task and start it on a machine that is having issues. I use the package "Clear Preferred Servers" or "Remove Streamed Documents" since they have low overhead and have little impact on the agent device or the network. From the device, run a policy sync to pull the task down onto the test device. Once that task has successfully run on the machine, duplicate the previous task and restrict the new task to "Peer to Peer Download" only, using an agent setting that only allows peer download:
Distribution and Patch Setting:
![]()
Scheduled Task setting:
![]()
Since you successfully ran that task on a machine on that subnet, clients "should" be able to use multicast to discover local peers and copy the file(s) from the machine that already downloaded and ran the software distribution package. Machines that are unable to communicate via multicast will error out and not be able to run the task.
Method Two:
Use the steps in the following document to test multicast functionality on that network: Troubleshooting multicast communication with MulticastNetworkTest.exe
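For a quick scripted sanity check, a minimal multicast send/receive test can also be written in Python. This is a generic sketch unrelated to MulticastNetworkTest.exe or to the ports Ivanti uses; the group address and port below are arbitrary examples. As written, it tests loopback on a single host; to test across the network, run the receiver half on one machine and the sender half on another machine on the same subnet:

```python
import socket
import struct
import threading
import time

GROUP = "239.255.42.42"  # arbitrary example multicast group (assumption)
PORT = 50000             # arbitrary example port (assumption)

def receive_one(results, timeout=5):
    """Join the multicast group and wait for a single datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Standard membership request: 4-byte group address + INADDR_ANY.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    try:
        data, _ = sock.recvfrom(1024)
        results.append(data.decode())
    except socket.timeout:
        results.append(None)  # nothing arrived: multicast may be filtered
    finally:
        sock.close()

def send_probe():
    """Send one datagram to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    try:
        sock.sendto(b"multicast-test", (GROUP, PORT))
    except OSError:
        # No multicast route available from this host.
        print("could not send multicast datagram")
    finally:
        sock.close()

if __name__ == "__main__":
    results = []
    listener = threading.Thread(target=receive_one, args=(results,))
    listener.start()
    time.sleep(1)  # give the receiver time to join the group
    send_probe()
    listener.join()
    if results[0] == "multicast-test":
        print("multicast OK")
    else:
        print("no multicast traffic received")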
Conclusion:
Regardless of which method you use to test multicast functionality, you will need to work with your internal network team to resolve any multicast issues that are discovered.