No error is something you ever want to see, but this one can be relatively harmless. Fortunately, Citrix has accounted for a license server or service disruption by providing a 30-day grace period that allows Delivery Controllers to continue brokering new user sessions successfully even without a functioning licensing service. That said, it’s still a good idea to ensure a quick resolution, so you can bring the license server back to a functioning state.
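If you’d rather check the grace state from PowerShell, here’s a minimal sketch that assumes the Citrix Broker SDK snap-in is available on a Delivery Controller; the grace-period property name comes from the Broker SDK’s Get-BrokerSite output, so verify it against your environment:

```powershell
# Minimal sketch: check the licensing grace state from a Delivery
# Controller (requires the Citrix Broker PowerShell snap-in).
Add-PSSnapin Citrix.Broker.Admin.V2 -ErrorAction Stop

# LicensingGracePeriodActive should flip to True when the Broker has
# lost contact with the license server and is brokering on borrowed time.
Get-BrokerSite | Select-Object LicenseServerName, LicenseServerPort, LicensingGracePeriodActive
```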
Of course, the first requirement here is to actually get alerted when the license server is unavailable. Like we said, your Citrix Virtual Apps & Desktops end users likely won’t experience any issues from this error, so they won’t be calling your help desk about it (thank God for small favors). Still, you obviously don’t want to be sifting through event logs all day either (nobody’s got time for that).
Here’s how ControlUp can help.
For early alerting, you can use ControlUp Scoutbees to test the availability of your Citrix License Server and all of its important licensing services; setup is simple and takes just a few minutes.
You can set up alerts for when the server or services are unavailable or are responding more slowly than usual. If you want, you can get even more granular, as Scoutbees proactive monitoring provides lots of different options in the Alert Policy feature. For example, you could do an HTTPS test on the License Admin Console and get alerted if / when the page fails to load and also when the certificate is about to expire.
You can quickly check the health of the License Server using ControlUp Real-Time DX. Here you can see if you’re running low on available licenses. To view these metrics, just install the ControlUp agent on your Citrix License Server(s).
Of course, if your Citrix License Server is truly down and you want to get it up and running fast, you can navigate to the server in your Real-Time DX console and power it up through the Power Management actions (right-click on the server). Or, if Real-Time DX is connected to your hypervisor (ControlUp supports Citrix Hypervisor [XenServer if you’re old-school], VMware vSphere, Nutanix Acropolis, and Microsoft Hyper-V), you can use ControlUp’s excellent Power On VM script action.
If your License Server appears to be powered on, you can remote to the virtual machine from the ControlUp Real-Time DX console to confirm it’s functioning, and / or manage its services to verify that the licensing-related services are all started. If not, start them.
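If you prefer a scripted check for this step, here’s a minimal PowerShell sketch; it matches services by display-name wildcard, which is an assumption, since exact service names vary by License Server version:

```powershell
# Minimal sketch: find licensing-related Citrix services by display-name
# wildcard (exact service names vary by License Server version) and
# start any that aren't running. Run on the License Server itself, or
# wrap in Invoke-Command to run it remotely.
Get-Service -DisplayName 'Citrix*Licens*' |
    Where-Object { $_.Status -ne 'Running' } |
    ForEach-Object {
        Write-Host "Starting $($_.DisplayName)..."
        Start-Service -Name $_.Name
    }
```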
This error can be caused by several different underlying issues, and users will see a generic “cannot start” app error.
The potential underlying causes include database connectivity problems that trigger a database failover event which, due to a misconfiguration, never completes successfully. In some cases, a reboot of the Delivery Controllers can help restore the services; you can perform these reboots from the Real-Time Console using actions. If this doesn’t help, it’s best to work with Citrix support to coordinate a fix, which could include a database configuration change.
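Before engaging support, a quick sanity check of the database connection from a Delivery Controller can save time. Here’s a minimal sketch, assuming the Broker SDK’s Get-BrokerDBConnection and Test-BrokerDBConnection cmdlets are available in your version:

```powershell
# Minimal sketch: read the Broker's current database connection string
# on a Delivery Controller and test it.
Add-PSSnapin Citrix.Broker.Admin.V2 -ErrorAction Stop

$conn = Get-BrokerDBConnection               # current connection string
Test-BrokerDBConnection -DBConnection $conn  # reports whether the Broker can reach the DB
```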
If you’ve been a Citrix Admin for more than a quick cup of coffee, you’ll be familiar with gathering CDF traces to share with Citrix support. When an issue is intermittent and hard to reproduce, getting an effective CDF trace is tricky UNLESS you use ControlUp Automate. This is a great blog on how ControlUp Automate helps maximize resources, optimize the end-user experience, and supplement your troubleshooting efforts.
Citrix says you can attempt a restart of the VDA as your first mitigation step. If that doesn’t resolve the issue, or the issue starts coming up more frequently, you should gather CDF traces and share them with Citrix support. They might advise you to change these registry settings on your Delivery Controllers:
HKEY_LOCAL_MACHINE\Software\Citrix\DesktopServer\MaxTimeBeforeStuckOnBootFaultSecs DWORD Value: 30000
HKEY_LOCAL_MACHINE\Software\Citrix\DesktopServer\MaxTimeBeforeUnregisteredFaultSecs DWORD Value: 30000
MaxTimeBeforeStuckOnBootFaultSecs: How long to wait (in seconds) after a machine has started without the Broker receiving notification from the HCL that the VM tools are running. After this timeout, the machine’s fault state is set to StuckOnBoot.
MaxTimeBeforeUnregisteredFaultSecs: How long to wait (in seconds) after a machine starts but remains unregistered with the Broker (whether or not it attempts to register). After this timeout, the machine’s fault state is set to Unregistered.
You can quickly and easily set the registry values for all of your Delivery Controllers at once using the Controllers feature in Real-Time DX. Just go to Controllers, click on Registry, and add your Delivery Controllers. Then, simply create the registry values suggested above.
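If you’d rather script the change instead, here’s a minimal PowerShell sketch; the controller names are hypothetical placeholders, so use your own:

```powershell
# Minimal sketch: apply the two suggested values to each Delivery
# Controller. The controller names are hypothetical placeholders.
$controllers = 'ddc01', 'ddc02'
Invoke-Command -ComputerName $controllers -ScriptBlock {
    $key = 'HKLM:\Software\Citrix\DesktopServer'
    New-ItemProperty -Path $key -Name 'MaxTimeBeforeStuckOnBootFaultSecs' `
        -PropertyType DWord -Value 30000 -Force | Out-Null
    New-ItemProperty -Path $key -Name 'MaxTimeBeforeUnregisteredFaultSecs' `
        -PropertyType DWord -Value 30000 -Force | Out-Null
}
```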
If extending the timeout period through the registry helps alleviate the errors, it’s still a good idea to work with Citrix support and allow them to look at some CDF traces to find out what’s causing the slowness in your environment. Again, you can use ControlUp Automate to run these CDF traces automatically for you.
Another way to address this is to set up a Scoutbees Scout to test network performance to your Delivery Controllers using a Custom Hive on the same IP subnet as your VDAs. This is a great solution if the problem is intermittent: it’ll allow you to spot trends in the performance over time and pinpoint degradation over hours, days, weeks, and months. This helps zero in on the cause of the slowness.
This one is relatively simple. If you are on CVAD v7.15 or earlier, you may need to update to a newer cumulative update to resolve the problem. As 7.15 is not the current LTSR version, I will assume you are not experiencing the error on that version. If you are encountering this error on a newer version of Citrix Virtual Apps and Desktops, the service account for the Citrix Telemetry Service probably doesn’t have the Log on as a service right.
This right can be found in Group Policy under Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment.
The Controllers feature (our friend in ControlUp Real-Time DX) can be used to quickly view the Citrix Telemetry Service on the affected VDA or Controller to see the account being used to run the service. Then you can ensure the account has the User Rights Assignment for Log on as a service.
When set, you can force a group policy update from Real-Time DX (as seen above).
Finally, perform a service restart from the Controllers feature.
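For reference, those same steps look roughly like this in PowerShell; the display-name filter for the Citrix Telemetry Service is an assumption, since the short service name can vary by version:

```powershell
# Minimal sketch: check which account runs the Citrix Telemetry Service,
# refresh Group Policy, then restart the service. The display-name
# filter is an assumption; the short service name can vary by version.
$svc = Get-CimInstance Win32_Service -Filter "DisplayName LIKE 'Citrix Telemetry%'"
$svc | Select-Object Name, StartName, State  # StartName is the logon account

gpupdate /force                              # pick up the new user right
Restart-Service -Name $svc.Name
```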
According to Citrix, this error occurs when the VDA can’t access a Domain Controller on port 3268 (Microsoft Global Catalog). The VDA must communicate with the DC during the registration process to validate its list of configured Controllers. It could be worth checking the list of DDCs in the registry on your VDA (HKLM\Software\Citrix\VirtualDesktopAgent\ListOfDDCs). If you need to set the ListOfDDCs by policy, check out this guide.
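Checking that list is a quick one-liner on the VDA; here’s a minimal sketch:

```powershell
# Minimal sketch: read the configured Delivery Controller list on a VDA.
(Get-ItemProperty -Path 'HKLM:\Software\Citrix\VirtualDesktopAgent' -Name ListOfDDCs).ListOfDDCs
```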
ControlUp Scoutbees can test any Domain Controller on the corporate network over port 3268 and alert if / when the service is unavailable. It can also detect performance degradation that may indicate the service is about to become unavailable, enabling you to address the problem proactively before it causes any service disruption. For a team that manages and supports Citrix but doesn’t necessarily have visibility into the maintenance of Domain Controllers, having a clear record of availability trends on hand is a good idea.
For point-in-time troubleshooting, you can use the Show Network Connections script action to list the active network connections from a selected Citrix VDA. If you see a connection to a valid Domain Controller over port 3268, this isn’t the problem right now. If there is no active connection, that’s a clear indication of a problem. You can also optionally create your own SBA to run a simple BAT / CMD netstat -na for a raw, unsorted list of network connections per CTX133769, but we think Guy’s script is a much cooler experience. 😎
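And if you just want a quick, scriptable spot check of Global Catalog reachability, Test-NetConnection does the job; the Domain Controller name below is a placeholder:

```powershell
# Minimal sketch: spot-check Global Catalog reachability from a VDA.
# 'dc01.example.com' is a placeholder; use one of your own DCs.
Test-NetConnection -ComputerName 'dc01.example.com' -Port 3268 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```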
This one is a little bit different because it’s a warning rather than an error, BUT it could well be a legitimate cause for concern. In Citrix XenApp and XenDesktop 7.12, the Local Host Cache feature was reintroduced. It’s a good idea to use this feature because, when you experience a connection break between the Delivery Controllers and your database, it ensures user sessions won’t be disrupted. It uses a cached copy of the database on the primary Delivery Controller and continues on its merry way until it’s able to restore a connection (which you can usually see with Event 1200).
There’s one instance where this warning is particularly troublesome: when the database connection starts flapping, meaning the connection drops, re-establishes, drops again, and so on, over and over in a maddening, virtualized loop. The problem is that the connection may never be down long enough for Local Host Cache mode to take over, which can interfere with users attempting to launch applications and desktops while the flapping is occurring.
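If you want to quantify the flapping yourself, a rough sketch like this counts the Event 1200 "connection restored" entries mentioned above; the provider name is an assumption, so confirm it against your own Application log:

```powershell
# Minimal sketch: count Event 1200 ("connection restored") entries over
# the last 24 hours on a Delivery Controller. A high count suggests the
# database connection is flapping. The provider name is an assumption;
# confirm it against your own Application log.
$events = Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'Citrix Broker Service'
    Id           = 1200
    StartTime    = (Get-Date).AddHours(-24)
} -ErrorAction SilentlyContinue

"Connection-restored events in the last 24h: $($events.Count)"
```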
Once again, ControlUp Real-Time DX to the rescue! No need to go digging through event logs; you can see the database connectivity status for your Delivery Controllers like in the screenshot above.
Using Scoutbees to test your Citrix database over the port used for your database connection can be useful for pinpointing when the DB becomes unavailable (if it actually does).
A real-world example of this was a customer who had this flapping occur when the database VM was being backed up by a third-party product. The Citrix team didn’t have visibility into this tool and its schedule, but were able to pinpoint the occurrences to within the same few hours on the days it happened, which led to discovery of the root cause.
This is also an instance where our earlier example of running a CDF trace could come in handy. You could also add a script to change the registry value on your primary Delivery Controller to force the use of the Local Host Cache after a quick validation that the cache is in a healthy state.
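For the registry piece, a minimal sketch looks like this; the OutageModeForced value is the one commonly cited for forcing an outage, but double-check it against Citrix’s Local Host Cache documentation for your version before relying on it:

```powershell
# Minimal sketch: force Local Host Cache outage mode on a Delivery
# Controller. Verify the value name against Citrix's Local Host Cache
# documentation for your version, and set it back to 0 when finished.
New-ItemProperty -Path 'HKLM:\Software\Citrix\DesktopServer\LHC' `
    -Name 'OutageModeForced' -PropertyType DWord -Value 1 -Force | Out-Null
```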
This error can be caused by several things: a firewall blocking traffic, a timeout being reached, or the port the XML service uses becoming unavailable. This problem isn’t unique to Citrix or this individual service; it can happen to any process. It’s possible that when you patched the StoreFront servers, an update assigned the port to another process, which took it over on reboot.
Our friend the Show Network Connections script action can help here. It can help you identify what took over the XML port, so you can modify or disable that service and return StoreFront to a working state.
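If you want to do the same check by hand, here’s a minimal PowerShell sketch that maps the listening port to its owning process; port 80 is just a common default, so substitute the XML port your site actually uses:

```powershell
# Minimal sketch: find which process is listening on the XML port.
# Port 80 is a common default; substitute the port your site uses.
Get-NetTCPConnection -LocalPort 80 -State Listen |
    ForEach-Object { Get-Process -Id $_.OwningProcess } |
    Select-Object Id, ProcessName, Path -Unique
```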
This one can be simple! The error indicates a possible issue with network connectivity on your Citrix Cloud Connector for Citrix Cloud.
If you added Citrix Cloud as a monitored resource in ControlUp Real-Time DX, you can quickly check the health and status of your Citrix Cloud Connectors, including whether the Cloud Connector is on the latest version. Importantly, you can check the network metrics to see if there is an indication of a network issue.
You can ensure the correct Citrix services are all running and, if not, quickly start them with the Controllers feature.
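As a scripted fallback on the Cloud Connector itself, a sketch like this starts any stopped automatic Citrix services; it matches by display-name wildcard because the exact set of connector services varies by version:

```powershell
# Minimal sketch: on a Cloud Connector, start any automatic Citrix
# services that have stopped. Matching by display-name wildcard, since
# the exact set of connector services varies by version.
Get-Service -DisplayName 'Citrix*' |
    Where-Object { $_.Status -ne 'Running' -and $_.StartType -eq 'Automatic' } |
    ForEach-Object { Start-Service -Name $_.Name }
```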
If you have a limited number of available or 1:1 persistent desktops for your employees, and one (or many) of those desktops becomes unregistered, you could find yourself in the unfortunate position of the Service Desk calling you early in the morning to say people can’t work because their desktops won’t launch. There’s not enough coffee in the world to get you through that.
We have a script you can use to restart the Citrix Desktop Service remotely, coupled with a trigger based on the VDA being powered on, out of maintenance mode, and unregistered for several minutes. This forces the service to restart, which forces the VDA to re-register.
We also have a Community Trigger that can be used to find unregistered desktops and restart them if they have been in that state for at least five minutes. If the issue that caused the desktops to unregister or fail to register on restart was a point-in-time environmental problem, a service restart or reboot can probably resolve the problem. Since the triggers ensure there are zero user sessions on the machines at the time, this won’t disrupt your users.
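Under the hood, that kind of trigger boils down to something like this sketch, which assumes the Citrix Broker SDK snap-in and WinRM access to the VDAs (BrokerAgent is the service name behind the Citrix Desktop Service):

```powershell
# Minimal sketch of what such a trigger does: find VDAs that are powered
# on, out of maintenance mode, unregistered, and session-free, then
# restart the Citrix Desktop Service (service name: BrokerAgent).
Add-PSSnapin Citrix.Broker.Admin.V2 -ErrorAction Stop

Get-BrokerMachine -RegistrationState Unregistered -InMaintenanceMode $false -PowerState On |
    Where-Object { $_.SessionCount -eq 0 } |
    ForEach-Object {
        Invoke-Command -ComputerName $_.DNSName -ScriptBlock {
            Restart-Service -Name BrokerAgent -Force
        }
    }
```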
Finally, we have the classic slow logons. There are so many different variables to account for when it comes to a slow logon. Luckily, we have you covered! ControlUp examines more than 40 different phases and factors of logon duration, including the execution of some third-party products common in the Enterprise, like those from Ivanti and VMware (to name just a couple).
Obviously, running the Analyze Logon Duration script action returns great data and lets you pinpoint the origin of problems. But it also allows you to proactively pursue slow logons in your organization without constantly running reports and checking Console metrics.
To do this, we have another great script action. (Have we mentioned that the ControlUp Script Library has 363 [and counting!] community-driven script actions?) If you use ServiceNow, you can use our Report slow logon to ServiceNow ITSM script to report logons that exceed your expected average for further investigation, such as breaking down the logon phases with the Analyze Logon Duration script. Don’t let slow logons frustrate your employees; get on top of them (the slow logons, not your employees; that would be an HR violation) with our awesome scripts!
These are the Top 10 Citrix Errors experienced by ControlUp customers. We hope this breakdown helps lead you to fixes for all of them, and also illustrates how you can use ControlUp to pick up on these errors as they happen, fix them quickly, and, in many cases, have them fixed automatically.