Information about Semaphore and Scheduler Workers

What is a semaphore?

A semaphore is a crucial component of the platform that regulates the number of transactions that can run on a node at any given time. This mechanism protects the resources on the node and prevents overloading or contention for those resources. When a user submits a request, it is directed to a semaphore pool, where it effectively enters a queue. If a semaphore is available in that pool, the request acquires it and proceeds to process the transaction. Once the transaction completes, the semaphore is released and becomes available to other requests waiting in the queue.

Example: a home.do transaction is running on the first semaphore.

Default
Available semaphores: 15
Queue depth: 0
Max queue depth: 24
In-use semaphores:
0:C4F9DF6EDBAC7E004FC2F4621F961986 #1687034 /home.do (Default-thread-11) (0:00:00.004)
Maximum transaction concurrency: 16
Maximum concurrency achieved: 12
--------------------------------

What are the types of semaphore pools?

The semaphore pools have changed over versions and with the specific needs of customers. The out-of-box (OOB) semaphore pools are:

Default (UI traffic)

The default semaphore pool processes all UI transactions (for example, homepages, Service Portal pages, the catalog, etc.). Unless a request is specified to go through a different semaphore pool, it ends up in the default pool, which therefore is the most critical pool for performance. If the default pool fills up and semaphore waits or rejected requests (when the queue is full) begin to occur, users see an immediate impact. Most default semaphore pools are configured with a queue depth of 150 and 16 semaphores for concurrent processing. This queue is where users feel the most direct impact: when it fills up, other user traffic must wait in line, resulting in "wait time" and, more specifically, "semaphore wait time."
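The acquire/queue/release mechanism described above can be sketched in code. This is a minimal, hypothetical illustration (not ServiceNow's implementation); the class name, method names, and default numbers are assumptions chosen to mirror the description.

```python
import threading

# Hypothetical sketch of the pool mechanics described above: a fixed number
# of semaphores, a bounded wait queue, and rejection once the queue is full.
class SemaphorePool:
    def __init__(self, semaphores=16, max_queue_depth=150):
        self.max_queue_depth = max_queue_depth
        self._sem = threading.BoundedSemaphore(semaphores)
        self._lock = threading.Lock()
        self.queue_depth = 0          # requests currently waiting in line

    def handle(self, transaction):
        # Try to grab a free semaphore immediately (no wait).
        if self._sem.acquire(blocking=False):
            try:
                return transaction()
            finally:
                self._sem.release()   # free the semaphore for waiters
        # No semaphore free: join the queue, unless it is already full.
        with self._lock:
            if self.queue_depth >= self.max_queue_depth:
                return "HTTP 429"     # queue full -> request rejected
            self.queue_depth += 1
        try:
            self._sem.acquire()       # "semaphore wait time" accrues here
        finally:
            with self._lock:
                self.queue_depth -= 1
        try:
            return transaction()
        finally:
            self._sem.release()
```

With all semaphores in use and the queue at its maximum depth, `handle` returns the 429-style rejection rather than blocking, which is the behavior users experience as rejected requests.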
The user waits for a semaphore to free up until their request is processed. If the queue depth is reached (150 in this default case), all subsequent requests are rejected, which produces the common "HTTP 429 rejected requests" error message. Below is an example of Default semaphores. These settings should not be changed unless confirmed by the platform development team; customers who change them on their own (especially by increasing the numbers) risk memory and other resource constraints.

AMB Send & Receive

The AMB semaphore pool was separated into two pools, one for sends and one for receives. Both pools are configured the same, with a queue depth of 150 and 4 semaphores for concurrent processing.

SEND messages: a client publishing a message to the server to be distributed to subscribers.
RECEIVE messages: delivery of a message to a client.

Single Integration Pool

Integration traffic is routed through the API_INT pool, commonly called the "single integration pool." Requests are routed here based on both parameters and their URI. All out-of-box REST APIs are specified in the path (/API/now/v1/table | stats | attachment). Note: not all REST traffic is routed through this pool, only REST traffic specific to integrations. REST loads many aspects of the UI, but that is considered UI traffic and is directed through the default semaphore pool. Additionally, if a separate integration pool is configured for a specific web service (for example, JSON or SOAP), the traffic will still be routed through the API_INT pool; to change this, you will need to remove the web service from the parameters in the sys_semaphore record for the API_INT pool.

Presence

The presence pool was introduced in Geneva after performance issues occurred for customers whose high-volume presence requests were flooding the default pool. This pool strictly processes user presence.
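Returning to the HTTP 429 rejections described earlier: an integration client can treat a 429 as a signal that the pool's queue is full and retry after backing off. The sketch below is a generic, hypothetical client-side pattern (not a documented ServiceNow API); `send`, the retry counts, and the delays are all illustrative assumptions.

```python
import time

# Hypothetical client-side retry pattern for HTTP 429 rejections: back off
# and retry, giving the semaphore pool's queue time to drain.
def send_with_backoff(send, max_retries=5, base_delay=1.0):
    """Call send() -> (status, body); retry on 429 with exponential backoff."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body       # success or a non-retryable error
        if attempt == max_retries:
            break                     # retries exhausted, give up
        time.sleep(delay)             # wait for the queue to drain
        delay *= 2                    # exponential backoff: 1s, 2s, 4s, ...
    return status, body
```

Retrying immediately in a tight loop would only add more load to an already-full pool, which is why the delay grows between attempts.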
Note that record presence uses AMB, and therefore all record presence traffic is routed through the AMB pools. Presence requests are high volume but low impact, so the pool was set at 8 semaphores. Because these requests are sent frequently, customers are unlikely to be affected if the queue backs up and presence requests are rejected.

Scheduler Workers

There are 3 types of workers:

Scheduler Worker - Each node has 8 of these workers to process jobs from the scheduler queue. Scheduler workers are threads, numbered 0-7, and they handle jobs that are scheduled to run at specific times or intervals.

Burst Worker - A special 9th worker thread used only in specific scenarios to ensure the most critical jobs are not delayed. For a job to run on the burst worker, it must have a priority <= 25 and have been queued for at least 60 seconds, indicating a delay. This worker is designed to handle jobs that require immediate attention.

Progress Worker - These workers run on scheduler threads and handle long-running jobs whose progress/percentage should be displayed in the UI. Use cases include upgrades, plugins, update sets, and similar tasks. By functioning as a "wrapper" around the job, the progress worker can report the job's activities back to the UI, keeping you informed of its status. These workers are particularly useful for long-running jobs, since users can monitor the progress and confirm the job is still running as expected.

You can see the current details of the workers on the node you're on by visiting the "stats.do" page; scroll down a bit to see the related information. Here's an example of what you'll see: