When should an ECC Queue sensor Business Rule run?

All features or integrations that use the ECC Queue are expected to have a sensor, which is simply a Business Rule that runs on insert of the ECC Queue input record. Its purpose is to process the result of the probe, i.e. the data returned by the probe, usually by updating records in other tables in the instance, and also to update the State, Processed timestamp, and possibly the Error string field of the ECC Queue record itself. Without a sensor, those fields would never be filled in and the input would remain in Ready state.

But when, relative to the record insert transaction, should that Business Rule run? A Business Rule can run:

- Before insert, where the script can make changes to the record before it goes into the SQL database.
- After insert, which is usually for scripts that make changes to other records.
- Async, where the processing is moved to a scheduler worker thread; an extra update is then needed if the script makes changes to the ecc_queue input record itself.
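To make the Async case concrete, here is a toy plain-JavaScript model (not the ServiceNow API; the names `db`, `asyncSensor`, and `workingCopy` are illustrative only) of why an Async sensor must perform an explicit write-back: the script runs after the insert transaction has already committed, so changes to its in-memory copy of the record are not persisted unless written back, which is what the separate current.update() does in a real sensor.

```javascript
// Toy model: a committed "database" row and a sensor that runs asynchronously.
// In real ServiceNow code the record is `current` and the write-back is
// current.update().
var db = { state: 'ready', processed: null }; // committed ecc_queue input row

function asyncSensor(record, update) {
    // ... process the probe payload here ...
    record.state = 'processed';
    record.processed = new Date().toISOString();
    update(record); // without this write-back, db.state would stay 'ready'
}

var workingCopy = Object.assign({}, db); // the async job works on its own copy
asyncSensor(workingCopy, function (rec) {
    Object.assign(db, rec); // persist the changes, like current.update()
});
console.log(db.state); // 'processed'
```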
The main things to take into account when designing your sensor are:

- Avoid blocking the API-INT semaphores with a long-running transaction. This can impact MID Server communication and other integrations. An Async Business Rule is ideal for that, as it runs the script as a separate sys_trigger job; because the script is no longer running as part of the insert, a separate current.update() will be required to update the State, Error string, and Processed timestamp fields. That is an allowed use of current.update(). Discovery solves this by running After insert but scheduling a separate sys_trigger job to actually run the Discovery sensor scripts. More recent versions of Discovery can use the sensor Business Rule to trigger a system event instead, and the Script Action is then run by the events process scheduled job. That has the additional benefit of limiting the event processing to specific nodes or event queues.
- Make sure the "ECC Queue - mark outputs processed" Business Rule is able to run and update the original output to Processed state. It runs After insert, order 90, and will be skipped if the record is already in Processed state at that time. Setting the input to Processed before then will leave the output in Processing state, and that could cause jobs to be re-run when the MID Server restarts, or fails over to another MID Server in a cluster when it goes down.
- Avoid recursion. Running After insert, order 100, would allow the above Business Rule to run first and set the output to Processed. However, setting the current record to Processing or Processed at that point would require a current.update() while the insert is still happening, which is forbidden in the platform. There are tricks that attempt to avoid the performance impact of doing this, but none are best practice.
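The strict condition described below can be mirrored as a plain-JavaScript predicate for illustration. The function name `shouldRunSensor`, the topic `MyIntegrationProbe`, and the `myintegration:` Agent correlator prefix are all hypothetical; in the actual Business Rule the same checks are written directly against `current` in the Condition field.

```javascript
// Plain-JavaScript mirror of a strict sensor condition: platform fields
// (agent, queue, state) plus job-specific fields (topic, agent_correlator),
// so the sensor only fires for this integration's inputs.
function shouldRunSensor(rec) {
    return typeof rec.agent === 'string' &&
        rec.agent.indexOf('mid.server.') === 0 &&  // agent.startsWith("mid.server.")
        rec.queue === 'input' &&
        rec.state === 'ready' &&
        // job-specific part: topic plus an agent_correlator prefix
        rec.topic === 'MyIntegrationProbe' &&
        typeof rec.agent_correlator === 'string' &&
        rec.agent_correlator.indexOf('myintegration:') === 0;
}

console.log(shouldRunSensor({
    agent: 'mid.server.mid01',
    queue: 'input',
    state: 'ready',
    topic: 'MyIntegrationProbe',
    agent_correlator: 'myintegration:42'
})); // true
```

Note that the predicate rejects anything that is not an input in Ready state, so outputs and already-processed inputs never trigger the sensor.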
Running Async avoids all of these potential issues. Using the following template should not cause any problems:

When: Async
Order: 100
Insert: Ticked
Condition: current.agent.startsWith("mid.server.") && current.queue == "input" && current.state == "ready" && <insert very strict and specific condition for the job here>

The condition needs to be very specific to the job, using the Topic, Source, Name, and especially the Agent correlator field value and prefix, to ensure the job only runs for the correct inputs. See: KB2567261 Best Practices for usage of the Agent Correlator field of the ECC Queue. You may want to call a Script Include function in the condition if you need to do some table lookups; however, if you do that, make sure the queries are as optimised as possible, because this condition check will run for all ecc_queue inputs, not just the ones for this job.

The script should include:

- Something to get the payload from the input record, not forgetting that it could be in an attachment if it is larger than 500 KB:

var payload = current.payload;
if (payload == '<see_attachment/>') {
    var sa = new GlideSysAttachment();
    payload = sa.get(current, 'payload');
}

- Your own business logic, to take that payload data and do something with it. It may need to parse the XML to pick out specific values.
- Finally, setting the input to Processed:

current.state = 'processed';
current.processed = gs.nowDateTime();
current.update();

If the job is expected to take a long time, as in minutes, you could start the script with an update that sets the ecc_queue record to Processing and populates the Processed timestamp at that point. If it takes less than a second, there really isn't much point.
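For the "parse the XML to pick out specific values" step, here is a minimal plain-JavaScript sketch of pulling one element's text out of a payload string. Inside ServiceNow you would normally use the XMLDocument2 API for this; the regex approach and the `getXmlValue` helper below are illustrative assumptions only, and only suit a simple, well-formed payload with no nested or repeated same-name tags.

```javascript
// Extract the text content of the first <tagName>...</tagName> element
// from an XML payload string, or return null if the tag is absent.
function getXmlValue(payload, tagName) {
    var re = new RegExp('<' + tagName + '>([\\s\\S]*?)</' + tagName + '>');
    var m = re.exec(payload);
    return m ? m[1] : null;
}

var payload = '<results><result><status>success</status></result></results>';
console.log(getXmlValue(payload, 'status')); // 'success'
```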