Performance Tuning Tips for Tenable Integration in ServiceNow

Summary

We have observed performance issues when importing a large volume of vulnerability data from Tenable sources such as:

- Tenable.io Assets Integration
- Tenable.io Fixed Vulnerabilities Integration
- Tenable.io Open Vulnerabilities Integration

Issues

1) Delayed data processing

- Importing large datasets can cause significant delays. For example, the Import Queue Wait Time on the Vulnerability Integration Run record can reach several hours (e.g., 4-12 hours).
- Data processing is managed by Vulnerability Import Templates. By default, only 5 templates are active, but this can be scaled up to 15 or 20 based on instance capability and node availability. Increasing the number of templates enables parallel processing and reduces overall wait times. For information on increasing the number of templates, see KB0995644.
- Increase the number of data sources if queue entries are waiting a long time to be picked up.
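As a rough illustration of why adding import templates reduces queue wait time, the sketch below models the queue draining in parallel rounds. The function name and the sample numbers (600 queue entries, ~4 minutes per entry) are assumptions for illustration only; actual ServiceNow scheduling is more complex.

```javascript
// Rough model of parallel import-template processing (illustration only).
// Each active template works one queue entry at a time, so the queue drains
// in ceil(entries / templates) rounds.
function estimateWaitMinutes(queueEntries, activeTemplates, minutesPerEntry) {
  var rounds = Math.ceil(queueEntries / activeTemplates);
  return rounds * minutesPerEntry;
}

// 600 queue entries at ~4 minutes each:
console.log(estimateWaitMinutes(600, 5, 4));  // default 5 templates -> 480 min (8 h)
console.log(estimateWaitMinutes(600, 15, 4)); // 15 templates -> 160 min (~2.7 h)
```

Under this simplified model, tripling the active templates cuts the wait roughly to a third, which matches the intent of scaling templates from 5 up to 15 or 20.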
- If the import wait time is high, the integration is running into starvation, and the recommendation is to increase the number of data sources. If many queue entries are waiting for a worker thread, increase the number of data processors in line with the available infrastructure. For more information on increasing the number of data sources, see KB0995003.
- To check the node configuration, navigate to <instance url>/sys_cluster_state_list.do?sysparm_query= and check the number of primary nodes available for the instance (Node Type = Generic Primary).

2) Integration fails due to maximum attachment size

- Failures can also occur when incoming attachments exceed the allowed size limit. Although the default sys_attachment.max_size is 1024 MB, increasing this limit is not recommended.
- Instead, reduce the chunk_size parameter in the integration job from 500 to 200 or a lower value. This helps the instance handle large attachments more efficiently, especially if it frequently receives large payloads.

3) "Job exceeded processing time and was forced to complete status" error for the Tenable.io integration

- Set the offset value of the respective integration parameter to 100 to resolve the issue.
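To see why lowering chunk_size helps with the attachment limit, note that the same result set is split into more, smaller chunks. The sketch below uses assumed record counts purely to illustrate the arithmetic; it is not the integration's actual chunking code.

```javascript
// Illustration only (assumed numbers): a smaller chunk_size splits the same
// result set into more attachments, so each individual attachment is smaller
// and less likely to exceed sys_attachment.max_size.
function chunkCount(totalRecords, chunkSize) {
  return Math.ceil(totalRecords / chunkSize);
}

var totalRecords = 100000; // hypothetical detections in one import run
console.log(chunkCount(totalRecords, 500)); // chunk_size 500 -> 200 attachments
console.log(chunkCount(totalRecords, 200)); // chunk_size 200 -> 500 smaller attachments
```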
- Sample integration URL: https://instance.service-now.com/nav_to.do?uri=sn_sec_int_impl.do?sys_id=xxxx

4) The VIT is not associated or created: discovered items have an empty Vulnerability item field even though detections are present

- Check the system logs around the detection creation time.
- Review the processVI function in the "DetectionBase" script include, which handles the creation logic.
- Check the node logs for any related errors.
- Check the exclusion rules. If a new detection meets the conditions of an exclusion rule, the rule is associated with the detection, but no VIT is created. See https://www.servicenow.com/docs/bundle/yokohama-security-management/page/product/vulnerability-response/concept/exclusion-rules.

5) Check for slow vulnerability rules

- When a Tenable import runs, the related Vulnerability Rules run after the vulnerability data is imported. The three typical Vulnerability Rules are:
  - Vulnerability Risk Rules (table: sn_vul_calc_risk.LIST)
  - Vulnerability Assignment Rules (table: sn_vul_assignment_rule.LIST)
  - Vulnerability Group Rules (table: sn_vul_grouping_rule.LIST)
- Follow the steps below to find where the bottleneck is on a Tenable Integration Run record:
  (a) Open the sn_vul_integration_run.LIST table and open the corresponding Tenable Integration Run record. Alternatively, go to Tenable Vulnerability Integration > Primary Integrations, open a Tenable Integrations record (for example, Tenable.io Open Vulnerabilities Integration), and find the Tenable Integration Run records in the [Vulnerability Integration Runs] tab.
  (b) Add the fields below to the list view of "Vulnerability Integration Process" if they are not already there:
    - Assignment rules time
    - Group rules time
    - Risk rules time
- Other fields, such as "Import queue processing time", "VI creation time", and "CI lookup time", can also be added for more processing information, which helps diagnose where the slowness occurs while processing Tenable data.
- If slow processing is found on one specific assignment rule, try disabling it and re-running the Tenable integration.
- If multiple assignment rules are all running slowly, try the options below:
  (a) Disable the business rule "Run assignment rules" and re-run the Tenable integration.
  (b) Mark all assignment rules and group rules as inactive and re-run the initial import. This imports all the Vulnerability Items for the first time. Afterwards, re-enable these rules (group and assignment rules) and click the [Apply Changes] button on them. This applies the rules to the vulnerability items already present on the instance and reduces the workload during the initial import.
- Once the initial import is done, subsequent imports only process the delta of new records, which is smaller and therefore processed faster.
- If any of the Vulnerability Rules are found to be customized, raise a task to involve the development team to check the logic further.

6) Check for slow queries or scripts

- Slow scripts can be found in the sys_script_pattern.LIST table, and slow queries in the sys_query_pattern.LIST table. The [Average execution time (ms)] and [Execution count] fields show which business rules, database queries, scripts, etc. are running most frequently and most slowly during a Tenable import.
- You can also check the localhost logs for slow business rules running on any vulnerability tables (table names starting with "sn_vul_").
- When a slow business rule or query is found, first complete the basic checks below; if the root cause is still unknown, raise a task to involve the Performance team.
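When reviewing sys_script_pattern or sys_query_pattern records, the total time consumed (average execution time multiplied by execution count) is often more telling than the average alone: a fast query executed millions of times can outweigh a slow one executed rarely. The sketch below ranks a few made-up sample rows this way; the record names and numbers are invented for illustration.

```javascript
// Made-up sample rows in the shape of sys_script_pattern / sys_query_pattern
// records. Ranking by total time (avg execution time x execution count)
// surfaces the patterns that consume the most time overall during an import.
var patterns = [
  { name: 'BR: Run assignment rules',               avgMs: 40,  count: 50000 },
  { name: 'BR: Calculate risk score',               avgMs: 900, count: 300 },
  { name: 'Query: sn_vul_vulnerable_item lookup',   avgMs: 15,  count: 200000 }
];

var ranked = patterns
  .map(function (p) { return { name: p.name, totalMs: p.avgMs * p.count }; })
  .sort(function (a, b) { return b.totalMs - a.totalMs; });

ranked.forEach(function (p) {
  console.log(p.name + ': ' + p.totalMs + ' ms total');
});
// The cheap-looking 15 ms query ranks first because of its execution count.
```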
  (a) Check whether any indexes are missing on the vulnerability tables compared with an OOTB (Out-Of-The-Box) instance.
  (b) Check whether any heavy database queries or script logic in that business rule can be optimized.

7) Increase the Tenable attachment evaluation time

- During a Tenable import, the vulnerability data is retrieved from the Tenable scanner and saved as an XML attachment in the sn_vul_ds_import_q_entry.LIST table.
- The attachment evaluation time is set to 3600 seconds (60 minutes) by default in the script include "VulnerabilityDSAttachmentManager".
- When a processing issue causes a job to run beyond this time, the error "Job exceeded processing time and was forced to complete status" is written to the "Processing notes" field of the sn_vul_ds_import_q_entry records.
- In this case, you can increase the evaluation time as follows to handle a larger volume of Tenable data:
  (a) Open the script include below on your instance (replace <instance_name> with your instance name in the URL): https://<instance_name>.service-now.com/nav_to.do?uri=sys_script_include.do?sys_id=aa1b81669f31020034c6b6a0942e7014
  (b) Increase the default hard-coded limit of 3600 seconds (variable _MAX_PROC_TIME_S: 3600).
- After the timeout value in the script include "VulnerabilityDSAttachmentManager" has been changed, the same value should also be changed in the script include "VulnerabilityIntegrationUtils".
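The timeout guard behind step (b) works roughly along these lines. This is a simplified sketch, not the actual OOTB VulnerabilityDSAttachmentManager code; only the variable name _MAX_PROC_TIME_S and its 3600-second default come from the article above.

```javascript
// Simplified sketch of the attachment-processing time guard (not the actual
// OOTB script include). _MAX_PROC_TIME_S is the hard-coded limit that step (b)
// tells you to raise, e.g. from 3600 to 7200 for larger imports.
var _MAX_PROC_TIME_S = 3600;

function hasExceededProcessingTime(startMs, nowMs) {
  var elapsedSeconds = (nowMs - startMs) / 1000;
  // Once this returns true, the job is forced to complete and the
  // "Job exceeded processing time..." note is written to the queue entry.
  return elapsedSeconds > _MAX_PROC_TIME_S;
}

// A job that started 90 minutes ago exceeds the default 60-minute limit:
var start = Date.now() - 90 * 60 * 1000;
console.log(hasExceededProcessingTime(start, Date.now())); // true
```

Raising _MAX_PROC_TIME_S simply widens this window; remember to mirror the change in "VulnerabilityIntegrationUtils" as noted above so both script includes agree.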