What is the best way to archive a large volume of records without affecting instance performance?

Issue

ServiceNow can struggle to efficiently process a large volume of retired Configuration Item (CI) records queued for archiving. This raises a critical question: how can these records be archived at scale while keeping instance performance unaffected? The volume of retired CIs requires a strategic approach to avoid overwhelming system resources, delaying other work, or degrading service quality during the archival process.
Symptoms

For example: suppose over 13 million servers and VM instances need to be archived, and two separate archive policies are set up, one for servers and one for VM instances (both matching a discovery source of SG-Azure, an operational status of Retired, and sys_updated_on more than 90 days ago). Only approximately 1.6 million server tasks were created after 9 days. This initial task-generation phase had not yet progressed to the archiving stage, and the VM instance rule had not begun executing at all. The slow progression raised concerns that queuing delays would affect other archive rules, as the system struggled to manage the sheer volume of work within the expected timeframe.
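A back-of-envelope calculation using the figures above shows why this pace is a problem: at the observed rate, task creation alone for the full data set would take on the order of two and a half months.

```javascript
// Rough throughput estimate from the observed figures above:
// ~1.6M tasks created in 9 days, against ~13M total records to archive.
const tasksCreated = 1.6e6;
const elapsedDays = 9;
const totalRecords = 13e6;

const tasksPerDay = tasksCreated / elapsedDays;          // ~177,778 tasks/day
const daysForTaskCreation = totalRecords / tasksPerDay;  // ~73 days, before any archiving runs

console.log(Math.round(tasksPerDay), Math.round(daysForTaskCreation)); // 177778 73
```

And this covers only task creation; the actual archiving of each task's records still has to run afterwards.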
Release

Any

Cause

The volume of records and the serial nature of the archive process caused significant delays in task creation and execution, and the existing policy conditions were too broad, leading to inefficient processing. In the sample scenario above, based on the policy condition (Retired servers from SG-Azure) and the retirement filter, several million more retired servers from SG-Azure still require archiving, yet tasks have been created for only roughly 2 million of them. At this scale, task creation alone could realistically take weeks or months, because the system must serially evaluate each server against the policy criteria before generating the corresponding task.

Resolution

Create multiple policies that each archive a limited batch of data (e.g., one quarter at a time) to improve both performance and control.
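The quarterly batching can be expressed as one encoded-query date range per policy. The helper below is a hypothetical sketch: the field being bucketed (`sys_updated_on`) and the policy naming convention are assumptions to adapt to whatever your policies actually filter on.

```javascript
// Hypothetical helper: builds one policy name and encoded-query date-range
// fragment per calendar quarter, so each archive policy qualifies only a
// bounded slice of CIs. Append the fragment to the policy's other conditions
// (discovery source, operational status, etc.).
function quarterRanges(year, field) {
  var quarters = [
    ['01-01', '03-31'], ['04-01', '06-30'],
    ['07-01', '09-30'], ['10-01', '12-31']
  ];
  return quarters.map(function (q, i) {
    return {
      name: 'Archive SG-Azure servers ' + year + 'Q' + (i + 1),
      // Encoded-query BETWEEN syntax: fieldBETWEENstart@end
      query: field + 'BETWEENjavascript:gs.dateGenerate(\'' + year + '-' + q[0] +
             '\',\'00:00:00\')@javascript:gs.dateGenerate(\'' + year + '-' + q[1] +
             '\',\'23:59:59\')'
    };
  });
}

// Example: quarterRanges(2024, 'sys_updated_on')[0].name
// yields 'Archive SG-Azure servers 2024Q1'
```

Because each policy qualifies a disjoint date range, archived quarters drop out of later policies' result sets entirely, which is what keeps subsequent executions fast.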
Test with smaller batches (e.g., 100K CIs) before scaling up to confirm efficiency. Limiting the CIs qualified by any one policy is ideal from both an audit/control and a performance point of view: once (say) the Q1 CIs are archived, they no longer contribute to the next policy's execution, which should therefore run faster.

Steps to resolve the scenario above:

1. Stop the Archive/Delete job "CMDB Data Manager Archive/Delete Policy Processor" to prevent further creation of CMDB task records.
2. All existing archive tasks are still in the Open state. They would normally be set to Work In Progress by that job, but since it is stopped, set those tasks to Work In Progress manually (or through a script) so that the subflow executes for each task and creates sys_archive_run_chunk records, which the Archive job then picks up from sys_trigger.
3. Confirm that all existing tasks are processed automatically, indicated by state = Closed Complete. (Note: setting tasks to Closed Incomplete or Closed Cancelled disassociates the CIs from the tasks, so they would be picked up again in the next policy execution, causing another similar long-running job.)
4. Create multiple policies that each qualify a limited amount of data (e.g., "Archive SG-Azure servers 2024Q1" for archiving only Q1 data). Depending on the condition and the existing indexes, the time needed to query the underlying CIs can differ from policy to policy (visible on the policy review page), so it is best to start by qualifying roughly 100K CIs to gauge the speed, then scale up if needed.
5. Monitor the performance of the Archive job.
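Step 2 above can be sketched as a background script. This is a non-authoritative sketch: the task table name (`cmdb_dm_task`) and the numeric state values for Open and Work In Progress are assumptions that must be verified on your instance before running anything.

```javascript
// Sketch for step 2: move already-created archive tasks from Open to
// Work In Progress so their subflows run and sys_archive_run_chunk
// records are generated for the Archive job to pick up.
var OPEN = '1';              // ASSUMED choice value for Open; verify on your instance
var WORK_IN_PROGRESS = '2';  // ASSUMED choice value for Work In Progress; verify
var BATCH_LIMIT = 1000;      // update in small batches to limit instance load

if (typeof GlideRecord !== 'undefined') {        // only runs inside ServiceNow
  var task = new GlideRecord('cmdb_dm_task');    // ASSUMED task table name; verify
  task.addQuery('state', OPEN);
  task.setLimit(BATCH_LIMIT);                    // cap each run of the script
  task.query();
  while (task.next()) {
    task.setValue('state', WORK_IN_PROGRESS);
    task.update();                               // state change lets the subflow proceed
  }
}
```

Because of the batch limit, the script would need to be re-run (or scheduled) until no Open archive tasks remain; keeping batches small avoids recreating the very load problem being resolved.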