Mass-Deletion and Excess Data Management Recommendations

Issue

There may be occasions when it is desirable or necessary to mass-delete data from a table. You may want to perform this deletion with a predefined set of conditions, or you may simply want to wipe the table clean. Regardless of the situation, excepting corrupt or inaccessible data, there is a deletion method that will suit your needs. The data deletion methods outlined in this article are as follows, ordered from the simplest approach to the most advanced:

- UI actions
- Clone exclude rules
- Table cleanup policies
- JavaScript

The best approach will vary on a case-by-case basis, and it will be up to you to determine the best option in a given situation. For large quantities of records, there is no fast option except for clone exclude rules, but these may not be suitable for your situation. If you have a business-critical need to delete multiple millions of records from a table and cannot wait for the amount of time required by the methods in this article, you can open a Case with ServiceNow to consult a Technical Support Engineer.

ServiceNow Policy on Requests for Data Removal: Platform data relationships are extremely complex and extend well beyond simple database table management. Enhanced table models, reference fields, business logic, workflows, and customizations make it impossible to simply remove data from the "backend". ServiceNow is unable to account for the impact of such an action; even removal of seemingly innocuous records could affect platform stability, causing unexpected behavior from business logic previously known to be working without fault. ServiceNow cannot intervene in data removal unless the presence of those records is causing a business-critical operation to be unusable, that data has caused or will cause an outage, or those records are the result of a defect/Problem.

DISCLAIMER: No portion of this article is intended to be implemented as-is on a live instance without thorough validation and testing. The contents of this article are provided without warranty or assurance of suitability for any given situation. Usage of any script or implementation provided in this article is done at your own risk.

Planning

Before deciding how to go about deleting data in bulk, it's important to consider some key points:

- What cascade deletions will occur? (Child/related records)
- What references will be broken/cleared?
- What business rules will be triggered?
- What workflows will run?
- What other features could be impacted? (Cross-scope privilege generation, etc.)
- Will my deletion be tracked by update sets?

Performing the appropriate research beforehand can help you determine the potential side effects of your deletion and formulate a good plan for tidying up afterward if necessary. For example, you may want to devise a plan for replacing broken references. With a decent understanding of the moving parts that will come into play, you can make a more educated decision about whether or not it is safe and responsible to proceed with your plan. A simple first step is to measure how many records your deletion will actually touch, as in the sketch below.
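The following is a minimal sketch of such a pre-deletion check, using GlideAggregate to count the records a deletion filter would match. The table, state value, and 30-day window are illustrative assumptions; substitute your own criteria:

var agg = new GlideAggregate("incident");
agg.addQuery("state", "7"); // Assumes 7 = Closed; verify the choice value on your instance
agg.addQuery("sys_updated_on", "<", gs.daysAgoStart(30)); // Not updated in 30+ days
agg.addAggregate("COUNT");
agg.query();
if (agg.next())
    gs.info("Records matching the deletion filter: " + agg.getAggregate("COUNT"));

A count far higher or lower than expected is a strong signal that the filter needs another look before any deletion runs.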
It's also important to consider possible backout plans. In most cases, exporting and downloading a large number of records is not viable, so the backout options for a mass deletion are limited. The Delete Recovery tool available in London is a great answer for this, as it will record deletions and allow you to roll them back, even restoring broken references. You can read about the Delete Recovery tool here: Roll back and delete recovery

That said, there is no magic "undo" button! Even with the advanced delete recovery tools in London, you must use extreme caution when formulating and executing your plan.

Deleting With UI Actions

PROS:
- Very simple

CONS:
- Slow
- Vulnerable to cancellation by UI transaction quota rules
- Inflexible

This can be considered the "easy" approach. If you would like to delete all the records from a table, you can simply open the sys_db_object record for that table and click the "Delete All Records" UI action. By default, UI transactions like this are limited to about 5 minutes; if they exceed that duration, they will automatically be canceled. Thus, if your table contains a lot of data, or data that will trigger many business rules or cascade deletions, you may need to increase the transaction quota in order to perform the operation in one go.

You can also use the list view to filter a table, select multiple records, and click the "Delete" UI action. However, due to pagination, you are limited in the number of records you can select and delete at once, so for a large number of records the list view approach may not be viable.

Deletions done this way will trigger business rules and workflows, and will be tracked by update sets. These UI actions rely internally on a GlideRecord object and the deleteMultiple() function, so their performance is comparable to this script:

var gr = new GlideRecord("table_name");
gr.query();
gr.deleteMultiple();

Deleting With Clone Exclude Rules

PROS:
- Very fast

CONS:
- Extremely inflexible; will truncate the entire table
- Can leave broken references and possibly orphan records
- The instance must be cloned over, so this is not viable for production

When data is excluded by a clone, the excluded table is truncated. Table truncation is a relatively cheap operation and is much faster than deleting all the records from a table. However, any references to the excluded records on other tables will be broken. That means the referencing fields will still contain the sys ID of records that no longer exist. This may result in unintended consequences, so it's important to consider this point.

Another risk of direct table truncation is orphaned data. For table-per-class (TPC) hierarchies, each extended table exists as a separate table on the database, and the separate physical tables in the hierarchy are joined to form complete rows. If you truncate one table in the hierarchy, you will remove the data from that physical table, but the joined rows on the parent and/or child tables will remain. Data that has been corrupted this way is considered orphaned and will no longer be accessible through normal means at the platform level. Thus, if you are excluding a table that is part of a TPC hierarchy (unlike the Task and CMDB hierarchies), you should be sure to exclude all parent and child tables as well to avoid creating orphaned data.

Deleting With Table Cleanup Policies

PROS:
- Set-it-and-forget-it feature; runs in the background on a schedule
- Flexible configuration options
- Will not trigger business rules/workflows (unless the table has the iterativeDelete attribute)
- Respects the reference cascade rule

CONS:
- Slow
- Designed for maintenance, not ideal for one-time deletions

This is a good solution if you wish to routinely delete records that fit some simple criteria, such as closed tasks that have not been updated after a specific amount of time. The table cleaner job runs once per hour.

Unless the table has the iterativeDelete attribute set to true, the underlying code does not rely on a GlideRecord to perform the deletion. That means no business rules or workflows will be triggered by the deletion, which can significantly improve performance. It also means the deletion will not be audited or tracked by update sets. However, if the attribute is present and set to true, these engines will be triggered.

When implementing a table cleanup policy, make sure there is a supporting index to assist the cleanup condition, especially on large tables.

Here is an example of a table cleanup policy that will delete incident records in the "Closed" state that have not been updated in 30 days:
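Table cleanup policies are stored as records in the sys_auto_flush table. The following is a minimal sketch that creates the policy above from a background script; the field values, including the assumption that choice value 7 represents "Closed" on your instance, should be verified before use:

var policy = new GlideRecord("sys_auto_flush");
policy.initialize();
policy.tablename = "incident"; // Table to clean
policy.matchfield = "sys_updated_on"; // Date field the record age is measured against
policy.age_secs = 2592000; // 30 days, expressed in seconds
policy.conditions = "state=7"; // Encoded query; assumes 7 = Closed
policy.insert();

The same record can also be created manually by navigating to the sys_auto_flush table and filling in these fields on the form.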
Deleting With JavaScript

PROS:
- Near-limitless flexibility
- Can bypass business rules, workflows, and other engines
- Can be run directly, or turned into a business rule, UI action, scheduled job, etc.
- Not vulnerable to transaction quota rules (if run in the background)

CONS:
- Slow
- Risky
- Requires scripting knowledge

JavaScript code can be executed directly via the Scripts - Background module. This module has an important checkbox labeled "Cancel after 4 hours", which is checked by default. In most cases, you will want to make sure this is unchecked for bulk deletion scripts. You will also often want to ensure the "global" scope is selected in the dropdown unless you are deleting data from a scoped table. It will be up to you to determine the scoping requirements of your script.

The design and logic of a scripted deletion depend on the requirement. It could be as simple as deleting every record from a table without triggering business rules and workflows, or it could contain advanced conditional logic and perform additional cleanup operations. Here is an example of the former, which simply deletes all records on the specified table:

var gr = new GlideRecord("table_name");
gr.query();
gr.setWorkflow(false); // Bypass business rules and workflows
gr.deleteMultiple();

Applying conditional logic enables you to be more specific about your deletion. Here is an example of the same script, modified to only delete records matching certain conditions:

var gr = new GlideRecord("table_name");
gr.addQuery("state", "closed");
gr.addQuery("category", "4ca01be0db31eb009540e15b8a961936");
gr.addNullQuery("user");
gr.query();
gr.setWorkflow(false); // Bypass business rules and workflows
gr.deleteMultiple();

Using setWorkflow(false) will also suppress update set tracking. It is a good idea to use it when you do not need any of these engines and simply want the data deleted as quickly as possible. It is also good practice to add some logging to your script to make it easier to debug and evaluate results; see the sketch below. You can put a script like this into a scheduled script execution job and run it on a schedule, or even create custom UI actions on tables where bulk deletion based on certain logic will be a common end-user requirement.
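Where progress visibility matters, one option is to trade some speed for logging by deleting records one at a time rather than with deleteMultiple(). The following is a minimal sketch of this pattern; the table name and conditions are illustrative assumptions:

var count = 0;
var gr = new GlideRecord("table_name");
gr.addQuery("state", "closed");
gr.setWorkflow(false); // Bypass business rules, workflows, and update set tracking
gr.query();
while (gr.next()) {
    gr.deleteRecord(); // Deletes only the current record
    count++;
    if (count % 1000 === 0)
        gs.info("Mass delete progress: " + count + " records deleted");
}
gs.info("Mass delete complete: " + count + " records deleted");

Because deleteRecord() issues one delete per record, this is considerably slower than deleteMultiple(); reserve it for cases where observability of a long-running deletion outweighs raw speed.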
IMPORTANT: Never implement a script on a live production instance without thorough testing. Even a small mistake, such as a misspelled column name, can lead to unintended results. It is best to clone over a sandbox or testing instance to use for experimentation, where any lost data can easily be recovered by cloning again.

Maintenance and Prevention

The best way to avoid the necessity of mass deletions is to prevent data from growing to an unmanageable size in the first place, primarily through the table cleanup policies described above. It is also possible to create a scheduled script execution job that performs the deletion using JavaScript, but the table cleaner will be more performant in most cases.

Keeping your database as lean as possible will yield significant performance and usability benefits, making your data more accessible and more manageable. A well-disciplined strategy for retiring old data that satisfies compliance regulations while keeping your database fast will have a significant positive impact on your company's operations on the ServiceNow platform.

Related Links
- Roll back and delete recovery
- Table Cleaner - Vancouver documentation
- KB0694151 - How to use Table Cleaner to remove unwanted data
- KB1518213 - Mastering Data Management in ServiceNow: from tracking growth to efficient cleanup