Why opening an incident record might freeze the browser and eventually time out the transaction

Issue

Occasionally, trying to open an incident record might cause the browser to stop loading and eventually time out the user transaction. Reviewing the incident record from a list view can show that the record is being continuously updated every ~10 seconds. A deeper investigation of the instance's transaction logs helps identify the root cause. Various conditions can lead to this behaviour; this article details the causes found in a specific duplicate data entry scenario.

Cause

When the affected task record was created from the relevant record producer, a duplicate key entry issue occurred, as shown in the localhost log below:

2017-12-11 07:59:04 (868) Default-thread-3 DACADF9CDB030B0038BB62EB0B9619A0 SEVERE *** ERROR *** FAILED TRYING TO EXECUTE ON CONNECTION 12: INSERT INTO task (`admin_override`, `made_sla`, `watch_list`, `upon_reject`,`sys_updated_on`, `auto_request`, `u_transaction_sent_to_stars`, `number`, `sys_updated_by`, `u_quality_check`, `opened_by`, `sys_created_on`, `sys_domain`, `state`, `sys_created_by`, `u_resolved`, `knowledge`, `u_no_separation`, `u_reopen`, `impact`, `u_work_notes_counter`, `active`, `priority`, `calendar_stc`, `opened_at`, `business_duration`, `a_ref_1`, `approval_set`, `u_comments_counter`, `short_description`, `assignment_group`, `description`, `calendar_duration`, `u_source`, `notify`, `sys_class_name`, `u_date_created`, `sys_id`, `contact_type`, `a_ref_10`, `incident_state`, `urgency`, `u_chase_counter`, `u_local`, `company`, `reassignment_count`, `u_servicedesk_perimeter`, `severity`, `variables`, `approval`, `sys_mod_count`, `u_project_linked`, `backordered`, `a_ref_6`, `upon_approval`, `escalation`) VALUES(...)
/* Unique Key violation detected by database (Duplicate entry '2cda1f90db430b0038bb62eb0b96193c' for key 'PRIMARY')
java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '2cda1f90db430b0038bb62eb0b96193c' for key 'PRIMARY'
at org.mariadb.jdbc.internal.SQLExceptionMapper.get(SQLExceptionMapper.java:132)
[...]
at com.glide.sys.Transaction.run(Transaction.java:1977)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.mariadb.jdbc.internal.common.QueryException: Duplicate entry '2cda1f90db430b0038bb62eb0b96193c' for key 'PRIMARY'
at org.mariadb.jdbc.internal.mysql.MySQLProtocol.getResult(MySQLProtocol.java:995)
at org.mariadb.jdbc.internal.mysql.MySQLProtocol.executeQuery(MySQLProtocol.java:1050)
at org.mariadb.jdbc.internal.mysql.MySQLProtocol.executeQuery(MySQLProtocol.java:1030)
at org.mariadb.jdbc.MySQLStatement.execute(MySQLStatement.java:289)

The sys_id shown in the duplicate entry error is the sys_id of the task record where the issue occurs. This duplicate entry error most likely resulted from a race condition: the record producer was submitted twice, one submission immediately following the other. Although this is an edge case, the error caused the asynchronous business rules in place to run in an infinite loop, creating events that kept updating the affected task record every ~10 seconds. These repeated updates generate a huge amount of audit data on the record. Because of this large audit history, the record can no longer be loaded or rendered by the browser: while the record is being opened, the [sys_history_line] table is dynamically populated from the main [sys_audit] table in a very large transaction that ends up timing out.
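To illustrate the failure mode described above (this is not the customer's actual business rule), a minimal sketch of an asynchronous business rule on [task] shows how an unguarded self-update re-queues the rule on every run, and how a guard breaks the cycle. The use of the `u_transaction_sent_to_stars` field as the guard flag is an assumption based on the column list in the log:

```javascript
// Hypothetical asynchronous business rule on the [task] table.
// Without the early return and setWorkflow(false), each call to
// current.update() re-triggers the async rules, producing an
// endless chain of updates (and audit rows) every ~10 seconds.
(function executeRule(current /* previous is not available in async rules */) {

    // Guard: if the work has already been done, bail out instead of
    // updating the record again and re-queuing ourselves.
    if (current.getValue('u_transaction_sent_to_stars') == 'true') {
        return;
    }

    current.setValue('u_transaction_sent_to_stars', 'true');
    current.setWorkflow(false); // do not re-trigger business rules/engines on this write
    current.update();

})(current);
```

The key design point is that any rule which updates the record it runs on needs either a convergence check (the guard above) or `setWorkflow(false)`, so a bad record state cannot turn into a permanent update loop.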
Resolution

1. Temporarily remove the activity formatter from the form to restore usability.
2. Identify and temporarily deactivate the asynchronous business rules that are updating the record, then reactivate them once the cleanup is complete.
3. Remove the redundant, unwanted audit entries from the system. These are the records in the sys_audit table where the Document Key is the sys_id mentioned in the error message:
   /sys_audit_list.do?sysparm_query=documentkey%3D<sys_id>
   You can do this via JavaScript by following the steps in the Mass-Deletion and Excess Data Management Guide, or contact ServiceNow Technical Support for further assistance.

To avoid this happening in the future, we recommend excluding the custom field from auditing. You will find more information about this in the documentation: Exclude A Field From Being Audited
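As a sketch of the audit cleanup step, the redundant sys_audit rows can be removed from a background script (Scripts - Background) on the instance. The sys_id below is the one from the log in this article; substitute your own, and test on a sub-production instance first. The sys_history_set cleanup assumes the history set's `id` field holds the record's sys_id, so the cached history is rebuilt cleanly afterwards:

```javascript
// Delete the audit rows generated by the update loop for the affected
// record only. 'documentkey' in sys_audit holds the audited record's sys_id.
var audit = new GlideRecord('sys_audit');
audit.addQuery('documentkey', '2cda1f90db430b0038bb62eb0b96193c');
audit.query();
gs.info('sys_audit rows to delete: ' + audit.getRowCount());
audit.deleteMultiple(); // bulk delete without loading each row into memory

// Remove the cached history set for the record so [sys_history_line]
// is regenerated from the (now much smaller) sys_audit table.
var hs = new GlideRecord('sys_history_set');
hs.addQuery('id', '2cda1f90db430b0038bb62eb0b96193c');
hs.query();
hs.deleteMultiple();
```

Deleting only the rows matching the affected sys_id keeps the legitimate audit trail of every other record intact, which is why the query is scoped by `documentkey` rather than by date.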