Performance improvements of scheduled imports

Scheduled imports work in two phases. First, the data is loaded into the import set table (also known as the staging table). Second, the import set data is transformed into the target tables. The first step in improving performance is therefore to look at the import set and find the load time ("Load run time" in the form view) and the transform time ("Run time" under the Import Set Runs tab).

Improving the load time

1. Improve the performance of the JDBC data source
By default, a JDBC data source loads records in batches of 200. To increase the batch size:
a. If the 'Integration – JDBC' plugin is not installed, install it.
b. Modify the form layout of the JDBC data source to add the 'Jdbcprobe result set rows' field.
c. Set its value to 5000.

2. Improve the performance of a 'Custom (Load by Script)' data source
Custom load-by-script performance depends on how quickly the script can fetch the data.
Use an Integration Hub data stream action to optimize single-threaded performance. If single-threaded performance is still not enough, the only remaining option is to partition the data and load the partitions in parallel.

3. Enable Batch import
Once the data is obtained from the third-party system, it must be saved to the import set table. Batch import saves the data to the import set table as one batched SQL statement rather than many individual statements. The batch size is the maximum number of rows kept in memory at once; when using this feature, set the maximum number of records low enough to avoid memory issues.

Improving the transform time

1. Enable the Concurrent Import option
This option breaks the original large data set into multiple import sets and processes them in parallel. By default, two import set transformer jobs are added per node, so an instance with 10 nodes can process 20 import sets in parallel. Enable Concurrent Import with the 'Custom size' option and set the partition size so that transforming one import set takes around 20–30 minutes.

2. Optimize the performance of the transform maps
Go to the transform map form view and review its related lists. Make sure your form view contains the relevant related lists; these can hide slow scripts, unindexed reference fields, and so on. Review them all and fix any performance issues.

3. After the above steps, run a full load and calculate the number of import set row records processed per second by dividing the number of import set rows by the runtime in seconds. If the value is less than 10, check for other performance-critical factors such as Flow triggers and data policies on the target table.
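As a rough illustration, the throughput check described above and the 'Custom size' partition calculation can be sketched in plain JavaScript. The row counts and runtimes below are hypothetical examples, not values from a real instance:

```javascript
// Throughput: import set rows processed per second of runtime.
function rowsPerSecond(importSetRows, runtimeSeconds) {
  return importSetRows / runtimeSeconds;
}

// Partition size for Concurrent Import with 'Custom size':
// pick a size so that one import set takes roughly 20-30 minutes.
function partitionSize(measuredRowsPerSecond, targetMinutes) {
  return Math.floor(measuredRowsPerSecond * targetMinutes * 60);
}

// Hypothetical full load: 360,000 rows transformed in 2 hours.
var throughput = rowsPerSecond(360000, 2 * 60 * 60); // 50 rows/s, above the 10 rows/s floor
// Targeting ~25-minute import sets gives a partition size of 75,000 rows.
var size = partitionSize(throughput, 25);
```

If the measured throughput were below 10 rows per second, the next step would be to review Flow triggers, data policies, and transform map scripts rather than tuning partition sizes.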
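The batch import trade-off described under 'Improving the load time' (one batched statement per chunk, with the batch size capping how many rows sit in memory) can be sketched generically. chunkRows is an illustrative helper, not a ServiceNow API:

```javascript
// Split fetched rows into fixed-size chunks so that each chunk can be
// written to the staging (import set) table as one batched statement,
// keeping at most batchSize rows in memory per write.
function chunkRows(rows, batchSize) {
  var chunks = [];
  for (var i = 0; i < rows.length; i += batchSize) {
    chunks.push(rows.slice(i, i + batchSize));
  }
  return chunks;
}

// 1,000 fetched rows with a batch size of 300 -> chunks of 300, 300, 300, 100.
var batches = chunkRows(new Array(1000).fill({}), 300);
```

A larger batch size means fewer round trips but more rows held in memory at once, which is why the batch size must be kept low enough to avoid memory issues.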