Issue

Nava throws the following error message: "Sorry, there was a problem on my side trying to complete this request. Try asking again later."

This is a generic error message that needs to be investigated further in the sys_generative_ai_log table. In this case, the error returned was: "Error 300000: Check guardrail response for flagged moderation checks."

Symptoms

When the sys_generative_ai_log record returns the error "Error 300000: Check guardrail response for flagged moderation checks.", the related sys_generative_ai_metric records need to be investigated.

To find the correct records, get the sys_id of the sys_generative_ai_log record and open the sys_generative_ai_metric list with the following filter:

/sys_generative_ai_metric_list.do?sysparm_first_row=1&sysparm_query=sys_generative_ai_log%3D{SYS_ID}&sysparm_view=
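The same records can also be pulled from a background script instead of the list filter. This is a minimal sketch, assuming background script access on the instance; replace the placeholder with the sys_id of your sys_generative_ai_log record.

// List the sys_generative_ai_metric records linked to a given sys_generative_ai_log record
// (equivalent to the list filter above).
var logSysId = 'REPLACE_WITH_SYS_GENERATIVE_AI_LOG_SYS_ID';

var metric = new GlideRecord('sys_generative_ai_metric');
metric.addQuery('sys_generative_ai_log', logSysId);
metric.query();
gs.info('Found ' + metric.getRowCount() + ' metric record(s) for log ' + logSysId);
while (metric.next()) {
    gs.info('Metric record: ' + metric.getUniqueValue() + ' - ' + metric.getDisplayValue());
}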
Facts

The default moderation threshold is 0.5. In the Raw Response of the sys_generative_ai_metric record, the "Jailbreak Prompts/Prompt Injection" category score is above that threshold, as in the example below where the value is 0.51:

{
  "security": {
    "flagged": true
  },
  "safety": {
    "flagged": false
  },
  "results": [
    {
      "category_scores": {
        "Suicide & Self-Harm": 0.0016484829418512883,
        "Privacy": 0.06465348835622392,
        "Jailbreak Prompts/Prompt Injection": 0.5120467368380145,
        "Indiscriminate Weapons": 0.0006667023092435893,
        "Sex-Related Crimes": 0.0007793660930208434,
        "Child Sexual Exploitation": 0.001064962672055945,
        "Hate": 0.008577485413711984,
        "Intellectual Property": 0.09534946489910949,
        "Violent Crimes": 0.00225184727950302,
        "Non-Violent Crimes": 0.0029810327298548972,
        "Sexual Content": 0.011331753531361455,
        "Specialized Advice": 0.0021827164453451808
      },
      "flagged": true,
      "model": "virtueguard-text-lite-001",
      "categories": {
        "Suicide & Self-Harm": false,
        "Privacy": false,
        "Jailbreak Prompts/Prompt Injection": true,
        "Indiscriminate Weapons": false,
        "Sex-Related Crimes": false,
        "Child Sexual Exploitation": false,
        "Hate": false,
        "Intellectual Property": false,
        "Violent Crimes": false,
        "Non-Violent Crimes": false,
        "Sexual Content": false,
        "Specialized Advice": false
      }
    },
    {
      "security": {
        "result": true
      },
      "safety": {
        "result": false
      },
      "model": "llm_generic_large_v2"
    }
  ]
}
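To confirm which category pushed the request over the moderation threshold, the Raw Response payload can be parsed and each category score compared against the threshold. This is a minimal sketch for a background script, assuming the JSON has been copied from the Raw Response of the sys_generative_ai_metric record; the payload in the script is trimmed to the relevant fields for illustration.

// Print every Guardian category whose score meets or exceeds the moderation threshold.
// Paste the full Raw Response JSON into rawResponse; the string below is a trimmed example
// based on the payload shown above.
var THRESHOLD = 0.5; // default moderation threshold
var rawResponse = '{"results":[{"model":"virtueguard-text-lite-001","category_scores":{"Jailbreak Prompts/Prompt Injection":0.5120467368380145,"Privacy":0.06465348835622392}}]}';
var parsed = JSON.parse(rawResponse);

parsed.results.forEach(function (result) {
    if (!result.category_scores)
        return; // skip summary entries without per-category scores
    for (var category in result.category_scores) {
        var score = result.category_scores[category];
        if (score >= THRESHOLD)
            gs.info('Flagged by ' + result.model + ': ' + category + ' = ' + score);
    }
});

In the example payload above, only "Jailbreak Prompts/Prompt Injection" (0.512) crosses the 0.5 threshold, which is what triggered the Error 300000 guardrail block.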
Release

Any

Cause

1. False positives in Guardian models, particularly for the "Jailbreak Prompts/Prompt Injection" category, because agentic workflow metadata resembles jailbreak signatures in the training data.
2. The low default Guardian confidence threshold (0.5) triggers unnecessary blocking of legitimate requests.
3. Structural patterns in agentic prompts (e.g., ##Invalid operation blocks, execution_mode: autopilot) exacerbate the false positive rate.

Resolution

1. Install the Generative AI Controller v13.0.3 plugin, which contains fixes for the false positives in Guardian models (PRB1987537 and STRY62434187). A quick way to check the currently installed version is sketched below.
2. If installation is not possible, raise the Guardian confidence threshold from the default 0.5 to 0.9 to reduce false positives, as documented in KB2765038.
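Before changing the threshold, it can help to confirm whether the fixed Generative AI Controller version is already installed. This is a minimal sketch, assuming the application is registered in sys_store_app under the display name "Generative AI Controller"; the exact name on your instance may differ, so treat the filter value as an assumption.

// Check the installed version of the Generative AI Controller application.
// Assumption: the app appears in sys_store_app with this display name.
var app = new GlideRecord('sys_store_app');
app.addQuery('name', 'Generative AI Controller');
app.query();
if (app.next()) {
    gs.info('Generative AI Controller installed version: ' + app.getValue('version'));
} else {
    gs.info('Generative AI Controller not found in sys_store_app; check the Application Manager instead.');
}

If the installed version is below 13.0.3, follow step 1 above; if upgrading is not possible, adjust the threshold as described in KB2765038.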