NLU Service Updates

Refer to this documentation so you are up to date with changes to the NLU Service.

Service update summary

The NLU Service helps the system understand natural language and drive intelligent actions. This service trains and predicts intents and entities for a given user utterance in your NLU model so it can understand human-expressed natural language, whether spoken or written. The service is updated every two months independently of your instance upgrade: minor updates are applied automatically, and you will use the new version when you retrain an NLU model. Major updates are aligned with family releases such as Quebec and Rome; when you upgrade your instance to the next release and create or train a model, it uses the latest version. Through these bimonthly updates, we are constantly improving the quality of NLU model training and predictions. While most of these updates don't impact your existing use of NLU and should only be improvements, there may be some changes you need to be aware of.

July 2023 NLU Service Update

- Introduced dialog acts to enable natural mid-conversation interactions in VA and improve conversation fluidity. Affirm, negate, and modify dialog acts are supported in English and enabled by default for all new VA topics.
- Migrated all languages to a new language model, boosting average intent prediction quality by 10% across all languages.
- Enabled customers to manage and edit irrelevant utterances for their models to improve irrelevance detection.
- Removed the 2-intent requirement to train and publish an NLU model, making end-to-end topic testing in VA easier.

September 2022 NLU Service Update

- Releasing a newer version of the Enterprise Language Model (ELMSE), which will be used for Virtual Agent, with improved AI quality. If you are currently using the older Enterprise Language Model through the Optimize feature, it is recommended that you retrain and re-optimize your model. This ensures that your model is trained on the latest improved ELM. Otherwise, you may experience unexpected prediction results.
- Improved first prediction latency by 25% and overall prediction latency by ~20% for VA-NLU predictions.
- Improved punctuation robustness for vocabulary source entities and hardware/software system entities, resulting in improved VA-NLU predictions, especially with ITSM deployments.
- Improved ability to detect long hardware/software names, resulting in improved NLU predictions for ITSM VA deployments.

July 2022 NLU Service Update

- Significantly improved the out-of-the-box VA setup topic model content (available with the Tokyo family) to improve intent recognition for intents like greetings and requests for a live agent.
- Improved the quality of suggested utterances in the Expert Feedback Loop by filtering PII tokens and noisy mid-conversation utterances as part of active learning.
- Entity and vocabulary source support for Portuguese, Brazilian Portuguese, and Canadian French.
- Removed the hardcoded model threshold override in Virtual Agent to improve prediction quality by up to 10% in multi-model scenarios.
- Performance improvements (prediction latency) for the first prediction and through TF-IDF based exact matching.

March 2022 NLU Service Update

- By default, the "ignore punctuation" property in Model Settings is automatically enabled for English, German, French, Japanese, and Spanish.
- In addition, this option is also available for Swedish and Portuguese; customers can select it so that predicted intents and confidence scores are less sensitive to punctuation variations in the utterance.
- Portuguese has been added as a newly supported language, with support for intents, vocabulary, and vocabulary sources (for intent prediction only) and fast training of models. We recommend that customers use Portuguese over Brazilian Portuguese due to significantly better quality and performance.
- A new language model and algorithm are now used for Swedish, achieving a 20% improvement in prediction accuracy and faster training.
- The number of intents supported per model has been scaled from 500 to 750 (model training switches from fast training to asynchronous training once the model crosses 4,500 utterances or 300 intents).

January 2022 NLU Service Update (Major Release – San Diego.0)

- The continual learning framework supports the Expert Feedback Loop feature in the NLU Workbench Advanced Features application. This helps NLU admins improve their model performance by providing correct intents for end-user utterances coming through Virtual Agent. Expert Feedback Loop is available from the San Diego release onwards, and you must install NLU Workbench – Advanced Features (version 4.0.5) to use it.
- Utterances for all languages have been made case insensitive during intent prediction. If you want to keep certain words or phrases case sensitive, you can create pattern vocabulary using regular expressions.
- Improved robustness of NLU models to punctuation: predicted intents and confidence scores are now less sensitive to minor punctuation variations in utterances. Starting from San Diego, you can use this feature by going to a model's Settings and checking the "ignore punctuation" checkbox. For example, take two very similar utterances that differ only in punctuation: if "ignore punctuation" is unchecked, the utterances can result in different predicted intents; when it is checked, they result in the same predicted intent and confidence scores.
- Scaled the number of intents and utterances supported per model (500 intents and 20,000 utterances) by switching to asynchronous training when the model exceeds 300 intents or 4,500 utterances.

November 2021 NLU Service update

- The NLU Optimize feature is extended to the French, German, and Spanish languages to improve intent prediction quality in these languages.
- Intent prediction is faster for scenarios where the utterance typed by the user exactly matches a training utterance in the model. This ensures that latency doesn't increase linearly with model size.
- Improvements to intent prediction quality for the Brazilian Portuguese and Swedish languages.
- 60% improvement in entity detection speed for certain scenarios.
- NER prediction quality improvements so that more software and hardware names can be detected as entities.

September NLU Service update

- An earlier limitation of 200 characters and 25 words per utterance during prediction has been relaxed. While this does not address scenarios where the user has typed very long text with multiple intents, it removes the hard limitation. For example, the following utterances exceed the old limits:
  Test utterance: zoom video is not working as expected and zoom video of other meeting members is not visible and I try to start zoom video call but it throws an error and even if it works i cannot see anyone on my video (203 characters, 42 words)
  Test utterance: I tried to connect my printer to my laptop but I cannot connect to my printer in Amsterdam office as it will not work and it might also be out of ink (150 characters, 32 words)
- In scenarios where a greeting is followed by an utterance, the intent is now correctly predicted, ignoring the greeting. This was especially prevalent in languages like French and Spanish.
- Prediction response time has been improved when the model has a large number of entities.
- The quality of predictions of Software/Hardware system entities has been improved with this update.
- We have addressed some issues in the Japanese language with the use of entities and vocabulary sources to improve intent prediction quality.

July NLU Service update (Major Release – Rome.0)

- Optimize functionality – utilizes a richer enterprise language model and irrelevance detection to reduce incorrect predictions (available with the August store release of NLU Workbench – Advanced Features and applicable for the Rome release).
- New system entities for Software and Hardware (available with the Rome family release) – automatically detect any software and hardware names in utterances without requiring annotation of these entities in utterances.
- Improvement in prediction latency for larger models (applicable if you are on either Quebec or Rome).
- The following limits apply to your NLU model:
  - A model needs to have at least 2 intents with 5 utterances each.
  - The recommended number of utterances per intent is 15, and there is a limit of 200 utterances per intent.
  - We currently support up to 300 intents or 4,500 utterances (whichever comes first) per model.

May NLU Service update

No major change in behavior. Some product defects have been fixed.

March NLU Service update

The NLU Service was updated around March 18, 2021 (the exact date and time depend on your data center location). This update was minor and mostly included defect fixes. Customers who are already using the Quebec backend service will automatically get this update (see below for how to check which version you are currently using; the March update continues to use 3.1.2-HYB).

Benefits of the March update to the NLU Service

There are a couple of key benefits from the March update:
- If you try an utterance that exactly matches an utterance found in the model, the prediction confidence will be closer to 100%.
- With system-defined entities, entity values are normalized when they are extracted, i.e. values are returned in a normalized/standard form irrespective of how they were expressed in the utterance, making them easy to consume in a VA topic.

January NLU Service update (Major Release – Quebec.0/3.1.2.HYB)

The NLU Service was updated in January 2021. We refer to this as the initial Quebec version of the NLU Service; it is a major update. Note that the ServiceNow family release for Quebec is in March 2021, which includes updates to other ServiceNow capabilities and applications and requires you to upgrade your instance. The NLU Service update is independent of your instance upgrade to Quebec and refers to the NLU Service used for NLU training and predictions.
If you are on a Paris instance, there will be no change in behavior for your existing NLU models (even if you retrain them) after this update, as we will continue to use the prior version of the NLU Service. Customers creating new models after this date on a Paris instance will benefit from the upgraded version, which has significant quality improvements for NLU predictions. Note that the updated version automatically becomes your default version of the NLU Service (for both existing and new models) if you have recently upgraded your instance to the Quebec family release. If you wish to change the default NLU Service version for any reason, you can use a system property as described below.

Benefits of the January update to the NLU Service

1. Utterances are now case insensitive with the updated version of the NLU Service. When training an intent, you don't need to provide the same utterance in a different case. If specific words need to be case sensitive, use regex-based vocabulary so that the case-specific meaning is used to understand the intent.
2. Precision improvements:
   - There's an improvement in identifying gibberish and skipping predictions rather than mapping them to one of the existing intents.
   - Better score range for intents: confidence scores are spread so that there is a clear difference between the correct intent and incorrect intents. Using a lower model threshold is recommended for better results. For example, for the OOB ITSM model, the new model threshold is 65 as compared to 82 (the update will be available for the OOB model in the March 2021 store release). Retest your models to set the threshold appropriately. (We will be releasing a feature for NLU Workbench in a store release soon that will recommend the right model threshold to help with this.)
   - More accurate intent predictions: with the January upgrade, NLU predictions are more accurate. Seeing a menu in VA (Virtual Agent) because multiple intents are returned for an utterance will happen less often, as you are likely to be taken to the right topic more often.

- There's a higher likelihood that the NLU Service will skip predicting an intent when a user enters an irrelevant or gibberish utterance rather than predicting a wrong intent.
- With the Quebec NLU Service, the number of times multiple choices are shown is reduced, as the system is more confident about predicting the right one. Multiple predictions are returned only if an utterance is ambiguous.
- With the Quebec NLU Service, utterances can be written in multiple cases and the prediction will be consistent.
- There are improvements for identifying gibberish and skipping predictions. With the Quebec NLU Service, NLU can detect gibberish in most scenarios and skip the prediction. Note that in some cases, where acronyms are used in your model utterances or the model has few or unbalanced intents (i.e. some intents have a significantly larger number of utterances), gibberish might still predict an intent.

Checking the NLU Service version and using a system property to control the NLU Service version

If you want to use the prior version of the NLU Service for new models created in Paris, you need to add a new entry to sys_properties.list in the global scope. Follow the steps below to implement this.

1. When training the model, open the debug log. There you'll see an entry such as the following, which contains a JSON:
   14:17:57.438 [DEBUG] NLU Model JSON: {"name":"ml_x_snc_global_b6994aab88777687101","language":"en","confidenceThreshold":"0.6","modelPurpose":"search","schemaVersion":"NY-1", ...

   Near the beginning of the text, you can see the "name" key, which contains the model name. The ellipsis at the end represents the remainder of the text, which isn't shown here because of its length; it isn't needed to determine the model name, so it's excluded for brevity's sake.

2. Add a system property in your instance for the specific model by using the model name:

(Optional) If you want to use the newer version of the NLU Service for multiple existing models, you can do so by specifying multiple models for the same property:
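The exact property name and value are not shown in the text above, so the following background script is only a rough sketch of how such an entry could be created in sys_properties in the global scope. The property name and model names in it are placeholders, not the actual values used by the NLU Service; substitute the documented property name and the model name taken from the "name" key in the debug log.

   // Rough sketch only: run as a background script in the global scope.
   // The property name below is a placeholder, not the documented NLU Service
   // property; replace it and the model names with the values for your instance.
   var prop = new GlideRecord('sys_properties');
   prop.initialize();
   prop.setValue('name', 'REPLACE_WITH_DOCUMENTED_NLU_VERSION_PROPERTY'); // placeholder property name
   // For a single model, use that model's name from the debug log; for multiple
   // existing models, list them as a comma-separated value on the same property.
   prop.setValue('value', 'ml_x_snc_global_example_model_a,ml_x_snc_global_example_model_b'); // placeholder model names
   prop.setValue('description', 'Controls the NLU Service version used for the listed NLU models (illustrative sketch).');
   prop.insert();

You can achieve the same result without a script by opening sys_properties.list, creating a new record in the global scope, and entering the same name and value.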