[NLU Batch Testing] Test Sets should only contain utterances for intents that exist in the NLU Model(s) being tested; otherwise each such utterance receives an "Incorrectly skipped" outcome, lowering the Correct percentage score

Summary

When using NLU Batch Testing, you can run a test against one or more published NLU Models. Do not include published NLU Models that are no longer in use; test only against NLU Models that are mapped to your published, active Virtual Agent topics. Test Sets imported into NLU Batch Testing should therefore contain only utterances whose intended intents exist in the NLU Model(s) being tested. If the imported Test Set contains utterances whose intended intents do not exist in the NLU Model(s), each such utterance receives an "Incorrectly skipped" outcome. This lowers the Correct score and raises the Incorrect score, skewing the Test Result Correct/Incorrect percentages; a small worked illustration of this skew appears at the end of this article.

Instructions

Before importing, ensure that the Test Set of utterances and their intended intents contains only utterances for intents defined in the NLU Model(s) you will be testing against; a sketch of such a pre-import filter appears at the end of this article. The import process removes duplicate utterances from the Test Set. This can be reviewed via the Status link for the imported Test Set, which opens the Transform History with any import errors and logs. For example, in a case where 40% of the utterances had intended intents not defined in the NLU Model under test, removing them and rerunning the test produced a much higher Correct percentage score, which is the true score for the Test utterances when testing against that NLU Model.

Related Links

We will review this process to see whether it can be improved to prevent running NLU Batch Tests on utterances whose intended intents do not exist in the NLU Model(s) being tested.
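
The following is a minimal illustration of the skew described in the Summary, using hypothetical numbers: with 100 utterances, 40 of which map to intents not defined in the model, every one of those 40 is counted as "Incorrectly skipped" and drags down the Correct percentage.

```python
# Hypothetical numbers, for illustration only.
total = 100              # utterances in the imported Test Set
out_of_model = 40        # utterances whose intended intent is not in the model
correct_in_model = 54    # correctly predicted among the remaining 60

# Including the skipped utterances understates the model's real accuracy.
print(f"Correct % including skipped utterances: {correct_in_model / total:.0%}")                   # 54%
print(f"Correct % after filtering the Test Set: {correct_in_model / (total - out_of_model):.0%}")  # 90%
```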
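Below is a minimal sketch of a pre-import filter, assuming the Test Set is exported as a CSV with "utterance" and "intent" columns; the file names, column names, intent names, and the filter_test_set helper are all illustrative and not part of the documented import format.

```python
import csv

def filter_test_set(test_set_path, filtered_path, model_intents):
    """Keep only rows whose intended intent is defined in the model."""
    kept, dropped = 0, 0
    with open(test_set_path, newline="", encoding="utf-8") as src, \
         open(filtered_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # Drop any utterance whose intended intent the model cannot predict;
            # left in place, it would be counted as "Incorrectly skipped".
            if row["intent"] in model_intents:
                writer.writerow(row)
                kept += 1
            else:
                dropped += 1
    print(f"Kept {kept} utterances; dropped {dropped} with intents "
          f"not defined in the model.")

# Example usage: intents actually defined in the NLU Model to be tested
# (hypothetical names).
model_intents = {"Reset Password", "Unlock Account", "Order Laptop"}
filter_test_set("test_set.csv", "test_set_filtered.csv", model_intents)
```

Running the filter before import means every remaining utterance can actually be predicted by the model, so the resulting Correct/Incorrect percentages reflect real model performance rather than gaps in Test Set coverage.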