Auto-testing in Teneo allows you to perform quality assurance tasks during the development and maintenance of a Teneo solution.
Auto-test checks the example inputs associated with flow triggers and transitions. It verifies that positive examples fire the trigger or transition and that negative examples do not. For Auto-test to work, you must have added positive examples to the triggers and transitions you want to test. That is one reason it is good practice to add example inputs to all triggers and transitions.
Auto-test can test triggers, transitions, and URLs to make sure they work as intended. All of them will be included by default. Disabling one of these options will speed up the Auto-test process, as it will test fewer items.
You can run an Auto-test at three different levels: Flow level, Folder level, and Solution level.
There are two different ways of running an Auto-test at the Flow, Folder, or Solution level: Run Test and Run Test Using Flow Scope.
When selecting Run Test, the triggers or transitions in that scope (flow, folder, or solution) are tested in two ways:
Run Test Using Flow Scope only tests that the trigger's or transition's condition matches its positive examples and that its negative examples do not fire it. Other triggers and transitions are ignored in this test.
Auto-test does not test any of the following items:
You can exclude specific triggers and transitions from a test:
To perform an Auto-test on a single flow, open the flow you want to test. Then click the Flow button in the upper left corner and select Auto-test in the panel on the left (see image above).
After you have set up the flow as you want, it is good practice to run an Auto-test at flow level to check that all example inputs match the correct trigger and transition.
To test ‘chunks’ of your solution, you can run a test on a specific flow folder. Right-click the folder in the Solution Explorer and select Test. If the selected folder has sub-folders, they will be included in the test.
To test all the triggers, transitions, and URLs that have been set up in the solution, go to the Solution tab and click on the Auto-test tab. As with flow and folder level tests, you can choose what you want to test (triggers, transitions, URLs, or all of the above). By default, all of them are included in the test.
Solution testing is mostly used for regression testing after major updates or right before publishing the solution to quality assurance or production environments.
The test results panel shows the results of the tests you selected (trigger, transition, URL) and the level you ran the test on (solution, folder, or flow). If you selected the “Run Test” option (instead of “Run Test Using Flow Scope”), you will also see whether the tested positive inputs were stolen by higher-ordered triggers and, if so, by which triggers. Ordering refers to the order of triggers with similar or overlapping trigger conditions that may conflict with each other.
By clicking the ‘Get Report’ button you can view the test results in XLS format. You can also view older results by clicking the ‘History’ button, selecting the test result you want to view, and clicking 'Open'. You can also export the older test results.
In the results window, you will find the results of the selected Auto-test run. The most recent test is selected automatically. Here, you can see which flow (and which of its triggers) failed the test, and which folder it is in. You can also filter the test results on:
Besides filtering on items, you can also filter by text on flow name, example input, or message.
The action panel displays more information about the selected test result. For example, if an input was stolen or blocked by a higher-ordered trigger, the action panel will display what it did trigger and what it should have triggered. The action panel also provides suggestions for how to resolve the selected test result. Each failed test and each test passed with a warning has its own suggestion on how to solve the problem. You can view the suggestion by clicking the ‘More Information’ button.
The action panel adapts the information it displays depending on what you have selected.
When a failed test (or one with a warning) is selected in the test result window, the action panel on the right displays further information. The most common reasons for failures are:
When the example input does not match the Class Match Requirement, the test result will say “The example was not matched”. This can happen when other classes contain training data that is too similar or the trigger uses a context restriction.
When an example is stolen by another, higher-ordered trigger, the test result shows a failure and mentions the trigger that fired.
In the image above, the input 'Do you have a store in London?' triggered the flow 'Safetynet', but the example belongs to the trigger 'User wants to know if we have a store in City'. To solve this, you would move the 'Safetynet' trigger to a lower order group.
If an example is not covered by a Condition Match Requirement, the test result will say “The example did not match this trigger”.
In the image above, the trigger ‘Partial understanding: coffee‘ had a positive example added to it. Since the condition is not designed to match 'doppio', we can open the 'Partial understanding: coffee' flow and expand the condition to include the positive example.
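As a purely hypothetical illustration (the actual condition is not shown here, and the listed coffee types are assumptions), expanding a TLML word condition that lists alternatives with the OR operator `/` might look like this:

```
Before: (espresso / latte / cappuccino)
After:  (espresso / latte / cappuccino / doppio)
```

After widening the condition, re-running the Auto-test should show the positive example matching its intended trigger.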
Another way of finding similar problems, such as forgetting to update a positive example in a trigger, is to use the Suggestion panel, which can be found in the Optimization tab in the backstage of your solution.
One common problem in Auto-tests is that triggers with context restrictions fail the test.
The reason is that the context restrictions in question are based on one of two types of context. To showcase these, we use the following flow, created previously in our Longberry Barista guides, where we have added one trigger with a follow-up context and one trigger with a global variable context.
These triggers depend heavily on the user having taken a specific path for the match requirement to be met. They will work in Try Out when that path is followed, but fail in Auto-test. This can be fixed by excluding the trigger from Auto-test: deselect the 'Include in Auto-tests' button, which prevents the testing examples from being tried on the trigger.
This will fix the problem and show the real results from the Auto-tests.
In some cases, the problem is caused by a user-created script that needs to hold a certain value for the trigger to fire. These Groovy scripts can be very long and have their own conditions that need to be met. If the script's conditions are not met, the trigger won't fire and the test will therefore fail.
For the following example, we have a flow called 'User wants to buy a coffee mug', which has its own Script Match Requirement. According to this Match Requirement, bItIsMorning has to be true for the flow to trigger. Looking further, we discover that bItIsMorning is set in the Global Scripts, inside 'Begin Dialog'. Looking closer at the Groovy code, we see that bItIsMorning is only set to true between 5 a.m. and 9 a.m. local time, as coffee mugs are only sold in the morning.
The time of testing is 11 a.m.; since that is after 9 a.m., bItIsMorning is set to false and the following result is shown in Auto-tests.
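The Global Script rule described above can be sketched as follows. This is an illustrative reconstruction in Java-style code, not the actual Groovy script from the solution; the helper name `isMorning` and the exact boundary handling of the 5 a.m. to 9 a.m. window are assumptions:

```java
import java.time.LocalTime;

public class BeginDialogSketch {
    // Mirrors the rule described for the 'Begin Dialog' Global Script:
    // bItIsMorning is true only from 5 a.m. up to (but not including) 9 a.m.
    static boolean isMorning(int hour) {
        return hour >= 5 && hour < 9;
    }

    public static void main(String[] args) {
        // The real script would read the current local time at dialog begin:
        boolean bItIsMorning = isMorning(LocalTime.now().getHour());

        System.out.println(isMorning(7));   // within the window
        System.out.println(isMorning(11));  // 11 a.m. is after the window, as in the failing test
    }
}
```

Because the hour is read from the system clock when the dialog begins, the same trigger can pass or fail Auto-test depending on when the test is run.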
Once we have confirmed that the trigger works as it should, we can go back and undo the change temporarily made to the code by changing the condition from 12 a.m. back to 9 a.m.
When a test passes with a warning, it means that although the example matched the trigger, it also matched a different syntax trigger in the same order group. It is recommended to create an order relation between any triggers that have a conflict.
Disabling a trigger stops it from firing, but it does not stop it from being tested by Auto-test. Therefore, it is important to pay attention to the 'Include in Auto-tests' button under the examples section. If it is left selected, the disabled trigger will still be tested with the testing examples and will always fail. This can affect the overall Auto-test results when you test your whole solution.
This is how the Auto-test results look while 'Include in Auto-tests' is still selected.
This is how the Auto-test results look when 'Include in Auto-tests' is unselected.