Test definitions are available
- in Gazelle Master Model under Tests Definition --> Tests Definition (read/write)
- in Test Management under Tests Definition --> Tests Definition (read only)
Test definitions, together with the technical framework, are the basis of Gazelle and one of its key features for preparing and participating in a connectathon. The tests define the scenarios that the different actors implemented by a system must pass to be validated by the connectathon managers. This section of the documentation is mainly addressed to test editors; it explains the different sections of a test and how they have to be filled in when creating new tests.
Before writing a test, three main concepts have to be introduced; they determine who will see the test and when.
- Test type: indicates whether the test has to be performed during the connectathon or during the pre-connectathon period. A third type was added to gather the tests dedicated to HITSP profiles.
- Test status: indicates the level of advancement of the test. Only "ready" tests are displayed in the participants' list of tests during the connectathon or pre-connectathon period. "To be completed" tests are currently under development and not yet ready to expose to participants. "Deprecated" tests are no longer used, for example because they have been replaced by more relevant ones, as happened with the storage/substitute tests. Finally, the "convert later" status was created when the test definitions were imported from the Kudu system; it means that the test is not needed for now and is therefore not a priority to work on.
- Test peer type: indicates how many test partners the test involves. "No peer" tests require no transaction with a vendor test partner. "Peer to peer" tests cover a subset of a profile, typically with 2, or sometimes 3, partners. "Group" tests cover a workflow within an integration profile and are run by a full set of actors within the profile; group tests are typically supervised directly by a connectathon monitor.
Each test definition is built of four parts, which are defined below. Each part is contained on a separate tab within the test.
1. Test Summary
It gives general information about the test:
- The keyword, name and short description are used to identify the test. By convention, the keyword starts with the profile acronym.
- The test type is usually connectathon or pre-connectathon, but other types can be defined in the database.
- The test status indicates the readiness of the test. Only tests with the "ready" status are visible to the testers.
- The peer type indicates whether the test is of type "No peer", "Peer to Peer" or "Group Test".
- The permanent link to the test (computed by Gazelle) is printed in this part.
- The version of the test indicates the most recent testing event for which the test was created or modified.
- The "Is orchestrable" attribute indicates whether the test will be run against a simulator (true) or against another system under test (false). When run against a simulator, the test requires the BPEL service and the Gazelle web services to be orchestrated. These services enable the simulator to communicate with Gazelle in standalone mode, without human intervention.
- The "Is validated" attribute indicates whether the test is validated or not.
2. Test Description
This section describes the test scenario very precisely and tells the vendor how to perform the test, which tools are required and so on. It also gives the monitor keys about what to check, which messages to validate and so on. This part of the test can be translated into different languages. By convention, there are three sections in the test description:
- Special Instructions: contains information for the vendor about "special" considerations for this test, for example "ABC test must be run before this one" or "XYZ tool is used in this test".
- Description: a short overview of the scope of the test.
- Evaluation: the specific instructions to the connectathon monitor describing what evidence must be shown by the vendor in order to "pass" this test.
3. Test Roles
This is the most important part of the test; it is also the most complicated and confusing part of the work.
Assigning one or more Roles to a test determines which Actor/Integration Profile/Profile Option (AIPO) are involved in the test. Roles must be well-chosen for two reasons: (1) if a Role is assigned to a test, the test will appear on the list of tests to do for any test system which supports the AIPO in the Role, and (2) only the transactions supported by the chosen Roles will be available when you define individual Test Steps on the next tab.
Prior to starting a test definition, you should ensure that the Test Roles you need for the test exist; if not, they can be created under Tests Definition --> Role in test management.
A test role (or role in test) is defined as a list of Actor/Integration Profile/Profile Option tuples; for each of these AIPOs we must specify whether the tuple is tested or not. The primary reason to include a Test Participant (i.e. an AIPO) in a Role with "Tested?" unchecked is that you want the transactions supported by that participant to be usable by the other participants in the Role, without the test showing up as required for the participant that is not tested. This primarily occurs when one actor is "grouped" with another actor.
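The relationship between roles, AIPOs and the "Tested?" flag can be sketched in code. This is a minimal illustration, not Gazelle's actual data model; all class and method names here are made up for the example.

```python
from dataclasses import dataclass

# Hypothetical model of a test role: a named list of AIPO tuples,
# each flagged as tested or not. Names are illustrative only.

@dataclass(frozen=True)
class AIPO:
    actor: str
    profile: str
    option: str = "NONE"

@dataclass
class Participant:
    aipo: AIPO
    tested: bool  # unchecked => transactions usable, test not required

@dataclass
class TestRole:
    name: str
    participants: list

    def required_for(self, supported_aipos):
        # The test appears on a system's to-do list only if the system
        # supports a *tested* AIPO of this role.
        return any(p.tested and p.aipo in supported_aipos
                   for p in self.participants)
```

With this sketch, a role like PEC_PAM_WITH_PDC would list the Patient Encounter Consumer as tested and the Patient Demographics Consumer as not tested: the PDC's transactions are available within the test, but the test is only required for systems supporting the PEC.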
The whole test role can be set as "played by a tool", for example the Order Manager (formerly RISMall), the NIST registry, a simulator and so on.
A convention has been put in place for the naming of test roles:
If several actors from a profile, or several profiles, are used to define the test role, only the main Actor/Integration Profile couple is used to name the role.
- _ANY_OPTIONS means that any system implementing one of the options defined for the profile must perform the tests involving this role.
- _WITH_SN means that the transactions in which the role takes part must be run using TLS; consequently the involved actors must implement the Secure Node actor from the ATNA profile. Note that, in that case, the Secure Node actor is set "not tested", so that failing this test does not fail the Secure Node actor.
- _WITH_ACTOR_KEYWORD means that the system must support a second actor, which is not tested, in order to perform some initialization steps. For example, PEC_PAM_WITH_PDC gathers the Patient Encounter Consumer actor and the Patient Demographics Consumer actor, both from the Patient Administration Management profile; this is required because we need to populate the database of the PEC with data received through the PDC. Keep in mind that such associations must be meaningful: the gathered actors must be linked by an IHE dependency.
- Finally, _HELPER means that the role is not tested but is required to ensure the coherence of the test.
Here are some examples to let you better understand the naming convention:
DOC_CONSUMER_XDS.b_ANY_OPTIONS gathers all the Document Consumers of the XDS.b profile, no matter which options they support.
IM_SWF_HELPER gathers all the Image Managers from the Scheduled Workflow profile, but those actors are not tested.
If the test participant is a tool or a simulator, we use the system name as the test role name: <SIMULATOR or UTILITY_NAME>, for instance ORDER_MANAGER, CENTRAL_ARCHIVE, NIST_REGISTRY and so on.
Once you have chosen the roles involved in your test, you will be asked, for each of them, to give some more information such as:
- # to realize: the number of times the system must successfully realize this test for the tested actor to be validated. Typically, this is "3" for peer-to-peer tests, and "1" for no-peer and group tests.
- Card Min: (cardinality) how many (at least) systems with this role must be involved in the test
- Card Max: (cardinality) how many (at most) systems with this role can be involved in the test
- Optionality: whether or not this role is a mandatory participant in the test. "Required" means a vendor cannot start the test until a system is identified for this role. "Optional" means that the test may be run whether or not a system is identified for this role in the test.
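The interaction of cardinality and optionality can be illustrated with a small check. This is not Gazelle code; the function and the dictionary layout are assumptions made for the sketch, reflecting only the rules stated above.

```python
# Illustrative check: can a test instance start, given how many systems
# have been assigned to each role? (Hypothetical data layout.)

def can_start(roles, assigned):
    """roles: {name: {"card_min": int, "card_max": int, "required": bool}}
       assigned: {name: number of systems picked for that role}"""
    for name, cfg in roles.items():
        n = assigned.get(name, 0)
        if cfg["required"] and n < cfg["card_min"]:
            return False  # a mandatory role is under-staffed
        if n > cfg["card_max"]:
            return False  # too many systems play this role
        if not cfg["required"] and n and n < cfg["card_min"]:
            return False  # optional role chosen, but partially staffed
    return True
```

For example, a test whose only required role needs at least one system cannot start with zero systems assigned, while an optional helper role can simply be left empty.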
4. Test Steps
To help vendors perform the test, we cut the test into small entities called test steps. In a newly defined test, when you first arrive on this page, you will find a sequence diagram containing only the different roles you have previously defined. As you add test steps, the sequence diagram is automatically updated according to the steps you have defined. The red arrows stand for secured transactions (TLS set to true).
Test steps are ordered based on the step index. In most cases, vendors will have to respect the given order, especially if the test is run against a simulator.
Each step is described as follows:
- Step index: index of the step
- Initiator role: test role in charge of initiating the transaction (only roles with actors which are initiators for at least one transaction can be used as initiator role)
- Responder role: test role acting as the receiver for this step (only roles with actors which are responders for at least one transaction can be used as responder role)
- Transaction: the transaction to perform by the two actors
- Secured: indicates whether the transaction must be performed over TLS or not
- Message type: the message sent by the initiator
- Option: indicates whether the test step is required or not
- Description: detailed instructions on how to perform the step
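The step fields above map naturally onto the sequence diagram Gazelle draws. As a purely illustrative sketch (not Gazelle's renderer), here is how a step list could be sorted by index and rendered as text arrows, with TLS-secured steps highlighted:

```python
# Hypothetical text rendering of test steps, mirroring the sequence
# diagram: one arrow per step, red (here "[TLS]") when secured.

def render_steps(steps):
    lines = []
    for s in sorted(steps, key=lambda s: s["index"]):
        arrow = "==[TLS]==>" if s["secured"] else "---------->"
        lines.append(f'{s["index"]:>2}. {s["initiator"]} {arrow} '
                     f'{s["responder"]} : {s["transaction"]} ({s["message"]})')
    return "\n".join(lines)
```

Running it on two out-of-order steps shows that the display order follows the step index, not the insertion order.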
When editing a step, you can choose whether to check the Auto Response box. When it is checked, it indicates that the selected role has to perform a step alone (initialization, logging, ...); no transaction or message type has to be specified.
To avoid wasting time editing steps for a small change, the step index field, the secured checkbox, the option selection and the description field can be filled in from the main page of test steps. The change is recorded in the database each time the modified field loses focus.
If you have chosen to write an orchestrated test, which means that the system under test will communicate with a simulator, you may have to enter some additional information called "contextual information". In some cases, this information is needed by the simulator to build a message that matches the system configuration or data. It can be used, for instance, to specify a patient ID known by the system under test.
Two kinds of contextual information are defined:
- Input Contextual Information: The information provided to the simulator
- Output Contextual Information: The information sent back by the simulator and that can be used as input contextual information for the next steps
For each piece of contextual information, you are expected to provide the label of the field and the path (it can be an XPath or an HL7 path, if you need to feed a specific XML element or HL7v2 message segment). A default value can also be set.
If you have defined output contextual information for previous steps, you can use it as input contextual information for the next steps by importing it, as shown in the capture below. In this way, the simulator receives the output of a previous step as new information and is able to build the next messages.
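To make the two path styles concrete, here is a small sketch of how such a path could be resolved against a payload. This is an assumption-laden illustration, not the simulator's real implementation: the XPath case uses Python's limited `ElementTree` path support, and the HL7 case assumes a simple `SEG-n` notation such as `PID-3`.

```python
import xml.etree.ElementTree as ET

# Resolve a contextual-information path against an XML payload
# (ElementTree's XPath subset) or an HL7v2 message ("SEG-n" notation).
# Both functions are illustrative sketches.

def resolve_xml(xml_text, path, default=None):
    value = ET.fromstring(xml_text).findtext(path)
    return value if value is not None else default

def resolve_hl7(message, path, default=None):
    seg_name, _, index = path.partition("-")   # "PID-3" -> ("PID", "3")
    for segment in message.split("\r"):        # HL7v2 segments end with CR
        fields = segment.split("|")
        if fields[0] == seg_name:
            try:
                return fields[int(index)]      # field n of that segment
            except IndexError:
                break
    return default
```

Either way, the default value plays the same role as in the Gazelle form: it is returned when the path matches nothing.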
For more details about what simulators expect, read the developer manual of the simulator you want to involve in your test. A short example based on the XCA Initiating Gateway simulator is given below.
The XCA Initiating Gateway supports two transactions: ITI-38 to query the responding gateway about the documents of a specific patient, and ITI-39 to retrieve those documents. In a first step we may ask the responding gateway for the documents of patient 1234^^^&220.127.116.11.5.6&ISO; in a second step we ask the responding gateway to send the first retrieved document.
- Step 1: Input Contextual Information
- Step 1: Output Contextual Information
- Step 2: Input Contextual Information
In this way, no action on the simulator side is required from the vendor; he/she only has to set up his/her system under test and give the first input contextual information to the simulator through the Test Management user interface.
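The chaining mechanism of the XCA example can be sketched as a loop that threads contextual information from one step to the next. The step layout, key names and stubbed simulator calls below are invented for the illustration; only the chaining idea comes from the text.

```python
# Illustrative chaining of contextual information across steps:
# each step consumes imported inputs and publishes its outputs,
# which later steps may import in turn. Keys are made up.

def run_scenario(steps, initial_inputs):
    context = dict(initial_inputs)
    for step in steps:
        # imported inputs: prior outputs (or initial values) by name
        inputs = {k: context[k] for k in step["inputs"]}
        outputs = step["run"](inputs)  # stand-in for the simulator call
        context.update(outputs)        # outputs become available as inputs
    return context
```

In the XCA scenario, step 1 would take a patient ID as input and output a document reference; step 2 would import that reference to drive the retrieve.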