Automated Testing in MDW

Typical steps for function testing of MDW applications include:

  1. Start a process by sending a message whose content corresponds to a registered Event Handler, or by launching directly.
  2. If the process generates manual tasks, complete them so that process flow will proceed.
  3. If the process calls an external system through an adapter activity, let the system respond, or, when the system is unavailable or its interface is not a focus of the current testing, supply a simulated response through stubbing.
  4. If the process waits for an asynchronous external message, arrange for the external system to send the required message, or emulate this by invoking the relevant Event Handler with an appropriate message.
  5. Verify that the process instance has completed as expected and that its flow has followed the correct route through the appropriate activities.
  6. Verify that any requests/responses to external systems are correct.
  7. Verify that outcome data values (process variables) are populated as expected.

Using MDW Studio you can create repeatable test cases to automate these and other actions. Your test cases can be executed on demand in MDW Studio or from your build server as part of a continuous integration lifecycle using the Ant Automated Test task. MDW also supports load testing to exercise your workflow processes and capture performance metrics.
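
As a rough illustration of how these steps translate into a test script, here is a hedged sketch; the package, process, and variable names are hypothetical, and the start, wait, and verify commands are shown in context under Creating Test Cases below and covered fully in the Groovy Test Script Syntax guide.

// illustrative sketch only -- hypothetical package, process, and variable names
start process("com.example.orders/SubmitOrder") {
    variables = [orderId: "12345"]    // step 1: launch with input data
}
wait process                          // let the flow run to completion
verify process                        // steps 5-7: check the route taken and the resulting values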

Test Cases

A test case is a way to specify how to run an automated function or load test. Each test case is stored as an asset. A typical test case will make use of the following resources:

When a test case is executed, the MDW automated tester generates result and log files. By default these are written to a subdirectory under testResults with the same path as your test case asset. This test case result directory may contain the following files:

Creating Test Cases

To launch the Test Case Wizard, right-click on a workflow package in Process Explorer and select New > Test Case from the menu.

When you click Finish, a blank Groovy test script will be generated and opened in an editor pane. If you've got the MDW framework test processes present in your workspace, you can use the following test script verbatim. If you don't have the MDW test processes, you can create a simple process and reference that in the "start" and "verify" commands (a minimal sketch of this appears after the example below).

// start-stop process test
start process("com.centurylink.mdw.tests/StartStop")
wait process
verify process

This sequence starts the StartStop process, waits for it to complete, and verifies the resulting instance against the expected results. For a full discussion of the available testing commands, refer to the Groovy Test Script Syntax guide.
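
A minimal sketch of the same shape for a simple process of your own, assuming a hypothetical package and process name:

// hypothetical asset path -- substitute your own package and process
start process("com.example.myapp/MySimpleProcess")
wait process
verify process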

Note: You can import the MDW framework test cases (or any other shared cases) using the MDW Asset Discovery mechanism. The framework test cases provide examples of over one hundred functioning tests that you can refer to when creating your own. The framework test case assets are in workflow packages com.centurylink.mdw.tests and com.centurylink.mdw.tests.cases.

Running Tests

If your server is running you can execute the test case now by right-clicking on it and selecting "Run..." from the menu. Make sure to enable "Create/Replace Results" or else the case will fail with an error telling you that no expected result file exists for the process being verified.

As your test case executes, the Test Exec view in MDW Studio shows a green bar to indicate progress and displays the test output in its console pane on the right side. The test should succeed, since the expected results YAML asset is generated from the actual outcome. On completion, the results asset start-stop.yaml should appear in your workflow package in Process Explorer.

Note: Test cases run on the client in MDW Studio and interact with the server through built-in REST services.

There are three main ways to launch automated tests in MDW Studio:

The automated test launch dialog supports the options summarized below. These are saved as an IntelliJ/Eclipse run configuration: a named setup that remembers the settings as well as the list of test cases to run. The launch configuration dialog contains separate tabs for Function Testing and Load Testing. For Load Testing, a Count column appears next to each selected case to specify the number of executions for that test.

Test Results and Output

The green bar in Test Exec view will be familiar to developers who've used the JUnit runner in IntelliJ/Eclipse. The Test Exec tree pane shows which cases are scheduled to run, along with their statuses. While a test is running, its tree icon displays an arrow symbol indicating that it is in progress; when it's completed, the icon is updated to indicate success or failure. The same status indication appears on the test case in Process Explorer view. The difference is that Process Explorer remembers the history of all test cases, whereas Test Exec view shows only the current run(s).

While a test is executing or when it's completed, you can select it in Test Exec tree view to see its output. Once the test has produced results, the icon for its result node in Test Exec view changes to indicate that there are now actual results corresponding to the expected results file.

Now that your start-stop test is complete, you can right-click on the start-stop.yaml results in Test Exec and select Compare Results; the IntelliJ/Eclipse Text Compare editor will show the differences, with the expected results on the left and the actual results on the right. Using the merge-left center icon, or simply by cutting and pasting, you can copy the actual results into the expected results file and hit Ctrl+S to save. This is another way to overwrite an existing results asset once you've had a good test case run.

For a detailed explanation of the test results file format, refer to the MDW Test Results Format document.

Another option available by right-clicking on the StartStopProcess_I1 result is "Open Process Instance", which is especially handy for troubleshooting when you need to look into the reason why a test case failed.

HTML Test Results

Process Explorer shows the status of all the tests that have been executed for a project. This same information can be displayed as an HTML page by right-clicking on the Automated Tests folder and selecting Format Results > Function Tests (or Load Tests, if available).

With VCS Assets, the XSL stylesheet for transforming raw test results into HTML in MDW Studio can be customized by creating any asset package ending in ".testing" and within that creating an XSL asset named function-test-results.xsl (for function testing) and/or load-test-results.xsl (for load testing). The default stylesheets with these names are available in the com.centurylink.mdw.testing package as a starting point.

The HTML results summary can also be generated during test case execution using Ant. In a continuous integration environment it's a good idea to generate the test case HTML summary into a location that's accessible to a web server so that a link to the current test results can always be accessed from a browser.

Load Testing

Load tests are similar to automated function tests, and in MDW Studio they're launched using the same dialog. On the Load Testing tab there's a Count column in the Test Case table where you'll enter the desired number of executions for each selected test.

The load tester can run most test cases defined for function testing, but any verify commands are ignored. The load tester does not generate per-test result files or validate outcomes; instead it focuses on overall throughput. Test cases used for load testing should therefore already be known to execute correctly. Load test performance results are generated in the main testResults directory. An HTML summary can be viewed in Process Explorer by right-clicking on the Automated Tests folder and selecting Format Results > Load Tests.

Load Test Placeholders

You can add a test resource file called placeHolderMap.csv to supply different values for separate runs. The load tester uses the rows in the file sequentially; if the number of runs is larger than the number of rows, it wraps back to the first row once the last row has been used. In your process variables or message content, the placeholder syntax #{placeholderName} is substituted with the corresponding column value from the CSV file. A special implicit value, #{RunNumber}, is assigned a 1-based index corresponding to the sequential run number within the given case. For example, consider the following test script:

// start process with CSV placeholders
start process("com.centurylink.mdw.tests/MDWLoadTestRegular") {
    variables = [Run: "#{RunNumber}", Color: "#{Color}", Month: "#{Month}" ]
}

Values for RunNumber, Color and Month would be automatically populated from this CSV content:

Color,Month
Red,January
Green,February
Blue,March
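
To make the wrap-around behavior concrete, here is a small standalone Groovy sketch (illustrative only, not the load tester's actual implementation) showing how sequential runs would cycle through the three rows above, so run 4 falls back to Red/January:

// illustrative only -- how sequential CSV rows wrap around across runs
def rows = [[Color: 'Red', Month: 'January'],
            [Color: 'Green', Month: 'February'],
            [Color: 'Blue', Month: 'March']]
(1..5).each { runNumber ->
    def row = rows[(runNumber - 1) % rows.size()]
    println "Run ${runNumber}: Color=${row.Color}, Month=${row.Month}"
}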

Running Tests Using Ant

To provide test coverage as part of your continuous integration procedure, you can use the MDW automated test Ant task in a build script. Here's an example automated test target that runs regular Groovy tests:

<target name="runAutomatedTests">
  <echo message="Running MDW Automated Tests" />                    
  <taskdef name="mdwtests" onerror="report"
    classname="com.centurylink.mdw.designer.testing.AutoTestAntTask" 
    classpathref="maven.test.classpath" />

  <taskdef name="testReport" onerror="ignore" 
    classname="com.centurylink.mdw.ant.taskdef.AutoTestReport" 
    classpathref="maven.test.classpath" />  
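  <!-- run the non-Gherkin test cases (excludes="*-gherkin") against the local MDW server -->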
              
  <mdwtests
    suiteName="mdwdemo"
    excludes="*-gherkin"
    serverUrl="http://localhost:8080/mdw"
    workflowDir="workflow/assets"
    testCasesDir="testCases"
    testResultsDir="testResults"
    testResultsSummaryFile="testResults/TestSuiteResults.xml"
    threadCount="5"
    intervalSecs="2"
    sslTrustStore="src/main/resources/CenturyLinkQCA.jks"
    user="mdwapp"
    password="mdwapp"
    stubbing="false"
    verbose="false" />
  <!-- mdwtests
    suiteName="mdwdemo-gherkin"
    includes="*-gherkin"
    serverUrl="http://localhost:8080/mdw"
    workflowDir="workflow/assets"
    testCasesDir="testCases"
    testResultsDir="testResults"
    testResultsSummaryFile="testResults/TestSuiteResults.xml"
    threadCount="5"
    intervalSecs="2"
    sslTrustStore="src/main/resources/CenturyLinkQCA.jks"
    user="mdwapp"
    password="mdwapp"
    stubbing="false"
    verbose="false" /-->  
  <testReport todir="testResults"
      testOutputFile="testResults/TestSuiteResults.xml" 
      xslFile="./testCases/test-results.xsl"
      emailRecipients="manoj.agrawal@centurylink.com">
      <report todir="testResults" format="noframes" />      
  </testReport>
  <copy file="testResults/junit-noframes.html" 
    tofile="testResults/mdwdemoTestResults.html" />
</target>  

A good way to get a pom.xml sample to start with is to use the Test Case import feature in IntelliJ/Eclipse and select a Gherkin test case from the standard out-of-the-box tests provided by the MDW team.

The test results XML can be transformed into friendlier HTML and e-mailed to a distribution using the test report task:

  <taskdef name="testReport" onerror="ignore" 
    classname="com.centurylink.mdw.ant.taskdef.AutoTestReport" 
    classpathref="tests.classpath" />  

  <target name="reportTestResults">
    <testReport todir="testResults"
      testOutputFile="testResults/TestSuiteResults.xml" 
      xslFile="./testCases/test-results.xsl"
      emailRecipients="mdw.development@centurylink.com">
      <report todir="testResults" format="noframes" />      
    </testReport>  
    <copy file="testResults/junit-noframes.html" 
      tofile="${publish.dir}/NightlyTestResults.html" />
  </target>

Web UI for Automated Tests

Here is an example of testing your process from the Admin UI:

  • Make sure your server is up and running.
  • Bring up the mdw hub: localhost:8080/mdw
  • Click the Admin tab and click Testing from the left navigation to bring up a list of your test cases.
  • Expand the package that contains your test case.
  • Put a check mark on the test case and click the Run button in the top right corner.
  • Once it has run successfully, you will see a green check mark on your test case. Clicking it will bring up three tabs: Test Case, Result and Log.