This topic explains how to define a custom test execution flow for cross-platform testing (e.g., for building tests using the cross compiler and then performing platform-specific operations for deployment, test execution, and result collection).

Understanding Test Execution Flow

When you run a Test Configuration set to execute tests, C/C++test performs a series of actions that usually lead to the unit testing results being loaded into the C/C++test UI. These actions are specified in a test flow definition, which is stored in XML format and saved as part of the Test Configuration. All built-in Test Configurations that perform test execution have corresponding preconfigured test flow definitions. 

In addition to providing preconfigured execution flows designed specifically for host-based testing, C/C++test also allows you to customize test flows to provide an easy way of working with non-standard environments. 

Customizing the Test Execution Flow

You can configure test execution flow parameters to suit your specific needs by using the editor available in the Execution> General> Execution Details section of the Test Configuration manager.

In most cases, there is no need to create a custom version of the execution flow because built-in Test Configurations allow easy editing of the most critical and commonly-modified flow properties. 

To define a custom execution flow for a user-defined Test Configuration:

  1. Choose Parasoft> Test Configurations to open the Test Configuration panel.
  2. Open the Execution> General tab.
  3. To modify existing properties, edit the parameters directly in the properties table.



  4. To adjust a test flow property that is not currently shown in the table (e.g. properties that you expect to modify frequently):
    1. Select the appropriate flow, then click Edit.
    2. Find that property definition in the test flow. It will look like:

      <SetProperty key="property_key"
      value="property_default_value" />
    3. Add two additional attributes — uiEditable="true" and displayName="user_friendly_name" — so that the property definition looks as follows

      <SetProperty key="property_key"
      value="property_default_value"
      uiEditable="true"
      displayName="This is a customizable property" />

      Because the uiEditable="true" attribute is added to the SetProperty flow step, this property becomes customizable in the properties table, where it is listed under the value of the displayName attribute. The value attribute provides the default value.

    4. Click OK to save the modified file. The XML document will be validated. If any problems are found during the validation, they will be reported.
  5. Click Apply, then OK to save the modified Test Configuration.

The customizations will then be saved with the Test Configuration.

Defining a Custom Test Execution Flow: Advanced

We recommend that you first try modifying the properties list (as described above); then, if additional flow customization is required, define a custom test flow.

Custom test execution flow definitions are stored in XML format and saved as part of the Test Configuration (which makes it possible to share them across the team). Typically, such definitions describe the sequence of steps necessary to perform unit testing in a non-standard environment. They can also be used to start any command line utility. In fact, a custom test flow definition can include C/C++test internal actions as well as external utilities started as processes in the operating system. The default test flow can be modified as described below.

To define a custom execution flow for a user-defined Test Configuration:

  1. Choose Parasoft> Test Configurations to open the Test Configuration panel.
  2. Expand the Built-in> Unit Testing folder.
  3. Right-click Build Test Executable, then choose Duplicate from the shortcut menu.




  4. Ensure that  User-defined> Build Test Executable is selected.
  5. Enter a new name for this Test Configuration (for example, something like Build <target> Executable).
  6. Open the Execution> Symbols tab and clear the Use stubs found in check box.

    If you build the test executable without disabling the Use stubs found in option, you will get the following error when building the test executable:

    Error preprocessing file "C:\Program Files\ParaSoft\C++test9.0\plugins\com.parasoft.xtest.libs.cpp.win32.x86_9.0.4.43\os\win32\x86\etc\safestubs\safe_stubs_Win32.c":

    Process exited with code: 1
  7. Edit the test flow:
    1. Open the Execution> General tab.
    2. Select Custom flow (license required) from the Test Execution flow combo box, then click the Edit button.
    3. Enter your modified test flow by choosing an existing test flow from the Available built-in test execution flow box, clicking Restore, then editing the XML for that test flow. See Customizing the Test Execution Flow Definition - Example for details on how to modify the flow definition.
  8. Click OK to save the modified file. The XML document will be validated. If any problems are found during the validation, they will be reported.
  9. Click Apply, then OK to save the modified Test Configuration.

Defining a Test Flow with a Socket Communication Channel

In embedded environments where TCP/IP sockets are available, it is common to use them to automate the test flow. Test execution using a socket communication channel typically involves:

  1. Building the test executable by running the appropriate Test Configuration.
  2. Deploying the test executable to the target device.
  3. Running the C/C++test SocketListener tool, which is configured to collect the results through a socket communication channel.
  4. Starting the test executable. After initialization, it will open two ports and try to send data to C/C++test. This data will be collected by the listening agent and loaded into C++test for further analysis.
<CustomStep
	id="run_socket_listeners"
	label="Running Socket Listeners..."
	commandLine="&quot;java&quot; -cp &quot;${cpptest:cfg_dir}/../lib/source/socket_listener&quot; SocketListener --channel &quot;${cpptestproperty:results_port}@${cpptest:testware_loc}/cpptest_results.tlog&quot; --channel &quot;${cpptestproperty:coverage_port}@${cpptest:testware_loc}/cpptest_results.clog&quot; -sf &quot;${cpptest:testware_loc}/sync_file&quot; -to 60"
	workingDir="${cpptest:testware_loc}"
	result="${cpptest:testware_loc}/cpptest_results.res"
	runInBackground="true"
/>
<CustomStep
	id="run_synchronization"
	label="Running Synchronization..."
	commandLine="&quot;java&quot; -cp &quot;${cpptest:cfg_dir}/../lib/source/socket_listener&quot; Synchronize -sf &quot;${cpptest:testware_loc}/sync_file.init&quot; -to 60"
	workingDir="${cpptest:testware_loc}"
	result="${cpptest:testware_loc}/cpptest_results.res"
	runInBackground="false"
/>
<CustomStep
	id="run_test_exec"
	label="Running Tests..."
	commandLine="${cpptest:testware_loc}/${project_name}Test.foo"
	workingDir="${cpptest:testware_loc}"
	result="${cpptest:testware_loc}/cpptest_results.res"
	runInBackground="false"
/>
<CustomStep
	id="run_synchronization"
	label="Running Synchronization..."
	commandLine="&quot;java&quot; -cp &quot;${cpptest:cfg_dir}/../lib/source/socket_listener&quot; Synchronize -sf &quot;${cpptest:testware_loc}/sync_file.final&quot; -to 60"
	workingDir="${cpptest:testware_loc}"
	result="${cpptest:testware_loc}/cpptest_results.res"
	runInBackground="false"
/>

The above steps start the socket listeners in the background, wait until the listeners have initialized (the first synchronization step), run the test executable on the target, and then wait until all results have been received (the final synchronization step).
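Conceptually, a listening agent of this kind is just a TCP server that accepts a connection from the test executable and appends whatever it receives to a results log. The following Python sketch illustrates that idea only; it is not the actual C/C++test SocketListener utility invoked above, which additionally handles multiple channels and synchronization files:

```python
import socket

def collect_channel(server_sock, log_path):
    """Accept a single connection and append everything it sends to log_path.

    Illustrative sketch of a results-collecting agent: the test executable
    connects, streams its log data, and closes the connection when done.
    """
    conn, _addr = server_sock.accept()
    with conn, open(log_path, "ab") as log:
        while True:
            chunk = conn.recv(4096)
            if not chunk:  # sender closed the connection: the run is finished
                break
            log.write(chunk)
```

A real agent would listen on the ports configured via ${cpptestproperty:results_port} and ${cpptestproperty:coverage_port}; here the port and file name are whatever the caller binds and passes in.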

Selecting External Embedded Debugging Mode

Selecting External Embedded debugging mode can only be done by directly modifying the Test Flow recipe. Insert the following line into the Test Flow recipe near the start of the first <RunnableExecution> section, among the other unpublished properties:

<SetProperty key="emb_dbg" value="true" />

For examples, please review the built-in Test Configurations for environments for which the External Embedded mode is supported (see appropriate chapters).

For information about External Embedded debugging mode see Debugging Test Cases.

Changing the Test Entry Point Function

Entry point macros control the testing entry point. By default, main() is called. However, sometimes the tests can only be called from a different point in the program you are testing, or need to be called in a special manner. You can control this using the following macros:

Name - Description
CPPTEST_EPT_main - If defined, 'int main(int, char*[])' is used as the entry point.
CPPTEST_EPT_wmain - If defined, 'int wmain(int, wchar_t*[])' is used as the entry point.
CPPTEST_EPT_WinMain - If defined, 'int WINAPI _tWinMain(HINSTANCE, HINSTANCE, LPSTR, int)' is used as the entry point.
CPPTEST_EPT_wWinMain - If defined, 'int WINAPI _tWinMain(HINSTANCE, HINSTANCE, LPWSTR, int)' is used as the entry point.
CPPTEST_EPT_void_main - If defined, 'void main(int, char*[])' is used as the entry point.
CPPTEST_EPT_main_no_args - If defined, 'int main(void)' is used as the entry point.
CPPTEST_EPT_void_main_no_args - If defined, 'void main(void)' is used as the entry point.
CPPTEST_ENTRY_POINT_C_LINKAGE - If defined, the main function declaration starts with 'extern "C"' when compiled as C++ code.

You can also define the CPPTEST_ENTRY_POINT macro. If this macro is defined, the generated main function will look like this:

CPPTEST_ENTRY_POINT_RETURN_TYPE
CPPTEST_ENTRY_POINT(CPPTEST_ENTRY_POINT_ARGS)
{
	CppTest_Main(CPPTEST_ENTRY_POINT_ARGC, CPPTEST_ENTRY_POINT_ARGV);
	CPPTEST_ENTRY_POINT_RETURN_STATEMENT
}

CPPTEST_ENTRY_POINT_RETURN_TYPE, CPPTEST_ENTRY_POINT_ARGS, CPPTEST_ENTRY_POINT_ARGC, CPPTEST_ENTRY_POINT_ARGV and CPPTEST_ENTRY_POINT_RETURN_STATEMENT have the following default values that can be redefined:

Name - Value
CPPTEST_ENTRY_POINT_RETURN_TYPE - int
CPPTEST_ENTRY_POINT_ARGS - void
CPPTEST_ENTRY_POINT_ARGC - 0
CPPTEST_ENTRY_POINT_ARGV - 0
CPPTEST_ENTRY_POINT_RETURN_STATEMENT - return 0;

You can also define the CPPTEST_ENTRY_POINT_DEFINED macro. Defining this macro prevents the main routine from being generated. In this case, you need to call the CppTest_Main(0, 0) function yourself to execute the test cases.

You must include the cpptest.h header file in the source file that contains the CppTest_Main(0, 0) call. Do not call CppTest_Main(0, 0) from a function that is itself called during unit testing; doing so will result in a loop where CppTest_Main(0, 0) is called infinitely.

Be sure to control changes to the source file with the PARASOFT_CPPTEST macro. For example:

#ifdef PARASOFT_CPPTEST 
#include "cpptest.h"
#endif
...
#ifdef PARASOFT_CPPTEST
    CppTest_Main(0, 0); 
#endif

These macros can be set in the Compiler Options area of the Project Options’ Build Settings.


See Setting Project and File Options for details on project options.

Customizing the Test Execution Flow Definition - Example

Here is a sample test flow for Windows host-based unit testing:

<?xml version="1.0" encoding="UTF-8"?>
<FlowRecipeTemplate toolName="C++test" formatVersion="1.0">
	<Name>Default host-based unit testing</Name>
 
	<RunnableExecution>
 
	<SetProperty key="stub_config_file" value="${cpptest:testware_loc}/stubconfig.xml" />
	<SetProperty key="stub_config_header_file" value="${cpptest:testware_loc}/cpptest_stubconfig.h" />
 
	<TestCaseFindingStep
		testSuiteConfigFile="${cpptest:testware_loc}/testsuites.xml"
		allowNoTestCasesRun="false"
	/>
 
	<PrecompileStep />
	<AppendIncludedTestCases />
	<HarnessInstrumentationStep />
	<ReadStaticCoverageStep />
	<SendStaticCoverageStep />
	<UserStubsInstrumentationStep />
	<ReadSymbolsDataStep />
	<LsiStep />
	<ReadLsiConfigStep />
 
	<AnalyzeMissingDefinitions stopOnUndefined="true" generateStubs="false" />
	
	<ConfigureStubs />
 
	<CreateStubConfigHeader />
 
	<TestRunnerGenerationStep
		testSuiteConfigFile="${cpptest:testware_loc}/testsuites.xml"
		testrunnerCFile="${cpptest:testware_loc}/cpptest_testrunner.c"
		testrunnerCppFile="${cpptest:testware_loc}/cpptest_testrunner.cpp"
		testLogFile="${cpptest:testware_loc}/cpptest_results.tlog" 
		covLogFile="${cpptest:testware_loc}/cpptest_results.clog"
	/>
 
	<CompileStep />
 
	<LinkStep result="${cpptest:testware_loc}/${project_name}Test.exe"/>
 
	</RunnableExecution>
	<ExecuteTestsExecution>
		<RemoveFileStep
			file="${cpptest:testware_loc}/cpptest_results.tlog"
		/>
 
	<CustomStep
		id="run_tests"
		label="Running tests..."
		commandLine="&quot;${cpptest:testware_loc}/${project_name}Test.exe&quot; --start-after=${cpptestproperty:test_case_start_number}"
		workingDir="${cpptest:test_exec_work_dir}"
		result="${cpptest:testware_loc}/cpptest_results.tlog" 
		timeoutTrackFile="${cpptest:testware_loc}/cpptest_results.tlog" 
		timeoutInfoProperty="test_exec_timeouted"
		runInDebugger="${cpptest:run_in_debugger}"
	/>
 
	<ReadTestLogStep
		testLogFile="${cpptest:testware_loc}/cpptest_results.tlog"
		timeoutInfoProperty="test_exec_timeouted"
	/>

	<ReadDynamicCoverageStep
		covLogFile="${cpptest:testware_loc}/cpptest_results.clog"
	/>
 
	</ExecuteTestsExecution>

	<RunnableExecution>
		<ClearTempCoverageData />
	</RunnableExecution>
</FlowRecipeTemplate>
  • runInBackground specifies whether the step should be run in the background or foreground. When set to true, it allows the following action to run immediately, without waiting for the previous action to finish. When set to false, the running process must finish before the next step in order can be started. The default value is false.

If the custom (cross) compiler definition was added correctly and the prebuilt host C/C++test runtime library was replaced with the cross-compiled target-specific one (see Configuring Testing with the Cross Compiler), then C/C++test should be able to successfully perform almost all actions— except the steps used for starting the test executable and loading the test log files. These steps are described below.

<CustomStep
id="run_tests"
label="Running tests..."
commandLine="&quot;${cpptest:testware_loc}/${project_name}Test.exe&quot;"
workingDir="${cpptest:testware_loc}"
result="${cpptest:testware_loc}/cpptest_results.tlog"
runInBackground="false"
timeoutTrackFile="${cpptest:testware_loc}/cpptest_results.tlog"
/>
<ReadTestLogStep
testLogFile="${cpptest:testware_loc}/cpptest_results.tlog"
/>
<ReadDynamicCoverageStep
covLogFile="${cpptest:testware_loc}/cpptest_results.clog"
contextID=""
/>

You will probably need to replace these steps with ones suitable for the embedded environment you are working with. For example, assume your target is an embedded Linux-based PowerPC board that supports FTP but does not support remote execution (no rsh or any similar utilities). In this case, the general schema for testing could be:

  1. Prepare the test executable and perform the cross build (this should be handled by the default set of actions but with the cross compiler).
  2. Deploy the test executable to the target board using FTP.
  3. Wait for test execution to complete (the test executable will be started by the agent located on the target).
  4. Download the test results using FTP.

For instance, let’s consider a trivial implementation of this schema.

If this is the existing step:

<CustomStep
id="run_tests"
label="Running tests..."
commandLine="&quot;${cpptest:testware_loc}/${project_name}Test&quot;"
workingDir="${cpptest:testware_loc}"
result="${cpptest:testware_loc}/cpptest_results.tlog"
runInBackground="false"
timeoutTrackFile="${cpptest:testware_loc}/cpptest_results.tlog"
/>

You could have this:

<CustomStep
id="deploy_test"
label="Deploying tests..."
commandLine="&quot;/home/machine/user/cptests/helper/Store.py ${project_name}Test&quot;"
workingDir="${cpptest:testware_loc}"
/>
<CustomStep
id="sleep"
label="Waiting for test execution..." 
commandLine="sleep 30" 
workingDir="${cpptest:testware_loc}" 
result="cpptest_results.tlog" 
runInBackground="false"
/>
  • The FTP transfer was automated using a helper Python script, which is provided at the end of this topic.

When test execution flow reaches the “sleep” step, the test executable should already be deployed to the target board. The sleep command is necessary to provide trivial synchronization so that the next step (downloading test results) will not start before test execution completes. 

In production environments, you would typically want to implement more accurate synchronization (for example, with the help of a file marker being created after the test execution is finished). In this example, we will simply use sleep, which stops the flow execution before trying to access the results files on the target platform. It requires a simple agent on the target platform—an agent that waits for the uploaded test executable and starts it. A simple example of such a script is presented below:

#!/bin/bash
while [ 1 ]
do
    if [ -e *Test ]; then
        # Give it some time to finish upload...
        # test executable transfer may be in progress
        echo "Found test executable, waiting for upload to be finished..."
        sleep 10
        echo "Cleaning up before tests..."
        rm results.*log
        echo "Adding exe perm .... to *Test"
        chmod +x *Test
        # execute
        echo "Executing *Test"
        ./*Test
        # Remove test executable
        echo "Removing *Test"
        rm -f ./*Test
    else
        echo "Test executable not detected ..."
    fi
    echo "Sleeping..."
    sleep 3
done

Again, the agent does not implement any synchronization; it simply looks for a specified pattern of file names to appear in the file system. Once a match appears, it waits some extra time for the upload to finish (this could be replaced by creating a file marker on the host side after the upload is finished), then starts the test executable, which creates the results files (results.tlog with test results and results.clog with coverage results). These files need to be downloaded back to the host machine and loaded into C++test. To accomplish this, you need to modify another piece of the test flow definition. Before these "reading log" steps...
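A file marker gives more reliable synchronization than a fixed sleep: the host polls until the marker appears (or a timeout expires) before it tries to download results. A possible host-side helper, sketched in Python; the marker-file name and timeout values used in the test below are illustrative assumptions, not part of C/C++test:

```python
import os
import time

def wait_for_marker(path, timeout_s=60.0, poll_s=0.5):
    """Block until `path` exists or the timeout expires.

    Returns True when the marker appeared, False on timeout -- so the
    calling flow step can fail instead of reading stale results.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    return False
```

Such a helper would replace the "sleep" CustomStep, with the target-side agent creating the marker file after the test run finishes.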

<ReadTestLogStep
	testLogFile="${cpptest:testware_loc}/cpptest_results.tlog"
/>
<ReadDynamicCoverageStep
	covLogFile="${cpptest:testware_loc}/cpptest_results.clog"
	contextID=""
/>

you would add custom steps for downloading the results files from the target platform; for example:

<CustomStep
id="results_dwl"
label="downloading test results..."
commandLine="&quot;/home/machine/user/cptests/helper/Get.py cpptest_results.tlog&quot;"
workingDir="${cpptest:testware_loc}"
result="cpptest_results.tlog"
runInBackground="false"
/>
<CustomStep
id="coverage_dwl"
label="downloading coverage results..."
commandLine="&quot;/home/machine/user/cptests/helper/Get.py cpptest_results.clog&quot;"
workingDir="${cpptest:testware_loc}"
result="cpptest_results.clog"
runInBackground="false"
/>

Having this set up, you should be able to automatically perform the complete testing loop with unit test execution on the target device. Depending on the development environment, numerous variations of this example can be applied. 

Sample FTP Automation Scripts

This section provides sample Python scripts for automating FTP operations.

Store.py

#!/usr/bin/python

import sys
print "Argv = ", sys.argv
from ftplib import FTP
ftp = FTP('10.9.1.36', 'user', 'pass')
ftp.set_debuglevel(1)
print ftp.getwelcome()
file = open(sys.argv[1], "rb")
try:
    ftp.storbinary("STOR %s" % sys.argv[1], file)
finally:
    file.close()
ftp.quit()

Get.py

#!/usr/bin/python

import sys
print "Argv = ", sys.argv
if len(sys.argv) < 2:
    print "Too few arguments"
    sys.exit(1)
from ftplib import FTP
ftp = FTP('10.9.1.36', 'user', 'pass')
ftp.set_debuglevel(1)
print ftp.getwelcome()
file = open(sys.argv[1], "wb")
try:
    ftp.retrbinary("RETR %s" % sys.argv[1], file.write)
finally:
    file.close()
ftp.quit()

Test Flow Descriptions and Examples

Execution Control Elements

These are the highest-level elements (besides the 'FlowRecipeTemplate' document root) of every recipe. They are containers for execution flow steps (commands): steps must be placed inside their context and are executed sequentially within it. Execution control elements cannot be nested; introducing a new one requires closing its predecessor.

RunnableExecution

Unconditionally execute contained commands.

Example:

<RunnableExecution>
    <Echo msg="Hello world!" /> 
</RunnableExecution>

ConditionalExecution

Conditionally executes the contained commands. Useful when the test value contains one of the C/C++test execution flow variables described later.

Attributes: 

Example:

 <ConditionalExecution value="${cpptest:os}" equals="unix">
     <Echo msg="This is UNIX-like OS." /> 
</ConditionalExecution>

ExecuteTestsExecution

Executes the contained commands in a loop until all test cases have been executed. This element is only useful/valid when it contains the test execution step (a 'CustomStep' launching the test executable) followed by 'ReadTestLogStep'. Inside it, the ${cpptestproperty:test_case_start_number} variable is updated automatically and should be passed to the '--start-after=' parameter of the test executable.

Example:

<ExecuteTestsExecution>
    <CustomStep
        commandLine="&quot;${cpptest:testware_loc}/${project_name}Test.exe&quot; --start-after=${cpptestproperty:test_case_start_number}"
        ...other attributes...
    />
    <ReadTestLogStep testLogFile="${cpptest:testware_loc}/cpptest_results.tlog" />
</ExecuteTestsExecution>
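The looping behavior can be pictured with a small Python model: rerun the test executable, skipping the cases already logged, until the log reports a complete run. The helper callables below are illustrative stand-ins, not C/C++test APIs:

```python
def run_all_tests(run_executable, read_log):
    """Sketch of the ExecuteTestsExecution loop.

    `run_executable(start_after)` stands in for launching the test binary
    with --start-after=<n>; it may stop early (e.g. on a crash).
    `read_log()` stands in for ReadTestLogStep and returns
    (cases_executed_so_far, finished_flag).
    """
    start_after = 0
    while True:
        run_executable(start_after)
        executed, finished = read_log()
        if finished:
            return executed
        start_after = executed  # resume right after the last logged case
```

This is why the step must be followed by 'ReadTestLogStep': the log is what tells the loop where to resume and when to stop.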

Commands

AnalyzeMissingDefinitions

Internal step used to analyze information about missing symbols. It can also be used to automatically generate stubs for missing symbols if the appropriate attribute is set. If (after optional stub generation) there are still missing symbols, the execution flow is stopped, provided the Execution> Symbols> Perform early check for potential linker problems option is enabled in the Test Configuration.

Attributes:

Example:

     <AnalyzeMissingDefinitions generateStubs="true" />

AppendIncludedTestCases

Internal step that bundles the included test suite files with the appropriate project source files.

Example:

      <AppendIncludedTestCases />

ClearTempCoverageData

Internal step used to clear temporary static and dynamic coverage data. It should be included at the end of the execution flow if there is no need to store static coverage data for future use. If you have two execution flow recipes that are meant to be run in sequence (for example, "Build test executable" and "Read test results"), only the second one should contain the ClearTempCoverageData step at the end.

Example:

     <ClearTempCoverageData />

CompileStep

Internal step that compiles the instrumented source files to be linked into the test executable.

Example:

     <CompileStep />

ConditionalStop

Stops the test execution flow if the given condition is fulfilled or not fulfilled (depending on the attributes).

Deprecated - This command will be removed in subsequent versions of C++test. Use combinations of 'ConditionalExecution' control element and unconditional 'Stop' instead.

Attributes:

Example:

     <ConditionalStop property="test_exec_timeouted" equals="true" />

ConfigureStubs

Internal step used to create the stub configuration XML file. The location of the resulting file is controlled by the "stub_config_file" execution flow property.

Example:

     <ConfigureStubs />

CustomStep

Runs an external executable.

Attributes:

Example:

<CustomStep
	id="ctdt_nm"
	label="CTDT Generation - extracting symbols..."
	commandLine="&quot;${cpptestproperty:nm}&quot; -g ${cpptest:test_objects_quoted}"
	workingDir="${cpptest:testware_loc}"
	stdOut="${cpptest:testware_loc}/ctdt_nm.out" 
	result="${cpptest:testware_loc}/ctdt_nm.out" 
	bypassConsole="true"
	dependencyFiles="${cpptest:test_objects_quoted}"
/>

CreateStubConfigHeader

Internal step used to produce a stub configuration header file to be included by the instrumented sources. The location of the resulting header is controlled by the "stub_config_header_file" execution flow property. 

Example:

     <CreateStubConfigHeader />

Echo

Prints the given message to the C++test console or to a file.

 Attributes:

Example:

     <Echo msg="Hello world!" />

HarnessInstrumentationStep

Internal step that instruments the project source files.

Example:

     <HarnessInstrumentationStep />

LinkStep

Links the test executable using previously prepared object files.

Attributes:

Example:

      <LinkStep result="${cpptest:testware_loc}/${project_name}Test.exe" />

LsiStep

Internal step used to analyze information about used and available symbols.

Attributes:

Example:

     <LsiStep />

PrecompileStep

Internal step that precompiles the project source files for the LSI analysis.

Example:

      <PrecompileStep />

PrepareDataSources

Converts managed data sources into a format supported by the C++test test execution runtime library: either a CSV file or a source file containing a source-code array to be compiled into the test executable.

Attributes:

Example:

     <PrepareDataSources limit="100" type="csv" />

ReadDynamicCoverageStep

Reads the log file containing coverage results from the test execution.

Attributes:

ReadLsiConfigStep

Internal step used to read data from the LSI module.

Example:

     <ReadLsiConfigStep />

ReadNativeCoverage

Reads test coverage information from the native test cases imported from C++test 2.3/6.x (if there are any in the current test unit).

Example:

      <ReadNativeCoverage />

ReadNativeTestsLog

Reads the test execution log from the native test cases imported from C++test 2.3/6.x (if there are any in the current test unit).

Example:

      <ReadNativeTestsLog />

ReadStaticCoverageStep

Internal step that reads the static coverage information file prepared when instrumenting project source files.

Example:

     <ReadStaticCoverageStep />

ReadSymbolsDataStep

Internal step used to read information about symbols used / defined in processed source files.

Attributes:

Example:

    <ReadSymbolsDataStep />

ReadTestLogStep

Reads test execution results from the test log.

Attributes:

Example:    

<ReadTestLogStep
	testLogFile="${cpptest:testware_loc}/cpptest_results.tlog" 
	timeoutInfoProperty="test_exec_timeouted"
	logTime="'Date': yy/MM/dd, 'Time': HH:mm:ss.SSS"
/>

RemoveFileStep

Removes given file/files from the file system.

Attributes:

Example:

     <RemoveFileStep file="${cpptest:testware_loc}/*.clog" />

RunNativeTests

Executes the native test cases imported from C++test 2.3/6.x (if there are any in the current test unit).

Example:

     <RunNativeTests />

SendStaticCoverageStep

Internal step used to pass all read static coverage information to the coverage results system.

Example:

     <SendStaticCoverageStep/>

SetProperty

Sets an execution flow property. You can use regular expressions with the 'search' and 'replace' attributes to search for and replace values. This is especially useful when the value contains one of the C++test execution flow variables described later in this section.

Attributes:

Example 1:

<SetProperty key="stub_config_file" value="${cpptest:testware_loc}/stubconfig.xml" />

stores the plain value into "${cpptestproperty:stub_config_file}"

Example 2:

<SetProperty key="twl_mixed_path" value="${cpptest:testware_loc}" search="\\" replace="/" />

replaces all backslashes with forward slashes in the provided value and stores the outcome in "${cpptestproperty:twl_mixed_path}".
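The search/replace pair behaves like a regular-expression substitution. As an analogy only (this is not C/C++test's implementation), Example 2 corresponds to the following Python snippet:

```python
import re

# search="\\" replace="/" applied to a Windows-style path:
value = r"C:\project\tests"
result = re.sub(r"\\", "/", value)  # every backslash becomes a forward slash
print(result)  # C:/project/tests
```

Note that the backslash is escaped in the 'search' attribute for the same reason it is escaped in the Python pattern: it is a regular-expression metacharacter.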

Stop

Unconditionally stops the execution flow.

TestCaseFindingStep

Internal step used to prepare an XML file with the list of test cases to be executed. The XML file is later used by the TestRunnerGenerationStep.

Attributes:

Example:

<TestCaseFindingStep
	testSuiteConfigFile="${cpptest:testware_loc}/testsuites.xml"
	allowNoTestCasesRun="false"
/>

TestRunnerGenerationStep

TestRunnerWithSocketsGenerationStep

Prepares the test runner source code that drives test case execution. Depending on the specified attributes, the test executable will either produce log files or send results via a socket connection.

Attributes:

UserStubsInstrumentationStep

Internal step that instruments the stub files.

Example:

     <UserStubsInstrumentationStep />

Variables

Since the execution flow recipe is an XML document, all values used as flow step attributes must be valid XML strings. This means that all occurrences of special characters (such as " or <) need to be substituted with the appropriate escape sequences (for example, &quot; or &lt;, respectively).
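If you generate parts of a recipe with a script rather than editing it by hand, the standard XML escaping utilities produce exactly these substitutions. For example, in Python:

```python
from xml.sax.saxutils import escape

# A command line as it would appear before XML escaping:
raw = '"${cpptest:testware_loc}/${project_name}Test.exe" --flag=<value>'

# escape() handles & < > by default; the entities dict adds the quote.
escaped = escape(raw, {'"': "&quot;"})
print(escaped)
```

The output is the form seen in the commandLine attributes throughout this topic, with " as &quot; and < > as &lt; &gt;.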

Variables that can be used in the flow step attributes are:

FileSynchronizeStep

This step pauses the execution flow while waiting for a specific file either to be created or to remain inactive for a specified amount of time. The behavior depends on the attributes used, as described below.

Attributes:

Examples:

The following setting will pause the execution flow for 10 seconds or until the sync_file.txt file appears at the specified location:

<FileSynchronizeStep
	fileSync="${cpptest:testware_loc}/sync_file.txt" 
	timeout="10000"
/>

The following setting will track the timestamp of the cpptest_results.tlog file. When C++test detects that the file is inactive for 10 seconds, then the execution flow will resume:

<FileSynchronizeStep
	fileSync="${cpptest:testware_loc}/cpptest_results.tlog" 
	fileInactiveTimeout="10000"
/>
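In effect, the fileInactiveTimeout variant watches the file's modification time until it stops changing for the given period. A rough Python sketch of those semantics (an illustration, not the actual implementation):

```python
import os
import time

def wait_until_inactive(path, inactive_s=10.0, poll_s=0.5):
    """Return once `path` has not been modified for `inactive_s` seconds."""
    while True:
        try:
            # age of the file: seconds since it was last written to
            age = time.time() - os.path.getmtime(path)
        except OSError:  # file not created yet: keep waiting
            age = 0.0
        if age >= inactive_s:
            return
        time.sleep(poll_s)
```

This matches the second example above: the flow resumes only after cpptest_results.tlog has been quiet for the configured interval, i.e. the target has presumably finished writing results.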