Command Line Interface

Invocation

$ altwalker [...]

You can also invoke the command through the Python interpreter from the command line:

$ python -m altwalker [...]

Help

Get help on the version, available commands, arguments, or option names:

$ altwalker -v/--version

$ # show help message and all available commands
$ altwalker -h/--help

$ # show help message for the specified command
$ altwalker command_name -h/--help

Possible exit codes

Running altwalker can result in five different exit codes:

  • Exit Code 0: Tests were successfully run and passed.

  • Exit Code 1: Tests were successfully run and failed.

  • Exit Code 2: Command line errors.

  • Exit Code 3: GraphWalker errors.

  • Exit Code 4: AltWalker internal errors.
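In shell scripts or CI pipelines you can branch on these codes. A minimal Python sketch that maps each code to its meaning; the mapping mirrors the list above, and describe_exit_code is an illustrative helper, not part of AltWalker:

```python
# Exit codes documented for the altwalker command, mapped to their meaning.
EXIT_CODE_MEANINGS = {
    0: "Tests were successfully run and passed.",
    1: "Tests were successfully run and failed.",
    2: "Command line errors.",
    3: "GraphWalker errors.",
    4: "AltWalker internal errors.",
}


def describe_exit_code(code):
    """Return a human-readable description for an altwalker exit code."""
    return EXIT_CODE_MEANINGS.get(code, "Unknown exit code: {}".format(code))
```

In a shell you could capture the exit code with $? after running altwalker and feed it to a helper like this.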

Commands

altwalker

A command line tool for running model-based tests.

altwalker [OPTIONS] COMMAND [ARGS]...

Options

-v, --version

Show the version and exit.

--log-level <log_level>

Sets the logger level.

Default

WARNING

Options

CRITICAL|ERROR|WARNING|INFO|DEBUG|NOTSET

--log-file <log_file>

Sends logging output to a file.

Environment variables

ALTWALKER_LOG_LEVEL

Provide a default for --log-level

ALTWALKER_LOG_FILE

Provide a default for --log-file

Commands

init

Initialize a new project.

generate

Generate test code template based on the given model(s).

check

Check and analyze model(s) for issues.

verify

Verify test code from TEST_PACKAGE against the model(s).

online

Run the tests from TEST_PACKAGE path using the GraphWalker online RESTful service.

offline

Generate a test path once that can be run later.

walk

Run the tests from TEST_PACKAGE with steps from STEPS_PATH.


altwalker init

Initialize a new project.

altwalker init [OPTIONS] DEST_DIR

Options

-m, --model <models>

The model, as a graphml/json file.

--git, -n, --no-git

If set to true, initializes a git repository.

Default

True

-l, --language <language>

The programming language of the tests.

Options

python|c#|dotnet

Arguments

DEST_DIR

Required argument

Examples:

$ altwalker init test-project -l python

The command will create a directory named test-project with the following structure:

test-project/
    .git
    models/
        default.json
    tests/
        __init__.py
        test.py
  • test-project: The project root directory.

  • models: A directory containing the model files (.json or .graphml).

  • tests: A Python package containing the test code.

  • tests/test.py: A Python module containing the code for the models.

If you don’t want test-project to be a git repository, run the command with --no-git:

$ altwalker init test-project -l python --no-git

Note

If you don’t have git installed on your machine use the --no-git flag.

If you specify models (with the -m/--model option), init will copy the models into the models directory, and test.py will contain a template with all the classes and methods needed for the models:

$ altwalker init test-project -m ./first.json -m ./second.json -l python

The test-project directory will have the following structure:

test-project/
    .git
    models/
        first.json
        second.json
    tests/
        __init__.py
        test.py

altwalker generate

Generate test code template based on the given model(s).

altwalker generate [OPTIONS] DEST_DIR

Options

-m, --model <models>

Required The model, as a graphml/json file.

-l, --language <language>

The programming language of the tests.

Options

python|c#|dotnet

Arguments

DEST_DIR

Required argument

Examples:

$ altwalker generate test-project -m models/models.json

The command will create a directory named test-project with the following structure:

test-project/
    tests/
        __init__.py
        test.py

For a models.json file with a simple model named Model, with an edge named edge_name and a vertex named vertex_name, test.py will contain:

class Model:

    def vertex_name(self):
        pass

    def edge_name(self):
        pass

The -m/--model option is required and can be used multiple times; the generate command will generate a class for each model you provide.
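For instance, passing two model files defining models named ModelA and ModelB (hypothetical names and element names) would produce a test.py along these lines:

```python
# Hypothetical template generated for two models, one class per model.
class ModelA:

    def vertex_a(self):
        pass

    def edge_a(self):
        pass


class ModelB:

    def vertex_b(self):
        pass

    def edge_b(self):
        pass
```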


altwalker check

Check and analyze model(s) for issues.

altwalker check [OPTIONS]

Options

-m, --model <models>

Required The model, as a graphml/json file, followed by a generator with a stop condition.

-b, --blocked

Filters out elements with the keyword BLOCKED.

Note

The -m/--model option is required, but you can use it multiple times to provide multiple models.

Further Reading/Useful Links:

For more details and a list of all available Generators and Stop Conditions, read the Path Generation or the GraphWalker Documentation.

Examples:

$ altwalker check -m models/login.json "random(never)" -m models/blog.json "random(never)"
No issues found with the model(s).
$ altwalker check -m models/invalid.json "random(never)"
AltWalker Error:

Id of the edge is not unique: e_0

altwalker verify

Verify test code from TEST_PACKAGE against the model(s).

altwalker verify [OPTIONS] TEST_PACKAGE

Options

-m, --model <models>

Required The model, as a graphml/json file.

-x, -l, --executor, --language <executor>

Configure the executor to be used.

Default

python

Options

python|c#|dotnet|http

--url <url>

The url for the executor.

Default

http://localhost:5000/

Arguments

TEST_PACKAGE

Required argument

Examples:

$ altwalker verify tests -m models.json
No issues found with the code.

The verify command will check that every element from the provided models is implemented in tests/test.py (models as classes and vertices/edges as methods inside the model class).

If methods or classes are missing, the command will return a list of errors:

$ altwalker verify tests -m models.json
AltWalker Error: Expected to find vertex_0 method in class Model_A.
Expected to find vertex_1 method in class Model_A.
Expected to find vertex_2 method in class Model_A.
Expected to find class Model_B.
Expected to find vertex_0 method in class Model_B.
Expected to find vertex_1 method in class Model_B.
Expected to find edge_0 method in class Model_B.
Expected to find edge_1 method in class Model_B.
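To make verify pass, the missing classes and methods from the error list above have to be added to the test package. A minimal skeleton that would satisfy this particular list (the method bodies are placeholders to fill in with real test code):

```python
# Skeleton matching the classes and methods the verify errors above expect.
class Model_A:

    def vertex_0(self):
        pass

    def vertex_1(self):
        pass

    def vertex_2(self):
        pass


class Model_B:

    def vertex_0(self):
        pass

    def vertex_1(self):
        pass

    def edge_0(self):
        pass

    def edge_1(self):
        pass
```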

altwalker online

Run the tests from TEST_PACKAGE path using the GraphWalker online RESTful service.

altwalker online [OPTIONS] TEST_PACKAGE

Options

-p, --port <port>

Sets the port of the GraphWalker service.

-m, --model <models>

Required The model, as a graphml/json file, followed by a generator with a stop condition.

-e, --start-element <start_element>

Sets the starting element in the first model.

-x, -l, --executor, --language <executor>

Configure the executor to be used.

Default

python

Options

python|c#|dotnet|http

--url <url>

The url for the executor.

Default

http://localhost:5000/

-o, --verbose

Will also print the model data and the properties for each step.

-u, --unvisited

Will also print the remaining unvisited elements in the model.

-b, --blocked

Filters out elements with the keyword BLOCKED.

--report-path

Report the path.

--report-file <report_file>

Save the report in a file.

Arguments

TEST_PACKAGE

Required argument

Examples:

For the -m/--model option you need to pass a model_path and a stop_condition.

  • model_path: The file (.json or .graphml) containing the model(s).

  • stop_condition: A string that specifies the generator and the stop condition.

    For example random(never), a_star(reached_edge(edge_name)), where random, a_star are the generators and never, reached_edge(edge_name) are the stop conditions.

    For more details and a list of all available options read the GraphWalker Documentation.

The -m/--model option is required, but you can use it multiple times to provide multiple models.

For example:

$ altwalker online tests -m models.json "random(vertex_coverage(30))" -p 9999
Running:
[2019-02-07 12:56:42.986142] ModelName.vertex_A Running
[2019-02-07 12:56:42.986559] ModelName.vertex_A Status: PASSED
...
Status: True

If you use the -o/--verbose flag, the command will print for each step the data (the data for the current model) and properties (the properties of the current step defined in the model):

[2019-02-18 12:53:13.721322] ModelName.vertex_A Running
Data:
{
    "a": "0",
    "b": "0",
    "itemsInCart": "0"
}
Properties:
{
    "x": 1,
    "y": 2
}

If you use the -u/--unvisited flag, the command will print for each step the current list of all unvisited elements:

[2019-02-18 12:55:07.173081] ModelName.vertex_A Running
Unvisited Elements:
[
    {
        "elementId": "v1",
        "elementName": "vertex_B"
    },
    {
        "elementId": "e0",
        "elementName": "edge_A"
    }
]

altwalker offline

Generate a test path once that can be run later.

altwalker offline [OPTIONS]

Options

-f, --output-file <output_file>

Output file.

-m, --model <models>

Required The model, as a graphml/json file, followed by a generator with a stop condition.

-e, --start-element <start_element>

Sets the starting element in the first model.

-o, --verbose

Will also print the model data and the properties for each step.

-u, --unvisited

Will also print the remaining unvisited elements in the model.

-b, --blocked

Filters out elements with the keyword BLOCKED.

Note

If your models use guards and your test code updates the model data, the offline command may produce invalid paths.

Examples:

For the -m/--model option you need to pass a model_path and a stop_condition.

  • model_path: The file (.json or .graphml) containing the model(s).

  • stop_condition: A string that specifies the generator and the stop condition.

    For example random(reached_vertex(vertex_name)), a_star(reached_edge(edge_name)), where random, a_star are the generators and reached_vertex(vertex_name), reached_edge(edge_name) are the stop conditions.

    For more details and a list of all available options read the GraphWalker Documentation.

Note

The never and time_duration stop conditions cannot be used with the offline command, only with the online command.

The -m/--model option is required, but you can use it multiple times to provide multiple models.

Example:

$ altwalker offline -m models.json "random(vertex_coverage(100))"
[
    {
        "id": "v0",
        "modelName": "Example",
        "name": "start_vertex"
    },
    {
        "id": "e0",
        "modelName": "Example",
        "name": "from_start_to_end"
    },
    {
        "id": "v1",
        "modelName": "Example",
        "name": "end_vertex"
    }
]

If you want to save the steps in a .json file, you can use the -f/--output-file <FILE_NAME> option:

$ altwalker offline -m models.json "random(vertex_coverage(100))" -f steps.json
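The saved file is plain JSON, so you can post-process it with any JSON tooling. A sketch that loads the steps and prints them as ModelName.step_name pairs; the inline JSON mirrors the example output above:

```python
import json

# Steps as produced by `altwalker offline ... -f steps.json`
# (copied from the example output above).
steps_json = """
[
    {"id": "v0", "modelName": "Example", "name": "start_vertex"},
    {"id": "e0", "modelName": "Example", "name": "from_start_to_end"},
    {"id": "v1", "modelName": "Example", "name": "end_vertex"}
]
"""

steps = json.loads(steps_json)
for step in steps:
    print("{}.{}".format(step["modelName"], step["name"]))
```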

If you use the -o/--verbose flag, the command will add for each step the data (the data for the current model) and properties (the properties of the current step defined in the model):

{
    "id": "v0",
    "name": "vertex_A",
    "modelName": "ModelName",

    "data": {
        "a": "0",
        "b": "0",
        "itemsInCart": "0"
    },
    "properties": []
}

If you use the -u/--unvisited flag, the command will add for each step the current list of all unvisited elements, the number of elements and the number of unvisited elements:

{
    "id": "v0",
    "name": "vertex_A",
    "modelName": "ModelName",

    "numberOfElements": 3,
    "numberOfUnvisitedElements": 3,
    "unvisitedElements": [
        {
            "elementId": "v0",
            "elementName": "vertex_A"
        },
        {
            "elementId": "v1",
            "elementName": "vertex_B"
        },
        {
            "elementId": "e0",
            "elementName": "edge_A"
        }
    ]
}

altwalker walk

Run the tests from TEST_PACKAGE with steps from STEPS_PATH.

altwalker walk [OPTIONS] TEST_PACKAGE STEPS_PATH

Options

-x, -l, --executor, --language <executor>

Configure the executor to be used.

Default

python

Options

python|c#|dotnet|http

--url <url>

The url for the executor.

Default

http://localhost:5000/

--report-path

Report the path.

--report-file <report_file>

Save the report in a file.

Arguments

TEST_PACKAGE

Required argument

STEPS_PATH

Required argument

Examples:

Usually the walk command will execute a path generated by the offline command, but it can execute any list of steps that respects that format.

A simple example:

$ altwalker walk tests steps.json
Running:
[2019-02-15 17:18:09.593955] ModelName.vertex_A Running
[2019-02-15 17:18:09.594358] ModelName.vertex_A Status: PASSED
[2019-02-15 17:18:09.594424] ModelName.edge_A Running
[2019-02-15 17:18:09.594537] ModelName.edge_A Status: PASSED
[2019-02-15 17:18:09.594597] ModelName.vertex_B Running
[2019-02-15 17:18:09.594708] ModelName.vertex_B Status: PASSED

Status: True
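Since walk accepts any list of steps that follows the offline format, a quick sanity check before running can catch malformed step files early. An illustrative helper (check_steps is not part of AltWalker; it only checks for the keys shown in the offline examples):

```python
# Keys every step has in the output of `altwalker offline`.
REQUIRED_KEYS = {"id", "name", "modelName"}


def check_steps(steps):
    """Raise ValueError if any step lacks the keys the walk command expects."""
    for index, step in enumerate(steps):
        missing = REQUIRED_KEYS - set(step)
        if missing:
            raise ValueError(
                "Step {} is missing keys: {}".format(index, sorted(missing))
            )
    return True
```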