Type-safe StrictYAML integration tests run from pytest. They can:

- Rewrite themselves from program output (command line test example)
- Autogenerate documentation (website test example)

Demo projects with demo tests
Project | Storytests | Python code | Doc template | Autogenerated docs |
---|---|---|---|---|
Website | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
REST API | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
Interactive command line app | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
A Python API | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
A minimal example (two files) demonstrating two short YAML stories and the Python code needed to run them from within a pytest file.
Code Example
example.story:

```yaml
Log in as James:
  given:
    browser: firefox  # test preconditions
  steps:
  - Enter text:
      username: james
      password: password
  - Click: log in


See James analytics:
  based on: log in as james  # test inheritance
  following steps:
  - Click: analytics
```
test_hitchstory.py:

```python
from hitchstory import BaseEngine, GivenDefinition, GivenProperty
from hitchstory import Failure, strings_match
from hitchstory import StoryCollection
from strictyaml import Str
from pathlib import Path
from os import getenv


class Engine(BaseEngine):
    """Interprets and validates the hitchstory stories."""

    given_definition = GivenDefinition(
        browser=GivenProperty(
            # Available validators: https://hitchdev.com/strictyaml/using/
            Str()
        ),
    )

    def __init__(self, rewrite=False):
        self._rewrite = rewrite

    def set_up(self):
        print(f"Using browser {self.given['browser']}")

    def click(self, name):
        print(f"Click on {name}")
        if name == "analytics":
            raise Failure(f"button {name} not found")

    def enter_text(self, **textboxes):
        for name, text in textboxes.items():
            print(f"Enter {text} in {name}")

    def tear_down(self):
        pass


collection = StoryCollection(
    # All .story files in this file's directory.
    Path(__file__).parent.glob("*.story"),
    Engine(
        # If REWRITE environment variable is set to yes -> rewrite mode.
        rewrite=getenv("REWRITE", "no") == "yes"
    )
)

# You can embed the stories in tests manually:
#
# def test_log_in_as_james():
#     collection.named("Log in as james").play()
#
# def test_see_james_analytics():
#     collection.named("See James analytics").play()

# Or autogenerate runnable tests from the YAML stories like so:
# E.g. "Log in as James" -> "def test_log_in_as_james"
collection.with_external_test_runner().ordered_by_name().add_pytests_to(
    module=__import__(__name__)  # This module
)
```
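The autogenerated test names follow the convention hinted at in the comment above: the story name is lowercased and runs of non-alphanumeric characters become underscores, prefixed with `test_`. A rough sketch of that mapping (an illustration of the naming convention, not hitchstory's actual code):

```python
import re


def pytest_name(story_name: str) -> str:
    """Approximate how a story name maps to an autogenerated pytest name."""
    slug = re.sub(r"[^a-z0-9]+", "_", story_name.lower()).strip("_")
    return f"test_{slug}"


print(pytest_name("Log in as James"))      # test_log_in_as_james
print(pytest_name("See James analytics"))  # test_see_james_analytics
```

This is why `pytest -k test_log_in_as_james` below selects the "Log in as James" story.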
Run passing "log in as James" test

Running test_log_in_as_james runs the "Log in as James" story.

```bash
pytest -s -k test_log_in_as_james
```

Outputs:

```
============================= test session starts ==============================
platform linux -- Python n.n.n, pytest-n.n.n, pluggy-n.n.n
rootdir: /path/to
collected 2 items / 1 deselected / 1 selected

test_hitchstory.py Using browser firefox
Enter james in username
Enter password in password
Click on log in
.

======================= 1 passed, 1 deselected in 0.1s ========================
```
Run failing "see James' analytics" test

Failing tests also have colors and highlighting when run for real.

```bash
pytest -k test_see_james_analytics
```

Outputs:

```
============================= test session starts ==============================
platform linux -- Python n.n.n, pytest-n.n.n, pluggy-n.n.n
rootdir: /path/to
collected 2 items / 1 deselected / 1 selected

test_hitchstory.py F                                                     [100%]

=================================== FAILURES ===================================
___________________________ test_see_james_analytics ___________________________

story = Story('see-james-analytics')

    def hitchstory(story=story):
>       story.play()
E       hitchstory.exceptions.StoryFailure: RUNNING See James analytics in /path/to/example.story ... FAILED in 0.1 seconds.
E
E       based on: log in as james # test inheritance
E       following steps:
E       - Click: analytics
E
E
E       hitchstory.exceptions.Failure
E
E       Test failed.
E
E       button analytics not found

/src/hitchstory/story_list.py:51: StoryFailure
----------------------------- Captured stdout call -----------------------------
Using browser firefox
Enter james in username
Enter password in password
Click on log in
Click on analytics
=========================== short test summary info ============================
FAILED test_hitchstory.py::test_see_james_analytics - hitchstory.exceptions.StoryFailure: RUNNING See James analytics in /path/to/example.story ... FAILED in 0.1 seconds.
based on: log in as james # test inheritance
following steps:
- Click: analytics
hitchstory.exceptions.Failure
Test failed.
button analytics not found
======================= 1 failed, 1 deselected in 0.1s ========================
```
Install

```bash
$ pip install hitchstory
```
Community
Help is available if you ask questions in these places:

- GitHub discussions
- GitHub issues (not just for bugs)
- Slack channel
Using HitchStory
Every feature of this library is documented and listed below. The library is tested and documented using itself.
Using HitchStory: With Pytest
If you already have pytest set up and running integration tests, you can use it with hitchstory:
Using HitchStory: Engine
How to use the different features of the story engine:
- Hiding stacktraces for expected exceptions
- Given preconditions
- Gradual typing of story steps
- Match two JSON snippets
- Match two strings and show diff on failure
- Extra story metadata - e.g. adding JIRA ticket numbers to stories
- Story with parameters
- Story that rewrites itself
- Story that rewrites the sub key of an argument
- Raising a Failure exception to conceal the stacktrace
- Arguments to steps
- Strong typing
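The "match two strings and show diff on failure" feature above can be pictured with a short sketch. This is not hitchstory's implementation of `strings_match`; it is a stand-in built on stdlib `difflib`, with a hypothetical `Failure` class standing in for hitchstory's exception:

```python
import difflib


class Failure(Exception):
    """Stand-in for hitchstory's Failure exception."""


def strings_match_sketch(expected: str, actual: str) -> None:
    """Raise Failure with a unified diff when two strings differ."""
    if expected != actual:
        diff = "\n".join(difflib.unified_diff(
            expected.splitlines(), actual.splitlines(),
            fromfile="expected", tofile="actual", lineterm="",
        ))
        raise Failure(f"Strings did not match:\n{diff}")


strings_match_sketch("hello\nworld", "hello\nworld")  # passes silently
try:
    strings_match_sketch("hello\nworld", "hello\nthere")
except Failure as exc:
    print(exc)  # shows -world / +there in a unified diff
```

Raising `Failure` rather than a bare `AssertionError` is what lets the runner conceal the stacktrace and show only the diff.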
Using HitchStory: Documentation Generation
How to autogenerate documentation from your tests:
Using HitchStory: Inheritance
Inheriting stories from each other:
- Inherit one story from another simply
- Story inheritance - given mapping preconditions overridden
- Story inheritance - override given scalar preconditions
- Story inheritance - parameters
- Story inheritance - steps
- Variations
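The idea behind the overridden-preconditions features above is that a child story's `given` is layered over its parent's: mappings are merged key by key, and child scalars replace parent scalars. A sketch of that merge rule (an illustration under those assumptions, not hitchstory's actual merge code):

```python
def merge_given(parent: dict, child: dict) -> dict:
    """Combine a base story's preconditions with a child's overrides.

    Nested mappings are merged key by key; scalar values in the
    child replace those in the parent.
    """
    merged = dict(parent)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(parent.get(key), dict):
            merged[key] = merge_given(parent[key], value)
        else:
            merged[key] = value
    return merged


base = {"browser": "firefox", "user": {"name": "james", "admin": False}}
override = {"user": {"admin": True}}
print(merge_given(base, override))
# {'browser': 'firefox', 'user': {'name': 'james', 'admin': True}}
```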
Using HitchStory: Runner
Running the stories in different ways:
- Continue on failure when playing multiple stories
- Flaky story detection
- Play multiple stories in sequence
- Run one story in collection
- Shortcut lookup for story names
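"Continue on failure" from the list above means playing every story in a batch and collecting failures instead of aborting on the first one. A minimal sketch of that pattern, with a hypothetical `StoryFailure` exception and stories represented as (name, callable) pairs:

```python
class StoryFailure(Exception):
    pass


def play_all(stories, continue_on_failure=True):
    """Play stories in sequence; optionally collect failures
    instead of stopping at the first one."""
    failures = []
    for name, play in stories:
        try:
            play()
        except StoryFailure as exc:
            if not continue_on_failure:
                raise
            failures.append((name, exc))
    return failures


def ok():
    pass


def broken():
    raise StoryFailure("button analytics not found")


results = play_all([("log in", ok), ("analytics", broken), ("log out", ok)])
print([name for name, _ in results])  # ['analytics']
```

With `continue_on_failure=False` the first `StoryFailure` propagates immediately, matching the fail-fast behaviour of an ordinary test run.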
Approach to using HitchStory
Best practices, how the tool was meant to be used, etc.
- Is HitchStory a BDD tool? How do I do BDD with hitchstory?
- Complementary tools
- Domain Appropriate Scenario Language (DASL)
- Executable specifications
- Flaky Tests
- The Hermetic End to End Testing Pattern
- ANTIPATTERN - Analysts writing stories for the developer
- Separation of Test Concerns
- Snapshot Test Driven Development (STDD)
- Test Artefact Environment Isolation
- Test concern leakage
- Tests as an investment
- What is the difference between a test and a story?
- The importance of test realism
- Testing non-deterministic code
- Specification Documentation Test Triality
Design decisions and principles
Design decisions are justified here:
- Declarative User Stories
- Why does hitchstory mandate the use of given but not when and then?
- Why is inheritance a feature of hitchstory stories?
- Why does hitchstory not have an opinion on what counts as interesting to "the business"?
- Why does hitchstory not have a command line interface?
- Principles
- Why does HitchStory have no CLI runner - only a pure python API?
- Why Rewritable Test Driven Development (RTDD)?
- Why does HitchStory use StrictYAML?
Why not X instead?
HitchStory is not the only integration testing framework. This is how it compares with the others:
- Why use Hitchstory instead of Behave, Lettuce or Cucumber (Gherkin)?
- Why not use the Robot Framework?
- Why use hitchstory instead of a unit testing framework?
Using HitchStory: Setup on its own
If you want to use HitchStory without pytest:
Using HitchStory: Behavior
Miscellaneous docs about behavior of the framework: