Mimetic Tests
A mimetic test mirrors the implementation of code rather than testing the behavior.
When such a test passes, it does not mean that the code works; when it fails, it does not mean that the code is broken.
The most common type of mimetic test is a unit test surrounding "integrating code" - code that connects rather than calculates, that calls APIs rather than makes decisions.
For example, take this snippet of code, which sends a message on a queue:
def send_message(queue, body, user_id):
    """Send body to the queue, tagged with the id of the sending user."""
    return queue.send_message(
        MessageBody=body,
        MessageAttributes={
            "UserId": {
                "StringValue": user_id,
                "DataType": "Number",
            }
        },
    )
It would be mimetically tested with:
import boto3
from uuid import uuid4
from moto import mock_sqs


@mock_sqs
def test_message_send_with_attributes():
    sqs = boto3.resource("sqs", region_name="us-east-1")
    queue = sqs.create_queue(QueueName=str(uuid4())[0:6])
    msg = send_message(queue, "test body", "1234")
    assert "MessageId" in msg
An integration mimetic test is typically written as a result of trying to surround all or most code with a unit test of some kind.
It is particularly common when unit test driven development is applied to integration code, or when unit tests are written to boost code coverage statistics.
An algorithmic mimetic test is one where the developer writes the code for a complex calculation, writes a test that runs the calculation, and plugs the code's own output directly into the test as the expected value.
For example, writing a piece of code like this:
def regression(data):
    """Regression"""
    average_x = 0.0
    average_y = 0.0
    for i in data:
        average_x += i[0] / len(data)
        average_y += i[1] / len(data)
    total_xy = 0
    total_xx = 0
    for i in data:
        total_xx += (i[0] - average_y) ** 2
        total_xy += (i[0] - average_x) * (i[1] - average_y)
    m = total_xy / total_xx
    c = average_y - m * average_x
    return m, c
could be followed up with a unit test like this:
def test_regression():
    gradient, intercept = regression(
        {
            (2, 6),
            (4, 1),
            (4, 4),
        }
    )
    assert gradient == -1.5555555555555558
    assert intercept == 8.851851851851851
The developer assumes that the output value is correct. If they are lucky, it is. If it is not, the test locks the bug in - remaining green while the bug is present and breaking if it is ever fixed. That is exactly what happens here: the total_xx sum above subtracts average_y where a least squares fit requires average_x, and the expected values in the test were copied from that buggy output.
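A minimal sketch of the corrected calculation makes the lock-in visible (assuming the intent was an ordinary least squares fit; the sum() rewrites are for brevity only):

def regression_fixed(data):
    """Least squares fit with the total_xx sum corrected to use average_x."""
    average_x = sum(x for x, _ in data) / len(data)
    average_y = sum(y for _, y in data) / len(data)
    total_xx = sum((x - average_x) ** 2 for x, _ in data)
    total_xy = sum((x - average_x) * (y - average_y) for x, y in data)
    m = total_xy / total_xx
    c = average_y - m * average_x
    return m, c

# The same data now yields a gradient of approximately -1.75 and an
# intercept of approximately 9.5 - so fixing the bug breaks the test.
print(regression_fixed({(2, 6), (4, 1), (4, 4)}))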
An algorithmic mimetic test is harder to detect in a pull request than an integration mimetic test, as it is usually impossible to tell whether the expected calculation outputs have been independently verified.
Mimetic tests usually occur in environments where developers feel that the level of testing is just barely sufficient or outright insufficient.
Where they exist on code that developers are editing, they can lead to a pattern where a developer:
- Makes a change to the code.
- Makes a change to the unit test or tests surrounding that code to accommodate that change.
- Ignores the result of the unit test entirely.
- Manually tests the change to gain confidence.
In such cases both the writing and the maintenance of the test are a deadweight loss.
For algorithmic code, mimetic tests can be partly avoided by doing TDD properly - TDD wherein the developer works out the right value for the calculation independently, before writing the code.
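For example (a sketch, assuming the expected values were worked out by hand from the least squares formulas before the implementation existed; pytest.approx guards against harmless float noise):

import pytest


def test_regression_tdd():
    # -1.75 and 9.5 were calculated by hand from the least squares
    # formulas, not copied from the code's output.
    gradient, intercept = regression({(2, 6), (4, 1), (4, 4)})
    assert gradient == pytest.approx(-1.75)
    assert intercept == pytest.approx(9.5)

Written first, this test fails against the buggy implementation above - which is precisely the feedback a mimetic test cannot give.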
For integration code, mimetic tests can be avoided by writing integration tests at a higher level.
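For instance, a higher level test might assert that the message actually arrives intact, rather than mirroring the call that sends it (a sketch, reusing the moto mock from above; a real suite might run this against a real queue):

import boto3
from moto import mock_sqs


@mock_sqs
def test_message_round_trip():
    sqs = boto3.resource("sqs", region_name="us-east-1")
    queue = sqs.create_queue(QueueName="round-trip-test")
    send_message(queue, "hello", "42")
    # The test passes only if the message is actually receivable with
    # its body and attribute intact - behavior, not implementation.
    (received,) = queue.receive_messages(MessageAttributeNames=["UserId"])
    assert received.body == "hello"
    assert received.message_attributes["UserId"]["StringValue"] == "42"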