Background
In a typical beamline definition script, a sequence of devices is defined and a device factory decorator is applied to each.
Issue
The issue is that if a decorator is omitted from a single device, there is currently nothing to detect this
(except for the Mark 1 human eyeball).
Task
Make some form of automated test that will detect any discrepancies.
Caveat
This should not require a separate source of ground truth to provide a set of expected devices.
That would merely create the potential for two sources of error and the classic "Which one do you believe?" problem.
One simple option might be to insist that every Python `def` in such definition files carries a decorator.
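As a minimal sketch of that option: the test could parse each definition file with the standard-library `ast` module and flag any module-level `def` whose decorator list is empty. This avoids a second source of truth because the script itself is the only input. The `device_factory` name and the inline script below are illustrative placeholders, not the real beamline code; a real test would read the actual files from the beamline directory.

```python
import ast
import textwrap


def undecorated_defs(source: str) -> list[str]:
    """Return names of module-level function defs that have no decorator."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.decorator_list
    ]


# Hypothetical beamline-style script: one device is missing its decorator.
script = textwrap.dedent("""
    @device_factory()
    def synchrotron():
        ...

    def undulator():
        ...
""")

print(undecorated_defs(script))  # ['undulator']
```

A pytest wrapper would simply assert that the returned list is empty for every definition file, so the failure message names exactly which device lost its decorator.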
Acceptance Criteria
- A beamline script with all decorators present and correct passes the new test
- A beamline script with a decorator forgotten (or deliberately removed in a local repo just to exercise the test) causes the same test to fail