The Case for Self-Describing Systems
Research suggests that between 30% and 60% of the effort involved in managing processes such as IFRS9, RWA, and regulatory stress testing is spent on documentation: not building models, but explaining how they were deployed, evidencing controls, attesting to system configurations, and completing internal audit reviews. This drains time and energy from risk, modelling, and finance teams that would be better spent on higher-value analysis and decision-making.
We believe documentation shouldn’t be a separate task. It should be a natural by-product of using the system: automatically captured, complete, and audit-ready by design.
Beyond Narrative
In recent years, many risk teams have begun exploring tooling — including generative AI — to assist with model development documentation. This may help automate the initial write-up of model rationale or training data, but it only addresses a narrow slice of the problem.
The far larger and more persistent burden lies after model approval: the manual documentation and control evidencing required when models are embedded in live production processes.
Self-Describing vs. Self-Documenting: What’s the Difference?
These two concepts are often used interchangeably, but they solve subtly different problems.
🔹 Self-Describing
A self-describing system can explain its own structure — such as inputs, outputs, configuration, dependencies, and constraints. It’s designed so someone (or something) can understand how to interact with it, even without external instructions. Think of it as: “You can inspect this and understand what it is and how it works.”
Examples:
An API with OpenAPI/Swagger definitions
A dataset with clear column types, units, and lineage
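To make the idea concrete, here is a minimal Python sketch of a self-describing component. The `CreditModelConfig` class, its field names, and its values are all hypothetical illustrations, not any real system's API: the point is simply that the object can emit its own schema on request, with no external documentation.

```python
from dataclasses import dataclass, fields

@dataclass
class CreditModelConfig:
    """A hypothetical component that can describe its own structure on request."""
    pd_floor: float = 0.0003                          # minimum probability of default
    lgd_segment: str = "unsecured"                    # loss-given-default segment
    input_columns: tuple = ("exposure", "rating", "tenor")

    def describe(self) -> dict:
        """Emit a machine-readable schema: field names, runtime types, current values."""
        return {
            f.name: {"type": type(getattr(self, f.name)).__name__,
                     "value": getattr(self, f.name)}
            for f in fields(self)
        }

config = CreditModelConfig()
schema = config.describe()   # inspectable without any external instructions
```

Anyone (or any tool) holding `config` can now answer "what is this and how do I interact with it" directly from the object itself.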
🔹 Self-Documenting
A self-documenting system can explain what happened as part of its operation. It produces a traceable, reliable record of actions without requiring manual intervention. Think of it as: “You can inspect this and understand what was done, when, and by whom.”
Examples:
Git repositories with full commit history
Workflow engines that log every step and user action
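The same idea as a minimal Python sketch: every action automatically leaves a structured audit event behind. The `deploy_model` action, the user name, and the in-memory `AUDIT_LOG` list are hypothetical; a real system would write to an append-only, tamper-evident store.

```python
import datetime
import functools

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited(action):
    """Decorator: record every call as a structured audit event, automatically."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, user, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "action": action,
                "user": user,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "args": kwargs,
            })
            return result
        return inner
    return wrap

@audited("deploy_model")
def deploy_model(model_id, version):
    return f"{model_id}:{version} deployed"

deploy_model("ifrs9_pd", version="2.1", user="jsmith")
```

No one wrote documentation here; the record of what was done, when, and by whom is a by-product of using the system.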
Why Both are Important
In regulated model environments, you need both self-description and self-documentation to reduce the burden of manual tracking and evidence gathering. Whether the focus is data lineage, model oversight, or financial control, documentation downstream of model development must answer:
How production processes are built up from models, data, and parameters, and how outputs relate back to those elements (self-describing)
How, when, and why each component has changed, and how those changes align with internal decisions and formal approvals (self-documenting)
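To show how the two properties might combine in practice, here is a hedged Python sketch of a "run record" that binds one production output to the exact components that produced it. Every name here (`run_record`, the approval reference `MRC-2024-117`, the field layout) is illustrative, not a description of any particular product.

```python
import hashlib
import json

def run_record(model, data_version, params, approvals):
    """Bind one production run to the exact components it used."""
    components = {
        "model": model,                # which model, at which version
        "data_version": data_version,  # which data snapshot
        "params": params,              # which parameter values
    }
    # A content hash makes each output traceable back to its inputs:
    # the same components always yield the same run identifier.
    digest = hashlib.sha256(
        json.dumps(components, sort_keys=True).encode()
    ).hexdigest()
    return {"components": components, "approvals": approvals, "run_id": digest[:12]}

record = run_record(
    model={"id": "ifrs9_pd", "version": "2.1"},
    data_version="2024Q4-final",
    params={"pd_floor": 0.0003},
    approvals=["MRC-2024-117"],   # hypothetical approval reference
)
```

The `components` section answers the self-describing question (what the process is built from); the `approvals` link and the reproducible `run_id` answer the self-documenting one (which decisions each output can be traced back to).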
A Different Type of Intelligent System
In most financial institutions today, production model environments are little more than end-user computing (EUC) tools with additional access controls. Everything else, from documenting what was deployed to tracking how it is used and when it changes, is done manually.
This is inefficient, error-prone, and a major barrier to agility.
We’ve taken a different approach, building trac to be both self-describing and self-documenting: not just for convenience, but as a foundation for trust, transparency, and operational scale. This isn’t the kind of “intelligent machine” most people are writing about these days. But it is exactly the kind of intelligence banks need to shrink the burden of documentation and scale model governance with confidence.