Portable Models: Killing the traditional route-to-live
In many banks, updating models in production systems can take weeks, or sometimes months, because models must be re-coded as they move from development to production. This is typically for one of two reasons:
Different environments: Production uses a different language or execution framework from the one used by model developers
Data handling: Data extraction, validation, and formatting are managed inside the model code, and input sources differ between development and production
As a result, every deployment or significant model change becomes a mini-project: rewriting the data wrangling steps, retesting model logic, validating outputs, and updating documentation. It’s slow, costly, and unnecessary.
The Portability Principle
Model portability has become a core principle of MLOps — the discipline focused on streamlining the deployment and management of machine learning models as enterprise assets. The idea is simple: a model should run in any environment without modification, because it is decoupled from both the infrastructure and the physical data sources. This is typically achieved through:
Standardised interfaces for inputs, outputs, and parameters
Containerised or environment-agnostic execution
Separation of concerns — the model doesn’t “know” where its data comes from, it just receives the right data in the right format
The same principle applies to highly regulated models in banking (i.e. “structural analytics”).
As the industry moves from closed, proprietary model platforms to open, modern technology, there’s no reason that the same code shouldn’t be able to run in development, pre-production, and production — without having to re-write a single line.
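To make the separation of concerns concrete, here is a minimal sketch in plain Python, with hypothetical function, table, and column names, contrasting a model that manages its own data access with one that simply receives the data it needs:

import pandas as pd

# Coupled version: the model knows where its data lives, so every new
# environment means re-writing the extraction and wrangling steps.
def score_accounts_coupled(db_connection_string: str) -> pd.DataFrame:
    import sqlalchemy  # environment-specific dependency
    engine = sqlalchemy.create_engine(db_connection_string)
    accounts = pd.read_sql("SELECT account_id, score FROM risk.accounts", engine)
    accounts["pd_12m"] = 1.0 / (1.0 + accounts["score"])
    return accounts[["account_id", "pd_12m"]]

# Portable version: the model declares what it needs and receives it.
# The caller (a local script, a scheduler, or a platform runtime) is
# responsible for sourcing the input and persisting the output.
def score_accounts_portable(accounts: pd.DataFrame, calibration: float) -> pd.DataFrame:
    scored = accounts.copy()
    scored["pd_12m"] = calibration / (1.0 + scored["score"])
    return scored[["account_id", "pd_12m"]]

The portable version runs identically in a notebook, a test harness, or production, because nothing about the environment leaks into the model logic.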
How TRAC Achieves It
Model portability can be achieved in several ways. We do it via the TRAC runtime service, a Python package that provides the context in which all models are executed. There are four core elements that make this possible:
Model API: The runtime service instantiates a standard Model API, which defines how models interact with the outside world.
Self-describing models: Models are wrapped in a function that declares their schema (inputs, outputs, and parameters) to the runtime service.
Type-safe data provision: Data is provided to the model via the declared schema — if the available data doesn’t match, the model won’t run.
Output conformance: The model can only return the outputs it declared upfront in its schema, so there are no surprises when it runs.
The outcome is simple but powerful: executing models via the runtime service guarantees that they behave identically wherever they run. By decoupling the model from its data sources, you eliminate the need for re-coding in the route-to-live.
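As an illustration, the sketch below shows the shape of a self-describing model written against the open-source tracdap-runtime package. The class and method names follow the published API documentation as we recall it and may vary by version; the schema and calculation are invented purely for illustration.

import typing as tp
import tracdap.rt.api as trac

class RegionalProfitModel(trac.TracModel):

    # Declare parameters up front, so the runtime can validate them
    def define_parameters(self) -> tp.Dict[str, trac.ModelParameter]:
        return trac.define_parameters(
            trac.P("eur_usd_rate", trac.FLOAT, label="EUR/USD rate used for reporting"))

    # Declare the input schema: if the supplied data doesn't match, the model won't run
    def define_inputs(self) -> tp.Dict[str, trac.ModelInputSchema]:
        customer_loans = trac.define_input_table(
            trac.F("region", trac.STRING, label="Customer region"),
            trac.F("gross_profit", trac.FLOAT, label="Gross profit in EUR"))
        return {"customer_loans": customer_loans}

    # Declare the output schema: only declared outputs can be returned
    def define_outputs(self) -> tp.Dict[str, trac.ModelOutputSchema]:
        profit_by_region = trac.define_output_table(
            trac.F("region", trac.STRING, label="Customer region"),
            trac.F("gross_profit_usd", trac.FLOAT, label="Gross profit in USD"))
        return {"profit_by_region": profit_by_region}

    # The model never touches a file path or a database: it asks the
    # TracContext for its inputs and hands back its declared outputs
    def run_model(self, ctx: trac.TracContext):
        eur_usd_rate = ctx.get_parameter("eur_usd_rate")
        loans = ctx.get_pandas_table("customer_loans")

        profit_by_region = (
            loans.groupby("region", as_index=False)
                 .agg(gross_profit_usd=("gross_profit", "sum")))
        profit_by_region["gross_profit_usd"] *= eur_usd_rate

        ctx.put_pandas_table("profit_by_region", profit_by_region)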
Why This Matters
As banks migrate off SAS, adopt cloud-native architectures, and modernise their model estates, portability isn’t just a technical enhancement — it’s the foundation for efficient modelling operations. With portable models you get:
Efficiency: Significantly less effort to productionise a new model
Speed: Faster deployments and reduced time-to-value
Control: Lower chance of defects from re-implementation and a single validated artefact used across all environments
Flexibility: Move models between platforms without recoding
If you’re still re-coding models to get them live, you’re solving the wrong problem.
Why Open-Source
True portability isn’t just about running the same model across dev, test, and prod within a closed architecture — it’s about being able to take a model and run it anywhere, with no vendor lock-in. That’s why the TRAC (Python) Runtime is completely open-source. Whether you run your models locally, in the TRAC platform, or in any other execution environment, your model works exactly the same.