MASTIX

Products

ALM and derivatives analytics, built on one engine

Two products, one analytical foundation. Start with one or deploy both.

ALM Studio

Balance-sheet analytics with full attribution

See ALM Studio

What teams use it for

  • Model cash flows across the banking book in one framework.
  • Test rate shocks and assumption changes interactively.
  • Explain EVE, NII, and sensitivities with built-in attribution.

What it is not

Not a treasury operations system. It does not manage cash or payments. It is the analytical layer for understanding balance-sheet risk.

Derivatives Studio

Greeks and P&L attribution for trading desks

See Derivatives Studio

What teams use it for

  • Compute full sensitivity sets in one pass instead of rerunning per factor.
  • Run pre-trade analytics fast enough for desk decisions.
  • Explain risk and P&L consistently from trade to portfolio.

What it is not

Not a trading system. It does not book trades or manage positions. It is the analytical engine for understanding derivatives risk.

At a glance

A quick orientation to where each product starts.

Best fit

  • ALM Studio: Treasury, ALM, and banking-book risk workflows
  • Derivatives Studio: Trading-desk and derivatives workflows

Starts from

  • ALM Studio: Loans, deposits, securities, and banking-book derivatives
  • Derivatives Studio: Trades, portfolios, and desks

Typical questions

  • ALM Studio: Why did EVE or NII move? What if rates rise? Where did this IRRBB figure come from?
  • Derivatives Studio: What are the full Greeks? What drove the P&L move? What is the risk impact before execution?

Primary workflow

  • ALM Studio: Balance-sheet scenarios, committee questions, reporting, and audit trail
  • Derivatives Studio: Pre-trade analysis, hedging, desk decisions, and trade-to-portfolio risk

What both products share

Whether you start from the balance sheet or the trading book, the analytical foundation is the same.

Exact Sensitivity Engine

Adjoint Algorithmic Differentiation (AAD) computes exact sensitivities in the valuation itself.

Full Attribution

Decompose any change into rates, volumes, models, and assumptions.

Audit Trail

Trace any result back through the full calculation chain, from output to inputs.

Flexible Delivery

Use it from Python, C#, or Excel. Deploy on-premises or as a managed service.
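The "exact sensitivities in the valuation itself" claim rests on reverse-mode (adjoint) differentiation: one forward valuation plus one backward sweep yields exact derivatives with respect to every input at once. As a hedged illustration of the general technique (a toy tape-based reverse mode, not MASTIX's engine), applied to a single discounted cash flow:

```python
# Minimal reverse-mode (adjoint) differentiation sketch -- illustrative only.
# One forward valuation plus one backward sweep gives exact sensitivities
# to every input, instead of one bumped revaluation per risk factor.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent_var, local_derivative)
        self.adjoint = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def exp(x):
    v = math.exp(x.value)
    return Var(v, ((x, v),))

def backward(output):
    # Reverse topological sweep: propagate adjoints from output to inputs.
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(output)
    output.adjoint = 1.0
    for v in reversed(order):
        for parent, local in v.parents:
            parent.adjoint += v.adjoint * local

# Toy "valuation": discounted cash flow  pv = cf * exp(-r * t)
r, t, cf = Var(0.03), Var(2.0), Var(100.0)
pv = cf * exp(Var(-1.0) * r * t)
backward(pv)
# After one backward pass: r.adjoint is the exact dPV/dr,
# cf.adjoint the exact dPV/dcf, t.adjoint the exact dPV/dt.
```

The same backward sweep scales to any number of inputs, which is why the cost tracks model complexity rather than the number of sensitivities requested.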

Frequently Asked Questions

Common questions while choosing between the products

A short FAQ for teams comparing the two products and thinking through evaluation, implementation, and data readiness.

What is the difference between ALM Studio and Derivatives Studio?

ALM Studio is for treasury and ALM teams working with balance-sheet risk: EVE, NII, IRRBB scenarios, and banking-book attribution. Derivatives Studio is for trading desks that need Greeks and P&L attribution across derivatives portfolios.

The deciding factor is usually where the analytical need starts. If it starts from the balance sheet, ALM Studio. If it starts from the trading book, Derivatives Studio.

Can we use both products together?

Yes. Both products share the same analytical engine, so they work together without integration overhead. Some institutions deploy ALM Studio for the banking book and Derivatives Studio for the trading desk.

Does Derivatives Studio build its own curves, or can it use ours?

Derivatives Studio includes its own curve construction, but it can also consume curves from your existing market data or risk infrastructure. You choose what fits your workflow.

Either way, sensitivities to the curve inputs are computed exactly through Adjoint Algorithmic Differentiation (AAD), so attribution and Greeks are consistent with however the curves are built.

What is margin attribution?

Margin attribution decomposes changes in initial margin into the factors that drove them: new trades, market moves, model parameter changes, and portfolio effects.

Instead of seeing that margin went up and investigating manually, you get a breakdown of why it moved and how much each driver contributed.
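The decomposition described above can be illustrated with a generic step-wise revaluation sketch. The driver names and the toy margin() function below are assumptions for illustration, not the MASTIX model:

```python
# Hedged sketch of step-wise attribution: decompose a margin (or P&L) move
# into per-driver contributions by switching one driver at a time from the
# old state to the new state and revaluing after each switch.

def margin(state):
    # Toy stand-in for an initial-margin calculation.
    return state["trades"] * state["vol"] * state["scaling"]

def attribute(old, new, drivers):
    """Walk drivers old -> new; each step's delta is that driver's contribution."""
    state = dict(old)
    base = margin(state)
    contributions = {}
    for d in drivers:
        state[d] = new[d]
        stepped = margin(state)
        contributions[d] = stepped - base
        base = stepped
    return contributions

old = {"trades": 10, "vol": 0.20, "scaling": 1.0}
new = {"trades": 12, "vol": 0.25, "scaling": 1.1}
contrib = attribute(old, new, ["trades", "vol", "scaling"])
total = margin(new) - margin(old)
# The per-driver contributions sum to the total move by construction.
assert abs(sum(contrib.values()) - total) < 1e-9
```

Note that a simple walk like this is order-dependent: the interaction between two drivers lands in whichever step comes later. Production attribution schemes have to make that choice (or handle cross terms) explicitly.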

Who is ALM Studio for?

ALM Studio is designed for treasury and ALM teams at banks and financial institutions that need explainable balance-sheet analytics rather than static report production alone.

It fits teams that want contract-level drill-down, interactive scenario analysis, and one analytical foundation across valuation, sensitivities, attribution, and reporting.

How is ALM Studio different from traditional ALM systems?

The main difference is architectural. ALM Studio keeps valuation, sensitivities, scenarios, and attribution on one analytical foundation instead of splitting them across separate model chains.

That changes the workflow: teams can test alternatives during the working session, trace reported movements back to their drivers, and work through Excel, Python, or connected reporting tools without losing consistency.

How does an evaluation start?

The usual starting point is a guided demo or a lightweight benchmark exercise rather than a long procurement project. We can show the workflow on a representative portfolio and focus on the questions your team actually needs to answer.

Depending on the stage, the evaluation can use your own data, a curated subset, or a synthetic portfolio that mirrors the dynamics you care about.

Does the sensitivity calculation scale as the number of risk factors grows?

Yes. AAD computes the full sensitivity set in a single valuation pass, so it scales better than bump-and-reprice as the number of risk factors grows. The computational cost grows with the complexity of the model, not the number of sensitivities requested.

Do we need to replace our existing systems?

No. Both products are designed to work alongside existing infrastructure. They can consume curves, positions, and market data from your current systems and deliver results back through Python, C#, Excel, or REST.

The typical path is to start with a specific analytical workflow, not a full platform replacement.

How does the audit trail support model validation?

The audit trail is built in. Every result can be traced back through the calculation chain to its inputs, assumptions, and model choices. That gives model validation teams a clear path from output to explanation, which is typically harder to achieve with black-box or bump-and-reprice approaches.
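One generic way to realize "trace any result back through the calculation chain" is to record the operation and inputs on every intermediate value. A minimal sketch of that idea, as an illustration rather than the product's implementation:

```python
# Sketch of a traceable calculation chain: every intermediate result records
# its operation and inputs, so any output can be walked back to raw inputs.

class Node:
    def __init__(self, name, value, op="input", inputs=()):
        self.name, self.value, self.op, self.inputs = name, value, op, inputs

def combine(name, op, fn, *inputs):
    # Evaluate fn on the input values and keep the provenance alongside it.
    return Node(name, fn(*[i.value for i in inputs]), op, inputs)

def trace(node, depth=0):
    # Walk from an output back to its inputs, one indented line per step.
    lines = [f"{'  ' * depth}{node.name} = {node.value} ({node.op})"]
    for i in node.inputs:
        lines.extend(trace(i, depth + 1))
    return lines

rate = Node("rate", 0.03)
notional = Node("notional", 1_000_000)
accrual = combine("accrual", "multiply", lambda r, n: r * n, rate, notional)
print("\n".join(trace(accrual)))
```

Running the sketch prints the accrual result followed by the indented inputs it was computed from, which is the shape of explanation a validator wants: output first, drivers underneath.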

What does a typical implementation look like?

A typical implementation starts with scoping the use case, mapping source systems, and agreeing which instruments, assumptions, and outputs matter first. From there, the model is built, validated, and used for the first scenario or reporting workflows.

The exact sequence depends on the scope, but the aim is the same: get from raw source data to a repeatable analytical workflow without turning the project into a multi-year rebuild.

What if our source data is not clean?

That is normal. Most teams do not start with perfectly harmonized source data, so the practical question is what is good enough for the first useful workflow and what should be improved over time.

A sensible rollout usually handles this in phases: start with the data needed for the target use case, make modelling assumptions explicit, and tighten the data layer as the workflow matures.

How long until we see results?

That depends on the use case, the complexity of the portfolio, and how ready the source data is. A benchmark or demo can usually produce insight much earlier than a full production rollout.

The important distinction is between seeing analytical value and completing full implementation. Teams can often validate the workflow first and expand the production scope after that.

See It on Your Portfolio

The difference is clearest when you see attribution and scenario analysis on a portfolio that looks like yours.