The Theory of Escrow

From time immemorial, attorneys negotiating technology deals have recommended that software licensees push for escrow of source code. The original theory behind escrow was simple and reasonable: in the world of 1990s or 2000s software, source code escrow could eliminate counterparty risk.

If the licensor who built your application went bankrupt, lost its key persons, or simply failed to meet its contractual obligations, the licensee could trigger its escrow rights and assume the role of software developer or maintainer. Like forking an open-source component, escrow rights could give organizations insurance to manage future bug fixes or security patches themselves.

Into the Cloud(s)

Over time, however, the nature of software consumption has changed. Increasingly, software is consumed not via CD-ROM or download, but through web applications, web services, or thin clients bundled into Software-as-a-Service (SaaS) or cloud subscriptions. Applications designed as self-contained executables are now the exception; the majority of “desktop” or “mobile” applications now rely on remote interaction for some or all of their functionality. This remote interaction typically involves connecting to vendor APIs running in complex, multi-tenant architectures that require specialist knowledge and high operating expense to maintain.

As a result, many attorneys now accept the idea that source code escrow may not be as relevant today as it once was. More and more deals are omitting escrow rights or obligations, and few cloud or SaaS agreements today even mention such terms.

Are licensees setting themselves up for future regret? Will the ghost of counterparty risk or zombie vendor lock-in rear its head again? Only time will tell.


But while many organizations are at least conscious of the risks related to SaaS or cloud providers, there is another risk that looms even larger over many organizations: the loss of machine learning or AI assets. If you thought labor relations could be thorny, just wait until your “AI robo-replacement” doesn’t show up to work or starts acting funny.

For many buyers, software vendors today create value not through traditional declarative logic or user experiences, but through inferential platforms that create machine learning or artificial intelligence assets. These platforms collect data from their users, sometimes combining it across customers, and then train machine learning models from the feedback. In some cases, these platforms begin with large-scale, pretrained models built by organizations like Google, Facebook, or OpenAI. In other cases, the models begin and end within the four corners of the cloud provider or licensor.
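The feedback loop described above can be sketched in a few lines. This is a deliberately toy illustration, not any real vendor’s pipeline: a simple linear model whose weights (the vendor-owned asset) are updated online from feedback pooled across customers. All names here are hypothetical.

```python
# Toy illustration of a vendor platform updating a shared model from
# pooled customer feedback. Real platforms use far more complex
# pipelines and often start from large pretrained models.

def update_model(weights, features, label, lr=0.1):
    """One online-learning step: nudge weights toward the feedback label."""
    prediction = sum(w * x for w, x in zip(weights, features))
    error = label - prediction
    return [w + lr * error * x for w, x in zip(weights, features)]

# Feedback pooled across tenants: (feature vector, label) pairs.
feedback = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]  # the trained asset the vendor owns -- not the licensee
for features, label in feedback:
    weights = update_model(weights, features, label)
```

The point of the sketch is that the value accretes in `weights`, not in the code: the training logic is trivial to reproduce, but the trained state and the feedback data are what a licensee loses when the vendor disappears.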

Either way, if the goal is to remove counterparty risk, source code escrow doesn’t cut it. At best, you might get the “production” feature engineering pipeline or neural network architecture. Is that enough to retrain or maintain the model from an R&D perspective? Slim chance.

No matter how models are trained, the reality is that many organizations today rely on machine learning models across their business – models that are often owned by third parties. Whether it’s something as simple as an email spam filter or as complicated as a legal contract automation tool, there are hundreds of these “hidden” machine learning models inside your company. By its very nature, this “AI automation” often scales transparently to take on more and more activity; before you know it, significant dependence and business continuity risks emerge.

Compliance and Cashflow

Losing these third-party machine learning models can have a range of potential impacts. Some of these impacts might be specific to litigation or regulatory action. For example, if key workflows related to sensitive decisions like underwriting a borrower or screening a prospective employee are made using a third-party machine learning model, organizations might regret not being able to audit or explain these models at a later date. As state and federal agencies pursue issues related to algorithmic bias more frequently, access to the source code or training data of a now-defunct service provider might be key for large lenders or employers.

More operationally, many organizations create value or reduce risk through these machine learning models. Whether through information security risk reduction or through margin expansion, third-party machine learning models may be the difference between profit and insolvency for some companies. If the service provider that collects their data and owns critical machine learning models goes out of business, what can the “subscriber” or licensee do?

The answer, just as before, is escrow. But rather than focusing the scope of escrow on software source code, organizations should identify and escrow key intellectual property like machine learning models and training data. In the event that a service provider stops providing access or updates to these models, organizations need an insurance policy.
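In practice, an escrow deposit of ML assets can be inventoried and later verified much like a source code deposit: fingerprint each artifact and record the hashes in a manifest. The sketch below uses only standard-library tools; the file names and manifest format are hypothetical, not any escrow agent’s standard.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Fingerprint an escrowed artifact so a later release can be verified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(artifacts: dict) -> str:
    """Build a JSON manifest naming each deposited ML asset and its hash."""
    entries = {name: sha256_of(p) for name, p in artifacts.items()}
    return json.dumps(entries, indent=2, sort_keys=True)

# Hypothetical deposit: trained weights plus the training data behind them.
deposit_dir = Path("escrow_deposit")
deposit_dir.mkdir(exist_ok=True)
(deposit_dir / "model_weights.bin").write_bytes(b"\x00\x01\x02")
(deposit_dir / "training_data.csv").write_text("feature,label\n1,0\n")

manifest = build_manifest({
    "weights": deposit_dir / "model_weights.bin",
    "training_data": deposit_dir / "training_data.csv",
})
```

A manifest like this lets the licensee confirm, on each periodic deposit, that the vendor actually escrowed new model versions rather than a stale snapshot.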

So, next time you think about striking that escrow clause from your cloud subscription or software license agreement, think twice. Should you, as a licensee, be asking for access to training data, model architectures, or trained models? If you’re a licensor or cloud provider, can you differentiate yourself from the RFP pack by winning over customer procurement and legal departments?

Like most risks, this one doesn’t go away when we ignore it. As many cloud providers struggle with profitability and a tough funding environment, now might be a good time to button up these terms before it’s too late.