Immunefi Arbitration: Making the internet's first court for bug bounty disputes
Summary:
We created the world's first dispute resolution system (a court, if you will) for vulnerability disclosures, focused on critical onchain vulnerabilities.
We had to do this; otherwise, billions of dollars would likely be lost to future hacks as fearful security researchers refuse to disclose vulnerabilities (or worse, execute the hacks themselves).
Immunefi Arbitration sets a new high bar for security in bug bounty transactions globally; rulings are legally binding and enforceable worldwide. There are no comparable solutions, either onchain or in web2.
We believe Immunefi’s court will bring a gradual end to the problem of bad faith projects, making bug bounties safe for all future waves of security researchers.
The Immunefi team and I have just launched the first dispute resolution system for onchain bug bounties, and the odds are good that it will change the bug bounty world for the better. Through Immunefi Arbitration, security researchers and projects receive globally enforceable (covering 172 countries) resolutions by some of the best arbitrators in the world, provided by the London Chamber of Arbitration and Mediation. Such arbitration ensures that good-faith security players will be protected from abuse.
But what does that even mean? And why should you care? To answer this, I’ll reveal a few of the big challenges of running a bug bounty platform, why no one has made a serious effort to protect whitehat interests, and why we think we’ve built a strong solution for safeguarding both projects and security researchers.
But let’s start at the beginning.
In the beginning, there was the bug bounty program. It was a great idea; bug bounty programs let you hunt on all sorts of different projects and technologies whenever you feel like it. You can become a security champion, proactively reviewing code and disclosing groundbreaking vulnerabilities before they can be exploited. The results of web3 bug bounties are undeniable: They reliably surface critical vulnerabilities that other security measures miss and prevent countless billions of dollars in damages. Web3 bounties bring security researchers fame, fortune, and respect.
But there’s a big problem with bug bounty programs: despite being binding legal agreements between counterparties, they still depend on trust to work. The project hosting the bug bounty program has to trust that security researchers will responsibly report vulnerabilities rather than exploit them, and security researchers have to trust that projects will actually pay out the bounties they’ve promised at an appropriate level, per the terms of their bounty programs.
The counterparties must trust each other to execute the contract as expected, and relying on trust is not easy between two absolute strangers. But it gets worse, because trust is diminished from the start by the following factors:
The counterparties are typically unknown to each other, differing in just about every way. They are random internet strangers.
Bug bounty programs are contracts designed to surface and specify rewards for unknown vulnerabilities. Since the desired vulnerabilities are by definition unknown, specifying validity criteria and appropriate reward tiers in advance is challenging.
Counterparties are incentivized to disagree with one another. Security researchers are financially incentivized to inflate severity to maximize bounty size and reputation, while the security professionals running bug bounty programs are incentivized to minimize bounties, both to safeguard project treasuries and to downplay any perceived error on their side.
Security researchers have to disclose their vulnerability upfront for it to be evaluated for a bounty, and this removes much of the negotiating leverage they might have in determining reward amount and payout.
This is not an ideal start to what should be a positive-sum transaction that makes everyone better off, and if not managed, this dynamic can lead to negative consequences.
For example, in some cases, security researchers who would otherwise find and disclose high-impact vulnerabilities choose not to do so. The second-order consequence is that hacks that could have been prevented instead occur. This is caused by:
Security researchers refusing to hunt.
Security researchers finding vulnerabilities, but refusing to disclose due to lack of trust in bug bounties.
Security researchers (blackhats) finding vulnerabilities and exploiting them due to lack of perceived safety in responsibly disclosing via a bug bounty.
These consequences lead to data breaches in web2, but in the onchain economy, they lead to billions of dollars in losses, and millions of ordinary people are caught in the crossfire.
Bug bounty platforms like Immunefi are designed to solve these problems. As Immunefi has shown with $110m+ in payouts to security researchers, some platforms do succeed in bringing trust and security to these transactions.
But it’s not enough; a single missed hack can compromise billions of dollars. Bug bounties need to operate at maximum effectiveness so that the absolute maximum number of vulnerabilities are disclosed and mitigated.
That requires that trust in bug bounties be maximized and that these problems be mitigated or resolved as far as possible. And we’re just not there yet; even though whitehats get paid every single day on Immunefi, I still get questions from security researchers about whether Immunefi still has a payment enforcement problem and whether it's really safe for them to spend time bug hunting.
These are fair questions. Even though the vast majority of cases end well, a relevant minority of cases leave some security researchers dissatisfied.
So, we need a solution that provides the maximal trust assurances to bug bounty transactions, minimizing the possibility of abuse and thereby making bug bounties as safe as they can practically be. Such a solution would have the following necessary qualities:
In case of uncertainty or dispute, it must be possible to objectively evaluate vulnerability disclosures and assess their severity and reward amount according to the bug bounty program.
It must be able to deliver legally enforceable rulings; the conclusions should be able to withstand scrutiny in any legitimate court of law and be recognized by existing legal systems globally.
It must enable realistically enforceable bounties; the rulings should be practically enforceable within the means of most security researchers.
You might think that such a solution would already exist, given that bug bounty platforms have been around for over a decade. Instead, all I’ve found are the reasons why one doesn’t exist.
Since the rise of the bug bounty platform, with its relatively small payouts (typically five figures), projects have had all the leverage. Rarely has a bounty been worth the immense hassle that a traditional court case and enforcement entail, when the typical case lasts years and can easily cost tens of thousands of dollars. If the case is international or against a large corporation, you could expect the costs to run into the hundreds of thousands. There have been some notable cases in this area, and when they happen, the process itself virtually kills any win-win outcome because of cost and time.
Disputes over the occasional web2 six-figure bounty have occurred, but web2 bug bounty platforms have had little incentive to develop a comprehensive trust assurance solution. Development would be expensive and time-consuming, would require specialized expertise, and enforcement could harm customer adoption.
Furthermore, far more hackers were available to work than companies hosting programs. The incentives simply pushed against protecting whitehat interests, given that doing so would create such a big headache for traditional web2 bounty platforms.
But what about crypto? Trust assurance is a normal commercial problem, so it seemed safe to assume solutions would be available for use, but this was not so. Here are a few examples of things I found that didn’t work, and why:
Traditional court systems: There were many problems with relying on existing courts to secure whitehat interests. First, they are incredibly slow, taking years to resolve cases. Second, they are expensive, and variably so; a case appealed over and over could cost enormous sums. Third, they are necessarily local; you could win a case at home but fail to have it enforced against a counterparty in their jurisdiction. Fourth, they have no understanding of crypto and generally disdain the subject; being tried by a judge who hates you from the start is a recipe for bad judgments.
Zero-knowledge proof of exploit: Zkpoex is a much-loved topic in the onchain security community because it seems like a panacea to this whole problem, by letting security researchers prove impact without disclosing the vulnerability until it has been paid for! Practically, there are many problems with this. First, most projects have no interest in using this technology in this way, because in this context its use feels obviously extortionate. Second, Zkpoex does not work for all (most?) types of vulnerabilities, where impact can be challenging to demonstrate in a Zkpoex VM environment. Third, Zkpoex does not help you resolve complicated edge-case scenarios, which bug bounties routinely involve. Fourth, Zkpoex technology remains experimental and needs further investment before it can be applied to bug bounty platforms at scale. Fifth, Zkpoex-proven impact may not comply with the terms and requirements of the program. Sixth, Zkpoex turns the tables on projects by forcing them to put up capital first, and in edge cases you still need some kind of dispute resolution solution to evaluate claims and make legal judgments. In my view, Zkpoex may have a future place in the bug bounty workflow (it’s certainly an interest of mine!) but it does not actually solve the trust problem meaningfully; it just flips the power dynamic in a similarly dysfunctional way.
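For readers who want to see the shape of the idea, here is a toy Python sketch of the zkpoex-style payment flow described above. It is purely illustrative: the zero-knowledge proof itself is stubbed out, the names (ExploitClaim, verify_proof_of_impact, settle) are invented for this example, and nothing here reflects real zkpoex tooling.

```python
# Toy sketch of a zkpoex-style disclosure flow (illustrative only).
# A real scheme would replace verify_proof_of_impact with a zero-knowledge
# verifier that checks an exploit trace against the protocol's state in a VM,
# without revealing the exploit itself. Here it is a stub.

import hashlib
from dataclasses import dataclass


@dataclass
class ExploitClaim:                 # hypothetical structure, not real tooling
    commitment: str                 # hash commitment to the exploit details
    claimed_impact_usd: int         # impact the researcher asserts they can prove
    proof: bytes                    # stand-in for a zk proof of that impact


def commit_to_exploit(exploit_details: str) -> str:
    """Researcher commits to the exploit without revealing it."""
    return hashlib.sha256(exploit_details.encode()).hexdigest()


def verify_proof_of_impact(claim: ExploitClaim) -> bool:
    """Placeholder for the hard part: a zk verifier that confirms the
    committed exploit really causes the claimed impact. Stubbed here."""
    return len(claim.proof) > 0  # NOT a real verification


def settle(claim: ExploitClaim, escrowed_bounty_usd: int) -> str:
    """Project escrows the bounty up front; payment releases if the proof
    verifies. Note the inversion of leverage described above."""
    if not verify_proof_of_impact(claim):
        return "proof rejected: no payment, exploit stays undisclosed"
    if claim.claimed_impact_usd > escrowed_bounty_usd:
        # Edge case: proven impact exceeds what the program escrowed.
        # This is exactly where a human dispute-resolution layer is still needed.
        return "dispute: impact exceeds escrow, escalate to arbitration"
    return "bounty released; researcher reveals details matching the commitment"


if __name__ == "__main__":
    commitment = commit_to_exploit("details withheld until payment")
    claim = ExploitClaim(commitment, claimed_impact_usd=5_000_000, proof=b"zk-proof-bytes")
    print(settle(claim, escrowed_bounty_usd=1_000_000))
```

Even in this idealized flow, the final branch shows why a dispute layer remains unavoidable: the moment the proven impact and the program terms disagree, someone still has to rule.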
Onchain dispute resolution: Dispute resolution native to crypto has been proposed by a few projects, most notably Aragon, Kleros, and UMA’s optimistic oracle. However, all such courts have failed to deliver reliable dispute resolution. Aragon Court has shut down with the wind-down of the Aragon DAO, Kleros is thoroughly compromised by scandals and has subordinated rulings to its own financial interests (as its token model dictates), and UMA’s optimistic oracle is incapable of making contextual, fine-grained judgments. Furthermore, none of these courts have real and reliable enforceability in most jurisdictions, never mind international enforceability of judgments.
A crypto-native security council: This was my original idea, but I would find out later that it doesn’t work well at all. The first major drawback of this approach is that security experts, while very knowledgeable about the underlying vulnerabilities, tend to be poor interpreters of the underlying contract (the bug bounty program) that takes primacy, unless they’ve managed such programs before. Second, rulings by such small groups create a huge opening for legal challenge on grounds of partiality and poor process; I would expect any dispute resolution system of this type to be challenged by either projects or security researchers in their local court of law and to be found wanting, and liable, as a result. When bug bounty payouts can land in the millions, the likelihood of lawsuits is very high. Third, it’s very difficult to make rulings by such groups legally enforceable in a meaningful way; you can get some powers under local contract law, but they are unlikely to extend beyond local borders, where most disputes actually occur and need enforcement. Fourth, projects generally feel that they cannot trust the impartiality of such adjudicators because the onchain security community is small and everyone knows each other, which is a very legitimate concern. For all these reasons, I concluded (painfully) that this approach would lead to reliably poor outcomes for security researchers.
It became clear that there could be only one solution: We would have to build a hybrid, onchain-offchain dispute resolution system that included international legal enforceability – all in a crypto-native manner that would be readily adoptable by our customers.
Immunefi Blockchain Arbitration System
Enter Immunefi Arbitration, the world’s first bug bounty arbitration system (and, as far as I know, the first court for software vulnerabilities of any kind) with legal and practical enforceability worldwide.
The Blockchain Expedited Arbitration Rules, designed with the brilliant legal experts over at Greenberg Traurig and the London Chamber of Arbitration and Mediation, solve the problem of ensuring global legal enforceability and impartial, objective evaluation of disputed bug reports, all within the means of the typical security researcher.
We built the entire legal system on the New York Convention to achieve global enforceability. Because the rules are built on the convention’s core requirements, awards issued under the Blockchain Expedited Arbitration Rules are enforceable across all of the New York Arbitration Convention’s signatories, which today number 172 countries. This means that Immunefi Arbitration awards have legal force across all countries colored gray in the map below; that’s as global as it gets!
We chose to base the arbitral system on English law to deliver judgments of the highest standard. The UK, for context, is the world’s leading hub for arbitration expertise and dispute resolution. There would be no better place to create a court that is the first of its kind.
To ensure both the impartiality of arbitrators and the best combination of speed and cost, we partnered with the London Chamber of Arbitration and Mediation (LCAM). In addition to having a great roster of world-class arbitrators, LCAM is an arbitral house eager to move into the onchain age. They’ve also been the core party in working out how to make disputes as fast and as low-cost as possible, resolving cases in weeks to months, not years (which is lightspeed as far as most courts are concerned), and bringing costs safely into the thousands with flat case fees.
As an added and crypto-native bonus, Greenberg Traurig figured out how to conduct proceedings with maximum privacy, ensuring that identities are only shared on an as-needed basis, allowing for pseudonymous use of Immunefi Arbitration. To the best of my knowledge, our arbitral system is the only one in the world that is safe for anons to use.
This gives us a first-of-its-kind hybrid Arbitration system for bug bounties that enables fast, low-cost dispute resolution from LCAM’s blockchain court.
We are exploring additional features for the Arbitration system that would make rulings even more enforceable and meaningful to security researchers, while staying entirely true to our crypto-native roots, where projects control their own funds.
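Purely as a speculative illustration of what such a crypto-native enforcement feature could look like (this is my own sketch, not a description of anything we have shipped), imagine a project-controlled vault that keeps custody of funds but honors an award signed by the arbitral authority. In the toy Python simulation below, every name is hypothetical, and the signature check is faked with an HMAC shared secret purely so the example runs with the standard library; a real design would verify the arbitrator’s published signing key onchain.

```python
# Hypothetical sketch: a project-controlled vault that honors arbitral awards.
# All names are invented for illustration; this is not Immunefi's design.

import hmac
import hashlib
from dataclasses import dataclass

ARBITRATOR_KEY = b"demo-arbitrator-key"  # stand-in for the arbitrator's signing key


@dataclass
class ArbitralAward:
    report_id: str
    researcher: str
    amount_usd: int
    signature: bytes


def sign_award(report_id: str, researcher: str, amount_usd: int) -> ArbitralAward:
    """Arbitrator issues and signs an award (HMAC stands in for a real signature)."""
    msg = f"{report_id}|{researcher}|{amount_usd}".encode()
    sig = hmac.new(ARBITRATOR_KEY, msg, hashlib.sha256).digest()
    return ArbitralAward(report_id, researcher, amount_usd, sig)


class ProjectVault:
    """The project keeps custody of its funds; the vault pays out only
    when a validly signed award is presented."""

    def __init__(self, balance_usd: int):
        self.balance_usd = balance_usd

    def execute_award(self, award: ArbitralAward) -> str:
        msg = f"{award.report_id}|{award.researcher}|{award.amount_usd}".encode()
        expected = hmac.new(ARBITRATOR_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, award.signature):
            return "rejected: award signature invalid"
        if award.amount_usd > self.balance_usd:
            return "rejected: vault underfunded (enforcement falls back to the courts)"
        self.balance_usd -= award.amount_usd
        return f"paid {award.amount_usd} USD to {award.researcher}"


if __name__ == "__main__":
    vault = ProjectVault(balance_usd=2_000_000)
    award = sign_award("report-001", "example-whitehat", 250_000)  # illustrative values
    print(vault.execute_award(award))
```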
If this Arbitration system works, the bug bounty dispute problem will be solved in a fair and transparent way that preserves the crypto ethos of self-custody of funds. Every security researcher in the world will receive the strongest possible trust assurances that if they hunt with Immunefi, their interests will receive the strongest possible protections in turn.
And adoption is looking good! We’ve found that many newly onboarding projects agree to arbitration. And it makes sense that they would! Arbitration is a far safer solution for them as well. Not only does it play a meaningful role in protecting them against the greatest and deadliest risk of all, getting hacked, but it also eliminates the risk of costly cross-border disputes and of time and money wasted on lawyers, and it lets a trustworthy, reputable party resolve disputes amicably so that they can get back to building. From the perspective of most projects, these benefits far outweigh the costs.
And that’s how it should be; just as security researchers show themselves to be good faith actors by participating in the bug bounty program according to the rules, projects prove themselves to be high-integrity partners by adopting Immunefi arbitration.
If projects continue adopting Immunefi Arbitration, the era of low-trust bug bounty transactions will come to an end, and the entire onchain world will benefit from reduced hacks. Success in adoption will mean billions saved from hacks that could have happened, but were instead prevented because whitehats could count on being rewarded for their good deeds.
Finally, it’s worth noting that Immunefi Arbitration puts onchain bug bounties into a league of their own. Not only do we have the best payouts and mediations in the bug bounty world, but we will also have the best and most effective protections in the entire cybersecurity world. How’s that for raising the bar for the cybersecurity industry worldwide?
I’m beyond excited to launch this product into the world. It is now live and applies to reports submitted to arbitration-enabled bug bounty programs after January 21st, 2025.
To learn more about how to use Immunefi Arbitration as a project or security researcher, read the guide here.
Before you leave, do me a favor and hit the Subscribe button to receive my research going forward. I don’t send anything except my research, typically once every month or two.