Hyperbolic Introduces Novel, Practical Solution to the Problem of Verification in Decentralized AI


This post originally appeared on the Hyperbolic blog on May 9th, 2024

Trust, efficiency, and verifiability are significant blockers to the widespread use of decentralized AI compute and inference. Recently, techniques leveraging zero-knowledge proofs (zkML) or optimistic assumptions with fraud proofs (opML) have emerged as potential solutions, but each has drawbacks that make it economically infeasible, technically impractical, or both.

Today, following months of collaboration with our research partners at UC Berkeley and Columbia University, Hyperbolic is proud to introduce the first practical solution to this growth blocker, based on a new verification mechanism we call Proof-of-Sampling, or PoSP for short.

Read the full paper here

Proof of Sampling (PoSP): Pure Strategy Nash Equilibrium

Proof of Sampling (PoSP) uses sampling techniques and game theory to ensure security and efficiency by creating a system where all participants are incentivized to act honestly. At the heart of this method is an incentive structure that rewards actors for acting honestly and penalizes those who do not. By randomly selecting and verifying transactions, PoSP reduces the computational burden on the network, making it a scalable solution for decentralized systems.

Read a detailed breakdown of PoSP here

Here’s how PoSP works:

  1. A node (called an Asserter) computes a function and submits a cryptographic commitment to an orchestrator.

  2. The Orchestrator, a trustless random beacon, then decides whether to accept the Asserter’s claim or initiate a challenge by randomly selecting validators to independently compute and submit their own commitments.

  3. If all commitments match, the result is accepted; otherwise, an arbitration process is initiated to identify and penalize dishonest nodes.
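The three steps above can be sketched in code. This is a minimal illustration, not the protocol's actual implementation; the names (`commit`, `posp_round`) and the `challenge_prob` and `n_validators` parameters are our own for exposition:

```python
import hashlib
import random


def commit(result: bytes) -> str:
    """Cryptographic commitment to a result; here simply a SHA-256 hash."""
    return hashlib.sha256(result).hexdigest()


def posp_round(compute, task, asserter_output, challenge_prob=0.1, n_validators=2):
    """One PoSP round; returns 'accepted' or 'arbitration'."""
    # 1. The Asserter computes the function and submits a commitment.
    asserter_commitment = commit(asserter_output)

    # 2. The Orchestrator (a trustless random beacon) decides whether
    #    to accept the claim outright or initiate a challenge.
    if random.random() >= challenge_prob:
        return "accepted"  # no challenge issued

    # Randomly selected validators independently recompute and commit.
    validator_commitments = [commit(compute(task)) for _ in range(n_validators)]

    # 3. Matching commitments are accepted; otherwise arbitration begins
    #    to identify and penalize dishonest nodes.
    if all(c == asserter_commitment for c in validator_commitments):
        return "accepted"
    return "arbitration"
```

In practice the commitment scheme, beacon, and arbitration logic are far more involved; the sketch only captures the control flow of the three steps.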

The same philosophy exists in the real world. In the Swiss bus system, instead of checking every passenger's ticket on every trip, inspectors conduct random checks. The penalty for fare evasion is high enough that buying a ticket is the rational choice.

The Hyperbolic research team conducted a rigorous mathematical analysis showing that as long as the probability of a challenge exceeds a threshold determined by the parameters of the setup, PoSP achieves what game theory calls a pure strategy Nash equilibrium: a state in which every participant is incentivized to act honestly because any other strategy would be less profitable. In other words, when the system is configured correctly, with rewards and penalties balanced against the probability of being challenged, the most rational and profitable strategy for every node is to compute the result accurately.
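A toy expected-payoff calculation illustrates why such a threshold exists. In this simplified model (ours, not the paper's exact formulation), a node earns reward `R` for a computation that costs `C`, and pays penalty `F` if caught cheating when challenged with probability `p`:

```python
def honest_payoff(reward, cost):
    # Honest node always computes: earns the reward, pays the cost.
    return reward - cost


def dishonest_payoff(reward, penalty, p_challenge):
    # Dishonest node skips the computation (saving the cost) but is
    # caught and penalized with probability p_challenge.
    return (1 - p_challenge) * reward - p_challenge * penalty


def min_challenge_prob(reward, cost, penalty):
    # Honesty strictly dominates when
    #   reward - cost > (1 - p) * reward - p * penalty,
    # which rearranges to p > cost / (reward + penalty).
    return cost / (reward + penalty)


R, C, F = 10.0, 2.0, 50.0
p_min = min_challenge_prob(R, C, F)  # 2 / 60, so challenging ~3.4% of
                                     # claims already makes honesty optimal
```

The key qualitative point matches the analysis above: because the penalty is large relative to the cost saved by cheating, even a small challenge probability makes honest computation the dominant strategy.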

Introducing spML: a better way to support verification in decentralized AI

Building on the principles of PoSP, Hyperbolic has developed spML, an implementation of PoSP designed specifically for decentralized AI inference. SpML addresses the limitations of existing verification mechanisms, such as opML (which relies on fraud proofs) and zkML (which uses zero-knowledge proofs).

OpML is theoretically secure provided there is at least one honest validator verifying the outcomes. However, the absence of an incentive mechanism to ensure the presence of such a validator renders it susceptible to undetected fraud. ZkML, on the other hand, enhances security and privacy but suffers from high computational overhead, limiting scalability and efficiency.

SpML ensures both rapid processing and high security without the computational overhead of zkML or the vulnerability to fraud seen in opML. The protocol enhances security by having a randomly selected validator re-compute and challenge the asserted outcome. The asserter does not know whether its result will be challenged or which validator will be chosen to perform the challenge, ensuring an unbiased verification process. If the outputs match, the transaction is considered valid and the servers are rewarded. If there is a discrepancy, the network votes to resolve it, maintaining system integrity and trust.
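The vote-based resolution of a discrepancy can be sketched as a simple majority tally over the submitted commitments. This is an illustrative model only; the actual spML resolution mechanism is described in the paper:

```python
from collections import Counter


def resolve_by_vote(commitments):
    """Resolve a disputed result by majority vote over commitments.

    Returns the winning commitment and the indices of the nodes whose
    commitments disagree with it (candidates for penalization).
    """
    tally = Counter(commitments)
    winner, _ = tally.most_common(1)[0]
    dishonest = [i for i, c in enumerate(commitments) if c != winner]
    return winner, dishonest
```

For example, if two validators agree with each other and one asserter disagrees, the asserter's commitment is outvoted and the asserter is flagged for penalization.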

Read a detailed breakdown of spML here

So, spML improves on opML and zkML by synthesizing their strengths and addressing their weaknesses. It offers better security than opML by reducing the probability of undetected fraud, while maintaining scalability and efficiency. Compared to zkML, spML is more scalable and efficient because it avoids the computational overhead of zero-knowledge proofs, while its sampling-based approach still ensures a high level of security.

By combining the best aspects of both opML and zkML, spML provides a simple, practical and effective solution for decentralized AI applications.

The impact of PoSP and spML on decentralized AI and decentralized systems

PoSP and spML have a significant impact on decentralized AI systems. By providing scalable, secure, and efficient verification mechanisms, these solutions enable the development of decentralized AI networks that can handle large-scale operations without compromising trust or performance.

One of the key applications is in decentralized AI inference, where users can be confident that the correct model is being used. This ensures that they receive high-quality, trustworthy results without relying on a single centralized entity. This technology has the potential to revolutionize numerous fields where trustworthy AI is essential, including virtual assistants, robotics, content creation, and on-chain transactions.

Moreover, spML’s sampling-based approach enables the creation of decentralized AI marketplaces, where providers can offer their AI models and services in a trustless and permissionless manner. This opens up new opportunities for AI developers and researchers to monetize their work and collaborate with others in the field, driving innovation and accelerating the progress of AI technologies.

In the future, the PoSP protocol has significant potential for use in various decentralized systems. As suggested in the paper, it could improve the security of blockchain layer 2 solutions by ensuring pure strategy Nash Equilibrium. The protocol is also ideally suited for Actively Validated Services (AVS), enhancing security and reliability in restaking protocols like EigenLayer or Exocore. This exploration will result in innovative applications that leverage the strengths of PoSP to create robust, scalable, and secure systems.

PoSP & spML underscore Hyperbolic’s commitment to building open-access AI

At Hyperbolic, our mission is to make AI accessible to everyone by advancing decentralized technologies through novel solutions like PoSP and spML. We’re building secure, scalable, and efficient verification mechanisms for decentralized AI systems that will enable anyone with an idea to harness the power of AI.

By creating an open-access AI ecosystem, we aim to democratize the field and unlock the untapped potential of people from all over the world. Our goal is to break down the barriers preventing many from engaging with AI technology, creating a more inclusive and collaborative approach to AI development.

We’re excited about the possibilities PoSP and spML open up for the future. Whether you’re an AI researcher, startup founder, user or simply believe in AI’s power to positively impact the world, there’s a place for you in the decentralized AI ecosystem we’re building.

We invite you to join us on this journey.

To learn more about this innovative research work, connect with Jasper Zhang on X and follow Hyperbolic’s journey as they redefine the future where AI empowers humanity to its fullest potential.

Collaborative research

This research is done in collaboration with researchers from UC Berkeley and Columbia University. You can read the full paper here.


Dr. Jasper (Yue) Zhang: CEO and co-founder of Hyperbolic Labs, leveraging his expertise from previous roles as a Senior Blockchain Researcher at Ava Labs and a Quantitative Researcher at Citadel Securities. He completed a math Ph.D. at UC Berkeley in two years and won Gold Medals at the Alibaba Global Math Competition and the Chinese Mathematical Olympiad. Follow Jasper on X.

Shouqiao Wang: Ph.D. student from Decision Risk and Operations Division at Columbia Business School, specializing in incentive mechanism design for decentralized systems. He won the Finalist prize at the Mathematical Contest in Modeling and two Silver Medals at the Chinese Mathematical Olympiad.

Xiaoyuan Liu: Ph.D. student in EECS at UC Berkeley, formerly of Shanghai Jiao Tong University’s ACM Honors Class. He is involved with Berkeley’s RDI, IC3, and BAIR Lab. His research focuses on computer security, systems, and machine learning, especially building systems for secure and privacy-preserving data processing.

Sijun Tan: CS Ph.D. student at UC Berkeley, affiliated with the Sky Computing Lab. Sijun’s experience includes internships at Facebook AI Research and a senior algorithm engineer role at Ant Group. His research covers computer security, machine learning, and applied cryptography.

Prof. Raluca Ada Popa: Associate Professor at UC Berkeley, specializing in security, systems, and applied cryptography. She co-founded and co-directs the RISELab and the Sky Computing Lab, aiming to build secure intelligent systems for the cloud and for the sky of clouds, respectively, as well as the DARE program for promoting diversity and equity.

Prof. Ciamac C. Moallemi: William von Mueffling Professor of Business in the Decision, Risk, and Operations Division of Columbia Business School, specializing in the analysis, optimization, and control of stochastic systems, with applications in financial engineering and blockchain technology. He is a research advisor at Paradigm and the Director of the Briger Family Digital Finance Lab.
