
spML Breakdown


This post originally appeared on the Hyperbolic Medium blog on May 9th, 2024

As AI technologies continue to evolve and integrate into various aspects of our lives, the centralization of AI inference poses significant challenges and risks. First, centralization means that vast amounts of sensitive data are collected and stored by a single entity, raising serious privacy concerns and increasing vulnerability to data breaches. Second, centralized systems are constrained by the physical and computational limits of their servers, which may struggle to keep up with the escalating global demand for AI services.

How can we address the risks posed by centralized AI inference? Decentralized AI inference is the answer. Yet adopting a decentralized model introduces new considerations, especially around the crucial question of trust: how can we ensure and verify that the AI model being used in a decentralized network is indeed what it claims to be? The next section delves into the solutions that decentralized systems employ to foster and reinforce trust among users.

Trust issues

In the realm of centralized AI solutions, such as those offered by well-known entities like OpenAI, trust is largely derived from the reputation of the company. Users typically rely on this established trust, despite having no practical way to verify that the services they are paying for (say, complex computations from a GPT-4 model) are not actually being substituted with outputs from a cheaper, less capable model such as GPT-3.5. This lack of verifiability leaves room for potential misrepresentation, where the quality of service might not match the expectation or investment.

In a decentralized setting, this issue of trust becomes even more complex. With multiple servers operated by various entities, the challenge of verifying the integrity and authenticity of AI outputs intensifies. For instance, if a user requests an analysis of a governance dilemma using the sophisticated Llama2-70B model, there's an underlying concern: how can they be sure that a less capable model, like Llama2-13B, isn't being used instead, providing subpar analysis and potentially skewing critical decisions?

To tackle the trust issues in decentralized AI inference networks, several solutions have been proposed, each with its distinct strengths and weaknesses. The two most commonly utilized approaches are opML (an optimistic, fraud-proof-based approach) and zkML (a zero-knowledge-proof-based approach). OpML is favored for its scalability and efficiency; it simplifies the validation process by assuming that most participants will act honestly, with fraud proofs serving as a safeguard against dishonest behavior. However, this assumption can compromise the robustness of its security, as it relies on the rarity of disputes to maintain system integrity. On the other hand, zkML offers a higher degree of security through the use of zero-knowledge proofs, which allow for the verification of computations without exposing any underlying data. While zkML significantly enhances security and privacy, it is hampered by its high computational overhead, which can limit scalability and reduce overall system efficiency.

Building on the foundational strategies of opML and zkML, we introduce spML (a sampling-based approach). It combines the scalability and simplicity of opML with the security guarantees of zkML, establishing spML as a scalable and secure alternative within the decentralized AI inference landscape. SpML leverages the strengths of both systems to offer a reliable verification process. In the following sections, we will delve deeper into the mechanics of opML, zkML, and spML to illustrate how they contribute to enhancing trust and integrity in decentralized AI systems.

Limitations of existing verification mechanisms

1. OpML

In the evolving landscape of blockchain technology and machine learning, the innovative approach known as opML serves as a reasonable solution for executing AI tasks on the blockchain. Similar to optimistic rollups in the blockchain space, opML employs a fraud-proof mechanism to ensure the correctness and integrity of machine learning results without the high computational costs associated with on-chain operations.

OpML assumes that all submitted machine learning results are correct unless proven otherwise. This optimistic approach minimizes the need for continuous validation, relying instead on specific challenges only when a result is disputed. Such a system considerably reduces the computational burden on the blockchain, allowing for the efficient handling of more extensive and complex models directly on-chain. However, the existence of a challenge period, necessary for validators to dispute results, can extend the time until finality, potentially delaying the confirmation of legitimate transactions.
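To make this optimistic flow concrete, here is a minimal sketch (hypothetical names and an illustrative dispute window, not any particular implementation) of how a submitted result is accepted unless it is challenged before the window closes:

```python
import time

CHALLENGE_WINDOW_SECONDS = 3600  # illustrative dispute period, not a real protocol value


class OptimisticResult:
    """A submitted inference result, assumed correct unless challenged."""

    def __init__(self, output_hash: str):
        self.output_hash = output_hash
        self.submitted_at = time.time()
        self.challenged = False

    def challenge(self, recomputed_hash: str) -> bool:
        # A validator that re-runs the model can dispute a mismatching result,
        # but only while the challenge window is still open.
        in_window = time.time() - self.submitted_at < CHALLENGE_WINDOW_SECONDS
        if in_window and recomputed_hash != self.output_hash:
            self.challenged = True  # the dispute would then be resolved on-chain
        return self.challenged

    def is_final(self) -> bool:
        # Finality arrives only after the full window elapses with no dispute,
        # which is why even honest results wait before being confirmed.
        return (not self.challenged
                and time.time() - self.submitted_at >= CHALLENGE_WINDOW_SECONDS)
```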

Additionally, this approach carries inherent risks, primarily revolving around the possibility of fraud if all validators fail to adequately check the results. The system depends crucially on having at least one honest validator who can challenge incorrect results during a designated dispute period. The verifier's dilemma also presents a similar challenge in opML, much like in other blockchain verification mechanisms. Validators must decide whether to expend resources to verify results, with the risk that no discrepancies will be found, thus potentially wasting computational effort. This scenario underlines the importance of designing incentive mechanisms within opML to encourage validators to perform their roles diligently. Moreover, this leads to a mixed-strategy Nash equilibrium, where each validator, behaving rationally to maximize their own benefits, might choose to skip verification assuming others will verify. This inherent system feature means there is always a positive probability of undetected fraud, especially if every validator behaves according to this rational but minimal-effort strategy.
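A rough back-of-envelope illustration of that residual risk: if each validator independently decides to verify only a fraction of results (the rational, cost-saving behavior described above), the probability that nobody checks a fraudulent result stays strictly positive. The numbers below are purely illustrative:

```python
def undetected_fraud_probability(num_validators: int, p_verify: float) -> float:
    """Chance that no validator checks a given result, assuming each one
    independently verifies it with probability p_verify."""
    return (1 - p_verify) ** num_validators


# Illustrative only: with 20 validators each verifying 10% of results,
# roughly 12% of fraudulent results slip through unchallenged.
print(undetected_fraud_probability(20, 0.10))  # ~0.12
```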

2. ZkML

ZkML offers a pivotal advancement in ensuring privacy and security within the realm of decentralized AI inference networks. At its heart, zkML employs zero-knowledge proofs, a sophisticated cryptographic technique that allows the validation of computations without revealing any underlying data or computational details. This attribute is especially beneficial for applications that demand strict privacy measures, safeguarding sensitive information while ensuring the correctness of computations.

However, the application of zkML is not without its challenges. The process of generating zero-knowledge proofs is computationally intensive and complex, which can significantly limit the scalability and practicality of zkML in real-world applications. For instance, existing studies show that generating a proof for a nanoGPT model with 1 million parameters can take around 16 minutes. When scaling this up to more complex models like Llama2-70B, which has 70,000 times as many parameters as nanoGPT, the time required to generate a single proof could extend to several days or even weeks. Such extended durations for proof generation render zkML impractical for timely decentralized AI inference tasks, as the computational overhead and delays can severely hamper the efficiency and responsiveness required in many AI-driven applications.

SpML — the best verification mechanism for decentralized AI

While both opML and zkML offer innovative approaches to managing decentralized AI, they each come with limitations that can impede scalability, speed, and overall system integrity. OpML, reliant on an optimistic assumption of correctness, risks occasional undetected fraud, whereas zkML faces challenges with extensive computational demands and delays due to complex proof generation. Given these challenges, we introduce a more robust solution, spML, which enhances both security and efficiency and ensures that all participants act honestly: a system where a pure-strategy Nash equilibrium is attainable, promoting straightforward, honest behavior by all nodes without reliance on validators' voluntary, probabilistic checks.

Enter spML, a verification mechanism designed to address the pitfalls of previous systems by streamlining the process of AI inference verification in decentralized networks. The spML protocol utilizes a straightforward yet effective method to ensure both rapid processing and high security without the computational overhead and complexity associated with zkML or the vulnerability to fraud seen in opML.

How does spML work?

The protocol begins when a user sends an input along with their digital signature to a randomly selected server, known as Server A. Server A processes the input and returns the output along with its hash, also signed to verify its authenticity. To ensure the reliability of the inference, the protocol may randomly involve an additional server, Server B, to independently verify the output. This occurs with a predetermined probability; if Server B is not selected, Server A receives a reward, and the transaction concludes successfully.

If Server B is involved, it also processes the same input and sends back its output and hash to the user. The user then compares both hashes. If they match, the results are consistent, the integrity of the process is confirmed, and both servers are rewarded. If the hashes differ, indicating a potential discrepancy or fraud, the user broadcasts this information to the entire network. The network, consisting of multiple nodes, votes to adjudicate the claim based on the majority rule. Sanctions are imposed on any dishonest party to maintain the system's integrity and trust.
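The flow described above can be sketched roughly as follows. This is a simplified, single-process illustration with hypothetical names; signing, networking, and payment details are omitted, and the challenge probability is an arbitrary placeholder:

```python
import hashlib
import random

CHALLENGE_PROBABILITY = 0.1  # placeholder: how often a second server is sampled


def output_hash(output: str) -> str:
    # The hash is what the user compares; in the real protocol it is signed.
    return hashlib.sha256(output.encode()).hexdigest()


def spml_round(user_input, server_a, server_b, network_vote, reward, sanction):
    # Server A (randomly selected beforehand) runs the model and returns
    # its output together with the output's hash.
    out_a = server_a(user_input)
    hash_a = output_hash(out_a)

    # With probability 1 - CHALLENGE_PROBABILITY there is no challenge:
    # Server A is rewarded and the round ends.
    if random.random() > CHALLENGE_PROBABILITY:
        reward(server_a)
        return out_a

    # Otherwise a randomly sampled Server B recomputes the same input,
    # without ever seeing Server A's output.
    out_b = server_b(user_input)
    hash_b = output_hash(out_b)

    if hash_a == hash_b:
        # Consistent results: both servers are rewarded.
        reward(server_a)
        reward(server_b)
        return out_a

    # Hash mismatch: the user broadcasts both results and the network
    # adjudicates by majority vote; the dishonest party is sanctioned.
    a_is_honest = network_vote(user_input, out_a, out_b)
    sanction(server_b if a_is_honest else server_a)
    return out_a if a_is_honest else out_b
```

Because Server A never learns whether a challenge will happen, and Server B never sees Server A's output, the two results are computed independently; this is exactly the property the two design points below rely on.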

SpML incorporates two ingenious design elements that enhance the system’s integrity and efficiency:

  1. No free-riding: A critical feature of spML is that the results from Servers A and B are sent independently to the user. This design prevents Server B from seeing Server A’s results, which effectively eliminates the possibility of free-riding. By ensuring that each server operates without prior knowledge of the other’s outputs, the protocol promotes genuine, independent calculation of results.

  2. Collusion resistance: SpML also addresses the potential for collusion, a common concern in decentralized systems where one party might control multiple nodes. The protocol is structured so that when Server A computes the result, it is unaware of whether there will be a challenge from Server B or who Server B might be. This uncertainty plays a crucial role in ensuring honest behavior; since Server A cannot predict a challenge and does not know the identity of Server B, the optimal strategy is to produce an honest and accurate computation.

This design not only incentivizes honest participation but also ensures that spML is scalable and fast, capable of handling large-scale AI operations without the bottlenecks of intensive computational demands. This makes spML an attractive option for decentralized AI applications seeking to combine rapid processing with robust security measures.
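To make the incentive argument concrete, here is a rough expected-payoff sketch under purely hypothetical parameters (the actual reward, penalty, and sampling probability are protocol design choices):

```python
def expected_payoff(cheat: bool, p_challenge: float, reward: float,
                    penalty: float, cheap_saving: float) -> float:
    """Expected payoff for Server A in one round.

    If it cheats (e.g. quietly serves a cheaper model), it pockets cheap_saving
    but is caught and sanctioned whenever a challenge occurs; if it is honest,
    it always earns the reward.
    """
    if not cheat:
        return reward
    return cheap_saving + (1 - p_challenge) * reward - p_challenge * penalty


# Illustrative numbers: with a 10% challenge probability and a penalty well
# above the per-round reward, cheating has a lower expected payoff, so honest
# computation is the rational choice for every node.
print(expected_payoff(cheat=False, p_challenge=0.1, reward=1.0, penalty=50.0, cheap_saving=0.3))  # 1.0
print(expected_payoff(cheat=True,  p_challenge=0.1, reward=1.0, penalty=50.0, cheap_saving=0.3))  # -3.8
```

As long as the penalty, scaled by the challenge probability, outweighs whatever a server could save by running a cheaper model, honesty maximizes expected payoff for every node.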

Comparing spML vs. opML and zkML
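In short, based on the trade-offs discussed above: opML is lightweight and scalable but delays finality through its challenge period and leaves a residual chance of undetected fraud; zkML provides strong cryptographic verification but its proof generation is far too slow for large models; spML keeps verification cheap by sampling a second server only occasionally, while hidden, random challenges and network adjudication make honest computation the rational strategy for every node.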
