股市老人币圈新

Zypher Network: A Trustworthy Framework for AI Based on Zero-Knowledge Proof Solutions

The "Black Box Feature" of LLMs

Large language models (LLMs) such as ChatGPT, DeepSeek, and Grok have become some of the most visible AI applications of recent years. An LLM is a natural language processing model trained on large-scale text data with deep learning techniques; it possesses strong language understanding and generation capabilities and is widely used in text creation, translation, dialogue, and question answering. LLMs not only serve end users directly; most LLM providers (such as OpenAI) also expose model capabilities through API interfaces so the models can operate in broader scenarios. IoT devices and intelligent automotive systems, for example, can integrate LLM APIs to deliver smarter interaction experiences and more efficient, natural AI-driven services.

LLMs are highly complex, commercially opaque, trained on data invisible to outsiders, and, like all deep learning models, inherently hard to interpret. When an LLM provider offers an API, the model's internal operating logic is therefore invisible to the caller: the model presents itself as a complete black box. Users can only send requests via the API and receive responses; they cannot directly access or inspect the computational processes, parameter weights, or training mechanisms inside.

This prevalent "black box" property exposes users of large models and their APIs to two core issues:

One is the consistency issue.

System prompts are supplied by developers and directly shape model behavior. A model may, for example, exhibit bias in its reasoning because of preferences baked into a particular prompt, skewing its results.

At the same time, users usually cannot verify whether the system prompt actually used in a given API call has been tampered with, so model behavior may silently deviate from expectations.

The other is the privacy issue.

System prompts often contain highly sensitive business information, such as pricing strategies, risk-control rules, and internal processes. Because these prompts embody an enterprise's core competitive advantage, developers are reluctant to disclose them.

TLS (Transport Layer Security) encrypts data in transit, preventing eavesdropping and tampering, but it cannot prove whether the system prompt actually executed on the server has been altered. Even if API communication is secure, users still cannot verify that the prompts used by the LLM match what the developer promised. A developer who wants to prove to third parties or partners that an AI service is trustworthy needs a mechanism guaranteeing the integrity of the system prompts, which traditional TLS cannot provide. In other words, most LLM services today have no way to make their prompts verifiably trustworthy.

For these reasons, LLM adoption in sectors with strict compliance, privacy, and security requirements, such as finance and healthcare, remains limited. Zypher Network's zkPrompt solution, built on zkTLS technology, aims to be the breakthrough: it uses ZK techniques to protect the privacy of system prompts while verifying the consistency of the prompt in every API call, which should help drive LLM adoption across more industry sectors.

Zypher Network's zkPrompt Solution

Zypher Network is a co-processing facility centered on ZK technology, aimed at providing ZK services to every application scenario and facility that needs zero-knowledge proofs. The system consists of an off-chain computing network made up of distributed computing nodes and an on-chain engine called Zytron. When a zero-knowledge computing task arises, the network delegates it to computing miners, which generate a ZKP that can be verified on-chain, ensuring that data, transactions, and behaviors are credible and honest. The distributed computing network both significantly reduces the system's computing costs and gives the network excellent scalable computing capacity.

On this basis, Zypher Network has launched zkPrompt specifically for LLM services, extending the network into a key trust and privacy infrastructure for AI. zkPrompt is built around Zypher's zkTLS technology, which combines the traditional TLS protocol with ZK techniques so that users can verify the authenticity of data without exposing sensitive information. This compensates for traditional TLS's lack of data-proof capability, giving AI operations higher credibility while preserving privacy.

zkTLS: ZK + TLS

TLS (Transport Layer Security) is a widely used encryption protocol designed to ensure the security of data transmission over computer networks. By encrypting and verifying data in transit, TLS can effectively prevent data from being stolen, tampered with, or forged during transmission. It is commonly applied in various internet communication scenarios, such as web browsing, email, and instant messaging, to ensure the privacy and data integrity of both communicating parties.

The TLS protocol combines symmetric and asymmetric encryption: the two parties first authenticate each other and exchange keys using asymmetric encryption, then switch to symmetric encryption for the data itself, which is far more efficient. TLS also uses message authentication codes (MACs) to verify data integrity, ensuring that data has not been tampered with or corrupted in transit.
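The symmetric-encryption-plus-MAC idea can be sketched with a toy model, assuming a session key has already been negotiated. This is purely illustrative: a real TLS record layer uses AEAD ciphers such as AES-GCM, not the hash-based keystream shown here.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key||nonce||counter (toy PRF).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: XOR the plaintext with a keystream, then HMAC
    # the nonce and ciphertext so any in-transit modification is detectable.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_record(enc_key: bytes, mac_key: bytes, record: bytes) -> bytes:
    nonce, ct, tag = record[:16], record[16:-32], record[-32:]
    # Verify integrity before decrypting; reject any tampered record.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: record was tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Flipping even a single ciphertext bit makes `open_record` raise, which is the integrity guarantee TLS provides; what it does not provide is any artifact a third party could later verify.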

In the application of LLMs, API calls between clients and servers are typically based on the TLS encryption protocol to ensure that LLM's API services can secure data during transmission, preventing data from being stolen or tampered with, thus safeguarding the privacy and integrity of the model when processing user requests. This provides LLMs with basic security protection when handling sensitive information, ensuring the confidentiality of communication. Of course, we have discussed its limitations earlier, so we will not elaborate further.

Combining cryptographic schemes with the TLS protocol can address both the consistency and the privacy problems LLMs face. Zero-knowledge proofs are a natural fit: they allow one party (the prover) to convince another party (the verifier) that a statement is true without revealing any additional information. TLS ensures the integrity and confidentiality of data in transit, but it cannot furnish third parties with proof of that data's integrity and authenticity; a ZK layer adds exactly that capability while keeping the data private.
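As a minimal illustration of the prover/verifier idea, here is a toy Schnorr protocol, made non-interactive with the Fiat-Shamir transform, that proves knowledge of a secret exponent x with y = G^x mod P without revealing x. The parameters are deliberately tiny; production systems use ~256-bit prime-order groups or elliptic curves.

```python
import hashlib
import secrets

# Toy parameters: Mersenne prime P = 2^31 - 1 with base 7 (the Lehmer RNG base).
P = 2**31 - 1
G = 7

def challenge(t: int, y: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the public transcript.
    h = hashlib.sha256(f"{t}:{y}".encode()).digest()
    return int.from_bytes(h, "big") % (P - 1)

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = G^x mod P, revealing nothing about x."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)              # commitment
    c = challenge(t, y)           # challenge
    s = (r + c * x) % (P - 1)     # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = challenge(t, y)
    # Accept iff G^s == t * y^c (mod P): this holds exactly when
    # s = r + c*x, i.e. when the prover actually knew x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The verifier learns only that the prover knows some x behind y, never x itself, which is the same shape of guarantee zkPrompt gives for system prompts.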

Of course, to achieve the above goals, zkTLS typically introduces a trusted third party (commonly referred to as a Verifier or Notary), which can verify interactions without compromising the security of the original connection. Currently, depending on the different technical routes, zkTLS is mainly divided into three modes:

  1. TEE-based mode: The TLS protocol runs securely in a TEE, and the TEE provides proof of session content.
  2. MPC-based mode: Typically a two-party computation (2PC) model that introduces a Verifier. The prover and verifier jointly generate the session key via MPC, each holding one share of it, and the prover can selectively disclose parts of the session to the verifier.
  3. Proxy-based mode: The proxy (Verifier) acts as an intermediary between the client and server, responsible for forwarding and verifying the encrypted data exchanged during communication.

Zypher Network includes a scalable, cost-effective off-chain computing network as well as the on-chain AI engine Zytron. Zytron deploys a large number of precompiled contracts and builds a sharded, dedicated P2P node network for contract verification. The P2P network lets every node communicate directly and efficiently, cutting out intermediate hops and speeding up data transfer. Communication and address lookup between nodes use the Kademlia algorithm, whose structured design lets nodes find and contact each other quickly and accurately.

In execution, Zytron also shards the execution process of contracts based on the node distance rules defined in the Kademlia algorithm. This means that different parts of the contract are assigned to different network nodes for execution based on the distance between nodes. This distance-based allocation method helps to evenly distribute the computational load within the Zytron network, thereby improving the speed and efficiency of the entire system.
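The Kademlia distance rule can be illustrated as follows: node IDs are compared by XOR, and each shard is assigned to the nodes closest to the shard's own ID. This is a simplified sketch; Zytron's actual shard-assignment logic is not public, so the mapping below is an assumption.

```python
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit Kademlia-style ID from an arbitrary identifier.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia metric: the distance between two IDs is their bitwise XOR.
    return a ^ b

def assign_shard(shard_key: str, nodes: list[str], replicas: int = 2) -> list[str]:
    # Route a contract shard to the `replicas` nodes whose IDs are
    # XOR-closest to the shard's ID, spreading load across the network.
    target = node_id(shard_key)
    return sorted(nodes, key=lambda n: xor_distance(node_id(n), target))[:replicas]
```

Because the metric is deterministic, every node computes the same assignment independently, which is what allows load to spread without a central scheduler.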

For performance and cost reasons, Zypher Network adopts a proxy-based zkTLS implementation in zkPrompt: compared with the other modes, the proxy mode avoids both the computational overhead of multi-party computation protocols and the hardware costs of a TEE.

How does zkPrompt operate?

In zkPrompt's proxy mode, the verifier sits between the client and the LLM server, forwarding TLS traffic and recording all encrypted data exchanged by the two parties. At the end of the session, with the support of the off-chain computing network, the client generates a ZKP over the recorded ciphertext, allowing any third party to verify the consistency of the session's system prompts without exposing the prompt content or any other sensitive information.

Before any interaction begins, the client commits to the system prompt: the prompt is cryptographically processed into a commitment value that is stored on the blockchain, so the prompt cannot be tampered with in subsequent operations. The on-chain commitment serves as proof that the system prompt remains unchanged across all later interactions.
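The commit step can be sketched with a simple hash commitment. Zypher's production scheme uses Pedersen commitments; the sha256-based version below only illustrates the commit/verify flow and the role of the blinding nonce.

```python
import hashlib
import os

def commit(prompt: str) -> tuple[bytes, bytes]:
    """Commit to a system prompt. The commitment is published on-chain;
    the blinding nonce stays with the client."""
    nonce = os.urandom(32)  # hides the prompt even if its text is guessable
    commitment = hashlib.sha256(nonce + prompt.encode()).digest()
    return commitment, nonce

def verify_commitment(commitment: bytes, nonce: bytes, prompt: str) -> bool:
    # Anyone holding (nonce, prompt) can re-derive and check the on-chain value;
    # any change to the prompt produces a different digest.
    return hashlib.sha256(nonce + prompt.encode()).digest() == commitment
```

The commitment is binding (the developer cannot later substitute a different prompt) and hiding (the on-chain value reveals nothing about the prompt without the nonce).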

When the client sends a request to the LLM through the proxy, the proxy forwards the TLS traffic and records every encrypted data packet exchanged between the two parties. During this process it generates a commitment value for the request and stores it on-chain, guaranteeing the integrity and consistency of each request packet, so that neither the request data nor the system prompt can be tampered with.

When the LLM service returns a response, the proxy similarly records the response data packet and generates a commitment value for the response. These response commitment values are also stored on-chain to ensure that the content of the response is consistent with expectations. In this way, the system can verify whether the response has been tampered with during transmission, further safeguarding the integrity and credibility of the data.
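The proxy's role of recording request and response packets and anchoring a commitment for each on-chain can be sketched as follows. The on-chain store is simulated with a dict, and the class and method names are illustrative, not Zypher's actual API.

```python
import hashlib

class Proxy:
    """Simplified zkTLS proxy: forwards encrypted records and anchors a
    commitment for each packet in a simulated on-chain log."""

    def __init__(self) -> None:
        self.transcript: list[bytes] = []   # recorded ciphertext packets
        self.chain: dict[int, str] = {}     # simulated on-chain commitment log

    def relay(self, encrypted_packet: bytes) -> None:
        # Record the packet and store its commitment "on-chain".
        self.transcript.append(encrypted_packet)
        index = len(self.transcript) - 1
        self.chain[index] = hashlib.sha256(encrypted_packet).hexdigest()

    def audit(self) -> bool:
        # Any third party can check the recorded transcript against the chain:
        # every packet must still hash to its anchored commitment.
        return all(
            hashlib.sha256(pkt).hexdigest() == self.chain[i]
            for i, pkt in enumerate(self.transcript)
        )
```

If any recorded packet is altered after the fact, its hash no longer matches the anchored commitment and the audit fails.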

At the end of the session, the client generates a zero-knowledge proof (ZKP) based on all encrypted records, which allows any third party to verify the consistency of the system prompts in the TLS session without exposing the specific content of the prompts or other sensitive information. This method effectively protects the privacy of the prompts while ensuring that the system prompts have not been tampered with throughout the communication process.

The generated zero-knowledge proof is then submitted to an on-chain smart contract, where the Zytron engine verifies it. Verification confirms that the prompt content has not been tampered with and that the LLM executed according to its predetermined behavior; if the prompt was altered or the execution deviated from the initial settings, verification fails, exposing the non-compliance or potential risk immediately.

Zypher's Zytron engine provides strong assurance for the reliability of prompts, ensuring that the LLM model always operates as expected, avoiding risks from external interference or tampering. This verification mechanism not only enhances the credibility of the system but also provides important security protection for the zkPrompt solution, making applications in highly compliant fields more robust.

From a feature perspective, zkPrompt ensures that LLMs:

  • Data Privacy: Users can verify the correctness of prompts without seeing or understanding the specific content of the system prompts, protecting the sensitivity of the prompts.
  • Credibility and Transparency: Through zero-knowledge proofs, users can trust that the AI's behavior has not been maliciously tampered with.
  • Distributed Verification: Any user or third party can confirm the consistency of prompts and models through the verification process without relying on centralized entities.

zkPrompt not only guarantees the credibility of prompts; it can also be extended to Proof of Inference, which ensures that an LLM's reasoning process is trustworthy and that its results are generated from legitimate inputs.

It is worth mentioning that Zypher Network ships zkPrompt as an easy-to-use SDK built on a set of advanced cryptographic schemes, including strong encryption, Pedersen commitments, and zkSNARKs (Plonk). For different types of LLMs, Zypher can flexibly adapt different zero-knowledge schemes so that each model gets optimal results.

zkInference

In addition to zkPrompt, Zypher Network has proposed the zkInference framework, also built on ZKPs. zkInference uses zero-knowledge proof algorithms to ensure that AI agents strictly follow predetermined rules and model operations, guaranteeing that their decision-making is fair, accurate, and secure, while allowing agent behavior to be verified without exposing the underlying models or data. This effectively prevents collusion and malicious behavior among multiple AI agents, ensuring fairness and security in scenarios ranging from Web3 games to AI agents.

The zkInference framework is more suitable for lightweight models that need to perform basic and deterministic tasks, such as AI robots in Web3 games.

Overall, the characteristics of the zkInference framework can be summarized as follows:

  • Verifiability: Utilizing zero-knowledge proof technology to verify the behavior of AI agents without exposing the underlying models or data.
  • Anti-collusion: Effectively preventing collusion among different AI agents to ensure a fair gaming experience.
  • Infinite Computing Power: Providing a decentralized mining pool market to offer unlimited computing resources for verifiable AI agents.

Zypher Network Trusted Framework Use Cases

Alpha Girl

Alpha Girl is the first trustless multimodal AI agent built on Zypher's Proof of Prompt framework. It ingests real-time market data to predict Bitcoin's market behavior and make intelligent forecasts and decisions, using advanced algorithms and data analysis to help users better understand and anticipate market trends. Developed over three months by a well-known prompt engineering team, it has now launched, with Bitcoin price prediction as its first supported market. In real-world tests, Alpha Girl's trend predictions reached 72% accuracy, and its strategies delivered a 25% excess return over a simple hold strategy.

By integrating Zypher Network's zkPrompt solution, Alpha Girl's AI agent model can ensure that the system prompts it uses maintain consistency and correctness without revealing any underlying data, ensuring transparency and reliability in each prediction, thus guaranteeing a high match rate between the predicted results and expectations.

As an early example of a trusted AI agent, Alpha Girl demonstrates how to ensure the transparency and verifiability of the prediction process through the technology provided by Zypher Network. Zypher Network is expected to provide assurance for prediction tools in the cryptocurrency market and set a benchmark for similar AI agents in terms of privacy protection and data security.

AI Agent Game Engine Trusted Framework

Zypher Network has also made practical efforts in the on-chain gaming field. Currently, it has launched a Game Engine component, where developed game agents utilize smart contracts for game operations and ensure fairness between different players based on zkPrompt.

In this game engine, developers can use native engines such as Cocos Creator, Unity, and Unreal to create on-chain games with a low barrier to entry. These tools handle the game's core state management, with game states updated and verified in real time through interfaces to a decentralized data management layer. Managed state includes input data, generated content, and test results, all processed by multiple AI agents (content generation agents, game testing agents, and so on) to optimize the gaming experience and ensure data accuracy.

The game's data inputs, generated content, and testing feedback will be transmitted to a decentralized game data management and storage layer. In this layer, the data will be used to support the execution of game logic and verified through zero-knowledge proof validation integrated with zkPrompt to ensure the immutability and authenticity of the data. Meanwhile, based on decentralized proof protocols, the game's data is processed and submitted through encrypted mining pools, verified by the blockchain network, ensuring that all game operations can be recorded transparently and securely.

This technology stack further combines an optimized resource layer to provide optimized computing and storage resources, allowing all participating AI agents (such as content generation agents, game testing agents, and data insight agents) to collaborate efficiently. Ultimately, this system not only provides the efficient computing power needed for game development but also ensures the transparency and fairness of each game operation through decentralized verification mechanisms, avoiding any tampering or unfair behavior.

Additionally, agent players can stake under an "LP" structure and share game profits (game mining) with other stakers. The games support cross-platform play on mobile and desktop, and the LP revenue-sharing mechanism gives players extra profit opportunities; by collaborating with LPs, players can further increase their earnings. Dozens of titles have already launched on this Game Engine component, including Protect T-RUMP, Zombie Survival, and Big Whale.

At this stage, Zypher's zkPrompt solution is also exploring more fields to further promote the large-scale adoption of LLMs and AI agents in more areas in a privacy-preserving and trustworthy manner.

Overall, the AI field is still early in its development. Applications such as LLMs and AI agents have made initial progress, but they remain exploratory and face numerous challenges, and the lack of credibility and verifiability caused by the black box problem is becoming one of the main bottlenecks to further growth. The solutions Zypher Network has proposed are emerging as a key to breaking this deadlock: they provide a trusted framework for adopting LLMs and AI agents, pave the way for their use in broader industries, and promise to significantly improve the reliability and transparency of AI systems while laying a solid foundation of trust for AI's widespread application.
