PRBv3 Deployment Guide - Archived

Basic Requirements

To use PRBv3 (Runtime Bridge) for worker deployment, you need at least one additional device to act as the management server, which connects to your PRB workers over the network.

The node service and PRB service can be run on the same server as needed (depending on the number of workers and server performance).

Server Configuration Requirements

The PRB management server needs to run two main components, Node and PRB. The requirements for each component are as follows:

Component   RAM      Disk Space   Remark
Node        4GB+     3TB+ NVMe    The disk requirement keeps growing; 8TB is best
PRB         4GB+     0            RAM requirement depends on the number of workers; 16GB+ is better
Total       32GB+    4TB          -

You also need to ensure good network connectivity between the management server and the PRB workers, and the management server's network plan must allow more than 10TB of traffic per month.
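
If you want to keep an eye on that monthly traffic budget, one option (a minimal sketch; the vnstat package is an assumption here and is not required by PRB itself) is:

sudo apt install vnstat      # lightweight network traffic monitor
vnstat -m                    # show per-interface traffic totals by month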

PRB Worker Requirements

A PRB worker only needs to run pRuntime, so the requirements for running a PRB worker are:

  • Support for SGX features (a quick way to check is shown after this list)

  • Ubuntu 22.04.2 LTS operating system and a system kernel of 5.13 or higher

  • At least 4 CPU cores

  • 8GB of memory

  • 128GB of NVMe storage
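
As a quick check of the SGX and kernel requirements (a minimal sketch; the /dev/sgx_* device nodes only appear on kernels with the in-kernel SGX driver and with SGX enabled in the BIOS):

uname -r                                   # kernel version, should be 5.13 or higher
grep -o -m1 sgx /proc/cpuinfo              # prints "sgx" if the CPU flag is exposed
ls /dev/sgx_enclave /dev/sgx_provision     # SGX device nodes created by the kernel driver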

PRB Components Deployment

Preparations

After installing the Ubuntu OS, first install Docker and Docker Compose.

sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y
sudo apt install docker-compose
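
You can optionally confirm the installation before continuing; both commands should print a version string:

sudo docker --version
sudo docker-compose --version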

Then create a folder locally, and create the docker-compose file and the other necessary configuration files inside it.

mkdir prb-deployment
cd ./prb-deployment
touch docker-compose.yml
touch wm.yml
mkdir prb-wm-data
cd ./prb-wm-data
touch ds.yml
cd ..

The resulting file layout looks like this:

  • prb-deployment folder

    • docker-compose.yml

    • wm.yml

    • prb-wm-data folder

      • ds.yml
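
You can verify the layout from inside the prb-deployment folder (a minimal check; the expected files are listed in the comments):

find . -type f
# Expected files:
# ./docker-compose.yml
# ./wm.yml
# ./prb-wm-data/ds.yml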

Document Editing

You need to edit a total of 3 files: the main PRB docker-compose.yml file, the wm.yml file (worker manager), and the ds.yml file (data source).

First is the main PRB docker-compose file. The configuration below already includes the node-related service. If you do not want to run the node service and the PRB service on the same server, delete the parts you do not need.

Use the following command to edit the docker-compose.yml file.

vim ./docker-compose.yml 

Once the file is open, press a to enter insert mode and paste the following content into it. (Make sure the content and the indentation of every line match this document exactly; YAML is indentation-sensitive.)

version: "3"
services:
  node:
    image: phalanetwork/khala-node:latest
    container_name: node
    hostname: node
    restart: always
    ports:
     - "9944:9944"
     - "9945:9945"
     - "30333:30333"
     - "30334:30334"
    environment:
     - NODE_NAME=PNODE
     - NODE_ROLE=MINER
     - PARACHAIN_EXTRA_ARGS=--max-runtime-instances 32 --runtime-cache-size 8 --rpc-max-response-size 64
     - RELAYCHAIN_EXTRA_ARGS=--max-runtime-instances 32 --runtime-cache-size 8 --rpc-max-response-size 64
    volumes:
     - /var/khala/node-data:/root/data

  wm:
    image: phalanetwork/prb3:latest
    hostname: prb-local
    restart: always
    network_mode: host
    logging:
      options:
        max-size: "1g"
    environment:
      - MGMT_LISTEN_ADDRESSES=0.0.0.0:3001
      - RUST_BACKTRACE=1
      - RUST_LOG=info,pherry=off,phactory_api=off,prb=info
    volumes:
      - ./prb-wm-data:/var/data/prb-wm
  monitor:
    image: phalanetwork/prb3-monitor:latest
    restart: always
    network_mode: host
    volumes:
      - ./wm.yml:/app/public/wm.yml

After pasting the content, complete the following steps to save the file and exit the editor:

1. Press "Esc"
2. Type ":wq"
3. Press "Enter" to save and quit the editor
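
Before moving on, you can optionally ask Docker Compose to validate the file; it will report a parse error if the indentation was damaged while pasting:

sudo docker-compose config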

Next is the wm.yml file. Edit it with the following command:

vim ./wm.yml 

Similarly, press a to enter insert mode and paste the following content into the file.

- name: local-prb
  endpoint: http://127.0.0.1:3001
  proxied: true

After entering the content, save the file and exit the editor:

1. Press "Esc"
2. Type ":wq"
3. Press "Enter" to save and quit the editor

Finally, edit the ds.yml file.

vim ./prb-wm-data/ds.yml 

Press a to enter insert mode and paste the following content into the file.

---
relaychain:
  select_policy: Failover # or Random
  data_sources:
    - !SubstrateWebSocketSource
      endpoint: ws://{node-ip}:9945
      pruned: false
parachain:
  select_policy: Failover
  data_sources:
    - !SubstrateWebSocketSource
      endpoint: ws://{node-ip}:9944
      pruned: false

Two parameters here need to be user-defined: ws://{node-ip}:9945 and ws://{node-ip}:9944.

You need to replace {node-ip} with the IP address of the server where the node is running. If you are running the node and PRB on the same server, use that server's own IP address.
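
For example, assuming the node's IP address is 192.168.1.10 (a placeholder used only for illustration), you can fill in both endpoints in one step:

# Replace the {node-ip} placeholder in ds.yml (192.168.1.10 is an example IP)
sed -i 's/{node-ip}/192.168.1.10/g' ./prb-wm-data/ds.yml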

If you do not need PRBv3 to connect to a headers-cache, delete the following data source entry (if it is present in your ds.yml), where {headerscache-ip} is the IP of the headers-cache server:

- !HeadersCacheHttpSource
  endpoint: http://{headerscache-ip}:21111

After entering the content, save the file and exit the editor:

1. Press "Esc"
2. Type ":wq"
3. Press "Enter" to save and quit the editor

Program Execution

Inside the newly created prb-deployment folder, bring up the services with docker-compose, and the essential PRB components will start running.

sudo docker-compose up -d
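
You can then confirm that the containers are up and follow their logs (the service names match the docker-compose.yml above):

# List the services and their status
sudo docker-compose ps

# Follow the logs of the worker manager service
sudo docker-compose logs -f wm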
