PRBv3 Deployment

Basic Requirements

To use PRBv3 (Runtime Bridge) for worker deployment, you need at least one additional device to act as the management server. The connections between the devices are shown in the following diagram.

The node service and the PRB service can run on the same server if needed, depending on the number of workers and the server's performance.

Server Configuration Requirements

The PRB management server needs to run two main components, Node and PRB. The requirements for each component are as follows:

Component | RAM Space | Disk Space  | Remark
Node      | 4GB+      | 900GB+ NVMe | The disk requirement keeps increasing over time; 2TB is best
PRB       | 4GB+      | 0           | RAM usage depends on the number of workers; 16GB+ is better
Total     | 32GB+     | 2TB         | -

You also need to ensure good network connectivity between the management server and the PRB workers, and the management server's network plan needs to allow more than 10TB of traffic per month.
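
Before deploying, you can quickly check whether a candidate server meets these figures with standard Linux tools (a minimal sketch; the disk that matters is whichever one will hold the node data):

free -h     # total and available RAM
df -h       # disk space per filesystem
nproc       # CPU core count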

PRB Worker Requirements

A PRB worker only needs to run pRuntime, so the requirements for running a PRB worker are modest (a quick verification sketch follows the list):

  • CPU support for Intel SGX

  • Ubuntu 22.04.2 LTS with a system kernel of 5.13 or higher

  • At least 4 CPU cores

  • 8GB of RAM

  • 128GB of NVMe storage
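
As mentioned above, here is a minimal sketch for verifying the kernel version and SGX support on a prospective worker (it assumes the in-kernel SGX driver that ships with kernels 5.13 and later; device node names can vary between setups):

uname -r                                  # kernel version, should be 5.13 or higher
grep -q sgx /proc/cpuinfo && echo "SGX flag present"
ls /dev/sgx_enclave /dev/sgx_provision    # device nodes created by the in-kernel SGX driver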

PRB Components Deployment

Preparations

After installing the Ubuntu OS, first install Docker and Docker Compose.

sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y
sudo apt install -y docker-compose
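
To confirm the installation before continuing, print the installed versions (the exact version numbers will vary):

docker --version
docker-compose --version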

Then create a folder locally, and inside it create the docker-compose file and the other required configuration files.

mkdir prb-deployment
cd ./prb-deployment
touch docker-compose.yml
touch wm.yml
mkdir prb-wm-data
cd ./prb-wm-data
touch ds.yml
cd ..

The resulting file layout looks like this:

  • prb-deployment folder

    • docker-compose.yml

    • wm.yml

    • prb-wm-data folder

      • ds.yml
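
From inside prb-deployment, you can confirm that the layout matches (output order may vary):

find . -type f
# ./docker-compose.yml
# ./wm.yml
# ./prb-wm-data/ds.yml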

File Editing

You need to edit a total of three files: the main PRB docker-compose.yml file, the wm.yml file (worker manager), and the ds.yml file (data source).

First is the main PRB docker-compose file. The configuration below already includes the node-related components. If you don't need to run the node service and the PRB service on the same server, delete the parts you don't need.

Use the following command to open the docker-compose.yml file for editing:

vim ./docker-compose.yml 

Once the file opens, press a to enter insert mode, then paste the following content into it. (Keep the content and the indentation of each line exactly as shown here; YAML is whitespace-sensitive.)

version: "3"
services:
  node:
    image: phalanetwork/phala-node-with-launcher:latest
    container_name: node
    hostname: node
    restart: always
    ports:
     - "9944:9944"
     - "9945:9945"
     - "30333:30333"
     - "30334:30334"
    environment:
     - NODE_NAME=PNODE
     - NODE_ROLE=MINER
     - PARACHAIN_EXTRA_ARGS=--max-runtime-instances 32 --runtime-cache-size 8 --rpc-max-response-size 256
     - RELAYCHAIN_EXTRA_ARGS=--max-runtime-instances 32 --runtime-cache-size 8 --rpc-max-response-size 256
    volumes:
     - /var/phala/node-data:/root/data

  wm:
    image: phalanetwork/prb3:25031701
    hostname: prb-local
    restart: always
    network_mode: host
    logging:
      options:
        max-size: "1g"
    environment:
      - MGMT_LISTEN_ADDRESSES=0.0.0.0:3001
      - RUST_BACKTRACE=1
      - RUST_LOG=info,pherry=off,phactory_api=off,prb=info
    volumes:
      - ./prb-wm-data:/var/data/prb-wm
  
  monitor:
    image: phalanetwork/prb3-monitor:latest
    restart: always
    network_mode: host
    volumes:
      - ./wm.yml:/app/public/wm.yml

When you have finished, save the file and exit the editor:

1. Press Esc
2. Type :wq
3. Press Enter to save and quit
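
Optionally, you can ask Docker Compose to validate the file before launching anything; it parses the YAML and prints the resolved configuration, or an error if something is malformed (run from inside prb-deployment):

sudo docker-compose config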

Next is the wm.yml file. Open it with the following command:

vim ./wm.yml 

As before, press a to enter insert mode and paste the following content into the file.

- name: local-prb
  endpoint: http://127.0.0.1:3001
  proxied: true

After entering the content, save the file and exit the editor:

1. Press Esc
2. Type :wq
3. Press Enter to save and quit

Finally, edit the ds.yml file.

vim ./prb-wm-data/ds.yml 

Press a to enter insert mode and paste the following content into the file.

---
relaychain:
  select_policy: Failover # or Random
  data_sources:
    - !SubstrateWebSocketSource
      endpoint: ws://{node-ip}:9945
      pruned: false
parachain:
  select_policy: Failover
  data_sources:
    - !SubstrateWebSocketSource
      endpoint: ws://{node-ip}:9944
      pruned: false

There are two parameters here that need to be user-defined: ws://{node-ip}:9945 and ws://{node-ip}:9944.

You need to replace {node-ip} with the IP address of the server where the node runs. If you are running the node and PRB on the same server, use that server's own IP address.
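
For example, assuming a hypothetical node IP of 192.168.1.10, a single sed command fills in both endpoints (substitute your actual address):

sed -i 's/{node-ip}/192.168.1.10/g' ./prb-wm-data/ds.yml   # 192.168.1.10 is a placeholder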

If you don't need PRBv3 to connect to a headers-cache, delete both occurrences of the following data-source entry (one under relaychain and one under parachain):

- !HeadersCacheHttpSource
  endpoint: http://{headerscache-ip}:21111

After entering the content, save the file and exit the editor:

1. Press Esc
2. Type :wq
3. Press Enter to save and quit

Program Execution

From inside the newly created prb-deployment folder, bring the stack up with docker-compose, and the essential PRB components will start running.

sudo docker-compose up -d
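
Once the stack is up, you can verify that the containers are running and follow the worker manager's logs (the service names are those defined in docker-compose.yml above):

sudo docker-compose ps
sudo docker-compose logs -f wm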