Merged
2 changes: 1 addition & 1 deletion CLAUDE.md
@@ -99,7 +99,7 @@ make chat-tool
- **Dual Authentication**: Model Hub (email/password) + blockchain (private key)
- **x402 Payments**: Streamed micropayment protocol for LLM inference
- **Digital Twins**: Chat integration with twin.fun personas
- - **Dual Chain**: Base Sepolia (LLM) + OpenGradient testnet (Alpha on-chain inference)
+ - **Dual Chain**: Base mainnet (LLM) + OpenGradient testnet (Alpha on-chain inference)

## Configuration

9 changes: 4 additions & 5 deletions README.md
@@ -52,10 +52,9 @@ For current network RPC endpoints, contract addresses, and deployment information

Before using the SDK, you will need:

- 1. **Private Key**: An Ethereum-compatible wallet private key funded with **Base Sepolia OPG tokens** for x402 LLM payments
- 2. **Test Tokens**: Obtain free test tokens from the [OpenGradient Faucet](https://faucet.opengradient.ai) for testnet LLM inference
- 3. **Alpha Testnet Key** (Optional): A private key funded with **OpenGradient testnet gas tokens** for Alpha Testnet on-chain inference (can be the same or a different key)
- 4. **Model Hub Account** (Optional): Required only for model uploads. Register at [hub.opengradient.ai/signup](https://hub.opengradient.ai/signup)
+ 1. **Private Key**: An Ethereum-compatible wallet private key funded with **Base OPG tokens** for x402 LLM payments
+ 2. **Alpha Testnet Key** (Optional): A private key funded with **OpenGradient testnet gas tokens** for Alpha Testnet on-chain inference (can be the same or a different key)
+ 3. **Model Hub Account** (Optional): Required only for model uploads. Register at [hub.opengradient.ai/signup](https://hub.opengradient.ai/signup)
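The private keys above are typically supplied via environment variables (the examples later in this README use `OG_PRIVATE_KEY`). Below is a minimal sketch of validating the key format at startup; the regex check is our own sanity check, not SDK behavior:

```python
import os
import re

def load_private_key(env_var: str = "OG_PRIVATE_KEY") -> str:
    """Read a hex private key from the environment and sanity-check its format."""
    key = os.environ.get(env_var, "")
    if not re.fullmatch(r"0x[0-9a-fA-F]{64}", key):
        raise ValueError(f"{env_var} must be a 0x-prefixed 64-hex-char private key")
    return key
```

Failing fast on a malformed key gives a clearer error than a later signing failure deep inside the payment stack.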

### Configuration

@@ -87,7 +86,7 @@ The SDK provides separate clients for each service. Create only the ones you need:
import os
import opengradient as og

- # LLM inference — settles via x402 on Base Sepolia using OPG tokens
+ # LLM inference — settles via x402 on Base using OPG tokens
llm = og.LLM(private_key=os.environ.get("OG_PRIVATE_KEY"))

# Alpha Testnet — on-chain inference on the OpenGradient network using testnet gas tokens
2 changes: 1 addition & 1 deletion docs/CLAUDE_SDK_USERS.md
@@ -40,7 +40,7 @@ print(result.chat_output["content"])
Each service has its own client class:

```python
- # LLM inference (Base Sepolia OPG tokens for x402 payments)
+ # LLM inference (Base OPG tokens for x402 payments)
llm = og.LLM(private_key="0x...")

# Connect directly to a known TEE IP instead of using the on-chain registry.
4 changes: 2 additions & 2 deletions docs/opengradient/client/index.md
@@ -10,7 +10,7 @@ OpenGradient Client -- service modules for the SDK.

## Modules

- - **[llm](./llm)** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base Sepolia OPG tokens)
+ - **[llm](./llm)** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base OPG tokens)
- **[model_hub](./model_hub)** -- Model repository management: create, version, and upload ML models
- **[alpha](./alpha)** -- Alpha Testnet features: on-chain ONNX model inference (VANILLA, TEE, ZKML modes), workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens)
- **[twins](./twins)** -- Digital twins chat via OpenGradient verifiable inference
@@ -22,7 +22,7 @@ OpenGradient Client -- service modules for the SDK.
```python
import opengradient as og

- # LLM inference (Base Sepolia OPG tokens)
+ # LLM inference (Base OPG tokens)
llm = og.LLM(private_key="0x...")
llm.ensure_opg_approval(min_allowance=5)
result = await llm.chat(model=og.TEE_LLM.CLAUDE_HAIKU_4_5, messages=[...])
8 changes: 4 additions & 4 deletions docs/opengradient/client/llm.md
@@ -231,11 +231,11 @@ Permit2ApprovalResult: Contains ``allowance_before``,

### Automatic Endpoint Discovery (Production)

- By default, `LLM()` constructor automatically discovers active TEE endpoints from the on-chain TEE registry using the `rpc_url` parameter (defaults to the Base Sepolia testnet).
+ By default, the `LLM()` constructor automatically discovers active TEE endpoints from the on-chain TEE registry using the `rpc_url` parameter (defaults to the OpenGradient devnet).

**Key Points:**
- The TEE endpoint is **dynamically discovered** from the registry
- - x402 payments are always settled on **Base Sepolia**, regardless of which TEE endpoint serves your request
+ - x402 payments are always settled on **Base**, regardless of which TEE endpoint serves your request
- This is the recommended approach for production use

**Example:**
@@ -250,7 +250,7 @@ For development, testing, or self-hosted TEE servers, use `LLM.from_url()` with

**Key Points:**
- TLS certificate verification is disabled (suitable for self-signed certs)
- - x402 payment settlement still occurs on Base Sepolia
+ - x402 payment settlement still occurs on Base
- Intended for non-production environments only

**Example:**
@@ -264,5 +264,5 @@

### Important: x402 Payment Settlement

- Regardless of which TEE endpoint serves your inference request, **x402 payment settlement always occurs on Base Sepolia blockchain**. This ensures all payments are recorded on-chain for auditability and ensures consistent settlement across all TEE providers.
+ Regardless of which TEE endpoint serves your inference request, **x402 payment settlement always occurs on the Base blockchain**. This records all payments on-chain for auditability and ensures consistent settlement across all TEE providers.
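OPG is an ERC-20 token, so on-chain settlement amounts are integer base units. Assuming the standard 18 decimals (an assumption; verify against the OPG token contract), the conversion looks like this:

```python
from decimal import Decimal

OPG_DECIMALS = 18  # assumed standard ERC-20 precision; verify on the token contract

def to_base_units(amount: str, decimals: int = OPG_DECIMALS) -> int:
    """Convert a human-readable token amount like '5' or '0.25' to integer base units."""
    scaled = Decimal(amount) * (Decimal(10) ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError(f"{amount!r} has more precision than {decimals} decimals")
    return int(scaled)
```

Using `Decimal` on the string form avoids binary-float rounding, which matters when amounts end up in signed payment payloads.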

2 changes: 1 addition & 1 deletion docs/opengradient/index.md
@@ -19,7 +19,7 @@ inference was performed correctly.

The SDK operates across two chains with separate private keys:

- - **[llm](./client/llm)** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base Sepolia** (requires OPG tokens).
+ - **[llm](./client/llm)** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base** (requires OPG tokens).
- **[alpha](./client/alpha)** (``og.Alpha``) -- On-chain ONNX model inference with VANILLA, TEE, or ZKML verification. Pays gas on the **OpenGradient alpha testnet**.
- **[model_hub](./client/model_hub)** (``og.ModelHub``) -- Model repository management: create, version, and upload ML models. Requires email/password auth.
- **[twins](./client/twins)** (``og.Twins``) -- Digital twins chat via verifiable inference. Requires a twins API key.
4 changes: 2 additions & 2 deletions examples/README.md
@@ -8,7 +8,7 @@ Before running any examples, ensure you have:

1. **Installed the SDK**: `pip install opengradient`
2. **Set up your credentials**: Configure your OpenGradient account using environment variables:
- - `OG_PRIVATE_KEY`: Private key funded with **Base Sepolia OPG tokens** for x402 LLM payments (can be obtained from our [faucet](https://faucet.opengradient.ai)). Also used for Alpha Testnet on-chain inference (requires **OpenGradient testnet gas tokens**).
+ - `OG_PRIVATE_KEY`: Private key funded with **Base OPG tokens** for x402 LLM payments.
- `OG_MODEL_HUB_EMAIL`: (Optional) Your Model Hub email for model uploads
- `OG_MODEL_HUB_PASSWORD`: (Optional) Your Model Hub password for model uploads

@@ -140,7 +140,7 @@ Each sub-client is created independently with the credentials it needs:
import os
import opengradient as og

- # LLM inference (Base Sepolia OPG tokens for x402 payments)
+ # LLM inference (Base OPG tokens for x402 payments)
llm = og.LLM(private_key=os.environ.get("OG_PRIVATE_KEY"))

# On-chain model inference (OpenGradient testnet gas tokens)
4 changes: 2 additions & 2 deletions integrationtest/README.md
@@ -6,12 +6,12 @@ End-to-end tests that exercise the OpenGradient SDK against live services.

### LLM (`llm/`)

- Tests LLM chat and streaming chat via the x402 payment flow on Base Sepolia.
+ Tests LLM chat and streaming chat via the x402 payment flow on Base.

Each run creates a **fresh Ethereum account**, funds it with ETH (for gas) and OPG tokens from a funder wallet, approves Permit2, and then runs chat requests against a TEE-verified model.

**Requirements:**
- - `PRIVATE_KEY` env var — private key of a funded wallet on Base Sepolia that holds OPG tokens.
+ - `PRIVATE_KEY` env var — private key of a funded wallet on Base that holds OPG tokens.

```bash
make llm_integrationtest
4 changes: 2 additions & 2 deletions integrationtest/llm/test_llm_chat.py
@@ -6,7 +6,7 @@
from web3 import Web3

import opengradient as og
- from opengradient.client.opg_token import BASE_OPG_ADDRESS, BASE_SEPOLIA_RPC
+ from opengradient.client.opg_token import BASE_OPG_ADDRESS, BASE_MAINNET_RPC

# Minimal ERC20 ABI for transfer
ERC20_TRANSFER_ABI = [
@@ -37,7 +37,7 @@

def _fund_account(funder_key: str, recipient_address: str):
"""Transfer ETH (for gas) and OPG tokens from the funder to the recipient."""
- w3 = Web3(Web3.HTTPProvider(BASE_SEPOLIA_RPC))
+ w3 = Web3(Web3.HTTPProvider(BASE_MAINNET_RPC))
funder = Account.from_key(funder_key)
funder_addr = Web3.to_checksum_address(funder.address)
recipient = Web3.to_checksum_address(recipient_address)
2 changes: 1 addition & 1 deletion src/opengradient/__init__.py
@@ -10,7 +10,7 @@

The SDK operates across two chains with separate private keys:

- - **`opengradient.client.llm`** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base Sepolia** (requires OPG tokens).
+ - **`opengradient.client.llm`** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base** (requires OPG tokens).
- **`opengradient.client.alpha`** (``og.Alpha``) -- On-chain ONNX model inference with VANILLA, TEE, or ZKML verification. Pays gas on the **OpenGradient alpha testnet**.
- **`opengradient.client.model_hub`** (``og.ModelHub``) -- Model repository management: create, version, and upload ML models. Requires email/password auth.
- **`opengradient.client.twins`** (``og.Twins``) -- Digital twins chat via verifiable inference. Requires a twins API key.
4 changes: 2 additions & 2 deletions src/opengradient/client/__init__.py
@@ -3,7 +3,7 @@

## Modules

- - **`opengradient.client.llm`** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base Sepolia OPG tokens)
+ - **`opengradient.client.llm`** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base OPG tokens)
- **`opengradient.client.model_hub`** -- Model repository management: create, version, and upload ML models
- **`opengradient.client.alpha`** -- Alpha Testnet features: on-chain ONNX model inference (VANILLA, TEE, ZKML modes), workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens)
- **`opengradient.client.twins`** -- Digital twins chat via OpenGradient verifiable inference
@@ -15,7 +15,7 @@
```python
import opengradient as og

- # LLM inference (Base Sepolia OPG tokens)
+ # LLM inference (Base OPG tokens)
llm = og.LLM(private_key="0x...")
llm.ensure_opg_approval(min_allowance=5)
result = await llm.chat(model=og.TEE_LLM.CLAUDE_HAIKU_4_5, messages=[...])
13 changes: 6 additions & 7 deletions src/opengradient/client/llm.py
@@ -28,14 +28,13 @@

X402_PROCESSING_HASH_HEADER = "x-processing-hash"
X402_PLACEHOLDER_API_KEY = "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
- BASE_TESTNET_NETWORK = "eip155:84532"
- BASE_TESTNET_RPC = os.getenv("BASE_TESTNET_RPC", "https://sepolia.base.org")
+ BASE_MAINNET_NETWORK = "eip155:8453"
+ BASE_MAINNET_RPC = os.getenv("BASE_MAINNET_RPC", "https://base-rpc.publicnode.com")

_CHAT_ENDPOINT = "/v1/chat/completions"
_COMPLETION_ENDPOINT = "/v1/completions"
_REQUEST_TIMEOUT = 60


@dataclass(frozen=True)
class _ChatParams:
"""Bundles the common parameters for chat/completion requests."""
@@ -87,7 +86,7 @@ def __init__(
raise ValueError("A private key is required to use the LLM client. Pass a valid private_key to the constructor.")
self._wallet_account: LocalAccount = Account.from_key(private_key)

- x402_client = LLM._build_x402_client(private_key, rpc_url=BASE_TESTNET_RPC)
+ x402_client = LLM._build_x402_client(private_key, rpc_url=BASE_MAINNET_RPC)
onchain_registry = TEERegistry(rpc_url=rpc_url, registry_address=tee_registry_address)
self._tee: TEEConnectionInterface = RegistryTEEConnection(x402_client=x402_client, registry=onchain_registry)

@@ -117,16 +116,16 @@ def from_url(
return instance

@staticmethod
- def _build_x402_client(private_key: str, rpc_url: str = BASE_TESTNET_RPC) -> x402Client:
+ def _build_x402_client(private_key: str, rpc_url: str = BASE_MAINNET_RPC) -> x402Client:
"""Build the x402 payment stack from a private key."""
account = Account.from_key(private_key)
signer = EthAccountSigner(account)
client = x402Client()
- register_exact_evm_client(client, signer, networks=[BASE_TESTNET_NETWORK])
+ register_exact_evm_client(client, signer, networks=[BASE_MAINNET_NETWORK])
register_upto_evm_client(
client,
signer,
- networks=[BASE_TESTNET_NETWORK],
+ networks=[BASE_MAINNET_NETWORK],
rpc_url=rpc_url,
)
return client
7 changes: 4 additions & 3 deletions src/opengradient/client/opg_token.py
@@ -1,5 +1,6 @@
"""OPG token Permit2 approval utilities for x402 payments."""

+ import os
import logging
import time
from dataclasses import dataclass
@@ -12,8 +13,8 @@

logger = logging.getLogger(__name__)

- BASE_OPG_ADDRESS = "0x240b09731D96979f50B2C649C9CE10FcF9C7987F"
- BASE_SEPOLIA_RPC = "https://sepolia.base.org"
+ BASE_OPG_ADDRESS = "0xFbC2051AE2265686a469421b2C5A2D5462FbF5eB"
+ BASE_MAINNET_RPC = os.getenv("BASE_MAINNET_RPC", "https://base-rpc.publicnode.com")
APPROVAL_TX_TIMEOUT = 120
ALLOWANCE_CONFIRMATION_TIMEOUT = 120
ALLOWANCE_POLL_INTERVAL = 1.0
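The `ALLOWANCE_CONFIRMATION_TIMEOUT` and `ALLOWANCE_POLL_INTERVAL` constants imply a poll-until-confirmed loop. The sketch below shows that pattern under assumed semantics; the SDK's actual loop is outside this diff:

```python
import time

def wait_for_allowance(read_allowance, min_allowance: int,
                       timeout: float = 120.0, interval: float = 1.0) -> bool:
    """Poll read_allowance() until it reaches min_allowance or timeout seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_allowance() >= min_allowance:
            return True
        time.sleep(interval)
    return False
```

`time.monotonic()` is used rather than `time.time()` so the deadline is immune to wall-clock adjustments during the wait.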
@@ -138,7 +139,7 @@ def _send_approve_tx(

def _get_web3_and_contract():
"""Create a Web3 instance and OPG token contract."""
- w3 = Web3(Web3.HTTPProvider(BASE_SEPOLIA_RPC))
+ w3 = Web3(Web3.HTTPProvider(BASE_MAINNET_RPC))
token = w3.eth.contract(address=Web3.to_checksum_address(BASE_OPG_ADDRESS), abi=ERC20_ABI)
spender = Web3.to_checksum_address(PERMIT2_ADDRESS)
return w3, token, spender
2 changes: 1 addition & 1 deletion tests/opg_token_test.py
@@ -54,7 +54,7 @@ def _setup_approval_mocks(mock_web3, mock_wallet, contract):

mock_web3.eth.get_transaction_count.return_value = 7
mock_web3.eth.gas_price = 1_000_000_000
- mock_web3.eth.chain_id = 84532
+ mock_web3.eth.chain_id = 8453

signed = MagicMock()
signed.raw_transaction = b"\x00"
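For context, a self-contained illustration of the `MagicMock` pattern this test relies on: attributes set on the mock stand in for live node reads (8453 is Base mainnet's chain id, replacing Sepolia's 84532):

```python
from unittest.mock import MagicMock

mock_web3 = MagicMock()
mock_web3.eth.chain_id = 8453  # Base mainnet; the old Sepolia value was 84532
mock_web3.eth.get_transaction_count.return_value = 7
mock_web3.eth.gas_price = 1_000_000_000

# Code under test reads these exactly as it would from a live Web3 instance:
assert mock_web3.eth.chain_id == 8453
assert mock_web3.eth.get_transaction_count("0xFunder") == 7
```

Because `MagicMock` auto-creates nested attributes, no real RPC connection is needed to exercise the transaction-building path.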
5 changes: 2 additions & 3 deletions tutorials/01-verifiable-ai-agent.md
@@ -29,9 +29,8 @@ Ethereum private key works -- you can generate one with any Ethereum wallet.
export OG_PRIVATE_KEY="0x..."
```

- > **Faucet:** Get free OPG tokens on Base Sepolia at <https://faucet.opengradient.ai/>
- > so your wallet can pay for inference transactions. All x402 LLM payments currently
- > settle on Base Sepolia.
+ > All x402 LLM payments currently settle on Base.

## Step 1: Initialize and Create the LangChain Adapter

6 changes: 2 additions & 4 deletions tutorials/02-streaming-multi-provider.md
@@ -27,10 +27,8 @@ Export your OpenGradient private key:
export OG_PRIVATE_KEY="0x..."
```

- > **Faucet:** Get free OPG tokens on Base Sepolia at https://faucet.opengradient.ai/
- >
- > All x402 LLM payments currently settle on Base Sepolia using OPG tokens. If you see
- > x402 payment errors, make sure your wallet has sufficient OPG on Base Sepolia.
+ > All x402 LLM payments currently settle on Base using OPG tokens.

## Step 1: Basic Non-Streaming Chat

5 changes: 1 addition & 4 deletions tutorials/03-verified-tool-calling.md
@@ -28,10 +28,7 @@ You need an OpenGradient private key funded with test tokens:
export OG_PRIVATE_KEY="0x..."
```

- > **Faucet:** Get free OPG tokens on Base Sepolia at https://faucet.opengradient.ai/
- >
- > All x402 LLM payments currently settle on Base Sepolia using OPG tokens. If you see
- > x402 payment errors, make sure your wallet has sufficient OPG on Base Sepolia.
+ > All x402 LLM payments currently settle on Base using OPG tokens.

## Step 1: Initialize the Client
