diff --git a/CLAUDE.md b/CLAUDE.md index d19c36e..3180c0d 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -99,7 +99,7 @@ make chat-tool - **Dual Authentication**: Model Hub (email/password) + blockchain (private key) - **x402 Payments**: Streamed micropayment protocol for LLM inference - **Digital Twins**: Chat integration with twin.fun personas -- **Dual Chain**: Base Sepolia (LLM) + OpenGradient testnet (Alpha on-chain inference) +- **Dual Chain**: Base mainnet (LLM) + OpenGradient testnet (Alpha on-chain inference) ## Configuration diff --git a/README.md b/README.md index f278d67..a57412a 100644 --- a/README.md +++ b/README.md @@ -52,10 +52,9 @@ For current network RPC endpoints, contract addresses, and deployment informatio Before using the SDK, you will need: -1. **Private Key**: An Ethereum-compatible wallet private key funded with **Base Sepolia OPG tokens** for x402 LLM payments -2. **Test Tokens**: Obtain free test tokens from the [OpenGradient Faucet](https://faucet.opengradient.ai) for testnet LLM inference -3. **Alpha Testnet Key** (Optional): A private key funded with **OpenGradient testnet gas tokens** for Alpha Testnet on-chain inference (can be the same or a different key) -4. **Model Hub Account** (Optional): Required only for model uploads. Register at [hub.opengradient.ai/signup](https://hub.opengradient.ai/signup) +1. **Private Key**: An Ethereum-compatible wallet private key funded with **Base OPG tokens** for x402 LLM payments +2. **Alpha Testnet Key** (Optional): A private key funded with **OpenGradient testnet gas tokens** for Alpha Testnet on-chain inference (can be the same or a different key) +3. **Model Hub Account** (Optional): Required only for model uploads. Register at [hub.opengradient.ai/signup](https://hub.opengradient.ai/signup) ### Configuration @@ -87,7 +86,7 @@ The SDK provides separate clients for each service. 
Create only the ones you need: import os import opengradient as og -# LLM inference — settles via x402 on Base Sepolia using OPG tokens +# LLM inference — settles via x402 on Base using OPG tokens llm = og.LLM(private_key=os.environ.get("OG_PRIVATE_KEY")) # Alpha Testnet — on-chain inference on the OpenGradient network using testnet gas tokens diff --git a/docs/CLAUDE_SDK_USERS.md b/docs/CLAUDE_SDK_USERS.md index 1619c68..2d049cd 100644 --- a/docs/CLAUDE_SDK_USERS.md +++ b/docs/CLAUDE_SDK_USERS.md @@ -40,7 +40,7 @@ print(result.chat_output["content"]) Each service has its own client class: ```python -# LLM inference (Base Sepolia OPG tokens for x402 payments) +# LLM inference (Base OPG tokens for x402 payments) llm = og.LLM(private_key="0x...") # Connect directly to a known TEE IP instead of using the on-chain registry. diff --git a/docs/opengradient/client/index.md b/docs/opengradient/client/index.md index 0a9397f..c32e8cb 100644 --- a/docs/opengradient/client/index.md +++ b/docs/opengradient/client/index.md @@ -10,7 +10,7 @@ OpenGradient Client -- service modules for the SDK. ## Modules -- **[llm](./llm)** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base Sepolia OPG tokens) +- **[llm](./llm)** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base OPG tokens) - **[model_hub](./model_hub)** -- Model repository management: create, version, and upload ML models - **[alpha](./alpha)** -- Alpha Testnet features: on-chain ONNX model inference (VANILLA, TEE, ZKML modes), workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens) - **[twins](./twins)** -- Digital twins chat via OpenGradient verifiable inference @@ -22,7 +22,7 @@ OpenGradient Client -- service modules for the SDK. 
```python import opengradient as og -# LLM inference (Base Sepolia OPG tokens) +# LLM inference (Base OPG tokens) llm = og.LLM(private_key="0x...") llm.ensure_opg_approval(min_allowance=5) result = await llm.chat(model=og.TEE_LLM.CLAUDE_HAIKU_4_5, messages=[...]) diff --git a/docs/opengradient/client/llm.md b/docs/opengradient/client/llm.md index 274be67..8c4f548 100644 --- a/docs/opengradient/client/llm.md +++ b/docs/opengradient/client/llm.md @@ -231,11 +231,11 @@ Permit2ApprovalResult: Contains ``allowance_before``, ### Automatic Endpoint Discovery (Production) -By default, `LLM()` constructor automatically discovers active TEE endpoints from the on-chain TEE registry using the `rpc_url` parameter (defaults to the Base Sepolia testnet). +By default, the `LLM()` constructor automatically discovers active TEE endpoints from the on-chain TEE registry using the `rpc_url` parameter (defaults to the OpenGradient devnet). **Key Points:** - The TEE endpoint is **dynamically discovered** from the registry -- x402 payments are always settled on **Base Sepolia**, regardless of which TEE endpoint serves your request +- x402 payments are always settled on **Base**, regardless of which TEE endpoint serves your request - This is the recommended approach for production use **Example:** @@ -250,7 +250,7 @@ For development, testing, or self-hosted TEE servers, use `LLM.from_url()` with **Key Points:** - TLS certificate verification is disabled (suitable for self-signed certs) -- x402 payment settlement still occurs on Base Sepolia +- x402 payment settlement still occurs on Base - Intended for non-production environments only **Example:** @@ -264,5 +264,5 @@ llm = og.LLM.from_url( ### Important: x402 Payment Settlement -Regardless of which TEE endpoint serves your inference request, **x402 payment settlement always occurs on Base Sepolia blockchain**. This ensures all payments are recorded on-chain for auditability and ensures consistent settlement across all TEE providers. 
+Regardless of which TEE endpoint serves your inference request, **x402 payment settlement always occurs on the Base blockchain**. This keeps all payments recorded on-chain for auditability and ensures consistent settlement across all TEE providers. diff --git a/docs/opengradient/index.md b/docs/opengradient/index.md index c9b01fd..3901436 100644 --- a/docs/opengradient/index.md +++ b/docs/opengradient/index.md @@ -19,7 +19,7 @@ inference was performed correctly. The SDK operates across two chains with separate private keys: -- **[llm](./client/llm)** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base Sepolia** (requires OPG tokens). +- **[llm](./client/llm)** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base** (requires OPG tokens). - **[alpha](./client/alpha)** (``og.Alpha``) -- On-chain ONNX model inference with VANILLA, TEE, or ZKML verification. Pays gas on the **OpenGradient alpha testnet**. - **[model_hub](./client/model_hub)** (``og.ModelHub``) -- Model repository management: create, version, and upload ML models. Requires email/password auth. - **[twins](./client/twins)** (``og.Twins``) -- Digital twins chat via verifiable inference. Requires a twins API key. diff --git a/examples/README.md b/examples/README.md index cdd321f..4f8c76f 100644 --- a/examples/README.md +++ b/examples/README.md @@ -8,7 +8,7 @@ Before running any examples, ensure you have: 1. **Installed the SDK**: `pip install opengradient` 2. **Set up your credentials**: Configure your OpenGradient account using environment variables: - - `OG_PRIVATE_KEY`: Private key funded with **Base Sepolia OPG tokens** for x402 LLM payments (can be obtained from our [faucet](https://faucet.opengradient.ai)). Also used for Alpha Testnet on-chain inference (requires **OpenGradient testnet gas tokens**). + - `OG_PRIVATE_KEY`: Private key funded with **Base OPG tokens** for x402 LLM payments. Also used for Alpha Testnet on-chain inference (requires **OpenGradient testnet gas tokens**). 
- `OG_MODEL_HUB_EMAIL`: (Optional) Your Model Hub email for model uploads - `OG_MODEL_HUB_PASSWORD`: (Optional) Your Model Hub password for model uploads @@ -140,7 +140,7 @@ Each sub-client is created independently with the credentials it needs: import os import opengradient as og -# LLM inference (Base Sepolia OPG tokens for x402 payments) +# LLM inference (Base OPG tokens for x402 payments) llm = og.LLM(private_key=os.environ.get("OG_PRIVATE_KEY")) # On-chain model inference (OpenGradient testnet gas tokens) diff --git a/integrationtest/README.md b/integrationtest/README.md index 1846f73..ba3aafa 100644 --- a/integrationtest/README.md +++ b/integrationtest/README.md @@ -6,12 +6,12 @@ End-to-end tests that exercise the OpenGradient SDK against live services. ### LLM (`llm/`) -Tests LLM chat and streaming chat via the x402 payment flow on Base Sepolia. +Tests LLM chat and streaming chat via the x402 payment flow on Base. Each run creates a **fresh Ethereum account**, funds it with ETH (for gas) and OPG tokens from a funder wallet, approves Permit2, and then runs chat requests against a TEE-verified model. **Requirements:** -- `PRIVATE_KEY` env var — private key of a funded wallet on Base Sepolia that holds OPG tokens. +- `PRIVATE_KEY` env var — private key of a funded wallet on Base that holds OPG tokens. 
```bash make llm_integrationtest diff --git a/integrationtest/llm/test_llm_chat.py b/integrationtest/llm/test_llm_chat.py index 1241e01..b8d618f 100644 --- a/integrationtest/llm/test_llm_chat.py +++ b/integrationtest/llm/test_llm_chat.py @@ -6,7 +6,7 @@ from web3 import Web3 import opengradient as og -from opengradient.client.opg_token import BASE_OPG_ADDRESS, BASE_SEPOLIA_RPC +from opengradient.client.opg_token import BASE_OPG_ADDRESS, BASE_MAINNET_RPC # Minimal ERC20 ABI for transfer ERC20_TRANSFER_ABI = [ @@ -37,7 +37,7 @@ def _fund_account(funder_key: str, recipient_address: str): """Transfer ETH (for gas) and OPG tokens from the funder to the recipient.""" - w3 = Web3(Web3.HTTPProvider(BASE_SEPOLIA_RPC)) + w3 = Web3(Web3.HTTPProvider(BASE_MAINNET_RPC)) funder = Account.from_key(funder_key) funder_addr = Web3.to_checksum_address(funder.address) recipient = Web3.to_checksum_address(recipient_address) diff --git a/src/opengradient/__init__.py b/src/opengradient/__init__.py index a198203..201513a 100644 --- a/src/opengradient/__init__.py +++ b/src/opengradient/__init__.py @@ -10,7 +10,7 @@ The SDK operates across two chains with separate private keys: -- **`opengradient.client.llm`** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base Sepolia** (requires OPG tokens). +- **`opengradient.client.llm`** (``og.LLM``) -- LLM chat and completion with TEE-verified execution. Pays via x402 on **Base** (requires OPG tokens). - **`opengradient.client.alpha`** (``og.Alpha``) -- On-chain ONNX model inference with VANILLA, TEE, or ZKML verification. Pays gas on the **OpenGradient alpha testnet**. - **`opengradient.client.model_hub`** (``og.ModelHub``) -- Model repository management: create, version, and upload ML models. Requires email/password auth. - **`opengradient.client.twins`** (``og.Twins``) -- Digital twins chat via verifiable inference. Requires a twins API key. 
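The integration tests described above fund a fresh account with ETH and OPG before approving Permit2. ERC-20 transfers like those are denominated in the token's smallest unit (wei-style integers), so a unit-conversion step is involved. A minimal, self-contained sketch of that conversion, assuming the standard 18-decimal ERC-20 configuration for OPG (the helper names are hypothetical, not part of the SDK):

```python
from decimal import Decimal

OPG_DECIMALS = 18  # assumption: OPG uses the standard ERC-20 decimal count


def to_base_units(amount: str, decimals: int = OPG_DECIMALS) -> int:
    """Convert a human-readable token amount to integer base units."""
    scaled = Decimal(amount) * (Decimal(10) ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError(f"{amount} has more than {decimals} decimal places")
    return int(scaled)


def from_base_units(units: int, decimals: int = OPG_DECIMALS) -> Decimal:
    """Inverse conversion, useful when displaying balances."""
    return Decimal(units) / (Decimal(10) ** decimals)


# e.g. funding a test account with 5 OPG means transferring 5 * 10**18 base units
assert to_base_units("5") == 5 * 10**18
assert from_base_units(1_500_000_000_000_000_000) == Decimal("1.5")
```

Using `Decimal` rather than `float` avoids the rounding drift that binary floats introduce for amounts like `0.1`, which matters when the integer must match an on-chain transfer exactly.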
diff --git a/src/opengradient/client/__init__.py b/src/opengradient/client/__init__.py index 7206d38..9ac760b 100644 --- a/src/opengradient/client/__init__.py +++ b/src/opengradient/client/__init__.py @@ -3,7 +3,7 @@ ## Modules -- **`opengradient.client.llm`** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base Sepolia OPG tokens) +- **`opengradient.client.llm`** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base OPG tokens) - **`opengradient.client.model_hub`** -- Model repository management: create, version, and upload ML models - **`opengradient.client.alpha`** -- Alpha Testnet features: on-chain ONNX model inference (VANILLA, TEE, ZKML modes), workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens) - **`opengradient.client.twins`** -- Digital twins chat via OpenGradient verifiable inference @@ -15,7 +15,7 @@ ```python import opengradient as og -# LLM inference (Base Sepolia OPG tokens) +# LLM inference (Base OPG tokens) llm = og.LLM(private_key="0x...") llm.ensure_opg_approval(min_allowance=5) result = await llm.chat(model=og.TEE_LLM.CLAUDE_HAIKU_4_5, messages=[...]) diff --git a/src/opengradient/client/llm.py b/src/opengradient/client/llm.py index b850ba1..2d03512 100644 --- a/src/opengradient/client/llm.py +++ b/src/opengradient/client/llm.py @@ -28,14 +28,14 @@ X402_PROCESSING_HASH_HEADER = "x-processing-hash" X402_PLACEHOLDER_API_KEY = "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef" -BASE_TESTNET_NETWORK = "eip155:84532" -BASE_TESTNET_RPC = os.getenv("BASE_TESTNET_RPC", "https://sepolia.base.org") +BASE_MAINNET_NETWORK = "eip155:8453" +BASE_MAINNET_RPC = os.getenv("BASE_MAINNET_RPC", "https://base-rpc.publicnode.com") _CHAT_ENDPOINT = "/v1/chat/completions" _COMPLETION_ENDPOINT = "/v1/completions" _REQUEST_TIMEOUT = 60 @dataclass(frozen=True) class _ChatParams: """Bundles the common parameters for chat/completion 
requests.""" @@ -87,7 +86,7 @@ def __init__( raise ValueError("A private key is required to use the LLM client. Pass a valid private_key to the constructor.") self._wallet_account: LocalAccount = Account.from_key(private_key) - x402_client = LLM._build_x402_client(private_key, rpc_url=BASE_TESTNET_RPC) + x402_client = LLM._build_x402_client(private_key, rpc_url=BASE_MAINNET_RPC) onchain_registry = TEERegistry(rpc_url=rpc_url, registry_address=tee_registry_address) self._tee: TEEConnectionInterface = RegistryTEEConnection(x402_client=x402_client, registry=onchain_registry) @@ -117,16 +116,16 @@ def from_url( return instance @staticmethod - def _build_x402_client(private_key: str, rpc_url: str = BASE_TESTNET_RPC) -> x402Client: + def _build_x402_client(private_key: str, rpc_url: str = BASE_MAINNET_RPC) -> x402Client: """Build the x402 payment stack from a private key.""" account = Account.from_key(private_key) signer = EthAccountSigner(account) client = x402Client() - register_exact_evm_client(client, signer, networks=[BASE_TESTNET_NETWORK]) + register_exact_evm_client(client, signer, networks=[BASE_MAINNET_NETWORK]) register_upto_evm_client( client, signer, - networks=[BASE_TESTNET_NETWORK], + networks=[BASE_MAINNET_NETWORK], rpc_url=rpc_url, ) return client diff --git a/src/opengradient/client/opg_token.py b/src/opengradient/client/opg_token.py index 864dd64..7c4904d 100644 --- a/src/opengradient/client/opg_token.py +++ b/src/opengradient/client/opg_token.py @@ -1,5 +1,6 @@ """OPG token Permit2 approval utilities for x402 payments.""" +import os import logging import time from dataclasses import dataclass @@ -12,8 +13,8 @@ logger = logging.getLogger(__name__) -BASE_OPG_ADDRESS = "0x240b09731D96979f50B2C649C9CE10FcF9C7987F" -BASE_SEPOLIA_RPC = "https://sepolia.base.org" +BASE_OPG_ADDRESS = "0xFbC2051AE2265686a469421b2C5A2D5462FbF5eB" +BASE_MAINNET_RPC = os.getenv("BASE_MAINNET_RPC", "https://base-rpc.publicnode.com") APPROVAL_TX_TIMEOUT = 120 
ALLOWANCE_CONFIRMATION_TIMEOUT = 120 ALLOWANCE_POLL_INTERVAL = 1.0 @@ -138,7 +139,7 @@ def _send_approve_tx( def _get_web3_and_contract(): """Create a Web3 instance and OPG token contract.""" - w3 = Web3(Web3.HTTPProvider(BASE_SEPOLIA_RPC)) + w3 = Web3(Web3.HTTPProvider(BASE_MAINNET_RPC)) token = w3.eth.contract(address=Web3.to_checksum_address(BASE_OPG_ADDRESS), abi=ERC20_ABI) spender = Web3.to_checksum_address(PERMIT2_ADDRESS) return w3, token, spender diff --git a/tests/opg_token_test.py b/tests/opg_token_test.py index b237ccd..cfddf2f 100644 --- a/tests/opg_token_test.py +++ b/tests/opg_token_test.py @@ -54,7 +54,7 @@ def _setup_approval_mocks(mock_web3, mock_wallet, contract): mock_web3.eth.get_transaction_count.return_value = 7 mock_web3.eth.gas_price = 1_000_000_000 - mock_web3.eth.chain_id = 84532 + mock_web3.eth.chain_id = 8453 signed = MagicMock() signed.raw_transaction = b"\x00" diff --git a/tutorials/01-verifiable-ai-agent.md b/tutorials/01-verifiable-ai-agent.md index b7737a8..22a31ab 100644 --- a/tutorials/01-verifiable-ai-agent.md +++ b/tutorials/01-verifiable-ai-agent.md @@ -29,9 +29,8 @@ Ethereum private key works -- you can generate one with any Ethereum wallet. export OG_PRIVATE_KEY="0x..." ``` -> **Faucet:** Get free OPG tokens on Base Sepolia at -> so your wallet can pay for inference transactions. All x402 LLM payments currently -> settle on Base Sepolia. +> All x402 LLM payments currently +> settle on Base. ## Step 1: Initialize and Create the LangChain Adapter diff --git a/tutorials/02-streaming-multi-provider.md b/tutorials/02-streaming-multi-provider.md index e941137..1ab3baf 100644 --- a/tutorials/02-streaming-multi-provider.md +++ b/tutorials/02-streaming-multi-provider.md @@ -27,10 +27,8 @@ Export your OpenGradient private key: export OG_PRIVATE_KEY="0x..." ``` -> **Faucet:** Get free OPG tokens on Base Sepolia at https://faucet.opengradient.ai/ -> -> All x402 LLM payments currently settle on Base Sepolia using OPG tokens. 
If you see -> x402 payment errors, make sure your wallet has sufficient OPG on Base Sepolia. + +> All x402 LLM payments currently settle on Base using OPG tokens. ## Step 1: Basic Non-Streaming Chat diff --git a/tutorials/03-verified-tool-calling.md b/tutorials/03-verified-tool-calling.md index 8ddead3..adfc5d5 100644 --- a/tutorials/03-verified-tool-calling.md +++ b/tutorials/03-verified-tool-calling.md @@ -28,10 +28,7 @@ You need an OpenGradient private key funded with test tokens: export OG_PRIVATE_KEY="0x..." ``` -> **Faucet:** Get free OPG tokens on Base Sepolia at https://faucet.opengradient.ai/ -> -> All x402 LLM payments currently settle on Base Sepolia using OPG tokens. If you see -> x402 payment errors, make sure your wallet has sufficient OPG on Base Sepolia. +> All x402 LLM payments currently settle on Base using OPG tokens. ## Step 1: Initialize the Client
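Taken together, these hunks migrate every x402 constant from Base Sepolia (chain id 84532, CAIP-2 `eip155:84532`) to Base mainnet (chain id 8453, `eip155:8453`) and swap in a new OPG token address. When reviewing a migration like this, a quick offline consistency check over the new constants can catch typos before anything touches a live RPC. A sketch in pure Python (helper names are hypothetical; the constant values are copied from the patch):

```python
import re

# Constants as introduced by this patch
BASE_MAINNET_CHAIN_ID = 8453
BASE_MAINNET_NETWORK = "eip155:8453"  # CAIP-2 identifier used by the x402 stack
BASE_OPG_ADDRESS = "0xFbC2051AE2265686a469421b2C5A2D5462FbF5eB"


def caip2_chain_id(network: str) -> int:
    """Extract the numeric chain id from a CAIP-2 EVM network string."""
    match = re.fullmatch(r"eip155:(\d+)", network)
    if match is None:
        raise ValueError(f"not an eip155 CAIP-2 identifier: {network!r}")
    return int(match.group(1))


def is_hex_address(addr: str) -> bool:
    """Shallow sanity check: 0x-prefixed, exactly 20 bytes of hex.

    (Does not verify the EIP-55 checksum; web3's to_checksum_address does that.)
    """
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None


# The CAIP-2 network string must agree with the numeric chain id,
# and the token address must be well-formed.
assert caip2_chain_id(BASE_MAINNET_NETWORK) == BASE_MAINNET_CHAIN_ID
assert is_hex_address(BASE_OPG_ADDRESS)
```

The same check applied to the old constants (`eip155:84532` against 84532) also passes, which is the point: both sides of the rename stay internally consistent, and only the pairing changes.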