11 changes: 8 additions & 3 deletions README.md
@@ -63,6 +63,8 @@ For CUDA 12.x:
pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh-client==25.5.* nvidia-cuda-runtime-cu12==12.8.*
```

Development wheels are published as nightlies. To install the latest nightly packages, set `--extra-index-url` to `https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/`.
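
For example, a minimal sketch of a nightly install (the `--pre` flag and the unpinned package versions are assumptions here, since nightly builds carry pre-release version numbers):

```bash
pip install --pre --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
    cuopt-server-cu12 cuopt-sh-client nvidia-cuda-runtime-cu12==12.8.*
```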

### Conda

cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel:
@@ -74,19 +76,22 @@ Users who are used to conda env based workflows would benefit with conda package
For CUDA 12.x:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8
cuopt-server=25.05.* cuopt-sh-client=25.05.* python=3.12 cuda-version=12.8
```

We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.
of our latest development branch. Just replace `-c rapidsai` with `-c rapidsai-nightly`.
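
For example, a minimal sketch of the nightly install (version pins omitted as an assumption, since nightly version numbers differ from the stable release):

```bash
conda install -c rapidsai-nightly -c conda-forge -c nvidia \
    cuopt-server cuopt-sh-client python=3.12 cuda-version=12.8
```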

### Container

Users can pull the cuOpt container from Docker Hub.

```bash
docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
docker pull nvidia/cuopt:latest-cuda12.8-py312
```

Note: The `latest` tag points to the latest stable release of cuOpt. To use a specific version, use the `<version>-cuda12.8-py312` tag; for example, use `25.5.0-cuda12.8-py312` for cuOpt 25.5.0. Please refer to the [cuOpt Docker Hub page](https://hub.docker.com/r/nvidia/cuopt) for the list of available tags.

More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).

The cuOpt container is a good fit for quick testing and research, and it is also a fast starting point for users who plan to plug cuOpt into their workflow as a service. However, users must build security layers around the service to safeguard it from untrusted users.
14 changes: 12 additions & 2 deletions docs/cuopt/source/cuopt-python/quick-start.rst
@@ -17,6 +17,10 @@ For CUDA 12.x:
pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12==25.5.* nvidia-cuda-runtime-cu12==12.8.*


.. note::
   Development wheels are published as nightlies. To install the latest nightly packages, set ``--extra-index-url`` to ``https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/``.
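
   A minimal sketch of a nightly install (the ``--pre`` flag and the unpinned ``cuopt-cu12`` version are assumptions, since nightly builds carry pre-release version numbers):

   .. code-block:: bash

      pip install --pre --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ cuopt-cu12 nvidia-cuda-runtime-cu12==12.8.*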


Conda
-----

@@ -29,6 +33,9 @@ For CUDA 12.x:
conda install -c rapidsai -c conda-forge -c nvidia \
cuopt=25.05.* python=3.12 cuda-version=12.8

.. note::
   Development conda packages are published as nightlies. To install the latest nightly packages, replace ``-c rapidsai`` with ``-c rapidsai-nightly``.
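
   A minimal sketch (the version pin is omitted as an assumption, since nightly version numbers differ from the stable release):

   .. code-block:: bash

      conda install -c rapidsai-nightly -c conda-forge -c nvidia \
          cuopt python=3.12 cuda-version=12.8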


Container
---------
@@ -37,13 +44,16 @@ NVIDIA cuOpt is also available as a container from Docker Hub:

.. code-block:: bash

docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
docker pull nvidia/cuopt:latest-cuda12.8-py312

.. note::
   The ``latest`` tag points to the latest stable release of cuOpt. To use a specific version, use the ``<version>-cuda12.8-py312`` tag; for example, use ``25.5.0-cuda12.8-py312`` for cuOpt 25.5.0. Please refer to the `cuOpt Docker Hub page <https://hub.docker.com/r/nvidia/cuopt>`_ for the list of available tags.

The container includes both the Python API and self-hosted server components. To run the container:

.. code-block:: bash

docker run --gpus all -it --rm nvidia/cuopt:25.5.0-cuda12.8-py312
docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.8-py312

This will start an interactive session with cuOpt pre-installed and ready to use.
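
To verify the installation from inside the container, a quick check (it is an assumption here that the package is importable as ``cuopt`` and exposes ``__version__``):

.. code-block:: bash

   python3 -c "import cuopt; print(cuopt.__version__)"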

12 changes: 10 additions & 2 deletions docs/cuopt/source/cuopt-server/quick-start.rst
@@ -14,6 +14,8 @@ For CUDA 12.x:

pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh-client==25.5.* nvidia-cuda-runtime-cu12==12.8.*

.. note::
   Development wheels are published as nightlies. To install the latest nightly packages, set ``--extra-index-url`` to ``https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/``.
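
   A minimal sketch of a nightly install (the ``--pre`` flag and the unpinned package versions are assumptions, since nightly builds carry pre-release version numbers):

   .. code-block:: bash

      pip install --pre --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ cuopt-server-cu12 cuopt-sh-client nvidia-cuda-runtime-cu12==12.8.*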

Conda
-----
@@ -27,6 +29,9 @@ For CUDA 12.x:
conda install -c rapidsai -c conda-forge -c nvidia \
cuopt-server=25.05.* cuopt-sh-client=25.05.* python=3.12 cuda-version=12.8

.. note::
   Development conda packages are published as nightlies. To install the latest nightly packages, replace ``-c rapidsai`` with ``-c rapidsai-nightly``.
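
   A minimal sketch (version pins omitted as an assumption, since nightly version numbers differ from the stable release):

   .. code-block:: bash

      conda install -c rapidsai-nightly -c conda-forge -c nvidia \
          cuopt-server cuopt-sh-client python=3.12 cuda-version=12.8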


Container from Docker Hub
-------------------------
@@ -35,13 +40,16 @@ NVIDIA cuOpt is also available as a container from Docker Hub:

.. code-block:: bash

docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
docker pull nvidia/cuopt:latest-cuda12.8-py312

.. note::
   The ``latest`` tag points to the latest stable release of cuOpt. To use a specific version, use the ``<version>-cuda12.8-py312`` tag; for example, use ``25.5.0-cuda12.8-py312`` for cuOpt 25.5.0. Please refer to the `cuOpt Docker Hub page <https://hub.docker.com/r/nvidia/cuopt>`_ for the list of available tags.

The container includes both the Python API and self-hosted server components. To run the container:

.. code-block:: bash

docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:25.5.0-cuda12.8-py312 /bin/bash -c "python3 -m cuopt_server.cuopt_service"
docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:latest-cuda12.8-py312 /bin/bash -c "python3 -m cuopt_server.cuopt_service"

.. note::
Make sure you have the NVIDIA Container Toolkit installed on your system to enable GPU support in containers. See the `installation guide <https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html>`_ for details.
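
Once the server is running, a quick responsiveness check (a sketch; it assumes the self-hosted server exposes the ``/cuopt/health`` endpoint described in the cuOpt server documentation):

.. code-block:: bash

   curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/cuopt/health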
10 changes: 10 additions & 0 deletions docs/cuopt/source/index.rst
@@ -60,6 +60,16 @@ Command Line Interface (cuopt-cli)

Command Line Interface Overview <cuopt-cli/index.rst>

========================================
Third-Party Modeling Languages
========================================
.. toctree::
:maxdepth: 4
:caption: Third-Party Modeling Languages
:name: Third-Party Modeling Languages

thirdparty_modeling_languages/index.rst

=============
Resources
=============
4 changes: 4 additions & 0 deletions docs/cuopt/source/introduction.rst
@@ -118,6 +118,10 @@ cuOpt supports the following APIs:
- `Linear Programming (LP) - Server <cuopt-server/quick-start.html>`_
- `Mixed Integer Linear Programming (MILP) - Server <cuopt-server/quick-start.html>`_
- `Routing (TSP, VRP, and PDP) - Server <cuopt-server/quick-start.html>`_
- Third-party modeling languages

  - `AMPL <https://www.ampl.com/>`_
  - `PuLP <https://pypi.org/project/PuLP/>`_


==================================
Installation Options
6 changes: 6 additions & 0 deletions docs/cuopt/source/lp-features.rst
@@ -7,6 +7,12 @@ Availability

The LP solver can be accessed in the following ways:

- **Third-Party Modeling Languages**: cuOpt's LP and MILP solvers can be called directly from the following third-party modeling languages. This allows you to leverage GPU acceleration while keeping your existing optimization workflow in these languages.

  Supported modeling languages:

  - AMPL
  - PuLP

- **C API**: A native C API that provides direct low-level access to cuOpt's LP capabilities, enabling integration into any application or system that can interface with C.

- **As a Self-Hosted Service**: cuOpt's LP solver can be deployed as a service in your own infrastructure, enabling you to maintain full control while integrating it into your existing systems.
2 changes: 1 addition & 1 deletion docs/cuopt/source/lp-milp-settings.rst
@@ -48,7 +48,7 @@ Solution File
Note: the default value is ``""`` and no solution file is written.

User Problem File
^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^
``CUOPT_USER_PROBLEM_FILE`` controls the name of a file where cuOpt should write the user problem.

Note: the default value is ``""`` and no user problem file is written.
6 changes: 6 additions & 0 deletions docs/cuopt/source/milp-features.rst
@@ -7,6 +7,12 @@ Availability

The MILP solver can be accessed in the following ways:

- **Third-Party Modeling Languages**: cuOpt's LP and MILP solvers can be called directly from the following third-party modeling languages. This allows you to leverage GPU acceleration while keeping your existing optimization workflow in these languages.

  Supported modeling languages:

  - AMPL
  - PuLP

- **C API**: A native C API that provides direct low-level access to cuOpt's MILP solver, enabling integration into any application or system that can interface with C.

- **As a Self-Hosted Service**: cuOpt's MILP solver can be deployed in your own infrastructure, enabling you to maintain full control while integrating it into your existing systems.
17 changes: 17 additions & 0 deletions docs/cuopt/source/thirdparty_modeling_languages/index.rst
@@ -0,0 +1,17 @@
===============================
Third-Party Modeling Languages
===============================


--------------------------
AMPL Support
--------------------------

AMPL can be used with near-zero code changes: simply select cuOpt as the solver for linear and mixed-integer programming problems. Please refer to the `AMPL documentation <https://www.ampl.com/>`_ for more information.

--------------------------
PuLP Support
--------------------------

PuLP can be used with near-zero code changes: simply select cuOpt as the solver for linear and mixed-integer programming problems, as shown in the sketch below.
Please refer to the `PuLP documentation <https://pypi.org/project/PuLP/>`_ for more information. Also, see the example notebook in the `cuopt-examples <https://github.com/NVIDIA/cuopt-examples>`_ repository.
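
A minimal sketch of the solver swap in PuLP (the solver name ``"CUOPT"`` is an assumption here; check ``pulp.listSolvers(onlyAvailable=True)`` on your installation for the exact name):

.. code-block:: python

   import pulp

   # Small LP: maximize 3x + 5y subject to x + 2y <= 10.
   prob = pulp.LpProblem("demo", pulp.LpMaximize)
   x = pulp.LpVariable("x", lowBound=0)
   y = pulp.LpVariable("y", lowBound=0)
   prob += 3 * x + 5 * y
   prob += x + 2 * y <= 10

   # The only change from a CPU-based workflow is the solver selection below.
   prob.solve(pulp.getSolver("CUOPT"))
   print(pulp.LpStatus[prob.status], pulp.value(x), pulp.value(y))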