
[DOC] Codebase walkthrough with Vector add example #2206

Closed
masahi wants to merge 0 commits into apache:master from masahi:doc-codebase-walkthrough

Conversation

@masahi
Member

@masahi masahi commented Nov 30, 2018

The first stab at #2160.

@tqchen @yzhliu @merrymercy @Ravenwater please review.

Comment thread docs/dev/codebase_walkthrough.rst Outdated
- ``topi`` - Compute definitions and backend schedules for standard neural network operators.
- ``nnvm`` - C++ code and Python frontend for graph optimization and compilation. Depends on three directories above.

Using standard Deep Learning terminologies, ``nnvm`` is the component that manages a computational graph, and nodes in a graph are compiled and executed using infrastructures implemented in ``src`` and ``python``. Operators corresponding to each node are registered in ``nnvm``. Registration can be done via C++ or Python. Implemenations for operators are in ``topi``, and they are also coded in either C++ or Python.
Member


Implemenations -> Implementations

Comment thread docs/dev/codebase_walkthrough.rst Outdated

Using standard Deep Learning terminologies, ``nnvm`` is the component that manages a computational graph, and nodes in a graph are compiled and executed using infrastructures implemented in ``src`` and ``python``. Operators corresponding to each node are registered in ``nnvm``. Registration can be done via C++ or Python. Implemenations for operators are in ``topi``, and they are also coded in either C++ or Python.

When an user invokes graph compilation by ``nnvm.compiler.build(...)``, the following sequence of actions happens for each node in the graph:
Member


a user
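The compilation entry point quoted above can be exercised with a minimal graph. The sketch below assumes the pre-Relay ``nnvm`` Python API as it existed around the time of this PR (late 2018); ``sym.elemwise_add`` and the ``shape`` keyword argument follow the tutorials of that period and may differ in later releases.

```python
import nnvm
import nnvm.symbol as sym
import nnvm.compiler

# Build a one-operator graph: elementwise add of two placeholders.
x = sym.Variable("x")
y = sym.Variable("y")
z = sym.elemwise_add(x, y)

# nnvm.compiler.build walks the graph and, for each node, looks up the
# registered compute/schedule (implemented in topi) before lowering and
# code generation. It returns the compiled graph, a runtime module, and
# the (here empty) parameter dict.
shape = {"x": (1024,), "y": (1024,)}
graph, lib, params = nnvm.compiler.build(z, target="llvm", shape=shape)
```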

Comment thread docs/dev/codebase_walkthrough.rst Outdated

The process of ``tvm.build()`` can be divided into two steps:

- Lowering, where an high level, initial loop nest structures are transformed into a final, low level IR
Member


a high level
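The lowering step described in the quoted bullet can be inspected directly. A minimal sketch, assuming the 2018-era TVM Python API (``tvm.placeholder``, ``tvm.create_schedule``, and ``tvm.lower`` with ``simple_mode``):

```python
import tvm

# Declare the vector add computation from the walkthrough.
n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C")
s = tvm.create_schedule(C.op)

# Step 1, lowering: the initial loop nest is progressively transformed
# into low-level IR. simple_mode=True prints the IR body without the
# surrounding LoweredFunc wrapping, which is handy for inspection.
print(tvm.lower(s, [A, B, C], simple_mode=True))

# Step 2, code generation: the lowered IR is compiled for the target.
fadd = tvm.build(s, [A, B, C], target="llvm")
```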

Comment thread docs/dev/codebase_walkthrough.rst Outdated

One of the interesting aspects of the TVM codebase is that interop between C++ and Python is not unidirectional. Typically, all code that does the heavy lifting is implemented in C++, and Python bindings are provided for the user interface. This is also true in TVM, but in the TVM codebase, C++ code can also call into functions defined in a Python module. For example, the convolution operator is implemented in Python, and its implementation is invoked from C++ code in nnvm.

At the time of writing (Nov. 30, 2018), there is an going effort to reimplement functionality offered by ``nnvm`` in a new intermidiate representation called Relay. New Relay code resides in ``src/relay`` and ``python/tvm/relay``.
Member


an ongoing effort

Member


intermidiate -> intermediate

Comment thread docs/dev/codebase_walkthrough.rst Outdated
B = tvm.placeholder((n,), name='B')
C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C")

Here, types of ``A``, ``B``, ``C`` are ``tvm.tensor.Tensor``, defined in ``python/tvm/tensor.py``. The Python ``Tensor`` is backed by C++ ``Tensor``, implemented in ``include/tvm/tensor.h`` and ``src/lang/tensor.cc``. All Python types in TVM can be thought of as a handle to the underlining C++ type with the same name. If you look at the definition of Python ``Tensor`` type below, you can see it is an subclass of ``NodeBase``.
Member


an subclass -> a subclass
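To see these ``Tensor`` handles end to end, the quoted snippet can be completed into a runnable vector add. A sketch assuming the 2018-era API (``tvm.cpu``, ``tvm.nd.array``, ``asnumpy``), with the result checked against NumPy:

```python
import numpy as np
import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C")

# A, B, C are tvm.tensor.Tensor handles backed by the C++ Tensor node.
s = tvm.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm")

# Run the compiled function on CPU and verify against NumPy.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=n).astype("float32"), ctx)
b = tvm.nd.array(np.random.uniform(size=n).astype("float32"), ctx)
c = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)
fadd(a, b, c)
np.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
```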

@masahi
Member Author

masahi commented Dec 1, 2018

@vinx13 Thanks. Fixed.

@jroesch
Member

jroesch commented Dec 1, 2018

Would probably be good to port this to Relay soonish too. One of the Relay developers can probably manage that though, I will add a note to the tracking issue.

Comment thread docs/dev/codebase_walkthrough.rst Outdated
B = tvm.placeholder((n,), name='B')
C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C")

Here, types of ``A``, ``B``, ``C`` are ``tvm.tensor.Tensor``, defined in ``python/tvm/tensor.py``. The Python ``Tensor`` is backed by C++ ``Tensor``, implemented in ``include/tvm/tensor.h`` and ``src/lang/tensor.cc``. All Python types in TVM can be thought of as a handle to the underlining C++ type with the same name. If you look at the definition of Python ``Tensor`` type below, you can see it is a subclass of ``NodeBase``.
Member


underlining -> underlying



5 participants