This repository was archived by the owner on Nov 17, 2023. It is now read-only.

[Numpy] Backward error in mixed int64 + float32 #18084

@sxjscience

Description

This is related to #18022.

Reproducible example:

import mxnet as mx
from mxnet.gluon import HybridBlock
mx.npx.set_np()

class Foo(HybridBlock):
    def hybrid_forward(self, F, query):
        # shape_array yields an int64 ndarray; dividing the float32 query
        # by the sqrt of one of its elements triggers the backward error below.
        query_shape = F.npx.shape_array(query)
        return query / F.np.sqrt(query_shape[-1])

foo = Foo()
foo.hybridize()
a = mx.np.ones((5, 5, 5))
out = foo(a)
print(out)

a.attach_grad()
with mx.autograd.record():
    out = foo(a)
    out.backward()
print(a.grad)

Error message:

MXNetError: Traceback (most recent call last):
  File "include/mxnet/./tensor_blob.h", line 256
MXNetError: Check failed: mshadow::DataType<DType>::kFlag == type_flag_: TBlob.get_with_shape: data type do not match specified type.Expected: long long v.s. given float

Currently, I have to use query / F.np.sqrt(query_shape[-1].astype(np.float32)) (with import numpy as np) as a workaround, casting the int64 shape element to float32 before the sqrt.
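For comparison, plain NumPy handles the same mixed int64 + float32 expression by silently promoting the result rather than erroring. A minimal sketch (array names are illustrative, not from the MXNet repro) of why the explicit astype(np.float32) keeps the whole expression in float32:

```python
import numpy as np

# float32 data, analogous to the query tensor in the repro above.
query = np.ones((5, 5, 5), dtype=np.float32)

# The last shape element as an int64 array, analogous to shape_array(...)[-1].
dim = np.array([query.shape[-1]], dtype=np.int64)

# sqrt of int64 produces float64, so the division upcasts to float64.
upcast = query / np.sqrt(dim)
print(upcast.dtype)  # float64

# Casting to float32 first (the workaround) keeps everything in float32.
fixed = query / np.sqrt(dim.astype(np.float32))
print(fixed.dtype)  # float32
```

So NumPy's behavior for this pattern is a silent dtype promotion; MXNet's deferred-compute backward instead hits the TBlob dtype check shown above.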
