Lower saturating_cast in bounds inference#6970
Conversation
|
Should this be applied in addition to #6961, or instead of? |
|
Sorry for the lack of clarification - this is instead of #6961 |
|
OK, looks like this one is passing -- Let's just insert that fix for OpenGLCompute too; I monkey-patched it for my testing |
steven-johnson
left a comment
LGTM pending the fix to OpenGLCompute as well
|
A proper fix seems intractable without getting some sort of repro for the harder failure cases. AJ's earlier attempt at a fix really should have worked. |
|
Maybe we need a test-case reducer like bugpoint |
I can try to devote more time to the other repro case, but don't rely on it, it may be hard to disentangle |
Just added.
To clarify for @steven-johnson: Andrew and I both think that 129b464 really should have worked, and we cannot figure out what #6900 changed that was not fixed by that commit. It's very understandable if you can't disentangle that, but I think looking at that change would probably be easier than looking at the entire #6961 change. |
|
The only failure on the buildbots appears unrelated, I think we should merge this in as a temporary fix for now. I've opened #6974 to track the fundamental issue. |
* Lower saturating_cast in bounds inference
* OpenGL fix to saturating_cast
This should revert bounds inference on `saturating_cast` to the pre-#6900 behavior. @steven-johnson, could you test to see if this fixes things?
The issues detected were not a direct result of #6900 - before that PR, there was still `signed_integer_overflow` inside Bounds.cpp, but it was being hidden by complex `Cast` handling inside the bounds visitor. @abadams and I are discussing a fix to the underlying issue, but it is incredibly involved and may require using infinite-precision integers in some parts of the compiler.