```python
batch_size = 128
network_input_shape = [1, 1, 1, 32]
placeholder_batch = torch.rand((batch_size,) + tuple(network_input_shape))
placeholder_batch = placeholder_batch.to("cuda")
compiled_model = trt.compile(
    conv_block,
    inputs=[placeholder_batch],
    enabled_precisions={torch.float32},
    optimization_level=5,  # max is 5; compilation takes longer but gives the best speedup
    debug=True,  # very verbose, only turn on if needed
    use_fast_partitioner=True,  # can't disable, results in an error when exporting
    dynamic=False,
    disable_tf32=True,  # reduce precision errors at the expense of a small slowdown
)
```
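The snippet above calls `trt.compile` on a `conv_block` that isn't defined in this excerpt. A minimal hypothetical stand-in (my own names and layer sizes, chosen to match the `[1, 1, 1, 32]` per-sample shape) that exercises the `nn.BatchNorm3d` path might look like:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the undefined `conv_block` above: a small 3D conv
# stack whose nn.BatchNorm3d is what triggers the compilation failure.
# Built on CPU here; move it to "cuda" (as in the snippet above) before compiling.
conv_block = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.BatchNorm3d(8),
    nn.ReLU(),
).eval()
```

Feeding it a batch shaped like the placeholder above, `(batch_size, 1, 1, 1, 32)`, should run fine in eager mode; the failure only appears at compile time.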
I have found a temporary hotfix: replacing `nn.BatchNorm3d` with `nn.BatchNorm1d`. This works for some of our use cases, but many of our models do need the full 3D BatchNorm, so a fix would still be greatly appreciated.
## Bug Description

I can't compile this model; the error seems to be caused by `nn.BatchNorm3d`.
## To Reproduce

Steps to reproduce the behavior: run the compilation snippet above and observe the error (full trace here: https://gist.github.com/orioninthesky98/9e51a9e83232aa3cac64ce68fe0e512b)
## Expected behavior

The model should compile successfully.
## Environment

- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): conda + pip

## Additional context
By the way, this is a side bug: if I set `use_fast_partitioner=False`, the model actually compiles fine, but I get this error at the very end and the script crashes: https://gist.github.com/orioninthesky98/a784c361ebbdfa9000564b3f8a1ac1c0. Somebody already filed this bug: #3157