
❓ [Question] failed to run the examples/dynamo/vgg16_fp8_ptq.py example #3075

Open
broken-dream opened this issue Aug 9, 2024 · 1 comment
Labels
question Further information is requested



broken-dream commented Aug 9, 2024

❓ Question

I'm trying to run the examples/dynamo/vgg16_fp8_ptq.py example but got the following error:

Traceback (most recent call last):
  File "/home/wh/generative_action/SynHSI/vgg_quat.py", line 232, in <module>
    exp_program = torch.export.export(model, (input_tensor,))
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
    return _export(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1066, in wrapper
    raise e
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1039, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/exported_program.py", line 100, in wrapper
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 2034, in _export
    export_artifact = export_func(  # type: ignore[operator]
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1273, in _strict_export
    return _strict_export_lower_to_aten_ir(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1412, in _strict_export_lower_to_aten_ir
    aten_export_artifact = lower_to_aten_callback(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 633, in _export_to_aten_ir
    gm, graph_signature = transform(aot_export_module)(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1194, in aot_export_module
    fx_g, metadata, in_spec, out_spec = _aot_export_function(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1426, in _aot_export_function
    fx_g, meta = create_aot_dispatcher_function(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 429, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 730, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 105, in aot_dispatch_export
    graph, _, _ = aot_dispatch_base_graph(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 138, in aot_dispatch_base_graph
    fw_module = _create_graph(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 46, in _create_graph
    fx_g = make_fx(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1805, in wrapped
    return make_fx_tracer.trace(f, *args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1751, in trace
    return self._trace_inner(f, *args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1737, in _trace_inner
    t = dispatch_trace(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_compile.py", line 31, in inner
    return disable_fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 899, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1392, in trace
    res = super().trace(root, concrete_args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
    (self.create_arg(fn(*args)),),
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 920, in wrapped
    out = f(*tensors)
  File "<string>", line 1, in <lambda>
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 403, in _functionalized_f_helper
    f_outs = fn(*f_args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 74, in inner_fn
    outs = fn(*args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 179, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 782, in functional_call
    out = PropagateUnbackedSymInts(mod).run(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py", line 147, in run
    self.env[node] = self.run_node(node)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5566, in run_node
    result = super().run_node(n)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py", line 207, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/interpreter.py", line 279, in call_function
    return target(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py", line 198, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()

While executing %amax_1 : [num_users=0] = call_function[target=torch._C._autograd._get_data_attr](args = (%amax,), kwargs = {})
Original traceback:
  File "/home/wh/generative_action/SynHSI/vgg_quat.py", line 71, in forward
    x = self.features(x)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/quant_module.py", line 82, in forward
    return super().forward(input, *args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/quant_module.py", line 38, in forward
    input = self.input_quantizer(input)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/tensor_quantizer.py", line 668, in forward
    outputs = self._quant_forward(inputs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/tensor_quantizer.py", line 430, in _quant_forward
    amax = self._get_amax(inputs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/modelopt/torch/quantization/nn/modules/tensor_quantizer.py", line 392, in _get_amax
    amax = amax.data

What you have already tried

I trained a VGG16 model using the code in examples/int8/training/vgg, then ran examples/dynamo/vgg16_fp8_ptq.py but got the errors above.

I also tried adding the FunctionalTensorMode() manually, like this:

from torch._subclasses.functional_tensor import FunctionalTensorMode

with torch.no_grad():
    with export_torch_mode():
        input_tensor = images.cuda()
        # Export the model under an explicit FunctionalTensorMode
        with FunctionalTensorMode():
            exp_program = torch.export.export(model, (input_tensor,))
        # Compile the model with the Torch-TensorRT Dynamo backend
        trt_model = torchtrt.dynamo.compile(
            exp_program,
            inputs=[input_tensor],
            enabled_precisions={torch.float8_e4m3fn},
            min_block_size=1,
            debug=False,
        )

but got the following error:

Traceback (most recent call last):
  File "/home/wh/generative_action/SynHSI/vgg_quat.py", line 236, in <module>
    exp_program = torch.export.export(model, (input_tensor,))
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/__init__.py", line 173, in export
    return _export(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1069, in wrapper
    raise e
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1042, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/exported_program.py", line 96, in wrapper
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 2035, in _export
    export_artifact = export_func(  # type: ignore[operator]
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1276, in _strict_export
    return _strict_export_lower_to_aten_ir(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 1415, in _strict_export_lower_to_aten_ir
    aten_export_artifact = lower_to_aten_callback(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/export/_trace.py", line 629, in _export_to_aten_ir
    gm, graph_signature = transform(aot_export_module)(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1208, in aot_export_module
    fx_g, metadata, in_spec, out_spec = _aot_export_function(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1440, in _aot_export_function
    fx_g, meta = create_aot_dispatcher_function(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 435, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(flat_fn, flat_args, aot_config)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 736, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 105, in aot_dispatch_export
    graph, _, _ = aot_dispatch_base_graph(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 142, in aot_dispatch_base_graph
    fw_module = _create_graph(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 50, in _create_graph
    fx_g = make_fx(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1805, in wrapped
    return make_fx_tracer.trace(f, *args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1751, in trace
    return self._trace_inner(f, *args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1737, in _trace_inner
    t = dispatch_trace(
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
    return disable_fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 630, in _fn
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 900, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1392, in trace
    res = super().trace(root, concrete_args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 630, in _fn
    return fn(*args, **kwargs)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 823, in trace
    (self.create_arg(fn(*args)),),
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 921, in wrapped
    out = f(*tensors)
  File "<string>", line 1, in <lambda>
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 390, in _functionalized_f_helper
    f_args = pytree.tree_map(to_fun, args)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/utils/_pytree.py", line 964, in tree_map
    return treespec.unflatten(map(func, *flat_args))
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/utils/_pytree.py", line 803, in unflatten
    leaves = list(leaves)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/functional_utils.py", line 41, in to_fun
    return FunctionalTensor.to_functional(t)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py", line 223, in to_functional
    functional_mode = _detect_infra_mode(torch._C._TorchDispatchModeKey.FUNCTIONAL)
  File "/home/wh/miniconda3/envs/hsi-torch-dev/lib/python3.10/site-packages/torch/utils/_python_dispatch.py", line 146, in _detect_infra_mode
    post_dispatch_mode is None
AssertionError

Environment

  • PyTorch Version (e.g., 1.0): 2.5.0.dev20240808+cu124
  • CPU Architecture: x86_64
  • OS (e.g., Linux): Ubuntu 20.04
  • How you installed PyTorch (conda, pip, libtorch, source): pip
  • Python version: 3.10
  • CUDA version: 12.4
  • GPU models and configuration: NVIDIA GeForce RTX 4070 Ti
  • torch_tensorrt version: 2.5.0.dev20240804+cu124
  • nvidia modelopt version: 0.15.1
@broken-dream added the question (Further information is requested) label on Aug 9, 2024
peri044 (Collaborator) commented Aug 23, 2024

Thanks for filing this, @broken-dream.
I was able to reproduce this with the torch 2.5 nightly. It works with PyTorch 2.4, so this is a regression. We will investigate further.
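In the meantime, pinning to the PyTorch 2.4 stable stack should sidestep the regression. The exact version pairings below are assumptions; please verify them against the compatibility matrix for your CUDA setup:

```shell
# Stopgap until the regression is fixed: pin the PyTorch 2.4 stable stack.
# The torchvision/torch-tensorrt pairings here are assumptions; check the
# official release compatibility matrix before installing.
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124
pip install torch-tensorrt==2.4.0
```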
