Complete the code to import the TensorRT Python module.
import [1]
The TensorRT Python API is accessed by importing the tensorrt module.
Complete the code to create a TensorRT logger with warning level.
logger = tensorrt.Logger([1])
To create a logger that shows warnings, use tensorrt.Logger.WARNING.
Fix the error in building a TensorRT engine from a network definition.
builder = tensorrt.Builder(logger)
network = builder.create_network()
config = builder.create_builder_config()
engine = builder.[1](network, config)
The method that builds an engine from both a network and a config is build_engine. (The older build_cuda_engine takes only the network; in TensorRT 8+ the recommended path is build_serialized_network followed by runtime deserialization.)
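The exact builder method depends on the TensorRT version. A hedged sketch of the modern (TensorRT 8+) flow, where build_serialized_network replaces build_engine/build_cuda_engine; the ONNX-parsing step is elided, and this assumes the standard tensorrt package is installed:

```python
def build_engine():
    # Imported lazily so the sketch can be read (and the function defined)
    # without TensorRT installed.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch networks are required in recent TensorRT versions.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    config = builder.create_builder_config()
    # ... populate the network here, e.g. with trt.OnnxParser ...
    plan = builder.build_serialized_network(network, config)  # serialized plan
    runtime = trt.Runtime(logger)
    return runtime.deserialize_cuda_engine(plan)  # deserialize into an engine
```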
Fill both blanks to create an execution context and allocate device memory for input.
context = engine.[1]()
input_memory = cuda.mem_alloc(engine.[2](0))
Use create_execution_context to get the context, and get_binding_shape to find the input's shape. Note that in real code the value passed to mem_alloc must be a size in bytes (the shape's volume times the element size), not the shape itself.
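Since mem_alloc expects a byte count while get_binding_shape returns a tensor shape, the conversion is worth spelling out. A minimal, framework-free sketch of that calculation (NumPy is used only for illustration; the dtype is an assumption):

```python
import numpy as np

def binding_nbytes(shape, dtype=np.float32):
    """Byte size of a tensor binding: the shape's volume times the
    element size. This is the value you would pass to cuda.mem_alloc."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

# e.g. a 1x3x224x224 float32 image batch
print(binding_nbytes((1, 3, 224, 224)))  # 1*3*224*224*4 = 602112 bytes
```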
Fill all three blanks to run inference and copy output from device to host.
context.execute_v2(bindings=[input_memory, [1]])
cuda.memcpy_dtoh([2], [3])
Pass device memory pointers in bindings to execute_v2. Then copy output from device (output_device) to host (output_host).
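Putting the pieces from the exercises above together, a hedged end-to-end sketch of the synchronous inference flow. The function name and the float32 output dtype are illustrative assumptions; it presumes TensorRT and PyCUDA are installed and a deserialized engine is passed in:

```python
import numpy as np

def run_inference(engine, input_host):
    # Imported lazily; pycuda.autoinit creates a CUDA context on import.
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401

    context = engine.create_execution_context()

    # Size device buffers from the input array and the output binding shape
    # (binding 0 = input, binding 1 = output, assuming that layout).
    output_host = np.empty(tuple(engine.get_binding_shape(1)),
                           dtype=np.float32)
    input_device = cuda.mem_alloc(input_host.nbytes)
    output_device = cuda.mem_alloc(output_host.nbytes)

    # Host -> device, execute, then device -> host.
    cuda.memcpy_htod(input_device, input_host)
    context.execute_v2(bindings=[int(input_device), int(output_device)])
    cuda.memcpy_dtoh(output_host, output_device)
    return output_host
```

Casting the allocations with int() makes the device-pointer intent of the bindings list explicit, matching the note above about passing device memory pointers to execute_v2.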