dataartist-og

# DONE:
* Parse ONNX model into a TensorRT engine
* Allow models larger than 2 GB
* Set builder flags
* Set precision flags

# TODO:
* Start making it thread-safe / multiprocessing-capable

Constructor signature:

```python
def __init__(self, model, device,
             max_workspace_size=None, serialize_engine=False,
             verbose=False, fp16=False, flags=None, **kwargs):
```

Supported keyword arguments include `external_data_format` and `engine_path`.
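The checklist and signature above can be sketched as a thin wrapper class. This is a hypothetical reconstruction for illustration, not the PR's actual code: the class name `TRTEngineBuilder` and all attribute handling are assumptions, and the TensorRT build steps are shown only as comments since they require a GPU-enabled `tensorrt` install.

```python
class TRTEngineBuilder:
    """Hypothetical wrapper mirroring the __init__ signature above.
    A sketch under assumptions, not the PR's implementation."""

    def __init__(self, model, device,
                 max_workspace_size=None, serialize_engine=False,
                 verbose=False, fp16=False, flags=None, **kwargs):
        self.model = model                    # path to the ONNX file
        self.device = device
        # Assumed default of 1 GiB when no workspace size is given.
        self.max_workspace_size = max_workspace_size or (1 << 30)
        self.serialize_engine = serialize_engine
        self.verbose = verbose
        # Optional kwargs named in the comment above.
        self.external_data_format = kwargs.get("external_data_format", False)
        self.engine_path = kwargs.get("engine_path")
        # Collect builder/precision flags as strings; on a real install
        # "FP16" would map to trt.BuilderFlag.FP16.
        self.flags = set(flags or [])
        if fp16:
            self.flags.add("FP16")

    def build(self):
        # On a real tensorrt install this would look roughly like:
        #   logger = trt.Logger(trt.Logger.VERBOSE if self.verbose
        #                       else trt.Logger.WARNING)
        #   builder = trt.Builder(logger)
        #   network = builder.create_network(
        #       1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        #   parser = trt.OnnxParser(network, logger)
        #   parser.parse_from_file(self.model)  # >2 GB models use ONNX
        #                                       # external data files
        #   config = builder.create_builder_config()
        #   for f in self.flags:
        #       config.set_flag(getattr(trt.BuilderFlag, f))
        #   engine = builder.build_serialized_network(network, config)
        raise NotImplementedError("requires a GPU-enabled tensorrt install")


b = TRTEngineBuilder("model.onnx", device=0, fp16=True,
                     external_data_format=True, engine_path="model.plan")
print(sorted(b.flags))  # ['FP16']
```

The flag-collection logic runs without `tensorrt` installed, so the precision/builder-flag bookkeeping can be exercised on its own; only `build()` needs the real library.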