I am using pybind11 in my C++ code. When I try to import onnx, my code crashes with Segmentation fault (core dumped). However, if I import onnxruntime, everything works fine. Of course both onnx and onnxruntime are installed on my system via pip. The order of the import lines is irrelevant: wherever it is, py::module::import("onnx") crashes with a segmentation fault. How
Tag: onnx
Trying to install onnx using pip and it throws the error: metadata-generation-failed
So I am trying to install onnx using pip install. OS: Windows 10 LTSC Build 17763. (I installed all packages from requirements.txt using pip install -r; the onnx entry in requirements.txt is "# onnx>=1.9.0 # ONNX export", but onnx won't install.) I tried to install it using: pip install onnx, pip3 install onnx, pip install onnx>=1.9.0, pip3 install onnx>=1.9.0, on Python version 3.7.0
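One detail worth checking in the commands above (an assumption on my part, since the exact shell session isn't shown): an unquoted `>` in `pip install onnx>=1.9.0` is treated as output redirection by both cmd and Unix shells, so pip never sees the version constraint at all. A reduced demonstration with `echo` standing in for pip:

```shell
# Unquoted '>' is shell redirection: the shell parses "onnx>=1.9.0" as the
# word "onnx" followed by a redirect into a file named "=1.9.0", so only
# "install onnx" reaches the command and a stray file appears on disk.
echo install onnx>=1.9.0
ls "=1.9.0"        # the file the redirection silently created
cat "=1.9.0"       # contains "install onnx" - the constraint was stripped
rm "=1.9.0"

# The fix: quote the whole requirement specifier so it reaches pip verbatim.
# pip install "onnx>=1.9.0"
```

This doesn't explain the metadata-generation-failed error itself, but it does mean the `>=1.9.0` constraint was likely never applied in the unquoted invocations.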
Import onnx models to tensorflow2.x?
I created a modified LeNet model using TensorFlow that looks like this: When I finish training, I save the model using tf.keras.models.save_model: Then I convert this model into ONNX format using the "tf2onnx" module: I want a method that can load the same model back into TensorFlow 2.x. I tried to use "onnx_tf" to convert the ONNX model into a TensorFlow .pb model:
Does converting a seq2seq NLP model to the ONNX format negatively affect its performance?
I was looking at potentially converting an ML NLP model to the ONNX format in order to take advantage of its speed increase (ONNX Runtime). However, I don't really understand what is fundamentally changed in the new models compared to the old ones. I also don't know whether there are any drawbacks. Any thoughts on this would be much appreciated.