
[Build] I am using onnxruntime to run inference on a float16 ONNX model and the following error occurs. How should I solve it? #17210

Unanswered
1615070057 asked this question in General

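The question body and the error output were not preserved in this capture. As a minimal sketch of the scenario described in the title, the snippet below runs a float16 ONNX model with onnxruntime in Python; the model file name "model_fp16.onnx", the input shape, and the provider list are assumptions for illustration, not details from the original question.

```python
# Minimal sketch: inference on a float16 ONNX model with onnxruntime.
# "model_fp16.onnx" and the input shape are assumed for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model_fp16.onnx", providers=["CPUExecutionProvider"])

# A float16 model typically expects float16 input tensors; feeding float32
# data into a float16 input is a common source of type-mismatch errors.
input_meta = session.get_inputs()[0]
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float16)  # assumed shape

outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape, outputs[0].dtype)
```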

Replies: 2 comments 9 replies

Comment 1: 8 replies (from @snnn, @1615070057, and @tianleiwu)

Comment 2: 1 reply (from @tianleiwu)
Labels: None yet
4 participants
Converted from issue

This discussion was converted from issue #17206 on August 18, 2023 06:26.