Issues: microsoft/onnxruntime
Pinned issues:
#23885 · [DO NOT UNPIN] ORT 1.21.0 Release Candidates available for testing · opened Mar 4, 2025 by MaanavD · Open · 5 comments
#22747 · [DO NOT UNPIN] onnxruntime-gpu v1.10.0 PyPI Removal Notice · opened Nov 6, 2024 by sophies927 · Open · 3 comments
Issues list
#24004 · [Mobile] QNN GPU backend crashes · labels: ep:QNN, platform:mobile · opened Mar 12, 2025 by monokim
#24003 · [Build] Mismatch between CMake config and folder structure of onnxruntime-linux-x64-1.21.0.tgz · labels: build · opened Mar 12, 2025 by Zyrin
#24001 · TensorRT backend does not work when device_id is greater than 0 in multiple threads · labels: ep:TensorRT · opened Mar 12, 2025 by 83204273
#23985 · Crashes when running model quantization on Deeplabv3 · labels: quantization · opened Mar 11, 2025 by EzioQR
#23983 · [Performance] Does onnxruntime 1.19.0 support SVE? · labels: ep:ACL, performance · opened Mar 11, 2025 by Serenagirl
#23982 · Implement masking during softmax / softcap computation in GQA CPU operator · opened Mar 10, 2025 by derdeljan-msft
#23973 · [preprocess] Pad is not folded into Conv when opset_import is > 20 · opened Mar 10, 2025 by Johansmm
#23965 · TensorRT Support for Multiple Profiles · labels: ep:TensorRT · opened Mar 10, 2025 by adjhawar
#23959 · [Build] .pc file asks for -lonnxruntime but onnxruntime.a isn't installed · labels: build · opened Mar 9, 2025 by yurivict
#23956 · [Web] onnxruntime-node: Linux addon binaries contain duplicate identical copies of libonnxruntime.so.x, taking up an extra ~40 MB · labels: platform:web · opened Mar 8, 2025 by rotemdan
#23950 · MatMulInteger reports an error in 1.22.0 with DML · labels: contributions welcome, ep:DML · opened Mar 8, 2025 by qing-shang
#23943 · [Web] WASM sigmoid producing numbers below 0 or above 1 · labels: ep:WebGPU, model:transformer, .NET, platform:web · opened Mar 7, 2025 by xenova
#23941 · Error when using cuda_runtime.h and the OpenVINO EP at the same time · labels: ep:OpenVINO · opened Mar 7, 2025 by ddrepkk89
#23940 · [Feature Request] Add more options to load models at InferenceSession constructor · labels: api:CSharp, feature request · opened Mar 7, 2025 by vpenades
#23938 · Bad Allocation Error in ONNX Runtime on Windows x86 CPU When Processing Multiple Images Sequentially · labels: platform:windows · opened Mar 7, 2025 by tanbo1
#23925 · [Feature Request] Multi-Head Latent Attention (DeepSeek) support on CPU/NPU · labels: feature request, platform:mobile · opened Mar 6, 2025 by bkaruman
#23921 · [Web] WebGPU error: Model warmup failed: Error: input 'detection' is missing in 'feeds' · labels: ep:WebGPU, .NET, platform:web · opened Mar 6, 2025 by KabirSinghMehrok
#23915 · [Build] Memory leak · labels: build, platform:mobile · opened Mar 6, 2025 by zxj329
#23901 · [Documentation] Memory Leak in TensorRTProvider example · labels: documentation · opened Mar 5, 2025 by axbycc-mark
#23895 · [OpenVINO GPU] OpenVINO EP shouldn't override the "ACCURACY" precision to "FP32" · labels: ep:OpenVINO · opened Mar 5, 2025 by mingmingtasd
#23886 · Xnnpack execution provider Resize::IsOnnxNodeSupported crashes for models where the Resize layer's scales tensor is empty · labels: ep:Xnnpack, platform:mobile · opened Mar 4, 2025 by pl121lp
#23885 · [DO NOT UNPIN] ORT 1.21.0 Release Candidates available for testing · labels: api:Java, ep:CUDA, ep:DML, ep:TensorRT, .NET, platform:web, release:1.21.0 · opened Mar 4, 2025 by MaanavD
#23879 · When converting an int8-quantized model to ONNX, an error occurs at runtime · labels: quantization · opened Mar 4, 2025 by jungyin