
Exception during initialization using Intel NPU (Intel AI boost) #23305

Open
bonihaniboni opened this issue Jan 9, 2025 · 0 comments
Labels
ep:DML issues related to the DirectML execution provider

Comments

@bonihaniboni

Describe the issue

After building the project with the --use_dml option, I ran onnxruntime_perf_test.exe to validate whether it can use the Intel NPU. To enable NPU enumeration, I added "#define ENABLE_NPU_ADAPTER_ENUMERATION" to dml_provider_factory.cc.
On the Intel Lunar Lake platform, the CNN and RNN .onnx models both ran successfully, but on the Intel Meteor Lake platform only the RNN model could run. If I try to run the CNN model on the MTL platform, it fails with "2025-01-09 18:42:29.1493690 [E:onnxruntime:, inference_session.cc:2154 onnxruntime::InferenceSession::Initialize::<lambda_ddd6d80b203c0fd79bf36f74745e4e94>::operator ()] Exception during initialization:". Please give me some advice. Thank you.

To reproduce

1. Add #define ENABLE_NPU_ADAPTER_ENUMERATION to dml_provider_factory.cc
2. build.bat --use_dml --build_shared_lib --cmake_generator "Visual Studio 17 2022" --skip_submodule_sync --config RelWithDebInfo
3. onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\60_80.onnx (input tensor: fp16[1,15])
4. onnxruntime_perf_test.exe -I -m "times" -r 999999999 -e "dml" -i "device_filter|npu" .\85_95.onnx (input tensor: fp16[1,3,256,256])
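For reference, this is roughly the C++ equivalent of the perf-test invocation above. It is a minimal sketch: it assumes the DML execution provider accepts the same "device_filter" option key that onnxruntime_perf_test forwards from -i, and the model path is a placeholder. Verbose logging is enabled so that the full message behind "Exception during initialization" is printed.

```cpp
// Minimal repro sketch (assumption: the DML EP accepts the same option key
// that onnxruntime_perf_test forwards from -i "device_filter|npu").
#include <onnxruntime_cxx_api.h>
#include <string>
#include <unordered_map>

int main() {
  // Verbose logging so the full initialization error is shown.
  Ort::Env env(ORT_LOGGING_LEVEL_VERBOSE, "npu_repro");

  Ort::SessionOptions so;
  so.SetLogSeverityLevel(0);  // 0 = verbose

  // "device_filter|npu" on the perf-test command line corresponds to this option map.
  std::unordered_map<std::string, std::string> dml_options{
      {"device_filter", "npu"},
  };
  so.AppendExecutionProvider("DML", dml_options);

  // Session initialization (where the reported exception is thrown).
  Ort::Session session(env, L"model.onnx", so);  // placeholder: path to the model that fails on MTL
  return 0;
}
```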

Urgency

No response

Platform

Windows

OS Version

Windows 11 24H2 26100.2605

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

v1.20.1

ONNX Runtime API

C++

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response

@github-actions github-actions bot added the ep:DML issues related to the DirectML execution provider label Jan 9, 2025