v1.8.2

@alvarobartt alvarobartt released this 09 Sep 14:45

🔧 Fixed Intel MKL Support

Since Text Embeddings Inference (TEI) v1.7.0, Intel MKL support had been broken due to changes in the candle dependency. Neither static linking nor dynamic linking worked correctly, causing models using Intel MKL on CPU to fail with errors such as "Intel oneMKL ERROR: Parameter 13 was incorrect on entry to SGEMM".

Starting with v1.8.2, this issue has been resolved by fixing how the intel-mkl-src dependency is defined. Both the static-linking and dynamic-linking (default) features now work correctly, ensuring that the Intel MKL libraries are properly linked.

This issue occurred in the following scenarios:

  • Users installing text-embeddings-router via cargo with the --features mkl flag. Dynamic linking should have been used by default, but it was not working as intended.
  • Users relying on the CPU Dockerfile to run models without ONNX weights. In that case, the Safetensors weights were loaded with candle as the backend (with MKL optimizations) instead of ort.
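As a rough sketch of the two affected setups and how to move to the fixed release: the `mkl` feature name and image tag come from these release notes, while the exact cargo invocation (the `--path router` argument in particular) is an assumption and may vary by checkout layout.

```shell
# Build the router from source with Intel MKL enabled.
# Dynamic linking is the default; previously this path failed at
# runtime with the SGEMM parameter error described above.
cargo install --path router --features mkl

# Or pull the patched CPU container instead of one of the affected tags:
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.2
```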

The following table shows the affected versions and containers:

| Version | Image |
|---------|-------|
| 1.7.0 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.0 |
| 1.7.1 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.1 |
| 1.7.2 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 |
| 1.7.3 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.3 |
| 1.7.4 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.4 |
| 1.8.0 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.0 |
| 1.8.1 | ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 |

More details: PR #715

Full Changelog: v1.8.1...v1.8.2