
OpenVINO

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. OpenVINO™ Runtime can run the same optimized model across various hardware devices, accelerating deep learning performance across use cases such as large language models (LLMs), computer vision, automatic speech recognition, and more.

OpenVINO models can be run locally through the HuggingFacePipeline class. To deploy a model with OpenVINO, specify the backend="openvino" parameter to use OpenVINO as the backend inference framework.

To use this integration, you should have the optimum-intel Python package with the OpenVINO accelerator installed:

%pip install --upgrade-strategy eager "optimum[openvino,nncf]" langchain-huggingface --quiet

Model Loading

Models can be loaded by specifying the model parameters using the from_model_id method.

If you have an Intel GPU, you can specify model_kwargs={"device": "GPU"} to run inference on it.

from langchain_huggingface import HuggingFacePipeline

ov_config = {"PERFORMANCE_HINT": "LATENCY", "NUM_STREAMS": "1", "CACHE_DIR": ""}

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)
API Reference: HuggingFacePipeline
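
If an Intel GPU is available (as noted above), a minimal variation of the same call targets it by changing the device; the ov_llm_gpu name here is only illustrative:

ov_llm_gpu = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "GPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)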

Models can also be loaded by passing in an existing optimum-intel pipeline directly:

from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "gpt2"
device = "CPU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
ov_model = OVModelForCausalLM.from_pretrained(
    model_id, export=True, device=device, ov_config=ov_config
)
ov_pipe = pipeline(
    "text-generation", model=ov_model, tokenizer=tokenizer, max_new_tokens=10
)
ov_llm = HuggingFacePipeline(pipeline=ov_pipe)
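
The resulting ov_llm behaves like any other LangChain LLM, so you can already invoke it directly before composing a chain:

print(ov_llm.invoke("What is electroencephalography?"))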

Create Chain

With the model loaded into memory, you can compose it with a prompt to form a chain.

from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | ov_llm

question = "What is electroencephalography?"

print(chain.invoke({"question": question}))
API Reference: PromptTemplate

To get the response without the prompt echoed back, you can bind skip_prompt=True to the LLM.

chain = prompt | ov_llm.bind(skip_prompt=True)

question = "What is electroencephalography?"

print(chain.invoke({"question": question}))

Inference with a local OpenVINO model

It is possible to export your model to the OpenVINO IR format with the CLI and load it back from a local folder:

!optimum-cli export openvino --model gpt2 ov_model_dir

It is recommended to apply 8-bit or 4-bit weight quantization via --weight-format to reduce inference latency and model footprint:

!optimum-cli export openvino --model gpt2 --weight-format int8 ov_model_dir # for 8-bit quantization

!optimum-cli export openvino --model gpt2 --weight-format int4 ov_model_dir # for 4-bit quantization
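
If you prefer to stay in Python rather than the CLI, recent optimum-intel releases also expose weight quantization at export time. A minimal sketch, assuming OVWeightQuantizationConfig is available in your installed version:

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
from transformers import AutoTokenizer

# Export gpt2 to OpenVINO IR with 4-bit weight quantization and save it locally,
# together with the tokenizer, so the folder can later be loaded via from_model_id
ov_model = OVModelForCausalLM.from_pretrained(
    "gpt2", export=True, quantization_config=OVWeightQuantizationConfig(bits=4)
)
ov_model.save_pretrained("ov_model_dir")
AutoTokenizer.from_pretrained("gpt2").save_pretrained("ov_model_dir")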

Then load the exported model from the local folder:

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="ov_model_dir",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)

chain = prompt | ov_llm

question = "What is electroencephalography?"

print(chain.invoke({"question": question}))

You can get an additional inference speed improvement with dynamic quantization of activations and KV-cache quantization. These options can be enabled through ov_config as follows:

ov_config = {
    "KV_CACHE_PRECISION": "u8",
    "DYNAMIC_QUANTIZATION_GROUP_SIZE": "32",
    "PERFORMANCE_HINT": "LATENCY",
    "NUM_STREAMS": "1",
    "CACHE_DIR": "",
}
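
A minimal sketch of applying these options, reusing the ov_model_dir export from above, is to pass the updated ov_config back through model_kwargs:

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="ov_model_dir",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)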

Streaming

To stream the LLM output, you can create a Hugging Face TextIteratorStreamer and pass it to the pipeline through pipeline_kwargs.

from threading import Thread

from transformers import TextIteratorStreamer

streamer = TextIteratorStreamer(
    ov_llm.pipeline.tokenizer,
    timeout=30.0,
    skip_prompt=True,
    skip_special_tokens=True,
)
pipeline_kwargs = {"pipeline_kwargs": {"streamer": streamer, "max_new_tokens": 100}}
chain = prompt | ov_llm.bind(**pipeline_kwargs)

t1 = Thread(target=chain.invoke, args=({"question": question},))
t1.start()

for new_text in streamer:
    print(new_text, end="", flush=True)

t1.join()  # wait for the generation thread to finish

For more information, refer to the OpenVINO documentation and the OpenVINO LLM inference guide.

