Inference on GPU#

Apart from its significant acceleration capabilities on Intel CPUs, BigDL-LLM also supports optimizations and acceleration for running LLMs (large language models) on Intel GPUs. With BigDL-LLM, PyTorch models (in FP16/BF16/FP32) can be optimized with low-bit quantization (supported precisions include INT4, INT5, INT8, etc.).

Compared with running on Intel CPUs, a few additional steps are required on Intel GPUs. To help you better understand the process, here we use the popular model Llama-2-7b-chat-hf as an example.

Make sure you have prepared your environment by following the instructions here.

Note

If you are using an older version of bigdl-llm (specifically, older than 2.5.0b20240104), you need to manually add import intel_extension_for_pytorch as ipex at the beginning of your code.
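
For such older versions, the extra import mentioned above would simply be placed at the beginning of your script:

# Only required for bigdl-llm older than 2.5.0b20240104
import intel_extension_for_pytorch as ipex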

Load and Optimize Model#

You could choose to use either the PyTorch API or the transformers-style API on Intel GPUs according to your preference.

Once you have obtained the model with BigDL-LLM low-bit optimization, move it to the Intel GPU by calling to('xpu').

With the PyTorch API, you could optimize any PyTorch model with a “one-line code change”, and the loading and optimizing process on Intel GPUs may look as follows:

# Take Llama-2-7b-chat-hf as an example
from transformers import LlamaForCausalLM
from bigdl.llm import optimize_model

model = LlamaForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf', torch_dtype='auto', low_cpu_mem_usage=True)
model = optimize_model(model) # With only one line to enable BigDL-LLM INT4 optimization

model = model.to('xpu') # Important after obtaining the optimized model
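
By default, the optimization above uses INT4. As a hedged sketch, the other precisions listed earlier (e.g. INT8) could be requested through a low_bit argument of optimize_model; the argument name and the 'sym_int8' value below are assumptions, so please verify them against the API doc mentioned in this section:

# Assumed interface: the low_bit argument and the 'sym_int8' value may differ,
# please check the optimize_model API doc
model = optimize_model(model, low_bit='sym_int8') # Request INT8 instead of the default INT4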

Tip

For Windows users running LLMs on Intel iGPUs, we recommend setting cpu_embedding=True in the optimize_model function. This allows the memory-intensive embedding layer to run on the CPU instead of the iGPU.
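
Applied to the example above, this amounts to passing one extra keyword argument:

model = optimize_model(model, cpu_embedding=True) # Keep the memory-intensive embedding layer on the CPU (recommended for Intel iGPUs on Windows)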

See the API doc for optimize_model for more information.

In particular, if you have saved the optimized model following the steps here, the loading process on Intel GPUs may look as follows:

from transformers import LlamaForCausalLM
from bigdl.llm.optimize import low_memory_init, load_low_bit

saved_dir='./llama-2-bigdl-llm-4-bit'
with low_memory_init(): # Fast and low cost by loading model on meta device
   model = LlamaForCausalLM.from_pretrained(saved_dir,
                                            torch_dtype="auto",
                                            trust_remote_code=True)
model = load_low_bit(model, saved_dir) # Load the optimized model

model = model.to('xpu') # Important after obtaining the optimized model
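
If you prefer the transformers-style API instead, a minimal sketch of the same loading-and-optimizing step is shown below; it assumes the bigdl.llm.transformers.AutoModelForCausalLM class and its load_in_4bit argument, so please double-check the names against the BigDL-LLM API doc:

# A sketch using the (assumed) transformers-style API of BigDL-LLM
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',
                                             load_in_4bit=True, # Enable INT4 optimization while loading
                                             trust_remote_code=True)

model = model.to('xpu') # Still required before running inference on Intel GPUs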

Run Optimized Model#

You could then run inference with the optimized model on Intel GPUs in almost the same way as on CPUs. The only difference is that the input tensors also need to be moved to the device with to('xpu').

Continuing with the Llama-2-7b-chat-hf example, inference could be run as follows:

import torch
from transformers import LlamaTokenizer

# Load the tokenizer that matches the model
tokenizer = LlamaTokenizer.from_pretrained('meta-llama/Llama-2-7b-chat-hf')

with torch.inference_mode():
   prompt = 'Q: What is CPU?\nA:'
   input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu') # With .to('xpu') specifically for inference on Intel GPUs
   output = model.generate(input_ids, max_new_tokens=32)
   output_str = tokenizer.decode(output[0], skip_special_tokens=True)

Note

The initial generation of optimized LLMs on Intel GPUs could be slow. Therefore, it’s recommended to perform a warm-up run before the actual generation.
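
A hedged sketch of such a warm-up pass, reusing the model and tokenizer from the example above, could look like this (the short prompt and single warm-up token are arbitrary choices):

with torch.inference_mode():
   # Warm-up: run one short generation and discard the result, so that
   # one-time initialization does not affect the timing of later runs
   warmup_ids = tokenizer.encode('warm up', return_tensors="pt").to('xpu')
   model.generate(warmup_ids, max_new_tokens=1)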

Note

If you are a Windows user, please also note that the first time each model runs on an Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.

See also

See the complete examples here