# The BigDL Project

## BigDL-LLM: low-Bit LLM library
`bigdl-llm` is a library for running LLMs (large language models) on Intel XPU (from laptop to GPU to cloud) using INT4 with very low latency [1] (for any PyTorch model).
> **Note**: It is built on top of the excellent work of llama.cpp, gptq, bitsandbytes, qlora, etc.
### Latest update
- `bigdl-llm` now supports Intel GPU (including Arc, Flex and MAX); see the latest GPU examples here.
- A `bigdl-llm` tutorial is released here.
- Over 20 models have been verified on `bigdl-llm`, including LLaMA/LLaMA2, ChatGLM/ChatGLM2, MPT, Falcon, Dolly-v1/Dolly-v2, StarCoder, Whisper, InternLM, QWen, Baichuan, MOSS and more; see the complete list here.
### bigdl-llm demos
See the optimized performance of `chatglm2-6b` and `llama-2-13b-chat` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
*(Demo GIFs: `chatglm2-6b` and `llama-2-13b-chat` running on 12th Gen Intel Core CPU, and `chatglm2-6b` and `llama-2-13b-chat` running on Intel Arc GPU.)*
### bigdl-llm quickstart
#### CPU Quickstart
You may install `bigdl-llm` on Intel CPU as follows:

```bash
pip install --pre --upgrade bigdl-llm[all]
```
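To confirm the installation, you can check that the package and its Transformers integration import cleanly; this is a minimal sketch, and the import path is the same one used in the example below:

```python
# quick sanity check that bigdl-llm and its Transformers integration are importable
from bigdl.llm.transformers import AutoModelForCausalLM

print("bigdl-llm installed:", AutoModelForCausalLM is not None)
```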
> **Note**: `bigdl-llm` has been tested on Python 3.9.
You can then apply INT4 optimizations to any Hugging Face Transformers model as follows.
```python
# load a Hugging Face Transformers model with INT4 optimizations
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = '/path/to/model/'
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)

# run the optimized model on Intel CPU
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer.encode(input_str, ...)
output_ids = model.generate(input_ids, ...)
output = tokenizer.batch_decode(output_ids)
```
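For reference, here is the same flow with the elided arguments filled in with illustrative values; the model path, prompt, and generation settings are placeholders rather than requirements of the API:

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = 'meta-llama/Llama-2-7b-chat-hf'  # placeholder: any local path or Hub model id
prompt = 'What is AI?'                        # placeholder prompt

# load the model with INT4 optimizations and its matching tokenizer
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# tokenize, generate on the CPU, and decode
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```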
#### GPU Quickstart
You may install `bigdl-llm` on Intel GPU as follows:

```bash
# the command below installs intel_extension_for_pytorch==2.0.110+xpu by default
# you may install a specific ipex/torch version for your needs
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
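Before loading a model, you may want to confirm that PyTorch can see the Intel GPU; this is a minimal sketch assuming the `intel_extension_for_pytorch` XPU build installed by the command above:

```python
# minimal XPU sanity check (assumes the ipex xpu wheel installed above)
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 -- registers the 'xpu' device with PyTorch

print(torch.xpu.is_available())   # True if an Intel GPU is visible
print(torch.xpu.device_count())   # number of Intel GPUs detected
```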
> **Note**: `bigdl-llm` has been tested on Python 3.9.
You can then apply INT4 optimizations to any Hugging Face Transformers model on Intel GPU as follows.
```python
# load a Hugging Face Transformers model with INT4 optimizations
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = '/path/to/model/'
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)

# run the optimized model on Intel GPU
model = model.to('xpu')

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer.encode(input_str, ...).to('xpu')
output_ids = model.generate(input_ids, ...)
output = tokenizer.batch_decode(output_ids.cpu())
```
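As with the CPU example, here is the GPU flow with the elided arguments filled in with illustrative values (the model path, prompt, and generation settings are placeholders); note that inputs are moved to `'xpu'` and outputs are brought back to the CPU before decoding:

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = 'meta-llama/Llama-2-7b-chat-hf'  # placeholder: any local path or Hub model id
prompt = 'What is AI?'                        # placeholder prompt

# load with INT4 optimizations, then move the optimized model to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True).to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path)

# inputs live on 'xpu'; bring outputs back to the CPU before decoding
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('xpu')
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.batch_decode(output_ids.cpu(), skip_special_tokens=True)[0])
```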
For more details, please refer to the bigdl-llm Document, Readme, Tutorial and API Doc.
## Overview of the complete BigDL project
BigDL seamlessly scales your data analytics & AI applications from laptop to cloud, with the following libraries:
- **LLM**: Low-bit (INT3/INT4/INT5/INT8) large language model library for Intel CPU/GPU
- **Orca**: Distributed Big Data & AI (TF & PyTorch) Pipeline on Spark and Ray
- **Nano**: Transparent Acceleration of TensorFlow & PyTorch Programs on Intel CPU/GPU
- **DLlib**: “Equivalent of Spark MLlib” for Deep Learning
- **Chronos**: Scalable Time Series Analysis using AutoML
- **Friesian**: End-to-End Recommendation Systems
- **PPML**: Secure Big Data and AI (with SGX Hardware Security)
## Choosing the right BigDL library
[1] Performance varies by use, configuration and other factors. `bigdl-llm` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.