BigDL-LLM
=========================

``bigdl-llm`` is a library for running LLMs (large language models) on Intel XPU (from laptop to GPU to cloud) using INT4 with very low latency [1]_ (for any PyTorch model).
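
As a quick illustration, the snippet below is a minimal sketch of the ``transformers``-style API; the model id used here is only a placeholder, and any local or Hugging Face Hub model path can be substituted:

.. code-block:: python

    # Minimal sketch: load a Hugging Face *transformers* model with BigDL-LLM
    # INT4 optimizations applied. The model id below is a placeholder.
    from bigdl.llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id
    model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    inputs = tokenizer("What is BigDL-LLM?", return_tensors="pt")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

For general PyTorch models, BigDL-LLM also provides a one-line ``optimize_model`` API; see the Key Features Guide below.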

------

.. grid:: 1 2 2 2
    :gutter: 2

    .. grid-item-card::

        **Get Started**
        ^^^

        The documents in this section help you get started quickly with BigDL-LLM.

        +++
        :bdg-link:`BigDL-LLM in 5 minutes <./Overview/llm.html>` |
        :bdg-link:`Installation <./Overview/install.html>`

    .. grid-item-card::

        **Key Features Guide**
        ^^^

        Each guide in this section provides in-depth information and concepts about a BigDL-LLM key feature.

        +++
        :bdg-link:`PyTorch <./Overview/KeyFeatures/optimize_model.html>` |
        :bdg-link:`transformers-style <./Overview/KeyFeatures/transformers_style_api.html>` |
        :bdg-link:`LangChain <./Overview/KeyFeatures/langchain_api.html>` |
        :bdg-link:`GPU <./Overview/KeyFeatures/gpu_supports.html>`

    .. grid-item-card::

        **Examples & Tutorials**
        ^^^

        The examples contain scripts that help you quickly get started using BigDL-LLM to run popular open-source models from the community.

        +++
        :bdg-link:`Examples <./Overview/examples.html>`

    .. grid-item-card::

        **API Document**
        ^^^

        The API document provides detailed descriptions of the BigDL-LLM APIs.

        +++
        :bdg-link:`API Document <../PythonAPI/LLM/index.html>`

------

.. [1] Performance varies by use, configuration and other factors. ``bigdl-llm`` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.

.. toctree::
    :hidden:

    BigDL-LLM Document