
AutoRound Meets SGLang: Enabling Quantized Model Inference with AutoRound


Author: Intel Neural Compressor Team

Overview

We are thrilled to announce an official collaboration between SGLang and AutoRound, enabling low-bit quantization for efficient LLM inference.

Through this integration, developers can now quantize large models with AutoRound’s signed-gradient optimization and directly deploy them in SGLang’s efficient runtime, achieving low-bit model inference with minimal accuracy loss and significant latency reduction.

What Is AutoRound?

AutoRound is an advanced post-training quantization (PTQ) toolkit designed for Large Language Models (LLMs) and Vision-Language Models (VLMs). It uses signed gradient descent to jointly optimize weight rounding and clipping ranges, enabling accurate low-bit quantization (e.g., INT2 - INT8) with minimal accuracy loss in most scenarios. For example, at INT2 precision it outperforms popular baselines by up to 2.1x in relative accuracy, and at INT4 precision it continues to hold a competitive edge in most cases. The image below provides an overview of the core algorithm in AutoRound.

Full technical details are presented in the AutoRound paper:

Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs

 

[Figure: Overview of the AutoRound core algorithm]
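For intuition, here is a minimal, self-contained sketch of the core idea (not the actual AutoRound implementation, which minimizes block output error on calibration data and also learns clipping ranges): a per-weight rounding offset v is trained with signed gradient descent through a straight-through estimator.

```python
import torch

def ste_round(x):
    # Straight-through estimator: round in the forward pass,
    # but let gradients flow through as if rounding were the identity.
    return (x.round() - x).detach() + x

def fake_quant(w, scale, v, qmin=-8, qmax=7):
    # Quantize-dequantize with a learnable rounding offset v in [-0.5, 0.5].
    q = torch.clamp(ste_round(w / scale + v), qmin, qmax)
    return q * scale

w = torch.randn(128, 128)             # one weight block (toy data)
scale = (w.abs().max() / 7).detach()  # fixed scale here; AutoRound also tunes clipping
v = torch.zeros_like(w, requires_grad=True)
lr = 1e-2

for _ in range(200):
    loss = torch.nn.functional.mse_loss(fake_quant(w, scale, v), w)
    loss.backward()
    with torch.no_grad():
        v -= lr * v.grad.sign()       # signed gradient descent step
        v.clamp_(-0.5, 0.5)           # keep the offset within half a rounding step
        v.grad.zero_()
```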

 

Despite its robust performance, AutoRound remains fast and lightweight: quantizing a 72B model takes only 37 minutes on a single GPU in light mode.

It further supports mixed-bit tuning, lm-head quantization, GPTQ/AWQ/GGUF format exports, and customizable tuning recipes.

AutoRound Highlights

Accuracy: delivers superior accuracy at low-bit precision

[Figure: Average accuracy across 10+ tasks at INT4 weight]

 

Schemes: supports weight-only quantization, weight-and-activation quantization, and both dynamic and static activation quantization.

Mixed-bits: provides an effective algorithm to generate mixed-bit and other data-type schemes in minutes.

Broad Compatibility:

  • Supports nearly all popular LLM architectures and over 10 vision-language models (VLMs)
  • Supported devices: CPU, Intel GPU, CUDA
  • Supported data types: INT2 - INT8, MXFP4, NVFP4, FP8, and MXFP8

Efficiency: enables block-wise tuning to lower VRAM usage while remaining fast.

 

[Figure: Quantization time cost comparison]

Community Adoption:

  • Works seamlessly with SGLang, TorchAO, Transformers, and vLLM.
  • Widely used across Hugging Face model hubs such as Intel, OPEA, Kaitchup, and fbaldassarri, with approximately two million downloads.

Export Formats:

  • AutoRound
  • GPTQ
  • AWQ
  • GGUF
  • compressed-tensors (initial support)

Integration Overview

SGLang provides a next-generation inference runtime built for scalable, low-latency LLM deployment. Its multi-modal, multi-GPU, and streaming execution model enables both chat and agentic reasoning workloads with exceptional efficiency.

SGLang’s flexible architecture now offers native hooks for quantized model loading, unlocking AutoRound’s full potential for deployment.

1. Quantize with AutoRound

AutoRound automatically optimizes weight rounding and exports quantized weights that are compatible with SGLang.

1.1 API Usage

[Figure: AutoRound Python API usage]
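A minimal sketch of the Python API flow, based on the documented AutoRound interface; the model name and output directory are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Any Transformers-compatible checkpoint; this model name is illustrative.
model_name = "Qwen/Qwen3-8B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4 weight-only quantization with the default tuning recipe.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)

# Export in the native auto_round format, which SGLang can load directly.
autoround.quantize_and_save("./Qwen3-8B-int4-AutoRound", format="auto_round")
```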

 

1.2 CMD Usage

[Figure: AutoRound command-line usage]
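A sketch of the equivalent command-line invocation; the flags follow the AutoRound CLI, and the model and output paths are placeholders:

```bash
auto-round \
    --model Qwen/Qwen3-8B \
    --bits 4 \
    --group_size 128 \
    --format auto_round \
    --output_dir ./Qwen3-8B-int4-AutoRound
```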

 

 

2. Deploying with SGLang

SGLang supports AutoRound-quantized models directly (version v0.5.4.post2 or later). It is compatible with SGLang-supported model architectures, including common LLM, VLM, and MoE models, and also supports inference and evaluation of AutoRound mixed-bit quantized models.

 

2.1 OpenAI-Compatible Inference Usage

[Figure: OpenAI-compatible inference with SGLang]
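A minimal sketch, assuming the quantized checkpoint from the previous step: launch an OpenAI-compatible SGLang server, then query it with the standard OpenAI client (paths and port are placeholders):

```bash
# Serve the AutoRound-quantized model over an OpenAI-compatible API.
python -m sglang.launch_server \
    --model-path ./Qwen3-8B-int4-AutoRound \
    --port 30000
```

```python
from openai import OpenAI

# Point the OpenAI client at the local SGLang server.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="./Qwen3-8B-int4-AutoRound",
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
)
print(response.choices[0].message.content)
```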

 

2.2 Offline Engine API Inference Usage

[Figure: SGLang offline Engine API inference]
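A sketch of offline batch inference through SGLang's Engine API, using the same placeholder checkpoint:

```python
import sglang as sgl

# Load the AutoRound-quantized model into an in-process engine (no HTTP server).
llm = sgl.Engine(model_path="./Qwen3-8B-int4-AutoRound")

prompts = [
    "What is post-training quantization?",
    "Name one benefit of INT4 inference.",
]
sampling_params = {"temperature": 0.7, "max_new_tokens": 64}

outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
    print(prompt, "->", output["text"])

llm.shutdown()
```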

 

More flexible configurations and deployment options are waiting for you to explore!

 

Quantization Roadmap

AutoRound’s quantization benchmark results demonstrate robust accuracy retention at low precision. The results below highlight AutoRound’s strong advantages and potential in MXFP4, NVFP4, and mixed-bit model quantization. Accuracy is measured as the average across the lambada_openai, hellaswag, piqa, winogrande, and mmlu tasks.

MXFP4 & NVFP4 Quantization. The RTN (round-to-nearest) algorithm serves as the baseline, and the 'alg_ext' option indicates that experimental optimization algorithms are enabled.

| MXFP4 | Llama3.1-8B-Instruct | Qwen2.5-7B-Instruct | Phi4 | Qwen3-32B |
|---|---|---|---|---|
| RTN | 0.6212 | 0.6550 | 0.7167 | 0.6901 |
| AutoRound | 0.6686 | 0.6758 | 0.7247 | 0.7211 |
| AutoRound + alg_ext | 0.6732 | 0.6809 | 0.7225 | 0.7201 |

 

| NVFP4 | Llama3.1-8B-Instruct | Qwen2.5-7B-Instruct | Phi4 | Qwen3-32B |
|---|---|---|---|---|
| RTN | 0.6876 | 0.6906 | 0.7296 | 0.7164 |
| AutoRound | 0.6918 | 0.6973 | 0.7306 | 0.7306 |
| AutoRound + alg_ext | 0.6965 | 0.6989 | 0.7318 | 0.7295 |

 

Auto MXFP4 & MXFP8 Mixed-Bits Quantization

| Average Bits | Llama3.1-8B-Instruct | Qwen2.5-7B-Instruct | Qwen3-8B | Qwen3-32B |
|---|---|---|---|---|
| BF16 | 0.7076 (100%) | 0.7075 (100%) | 0.6764 (100%) | 0.7321 (100%) |
| 4-bit | 0.6626 (93.6%) | 0.6550 (92.6%) | 0.6316 (93.4%) | 0.6901 (94.3%) |
| 4.5-bit | 0.6808 (96.2%) | 0.6776 (95.8%) | 0.6550 (96.8%) | 0.7176 (98.0%) |
| 5-bit | 0.6857 (96.9%) | 0.6823 (96.4%) | 0.6594 (97.5%) | 0.7201 (98.3%) |
| 6-bit | 0.6975 (98.6%) | 0.6970 (98.5%) | 0.6716 (99.3%) | 0.7303 (99.8%) |

 

As part of the AutoRound roadmap, we plan to continue enhancing MXFP4 and NVFP4 accuracy for common models and auto mixed-bit quantization in upcoming releases.

Conclusion

The integration of AutoRound and SGLang marks a major milestone in efficient AI model deployment. This collaboration bridges precision optimization and runtime scalability, allowing developers to move seamlessly from quantization to real-time inference with minimal friction. AutoRound’s signed-gradient quantization ensures high fidelity even at extreme compression ratios, while SGLang’s high-throughput inference engine unlocks the full potential of low-bit execution across CPUs, GPUs, and multi-node clusters.

Looking forward, we aim to expand support for advanced quantization formats, optimize kernel efficiency, and bring AutoRound quantization into broader multimodal and agentic workloads. Together, AutoRound and SGLang are setting a new standard for intelligent, efficient, and scalable LLM deployment.

 

Notices and Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available ​updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.