Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 27k stars · 4k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (see the quantization sketch after this list)

    Python · 419 stars · 32 forks
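
For the pinned vllm repo, here is a minimal offline-inference sketch using vLLM's public Python API (`LLM` and `SamplingParams`); the model name is an illustrative assumption, and any Hugging Face-compatible model would work the same way.

```python
from vllm import LLM, SamplingParams

# Illustrative model name (an assumption); swap in any HF-compatible model.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Batched offline generation; each RequestOutput carries its completions.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```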
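For the pinned llm-compressor repo, a hedged sketch of its one-shot quantization workflow: the `oneshot` entry point and `GPTQModifier` recipe follow the library's documented examples, but the exact import paths, model, and calibration dataset below are assumptions and may vary across versions.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot  # import path is an assumption; it has moved across versions

# Model and calibration dataset are illustrative assumptions.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    # GPTQ recipe: 4-bit weights, 16-bit activations, skipping the lm_head.
    recipe=GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
    output_dir="TinyLlama-1.1B-W4A16",  # the saved checkpoint loads directly in vLLM
    max_seq_length=2048,
    num_calibration_samples=512,
)
```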

Repositories

7 repositories

Sponsors

  • @yangalan123
  • @vincentkoc
  • @mgoin
  • @upstash
  • @robertgshaw2-neuralmagic
  • Private Sponsor
