Generative large language models (LLMs) are well known for their remarkable performance in a variety of tasks, including complex natural language processing (NLP), creative writing, question answering, and code generation. In recent times, LLMs have increasingly been run on affordable on-premises systems, including home PCs with consumer GPUs, for improved data privacy, customizable models, and lower inference costs. On-premises installations prioritize low latency over high throughput; however, LLMs are difficult to deploy on consumer GPUs because of their high memory requirements.
These models, which are typically autoregressive transformers, produce text token by token and, for each inference step, need access to the entire model, which can have hundreds of billions of parameters. This limitation is especially noticeable in local deployments, where individual requests leave little room for parallel processing. Two current strategies for addressing these memory constraints are model offloading and model compression.
In a recent study, a team of researchers presented PowerInfer, an efficient LLM inference system designed for on-premises deployments using a single commodity GPU. PowerInfer reduces the requirement for costly Peripheral Component Interconnect Express (PCIe) data transfers by preselecting and preloading hot-activated neurons on the GPU offline and using online predictors to identify active neurons at runtime.
The core idea behind PowerInfer's design is to exploit the high locality inherent in LLM inference, which is characterized by a power-law distribution in neuron activation. This distribution shows that a small fraction of hot neurons activate consistently across different inputs, while the majority of cold neurons activate only for specific inputs.
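To make this idea concrete, the short Python sketch below simulates an offline profiling pass of the kind such a system might rely on: it counts how often each feed-forward neuron fires over a calibration run and shows that a small "hot" subset covers most activations. The data, sizes, and thresholds are illustrative assumptions, not PowerInfer's actual code.

```python
import numpy as np

# Hypothetical offline profiling sketch (illustrative, not PowerInfer's code):
# count how often each FFN neuron activates over a calibration set, then mark
# the small fraction of frequently firing neurons as "hot".
rng = np.random.default_rng(0)
num_neurons, num_samples = 4096, 1000

# Simulate a power-law-like activation pattern: a few neurons fire on most
# inputs, while most neurons fire rarely.
fire_prob = np.sort(rng.pareto(a=1.5, size=num_neurons) + 0.01)[::-1]
fire_prob = np.clip(fire_prob / fire_prob.max(), 0.0, 1.0)
activations = rng.random((num_samples, num_neurons)) < fire_prob

freq = activations.mean(axis=0)                 # per-neuron activation frequency
order = np.argsort(freq)[::-1]                  # most frequently active first
coverage = np.cumsum(freq[order]) / freq.sum()  # share of activations covered

hot = order[coverage <= 0.8]                    # smallest set covering ~80%
print(f"{len(hot) / num_neurons:.1%} of neurons cover ~80% of all activations")
```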
The team has shared that PowerInfer is a hybrid GPU-CPU inference engine that builds on this insight. It preloads hot-activated neurons onto the GPU for instant access and leaves cold-activated neurons on the CPU, where they are computed on demand. By strategically distributing the workload in this way, GPU memory requirements are greatly reduced and there are far fewer data transfers between the CPU and GPU.
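A rough sketch of what such a split might look like for a single feed-forward layer is shown below. It is a minimal illustration under assumed shapes and helper names (the hot-neuron set, `split_ffn`, and `hybrid_forward` are stand-ins), not PowerInfer's actual engine.

```python
import torch

# Minimal sketch of a hybrid GPU-CPU split for one FFN layer (illustrative):
# hot neuron rows live on the GPU, cold rows stay in host memory, and each
# device computes only its own slice before the partial results are merged.
def split_ffn(weight: torch.Tensor, hot_idx: torch.Tensor, device: str) -> dict:
    cold_mask = torch.ones(weight.shape[0], dtype=torch.bool)
    cold_mask[hot_idx] = False
    cold_idx = cold_mask.nonzero(as_tuple=True)[0]
    return {
        "hot_w": weight[hot_idx].to(device),  # preloaded onto the GPU
        "cold_w": weight[cold_idx],           # remains in CPU memory
        "hot_idx": hot_idx,
        "cold_idx": cold_idx,
    }

def hybrid_forward(x_cpu: torch.Tensor, parts: dict, device: str) -> torch.Tensor:
    out = torch.empty(parts["hot_w"].shape[0] + parts["cold_w"].shape[0])
    # The GPU handles the frequently activated (hot) neurons...
    out[parts["hot_idx"]] = (parts["hot_w"] @ x_cpu.to(device)).cpu()
    # ...while the CPU computes the rarely activated (cold) neurons in place.
    out[parts["cold_idx"]] = parts["cold_w"] @ x_cpu
    return out

device = "cuda" if torch.cuda.is_available() else "cpu"
w = torch.randn(4096, 1024)
hot = torch.topk(torch.rand(4096), k=512).indices  # stand-in for a profiled hot set
parts = split_ffn(w, hot, device)
y = hybrid_forward(torch.randn(1024), parts, device)
```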
PowerInfer integrates neuron-aware sparse operators and adaptive predictors to further optimize performance. Neuron-aware sparse operators work directly on individual neurons, eliminating the need to operate on entire matrices, while adaptive predictors identify which neurons are likely to activate at runtime. Together, these optimizations exploit activation sparsity so that only the neurons expected to fire are actually computed.
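As a rough illustration of what this buys, the sketch below pairs a small stand-in predictor with a single feed-forward weight matrix and multiplies only the rows the predictor marks as active. The predictor architecture, sizes, and threshold are assumptions made for the example, not the predictors PowerInfer trains.

```python
import torch

# Illustrative sketch of neuron-aware sparse execution with an online predictor:
# a lightweight model guesses which FFN neurons will fire for this input, and
# the layer multiplies only those weight rows instead of the full matrix.
hidden, ffn = 1024, 4096
w_up = torch.randn(ffn, hidden)
predictor = torch.nn.Sequential(           # small stand-in predictor
    torch.nn.Linear(hidden, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, ffn),
)

x = torch.randn(hidden)
with torch.no_grad():
    scores = predictor(x)                              # per-neuron activation scores
active = (scores > 0).nonzero(as_tuple=True)[0]        # predicted-active neurons

# Compute only the predicted-active rows; skipped neurons are treated as zero,
# which is what ReLU-style activation sparsity would produce anyway.
partial = torch.zeros(ffn)
partial[active] = torch.relu(w_up[active] @ x)
```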
The team evaluated the performance of PowerInfer, which achieved an average token generation rate of 13.20 tokens per second and a peak of 29.08 tokens per second. These results were obtained on a single NVIDIA RTX 4090 GPU across a variety of LLMs, including the OPT-175B model. This performance is only 18% lower than that of a top-tier server-grade A100 GPU, demonstrating the effectiveness of PowerInfer on mainstream hardware.
The evaluation also showed that PowerInfer runs up to 11.69 times faster than the existing llama.cpp system while maintaining model fidelity. In conclusion, PowerInfer offers a significant increase in LLM inference speed, indicating its potential as a solution for running advanced language models on desktop PCs with limited GPU capabilities.
Review the Paper and GitHub. All credit for this research goes to the researchers of this project.
Tanya Malhotra is a final-year student at the University of Petroleum and Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with a keen interest in acquiring new skills, leading groups, and managing work in an organized manner.