This is the Linux app named PowerInfer, whose latest release can be downloaded as PowerInfersourcecode.tar.gz. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named PowerInfer with OnWorks for free.
Follow these steps to run this app:
- 1. Download this application to your PC.
- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 3. Upload the application to that file manager.
- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 6. Download the application, install it, and run it.
SCREENSHOTS:
PowerInfer
DESCRIPTION:
PowerInfer is a high-performance inference engine designed to run large language models efficiently on personal computers equipped with consumer-grade GPUs. The project focuses on improving the performance of local AI inference by optimizing how neural network computations are distributed between CPU and GPU resources. Its architecture exploits the observation that only a subset of neurons in large models are frequently activated, allowing the system to preload frequently used neurons into GPU memory while processing less common activations on the CPU. This hybrid execution strategy significantly reduces memory bottlenecks and improves overall inference speed. PowerInfer incorporates specialized algorithms and sparse operators to manage neuron activation patterns and minimize data transfers between hardware components. As a result, it enables powerful language models to run on consumer hardware while achieving performance comparable to more expensive server-grade systems.
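The hot/cold split described above can be sketched in C++. This is a minimal illustration, not PowerInfer's actual API: the function name, the frequency-based placement policy, and the `gpu_budget` parameter are all assumptions made for the example. The idea is simply that the most frequently activated neurons are pinned to GPU memory while the long tail stays on the CPU.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Where a neuron's weights are kept resident.
enum class Placement { GPU, CPU };

// Hypothetical policy sketch: assign the `gpu_budget` most frequently
// activated neurons to the GPU; everything else stays on the CPU.
std::vector<Placement> partition_neurons(const std::vector<double>& activation_freq,
                                         std::size_t gpu_budget) {
    // Rank neuron indices by descending activation frequency.
    std::vector<std::size_t> order(activation_freq.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) {
                  return activation_freq[a] > activation_freq[b];
              });

    // Default to CPU, then promote the hottest neurons to the GPU.
    std::vector<Placement> placement(activation_freq.size(), Placement::CPU);
    for (std::size_t i = 0; i < gpu_budget && i < order.size(); ++i)
        placement[order[i]] = Placement::GPU;
    return placement;
}
```

With activation frequencies `{0.9, 0.1, 0.8, 0.05}` and a GPU budget of 2, neurons 0 and 2 are placed on the GPU and neurons 1 and 3 remain on the CPU, matching the preload-the-hot-subset strategy described above.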
Features
- High-speed local inference for large language models on consumer GPUs
- Hybrid CPU-GPU execution that optimizes neuron activation workloads
- Sparse operator optimizations to improve computational efficiency
- Reduced GPU memory usage through selective neuron loading
- Support for large transformer models running on personal computers
- Architecture designed for local deployment of AI applications
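The sparse-operator feature above can be illustrated with a small C++ sketch: a matrix-vector product that skips entire rows (neurons) that an activation predictor has marked inactive. This is an illustrative assumption about how such an operator works in principle, not code from the PowerInfer codebase.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sparse operator: y = W * x, but rows whose neuron is
// predicted inactive are skipped entirely, saving both compute and
// memory traffic. `active` plays the role of the activation predictor.
std::vector<float> sparse_matvec(const std::vector<std::vector<float>>& W,
                                 const std::vector<float>& x,
                                 const std::vector<bool>& active) {
    std::vector<float> y(W.size(), 0.0f);
    for (std::size_t r = 0; r < W.size(); ++r) {
        if (!active[r]) continue;  // predicted inactive: output stays 0
        float acc = 0.0f;
        for (std::size_t c = 0; c < x.size(); ++c)
            acc += W[r][c] * x[c];
        y[r] = acc;
    }
    return y;
}
```

For `W = {{1, 2}, {3, 4}}`, `x = {1, 1}`, and only the first neuron active, only row 0 is computed; row 1's dot product is never evaluated, which is where the savings come from when most neurons are inactive.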
Programming Language
C++
Categories
This application can also be fetched from https://sourceforge.net/projects/powerinfer.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.