This is the Linux app named gpu_poor, whose latest release can be downloaded as Addedwaytocalculate~token_s_trainingtimesourcecode.tar.gz. It can be run online with OnWorks, a free hosting provider for workstations.
Download and run this app named gpu_poor online for free with OnWorks.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 3. Upload the application to that file manager.
- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 6. Download the application, install it, and run it.
SCREENSHOTS
gpu_poor
DESCRIPTION
gpu_poor is an open-source tool designed to help developers determine whether their hardware is capable of running a specific large language model and to estimate the performance they can expect from it. The project focuses on calculating GPU memory requirements and predicted inference speed for different models, hardware configurations, and quantization strategies. By analyzing factors such as model size, context length, batch size, and GPU specifications, the system estimates how much VRAM will be required and how fast tokens can be generated during inference. The tool also provides a detailed breakdown of where GPU memory is allocated, including model weights, KV cache, activations, and other runtime overhead. This information allows developers to evaluate trade-offs between different quantization methods such as GGML, bitsandbytes, and QLoRA before attempting to deploy a model. gpu_poor is particularly useful for researchers and hobbyists.
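The kind of VRAM breakdown described above can be illustrated with a simplified sketch. This is not gpu_poor's actual code or formulas; the function name, the fixed fp16 KV cache, and the flat 10% overhead factor are illustrative assumptions.

```javascript
// Rough VRAM estimate for LLM inference -- a simplified sketch,
// NOT gpu_poor's exact model (which also accounts for activations
// and quantization-specific runtime overhead).
function estimateInferenceVramGB({
  paramsB,        // model size in billions of parameters
  bytesPerParam,  // 2 for fp16, 1 for int8, 0.5 for 4-bit
  numLayers,
  hiddenSize,
  contextLength,
  batchSize,
}) {
  // Model weights: parameter count times bytes per parameter
  const weights = paramsB * 1e9 * bytesPerParam;
  // KV cache: 2 tensors (K and V) per layer, assumed fp16 (2 bytes)
  const kvCache = 2 * numLayers * contextLength * hiddenSize * batchSize * 2;
  // Remaining runtime overhead: crude 10% fudge factor (assumption)
  const overhead = 0.1 * weights;
  return (weights + kvCache + overhead) / 1e9;
}

// Example: a 7B model at 4-bit, 32 layers, hidden size 4096, 2048 context
const gb = estimateInferenceVramGB({
  paramsB: 7, bytesPerParam: 0.5, numLayers: 32,
  hiddenSize: 4096, contextLength: 2048, batchSize: 1,
});
console.log(gb.toFixed(1)); // about 4.9 GB
```

Even this crude arithmetic shows why the KV cache dominates at long context lengths: it grows linearly with context and batch size while the weight term stays fixed.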
Features
- GPU memory requirement estimation for running large language models
- Token generation speed prediction based on model and hardware configuration
- Support for quantization approaches including GGML, bitsandbytes, and QLoRA
- Breakdown of memory usage across model weights, activations, and KV cache
- Estimation of training iteration time for fine-tuning workflows
- Hardware compatibility evaluation for GPUs and CPU-based inference
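The token-speed prediction feature can be approximated with a back-of-envelope heuristic: autoregressive decoding is typically memory-bandwidth-bound, so generation speed is roughly bandwidth divided by the bytes read per token. The function name and the 60% efficiency factor below are assumptions, not gpu_poor's actual model.

```javascript
// Back-of-envelope decode speed for a memory-bandwidth-bound workload:
// every generated token requires streaming the full model weights from VRAM.
// A rough heuristic, not gpu_poor's actual prediction logic.
function estimateTokensPerSecond(modelBytesGB, gpuBandwidthGBs, efficiency = 0.6) {
  // efficiency: fraction of peak bandwidth achieved in practice (assumption)
  return (gpuBandwidthGBs * efficiency) / modelBytesGB;
}

// Example: a 7B model in fp16 (~14 GB) on a GPU with ~936 GB/s peak bandwidth
console.log(estimateTokensPerSecond(14, 936).toFixed(0)); // ~40 tokens/s
```

This also makes the appeal of quantization obvious: halving the bytes per parameter roughly doubles the predicted tokens per second on the same hardware.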
Programming Language
JavaScript
Categories
This application can also be fetched from https://sourceforge.net/projects/gpu-poor.mirror/. It is hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.