This is the Windows app named shimmy, whose latest release can be downloaded as Shimmyv1.9.0sourcecode.tar.gz. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named shimmy online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload the application to that file manager.
- 4. Start any OnWorks online OS emulator from this website, preferably the Windows online emulator.
- 5. From the OnWorks Windows OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application and install it.
- 7. Install Wine from your Linux distribution's software repositories. Once it is installed, you can double-click the app to run it with Wine. You can also try PlayOnLinux, a fancy interface over Wine that will help you install popular Windows programs and games.
Wine is a way to run Windows software on Linux, with no copy of Windows required. Wine is an open-source Windows compatibility layer that can run Windows programs directly on any Linux desktop. Essentially, Wine re-implements enough of Windows from scratch that it can run Windows applications without actually needing Windows.
SCREENSHOTS:
shimmy
DESCRIPTION:
The shimmy project is a lightweight local inference server designed to run large language models with minimal overhead. Written primarily in Rust, the tool provides a small standalone binary that exposes an API compatible with the OpenAI interface, allowing existing applications to interact with local models without significant code changes. This compatibility enables developers to replace remote AI services with locally hosted models while keeping their existing software architecture intact. Shimmy focuses on performance and simplicity, using efficient runtime components to minimize memory usage and startup time compared to heavier inference frameworks. It supports modern model formats such as GGUF and SafeTensors and can automatically discover models stored locally or in common directories used by other AI tools. Advanced capabilities include CPU offloading for Mixture-of-Experts models and GPU acceleration, enabling large models to run on consumer hardware with limited VRAM.
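To illustrate what OpenAI compatibility means in practice, here is a minimal client sketch against a locally running shimmy instance. The base URL, port, and model name are assumptions for illustration; substitute the address shimmy reports on startup and a model it has actually discovered. The `/v1/chat/completions` path is the standard endpoint shape that OpenAI-compatible servers expose.

```python
# Minimal sketch: query a local OpenAI-compatible server using only the
# Python standard library. The base URL, port, and model name below are
# assumptions; replace them with the address shimmy prints on startup
# and a model it has discovered.
import json
import urllib.request

BASE_URL = "http://localhost:11435"  # assumed address; check shimmy's startup log

payload = {
    "model": "llama-3.2-1b-instruct",  # hypothetical model name
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",            # standard OpenAI-style path
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# OpenAI-compatible servers return the reply under choices[0].message.content
print(body["choices"][0]["message"]["content"])
```

Because the request and response shapes match the OpenAI API, pointing an existing OpenAI client library at this base URL is typically all that is needed to switch from a remote service to the local server.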
Features
- Local inference server with OpenAI-compatible API endpoints
- Support for GGUF and SafeTensors model formats
- Single-binary deployment with minimal dependencies
- Automatic discovery of models in local directories and caches (see the sketch after this list)
- GPU acceleration and CPU offloading for large models
- Hot model swapping and runtime inference configuration
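The automatic-discovery feature can be pictured as a recursive scan of well-known model locations for supported file formats. The sketch below is illustrative only, not shimmy's actual implementation; the search directories and the idea of keying models by file stem are assumptions.

```python
# Illustrative sketch of directory-based model discovery (not shimmy's
# actual code): walk a few conventional cache locations and index any
# GGUF or SafeTensors files found, keyed by file stem.
from pathlib import Path

# Assumed search paths; real tools each have their own conventions.
SEARCH_DIRS = [
    Path.home() / ".cache" / "huggingface",
    Path.home() / "models",
]

def discover_models() -> dict[str, Path]:
    """Map model names (file stems) to the files that back them."""
    found: dict[str, Path] = {}
    for root in SEARCH_DIRS:
        if not root.is_dir():
            continue
        for pattern in ("*.gguf", "*.safetensors"):
            for path in root.rglob(pattern):
                found.setdefault(path.stem, path)
    return found

if __name__ == "__main__":
    for name, path in discover_models().items():
        print(f"{name}: {path}")
```

Scanning the caches used by other AI tools is what lets a server like this pick up already-downloaded models without any extra configuration.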
Programming Language
Rust
Categories
This application can also be fetched from https://sourceforge.net/projects/shimmy.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free Operating Systems.