This is the Windows app named LLaMA-MoE, whose latest release can be downloaded as v1.0.0-publishsourcecode.tar.gz. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named LLaMA-MoE online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start any OnWorks online OS emulator from this website, preferably the Windows online emulator.
- 5. From the OnWorks Windows OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application and install it.
- 7. Download Wine from your Linux distribution's software repositories. Once installed, you can double-click the app to run it with Wine. You can also try PlayOnLinux, a fancy interface over Wine that helps you install popular Windows programs and games.
Wine is a way to run Windows software on Linux, but with no Windows required. Wine is an open-source Windows compatibility layer that can run Windows programs directly on any Linux desktop. Essentially, Wine is trying to re-implement enough of Windows from scratch so that it can run all those Windows applications without actually needing Windows.
SCREENSHOTS
LLaMA-MoE
DESCRIPTION
LLaMA-MoE is an open-source project that builds mixture-of-experts language models from LLaMA through expert partitioning and continual pre-training. The repository is centered on making MoE research more accessible by offering smaller and more affordable models with only about 3.0 to 3.5 billion activated parameters, which helps reduce deployment and experimentation costs. Its architecture works by splitting LLaMA feed-forward networks into sparse experts and adding gating mechanisms so that only selected experts are activated during inference and training. The project is not just a model release, but also a research framework that includes multiple expert construction methods, several gating strategies, and tooling for continual pre-training on filtered SlimPajama-based datasets. It also emphasizes training efficiency through features such as FlashAttention-v2 integration and fast streaming dataset loading, which are important for large-scale experimentation.
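To make the idea concrete, below is a minimal PyTorch sketch of how a dense LLaMA-style SwiGLU feed-forward layer can be partitioned into experts with a top-k router, so that only a few experts run for each token. This is an illustration under stated assumptions, not the project's actual code: the names ExpertFFN, SparseMoEFFN, n_experts, and top_k are made up for this page.

# Hypothetical sketch of expert partitioning plus top-k routing; not the LLaMA-MoE API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertFFN(nn.Module):
    """One expert: a SwiGLU feed-forward block with a reduced hidden size."""
    def __init__(self, d_model, d_expert):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_expert, bias=False)
        self.up_proj = nn.Linear(d_model, d_expert, bias=False)
        self.down_proj = nn.Linear(d_expert, d_model, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

class SparseMoEFFN(nn.Module):
    """A dense FFN split into n_experts experts; only top_k are active per token."""
    def __init__(self, d_model, d_ffn, n_experts=16, top_k=4):
        super().__init__()
        # The dense hidden dimension is partitioned across experts, so the total
        # parameter count stays comparable to the original feed-forward layer.
        self.experts = nn.ModuleList(
            [ExpertFFN(d_model, d_ffn // n_experts) for _ in range(n_experts)]
        )
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)      # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

Because the hidden dimension is split across experts, only a fraction of the feed-forward compute runs per token, which is the mechanism behind the reduced count of activated parameters mentioned above.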
Features
- Sparse MoE models with roughly 3.0 to 3.5B activated parameters
- Multiple expert construction methods for partitioning feed-forward networks
- Support for TopK noisy gating and Switch gating strategies (see the gating sketch after this list)
- Continual pre-training pipeline built around filtered SlimPajama data
- FlashAttention-v2 integration and streaming dataset loading
- Monitoring utilities for routing, loss, throughput, and model utilization
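For reference, the TopK noisy gating named in the feature list generally follows the noisy top-k routing recipe of Shazeer et al. (2017). The sketch below assumes that recipe; the class and weight names (NoisyTopKGate, w_gate, w_noise) are illustrative and not taken from the repository, and the exact gate used in LLaMA-MoE may differ in details such as normalization or noise scheduling.

# Hypothetical noisy top-k gate, sketched after Shazeer et al.; not the project's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTopKGate(nn.Module):
    def __init__(self, d_model, n_experts, top_k=4):
        super().__init__()
        self.w_gate = nn.Linear(d_model, n_experts, bias=False)
        self.w_noise = nn.Linear(d_model, n_experts, bias=False)  # learned noise scale per expert
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, d_model)
        logits = self.w_gate(x)
        if self.training:
            # Input-dependent Gaussian noise keeps routing exploratory during training.
            noise_std = F.softplus(self.w_noise(x))
            logits = logits + torch.randn_like(logits) * noise_std
        topk_vals, topk_idx = logits.topk(self.top_k, dim=-1)
        # Dense gate tensor with zeros for unselected experts; softmax only over the top-k.
        gates = torch.zeros_like(logits).scatter_(-1, topk_idx, topk_vals.softmax(dim=-1))
        return gates                                   # (tokens, n_experts), sparse row-wise

Switch gating can be viewed as the top_k = 1 special case, where each token is routed to exactly one expert.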
Programming Language
Python
Categories
This application can also be fetched from https://sourceforge.net/projects/llama-moe.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.