This is the Linux app named Llama 2 LLM, whose latest release can be downloaded as llama2.csourcecode.tar.gz. It can be run online on the free hosting provider OnWorks for workstations.
Download and run this app named Llama 2 LLM online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application in that file manager.
- 4. Start the OnWorks Linux online, Windows online, or MacOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it, and run it.
SCREENSHOTS
Llama 2 LLM
DESCRIPTION
llama2.c is a minimalist, end-to-end LLM toolkit that lets you train a Llama-2–style model in PyTorch and run inference with a single ~700-line C program (run.c). The project emphasizes simplicity and education: the Llama-2 architecture is hard-coded, there are no external C dependencies, and you can read the full forward pass plainly in C. Despite the tiny footprint, it is "full-stack": you can train small models (e.g., 15M/42M/110M params on TinyStories) and then sample tokens directly from the C runtime at interactive speeds on a laptop. You can also export and run Meta's Llama 2 models (currently practical up to 7B, due to fp32 inference and memory limits), plus try the chat and Code Llama variants with their proper tokenizers. A quantized int8 path (runq.c) reduces checkpoint size (e.g., 26GB→6.7GB for 7B) and speeds up inference (roughly 3× vs. fp32, per the author's notes), with modest quality tradeoffs.
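The checkpoint-size figures above follow from simple arithmetic: fp32 stores each weight in 4 bytes, int8 in 1 byte. A rough sketch (using an illustrative 7B parameter count; Llama-2 7B actually has closer to 6.7B parameters, so the real numbers differ slightly):

```python
# Rough checkpoint-size estimate: bytes per weight times parameter count.
# PARAMS is an illustrative round number, not the exact Llama-2 7B count.
PARAMS = 7_000_000_000

def checkpoint_gib(num_params: int, bytes_per_weight: int) -> float:
    """Approximate checkpoint size in GiB for a given weight width."""
    return num_params * bytes_per_weight / 2**30

fp32 = checkpoint_gib(PARAMS, 4)  # run.c keeps weights in fp32 (4 bytes each)
int8 = checkpoint_gib(PARAMS, 1)  # runq.c quantizes weights to int8 (1 byte each)

print(f"fp32: ~{fp32:.1f} GiB, int8: ~{int8:.1f} GiB")
# prints "fp32: ~26.1 GiB, int8: ~6.5 GiB"
```

This lines up with the 26GB→6.7GB reduction quoted above (the small gap comes from the rounded parameter count and from the quantization scales runq.c stores alongside the int8 weights).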
Features
- Runs Llama 2 inference in ~700 lines of C
- Supports loading Meta's official Llama 2 models (up to 7B params)
- Includes training scripts in PyTorch for small models
- Educational, with no external dependencies
- Compatible with Linux, macOS, and Windows
- Inspired by llama.cpp, but simpler and more minimal
Programming Language
C, Python
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/llama-2-llm.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.