This is the Linux app named llama2.c, whose latest release can be downloaded as llama2.csourcecode.tar.gz. It can be run online via OnWorks, a free hosting provider for workstations.
Download and run the app named llama2.c online for free with OnWorks.
Follow these instructions to run this app:
- 1. Download this application to your PC.
- 2. Enter the username of your choice in our file manager at https://www.onworks.net/myfiles.php?username=XXXXX.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online, Windows online, or MACOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager at https://www.onworks.net/myfiles.php?username=XXXXX with the username you chose.
- 6. Download the application, then install and run it.
SCREENSHOTS
llama2.c
DESCRIPTION
llama2.c is a minimalist implementation of the Llama 2 language model architecture designed to run entirely in pure C. Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c). While it can technically load Meta’s official Llama 2 models, current support is limited to fp32 precision, meaning practical use is capped at models up to around 7B parameters. The goal of llama2.c is to demonstrate how a compact and transparent implementation can perform meaningful inference even with small models, emphasizing simplicity, clarity, and accessibility. The project builds upon lessons from nanoGPT and takes inspiration from llama.cpp, focusing instead on minimalism and educational value over large-scale performance.
Features
- Implements the full Llama 2 architecture for both training and inference
- Provides a compact, 700-line C-based inference engine (run.c)
- Allows training in PyTorch and running models directly in C
- Supports fp32 model precision for smaller, educational-scale LLMs
- Offers a clean, dependency-free implementation for easy study and modification
- Inspired by llama.cpp but designed for simplicity and minimalism
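The compile-and-run flow described above can be sketched as follows. This is a minimal sketch based on typical llama2.c usage; the model filename and sampling parameters are illustrative, and a checkpoint exported from the PyTorch training script is assumed to be present.

```shell
# Build the ~700-line C inference engine; only the math library is needed
gcc -O3 -o run run.c -lm

# Run inference on a small fp32 checkpoint (filename is an example):
# -t sets the sampling temperature, -n the number of tokens to generate,
# and -i supplies an optional prompt
./run stories15M.bin -t 0.8 -n 256 -i "Once upon a time"
```

Because run.c has no external dependencies beyond libm, the same two commands work on most Linux systems with a C compiler installed.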
Programming Languages
C, Python
Categories
This application can also be fetched from https://sourceforge.net/projects/llama2-c.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.