This is the Linux app named LLaMA, whose latest release can be downloaded as llamav2sourcecode.zip. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named LLaMA online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application in that file manager.
- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it, and run it.
DESCRIPTION
“Llama” is the repository from Meta (formerly Facebook) containing the inference code for LLaMA (Large Language Model Meta AI) models. It provides utilities to load pre-trained LLaMA model weights, run inference (text generation, chat, completions), and work with tokenizers. This repo is a core piece of the Llama model infrastructure, used by researchers and developers to run LLaMA models locally or in their own infrastructure. It is meant for inference (not training from scratch) and is accompanied by model cards, responsible-use guidance, and licensing information.
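The repository's examples are launched with PyTorch's torchrun. As a minimal sketch, the helper below assembles such a command line; the script name, checkpoint directory, tokenizer path, and flag values are illustrative assumptions, so check the repo's README for the exact invocation matching your checkpoint.

```python
# Sketch: assembling a torchrun command for the repo's chat-completion
# example. Paths and flag values are illustrative assumptions; consult
# the repository README for the invocation matching your checkpoint.
import subprocess

def build_torchrun_cmd(ckpt_dir, tokenizer_path, nproc=1,
                       max_seq_len=512, max_batch_size=6):
    """Assemble the torchrun command line as a list of arguments."""
    return [
        "torchrun", "--nproc_per_node", str(nproc),
        "example_chat_completion.py",
        "--ckpt_dir", ckpt_dir,
        "--tokenizer_path", tokenizer_path,
        "--max_seq_len", str(max_seq_len),
        "--max_batch_size", str(max_batch_size),
    ]

cmd = build_torchrun_cmd("llama-2-7b-chat/", "tokenizer.model")
print(" ".join(cmd))
# To actually launch (requires downloaded weights and a suitable GPU):
# subprocess.run(cmd, check=True)
```

Larger checkpoints shard across GPUs, which is why `--nproc_per_node` is a parameter here rather than a constant.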
Features
- Provides reference code to load various LLaMA pre-trained weights (7B, 13B, 70B, etc.) and perform inference (chat or completion)
- Tokenizer utilities, download scripts, shell helpers to fetch model weights with correct licensing / permissions
- Support for multi-parameter setups (batch size, context length, number of GPUs / parallelism) to scale to larger models / machines
- License / Responsible Use guidance; a model card and documentation for how the model may be used or restricted
- Includes example scripts for chat completions and text completions to show how to call the models in code
- Compatibility with standard deep learning frameworks (PyTorch, etc.) for inference, with the required dependencies and setup scripts included
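The chat example scripts above consume conversations as lists of role-tagged messages. Below is a minimal sketch of building and sanity-checking one such dialog; the role names follow the repo's chat examples, but the `check_dialog` helper is a hypothetical convenience, not part of the library.

```python
# Sketch of the dialog structure consumed by the repo's chat example.
# The role names follow the example scripts; check_dialog is a
# hypothetical helper, not part of the library itself.

VALID_ROLES = {"system", "user", "assistant"}

def check_dialog(dialog):
    """Verify a dialog is a non-empty list of {'role', 'content'}
    messages ending with a user turn (the turn the model answers)."""
    if not dialog:
        return False
    for msg in dialog:
        if msg.get("role") not in VALID_ROLES:
            return False
        if not isinstance(msg.get("content"), str):
            return False
    return dialog[-1]["role"] == "user"

dialog = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is LLaMA?"},
]
print(check_dialog(dialog))  # True
# With weights loaded, a dialog like this would be passed to the
# generator built by the example script, roughly:
# results = generator.chat_completion([dialog], max_gen_len=64,
#                                     temperature=0.6, top_p=0.9)
```

Batching several dialogs into one call is what the `max_batch_size` setting mentioned above bounds.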
Programming Language
Python
Categories
This application can also be fetched from https://sourceforge.net/projects/llama.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free Operating Systems.