This is the Linux app named gpt-oss, whose latest release can be downloaded as gpt-oss-main.zip. It can be run online on the free hosting provider OnWorks for workstations.
Download and run this app named gpt-oss online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it, and run it.
gpt-oss
DESCRIPTION
gpt-oss is OpenAI’s open-weight family of large language models designed for powerful reasoning, agentic workflows, and versatile developer use cases. The series includes two main models: gpt-oss-120b, a 117-billion-parameter model optimized for general-purpose, high-reasoning tasks that can run on a single H100 GPU, and gpt-oss-20b, a lighter 21-billion-parameter model ideal for low-latency or specialized applications on smaller hardware. Both models use native MXFP4 quantization for efficient memory use and support OpenAI’s Harmony response format, enabling transparent full chain-of-thought reasoning and advanced tool integrations such as function calling, browsing, and Python code execution. The repository provides multiple reference implementations (including PyTorch, Triton, and Metal) for educational and experimental use, as well as example clients and tools such as a terminal chat app and a Responses API server.
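As a quick way to try the smaller model outside of the reference implementations, the sketch below queries gpt-oss-20b through the Hugging Face Transformers text-generation pipeline. This is a minimal sketch: the model ID openai/gpt-oss-20b, the generation settings, and the hardware assumptions are illustrative and are not documented on this page.

```python
# Minimal sketch: chatting with gpt-oss-20b via the Hugging Face Transformers
# pipeline. The model ID and settings below are assumptions for illustration;
# sufficient GPU/CPU memory is required to load the weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face model ID
    torch_dtype="auto",          # let the library pick an appropriate dtype
    device_map="auto",           # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Summarize MXFP4 quantization in two sentences."},
]

result = generator(messages, max_new_tokens=128)
# The pipeline returns the full chat transcript; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```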
Features
- Two model sizes: gpt-oss-120b (117B params) and gpt-oss-20b (21B params)
- Native MXFP4 quantization for MoE layers enabling efficient inference
- Supports full chain-of-thought reasoning with configurable effort levels (low, medium, high)
- Harmony response format for standardized, debuggable model output
- Built-in agentic tool capabilities: function calling, web browsing, Python code execution, structured outputs
- Multiple inference backends: PyTorch, Triton (optimized), Metal (Apple Silicon)
- Reference tools and clients: terminal chat app and a Responses API example server (see the client sketch after this list)
- Licensed under the permissive Apache 2.0 license for experimentation, customization, and commercial deployment
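The configurable reasoning effort levels and the example Responses API server mentioned above can be exercised together from a client. The sketch below is a hypothetical client built with the official OpenAI Python SDK and pointed at a locally running instance of the example server; the base URL, port, API key, and model name are placeholders rather than values documented on this page.

```python
# Hypothetical client for the repository's example Responses API server.
# Assumes the server has been started separately and is listening locally;
# the URL, port, API key, and model name below are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of the local server
    api_key="local-placeholder",          # a local server typically ignores this
)

response = client.responses.create(
    model="gpt-oss-20b",                  # assumed model identifier
    reasoning={"effort": "high"},         # one of the low/medium/high effort levels
    input="Plan a three-step web research task and explain each step.",
)

print(response.output_text)
```

Whether the example server honors the reasoning parameter depends on the repository's implementation; the effort level can also be expressed in the Harmony-formatted system prompt that the models expect.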
Programming Language
Python, C++, C
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/gpt-oss/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.