This is the Windows app Following Instructions with Feedback, whose latest release can be downloaded as following-instructions-human-feedbacksourcecode.tar.gz. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named Following Instructions with Feedback online for free with OnWorks.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. In our file manager at https://www.onworks.net/myfiles.php?username=XXXXX, enter the username of your choice.
- 3. Upload the application to that file manager.
- 4. Start any OnWorks online OS emulator from this website, preferably the Windows online emulator.
- 5. From the OnWorks Windows OS you just started, go to our file manager at https://www.onworks.net/myfiles.php?username=XXXXX using the username you chose.
- 6. Download the application and install it.
- 7. Download Wine from your Linux distribution's software repositories. Once it is installed, you can double-click the app to run it with Wine. You can also try PlayOnLinux, a friendly interface over Wine that helps you install popular Windows programs and games.
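Step 7 can be sketched as a small launcher script. This is a dry-run version that only prints the Wine command it would run; the installer filename is an assumption, since the archive's exact contents are not documented here.

```shell
#!/bin/sh
# Hypothetical launcher sketch for step 7 (dry run).
# The .exe filename is an assumption; pass the real name as the first argument.
APP="${1:-Following-Instructions-with-Feedback.exe}"
# Print the command that would start the app under Wine; drop the echo to run it.
echo "wine $APP"
```

To actually launch the app, remove the `echo` so the script executes `wine "$APP"` directly.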
Wine is a way to run Windows software on Linux with no copy of Windows required. It is an open-source Windows compatibility layer that can run Windows programs directly on any Linux desktop. Essentially, Wine re-implements enough of Windows from scratch to run Windows applications without actually needing Windows.
DESCRIPTION
The following-instructions-human-feedback repository contains the code and supplementary materials underpinning OpenAI’s work in training language models (InstructGPT models) that better follow user instructions through human feedback. The repo hosts the model card, sample automatic evaluation outputs, and labeling guidelines used in the process. It is explicitly tied to the “Training language models to follow instructions with human feedback” paper, and serves as a reference for how OpenAI collects annotation guidelines, runs preference comparisons, and evaluates model behaviors. The repository is not a full implementation of the entire RLHF pipeline, but rather an archival hub supporting the published research—providing transparency around evaluation and human labeling standards. It includes directories such as automatic-eval-samples (samples of model outputs on benchmark tasks) and a model-card.md that describes the InstructGPT models’ intended behavior, limitations, and biases.
Features
- Archive of evaluation sample outputs from InstructGPT experiments
- model-card.md describing model usage, limitations, and safety considerations
- Labeling guidelines / annotation instructions used for human evaluators
- Structured “automatic-eval-samples” folder showing baseline vs fine-tuned outputs
- Transparency around how OpenAI measured model preference ranking and alignment
- Links and references to the original research paper and documentation
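The preference comparisons mentioned above work by having human labelers rank pairs of model outputs; a reward model is then trained so the preferred output scores higher. The repo itself contains no training code, but the standard pairwise loss from the RLHF literature can be sketched as follows (function and variable names are illustrative):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).

    r_chosen is the reward score of the human-preferred response,
    r_rejected the score of the dispreferred one. The loss shrinks as
    the preferred response's score pulls ahead of the rejected one.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A reward model trained on this loss learns to rank outputs the way
# labelers did, e.g. preference_loss(2.0, 0.0) is much smaller than
# preference_loss(0.0, 2.0).
```

Minimizing this loss over many labeled comparisons is what aligns the reward model with human preference rankings; the reward model then guides policy fine-tuning.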
Categories
This application can also be fetched from https://sourceforge.net/projects/following-inst-feedback.mirror/. It is hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.