
Hallucination Leaderboard download for Linux

Download the Hallucination Leaderboard Linux app for free and run it online in Ubuntu, Fedora, or Debian

This is the Linux app named Hallucination Leaderboard, whose latest release can be downloaded as hallucination-leaderboardsourcecode.zip. It can be run online on OnWorks, a free hosting provider for workstations.

Download and run this app online with OnWorks for free.

Follow these steps to run this app:

- 1. Download this application to your PC.

- 2. Sign in to our file manager at https://www.onworks.net/myfiles.php?username=XXXXX with a username of your choice.

- 3. Upload the application to that file manager.

- 4. Start the OnWorks Linux, Windows, or macOS online emulator from this website.

- 5. From the OnWorks Linux OS you have just started, go to our file manager at https://www.onworks.net/myfiles.php?username=XXXXX with the username you chose.

- 6. Download the application, then install and run it.

SCREENSHOTS


Hallucination Leaderboard


DESCRIPTION

Hallucination Leaderboard is an open research project that tracks and compares the tendency of large language models to produce hallucinated or inaccurate information when generating summaries. The project provides a standardized benchmark that evaluates different models using a dedicated hallucination detection system known as the Hallucination Evaluation Model. Each model is tested on document summarization tasks to measure how often generated responses introduce information that is not supported by the original source material. The results are published as a leaderboard that allows researchers and developers to compare model reliability and factual consistency. By focusing on hallucination rates rather than traditional metrics such as accuracy or fluency, the benchmark highlights an important aspect of AI system safety and trustworthiness. The leaderboard is regularly updated as new models are released and evaluation methods evolve.
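The scoring idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the project's actual code: it assumes the hallucination-detection model returns one factual-consistency score per generated summary (1.0 = fully supported by the source document), and the 0.5 flagging threshold is likewise an assumption.

```python
def hallucination_rate(scores, threshold=0.5):
    """Fraction of summaries whose consistency score falls below the threshold.

    `scores` are hypothetical outputs of a hallucination-detection model:
    values near 1.0 mean the summary is supported by the source document,
    values near 0.0 mean it introduces unsupported information.
    """
    flagged = [s for s in scores if s < threshold]
    return len(flagged) / len(scores)

# Made-up per-summary scores for one model on the benchmark
scores = [0.92, 0.31, 0.88, 0.47, 0.95]
print(hallucination_rate(scores))  # 2 of 5 summaries flagged -> 0.4
```

A lower rate indicates a model that more reliably sticks to the source material when summarizing.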



Features

  • Benchmark that measures hallucination frequency in language model outputs
  • Evaluation framework based on document summarization tasks
  • Leaderboard comparing hallucination rates across multiple LLMs
  • Automated scoring using a dedicated hallucination evaluation model
  • Public dataset and evaluation pipeline for reproducible testing
  • Regular updates tracking performance of newly released models
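Once per-model hallucination rates are computed, the leaderboard itself reduces to a sort. The model names and rates below are invented purely for illustration:

```python
# Hypothetical (model name -> hallucination rate) results; lower is better
results = {"model-a": 0.083, "model-b": 0.031, "model-c": 0.125}

# Rank models from lowest to highest hallucination rate
leaderboard = sorted(results.items(), key=lambda kv: kv[1])
for rank, (model, rate) in enumerate(leaderboard, start=1):
    print(f"{rank}. {model}: {rate:.1%}")
```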


Programming Language

Python


Categories

Large Language Models (LLM)

This application can also be fetched from https://sourceforge.net/projects/hallucination-leaderb.mirror/. It is hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.





