This is the Linux app named MoCo v3, whose latest release can be downloaded as moco-v3sourcecode.tar.gz. It can be run online on OnWorks, the free hosting provider for workstations.
Download and run this app named MoCo v3 online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online or Windows online emulator or MACOS online emulator from this website.
- 5. From the OnWorks Linux operating system you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it and run it.
SCREENSHOTS
MoCo v3
PRODUCT DESCRIPTION
MoCo v3 is a PyTorch reimplementation of Momentum Contrast v3 (MoCo v3), Facebook Research’s state-of-the-art self-supervised learning framework for visual representation learning with ResNet and Vision Transformer (ViT) backbones. The original implementation was written in TensorFlow for TPUs; this version faithfully reproduces the paper’s results on GPUs while offering an accessible and scalable PyTorch interface. MoCo v3 introduces improvements for training self-supervised ViTs by combining contrastive learning with transformer-based architectures, achieving strong linear-probe and end-to-end fine-tuning performance on ImageNet benchmarks. The repository supports multi-node distributed training, automatic mixed precision, and linear scaling of learning rates for large-batch regimes. It also includes scripts for self-supervised pretraining, linear classification, and fine-tuning within the DeiT framework.
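To make the contrastive mechanism concrete, below is a minimal PyTorch sketch of the training step described in the MoCo v3 paper's pseudocode: two augmented views of each batch, a momentum ("key") encoder updated by exponential moving average, and a symmetrized InfoNCE loss over in-batch negatives (MoCo v3 drops the memory queue of earlier versions). The names encoder, momentum_encoder, and predictor are hypothetical placeholders for illustration, not the repository's actual modules.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(q, k, tau=0.2):
        # InfoNCE over in-batch negatives: for each query, the key at the
        # same batch index is the positive; all other keys are negatives.
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        logits = q @ k.t() / tau                        # [N, N] similarities
        labels = torch.arange(q.size(0), device=q.device)
        # The paper's pseudocode scales the cross-entropy by 2 * tau.
        return 2 * tau * F.cross_entropy(logits, labels)

    @torch.no_grad()
    def momentum_update(encoder, momentum_encoder, m=0.99):
        # Key encoder tracks the query encoder by exponential moving average.
        for p, p_m in zip(encoder.parameters(), momentum_encoder.parameters()):
            p_m.data.mul_(m).add_(p.data, alpha=1 - m)

    def train_step(x1, x2, encoder, momentum_encoder, predictor, optimizer):
        # x1, x2 are two augmented views of the same batch of images.
        q1, q2 = predictor(encoder(x1)), predictor(encoder(x2))
        with torch.no_grad():
            k1, k2 = momentum_encoder(x1), momentum_encoder(x2)
        loss = contrastive_loss(q1, k2) + contrastive_loss(q2, k1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        momentum_update(encoder, momentum_encoder)
        return loss.item()

The "linear scaling of learning rates" mentioned above is the usual large-batch rule lr = base_lr * total_batch_size / 256, applied before any warmup schedule.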
Features
- Compatible with ImageNet and standard vision benchmarks for transfer learning
- Configurable via command-line flags with scalable hyperparameters and batch settings (see the example command after this list)
- Integrated scripts for self-supervised pretraining, linear evaluation, and DeiT fine-tuning
- Achieves strong ImageNet results (e.g., 74.6% linear top-1 on ResNet-50, 83.2% fine-tuned ViT-B)
- Supports large-scale multi-GPU distributed training with mixed precision
- PyTorch implementation of self-supervised MoCo v3 for ResNet and ViT models
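As an illustration of the flag-based configuration, self-supervised pretraining is launched along the following lines. The flags shown follow the repository's documented ViT-Small recipe at the time of writing and may differ between releases; the ImageNet path is a placeholder:

    python main_moco.py \
      -a vit_small -b 1024 \
      --optimizer=adamw --lr=1.5e-4 --weight-decay=.1 \
      --epochs=300 --warmup-epochs=40 \
      --stop-grad-conv1 --moco-m-cos --moco-t=.2 \
      --dist-url 'tcp://localhost:10001' \
      --multiprocessing-distributed --world-size 1 --rank 0 \
      /path/to/imagenet

Consult the README bundled with the downloaded release for the exact flags supported by your version.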
Programming Language
Python
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/moco-v3.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.