This is the Linux app named MoCo v3, whose latest release can be downloaded as moco-v3sourcecode.tar.gz. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named MoCo v3 online with OnWorks for free.
Follow these instructions to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online or Windows online emulator or the MACOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it, and run it.
SCREENSHOTS
MoCo v3
DESCRIPTION
MoCo v3 is a PyTorch reimplementation of Momentum Contrast v3 (MoCo v3), Facebook Research’s state-of-the-art self-supervised learning framework for visual representation learning using ResNet and Vision Transformer (ViT) backbones. The original implementation was developed in TensorFlow for TPUs; this version faithfully reproduces the paper’s results on GPUs while offering an accessible and scalable PyTorch interface. MoCo v3 introduces improvements for training self-supervised ViTs by combining contrastive learning with transformer-based architectures, achieving strong linear and end-to-end fine-tuning performance on ImageNet benchmarks. The repository supports multi-node distributed training, automatic mixed precision, and linear scaling of learning rates for large-batch regimes. It also includes scripts for self-supervised pretraining, linear classification, and fine-tuning within the DeiT framework.
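To make the method concrete, below is a minimal sketch of the two ideas at MoCo v3's core: an exponential-moving-average (momentum) encoder and a symmetrized InfoNCE contrastive loss between two augmented views. It is written from the paper's description, not from the repository's actual API; names such as `base`, `momentum`, and `contrastive_loss` are illustrative placeholders, and the toy linear encoders stand in for the real ResNet/ViT backbones with projection heads.

```python
# Minimal MoCo v3-style sketch: momentum (EMA) encoder + symmetrized InfoNCE.
# All names here are illustrative, not the repository's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(q, k, temperature=0.2):
    """InfoNCE loss: each query should match its own key within the batch."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                   # (N, N) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(base_encoder, momentum_encoder, m=0.99):
    """EMA update: momentum weights drift slowly toward the base encoder's."""
    for p_b, p_m in zip(base_encoder.parameters(), momentum_encoder.parameters()):
        p_m.data.mul_(m).add_(p_b.data, alpha=1.0 - m)

# Toy encoders standing in for ResNet/ViT backbones plus projection heads.
base = nn.Linear(32, 16)
momentum = nn.Linear(32, 16)
momentum.load_state_dict(base.state_dict())

x1, x2 = torch.randn(8, 32), torch.randn(8, 32)        # two augmented views
q1, q2 = base(x1), base(x2)                            # queries from base encoder
with torch.no_grad():
    k1, k2 = momentum(x1), momentum(x2)                # keys from momentum encoder

# Symmetrized loss: each view serves once as query and once as key.
loss = contrastive_loss(q1, k2) + contrastive_loss(q2, k1)
loss.backward()
momentum_update(base, momentum)
```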
Features
- Compatible with ImageNet and standard vision benchmarks for transfer learning
- Configurable via command-line flags with scalable hyperparameters and batch settings
- Integrated scripts for self-supervised pretraining, linear evaluation, and DeiT fine-tuning
- Achieves strong ImageNet results (e.g., 74.6% linear top-1 on ResNet-50, 83.2% fine-tuned ViT-B)
- Supports large-scale multi-GPU distributed training with mixed precision (see the sketch after this list)
- PyTorch implementation of self-supervised MoCo v3 for ResNet and ViT models
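The sketch below illustrates two of the large-batch mechanisms the description mentions: the linear learning-rate scaling rule (lr = base_lr × batch_size / 256) and automatic mixed precision via torch.cuda.amp. The model, data, and hyperparameter values are placeholders under my assumptions, not the repository's defaults or training loop.

```python
# Sketch of linear LR scaling plus automatic mixed precision (AMP).
# Placeholder model and values; not the repository's actual defaults.
import torch
import torch.nn as nn

base_lr, batch_size = 1.5e-4, 1024
lr = base_lr * batch_size / 256                  # linear scaling rule

model = nn.Linear(32, 16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
scaler = torch.cuda.amp.GradScaler()             # scales loss to avoid fp16 underflow

for step in range(10):
    x = torch.randn(batch_size, 32, device="cuda")
    with torch.cuda.amp.autocast():              # forward pass runs in mixed precision
        loss = model(x).pow(2).mean()            # dummy objective
    optimizer.zero_grad()
    scaler.scale(loss).backward()                # backward on the scaled loss
    scaler.step(optimizer)                       # unscales gradients, then steps
    scaler.update()
```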
Programming language
Python
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/moco-v3.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free Operating Systems.