This is the Linux app named MoCo v3, whose latest release can be downloaded as moco-v3sourcecode.tar.gz. It can be downloaded and run online for free in OnWorks, a hosting provider for workstations.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online, Windows online, or MacOS online emulator from this website.
- 5. In the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it, and run it.
SCREENSHOTS
MoCo v3
DESCRIPTION
MoCo v3 is a PyTorch reimplementation of Momentum Contrast v3 (MoCo v3), Facebook Research’s state-of-the-art self-supervised learning framework for visual representation learning with ResNet and Vision Transformer (ViT) backbones. The original implementation was written in TensorFlow and ran on TPUs; this version faithfully reproduces the paper’s results on GPUs while offering an accessible and scalable PyTorch interface. MoCo v3 improves the training of self-supervised ViTs by combining contrastive learning with transformer-based architectures, achieving strong linear-probing and end-to-end fine-tuning performance on ImageNet benchmarks. The repository supports multi-node distributed training, automatic mixed precision, and linear scaling of learning rates for large-batch regimes. It also includes scripts for self-supervised pretraining, linear classification, and fine-tuning within the DeiT framework.
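For reference, here is a minimal PyTorch sketch of the symmetrized contrastive (InfoNCE) objective that MoCo v3 optimizes, where the keys come from a momentum (EMA) copy of the query encoder. The function and tensor names are illustrative, and the 2τ loss scaling follows the paper's pseudocode; this is a sketch under those assumptions, not the repository's exact code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, k, temperature=0.2):
    """InfoNCE between query features q and key features k, both [N, D].

    After L2 normalization, matching pairs sit on the diagonal of the
    similarity matrix; all other in-batch samples serve as negatives.
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                   # [N, N] similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
    return F.cross_entropy(logits, labels) * (2 * temperature)

# Symmetrized over the two augmented views (dummy features for illustration):
q1, q2 = torch.randn(8, 256), torch.randn(8, 256)  # query-encoder outputs
k1, k2 = torch.randn(8, 256), torch.randn(8, 256)  # momentum-encoder outputs
loss = contrastive_loss(q1, k2) + contrastive_loss(q2, k1)
```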
Features
- Compatible with ImageNet and standard vision benchmarks for transfer learning
- Configurable via command-line flags with scalable hyperparameters and batch settings
- Integrated scripts for self-supervised pretraining, linear evaluation, and DeiT fine-tuning
- Achieves strong ImageNet results (e.g., 74.6% linear top-1 on ResNet-50, 83.2% fine-tuned ViT-B)
- Supports large-scale multi-GPU distributed training with mixed precision (see the sketch after this list)
- PyTorch implementation of self-supervised MoCo v3 for ResNet and ViT models
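As a rough illustration of the large-batch features above, the sketch below combines the linear learning-rate scaling rule (lr = base_lr × batch_size / 256) with PyTorch automatic mixed precision. The tiny encoder, dummy batch, and placeholder objective are stand-ins for illustration only, not the project's actual training loop, and base_lr = 1.5e-4 is just an example value.

```python
import torch

# Linear LR scaling for large batches: lr = base_lr * batch_size / 256.
# base_lr = 1.5e-4 is an example value; pick it per the recipe you follow.
base_lr, batch_size = 1.5e-4, 4096
lr = base_lr * batch_size / 256                       # -> 2.4e-3

encoder = torch.nn.Linear(224, 128)                   # stand-in for the backbone
optimizer = torch.optim.AdamW(encoder.parameters(), lr=lr, weight_decay=0.1)

use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # automatic mixed precision

x = torch.randn(8, 224)                               # dummy mini-batch
if use_cuda:
    encoder, x = encoder.cuda(), x.cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = encoder(x).pow(2).mean()                   # placeholder objective
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```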
Programming Language
Python
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/moco-v3.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.