This is the Linux app named MAE (Masked Autoencoders), whose latest release can be downloaded as maesourcecode.tar.gz. It can be run online via OnWorks, a free hosting provider for workstations.
Download and run this app named MAE (Masked Autoencoders) online with OnWorks for free.
Follow these instructions in order to run this app:
- 1. Download this application to your PC.
- 2. Enter in our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application to that file manager.
- 4. Start the OnWorks Linux online or Windows online emulator or MACOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it and run it.
SCREENSHOTS:
MAE (Masked Autoencoders)
DESCRIPTION:
MAE (Masked Autoencoders) is a self-supervised learning framework for visual representation learning using masked image modeling. It trains a Vision Transformer (ViT) by randomly masking a high percentage of image patches (typically 75%) and reconstructing the missing content from the remaining visible patches. This forces the model to learn semantic structure and global context without supervision. The encoder processes only the visible patches, while a lightweight decoder reconstructs the full image—making pretraining computationally efficient. After pretraining, the encoder serves as a powerful backbone for downstream tasks like image classification, segmentation, and detection, achieving top performance with minimal fine-tuning. The repository provides pretrained models, fine-tuning scripts, evaluation protocols, and visualization tools for reconstruction quality and learned features.
Features
- Masked image modeling with random high-ratio patch masking
- Efficient pretraining via encoder-decoder separation (encoder sees only visible patches)
- Scalable Vision Transformer backbone for downstream vision tasks
- Pretrained models and fine-tuning scripts for classification, detection, and segmentation
- Visualization tools for reconstruction and representation analysis
- Self-supervised training paradigm requiring no labeled data
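The high-ratio random masking described above can be illustrated with a minimal NumPy sketch. Function names and tensor shapes here are illustrative assumptions for a ViT-Base-style 14x14 patch grid, not the repository's actual API:

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Randomly mask a high ratio of patches, as in MAE pretraining.

    patches: (num_patches, dim) array of flattened image patches.
    Returns the visible patches, their indices, and a boolean mask
    (True = masked, False = visible).
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    # Shuffle patch indices and keep the first n_keep as "visible"
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False
    return patches[keep_idx], keep_idx, mask

# Example: 196 patches (14x14 grid), 75% masked
patches = np.random.rand(196, 768)
visible, keep_idx, mask = random_masking(patches)
# Only the visible 25% of patches would be fed to the encoder;
# the decoder later reconstructs the full set.
```

Because the encoder only ever sees the visible 25% of patches, pretraining cost scales with the unmasked fraction rather than the full image.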
Programming Language
Python
Categories
This is an application that can also be fetched from https://sourceforge.net/projects/mae-masked-autoencoders.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free Operating Systems.