This is the Linux app named MAE (Masked Autoencoders), whose latest release can be downloaded as maesourcecode.tar.gz. It can be run online on OnWorks, a free hosting provider for workstations.
Download and run this app named MAE (Masked Autoencoders) online with OnWorks for free.
Follow these instructions to run this app:
- 1. Download this application to your PC.
- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 3. Upload this application in that file manager.
- 4. Start the OnWorks Linux online or Windows online emulator or MacOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.
- 6. Download the application, install it and run it.
SCREENSHOTS
MAE (Masked Autoencoders)
PRODUCT DESCRIPTION
MAE (Masked Autoencoders) is a self-supervised learning framework for visual representation learning using masked image modeling. It trains a Vision Transformer (ViT) by randomly masking a high percentage of image patches (typically 75%) and reconstructing the missing content from the remaining visible patches. This forces the model to learn semantic structure and global context without supervision. The encoder processes only the visible patches, while a lightweight decoder reconstructs the full image—making pretraining computationally efficient. After pretraining, the encoder serves as a powerful backbone for downstream tasks like image classification, segmentation, and detection, achieving top performance with minimal fine-tuning. The repository provides pretrained models, fine-tuning scripts, evaluation protocols, and visualization tools for reconstruction quality and learned features.
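The high-ratio random masking described above can be sketched in a few lines of PyTorch. This is an illustrative approximation rather than the repository's code; the tensor shapes, the 75% ratio, and all names below are chosen for the example:

import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    # patches: (batch, num_patches, embed_dim) patch-embedded tokens.
    batch, num_patches, dim = patches.shape
    num_keep = int(num_patches * (1 - mask_ratio))   # e.g. 49 of 196 patches for a 224 px image with 16x16 patches

    noise = torch.rand(batch, num_patches)           # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)        # patches with the lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation, needed to restore patch order

    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    mask = torch.ones(batch, num_patches)            # 1 = masked, 0 = visible
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)        # back to the original patch order
    return visible, mask, ids_restore

tokens = torch.randn(2, 196, 768)                    # dummy tokens standing in for a patch-embedded image
visible, mask, ids_restore = random_masking(tokens)
print(visible.shape, int(mask.sum(dim=1)[0]))        # torch.Size([2, 49, 768]) 147

Only the returned visible tokens are fed to the encoder; the decoder later uses ids_restore and mask tokens to reconstruct the full patch sequence.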
Features
- Masked image modeling with random high-ratio patch masking
- Efficient pretraining via encoder-decoder separation (encoder sees only visible patches)
- Scalable Vision Transformer backbone for downstream vision tasks
- Pretrained models and fine-tuning scripts for classification, detection, and segmentation (a fine-tuning sketch follows this list)
- Visualization tools for reconstruction and representation analysis
- Self-supervised training paradigm requiring no labeled data
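To illustrate the fine-tuning use case named in the list above, a pretrained MAE encoder can be wrapped with a small classification head. This is a minimal sketch assuming a generic ViT-style encoder that returns per-patch tokens; FineTuneClassifier, DummyEncoder, and the shapes are placeholders, not the repository's API:

import torch
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    # Wraps a pretrained encoder (assumed to return (batch, num_tokens, embed_dim))
    # with a linear head for downstream image classification.
    def __init__(self, encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                 # pretrained, label-free MAE backbone
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        tokens = self.encoder(images)          # per-patch features from the backbone
        pooled = tokens.mean(dim=1)            # global average pooling over tokens
        return self.head(self.norm(pooled))

class DummyEncoder(nn.Module):
    # Stand-in for a smoke test; substitute the real pretrained ViT encoder here.
    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return torch.randn(images.shape[0], 196, 768)

model = FineTuneClassifier(DummyEncoder(), embed_dim=768, num_classes=1000)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                            # torch.Size([2, 1000])

Because the encoder was pretrained without labels, only the head (and optionally the encoder) needs to be trained on the labeled downstream dataset.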
Programming Language
Python
Categories
This application can also be fetched from https://sourceforge.net/projects/mae-masked-autoencoders.mirror/. It is hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.