XNNPACK download for Linux

Download the XNNPACK Linux app for free to run it online in Ubuntu online, Fedora online or Debian online

This is the Linux app named XNNPACK, whose latest release can be downloaded as XNNPACKsourcecode.tar.gz. It can be run online in OnWorks, a free hosting provider for workstations.

Download and run this app named XNNPACK online with OnWorks for free.

Follow these instructions in order to run this app:

- 1. Download this application to your PC.

- 2. Enter our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.

- 3. Upload this application in that file manager.

- 4. Start the OnWorks Linux online or Windows online emulator or MACOS online emulator from this website.

- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.

- 6. Download the application, install it and run it.

XNNPACK


Description

XNNPACK is a highly optimized, low-level neural network inference library developed by Google for accelerating deep learning workloads across a variety of hardware architectures, including ARM, x86, WebAssembly, and RISC-V. Rather than serving as a standalone ML framework, XNNPACK provides high-performance computational primitives—such as convolutions, pooling, activation functions, and arithmetic operations—that are integrated into higher-level frameworks like TensorFlow Lite, PyTorch Mobile, ONNX Runtime, TensorFlow.js, and MediaPipe. The library is written in C/C++ and designed for maximum portability, efficiency, and performance, leveraging platform-specific instruction sets (e.g., NEON, AVX, SIMD) for optimized execution. It supports NHWC tensor layouts and allows flexible striding along the channel dimension to efficiently handle channel-split and concatenation operations without additional cost.
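Because XNNPACK is consumed through higher-level frameworks rather than called directly by most applications, the usual way to exercise it is to enable it as a backend inside one of those frameworks. The C++ sketch below shows one such path, enabling the XNNPACK delegate in TensorFlow Lite; the model file name, thread count, and error handling are illustrative assumptions, not part of XNNPACK itself.

```cpp
// Minimal sketch: running a TensorFlow Lite model with the XNNPACK delegate.
// Assumes a TensorFlow Lite build that ships the XNNPACK delegate headers;
// "model.tflite" and the thread count are placeholder values.
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"

int main() {
  // Load a flatbuffer model from disk (placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

  // Build an interpreter with the built-in operator resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Create the XNNPACK delegate, which takes over the supported operators.
  TfLiteXNNPackDelegateOptions opts = TfLiteXNNPackDelegateOptionsDefault();
  opts.num_threads = 4;  // assumption: use 4 worker threads
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&opts);

  if (interpreter->ModifyGraphWithDelegate(xnnpack) != kTfLiteOk ||
      interpreter->AllocateTensors() != kTfLiteOk) {
    std::fprintf(stderr, "failed to prepare interpreter\n");
    return 1;
  }

  // ... fill interpreter->typed_input_tensor<float>(0) here ...
  interpreter->Invoke();  // executes the delegated graph through XNNPACK

  // The delegate must outlive the interpreter that uses it.
  interpreter.reset();
  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}
```

Operators that the delegate supports run through XNNPACK's vectorized, multi-threaded kernels, while unsupported operators fall back to the default TensorFlow Lite kernels.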



Features

  • Cross-platform neural network inference backend optimized for ARM, x86, WebAssembly, and RISC-V
  • High-performance implementations for 2D convolutions, pooling, activation, and quantization operators
  • Supports both FP32 and INT8 inference with per-channel quantization
  • Efficient NHWC tensor layout with flexible channel stride (see the indexing sketch after this list)
  • Integrates seamlessly with frameworks like TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, and MediaPipe
  • Multi-threaded and vectorized operator implementations
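The NHWC layout with a flexible channel stride is what allows channel-split and concatenation views to be taken without copying: an operator simply reads a channel range of a wider buffer through a pointer offset while keeping the original channel stride. The framework-agnostic C++ sketch below illustrates that indexing scheme; the helper and all names are hypothetical and do not reproduce XNNPACK's internal code.

```cpp
// Illustration of NHWC indexing with a channel stride wider than the
// logical channel count, which is how a channel-split view can be taken
// without copying data. Hypothetical helper, not XNNPACK internals.
#include <cstddef>
#include <vector>

// Flat offset of element (n, h, w, c) in an NHWC buffer laid out with
// `channel_stride` floats per pixel (>= the view's logical channel count).
inline std::size_t nhwc_offset(std::size_t n, std::size_t h, std::size_t w,
                               std::size_t c, std::size_t H, std::size_t W,
                               std::size_t channel_stride) {
  return ((n * H + h) * W + w) * channel_stride + c;
}

int main() {
  const std::size_t N = 1, H = 4, W = 4, C = 8;
  std::vector<float> tensor(N * H * W * C, 0.0f);

  // A "split" view of channels [5, 8): same buffer, pointer offset by 5,
  // logical channel count 3, but channel stride still 8.
  float* split_view = tensor.data() + 5;
  const std::size_t split_channels = 3;

  // Writing through the view touches only channels 5..7 of each pixel.
  for (std::size_t h = 0; h < H; ++h)
    for (std::size_t w = 0; w < W; ++w)
      for (std::size_t c = 0; c < split_channels; ++c)
        split_view[nhwc_offset(0, h, w, c, H, W, C)] = 1.0f;

  return 0;
}
```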


Programming Language

Assembly, C, C++, Unix Shell


Categories

Neural Network Libraries

This is an application that can also be fetched from https://sourceforge.net/projects/xnnpack.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.


