MIL
COMPRESSA:
DL Models Compression Platform

A production-ready solution for infrastructure optimization in weeks instead of months

Wide range of supported model architectures

ML infrastructure cost reduction by decreasing the computational complexity of DL models
On-device ML deployment by compressing DL models to fit device constraints
BUSINESS CASES
1. Lower RAM, energy, and CPU/GPU consumption during model inference
2. Transferring high-quality, complex models onto devices
3. Model inference on low-bit CPUs
4. Speeding up computations
SOLVING METHODS
Post-training and Low-Bit Quantisation (AdaRound, GDRQ, LSQ, our own modification of LSQ, APoT, Symmetric, Asymmetric, etc.)
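To illustrate the simplest of the listed techniques, here is a minimal sketch of symmetric per-tensor quantization (not the platform's actual implementation; function names and the NumPy-based setup are illustrative assumptions):

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Symmetric uniform quantization: map floats to signed integers
    using a single scale derived from the tensor's max magnitude.
    (Illustrative sketch, not the platform's implementation.)"""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax        # one scale per tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and scale."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_symmetric(weights, num_bits=8)
recon = dequantize(q, scale)
max_err = np.max(np.abs(weights - recon))   # bounded by scale / 2
```

Storing `int8` values plus one float scale per tensor is what yields the 4x memory reduction over `float32` weights; the listed methods (AdaRound, LSQ, etc.) refine how the scale and rounding are chosen to minimize accuracy loss.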
Pruning and Knowledge Distillation (HRank, CUP, Cluster-based, Magnitude)
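Magnitude pruning, the simplest method in the list above, can be sketched as follows (an illustrative NumPy example under the usual assumption that small-magnitude weights contribute least; not the platform's code):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value. Returns the pruned tensor and the keep-mask.
    (Illustrative sketch, not the platform's implementation.)"""
    k = int(w.size * sparsity)              # number of weights to drop
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # k-th smallest |w| acts as the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

w = np.random.randn(8, 8).astype(np.float32)
pruned, mask = magnitude_prune(w, sparsity=0.75)
```

Structured variants such as HRank or CUP remove whole filters or channels instead of individual weights, which translates directly into smaller, faster dense layers rather than sparse tensors.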
Device Placement
DL Optimization Solvers
EXPERIENCE & EXAMPLES
Order a project
To start a project, we need to talk. Just fill in the fields below and we will contact you.