MIL
Our FAI-OPT product makes the development pipeline for deep-learning model compression via quantization and pruning several times faster.
FAI-OPT is a library of components that saves you time at the following stages:
Model compression during the training stage (see the sketch after this list)
  • Quantization
  • Pruning
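FAI-OPT's own API is not shown in this document, so as a minimal sketch of the underlying technique, here is iterative magnitude pruning during training in plain PyTorch; the model, data, epoch count, and 10% per-epoch sparsity level are placeholder assumptions, not FAI-OPT code.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder model and synthetic data; in practice, your own network and loader.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(256, 128)
y = torch.randint(0, 10, (256,))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):  # assumed epoch count
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    # Prune 10% of the smallest-magnitude weights each epoch, so the
    # network can recover accuracy between pruning steps.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.1)

# Make the pruning permanent by folding the masks into the weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")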
Model compression in the post-training stage (see the sketch after this list)
  • Quantization
  • Pruning
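For illustration of the post-training case, here is a minimal sketch of dynamic 8-bit quantization using PyTorch's built-in torch.quantization.quantize_dynamic; the model and input shape are placeholder assumptions, and FAI-OPT's actual interface may differ.

import torch
import torch.nn as nn

# Placeholder trained model; in practice, your own network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Post-training dynamic quantization: weights of the listed layer types
# are stored as 8-bit integers; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement at inference time.
example = torch.randn(1, 128)
print(quantized(example).shape)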
Deployment
  • Model deployment code (see the sketch below)
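As one common deployment path (an assumption, not necessarily the one FAI-OPT generates), a compressed model can be exported as a self-contained TorchScript artifact that C++ and mobile runtimes can load; the model and file name below are placeholders.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Trace the model with an example input and save a standalone artifact.
scripted = torch.jit.trace(model, torch.randn(1, 128))
scripted.save("model_compressed.pt")

# Later, in the deployment environment:
loaded = torch.jit.load("model_compressed.pt")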
If our library doesn't fully cover your needs, our team will customize a solution for you, developing a complete algorithm from scratch if needed.
Possible applications
Examples of improvements we help deliver:
  • Lower power consumption
  • Lower RAM usage
  • Smaller memory footprint
  • Higher inference speed
We have already helped R&D departments at Samsung and Huawei complete their projects.
Or contact our CEO directly: alex.goncharov@phystech.edu