Pruning
Pruning removes less important or redundant connections and neurons from a model, resulting in a sparser, more efficient network.
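A minimal magnitude-pruning sketch in NumPy, using toy weights (not tied to any specific Pruna algorithm): the smallest-magnitude half of the connections is zeroed out, leaving a sparser matrix.

```python
import numpy as np

# Hypothetical dense-layer weights (illustrative, not from a real model).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Magnitude pruning: zero out the 50% of weights with the smallest
# absolute value, producing a sparser, cheaper matrix.
threshold = np.quantile(np.abs(weights), 0.5)
mask = np.abs(weights) >= threshold
pruned = weights * mask

print(f"sparsity: {1 - mask.mean():.2f}")  # prints "sparsity: 0.50"
```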
Quantization
Quantization reduces the precision of the model’s weights and activations, making them much smaller in terms of memory required.
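A sketch of symmetric int8 quantization on toy values (one of many possible quantization schemes): float32 weights are mapped to 8-bit integers plus a scale, a 4x reduction in storage, with a small reconstruction error.

```python
import numpy as np

# Toy float32 weights (illustrative, not from a real model).
weights = np.array([0.1, -0.5, 0.25, 1.0, -1.0], dtype=np.float32)

# Symmetric int8 quantization: map [-max, max] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to approximate the originals; int8 storage is 4x
# smaller than float32, at the cost of a small rounding error.
dq = q.astype(np.float32) * scale
print(np.max(np.abs(weights - dq)))  # error bounded by the scale
```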
Batching
Batching groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing overall processing time.
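The idea can be sketched in NumPy with hypothetical shapes: processing 32 inputs as one matrix multiplication gives the same result as 32 separate matrix-vector products, but in a single operation the hardware can parallelize.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))          # shared layer weights
inputs = rng.normal(size=(32, 8))    # 32 individual inputs

# Unbatched: one matrix-vector product per input.
one_by_one = np.stack([W @ x for x in inputs])

# Batched: all 32 inputs in a single matrix multiplication, letting
# the hardware amortize memory traffic across the whole group.
batched = inputs @ W.T

assert np.allclose(one_by_one, batched)  # identical results
```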
Enhancing
Enhancers improve the quality of the model’s output. They range from post-processing to test time compute algorithms.
Caching
Caching is a technique used to store intermediate results of computations to speed up subsequent operations, particularly useful in reducing inference time for machine learning models.
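A minimal memoization sketch with Python's standard library (the `expensive_step` function is a hypothetical stand-in for a costly computation): repeated requests hit the cache instead of recomputing.

```python
from functools import lru_cache

calls = 0

# Cache intermediate results so repeated sub-computations are looked
# up instead of recomputed.
@lru_cache(maxsize=None)
def expensive_step(x: int) -> int:
    global calls
    calls += 1          # count how often we actually compute
    return x * x        # stand-in for an expensive operation

results = [expensive_step(v) for v in [3, 5, 3, 5, 3]]
print(results, calls)   # computed only twice despite five requests
```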
Recovery
Recovery restores the performance of a model after compression.
Factorization
Factorization batches several small matrix multiplications into one large fused operation, which, while neutral on memory and raw latency, unlocks notable speed-ups when used alongside quantization.
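The fusion idea can be sketched in NumPy with illustrative shapes: eight small matrix multiplications are stacked into 3-D arrays and executed as one batched call instead of eight separate ones.

```python
import numpy as np

rng = np.random.default_rng(2)
# Several small, independent matrix multiplications (toy shapes).
As = [rng.normal(size=(4, 4)) for _ in range(8)]
Bs = [rng.normal(size=(4, 4)) for _ in range(8)]

# One by one: eight separate calls.
separate = [A @ B for A, B in zip(As, Bs)]

# Fused: stack into 3-D arrays and run a single batched matmul,
# one large operation instead of eight small ones.
fused = np.matmul(np.stack(As), np.stack(Bs))

assert all(np.allclose(s, f) for s, f in zip(separate, fused))
```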
Distillation
Distillation trains a smaller, simpler model to mimic a larger, more complex model.
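A toy sketch of the classic soft-target objective (logits are made up for illustration): the student is trained to match the teacher's temperature-scaled output distribution, typically by minimizing a KL divergence.

```python
import numpy as np

def softmax(z, T=1.0):
    # Numerically stable temperature-scaled softmax.
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

# Hypothetical logits for one example: a large "teacher" and a
# smaller "student" being trained to mimic it.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([2.0, 1.5, 0.2])

# Soft targets: the teacher's smoothed distribution carries more
# information than a one-hot label.
T = 2.0
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss the student minimizes during training.
kl = np.sum(p_teacher * np.log(p_teacher / p_student))
print(f"KL(teacher || student) = {kl:.4f}")
```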
Compilation
Compilation optimizes the model for specific hardware.
Distributers
Distributers distribute the model or certain calculations across multiple devices, improving computational efficiency and reducing overall processing time.
We combine 50+ algorithms across nine compression techniques, including proprietary ones, so you don’t have to implement or test them manually.
And more...
Pruna combines several compression algorithms with one feature
Our SmashConfig feature lets you define your objectives and choose the algorithms you need to optimize your model in just a few lines of code. If you don’t know which combination to use, have a look at our tutorials or our Optimization Agent.
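A configuration sketch based on Pruna's documented SmashConfig pattern; the algorithm name and model here are illustrative placeholders, so check the Pruna documentation for the exact keys and values available for your model.

```python
# Sketch only: requires the pruna package and a loaded model.
from pruna import SmashConfig, smash

smash_config = SmashConfig()
# Pick the algorithms you want to combine; "half" is an
# illustrative quantizer name, not a recommendation.
smash_config["quantizer"] = "half"

# `base_model` is a placeholder for any supported model you have
# already loaded.
smashed_model = smash(model=base_model, smash_config=smash_config)
```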
Recommended configuration for compressing Qwen
Learn more about the Combination Engine in our blog articles