Pruna AI, a European startup that has been working on compression algorithms for AI models, is making its optimization framework open source on Thursday.
The company has built a framework that applies several efficiency methods, such as caching, pruning, quantization, and distillation, to a given AI model.
“We also standardize saving and loading the compressed models, applying combinations of these compression methods, and also evaluating your compressed model after you compress it,” Pruna AI co-founder and CTO John Rachwan told TechCrunch.
In particular, Pruna AI’s framework can evaluate whether there’s significant quality loss after compressing a model, as well as the performance gains you get in return.
“If I were to use a metaphor, we are similar to how Hugging Face standardized transformers and diffusers — how to call them, how to save them, load them, etc. We are doing the same, but for efficiency methods,” he added.
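For a sense of what that standardization looks like in practice, here is a minimal sketch of compressing a model through the open source framework. The `smash` and `SmashConfig` names follow Pruna AI’s published examples, but the specific configuration keys and method names below are assumptions rather than a verified API reference.

```python
# Hypothetical sketch of compressing an image generation model with Pruna's
# framework. `smash` and `SmashConfig` follow the project's published examples;
# the specific method names below are illustrative assumptions.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

config = SmashConfig()
config["cacher"] = "deepcache"   # cache intermediate diffusion steps (assumed name)
config["quantizer"] = "half"     # lower-precision weights (assumed name)

# Apply the combined compression methods in a single standardized call.
smashed_pipe = smash(model=pipe, smash_config=config)
image = smashed_pipe("a lighthouse at dawn").images[0]
```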
Big AI labs are already using various compression methods. For instance, OpenAI has been relying on distillation to create faster versions of its flagship models.
This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. Similarly, the Flux.1-schnell image generation model is a distilled version of the Flux.1 model from Black Forest Labs.
Distillation is a technique used to extract knowledge from a large AI model through a “teacher-student” approach. Developers send requests to a teacher model and record its outputs. Answers are sometimes compared against a dataset to check their accuracy. These outputs are then used to train a smaller student model to approximate the teacher’s behavior.
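To make the mechanics concrete, here is a minimal PyTorch sketch of a single distillation training step, assuming plain models that return logits. The soft-target loss follows the standard temperature-scaled formulation, not any particular lab’s recipe.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, inputs, temperature=2.0):
    """One teacher-student distillation step: the student learns to match
    the teacher's softened output distribution."""
    with torch.no_grad():
        teacher_logits = teacher(inputs)  # record the teacher's outputs

    student_logits = student(inputs)

    # KL divergence between temperature-softened distributions; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```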
“For big companies, what they usually do is that they build this stuff in-house. And what you can find in the open source world is usually based on single methods. For example, let’s say one quantization method for LLMs, or one caching method for diffusion models,” Rachwan said. “But you cannot find a tool that aggregates all of them, makes them all easy to use and combine together. And this is the big value that Pruna is bringing right now.”
While Pruna AI supports any kind of model, from large language models to diffusion models, speech-to-text models, and computer vision models, the company is currently focusing more specifically on image and video generation models.
Some of Pruna AI’s existing users include Scenario and PhotoRoom. In addition to the open source edition, Pruna AI has an enterprise offering with advanced optimization features including an optimization agent.
“The most exciting feature that we are releasing soon will be a compression agent,” Rachwan said. “Basically, you give it your model, you say: ‘I want more speed but don’t drop my accuracy by more than 2%.’ And then, the agent will just do its magic. It will find the best combination for you, return it for you. You don’t have to do anything as a developer.”
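Pruna AI hasn’t detailed how the agent will work, but the idea maps naturally onto a constrained search over compression configurations. The sketch below is purely hypothetical: `apply_methods` and `evaluate` are placeholder helpers, and a real agent would presumably search far more cleverly than this brute-force loop.

```python
from itertools import combinations

def find_best_config(model, methods, evaluate, apply_methods, max_accuracy_drop=0.02):
    """Hypothetical sketch of a compression agent: try combinations of
    compression methods and keep the fastest one whose accuracy stays
    within the user's tolerance (e.g., no more than a 2-point drop)."""
    base_accuracy, _ = evaluate(model)
    best_combo, best_speed = None, 0.0

    for r in range(1, len(methods) + 1):
        for combo in combinations(methods, r):
            compressed = apply_methods(model, combo)  # placeholder helper
            accuracy, speed = evaluate(compressed)    # returns (accuracy, speed)
            if base_accuracy - accuracy <= max_accuracy_drop and speed > best_speed:
                best_combo, best_speed = combo, speed

    return best_combo
```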
Pruna AI charges by the hour for its pro version. “It’s similar to how you would think of a GPU when you rent a GPU on AWS or any cloud service,” Rachwan said.
And if your model is a critical part of your AI infrastructure, you’ll end up saving a lot of money on inference with the optimized version. For example, Pruna AI has made a Llama model eight times smaller without too much loss using its compression framework. Pruna AI hopes its customers will think of its framework as an investment that pays for itself.
Pruna AI raised a $6.5 million seed funding round a few months ago. Investors in the startup include EQT Ventures, Daphni, Motier Ventures and Kima Ventures.