DeepMind scientists have given compression technology a major upgrade thanks to a large language model (LLM) that has achieved astonishing lossless compression rates on image and audio data.
Using the company’s Chinchilla 70B LLM as the predictive engine of a compression algorithm, the researchers reduced images to 43.4% and audio files to 16.4% of their original sizes, as detailed in their paper – beating some of the best compression software out there.
By contrast, the standard image compression algorithm PNG reduces images to 58.5% of their original size, and FLAC compressors shrink audio to 30.3%. In practice, that means fitting far more onto any one of the best SSDs.
Although Chinchilla 70B is trained mainly on text, the researchers achieved these results by leaning on the model’s predictive capabilities, framing the “prediction problem” through the lens of file compression. In other words, they retooled the qualities that make an LLM good at prediction and found these same traits also serve to compress large files.
AI is great at compression – up to a point
The DeepMind researchers showed that due to this equivalence between prediction and compression, any compressor can be used as a conditional generative model – and even the other way around.
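The intuition behind that equivalence: an ideal entropy coder spends roughly −log2 p bits on a symbol the model assigns probability p, so the total compressed size equals the model’s log-loss – the better the predictor, the smaller the file. The toy sketch below illustrates this with two hypothetical predictors (the probabilities and helper names are made up for illustration; the paper pairs a real LLM’s predictions with arithmetic coding):

```python
import math

def compressed_bits(sequence, predict):
    """Sum of -log2 p(symbol | context): the size an ideal coder achieves."""
    total = 0.0
    for i, symbol in enumerate(sequence):
        p = predict(sequence[:i], symbol)
        total += -math.log2(p)
    return total

text = "abababababababab"

# Uniform model: knows nothing, assigns 1/2 to each of {'a', 'b'}.
def uniform(context, symbol):
    return 0.5

# Predictive model: 90% sure the symbols alternate (hypothetical figure).
def alternating(context, symbol):
    if not context:
        return 0.5
    expected = "b" if context[-1] == "a" else "a"
    return 0.9 if symbol == expected else 0.1

print(f"uniform model:    {compressed_bits(text, uniform):.1f} bits")
print(f"predictive model: {compressed_bits(text, alternating):.1f} bits")
```

The uniform model needs the full 16 bits for 16 symbols, while the alternation-aware model needs only about 3.3 – the gap is exactly the difference in log-loss, which is why a strong predictor like an LLM doubles as a strong compressor.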
But, they added, these compression results only hold up to a certain file size, meaning generative AI may not be a practical compression solution for everyone.
“We evaluated large pretrained models used as compressors against various standard compressors, and showed they are competitive not only on text but also on modalities they have never been trained on,” the researchers noted.
“We showed that the compression viewpoint provides novel insights on scaling laws since it takes the model size into account, unlike the log-loss objective, which is standard in current language modeling research.”
Due to this scaling limitation, the models used in this research aren’t better than the likes of 7-Zip once files grow beyond a certain threshold. Above that point they may not compress as impressively as the headline results suggest, and they may also not be as fast as conventional compression algorithms.