Small is beautiful, as the old saying goes, and nowhere is that more true than in media files. Compressed images are considerably easier to transmit and store than uncompressed ones, and now Google is using neural networks to beat JPEG at the compression game.
Google began by taking a random sample of 6 million 1,280×720 images from the web. It then broke those down into nonoverlapping 32×32 tiles and zeroed in on 100 of those tiles with the worst compression ratios. The goal, essentially, was to focus training on the hardest-to-compress data, on the theory that a model that does well there is bound to find the rest easier.
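The tiling-and-ranking step can be sketched roughly as follows. This is not Google's code; it is a minimal illustration that splits a grayscale pixel buffer into 32×32 patches and ranks them by how poorly they compress, using zlib as a stand-in for whatever codec was actually used to measure compressibility. The function names (`tiles`, `hardest_tiles`) are hypothetical.

```python
import zlib

TILE = 32

def tiles(pixels, width, height, tile=TILE):
    """Split a flat grayscale pixel buffer into nonoverlapping tile x tile patches."""
    out = []
    for ty in range(0, height - height % tile, tile):
        for tx in range(0, width - width % tile, tile):
            patch = bytearray()
            for row in range(tile):
                start = (ty + row) * width + tx
                patch += pixels[start:start + tile]
            out.append(bytes(patch))
    return out

def hardest_tiles(pixels, width, height, k=100):
    """Rank tiles by compression ratio (compressed size / raw size), worst first,
    and keep the k hardest-to-compress patches."""
    ts = tiles(pixels, width, height)
    ranked = sorted(ts, key=lambda t: len(zlib.compress(t)) / len(t), reverse=True)
    return ranked[:k]
```

Flat regions (sky, walls) compress to almost nothing, so this ranking naturally surfaces noisy, detail-heavy patches: exactly the data a learned codec gains the most from seeing during training.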
The researchers then used TensorFlow, the machine-learning system Google open-sourced last year, to train a set of experimental neural network architectures. They trained the networks for one million steps and then collected a series of technical metrics to determine which architectures produced the best compression results.
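The overall procedure — train several candidate architectures for a fixed number of steps, then keep whichever scores best on a chosen metric — can be sketched schematically. This is not the actual TensorFlow code; it is a toy stand-in where each "architecture" is a single tunable parameter, "training" is random local search, and the metric is a squared-error loss. All names here (`make_candidate`, `train`, `select_best`) are hypothetical.

```python
import random

def make_candidate(name, init):
    """Hypothetical stand-in for a candidate network architecture."""
    return {"name": name, "param": init}

def loss(param, data):
    """Toy metric: squared error against the data mean."""
    target = sum(data) / len(data)
    return (param - target) ** 2

def train(model, data, steps=1000, step_size=0.1):
    """Schematic training loop (the real work ran one million steps):
    accept a random perturbation whenever it lowers the loss."""
    for _ in range(steps):
        trial = model["param"] + random.uniform(-step_size, step_size)
        if loss(trial, data) < loss(model["param"], data):
            model["param"] = trial
    return model

def select_best(models, data):
    """Mirror the selection step: pick the trained candidate with the best metric."""
    return min(models, key=lambda m: loss(m["param"], data))
```

The design point this illustrates is that training and model selection are separate stages: every candidate gets the same training budget, and only the final metric decides which one survives.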