We’ve all seen it weekly in TV police dramas. The cops are looking for a suspect, and all they have is a grainy, pixelated image. Well, that is until they’ve passed it through their advanced image recombinator thingy! And hey presto, the cops’ work is done: a clean new image with all the missing pixels appears. If only that were grounded in reality, because it’s impossible to recover information that simply isn’t in the picture. Well, it was impossible, until researchers found a way to (almost) turn fantasy into reality, called Google Super Resolution.
Google Super Resolution
Now, you may be wondering: how can something be created from nothing? Well, to start with, that’s not what this new imaging technique is doing. So, to help explain what we know, take a look at the image above.
As you can see, the images on the left are blurry and pixelated. They are the source files Google Super Resolution/Google Brain had to work from, each containing just 64 pixels (8×8). Now look at the middle column: the image has been cleaned up dramatically. This is what Google Brain, using Google Super Resolution, was able to produce, which you must agree is remarkable! However, what’s more astonishing is just how accurate this new technique is. Look at the right column of the image above, which shows the original high-resolution photo, and only then can you tell just how close Google Super Resolution gets it.
Synthesize Details from Low-resolution images
Now, the real science behind how Google’s researchers got this technique to work is heavy reading, so we’ll give you the short version. Created by Ryan Dahl, Mohammad Norouzi, and Jonathon Shlens, its full name is pixel recursive super resolution. It allowed the researchers to synthesize plausible details from low-resolution images by combining the power of two neural networks: one that conditions on the low-resolution input, and one that predicts realistic detail pixel by pixel.
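To make the two-network idea concrete, here is a minimal toy sketch, not Google’s actual model. It assumes stand-in functions: `conditioning_logits` plays the role of the conditioning network (which looks only at the 8×8 input), and `prior_logits` plays the role of the autoregressive prior (which looks only at pixels already generated). Both are crude hand-written stand-ins for real trained networks, and greedy `argmax` replaces the sampling a real system would use.

```python
import numpy as np

# Toy sketch of pixel recursive super resolution (NOT the real model):
# each output pixel's logits are the SUM of two networks' logits, and
# pixels are generated one at a time in raster order.

def conditioning_logits(low_res, out_size=32):
    """Stand-in conditioning network: upsample the low-res image and
    turn each pixel value into logits peaked at that value."""
    scale = out_size // low_res.shape[0]
    upsampled = np.kron(low_res, np.ones((scale, scale)))
    values = np.arange(256)
    # Gaussian-shaped logits centred on the upsampled value.
    return -((values[None, None, :] - upsampled[..., None]) ** 2) / 200.0

def prior_logits(generated, y, x):
    """Stand-in prior network: predict pixel (y, x) from its
    already-generated left and upper neighbours."""
    neighbours = []
    if x > 0:
        neighbours.append(generated[y, x - 1])
    if y > 0:
        neighbours.append(generated[y - 1, x])
    mean = np.mean(neighbours) if neighbours else 128.0
    values = np.arange(256)
    return -((values - mean) ** 2) / 800.0

def synthesize(low_res, out_size=32):
    """Generate the high-res image pixel by pixel, combining both
    networks' logits at every position."""
    cond = conditioning_logits(low_res, out_size)
    out = np.zeros((out_size, out_size))
    for y in range(out_size):
        for x in range(out_size):
            logits = cond[y, x] + prior_logits(out, y, x)
            out[y, x] = np.argmax(logits)  # greedy pick; the paper samples
    return out

rng = np.random.default_rng(0)
low = rng.integers(0, 256, size=(8, 8)).astype(float)
high = synthesize(low)
print(high.shape)  # (32, 32)
```

The key point the sketch shows is the division of labour: the conditioning term keeps the output faithful to the 8×8 input, while the prior term makes each pixel agree with its neighbours, which is where the invented-but-plausible detail comes from.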
The above process allows the 8×8 source image to be compared against high-resolution images, and the model then approximates, or guesses, what the new picture might look like when scaled up. However, to output something like a face, Google Super Resolution has to add realistic details to the face. It does that by working out which pixels go where in high-resolution images and placing them correspondingly.
How Accurate is it?
If you take another look at the image above and make your own comparison, you will see it’s fairly accurate. However, as researchers do, Dahl, Norouzi, and Shlens put it to the test themselves. The results showed that people were fooled by the generated face image 10 percent of the time. However, when a recreation of a non-human scene was used (a bedroom), it fooled those questioned 28 percent of the time.
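The metric behind those numbers is simple: show a subject a real photo and a generated one, and count how often they pick the generated image as the real one. Here is a small illustration of that arithmetic, using hypothetical trial lists chosen to echo the reported figures (the variable names and trial counts are our invention, not the paper’s data).

```python
# Toy illustration of the "fooling rate" metric. 50% would mean the
# generated images are indistinguishable from real ones.

def fooling_rate(choices):
    """choices: booleans, True when a subject picked the generated
    image over the real photo."""
    return sum(choices) / len(choices)

# Hypothetical 100-trial runs matching the reported percentages:
face_trials = [True] * 10 + [False] * 90
bedroom_trials = [True] * 28 + [False] * 72

print(f"faces:    {fooling_rate(face_trials):.0%}")     # faces:    10%
print(f"bedrooms: {fooling_rate(bedroom_trials):.0%}")  # bedrooms: 28%
```

Note why 50 percent is the ceiling rather than 100: a subject who truly cannot tell the images apart ends up guessing, and guessing picks each image half the time.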
Now, to be considered truly convincing, the researchers are looking for a figure of 50 percent, the point at which generated images become indistinguishable from real ones, so as you can tell, it’s some way off. Does this mean it can’t help law enforcement agencies do their job? To be honest, we’re not sure. However, one thing is certain: when it comes to convicting people of crimes, it has to be far more accurate.