The BBC's Szu Ping Chan takes a look at the futuristic technology depicted in 1982's Blade Runner, which was set in November 2019. Now that it actually is November 2019, how did the film do? We're doing great on telecommunications and despoiling the planet, but not so well on the genetically engineered, vat-grown human clones front.
Computational photography is becoming the norm, helping our phones take incredible low-light pictures and automatically blur the background of our portrait shots. But the Esper machine, which Deckard uses to find clues by zooming in on different things within photos, remains ahead of its time. It lets him see objects and people from different angles, and items that were not previously visible. AI researchers are working on software that can create interactive 3D views from a single 2D source image, but it's likely to be many more years before Photoshop gets the feature.
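For a sense of where that research stands, here's a minimal sketch of the usual recipe, assuming the MiDaS monocular depth model available through torch.hub: estimate depth from the single photo, then re-project pixels to fake a slightly shifted viewpoint. The displacement step is a crude stand-in for the layering and inpainting that real view-synthesis systems do.

```python
import cv2
import numpy as np
import torch

# Monocular depth estimation with MiDaS (loaded from torch.hub), then a
# crude novel-view synthesis: shift each pixel by a disparity proportional
# to its predicted inverse depth, so near objects move more than far ones.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# MiDaS returns relative inverse depth; normalise it to 0..1 (1 = nearest).
disp = pred.cpu().numpy()
disp = (disp - disp.min()) / (disp.max() - disp.min())

# Fake a small camera move: sample each output pixel from a column offset
# by up to `max_shift` pixels, scaled by disparity. Real systems inpaint
# the disocclusions this leaves behind; here they just smear.
max_shift = 30
h, w = disp.shape
cols = np.clip(np.arange(w)[None, :] + (disp * max_shift).astype(int), 0, w - 1)
novel_view = img[np.arange(h)[:, None], cols]
cv2.imwrite("shifted.jpg", cv2.cvtColor(novel_view, cv2.COLOR_RGB2BGR))
```

The hard part, and the reason this is still a research demo rather than a Photoshop menu item, is inventing the parts of the scene the original exposure never recorded.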
How might Deckard's camera work, practically? The data could only represent what the camera could see at the moment of capture. Recent light-field cameras (which record the direction light arrives from, not just its intensity) can do a little of the Blade Runner trick, but nowhere near enough to offer the shift of perspective Deckard was able to explore on his bulky, single-purpose cathode ray tube photo viewer: the virtual viewpoint can only move within the camera's own aperture, a few centimetres at most.
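To make "nowhere near enough" concrete: once a light-field capture has been decoded into a grid of sub-aperture views, shifting the viewpoint amounts to picking a different view, and refocusing is a shift-and-add across all of them. A minimal NumPy sketch, assuming that decoded grid already exists (the array layout and function names are mine, not any camera vendor's SDK):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus by shift-and-add of sub-aperture views.

    light_field: array of shape (U, V, H, W), one grayscale image per
    aperture position (u, v). `alpha` picks the synthetic focal plane:
    each view is shifted in proportion to its offset from the aperture
    centre, then all views are averaged.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += np.roll(light_field[u, v],
                           (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out / (U * V)

def shift_viewpoint(light_field, u, v):
    """'Move' the camera by selecting a single sub-aperture view.
    The virtual viewpoint can only travel within the physical aperture,
    which is why the parallax is millimetres, not metres."""
    return light_field[u, v]
```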
Perhaps the movie-world's cameras spit out little drones, snapping simultaneously from nearby points of view and baking all that data into the original. Or perhaps, being a Blade Runner, Deckard has access to encrypted information embedded in the print, captured from nearby surveillance cameras. What if the camera is also capturing all sorts of other data (sonar, radar, dim extrapolations from every other reflective surface) and inferring details?
Or maybe it's pointless to speculate about how a photo could have gigapixel resolution and the light field of a meter-wide capture surface, yet have the dynamic range of a Polaroid (the brand's embossed right there!) and require a machine the size and grace of a photocopier to examine.
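If you want a feel for why, here's a back-of-envelope calculation with numbers I'm inventing but keeping generous:

```python
# Entirely invented figures: a one-gigapixel image sampled from a
# 1 m x 1 m light-field surface at 1 cm spacing (100 x 100 viewpoints),
# three colour channels at 16 bits each.
pixels_per_view = 1_000_000_000
views = 100 * 100
bytes_per_pixel = 3 * 2

total_bytes = pixels_per_view * views * bytes_per_pixel
print(f"{total_bytes / 1e12:,.0f} TB per photograph")  # ~60 TB, before compression
```

Roughly 60 terabytes per snapshot, stored on a pocketable print.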