
Monday, November 1, 2010

High dynamic range imaging - Wikipedia, the free encyclopedia

High dynamic range imaging

From Wikipedia, the free encyclopedia
An example of a tone-mapped HDR image (top) and its source images (bottom)

In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminance between the lightest and darkest areas of an image than current standard digital imaging techniques or photographic methods. This wide dynamic range allows HDR images to more accurately represent the range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight.[1]

The two main sources of HDR imagery are computer renderings and the merging of multiple photographs; the individual source photographs are referred to as low dynamic range (LDR)[2] or standard dynamic range (SDR)[3] photographs.

Tone mapping techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.


Photography

In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV or one stop is a doubling of the amount of light.

Dynamic ranges of common devices
Device                               Stops    Contrast
Computer LCD                         9.5      700:1
DSLR camera (Canon EOS-1D Mark II)   11[4]    2048:1
Print film                           7[4]     128:1

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer enough values to allow fine transitions (and it also introduces undesirable effects due to its lossy compression).
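
As a rough illustration of the merging step, the sketch below combines a set of bracketed exposures into a single radiance estimate. It assumes linearized (raw-like) pixel data in [0, 1] and known exposure times; the hat-shaped weighting and the function names are illustrative choices, not the method of any particular tool.

    import numpy as np

    def merge_bracketed(images, exposure_times):
        """Merge bracketed exposures into one HDR radiance map (illustrative sketch).

        `images` are float arrays in [0, 1] with a linear response (raw-like data),
        one per exposure; `exposure_times` are the matching shutter times in seconds.
        """
        total = np.zeros_like(images[0], dtype=np.float64)
        weight_sum = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            # Hat-shaped weight: trust mid-tones, distrust clipped shadows and highlights.
            w = 1.0 - np.abs(2.0 * img - 1.0)
            total += w * (img / t)        # each exposure's estimate of scene radiance
            weight_sum += w
        return total / np.maximum(weight_sum, 1e-6)

Real merging software differs mainly in how it estimates the camera response curve before this step and in the weighting function it applies.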

Any camera that allows manual over- or under-exposure of a photo can be used to create HDR images.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater exposure range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[5] As the popularity of this imaging technique has grown, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode which captures an HDR image and then outputs (only) a tone-mapped JPEG file.[6] The Canon PowerShot G12 and Canon PowerShot S95 offer similar features in a smaller format.[7] Other examples of cameras with built-in HDR processing are the Nikon P100 superzoom and the Sony Cybershot DSC-TX5 and TX7 point-and-shoot cameras. The built-in camera of the Apple iPhone 4, together with its iOS software, can also create HDR images.

Dynamic range for each ISO setting of the Canon EOS-1D Mark II[8]
ISO Dynamic Range (Stops)
50 11.3
100 11.6
200 11.5
400 11.2
800 10.7
1600 9.7
3200 8.7

Mathematics

Contrast ratio = 2^(EV difference)

EV difference = log2(contrast ratio)

Because an increase of 1 EV indicates a doubling of the amount of light, EV is in effect a base-2 logarithmic measure of light.
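
A couple of lines make the relationship concrete; the numbers below simply restate rows from the table of common devices above.

    import math

    def contrast_ratio(ev_difference):
        # Each additional stop doubles the light, so the ratio is 2 ** EV.
        return 2 ** ev_difference

    def ev_difference(contrast):
        return math.log2(contrast)

    print(contrast_ratio(11))    # 2048, the DSLR row in the table above
    print(ev_difference(700))    # about 9.45 stops, roughly the computer LCD row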

The human perception of brightness is well approximated by Stevens' power law,[9] which over a reasonable range is close to logarithmic, as described by the Weber–Fechner law. This is one reason that logarithmic measures of light intensity are often used.[10][11]

Scanning film

In contrast to photographic prints, color negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially color negatives) feature a very high dynamic range.

Dynamic ranges of photographic material
Material             Dynamic range (density)   Object contrast
Photographic print   1.5                       1:32
Positive slide       2.4                       1:256
Color negative       3.6                       1:4096

When digitising photographic material with a scanner, the device has to be able to capture the whole dynamic range of the original, or else detail is lost. Manufacturers' specifications for the dynamic range of flatbed and film scanners are often somewhat inaccurate and exaggerated. Multi-exposure scanning records an original's full dynamic range by performing a double scan, with an increased exposure time for the second pass. The first pass captures detail in the bright areas of the image and the second captures detail in the shadows; an algorithm then combines the two scans into the final image.
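
A minimal sketch of that double-scan idea follows. The blending rule, threshold, and names here are illustrative assumptions, not the algorithm of any particular scanner software.

    import numpy as np

    def merge_scans(normal_scan, long_scan, exposure_ratio, shadow_threshold=0.1):
        """Combine a normal scan (good highlight detail) with a longer-exposed
        second scan (good shadow detail). Both scans are linear floats in [0, 1];
        `exposure_ratio` is how much longer the second pass was exposed."""
        # Rescale the long scan back to the first scan's exposure level.
        long_rescaled = long_scan / exposure_ratio
        # Dark pixels come from the long scan, bright pixels from the normal scan,
        # with a smooth transition around the shadow threshold.
        alpha = np.clip(normal_scan / shadow_threshold, 0.0, 1.0)
        return alpha * normal_scan + (1.0 - alpha) * long_rescaled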

Representing HDR images on LDR displays

Contrast reduction

HDR images can easily be represented on common LDR devices, such as computer monitors and photographic prints, simply by reducing the contrast – something any image editing software can do.

Clipping and compressing dynamic range

Scenes with high dynamic range are often represented on LDR devices by clipping the dynamic range, cutting off the darkest and brightest details, or alternatively with an S-shaped conversion curve that compresses contrast progressively and more aggressively in the highlights and shadows while leaving the middle portion of the contrast range relatively unaffected.
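
The difference between the two strategies can be sketched with a pair of toy curves; the particular S-shape (a rescaled logistic) is an arbitrary illustrative choice.

    import numpy as np

    def clip_curve(x, lo=0.05, hi=0.95):
        # Hard clipping: detail below `lo` and above `hi` is simply cut off.
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def s_curve(x, strength=6.0):
        # Smooth S-shaped curve: mid-tones pass nearly unchanged while shadows
        # and highlights are compressed progressively instead of being cut off.
        y = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
        y0 = 1.0 / (1.0 + np.exp(strength * 0.5))    # curve value at x = 0
        y1 = 1.0 / (1.0 + np.exp(-strength * 0.5))   # curve value at x = 1
        return (y - y0) / (y1 - y0)                  # normalized so 0 -> 0 and 1 -> 1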

An example of a tone-mapped HDR image: a New York City nighttime cityscape.

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of the entire image while retaining localized contrast (between neighboring pixels). Drawing on research into how the human eye and visual cortex perceive a scene, it attempts to represent the whole dynamic range while retaining realistic color and contrast.

Images with too much tone mapping processing have their range over-compressed, creating a surreal low-dynamic-range rendering of a high-dynamic-range scene.
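
As a concrete example, the sketch below applies a simple global operator in the spirit of Reinhard's L/(1+L) curve; the luminance weights and the 0.18 "key" are conventional assumptions, and real tone mappers add local processing on top of this.

    import numpy as np

    def tone_map_global(hdr_rgb, key=0.18, eps=1e-6):
        """Map a linear HDR image (H x W x 3 floats) to displayable [0, 1] values."""
        # Approximate luminance from linear RGB.
        lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
        # Scale so the log-average luminance sits at the chosen key value.
        log_avg = np.exp(np.mean(np.log(lum + eps)))
        scaled = key * lum / log_avg
        # L / (1 + L): compresses highlights smoothly toward 1.
        mapped = scaled / (1.0 + scaled)
        ratio = mapped / (lum + eps)
        return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)

Local operators instead vary the compression per neighborhood, which is what preserves the localized contrast described above.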

Comparison with traditional digital images

Information stored in high dynamic range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called "scene-referred", in contrast to traditional digital images, which are "device-referred" or "output-referred". Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called "gamma encoding" or "gamma correction". The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[12][13][14]

HDR images often use a higher number of bits per color channel than traditional images to represent many more colors over a much wider dynamic range. 16-bit ("half precision") or 32-bit floating point numbers are often used to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.[12][15]
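
One common compromise between precision and size is the shared-exponent scheme used by the Radiance RGBE format mentioned later in this article: three 8-bit mantissas plus one 8-bit exponent per pixel. The sketch below shows the basic idea; it omits the edge cases a full implementation would handle.

    import math

    def float_to_rgbe(r, g, b):
        """Pack one linear RGB value into four bytes sharing a common exponent."""
        v = max(r, g, b)
        if v < 1e-32:
            return (0, 0, 0, 0)
        mantissa, exponent = math.frexp(v)   # v = mantissa * 2**exponent, 0.5 <= mantissa < 1
        scale = mantissa * 256.0 / v
        return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

    def rgbe_to_float(r8, g8, b8, e8):
        """Recover approximate linear RGB from the packed bytes."""
        if e8 == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e8 - (128 + 8))
        return (r8 * f, g8 * f, b8 * f)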

History of HDR photography

1850

The idea of using several exposures to cope with a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray, to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard techniques, as the luminosity range was too extreme. Le Gray used one negative for the sky and another with a longer exposure for the sea, and combined the two into a single positive picture.[16]

1930

High dynamic range imaging was originally developed in the 1930s and 1940s by Charles Wyckoff. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1940s. Wyckoff implemented local neighborhood tone remapping to combine differently exposed film layers into a single image of greater dynamic range.

Mid-century

External image: "Schweitzer at the Lamp", by W. Eugene Smith[17][18]

In the mid-twentieth century, manual tone mapping was commonly done using dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonal reproduction. An excellent example is the photograph "Schweitzer at the Lamp" by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to produce in order to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to dark shadow.[18]

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two techniques. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible, owing to the specific timing required during the development of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or continued to shoot in black and white in order to use tone-mapping techniques.

1980

The desirability of HDR has been recognized for decades, but its wider use was, until quite recently, precluded by the limitations of available computer processing power. Probably the first practical application of HDRI was in the movie industry in the late 1980s. In 1985, Gregory Ward had created the Radiance RGBE image file format, which was the first (and is still the most commonly used) HDR imaging file format.

Wyckoff's concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Prof. Y. Y. Zeevi, who filed for a patent on the concept in 1988.[19] In 1993 the first commercial medical camera was introduced that performed real-time capture of multiple images with different exposures and produced an HDR video image.[20]

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image) and then tone mapping this result. Global HDR was first introduced in 1993,[21] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[22]

This method was developed to produce a high dynamic range image from a set of photographs taken with a range of exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now popularly used to refer to this process. This composite technique is different from (and may be of lesser or greater quality than) the production of an image from a single exposure of a sensor that has a native high dynamic range. Tone mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer screen.

1996

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. At the MIT Media Laboratory, Steve Mann developed and patented the global-HDR method for producing digital images with extended dynamic range.[23] Mann's method involved a two-step procedure: (1) generate a single floating-point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a "lightspace image", "lightspace picture", or "radiance map". A further benefit of global-HDR imaging is that it provides access to this intermediate light or radiance map, which has been used for computer vision and other image processing operations.[23]

1997

In 1997 this technique of combining several differently exposed images to produce a single HDR image was presented to the computer graphics community by Paul Debevec.

2005

Photoshop CS2 introduced the Merge to HDR function.[24] This led to many consumer experiments with digital HDR imaging, and to a trend of combining "extreme" dynamic ranges of 15 or more stops into a single image, producing results that look more "painterly" than images made with traditional exposure control.

In many ways, Photoshop CS2's HDR function is the holy grail of dynamic range. With properly shot and processed files it allows photographers to easily create images that were previously impossible, or at least very difficult to accomplish. But, good as it is, like a gun or nuclear power, it can be a force for evil as well as good.

Not every image needs to have 10-15 stops of dynamic range. In fact, most photographs look quite nice, thank you very much, with the 5-7 stops of dynamic range that we're used to. I fully expect to see some really silly if not downright ugly images in the months ahead, as photographers get their copies of Photoshop CS2 and start discovering what the HDR function is capable of.

But, as with all such tool [sic], in the hands of sensitive artists and competent craftsmen, I'm sure that we will start to be shown the world in new and exciting ways.

Michael Reichmann, Luminous Landscape[24]

Video

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[citation needed] A few companies such as RED[25] and Arri[26] have been developing digital sensors capable of a higher dynamic range, but these have yet to be released or made affordable. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet – essentially sequences of still photographs shown in quick succession. In September 2010 the independent studio Soviet Montage produced a popular HDR video from disparately exposed video streams, using a beam splitter and consumer-grade HD video cameras.[27]

Software

See HDR (Software)

References

  1. ^ Reinhard, Erik; Ward, Greg; Pattanaik, Sumanta; Debevec, Paul (2006). High dynamic range imaging: acquisition, display, and image-based lighting. Amsterdam: Elsevier/Morgan Kaufmann. p. 7. ISBN 978-0-12-585263-0. "Images that store a depiction of the scene in a range of intensities commensurate with the scene are what we call HDR, or 'radiance maps.' On the other hand, we call images suitable for display with current display technology LDR." 
  2. ^ Cohen, Jonathan; Tchou, Chris; Hawkins, Tim; Debevec, Paul E. (2001). Steven Jacob Gortler and Karol Myszkowski. ed. "Real-Time High Dynamic Range Texture Mapping". Proceedings of the 12th Eurographics Workshop on Rendering Techniques (Springer): 313–320. ISBN 3-211-83709-4. 
  3. ^ Vassilios Vonikakis and Ioannis Andreadis (2008). "Fast Automatic Compensation of Under/Over-Exposured Image Regions". In Domingo Mery and Luis Rueda. Advances in image and video technology: Second pacific rim symposium, PSIVT 2007, Santiago, Chile, December 17–19, 2007. p. 510. ISBN 9783540771289. http://books.google.com/?id=vkNfw8SsU3oC&pg=PA510&dq=hdr+sdr+%22standard+dynamic+range%22&q=hdr%20sdr%20%22standard%20dynamic%20range%22. 
  4. ^ a b R. N. Clark. "Film versus Digital Summary". http://www.clarkvision.com/imagedetail/film.vs.digital.summary1/index.html. Retrieved 2010-02-28. 
  5. ^ "Auto Exposure Bracketing by camera model". http://hdr-photography.com/aeb.html. Retrieved 18 August 2009. 
  6. ^ "The Pentax K-7: The era of in-camera High Dynamic Range Imaging has arrived!". http://www.adorama.com/alc/blogarticle/11608. Retrieved 18 August 2009. 
  7. ^ "Canon PowerShot G12 picks up HD video recording, built-in HDR". http://www.digitaltrends.com/photography/cameras/canon-powershot-g12-picks-up-hd-video-recording-built-in-hdr/?news=123. 
  8. ^ R. N. Clark. "Procedures for Evaluating Digital Camera Sensor Noise, Dynamic Range, and Full Well Capacities; Canon 1D Mark II Analysis". http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html. Retrieved 2009-08-21. 
  9. ^ Stanley Smith Stevens and Geraldine Stevens (1986). Psychophysics: Introduction to its Perceptual, Neural, and Social Prospects. Transaction Publishers. pp. 208–209. ISBN 9780887386435. http://books.google.com/?id=r5JOHlXX8bgC&pg=PA208&dq=eye+logarithmic+power-law&q=eye%20logarithmic%20power-law. 
  10. ^ Vernon B. Mountcastle (2005). The Sensory Hand: Neural Mechanisms of Somatic Sensation. Harvard University Press. pp. 16–17. ISBN 9780674019744. http://books.google.com/?id=WOmqKSheygYC&pg=PA17&dq=logarithmic+weber-fechner&q=logarithmic%20weber-fechner. 
  11. ^ Leslie Stroebel and Richard D. Zakia (1995). The Focal Encyclopedia of Photography (3rd ed.). Focal Press. p. 465. ISBN 9780240514178. http://books.google.com/?id=CU7-2ZLGFpYC&pg=PA465&dq=logarithmically+light+nearly&q=logarithmically%20light%20nearly. 
  12. ^ a b Greg Ward, Anyhere Software. "High Dynamic Range Image Encodings". http://www.anyhere.com/gward/hdrenc/hdr_encodings.html. 
  13. ^ "The RADIANCE Picture File Format". http://radsite.lbl.gov/radiance/refer/Notes/picture_format.html. Retrieved 2009-08-21. 
  14. ^ Fernando, Randima (2004). "26.5 Linear Pixel Values". Gpu Gems. Boston: Addison-Wesley. ISBN 0321228324. http://http.developer.nvidia.com/GPUGems/gpugems_ch26.html. 
  15. ^ Max Planck Institute for Computer Science. "Perception-motivated High Dynamic Range Video Encoding". http://www.mpi-sb.mpg.de/resources/hdrvideo/. 
  16. ^ J. Paul Getty Museum. Gustave Le Gray, Photographer. July 9 – September 29, 2002. Retrieved September 14, 2008.
  17. ^ The Future of Digital Imaging - High Dynamic Range Photography, Jon Meyer, Feb 2004
  18. ^ a b 4.209: The Art and Science of Depiction, Frédo Durand and Julie Dorsey, Limitations of the Medium: Compensation and accentuation – The Contrast is Limited, lecture of Monday, April 9. 2001, slide 57–59; image on slide 57, depiction of dodging and burning on slide 58
  19. ^ US patent application 5144442, Ginosar, R., Hilsenrath, O., Zeevi, Y., "Wide dynamic range camera", published 1992-09-01 
  20. ^ Technion — Israel Institute of Technology (1993). Adaptive Sensitivity. http://visl.technion.ac.il/research/isight/AS/. 
  21. ^ "Compositing Multiple Pictures of the Same Scene", by Steve Mann, in IS&T's 46th Annual Conference, Cambridge, Massachusetts, May 9–14, 1993
  22. ^ S. Mann and R. W. Picard. "On Being ‘Undigital’ With Digital Cameras: Extending Dynamic Range By Combining Differently Exposed Pictures". http://citeseer.ist.psu.edu/mann95being.html. 
  23. ^ a b US patent application 5828793, Steve Mann, "Method and apparatus for producing digital images having extended dynamic ranges", published 1998-10-27 
  24. ^ a b "Merge to HDR in Photoshop CS2". http://www.luminous-landscape.com/tutorials/hdr.shtml. Retrieved 2009-08-27. 
  25. ^ https://www.red.com/epic_scarlet/
  26. ^ http://www.arridigital.com/alexa
  27. ^ http://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/
  28. ^ "CinePaint Frequently Asked Questions". http://www.cinepaint.org/faq.html. Retrieved 2009-08-31. 


Go there...
http://en.wikipedia.org/wiki/High_dynamic_range_imaging

Don
