Syllabus


NYU Interactive Telecommunications Program


“Digital Imaging: Reset”    Fall 2017


ITPG-GT 2550-001


Instructor: Eric Rosenthal  er77@nyu.edu


Monday 3:20 PM to 6:15 PM, Room 20


Sept 11 - Dec 4


Course Description:


This course is a workshop that changes the rules for capturing and printing digital imagery. By gaining a better understanding of the fundamentals and limitations of digital photography, students use hands-on techniques to produce digital images that rival those of film without using Photoshop. The course includes low-cost tips and tricks for capturing high dynamic range, expanded color, night color, 3D, autostereoscopic 3D, and time-lapse images using a digital camera.


Required text:      


“The 123 of digital imaging Interactive Learning Suite” by Vincent Bockaert, 123di.com, Version 6.2. (A 30% student discount is available at http://www.123di.com/order/educational/. After you have ordered, send a copy of your student ID card, current class schedule, or other academic document to 123DI, and they will process the 30% educational refund.)


Additional Recommended Text: “The Digital Photography Workflow Handbook” by Uwe Steinmueller and Juergen Gulbins, ISBN-10: 1933952717, ISBN-13: 978-1933952710, available on Amazon as a hard copy or Kindle ebook. (This text is not required but highly recommended.)

 

Additional Reading: Students will be provided with URLs to websites containing information pertinent to the syllabus, to read outside of class.


Required equipment: You will need the following equipment:


  1. A Mac or PC

  2. A digital camera, point-and-shoot or DSLR (you may use a camera from the ER)

  3. You may not use a cell phone camera.


Deliverables:


Classes will combine instructional lectures on the topics identified in the syllabus with hands-on workshops using student-owned equipment, equipment the instructor may bring to class for demonstration purposes, or equipment available from the University. Students will be required to take a picture with their own digital camera and deliver it for presentation at the next class. In class, each student will present the image along with a short description (1 to 2 paragraphs) of the concepts learned in the previous class and how those concepts were applied to the capture of the picture. The image will be supplied as a JPG file so that it can be evaluated by the instructor and discussed in class.


Grading:


Grading will be based on:


50%           Demonstration of understanding of learned concepts, through submission of photos and short descriptions of the concepts learned in the class lectures and workshops and how they were applied to the capture of each image.


30%           Participation in class and workshops


20%           On-time attendance. Attendance is taken at each class and is an integral part of your grade (two absences are grounds for a failing grade). Arriving more than 10 minutes late counts as an absence. If you are unable to attend class, you must notify the professor before the class.


   1. Defining the vision system


Over the past 400 years, numerous theories have been suggested to describe how the human vision system (HVS) works. We see images that are remarkably detailed in resolution and color, without pixelation or rasterization.


Color sense, in increments varying from 1 or 2 to 5 nm, covers the electromagnetic spectrum continuously from near IR to near UV with no discernible tri-chromatic bandpass gaps. Moreover, we maintain color constancy from scene to scene, discarding the color temperature of a full-spectral illuminant and even the bandpass gaps of some non-continuous illuminants such as fluorescent lamps.


Our luminance range allows us to see from a fraction of starlight to bright sunlight, with a contrast ratio approaching 10 million to 1 for a specific scene. The HVS has evolved based on the need to perceive images formed by sunlight reflected off real objects, as opposed to virtual objects or moving shadows. The sun’s light is continuous, that is, there are no significant bandpass spectral gaps, and therefore the light is full spectrum.
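
As a quick arithmetic aside, that contrast ratio can be restated in photographic stops (each stop doubles the luminance range). The Python snippet below is a minimal sketch; the single-exposure sensor figure is a typical ballpark value, not a claim from this syllabus.

    import math

    # The ~10,000,000:1 scene contrast ratio cited above, in stops.
    contrast_ratio = 10_000_000
    stops = math.log2(contrast_ratio)
    print(f"{contrast_ratio:,}:1 is about {stops:.1f} stops")  # ~23.3

    # For comparison, a single exposure on a typical consumer sensor
    # captures on the order of 12 stops, i.e. about a 4,096:1 range.
    print(f"12 stops covers {2 ** 12:,}:1")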


Image engineers have designed film and electronic imaging systems for the past 100 years by following a traditional theory of how we see, based on tri-chromatism and an optical model of the eye. Yet no observer has ever confused images from any imaging system with reality, no matter how sophisticated the components or processing. Nature has done a better job at handling complexity with physical and chemical structures than we as engineers and systems integrators have managed with electronics and optics. Current imaging technologies are missing critical information that the human vision system requires to perceive reality; if nothing were missing from the information stream, then electronic images should look real.


         1. Color Space

         2. Dynamic Range

         3. Continuous tone

         4. Continuous color


   2. Defining the limitations of the Digital Photography system


         1. Color depth

         2. Gamma Profiles

         3. Exposure and Bracketing

         4. Interpreting the Histogram

         5. White Balance

         6. Depth of field (f-stop); a worked sketch follows this list
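
As one concrete example of these limitations, the f-stop topic above can be made tangible with the standard thin-lens depth-of-field approximation. The sketch below assumes a full-frame circle of confusion of 0.03 mm; the lens and distance values are arbitrary examples.

    def depth_of_field(f_mm, n, s_mm, coc_mm=0.03):
        """Approximate near/far limits of acceptable focus.

        f_mm: focal length (mm), n: f-number (8 for f/8),
        s_mm: subject distance (mm), coc_mm: circle of confusion (mm).
        """
        h = f_mm ** 2 / (n * coc_mm) + f_mm  # hyperfocal distance
        near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
        far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
        return near, far

    # Example: 50 mm lens at f/8 focused at 3 m.
    near, far = depth_of_field(50, 8, 3000)
    print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")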


   3. Capturing information


         1. High Dynamic Range (HDR); a merging sketch follows this list

         2. Expanded Color
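
The merging sketch referenced above is a simplified version of the radiance-map idea behind HDR capture (see the Debevec link in the Further Reading below). It assumes linear, RAW-like pixel values in [0, 1]; real JPEGs would first need the camera response curve inverted.

    import numpy as np

    def merge_brackets(images, exposures):
        """Merge bracketed exposures into one radiance estimate.

        Each pixel's radiance is estimated as value / exposure_time,
        averaged with a hat weight that trusts mid-tones more than
        clipped shadows or highlights.
        """
        num = np.zeros_like(images[0])
        den = np.zeros_like(images[0])
        for img, t in zip(images, exposures):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # weight peaks at 0.5
            num += w * img / t
            den += w
        return num / np.maximum(den, 1e-6)

    # Toy example: three synthetic brackets of one gradient "scene".
    scene = np.linspace(0.0, 4.0, 256)
    times = [0.25, 1.0, 4.0]
    brackets = [np.clip(scene * t, 0.0, 1.0) for t in times]
    hdr = merge_brackets(brackets, times)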


Further Reading and support material:


http://www.hdrsoft.com/


http://www.luminous-landscape.com/tutorials/hdr.shtml


http://www.debevec.org/Research/HDR/




   4. Compression


Data compression algorithms for visual images are designed for optimal mathematical and electronic processing efficiency, often ignoring knowledge about human sensory processes. Impedance losses may be considerable due to the continual saccadic motion of the eyeball. There are several basic types of compression (a short code sketch illustrating the last two follows the list):


        - Spatial compression, to reduce the redundancy in digital sampling, usually a lossless exercise as long as the sampling is done correctly in the first place. (With film, analog spatial compression is represented by using smaller sizes of film for storage and enlarging display images from the negatives);


        - Chromatic compression, whereby portions of the spectrum are simply not captured and therefore cannot be reproduced. RGB or tri-chromatic imaging systems are an example of lossy spectral compression, which adds undue complexity to recreating perfect color;


        - Dynamic range compression, whereby tonal range or contrast sensitivity functions are compressed, usually by using steeper slopes for modulation curves or linearizing what should be a complex function. Ignoring acutance for the moment, the results of dynamic compression can be understood by comparing an 8x10 Ansel Adams contact print to a postcard of the same scene, or a 4-bit color Web image to a 16-bit scan of a color transparency or to the color transparency itself; and,


        - Acutance compression, whereby edge enhancement and other contrast and chromatic tricks are used to make an image appear sharp even though it actually has less detail than the original. Most commercial digital video is of that nature, disguising all the compression artifacts with super-saturated color and hyper edge shading.
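
The sketch referenced above illustrates the last two compression types: dynamic range compression as a simple power-law tone curve, and acutance enhancement as an unsharp mask. The gamma value and blur are arbitrary stand-ins (a real pipeline would use a measured tone curve and a Gaussian blur).

    import numpy as np

    def compress_dynamic_range(img, gamma=0.45):
        """Squeeze a wide luminance range with a steeper tone curve."""
        return np.clip(img, 0.0, 1.0) ** gamma

    def unsharp_mask(img, amount=1.0):
        """Make edges look sharper without adding real detail."""
        # 3-tap box blur along rows, a crude stand-in for Gaussian blur.
        blurred = (np.roll(img, 1, axis=1) + img
                   + np.roll(img, -1, axis=1)) / 3.0
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

    # Toy example: a soft horizontal ramp gains apparent "snap".
    img = np.tile(np.linspace(0.2, 0.8, 64), (8, 1))
    out = unsharp_mask(compress_dynamic_range(img))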


Without a full understanding of how the HVS deals with luminance, chromaticity, acutance, and phase relationships, in both temporal and spatial modalities, compression engineers tend to create a mismatch between the human sensory processes and artificial imaging devices rather than enhance the sensation of reality and potentially augment what one can see when the observer is in the field.


Digital systems include data compression algorithms that are mathematically efficient, such as MPEG and JPEG for motion and still images. These use discrete cosine transform (DCT) techniques; newer versions of MPEG use fractal and wavelet theory. But their results are constrained to a narrow range of scenes and human visual responses. The input data makes many unsubstantiated assumptions about cortical processes that increase the impedance between the HVS and artificial image devices rather than enhance the sensation of reality. Something is clearly missing in these data input assumptions.
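
A minimal illustration of the DCT step at the heart of JPEG: an 8x8 pixel block becomes 8x8 frequency coefficients, most of the energy lands in the low-frequency corner, and discarding the rest is where detail (and, per the argument above, HVS-relevant information) is lost. The toy block and the 3x3 cutoff are arbitrary choices.

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
    coeffs = dctn(block, norm="ortho")       # 8x8 frequency coefficients

    kept = np.zeros_like(coeffs)             # crude "quantization":
    kept[:3, :3] = coeffs[:3, :3]            # keep low frequencies only
    approx = idctn(kept, norm="ortho")
    print(f"max reconstruction error: {np.abs(block - approx).max():.3f}")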


         1. MPEG mismatch to HVS

         2. Tri-color compression

         3. Non-destructive compression

         4. JPEG or RAW?


   5. Matching images to the Human Vision System


    * Color: No matter how three primary colors are defined or arranged, no fixed tri-chromatic system can reproduce all the colors that a human can see, though it can be shown that any color, if broadly defined, can be reproduced approximately by three select, suitably spectrally spaced filtered lights. Imaging cameras using fixed RGB systems leave out details of the complex waveforms or spectral signatures that the HVS does see.


Conventional tri-chromaticity, including its opponent color corollary, does not explain the HVS’ ability to detect chromatic signatures such as fluorescence and textures, which add to our sense of reality. The opponent theory only accounts for relative chromatic information. Therefore, to improve the color response between an imaging system and the HVS, it would be necessary to capture and present the full spectral signature of a scene in full-motion. A frequency analyzer theory, such as is accepted for the human cochlea, could explain how we process such information.
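
To make the tri-chromatic information loss concrete, the sketch below collapses a full spectral signature to three numbers by integrating against three sensitivity curves. The Gaussian curves are crude stand-ins for real color-matching functions, not CIE data; distinct spectra whose three integrals happen to coincide are metamers the sensor cannot tell apart.

    import numpy as np

    wl = np.arange(380.0, 781.0, 5.0)  # wavelengths, nm

    def band(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Placeholder "red", "green", "blue" sensitivities (not CIE data).
    sensitivities = np.stack([band(600, 40), band(550, 40), band(450, 30)])

    smooth = band(550, 80)                             # broad reflectance
    spiky = 0.8 * band(540, 10) + 0.6 * band(610, 10)  # fluorescent-like
    for spectrum in (smooth, spiky):
        rgb = sensitivities @ spectrum * 5.0           # 5 nm spacing
        print(np.round(rgb, 2))                        # 81 samples -> 3 numbers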


    * Luminance: The contrast ratio of a real scene as sensed by the HVS is dependent on modulation and color functions that may be distorted by compressed irradiance. Radiance characteristics of textures, materials that are metallic, iridescent, luminescent, or fluorescent, etc., are impossible to replicate using only tri-chromatic systems with linear and non-responsive illuminance functions. Moreover, conventional systems, especially digital sampling with incorrect response functions, tend to suppress critical spectral signatures that may be important in scientific work.


    * Color temperature and color balance: Television images are captured using cameras that are color balanced at 3200 kelvin. CRT, LCD, and DLP displays and projectors are color balanced for approximately 5600 kelvin. (CRT displays in Japan are color balanced for 9000 kelvin.) These color temperature inconsistencies shift the color balance of the images seen on television and computer monitors towards blue. Ambient illumination also varies, from full-spectrum sunlight to gaseous discharge lamps emitting only a few spectral lines.
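
A minimal sketch of the per-channel correction implied above (a von Kries-style white balance). The RGB white points below are illustrative placeholders for tungsten and daylight rendering, not measured values.

    import numpy as np

    white_3200k = np.array([1.00, 0.75, 0.45])  # warm tungsten cast (assumed)
    white_5600k = np.array([1.00, 1.00, 1.00])  # neutral daylight target

    gain = white_5600k / white_3200k            # per-channel correction

    def white_balance(img):
        """img: float array (..., 3) in linear light; returns corrected copy."""
        return np.clip(img * gain, 0.0, 1.0)

    # A pixel that was neutral under 3200 K comes out gray after correction.
    print(white_balance(0.5 * white_3200k))     # -> [0.5 0.5 0.5]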


Further Reading and support material:


http://www.outbackphoto.com/dp_essentials/index.html


http://www.luminous-landscape.com/tutorials/contrast-enhancement.shtml


http://www.photoxels.com/tutorial_sharpen_display.html



   6. Workflow for taking Digital Color Images


Further Reading and support material:


http://www.outbackphoto.com/workflow/index.html


Printing


         1. Color space

         2. ICC profiles


Further Reading and support material:


http://people.csail.mit.edu/ericchan/


http://www.greatprinterprofiles.com/colormgmt.html


http://www.outbackphoto.com/printinginsights/pi.html




   7. Time Lapse


Further Reading and support material:



http://www.bmumford.com/photo/camaccess.html


http://www.granitebaysoftware.com/


http://www.sciencephotography.com/how2do2.shtml




   8. 3D techniques


Further Reading and support material:


http://www.stereoscopy.com/3d-concepts/


http://www.pokescope.com/cameras/shepherd.html




   9. Panoramas and ultra-resolution photography


Further Reading and support material:


http://www.panavue.com/index.htm


http://www.peakpanoramas.co.uk/


http://photocreations.ca/panotools/index.html


http://www.kekus.com/index.html


http://www.nurons.net/pancam/


http://www.flong.com/writings/lists/list_slit_scan.html


http://www.outbackphoto.com/workflow/wf_48/essay.html


  10. Imaging out of the RGB Box


    * Light and the full spectrum: There is substantial evidence that the human visual system detects the color or spectral signature of light directly. Any good photographer of fashion, products, or objets d’art understands this implicitly because of the difficulty of reproducing, via metameric filtration, the complex fluorescent and iridescent dyes and colors critical to colored objects. It is unlikely that the HVS itself works via metameric bandpass filtering: there has never been a satisfactory explanation of how we see the full spectrum via fixed filters, and moreover, no metameric system built following physiological tri-chromatic parameters has ever reproduced all the colors that we see.


If the selected absorption filters are narrow-band, they cannot metamerically yield the complex waveforms or spectral signatures found in natural scenes. As a result, engineers use a number of inconsistent tricks to reproduce even a subset of colors. This inconsistency makes accurate color reproduction extremely tedious and in many cases impossible. (By using more primary colors, the graphic arts community does somewhat better in reproduction than tri-chromatic still cameras do. And, in printing processes, primaries may be specially selected to conform to a specific scene or paper color space.) The HVS does not seem to have this problem, sensing a wider range of color than any electronic or film system. Instead, the HVS may actively oppose mismatches between tri-chromatic imaging systems and its own spectral processes.


Further Reading and support material:


http://www.drycreekphoto.com/tools/printer_gamuts/gamutmodel.html


http://www.outbackphoto.com/dp_essentials/dp_essentials_03/essay.html


  11. Night Color Imaging


Further Reading and support material:


http://www.williamoptics.com/


http://www.photoxels.com/tutorial-night-photography.html




  12. Lighting




Copyright 2016 - All rights reserved - Creative Technology, LLC