US6088470A - Method and apparatus for removal of bright or dark spots by the fusion of multiple images - Google Patents

Info

Publication number
US6088470A
Authority
US
United States
Prior art keywords
image
subject
illuminator
additional
pixels
Legal status
Expired - Fee Related
Application number
US09/013,758
Inventor
Theodore A. Camus
Marcos Salganicoff
Thomas A. Chmielewski, Jr.
Keith James Hanna
Current Assignee
Sarnoff Corp
Sensar Inc
Original Assignee
Sarnoff Corp
Sensar Inc
Application filed by Sarnoff Corp and Sensar Inc
Priority to US09/013,758
Assigned to SENSAR, INC. and SARNOFF CORPORATION (ASSIGNMENT OF ASSIGNORS INTEREST). Assignors: CAMUS, THEODORE A.; CHMIELEWSKI, JR., THOMAS A.; HANNA, KEITH JAMES; SALGANICOFF, MARCOS
Priority to PCT/US1999/001541 (WO1999038121A1)
Priority to KR1020007008222A (KR20010040433A)
Priority to AU23411/99A (AU2341199A)
Priority to EP99903372A (EP1050019A1)
Priority to JP2000528952A (JP2002501265A)
Publication of US6088470A
Application granted
Assigned to PERSEUS 2000, L.L.C., AS AGENT (SECURITY AGREEMENT). Assignors: IRIDIAN TECHNOLOGIES, INC.
Assigned to IRIDIAN TECHNOLOGIES, INC. (RELEASE & TERMINATION OF INTELLECTUAL PROPERTY SEC). Assignors: PERSEUS 2000, L.L.C.
Assigned to BANK OF AMERICA, N.A. (SECURITY AGREEMENT). Assignors: IBT ACQUISITION, LLC; IDENTIX IDENTIFICATION SERVICES, LLC; IDENTIX INCORPORATED; IMAGING AUTOMATION, INC.; INTEGRATED BIOMETRIC TECHNOLOGY, LLC; IRIDIAN TECHNOLOGIES, INC.; L-1 IDENTITY SOLUTIONS, INC.; SECURIMETRICS, INC.; TRANS DIGITAL TECHNOLOGIES CORPORATION
Assigned to BANK OF AMERICA, N.A. (SECURITY INTEREST). Assignors: IRIDIAN TECHNOLOGIES, INC.
Assigned to IRIDIAN TECHNOLOGIES LLC (RELEASE BY SECURED PARTY). Assignors: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography
    • A61B 3/15 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing
    • A61B 3/156 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing; for blocking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/82 Protecting input, output or interconnection devices
    • G06F 21/83 Protecting input, output or interconnection devices; input devices, e.g. keyboards, mice or controllers thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction


Abstract

A reliable method and apparatus for illuminating and imaging eyes uses multiple light sources to produce multiple images of a subject, each created under illumination by a different illuminator. A composite image of the subject is formed by selecting pixels based upon their gray scale values or by using pyramid image processing. A composite image can be created which is free of the bright spots commonly caused by reflection of illumination from eyeglasses that may be worn by a subject, or which is free of dark shadows, or which is free of both bright spots and dark shadows.

Description

FIELD OF THE INVENTION
The invention relates to the removal of bright spots or shadows from images created by a camera where the bright spots and shadows result from the type and location of illuminators.
BACKGROUND OF THE INVENTION
The phenomenon of bright spots and shadows appearing in images taken by a camera of a subject under one or more lights or other illuminators is well known. Their presence may be considered a mere inconvenience, or it may render an image unacceptable for its intended purpose. Consequently, professional photographers and video camera operators are acutely aware of the light sources that are present in a scene. In many situations, such as in a photography studio, it is possible to position the subject and arrange the lighting to eliminate bright spots or glare and to minimize shadows. The problem becomes more difficult in a television studio, where the subject moves rather than remaining still or even staying in a specific location. Even where bright spots can be eliminated by careful positioning of the camera and lighting, that positioning can be quite time consuming. In addition, if a person is required to stay at a particular location, that person may appear to be nervous or otherwise uncomfortable. Hence, control of lighting and camera positions often does not solve the problem.
Camera images are used in a variety of locations to identify a subject whose picture has been taken. These situations range from the identification of people for security or surveillance to the identification of products and product defects in automated manufacturing lines. Bright spots often occur if a person is wearing glasses or reflective clothing and if a product has a highly reflective surface or is contained in a glass or clear plastic package. The presence of a bright spot in an image may make it impossible to identify the person, product or product defect from the image. Hence, there is a need for a method and apparatus for eliminating bright spots from images.
The art has developed a number of processes for removing artifacts such as bright spots from images. These techniques range from airbrushing to digitizing the image and then applying one or more algorithms to it. Some techniques combine two or more images. Many of these prior art methods are quite time consuming, taking several minutes or even hours to complete. Some require computer hardware with large memory capacities, which can be quite expensive. Thus, there is a preference for image processing that can be done rapidly using less memory and less expensive computer hardware.
There are several methods, known as biometrics, for recognizing or identifying an individual from personal biological characteristics. Some of these methods involve imaging the face or eye and analyzing facial features, retinal vascular patterns, or patterns in the iris. In recent years there has been a demand for more reliable systems to rapidly identify individuals, particularly persons who desire access to a secured area or system. A common example of such a secured system is the automated teller machine, which allows authorized users to conduct banking transactions. Many of these systems are used by a wide variety of people, who very often demand quick as well as accurate identification. U.S. Pat. No. 5,717,512 to Chmielewski et al. discloses a compact system for rapidly obtaining images of the eye of a user of an automated teller machine. These images are then used to identify the user based upon patterns in the user's iris.
A technique for accurately identifying individuals using iris recognition is described in U.S. Pat. No. 4,641,349 to Flom et al. and in U.S. Pat. No. 5,291,560 to Daugman. The systems described in these references require clear, well-focused images of the eye. The presence of eyeglasses tends to interfere with good eye images because of reflections on the eyeglasses. Contact lenses may also cause reflections that interfere with eye imaging. However, because contact lenses have a greater curvature than eyeglasses, reflections from contact lenses are smaller and less of a problem than reflections from eyeglasses.
Reflections may come from the system's own illumination. In this case, calculations show that the irradiance (illuminance for visible light) at the camera lens from the specular reflection of an illuminator from eyeglasses is on the order of 1000 times greater than the irradiance at the camera of the image of the eye caused by diffuse reflection of the illuminator. A camera viewing the eye must have a combination of lens, aperture, and exposure time that will result in a sufficiently bright image of the eye. Thus, the much brighter specular reflection of the illuminator will saturate the picture elements (pixels) of the camera's image sensor that cover the area of the specular reflection, and all information about the portion of an eye image obscured by this reflection will be lost.
It is possible to ask the subject to remove his or her eyeglasses in order to get a good image of the subject's eye. However, this is potentially annoying, and the subject may refuse to remove the glasses, or avoid using the system. Consequently, there is a need for an imaging system that can obtain useful images of the eye while minimizing the effect of bright spots, often called specular reflections, caused by the system's own illumination without requiring the subject to remove any eyeglasses or contact lenses that may be present.
Since specular reflection of illumination on eyeglasses depends on the geometric arrangement of the illumination with respect to the eyeglasses and the imaging camera, one could use multiple light sources with relatively wide spacing from one another and turn off whichever light sources cause specular reflections on the eyeglasses that obscure the camera's view of the iris. Yet these techniques will not eliminate all specularities in images of all subjects using a system, because the subjects change while, for practical reasons, the positions of the lighting and camera must remain fixed or can be varied very little. Nevertheless, the same physical arrangement of camera and illuminators may be used as a platform for a method of image fusion that removes the negative effects of these specular reflections.
In general, image fusion involves three steps: (1) acquiring two or more images of the same scene such that good data for each point in the scene may be obtained from at least one of the images, (2) detecting the good data at each point, and (3) merging the data from the images into a single image. Two or more images of the same scene may be created by using different sources and angles of illumination for each image, and one approach for finding good data and fusing it into a single image is multi-resolution image processing, also called pyramid processing.
In "A VLSI Pyramid Chip for Multiresolution Image Analysis" by Van der Wal and Burt (International Journal of Computer Vision, Vol. 8 No. 3, 1992, pp. 177-189), multiple types of pyramid processing of images are briefly but precisely described. In particular, the Laplacian pyramid is defined. As described in detail in "The Laplacian Pyramid as a Compact Image Code" by Burt and Adelson (IEEE Transactions on Communications, Vol. COM-31, No. 4, April 1983, pp. 532-540), the Laplacian pyramid represents an image as a set of bandpass components. The Laplacian pyramid representation of an image enables examination and filtering of various spatial frequency bands, and also the composition of a single image from spatial frequency components selected from multiple images.
Several United States patents show the use of the Laplacian pyramid and related multi-resolution image processing to achieve various objectives. In U.S. Pat. No. 4,661,986, for "Depth-of-focus Imaging Process Method", Adelson teaches a method of using pyramid processing to synthesize an optimally focused image from multiple images. U.S. Pat. Nos. 5,325,449 and 5,488,674, both titled "Method for Fusing Images and Apparatus Therefor", to Burt et al. teach the use of pyramid processing and directionally-sensitive spatial frequency filters to form a composite image with extended information content. The "Noise Reduction System" of Adelson et al. in U.S. Pat. No. 5,526,446 uses multi-resolution image processing to filter noise from images. These methods are directed to the hardware and procedures used to process images without concern as to how the images are obtained. They tend to require expensive hardware and can be relatively slow.
SUMMARY OF THE INVENTION
We provide a method and apparatus that combines camera and lighting control with a particular image fusion technique to produce high quality images that are free of bright spots or specularities. A variation of our method can be used to remove shadows or remove both dark spots and bright spots. The images produced by our method and apparatus can be used to identify the subject present in the image using even the most demanding subject identifying methods. In a preferred embodiment our images are used to identify individuals using the face or eye images. Our method and apparatus are particularly useful for creating images that are used to identify a person from that person's iris.
First we select multiple light sources with relatively wide spacing from one another. We fuse images of the subject captured in quick succession, each illuminated by a different light source. We use at least two images, each comprised of pixels having a gray scale value. The images are taken or modified so that we can pair pixels, one from each image, such that both pixels in the pair correspond to a single point on the subject. To create an image free of specular reflections we select the pixel from each pair having the lower gray scale value. Shadows can be removed by selecting the pixel having the higher gray scale value, but only if that value is below a threshold. The same technique can be used with three or more images by creating sets of pixels, one from each image, and selecting the minimum value pixel or the maximum value pixel from each set. In yet another variation we use pyramid image processing to remove bright spots, dark spots, or both.
Our method and apparatus have particular applicability to images created by video cameras, which contain a set of raster lines. For such images we can choose only the even raster lines as our first image and only the odd raster lines of the same image as our second image. These images contain half as much data as the complete image and thus can be processed faster. Our image fusion methods can be combined with image subtraction to remove ambient lighting and can also be used with other image processing techniques.
Other objects and advantages of our method and apparatus will become apparent from a description of certain present preferred embodiments thereof which are shown in the drawings.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a diagram showing two light sources used to illuminate an eye behind an eyeglass lens for imaging by a camera that is synchronized to the two light sources.
FIG. 2 is a view of an eye as seen by an imaging camera through an eyeglass lens with the view of the iris partially obscured by specular reflection of illuminating light sources.
FIG. 3 shows a left-illuminated eye image, a right-illuminated image of the same eye, and a third image formed by fusion of the other two images.
FIG. 4 is a diagram illustrating a first present preferred embodiment of our method for fusing two images.
FIG. 5 is a diagram of an off-center image.
FIG. 6 is a series of images created in a Laplacian pyramid.
FIG. 7 is a flowchart of a second present preferred method of removing bright spots by the fusion of two images using Laplacian pyramids.
FIG. 8 shows a right-illuminated facial image of a mannequin, a left-illuminated facial image of the same mannequin, and a third image formed by fusion of the other two images using Laplacian pyramids.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Although our method and apparatus can be used for images of any subject, they are particularly useful for obtaining images of the eye of a person wearing eyeglasses. In a preferred embodiment we use two illuminators and a video camera as illustrated by the diagram of FIG. 1. The eye 1 of a subject with eyeglass lens 3 is looking into a camera 11. The eye is illuminated by a first light source 2 and a second light source 4. The emission patterns of the light sources 2 and 4 are such that either of them generates illumination that is fairly even across the front surface of the eye 1 with sufficient intensity for the camera 11 to record a good image of the eye 1.
Instead of leaving both light sources 2 and 4 on during the time that a subject is present, the light sources 2 and 4 are pulsed or flashed in synchronization with the exposure times of the camera 11. This can be done using a strobing device 12 and an illumination controller 14 connected to the strobing device 12 and the camera 11. Both the intensity and duration of these pulses are controlled to get the correct exposure of the images of the eye 1.
At least one light path 7 from a light source 2 to the camera 11 produces a specular reflection from a first surface (front or back) of the eyeglass lens 3 and thus generates a virtual image 21 of the light source 2 that is visible to the camera 11 as seen in the camera's view illustrated in FIG. 2. Similarly, at least one light path 5 from a light source 4 to the camera 11 produces a specular reflection from the same first surface of the eyeglass lens 3 and thus generates a virtual image 23 of the light source 4 that is visible to a camera 11 as seen in the camera's view of an eye 1 illustrated in FIG. 2. Although FIG. 2 shows only one pair of virtual images 21 and 23 caused by a first surface of an eyeglass lens 3, the second surface of the lens will normally cause another pair of virtual images. These images act the same as the images caused by the first surface in all material respects. Both virtual image 21 and virtual image 23 may obscure portions of the iris of eye 1 in the view of the camera 11. Therefore, the controller 14 will turn off the first light source 2 which causes the virtual image 21, while continuing to activate the second light source 4, during exposure of a first image as shown on the left of FIG. 3. Immediately thereafter, the controller 14 will turn off the second light source 4 which causes the virtual image 23, while continuing to activate the first light source 2, during exposure of a second image as shown in the middle of FIG. 3. From these two images we form the composite image shown in the right portion of FIG. 3 using an image processor 16. That composite image is formed from the first image and the second image by choosing a pixel from corresponding locations in the first and second images at each pixel location based upon gray scale values of the pixels. This can be more easily understood by referring to FIGS. 4 and 5.
An image is comprised of a matrix of pixels, each having a gray scale value. Depending upon the size of an image there may be hundreds or even thousands of pixels arranged in an array. Each pixel has a unique x,y coordinate position. Therefore, if there are two images of the same subject there will be one pixel in each image that corresponds to a single point on the subject. If the two images were taken under different lighting conditions, the gray scale value of the pixel in the first image that corresponds to a selected point on the subject may be different from the gray scale value of the pixel in the second image that corresponds to the same point. In FIG. 4 there are three images formed from thirty-six pixels arranged in a six-by-six array. It should be understood that the images that are used and created will be substantially larger than the six-by-six images presented here to illustrate our concept. Indeed, we prefer to use a camera that produces an image which is 640 pixels by 480 pixels. Image 1 and Image 2 were taken under different lighting conditions which created a bright spot in each image. The bright spot is represented by the open circles. The remaining circles are shaded to indicate three gray scale values, represented by filled circles, circles containing several dots, and circles containing only three dots. The bright spot in each image has obscured a portion of the subject. Therefore, we select a pixel from either Image 1 or Image 2 for each pixel position, or x,y coordinate position, in the six-by-six array to form the Final Image in FIG. 4. That Final Image presents a clear image of the subject, which is the letter X.
In Image 1 and Image 2 the same pixel location in both images corresponds to the same unique point on the subject. It may happen that two images are presented where this is not true. One example of this occurrence is illustrated by FIG. 5, which shows much of the same content as Image 1 in FIG. 4. Should one image be off center relative to the other, it is necessary to preprocess one or both images so that a proper pairing can be made. There are several techniques well known to those skilled in the art which will accomplish this preprocessing. In the discussion that follows we assume that there has been minimal movement of the subject during the time from capture of the first image to capture of the second image, or that some preprocessing has occurred, to ensure that each physical point on the subject appears at the same (x,y) position in both images.
Normal video images are made up of lines of pixels called raster lines. These raster lines can be consecutively numbered and grouped into two fields: one field contains the odd raster lines and the other contains the even raster lines. If a first illuminator and a second illuminator are alternately activated at the same frame rate as the video camera, then one field of a single image will have been created with the first illuminator on and the other field of that image will have been created with the second illuminator on. Consequently, if video images are available we can select the even field as our first image and the odd field as our second image, or vice versa. If this is done we assume that adjacent pairs of pixels in the video image, one pixel from an odd raster line and the other from an even raster line, correspond to a single unique point on the subject.
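As an illustration, the sketch below (in Python with NumPy, which the patent does not use; the function name and the mapping of illuminators to fields are our assumptions) shows how a full interlaced frame could be split into the two field images just described.

```python
import numpy as np

def split_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an interlaced video frame into its two fields.

    Assumes the even raster lines (rows 0, 2, 4, ...) were exposed with
    the first illuminator active and the odd raster lines (rows 1, 3,
    5, ...) with the second illuminator active. Each field has half the
    rows of the full frame, e.g. 240 rows from a 480-row frame.
    """
    even_field = frame[0::2, :]  # first image: even raster lines
    odd_field = frame[1::2, :]   # second image: odd raster lines
    return even_field, odd_field
```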
One can consider the two initial images as image A and image B, with the final composite image being image C. If the first image has pixel values A(x,y), where x and y are coordinates of a two-dimensional, orthogonal coordinate system for specifying the position of pixels in the view of the camera, and the second image has pixel values B(x,y), then the composite image created in accordance with our method has pixel values C(x,y)=MIN(A(x,y), B(x,y)), where MIN is the minimum function that selects the least of its two parameters. We have observed that most, if not all, pixels which represent a specularity have a gray scale value of 255. By selecting the minimum gray scale value we effectively remove the specularities from the image. Shadows can be removed by selecting pixels having the maximum gray scale value. However, to avoid selecting pixels that represent specularities or saturation, we select the maximum gray scale value that is below a threshold gray scale value such as 255.
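The two selection rules just described reduce to an elementwise minimum and a thresholded maximum. The following sketch, again assuming NumPy and 8-bit gray scale images, illustrates both; the fallback to the darker pixel when both candidates reach the threshold is our reading of an edge case the text does not spell out.

```python
import numpy as np

def fuse_min(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """C(x,y) = MIN(A(x,y), B(x,y)). A specularity saturates pixels in
    only one of the two images, so taking the darker pixel of each pair
    removes it."""
    return np.minimum(a, b)

def fuse_max_below(a: np.ndarray, b: np.ndarray, threshold: int = 255) -> np.ndarray:
    """Shadow removal: take the brighter pixel of each pair unless its
    value is at or above the threshold (i.e. saturated or specular), in
    which case fall back to the darker pixel."""
    brighter = np.maximum(a, b)
    darker = np.minimum(a, b)
    return np.where(brighter < threshold, brighter, darker)
```

Applied to two registered input images, fuse_min would yield a specularity-free composite of the kind shown on the right of FIG. 3.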
Our method also overcomes any blooming that may occur. Calculations show that the irradiance (illuminance for visible light) at the camera lens from the specular reflection of an illuminator from eyeglasses is on the order of 1000 times greater than the irradiance at the camera of the image of the eye caused by diffuse reflection of the illuminator. A camera viewing the eye must have a combination of lens, aperture, and exposure time that will result in a sufficiently bright image of the eye. Thus, the much brighter specular reflection of the illuminator will saturate the picture elements (pixels) of the camera's image sensor that cover the area of the specular reflection, and all information about the portion of an eye image obscured by this reflection will be lost. Furthermore, the values of pixels surrounding the area of the specular reflection may be corrupted by the saturated pixels in a phenomenon called "blooming". This occurs because the pixels of charge-coupled devices (CCD's), the most common electronic imagers, are not well isolated from one another. As long as the two light sources 2 and 4 are sufficiently separated so that the virtual image 23 and its associated blooming does not overlap virtual image 21 and its associated blooming, every portion of the subject is clearly visible in at least one of the first image or the second image. Hence, in one or both of the left image and center image of FIG. 3 every portion of a person's iris can be clearly seen. Therefore, the rightmost composite image of FIG. 3 contains a good view of the entire iris.
The light sources 2 and 4 may be implemented with one or more high-power light-emitting diodes (such as the OD-669 IR LED array manufactured by Opto-Diode Corporation), a laser diode fed through an optical fiber, a laser fitted with a diverging lens, an incandescent lamp, or any other source that produces sufficient power with appropriate emission pattern in the appropriate spectral band. The light sources may produce visible light or non-visible light such as infrared.
Although FIG. 1 shows two light sources 2 and 4 arranged horizontally, two or more light sources may be arranged horizontally, vertically, radially, or in any other geometry so long as the spacing of the light sources is sufficient for the virtual images of the sources reflected from an eyeglass lens in the camera's view of an eye to be separated far enough so that the illumination controller can use one or more of the sources to sufficiently illuminate each portion of the iris without an obscuring specular reflection or blooming in at least one of the multiple images to be used to form the composite output image.
A second preferred embodiment of our method may use any of the arrangements of the first embodiment such as the one shown in FIG. 1, but the image processing uses Laplacian pyramid computations of both the first and second input images. This variation is preferred when trying to find the eyes in a wider view of a subject's face. The calculation of a Laplacian pyramid of a sample image is illustrated in FIG. 6.
The process begins with an original image which could be either the left or center image in FIG. 8. The original image in FIG. 6 is the middle image of FIG. 8. The original image is called the Level 0 Gaussian image. The column of images on the left is the Gaussian pyramid of the original image, which is composed of successively lower spatial frequency approximations of the original image. From top to bottom, these levels are identified as Level 0, Level 1, Level 2 and Level 3. It should be understood that additional levels may also be created. The column of images on the right is the Laplacian pyramid of the original image, which is composed of images representing the differences between successive levels of the Gaussian pyramid. From top to bottom, these are similarly called the Level 0, Level 1, Level 2 and Level 3 Laplacian images. Thus, the Laplacian image at Level 0 is the Level 0 Gaussian image minus the Level 1 Gaussian image. The Level 3 Laplacian image is the Level 3 Gaussian image minus the Level 4 Gaussian image, which is not shown in FIG. 6. In effect, these are successively lower spatial frequency passband components of the original image. Furthermore, the original image may be recovered from the Laplacian pyramid and the highest level (lowest frequency approximation) of the Gaussian pyramid by reversing the difference operations to perform an inverse Laplacian transform. The mathematical details of this pyramid processing are given in "A VLSI Pyramid Chip for Multiresolution Image Analysis" by Van der Wal and Burt (International Journal of Computer Vision, Vol. 8, No. 3, 1992, pp. 177-189), where the required calculation of the Laplacian pyramid is called the RE (for reduce-expand) Laplacian.
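The reduce-expand construction can be sketched with OpenCV's pyrDown and pyrUp, as below; this is an illustration of the RE Laplacian described in the cited paper, not the authors' implementation, and the default level count and float arithmetic are our choices.

```python
import cv2
import numpy as np

def laplacian_pyramid(image: np.ndarray, levels: int = 5):
    """Build a Gaussian pyramid and its RE Laplacian pyramid.

    Each Laplacian level is a Gaussian level minus the next (coarser)
    Gaussian level expanded back to the same size. The Laplacian levels
    plus the coarsest Gaussian image suffice to invert the transform.
    """
    gauss = [image.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))  # REDUCE: blur and subsample
    laps = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        expanded = cv2.pyrUp(gauss[i + 1], dstsize=(w, h))  # EXPAND
        laps.append(gauss[i] - expanded)  # bandpass component at level i
    return laps, gauss[-1]

def inverse_laplacian(laps, base):
    """Inverse Laplacian transform: reverse the differences, coarse to fine."""
    image = base
    for lap in reversed(laps):
        h, w = lap.shape[:2]
        image = cv2.pyrUp(image, dstsize=(w, h)) + lap
    return image
```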
Using these pyramid processes, the second preferred embodiment proceeds according to the flowchart shown in FIG. 7. After starting at step 30, the system captures a right-illuminated image of a subject's face (step 32) as shown in the leftmost image of FIG. 8. Then the system captures a left-illuminated image of a subject's face (step 34) as shown in the middle image of FIG. 8. The system computes five-level Laplacian pyramids for both of these images in step 36. Only four levels of Laplacian images for the middle image of FIG. 8 are shown in FIG. 6.
In step 38, we construct composite images from the higher levels (3 and 4) of the two Laplacian pyramids computed in step 36. Specifically, if the image at a particular level of the Laplacian pyramid derived from the left-illuminated image has pixel values L(x,y), where x and y are coordinates of a two-dimensional, orthogonal coordinate system for specifying the position of pixels in the view of the camera, and the image at the same level of the Laplacian pyramid derived from the right-illuminated image has pixel values R(x,y), then the composite image has pixel values C(x,y)=MAX(L(x,y), R(x,y)), where MAX is the maximum function that selects the larger of its two parameters. As in the first preferred embodiment, we assume that there has been minimal movement of the subject during the time from capture of the left-illuminated image to capture of the right-illuminated image, or that some preprocessing has occurred, to ensure that each physical point on the subject appears at the same (x,y) position in the left-illuminated and right-illuminated images.
In step 40, we construct composite images from the lower levels (0, 1, and 2) of the two Laplacian pyramids computed in Step 36. Specifically, if the image at a particular level of the Laplacian pyramid derived from the left-illuminated image has pixel values L(x,y) where x and y are coordinates of a two-dimensional, orthogonal coordinate system for specifying the position of pixels in the view of the camera, and the image at the same level of the Laplacian pyramid derived from the right-illuminated image has pixel values R(x,y), then the composite image has pixel values C(x,y)=MIN(L(x,y), R(x,y)) where MIN is the minimum function that selects the least of its two parameters. As in the first preferred embodiment, we assume that there has been minimal movement of the subject during the time from capture of the left-illuminated image to capture of the right-illuminated image, or that some preprocessing has occurred, to ensure that each physical point on the subject appears at the same (x,y) position in the left-illuminated and right-illuminated images.
In step 42, we take the composite images formed at each level in steps 38 and 40 to be a third Laplacian pyramid. On this third pyramid, we perform an inverse Laplacian transform to get a filtered image as illustrated in the rightmost image of FIG. 8. The process is now complete (step 44).
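Putting steps 36 through 42 together, a sketch of the whole fusion might look like the following. It reuses laplacian_pyramid and inverse_laplacian from the sketch above, and merging the two base Gaussian images with MAX is our assumption, since the text does not say how the coarsest level is combined.

```python
import numpy as np

def fuse_laplacian(left: np.ndarray, right: np.ndarray,
                   levels: int = 5, min_levels: int = 3) -> np.ndarray:
    """Fuse left- and right-illuminated images per FIG. 7 (sketch).

    MIN on the fine levels (0 to min_levels-1) suppresses small bright
    specularities (step 40); MAX on the coarser levels suppresses broad
    shadows (step 38).
    """
    laps_l, base_l = laplacian_pyramid(left, levels)   # step 36
    laps_r, base_r = laplacian_pyramid(right, levels)
    fused = []
    for i, (ll, lr) in enumerate(zip(laps_l, laps_r)):
        if i < min_levels:
            fused.append(np.minimum(ll, lr))  # step 40: levels 0-2
        else:
            fused.append(np.maximum(ll, lr))  # step 38: levels 3-4
    base = np.maximum(base_l, base_r)  # assumed: MAX at the base level
    return inverse_laplacian(fused, base)  # step 42
```

Swapping the roles of minimum and maximum here would yield the variation, described below, that filters small dark spots instead.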
It is easy to see in FIG. 8 that the specular reflections of illumination on the eyeglasses that are visible in the two input images (left and middle) have been removed in the filtered output image (right). This is achieved in the calculations of step 40, which are very similar to the simple calculations of the first preferred embodiment.
It may also be seen in FIG. 8 that the dark shadows on the edge of the face that is away from the illuminating light source in each of the input images (left and middle) have been removed in the filtered output image (right). This is achieved in the calculations of step 38, which are different from the simple calculations of the first preferred embodiment.
The overall effect of this second preferred embodiment is to remove small bright spots, usually caused by specular reflections, without including larger dark regions that may occur in one or more of the input views. As in the first preferred embodiment, more than two input images may be used.
Several variations of the second preferred embodiment may be implemented. The Laplacian pyramids of the input images may have any number of levels appropriate to the resolution of the input images. Those skilled in the art will recognize that the first embodiment of our method shown in FIGS. 3, 4 and 5 is an inverse Laplacian transform of a degenerate Laplacian pyramid consisting of a single base Gaussian image. The grouping of the higher level composite images for maximum selection and the lower level composite images for minimum selection may be adjusted to filter out larger or smaller bright spots by including more or fewer levels in the lower level group for minimum selection. The roles of the maximum and minimum functions may be reversed to filter out smaller dark spots without including larger bright regions that may occur in one or more of the input images. We may also replace minimum with maximum in the first preferred embodiment in order to remove dark regions, usually caused by shadows, in one or more of the input images.
In yet another embodiment we create the initial images, previously called Image 1 and Image 2, by image subtraction. In this embodiment we capture one image with only the first illuminator on, a second image with only the second illuminator on, and a third image with both illuminators off. We then create Image 1 by subtracting the third image from the first image and create Image 2 by subtracting the third image from the second image. This subtraction has the effect of removing ambient illumination. We prefer to implement our method using a video camera. In a present preferred implementation we have a first video image in which one field has the first illuminator on and the second field has both illuminators off. In a second video image both fields have the second illuminator on. We then subtract field two from field one of the first image to create Image 1. Field two of the first image (both illuminators off) is subtracted from either field of the second image to form Image 2. We have implemented this procedure using a camera which creates a 640 pixel by 480 pixel image. Because each field contains only every other line of the frame, the resulting images after subtraction are 640 pixels by 240 pixels, or half size. Yet we sometimes need an illuminated, full size image in order to locate a corneal reflection that tells us the position of an eye in the image. Because both fields of the second video image are illuminated, this illumination scheme also provides that full size, fully illuminated image.
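The field arithmetic of this illumination scheme can be sketched as follows, assuming each interlaced 640 by 480 frame arrives as a NumPy array whose even rows form field one and whose odd rows form field two; the function and variable names are our own for illustration.

import numpy as np

def split_fields(frame):
    # Split an interlaced 640 x 480 frame into its two 640 x 240 fields;
    # signed arithmetic avoids underflow in the subtractions below.
    return frame[0::2, :].astype(np.int16), frame[1::2, :].astype(np.int16)

def ambient_subtracted_images(frame1, frame2):
    # frame1: field one lit by the first illuminator, field two unlit.
    # frame2: both fields lit by the second illuminator.
    lit1, dark = split_fields(frame1)
    lit2, _ = split_fields(frame2)
    image1 = np.clip(lit1 - dark, 0, 255).astype(np.uint8)  # Image 1, 640 x 240
    image2 = np.clip(lit2 - dark, 0, 255).astype(np.uint8)  # Image 2, 640 x 240
    return image1, image2

Either field of the second frame may serve as the illuminated input, since both were captured with the second illuminator on; the second frame itself remains available as the full size, fully illuminated image.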
We have described the present invention as used for imaging eyes or faces. However, there are other applications for this invention in which an image is taken of an object that is behind a lens or other light transmissive curved structure. For example, this method and apparatus could be used to obtain images of products packaged in light transmissive packaging such as glass jars or blister packages. Such images could be used for quality control or product identification purposes.
The light transmissive structure is not limited to clear materials. The structure may pass only limited wavelengths of light, which may be visible or invisible to the human eye. A common example of such a structure is the plastic used in sunglasses.
Although we have shown certain present preferred embodiments of our invention, it should be distinctly understood that the invention is not limited thereto, but may be variously embodied within the scope of the following claims.

Claims (36)

We claim:
1. A method for creating an image of at least a portion of a head of a person who is wearing eyeglasses or contact lenses and who is positioned at a subject location which image is free of specularities comprising:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of at least a portion of a head of a person who is wearing eyeglasses or contact lenses while the person's head is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the person's head and having a gray scale value;
c. capturing at least one additional image of at least a portion of a head of a person who is wearing eyeglasses or contact lenses while the person's head is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the person's head and having a gray scale value;
d. creating sets of pixels such that each set contains a pixel from the first image and a pixel from each additional image and all pixels in each set correspond to a same location relative to the subject; and
e. constructing an image of the subject that is free of specularities by selecting one pixel from each set of pixels by choosing that pixel which has a minimum gray scale value.
2. The method of claim 1 wherein there is only one additional illuminator and only one additional image.
3. The method of claim 1 wherein the illuminators emit infrared light.
4. The method of claim 1 wherein at least one of the illuminators is an illuminator selected from the group consisting of light-emitting diodes, laser diodes, lasers fitted with a diverging lens, and an incandescent lamp.
5. The method of claim 1 also comprising the step of using the image of the at least a portion of a head of a person that is free of specularities to identify the person.
6. The method of claim 5 wherein the person is identified using a biometric identification method.
7. The method of claim 6 wherein the biometric identification method is iris identification.
8. The method of claim 1 wherein the first image is captured by:
a. capturing a first preliminary video image of the at least a portion of a head of a person which contains a first field created while the at least a portion of a head of a person is illuminated by the first illuminator and not illuminated by the at least one additional illuminator and a second field which was created while the at least a portion of a head of a person was not illuminated by any illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the at least a portion of a head of a person and having a gray scale value; and
b. forming the first image by subtraction of the second field from the first field.
9. The method of claim 8 wherein the at least one additional image is captured by:
a. capturing a second preliminary video image of the subject which contains two fields created while the at least a portion of a head of a person is illuminated by the at least one additional illuminator and not illuminated by the first illuminator and a second field which was created while the at least a portion of a head of a person was not illuminated by the first illuminator and wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the at least a portion of a head of a person and having a gray scale value; and
b. forming the second image by either subtraction from the first field of the second preliminary image of the second field of the first preliminary image or by subtraction from the second field of the second preliminary image of the second field of the first preliminary image.
10. A method for creating an image of a subject positioned at a subject location which image is free of specularities comprising:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of the subject while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
c. capturing at least one additional image of the subject while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
d. creating sets of pixels such that each set contains a pixel from the first image and a pixel from each additional image and all pixels in each set correspond to a same location relative to the subject;
e. constructing an image of the subject that is free of specularities by selecting one pixel from each set of pixels by choosing that pixel which has a minimum gray scale value;
f. capturing a third image of the subject wherein the third image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value; and
g. including a pixel from the third image in at least some of the sets of pixels.
11. A method for creating an image of a subject that is located in a package at least a portion of which package is light transmissive which image is free of specularities comprising:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of the subject while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
c. capturing at least one additional image of the subject while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
d. creating sets of pixels such that each set contains a pixel from the first image and a pixel from each additional image and all pixels in each set correspond to a same location relative to the subject; and
e. constructing an image of the subject that is free of specularities by selecting one pixel from each set of pixels by choosing that pixel which has a minimum gray scale value.
12. The method of claim 11 wherein the illuminators emit infrared light.
13. The method of claim 11 wherein at least one of the illuminators is an illuminator selected from the group consisting of light-emitting diodes, laser diodes, lasers fitted with a diverging lens, and an incandescent lamp.
14. The method of claim 11 also comprising the step of using the image of the subject that is free of specularities to identify the subject.
15. The method of claim 11 wherein the first image is captured by:
a. capturing a first preliminary video image of the subject which contains a first field created while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator and a second field which was created while the subject was not illuminated by any illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value; and
b. forming the first image by subtraction of the second field from the first field.
16. The method of claim 15 wherein the at least one additional image is captured by:
a. capturing a second preliminary video image of the subject which contains two fields created while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator and a second field which was created while the subject was not illuminated by the first illuminator and wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value; and
b. forming the second image by either subtraction from the first field of the second preliminary image of the second field of the first preliminary image or by subtraction from the second field of the second preliminary image of the second field of the first preliminary image.
17. A method for creating a shadow free image of a subject positioned at a subject location comprising:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of the subject while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
c. capturing at least one additional image of the subject while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
d. creating sets of pixels such that each set contains a pixel from the first image and a pixel from each additional image and all pixels in each set correspond to a same location relative to the subject; and
e. constructing an image of the subject by selecting one pixel from each set of pixels by choosing that pixel which has a maximum gray scale value that is less than a threshold gray scale value.
18. The method of claim 17 wherein the threshold gray scale value is 255.
19. The method of claim 17 wherein there is only one additional illuminator and only one additional image.
20. The method of claim 17 wherein the subject is at least a portion of a head of a person who is wearing eyeglasses or contact lenses.
21. The method of claim 17 wherein the subject is located in a package at least a portion of which is light transmissive.
22. The method of claim 17 wherein the illuminators emit infrared light.
23. The method of claim 17 wherein at least one of the illuminators is an illuminator selected from the group consisting of light-emitting diodes, laser diodes, lasers fitted with a diverging lens, and an incandescent lamp.
24. A method for creating an image of a subject positioned at a subject location comprised of:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of the subject while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
c. capturing at least one additional image of the subject while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
d. computing a Laplacian pyramid for each image;
e. selecting a first particular level of the Laplacian pyramids computed for each image such that a same level is selected for each image;
f. creating sets of pixels such that each set contains a pixel from the selected level of the Laplacian pyramid for each of the first image and each additional image and all pixels in each set correspond to a same location relative to the subject;
g. identifying a pixel in each set which has a maximum value and identifying the image from which each selected pixel was derived; and
h. constructing a first composite image of the subject by selecting one pixel from one of the first image and each additional image that corresponds to each pixel identified in step g.
25. The method of claim 24 wherein there is only one additional illuminator and only one additional image.
26. The method of claim 24 also comprising:
a. selecting a second particular level of the Laplacian pyramids computed for each image such that a same level is selected for each image and the second particular level selected is adjacent to the first selected level;
b. creating sets of pixels such that each set contains a pixel from the second selected level of the Laplacian pyramid for each of the first image and each additional image and all pixels in each set correspond to a same location relative to the subject;
c. identifying a pixel in each set which has a minimum value and identifying the image from which each selected pixel was derived; and
d. constructing a second composite image of the subject by selecting one pixel from one of the first image and each additional image that corresponds to each pixel identified in step c.
27. The method of claim 26 also comprising combining the first composite image and the second composite image to form a combined composite image.
28. The method of claim 26 also comprising:
a. selecting at least one additional particular level of the Laplacian pyramids computed for each image such that a same level is selected for each;
b. creating sets of pixels for each selected additional particular level such that each set contains a pixel from the additional particular selected level of the Laplacian pyramid for each of the first image and each additional image and all pixels in each set correspond to a same location relative to the subject;
c. identifying a pixel in each set which has a minimum value and identifying the image from which each selected pixel was derived; and
d. constructing an additional composite image of the subject for each additional selected level by selecting one pixel from one of the first image and each additional image that corresponds to each pixel identified in step c.
29. The method of claim 28 also comprising:
a. creating a constructed Laplacian pyramid which contains the first composite image, the second composite image and at least one additional composite image; and
b. constructing an image of the subject by applying an inverse Laplacian transform procedure to the created Laplacian pyramid.
30. The method of claim 26 also comprising:
a. selecting at least one additional particular level of the Laplacian pyramids computed for each image such that a same level is selected for each;
b. creating sets of pixels for each selected additional particular level such that each set contains a pixel from the additional particular selected level of the Laplacian pyramid for each of the first image and each additional image and all pixels in each set correspond to a same location relative to the subject;
c. identifying a pixel in each set which has a maximum value and identifying the image from which each selected pixel was derived; and
d. constructing an additional composite image of the subject for each additional selected level by selecting one pixel from one of the first image and each additional image that corresponds to each pixel identified in step c.
31. The method of claim 30 also comprising:
a. creating a constructed Laplacian pyramid which contains the first composite image, the second composite image and at least one additional composite image; and
b. constructing an image of the subject by applying an inverse Laplacian transform procedure to the created Laplacian pyramid.
32. A method for creating an image of a subject positioned at a subject location comprised of:
a. positioning a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
b. capturing a first image of the subject while the subject is illuminated by the first illuminator and not illuminated by the at least one additional illuminator wherein the first image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
c. capturing at least one additional image of the subject while the subject is illuminated by the at least one additional illuminator and not illuminated by the first illuminator; wherein the second image is comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value;
d. computing a Laplacian pyramid for each image;
e. selecting a first particular level of the Laplacian pyramids computed for each image such that a same level is selected for each image;
f. creating sets of pixels such that each set contains a pixel from the selected level of the Laplacian pyramid for each of the first image and each additional image and all pixels in each set correspond to a same location relative to the subject;
g. identifying a pixel in each set which has a minimum value and identifying the image from which each selected pixel was derived; and
h. constructing a first composite image of the subject by selecting one pixel from one of the first image and each additional image that corresponds to each pixel identified in step g.
33. The method of claim 32 wherein there is only one additional illuminator and only one additional image.
34. An apparatus for creating an image of a subject positioned at a subject location which image is free of specularities or shadows comprising:
a. a camera for taking an image of a subject located at a subject location;
b. a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
c. a controller attached to the camera and the illuminators for turning on a selected illuminator whenever the camera captures an image of the subject; and
d. an image processor connected to the camera for receiving a first image of the subject taken when the first illuminator is illuminated and the at least one additional illuminator is not illuminated and receiving at least one additional image of the subject taken when the first illuminator is not illuminated and the at least one additional illuminator is illuminated and combining the first image and at least one additional image to produce a composite image of the subject wherein the images are each comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value and wherein the image processor contains a program for combining the first and second images which program constructs a Laplacian pyramid for each image and uses at least some of the levels of the Laplacian pyramids to form a composite image of the subject.
35. The apparatus of claim 34 wherein there is only one additional illuminator and only one additional image.
36. An apparatus for creating an image of a subject positioned at a subject location which image is free of specularities or shadows comprising:
a. a camera for taking an image of a subject located at a subject location;
b. a first illuminator and at least one additional illuminator at a selected distance from the subject location and spaced apart from one another;
c. a controller attached to the camera and the illuminators for turning on a selected illuminator whenever the camera captures an image of the subject; and
d. an image processor connected to the camera for receiving a first image of the subject taken when the first illuminator is illuminated and the at least one additional illuminator is not illuminated and receiving at least one additional image of the subject taken when the first illuminator is not illuminated and the at least one additional illuminator is illuminated and combining the first image and at least one additional image to produce a composite image of the subject wherein the images are each comprised of a set of pixels each pixel corresponding to a specific location relative to the subject and having a gray scale value and wherein the image processor contains a program for combining the first and second images by:
a. creating sets of pixels such that each set contains a pixel from the first image and a pixel from each additional image and all pixels in each set correspond to a same location relative to the subject; and
b. creating a constructed image of the subject by selecting one pixel from each set of pixels by choosing that pixel which has a maximum gray scale value that is less than a threshold gray scale value.
US09/013,758 1998-01-27 1998-01-27 Method and apparatus for removal of bright or dark spots by the fusion of multiple images Expired - Fee Related US6088470A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US09/013,758 US6088470A (en) 1998-01-27 1998-01-27 Method and apparatus for removal of bright or dark spots by the fusion of multiple images
PCT/US1999/001541 WO1999038121A1 (en) 1998-01-27 1999-01-26 Method and apparatus for removal of bright or dark spots by the fusion of multiple images
KR1020007008222A KR20010040433A (en) 1998-01-27 1999-01-26 Method and apparatus for removal of bright or dark spots by the fusion of multiple images
AU23411/99A AU2341199A (en) 1998-01-27 1999-01-26 Method and apparatus for removal of bright or dark spots by the fusion of multiple images
EP99903372A EP1050019A1 (en) 1998-01-27 1999-01-26 Method and apparatus for removal of bright or dark spots by the fusion of multiple images
JP2000528952A JP2002501265A (en) 1998-01-27 1999-01-26 Method and apparatus for removing bright or dark spots by fusing multiple images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/013,758 US6088470A (en) 1998-01-27 1998-01-27 Method and apparatus for removal of bright or dark spots by the fusion of multiple images

Publications (1)

Publication Number Publication Date
US6088470A true US6088470A (en) 2000-07-11

Family

ID=21761607

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/013,758 Expired - Fee Related US6088470A (en) 1998-01-27 1998-01-27 Method and apparatus for removal of bright or dark spots by the fusion of multiple images

Country Status (6)

Country Link
US (1) US6088470A (en)
EP (1) EP1050019A1 (en)
JP (1) JP2002501265A (en)
KR (1) KR20010040433A (en)
AU (1) AU2341199A (en)
WO (1) WO1999038121A1 (en)

Cited By (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6231188B1 (en) * 1998-09-22 2001-05-15 Feng Gao Interactive eyewear selection system
US20020006226A1 (en) * 2000-07-12 2002-01-17 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US6469710B1 (en) * 1998-09-25 2002-10-22 Microsoft Corporation Inverse texture mapping using weighted pyramid blending
US6470151B1 (en) * 1999-06-22 2002-10-22 Canon Kabushiki Kaisha Camera, image correcting apparatus, image correcting system, image correcting method, and computer program product providing the image correcting method
US20030071893A1 (en) * 2001-10-05 2003-04-17 David Miller System and method of providing visual documentation during surgery
US20030123711A1 (en) * 2001-12-28 2003-07-03 Lg Electronics Inc. Iris recognition method and system using the same
EP1335329A2 (en) * 2002-02-05 2003-08-13 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
US20030152251A1 (en) * 2001-05-11 2003-08-14 Takahiro Ike Method and apparartus for picking up object being authenticated
US20040016737A1 (en) * 2002-07-23 2004-01-29 Gerd Huismann Method and apparatus for controlling a welding system
US20040047491A1 (en) * 2000-12-21 2004-03-11 Bo Rydbeck Image capturing device with reflex reduction
US6711280B2 (en) * 2001-05-25 2004-03-23 Oscar M. Stafsudd Method and apparatus for intelligent ranging via image subtraction
US6718067B1 (en) * 1999-07-30 2004-04-06 Sony Corporation Method of manipulating illumination effects associated with an image
US20040184641A1 (en) * 2003-03-04 2004-09-23 Akio Nagasaka Personal authentication device
US20040212725A1 (en) * 2003-03-19 2004-10-28 Ramesh Raskar Stylized rendering using a multi-flash camera
US20050074221A1 (en) * 2003-10-06 2005-04-07 Remillard Jeffrey T. Active night vision image intensity balancing system
US20050117118A1 (en) * 2001-10-05 2005-06-02 David Miller Digital ophthalmic workstation
US20050281440A1 (en) * 2004-06-18 2005-12-22 Pemer Frederick A Iris feature detection and sensor-based edge detection
US20060008171A1 (en) * 2004-07-06 2006-01-12 Microsoft Corporation Digital photography with flash/no flash extension
US20060114328A1 (en) * 2004-11-29 2006-06-01 Samsung Electronics Co., Ltd. Apparatus and method for processing images taking into consideration light reflection, and a computer readable medium storing computer program therefor
US20060147095A1 (en) * 2005-01-03 2006-07-06 Usher David B Method and system for automatically capturing an image of a retina
US20060165266A1 (en) * 2005-01-26 2006-07-27 Honeywell International Inc. Iris recognition system and method
US20060222260A1 (en) * 2005-03-30 2006-10-05 Casio Computer Co., Ltd. Image capture apparatus, image processing method for captured image, and recording medium
WO2006129316A1 (en) * 2005-05-31 2006-12-07 Zamir Recognition Systems Ltd. Light sensitive system and method for attenuating the effect of ambient light
US20070092152A1 (en) * 2005-10-25 2007-04-26 Kevin Brokish Clear image using pixel voting
US7236201B1 (en) * 2004-02-12 2007-06-26 The United States Of America As Represented By The Secertary Of The Navy Method of generating an image in a turbid medium
US20070206183A1 (en) * 1999-07-08 2007-09-06 Ppt Vision, Inc. Method and apparatus for auto-adjusting illumination
US20070273611A1 (en) * 2004-04-01 2007-11-29 Torch William C Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US20080004610A1 (en) * 2006-06-30 2008-01-03 David Miller System for calculating IOL power
EP1908396A1 (en) * 2006-10-04 2008-04-09 Delphi Technologies, Inc. Illumination and imaging system with glare reduction and method therefor
US20090091554A1 (en) * 2007-10-05 2009-04-09 Microsoft Corporation Correcting for ambient light in an optical touch-sensitive device
US20090225218A1 (en) * 2008-03-04 2009-09-10 Canon Kabushiki Kaisha Image pickup system, image capturing method, and computer-readbale storage medium storing program for performing image capturing method
US7593550B2 (en) 2005-01-26 2009-09-22 Honeywell International Inc. Distance iris recognition
US20090254070A1 (en) * 2008-04-04 2009-10-08 Ashok Burton Tripathi Apparatus and methods for performing enhanced visually directed procedures under low ambient light conditions
US20090251534A1 (en) * 2006-11-09 2009-10-08 Aisin Seiki Kabushiki Kaisha Vehicle-mounted image processing device, and method for controlling vehicle-mounted image processing device
US20090274345A1 (en) * 2006-09-22 2009-11-05 Hanna Keith J Compact Biometric Acquisition System and Method
US20100094262A1 (en) * 2008-10-10 2010-04-15 Ashok Burton Tripathi Real-time surgical reference indicium apparatus and methods for surgical applications
USRE41376E1 (en) 1996-08-19 2010-06-15 Torch William C System and method for monitoring eye movement
US7761453B2 (en) 2005-01-26 2010-07-20 Honeywell International Inc. Method and system for indexing and searching an iris image database
US20100217278A1 (en) * 2009-02-20 2010-08-26 Ashok Burton Tripathi Real-time surgical reference indicium apparatus and methods for intraocular lens implantation
US20100220186A1 (en) * 2008-07-28 2010-09-02 Bluplanet Pte Ltd Method And System For Detecting Micro-Cracks In Wafers
WO2010115008A1 (en) 2009-04-01 2010-10-07 Tearscience, Inc. Ocular surface interferometry (osi) devices, systems, and methods for imaging, processing, and/or displaying an ocular tear film and/or measuring ocular tear film layer thickness (es)
US20110077548A1 (en) * 2004-04-01 2011-03-31 Torch William C Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US20110092984A1 (en) * 2009-10-20 2011-04-21 Ashok Burton Tripathi Real-time Surgical Reference Indicium Apparatus and Methods for Astigmatism Correction
US7933507B2 (en) 2006-03-03 2011-04-26 Honeywell International Inc. Single lens splitter camera
US20110119141A1 (en) * 2009-11-16 2011-05-19 Hoyos Corporation Siccolla Identity Verification Architecture and Tool
EP2334222A2 (en) * 2008-10-16 2011-06-22 Verdooner Steven Apparatus and method for imaging the eye
US20110160578A1 (en) * 2008-10-10 2011-06-30 Ashok Burton Tripathi Real-time surgical reference guides and methods for surgical applications
US20110213342A1 (en) * 2010-02-26 2011-09-01 Ashok Burton Tripathi Real-time Virtual Indicium Apparatus and Methods for Guiding an Implant into an Eye
US20110211056A1 (en) * 2010-03-01 2011-09-01 Eye-Com Corporation Systems and methods for spatially controlled scene illumination
US8045764B2 (en) 2005-01-26 2011-10-25 Honeywell International Inc. Expedient encoding system
US8049812B2 (en) 2006-03-03 2011-11-01 Honeywell International Inc. Camera with auto focus capability
US8050463B2 (en) 2005-01-26 2011-11-01 Honeywell International Inc. Iris recognition system having image quality metrics
US8064647B2 (en) 2006-03-03 2011-11-22 Honeywell International Inc. System for iris detection tracking and recognition at a distance
US8063889B2 (en) 2007-04-25 2011-11-22 Honeywell International Inc. Biometric data collection system
US8085993B2 (en) 2006-03-03 2011-12-27 Honeywell International Inc. Modular biometrics collection system architecture
US8090246B2 (en) 2008-08-08 2012-01-03 Honeywell International Inc. Image acquisition system
US8090157B2 (en) 2005-01-26 2012-01-03 Honeywell International Inc. Approaches and apparatus for eye detection in a digital image
US8098901B2 (en) 2005-01-26 2012-01-17 Honeywell International Inc. Standoff iris recognition system
US8213782B2 (en) 2008-08-07 2012-07-03 Honeywell International Inc. Predictive autofocusing system
US20120206611A1 (en) * 2006-03-03 2012-08-16 Acterna Llc Systems and methods for visualizing errors in video signals
US8280119B2 (en) 2008-12-05 2012-10-02 Honeywell International Inc. Iris recognition system using quality metrics
US20120314103A1 (en) * 2011-06-09 2012-12-13 Peter Ivan Majewicz Glare and shadow mitigation by fusing multiple frames
US8428337B2 (en) 2008-07-28 2013-04-23 Bluplanet Pte Ltd Apparatus for detecting micro-cracks in wafers and method therefor
US8436907B2 (en) 2008-05-09 2013-05-07 Honeywell International Inc. Heterogeneous video capturing system
WO2013028700A3 (en) * 2011-08-22 2013-05-10 Eyelock Inc. Systems and methods for capturing artifact free images
US8442276B2 (en) 2006-03-03 2013-05-14 Honeywell International Inc. Invariant radial iris segmentation
US8472681B2 (en) 2009-06-15 2013-06-25 Honeywell International Inc. Iris and ocular recognition system using trace transforms
US8610976B1 (en) 2012-06-27 2013-12-17 3M Innovative Properties Company Image enhancement methods
WO2014003991A1 (en) * 2012-06-27 2014-01-03 3M Innovative Properties Company Image enhancement methods
WO2014003994A1 (en) * 2012-06-27 2014-01-03 3M Innovative Properties Company Image enhancement methods
US8630464B2 (en) 2009-06-15 2014-01-14 Honeywell International Inc. Adaptive iris matching using database indexing
US8705808B2 (en) 2003-09-05 2014-04-22 Honeywell International Inc. Combined face and iris recognition system
US8742887B2 (en) 2010-09-03 2014-06-03 Honeywell International Inc. Biometric visitor check system
US8787624B2 (en) 2011-03-08 2014-07-22 Fujitsu Limited Biometric-information processing device, method of processing biometric information, and computer-readable recording medium storing biometric-information processing program
US8798334B2 (en) 2005-11-11 2014-08-05 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US8818052B2 (en) 2006-10-02 2014-08-26 Eyelock, Inc. Fraud resistant biometric financial transaction system and method
US8885877B2 (en) 2011-05-20 2014-11-11 Eyefluence, Inc. Systems and methods for identifying gaze tracking scene reference locations
US8911087B2 (en) 2011-05-20 2014-12-16 Eyefluence, Inc. Systems and methods for measuring reactions of head, eyes, eyelids and pupils
US8929589B2 (en) 2011-11-07 2015-01-06 Eyefluence, Inc. Systems and methods for high-resolution gaze tracking
US8953849B2 (en) 2007-04-19 2015-02-10 Eyelock, Inc. Method and system for biometric recognition
US8958606B2 (en) 2007-09-01 2015-02-17 Eyelock, Inc. Mirror system and method for acquiring biometric data
US9002073B2 (en) 2007-09-01 2015-04-07 Eyelock, Inc. Mobile identity platform
US9036871B2 (en) 2007-09-01 2015-05-19 Eyelock, Inc. Mobility identity platform
US9053558B2 (en) 2013-07-26 2015-06-09 Rui Shen Method and system for fusing multiple images
US9095287B2 (en) 2007-09-01 2015-08-04 Eyelock, Inc. System and method for iris data acquisition for biometric identification
US20150228098A1 (en) * 2014-02-10 2015-08-13 International Business Machines Corporation Simplified lighting compositing
US9117119B2 (en) 2007-09-01 2015-08-25 Eyelock, Inc. Mobile identity platform
US9124798B2 (en) 2011-05-17 2015-09-01 Eyelock Inc. Systems and methods for illuminating an iris with visible light for biometric acquisition
US9142070B2 (en) 2006-06-27 2015-09-22 Eyelock, Inc. Ensuring the provenance of passengers at a transportation facility
US9280706B2 (en) 2011-02-17 2016-03-08 Eyelock Llc Efficient method and system for the acquisition of scene imagery and iris imagery using a single sensor
DE102014115540A1 (en) * 2014-10-24 2016-04-28 Sick Ag Camera and method for capturing objects
US9350894B2 (en) * 2013-12-25 2016-05-24 Pfu Limited Image capturing system
WO2016131075A1 (en) * 2015-02-20 2016-08-25 Seeing Machines Limited Glare reduction
DE102015208087A1 (en) * 2015-04-30 2016-11-03 Carl Zeiss Microscopy Gmbh Method for generating a reflection-reduced contrast image and related devices
US9489416B2 (en) 2006-03-03 2016-11-08 Eyelock Llc Scalable searching of biometric databases using dynamic selection of data subsets
US20160364612A1 (en) * 2015-06-12 2016-12-15 Google Inc. Using a Scene Illuminating Infrared Emitter Array in a Video Monitoring Camera to Estimate the Position of the Camera
US9552660B2 (en) 2012-08-30 2017-01-24 Truevision Systems, Inc. Imaging system and methods displaying a fused multidimensional reconstructed image
US20170070712A1 (en) * 2015-09-04 2017-03-09 Panasonic Intellectual Property Management Co., Ltd. Lighting device, lighting system, and program
US9646217B2 (en) 2007-04-19 2017-05-09 Eyelock Llc Method and system for biometric recognition
US9668647B2 (en) 2012-12-21 2017-06-06 Tearscience Inc. Full-eye illumination ocular surface imaging of an ocular tear film for determining tear film thickness and/or providing ocular topography
US9795290B2 (en) 2013-11-15 2017-10-24 Tearscience, Inc. Ocular tear film peak detection and stabilization detection systems and methods for determining tear film layer characteristics
US9838602B2 (en) 2015-06-12 2017-12-05 Google Inc. Day and night detection based on one or more of illuminant detection, Lux level detection, and tiling
US9858490B2 (en) 2011-12-15 2018-01-02 Fujitsu Limited Vein authentication method, image processing method, and vein authentication device
US9866801B2 (en) 2011-10-28 2018-01-09 Google Inc. Home video capturing and monitoring system
US9866760B2 (en) 2015-05-27 2018-01-09 Google Inc. Multi-mode LED illumination system
US9888839B2 (en) 2009-04-01 2018-02-13 Tearscience, Inc. Methods and apparatuses for determining contact lens intolerance in contact lens wearer patients based on dry eye tear film characteristic analysis and dry eye symptoms
US9900560B1 (en) 2015-06-12 2018-02-20 Google Inc. Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
US9965672B2 (en) 2008-06-26 2018-05-08 Eyelock Llc Method of reducing visibility of pulsed illumination while acquiring high quality imagery
US9999346B2 (en) 2009-04-01 2018-06-19 Tearscience, Inc. Background reduction apparatuses and methods of ocular surface interferometry (OSI) employing polarization for imaging, processing, and/or displaying an ocular tear film
US10008003B2 (en) 2015-06-12 2018-06-26 Google Llc Simulating an infrared emitter array in a video monitoring camera to construct a lookup table for depth determination
US10039445B1 (en) 2004-04-01 2018-08-07 Google Llc Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US10043229B2 (en) 2011-01-26 2018-08-07 Eyelock Llc Method for confirming the identity of an individual while shielding that individual's personal data
US10135897B2 (en) 2012-01-06 2018-11-20 Google Llc Backfill of video stream
US10180615B2 (en) 2016-10-31 2019-01-15 Google Llc Electrochromic filtering in a camera
US10278587B2 (en) 2013-05-03 2019-05-07 Tearscience, Inc. Eyelid illumination systems and method for imaging meibomian glands for meibomian gland analysis
US10299880B2 (en) 2017-04-24 2019-05-28 Truevision Systems, Inc. Stereoscopic visualization camera and platform
US10306157B2 (en) 2015-06-12 2019-05-28 Google Llc Using images of a monitored scene to identify windows
WO2020023721A1 (en) * 2018-07-25 2020-01-30 Natus Medical Incorporated Real-time removal of ir led reflections from an image
US10917543B2 (en) 2017-04-24 2021-02-09 Alcon Inc. Stereoscopic visualization camera and integrated robotics platform
US11083537B2 (en) 2017-04-24 2021-08-10 Alcon Inc. Stereoscopic camera with fluorescence visualization
US11763445B2 (en) 2020-01-06 2023-09-19 Ricoh Company, Ltd. Inspection of a target object using a comparison with a master image and a strictness of a quality evaluation threshold value

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3894792B2 (en) * 1999-12-17 2007-03-22 剛 西郷 Glasses frame shooting method
GB2375676A (en) * 2001-05-17 2002-11-20 Hewlett Packard Co Reducing the effects of specular reflections appearing in an image
KR20030053840A (en) * 2001-12-24 2003-07-02 엘지전자 주식회사 Sevral light using method for iris recognition of pc
JP3642336B2 (en) * 2003-07-01 2005-04-27 松下電器産業株式会社 Eye imaging device
WO2006011261A1 (en) 2004-07-26 2006-02-02 Matsushita Electric Industrial Co., Ltd. Image processing method, image processing device, and image processing program
JP4851723B2 (en) * 2005-03-04 2012-01-11 富士通株式会社 Internal structure image acquisition device, internal structure image acquisition method, and internal structure image acquisition program
EP2022008A4 (en) * 2006-05-09 2012-02-01 Technion Res & Dev Foundation Imaging systems and methods for recovering object visibility
US8212857B2 (en) * 2007-01-26 2012-07-03 Microsoft Corporation Alternating light sources to reduce specular reflection
ES2310136B1 (en) * 2007-06-07 2009-11-05 Consejo Superior De Investigaciones Cientificas METHOD FOR AUTOMATIC IMPROVEMENT OF IMAGES AND SEQUENCES WITH SPACIALLY VARIANT DEGRADATION.
JP5014966B2 (en) * 2007-11-30 2012-08-29 パナソニック電工Sunx株式会社 Magnifying observation device
US20100232654A1 (en) * 2009-03-11 2010-09-16 Harris Corporation Method for reconstructing iris scans through novel inpainting techniques and mosaicing of partial collections
US8306288B2 (en) 2009-08-19 2012-11-06 Harris Corporation Automatic identification of fingerprint inpainting target areas
JP5751019B2 (en) 2011-05-30 2015-07-22 富士通株式会社 Biological information processing apparatus, biological information processing method, and biological information processing program
GB2495324B (en) 2011-10-07 2018-05-30 Irisguard Inc Security improvements for Iris recognition systems
GB2495323B (en) 2011-10-07 2018-05-30 Irisguard Inc Improvements for iris recognition systems
EP2806394A1 (en) * 2013-05-23 2014-11-26 bioMérieux Method, system and computer program product for improving the quality of an image
US10007995B2 (en) * 2013-05-23 2018-06-26 Biomerieux Method, system and computer program product for producing a raised relief map from images of an object
CN110168606B (en) * 2016-06-08 2023-09-26 谷歌有限责任公司 Method and system for generating composite image of physical object
US10675955B2 (en) 2016-11-14 2020-06-09 Google Llc Adaptive glare removal and/or color correction
DE102017123971A1 (en) * 2017-10-16 2019-04-18 RENO Technology Switzerland Device and method for object observation, in particular face recognition
US11282187B2 (en) 2019-08-19 2022-03-22 Ricoh Company, Ltd. Inspection system, inspection apparatus, and method using multiple angle illumination
US20220100433A1 (en) 2020-09-30 2022-03-31 Ricoh Company, Ltd. Image forming device, information processing device, computer-readable medium, and image forming method
CN112884689B (en) * 2021-02-25 2023-11-17 景德镇陶瓷大学 Method for removing high light of strong reflection surface image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4516264A (en) * 1982-01-29 1985-05-07 United States Of America Postal Service Apparatus and process for scanning and analyzing mail information
US4654583A (en) * 1983-04-15 1987-03-31 Hitachi, Ltd. Method and apparatus for detecting defects of printed circuit patterns
US4661986A (en) * 1983-06-27 1987-04-28 Rca Corporation Depth-of-focus imaging process method
US5016282A (en) * 1988-07-14 1991-05-14 Atr Communication Systems Research Laboratories Eye tracking image pickup apparatus for separating noise from feature portions
US5526446A (en) * 1991-09-24 1996-06-11 Massachusetts Institute Of Technology Noise reduction system
US5325449A (en) * 1992-05-15 1994-06-28 David Sarnoff Research Center, Inc. Method for fusing images and apparatus therefor
US5488674A (en) * 1992-05-15 1996-01-30 David Sarnoff Research Center, Inc. Method for fusing images and apparatus therefor
EP0635972A2 (en) * 1993-07-19 1995-01-25 Eastman Kodak Company Automated detection and correction of eye color defects due to flash illumination
EP0680205A2 (en) * 1994-04-29 1995-11-02 International Business Machines Corporation Imaging system for object segmentation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"The Laplacian Pyramid as a Compact Compute Code," Peter J. Bert, IEEE Transactions on Communications, Vol. Com-31, No. 4, Apr., 1983.
"VLSI Pyramid Chip for Multiresolution Image Analysis," Gooitzen S. Van Der Val and Peter J. Bert, International Journal of Computer Vision, 8:3, 177-189 (1992).
P. J. Burt, "Invited Address: A Gradient Pyramid Basis for Pattern-Selective Image Fusion," SID International Symposium Digest of Papers, Boston, May 17, 1992.
P. J. Burt, Invited Address: A Gradient Pyramid Basis for Pattern Selective Image Fusion, SID International Symposium Digest of Papers, Boston, May 17, 1992. *
The Laplacian Pyramid as a Compact Compute Code, Peter J. Bert, IEEE Transactions on Communications , Vol. Com 31, No. 4, Apr., 1983. *
VLSI Pyramid Chip for Multiresolution Image Analysis, Gooitzen S. Van Der Val and Peter J. Bert, International Journal of Computer Vision , 8:3, 177 189 (1992). *
Yoshinobu Ebisawa, Improved Video Based Eye Gaze Detection Method, Proceedings of Instrumentation and Measurment Technology Conference, IEEE, pp. 963 966, May 1994. *
Yoshinobu Ebisawa, Improved Video-Based Eye-Gaze Detection Method, Proceedings of Instrumentation and Measurment Technology Conference, IEEE, pp. 963-966, May 1994.

Cited By (256)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE42471E1 (en) 1996-08-19 2011-06-21 Torch William C System and method for monitoring eye movement
USRE41376E1 (en) 1996-08-19 2010-06-15 Torch William C System and method for monitoring eye movement
US6231188B1 (en) * 1998-09-22 2001-05-15 Feng Gao Interactive eyewear selection system
US6508553B2 (en) 1998-09-22 2003-01-21 Virtual Visual Devices, Llc Interactive eyewear selection system
US6469710B1 (en) * 1998-09-25 2002-10-22 Microsoft Corporation Inverse texture mapping using weighted pyramid blending
US6470151B1 (en) * 1999-06-22 2002-10-22 Canon Kabushiki Kaisha Camera, image correcting apparatus, image correcting system, image correcting method, and computer program product providing the image correcting method
US20070206183A1 (en) * 1999-07-08 2007-09-06 Ppt Vision, Inc. Method and apparatus for auto-adjusting illumination
US7557920B2 (en) * 1999-07-08 2009-07-07 Lebens Gary A Method and apparatus for auto-adjusting illumination
US6718067B1 (en) * 1999-07-30 2004-04-06 Sony Corporation Method of manipulating illumination effects associated with an image
US6975763B2 (en) * 2000-07-12 2005-12-13 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US20020006226A1 (en) * 2000-07-12 2002-01-17 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US20040047491A1 (en) * 2000-12-21 2004-03-11 Bo Rydbeck Image capturing device with reflex reduction
US20030152251A1 (en) * 2001-05-11 2003-08-14 Takahiro Ike Method and apparartus for picking up object being authenticated
EP1387314A1 (en) * 2001-05-11 2004-02-04 Matsushita Electric Industrial Co., Ltd. Method and apparatus for picking up image of object being authenticated
EP1387314A4 (en) * 2001-05-11 2004-08-11 Matsushita Electric Ind Co Ltd Method and apparatus for picking up image of object being authenticated
US6711280B2 (en) * 2001-05-25 2004-03-23 Oscar M. Stafsudd Method and apparatus for intelligent ranging via image subtraction
US20030071893A1 (en) * 2001-10-05 2003-04-17 David Miller System and method of providing visual documentation during surgery
US20050117118A1 (en) * 2001-10-05 2005-06-02 David Miller Digital ophthalmic workstation
KR100854890B1 (en) * 2001-12-28 2008-08-28 엘지전자 주식회사 Iris recording and recognition method using of several led for iris recognition system
US7146027B2 (en) 2001-12-28 2006-12-05 Lg Electronics, Inc. Iris recognition method and system using the same
EP1326197A3 (en) * 2001-12-28 2004-05-19 Lg Electronics Inc. Iris recognition method and system using the same
US20030123711A1 (en) * 2001-12-28 2003-07-03 Lg Electronics Inc. Iris recognition method and system using the same
EP1326197A2 (en) * 2001-12-28 2003-07-09 Lg Electronics Inc. Iris recognition method and system using the same
CN1437161B (en) * 2002-02-05 2010-04-28 松下电器产业株式会社 Personal recognition method, personal recognition apparatus
EP1600898A2 (en) * 2002-02-05 2005-11-30 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
EP1335329A2 (en) * 2002-02-05 2003-08-13 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
US20030152252A1 (en) * 2002-02-05 2003-08-14 Kenji Kondo Personal authentication method, personal authentication apparatus and image capturing device
EP1335329A3 (en) * 2002-02-05 2004-09-22 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
EP1600898A3 (en) * 2002-02-05 2006-06-14 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
US7155035B2 (en) 2002-02-05 2006-12-26 Matsushita Electric Industrial Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
US20040016737A1 (en) * 2002-07-23 2004-01-29 Gerd Huismann Method and apparatus for controlling a welding system
US7245745B2 (en) 2003-03-04 2007-07-17 Hitachi, Ltd. Personal authentication device
US9141843B2 (en) 2003-03-04 2015-09-22 Hitachi, Ltd. Personal authentication device
US7706582B2 (en) 2003-03-04 2010-04-27 Hitachi, Ltd. Personal authentication device
US20040184641A1 (en) * 2003-03-04 2004-09-23 Akio Nagasaka Personal authentication device
US8208691B2 (en) 2003-03-04 2012-06-26 Hitachi, Ltd. Personal authentication device
US8121354B2 (en) 2003-03-04 2012-02-21 Hitachi, Ltd. Personal authentication device
US20050254690A1 (en) * 2003-03-04 2005-11-17 Hitachi, Ltd. Personal authentication device
US20080152195A1 (en) * 2003-03-04 2008-06-26 Hitachi, Ltd. Personal authentication device
US20080049981A1 (en) * 2003-03-04 2008-02-28 Akio Nagasaka Personal authentication device
US20040212725A1 (en) * 2003-03-19 2004-10-28 Ramesh Raskar Stylized rendering using a multi-flash camera
US7738725B2 (en) * 2003-03-19 2010-06-15 Mitsubishi Electric Research Laboratories, Inc. Stylized rendering using a multi-flash camera
US8705808B2 (en) 2003-09-05 2014-04-22 Honeywell International Inc. Combined face and iris recognition system
US20050074221A1 (en) * 2003-10-06 2005-04-07 Remillard Jeffrey T. Active night vision image intensity balancing system
US7319805B2 (en) * 2003-10-06 2008-01-15 Ford Motor Company Active night vision image intensity balancing system
US20080210870A1 (en) * 2003-10-06 2008-09-04 Remillard Jeffrey T Active Night Vision Image Intensity Balancing System
US7646884B2 (en) * 2003-10-06 2010-01-12 Ford Motor Company Active night vision image intensity balancing system
US7236201B1 (en) * 2004-02-12 2007-06-26 The United States Of America As Represented By The Secertary Of The Navy Method of generating an image in a turbid medium
US20070273611A1 (en) * 2004-04-01 2007-11-29 Torch William C Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US10039445B1 (en) 2004-04-01 2018-08-07 Google Llc Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US20110077548A1 (en) * 2004-04-01 2011-03-31 Torch William C Biosensors, communicators, and controllers monitoring eye movement and methods for using them
US20050281440A1 (en) * 2004-06-18 2005-12-22 Pemer Frederick A Iris feature detection and sensor-based edge detection
US20060008171A1 (en) * 2004-07-06 2006-01-12 Microsoft Corporation Digital photography with flash/no flash extension
US7457477B2 (en) * 2004-07-06 2008-11-25 Microsoft Corporation Digital photography with flash/no flash extension
US20060114328A1 (en) * 2004-11-29 2006-06-01 Samsung Electronics Co., Ltd. Apparatus and method for processing images taking into consideration light reflection, and a computer readable medium storing computer program therefor
KR100647298B1 (en) 2004-11-29 2006-11-23 삼성전자주식회사 Method and apparatus for processing image, and computer readable media for storing computer program considering light reflection
WO2006073781A3 (en) * 2005-01-03 2007-01-18 Retica Systems Inc Method and system for automatically capturing an image of a retina
US20060147095A1 (en) * 2005-01-03 2006-07-06 Usher David B Method and system for automatically capturing an image of a retina
WO2006073781A2 (en) * 2005-01-03 2006-07-13 Retica Systems, Inc. Method and system for automatically capturing an image of a retina
US20060165266A1 (en) * 2005-01-26 2006-07-27 Honeywell International Inc. Iris recognition system and method
US7756301B2 (en) 2005-01-26 2010-07-13 Honeywell International Inc. Iris recognition system and method
US8090157B2 (en) 2005-01-26 2012-01-03 Honeywell International Inc. Approaches and apparatus for eye detection in a digital image
US8488846B2 (en) 2005-01-26 2013-07-16 Honeywell International Inc. Expedient encoding system
US8285005B2 (en) 2005-01-26 2012-10-09 Honeywell International Inc. Distance iris recognition
US8050463B2 (en) 2005-01-26 2011-11-01 Honeywell International Inc. Iris recognition system having image quality metrics
US7593550B2 (en) 2005-01-26 2009-09-22 Honeywell International Inc. Distance iris recognition
US8045764B2 (en) 2005-01-26 2011-10-25 Honeywell International Inc. Expedient encoding system
US7761453B2 (en) 2005-01-26 2010-07-20 Honeywell International Inc. Method and system for indexing and searching an iris image database
US8098901B2 (en) 2005-01-26 2012-01-17 Honeywell International Inc. Standoff iris recognition system
US7760962B2 (en) * 2005-03-30 2010-07-20 Casio Computer Co., Ltd. Image capture apparatus which synthesizes a plurality of images obtained by shooting a subject from different directions, to produce an image in which the influence of glare from a light is reduced
US20060222260A1 (en) * 2005-03-30 2006-10-05 Casio Computer Co., Ltd. Image capture apparatus, image processing method for captured image, and recording medium
US20080203277A1 (en) * 2005-05-31 2008-08-28 Zamir Recognition Systems, Ltd. Light Sensitive System and Method for Attenuating the Effect of Ambient Light
WO2006129316A1 (en) * 2005-05-31 2006-12-07 Zamir Recognition Systems Ltd. Light sensitive system and method for attenuating the effect of ambient light
US7539349B2 (en) * 2005-10-25 2009-05-26 Hewlett-Packard Development Company, L.P. Clear image using pixel voting
US20070092152A1 (en) * 2005-10-25 2007-04-26 Kevin Brokish Clear image using pixel voting
US8798334B2 (en) 2005-11-11 2014-08-05 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US9792499B2 (en) 2005-11-11 2017-10-17 Eyelock Llc Methods for performing biometric recognition of a human eye and corroboration of same
US8818053B2 (en) 2005-11-11 2014-08-26 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US8798333B2 (en) 2005-11-11 2014-08-05 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US8798330B2 (en) 2005-11-11 2014-08-05 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US10102427B2 (en) 2005-11-11 2018-10-16 Eyelock Llc Methods for performing biometric recognition of a human eye and corroboration of same
US9613281B2 (en) 2005-11-11 2017-04-04 Eyelock Llc Methods for performing biometric recognition of a human eye and corroboration of same
US8798331B2 (en) 2005-11-11 2014-08-05 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
US9489416B2 (en) 2006-03-03 2016-11-08 Eyelock Llc Scalable searching of biometric databases using dynamic selection of data subsets
US20120206611A1 (en) * 2006-03-03 2012-08-16 Acterna Llc Systems and methods for visualizing errors in video signals
US8761458B2 (en) 2006-03-03 2014-06-24 Honeywell International Inc. System for iris detection, tracking and recognition at a distance
US8049812B2 (en) 2006-03-03 2011-11-01 Honeywell International Inc. Camera with auto focus capability
US8442276B2 (en) 2006-03-03 2013-05-14 Honeywell International Inc. Invariant radial iris segmentation
US8064647B2 (en) 2006-03-03 2011-11-22 Honeywell International Inc. System for iris detection tracking and recognition at a distance
US8085993B2 (en) 2006-03-03 2011-12-27 Honeywell International Inc. Modular biometrics collection system architecture
US7933507B2 (en) 2006-03-03 2011-04-26 Honeywell International Inc. Single lens splitter camera
US8964858B2 (en) * 2006-03-03 2015-02-24 Jds Uniphase Corporation Systems and methods for visualizing errors in video signals
US9142070B2 (en) 2006-06-27 2015-09-22 Eyelock, Inc. Ensuring the provenance of passengers at a transportation facility
US20080004610A1 (en) * 2006-06-30 2008-01-03 David Miller System for calculating IOL power
US20090274345A1 (en) * 2006-09-22 2009-11-05 Hanna Keith J Compact Biometric Acquisition System and Method
US8965063B2 (en) 2006-09-22 2015-02-24 Eyelock, Inc. Compact biometric acquisition system and method
US9626562B2 (en) 2006-09-22 2017-04-18 Eyelock, Llc Compact biometric acquisition system and method
US9355299B2 (en) 2006-10-02 2016-05-31 Eyelock Llc Fraud resistant biometric financial transaction system and method
US8818051B2 (en) 2006-10-02 2014-08-26 Eyelock, Inc. Fraud resistant biometric financial transaction system and method
US8818052B2 (en) 2006-10-02 2014-08-26 Eyelock, Inc. Fraud resistant biometric financial transaction system and method
US7646422B2 (en) 2006-10-04 2010-01-12 Branislav Kisacanin Illumination and imaging system with glare reduction and method therefor
US20080084499A1 (en) * 2006-10-04 2008-04-10 Delphi Technologies, Inc. Illumination and imaging system with glare reduction and method therefor
EP1908396A1 (en) * 2006-10-04 2008-04-09 Delphi Technologies, Inc. Illumination and imaging system with glare reduction and method therefor
US8350903B2 (en) * 2006-11-09 2013-01-08 Aisin Seiki Kabushiki Kaisha Vehicle-mounted image processing device, and method for controlling vehicle-mounted image processing device
US20090251534A1 (en) * 2006-11-09 2009-10-08 Aisin Seiki Kabushiki Kaisha Vehicle-mounted image processing device, and method for controlling vehicle-mounted image processing device
US8953849B2 (en) 2007-04-19 2015-02-10 Eyelock, Inc. Method and system for biometric recognition
US9646217B2 (en) 2007-04-19 2017-05-09 Eyelock Llc Method and system for biometric recognition
US9959478B2 (en) 2007-04-19 2018-05-01 Eyelock Llc Method and system for biometric recognition
US10395097B2 (en) 2007-04-19 2019-08-27 Eyelock Llc Method and system for biometric recognition
US8063889B2 (en) 2007-04-25 2011-11-22 Honeywell International Inc. Biometric data collection system
US9633260B2 (en) 2007-09-01 2017-04-25 Eyelock Llc System and method for iris data acquisition for biometric identification
US9792498B2 (en) 2007-09-01 2017-10-17 Eyelock Llc Mobile identity platform
US9626563B2 (en) 2007-09-01 2017-04-18 Eyelock Llc Mobile identity platform
US10296791B2 (en) 2007-09-01 2019-05-21 Eyelock Llc Mobile identity platform
US9117119B2 (en) 2007-09-01 2015-08-25 Eyelock, Inc. Mobile identity platform
US9192297B2 (en) 2007-09-01 2015-11-24 Eyelock Llc System and method for iris data acquisition for biometric identification
US9095287B2 (en) 2007-09-01 2015-08-04 Eyelock, Inc. System and method for iris data acquisition for biometric identification
US9055198B2 (en) 2007-09-01 2015-06-09 Eyelock, Inc. Mirror system and method for acquiring biometric data
US8958606B2 (en) 2007-09-01 2015-02-17 Eyelock, Inc. Mirror system and method for acquiring biometric data
US9036871B2 (en) 2007-09-01 2015-05-19 Eyelock, Inc. Mobility identity platform
US9002073B2 (en) 2007-09-01 2015-04-07 Eyelock, Inc. Mobile identity platform
US9946928B2 (en) 2007-09-01 2018-04-17 Eyelock Llc System and method for iris data acquisition for biometric identification
US20090091554A1 (en) * 2007-10-05 2009-04-09 Microsoft Corporation Correcting for ambient light in an optical touch-sensitive device
US8004502B2 (en) 2007-10-05 2011-08-23 Microsoft Corporation Correcting for ambient light in an optical touch-sensitive device
US20090225218A1 (en) * 2008-03-04 2009-09-10 Canon Kabushiki Kaisha Image pickup system, image capturing method, and computer-readable storage medium storing program for performing image capturing method
US20120013771A1 (en) * 2008-03-04 2012-01-19 Canon Kabushiki Kaisha Image pickup system, image capturing method, and computer-readable storage medium storing program for performing image capturing method
US9007518B2 (en) * 2008-03-04 2015-04-14 Canon Kabushiki Kaisha Image pickup system, image capturing method, and computer-readable storage medium storing program for performing image capturing method
US8115861B2 (en) * 2008-03-04 2012-02-14 Canon Kabushiki Kaisha Image pickup system, image capturing method, and computer-readable storage medium storing program for performing image capturing method
US10398598B2 (en) 2008-04-04 2019-09-03 Truevision Systems, Inc. Apparatus and methods for performing enhanced visually directed procedures under low ambient light conditions
US9168173B2 (en) 2008-04-04 2015-10-27 Truevision Systems, Inc. Apparatus and methods for performing enhanced visually directed procedures under low ambient light conditions
US20090254070A1 (en) * 2008-04-04 2009-10-08 Ashok Burton Tripathi Apparatus and methods for performing enhanced visually directed procedures under low ambient light conditions
US8436907B2 (en) 2008-05-09 2013-05-07 Honeywell International Inc. Heterogeneous video capturing system
US9965672B2 (en) 2008-06-26 2018-05-08 Eyelock Llc Method of reducing visibility of pulsed illumination while acquiring high quality imagery
US20100220186A1 (en) * 2008-07-28 2010-09-02 Bluplanet Pte Ltd Method And System For Detecting Micro-Cracks In Wafers
US9651502B2 (en) * 2008-07-28 2017-05-16 Bluplanet Pte Ltd Method and system for detecting micro-cracks in wafers
US8428337B2 (en) 2008-07-28 2013-04-23 Bluplanet Pte Ltd Apparatus for detecting micro-cracks in wafers and method therefor
US8213782B2 (en) 2008-08-07 2012-07-03 Honeywell International Inc. Predictive autofocusing system
US8090246B2 (en) 2008-08-08 2012-01-03 Honeywell International Inc. Image acquisition system
US20100094262A1 (en) * 2008-10-10 2010-04-15 Ashok Burton Tripathi Real-time surgical reference indicium apparatus and methods for surgical applications
US11051884B2 (en) 2008-10-10 2021-07-06 Alcon, Inc. Real-time surgical reference indicium apparatus and methods for surgical applications
US10117721B2 (en) 2008-10-10 2018-11-06 Truevision Systems, Inc. Real-time surgical reference guides and methods for surgical applications
US9226798B2 (en) * 2008-10-10 2016-01-05 Truevision Systems, Inc. Real-time surgical reference indicium apparatus and methods for surgical applications
US20110160578A1 (en) * 2008-10-10 2011-06-30 Ashok Burton Tripathi Real-time surgical reference guides and methods for surgical applications
EP2334222A4 (en) * 2008-10-16 2014-09-17 Steven Verdooner Apparatus and method for imaging the eye
EP2334222A2 (en) * 2008-10-16 2011-06-22 Verdooner Steven Apparatus and method for imaging the eye
US8280119B2 (en) 2008-12-05 2012-10-02 Honeywell International Inc. Iris recognition system using quality metrics
US20100217278A1 (en) * 2009-02-20 2010-08-26 Ashok Burton Tripathi Real-time surgical reference indicium apparatus and methods for intraocular lens implantation
US11039901B2 (en) 2009-02-20 2021-06-22 Alcon, Inc. Real-time surgical reference indicium apparatus and methods for intraocular lens implantation
US9173717B2 (en) * 2009-02-20 2015-11-03 Truevision Systems, Inc. Real-time surgical reference indicium apparatus and methods for intraocular lens implantation
US10582848B2 (en) 2009-04-01 2020-03-10 Tearscience, Inc. Ocular surface interferometry (OSI) devices and systems for imaging, processing, and/or displaying an ocular tear film
EP2413699B1 (en) * 2009-04-01 2019-11-20 Tearscience, Inc. Ocular surface interferometry (osi) apparatus for imaging an ocular tear film
WO2010115008A1 (en) 2009-04-01 2010-10-07 Tearscience, Inc. Ocular surface interferometry (osi) devices, systems, and methods for imaging, processing, and/or displaying an ocular tear film and/or measuring ocular tear film layer thickness (es)
US11771317B2 (en) 2009-04-01 2023-10-03 Tearscience, Inc. Ocular surface interferometry (OSI) for imaging, processing, and/or displaying an ocular tear film
US11259700B2 (en) 2009-04-01 2022-03-01 Tearscience Inc Ocular surface interferometry (OSI) for imaging, processing, and/or displaying an ocular tear film
US10004396B2 (en) 2009-04-01 2018-06-26 Tearscience, Inc. Ocular surface interferometry (OSI) devices and systems for imaging, processing, and/or displaying an ocular tear film
US9999346B2 (en) 2009-04-01 2018-06-19 Tearscience, Inc. Background reduction apparatuses and methods of ocular surface interferometry (OSI) employing polarization for imaging, processing, and/or displaying an ocular tear film
US10716465B2 (en) 2009-04-01 2020-07-21 Johnson & Johnson Vision Care, Inc. Methods and apparatuses for determining contact lens intolerance in contact lens wearer patients based on dry eye tear film characteristic analysis and dry eye symptoms
US9662008B2 (en) 2009-04-01 2017-05-30 Tearscience, Inc. Ocular surface interferometry (OSI) devices and systems for imaging, processing, and/or displaying an ocular tear film
EP2420180B1 (en) * 2009-04-01 2019-05-22 Tearscience, Inc. Apparatus for measuring ocular tear film layer thickness(es)
US9693682B2 (en) 2009-04-01 2017-07-04 Tearscience, Inc. Ocular surface interferometry (OSI) devices and systems for imaging, processing, and/or displaying an ocular tear film
US9888839B2 (en) 2009-04-01 2018-02-13 Tearscience, Inc. Methods and apparatuses for determining contact lens intolerance in contact lens wearer patients based on dry eye tear film characteristic analysis and dry eye symptoms
US8472681B2 (en) 2009-06-15 2013-06-25 Honeywell International Inc. Iris and ocular recognition system using trace transforms
US8630464B2 (en) 2009-06-15 2014-01-14 Honeywell International Inc. Adaptive iris matching using database indexing
US9414961B2 (en) 2009-10-20 2016-08-16 Truevision Systems, Inc. Real-time surgical reference indicium apparatus and methods for astigmatism correction
US8784443B2 (en) 2009-10-20 2014-07-22 Truevision Systems, Inc. Real-time surgical reference indicium apparatus and methods for astigmatism correction
US20110092984A1 (en) * 2009-10-20 2011-04-21 Ashok Burton Tripathi Real-time Surgical Reference Indicium Apparatus and Methods for Astigmatism Correction
US10368948B2 (en) 2009-10-20 2019-08-06 Truevision Systems, Inc. Real-time surgical reference indicium apparatus and methods for astigmatism correction
US20110119141A1 (en) * 2009-11-16 2011-05-19 Hoyos Corporation Siccolla Identity Verification Architecture and Tool
US20110213342A1 (en) * 2010-02-26 2011-09-01 Ashok Burton Tripathi Real-time Virtual Indicium Apparatus and Methods for Guiding an Implant into an Eye
US8890946B2 (en) 2010-03-01 2014-11-18 Eyefluence, Inc. Systems and methods for spatially controlled scene illumination
US20110211056A1 (en) * 2010-03-01 2011-09-01 Eye-Com Corporation Systems and methods for spatially controlled scene illumination
US8742887B2 (en) 2010-09-03 2014-06-03 Honeywell International Inc. Biometric visitor check system
US10043229B2 (en) 2011-01-26 2018-08-07 Eyelock Llc Method for confirming the identity of an individual while shielding that individual's personal data
US10116888B2 (en) 2011-02-17 2018-10-30 Eyelock Llc Efficient method and system for the acquisition of scene imagery and iris imagery using a single sensor
US9280706B2 (en) 2011-02-17 2016-03-08 Eyelock Llc Efficient method and system for the acquisition of scene imagery and iris imagery using a single sensor
US8787624B2 (en) 2011-03-08 2014-07-22 Fujitsu Limited Biometric-information processing device, method of processing biometric information, and computer-readable recording medium storing biometric-information processing program
US9124798B2 (en) 2011-05-17 2015-09-01 Eyelock Inc. Systems and methods for illuminating an iris with visible light for biometric acquisition
US8885877B2 (en) 2011-05-20 2014-11-11 Eyefluence, Inc. Systems and methods for identifying gaze tracking scene reference locations
US8911087B2 (en) 2011-05-20 2014-12-16 Eyefluence, Inc. Systems and methods for measuring reactions of head, eyes, eyelids and pupils
US9332156B2 (en) * 2011-06-09 2016-05-03 Hewlett-Packard Development Company, L.P. Glare and shadow mitigation by fusing multiple frames
US20120314103A1 (en) * 2011-06-09 2012-12-13 Peter Ivan Majewicz Glare and shadow mitigation by fusing multiple frames
WO2013028700A3 (en) * 2011-08-22 2013-05-10 Eyelock Inc. Systems and methods for capturing artifact free images
US9122925B2 (en) 2011-08-22 2015-09-01 Eyelock, Inc. Systems and methods for capturing artifact free images
US10708470B2 (en) 2011-10-28 2020-07-07 Google Llc Integrated video camera module
US9866801B2 (en) 2011-10-28 2018-01-09 Google Inc. Home video capturing and monitoring system
USD1016890S1 (en) 2011-10-28 2024-03-05 Google Llc Video camera
USD876522S1 (en) 2011-10-28 2020-02-25 Google Llc Video camera
US9866800B2 (en) 2011-10-28 2018-01-09 Google Inc. Camera module
USD812124S1 (en) 2011-10-28 2018-03-06 Google Llc Camera stand
US9942525B2 (en) 2011-10-28 2018-04-10 Google Llc Integrated video camera module
US9871953B2 (en) 2011-10-28 2018-01-16 Google Inc. Modular camera system
US10321026B2 (en) 2011-10-28 2019-06-11 Google Llc Home video capturing and monitoring system
USD826306S1 (en) 2011-10-28 2018-08-21 Google Llc Video camera
USD905782S1 (en) 2011-10-28 2020-12-22 Google Llc Video camera
USD892195S1 (en) 2011-10-28 2020-08-04 Google Llc Video camera
US8929589B2 (en) 2011-11-07 2015-01-06 Eyefluence, Inc. Systems and methods for high-resolution gaze tracking
US9858490B2 (en) 2011-12-15 2018-01-02 Fujitsu Limited Vein authentication method, image processing method, and vein authentication device
US10135897B2 (en) 2012-01-06 2018-11-20 Google Llc Backfill of video stream
US10708334B2 (en) 2012-01-06 2020-07-07 Google Llc Backfill of video stream
WO2014003994A1 (en) * 2012-06-27 2014-01-03 3M Innovative Properties Company Image enhancement methods
WO2014003990A1 (en) * 2012-06-27 2014-01-03 3M Innovative Properties Company Image enhancement methods
US8610976B1 (en) 2012-06-27 2013-12-17 3M Innovative Properties Company Image enhancement methods
US8743426B2 (en) 2012-06-27 2014-06-03 3M Innovative Properties Company Image enhancement methods
WO2014003991A1 (en) * 2012-06-27 2014-01-03 3M Innovative Properties Company Image enhancement methods
US10740933B2 (en) 2012-08-30 2020-08-11 Alcon Inc. Imaging system and methods displaying a fused multidimensional reconstructed image
US10019819B2 (en) 2012-08-30 2018-07-10 Truevision Systems, Inc. Imaging system and methods displaying a fused multidimensional reconstructed image
US9552660B2 (en) 2012-08-30 2017-01-24 Truevision Systems, Inc. Imaging system and methods displaying a fused multidimensional reconstructed image
US9668647B2 (en) 2012-12-21 2017-06-06 Tearscience Inc. Full-eye illumination ocular surface imaging of an ocular tear film for determining tear film thickness and/or providing ocular topography
US10582849B2 (en) 2012-12-21 2020-03-10 Tearscience, Inc. Full-eye illumination ocular surface imaging of an ocular tear film for determining tear film thickness and/or providing ocular topography
US10244939B2 (en) 2012-12-21 2019-04-02 Tearscience, Inc. Full-eye illumination ocular surface imaging of an ocular tear film for determining tear film thickness and/or providing ocular topography
US9993151B2 (en) 2012-12-21 2018-06-12 Tearscience, Inc. Full-eye illumination ocular surface imaging of an ocular tear film for determining tear film thickness and/or providing ocular topography
US11141065B2 (en) 2013-05-03 2021-10-12 Tearscience, Inc Eyelid illumination systems and methods for imaging meibomian glands for meibomian gland analysis
US10278587B2 (en) 2013-05-03 2019-05-07 Tearscience, Inc. Eyelid illumination systems and method for imaging meibomian glands for meibomian gland analysis
US11844586B2 (en) 2013-05-03 2023-12-19 Tearscience, Inc. Eyelid illumination systems and methods for imaging meibomian glands for meibomian gland analysis
US9053558B2 (en) 2013-07-26 2015-06-09 Rui Shen Method and system for fusing multiple images
US9795290B2 (en) 2013-11-15 2017-10-24 Tearscience, Inc. Ocular tear film peak detection and stabilization detection systems and methods for determining tear film layer characteristics
US10512396B2 (en) 2013-11-15 2019-12-24 Tearscience, Inc. Ocular tear film peak detection and stabilization detection systems and methods for determining tear film layer characteristics
US9350894B2 (en) * 2013-12-25 2016-05-24 Pfu Limited Image capturing system
US9396571B2 (en) * 2014-02-10 2016-07-19 International Business Machines Corporation Simplified lighting compositing
US20150228098A1 (en) * 2014-02-10 2015-08-13 International Business Machines Corporation Simplified lighting compositing
US10089767B2 (en) 2014-02-10 2018-10-02 International Business Machines Corporation Simplified lighting compositing
US10621769B2 (en) 2014-02-10 2020-04-14 International Business Machines Corporation Simplified lighting compositing
DE102014115540A1 (en) * 2014-10-24 2016-04-28 Sick Ag Camera and method for capturing objects
US9609194B2 (en) 2014-10-24 2017-03-28 Sick Ag Camera and method for the detection of objects
WO2016131075A1 (en) * 2015-02-20 2016-08-25 Seeing Machines Limited Glare reduction
US10521683B2 (en) 2015-02-20 2019-12-31 Seeing Machines Limited Glare reduction
DE102015208087A1 (en) * 2015-04-30 2016-11-03 Carl Zeiss Microscopy Gmbh Method for generating a reflection-reduced contrast image and related devices
US10218916B2 (en) 2015-05-27 2019-02-26 Google Llc Camera with LED illumination
US10397490B2 (en) 2015-05-27 2019-08-27 Google Llc Camera illumination
US11219107B2 (en) 2015-05-27 2022-01-04 Google Llc Electronic device with adjustable illumination
US11596039B2 (en) 2015-05-27 2023-02-28 Google Llc Electronic device with adjustable illumination
US9866760B2 (en) 2015-05-27 2018-01-09 Google Inc. Multi-mode LED illumination system
US10306157B2 (en) 2015-06-12 2019-05-28 Google Llc Using images of a monitored scene to identify windows
US20160364612A1 (en) * 2015-06-12 2016-12-15 Google Inc. Using a Scene Illuminating Infrared Emitter Array in a Video Monitoring Camera to Estimate the Position of the Camera
US9886620B2 (en) * 2015-06-12 2018-02-06 Google Llc Using a scene illuminating infrared emitter array in a video monitoring camera to estimate the position of the camera
US10602065B2 (en) 2015-06-12 2020-03-24 Google Llc Tile-based camera mode switching
US9838602B2 (en) 2015-06-12 2017-12-05 Google Inc. Day and night detection based on one or more of illuminant detection, Lux level detection, and tiling
US10008003B2 (en) 2015-06-12 2018-06-26 Google Llc Simulating an infrared emitter array in a video monitoring camera to construct a lookup table for depth determination
US20190387202A1 (en) * 2015-06-12 2019-12-19 Google Llc Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
US10869003B2 (en) * 2015-06-12 2020-12-15 Google Llc Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
US9900560B1 (en) 2015-06-12 2018-02-20 Google Inc. Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
US10341560B2 (en) 2015-06-12 2019-07-02 Google Llc Camera mode switching based on light source determination
US10389954B2 (en) 2015-06-12 2019-08-20 Google Llc Using images of a monitored scene to identify windows
US10389986B2 (en) 2015-06-12 2019-08-20 Google Llc Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
US20170070712A1 (en) * 2015-09-04 2017-03-09 Panasonic Intellectual Property Management Co., Ltd. Lighting device, lighting system, and program
US10110865B2 (en) * 2015-09-04 2018-10-23 Panasonic Intellectual Property Management Co., Ltd. Lighting device, lighting system, and program
US10678108B2 (en) 2016-10-31 2020-06-09 Google Llc Electrochromic filtering in a camera
US10180615B2 (en) 2016-10-31 2019-01-15 Google Llc Electrochromic filtering in a camera
US11058513B2 (en) 2017-04-24 2021-07-13 Alcon, Inc. Stereoscopic visualization camera and platform
US11083537B2 (en) 2017-04-24 2021-08-10 Alcon Inc. Stereoscopic camera with fluorescence visualization
US10917543B2 (en) 2017-04-24 2021-02-09 Alcon Inc. Stereoscopic visualization camera and integrated robotics platform
US10299880B2 (en) 2017-04-24 2019-05-28 Truevision Systems, Inc. Stereoscopic visualization camera and platform
US11179035B2 (en) * 2018-07-25 2021-11-23 Natus Medical Incorporated Real-time removal of IR LED reflections from an image
EP3826528A4 (en) * 2018-07-25 2022-07-27 Natus Medical Incorporated Real-time removal of IR LED reflections from an image
WO2020023721A1 (en) * 2018-07-25 2020-01-30 Natus Medical Incorporated Real-time removal of IR LED reflections from an image
US11763445B2 (en) 2020-01-06 2023-09-19 Ricoh Company, Ltd. Inspection of a target object using a comparison with a master image and a strictness of a quality evaluation threshold value

Also Published As

Publication number Publication date
WO1999038121A1 (en) 1999-07-29
EP1050019A1 (en) 2000-11-08
AU2341199A (en) 1999-08-09
KR20010040433A (en) 2001-05-15
JP2002501265A (en) 2002-01-15

Similar Documents

Publication Publication Date Title
US6088470A (en) Method and apparatus for removal of bright or dark spots by the fusion of multiple images
US6021210A (en) Image subtraction to remove ambient illumination
US6055322A (en) Method and apparatus for illuminating and imaging eyes through eyeglasses using multiple sources of illumination
JP4258868B2 (en) Iris pattern recognition device
EP0932114B1 (en) A method of and apparatus for detecting a face-like region
US6069967A (en) Method and apparatus for illuminating and imaging eyes through eyeglasses
US8170293B2 (en) Multimodal ocular biometric system and methods
US20070110285A1 (en) Apparatus and methods for detecting the presence of a human eye
Nowara et al. Near-infrared imaging photoplethysmography during driving
US20180018516A1 (en) Method and apparatus for iris recognition
CN113892254A (en) Image sensor under display
KR20180102637A (en) Systems and methods of biometric analysis
CA2833599C (en) Method of pupil segmentation
KR20110094037A (en) Video infrared retinal image scanner
CN109255282B (en) Biological identification method, device and system
US20220148218A1 (en) System and method for eye tracking
Chen et al. Real-time eye localization, blink detection, and gaze estimation system without infrared illumination
Crihalmeanu et al. On the use of multispectral conjunctival vasculature as a soft biometric
US11179035B2 (en) Real-time removal of IR LED reflections from an image
WO2022005336A1 (en) Noise-resilient vasculature localization method with regularized segmentation
KR100647298B1 (en) Method and apparatus for processing an image taking light reflection into consideration, and computer-readable medium storing a computer program therefor
JP2688527B2 (en) Gaze direction detection method
KR101122513B1 (en) System and method for estimating eyeball position using three-dimensional position information
KR101762852B1 (en) Iris recognition apparatus
JPH04174304A (en) Apparatus for detecting eye position

Legal Events

Date Code Title Description
AS Assignment

Owner name: SENSAR, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAMUS, THEODORE A.;SALGANICOFF, MARCOS;CHMIELEWSKI, JR., THOMAS A.;AND OTHERS;REEL/FRAME:009032/0456

Effective date: 19980126

Owner name: SARNOFF CORPORATION, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAMUS, THEODORE A.;SALGANICOFF, MARCOS;CHMIELEWSKI, JR., THOMAS A.;AND OTHERS;REEL/FRAME:009032/0456

Effective date: 19980126

CC Certificate of correction
REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
AS Assignment

Owner name: PERSEUS 2000, L.L.C., AS AGENT, DISTRICT OF COLUMBIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:IRIDIAN TECHNOLOGIES, INC.;REEL/FRAME:015562/0039

Effective date: 20040701

FP Lapsed due to failure to pay maintenance fee

Effective date: 20040711

AS Assignment

Owner name: IRIDIAN TECHNOLOGIES, INC., NEW JERSEY

Free format text: RELEASE & TERMINATION OF INTELLECTUAL PROPERTY SEC;ASSIGNOR:PERSEUS 2000, L.L.C.;REEL/FRAME:016004/0911

Effective date: 20050330

AS Assignment

Owner name: BANK OF AMERICA, N.A., ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNORS:L-1 IDENTITY SOLUTIONS, INC.;IMAGING AUTOMATION, INC.;TRANS DIGITAL TECHNOLOGIES CORPORATION;AND OTHERS;REEL/FRAME:018679/0105

Effective date: 20061019

AS Assignment

Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:IRIDIAN TECHNOLOGIES, INC.;REEL/FRAME:021398/0128

Effective date: 20080805

AS Assignment

Owner name: IRIDIAN TECHNOLOGIES LLC, CONNECTICUT

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:026647/0403

Effective date: 20110725

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362