WO2004029861A1 - Illumination for face recognition - Google Patents

Illumination for face recognition

Info

Publication number
WO2004029861A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
light distribution
image
illumination
parameters
Prior art date
Application number
PCT/AU2003/001261
Other languages
French (fr)
Inventor
Edward Simon Dunstone
Fabrice Lestideau
Teewoon Tan
Original Assignee
Biometix Pty Ltd
Priority date
Filing date
Publication date
Application filed by Biometix Pty Ltd
Priority to AU2003265722A1
Publication of WO2004029861A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 Illumination specially adapted for pattern recognition, e.g. using gratings

Definitions

  • Analysis of a light distribution can be performed based on a set of training samples.
  • the average image from a set of training faces can be used to determine the horizontal and vertical histograms for the reference model.
  • the training faces may be collected from samples that have unbalanced illumination, in which case the resulting system attempts to match the illumination of the input face to that of the reference model.
  • Using horizontal and vertical histograms as light distribution models is one possible method of representing a light distribution. Other methods of analysing the light distribution can also be used.
  • the parameters used to model the light distribution can be adjusted based on a set of training images and also possibly on a series of live images.
  • the parameters are then fixed so that the light intensities are held at the trained setting. This assumes that the fixed setting will be suited for future subjects. This is usually true if the subjects of interest have similar properties, the camera and light sources are in a fixed position, and the environmental lighting does not vary significantly over time.
  • One of the advantages of holding the lighting conditions constant is a quicker response time.
  • The following difference parameter, given by Equation [2], is created from the vertical histograms of the input image and the reference model.
  • In Equation [4], h_I is the horizontal histogram obtained from the input image, h_R is the horizontal histogram from the reference model, and n is the total number of histogram bins. Values at the i-th bin are referenced by h_I(i) and h_R(i).
  • Another parameter representing the horizontal histogram differences is used, which is given by Equation [5].
  • This parameter imposes a condition of symmetry on the illumination matching process. For example, if h_R(i) is set to zero for all i, then this parameter in effect measures the symmetry of the left-right illumination. Matching the input image illumination to a reference that is created from an unbalanced lighting model may be desirable, in which case the reference model histogram contains the desired model.
  • m_L, m_R, m_T and m_B represent the raw amounts by which the intensities of the left, right, top and bottom lights should be adjusted, respectively.
  • The inclusion of the factor s_R enables the use of symmetry alone for the left and right illumination adjustments when h_R(i) is set to zero for all i.
  • Fig. 7 illustrates the effects of using the controllable lighting system 100 on the scene illustrated in Fig. 6.
  • The same face 605 is represented, but with improved illumination, now shown as face 705.
  • The reference model has a horizontal histogram of all zeros, so that only p_V and p_H are minimised.
  • the new horizontal and vertical histograms are indicated by 715 and 710 respectively.
  • The weighting factors b_L, b_R, b_T and b_B, and the previous settings l_L', l_R', l_T' and l_B', are controlled.
  • the weighting factors and the initial values for the previous settings can be pre-determined by experimentation to achieve an acceptable result.
  • The expression on the right-hand side of Equation [14] is minimised by treating the light source intensities l_L, l_R, l_T and l_B as variables, using any suitable method.
  • The controllable lighting system 100 can also be operated in a manual mode, as described herein with reference to Fig. 11, following a representation to the user of light distribution parameters relating to the spatial light distribution of the received image.
  • Fig. 12 represents examples of the form in which the light distribution parameters can be represented to the user via a graphical user interface of software executing on the computer system 200.
  • A first example involves the display of a graphical dial 1205, having a needle 1215 that rotates in proportion to the relative amounts of lighting on either side of the face of the subject. For example, the needle 1215 rotates further to the left if the left side of the face is more brightly illuminated than the right, and further to the right if the right side is more brightly illuminated than the left.
  • the length and colour of the needle 1215 can also be made to change with other light distribution parameters.
  • The needle 1215 can, for example, be programmed to graphically display the value of the left-right symmetry parameter.
  • a graph 1210 can also be used to represent the spatial light distribution of the image of the subject.
  • the graph 1210 is a representation of the left-right symmetry of the light distribution.
  • the graph 1210 is "positive" if the left side of the face is more brightly illuminated than the right side.
  • the graph 1210 becomes negative if the right side of the face is more brightly illuminated than the left side.
  • Other methods of graphing are possible, for example plotting the graph vertically instead of horizontally as in 1210.
  • A simple histogram 1000, such as that indicated in Fig. 10, can be presented to a user.
  • The light distribution parameters determined from the light distribution of the received image can be used to provide a simple representation of the light distribution in terms of a simple textual analysis, such as "right side darker" or "left side darker", suited to the particular application or subject.
  • a suite of messages, or related recommendations, can be provided to the user.
  • the assumed properties described with reference to the face recognition application described herein in the subsection entitled “Mapping to a representative number” can be used to provide such messages or recommendations.
  • a generic representation of the subject 110 can be graphically displayed, with a graphical indication of the light distribution being provided to a user within a frame representing the subject 110.
  • Use of "lighter" and "darker" areas may be made to provide a simplified but nonetheless useful representation of the light distribution. The user can act upon the representation of the light distribution as the user sees fit.
  • The representation of the light distribution provided to the user can be periodically updated to ensure "live" feedback corresponding with changes in lighting conditions, either as a result of ambient lighting changes or adjustments to the light sources 120.
  • the dial 1205 or graph 1210 can be updated in "real-time”; that is, an update occurs when a new light distribution assessment is periodically made.
  • The user can manually adjust the lighting of the subject in the manner desired to achieve a suitable lighting distribution, which is then measured and represented to the user.
  • Applications of the lighting system 100 described herein include face detection and recognition in an environment where the natural or artificial lighting is less than optimal.
  • The lighting system 100 can also be used in cases in which the images used for "registration" in image processing applications are taken under less than optimal conditions. These adverse lighting conditions can be simulated or approximated, which may improve the performance of the image processing system.
  • Other applications may involve detecting and reading hand and/or body gestures, in which case the same considerations apply.
  • Detecting the presence of three-dimensional subjects is also possible.
  • If, as the light sources are adjusted, the only changes in the light distribution are in brightness and/or contrast content (which may be uniform or non-uniform), without the accentuation or attenuation of shadows, then it is likely that the subject of interest is a two-dimensional image, such as a photograph.
  • Horizontal and vertical histograms can be used to monitor the changes in p_V and p_H for different light source intensity settings. Assuming that each light source produces a uniform illumination over a two-dimensional subject, if both p_V and p_H show insignificant changes as light intensities are varied, then the subject of interest is likely a two-dimensional image, as sketched below. This ability is useful in applications that require knowledge about the flatness of the subject, such as face recognition, where one can determine whether a person has held up a photograph to the camera in order to circumvent the system.
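A minimal sketch of this flatness test follows. The lighting and measurement hooks (apply_lights, measure_pv_ph) and the change threshold are hypothetical stand-ins for the system described above, not details taken from the patent.

```python
# Hedged sketch of the flatness test described above: vary the light source
# intensities and watch how far p_V and p_H move. The hooks `apply_lights`
# and `measure_pv_ph`, and the change threshold, are hypothetical.
import numpy as np

def subject_is_probably_flat(apply_lights, measure_pv_ph, test_settings, tol=0.05):
    """Return True when p_V and p_H barely change as the lighting varies."""
    samples = []
    for setting in test_settings:
        apply_lights(setting)            # drive the controllable lights 120
        samples.append(measure_pv_ph())  # (p_V, p_H) for this setting
    spread = np.ptp(np.array(samples), axis=0)   # max - min per parameter
    return bool(np.all(spread < tol))   # insignificant change: likely 2-D
```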

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The performance-degrading effects of unfavourable environmental lighting on face recognition systems can be assessed and quantified using a camera system operatively connected to a computer system. An assessment can be reported to the user via an appropriate output device, or stored in a storage medium for later use. The assessed lighting can then be used to alleviate the effects of unfavourable lighting conditions using one or more controllable light sources (120) that form part of a controllable lighting system. The controllable light sources (120) are located in a distributed manner relative to a camera (125), and are oriented to illuminate a subject viewed by the camera (125). An image of the subject is captured by the camera (125) and supplied to a computer system or other computing hardware. The computer system is in turn connected to a power distributor for controlling the power supplied to the controllable light sources (120).

Description

SUBJECT ILLUMINATION FOR FACE RECOGNITION
Field of the invention
The present invention relates generally to illumination for image processing applications, such as face recognition systems.
Background
Software and hardware applications incorporating face recognition algorithms are becoming increasingly common. Such algorithms typically use image capture devices such as still or video cameras. The resulting image data, whether captured image frames or video sequences of such frames, is typically processed using a variety of image processing techniques.
Face recognition systems operate in many cases under widely varying lighting conditions, ranging from office environments with fluorescent ceiling lights and halogen lamps to a partly or fully outdoor environment with strong exposure to natural sunlight. The performance of most if not all face recognition systems is strongly affected by environmental lighting conditions.
Also, changing the position and orientation of the capture device (that is, a camera) usually affects the recorded illumination of the subject. Furthermore, environments affected by sunlight will have different lighting conditions at different times of the day.
Accordingly, a need clearly exists for an improved manner of subject illumination for face recognition applications in view of these and other observations.
Summary
The performance-degrading effects of unfavourable environmental lighting on a face recognition system can be assessed using a camera operatively connected to a computer. The assessment can be displayed to a user via an output device. Parameters relating to the assessment of the lighting conditions can be used to alleviate unfavourable lighting conditions, using one or more controllable light sources that form part of a controllable lighting system. The controllable light sources are located in a distributed manner relative to a camera, and are oriented to illuminate a subject viewed by the camera. An image of the subject is captured by the camera and supplied to a computer system or other computing hardware. The computer system is in turn connected to a power distributor for controlling the power supplied to the controllable light sources.
The light distribution of the subject is analysed by the computer system from the image of the subject taken by the camera. Light intensity settings for the light sources are consequently calculated. The power supplied to the light sources is correspondingly adjusted by the power distributor connected to the computer system.
Examples of applications that may benefit from an active subject illumination system also include those that perform human face tracking and detection, human face extraction, hand gesture analysis and the detection of three-dimensional subjects. Machine vision applications also exist.
Description of drawings
Fig. 1 is a schematic diagram of a controllable lighting system that can be used to determine and modify the illumination of a subject.
Fig. 2 is a schematic diagram of a computer system that can be used in the controllable lighting system represented.
Fig. 3 is a schematic diagram of a power distributor that can be used in the controllable lighting system.
Fig. 4 is a flow chart that represents steps involved in determining new light intensities in the controllable lighting system.
Fig. 5 is a flow chart that represents steps involved in creating a light distribution model in a face recognition application.
Fig. 6 is a schematic diagram that represents vertical and horizontal light distribution histograms for unbalanced environmental lighting illuminating the subject of a human face.
Fig. 7 is a schematic diagram that represents vertical and horizontal histograms after compensatory lighting provided by the controllable lighting system.
Fig. 8 is a vertical histogram of an input signal of the subject of a human face having a large amount of eye shadows.
Fig. 9 is a vertical histogram of the reference model of a human face having some amount of eye shadows.
Fig. 10 represents two horizontal histograms used in the creation and comparison of the input and reference lighting distribution models.
Fig. 11 is a flow chart that represents steps involved in evaluating the light distribution on a subject and feeding that data back to the user via one or more output devices.
Fig. 12 is a diagram of examples for representing light distribution parameters to a user.
Detailed Description
Fig. 1 schematically represents a controllable lighting system 100 for actively illuminating a subject of interest 110. Camera 125 is positioned to capture the subject of interest 110, such as a human face, illuminated by controllable light sources 120. The camera 125 provides an image signal to a computer system 200, which has a video display 290. The light sources 120 are controlled by power distributor 300. The camera 125 and light sources 120 operate in conjunction with the controllable power distributor 300 and the computer system 200. Information relating to the lighting system 100 can be displayed to a user via the video display 290, and user input accepted by the computer system 200. Appropriate software executes on the computer system 200, the operation of which is described herein. Four light sources 120 are illustrated in Fig. 1, though more or fewer light sources 120 can be used in practice. At least one controllable light source 120 is used, and the subject 110 is in many cases also illuminated by static or dynamic ambient lighting, or a combination of both static and dynamic lighting. Static ambient light may be provided by regular lighting fixtures, while dynamic ambient lighting may be provided by daylight.
The camera 125 can be any suitable image capture device. As described herein, the camera 125 provides an output video signal that can be digitised to an effective resolution; a resolution of 320 pixels wide by 240 pixels high can be used. An example of a suitable camera is the Logitech QuickCam Pro Universal Serial Bus (USB) camera. The camera 125 is generally a visible-light camera. Other possibilities include, for example, an infrared-sensitive camera 125, coupled with infrared light sources 120.
The camera 125 provides an image data signal to the computing hardware 200. In the computer system 200, the image data signal is converted into an appropriate form through an image capture process. For example, the computer system 200 might include an analog-to-digital converter (ADC) that receives raw signal data from the camera 125 and then digitises the signal using the ADC.
The controllable light sources 120, as represented in Fig. 1, are attached at the ends of bars extending from the camera 125. This is simply one possible orientation that may be adopted. Various other numbers and orientations of light sources 120 can be used. Any suitable form of lighting boom or other apparatus can be used to position the light sources 120.
Furthermore, the light sources 120 are typically electrically-operated light sources, such as tungsten-element lamps. Other types of controllable light sources 120 can instead be used. In the case of electrically-operated lights, electrical power can be supplied to the light sources 120 either via the power distributor 300, or independently of the power distributor 300. Suitable lights include Cold Cathode Fluorescent Lights (CCFL), regular fluorescent lights and halogen lamps.
While wired, physical connections are represented in Fig. 1 between the camera 125, the computer system 200, the power distributor 300 and the light sources 120, wireless connections can alternatively be used for any one or more of these links. Wireless may be selected for convenience, as an example. A wireless connection between the power distributor 300 and the light sources 120 can be expected to require a separate supply of power to the light sources 120.
Computer hardware
Fig. 2 is a schematic representation of a computer system 200 that can be used in the lighting system 100 described with reference to Fig. 1.
The components of the computer system 200 include a computer 220, a keyboard 210 and mouse 215, and a video display 290. The computer 220 includes a processor 240, a memory 250, input/output (I/O) interfaces 260, 265, a video interface 245, and a storage device 255. The computer system 200 is connected via an input/output (I/O) interface 265 to the camera 125 and the power distributor 300.
Each of the components of the computer 220 is connected to an internal bus 230 that includes data, address, and control buses, to allow components of the computer 220 to communicate with each other via the bus 230.
The processor 240 is a central processing unit (CPU) that executes a suitable operating system and computer software executing under the operating system. The computer software is described in further detail below. The memory 250 includes random access memory (RAM) and read-only memory (ROM), and is used under direction of the processor 240.
The video interface 245 is connected to video display 290 and provides video signals for display on the video display 290. User input to operate the computer 220 is provided from the keyboard 210 and mouse 215. The storage device 255 can include a disk drive or any other suitable storage medium.
A suitable computer operating system is installed on the computer system 200. Application software for performing the control processes described below, and for providing a user interface to an operator, executes on the computer system 200 under the operating system. An operator can interact with the computer system 200 using the keyboard 210 and mouse 215 to operate the programmed computer software executing on the computer 220. The application software that executes on the computer system may be part of an image processing application that includes, as part of the application software, software components for controlling the light sources 120 as described herein.
The application software is programmed using any suitable computer programming language, and may be thought of as comprising various software code means for achieving particular steps. The computer software may be recorded on a portable storage medium, in which case, the computer software program is accessed by the computer system 200 from the storage device 255.
Other configurations or types of computer systems can be equally well used to implement the described techniques. The computer system 200 described above is described only as an example of a particular type of system suitable for implementing the described techniques. As an example, application-specific computer hardware having no, or only minimal, user interface controls might be used in place of the computer system 200 to provide computing capability for receiving an image signal from the camera 125 and providing light intensity settings to the power distributor 300. Examples include Personal Digital Assistants (PDAs), such as the Palm Pilot produced by Palm Corporation, and custom-designed stand-alone embedded systems.
Controllable light distributor
Fig. 3 schematically represents the power distributor 300 represented in Fig. 1. The power distributor 300 is connected to the computer 220, and to the controllable light sources 120.
The power distributor 300 receives control information from the computer 220, and provides, as output, controlled voltages to the light sources 120. The power distributor communicates with the computer 220 via an input/output (I/O) interface 390. The light intensity settings for channel 1 are stored in memory store 310. The intensity settings for channels 2, 3 and 4 are stored in respective memory stores 315, 320 and 325. The channel setting memory stores 310, 315, 320 and 325 can be implemented as latches, DRAM, SRAM, or some other form of memory device. The computer 220 sets or gets the values in the channel settings 310, 315, 320 and 325 via an internal bus 305. The internal bus includes data, address and control buses to allow the computer 220 to select the appropriate channel to read or write.
Pulse waveform generators 330, 335, 340 and 345 are connected to the respective channel setting memory stores 310, 315, 320 and 325. The pulse waveform generators 330 to 345 generate a pulse width modulated electrical signal, in which the width of the pulse is proportional to the light intensity values stored in memory stores 310 to 325. The smoothing circuits 350, 355, 360 and 365 are based on the charging and discharging of capacitors, which smooth out the pulse width modulated input signals. The output of each smoothing circuit is an analog voltage signal that is proportional to the input pulse width, which in turn is proportional to the light intensity value. The outputs of the smoothing circuits are then amplified using power amplifiers 370, 375, 380 and 385. The power amplifiers amplify the input signals to the appropriate voltage and provide enough current to drive the lights 120A, 120B, 120C and 120D.
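As a rough model of one channel of this chain, the sketch below maps a stored intensity setting to a PWM duty cycle, a smoothed voltage and an amplified drive voltage. The 8-bit setting range, logic level and amplifier gain are illustrative assumptions; the patent specifies no concrete electrical values.

```python
# Rough model of one power-distributor channel, assuming 8-bit intensity
# settings and illustrative electrical values; the patent gives no concrete
# logic levels or amplifier gains, so the constants here are hypothetical.

PWM_LOGIC_HIGH_V = 5.0   # assumed amplitude of the pulse waveform generator
AMPLIFIER_GAIN = 2.4     # assumed gain of the power amplifier stage

def channel_output_voltage(intensity_setting: int) -> float:
    """Model the memory store -> PWM -> smoothing -> amplifier chain."""
    if not 0 <= intensity_setting <= 255:
        raise ValueError("setting must fit the 8-bit channel memory store")
    duty_cycle = intensity_setting / 255.0       # pulse width proportional to setting
    smoothed_v = duty_cycle * PWM_LOGIC_HIGH_V   # capacitor smoothing yields the mean voltage
    return smoothed_v * AMPLIFIER_GAIN           # amplified to drive the lamp

# Four channels, as in Fig. 3: one drive voltage per light 120A-120D.
voltages = [channel_output_voltage(s) for s in (200, 120, 120, 60)]
```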
Procedural overview
Fig. 4 is a flow chart that outlines the procedural steps in the operation of the lighting system 100, and which are predominantly performed by the computer system 200.
Image data from the camera 125 is received by the computer system 200 in step 405. A light distribution relating to the captured image data is calculated using the computer system 200 in step 410. This light distribution is subsequently analysed in step 415.
Light distribution parameters are adjusted in step 420. A feedback process can be used, in which a revised distribution is calculated and analysed in repeated steps 410 and 415. This feedback loop can be cycled as required until the light distribution parameters are deemed satisfactory.
The created light distribution model is analysed in step 415. The results of this analysis are ultimately used to adjust the light intensity settings for each light source 120 in step 435. Feedback can be provided in adjustment steps 410 to 420 to iteratively adjust the light model. This feedback is optional and gives an extra level of control over the model creation process; model parameters are modified based on the results of the distribution model analysis.
The light distribution parameters, so adjusted, can be compared with a reference light distribution in step 425. A reference light distribution model is compared with the light distribution model created from the input image. Parameters representing the differences are generated. The reference light distribution model can be created from a set of training images. Another possibility is to create the reference light distribution model from expert knowledge.
New light intensity settings are calculated as a result of comparing with the reference light model in step 430. In this step, the difference parameters generated in step 425 are mapped to numbers, one value for each light source. For example, if the difference parameters are the values in a histogram, the mapping function could simply return the average over all bins plus an offset. The resulting value is the new light intensity setting. In another example, the difference parameters are already single values, and in this case the mapping function could simply scale that value and add an offset to create the new light intensity value.
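The two example mapping functions just described can be sketched directly; the scale and offset values below are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the two example mapping functions described above; the scale and
# offset values are illustrative assumptions, not values from the patent.

def map_histogram_difference(bins, offset=10.0):
    """Histogram-valued difference parameters: average over all bins plus an offset."""
    return sum(bins) / len(bins) + offset

def map_scalar_difference(value, scale=0.5, offset=10.0):
    """Single-valued difference parameter: scale it and add an offset."""
    return scale * value + offset

# One such value is produced per light source in step 430.
new_intensity = map_histogram_difference([4.0, -2.0, 7.5, 1.5])
```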
Power settings supplied to the controllable light sources 120 can be then adjusted using the power distributor 300 in step 435. The power distributor 300 converts the light settings, received from the computer system 200, to corresponding voltage levels for the light sources 120.
Alternatively, a human operator can take partial or complete control over the generation of new light intensity settings in step 430. The human operator may choose to set the light source intensities to give the desired illumination. These intensities may then be fixed and used in future image captures. The light intensities set by the human operator can also be used as initial set points for the automatic system.
Fig. 11 is a flow chart of a further alternative procedure. Steps 405', 410' and 415', corresponding with steps 405, 410 and 415, are performed as described with reference to Fig. 4. The light distribution parameters from step 415' can be represented for the benefit of the user using the video display 290 in step 420'. This representation of the light distribution parameters can be provided by a graph, or some other form of graphical representation. Non-graphical representation is also possible and may take the form, for example, of an audio alert, such as a sound that changes amplitude in proportion to one of the light distribution parameters.
The illumination of the subject can be manually adjusted by a user viewing the displayed needle or graph to provide a favoured or desired light distribution. The illumination can be controlled using the computer system 200 and the power distributor 300. A manual interface can be provided by the computer system 200 to allow a user to manually adjust power distributed to the controllable light sources 120. Alternatively, the power can be adjusted directly at the power distributor 300, or through manual adjustment of the light sources 120 themselves. That is, a user can manually change the position or orientation of the light sources 120, for example, with respect to the subject.
Control regimes
There are several modes of operation of the lighting system 100 described above. Three modes are described below, though others are possible. A first mode of operation (Mode I) operates as a classical feedback system. A second mode of operation (Mode II) performs an unconstrained optimisation. A third mode of operation (Mode III) tests different combinations of possible light source intensities to determine a satisfactory combination. Each of these three modes is described in further detail below.
Mode I operation - classical feedback
The lighting system 100 can be treated as a "plant" to be controlled. The reference light distribution model is effectively the reference input or set point. Model parameters are adjusted in step 420 to create an active feedback system.
Proportional, integral, derivative and combined PID controllers can be implemented by using the appropriate mapping function when mapping to new light intensities in step 430. Using the appropriate feedback controller may reduce unwanted oscillations in the system.
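A minimal sketch of such a controller for a single light channel follows, treating the mapped difference parameter as the measured value and zero difference as the set point. The gains are illustrative assumptions and would need tuning.

```python
# Minimal sketch of a discrete PID controller acting as the step 430 mapping
# function for one light channel. The gains are illustrative assumptions and
# would need tuning; poor gains produce the oscillations mentioned above.

class LightChannelPID:
    def __init__(self, kp=0.8, ki=0.1, kd=0.05, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target difference from the reference model
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_difference, dt=1.0):
        """Map the latest difference parameter to an intensity adjustment."""
        error = self.setpoint - measured_difference
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```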
Mode II operation - unconstrained optimisation
The lighting system 100 can also operate as an unconstrained optimiser. An objective function is created, derived from the multidimensional output of step 425. For example, the objective function might simply be the mapping function of step 430; that is, the output from step 430 is used as the function value. A standard unconstrained optimisation can then be performed to find the minimum function value. The parameters for the optimisation are the model parameters, adjusted in step 420.
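One plausible realisation of this mode is sketched below, using a derivative-free optimiser over the four light intensities. The objective function light_distribution_difference is a hypothetical stand-in for the real pipeline of driving the lights, capturing a frame and comparing against the reference model.

```python
# Sketch of Mode II using a derivative-free optimiser over four intensities.
# `light_distribution_difference` is a hypothetical stand-in for the real
# pipeline (drive the lights, capture a frame, compare against the reference).
import numpy as np
from scipy.optimize import minimize

def light_distribution_difference(intensities: np.ndarray) -> float:
    # Placeholder objective: in the real system this would set the lights via
    # the power distributor, grab an image and return the mapped difference.
    target = np.array([180.0, 180.0, 120.0, 60.0])
    return float(np.sum((intensities - target) ** 2))

result = minimize(light_distribution_difference,
                  x0=np.array([128.0] * 4),  # initial left/right/top/bottom settings
                  method="Nelder-Mead")      # derivative-free suits a noisy plant
best_intensities = result.x
```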
Mode III operation - testing
Testing of possible light source intensities, subject to some constraints, can be performed. A search is made for a light intensity distribution that minimises the difference values calculated in step 425. In this mode, a set of pre-programmed light intensities is stored in step 430. Each setting in this set is sent to the light power distributor via step 435 in a sequential, or otherwise controlled, manner. A pause/delay may be added before a new light setting is sent to step 435. The difference parameters obtained in step 425 for each light setting in the set are stored in memory 250 or a storage device 255, along with the corresponding light intensity setting used. These differences and their corresponding light settings can later be retrieved, and the light setting that minimises the differences selected for use in step 430.
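A sketch of this testing loop follows. The hooks apply_lights and measure_difference are hypothetical stand-ins for steps 435 and 425, and the settling delay is an assumption.

```python
# Sketch of Mode III: sweep a pre-programmed set of intensities, record the
# difference parameter for each, then keep the best. `apply_lights` and
# `measure_difference` are hypothetical hooks into steps 435 and 425.
import time

def test_light_settings(settings, apply_lights, measure_difference, settle_s=0.5):
    results = []
    for setting in settings:
        apply_lights(setting)          # send the setting to the power distributor
        time.sleep(settle_s)           # pause so lamps and camera exposure settle
        diff = measure_difference()    # difference parameters from step 425
        results.append((diff, setting))
    # Retrieve the light setting that minimises the differences (step 430).
    return min(results, key=lambda r: r[0])[1]
```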
Overview of face recognition application
Fig. 5 provides an outline, in greater detail, of the step 415 of analysing the light distribution described above for a facial recognition process. In this case, the subject of interest is a human face. In step 505, the incoming image signal is first normalised for size, contrast and brightness settings, before further processing. This step 505 may be unnecessary in many cases, as the received image signal is of sufficient quality to use without performing such normalisation.
Next, in step 510, eye locations are detected using any suitable technique. The eyes can be detected in step 510 using techniques ranging from neural networks and support vector machines (SVMs) to template matching. SVMs, which are kernel-based learning machines, can be used to implement Structural Risk Minimisation. As an example, one can refer to N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
Assuming only a single face appears in the image, the best left and right eye locations then indicate the most likely face location, whereby correction for head size and in-plane rotation can be performed in step 515. The face image is also cropped with a bounding box that encompasses only the face, without the jaw, neck or hairline. This reduces inconsistencies that may be associated with different hairstyles, clothing and/or jewellery. Pixel intensities are projected onto the vertical and horizontal axes to create two histograms, as described in step 520.
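A hedged sketch of this normalisation step follows, rotating about the eye midpoint so the eyes are level and then cropping. The crop proportions are assumptions, as the patent does not give exact bounding-box ratios.

```python
# Hedged sketch of step 515: rotate the image about the eye midpoint so the
# detected eyes are level, then crop a face box excluding jaw, neck and
# hairline. The crop proportions are assumptions.
import numpy as np
import cv2

def normalise_face(image, left_eye, right_eye):
    (lx, ly), (rx, ry) = left_eye, right_eye
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0          # eye midpoint stays fixed
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # in-plane head rotation
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    h, w = image.shape[:2]
    level = cv2.warpAffine(image, M, (w, h))
    d = float(np.hypot(rx - lx, ry - ly))              # eye distance sets the scale
    x0, x1 = int(cx - d), int(cx + d)                  # assumed crop: brows to
    y0, y1 = int(cy - 0.6 * d), int(cy + 1.4 * d)      # just above the jaw line
    return level[max(0, y0):y1, max(0, x0):x1]
```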
Projections are performed by summing all the pixel values along each row for the vertical histogram, and summing all the pixel values along each column for the horizontal histogram. The horizontal and vertical histograms for the reference model are created from the projections of the average image of a set of training images. The histograms represent the light distribution model for the input face.
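These projections translate directly into code; the sketch below assumes the normalised face crop is available as a two-dimensional intensity array.

```python
# Direct sketch of the step 520 projections, assuming the normalised face
# crop is available as a 2-D intensity array.
import numpy as np

def light_distribution_histograms(face: np.ndarray):
    vertical = face.sum(axis=1)     # one bin per row of pixels
    horizontal = face.sum(axis=0)   # one bin per column of pixels
    return vertical, horizontal

def reference_histograms(training_faces: np.ndarray):
    # training_faces: stack of normalised face crops, shape (n, height, width);
    # the reference model projects the average image of the training set.
    return light_distribution_histograms(training_faces.mean(axis=0))
```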
Lighting for facial recognition
Fig. 6 schematically represents an unbalanced environmental light illuminating a human face from one direction. The sun 600 is at the top-left corner and casts shadows under the eyes and nose of the face 605. The horizontal 615 and vertical 610 histograms are shown.
In Fig. 6, the deep indentations marked at the beginning and end by dashed lines are due to the darker nature of the eyes, eyebrows and nose bridge. Shadows can be caused by unbalanced illumination, usually from lighting that is biased to greater intensities coming from the top of the head. The width and depth of the indentations will typically become more distinct as the amount of shadow increases. Therefore, the amount of light required at the top and bottom can be determined by examining the width of the indentation. The beginning and end of the dark areas around the eyes for the input image are indicated by D1 and D2 respectively. The beginning and end of the dark areas around the eyes for the reference model are indicated by R1 and R2 respectively.
Eye detection
The input face must be extracted prior to further processing. Face extraction is achieved by first detecting the eye locations. Once the locations of the eyes are known the face can be extracted and normalised for size and in-plane rotations. The description of a method for detecting the eyes using Support Vector Machines is given below.
Given a set of training data, in this case samples representing eyes and non-eyes, a decision hyperplane that separates those samples into two classes with a maximal margin is computed during the training phase. Depending on the kernel type, the samples are mapped to a linear or non-linear space in which this hyperplane is determined using constrained optimisation. A decision function can then be created in terms of the kernel k, as presented in Equation [1] below.
f(x) = \sum_{i=1}^{l} y_i \alpha_i k(x, x_i) + b \qquad [1]
In Equation [1] above, l is the number of support vectors, y_i are the class labels {-1, 1}, α_i are the weighting factors found during the optimisation process, b is the bias of the hyperplane, x_i are the support vectors and x is the unknown sample to be classified. The classification phase involves evaluating the decision function for the unknown sample x, where f(x) greater than zero indicates that x probably belongs to the positive class +1. Similarly, a negative f(x) indicates that x is more likely to belong to the negative class -1. If f(x) = 0 then x can be assigned arbitrarily to either class. In this embodiment, images of eyes centred on the left and right eyes are used as positive class samples, and negative class samples are obtained by the random sampling of scenes not containing eyes. For simplicity, the linear kernel was used, where k(x, x_i) = ⟨x · x_i⟩.
Training can be performed using various algorithms. A suitable example is Sequential Minimal Optimisation. A relevant reference for such a technique is J. Platt, Fast Training of Support Vector Machines Using Sequential Minimal Optimization, in B. Schölkopf, C. J. C. Burges and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pp. 185-208, MIT Press, Cambridge, MA, 1999. Chunking methods can also be used as an alternative. Eye detection can be performed by using the trained SVM in a template matching process, where sub-images in the input image are extracted at selected locations and used in the evaluation of the decision function in Equation [1]. A more refined search can be performed at the locations that give the most positive f(x) by using smaller incremental steps. The locations with the most positive f(x) are the most likely eye locations. An alternative procedure uses two different SVMs, one for the left eye and one for the right eye. A knowledge base can be used to reduce the total number of left-right eye combinations by including conditions such as minimum and maximum eye distances.
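A minimal sketch of this template-matching search is given below in Python, using the linear-kernel SVM of scikit-learn. The helper names, the patch layout and the coarse step size are assumptions, and the refined search at the best locations is omitted for brevity.

import numpy as np
from sklearn.svm import SVC

def train_eye_svm(eye_patches, non_eye_patches):
    # Flattened image patches as rows; labels +1 (eye) and -1 (non-eye)
    X = np.vstack([eye_patches, non_eye_patches])
    y = np.hstack([np.ones(len(eye_patches)), -np.ones(len(non_eye_patches))])
    svm = SVC(kernel="linear")  # linear kernel k(x, x_i) = <x . x_i>
    svm.fit(X, y)
    return svm

def detect_eye(image, svm, patch_h, patch_w, step=4):
    # Evaluate the decision function f(x) of Equation [1] over a coarse grid
    # of sub-image locations and return the most positive response.
    best_f, best_loc = -np.inf, None
    for r in range(0, image.shape[0] - patch_h, step):
        for c in range(0, image.shape[1] - patch_w, step):
            x = image[r:r + patch_h, c:c + patch_w].ravel()[None, :]
            f = svm.decision_function(x)[0]
            if f > best_f:
                best_f, best_loc = f, (r, c)
    return best_loc, best_f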
Light distribution model and analysis
Analysis of a light distribution can be performed based on a set of training samples. For example, in the human face application described above, the average image from a set of training faces can be used to determine the horizontal and vertical histograms for the reference model. The training faces may be collected from samples that have unbalanced illumination, in which case the resulting system attempts to match the illumination of the input face to that of the reference model. Using horizontal and vertical histograms as light distribution models is one possible method of representing a light distribution. Other methods of analysing the light distribution can also be used.
The parameters used to model the light distribution can be adjusted based on a set of training images and also possibly on a series of live images. The parameters are then fixed so that the light intensities are held at the trained setting. This assumes that the fixed setting will be suited for future subjects. This is usually true if the subjects of interest have similar properties, the camera and light sources are in a fixed position, and the environmental lighting does not vary significantly over time. One of the advantages of holding the lighting conditions constant is a quicker response time.
Determining difference parameters
The following difference parameter of Equation [2] below is created from the vertical histograms of the input image and the reference model:

p_V = (R_2 - R_1) - (D_2 - D_1) \qquad [2]

The parameters representing the differences in the input and reference horizontal histogram models are generated by considering the left and right halves of the histograms separately, as illustrated in Fig. 10. The sum of the difference between the left halves is calculated by the expression of Equation [3] below.
p_{HL} = \sum_{i=1}^{n/2} \bigl( h_I(i) - h_R(i) \bigr) \qquad [3]
The sum of the difference between the right halves is calculated by the expression of Equation [4] below.
p_{HR} = \sum_{i=n/2+1}^{n} \bigl( h_I(i) - h_R(i) \bigr) \qquad [4]
In Equation [4] above, h_I is the horizontal histogram obtained from the input image, h_R is the horizontal histogram from the reference model and n is the total number of histogram bins. Values at the i-th bin are referenced by h_I(i) and h_R(i). Another parameter representing the horizontal histogram differences is used, which is given by Equation [5] below.
p_H = p_{HL} - p_{HR} \qquad [5]
Adding this parameter imposes a condition of symmetry on the illumination matching process. For example, if hR(i) is set to zero for all i, then this parameter in effect measures the symmetry of the left-right illumination. Matching the input image illumination to a reference created from an unbalanced lighting model may be desirable, in which case the reference model histogram contains the desired model.
Four difference parameters, pV, pHL, pHR and pH, are thus obtained.
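The four parameters can be computed directly from the histograms. The following Python sketch assumes the indentation marks D1, D2, R1 and R2 have already been located, and uses the reconstructed form of Equation [5] (pH = pHL - pHR); both the function name and that form should be read as assumptions rather than as definitive definitions.

import numpy as np

def difference_parameters(h_in, h_ref, d_marks, r_marks):
    # h_in, h_ref: input and reference horizontal histograms (length n)
    # d_marks = (D1, D2), r_marks = (R1, R2): eye-region indentation bounds
    D1, D2 = d_marks
    R1, R2 = r_marks
    p_v = (R2 - R1) - (D2 - D1)                 # Equation [2]
    diff = np.asarray(h_in, float) - np.asarray(h_ref, float)
    half = len(diff) // 2
    p_hl = diff[:half].sum()                    # Equation [3]
    p_hr = diff[half:].sum()                    # Equation [4]
    p_h = p_hl - p_hr                           # Equation [5], reconstructed form
    return p_v, p_hl, p_hr, p_h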
Mapping to a representative number
The difference parameters are mapped to a representative number, one for each light source 120. The following properties are assumed. References to "left hand" and "right hand" sides are relative to the perspective from camera 125:
• If pV is greater than zero then the input image probably has less shadow under the eyes, or less eye detail, than the reference model, so the top illumination should be increased and/or the bottom illumination decreased.
• If pV is less than zero then the input image probably has more shadow under the eyes than the reference model, so the top illumination should be decreased and/or the bottom illumination increased.
• If pV is equal to zero, or |pV| < τ0 for a small threshold τ0, then the input image probably has lighting around the eyes that approximately matches that of the reference model, so no further changes are required.
• If pH is greater than zero then the current right hand side of the subject is probably more strongly illuminated than the left, so the left illumination should be increased and/or the right illumination decreased;
• If pH is less than zero then the current left hand side of the subject is probably more strongly illuminated than the right, so the right illumination should be increased and/or the left illumination decreased;
• If pH is equal to zero, or |pH| < τ0 for a small threshold τ0, then the balance of the left and right illuminations of the input image probably matches that of the reference model, but changes might still be required to account for brightness.
• If pHL is greater than zero then the left illumination probably needs to be decreased;
• If pHL is less than zero then the left illumination probably needs to be increased;
• If pHL is equal to zero, or |pHL| < τ0 for a small threshold τ0, then the left illumination probably does not need to be changed;
• If pHR is greater than zero then the right illumination probably needs to be decreased;
• If pHR is less than zero then the right illumination probably needs to be increased; and
• If pHR is equal to zero, or |pHR| < τ0 for a small threshold τ0, then the right illumination probably does not need to be changed.
With the properties noted above, the following maps of Equations [6] to [9] below are created.
m_L = -\operatorname{sign}(p_{HL}) \times \bigl( p_H - p_{HL}\, s_R \bigr) \qquad [6]

m_R = \operatorname{sign}(p_{HR}) \times \bigl( p_H + p_{HR}\, s_R \bigr) \qquad [7]

m_T = p_V \qquad [8]

m_B = -p_V \qquad [9]
In Equations [6] to [9] above, sR = a Σi hR(i), where a is a weighting factor that can be adjusted for better performance, and mL, mR, mT and mB represent the raw amounts by which the intensities of the left, right, top and bottom lights should be adjusted respectively. The inclusion of the factor sR enables the use of just symmetry alone for the left and right illumination adjustments when hR(i) is set to zero for all i.
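In code, the mapping of Equations [6] to [9] is a direct transcription. The Python sketch below assumes sR = a * sum(hR), as reconstructed above, and that the four difference parameters have already been computed.

import numpy as np

def raw_adjustments(p_v, p_hl, p_hr, p_h, h_ref, a=1.0):
    s_r = a * float(np.sum(h_ref))              # s_R, as reconstructed above
    m_l = -np.sign(p_hl) * (p_h - p_hl * s_r)   # Equation [6]
    m_r = np.sign(p_hr) * (p_h + p_hr * s_r)    # Equation [7]
    m_t = p_v                                   # Equation [8]
    m_b = -p_v                                  # Equation [9]
    return m_l, m_r, m_t, m_b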
Fig. 7 illustrates the effects of using the controllable lighting system 100 on the scene illustrated in Fig. 6. The same face 605 is represented, but with improved illumination, now shown as face 705. The reference model has a horizontal histogram of all zeros, so that only pV and pH are minimised. The new horizontal and vertical histograms are indicated by 715 and 710 respectively.
Mode I mapping considerations
The mapping presented in Equations [10] to [13] below is performed to achieve feedback.
l_L = b_L m_L + l_L' \qquad [10]

l_R = b_R m_R + l_R' \qquad [11]

l_T = b_T m_T + l_T' \qquad [12]

l_B = b_B m_B + l_B' \qquad [13]
In Equations [10] to [13] above, the weighting factors bL, bR, bT and bB, and the previous settings lL', lR', lT' and lB', are controlled. The weighting factors and the initial values for the previous settings can be pre-determined by experimentation to achieve an acceptable result.
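The feedback update of Equations [10] to [13] then reduces to one line per light source, as in the following Python sketch; the dictionary layout keyed by 'L', 'R', 'T' and 'B' is an assumed convenience, not part of the description.

def mode_one_update(m, prev, b):
    # m: raw adjustments, prev: previous settings l', b: weighting factors,
    # each a dict keyed by light position.
    return {k: b[k] * m[k] + prev[k] for k in ("L", "R", "T", "B")}

# new_settings = mode_one_update(m={"L": m_l, "R": m_r, "T": m_t, "B": m_b},
#                                prev=previous_settings, b=weights)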
Mode II mapping considerations
The variables lL, lR, lT and lB are optimised using a standard unconstrained optimisation approach. An example of a standard unconstrained optimisation method is described in William H. Press, Brian P. Flannery, Saul A. Teukolsky and William T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, 2nd Edition, 1993. The objective function is shown in Equation [14] below.
F(l_L, l_R, l_T, l_B) = m_L^2 + m_R^2 + m_T^2 + m_B^2 \qquad [14]
The expression on the right-hand side of Equation [14] is minimised by modifying the light source intensities lL, lR, lT and lB as variables using any suitable method.
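A sketch of Mode II in Python follows, using scipy.optimize. The hooks set_lights and measure_adjustments, which would drive the power distributor and re-run the analysis of steps 415 to 425, are hypothetical placeholders.

import numpy as np
from scipy.optimize import minimize

def make_objective(set_lights, measure_adjustments):
    # Build the objective of Equation [14] around the two hardware hooks.
    def objective(intensities):
        set_lights(intensities)             # apply (l_L, l_R, l_T, l_B)
        m = measure_adjustments()           # returns (m_L, m_R, m_T, m_B)
        return float(np.sum(np.square(m)))  # Equation [14]
    return objective

# A derivative-free method suits an objective evaluated by physical measurement:
# result = minimize(make_objective(set_lights, measure),
#                   x0=np.full(4, 0.5), method="Nelder-Mead")
# l_L, l_R, l_T, l_B = result.x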
Mode III mapping considerations
Each of the intensities lL, lR, lT and lB is incremented from zero in quantum steps qi, and all possible combinations are attempted. Each light intensity setting is sent to the power distributor 300, and the resulting light distribution difference with respect to the light reference model is recorded. Once all light settings have been attempted, the light intensity is set to the combination that minimises the value mL² + mR² + mT² + mB².
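A Python sketch of the Mode III exhaustive search is given below; measure is the same hypothetical measurement hook as above, and the pause between settings mirrors the delay described for step 435.

import itertools
import time
import numpy as np

def mode_three_search(levels, measure, settle=0.5):
    # levels: the quantised intensity values, e.g. np.arange(0.0, 1.01, 0.25)
    best_cost, best_setting = np.inf, None
    for setting in itertools.product(levels, repeat=4):  # (l_L, l_R, l_T, l_B)
        m = measure(setting)               # apply setting, return (m_L, m_R, m_T, m_B)
        cost = float(np.sum(np.square(m)))
        if cost < best_cost:
            best_cost, best_setting = cost, setting
        time.sleep(settle)                 # pause before the next light setting
    return best_setting, best_cost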
Manual operation
The controllable lighting system 100 can also be operated in a manual mode, as described herein with reference to Fig. 11, following a representation to the user of light distribution parameters relating to the spatial light distribution of the received image.
Fig. 12 represents examples of the form in which the light distribution parameters can be represented to the user via a graphical user interface of software executing on the computer system 200. A first example involves the display of a graphical dial 1205, having a needle 1215 that rotates in proportion to the relative amounts of lighting on either side of the face of the subject. For example, the needle 1215 rotates further to the left if the left side of the face is more brightly illuminated than the right, and rotates to the right if the right side of the face is more brightly illuminated than the left. The length and colour of the needle 1215 can also be made to change with other light distribution parameters. As a simple example, the needle 1215 can be programmed to graphically display the value of the parameter pH.
A graph 1210 can also be used to represent the spatial light distribution of the image of the subject. The graph 1210 is a representation of the left-right symmetry of the light distribution: it is "positive" if the left side of the face is more brightly illuminated than the right side, and negative if the right side of the face is more brightly illuminated than the left side. Other methods of graphing are possible, for example plotting the graph vertically instead of horizontally as in 1210. Also, a simple histogram 1000, such as indicated in Fig. 10, can be presented to a user.
As a further variation, the light distribution parameters determined from the received image can be used to provide a simple textual analysis of the light distribution, such as "right side darker" or "left side darker", suited to the particular application or subject. A suite of messages, or related recommendations, can be provided to the user. The assumed properties set out in the subsection entitled "Mapping to a representative number" for the face recognition application can be used to generate such messages or recommendations.
Similarly, a generic representation of the subject 110 can be graphically displayed, with a graphical indication of the light distribution being provided to a user within a frame representing the subject 110. Use of "lighter" and "darker" areas may be made to provide a simplified but nonetheless useful representation of the light distribution. The user can act upon the representation of the light distribution as the user sees fit.
The representation of the light distribution provided to the user can be periodically updated to provide "live" feedback corresponding with changes in lighting conditions, whether as a result of ambient lighting changes or of adjustments to the light sources 120. The dial 1205 or graph 1210 can be updated in "real-time"; that is, an update occurs whenever a new light distribution assessment is periodically made. The user can manually adjust the lighting of the subject in any manner desired to achieve a suitable lighting distribution, which is measured and represented to the user.
Conclusion
Applications of the lighting system 100 described herein include face detection and recognition in an environment where the natural or artificial lighting is less than optimal. The lighting system 100 can also be used in cases in which the images used for "registration" in image processing applications are taken under less than optimal conditions. These adverse lighting conditions can then be simulated or approximated, which may improve the performance of the image processing system. Other applications may involve detecting and reading hand and/or body gestures, to which the same considerations apply.
Detecting the presence of three-dimensional subjects is also possible. The light sources are adjusted, and if the only changes in the light distribution are in the brightness and/or contrast content, which may be uniform or non-uniform, without the accentuation or attenuation of shadows, then it is likely that the subject of interest is a two-dimensional image, such as a photograph.
Horizontal and vertical histograms can be used to monitor the changes in pV and pH for different light source intensity settings. Assuming that each light source produces a uniform illumination over a two-dimensional subject, if both pV and pH show insignificant changes as the light intensities are varied, then the subject of interest is likely a two-dimensional image. This ability is useful in applications that require knowledge about the flatness of the subject, such as face recognition, where one can determine whether a person has held up a photograph to the camera in order to circumvent the system.
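Such a flatness check can be expressed as a simple threshold test, as in the Python sketch below; the tolerance value is an assumed threshold that would need tuning for a given installation.

import numpy as np

def is_flat_subject(p_v_samples, p_h_samples, tol=1e-2):
    # p_v_samples, p_h_samples: pV and pH recorded over varied light settings.
    # A flat subject (e.g. a photograph) shows little change in either.
    return bool(np.ptp(p_v_samples) < tol and np.ptp(p_h_samples) < tol)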
Various alterations and modifications can be made to the techniques and arrangements described herein, as would be apparent to one skilled in the relevant art.

Claims

1. A method for measuring the illumination on a subject for a face recognition system, the method comprising the steps of:
(i) receiving an image relating to a subject of interest;
(ii) analyzing the spatial light distribution of the subject based upon the received image; and
(iii) calculating one or more light intensity settings for one or more controllable light sources based upon the light distribution of the subject.
2. The method as claimed in claim 1, further comprising the step of (iv) adjusting the power supplied to the one or more controllable light sources to correspond with the one or more calculated light intensity settings.
3. The method as claimed in claim 2, further comprising the step of repeatedly performing steps (i) to (iv) for a series of received images.
4. The method as claimed in claim 2, further comprising the step of iteratively performing steps (i) to (iv) for a received image.
5. The method as claimed in claim 1, further comprising the step of providing a representation of the light distribution of the subject.
6. The method as claimed in claim 1, further comprising the step of recording one or more parameters that characterise the light distribution of the subject.
7. The method as claimed in claim 1, further comprising calculating one or more parameters that characterise the light distribution of the subject.
8. The method as claimed in claim 1, further comprising the step of storing a reference light distribution for comparison with the light distribution of the subject.
9. The method as claimed in claim 8, further comprising the step of comparing the light distribution for the subject with the stored reference light distribution.
10. The method as claimed in claim 8, wherein the reference light distribution is determined upon the basis of training images.
11. The method as claimed in claim 8, wherein the reference light distribution is generated with reference to the received image.
12. The method as claimed in claim 8, further comprising the step of calculating one or more parameters representing the difference between the light distribution of the subject and the reference light distribution.
13. The method as claimed in claim 12, further comprising the step of calculating the one or more light intensity settings based upon the one or more difference parameters.
14. The method as claimed in claim 13, further comprising the step of mapping the one or more parameters into one or more numbers representative of the new lighting intensity settings.
15. The method as claimed in claim 1, further comprising the step of normalising at least one of the contrast and brightness of the received image.
16. The method as claimed in claim 1, further comprising the step of normalising at least one of the scale and rotational orientation of the received image.
17. The method as claimed in claim 1, further comprising the step of detecting a location of at least one reference point in the received image.
18. The method as claimed in claim 1, further comprising the step of summing the intensities of picture elements (pixels) of the image along at least one axis of the image.
19. A system for illuminating a subject comprising:
at least one controllable light source for illuminating a subject;
a camera for capturing an image of the subject;
computing hardware operatively connected to the camera for receiving the image and calculating light intensity settings relating to the controllable light sources based upon a light distribution of the subject calculated from the received image; and
a power distributor operatively connected to the controllable light sources for receiving calculated light intensity settings from the computing hardware, and correspondingly controlling power supplied to the controllable light sources.
20. Computing hardware for facilitating illumination of a subject using at least one controllable light source, the computing hardware comprising:
a memory for storing an image relating to a subject of interest;
a processor for analyzing the light distribution of the subject based upon the received image, and for calculating proposed light intensity settings for the respective one or more controllable light sources based upon the light distribution of the subject; and
an input/output interface for receiving the stored image, and for transmitting the light intensity settings for the controllable light sources.
21. A computer program product for facilitating illumination of a subject using one or more controllable light sources, the computer program product comprising software components for performing the steps of:
(i) receiving an image relating to a subject of interest;
(ii) analyzing the spatial light distribution of the subject based upon the received image; and
(iii) calculating one or more light intensity settings for one or more controllable light sources based upon the light distribution of the subject.
22. A method for measuring the illumination of a subject comprising the steps of:
(i) receiving an image relating to a subject of interest;
(ii) analyzing the spatial light distribution of the subject based upon the received image; and
(iii) representing the light distribution of the subject to a user.
23. The method as claimed in claim 22, further comprising the step of adjusting the power supplied to the one or more light sources able to illuminate the subject based upon the representation.
24. The method as claimed in claim 22, further comprising the step of repeatedly performing steps (i) to (iii) for a series of received images.
25. The method as claimed in claim 22, further comprising the step of calculating one or more light intensity settings for one or more controllable light sources that can illuminate the subject.
26. The method as claimed in claim 22, further comprising the step of recording one or more parameters that characterise the analyzed light distribution.
27. A computer program product for measuring illumination of a subject, the computer program product comprising software components for performing the steps of:
(i) receiving an image relating to a subject of interest;
(ii) analyzing the spatial light distribution of the subject based upon the received image; and
(iii) representing the analyzed light distribution of the subject to a user.
28. A system for assessing the illumination on a subject comprising:
a camera for capturing an image of the subject;
computing hardware having an input/output interface for receiving the image from the camera, and a processor for analyzing the spatial light distribution of the subject; and
a device for representing the analyzed light distribution received from the computing hardware through the input/output interface.
29. Computing hardware to assess the illumination on a subject, the computing hardware comprising:
a memory for storing an image relating to a subject of interest; and
a processor for analyzing the spatial light distribution of the subject based upon the stored image; and
a display for representing the analyzed light distribution produced by the processor.
30. A method for measuring the illumination on a subject for a face recognition system, the method comprising the steps of:
(i) capturing an image relating to a subject of interest;
(ii) storing the captured image in digital form for subsequent analysis;
(iii) analyzing the spatial light distribution of the subject based upon the stored image; and
(iv) calculating one or more parameters that characterise the analyzed light distribution of the subject.
31. The method as claimed in claim 30, further comprising the step of representing the one or more parameters to a user.
32. The method as claimed in claim 30, further comprising the step of calculating one or more light intensity settings, based upon the one or more parameters, for one or more controllable light sources that can illuminate the subject.
33. The method as claimed in claim 30, further comprising the step of adjusting the power supplied to the one or more light sources able to illuminate the subject.
34. The method as claimed in claim 30, further comprising the step of repeatedly performing steps (i) to (iv).
35. A computer program product for measuring illumination of a subject, the computer program product comprising software components for performing the steps of:
(i) capturing an image relating to a subject of interest;
(ii) storing the captured image in digital form for subsequent analysis;
(iii) analyzing the spatial light distribution of the subject based upon the stored image; and
(iv) calculating one or more parameters that characterise the analyzed light distribution of the subject.
36. A system for assessing the illumination on a subject comprising:
a camera for capturing an image of the subject;
computing hardware having an input/output interface for receiving the image from the camera, a memory for storing the received image, and a processor for calculating one or more parameters that characterise the spatial light distribution of the subject.
37. Computing hardware to assess the illumination on a subject, the computing hardware comprising:
a memory for storing an image relating to a subject of interest; and
a processor for analyzing the light distribution of the subject based upon the stored image and calculating one or more parameters describing the analyzed spatial light distribution of the subject; and
a device for representing light distribution parameters resulting from analysis of the light distribution of the subject.
PCT/AU2003/001261 2002-09-24 2003-09-24 Illumination for face recognition WO2004029861A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003265722A AU2003265722A1 (en) 2002-09-24 2003-09-24 Illumination for face recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2002951625 2002-09-24
AU2002951625A AU2002951625A0 (en) 2002-09-24 2002-09-24 System for improving and modifying object illumination using active light sources

Publications (1)

Publication Number Publication Date
WO2004029861A1 true WO2004029861A1 (en) 2004-04-08

Family

ID=28047380

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2003/001261 WO2004029861A1 (en) 2002-09-24 2003-09-24 Illumination for face recognition

Country Status (2)

Country Link
AU (1) AU2002951625A0 (en)
WO (1) WO2004029861A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100826309B1 (en) * 2005-09-28 2008-04-30 가부시키가이샤 덴소 Device for authenticating face image
DE202007002260U1 (en) * 2007-02-15 2008-06-26 Leuze Electronic Gmbh & Co Kg Optoelectronic device
EP2172805A3 (en) * 2008-10-03 2010-04-28 John Hyde Special positional synchronous illumination for reduction of specular reflection
CN103776770A (en) * 2014-01-02 2014-05-07 东华大学 Multi-path light source self-adaptation system based on information entropy and working method of system
WO2017040968A1 (en) * 2015-09-04 2017-03-09 Silexpro Llc Wireless content sharing, center-of-table collaboration, and panoramic telepresence experience (pte) devices
RU170881U1 (en) * 2016-06-27 2017-05-12 Федеральное государственное казенное образовательное учреждение высшего образования "Калининградский пограничный институт Федеральной службы безопасности Российской Федерации" FACE IMAGE REGISTRATION DEVICE
FR3045133A1 (en) * 2015-12-15 2017-06-16 Morpho LIGHTING DEVICE AND METHOD FOR MANUFACTURING A PATCH OF SUCH A DEVICE
CN108875484A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium for mobile terminal
RU210432U1 (en) * 2021-10-18 2022-04-15 Федеральное государственное казенное образовательное учреждение высшего образования "Калининградский пограничный институт Федеральной службы безопасности Российской Федерации REGISTRATION DEVICE FOR BIOMETRIC FEATURES
WO2022125087A1 (en) * 2020-12-09 2022-06-16 Hewlett-Packard Development Company, L.P. Light intensity determination

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4979135A (en) * 1988-01-27 1990-12-18 Storage Technology Corporation Vision system illumination calibration apparatus
US5163102A (en) * 1990-03-19 1992-11-10 Sharp Kabushiki Kaisha Image recognition system with selectively variable brightness and color controlled light source
US6122408A (en) * 1996-04-30 2000-09-19 Siemens Corporate Research, Inc. Light normalization method for machine vision
EP1136937A2 (en) * 2000-03-22 2001-09-26 Kabushiki Kaisha Toshiba Facial image forming recognition apparatus and a pass control apparatus
US20020071036A1 (en) * 2000-12-13 2002-06-13 International Business Machines Corporation Method and system for video object range sensing
US20020101200A1 (en) * 1997-08-26 2002-08-01 Dowling Kevin J. Systems and methods for providing illumination in machine vision systems
WO2002069246A1 (en) * 2001-02-21 2002-09-06 Justsystem Corporation Method and apparatus for using illumination from a display for computer vision based user interfaces and biometric authentication
US20020181774A1 (en) * 2001-05-30 2002-12-05 Mitsubishi Denki Kabushiki Kaisha Face portion detecting apparatus


Also Published As

Publication number Publication date
AU2002951625A0 (en) 2002-10-10


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP