US20090003726A1 - Illumination normalizing method and apparatus - Google Patents

Illumination normalizing method and apparatus

Info

Publication number
US20090003726A1
Authority
US
United States
Prior art keywords
illumination
pixel
discontinuity
weight
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/040,170
Other versions
US8175410B2 (en)
Inventor
Young-Kyung Park
Seok-Lai Park
Ji-Hyoung Son
Kwang-Hee Jung
Joong-Kyu Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sungkyunkwan University Foundation for Corporate Collaboration
Original Assignee
Sungkyunkwan University Foundation for Corporate Collaboration
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sungkyunkwan University Foundation for Corporate Collaboration filed Critical Sungkyunkwan University Foundation for Corporate Collaboration
Assigned to SUNGKYUNKWAN UNIVERSITY FOUNDATION FOR CORPORATE COLLABORATION reassignment SUNGKYUNKWAN UNIVERSITY FOUNDATION FOR CORPORATE COLLABORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JUNG, KWANG-HEE, KIM, JOONG-KYU, PARK, SEOK-LAI, PARK, YOUNG-KYUNG, SON, JI-HYOUNG
Publication of US20090003726A1 publication Critical patent/US20090003726A1/en
Application granted granted Critical
Publication of US8175410B2 publication Critical patent/US8175410B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns


Abstract

An illumination normalizing apparatus and method are disclosed. The illumination normalizing apparatus measures a discontinuity of each pixel of an input image, the discontinuity including a spatial gradient and a local inhomogeneity; produces a weight for each pixel from the discontinuity by using a transfer function; produces an estimated illumination by repeating a convolution operation on each weight; and subtracts the estimated illumination from the input image.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of Invention
  • The present invention relates to illumination normalization and, more particularly, to an illumination normalization apparatus for removing shadows from an image.
  • 2. Background Art
  • Illumination is one of the most important elements in an object recognition system (e.g., a face recognition system). Illumination changes the image of an object far more than changes in shape do. For example, ambient lighting varies with environmental conditions such as daytime versus night and indoors versus outdoors, and the shadow cast by a light source from a particular direction may hide a main feature of the object.
  • Recently, in order to overcome this problem, the illumination cone method proposed by Georghiades was developed. This method models the changes that illumination causes on a face as an illumination cone. Its performance is good when a well-constructed set of training images is used; however, such model-based methods rely on restrictive assumptions and require many training images, making them difficult to apply to real situations. In contrast, Retinex-based methods are grounded on the fact that an image is the product of illumination and reflectance, so they have the advantage of requiring no training images and, as a result, are relatively faster than other methods. Retinex-based methods assume that illumination varies smoothly while reflectance varies rapidly. Under this assumption, the illumination can be estimated by blurring the image, and the image can be normalized by dividing the original image by the estimated illumination. Examples of this approach are SSR (Single Scale Retinex) and SQI (Self Quotient Image). SSR uses a Gaussian filter for blurring, while SQI uses a weighted Gaussian filter, which assigns different weights based on the mean value of the convolution region, to account for non-uniform changes of illumination.
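  • As a rough sketch of the Retinex idea described above (this is background, not the method of this disclosure), the illumination can be estimated with a Gaussian blur and divided out in the log domain; the sigma value below is an illustrative assumption:

```python
# A minimal sketch of the Single Scale Retinex (SSR) idea: estimate the
# illumination by Gaussian blurring, then divide it out in the log domain.
# The sigma value is an illustrative assumption, not taken from the patent.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=15.0, eps=1e-6):
    image = image.astype(np.float64) + eps
    illumination = gaussian_filter(image, sigma=sigma) + eps  # blurred image approximates illumination
    return np.log(image) - np.log(illumination)               # log-domain division yields log reflectance
```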
  • However, none of the methods mentioned above can remove a local shadow, and this may lower the recognition rate.
  • SUMMARY OF THE INVENTION
  • In this disclosure, a method and an apparatus are proposed that use two kinds of discontinuity detection to effectively remove a shadow whose discontinuity is similar to that of the object's features.
  • In this disclosure, a method and an apparatus are also proposed that can remove a shadow without degrading the object's features, by using a new transfer function to combine the two kinds of discontinuity detection.
  • Other features will be appreciated later through the description on embodiments according to the present invention.
  • According to one aspect associated with the present invention, there is provided an illumination normalizing apparatus including: a discontinuity measuring means, which measures a discontinuity of each pixel of an input image, the discontinuity including a spatial gradient and a local inhomogeneity; a weight calculating means, which produces a weight for each pixel from the discontinuity by using a transfer function; an illumination estimating means, which produces an estimated illumination by repeating a convolution operation on each weight; and an illumination normalizing means, which subtracts the estimated illumination from the input image.
  • The discontinuity measuring means may include a gradient measuring means, which produces the spatial gradient by taking partial derivatives at the pixel, and an inhomogeneity measuring means, which produces the differences between the luminance of the pixel and the luminance of each of k adjacent pixels, together with the mean of those differences.
  • The discontinuity measuring means may control the gradient measuring means and the inhomogeneity measuring means to be independently operated in parallel.
  • The gradient measuring means may produce the spatial gradient by the following formula:

  • $|\nabla I(x, y)| = \sqrt{G_x^2 + G_y^2}$
  • wherein $I(x, y)$ is the value of the pixel located at coordinate $(x, y)$, and
  • $G_x = I(x+1, y) - I(x-1, y)$
  • $G_y = I(x, y+1) - I(x, y-1)$
  • wherein $x$ and $y$ are integers equal to or greater than 0.
  • The inhomogeneity measuring means may produce the local inhomogeneity by the following formula:
  • $\tilde{\tau}(x, y) = \sin\!\left(\frac{\pi}{2}\,\tau_s(x, y)\right)$, wherein $\tau_s(x, y) = \frac{\tau(x, y) - \tau_{\min}}{\tau_{\max} - \tau_{\min}}$ and $\tau(x, y) = \frac{\sum_{(m,n)\in\Omega} \lvert I(x, y) - I(m, n)\rvert}{\lvert\Omega\rvert}$,
  • wherein $\tau_{\max}$ and $\tau_{\min}$ are the maximum and minimum values among $\tau(x, y)$, respectively, $\Omega$ indicates the $k$ pixels adjacent to the pixel at $(x, y)$, $(m, n)$ is the coordinate of an adjacent pixel included in $\Omega$, and $m$ and $n$ are integers.
  • The weight may be produced by the following formula:
  • $w(x, y) = \frac{1}{1 + \tilde{\tau}(x, y)/h} \times \frac{1}{1 + \lvert\nabla I(x, y)\rvert/S}$
  • wherein h and S are real numbers.
  • The convolution operation may be performed by the following formula:
  • $N^{(t)}(x, y) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(t)}(x+i,\, y+j)$
  • $L^{(t+1)}(x, y) = \max\!\left\{\frac{1}{N^{(t)}(x, y)} \sum_{i=-1}^{1}\sum_{j=-1}^{1} L^{(t)}(x+i,\, y+j)\, w^{(t)}(x+i,\, y+j),\; L^{(t)}(x, y)\right\}$
  • wherein L(t) is an estimated illumination of each pixel when the convolution operation is performed t times, w(t)(x, y) is a weight of each pixel when the convolution operation is performed t times, and i and j are integers.
  • The illumination normalizing means performs a subtraction between the logarithm of the input image and the logarithm of the estimated illumination, and normalizes the result of the subtraction.
  • According to another aspect associated with the present invention, there is provided an illumination normalizing method that produces at least one discontinuity for each pixel of an input image, produces a weight of each pixel from the discontinuity by using a transfer function, produces an estimated illumination by repeating a convolution operation on each weight, and subtracts the estimated illumination from the input image.
  • The discontinuity may include a spatial gradient and a local inhomogeneity.
  • The spatial gradient may be produced by the following formula:

  • $|\nabla I(x, y)| = \sqrt{G_x^2 + G_y^2}$
  • wherein $I(x, y)$ is the value of the pixel located at coordinate $(x, y)$, and
  • $G_x = I(x+1, y) - I(x-1, y)$
  • $G_y = I(x, y+1) - I(x, y-1)$
  • wherein $x$ and $y$ are integers equal to or greater than 0.
  • The local inhomogeneity may be produced by the following formula:
  • $\tilde{\tau}(x, y) = \sin\!\left(\frac{\pi}{2}\,\tau_s(x, y)\right)$, wherein $\tau_s(x, y) = \frac{\tau(x, y) - \tau_{\min}}{\tau_{\max} - \tau_{\min}}$ and $\tau(x, y) = \frac{\sum_{(m,n)\in\Omega} \lvert I(x, y) - I(m, n)\rvert}{\lvert\Omega\rvert}$,
  • wherein $\tau_{\max}$ and $\tau_{\min}$ are the maximum and minimum values among $\tau(x, y)$, respectively, $\Omega$ indicates the $k$ pixels adjacent to the pixel at $(x, y)$, $(m, n)$ is the coordinate of an adjacent pixel included in $\Omega$, and $m$ and $n$ are integers.
  • The weight may be produced by the following formula:
  • $w(x, y) = \frac{1}{1 + \tilde{\tau}(x, y)/h} \times \frac{1}{1 + \lvert\nabla I(x, y)\rvert/S}$
  • wherein h and S are real numbers.
  • The convolution operation may be performed by the following formula:
  • $N^{(t)}(x, y) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(t)}(x+i,\, y+j)$
  • $L^{(t+1)}(x, y) = \max\!\left\{\frac{1}{N^{(t)}(x, y)} \sum_{i=-1}^{1}\sum_{j=-1}^{1} L^{(t)}(x+i,\, y+j)\, w^{(t)}(x+i,\, y+j),\; L^{(t)}(x, y)\right\}$
  • wherein L(t) is an estimated illumination of each pixel when the convolution operation is performed t times, w(t)(x, y) is a weight of each pixel when the convolution operation is performed t times, and i and j are integers.
  • The subtracting of the estimated illumination from the input image may be a subtraction between the logarithm of the input image and the logarithm of the estimated illumination, with the result of the subtraction then being normalized.
  • According to another aspect associated with the present invention, there is provided a computer-readable medium including a program containing computer-executable instructions for illumination normalization performing the method including producing a vector of the spatial gradient for each pixel of an input image, measuring a local inhomogeneity that indicates the degree of inhomogeneity between the pixel and adjacent pixels, producing a weight for retaining a feature of the input image and for removing a shadow having discontinuity similar to the feature by use of the spatial gradient and the local inhomogeneity, producing an estimated illumination by repeating a convolution operation by use of the weight, and performing a subtraction between the logarithm of the input image and the logarithm of the estimated illumination.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an illumination normalizing apparatus according to one embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a discontinuity measuring means according to one embodiment of the present invention;
  • FIG. 3 shows a comparison between an illumination-normalized image to which a conventional transfer function is applied and one to which a transfer function according to one embodiment of the present invention is applied;
  • FIG. 4 is a profile of illumination being estimated through repeating the convolution operation according to one embodiment of the present invention;
  • FIG. 5 illustrates normalizations of a face image having a strong shadow according to one embodiment of the present invention; and
  • FIG. 6 is a flowchart of illumination normalization according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Since there can be a variety of permutations and embodiments of the present invention, certain embodiments will be illustrated and described with reference to the accompanying drawings. This, however, is by no means to restrict the present invention to certain embodiments, and shall be construed as including all permutations, equivalents and substitutes covered by the spirit and scope of the present invention. In describing the present invention, descriptions of the prior art will be omitted where they could obscure the subject matter of the present invention.
  • The terms used in the description are intended to describe certain embodiments only, and shall by no means restrict the present invention. Unless clearly used otherwise, expressions in the singular number include a plural meaning. In the present description, an expression such as “comprising” or “consisting of” is intended to designate a characteristic, a number, a step, an operation, an element, a part or combinations thereof, and shall not be construed to preclude any presence or possibility of one or more other characteristics, numbers, steps, operations, elements, parts or combinations thereof.
  • Unless otherwise defined, all terms, including technical terms and scientific terms, used herein have the same meaning as how they are generally understood by those of ordinary skill in the art to which the invention pertains. Any term that is defined in a general dictionary shall be construed to have the same meaning in the context of the relevant art, and, unless otherwise defined explicitly, shall not be interpreted to have an idealistic or excessively formalistic meaning.
  • Embodiments according to the present invention can effectively remove a shadow whose discontinuity is similar to that of the object's features by combining spatial gradient detection and inhomogeneity detection. The spatial gradient detection and the inhomogeneity detection will be described with reference to FIG. 1. The shadow is smoothly removed from a face image after illumination normalization using a discontinuity transfer function suited to the face image, and the features of the face can be retained.
  • Hereinafter, "illumination" refers to the brightness radiated onto an object, and "adjacent pixels" refers to the k pixels (k being a natural number) adjacent to the pixel under calculation when calculations are performed on each pixel during the illumination normalization.
  • Hereinafter an illumination normalization apparatus according to embodiments of the present invention will be described with reference to FIG. 1 to FIG. 4.
  • FIG. 1 is a block diagram of an illumination normalizing apparatus according to one embodiment of the present invention, FIG. 2 is a schematic diagram of a discontinuity measuring means according to one embodiment of the present invention, and FIG. 3 shows a comparison between an illumination-normalized image to which a conventional transfer function is applied and one to which a transfer function according to one embodiment of the present invention is applied.
  • Referring to FIG. 1, the illumination normalizing apparatus 100 includes an image input means 110, a discontinuity measuring means 120, a weight calculating means 130, an illumination estimating means 140, and an illumination normalizing means 150.
  • The image input means 110 receives an input image. In order to measure discontinuity of the image for the illumination normalization, the image input means 110 sends the image to the discontinuity measuring means 120.
  • The discontinuity measuring means 120 measures both a spatial gradient and a local inhomogeneity for each pixel of the image.
  • As shown in FIG. 2, the discontinuity measuring means 120 includes a gradient measuring means 210 and an inhomogeneity measuring means 220. The discontinuity measuring means 120 outputs the image from the image input means 110 to the gradient measuring means 210 and the inhomogeneity measuring means 220, respectively, and may control the gradient measuring means 210 and the inhomogeneity measuring means 220 to be independently operated in parallel.
  • Referring to FIG. 2, the gradient measuring means 210 produces the value of the spatial gradient vector for each pixel of the image, from which it can measure the discontinuity caused by the spatial gradient (hereinafter referred to as the "measured discontinuity of spatial gradient").
  • The spatial gradient is the vector of partial derivatives at each pixel (x, y), as defined by Formula 1, and each partial derivative is defined by Formula 2,
  • $\nabla I(x, y) = [G_x, G_y] = \left[\frac{\partial I(x, y)}{\partial x},\ \frac{\partial I(x, y)}{\partial y}\right]$   (Formula 1)
  • wherein $I(x, y)$ is the value of the pixel located at coordinate $(x, y)$; $G_x$ and $G_y$ are the partial derivatives with respect to $x$ and $y$, respectively; and $x$ and $y$ are integers equal to or greater than 0. The partial derivatives are calculated by Formula 2.

  • $G_x = I(x+1, y) - I(x-1, y)$
  • $G_y = I(x, y+1) - I(x, y-1)$   (Formula 2)
  • Accordingly, the discontinuity measuring means 120 can produce the measured discontinuity of spatial gradient by calculating the magnitude of the spatial gradient vector, as expressed in Formula 3.

  • $|\nabla I(x, y)| = \sqrt{G_x^2 + G_y^2}$   (Formula 3)
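  • As a concrete illustration of Formulas 1 to 3, a minimal NumPy sketch follows; edge padding at the image borders is an implementation assumption, since the patent does not specify border handling.

```python
# A minimal NumPy sketch of Formulas 1-3: central differences G_x and G_y,
# then the gradient magnitude for every pixel. Border pixels are handled by
# edge padding (an assumption; the patent is silent on this).
import numpy as np

def gradient_magnitude(I):
    I = I.astype(np.float64)
    P = np.pad(I, 1, mode="edge")      # replicate border pixels
    Gx = P[1:-1, 2:] - P[1:-1, :-2]    # I(x+1, y) - I(x-1, y), x = column
    Gy = P[2:, 1:-1] - P[:-2, 1:-1]    # I(x, y+1) - I(x, y-1), y = row
    return np.sqrt(Gx**2 + Gy**2)      # Formula 3
```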
  • The inhomogeneity measuring means 220 measures a local inhomogeneity by using k adjacent pixels (k being a natural number). The inhomogeneity measuring means 220 then normalizes the measured local inhomogeneity by a predetermined method and performs a non-linear transform before outputting the result to the weight calculating means 130. Hereinafter, for convenience of description, the local inhomogeneity that has been non-linearly transformed by the inhomogeneity measuring means 220 will be referred to as the "transformed local inhomogeneity."
  • Here the "local inhomogeneity" complements the discontinuity produced from the spatial gradient, and indicates the degree of inhomogeneity between the pixel and its adjacent pixels. Thus, a large local inhomogeneity at a pixel means that the discontinuity at that pixel is larger than at its adjacent pixels.
  • For example, the inhomogeneity measuring means 220 can produce the differences between the luminance of the pixel and the luminance of each of its k adjacent pixels, together with the mean of those differences, and thereby measure the local inhomogeneity. Namely, the inhomogeneity measuring means 220 can produce the measured local inhomogeneity by using Formula 4,
  • $\tau(x, y) = \frac{\sum_{(m,n)\in\Omega} \lvert I(x, y) - I(m, n)\rvert}{\lvert\Omega\rvert}$   (Formula 4)
  • wherein $\Omega$ indicates the k adjacent pixels, $(m, n)$ is the coordinate of an adjacent pixel included in $\Omega$, $\tau(x, y)$ is the mean of the differences between the luminance of the pixel and the luminances of its adjacent pixels (hereinafter referred to as the "mean luminance difference"), and $m$ and $n$ are integers.
  • The inhomogeneity measuring means 220 then performs the non-linear transform on the measured local inhomogeneity and finally outputs the transformed local inhomogeneity to the weight calculating means 130.
  • For example, the inhomogeneity measuring means 220 normalizes the mean luminance difference of each pixel to a value within a predetermined range (e.g., between 0 and 1) by using Formula 5,
  • $\tau_s(x, y) = \frac{\tau(x, y) - \tau_{\min}}{\tau_{\max} - \tau_{\min}}$   (Formula 5)
  • wherein $\tau_{\max}$ and $\tau_{\min}$ are the maximum and minimum, respectively, among the mean luminance differences (i.e., the $\tau(x, y)$ values) over all pixels.
  • After performing this normalization, the inhomogeneity measuring means 220 performs the non-linear transform of Formula 6 to emphasize parts having high local inhomogeneity.
  • $\tilde{\tau}(x, y) = \sin\!\left(\frac{\pi}{2}\,\tau_s(x, y)\right)$   (Formula 6)
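  • The sketch below follows Formulas 4 to 6 under the assumption that Ω is the 8-connected neighborhood (k = 8); the patent leaves the choice of k open.

```python
# A minimal sketch of Formulas 4-6, assuming the 8-connected neighborhood
# (k = 8): mean absolute luminance difference to the neighbors (Formula 4),
# min-max normalization (Formula 5), and the sine transform (Formula 6).
import numpy as np

def transformed_inhomogeneity(I):
    I = I.astype(np.float64)
    P = np.pad(I, 1, mode="edge")
    H, W = I.shape
    diffs = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neighbor = P[1 + di : 1 + di + H, 1 + dj : 1 + dj + W]
            diffs.append(np.abs(I - neighbor))
    tau = np.mean(diffs, axis=0)                                 # Formula 4
    tau_s = (tau - tau.min()) / (tau.max() - tau.min() + 1e-12)  # Formula 5
    return np.sin(np.pi / 2.0 * tau_s)                           # Formula 6
```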
  • In this way, the discontinuity measuring means 120 can produce the measured discontinuity of spatial gradient and the transformed local inhomogeneity independently of each other, and output them to the weight calculating means 130.
  • The weight calculating means 130 produces a weight to be applied to each pixel by using the measured discontinuity of spatial gradient and the transformed local inhomogeneity received independently from the discontinuity measuring means 120, and sends the weights to the illumination estimating means 140; the weights are produced so as to remove shadows at each pixel.
  • For example, the weight calculating means 130 can produce a weight for each pixel by inputting the measured discontinuity of spatial gradient and the transformed local inhomogeneity from the discontinuity measuring means 120 into a predetermined transfer function.
  • For example, the weight calculating means 130 may produce a weight of each pixel by using Formula 7,
  • $w(x, y) = \frac{1}{1 + \tilde{\tau}(x, y)/h} \times \frac{1}{1 + \lvert\nabla I(x, y)\rvert/S}$   (Formula 7)
  • wherein h and S are real numbers.
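  • A minimal sketch of the transfer function of Formula 7 follows; the reading of the two terms as $\tilde{\tau}(x, y)/h$ and $\lvert\nabla I(x, y)\rvert/S$, and the particular values of h and S, are assumptions for illustration.

```python
# A minimal sketch of Formula 7: the weight decreases where either the
# transformed local inhomogeneity or the gradient magnitude is large, so
# smoothing will not cross strong feature edges. h and S are free
# parameters; the defaults below are illustrative assumptions only.
import numpy as np

def pixel_weights(grad_mag, tau_tilde, h=0.1, S=10.0):
    return 1.0 / (1.0 + tau_tilde / h) * 1.0 / (1.0 + grad_mag / S)
```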
  • The illumination estimating means 140 estimates the illumination for each pixel by performing the convolution operation T times (T being a natural number, referred to as the number of repetitions where the context requires) by use of the weight of each pixel from the weight calculating means 130. It will be appreciated that the number of repetitions T may vary according to the system implementation.
  • For example, the illumination estimating means 140 performs the convolution operation by using Formula 8,
  • $N^{(t)}(x, y) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(t)}(x+i,\, y+j)$
  • $L^{(t+1)}(x, y) = \max\!\left\{\frac{1}{N^{(t)}(x, y)} \sum_{i=-1}^{1}\sum_{j=-1}^{1} L^{(t)}(x+i,\, y+j)\, w^{(t)}(x+i,\, y+j),\; L^{(t)}(x, y)\right\}$   (Formula 8)
  • wherein $L^{(t)}$ is the estimated illumination of each pixel when the convolution operation has been performed t times, and $w^{(t)}(x, y)$ is the weight of each pixel when the convolution operation is performed t times.
  • Also, since reflectance has a value within a predetermined range (e.g., between 0 and 1), the estimated illumination should not be less than the luminance of each pixel of the input image. Therefore the illumination estimating means 140 selects the larger of the results of the (t+1)-th operation and the t-th operation, preventing the estimated illumination from falling below the luminance of the input image.
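  • The sketch below iterates Formula 8 under two stated assumptions: the initial estimate is $L^{(0)} = I$, and a single weight map is reused at every repetition (the superscript in $w^{(t)}$ would also allow recomputing the weights each time).

```python
# A minimal sketch of Formula 8: T repetitions of a 3x3 weighted average of
# the current illumination estimate, clamped from below by the previous
# estimate (the max{...} term), so the illumination never drops under the
# input luminance. Assumptions: L(0) = I, and a fixed weight map w.
import numpy as np

def estimate_illumination(I, w, T=10):
    L = I.astype(np.float64).copy()   # assumed initial estimate L(0) = I
    H, W = L.shape
    for _ in range(T):
        Lp = np.pad(L, 1, mode="edge")
        wp = np.pad(w, 1, mode="edge")
        num = np.zeros_like(L)
        N = np.zeros_like(L)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ws = wp[1 + di : 1 + di + H, 1 + dj : 1 + dj + W]
                Ls = Lp[1 + di : 1 + di + H, 1 + dj : 1 + dj + W]
                num += ws * Ls
                N += ws
        L = np.maximum(num / (N + 1e-12), L)   # Formula 8
    return L
```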
  • The illumination normalizing means 150 normalizes the illumination of the input image by using the estimated illumination from the illumination estimating means 140.
  • The illumination normalizing means 150 performs a subtraction between the logarithm of the input image and the logarithm of the estimated illumination, and normalizes the result of the subtraction.
  • For example, the illumination normalizing means 150 can normalize the illumination by applying the estimated illumination to the input image by using Formula 9.

  • $R(x, y) = 110 \times \left[\log(I(x, y)) - \log(L^{(T)}(x, y))\right] + 150$   (Formula 9)
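  • A minimal sketch of Formula 9 follows; the small epsilon guarding the logarithms and the final clipping to an 8-bit range are added assumptions for numerical safety and display.

```python
# A minimal sketch of Formula 9: subtract the log illumination from the log
# image, then scale by 110 and offset by 150 (the constants given in the
# patent). The epsilon and the final clip to [0, 255] are added assumptions.
import numpy as np

def normalize_illumination(I, L, eps=1e-6):
    R = 110.0 * (np.log(I.astype(np.float64) + eps) - np.log(L + eps)) + 150.0
    return np.clip(R, 0.0, 255.0)
```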
  • Referring to FIG. 3, 3(a) is the input image, 3(b) shows the illumination estimated from the input image by the conventional method, and 3(c) shows the result of illumination normalization using the conventionally estimated illumination. Also in FIG. 3, 3(d) shows the illumination estimated using the transfer function according to one embodiment of the present invention, and 3(e) shows the result of illumination normalization using the illumination estimated with that transfer function.
  • When 3(c) is compared with 3(e) in FIG. 3, it can be seen that the features of the face in the input image are erased together with the shadow in 3(c), while 3(e) retains the features of the face in the input image (i.e., 3(a)).
  • Namely, it will be appreciated that illumination estimation using the transfer function according to one embodiment of the present invention is more effective than the conventional method.
  • FIG. 4 is a profile of illumination estimated through repeating the convolution operation according to one embodiment of the present invention, and FIG. 5 illustrates normalizations of a face image having a strong shadow according to one embodiment of the present invention.
  • Each graph in FIG. 4 indicates the luminance of the pixels lying on the white horizontal lines 310, 315 and 320 of each image. As the graphs show, the discontinuity in the estimated illumination is retained even as the number of repetitions (t) increases. It will be appreciated from this that the discontinuity of the input image can be retained despite repetition of the convolution operation. Thus, even when the convolution operation is performed repeatedly, the shadow can be effectively removed while the object's features in the input image are still retained.
  • Referring to FIG. 5, 5(a), 5(c) and 5(e) are based on the same object but differ in the intensity and direction of illumination. 5(b), 5(d) and 5(f) are the illumination-normalized images of 5(a), 5(c) and 5(e), respectively, and are similar to one another. From this it will be appreciated that recognition errors caused by changes of illumination may be reduced by performing the illumination normalization according to one embodiment of the present invention.
  • FIG. 6 is a flowchart of an illumination normalizing method performed by an illumination normalizing apparatus according to one embodiment of the present invention. Although each step described below is performed by a particular element of the illumination normalizing apparatus 100, the elements will be referred to collectively as the illumination normalizing apparatus.
  • At step 510, the illumination normalizing apparatus 100 receives an input image from an external device such as a camera.
  • At step 520, the illumination normalizing apparatus 100 measures discontinuity of each pixel by use of the input image. The discontinuity is measured by producing the measured discontinuity of spatial gradient and the transformed local inhomogeneity.
  • At step 530, the illumination normalizing apparatus 100 produces a weight of each pixel by inputting the measured discontinuity of spatial gradient and the transformed local inhomogeneity into a predetermined transfer function.
  • At step 540, the illumination normalizing apparatus 100 repeats the predetermined convolution operation T times on the weight of each pixel and estimates the illumination.
  • At step 550, the illumination normalizing apparatus 100 normalizes the illumination by subtracting the estimated illumination from the input image. The illumination normalizing apparatus 100 performs a subtraction between the logarithm of the input image and the logarithm of the estimated illumination to normalize the illumination.
  • The aforementioned illumination normalizing method can be implemented in the form of a computer program. The code and code segments constituting the program can easily be produced by those skilled in the art. The program may be stored on computer-readable media, and a computer can read and execute it to perform the illumination normalizing method. The computer-readable media may be magnetic recording media, optical recording media, or carrier wave media.
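  • Putting the sketches above together, a hypothetical end-to-end run might look as follows (the stand-in image, parameter values and repetition count are all illustrative assumptions):

```python
# Hypothetical end-to-end use of the sketches above on a grayscale image.
# Assumes gradient_magnitude, transformed_inhomogeneity, pixel_weights,
# estimate_illumination and normalize_illumination are in scope.
import numpy as np

I = np.random.default_rng(0).uniform(1.0, 255.0, size=(64, 64))  # stand-in image
grad = gradient_magnitude(I)
tau_tilde = transformed_inhomogeneity(I)
w = pixel_weights(grad, tau_tilde, h=0.1, S=10.0)
L = estimate_illumination(I, w, T=10)
R = normalize_illumination(I, L)
```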
  • Although the present invention has been described with reference to embodiments, those skilled in the art will understand that various modifications, changes and additions can be made without departing from the spirit and scope of the present invention.

Claims (16)

1. An illumination normalizing apparatus, comprising:
a discontinuity measuring means, measuring a discontinuity of each pixel of an input image, the discontinuity comprising a spatial gradient and a local inhomogeneity;
a weight calculating means, producing a weight of each pixel from the discontinuity by using a transfer function;
an illumination estimating means, producing an estimated illumination by repeating a convolution operation on each weight; and
an illumination normalizing means, subtracting the estimated illumination from the input image.
2. The illumination normalizing apparatus of claim 1, in which the discontinuity measuring means comprises:
a gradient measuring means, producing the spatial gradient by performing a partial differential on the pixel; and
an inhomogeneity measuring means, producing the differences between the luminance of the pixel and the luminance of each of k adjacent pixels, and the mean of those differences.
3. The illumination normalizing apparatus of claim 2, in which the discontinuity measuring means controls the gradient measuring means and the inhomogeneity measuring means to be independently operated in parallel.
4. The illumination normalizing apparatus of claim 2, in which the gradient measuring means produces the spatial gradient by the following formula:

$|\nabla I(x, y)| = \sqrt{G_x^2 + G_y^2}$
wherein $I(x, y)$ is the value of the pixel located at coordinate $(x, y)$, and
$G_x = I(x+1, y) - I(x-1, y)$
$G_y = I(x, y+1) - I(x, y-1)$
wherein $x$ and $y$ are integers equal to or greater than 0.
5. The illumination normalizing apparatus of claim 3, in which the inhomogeneity measuring means produces the local inhomogeneity by the following formula:
$\tilde{\tau}(x, y) = \sin\!\left(\frac{\pi}{2}\,\tau_s(x, y)\right)$, wherein $\tau_s(x, y) = \frac{\tau(x, y) - \tau_{\min}}{\tau_{\max} - \tau_{\min}}$, $\tau(x, y) = \frac{\sum_{(m,n)\in\Omega} \lvert I(x, y) - I(m, n)\rvert}{\lvert\Omega\rvert}$,
τmax and τmin are the maximum value and the minimum value among τ(x, y), respectively, Ω indicates k adjacent pixels that are adjacent to the pixel at (x, y), (m, n) is a coordinate of an adjacent pixel included in Ω, and m and n are integers.
6. The illumination normalizing apparatus of claim 5, in which the weight is produced by the following formula:
$w(x, y) = \frac{1}{1 + \tilde{\tau}(x, y)/h} \times \frac{1}{1 + \lvert\nabla I(x, y)\rvert/S}$
wherein h and S are real numbers.
7. The illumination normalizing apparatus of claim 1, in which the convolution operation is performed by the following formula:
$N^{(t)}(x, y) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(t)}(x+i,\, y+j)$
$L^{(t+1)}(x, y) = \max\!\left\{\frac{1}{N^{(t)}(x, y)} \sum_{i=-1}^{1}\sum_{j=-1}^{1} L^{(t)}(x+i,\, y+j)\, w^{(t)}(x+i,\, y+j),\; L^{(t)}(x, y)\right\}$
wherein L(t) is an estimated illumination of each pixel when the convolution operation is performed t times, w(t)(x, y) is a weight of each pixel when the convolution operation is performed t times, and i and j are integers.
8. The illumination normalizing apparatus of claim 1, in which the illumination normalizing means performs a subtraction between the logarithm of the input image and the logarithm of the estimated illumination, and normalizes the result of subtraction.
9. An illumination normalizing method, comprising:
producing at least one discontinuity for each pixel of an input image;
producing a weight of each pixel from the discontinuity by using a transfer function;
producing an estimated illumination by repeating a convolution operation on each weight; and
subtracting the estimated illumination from the input image.
10. The illumination normalizing method of claim 9, in which the discontinuity comprises a spatial gradient and a local inhomogeneity.
11. The illumination normalizing method of claim 10, in which the spatial gradient is produced by the following formula:

$|\nabla I(x, y)| = \sqrt{G_x^2 + G_y^2}$
wherein $I(x, y)$ is the value of the pixel located at coordinate $(x, y)$, and
$G_x = I(x+1, y) - I(x-1, y)$
$G_y = I(x, y+1) - I(x, y-1)$
wherein $x$ and $y$ are integers equal to or greater than 0.
12. The illumination normalizing method of claim 11, in which the local inhomogeneity is produced by the following formula:
$\tilde{\tau}(x, y) = \sin\!\left(\frac{\pi}{2}\,\tau_s(x, y)\right)$, wherein $\tau_s(x, y) = \frac{\tau(x, y) - \tau_{\min}}{\tau_{\max} - \tau_{\min}}$, $\tau(x, y) = \frac{\sum_{(m,n)\in\Omega} \lvert I(x, y) - I(m, n)\rvert}{\lvert\Omega\rvert}$,
τmax and τmin are the maximum value and the minimum value among τ(x, y), respectively, Ω indicates k adjacent pixels that are adjacent to the pixel at (x, y), (m, n) is a coordinate of an adjacent pixel included in Ω, and m and n are integers.
13. The illumination normalizing method of claim 12, in which the weight is produced by the following formula:
$w(x, y) = \frac{1}{1 + \tilde{\tau}(x, y)/h} \times \frac{1}{1 + \lvert\nabla I(x, y)\rvert/S}$
wherein h and S are real numbers.
14. The illumination normalizing method of claim 9, in which the convolution operation is performed by the following formula:
$N^{(t)}(x, y) = \sum_{i=-1}^{1}\sum_{j=-1}^{1} w^{(t)}(x+i,\, y+j)$
$L^{(t+1)}(x, y) = \max\!\left\{\frac{1}{N^{(t)}(x, y)} \sum_{i=-1}^{1}\sum_{j=-1}^{1} L^{(t)}(x+i,\, y+j)\, w^{(t)}(x+i,\, y+j),\; L^{(t)}(x, y)\right\}$
wherein L(t) is an estimated illumination of each pixel when the convolution operation is performed t times, w(t)(x, y) is a weight of each pixel when the convolution operation is performed t times, and i and j are integers.
15. The illumination normalizing method of claim 9, in which the subtracting of the estimated illumination from the input image is a subtraction between the logarithm of the input image and the logarithm of the estimated illumination, and normalizes the result of the subtraction.
16. A computer-readable medium including a program containing computer-executable instructions for illumination normalization performing the method comprising:
producing a vector of spatial gradient for each pixel of an input image;
measuring a local inhomogeneity that indicates the degree of inhomogeneity between the pixel and adjacent pixels;
producing a weight for retaining a feature of the input image and for removing a shadow having discontinuity similar to the feature by use of the spatial gradient and the local inhomogeneity;
producing an estimated illumination by repeating a convolution operation by use of the weight; and
performing a subtraction between the logarithm of the input image and the logarithm of the estimated illumination.
US12/040,170 2007-06-27 2008-02-29 Illumination normalizing method and apparatus Active 2031-03-10 US8175410B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2007-0063880 2007-06-27
KR1020070063880A KR100897385B1 (en) 2007-06-27 2007-06-27 Method and apparatus for illumination normalization

Publications (2)

Publication Number Publication Date
US20090003726A1 true US20090003726A1 (en) 2009-01-01
US8175410B2 US8175410B2 (en) 2012-05-08

Family

ID=40160604

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/040,170 Active 2031-03-10 US8175410B2 (en) 2007-06-27 2008-02-29 Illumination normalizing method and apparatus

Country Status (2)

Country Link
US (1) US8175410B2 (en)
KR (1) KR100897385B1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509345A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Portrait art shadow effect generating method based on artist knowledge
CN102867176A (en) * 2012-09-11 2013-01-09 清华大学深圳研究生院 Face image normalizing method
CN105354862A (en) * 2015-09-30 2016-02-24 深圳大学 Method and system for detecting shadow of moving object in surveillance video
CN106339995A (en) * 2016-08-30 2017-01-18 电子科技大学 Space-time multiple feature based vehicle shadow eliminating method
CN106817542A (en) * 2015-12-02 2017-06-09 深圳超多维光电子有限公司 The imaging method and imaging device of microlens array
CN108780508A (en) * 2016-03-11 2018-11-09 高通股份有限公司 System and method for normalized image

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5067459B2 (en) * 2010-08-31 2012-11-07 ブラザー工業株式会社 Image formation control program and image processing apparatus
CN102360513B (en) * 2011-09-30 2013-02-06 北京航空航天大学 Object illumination moving method based on gradient operation
CN103198464B (en) * 2013-04-09 2015-08-12 北京航空航天大学 A kind of migration of the face video shadow based on single reference video generation method
RU2697627C1 (en) 2018-08-01 2019-08-15 Самсунг Электроникс Ко., Лтд. Method of correcting illumination of an object on an image in a sequence of images and a user's computing device which implements said method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5991456A (en) * 1996-05-29 1999-11-23 Science And Technology Corporation Method of improving a digital image
US6788822B1 (en) * 1999-08-31 2004-09-07 Sharp Kabushiki Kaisha Method and device for correcting lightness of image
US6834125B2 (en) * 2001-06-25 2004-12-21 Science And Technology Corp. Method of improving a digital image as a function of its dynamic range
US20050073702A1 (en) * 2003-10-02 2005-04-07 Doron Shaked Robust recursive envelope operators for fast retinex-type processing
US6885482B1 (en) * 1999-08-27 2005-04-26 Sharp Kabushiki Kaisha Image processing method and image processing apparatus
US7199793B2 (en) * 2002-05-21 2007-04-03 Mok3, Inc. Image-based modeling and photo editing
US20080101719A1 (en) * 2006-10-30 2008-05-01 Samsung Electronics Co., Ltd. Image enhancement method and system
US7382941B2 (en) * 2004-10-08 2008-06-03 Samsung Electronics Co., Ltd. Apparatus and method of compressing dynamic range of image
US20100303372A1 (en) * 2007-07-26 2010-12-02 Omron Corporation Digital image processing and enhancing system and method with function of removing noise

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3578321B2 (en) 1999-03-16 2004-10-20 日本ビクター株式会社 Image normalizer
JP2006004090A (en) 2004-06-16 2006-01-05 Mitsubishi Electric Corp Image normalization apparatus and image normalization program
KR100698828B1 (en) * 2005-02-28 2007-03-23 한국과학기술원 An illumination reflectance model based image distortion elimination method
KR100690295B1 (en) 2005-09-20 2007-03-09 삼성전자주식회사 Method of face image normalization and face recognition system for a mobile terminal



Also Published As

Publication number Publication date
US8175410B2 (en) 2012-05-08
KR20080114379A (en) 2008-12-31
KR100897385B1 (en) 2009-05-14

Similar Documents

Publication Publication Date Title
US8175410B2 (en) Illumination normalizing method and apparatus
US8396324B2 (en) Image processing method and apparatus for correcting distortion caused by air particles as in fog
CN108229525B (en) Neural network training and image processing method and device, electronic equipment and storage medium
JP4160258B2 (en) A new perceptual threshold determination for gradient-based local contour detection
Singh et al. A novel dehazing model for remote sensing images
US9870600B2 (en) Raw sensor image and video de-hazing and atmospheric light analysis methods and systems
US20060165311A1 (en) Spatial standard observer
Banerjee et al. Real-time underwater image enhancement: An improved approach for imaging with AUV-150
US9542725B2 (en) Image processing device, image processing method and medium
CN109118446B (en) Underwater image restoration and denoising method
WO2010132237A1 (en) Light detection, color appearance models, and modifying dynamic range for image display
US20120141044A1 (en) Removing Illumination Variation from Images
US9858495B2 (en) Wavelet-based image decolorization and enhancement
US8938119B1 (en) Facade illumination removal
Mahiddine et al. Performances analysis of underwater image preprocessing techniques on the repeatability of SIFT and SURF descriptors
CN110910347B (en) Tone mapping image non-reference quality evaluation method based on image segmentation
WO2011033744A1 (en) Image processing device, image processing method, and program for processing image
US11798134B2 (en) Image processing device, image processing method, and image processing program
Ashwini et al. Image and video dehazing based on transmission estimation and refinement using Jaya algorithm
KR101242070B1 (en) color image rendering using a modified image formation model
Lee et al. Visibility dehazing based on channel-weighted analysis and illumination tuning
Mahdi et al. SINGLE IMAGE DE-HAZING THROUGH IMPROVED DARK CHANNEL PRIOR AND ATMOSPHERIC LIGHT ESTIMATION.
CN110322431B (en) Haze image quality evaluation method and system, storage medium and electronic equipment
Lu Local Defogging Algorithm for Improving Visual Impact in Image Based on Multiobjective Optimization
CN113658302B (en) Three-dimensional animation data processing method and device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUNGKYUNKWAN UNIVERSITY FOUNDATION FOR CORPORATE C

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARK, YOUNG-KYUNG;PARK, SEOK-LAI;SON, JI-HYOUNG;AND OTHERS;REEL/FRAME:020584/0172

Effective date: 20080220

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY