CN106780561B - Color space construction method with illumination robustness for visual tracking - Google Patents

Color space construction method with illumination robustness for visual tracking Download PDF

Info

Publication number
CN106780561B
CN106780561B CN201611260740.4A
Authority
CN
China
Prior art keywords
pixel
color space
illumination
components
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611260740.4A
Other languages
Chinese (zh)
Other versions
CN106780561A (en)
Inventor
顾国华
万敏杰
钱惟贤
任侃
陈钱
张晓敏
王佳节
陈雪琦
隋修宝
何伟基
刘雯彬
姜睿妍
王雨馨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201611260740.4A priority Critical patent/CN106780561B/en
Publication of CN106780561A publication Critical patent/CN106780561A/en
Application granted granted Critical
Publication of CN106780561B publication Critical patent/CN106780561B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Image Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The invention discloses a color space construction method with illumination robustness for visual tracking. Based on the quantitative conversion formula between the RGB and HSI spaces, a method for inter-frame maintenance of the H component is provided; image pixels are divided into two categories according to whether they carry color information, the illumination-sensitive components of each category are corrected and constrained in the HSI space, and a concrete operation implemented in the RGB space is given, thereby constructing a new color space robust to illumination; the new color space is then applied to classic visual tracking algorithms. The method preserves the high stability and precision of traditional tracking algorithms while applying only a linear change to the color space and requiring no modification of the tracking algorithm itself, so the overall computational complexity is greatly reduced and the method offers low computational cost and good real-time performance.

Description

Color space construction method with illumination robustness for visual tracking
Technical Field
The invention belongs to the field of optical signal processing and digital image processing, and particularly relates to a color space construction method with robustness to illumination change based on a hue keeping principle.
Background
Research on tracking algorithms robust to illumination variation has long held an important place in optical signal processing applications such as military target striking, video surveillance, and robot vision. For visual tracking based on target feature distribution, for example vehicle tracking ([1] a feature-point-based moving-vehicle tracking algorithm [J]. Electric Automation, 2011(06)) and face tracking ([2] a survey of face recognition research [J]. Computer Application Research, 2005(08)), illumination intensity changes caused by tree-shadow occlusion, weather, camera parameters, and the like often cause the tracking result to drift. Most existing research corrects and improves the tracking algorithm with mathematical tools, but does not genuinely examine the optical characteristics of the target itself ([3] an improved mean-shift moving-target tracking algorithm [J]. Communication Technology, 2011(11); [4] a particle-filter tracking algorithm based on information fusion [J]. Opto-Electronic Engineering, 2007(04); [5] a dual-particle-filter method for joint estimation of state and parameters in nonlinear systems [J]. Journal of Electronics and Information Technology, 2008(09)).
To overcome the influence of illumination, a large number of visual tracking algorithms based on color information have emerged, which can be roughly divided into two categories: methods based on template updating and methods based on invariant color features. The former uses the color state of the target in the current frame to predict and update a color model, thereby giving the next frame a certain degree of adaptability to changes in color information, but it is limited by the update speed and cannot cope with rapid changes in color information; the latter eliminates the influence of changes in the probability density of the color distribution by computing an invariant optical-flow field, and has a certain adaptability to the speed of illumination change, but its computational load remains large and real-time processing cannot be achieved.
Disclosure of Invention
The invention provides a color space construction method with illumination robustness for visual tracking, which can be directly applied to a traditional visual tracking method based on characteristic distribution and better solves the problem that the method is easy to generate tracking drift phenomenon under the condition that the illumination condition is obviously changed.
The technical solution for realizing the purpose of the invention is as follows: a color space construction method with illumination robustness for visual tracking comprises the following steps:
firstly, according to a quantitative conversion formula of RGB and HSI space, a method for carrying out interframe maintenance on H components is provided, namely, hue maintenance is carried out;
secondly, dividing image pixel points into two categories according to the existence of color information, namely color pixel points and gray pixel points, respectively correcting and constraining illumination sensitive components of the image pixel points in an HSI space, and giving a specific operation method implemented in an RGB space, thereby constructing a new color space with robustness to illumination;
and finally, applying the established new color space to a classic visual tracking algorithm, and proving that the new color space improves the illumination change resistance of the traditional tracking algorithm.
Compared with the prior art, the invention has the following remarkable advantages: (1) under the condition that the intensity of a light source changes, the traditional tracking algorithm can keep higher stability and precision, only linear change is carried out on a color space, modification of the tracking algorithm is not involved, the overall calculation complexity of the method is greatly reduced, and the method has the characteristics of low calculation complexity and good real-time performance. (2) On one hand, the influence of illumination change on the characteristics of a target image is explained from the optical essence, on the other hand, components sensitive to illumination in a color space are corrected from the source, and the stability and the precision of the existing tracking algorithm in an illumination change environment are greatly improved, so that the illumination robustness is extremely high. (3) The method carries out interframe maintenance on color components (namely hue components) which can reflect the real color characteristics of a target and are insensitive to illumination change, and carries out artificial control and correction on the sensitive components, thereby eliminating the influence of the illumination change on the color space and providing a new color space with illumination robustness.
The invention is further described below with reference to the accompanying drawings:
drawings
FIG. 1 is a schematic diagram of the illumination robust color space construction method for visual tracking according to the present invention.
Fig. 2-1 to 2-4 are tracking result graphs of each visual tracking method in the RGB color space and in the color space proposed by the method, where Fig. 2-1 is the tracking result comparison graph of the MS algorithm, Fig. 2-2 that of the PF algorithm, Fig. 2-3 that of the CS algorithm, and Fig. 2-4 that of the MSPF algorithm; in each graph, the solid-line box represents the tracking result of the algorithm in the RGB color space, the dashed-line box represents the tracking result of the algorithm in the color space provided by the invention, and the red box in each first picture represents the manually selected first-frame target position.
Fig. 3 is a CR graph.
Detailed Description
With reference to fig. 1, the method for constructing a color space with illumination robustness for visual tracking according to the present invention includes the following steps:
firstly, on the basis of keeping the inter-frame hue information of each pixel unchanged, a method for maintaining the H component between frames is derived from the quantitative conversion formula between the RGB and HSI (hue-saturation-intensity) spaces, that is, hue keeping is performed: translation and scale transformations are used to apply a linear transformation to the RGB color components so that the hue component H in the corresponding HSI space remains unchanged, while the constraint conditions that any correction of the saturation (S) and brightness (I) components must follow are obtained at the same time.
Wherein the conventional RGB color space is converted to the HSI color space: for any 24-bit RGB color space display based color digital image, R, G, B three components need to be normalized to the [0,1] interval range first, that is: divide these three components by 255 respectively; the H, S, I component in its corresponding HSI color space can then be calculated according to the following formula:
H = arctan[√3(G − B)/(2R − G − B)]
S = 1 − 3·min(R, G, B)/(R + G + B)        (1)
I = (R + G + B)/3
in the formula, R, G, B represents the red, green, and blue components of a pixel, H, S, I represents the hue, saturation, and luminance components of a pixel, arctan (·) represents an arctangent operation, and min (·) represents a minimum operation.
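As a concrete illustration of the conversion of formula (1), the following is a minimal Python sketch (not part of the patent text; the function name `rgb_to_hsi` and the handling of gray pixels, where hue is undefined, are the author's assumptions):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB values to (H, S, I) following formula (1).

    Uses the arctan-form hue and min-based saturation; hue is
    undefined for gray pixels (R = G = B) and returned as 0 here.
    """
    r, g, b = r / 255.0, g / 255.0, b / 255.0   # normalize to [0, 1]
    i = (r + g + b) / 3.0                        # intensity
    if r == g == b:                              # gray pixel: no hue, S = 0
        return 0.0, 0.0, i
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b)   # saturation
    h = math.atan2(math.sqrt(3.0) * (g - b), 2.0 * r - g - b)  # hue, radians
    return h, s, i
```

For example, a pure red pixel (255, 0, 0) yields S = 1 and I = 1/3, while any gray pixel yields S = 0.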
The method for maintaining the color tone comprises the following steps: firstly, according to the mathematical relationship between H, S, I and R, G, B in formula (1), on the right side of the equal sign of the three equations, if the R, G, B three components are shifted or scaled simultaneously, that is, the three components are added with the same constant or multiplied by the same constant which is not 0, the value of H is not changed; then, if a certain pixel has an H component, the values of the S and I components can be changed only by translation and scale transformation, and the value of the H component can be ensured to be unchanged; finally, using the same translation factor or scale factor, the R, G, B of the pixel point is translated or scaled at the same time, i.e. translation and scale are performed, so as to change the corresponding S and I component values and ensure that the H component value is constant.
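The hue-keeping property described above can be checked numerically, as in the sketch below (the pixel values, translation factor, and scale factor are arbitrary illustrative choices, not values from the patent): adding the same constant to all three RGB components, or multiplying all three by the same positive constant, leaves the arctan-form hue of formula (1) unchanged.

```python
import math

def hue(r, g, b):
    # arctan-form hue from formula (1); inputs already normalized to [0, 1]
    return math.atan2(math.sqrt(3.0) * (g - b), 2.0 * r - g - b)

r, g, b = 0.8, 0.3, 0.1
delta, beta = -0.1, 2.0   # arbitrary translation and scale factors

h0 = hue(r, g, b)
h_shift = hue(r + delta, g + delta, b + delta)  # same constant added to all three
h_scale = hue(beta * r, beta * g, beta * b)     # same positive factor on all three
# h0, h_shift and h_scale agree up to floating-point error
```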
Secondly, dividing image pixel points into two categories according to the existence of color information, namely color pixel points and gray pixel points, respectively correcting and constraining illumination sensitive components of the image pixel points in an HSI space (correcting and constraining color components S and I components which can change along with illumination except for an H component in the HSI space so that H, S, I components of each pixel keep illumination robust characteristics in continuous frames), and providing a specific operation method implemented in an RGB space, thereby constructing a new color space with robustness to illumination. Since the hue H is robust to illumination, when the red (R), green (G), and blue (B) components are corrected, it is necessary to ensure that their corresponding H components remain unchanged before and after the conversion. That is, only R, G, B of each pixel point of the whole image is translated and scaled, and the corresponding S, I is forced to change in value, while the H value remains unchanged.
Correcting the S component of the color pixel: firstly, based on the fact that illumination change alters the pixel saturation value, the S component of the color pixel is uniformly corrected to 1, i.e. the saturation of the pixel is constrained, so as to eliminate the saturation interference; then, the aforementioned translation operation is performed, a translation factor δ is set, and the S component is corrected using the following formula:
S = 1 − 3·min(R + δ, G + δ, B + δ)/[(R + δ) + (G + δ) + (B + δ)] = 1        (2)
wherein R, G, B represent the red, green and blue components of a pixel, H, S, I represent the hue, saturation and brightness components of a pixel, and min(·) represents the minimum operation; finally, by observing formula (2) and combining the conditional constraint that the three component values R, G, B of a color pixel cannot all be equal, the value of δ is obtained:
δ=-min(R,G,B) (3)
in the formula, min (·) represents a minimum value operation, namely, the original R, G, B three components are respectively subtracted by the minimum values, so that saturation correction is completed.
Correcting the I component of the color pixel: firstly, based on the fact that illumination change alters the pixel brightness value, the I component of the color pixel is uniformly corrected to a constant α, i.e. the brightness of the pixel is constrained, so as to eliminate the brightness interference; then, the aforementioned scaling operation is performed, a scale factor β is set, and the I component is corrected using the following formula:
I = β·[(R + δ) + (G + δ) + (B + δ)]/3 = α        (4)
wherein R, G, B represent the red, green and blue components of a pixel, H, S, I represent the hue, saturation and brightness components of a pixel, δ is the translation factor, α is a constant, and β is the scale factor; finally, the value of the scale factor is solved from formula (4):
β = 3α/[(R + δ) + (G + δ) + (B + δ)] = 3α/[R + G + B − 3·min(R, G, B)]        (5)
in the formula, δ = −min(R, G, B) is the translation factor obtained in the saturation-correction step.
i.e., the R, G, B components are multiplied by β, respectively, to complete the luminance correction.
Carrying out unified mapping on the gray pixel points: firstly, based on the fact that a gray pixel satisfies R = G = B, carries no H information, and has S = 0, it is concluded that only its I component changes with illumination, so the illumination influence can be eliminated by correcting the I component alone; then, consistent with the constant used when correcting the color pixels, the I component is artificially set to the constant α, that is, I = α; in other words, all the gray pixels on the brightness axis of the RGB space are mapped to the same point, and the new color space is thus constructed.
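The two correction steps and the gray-pixel mapping can be combined into a single per-pixel transform, sketched below under stated assumptions (the function name `transform_pixel`, the gray-detection threshold `eps`, and the default value of α are illustrative choices, not specified by the patent):

```python
def transform_pixel(r, g, b, alpha=0.5, eps=1e-6):
    """Map one normalized RGB pixel into the new color space.

    Color pixels: translate by delta = -min(R, G, B) so S becomes 1,
    then scale by beta = 3*alpha / (R'+G'+B') so I becomes alpha.
    Gray pixels (R = G = B): mapped to the fixed point (alpha, alpha, alpha).
    """
    if abs(r - g) < eps and abs(g - b) < eps:    # gray pixel: no hue information
        return alpha, alpha, alpha
    delta = -min(r, g, b)                        # translation: forces S = 1
    r2, g2, b2 = r + delta, g + delta, b + delta
    beta = 3.0 * alpha / (r2 + g2 + b2)          # scaling: forces I = alpha
    return beta * r2, beta * g2, beta * b2
```

After the transform, min(R, G, B) = 0 so S = 1, and (R + G + B)/3 = α, while the arctan-form hue is preserved because only a common translation and a common positive scaling were applied.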
And finally, the established new color space replaces the RGB color space and is applied to classic feature-distribution-based visual tracking algorithms, proving that the new color space significantly improves the illumination-change resistance of traditional tracking algorithms without any correction or adjustment of the algorithms themselves. Direct application of the new color space to visual tracking: firstly, four typical classic feature-distribution-based visual tracking algorithms, namely the Mean Shift (MS) algorithm, the Particle Filter (PF) algorithm, the Continuously Adaptive Mean Shift (CS) algorithm, and the hybrid Mean Shift-Particle Filter (MSPF) algorithm, are selected as test tools; then, in a scene with continuously changing illumination, a color CCD camera is used to shoot 200 frames of a moving red model car as the test video; next, the RGB color space is used as the feature space of the four tracking algorithms to track the red car; then, the RGB space is replaced by the color space of the invention and the car is tracked with the same four algorithms, and the tracking results of each algorithm in the RGB color space and in the color space of the invention are displayed on the same figure, yielding the four comparison graphs shown in Figs. 2-1 to 2-4; finally, the accuracy of each tracking result is measured with the quantitative index CR, defined as:
CR = area(A_C ∩ A_R)/area(A_C ∪ A_R)        (6)
in the formula, A_C represents the standard tracking result of the original image, marked in advance by an operator, A_R represents the actual tracking result obtained by the tracking algorithm, and ∩ represents the overlapping area of the two regions; the higher the CR, the closer the tracking result is to the standard result. The CR curves of all methods over the 200 frames of video images are plotted, giving Fig. 3.
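Since the equation image for CR is lost, the sketch below assumes the standard intersection-over-union reading of the definition; the function name and the (x, y, w, h) box format are illustrative assumptions:

```python
def cr(box_c, box_r):
    """Overlap ratio CR between standard box A_C and tracked box A_R.

    Boxes are axis-aligned rectangles (x, y, w, h);
    CR = area(A_C intersect A_R) / area(A_C union A_R).
    """
    xc, yc, wc, hc = box_c
    xr, yr, wr, hr = box_r
    ix = max(0.0, min(xc + wc, xr + wr) - max(xc, xr))  # intersection width
    iy = max(0.0, min(yc + hc, yr + hr) - max(yc, yr))  # intersection height
    inter = ix * iy
    union = wc * hc + wr * hr - inter
    return inter / union if union > 0 else 0.0
```

CR = 1 for a perfect match and 0 for disjoint boxes.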

Claims (5)

1. A color space construction method with illumination robustness for visual tracking is characterized by comprising the following steps:
firstly, according to a quantitative conversion formula of RGB and HSI space, a method for carrying out interframe maintenance on H components is provided, namely, hue maintenance is carried out;
secondly, dividing image pixel points into two categories according to the existence of color information, namely color pixel points and gray pixel points, respectively correcting and constraining illumination sensitive components of the image pixel points in an HSI space, and giving a specific operation method implemented in an RGB space, thereby constructing a new color space with robustness to illumination;
finally, the established new color space is applied to a classic visual tracking algorithm, and the illumination change resistance of the traditional tracking algorithm is improved;
the process of converting the conventional RGB color space to the HSI color space is: for any 24-bit RGB color space display based color digital image, R, G, B three components are first normalized to the [0,1] range, i.e.: divide these three components by 255 respectively; the H, S, I component in its corresponding HSI color space can then be calculated according to the following formula:
H = arctan[√3(G − B)/(2R − G − B)]
S = 1 − 3·min(R, G, B)/(R + G + B)        (1)
I = (R + G + B)/3
in the formula, R, G, B represents the red, green and blue components of the pixel, H, S, I represents the hue, saturation and brightness components of the pixel, arctan (·) represents the arctangent operation, and min (·) represents the minimum operation;
the steps of maintaining the color tone are as follows: firstly, according to the mathematical relationship between H, S, I and R, G, B in formula (1), on the right side of the equal sign of the three equations, if the R, G, B three components are shifted or scaled simultaneously, that is, the three components are added with the same constant or multiplied by the same constant which is not 0, the value of H is not changed; then, if a certain pixel has an H component, the values of the S and I components can be changed only by translation and scale transformation, and the value of the H component can be ensured to be unchanged; and finally, simultaneously translating or scaling R, G, B of the pixel points by adopting the translation factor or the scaling factor with the same value, thereby changing the corresponding S and I component values and ensuring that the H component value is constant.
2. The illumination-robust color space construction method for visual tracking according to claim 1, wherein the S-component of the color pixel is corrected by: firstly, based on the fact that the illumination change can change the pixel saturation value, the S component of the color pixel is uniformly corrected to be 1, namely the saturation of the pixel is restrained, so that the saturation interference is eliminated; then, a translation operation is performed, a translation factor δ is set, and the S component is corrected using the following formula:
S = 1 − 3·min(R + δ, G + δ, B + δ)/[(R + δ) + (G + δ) + (B + δ)] = 1        (2)
in the formula, R, G, B represents the red, green and blue components of the pixel, H, S, I represents the hue, saturation and brightness components of the pixel, and min (-) represents the minimum operation; finally, by observing the formula (2) and combining the conditional constraint that the three component values of the color pixel R, G, B cannot be all equal, the value of δ is obtained:
δ=-min(R,G,B) (3)
in the formula, min (·) represents a minimum value operation, namely, the original R, G, B three components are respectively subtracted by the minimum values, so that saturation correction is completed.
3. The illumination-robust color space construction method for visual tracking according to claim 1, wherein the I component of the color pixel is corrected as follows: firstly, based on the fact that illumination change alters the pixel brightness value, the I component of the color pixel is uniformly corrected to a constant α, i.e. the brightness of the pixel is constrained, so as to eliminate the brightness interference; then, a scaling operation is performed, a scale factor β is set, and the I component is corrected using the following formula:
I = β·[(R + δ) + (G + δ) + (B + δ)]/3 = α        (4)
wherein R, G, B represents the red, green and blue components of the pixel, H, S, I represents the hue, saturation and brightness components of the pixel, δ is the translation factor, α is a constant, β is the scale factor, and finally the value of the scale factor is solved according to equation (4):
β = 3α/[(R + δ) + (G + δ) + (B + δ)] = 3α/[R + G + B − 3·min(R, G, B)]        (5)
in the formula, δ = −min(R, G, B) is the translation factor obtained in the saturation-correction step.
i.e., the R, G, B components are multiplied by β, respectively, to complete the luminance correction.
4. The illumination-robust color space construction method for visual tracking according to claim 1, wherein the gray pixels are mapped uniformly: firstly, based on the fact that a gray pixel satisfies R = G = B, carries no H information, and has S = 0, it is concluded that only its I component changes with illumination, so the illumination influence can be eliminated by correcting the I component alone; then, consistent with the correction of the color pixels, the I component is artificially set to the constant α, that is, I = α; in other words, all the gray pixels on the brightness axis of the RGB space are mapped to the same point, and the new color space is thus constructed.
5. The illumination robust color space construction method for visual tracking according to claim 1, characterized in that the new color space is directly applied to the visual tracking algorithm: firstly, selecting four typical classic visual tracking algorithms based on characteristic distribution, namely a mean shift MS algorithm, a particle filter PF algorithm, a continuous self-adaptive mean shift CS algorithm and a mixed mean shift-particle filter MSPF algorithm as testing tools; then, under the scene of continuous change of illumination, a color CCD camera is adopted to shoot a total of 200 frames of images of a moving red model trolley as a test video; then, the RGB color space is used as the feature space of the four tracking algorithms to track the red trolley; thirdly, replacing the RGB space by using the color space, and tracking the trolley by using the same four algorithms to obtain a tracking result comparison graph; finally, the accuracy of each tracking result is measured by using a quantitative index CR, wherein the CR is defined as:
CR = area(A_C ∩ A_R)/area(A_C ∪ A_R)
in the formula, A_C represents the standard tracking result of the original image, marked in advance by an operator, A_R represents the actual tracking result obtained by the tracking algorithm, and ∩ represents the overlapping area of the two regions.
CN201611260740.4A 2016-12-30 2016-12-30 Color space construction method with illumination robustness for visual tracking Active CN106780561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611260740.4A CN106780561B (en) 2016-12-30 2016-12-30 Color space construction method with illumination robustness for visual tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611260740.4A CN106780561B (en) 2016-12-30 2016-12-30 Color space construction method with illumination robustness for visual tracking

Publications (2)

Publication Number Publication Date
CN106780561A CN106780561A (en) 2017-05-31
CN106780561B true CN106780561B (en) 2020-04-17

Family

ID=58953430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611260740.4A Active CN106780561B (en) 2016-12-30 2016-12-30 Color space construction method with illumination robustness for visual tracking

Country Status (1)

Country Link
CN (1) CN106780561B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587591A (en) * 2009-05-27 2009-11-25 北京航空航天大学 Visual accurate tracking technique based on double parameter thresholds dividing
CN102779330A (en) * 2012-06-13 2012-11-14 京东方科技集团股份有限公司 Image reinforcement method, image reinforcement device and display device
CN103324284A (en) * 2013-05-24 2013-09-25 重庆大学 Mouse control method based on face and eye detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100507780B1 (en) * 2002-12-20 2005-08-17 한국전자통신연구원 Apparatus and method for high-speed marker-free motion capture
US20090268953A1 (en) * 2008-04-24 2009-10-29 Apteryx, Inc. Method for the automatic adjustment of image parameter settings in an imaging system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587591A (en) * 2009-05-27 2009-11-25 北京航空航天大学 Visual accurate tracking technique based on double parameter thresholds dividing
CN102779330A (en) * 2012-06-13 2012-11-14 京东方科技集团股份有限公司 Image reinforcement method, image reinforcement device and display device
CN103324284A (en) * 2013-05-24 2013-09-25 重庆大学 Mouse control method based on face and eye detection

Also Published As

Publication number Publication date
CN106780561A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
Jiang et al. Night video enhancement using improved dark channel prior
WO2022100242A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
DE102019106252A1 (en) Method and system for light source estimation for image processing
US8724894B1 (en) Colorization of digital imagery
CN111429370B (en) Underground coal mine image enhancement method, system and computer storage medium
CN108230407B (en) Image processing method and device
CN105049718A (en) Image processing method and terminal
CN103778900B (en) A kind of image processing method and system
CN105185314A (en) Uniformity compensation method for LED display screen
CN104581105B (en) Based on the auto white balance method of colour temperature range conversion weight map and the correction of block reliability
CN103065334A (en) Color cast detection and correction method and device based on HSV (Hue, Saturation, Value) color space
CN103268596B (en) A kind of method for reducing picture noise and making color be near the mark
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN111027415B (en) Vehicle detection method based on polarization image
CN104504722B (en) Method for correcting image colors through gray points
CN111970432A (en) Image processing method and image processing device
CN108063926A (en) Image processing method and device, computer readable storage medium and computer equipment
CN115665565A (en) Online tobacco leaf image color correction method, system and device
JP2008092565A (en) Color matching method and image capturing device
CN106780561B (en) Color space construction method with illumination robustness for visual tracking
CN109672874B (en) Space-time consistent stereo video color correction method
CN111836103A (en) Anti-occlusion processing system based on data analysis
CN107316040B (en) Image color space transformation method with unchanged illumination
CN114529460A (en) Low-illumination-scene intelligent highway monitoring rapid defogging method and device and electronic equipment
CN110148188B (en) Method for estimating low-illumination image illumination distribution based on maximum difference image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant