CN101383912A - Color intelligent automatic regulating device for television video taking - Google Patents

Color intelligent automatic regulating device for television video taking Download PDF

Info

Publication number
CN101383912A
CN101383912A CNA2008102016225A CN200810201622A
Authority
CN
China
Prior art keywords
image
color
face
red
green
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102016225A
Other languages
Chinese (zh)
Other versions
CN101383912B (en)
Inventor
张秀彬
应俊豪
程远
董长城
徐林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN2008102016225A priority Critical patent/CN101383912B/en
Publication of CN101383912A publication Critical patent/CN101383912A/en
Application granted granted Critical
Publication of CN101383912B publication Critical patent/CN101383912B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention relates to a method for intelligent automatic color adjustment in television videography, in the technical field of automatic control. The method comprises the following steps: Step 1, a database of classic face color images and their color feature values is established; Step 2, images of the person are captured and processed; Step 3, the intensity of the colored lighting is controlled; Step 4, Steps 2 and 3 are repeated until the difference between the color feature value of the current face image under the colored lighting and the color feature value of the classic face color image is less than a preset threshold; and Step 5, the image is converted from its original color space to the YCbCr color space for news program relay. With the invention, the image color of the person being photographed achieves a combination of natural beauty and artistic beauty, so the invention provides a new technique that saves time and labor and improves the effect of news interviews and film shooting.

Description

Intelligent automatic color adjustment method for television videography
Technical field
The present invention relates to a method in the technical field of automatic control, specifically an intelligent automatic color adjustment method for television videography.
Background technology
As is well known, in television videography, no matter what role a person plays in a program and regardless of the person's status, he or she must accept the arrangements of the director and the make-up artist. Besides designing and dressing up the person's image, make-up must be applied to the face with just the right amount of color, otherwise the on-screen effect of the personal image suffers. Studios are usually equipped with lighting and dimming equipment according to the room layout and space requirements, for example 2 main spotlights, 2 auxiliary spotlights, 3 soft lights, 2 contour lights, 3 background lights, together with traveling supports, ceiling-hook lamp brackets and a 3-channel, 12-output dimming console, all intended to improve the on-screen effect of the person's image. However, all of this work is carried out manually, and the design of the person's image and the shooting effect depend on the photographer's individual professional skill. It is therefore a very time-consuming and laborious job whose actual technical level and effect are difficult to guarantee.
In fact, a good on-screen effect in television videography does not require deliberately coloring the person being photographed at all; it can be achieved by automatically color-matching the colored lighting to the person's natural appearance.
A search of the prior-art literature finds Lin Guoyi, "Lighting and camera technique in a small studio" ("Radio & Television Technology", No. 10, 2003), which introduces, for news production, the close connection between studio lighting technique and camera technique and discusses methods of lighting and picture composition. The article focuses on the application of three-primary-color soft lights to the shooting of people in the studio and emphasizes that in a small studio such lights must be used with a combination of art and technique: when setting the lights, the figure and the background of the news studio must be illuminated at the same time, and the lighting must create the effects of key light, fill light, back light, eye light and background light on the person so as to achieve a three-dimensional, layered result. For this reason most news studios currently adopt a scientific lighting method, ring lighting. Unlike traditional three-point lighting of a person, it de-emphasizes the one-lamp-one-effect principle: the lights surround the person and illuminate a larger area evenly at close range, so that the person no longer shows obvious cast shadows or harsh brightness contrast, producing a lighting effect similar to that of an ordinary indoor scene. The article also emphasizes that during dimming one must properly handle color saturation, background color, hue, contrast, white balance, faithful color reproduction, the lighting ratio, the facial light, and the rendering of spatial depth by facial and back light; coordinate the relations among illumination, depth of field and image sharpness as well as the layering and symmetry of the light; and balance overall lighting against local lighting.
The shortcomings of the above technique are: (1) the overall lighting configuration and its operation are very complicated; (2) the configuration and adjustment of lamp brightness and color are done entirely by hand, so the actual result is largely limited by the operator's skill. The method of the above document therefore cannot serve as an automatic adjustment technique for the image color of people in news videography.
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art and provide an intelligent automatic color adjustment method for television videography, in which the image of any person being photographed automatically receives color-matched, adjusted colored lighting during shooting, so that a pleasing aesthetic effect of the person's image on the television screen is obtained automatically, and the best combination of true skin color and aesthetic appeal can be found for the different skin colors of different ethnic groups.
The present invention is achieved through the following technical solution, which comprises the following steps:
Step 1: establish a database of classic face color images and their color feature values;
Step 2: capture images of the person with a digital camera; perform image color space conversion, image enhancement, face recognition and tracking on the face image; and, using the database of classic face color images and color feature values established in Step 1, perform face image color decomposition and image matching, so as to identify the ethnic group and skin color of the person in the face image and find the classic face color image corresponding to that skin color;
Step 3: compare the red, green and blue primary-color luminous fluxes in the color feature value of the face image with those of the classic face color image to obtain the flux differences between the two, determine red, green and blue primary luminous flux regulation control commands according to the flux differences, and, after conversion and amplification by the driver modules, control the illumination intensities of the red, green and blue cold light sources;
Step 4: repeat Step 2 and Step 3 until the difference between the color feature value formed by the face image of the person currently being photographed under the colored lighting and the color feature value of the classic face color image is less than a preset threshold;
Step 5: convert the image from its original color space to the YCbCr color space for television news relay, and convert it back from the YCbCr color space to an RGB color space model image at the television receiving terminal.
In Step 1, establishing the database of classic face color images and their color feature values means collecting classic face color images that represent the different skin colors of the world, forming a set of classic face color images and storing it in an image database, while at the same time obtaining the color feature value of each classic face color image and storing it in a color feature value database. From the color feature values of the classic face color images in this database, the color feature value parameters corresponding to the specific skin color and ethnic group represented by every stored classic face color image can be obtained. For example, the ratio r:g:b = r_y:g_y:b_y of the "color feature value" corresponding to the "classic face color image" selected from many images of yellow-skinned people in the RGB color space can be used to judge whether the person being photographed is yellow-skinned: if the color ratio of the current person's face equals, or comes closest to, this ratio, the current person is considered yellow-skinned; otherwise not.
The color feature value can be expressed in the RGB color space as follows: any color C can be matched by the three primary colors of the trichromatic theory, red (R, wavelength 700 nm), green (G, wavelength 546.1 nm) and blue (B, wavelength 435.8 nm), that is:
C=rR+gG+bB
where r, g, b are the proportionality coefficients of the three primary colors, namely
r = R / (R + G + B), g = G / (R + G + B), b = B / (R + G + B)
Obviously, r + g + b = 1. R, G and B are the luminous fluxes of the three primaries, in units of light-watts. When red, green and blue primary lights are mixed to form white light, the required ratio of the red, green and blue luminous fluxes is 1:4.5907:0.0601. The luminous fluxes R, G, B of the three primaries into which an image is decomposed in the RGB color space, together with their ratio r:g:b, constitute the "color feature value" of that color image in the RGB color space. The RGB-space expression [R G B]^T of a color image represents not only the colors of the red, green and blue primary lights but also their brightness, and there is strong correlation among the three RGB components. By contrast, the YCbCr color space is the standard color space model adopted for television scanning; of its feature values, Y is the luminance, and the chrominance components Cb and Cr are, respectively, the differences between the blue and red signals and the luminance signal.
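As an illustration of the color feature value just defined, the following minimal Python sketch computes the primary-flux triple (R, G, B) and the proportionality coefficients r:g:b for an image region; treating the per-channel mean of an RGB array as the "luminous flux" of each primary is an assumption made here for simplicity, not something the patent prescribes.

```python
import numpy as np

def color_feature_value(region_rgb):
    """Return ((R, G, B), (r, g, b)) for an RGB image region.

    region_rgb: H x W x 3 array; the per-channel mean stands in for the
    luminous flux of each primary (an assumption of this sketch).
    """
    R, G, B = (region_rgb[..., c].astype(float).mean() for c in range(3))
    total = R + G + B
    return (R, G, B), (R / total, G / total, B / total)
```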
A color space is a mathematical way of describing color, used to specify and produce the colors of a scene and its image, that is, to visualize them. The mathematical space formed by a mathematical model describing color is called a color space; different color mathematical models form different color spaces, which are also called color space models. A color space model is usually represented three-dimensionally, and each color in the space is specified by a set of three coordinate parameters describing its position in the space. Many color spaces have been constructed to suit different applications, for example the RGB space of optoelectronic devices such as digital cameras, the CMYK space of printing devices, the YCbCr space of television signals, and the HSV space that reflects human color perception. "Color space model" and "color space" are equivalent notions and are used interchangeably when describing color images.
Capturing images of the person with a digital camera means photographing the person under red, green and blue primary-color lamp illumination concentrated ("aimed") on the face area, with no special restrictions on natural daylight or other artificial light sources. The color space model of the captured person images is determined by the digital camera, namely the RGB color space model. The picture consists of the person's image and the background, and the person's image consists of the face and the rest of the body; when the face occupies a relatively large proportion of the picture, that is, when it is relatively prominent, the image is conventionally called a face image.
The image color space conversion prepares for the subsequent face recognition, image enhancement and so on; specifically, the digital image output by the digital camera is converted from the RGB color space to the HSV color space, where the converted hue H, saturation S and value V are expressed respectively as follows:
V = max(R, G, B)
S = (V - min(R, G, B)) / V, with S = 0 when V = 0
H = 60° × (G - B) / (V - min(R, G, B)) when V = R
H = 60° × (2 + (B - R) / (V - min(R, G, B))) when V = G
H = 60° × (4 + (R - G) / (V - min(R, G, B))) when V = B
If H < 0 is obtained while computing H, then H + 360° is taken as the actual value of H.
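A minimal per-pixel sketch of the RGB-to-HSV conversion written out above (hexcone form); inputs are assumed to be normalized to [0, 1], which is a choice of this sketch rather than of the patent.

```python
def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel with components in [0, 1] to (H in degrees, S, V)."""
    v = max(r, g, b)
    mn = min(r, g, b)
    delta = v - mn
    s = 0.0 if v == 0 else delta / v
    if delta == 0:
        h = 0.0                      # achromatic: hue undefined, 0 by convention
    elif v == r:
        h = 60.0 * (g - b) / delta
    elif v == g:
        h = 60.0 * (2.0 + (b - r) / delta)
    else:
        h = 60.0 * (4.0 + (r - g) / delta)
    if h < 0:
        h += 360.0                   # take the actual value of H
    return h, s, v
```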
The image enhancement comprises two parts: brightness intensity enhancement and saturation adjustment. The brightness intensity component is enhanced with a Pulse-Coupled Neural Network (PCNN). A PCNN is a simplified model built by simulating the synchronous, feature-related firing behavior of neurons observed in the visual cortex of the human brain, so it has a natural connection with the perception of the visual system. When a PCNN is applied to a two-dimensional image, each neuron corresponds one-to-one with a pixel, and the pixel's brightness intensity value serves as the neuron's external stimulus. Inside the PCNN, clusters of pixels that are spatially adjacent and similar in intensity fire synchronously, while the others fire asynchronously. In image enhancement this means that the pixels corresponding to synchronously firing neurons take the same brightness intensity value, which smooths image regions, while the pixels corresponding to asynchronously firing neurons take different brightness intensity values, which strengthens the brightness gradient between image regions, makes the image edges more prominent and gives the enhanced brightness distribution more levels.
So-called "firing" means that each neuron, besides receiving its external stimulus, also receives feeding inputs and linking inputs from other neurons in the network; these are combined in a product-coupled way to form the neuron's internal activity, and the comparison of a dynamic threshold with this internal activity excites or suppresses the neuron's pulse output, which is referred to as firing.
The saturation adjustment means that, to ensure the colors of the enhanced image are more clearly distinguished, a nonlinear exponential adjustment is applied to the saturation component to enlarge the dynamic range of the color variation and strengthen its contrast. The mathematical model of the nonlinear exponential adjustment is S' = S^α, where α is the stretching factor of the exponential adjustment, which determines the degree of saturation of the saturation component S, and S' is the stretched saturation component.
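The exponential saturation stretch S' = S^α is essentially one line of code; the α value below is only an illustrative assumption, since the patent leaves the stretching factor unspecified.

```python
import numpy as np

def stretch_saturation(s, alpha=0.8):
    """Nonlinear exponential adjustment S' = S**alpha on a saturation array in [0, 1]."""
    return np.power(np.asarray(s, dtype=float), alpha)
```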
The face recognition and tracking are as follows (a code sketch of the skin-tone segmentation follows this list):
First, the scene hue of each pixel in the person image is scanned and examined from left to right and from top to bottom according to the pixel coordinates, and the pixels whose hue falls within the HSV hue interval [2°, 47°] are tentatively taken as the face region, thereby distinguishing the face region from the other parts of the person and from the image background;
Second, within the set of facial skin hues, the pixel coordinate whose hue value is closest to 11° is taken as the face center point;
Third, a first-order prediction algorithm is used to obtain the predicted target pixel coordinate for face-region tracking;
Finally, according to the correspondence between the pixel coordinate system and the world coordinate system, the known original pixel coordinate and the motion-predicted pixel coordinate are used to obtain, respectively, the three-dimensional world coordinates of the original point and of the motion-predicted point, which are used to control and drive the optical-axis aiming direction of the camera lens, thereby enabling the camera lens to track the face region.
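The skin-tone segmentation sketch referred to above: marking the candidate face region by the hue interval [2°, 47°]. The array layout (a per-pixel hue map in degrees) is an assumption of this sketch.

```python
import numpy as np

def face_region_mask(hue_deg):
    """Boolean mask of pixels whose hue lies in the skin interval [2 deg, 47 deg].

    hue_deg: H x W array of hues in degrees (e.g. from an RGB-to-HSV conversion).
    """
    hue_deg = np.asarray(hue_deg, dtype=float)
    return (hue_deg >= 2.0) & (hue_deg <= 47.0)
```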
The face image color decomposition and image matching proceed as follows (a matching sketch follows this list):
In the first step, the face image obtained by photographing the person in Step 2 is decomposed into the three primary colors in the RGB color space; specifically, the luminous fluxes of the red, green and blue primary components in the face region are obtained through three-primary-color filters;
In the second step, the ratio r:g:b of the red, green and blue primary components is calculated from the red, green and blue primary luminous fluxes of the face region;
In the third step, r:g:b is compared with the "color feature values" of the "classic face color images" in the database to find the closest red, green and blue primary ratio; the person currently being photographed is thereby identified as having the ethnic group and skin color represented by that ratio r:g:b, and the "classic face color image" corresponding to that skin color is then found.
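The matching sketch referred to above: the measured r:g:b ratio of the face region is compared against the stored feature values and the nearest entry is returned. Using Euclidean distance between ratio triples is an assumption of this sketch; the patent only requires finding the closest ratio.

```python
import numpy as np

def match_classic_image(face_ratio, database):
    """Return the (name, ratio) database entry closest to face_ratio.

    face_ratio: (r, g, b) proportionality coefficients of the face region.
    database:   dict mapping a skin-tone/image name to its stored (r, g, b) ratio.
    """
    face = np.asarray(face_ratio, dtype=float)
    return min(database.items(),
               key=lambda item: np.linalg.norm(face - np.asarray(item[1], dtype=float)))

# example with illustrative, made-up ratios
db = {"classic_image_A": (0.42, 0.33, 0.25), "classic_image_B": (0.36, 0.34, 0.30)}
name, ratio = match_classic_image((0.41, 0.34, 0.25), db)
```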
Determining the red, green and blue primary luminous flux regulation control commands from the flux differences, and controlling the illumination intensities of the red, green and blue cold light sources after conversion and amplification by the driver modules, proceeds as follows (a minimal control-loop sketch follows this list):
In the first step, the luminous fluxes of the red, green and blue primary components obtained from the face image are compared one by one with the red, green and blue primary luminous fluxes in the color feature value of the corresponding "classic face color image", and the differences between the image of the person currently being photographed and the "classic face color image" in red, green and blue primary luminous flux are calculated;
In the second step, according to these differences between the current face image and the "classic face color image", the red, green and blue primary luminous flux regulation control commands are determined so as to reduce the differences;
In the third step, control instructions are sent to the drive circuits controlling the illumination of each monochromatic light source, that is, the red, green and blue primary luminous flux regulation control commands determined in the second step are sent to the red, green and blue driver modules; the red, green and blue driver modules convert the commands into analog control quantities, which, after amplification by amplifiers, adjust the output illumination intensities of the red, green and blue cold light sources.
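The control-loop sketch referred to above: per-channel flux differences turned into signed regulation commands. Representing a command as a (sign, magnitude) pair is an assumption of this sketch; the actual device encodes it as described in the embodiment further below.

```python
def flux_regulation_commands(face_flux, target_flux):
    """Signed per-channel adjustments that would move the measured face fluxes
    toward the target fluxes of the matched classic face color image."""
    commands = {}
    for channel, measured, target in zip(("red", "green", "blue"), face_flux, target_flux):
        diff = target - measured
        commands[channel] = ("+" if diff >= 0 else "-", abs(diff))
    return commands

# example with illustrative numbers: measured vs. target fluxes
cmds = flux_regulation_commands((120.0, 95.0, 60.0), (130.0, 90.0, 62.0))
# {'red': ('+', 10.0), 'green': ('-', 5.0), 'blue': ('+', 2.0)}
```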
In Step 5, converting the image from its original color space to the YCbCr color space means converting the image that has finally been adjusted to its optimal state from the original RGB color space to the YCbCr color space for use in television news relay; the conversion formulas are:
Y = 0.299 R + 0.587 G + 0.114 B
Cb = -0.1687 R - 0.3313 G + 0.500 B + 128
Cr = 0.500 R - 0.4187 G - 0.0813 B + 128
At the television receiving terminal, the received YCbCr color space model image is converted back to an RGB color space model image so that the image can conveniently be stored as a bitmap; the inverse conversion from the YCbCr color space to the RGB color space is:
R = Y + 1.402 (Cr - 128)
G = Y - 0.3441 (Cb - 128) - 0.7141 (Cr - 128)
B = Y + 1.772 (Cb - 128)
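A sketch of the forward and inverse conversions above as matrix operations; the inverse is obtained numerically from the same matrix, which keeps the two directions consistent, and the Y coefficient 0.587 follows the standard BT.601 matrix that the stated inverse implies.

```python
import numpy as np

RGB2YCBCR = np.array([[ 0.299,   0.587,   0.114 ],
                      [-0.1687, -0.3313,  0.500 ],
                      [ 0.500,  -0.4187, -0.0813]])
OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """Forward conversion of an (R, G, B) triple to (Y, Cb, Cr)."""
    return RGB2YCBCR @ np.asarray(rgb, dtype=float) + OFFSET

def ycbcr_to_rgb(ycbcr):
    """Inverse conversion, recovered from the same matrix for consistency."""
    return np.linalg.solve(RGB2YCBCR, np.asarray(ycbcr, dtype=float) - OFFSET)
```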
Compared with the prior art, the present invention has the following beneficial effects:
1. The image color of the person being photographed can achieve a perfect combination of natural beauty and artistic beauty.
2. During shooting the person does not need to apply make-up deliberately and only needs to keep his or her usual natural appearance; a device implementing this method automatically color-matches and adjusts the colored lighting for the person being photographed.
3. No special requirement is placed on whether the studio environment is already equipped with white light (including main white light, background light, auxiliary ring light, etc.).
4. The database of classic face color images and color feature values for the different ethnic groups can be updated in real time to follow the development of society and changes in aesthetic standards, so that the color of the photographed person's image reflects the aesthetics of the times.
The present invention is therefore a new technique that saves time and labor and improves the efficiency of news interviews and film production.
Embodiment
The following is a detailed description of an embodiment of the invention. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.
This embodiment comprises the following concrete steps:
Step 1: collect classic face color images representing the different skin colors of the world, form a set of classic face color images, and store it in an image database; at the same time, express every image of the set in the RGB color space to build the database of color feature values of the classic face color images, that is, the luminous fluxes R, G, B of the three primaries of every image and the proportionality coefficients r:g:b of the three primaries.
Step 2: capture images of the person with a digital camera; perform image color space conversion, image enhancement, face recognition and tracking on the face image; and, using the database of classic face color images and color feature values established in Step 1, perform face image color decomposition and image matching, so as to identify the ethnic group and skin color of the person in the face image and find the classic face color image corresponding to that skin color.
Capturing images of the person with a digital camera means photographing the person under red, green and blue primary-color cold-light lamp illumination concentrated ("aimed") on the face area, with no special restrictions on natural daylight or other artificial light sources. The captured picture must ensure that the person's face occupies a relatively large proportion of the frame, and the digital camera outputs an RGB color image.
The image color space conversion means converting the face image output by the digital camera from the RGB color space to the HSV color space, where the converted hue H, saturation S and value V are expressed respectively as follows:
V = max(R, G, B)
S = (V - min(R, G, B)) / V, with S = 0 when V = 0
H = 60° × (G - B) / (V - min(R, G, B)) when V = R
H = 60° × (2 + (B - R) / (V - min(R, G, B))) when V = G
H = 60° × (4 + (R - G) / (V - min(R, G, B))) when V = B
When H < 0, H + 360° is taken as the actual value of H.
In this way the digital image output by the digital camera is converted from the RGB color space to the HSV color space.
The HSV color space is a three-dimensional color space model comprising hue H, saturation S and value (brightness) V, created according to the intuitive characteristics of color; it is also called the hexcone model. In this color space model the hue H is measured as an angle ranging from 0° to 360°, counted counter-clockwise starting from red: red is 0°, green is 120° and blue is 240°, and their complementary colors are yellow at 60°, cyan at 180° and magenta at 300°. The saturation S ranges from 0.0 to 1.0, and the value V from 0.0 (black) to 1.0 (white). For example, pure red is H = 0, S = 1, V = 1; S = 0 represents an achromatic color, in which case the hue is undefined. The HSV color space model has two key features: (1) the V component is independent of the color information of the image; (2) the H and S components are closely related to the way the human eye perceives color. These features make the HSV model very suitable for image processing algorithms that handle color characteristics as the human visual system perceives them. Since color digital images are generally stored and displayed with the RGB color space model, whenever the color characteristics of a color image need to be processed and analyzed in terms of human visual properties, a color space conversion of the image must be performed. Theory and experiment have shown that converting from the RGB color space model to the HSV color space model surpasses other conversions in its suitability for processing and analyzing color characteristics. Conversions from RGB to HSV can be divided into four kinds: the cylinder, single-hexcone, sphere and triangle conversions; the geometric spatial characteristics of the fused images obtained with the triangle and cylinder conversions are better than those of the other conversions, and in terms of information content and standard deviation after conversion the triangle conversion is better than the cylinder conversion.
So-called hue, also called tint, is the visual system's sensation of the wavelength of light; it is used to distinguish the names and kinds of colors and is the attribute by which color regions are most easily separated. Hue depends on the frequency (dominant wavelength) of the light wave within the visible spectrum; colors without a dominant wavelength are called achromatic colors, such as black, gray and white.
So-called saturation refers to the purity of a color and can be used to distinguish how vivid colors are. The more other light components are mixed into a color, the less saturated the color is said to be. A saturated color is one into which no white light has been mixed at all (i.e. pure light of a single wavelength); for example, a spectral color composed of a single wavelength is a fully saturated color.
So-called brightness: the intensity of a color can be described with different terms and measures, such as lightness, brightness and luminance. Lightness is a perceptual attribute and is difficult to measure; luminance is a physical attribute and is easy to measure; brightness is the visual response to luminance and can be computed from lightness and luminance, although it still differs somewhat from the true perceptual quantity. Lightness is the visual system's perception of how much light a visible object emits and is related to human perception. Because lightness is difficult to measure, the Commission Internationale de l'Eclairage (CIE) defined a more easily measured physical quantity called luminance. According to the CIE definition, luminance is the radiant power weighted by a spectral sensitivity function that reflects the characteristics of vision; this function peaks at 555 nm, and its amplitude is proportional to the physical power. In this sense luminance can be regarded as light intensity. The value of luminance is expressed as the luminous intensity reflected or emitted per unit area, in candela per square metre (cd/m2). One candela (cd) is the luminous intensity, in a given direction, of a source emitting radiation of frequency 540 × 10^12 Hz whose radiant intensity in that direction is 1/683 watt per steradian. Luminous quantity can also be expressed in lumens (lm): 1 lumen equals the luminous flux emitted into a unit solid angle by a uniform point source of one candela. For a light source, the lm and cd/m2 values of the same brightness are numerically equal. In common practice, the specified brightness takes white light as the reference, normalized to 1 or to 256 units.
The image enhancement comprises brightness intensity enhancement and saturation adjustment, as follows:
The brightness intensity enhancement means enhancing the image brightness intensity component with the Pulse-Coupled Neural Network method: each neuron corresponds one-to-one with a pixel and takes the pixel's brightness intensity value as its external stimulus; inside the PCNN, clusters of pixels that are spatially adjacent and similar in intensity fire synchronously, which smooths image regions, while the others fire asynchronously, so that their corresponding pixels take different brightness intensity values, which strengthens the brightness gradient between image regions, makes the image edges more prominent and gives the enhanced brightness distribution more levels;
The saturation adjustment means that, to ensure the colors of the enhanced image are more clearly distinguished, a nonlinear exponential adjustment is applied to the saturation component to enlarge the dynamic range of the color variation and strengthen its contrast, computed with the formula S' = S^α.
The face recognition and tracking proceed as follows:
In the first step, the face region is searched: in the person image, the scene hue of each pixel is scanned and examined from left to right and from top to bottom according to the pixel coordinates, and the pixels whose hue falls within the HSV hue interval [2°, 47°] are tentatively taken as the face region;
The facial skin in a person image differs markedly in hue from the clothing and the background: facial skin, clothing and background each occupy their own angular regions, and the hue of facial skin is concentrated relatively stably in a certain angular region of the HSV color space. Experiments confirm that, whether under natural lighting or artificial light sources and regardless of the camera system, the angular distribution of the skin hue H remains essentially within 2° to 47° of the HSV color space, so facial skin can be distinguished from clothing, background and other scenery by the hue values of the person image in HSV space; in other words, only scenery whose hue lies within the interval [2°, 47°] can be facial skin, otherwise it is other scenery such as clothing or other objects. Experiments further confirm that a hue value of 11° has the highest probability of being facial skin, so the 11° hue value is called the probability peak of facial skin. If P(H) denotes the probability distribution of the facial skin hue over the interval [2°, 47°], the probability is highest at H = 11°, i.e. P(11°) = P_max; that is, when the hue of some scenery is 11°, the confidence that this scenery is facial skin is highest.
In the second step, the face center is determined (a minimal sketch follows):
Within the set of facial skin hues, the pixel coordinate whose hue value is closest to 11° is taken as the face center point. For example, if the face-region search yields the set of hues falling within the HSV hue interval [2°, 47°] as {..., 9.7°, 10.1°, 9.5°, ...}, with corresponding pixel coordinates {..., (u_{k-1}, v_{k-1}), (u_k, v_k), (u_{k+1}, v_{k+1}), ...}, then the hue value closest to 11° is 10.1°, whose pixel coordinate is (u_k, v_k), so (u_k, v_k) is determined to be the face center coordinate; u denotes the pixel column coordinate, v the pixel row coordinate, and the subscripts denote the column and row numbers: the k of u_k denotes the k-th column and the k of v_k the k-th row.
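The sketch mentioned above for the second step: among the skin-tone pixels, pick the one whose hue is closest to the 11° probability peak as the face center (u_k, v_k). The parallel-list representation of hues and coordinates is an assumption of this sketch.

```python
def face_center(hues_deg, coords):
    """Return the (u, v) pixel coordinate whose hue is closest to 11 degrees.

    hues_deg: hues of the skin-tone pixels, in degrees.
    coords:   matching list of (u, v) pixel coordinates.
    """
    k = min(range(len(hues_deg)), key=lambda i: abs(hues_deg[i] - 11.0))
    return coords[k]

# example from the text: 10.1 degrees is the hue nearest to 11 degrees
center = face_center([9.7, 10.1, 9.5], [(100, 40), (101, 41), (102, 42)])  # -> (101, 41)
```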
In the third step, the face region is tracked (a prediction sketch follows this step):
Since in television videography the person's image generally moves only slightly and slowly, a first-order prediction algorithm is used to track the person's face region, as follows:
Let the current target velocity of the person's image be V(t_k) = [V_u(t_k) V_v(t_k)]^T; its first-order prediction estimate is
Ṽ_u(t_k) = (u_{k-1} - u_{k-2}) / (t_k - t_{k-1}), Ṽ_v(t_k) = (v_{k-1} - v_{k-2}) / (t_k - t_{k-1})
and the predicted target pixel coordinate is
û_{k+1} = Ṽ_u(t_k) (t_k - t_{k-1}) + u_k, v̂_{k+1} = Ṽ_v(t_k) (t_k - t_{k-1}) + v_k
where V_u(t_k) and V_v(t_k) are the components of the velocity V(t_k) at time k along the u and v axes of the pixel coordinate system; Ṽ_u(t_k) and Ṽ_v(t_k) are the estimates of these components; u_k, u_{k-1} and u_{k-2} are the u coordinates at times k, k-1 and k-2; v_k, v_{k-1} and v_{k-2} are the v coordinates at times k, k-1 and k-2; and û_{k+1} and v̂_{k+1} are the estimated u and v coordinates at time k+1.
The "target velocity of the person's image" is obtained from two successive computations of the face-region center over the time interval Δt; for example, when Δt = t_k - t_{k-1},
V_u(t_k) = (u_k - u_{k-1}) / (t_k - t_{k-1}), V_v(t_k) = (v_k - v_{k-1}) / (t_k - t_{k-1})
Thus, when the face center at time t_k is (u_k, v_k), the first-order prediction algorithm predicts that the face center of the person's image at time t_{k+1} will be (û_{k+1}, v̂_{k+1}).
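The prediction sketch referred to above, implementing the first-order estimate and prediction formulas with the last three measured face centers; the list-based bookkeeping of centers and timestamps is an assumption of this sketch.

```python
def predict_next_center(centers, times):
    """Predict the face center (u, v) at time t_{k+1}.

    centers: [..., (u_{k-2}, v_{k-2}), (u_{k-1}, v_{k-1}), (u_k, v_k)]
    times:   [..., t_{k-1}, t_k]
    """
    (u_km2, v_km2), (u_km1, v_km1), (u_k, v_k) = centers[-3], centers[-2], centers[-1]
    dt = times[-1] - times[-2]            # t_k - t_{k-1}
    vu = (u_km1 - u_km2) / dt             # first-order velocity estimates
    vv = (v_km1 - v_km2) / dt
    return u_k + vu * dt, v_k + vv * dt   # (u_hat_{k+1}, v_hat_{k+1})
```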
In the fourth step, the camera lens is servo-driven (a projection sketch follows this step):
According to the correspondence between the pixel coordinate system and the world coordinate system, namely:
1. The transformation between the world coordinate system and the camera coordinate system, described by a rotation matrix R and a translation vector t:
[x_c, y_c, z_c]^T = R [X_W, Y_W, Z_W]^T + t, with R = [r1 r2 r3; r4 r5 r6; r7 r8 r9] and t = [t_x, t_y, t_z]^T
where [X_W Y_W Z_W]^T is a point in the world coordinate system and [x_c y_c z_c]^T the same point in the camera coordinate system.
2. The ideal perspective projection, i.e. the transformation of point coordinates from the camera coordinate system to the normalized virtual-plane image coordinate system:
x_u = f x_c / z_c, y_u = f y_c / z_c
where f is the imaging focal length of the camera and (x_u, y_u) is the ideal point in the normalized virtual-plane image coordinate system.
3. The image distortion correction model in the normalized virtual-plane image coordinate system; considering only first-order radial distortion, the distortion equations are:
x_d = (1 + k_1 r^2) x_u, y_d = (1 + k_1 r^2) y_u
where r^2 = x_u^2 + y_u^2 and k_1 is the first-order radial distortion coefficient.
4. The transformation of the distorted normalized virtual-plane coordinates to the computer image coordinate system:
u = x_d / dx + c_x, v = y_d / dy + c_y, s_x = dy / dx
where (u, v) is the pixel coordinate in the computer image coordinate system, (c_x, c_y) is the pixel coordinate of the principal point o (the principal point coordinate), dx and dy are the distances between adjacent pixels along the x and y directions of the image plane, and s_x is the aspect ratio.
From formulas 1 to 4 we obtain
[p; 1] = (1/s) K [R t] [P; 1]
which is called the mapping equation from the world coordinate system to the computer image coordinate system (the pixel coordinate system), where p = [u v]^T, P = [X_W Y_W Z_W]^T, and s is the z_c component of the world point mapped into the CCD camera coordinate system (x_c, y_c, z_c); its numerical value equals the third element of the vector K [R t] [P; 1] computed on the right-hand side of the mapping equation. K = [f_x 0 c_x; 0 f_y c_y; 0 0 1] is the proportional scaling matrix of three-dimensional point coordinates from the normalized virtual imaging plane to the physical imaging plane; its scaling factors (the logical focal lengths) are related to the physical focal length f and have units of pixel/m, namely
f_x = f (1 + k_1 r_d^2) / dx, f_y = f (1 + k_1 r_d^2) / dy
This shows that, although there is only one physical focal length, there are two different logical focal lengths on the image.
The normalized virtual-plane image coordinate system is defined as follows: the plane symmetric to the camera's optical imaging plane along the optical axis is called the virtual plane, and the coordinate system established on it is called the virtual-plane image coordinate system; expressing the image pixel quantities on this coordinate system after normalization gives the normalized virtual-plane coordinate system of the image, and the plane on which this normalized virtual-plane coordinate system lies is also called the normalized virtual plane.
Therefore, from the mapping equation [p; 1] = (1/s) K [R t] [P; 1], the known original pixel coordinate (u_k, v_k) and the motion-predicted pixel coordinate (û_{k+1}, v̂_{k+1}) correspond, respectively, to the three-dimensional world coordinates of the original point, [X_W Y_W Z_W]^T, and of the motion-predicted point, [X̂_W Ŷ_W Ẑ_W]^T; the optical axis of the camera lens can then be driven to move its aim from the original point with coordinates [X_W Y_W Z_W]^T to the motion-predicted point with coordinates [X̂_W Ŷ_W Ẑ_W]^T, thereby enabling the camera lens to track the person's face.
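The projection sketch referred to in the fourth step: mapping a world point to pixel coordinates with p = (1/s) K [R | t] P. Lens distortion is omitted here for brevity, which is an assumption of this sketch; the full model above also applies the first-order radial term.

```python
import numpy as np

def project_to_pixel(P_world, K, R, t):
    """Project a 3-D world point to (u, v) pixel coordinates.

    K: 3x3 scaling matrix holding the logical focal lengths and principal point.
    R, t: rotation matrix and translation vector of the world-to-camera transform.
    """
    P_cam = R @ np.asarray(P_world, dtype=float) + np.asarray(t, dtype=float)
    s = P_cam[2]               # depth along the optical axis (the z_c component)
    uvw = K @ P_cam
    return uvw[:2] / s         # (u, v)
```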
The face image color decomposition and image matching proceed as follows:
In the first step, the face image obtained by photographing the person in Step 2 is decomposed into the three primary colors in the RGB color space through three-primary-color filters;
In the second step, the ratio of the red, green and blue primary components, r:g:b = r_y:g_y:b_y, is calculated from the red, green and blue primary luminous fluxes of the face region;
In the third step, r:g:b = r_y:g_y:b_y is compared with the "color feature values" of the "classic face color images" in the database; the closest red, green and blue primary ratio is found to belong to the yellow race, and the corresponding yellow-race "classic face color image" is thereby found.
Step 3: compare the red, green and blue primary luminous fluxes in the color feature value of the face image with those of the classic face color image to obtain the flux differences between the two, determine the red, green and blue primary luminous flux regulation output control commands according to the flux differences, and, after conversion and amplification by the driver modules, control the illumination intensities of the red, green and blue cold light sources; the detailed process is as follows (an encoding sketch follows this step):
In the first step, the luminous fluxes of the red, green and blue primary components obtained from the face image are compared one by one with the red, green and blue primary luminous fluxes in the color feature value of the corresponding yellow-race "classic face color image", and the differences between the current face image and the "classic face color image" in red, green and blue primary luminous flux are calculated;
In the second step, according to the differences between the current face image and the "classic face color image" in red, green and blue primary luminous flux, the red, green and blue primary luminous flux regulation control commands are determined. Each control command is represented by a 12-bit code: the two high-order bits denote the color, "00" for red, "01" for green and "10" for blue; the third-highest bit denotes the sign, "0" for "+" and "1" for "-"; and the nine low-order bits denote the illumination intensity adjustment, i.e. "000000000" represents 0, "000000001" represents 1, "000000010" represents 2, and so on. For example, the instruction code 000000000100 represents "increase the red illumination intensity by adjustment 4", where the illumination intensity represented by adjustment 4 is determined experimentally by the gain coefficient of the drive circuit; in this embodiment each unit of adjustment output represents an increase of 1 light-watt of illumination output, so the nine bits together can represent an increase of 512 light-watts;
In the third step, control instructions are sent to the drive circuits controlling the illumination of each monochromatic light source, that is, the red, green and blue primary luminous flux regulation control commands determined in the second step are sent to the red, green and blue driver modules; the red, green and blue driver modules convert the commands into analog control quantities that adjust the output illumination intensities of the red, green and blue cold light sources, i.e. they control the output electrical power of the red, green and blue lamps in the three-primary-color light fixture.
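The encoding sketch referred to above, packing one regulation command into the 12-bit format of this embodiment (2 channel bits, 1 sign bit, 9 magnitude bits); clamping the magnitude to the 9-bit range is an implementation choice of this sketch.

```python
def encode_command(channel, delta):
    """Pack a 12-bit regulation command: '00' red, '01' green, '10' blue,
    then a sign bit (0 = '+', 1 = '-'), then 9 bits of adjustment magnitude."""
    channel_bits = {"red": 0b00, "green": 0b01, "blue": 0b10}[channel]
    sign_bit = 0 if delta >= 0 else 1
    magnitude = min(abs(int(delta)), 0x1FF)          # clamp to 9 bits
    return (channel_bits << 10) | (sign_bit << 9) | magnitude

# the example from the text: increase red illumination by adjustment 4
assert format(encode_command("red", 4), "012b") == "000000000100"
```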
Step 4: repeat Step 2 and Step 3 until the difference between the color feature value formed by the current face image under the colored lighting and the color feature value of the classic face color image is less than the preset threshold; the difference is expressed as
δ = (T - T_0) / T_0 × 100%
where T and T_0 are, respectively, the color feature value of the face image captured in real time and the color feature value of the classic face color image in the database. In this embodiment, when δ is less than the threshold 0.1%, Steps 2 and 3 are no longer repeated.
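A one-line sketch of the stopping test in Step 4, using the relative difference δ with the 0.1% threshold of this embodiment; taking the magnitude of the deviation is an assumption of this sketch so that the loop terminates for deviations of either sign.

```python
def converged(T, T0, threshold=0.001):
    """True when |T - T0| / T0 (i.e. delta) falls below the threshold (0.1%)."""
    return abs(T - T0) / T0 < threshold
```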
Step 5: convert the image from its original color space to the YCbCr color space for use in television news relay; the conversion formulas are:
Y = 0.299 R + 0.587 G + 0.114 B
Cb = -0.1687 R - 0.3313 G + 0.500 B + 128
Cr = 0.500 R - 0.4187 G - 0.0813 B + 128
At the television receiving terminal, the received YCbCr color space model image is converted back to an RGB color space model image so that the image can conveniently be stored as a bitmap; the inverse conversion from the YCbCr color space to the RGB color space is:
R = Y + 1.402 (Cr - 128)
G = Y - 0.3441 (Cb - 128) - 0.7141 (Cr - 128)
B = Y + 1.772 (Cb - 128)
In this embodiment, the person does not need to apply make-up deliberately during shooting and only needs to keep his or her usual natural appearance; a device carrying out this method can make the image color of the person being photographed achieve a perfect combination of natural beauty and artistic beauty, and no special requirements are placed on the configuration of the other light sources of the studio environment. This embodiment therefore saves the make-up time before shooting, saves the cost of make-up, and improves the efficiency of news interviews and film production.

Claims (8)

1. An intelligent automatic color adjustment method for television videography, characterized in that it comprises the following steps:
Step 1: establish a database of classic face color images and their color feature values;
Step 2: capture images of the person with a digital camera; perform image color space conversion, image enhancement, face recognition and tracking on the face image; and, using the database of classic face color images and color feature values established in Step 1, perform face image color decomposition and image matching, so as to identify the ethnic group and skin color of the person in the face image and find the classic face color image corresponding to that skin color;
Step 3: compare the red, green and blue primary-color luminous fluxes in the color feature value of the face image with those of the classic face color image to obtain the flux differences between the two, determine red, green and blue primary luminous flux regulation output control commands according to the flux differences, and, after conversion and amplification by the driver modules, control the illumination intensities of the red, green and blue cold light sources;
Step 4: repeat Step 2 and Step 3 until the difference between the color feature value formed by the face image of the person currently being photographed under the colored lighting and the color feature value of the classic face color image is less than a preset threshold;
Step 5: convert the image from its original color space to the YCbCr color space for television news relay, and convert it back from the YCbCr color space to an RGB color space model image at the television receiving terminal.
2. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that establishing the database of classic face color images and their color feature values means collecting classic face color images that represent the different skin colors of the world, forming a set of classic face color images and storing it in an image database, while at the same time obtaining the color feature value of each classic face color image and storing it in a color feature value database; from the color feature values of the classic face color images in the database, the color feature value parameters corresponding to the specific skin color and ethnic group represented by every stored classic face color image can be obtained.
3. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that capturing images of the person with a digital camera means photographing the person under red, green and blue primary-color lamp illumination concentrated on the face area, with no special restrictions on natural daylight or other artificial light sources; the captured picture must ensure that the person's face occupies a relatively large proportion of the frame, and the digital camera outputs an RGB color image.
4. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that the image color space conversion means converting the digital image output by the digital camera from the RGB color space to the HSV color space, where the converted hue H, saturation S and value V are expressed respectively as follows:
V = max(R, G, B)
S = (V - min(R, G, B)) / V, with S = 0 when V = 0
H = 60° × (G - B) / (V - min(R, G, B)) when V = R
H = 60° × (2 + (B - R) / (V - min(R, G, B))) when V = G
H = 60° × (4 + (R - G) / (V - min(R, G, B))) when V = B
and if H < 0 is obtained, then H + 360° is taken as the actual value of H.
5. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that the face recognition and tracking are as follows:
First, the scene hue of each pixel in the person image is scanned and examined from left to right and from top to bottom according to the pixel coordinates, and the pixels whose hue falls within the HSV hue interval [2°, 47°] are tentatively taken as the face region;
Second, within the set of facial skin hues, the pixel coordinate whose hue value is closest to 11° is taken as the face center point;
Third, a first-order prediction algorithm is used to obtain the predicted target pixel coordinate for face-region tracking;
Finally, according to the correspondence between the pixel coordinate system and the world coordinate system, the known original pixel coordinate and the motion-predicted pixel coordinate are used to obtain, respectively, the three-dimensional world coordinates of the original point and of the motion-predicted point, which control and drive the optical-axis aiming direction of the camera lens, thereby achieving tracking by the camera lens.
6. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that the face image color decomposition and image matching proceed as follows:
In the first step, the face image obtained by photographing the person in Step 2 is decomposed into the three primary colors in the RGB color space; specifically, the luminous fluxes of the red, green and blue primary components in the face region are obtained through three-primary-color filters;
In the second step, the ratio r:g:b of the red, green and blue primary components is calculated from the red, green and blue primary luminous fluxes of the face region;
In the third step, r:g:b is compared with the color feature values of the classic face color images in the database to find the closest red, green and blue primary ratio, thereby identifying the ethnic group and skin color of the person in the face image and finding the classic face color image corresponding to that skin color.
7. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that determining the red, green and blue primary luminous flux regulation control commands from the flux differences, and controlling the illumination intensities of the red, green and blue cold light sources after conversion and amplification by the driver modules, proceeds as follows:
In the first step, the luminous fluxes of the red, green and blue primary components obtained from the face image are compared one by one with the red, green and blue primary luminous fluxes in the color feature value of the corresponding classic face color image, and the differences between the face image of the person currently being photographed and the classic face color image in red, green and blue primary luminous flux are calculated;
In the second step, according to the differences between the current face image and the classic face color image in red, green and blue primary luminous flux, the red, green and blue primary luminous flux regulation control commands are determined so as to reduce the differences;
In the third step, control instructions are sent to the drive circuits controlling the illumination of each monochromatic light source, that is, the red, green and blue primary luminous flux regulation control commands determined in the second step are sent to the red, green and blue driver modules; the red, green and blue driver modules convert the commands into analog control quantities, which, after amplification by amplifiers, adjust the output illumination intensities of the red, green and blue cold light sources.
8. The intelligent automatic color adjustment method for television videography according to claim 1, characterized in that, in Step 5, converting the image from its original color space to the YCbCr color space means converting the image finally adjusted to its optimal state from the original RGB color space to the YCbCr color space for use in television news relay, with the conversion formulas:
Y = 0.299 R + 0.587 G + 0.114 B
Cb = -0.1687 R - 0.3313 G + 0.500 B + 128
Cr = 0.500 R - 0.4187 G - 0.0813 B + 128
and that the YCbCr color space model image received at the television receiving terminal is converted back to an RGB color space model image with the inverse conversion formulas:
R = Y + 1.402 (Cr - 128)
G = Y - 0.3441 (Cb - 128) - 0.7141 (Cr - 128)
B = Y + 1.772 (Cb - 128)
CN2008102016225A 2008-10-23 2008-10-23 Color intelligent automatic regulating method for television video taking Expired - Fee Related CN101383912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102016225A CN101383912B (en) 2008-10-23 2008-10-23 Color intelligent automatic regulating method for television video taking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102016225A CN101383912B (en) 2008-10-23 2008-10-23 Color intelligent automatic regulating method for television video taking

Publications (2)

Publication Number Publication Date
CN101383912A true CN101383912A (en) 2009-03-11
CN101383912B CN101383912B (en) 2010-12-08

Family

ID=40463510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102016225A Expired - Fee Related CN101383912B (en) 2008-10-23 2008-10-23 Color intelligent automatic regulating method for television video taking

Country Status (1)

Country Link
CN (1) CN101383912B (en)


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015074379A1 (en) * 2013-11-19 2015-05-28 华为技术有限公司 Image rendering method and apparatus
US9805501B2 (en) 2013-11-19 2017-10-31 Huawei Technologies Co., Ltd. Image rendering method and apparatus
CN103679657B (en) * 2013-12-10 2017-04-12 三峡大学 Method and device for enhancing image contrast ratio
CN103679657A (en) * 2013-12-10 2014-03-26 三峡大学 Method and device for enhancing image contrast ratio
CN103780797A (en) * 2014-01-23 2014-05-07 北京京东方光电科技有限公司 Image color enhancement method and device
CN103780797B (en) * 2014-01-23 2016-08-31 北京京东方光电科技有限公司 The method and apparatus that a kind of image color strengthens
US9501814B2 (en) 2014-01-23 2016-11-22 Boe Technology Group Co., Ltd. Method and apparatus for image color enhancement
CN105915791B (en) * 2016-05-03 2019-02-05 Oppo广东移动通信有限公司 Electronic apparatus control method and device, electronic device
CN105915791A (en) * 2016-05-03 2016-08-31 广东欧珀移动通信有限公司 Electronic device control method and device, and electronic device
CN108053453A (en) * 2017-12-28 2018-05-18 努比亚技术有限公司 A kind of optimization method of color, terminal and computer readable storage medium
CN108053453B (en) * 2017-12-28 2020-07-28 努比亚技术有限公司 Color optimization method, terminal and computer-readable storage medium
CN109167921A (en) * 2018-10-18 2019-01-08 北京小米移动软件有限公司 Image pickup method, device, terminal and storage medium
CN109167921B (en) * 2018-10-18 2020-10-20 北京小米移动软件有限公司 Shooting method, shooting device, shooting terminal and storage medium
CN109658341A (en) * 2018-10-26 2019-04-19 深圳市华星光电技术有限公司 Enhance the method and device thereof of picture contrast
CN117082222A (en) * 2023-10-18 2023-11-17 广东图盛超高清创新中心有限公司 Image and video optimization toning method for rebroadcasting vehicle
CN117082222B (en) * 2023-10-18 2024-02-06 广东图盛超高清创新中心有限公司 Image and video optimization toning method for rebroadcasting vehicle

Also Published As

Publication number Publication date
CN101383912B (en) 2010-12-08

Similar Documents

Publication Publication Date Title
CN101383912B (en) Color intelligent automatic regulating method for television video taking
CN106952246A (en) The visible ray infrared image enhancement Color Fusion of view-based access control model attention characteristic
Wang et al. Color design for illustrative visualization
CN1977542B (en) Dominant color extraction using perceptual rules to produce ambient light derived from video content
CN104363676B (en) A kind of LED operation shadowless lamp systems of the permanent photocontrol of fully-automatic intelligent
CN101425179B (en) Face image relighting method and device
CN109064433A (en) Improved HDR image coding and decoding methods and equipment
CN106651795A (en) Method of using illumination estimation to correct image color
CN106504212A (en) A kind of improved HSI spatial informations low-luminance color algorithm for image enhancement
CN102663741B (en) Method for carrying out visual stereo perception enhancement on color digit image and system thereof
CN101383911B (en) Color intelligent automatic regulating device for television video taking
CN102314602A (en) Use the optimal orientation linear axis to remove by the shade in the image of catching based on the camera of vehicle
CN106485222A (en) A kind of method for detecting human face being layered based on the colour of skin
CN109493309A (en) A kind of infrared and visible images variation fusion method keeping conspicuousness information
CN107481206A (en) MIcrosope image background equalization Processing Algorithm
CN109274948A (en) Image color correction method, apparatus, storage medium and computer equipment
CN110223262A (en) A kind of rapid image fusion method based on Pixel-level
Kim et al. Large scale multi-illuminant (lsmi) dataset for developing white balance algorithm under mixed illumination
CN106296749B (en) RGB-D image eigen decomposition method based on L1 norm constraint
Corbalán et al. Color pattern recognition with CIELAB coordinates
Mohammed et al. Image segmentation for skin detection
Deswal et al. A simplified review on fast HSV image color and texture detection and image conversion algorithm
CN111340897A (en) Color identification method based on fuzzy mathematical similarity
Hsu et al. A novel automatic white balance method for color constancy under different color temperatures
CN107316040A (en) A kind of color of image spatial transform method of illumination invariant

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101208

Termination date: 20131023