CN100559850C - Method for dominant color extraction - Google Patents

Method for dominant color extraction

Info

Publication number
CN100559850C
CN100559850C CNB2005800220072A CN200580022007A
Authority
CN
China
Prior art keywords
dominant color
color
color space
rendered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005800220072A
Other languages
Chinese (zh)
Other versions
CN1977529A (en)
Inventor
S. Gutta
M. J. Elting
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Vision Holding BV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV
Publication of CN1977529A
Application granted
Publication of CN100559850C

Abstract

Video content encoded in a rendered color space is extracted and processed for emulation by an ambient light source. The method includes extracting dominant color information from the video signal and, using tristimulus primary matrices, transforming that color information through an unrendered color space into a second rendered color space chosen to drive the ambient light distribution. Steps include quantizing the rendered color space to form a distribution of assigned colors, such as by reducing the number of possible color states or by binning pixels to form superpixels; the dominant color is then selected using a mode, mean, median or weighted average of pixel chromaticities. Colors of interest can be further analyzed to produce a true dominant color, and preceding video frames can guide the selection of dominant colors in subsequent frames.

Description

Method for dominant color extraction
Technical field
The present invention relates to producing and setting ambient lighting effects using multiple light sources, typically based on, or associated with, video content, such as from a video display. More particularly, it relates to a method for sampling or subsampling video content in real time to extract dominant color information, and for performing color mapping transformations from the color space of the video content to a color space that best allows driving a plurality of ambient light sources.
Background art
Engineers have long sought to broaden the sensory experience obtained from consuming video content, for example by enlarging screens and projection areas, modulating sound for realistic three-dimensional effects, and improving video images, including wider video color gamuts, higher resolutions and improved aspect ratios, such as with high definition (HD) digital television and video systems. In addition, film, TV and video producers try to influence the viewer's experience by audio-visual means, such as clever use of color, scene cuts, viewing angles, peripheral scenery and computer-assisted graphical representations. This also includes theatrical stage lighting. Lighting effects, for example, are usually scripted in synchrony with video or play scenes and are reproduced with the aid of machines or computers programmed with scene scripts encoded in the appropriate scheme.
In the prior art of the digital domain, adaptive lighting that responds quickly to unplanned or unscripted scenes, including large scene changes, is not easily coordinated, because the very high bandwidth bit streams required are difficult to support with present systems.
Philips (Netherlands) and other companies have disclosed means for changing ambient or peripheral lighting to enhance video content, using separate light sources located away from the video display, for application in typical home or business settings; for many applications the desired light effects are scripted or encoded in advance. Ambient lighting applied to a video display or television has been shown to reduce viewer fatigue and to improve realism and depth of experience.
Sensory experience is naturally a function of human vision, which uses an enormous and complex sensory and neural apparatus to produce sensations of color and light effects. Humans can distinguish roughly 10 million distinct colors. In the human eye, color reception or photopic vision relies on three sets of sensors called cones, only about 2 million in total, with optical wavelength peak absorptions distributed at 445 nm, 535 nm and 565 nm and considerable overlap between them. These three cone types form the so-called trichromatic system and are, for historical reasons, called B (blue), G (green) and R (red); the peaks need not correspond to any primaries used in a display, such as the commonly used RGB phosphors. There is also interplay with scotopic vision via the so-called night vision sensors, the rods. The human eye typically has about 120 million rods, which influence video perception, particularly in low-light conditions such as those of a home theater.
Color video is founded on the principles of human vision, and the well-known trichromatic and opponent-channel theories of human vision have been incorporated into our understanding of how to influence the eye to see desired colors and effects with high fidelity to an original or intended image. In most color models and spaces, three dimensions or coordinates are used to describe human visual experience.
Color video relies absolutely on metamerism, which allows color perception to be produced using a small number of reference stimuli rather than actual light of the desired color and character. In this way a whole gamut of colors is reproduced in the human mind using a limited number of reference stimuli, such as the well-known RGB (red, green, blue) tristimulus systems used in video reproduction worldwide. It is well known, for example, that nearly all video displays show yellow scene light by producing approximately equal amounts of red and green light in each pixel or picture element. The pixels are small in relation to the solid angle they subtend, and the eye is fooled into perceiving yellow; it does not perceive the green or red light that is actually being emitted.
Many color models and ways of specifying color exist, including the well-known CIE (Commission Internationale de l'Eclairage) color coordinate systems, which are used to describe and specify colors for video reproduction. The instant invention can utilize any number of color models, including unrendered opponent color spaces such as the CIE L*U*V* (CIELUV) or CIE L*a*b* (CIELAB) systems. The CIE system, established in 1931, is the foundation of all color management and reproduction, and its result is a chromaticity diagram using three coordinates x, y and z. A plot of this three-dimensional system at maximum luminosity, in terms of x and y, is universally used to describe color; the plot, called the 1931 x,y chromaticity diagram, is believed to describe all colors perceivable by humans. This stands in contrast to color reproduction, where metamerism fools the eye and brain. Many color models or spaces in use today reproduce color using three primaries or phosphors, among them Adobe RGB, NTSC RGB, and so on.
It is of particular note that the range of all possible colors that video systems can display using these tristimulus systems is limited. The NTSC (National Television Standards Committee) RGB system has a relatively wide available color gamut, yet only half of all colors perceivable by humans can be reproduced by this system. Many blues and violets, blue-greens and oranges/reds cannot be rendered adequately using the available range of conventional video systems.
Furthermore, the human visual system is endowed with compensation and discernment properties whose understanding is necessary for designing any video system. Colors appear to humans in several appearance modes, among them the object mode and the illuminant mode.
In the object mode, a light stimulus is perceived as light reflected from an object illuminated by a light source. In the illuminant mode, a light stimulus is seen as a light source. The illuminant mode includes stimuli in a complex field that are much brighter than other stimuli. It does not include stimuli known to be light sources, such as video displays, whose brightness or luminance is at or below the overall brightness of the scene or field of view, so that the stimuli appear to be in the object mode.
Notably, there are colors that appear only in the object mode, among them brown, olive, maroon, gray and beige flesh tones. There is, for example, no such thing as a brown light source, such as a brown traffic light.
For this reason, ambient lighting intended to supplement a video system with such object colors cannot do so using direct bright light sources. No close-range combination of bright red and green light can reproduce brown or maroon, so the choices are quite limited. Only the spectral colors of the rainbow, in varying intensities and saturations, can be reproduced by direct observation of light from bright sources. This underscores the need for fine control of ambient lighting systems, such as delivering low-luminance output from the light sources with careful attention to hue management. Under present data architectures, such fine control cannot be addressed in a rapidly changing and finely tuned ambient lighting manner.
Video reproduction can take many forms. Spectral color reproduction allows exact reproduction of the spectral power distributions of the original stimuli, but this is not realizable in any video reproduction that uses three primaries. Exact color reproduction can replicate the tristimulus values of human vision, creating a metameric match to the original, but the overall viewing conditions for the picture and the original scene must be similar to obtain a similar appearance. The overall viewing conditions for the picture and the original scene include the angular subtense of the picture, the luminance and chromaticity of the surround, and glare. One reason exact color reproduction often cannot be achieved is the limitation on the maximum luminance that can be produced on a color monitor.
Colorimetric color reproduction provides a useful alternative in which the tristimulus values are proportional to those of the original scene. Chromaticity coordinates are reproduced exactly, but with proportionally reduced luminances. Colorimetric color reproduction is a good reference standard for video systems, provided the original and the reproduced reference whites have the same chromaticity, the viewing conditions are the same, and the system has an overall unity gamma. Because of the limited luminance produced by video displays, equivalent color reproduction, in which the chromaticity and luminance of the original scene are matched, cannot be obtained.
Most video reproduction in practice attempts to achieve corresponding color reproduction, in which reproduced colors have the appearance that the colors of the original scene would have had if it had been illuminated to produce the same average luminance level and the same reference white chromaticity as the reproduction. Many argue, however, that the ultimate aim of display systems in practice is preferred color reproduction, in which viewer preference influences color fidelity. For example, suntanned skin color is preferred to average real skin color, sky is preferred more blue, and foliage more green, than they really are. Even where corresponding color reproduction is accepted as the design standard, some colors are more important than others, such as flesh tones, which are the subject of special treatment in many reproduction systems, for example in the NTSC video standard.
In reproducing scene light, chromatic adaptation to achieve white balance is important. With properly adjusted cameras and displays, whites and neutral grays are typically reproduced with the chromaticity of the CIE standard daylight illuminant D65. By always reproducing a white surface with the same chromaticity, the system mimics the human visual system, which inherently adapts perception so that white surfaces always appear the same regardless of the chromaticity of the illuminant, so that a white sheet of paper appears white whether on a sunny beach or in an indoor scene under incandescent light. In color reproduction, white balance adjustment is usually made via gain controls on the R, G and B channels.
The light output of a typical color receiver is typically not linear but follows a power-law relationship to the applied video voltage. The light output is proportional to the video drive voltage raised to the power gamma, where gamma is typically 2.5 for a color CRT (cathode ray tube) and 1.8 for other types of light sources. Compensation for this factor is made via three primary gamma correctors in camera video processing amplifiers, so that the primary video signals that are encoded, transmitted and decoded are in fact not R, G and B but R^(1/gamma), G^(1/gamma) and B^(1/gamma). Colorimetric color reproduction requires that the overall gamma of video reproduction, including the camera, the display and any gamma-adjusting electronics, be unity, but when corresponding color reproduction is attempted, the luminance of the surround takes precedence. For example, a dim surround requires a gamma of about 1.2, and a dark surround requires a gamma of about 1.5, for optimum color reproduction. Gamma is an important implementation issue for RGB color spaces.
Most color reproduction encoding uses standard RGB color spaces such as sRGB, ROMM RGB, Adobe RGB 98, Apple RGB, and the video RGB space used in the NTSC standard. Typically, an image is captured into a sensor or source device space, which is device- and image-specific. It can be transformed into an unrendered image space, a standard color space describing the original's colorimetry (see the Definitions section).
However, video images are nearly always transformed directly from the source device space into a rendered image space (see the Definitions section), which describes the color space of some real or virtual output device, such as that of a video display. Most existing standard RGB color spaces are rendered image spaces. For example, the source and output spaces created by cameras and scanners are not CIE-based color spaces but spectral spaces defined by the spectral sensitivities and other characteristics of the camera or scanner.
Rendered image spaces are device-specific color spaces based on the colorimetry of real or virtual device characteristics. Images can be transformed into a rendered space from either rendered or unrendered image spaces. The complexity of these transforms varies, and they can include complicated image-dependent algorithms. The transforms can be irreversible, with some information of the original scene encoding discarded or compressed to fit the dynamic range and gamut of the specific device.
There is currently only one unrendered RGB color space in the process of becoming a standard, the ISO RGB defined in ISO 17321, used most often for the color characterization of digital still cameras. In most applications today, images, including video signals, are transformed into a rendered color space for archiving and data transfer. Transforming from one rendered image or color space to another can cause severe image artifacts; the greater the mismatch in gamut and white point between the two devices, the stronger the negative effects.
One shortcoming of prior-art ambient light display systems is that extracting a representative color from video content for ambient broadcast is problematic. For example, averaging pixel chromaticities often yields grays, browns or other color casts that do not represent the perception of the video scene or image. Colors derived from simple chromaticity averaging often seem murky and wrongly chosen, especially when contrasted against image features such as a bright fish, or against a dominant background such as a blue sky.
Another problem with prior-art ambient light display systems is that no specific method has been given for real-time synchronous operation that transforms rendered tristimulus values from video into those of ambient light sources so as to obtain correct chromaticities and appearances. For example, light emitted from LED ambient light sources is often garish, with limited or skewed color gamuts; hue and chroma are commonly hard to assess and reproduce. For example, United States Patent 6,611,297 to Akashi et al. addresses realism in ambient lighting but gives no specific method for ensuring correct and pleasing chromaticities, and the '297 patent of Akashi does not permit analyzing video in real time, requiring instead a script or its equivalent.
In addition, setting ambient light sources using the gamma-corrected color space of the video content often results in garish, bright colors. Another serious prior-art problem is the large amount of transformed information needed to drive an ambient light source as a function of real-time video content, in the contemplated case of rapidly changing ambient light environments where good color matching is desired.
It would therefore be advantageous to expand, by means of ambient lighting, the possible gamut of colors produced by typical trichromatic video display systems, while exploiting characteristics of the human eye, such as the change in relative visual luminosity of different colors as a function of light level, by modulating or changing the color and light character delivered to the video user of an ambient lighting system, so as to make use of compensation effects, sensitivities and other peculiarities of human vision.
It would also be advantageous to produce a quality ambient atmosphere free from the distorting effects of gamma. It would further be desirable to provide a method capable of providing improved ambient lighting by extracting dominant colors from selected video regions, using an economical data stream that encodes averaged or otherwise distilled color values. It would further be desirable to reduce the required size of such a data stream.
Information on video and television engineering, compression techniques, data transfer and encoding, human vision, color science and perception, color spaces, colorimetry and image rendering, including video reproduction, can be found in the following references, which are incorporated in their entirety: Reference [1] Color Perception, Alan R. Robertson, Physics Today, December 1992, Vol. 45, No. 12, pp. 24-29; Reference [2] The Physics and Chemistry of Color, 2nd edition, Kurt Nassau, John Wiley & Sons, Inc., New York, 2001; Reference [3] Principles of Color Technology, 3rd edition, Roy S. Berns, John Wiley & Sons, Inc., New York, 2000; Reference [4] Standard Handbook of Video and Television Engineering, 4th edition, Jerry Whitaker and K. Blair Benson, McGraw-Hill, New York, 2003.
Summary of the invention
Various embodiments of the invention provide methods that use pixel-level statistics, or their functional equivalents, to determine or extract one or more dominant colors with as little computation as possible, while at the same time providing pleasing and appropriate chromaticities chosen as dominant colors in accordance with rules of perception.
The present invention relates to a method for extracting dominant colors from video content encoded in a rendered color space, to produce a dominant color for emulation by an ambient light source. The method steps include: [1] quantizing the rendered color space by quantizing at least some pixel chromaticities of the video content in the rendered color space, to form a distribution of assigned colors; [2] performing dominant color extraction from the distribution of assigned colors by extracting any of the following to produce the dominant color: [a] a mode of the assigned colors; [b] a median of the assigned colors; [c] a weighted average by chromaticity of the assigned colors; or [d] a weighted average using a weighting function; and then [3] transforming the dominant color from the rendered color space to a second rendered color space formed so as to allow driving the ambient light source.
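Purely as an illustrative sketch, and not the claimed method itself, the following fragment shows one way steps [1] and [2] might be carried out in software; the 4-bit-per-channel quantization, the uint8 frame layout and the function names are assumptions made for this example only.

```python
from collections import Counter

import numpy as np

def quantize_frame(frame_rgb: np.ndarray, bits: int = 4) -> np.ndarray:
    """Step [1]: map 8-bit pixel chromaticities of a uint8 H x W x 3 frame
    onto a coarser set of assigned colors."""
    shift = 8 - bits
    return (frame_rgb >> shift) << shift        # ~16.7M possible states -> 4096

def dominant_color_mode(assigned: np.ndarray) -> tuple:
    """Step [2a]: dominant color as the mode of the assigned-color distribution."""
    counts = Counter(map(tuple, assigned.reshape(-1, 3)))
    return max(counts, key=counts.get)

def dominant_color_weighted(assigned, weights=None):
    """Steps [2c]/[2d]: (optionally weighted) average of the assigned chromaticities."""
    flat = assigned.reshape(-1, 3).astype(float)
    if weights is None:
        return flat.mean(axis=0)
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    return (flat * w).sum(axis=0) / w.sum()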
Quantization of the pixel chromaticities (or of the rendered color space) can be accomplished by many methods (see the Definitions section); its goal is to reduce the computational burden by simplifying the possible color states, such as by distilling a large number of chromaticities (e.g., pixel chromaticities) down to a smaller number of assigned chromaticities or colors; or by a pixel-selection process that reduces the number of pixels by choosing selected pixels; or by binning to produce representative pixels or superpixels.
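As an illustrative sketch only, binning might look like the following, with each block of pixels collapsed into one superpixel carrying the block's mean chromaticity so that the later dominant color search runs over far fewer samples; the 16 x 16 block size is an assumption, not a value taken from this disclosure.

```python
import numpy as np

def to_superpixels(frame_rgb: np.ndarray, b: int = 16) -> np.ndarray:
    """Collapse each b x b block of pixels into one superpixel (its mean chromaticity)."""
    h, w, _ = frame_rgb.shape
    h, w = h - h % b, w - w % b                            # ignore ragged edges
    blocks = frame_rgb[:h, :w].astype(float).reshape(h // b, b, w // b, b, 3)
    return blocks.mean(axis=(1, 3))                        # one RGB triple per block
```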
If quantization of the rendered color space is performed in part by binning pixel chromaticities into at least one superpixel, the superpixel so produced can have a size, orientation, shape or location formed in conformity with an image feature. The assigned colors used in the quantization process can be chosen to be regional color vectors, which need not lie in the rendered color space and can, for example, lie in the second rendered color space.
Once a dominant color has been chosen from the distribution of assigned colors, one can go back and obtain the actual pixel chromaticities to refine the dominant color. For example, at least one color of interest can be identified in the distribution of assigned colors, and the pixel chromaticities assigned to it can then be extracted to derive a true dominant color that is ultimately designated the dominant color. In this way, even where the assigned colors are only a rough approximation of the video content, the true dominant color can provide the correct chromaticity for ambient broadcast, while still saving much of the computation otherwise required. The dominant color can also comprise a palette of dominant colors.
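A hedged sketch of that refinement, reusing the same assumed quantization as the earlier example, might read as follows; choosing the mode as the color of interest is only one of the options listed above.

```python
from collections import Counter

import numpy as np

def true_dominant_color(frame_rgb: np.ndarray, bits: int = 4) -> np.ndarray:
    """Pick a color of interest from the assigned distribution, then average the
    actual pixel chromaticities that were assigned to it (the true dominant color)."""
    shift = 8 - bits
    assigned = (frame_rgb >> shift) << shift                  # assigned colors
    counts = Counter(map(tuple, assigned.reshape(-1, 3)))
    interest = np.array(max(counts, key=counts.get))          # color of interest
    mask = np.all(assigned == interest, axis=-1)              # pixels assigned to it
    return frame_rgb[mask].astype(float).mean(axis=0)         # true dominant color
```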
Other possible embodiments of transforming step [3] include: [3a] transforming the dominant color from the rendered color space to an unrendered color space; and [3b] then transforming the dominant color from the unrendered color space to the second rendered color space. This can additionally comprise matrix transformations in which first and second tristimulus primary matrices transform the primaries of the rendered color space and of the second rendered color space into the unrendered color space, and the transformation of the color information into the second rendered color space is obtained by matrix multiplication of the rendered color space primaries, the first tristimulus matrix, and the inverse of the second tristimulus matrix.
The pixel chromaticities of step [1] can be drawn from an extraction region, and an additional step [4] can comprise broadcasting ambient light of the dominant color from an ambient light source adjacent to that extraction region.
The weighting function of step [2] can allow weighting of a plurality of pixel chromaticities extracted from an image feature, and a preceding video frame can be used to guide the selection of dominant colors in subsequent video frames. Any extraction region can be chosen to be an image feature extracted from a video frame.
These steps can be combined in many ways, and the unrendered color space can be CIE XYZ; ISO RGB as defined in ISO Standard 17321; Photo YCC; CIE LAB; or any other unrendered color space. Steps [1], [2] and [3] can be substantially synchronous with the video signal, with ambient light broadcast from or around the video display using the color information in the second rendered color space.
Description of drawings
Fig. 1 shows a simple front surface view of a video display according to the invention, showing color information extraction regions and associated ambient light broadcast from six ambient light sources;
Fig. 2 shows a top view of a room, partly schematic and partly cross-sectional, in which ambient light from multiple ambient light sources is produced using the invention;
Fig. 3 shows a system according to the invention for extracting color information and effecting color space transformation to allow driving an ambient light source;
Fig. 4 shows an equation for calculating average color information from a video extraction region;
Fig. 5 shows a prior-art matrix equation for transforming rendered primaries RGB into the unrendered color space XYZ;
Figs. 6 and 7 show matrix equations for mapping the video and ambient lighting rendered color spaces, respectively, into the unrendered color space;
Fig. 8 shows a solution using known matrix inversion for obtaining ambient light tristimulus values R'G'B' from the unrendered color space XYZ;
Figs. 9-11 show prior-art derivation of a tristimulus primary matrix M using a white point method;
Fig. 12 shows a system similar to that of Fig. 3, additionally comprising a gamma correction step for ambient broadcast;
Fig. 13 shows a schematic of the general transformation process used in the invention;
Fig. 14 shows process steps for acquiring the ambient light source transformation matrix coefficients used in the invention;
Fig. 15 shows process steps for estimated video extraction and ambient light reproduction used in the invention;
Fig. 16 shows a schematic of video frame extraction according to the invention;
Fig. 17 shows process steps for abbreviated chrominance assessment according to the invention;
Fig. 18 shows the extraction steps of Figs. 3 and 12 used to drive an ambient light source, using a frame decoder, setting a frame extraction rate, and performing output calculations;
Figs. 19 and 20 show process steps for color information extraction and processing according to the invention;
Fig. 21 shows a schematic of the overall process according to the invention, including dominant color extraction and transformation to the ambient lighting color space;
Fig. 22 schematically shows one possible method of quantizing pixel chromaticities from video content by assigning pixel chromaticities to assigned colors;
Fig. 23 schematically shows an example of one possible method of quantizing by binning pixel chromaticities into a superpixel;
Fig. 24 shows a binning process similar to that of Fig. 23, but where the size, orientation, shape or location of the superpixel can be formed in conformity with an image feature;
Fig. 25 shows regional color vectors and their color or chromaticity coordinates on a standard Cartesian CIE color map, where the color vectors lie outside the color gamuts obtained by the PAL/SECAM, NTSC and Adobe RGB color generation standards;
Fig. 26 shows a close-up of a portion of the CIE color map of Fig. 25, additionally showing pixel chromaticities and their assignment to a regional color vector;
Fig. 27 shows a histogram illustrating one possible method according to the invention of giving the mode of a distribution of assigned colors;
Fig. 28 shows one possible method according to the invention of giving the median of a distribution of assigned colors;
Fig. 29 shows one possible method according to the invention of a mathematical summation giving a weighted average by chromaticity of assigned colors;
Fig. 30 shows one possible method according to the invention of a mathematical summation giving a weighted average of chromaticities of assigned colors using a pixel weighting function;
Fig. 31 shows a schematic of determining a color of interest in a distribution of assigned colors, then extracting the pixel chromaticities assigned to it to derive a true dominant color to be designated the dominant color;
Fig. 32 schematically shows that dominant color extraction according to the invention can be performed repeatedly or separately, in parallel, to provide a palette of dominant colors;
Fig. 33 shows a simple front surface view of the video display of Fig. 1, illustrating an example of a preferred spatial region given different weights according to the exemplary methods of Figs. 29 and 30;
Fig. 34 gives a simple front surface view of the video display of Fig. 33, graphically illustrating extraction of an image feature for the purpose of dominant color extraction according to the invention;
Fig. 35 gives a schematic of another embodiment of the invention, in which video content is decoded into a set of frames, allowing the dominant color of a frame to be obtained by relying at least in part on the dominant color of a preceding frame;
Fig. 36 shows process steps for an abridged procedure according to the invention for selecting the dominant color.
Embodiments
Definitions
The following definitions shall apply throughout:
- Ambient light source - in the appended claims, shall include any light production circuitry or drivers needed to effect light production.
- Ambient space - shall connote any and all material bodies, air or space external to a video display unit.
- Assigned color distribution - shall denote a set of colors chosen to represent (such as for computational purposes) the full range of pixel chromaticities found in a video image or video content.
- Chrominance - in the context of driving an ambient light source, shall denote a mechanical, numerical or physical manner of specifying the color character of light produced, and shall not imply a particular methodology, such as that used in NTSC or PAL television broadcasting.
- Color information - shall include either or both of chrominance and luminance, or a functionally equivalent quantity.
- Computer - shall include not only all processors, such as CPUs (central processing units) using known architectures, but any intelligent device that allows coding, decoding, reading, processing, or execution of setting codes or change codes, such as digital optical devices or analog electrical circuits that perform the same functions.
- Dominant color - shall denote any chromaticity chosen to represent video content for purposes of ambient broadcast, including any color chosen in accordance with the methods disclosed herein.
- Extraction region - shall include any subset of an entire video image or frame.
- Frame - shall include image information occurring in a time series in video content, consistent with the word "frame" as used in the industry, but shall also include any partial (e.g., interlaced) or complete image data used to convey video content at any moment or at any interval.
- Goniochromatic - shall refer to the property of giving different colors or chromaticities as a function of viewing angle or angle of observation, such as is produced by iridescence.
- Goniophotometric - shall refer to the property of giving different light intensities, transmissions and/or colors as a function of viewing angle or angle of observation, such as is found in iridescent, sparkling or retroreflective phenomena.
- Interpolate - shall include linear or mathematical interpolation between two sets of values, as well as functional prescriptions for setting values between two known sets of values.
- Light character - in the broad sense, shall mean any specification of the nature of light such as that produced by an ambient light source, including all descriptors other than luminance and chrominance, such as the degree of light transmission or reflection; a goniophotometric specification, including the degree to which colors, sparkle or other known phenomena are produced as a function of viewing angle when the ambient light source is observed; light output direction, including directionality as given by a Poynting or other propagation vector; or a specification of the angular distribution of light, such as solid angles or solid angle distribution functions. It can also include coordinates specifying positions on an ambient light source, such as element pixel or lamp positions.
- Luminance - shall denote any parameter or measure of brightness, intensity or an equivalent measure, and shall not imply a particular method of light generation or measurement or a psycho-biological interpretation.
- Pixel - shall refer to actual or virtual video picture elements, or to equivalent information allowing derivation of pixel information.
- Quantizing the color space - within the scope of the specification and claims, shall refer to a reduction of possible color states, such as results when going from a large number of chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or a pixel-selection process whereby the number of pixels is reduced by choosing selected pixels; or binning to produce representative pixels or superpixels.
- Rendered color space - shall denote an image or color space captured from a sensor, or specific to a source device or display device, which is device- and image-specific. Most RGB color spaces are rendered image spaces, including the video spaces used to drive video display D. In the appended claims, both the color space specific to the video display and that specific to ambient light source 88 are rendered color spaces.
- Transforming color information to an unrendered color space - in the appended claims, shall comprise either direct transformation to the unrendered color space, or use of, or benefit derived from using, the inverse of a tristimulus primary matrix obtained by transformation to the unrendered color space (e.g., (M2)^-1 as shown in Fig. 8).
- Unrendered color space - shall denote a standard or non-device-specific color space, such as those describing original image colorimetry using standard CIE XYZ; ISO RGB, as defined in the ISO 17321 standard; Photo YCC; or the CIE LAB color space.
- Video - shall denote any visual or light-producing device, whether an active device requiring energy for light production, or any transmissive medium that conveys image information, such as a window in an office building, or an optical waveguide where image information is obtained remotely.
- Video signal - shall denote the signal or information delivered for controlling a video display unit, including any audio portion thereof. It is therefore contemplated that video content analysis includes possible audio content analysis for the audio portion. Generally, a video signal can comprise any type of signal, such as radio frequency signals using any number of known modulation techniques; electrical signals, including analog and quantized analog waveforms; digital (electrical) signals, such as those using pulse-width modulation, pulse-number modulation, pulse-position modulation, PCM (pulse code modulation) and pulse amplitude modulation; or other signals, such as acoustic signals, audio signals and optical signals, all of which can use digital techniques. Data or other information merely arranged in sequence, such as in computer-based applications, can be used as well.
- Weighted, weighting - shall refer to any equivalent means of giving preferential status or a higher mathematical weighting to a particular chromaticity or spatial position.
Detailed description
Ambient light derived from video content according to the invention is formed, if desired, to allow high fidelity to the original video scene light while maintaining a high degree of specificity of the ambient light degrees of freedom, with only a low computational burden. This allows ambient light sources with small color gamuts and reduced luminance spaces to emulate video scene light from more advanced light sources with relatively large color gamuts and luminance response curves. Possible light sources for ambient lighting include any number of known lighting devices, including LEDs (light emitting diodes) and related semiconductor emitters; electroluminescent devices, including non-semiconductor types; incandescent lamps, including modified types using halogens or advanced chemistries; ion discharge lamps, including fluorescent and neon lamps; lasers; light sources that are modulated, such as by use of LCDs (liquid crystal displays) or other light modulators; photoluminescent emitters; or any number of known controllable light sources, including arrays that functionally resemble displays.
The description given here will first address, in part, the extraction of color information from video content, and will subsequently address the extraction methods for obtaining dominant or true colors representative of a video image or scene for ambient broadcast.
Referring to Fig. 1, a simple front surface view of a video display D according to the invention is shown for illustrative purposes only. Display D can comprise any of a number of known devices that decode video content from a rendered color space, such as the NTSC, PAL or SECAM broadcast standards, or a rendered RGB space such as Adobe RGB. Display D can comprise optional color information extraction regions R1, R2, R3, R4, R5 and R6 whose borders can depart from those illustrated. The color information extraction regions can be arbitrarily predefined and have the characteristic of producing characteristic ambient light A8, such as via back-mounted controllable ambient lighting units (not shown) that produce and broadcast ambient light L1, L2, L3, L4, L5 and L6 as shown, for example by partial light spillage onto a wall (not shown) on which display D is mounted. Alternatively, the display frame Df as shown can itself comprise ambient lighting units that display light in a similar manner, including outward toward a viewer (not shown). If desired, each color information extraction region R1-R6 can independently influence the ambient light adjacent to it. For example, as shown, color information extraction region R4 can influence ambient light L4.
Referring to Fig. 2, a partly schematic and partly cross-sectional top view of a viewing location or ambient space A0 is shown, in which ambient light from multiple ambient light sources is produced using the invention. In ambient space A0 are arranged a seat and a table 7, positioned to allow viewing of video display D. Also arranged in ambient space A0 are a plurality of ambient lighting units optionally controlled using the instant invention, including the light speakers 1-4 shown, a sub-lamp SL shown beneath the sofa or seat, and a set of special emulative ambient lighting units arranged about display D, namely center lights that produce ambient light Lx as shown in Fig. 1. Each of these ambient lighting units can emit ambient light A8, shown as shading in the figure.
In conjunction with the instant invention, ambient light can be produced at will from these ambient lighting units, with chromaticities that track, but do not necessarily duplicate, the colors actually broadcast by video display D. This allows characteristics of the human eye and visual system to be exploited. It is noteworthy that the luminosity function of the human visual system, which gives detection sensitivity for the various visible wavelengths, changes as a function of light level.
For example, scotopic or night vision relies on the rods, which tend to be more sensitive to blues and greens. Photopic vision using the cones is better suited to detecting longer-wavelength light such as reds and yellows. In a darkened home theater environment, these changes in the relative luminosity of different colors as a function of light level can be counteracted somewhat by modulating or changing the color delivered to the video viewer in the ambient space. This can be done by subtracting light from ambient lighting units such as light speakers 1-4 using a light modulator (not shown), or by using an added component in the light speakers, namely a photoluminescent emitter, to further modify the light before it is released into the ambient space. The photoluminescent emitter performs a color transformation by absorbing, or undergoing excitation from, incident light from the light source and then re-emitting that light at longer, desired wavelengths. This excitation and re-emission by a photoluminescent emitter, such as a fluorescent dye, can allow rendering of new colors not present in the original video image or light source, and possibly outside the range of colors or color gamut inherent to the normal operation of display D. This can be helpful when the desired luminance of ambient light Lx is low, such as during very dark scenes, and the desired level of perception is higher than that normally achieved without light modification.
The production of new colors can provide new and interesting visual effects. An illustrative example is the production of orange light, such as what is sometimes termed hunter's orange, for which available fluorescent dyes are well known (see reference [2]). The example given involves fluorescent colors, as distinct from the general phenomenon of fluorescence and related phenomena. Using fluorescent orange or other fluorescent dye species can be particularly helpful for low-light conditions, where the boost in reds and oranges can counteract the reduced sensitivity of scotopic vision to long wavelengths.
Fluorescent dyes that can be used in ambient lighting units include dyes from known dye classes such as the perylenes, naphthalimides, coumarins, thioxanthenes, anthraquinones and thioindigos, and proprietary dye classes such as those produced by the Day-Glo Color Corporation of Cleveland, Ohio, USA. Available colors include Apache Yellow, Tigris Yellow, Savannah Yellow, Pocono Yellow, Mohawk Yellow, Potomac Yellow, Marigold Orange, Ottawa Red, Volga Red, Salmon Pink and Columbia Blue. These dye classes can be incorporated into resins such as PS, PET and ABS using known processes.
Fluorescent dyes and materials give enhanced visual effects because they can be engineered to appear considerably brighter than non-fluorescent materials of the same chromaticity. The so-called durability problems of the traditional organic pigments used to produce fluorescent colors have largely been solved over the last two decades, as technological advances have led to durable fluorescent pigments that maintain their vivid coloration for 7-10 years of exposure to the sun. These pigments are therefore nearly indestructible in a home theater environment, where entry of UV rays is minimal.
Alternatively, fluorescent photopigments can be used; they work simply by absorbing short-wavelength light and re-emitting it as longer-wavelength light such as red or orange. Technological advances in inorganic pigments have made excitation practicable under visible light, such as blue and violet light of, for example, 400-440 nm.
Goniophotometric and goniochromatic effects can similarly be deployed to produce different colors, intensities and light characters as a function of viewing angle. To realize this effect, ambient lighting units 1-4, SL and Lx can use, alone or in combination, known goniophotometric elements (not shown), such as metallic and pearlescent transmissive colorants; iridescent materials using known diffraction or thin-film interference effects, for example using fish-scale essence; thin flakes of guanine; or 2-aminohypoxanthine with preservative. Diffusers using finely ground mica or other substances can be used, such as pearlescent materials made from oxide layers, bornite or peacock ore; metal flakes, glass flakes or plastic flakes; particulate matter; oil; ground glass; and ground plastics.
Referring to Fig. 3, a system according to the invention is shown for extracting color information (such as a dominant color or a true color) and effecting a color space transformation to allow driving an ambient light source. As a first step, color information is extracted from a video signal AVS using known techniques.
Video signal AVS can comprise known digital data frames or packets, such as those used for MPEG encoding, audio PCM encoding and the like. Known encoding schemes can be used for the data packets, such as program streams with variable-length data packets, transport streams in which data packets are divided evenly, or other schemes such as single-program transport streams. Alternatively, the functional steps or blocks given in this disclosure can be emulated using computer code or other communication standards, including asynchronous protocols.
As a general example, the video signal AVS shown can undergo the video content analysis CA shown, possibly using known methods to record and transfer selected content to and from a hard disk HD as shown, and possibly using a library of content types or other information stored in a memory MEM as shown. This allows independent, parallel, direct, delayed, continuous, periodic or aperiodic transfer of selected video content. From this video content, the feature extraction FE shown can be performed, such as extracting color information generally (e.g., dominant colors) or from an image feature. This color information, still encoded in the rendered color space, is then transformed into an unrendered color space, such as CIE XYZ, using the RUR mapping transformation circuit 10 shown. RUR here denotes the desired type of transformation, namely rendered-unrendered-rendered, so that RUR mapping transformation circuit 10 further transforms the color information into a second rendered color space, formed so as to allow driving said ambient light source 88. The RUR transformation is preferred, but other mappings can be used, so long as the ambient light production circuit or an equivalent arrangement receives information in the second rendered color space.
RUR mapping transformation circuit 10 can be functionally contained in a computer system that uses software to perform the same functions, but in the case of decoding packetized information sent by a data transfer protocol, there can be a memory in circuit 10 that contains, or is updated to contain, information correlating to or providing rendered color space coefficients and the like. The newly created second rendered color space is suitable and desired for driving ambient light source 88 (as shown in Figs. 1 and 2), and is fed, with the encoding shown, to the ambient light production circuit 18 shown. Ambient light production circuit 18 takes the second rendered color space information from RUR mapping transformation circuit 10, then accounts for any input from a user interface and any resultant preferences memory (shown together as U2), possibly after consulting the ambient light (second rendered) color space lookup table LUT shown, to develop actual ambient light output control parameters (such as applied voltages). The ambient light output control parameters produced by ambient light production circuit 18 are fed, as shown, to a lamp interface driver D88 to directly control or feed ambient light source 88, which can comprise individual ambient lighting units 1-N, such as the previously cited ambient light speakers 1-4 shown in Figs. 1 and 2, or the ambient center lights Lx.
To reduce any real-time computational burden, the color information extracted from video signal AVS can be abbreviated or limited. Referring to Fig. 4, an equation for calculating average color information from a video extraction region is shown for discussion. It is contemplated, as mentioned below (see Fig. 18), that the video content of video signal AVS will comprise a series of time-sequenced video frames, but this is not required. For each video frame or equivalent temporal block, average or other color information can be extracted from each extraction region (e.g., R4). Each extraction region can be set to a certain size, such as 100 x 376 pixels. Assuming, for example, a frame rate of 25 frames per second, the resulting total data for extraction regions R1-R6, before averaging and assuming only one byte is needed to specify 8-bit color, would be 6 x 100 x 376 x 25 or 5.64 million bytes per second for each of the three video RGB primaries. This data stream is very large and would be difficult to handle in RUR mapping transformation circuit 10, so extraction of an average color for each extraction region R1-R6 can be effected during feature extraction FE. Specifically, as shown, the RGB color channel values (e.g., Rij) of each pixel in an m x n pixel extraction region can be summed and divided by the number of pixels m x n to arrive at an average for each RGB primary, such as Ravg for red as shown. Repeating this summation for each RGB color channel, the average for each extraction region is a triplet RAVG = |Ravg, Gavg, Bavg|. The same procedure is repeated for all extraction regions R1-R6 and for each RGB color channel. The number and size of the extraction regions can depart from those shown and can be as desired.
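As a minimal sketch of the averaging just described (the region coordinates and the uint8 frame layout are assumptions made for the example), each m x n extraction region yields one |Ravg, Gavg, Bavg| triplet per frame:

```python
import numpy as np

def region_average(frame_rgb: np.ndarray, top: int, left: int, m: int, n: int) -> np.ndarray:
    """Average the RGB channels over one m x n extraction region (cf. Fig. 4)."""
    region = frame_rgb[top:top + m, left:left + n].astype(float)
    return region.reshape(-1, 3).sum(axis=0) / (m * n)      # |Ravg, Gavg, Bavg|
```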
The next step, performing the color mapping transformation by RUR mapping transformation circuit 10, can be illustratively shown and expressed using known tristimulus primary matrices, such as shown in Fig. 5, where a rendered tristimulus color space with vectors R, G and B is transformed using a tristimulus primary matrix M with elements such as Xr,max, Yr,max and Zr,max, where Xr,max is the tristimulus value of the R primary at maximum output.
The transformation from a rendered color space to an unrendered, device-independent space can be image- and/or device-specific; known linearization, pixel reconstruction (if needed) and white point selection steps can be performed, followed by a matrix conversion. In this case we choose simply to adopt the rendered video output space as the starting point for transformation to the colorimetry of an unrendered color space. Unrendered images need to go through additional transforms to a second rendered color space in order to be viewable or printable, and the RUR transformation accordingly involves a transformation to a second rendered color space.
As a first possible step, Figs. 6 and 7 show matrix equations for transforming the video rendered color space, expressed by primaries R, G and B, and the ambient lighting rendered color space, expressed by primaries R', G' and B', respectively, into the unrendered color space X, Y, Z shown; here, tristimulus primary matrix M1 transforms video RGB into unrendered XYZ, and tristimulus primary matrix M2 transforms ambient light source R'G'B' into the unrendered XYZ color space shown. Equating both rendered color spaces RGB and R'G'B' as shown in Fig. 8, using the first and second tristimulus primary matrices (M1, M2), allows matrix transformation of the primaries RGB of the rendered (video) color space and R'G'B' of the second rendered (ambient) color space into said unrendered color space (the RUR mapping transformation); by matrix multiplication of the rendered video color space primaries RGB, the first tristimulus matrix M1, and the inverse of the second tristimulus matrix (M2)^-1, the transformation of the color information into the second rendered color space (R'G'B') is obtained. Tristimulus primary matrices for known display devices are readily available, while those for an ambient light source can be determined by one skilled in the art using a known white point method.
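Expressed as code, and assuming the matrices M1 and M2 have already been measured or derived (for example by the white point method described below), the RUR mapping of a single color reduces to one matrix product; this is only an illustrative sketch, not a prescribed implementation:

```python
import numpy as np

def rur_map(rgb: np.ndarray, M1: np.ndarray, M2: np.ndarray) -> np.ndarray:
    """Map a rendered video RGB triple to ambient R'G'B' through unrendered XYZ."""
    xyz = M1 @ rgb                          # video RGB -> unrendered XYZ (Fig. 6)
    return np.linalg.inv(M2) @ xyz          # unrendered XYZ -> ambient R'G'B' (Fig. 8)
```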
Referring to Figs. 9-11, prior-art derivation of a generalized tristimulus primary matrix M using a white point method is shown. In Fig. 9, a quantity such as SrXr represents the tristimulus value of each (ambient light source) primary at maximum output, with Sr representing a white point amplitude and Xr representing the chromaticity of the primary light produced by the (ambient) light source. Using the white point method, the matrix equation sets Sr equal to a vector of white point reference values, using the inverse of a known matrix of light source chromaticities. Fig. 11 is an algebraic manipulation reminding the reader that the white point reference values, such as Xw, are products of the white point amplitude or luminance and the light source chromaticities. Throughout, the tristimulus value X is set equal to the chromaticity x; the tristimulus value Y is set equal to the chromaticity y; and the tristimulus value Z is defined to equal 1-(x+y). The primaries and reference white color components of the second rendered ambient light source color space can be obtained using known techniques, such as by using a color spectrometer.
Similar quantities for the first rendered video color space can be found. For example, contemporary studio monitors are known to have slightly differing standards in North America, Europe and Japan. However, international agreement has been reached on primaries for high-definition television (HDTV), and these primaries closely represent contemporary monitors in studio video, computing and computer graphics. The standard is formally denoted ITU-R Recommendation BT.709, which contains the required parameters; the relevant tristimulus primary matrix (M) for RGB is:
Matrix M for ITU-R BT.709:
0.640   0.300   0.150
0.330   0.600   0.060
0.030   0.100   0.790
The white point values are known as well.
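As a worked, non-authoritative example, the white point method of Figs. 9-11 can be sketched using the BT.709 chromaticity matrix above together with an assumed D65 white point: the white point is expressed as tristimulus values, the primary amplitudes S are solved from the inverse chromaticity matrix, and the tristimulus primary matrix M follows by scaling each column by its amplitude.

```python
import numpy as np

# Chromaticity matrix C (rows x, y and z = 1 - x - y for each primary), per BT.709.
C = np.array([[0.640, 0.300, 0.150],
              [0.330, 0.600, 0.060],
              [0.030, 0.100, 0.790]])

xw, yw = 0.3127, 0.3290                                 # assumed D65 white point
W = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])      # white tristimulus, Yw = 1

S = np.linalg.inv(C) @ W        # white point amplitudes S_r, S_g, S_b (Figs. 9-10)
M = C * S                       # tristimulus primary matrix: column j scaled by S_j
```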
Referring to Fig. 12, a system similar to that shown in Fig. 3 additionally includes a gamma correction step 55 after the feature extraction step FE, for ambient broadcast. Alternatively, gamma correction step 55 can be performed between the steps performed by RUR mapping transformation circuit 10 and ambient light production circuit 18. Optimum gamma values for LED ambient light sources have been found to be 1.8, so a negative gamma correction to counteract the typical video color space gamma of 2.5 can be effected, with the exact gamma value found using known mathematics.
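A minimal sketch of that correction, assuming normalized drive values and the gamma values quoted above (2.5 for the video color space, 1.8 for the LED source), might be:

```python
import numpy as np

def ambient_gamma_correct(rgb01: np.ndarray, video_gamma: float = 2.5,
                          led_gamma: float = 1.8) -> np.ndarray:
    """Re-encode normalized video drive values for an LED source of different gamma."""
    linear = rgb01 ** video_gamma           # undo the assumed video gamma of 2.5
    return linear ** (1.0 / led_gamma)      # encode for the LED gamma of about 1.8
```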
Generally, RUR mapping transformation circuit 10, which can be a functional block realized on any suitable known software platform, performs the overall RUR transformation shown in Figure 13, where the schematic shows a video signal AVS comprising a rendered color space such as video RGB being acquired and transformed into an unrendered color space such as CIE XYZ, and then into the second rendered color space (ambient light source RGB). After the RUR transformation, and after the signal processing shown, ambient light sources 88 can be driven.
Figure 14 shows process steps for acquiring transformation matrix coefficients for an ambient light source used by the invention; as shown, the steps include driving the ambient light unit and checking the output linearity as known in this art. If the ambient light source primaries are stable (shown on the left fork, stable primaries), the transformation matrix coefficients can be acquired using a color spectrometer; whereas if the primaries are unstable (shown on the right fork, unstable primaries), the previously given gamma correction can be reset (shown as reset gamma curves).
In general it is desirable, but not necessary, to extract color information from every pixel in an extraction region such as R4; instead, if desired, polling of selected pixels can allow a faster estimate of the average color, or a faster characterization of the extraction region's color. Figure 15 shows process steps for video extraction, estimation, and ambient light reproduction using the invention, the steps comprising [1] preparing a chromaticity estimate of the video reproduction (from the rendered color space, e.g., video RGB); [2] transforming to the unrendered color space; and [3] transforming the chromaticity estimate for ambient reproduction (the second rendered color space, e.g., LED RGB).
According to the invention, it has been found that the data bitstream required to support extraction and processing of video content (e.g., of dominant colors; see Figure 18 below) from video frames can be reduced by judicious subsampling of the video frames. Referring to Figure 16, a chart of video frame extraction according to the invention is shown. A series of individual successive video frames F is shown, namely frames F1, F2, F3 and so on, such as individual interlaced or non-interlaced video frames specified by the NTSC, PAL, or SECAM standards. By performing content analysis and/or feature extraction, such as extracting dominant color information, from selected successive frames, for example frames F1 and FN, the data payload or overhead can be reduced while keeping the ambient light source's responsiveness, realism, and fidelity acceptable. It has been found that N = 10 provides good results, that is, subsampling one frame out of every 10 successive frames can work. This provides a refresh period P between extractions during which processing overhead is reduced, and during which inter-frame interpolation can provide a suitable approximation of the time development of the chromaticity shown on display D. The selected frames F1 and FN are extracted (EXTRACT), and intermediate interpolated values of the chromaticity parameters, shown as G2, G3, and G4, provide the necessary color information to inform the previously cited driving process for ambient light source 88. This avoids the need simply to freeze or hold the same color information for frames 2 through N-1. The interpolated values can be determined linearly, such as where the total chromaticity difference between the extracted frames F1 and FN is spread across the interpolated frames G. Alternatively, a function can spread the chromaticity difference between the extracted frames F1 and FN in any other manner, such as to suit a higher-order approximation of the time development of the extracted color information. The interpolation results can be used to influence interpolated frames by accessing frame FN in advance (e.g., in a DVD player), or alternatively, interpolation can be used to influence future interpolated frames without advance access to frame FN (e.g., in broadcast decoding applications).
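A minimal sketch of the linear interpolation described above, with N = 10 and hypothetical start and end chromaticities:

```python
def interpolate_chromaticity(c_start, c_end, n):
    """Linearly spread the chromaticity change between two extracted frames F1 and FN
    over the N-2 intermediate frames (the G2..G(N-1) values of Figure 16)."""
    return [tuple(a + (k / (n - 1)) * (b - a) for a, b in zip(c_start, c_end))
            for k in range(1, n - 1)]

# Example with N = 10: dominant chromaticity drifting from reddish to bluish.
for g in interpolate_chromaticity((0.60, 0.33), (0.20, 0.25), 10):
    print(g)
```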
Figure 17 shows process steps for an abbreviated chromaticity assessment according to the invention. Higher-order analysis of the frame extractions can enlarge the refresh period P and enlarge N beyond what would otherwise be possible. During a frame extraction, or during provisional polling of selected pixels in an extraction region Rx, the abbreviated chromaticity assessment shown can be performed, resulting either in a delay of the next frame extraction, shown on the left, or in initiation of a full frame extraction, shown on the right. In either case, interpolation proceeds (interpolate), with a delayed next frame extraction resulting in frozen, or incremented, chromaticity values being used. This provides even greater economy in terms of bitstream bandwidth or overhead.
Figure 18 shows the top portions of Figures 3 and 12, where an alternative extraction step is shown alongside the frame decoder FD used, allowing, as shown in step 33, regional information to be extracted from an extraction region (such as R1). A further processing or composition step 35 includes assessing the chromaticity difference and, as indicated, using that information to set the video frame extraction rate. The next processing step of output calculation 00, such as the averaging of Figure 4 or the dominant color extraction discussed below, is performed as shown prior to the data transfer to the previously cited ambient lighting production circuit 18.
As shown in Figure 19, general process steps for color information extraction and processing according to the invention include acquiring a video signal AVS; extracting regional (color) information from selected video frames (such as the previously cited F1 and FN); interpolating between the selected video frames; an RUR mapping transformation; optional gamma correction; and using this information to drive the ambient light sources (88). As shown in Figure 20, two additional process steps can be inserted after the regional extraction from selected frames: an assessment of the chromaticity difference between the selected frames F1 and FN can be performed and, depending on a preset criterion, a new frame extraction rate can be set as indicated. Thus, if the chromaticity difference between successive frames F1 and FN is large, or is growing rapidly (e.g., a large first derivative), or satisfies some other criterion, such as one based on chromaticity difference history, the frame extraction rate can be increased, thus decreasing refresh period P. If, however, the chromaticity difference between successive frames F1 and FN is small and stable or not growing rapidly (e.g., a low or zero absolute first derivative), or satisfies some other such criterion based on chromaticity difference history, the required data bitstream can be economized by decreasing the frame extraction rate, thus increasing refresh period P.
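The rate adjustment might look like the following sketch; the thresholds and the halve/double policy are illustrative assumptions, not values taken from the text:

```python
def next_refresh_period(period, delta, delta_rising,
                        high=0.10, low=0.02, shortest=2, longest=30):
    """Adjust the refresh period P (frames between extractions) from the
    chromaticity difference 'delta' observed between the last extracted frames."""
    if delta > high or delta_rising:
        return max(shortest, period // 2)   # large or fast-growing difference: extract more often
    if delta < low:
        return min(longest, period * 2)     # small, stable difference: extract less often
    return period

print(next_refresh_period(10, delta=0.15, delta_rising=False))  # -> 5
print(next_refresh_period(10, delta=0.01, delta_rising=False))  # -> 20
```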
Referring to Figure 21, a general process according to one aspect of the invention is shown. As shown, and as an optional step that can ease the computational burden, [1] the rendered color space corresponding to the video content is quantized (QCS, quantize color space), for example by the methods given below; [2] a dominant color (or a palette of dominant colors) is then selected (DCE, dominant color extraction); and [3] a color mapping transformation, such as the RUR mapping transformation (10) (MT, mapping transformation to R'G'B'), is performed to improve the fidelity, range, and appropriateness of the ambient light produced.
Optional quantization of the color space can reduce the number of possible color states and/or the number of pixels to be assessed, and it can be carried out in different ways. As an example, Figure 22 schematically shows one possible method of quantizing the pixel chromaticities of video content. Here, as shown, illustrative video primary values R range from value 1 to value 16, and any of these values of primary R is assigned to a single assigned color AC. Thus, whenever any red pixel chromaticity with a value from 1 to 16 is encountered in the video content, the assigned color AC is substituted for it, reducing by a factor of 16 the number of red primary values needed to characterize the video image. For all three primaries, such a reduction of possible color states in this example results in the number of colors used for computation being reduced by a factor of 16 x 16 x 16, or 4096. This is exceedingly useful in reducing the computational burden of dominant color determination in many video systems, for example those with 8-bit color, which present 256 x 256 x 256, or 16.78 million, possible color states.
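A sketch of this channel quantization for 8-bit video, using the factor-16 binning described above (the choice of the bin center as the assigned color is an assumption):

```python
import numpy as np

def quantize_channels(frame, bin_size=16):
    """Replace every run of 'bin_size' consecutive levels in each 8-bit channel
    with a single assigned color value (here the center of the bin)."""
    return (frame // bin_size) * bin_size + bin_size // 2

frame = np.random.randint(0, 256, size=(4, 4, 3))   # toy 4x4 RGB patch
print(quantize_channels(frame))                      # only 16 distinct values per channel
```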
Another method of quantizing the video color space is shown in Figure 23, which schematically shows an example of binning pixel chromaticities from a number of pixels Pi (16, for example) into a superpixel XP in a quantized rendered color space. Binning itself is a method whereby adjacent pixels are added together mathematically (or computationally) to form a superpixel, which is itself used for further computation or representation. Thus, in a video format that commonly has, say, 0.75 million pixels, the number of superpixels selected to represent the video content can reduce the number of pixels used for computation to 0.05 million, or to any other smaller desired number.
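A sketch of such binning by block averaging, assuming rectangular, axis-aligned superpixels of fixed size:

```python
import numpy as np

def bin_superpixels(frame, block=4):
    """Average non-overlapping block x block neighbourhoods into superpixels;
    assumes the frame height and width are multiples of 'block'."""
    h, w, c = frame.shape
    return frame.reshape(h // block, block, w // block, block, c).mean(axis=(1, 3))

frame = np.random.rand(480, 640, 3)            # roughly 0.3 million pixels
print(bin_superpixels(frame, block=4).shape)   # (120, 160, 3): roughly 0.02 million superpixels
```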
The number, size, orientation, shape, or position of such superpixels can vary as a function of the video content. Where, for example, it is helpful during feature extraction FE to ensure that a superpixel XP is drawn only from an image feature, rather than from a border region or the background, the superpixel XP can be formed accordingly. Figure 24 shows a binning process similar to that of Figure 23, but here the superpixel size, orientation, shape, or position is formed in conformity with an image feature J8 as shown. The image feature J8 shown is jagged or irregular, with no straight horizontal or vertical edges. As shown, the selected superpixel XP correspondingly mimics or emulates the shape of the image feature. In addition to the shape, the position, size, and orientation of such a customized superpixel can be influenced by image feature J8 using known pixel-level computation techniques.
Quantization assigns pixel chromaticities to assigned colors (such as assigned color AC) that replace them. Those assigned colors can be assigned arbitrarily, including by using preferred color vectors. Thus, rather than using an arbitrary or uniform set of assigned colors, at least some video image pixel chromaticities can be assigned to preferred color vectors.
Figure 25 shows regional color vectors and their color or chromaticity coordinates on a standard Cartesian CIE x-y chromaticity diagram or color map. This map shows all known or perceivable colors at maximum luminosity as a function of chromaticity coordinates x and y, with light wavelengths shown in nanometers and the CIE standard illuminant white point shown for reference. Three regional color vectors V are shown on this map, where one color vector V can be seen to lie outside the color gamuts obtainable from the PAL/SECAM, NTSC, and Adobe RGB color production standards (gamuts shown).
For clarity, Figure 26 shows a portion of the CIE map of Figure 25, additionally showing pixel chromaticities Cp and their assignment to a regional color vector V. The criterion for assignment to a regional color vector can vary, and it can include a Euclidean or other distance from a particular color vector V, calculated using known computation techniques. The labeled color vector V lies outside the rendered color space or gamut of the display system; this can allow a preferred chromaticity that is easily produced by the ambient lighting system or light source 88 to become one of the assigned colors used in quantizing the rendered (video) color space.
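A sketch of assigning pixel chromaticities to the nearest regional color vector by Euclidean distance in CIE x-y; the vectors and pixel values are hypothetical:

```python
import numpy as np

def assign_to_color_vectors(pixels_xy, vectors_xy):
    """Assign each pixel chromaticity (CIE x, y) to the nearest regional color
    vector by Euclidean distance; returns one vector index per pixel."""
    pixels = np.asarray(pixels_xy, dtype=float)[:, None, :]    # (N, 1, 2)
    vectors = np.asarray(vectors_xy, dtype=float)[None, :, :]  # (1, K, 2)
    dist = np.linalg.norm(pixels - vectors, axis=2)            # (N, K)
    return dist.argmin(axis=1)

V = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # hypothetical regional color vectors
Cp = [(0.45, 0.40), (0.20, 0.15), (0.33, 0.50)]  # a few pixel chromaticities
print(assign_to_color_vectors(Cp, V))            # -> [0 2 1]
```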
Once a distribution of assigned colors has been obtained using one or more of the methods given above, the next step is to perform a dominant color extraction from the distribution of assigned colors by extracting any of the following: [a] the mode of the assigned colors; [b] the median of the assigned colors; [c] a weighted average of the chromaticities of the assigned colors; or [d] a weighted average using a weighting function.
For example, the assigned color occurring with the highest frequency can be selected with a histogram method. Figure 27 shows a histogram giving the occurrence of the assigned pixel colors (assigned colors) (see the ordinate, pixel percentage); the most frequently occurring of the ten assigned colors used, that is, the mode of the assigned color distribution, can be selected as the dominant color DC (shown) for use or emulation by the ambient lighting system.
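A sketch of the histogram (mode) selection over already-assigned colors; the color labels are hypothetical:

```python
from collections import Counter

def dominant_by_mode(assigned_colors):
    """Dominant color as the mode of the assigned-color distribution."""
    counts = Counter(assigned_colors)          # histogram over assigned colors
    color, _ = counts.most_common(1)[0]
    return color

print(dominant_by_mode(["AC3", "AC7", "AC3", "AC1", "AC3"]))   # -> "AC3"
```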
Likewise, the median of the assigned color distribution can be selected as, or used to help influence the selection of, the dominant color DC. Figure 28 schematically shows the median of an assigned color distribution, where the median or middle value (interpolated for an even number of assigned colors) is selected as dominant color DC.
Alternatively, a weighted average over the assigned colors can be performed to influence the selection of the dominant color, for example so as to better suit the intensities of the ambient lighting system's color gamut. Figure 29 shows the mathematical summation for a weighted average of assigned color chromaticities. For clarity a single variable R is shown, but any number of dimensions or coordinates (e.g., CIE coordinates x and y) can be used. Chromaticity variable R is indexed over pixel coordinates (or superpixel coordinates, if needed) i and j, which in this example take values between 1 and n, and between 1 and m, respectively. Chromaticity variable R is multiplied by a pixel weighting function W with the same indices i and j, the whole is summed, and the result is divided by the number of pixels n x m to obtain the weighted average.
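Written out from the prose above (a reconstruction, since the figure itself is not reproduced here), the summation of Figure 29 is:

```latex
\bar{R} \;=\; \frac{1}{n\,m}\sum_{i=1}^{n}\sum_{j=1}^{m} W_{ij}\, R_{ij}
```

Figure 30 then makes the weight an explicit function of pixel position (and, per the claims, possibly of the chromaticity as well), W = W(i, j, R), so that chosen screen regions can dominate the average.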
A similar weighted average using a pixel weighting function is shown in Figure 30, except that W is shown also to be a function of the pixel positions i and j, allowing a spatial dominance function; otherwise it is similar to Figure 29. By weighting pixel positions in this way, the center of display D, or any other portion of an extraction region, can be emphasized in the selection of dominant color DC, as discussed below.
The weighted summation can be performed on the extraction region information given by step 33 above, and W can be selected and stored in any known manner. The pixel weighting function W can be any function or operator, and thus it can, for example, be unity for particular pixel positions so as to include them, and zero for others so as to exclude them. Image features can be recognized with known techniques, as shown in Figure 34, and W can be changed accordingly to serve larger purposes.
Using the above methods or any equivalent method, once an assigned color has been selected as a dominant color, a better assessment of the appropriate chromaticity to be rendered by the ambient lighting system can be made, particularly since fewer computation steps are needed than if every chromaticity and/or every video pixel were considered. Figure 31 shows the determination of a color of interest in the distribution of assigned colors, followed by extraction of the pixel chromaticities assigned there, to obtain a true dominant color to be designated as the dominant color. As can be seen, pixel chromaticities Cp are assigned to two assigned colors AC; the assigned color AC shown at the bottom of the figure is not chosen as a dominant color, whereas the assigned color at the top is deemed dominant (DC) and is selected as the color of interest COI shown. The pixels assigned (at least in part) to the assigned color deemed the color of interest COI can then be examined further, and by reading their chromaticities directly (e.g., using averaging as in Figure 4, or perhaps by performing a dominant color extraction within that small region alone), a better rendition of the dominant color can be obtained, shown here as true dominant color TDC. Any processing steps needed for this purpose can be performed using the steps and/or components given above, or by using a separate true color selector, which can be a known software program, subroutine, or task circuit, or their equivalents.
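A sketch of the true dominant color refinement: only the pixels assigned to the color of interest are re-read and averaged (averaging stands in here for the processing of Figure 4; the values are hypothetical):

```python
import numpy as np

def true_dominant_color(pixel_chromas, assignments, color_of_interest):
    """Refine a dominant color by averaging the raw chromaticities of only those
    pixels that were assigned to the color of interest."""
    chromas = np.asarray(pixel_chromas, dtype=float)
    mask = np.asarray(assignments) == color_of_interest
    return chromas[mask].mean(axis=0)

chromas = [(0.61, 0.34), (0.63, 0.32), (0.20, 0.25)]
assigned = ["AC_red", "AC_red", "AC_blue"]
print(true_dominant_color(chromas, assigned, "AC_red"))   # -> roughly [0.62, 0.33]
```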
As shown in Figure 32, dominant color extraction according to the invention can be performed repeatedly or in parallel to provide a palette of dominant colors, where the dominant color DC can comprise dominant colors DC1 + DC2 + DC3.
As mentioned for Figure 30, the weighting function or its equivalent can provide weighting by pixel position, allowing certain display regions to be given special consideration or emphasis. Figure 33 shows a simple front view of the video display of Figure 1 and an example in which unequal weights are given to pixels Pi in preferred spatial regions. For example, as shown, a central region C of the display can be weighted with a numerically large weighting function W, while an extraction region (or any region, such as a scene background) can be weighted with a numerically small weighting function W.
As shown in Figure 34, such weighting or emphasis can be applied to an image feature J8; a simple front view of the video display of Figure 33 is given, where an image feature J8 (a fish) is selected using known techniques by a feature extraction step FE (see Figures 3 and 12). Image feature J8 can be the only video content used in the dominant color extraction DCE shown or described above, or it can be only part of the video content used.
Referring to Figure 35, it can be seen that, using the methods given here, the dominant color chosen for a video frame can be obtained by relying, at least in part, on at least one dominant color from a preceding frame. Frames F1, F2, F3, and F4 are shown undergoing the dominant color extraction process DCE as shown, with the aim of extracting the dominant colors DC1, DC2, DC3, and DC4, respectively, where the dominant color chosen for a frame, denoted DC4, can be established by computation as a function of the dominant colors DC1, DC2, and DC3 (DC4 = F(DC1, DC2, DC3)). This allows either an abbreviated procedure for choosing dominant color DC4 for frame F4, or better informing the choice of dominant color DC4 by the dominant color choices of the preceding frames F1, F2, F3. Such an abbreviated procedure is shown in Figure 36, used here to reduce the computational burden: a provisional dominant color extraction DC4* using a chromaticity assessment is assisted in a next step by the dominant colors extracted from the preceding frames (or from a single preceding frame) to help prepare the selection of DC4 (prepare DC4 using abbreviated procedure).
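One plausible form of the function F, sketched as a blend of the provisional extraction with the average of the preceding frames' dominant colors; the blend weight is an assumption, since the text leaves the exact function open:

```python
def temporally_smoothed_dc(previous_dcs, provisional_dc, history_weight=0.5):
    """Blend the provisional dominant color of the current frame with the
    average of the dominant colors of earlier frames."""
    n = len(previous_dcs)
    history = [sum(c[k] for c in previous_dcs) / n for k in range(len(provisional_dc))]
    return tuple(history_weight * h + (1 - history_weight) * p
                 for h, p in zip(history, provisional_dc))

print(temporally_smoothed_dc([(0.60, 0.30), (0.58, 0.32), (0.62, 0.31)], (0.40, 0.40)))
```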
In general, ambient light source 88 can embody various diffuser effects to produce light mixing, as well as translucence or other phenomena, such as by using frosted or glazed surfaces in the modulated structure; ribbed glass or plastic; or apertured structures, such as metal structures surrounding an individual light source. To provide these interesting effects, any number of known diffusing or scattering materials or phenomena can be used, including scattering obtained from small suspended particles; clouded plastics or resins; preparations using colloids, emulsions, or globules of 1-5 μm or less, such as less than 1 μm, including long-life organic mixtures; gels; and sols, whose production and fabrication are known to those skilled in the art. Scattering phenomena can include Rayleigh scattering at visible wavelengths, for example blue production for enhancing the blue of the ambient light. The colors produced can be regionally defined, such as an overall bluish tint in certain regions or regional tints, for example a blue-light-producing upper portion (ambient light L1 or L2).
Ambient lamps can also be fitted with goniophotometric elements, such as cylindrical prisms or lenses, which can be formed within, integral to, or inserted within the modulated structure. This can allow special effects in which the character of the light produced changes as a function of the viewer's position. Other optical shapes and forms can be used, including rectangular, triangular, or irregularly shaped prisms or shapes, and they can be placed upon or formed integrally with the ambient lighting unit or units. The result is that, rather than an isotropic output, the effect obtained can vary endlessly, for example casting bands of interesting light onto surrounding walls, objects, and surfaces placed about the ambient light source, producing a kind of light show in a darkened room as the scene elements, color, and intensity change on the video display unit. The effect can be a theatrical ambient lighting element that changes its light character sensitively as a function of viewer position, for example showing a bluish sparkle and then red light as the viewer rises from a chair or shifts viewing position while watching home theater. The number and type of goniophotometric elements that can be used is nearly unlimited, and includes pieces of plastic, glass, and the optical effects produced by scoring and mildly destructive fabrication techniques. Ambient lamps can be made unique, and even interchangeable, for different theatrical effects. These effects can be modulated, for example by changing the amount of light allowed to pass through a goniophotometric element, or by illuminating different portions of the ambient lighting unit (for example, using sub-lamps or groups of LEDs).
In this way, the ambient light produced at L3 of Figure 1 to emulate extraction region R3 can have a chromaticity that provides a perceptual extension of a phenomenon in that region, such as the moving fish shown. This can multiply the visual experience and provide hues that are appropriate and not garish or unduly mismatched.
Video signal AVS can of course be a digital data stream and can contain synchronization bits and concatenation bits; parity bits; error codes; interleaving; special modulation; serial data headers; and metadata such as a description of a desired ambient lighting effect (e.g., "lightning storm"; "sunrise"; etc.), and those skilled in the art will realize that the functional steps given here are merely illustrative and, for clarity, do not include conventional steps or data.
The graphical user interface and preference memory shown in Figures 3 and 12 can be used to change the behavior of the ambient lighting system, such as the desired degree of color fidelity to the video content of video display D; the flamboyance, including the extent to which any fluorescent colors or out-of-gamut colors are broadcast into the ambient space; or how quickly and how much the ambient light changes in response to changes in the video content, for example by exaggerating the brightness or other properties of changes in the light script command content. This can include advanced content analysis, which can produce subdued tones for films or content of a particular character. Video content containing many dark scenes can influence the behavior of ambient light source 88, causing a dimming of the broadcast ambient light, while flamboyant or bright tones can be used for certain other content, such as much flesh tone or bright scenes (a sunny beach, a tiger on the savannah, and so on).
The description given here enables those skilled in the art to practice the invention. Many configurations are possible using the instant teachings, and the configurations and arrangements given here are only illustrative. Not all of the objectives sought here need be realized in practice: for example, without departing from the invention, the special transformation to the second rendered color space can be omitted from the teachings given here, particularly when the rendered color spaces RGB and R'G'B' are similar or identical. In practice, the methods taught or claimed can appear as part of a larger system, such as an entertainment center or home theater center.
As is well known, the functions and calculations illustratively taught here can be functionally reproduced or emulated using software or machine code, and those skilled in the art will be able to use these teachings regardless of the way the encoding and decoding taught here are managed. This is particularly true when one considers that it is not strictly necessary to decode video information into frames in order to perform pixel-level statistics.
Those skilled in the art, based on these teachings, will be able to modify the apparatus and methods taught and claimed here, for example by rearranging steps or data structures to suit particular applications, and to create systems that may bear little resemblance to those chosen here for illustrative purposes.
The invention as disclosed using the above examples can be practiced using only some of the features described above. Likewise, nothing taught or claimed here precludes the addition of other structures or functional elements.
Obviously, many modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that, within the scope of the appended claims, the invention can be practiced otherwise than as specifically described or suggested here.

Claims (19)

1. A method of extracting a dominant color from video content encoded in a rendered color space to produce a dominant color for emulation by an ambient light source (88), comprising:
1) quantizing at least some pixel chromaticities from said video content in said rendered color space to form a distribution of assigned colors;
2) performing a dominant color extraction from the distribution of said assigned colors to produce a dominant color by extracting any of the following: a) the assigned color occurring with the highest frequency; b) a median of said assigned colors; c) a weighted average of the chromaticities of said assigned colors; d) a weighted average using a weighting function (W(i, j, R)) that weights pixel positions;
3) transforming said dominant color from said rendered color space to a second rendered color space, the second rendered color space being formed so as to allow driving said ambient light source.
2. The method of claim 1, wherein said quantizing comprises assigning a plurality of said pixel chromaticities to one of said assigned colors.
3. The method of claim 1, wherein said quantizing comprises binning said pixel chromaticities into at least one superpixel.
4. The method of claim 3, wherein any of the size, orientation, shape, or position of said superpixel is formed in conformity with an image feature.
5. The method of claim 3, further comprising assigning a plurality of said pixel chromaticities in said superpixel to one of said assigned colors.
6. The method of claim 1, wherein at least one of said assigned colors is a regional color vector, which need not lie in said rendered color space.
7. The method of claim 6, wherein said regional color vector lies in said second rendered color space.
8. The method of claim 1, further comprising determining at least one color of interest in the distribution of said assigned colors and extracting the pixel chromaticities assigned thereto, to obtain a true dominant color to be designated as said dominant color.
9. The method of claim 1, wherein said dominant color comprises a palette of dominant colors.
10. The method of claim 1, wherein step 3) comprises
3a) transforming said dominant color from said rendered color space to an unrendered color space;
3b) transforming said dominant color from said unrendered color space to said second rendered color space.
11. The method of claim 10, wherein steps 3a) and 3b) further comprise step 3c): matrix transforming the primaries of said rendered color space and of the second rendered color space to said unrendered color space using first and second tristimulus primary matrices (M1, M2); and obtaining the transformation of color information to said second rendered color space by matrix multiplication of said primaries of said rendered color space, said first tristimulus primary matrix, and the inverse ((M2)^-1) of said second tristimulus primary matrix.
12. The method of claim 1, wherein the pixel chromaticities of step 1) are obtained from an extraction region, and further comprising:
4) broadcasting ambient light of said dominant color from said ambient light source adjacent to said extraction region.
13. The method of claim 1, wherein step 1) further comprises decoding said video content into a set of frames, and wherein a plurality of the pixel chromaticities are obtained from an image feature extracted from a frame in said set of frames.
14. The method of claim 1, wherein step 1) further comprises decoding said video content into a set of frames, and wherein the dominant color for a frame is obtained at least in part by relying upon at least one dominant color from a preceding frame.
15. A method of extracting a dominant color from video content encoded in a rendered color space to produce a dominant color for emulation by an ambient light source (88), comprising:
1) decoding said video content in said rendered color space into a plurality of frames, and quantizing at least some pixel chromaticities from an extraction region in one of said frames to form a distribution of assigned colors;
2) performing a dominant color extraction from the distribution of said assigned colors to produce a dominant color by extracting any of the following: a) the assigned color occurring with the highest frequency; b) a median of said assigned colors; c) a weighted average of the chromaticities of said assigned colors; d) a weighted average using a weighting function (W(i, j, R)) that weights pixel positions;
3a) transforming said dominant color from said rendered color space to an unrendered color space;
3b) transforming said dominant color from said unrendered color space to a second rendered color space, assisted by the following step:
3c) matrix transforming the primaries of said rendered color space and of the second rendered color space to said unrendered color space using first and second tristimulus primary matrices (M1, M2); and obtaining the transformation of color information to said second rendered color space by matrix multiplication of said primaries of said rendered color space, said first tristimulus primary matrix, and the inverse ((M2)^-1) of said second tristimulus primary matrix;
4) broadcasting ambient light of said dominant color from said ambient light source adjacent to said extraction region.
16. The method of claim 15, wherein said extraction region is chosen to be an image feature extracted from a frame.
17. The method of claim 15, further comprising determining at least one color of interest in the distribution of said assigned colors and extracting the pixel chromaticities assigned thereto, to obtain a true dominant color to be designated as said dominant color.
18. The method of claim 15, wherein at least one of said assigned colors is a regional color vector, which need not lie in said rendered color space.
19. The method of claim 15, wherein said frames comprise first and subsequent frames, and wherein the dominant color for a frame is obtained at least in part by relying upon at least one dominant color from a preceding frame.
CNB2005800220072A 2004-06-30 2005-06-27 Be used for the method that mass-tone is extracted Expired - Fee Related CN100559850C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US58419704P 2004-06-30 2004-06-30
US60/584,197 2004-06-30
US60/652,836 2005-02-14

Publications (2)

Publication Number Publication Date
CN1977529A CN1977529A (en) 2007-06-06
CN100559850C true CN100559850C (en) 2009-11-11

Family

ID=38126383

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005800220072A Expired - Fee Related CN100559850C (en) 2004-06-30 2005-06-27 Be used for the method that mass-tone is extracted

Country Status (1)

Country Link
CN (1) CN100559850C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106406504A (en) * 2015-07-27 2017-02-15 常州市武进区半导体照明应用技术研究院 Atmosphere rendering system and method of man-machine interaction interface

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388205B (en) * 2007-09-10 2011-08-24 联想(北京)有限公司 Display device control method and system
CN105427271B (en) * 2014-09-17 2018-10-16 清华大学 A kind of image color adjusting method and system
CN108933961B (en) * 2018-06-26 2021-03-23 深圳市韵阳科技有限公司 Method and system for controlling LED color development according to image edge data
JP7080399B2 (en) * 2018-11-01 2022-06-03 シグニファイ ホールディング ビー ヴィ Determining light effects based on video and audio information depending on video and audio weights
CN112185317A (en) * 2020-08-17 2021-01-05 深圳市广和通无线股份有限公司 Color calibration method, device, computer equipment and storage medium
WO2022121114A1 (en) * 2020-12-11 2022-06-16 萤火虫(深圳)灯光科技有限公司 Lighting module control method, lighting module, electronic device, and storage medium

Also Published As

Publication number Publication date
CN1977529A (en) 2007-06-06

Similar Documents

Publication Publication Date Title
CN1977542B (en) Dominant color extraction using perceptual rules to produce ambient light derived from video content
CN100596207C (en) Ambient light derived form video content by mapping transformations through unrendered color space
JP4729568B2 (en) Dominant color extraction for ambient light from video content transformed via non-rendering color space
US9997135B2 (en) Method for producing a color image and imaging device employing same
CN100584037C (en) Flicker-free adaptive thresholding for ambient light derived from video content mapped through unrendered color space
CN106464892B (en) Method and apparatus for being encoded to HDR image and the method and apparatus for using such coded image
CN102783132B (en) For defining the apparatus and method of color state
CN100596208C (en) Method for coding ambient light script command and dynamically controlling ambient light source
JP2007521775A (en) Ambient light derived by subsampling video content and mapped via unrendered color space
CN103180891B (en) Display management server
CN103891294B (en) The apparatus and method coded and decoded for HDR image
CN100466058C (en) Multi-primary driving values calculation unit and method
CN100559850C (en) Be used for the method that mass-tone is extracted
EP1763974A1 (en) Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences
CN104995903A (en) Improved HDR image encoding and decoding methods and devices
CN1977569A (en) Ambient lighting derived from video content and with broadcast influenced by perceptual rules and user preferences
EP4042405A1 (en) Perceptually improved color display in image sequences on physical displays
Laine et al. Illumination-adaptive control of color appearance: a multimedia home platform application
CN116258779A (en) Poster making method, readable medium, electronic device and computer program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: TP VISION HOLDING B.V.

Free format text: FORMER OWNER: ROYAL PHILIPS ELECTRONICS N.V.

Effective date: 20120822

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20120822

Address after: Holland Ian Deho Finn

Patentee after: Tp Vision Holding B. V.

Address before: Holland Ian Deho Finn

Patentee before: Koninklijke Philips Electronics N.V.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091111

Termination date: 20140627

EXPY Termination of patent right or utility model