CN102360513B - Object illumination moving method based on gradient operation - Google Patents

Object illumination moving method based on gradient operation Download PDF

Info

Publication number
CN102360513B
CN102360513B (application CN201110299716A)
Authority
CN
China
Prior art keywords
image
destination object
layer
gradient
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110299716
Other languages
Chinese (zh)
Other versions
CN102360513A (en)
Inventor
陈小武
赵沁平
金鑫
陈萌萌
吴洪宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110299716 priority Critical patent/CN102360513B/en
Publication of CN102360513A publication Critical patent/CN102360513A/en
Application granted granted Critical
Publication of CN102360513B publication Critical patent/CN102360513B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an object illumination migration method based on gradient operations, and provides a new technical scheme for generating virtual-scene illumination effects from video material. The method generates an illumination migration result through the steps of image alignment, image layering, weighted least squares filtering, gradient operations, Poisson integration, and image combination. The weighted least squares filter is improved: its filtering parameter is computed adaptively from the non-smooth content of the image, so that stronger filtering is applied in non-smooth regions and more detail information is preserved in the detail layer. A gray-level constraint is added to the gradient operation, so that the overall gray level of the illumination migration result more closely matches the reference object. The method can be widely applied and promoted in fields such as interactive digital entertainment, film and television production, and artistic design and creation.

Description

Object illumination migration method based on gradient operations
Technical field
The present invention relates to the fields of virtual reality and computer vision, and specifically to an object illumination migration method based on gradient operations.
Background technology
The generation of reality-virtuality fusion scenes from video material is an important component of virtual reality, and a research hotspot at the organic intersection of virtual reality, augmented reality, computer vision, and related research directions. Because the video scene that constitutes a virtual scene and the scene objects often come from different video materials, the lighting effects of a scene object and of the video scene may differ considerably; yet a video virtual scene requires every scene object to have consistent lighting effects, and existing illumination fusion methods for video material have difficulty meeting this need. The illumination migration problem for video scene objects is to transfer the lighting effect of a reference object in the target video scene onto a scene object, generating the scene object's lighting effect under the illumination conditions of the target video scene.
At present there exist some video-based methods for synthesizing facial illumination effects, all of which require complicated illumination acquisition equipment. In 2000, Debevec et al. of the University of Southern California proposed a static-scene illumination migration method under a fixed viewpoint: static face images are captured under 2048 point-light-source illumination conditions, and the captured image data are linearly combined to generate images of the static face under new illumination conditions; the method is confined to synthesizing static object lighting effects under a fixed viewpoint. In 2007, Peers et al. of the University of Southern California proposed a method that uses quotient images to perform illumination migration on faces. By capturing the reflectance field of a static reference face under different illumination conditions, and using the ratio between the face image of the same subject under a given illumination condition and the face image under frontal uniform illumination, a material attribute map of the subject under the corresponding illumination condition, also called a quotient image, is constructed. The quotient image corresponding to the desired illumination condition is then warped and transferred onto the target face to generate the lighting effect of the target image/video scene. The method is used for post-production illumination migration of face videos and images. Its limitation is that the target face and the database faces must have similar geometric and material properties, so the method can for now only handle facial illumination migration and cannot be applied to the illumination migration of a whole human body.
At present there also exist illumination migration methods based on geometry estimation or geometric assumptions, which usually assume a Lambertian illumination model. In 2007, Yang Wang et al. of Carnegie Mellon University proposed a facial-image illumination migration method that requires neither known face geometry nor albedo information. The method takes as input only a single face image captured under an arbitrary, possibly harsh, lighting environment. It estimates the face geometry with a deformable face model, represents the illumination condition with a spherical harmonic basis, and models the statistical distribution of sub-regions and the spatial consistency of face texture with a Markov random field model. By estimating the illumination condition of a face image with a morphable face model, the method can handle complex illumination conditions as well as over- and under-exposed face images, and can greatly improve face recognition accuracy. However, it assumes a Lambertian face surface and fits the face image with a deformable face model to estimate geometry, so the generated face images lack a certain degree of realism compared with real images.
In addition, some methods generate illumination migration results by decomposing and recombining images, decomposing an image into an illumination-dependent part and an illumination-independent part. In 2009, Qing Li et al. of the University of Houston proposed a facial illumination migration technique based on a logarithmic total variation model, which can transfer the illumination of a reference face onto a target face without knowing the geometry or illumination information of the face. A face image is decomposed into illumination information and illumination-independent facial features, and the illumination information of the reference object replaces that of the target object to realize illumination migration. The algorithm flow is: first warp the reference face with an image morphing method; then decompose both the target face and the warped reference face into an illumination-dependent part and an illumination-independent part; finally exchange the illumination-dependent parts of the two face images to obtain the target object's illumination migration result. The method needs only one reference object for illumination migration and can handle both color and gray-scale images. Its limitations are: 1. it assumes that the target face and the reference face have a similar complexion; 2. the illumination migration result is seriously affected by appearance differences between the target face and the reference face.
Summary of the invention
In view of the above practical needs and key problems, the object of the present invention is to propose an object illumination migration method based on gradient operations. The method needs neither an estimate of nor assumptions about the geometry of the destination object, makes no assumptions about the illumination model, requires only a single reference object, and transfers the illumination information of the reference object to the destination object in the gradient domain.
The present invention processes only the objects of interest in an image; these are referred to as objects, for example the destination object and the reference object. An object is the set of pixels that constitute it in the image, i.e., a part of the image.
The technical solution adopted by the present invention is to construct an object illumination migration method based on gradient operations, mainly comprising: image alignment — first detect feature points automatically or mark them manually, then align the destination object and the reference object with an image warping method, to find the mapping relations between the destination object and the reference object; image layering — first convert the image from the RGB color space to the CIELAB color space and divide it into a lightness layer and a chromaticity layer (the layering operates on the image of the object region, i.e., both the destination object and the reference object are divided into lightness and chromaticity layers); only the lightness layer is processed, while the chromaticity layer is kept unchanged; weighted least squares filtering — decompose the lightness layer into a large-scale layer and a detail layer with a weighted least squares filter, so that illumination-related information stays on the large-scale layer and illumination-independent information stays on the detail layer; gradient operation — migrate the gradients caused by illumination changes from the gradient field of the reference object's large-scale layer to that of the destination object's large-scale layer; Poisson integration — solve the Poisson equation under a gray-level constraint, turning the gradient image back into a gray-scale image to obtain the destination object's new large-scale layer; image combination — mix the destination object's new large-scale layer with its detail layer to obtain the new lightness layer, and then mix it with the destination object's chromaticity layer to obtain the illumination migration result. A code sketch of this overall pipeline is given below.
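For illustration only, here is a minimal Python sketch of this pipeline, assuming NumPy and OpenCV; the helpers `warp_reference`, `adaptive_lambda`, `wls_filter`, `transfer_gradients`, and `poisson_integrate` are hypothetical names sketched in later sections, not identifiers from the patent:

```python
import cv2
import numpy as np

def illumination_transfer(target_bgr, reference_bgr, target_pts, reference_pts):
    """Illustrative end-to-end flow of the six steps described above."""
    # 1. Image alignment: warp the reference object onto the target's geometry.
    ref_aligned = warp_reference(reference_bgr, target_pts, reference_pts)

    # 2. Image layering: lightness (L) is processed; chromaticity (a, b) is kept.
    tgt_lab = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2Lab).astype(np.float64)
    ref_lab = cv2.cvtColor(ref_aligned, cv2.COLOR_BGR2Lab).astype(np.float64)
    tgt_l, tgt_a, tgt_b = cv2.split(tgt_lab)
    ref_l = cv2.split(ref_lab)[0]

    # 3. WLS filtering: large-scale layer s; detail layer = lightness / large-scale.
    lam = adaptive_lambda(tgt_l / 255.0)
    tgt_s = wls_filter(tgt_l / 255.0, lam=lam)
    ref_s = wls_filter(ref_l / 255.0, lam=lam)
    detail = (tgt_l / 255.0) / (tgt_s + 1e-6)

    # 4-5. Gradient migration on the large-scale layers, then Poisson integration
    # under the gray-level constraint toward the reference large-scale layer.
    # (The bilateral pre-filtering of ref_s, described later, is omitted for brevity.)
    new_s = poisson_integrate(transfer_gradients(tgt_s, ref_s), ref_s)

    # 6. Image combination: new lightness = new large-scale * detail; re-attach a, b.
    new_l = np.clip(new_s * detail * 255.0, 0, 255)
    out_lab = cv2.merge([new_l, tgt_a, tgt_b]).astype(np.uint8)
    return cv2.cvtColor(out_lab, cv2.COLOR_Lab2BGR)
```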
Image alignment mainly addresses possible inconsistencies in geometry, pose, expression, and so on between the reference object image and the destination object image; it comprises two steps, feature point localization and image warping. First, an active contour model is used to obtain rough feature point coordinates, and the user then adjusts the feature point positions interactively; the reference object is then warped using the feature points of the reference object and the destination object. The present invention adopts an image warping method based on affine transformations. First, the destination object is triangulated using its feature points; then, for each triangle in the destination object, the corresponding triangle in the reference object is found, and the affine transformation matrix between the two triangles is computed from the vertex coordinates of the triangles in the destination object and the reference object; next, backward warping is used to find, for each pixel in the destination object, the corresponding coordinates in the reference object; finally, each pixel in the destination object takes the value at its corresponding coordinates in the reference object, yielding the warped reference object.
In the color decomposition process, the present invention selects the CIELAB color space and decomposes the color image into a lightness channel L and color channels a and b, where the L channel contains the lightness information and the a and b channels contain the color information. This avoids the discordant colors that easily result from processing the three RGB channels separately.
Detail layering uses a weighted least squares filter to decompose the lightness layer into a large-scale layer, and obtains the detail layer by division. The detail layer can be regarded as the illumination-independent component, and the large-scale layer as the illumination-dependent component. During detail decomposition, the parameters of the weighted least squares filter are computed adaptively, applying different levels of smoothing in different image regions, so that the large-scale layer contains more illumination-related information and the detail layer contains more illumination-independent feature information. The filtering parameter is computed as follows: in non-smooth image regions (for example beards, eyebrows, and similar areas) a larger smoothing value is set; in smooth image regions a smaller smoothing value is set. The non-smoothness of the image is computed from the image gradient: in each local window of the gradient image, count the pixels whose gradient value exceeds a certain threshold, and normalize this count to represent the non-smoothness of the image.
In the gradient field processing, the reference object's large-scale layer is first bilaterally filtered with the destination object's large-scale layer as the guide, preserving some edge information of the destination object, and the filtered large-scale layer is transformed into the gradient field. In order to distinguish contour regions from non-contour regions and better handle the gradients of contour regions, the present invention generates a mask image that fades gradually at the contours; because the mask changes gradually near contour regions, the larger gradients of the reference object's non-contour regions can blend smoothly with the destination object near the contours.
The present invention processes contour regions and non-contour regions separately: for non-contour regions, the larger gradients of the non-contour regions are migrated onto the destination object; for contour regions, only the gradients caused by illumination are migrated onto the destination object.
Image combination mixes the processed large-scale layer with the destination object's detail layer to obtain the lightness layer of the illumination migration result, which is then mixed with the destination object's chromaticity layer to obtain the final illumination migration result.
Compared with the prior art, the present invention is characterized in that: 1. the image is divided into a lightness layer and a chromaticity layer, and the lightness layer is further divided by a weighted least squares filter into a large-scale layer and a detail layer, so that illumination-related information is retained on the large-scale layer while the detail information of the image object stays on the detail layer; 2. the gradient changes caused by illumination in the reference image are migrated onto the target object, with boundary regions and non-boundary regions processed separately, making the generated result more realistic; 3. the illumination migration method provided by the present invention needs only one reference object, requires neither estimates of nor assumptions about the geometry of the destination object, and makes no assumptions about the illumination model, yet can generate realistic illumination migration results.
Description of drawings:
Fig. 1 is the main flow chart of the present invention;
Fig. 2 is the flow chart of the user marking feature points on the destination object;
Fig. 3 is a schematic diagram of the image affine transformation of the present invention;
Fig. 4(a) is a schematic diagram of the feature point cloud used for the affine-transformation-based triangulation and pixel mapping; Fig. 4(b) is a schematic diagram of the triangulation; Fig. 4(c) is a schematic diagram of the pixel mapping;
Fig. 5 is the flow chart of the affine-transformation-based image warping method of the present invention;
Fig. 6 is the flow chart of the gradient operation of the present invention.
Embodiment:
The present invention is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the main flow chart of the present invention, the object illumination migration method based on gradient operations comprises the following basic process: first, the reference object (i.e., the part of the input image in the reference object region) is aligned to the destination object (i.e., the part of the input image in the destination object region) using an active-contour-model face localization tool and an image warping method; then both the reference object and the destination object are decomposed into a lightness layer and a chromaticity layer, and the lightness layer is divided by a weighted least squares filter into a large-scale layer and a detail layer; all subsequent operations are carried out on the large-scale layer. The large-scale layers of the reference and destination objects are first transformed from the gray-scale domain into the gradient domain, the gradient changes caused by illumination in the reference image are migrated onto the target image, and different regions are handled in different ways during the migration; Poisson integration then restores the large-scale layer from the gradient domain to the gray-scale domain, yielding the destination object's new large-scale layer, which is recombined with the destination object's chromaticity layer and detail layer to obtain the illumination migration result of the destination object under the reference object's illumination condition.
Referring to Fig. 2, the flow chart of the user marking feature points on the destination object: when the image object is a face, automatic or semi-automatic methods are used to obtain the feature coordinates of the image object; when the image object is not a face, the user defines the feature points entirely by hand. The present invention uses an active-contour-model face localization tool to locate facial image features; this tool obtains the coordinates of 116 feature points on a face image and provides manual adjustment of feature point positions, so the user can manually adjust the coordinates of each feature point.
Referring to Fig. 3, the image affine transformation: the present invention warps each triangle in the reference object according to its corresponding triangle in the destination object. For each pixel coordinate (x, y) in the destination object, the coordinates (x', y') of its corresponding pixel in the reference object are computed; the correspondence (x', y') = f(x, y) between pixel (x, y) and pixel (x', y') is computed from the vertices of the triangles containing them. Let the three pairs of corresponding vertex coordinates of the two triangles be (x_1, y_1) and (x'_1, y'_1), (x_2, y_2) and (x'_2, y'_2), and (x_3, y_3) and (x'_3, y'_3); the affine transformation matrix T_affine is defined as:

$$T_{affine} = \begin{bmatrix} x'_1 & x'_2 & x'_3 \\ y'_1 & y'_2 & y'_3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{bmatrix}^{-1}$$

For a pixel coordinate (x, y) in the destination object, the affine transformation matrix gives the coordinates (x', y') of its corresponding pixel in the reference object:

$$[x'\ y'\ 1]^T = T_{affine}\,[x\ y\ 1]^T$$

Each pixel coordinate of the destination object I_B is assigned the value of the reference object I_A at the corresponding coordinates, I'_A(x, y) = I_A(f(x, y)), yielding the warped reference object.
Referring to Fig. 4 and Fig. 5, the affine-transformation-based image warping method is explained. There are no connecting lines between the feature points, as shown in Fig. 4(a). First, the destination object is Delaunay-triangulated using its feature points, so that the resulting triangles do not overlap and cover the whole point cloud, as shown in Fig. 4(b); then, for each triangle in the destination object, the corresponding triangle in the reference object is found, as shown in Fig. 4(c), and the affine transformation matrix between the two triangles is computed from the vertex coordinates of the triangles in the destination object and the reference object; next, backward warping is used to find, for each pixel (x, y) in the destination object, the corresponding coordinates (x', y') in the reference object; finally, each pixel (x, y) in the destination object takes the value at its corresponding coordinates (x', y') in the reference object, yielding the warped reference object. A sketch of this warp is given below.
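A minimal sketch of this backward affine warp, assuming SciPy's Delaunay triangulation, nearest-neighbor sampling, and images of equal size; `warp_reference` is an illustrative name:

```python
import numpy as np
from scipy.spatial import Delaunay

def warp_reference(reference, tgt_pts, ref_pts):
    """Backward affine warp of `reference` onto the destination geometry.

    tgt_pts / ref_pts: corresponding feature points, arrays of shape (N, 2)
    in (x, y) order; both images are assumed to have the same size."""
    tri = Delaunay(tgt_pts)            # Delaunay triangulation of the target points
    h, w = reference.shape[:2]

    # One affine matrix per triangle: T = P_ref @ P_tgt^{-1} (homogeneous vertices).
    T = []
    for simplex in tri.simplices:
        P_tgt = np.vstack([np.asarray(tgt_pts)[simplex].T, np.ones(3)])
        P_ref = np.vstack([np.asarray(ref_pts)[simplex].T, np.ones(3)])
        T.append(P_ref @ np.linalg.inv(P_tgt))   # target coords -> reference coords

    # Assign every pixel to the target triangle containing it (-1 = outside).
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
    idx = tri.find_simplex(pix)

    warped = np.zeros_like(reference)
    for i, t in enumerate(T):
        sel = pix[idx == i]
        if len(sel) == 0:
            continue
        src = t @ np.column_stack([sel, np.ones(len(sel))]).T   # rows: x', y', 1
        sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)    # nearest-neighbor sample
        sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
        warped[sel[:, 1].astype(int), sel[:, 0].astype(int)] = reference[sy, sx]
    return warped
```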
The present invention uses a weighted least squares filter to decompose the lightness image l into a large-scale layer s and a detail layer. The weighted least squares filter preserves the detail information in the lightness layer well; it is solved by minimizing the following energy function:

$$E = |l - s|^2 + \lambda H(\nabla s, \nabla l)$$

$$H(\nabla s, \nabla l) = \sum_p \left( \frac{(\partial s/\partial x)_p^2}{|(\partial l/\partial x)_p|^\alpha + \epsilon} + \frac{(\partial s/\partial y)_p^2}{|(\partial l/\partial y)_p|^\alpha + \epsilon} \right)$$

where s is the large-scale image to be solved; |l − s|² is the data term that keeps l and s as close as possible; H(∇s, ∇l) is the regularization term that makes s as smooth as possible by minimizing its partial derivatives; and the subscript p denotes the spatial position of a pixel in the image. Minimizing H(∇s, ∇l) causes a nonlinear scaling of the gradients of the input image l: in regions with larger gradients the scaling is smaller, while regions with smaller gradients undergo larger scaling. α controls the overall sensitivity of the nonlinear scaling to the gradient magnitudes of l; increasing α makes the generated result keep sharper edges. ε is a very small number (typically 0.0001) that prevents division by zero in smooth regions of l. λ is the balance factor between the data term and the regularization term; increasing λ makes the filtered image smoother. In the weighted least squares filtering process, the present invention applies different levels of smoothing in different image regions by designing λ values of different sizes for different regions.
To compute the λ values adaptively, first compute the gradient image of the lightness layer and count, for each pixel, the pixels within its local window whose gradient exceeds a threshold; then normalize this count and use it to compute the filter parameter. Specifically, different λ values are assigned to different image regions. First, compute the horizontal and vertical gradients of the lightness layer l, $(\partial l/\partial x)$ and $(\partial l/\partial y)$, and choose a threshold t_1. Then, for each pixel p, count the pixels within the local window w_p centered at p whose gradient magnitude exceeds the threshold:

$$\gamma(p) = \sum_{i \in w_p} \left( (\partial l/\partial x)_i^2 + (\partial l/\partial y)_i^2 \ge t_1 \right)$$

γ(p) is normalized to [0, 1], and the λ value is then set to:

$$\lambda(p) = \lambda_s + (\lambda_l - \lambda_s)\,\gamma(p)$$

where λ_s and λ_l are the λ values corresponding to the lowest and highest levels of smoothing applied to the image. In our implementation, α = 1.2, the local window radius is set to 8, λ_s = 1, λ_l = 4, and the threshold t_1 = 0.02. A sketch of this computation follows.
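A sketch of the adaptive parameter map under the stated settings (window radius 8, t_1 = 0.02, λ_s = 1, λ_l = 4); normalizing by the window average is one plausible reading of "normalize γ(p) to 0–1":

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_lambda(l, radius=8, t1=0.02, lam_s=1.0, lam_l=4.0):
    """Per-pixel WLS smoothing parameter from local non-smoothness.

    gamma(p): fraction of pixels in the window around p whose squared
    gradient magnitude reaches t1, normalized to [0, 1]."""
    gx = np.gradient(l, axis=1)
    gy = np.gradient(l, axis=0)
    strong = ((gx ** 2 + gy ** 2) >= t1).astype(np.float64)
    gamma = uniform_filter(strong, size=2 * radius + 1)  # windowed count / window area
    gamma /= max(gamma.max(), 1e-12)                     # normalize to [0, 1]
    return lam_s + (lam_l - lam_s) * gamma               # larger lambda in rough regions
```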
To apply the weighted least squares filter, first compute the gradient image and the affinities between each pixel and its neighbors; then compute the spatially inhomogeneous Laplacian matrix and construct a sparse linear system; finally solve the sparse linear system to obtain the filtering result. A sketch of this solve is given below.
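A compact sketch of such a solver, following the energy above with forward differences and a sparse direct solve; `lam` may be the per-pixel map from `adaptive_lambda`, and border differences are simply disabled rather than handled exactly:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_filter(l, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving WLS filter: minimize |l - s|^2 + lam * H(grad s, grad l).

    l   : lightness layer in [0, 1], shape (h, w)
    lam : scalar or per-pixel (h, w) map (the adaptive case)."""
    h, w = l.shape
    n = h * w
    lam_map = np.broadcast_to(np.asarray(lam, dtype=np.float64), (h, w))

    # Forward differences of the input; smoothness weights are
    # lam / (|grad l|^alpha + eps), i.e. weak smoothing across strong edges.
    dx = np.pad(np.diff(l, axis=1), ((0, 0), (0, 1)))
    dy = np.pad(np.diff(l, axis=0), ((0, 1), (0, 0)))
    ax = lam_map / (np.abs(dx) ** alpha + eps)
    ay = lam_map / (np.abs(dy) ** alpha + eps)
    ax[:, -1] = 0.0   # disable wrap-around differences at the right border
    ay[-1, :] = 0.0   # ... and at the bottom border

    # Sparse forward-difference operators and the inhomogeneous Laplacian.
    e = np.ones(n)
    Dx = sp.diags([-e, np.ones(n - 1)], [0, 1], shape=(n, n), format='csr')
    Dy = sp.diags([-e, np.ones(n - w)], [0, w], shape=(n, n), format='csr')
    A = sp.eye(n) + Dx.T @ sp.diags(ax.ravel()) @ Dx + Dy.T @ sp.diags(ay.ravel()) @ Dy

    return spsolve(A.tocsc(), l.ravel()).reshape(h, w)
```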
Referring to Fig. 6, the gradient operation flow chart, the gradient operation on the image is carried out as follows:
In the gradient field processing, the reference object's large-scale layer is first bilaterally filtered with the destination object's large-scale layer as the guide, preserving some edge information of the destination object. The present invention uses the following bilateral filter:

$$B(I)_p = \frac{1}{W_p} \sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\, G_{\sigma_r}(|I_p - I_q|)\, I_q$$

$$W_p = \sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\, G_{\sigma_r}(|I_p - I_q|)$$

$$G_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right)$$

where I is the input image; p is the image coordinate index; B(I) is the filtered result; ω is the window centered at pixel p, and q is a pixel within the window ω; $G_{\sigma_d}$ is the spatial proximity function, with σ_d the width of the spatial kernel; $G_{\sigma_r}$ is the gray-level similarity function, with σ_r the width of the range kernel; W_p is the normalization term that keeps each pixel value of B(I) between 0 and 1. A sketch of this filtering follows.
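A direct (unoptimized) sketch of this filter in its joint form, with the range weights computed on the destination (guide) layer as the text prescribes; the Gaussian normalization constants cancel in the ratio and are omitted:

```python
import numpy as np

def joint_bilateral(src, guide, sigma_d=5.0, sigma_r=0.1, radius=8):
    """Joint bilateral filter: smooth `src` (the reference large-scale layer)
    while preserving the edges of `guide` (the destination large-scale layer).
    Values are assumed to lie in [0, 1]."""
    h, w = src.shape
    pad_s = np.pad(src, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    acc = np.zeros_like(src)
    weight = np.zeros_like(src)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            g_spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_d ** 2))
            win_s = pad_s[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            win_g = pad_g[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            g_range = np.exp(-(win_g - guide) ** 2 / (2.0 * sigma_r ** 2))
            wgt = g_spatial * g_range
            acc += wgt * win_s
            weight += wgt
    return acc / weight   # the W_p normalization
```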
In order to distinguish contour regions from non-contour regions, the present invention generates a mask image β that fades gradually at the contours; because β changes gradually near contour regions, the larger gradients of the reference object's non-contour regions can blend smoothly with the destination object near the contours.

The β image is computed as follows: define a contour image k whose value is 0 in the skin region, 1 in the eyebrow regions, eye regions, lip outline region, and nose outline region, and 2 in the nostril regions. The mask image β is then computed as:

$$\beta(p) = \min_{q \in \omega}\left(1 - k(q)\, e^{-\frac{\|p-q\|^2}{2\sigma^2}}\right)$$

where p is the image coordinate index; ω is the window centered at pixel p, q is a pixel index within the window, and σ is the window size. A sketch of this computation follows.
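A sketch of the mask computation; `contour_mask` is an illustrative name, and the label image `k` is assumed to be given at the same size as the large-scale layers:

```python
import numpy as np

def contour_mask(k, radius=8, sigma=4.0):
    """Mask image beta that fades near contours (formula above).

    k: contour label image (0 = skin, 1 = eyebrow/eye/lip/nose outlines,
    2 = nostrils)."""
    h, w = k.shape
    pad_k = np.pad(k.astype(np.float64), radius, mode='constant')
    beta = np.ones((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            g = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            win = pad_k[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            beta = np.minimum(beta, 1.0 - win * g)  # running min over the window
    return beta
```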
The filtered large-scale layer is transformed into the gradient field. In the gradient field processing, contour regions and non-contour regions are handled differently: for non-contour regions, a threshold is set, gradients above this threshold are regarded as caused by illumination, and the reference object's non-contour gradients above this threshold are migrated onto the destination object; for contour regions, the mask image β is used to migrate onto the destination object only those gradient changes in the reference object's contour regions that are not caused by the contours themselves.

For a given destination object s_B and reference object s_A, and the corresponding β image, the gradient field operation is carried out by the following formula:

$$\nabla s'_B(p) = \begin{cases} \nabla s_A(p), & \text{if } \beta(p)\,\|\nabla s_A(p)\| > \|\nabla s_B(p)\| \\ \nabla s_B(p), & \text{otherwise} \end{cases}$$

where p is the image coordinate index; ∇s_B, ∇s_A, and ∇s'_B denote the gradient images of the destination object's large-scale layer, the reference object's large-scale layer, and the destination object's large-scale layer after illumination migration, respectively; the condition $\beta(p)\,\|\nabla s_A(p)\| > \|\nabla s_B(p)\|$ tests whether the reference object's gradient should be transferred, and the mask image β(p) prevents the gradients that form the main contours of the reference object from being migrated onto the destination object. A sketch of this rule follows.
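A sketch implementing this rule directly, using central-difference gradients (the patent does not fix the discretization); with β = 1 everywhere the test reduces to a pure magnitude comparison, as in the non-contour case:

```python
import numpy as np

def transfer_gradients(s_b, s_a, beta=None):
    """Gradient migration: take the reference gradient wherever its
    mask-attenuated magnitude exceeds the destination's (formula above)."""
    gx_b, gy_b = np.gradient(s_b, axis=1), np.gradient(s_b, axis=0)
    gx_a, gy_a = np.gradient(s_a, axis=1), np.gradient(s_a, axis=0)
    if beta is None:
        beta = np.ones_like(s_b)      # no contour attenuation
    take_ref = beta * np.hypot(gx_a, gy_a) > np.hypot(gx_b, gy_b)
    gx = np.where(take_ref, gx_a, gx_b)
    gy = np.where(take_ref, gy_a, gy_b)
    return gx, gy
```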
The present invention uses Poisson integration to transform the gradient image back into a gray-scale image. A gray-level constraint is added to the solution process, so that the resulting large-scale layer of the destination object is closer to the reference object's large-scale layer in overall brightness. The Poisson integration minimizes the following energy function:

$$E(f) = \sum_{p \in I} \left[ E_d(p) + E_g(p) \right]$$

where I denotes the input image and p the image coordinate index; the gray-level constraint term E_d(p) and the gradient constraint term E_g(p) are expressed respectively as:

$$E_d(p) = \lambda\,[f(p) - d(p)]^2$$

$$E_g(p) = [f_x(p) - g_x(p)]^2 + [f_y(p) - g_y(p)]^2$$

where f(p) is the large-scale layer to be solved and d(p) is the reference object's large-scale layer; f_x(p) and f_y(p) are the horizontal and vertical gradient images of the large-scale layer to be solved; g_x(p) and g_y(p) are the horizontal and vertical gradient images of the processed large-scale layer, i.e., the two components of ∇s'_B; the parameter λ controls the influence of the gray-level constraint on the integration result. A sketch of this solve follows.
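A sketch of the screened-Poisson solve for this energy, reusing the sparse forward-difference operators from the WLS sketch; the identity term from the gray-level constraint makes the system well-posed without explicit boundary conditions:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def poisson_integrate(grads, d, lam=0.1):
    """Recover f whose gradients match (g_x, g_y) while staying close to d
    (the reference large-scale layer) in gray level: the normal equations of
    sum lam*(f-d)^2 + |Dx f - gx|^2 + |Dy f - gy|^2."""
    gx, gy = grads
    h, w = d.shape
    n = h * w
    e = np.ones(n)
    Dx = sp.diags([-e, np.ones(n - 1)], [0, 1], shape=(n, n), format='csr')
    Dy = sp.diags([-e, np.ones(n - w)], [0, w], shape=(n, n), format='csr')
    # Disable invalid forward differences at the right/bottom borders.
    wx = np.ones((h, w)); wx[:, -1] = 0.0
    wy = np.ones((h, w)); wy[-1, :] = 0.0
    A = (lam * sp.eye(n)
         + Dx.T @ sp.diags(wx.ravel()) @ Dx
         + Dy.T @ sp.diags(wy.ravel()) @ Dy)
    b = (lam * d.ravel()
         + Dx.T @ (wx.ravel() * gx.ravel())
         + Dy.T @ (wy.ravel() * gy.ravel()))
    return spsolve(A.tocsc(), b).reshape(h, w)
```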
After solving the gray-level-constrained Poisson equation to obtain the destination object's large-scale layer, it is combined with the destination object's detail layer and chromaticity layer to obtain the destination object image after illumination migration.
The above is only a basic explanation of the present invention, and any equivalent transformation made according to the technical scheme of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. An object illumination migration method based on gradient operations, characterized in that the method comprises the following steps:
(1) aligning the reference object to the destination object according to the pixel correspondences between the destination object and the reference object;
(2) decomposing both the destination object and the reference object into a chromaticity layer and a lightness layer;
(3) filtering the lightness layer with an adaptive-parameter weighted least squares filter to obtain a large-scale layer, and obtaining a detail layer from the lightness layer and the large-scale layer;
(4) migrating the illumination information from the gradient image of the reference object's large-scale layer to the gradient image of the destination object's large-scale layer;
(5) transforming the large-scale layer from the gradient domain to the gray-scale domain to obtain the destination object's new large-scale layer, and combining it with the destination object's detail layer to obtain the destination object's new lightness layer;
(6) mixing the destination object's new lightness layer with the destination object's chromaticity layer to obtain the illumination migration result;
wherein the process in step (4) of migrating the illumination information from the gradient image of the reference object's large-scale layer to the gradient image of the destination object's large-scale layer comprises the following steps:
(4.1) using the feature points, generating a face region image and a contour region image, and generating a mask image;
(4.2) bilaterally filtering the reference object according to the destination object, and transforming the filtering result into the gradient field;
(4.3) migrating the gradients of the reference object's non-contour regions that are greater than a specific threshold to the destination object's non-contour regions;
(4.4) in the reference object's contour regions, using the mask image β to migrate onto the destination object the gradient changes not caused by contours.
2. The object illumination migration method based on gradient operations according to claim 1, characterized in that the image alignment of step (1) adopts the following steps:
(1.1) using an active contour model to obtain rough feature point coordinates, and then optimizing the feature point positions through user interaction;
(1.2) using the feature points of the reference object and the destination object as control points, warping the reference object according to the destination object.
3. The object illumination migration method based on gradient operations according to claim 2, characterized in that the warping of step (1.2) adopts the following steps:
(1.2.1) performing Delaunay triangulation on the destination object using the marked points;
(1.2.2) for each triangle in the destination object, finding its corresponding triangle in the reference object, and computing the affine transformation matrix of the two triangles from the vertex coordinates of the triangles in the destination object and the reference object;
(1.2.3) using backward warping to find, for each pixel in the destination object, the corresponding coordinates in the reference object;
(1.2.4) taking, for each pixel in the destination object, the value at its corresponding coordinates in the reference object, to obtain the warped reference object.
4. The object illumination migration method based on gradient operations according to claim 1, characterized in that the decomposition of both the destination object and the reference object into a chromaticity layer and a lightness layer in step (2) is accomplished by converting the image from the RGB color space to the CIELAB color space.
5. The object illumination migration method based on gradient operations according to claim 1, characterized in that the mixing with the destination object's chromaticity layer in step (6) means combining the new lightness layer and the chromaticity layer in the CIELAB color space and then converting the result from the CIELAB color space to the RGB color space.
CN 201110299716 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation Expired - Fee Related CN102360513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110299716 CN102360513B (en) 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110299716 CN102360513B (en) 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation

Publications (2)

Publication Number Publication Date
CN102360513A CN102360513A (en) 2012-02-22
CN102360513B true CN102360513B (en) 2013-02-06

Family

ID=45585838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110299716 Expired - Fee Related CN102360513B (en) 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation

Country Status (1)

Country Link
CN (1) CN102360513B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665062B * 2012-03-16 2016-03-30 Huawei Technologies Co., Ltd. Method and device for stabilizing a target object image in video
CN103337088B * 2013-07-10 2016-07-06 Beihang University Edge-preserving facial image shadow editing method
CN103839220A * 2014-02-28 2014-06-04 Zhejiang University Seamless image cloning method based on matrix decomposition
CN105447829B * 2015-11-25 2018-06-08 Xiaomi Technology Co., Ltd. Image processing method and device
CN107527334A * 2017-08-28 2017-12-29 Beijing Xiaomi Mobile Software Co., Ltd. Face illumination migration method and device
CN107833198B * 2017-11-09 2021-06-01 Beijing Electronic Science and Technology Institute Outdoor scene relighting method based on large-scale decomposition
CN111063017B * 2018-10-15 2022-04-12 Huawei Technologies Co., Ltd. Illumination estimation method and device
CN109300170B * 2018-10-18 2022-10-28 Yunnan University Method for transferring the shadow of a portrait photo
CN110175255B * 2019-05-29 2022-04-05 Tencent Healthcare (Shenzhen) Co., Ltd. Image annotation method, and annotation display method and device based on pathological images

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080114379A * 2007-06-27 2008-12-31 Sungkyunkwan University Industry-Academic Cooperation Foundation Method and apparatus for illumination normalization
CN101441774A (en) * 2008-12-26 2009-05-27 北京航空航天大学 Dynamic scene real time double face refraction drafting method based on image mapping space
CN101950352A (en) * 2010-05-31 2011-01-19 北京智安邦科技有限公司 Target detection method capable of removing illumination influence and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090046099A1 (en) * 2006-11-13 2009-02-19 Bunkspeed Real-time display system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiaowu Chen, Mengmeng Chen, Xin Jin and Qinping Zhao. Face illumination transfer through edge-preserving filters. IEEE Conference on Computer Vision and Pattern Recognition 2011, 2011-06-25, pp. 281-287. *
Xin Jin et al. Learning artistic lighting template from portrait photographs. ECCV 2010, 2010-09-11, pp. 101-114. *

Also Published As

Publication number Publication date
CN102360513A (en) 2012-02-22

Similar Documents

Publication Publication Date Title
CN102360513B (en) Object illumination moving method based on gradient operation
CN112509151B (en) Method for generating sense of reality of virtual object in teaching scene
CN103824253B Facial features deformation method based on locally precise image deformation
CN102509346A (en) Object illumination migration method based on edge retaining
Liao et al. Automatic caricature generation by analyzing facial features
CN103606186B Virtual hairstyle modeling method for images and videos
CN105719326A (en) Realistic face generating method based on single photo
CN109859098A Facial image fusion method and apparatus, computer device and readable storage medium
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN109035388A Three-dimensional face model reconstruction method and device
CN105469407A (en) Facial image layer decomposition method based on improved guide filter
CN103258343B Eye image processing method based on image editing
CN102509357B (en) Pencil sketch simulating and drawing system based on brush stroke
CN103646416A (en) Three-dimensional cartoon face texture generation method and device
Li et al. Restoration of brick and stone relief from single rubbing images
WO2008012808A2 (en) Interactive segmentation of images with single scribbles
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
KR20230097157A (en) Method and system for personalized 3D head model transformation
CN109035413A Virtual try-on method and system based on image deformation
CN103826032A (en) Depth map post-processing method
CN103914862A Pencil sketch simulation method based on edge tangent flow
CN117157673A (en) Method and system for forming personalized 3D head and face models
CN103337088B Edge-preserving facial image shadow editing method
CN104157002A (en) Color image texture force tactile reproduction method based on color transform space
CN116648733A (en) Method and system for extracting color from facial image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130206

Termination date: 20150930

EXPY Termination of patent right or utility model