CN102360513A - Object illumination moving method based on gradient operation - Google Patents

Object illumination moving method based on gradient operation

Info

Publication number
CN102360513A
CN102360513A CN2011102997162A CN201110299716A
Authority
CN
China
Prior art keywords
image
destination object
gradient
layer
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102997162A
Other languages
Chinese (zh)
Other versions
CN102360513B (en)
Inventor
陈小武
赵沁平
金鑫
陈萌萌
吴洪宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110299716 priority Critical patent/CN102360513B/en
Publication of CN102360513A publication Critical patent/CN102360513A/en
Application granted granted Critical
Publication of CN102360513B publication Critical patent/CN102360513B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an object illumination migration method based on gradient operations, and provides a new technical scheme for generating virtual-scene lighting effects from stock footage. The method builds the complete illumination-migration pipeline from the following steps: image alignment, image layering, weighted least-squares filtering, gradient operations, Poisson integration, and image recombination. It improves the weighted least-squares filter by computing the filtering parameter adaptively from the unsmooth content of the image, so that stronger filtering is applied in unsmooth regions and more detail information is preserved in the detail layer; and it adds a gray-level constraint to the gradient operation, so that the overall gray level of the illumination-migration result is closer to that of the reference object. The method can be widely applied and promoted in fields such as interactive digital entertainment, film and television production, and art design and creation.

Description

Object illumination migration method based on gradient operations
Technical field
The present invention relates to the fields of virtual reality and computer vision, and specifically to an object illumination migration method based on gradient operations.
Background technology
Generating mixed-reality scenes from video material is an important research direction in virtual reality, and a research focus at the intersection of virtual reality, augmented reality, computer vision, and related fields. Because the video scene that forms the virtual scene and the scene objects often come from different video materials, the lighting of a scene object and that of the video scene may differ considerably; yet a video virtual scene requires every scene object to have consistent lighting, and current illumination-fusion methods for video material have difficulty meeting this need. The illumination-migration problem for video scene objects is therefore: transfer the lighting of a reference object in the target video scene onto a scene object, generating the lighting of that scene object under the illumination conditions of the target video scene.
At present there are several video-based methods for synthesizing facial lighting effects, all of which require complex light-capture equipment. In 2000, Debevec et al. at the University of Southern California proposed an illumination-migration method for static scenes under a fixed viewpoint: images of a static face are captured under 2048 point-light illumination conditions, and linear combinations of the captured image data generate images of the static face under new illumination conditions; this method is limited to synthesizing lighting effects for static objects under a fixed viewpoint. In 2007, Peers et al. at the University of Southern California proposed a quotient-image method for face relighting. By capturing the reflectance field of a static reference face under different illumination conditions, and using the ratio between the image of the same subject under a given illumination condition and its image under frontal uniform illumination, they build a material-property map of the subject under that illumination condition, also called a quotient image. The quotient image corresponding to the desired illumination condition is then warped and transferred onto the target face to generate the lighting effect of the target image or video. This method has been used for post-production relighting of face videos. Its limitation is that the target face and the database face must have similar geometric and material properties, so it can currently only handle face relighting and cannot be applied to the illumination migration of a whole human body.
There are also illumination-migration methods based on geometry estimation or geometric assumptions, which usually assume a Lambertian illumination model. In 2007, Yang Wang et al. at Carnegie Mellon University proposed a face relighting method that works without known face geometry or albedo information, requiring only a single face image under arbitrary harsh lighting as input. It fits a deformable face model to estimate face geometry, represents illumination with a spherical-harmonic basis, and uses a Markov random field model to capture the statistical distribution and spatial coherence of face texture in subregions. By estimating the illumination of the input face image with a morphable face model, the method can handle complex illumination conditions as well as over- and under-exposed face images, and can substantially improve face-recognition accuracy. However, it assumes the face is a Lambertian surface and estimates geometry by fitting a deformable face model to the face image, so the generated face images lack some realism compared with real photographs.
Other methods generate illumination-migration results by decomposing and recombining images, separating an image into an illumination-dependent part and an illumination-independent part. In 2009, Qing Li et al. at the University of Houston proposed a face relighting technique based on a logarithmic total-variation model that transfers the lighting of a reference face onto a target face. The method needs no knowledge of face geometry or illumination: a face image is decomposed into illumination information and illumination-independent facial features, and illumination migration is achieved by replacing the illumination information of the target object with that of the reference object. The algorithm first warps the reference face with an image-morphing method; then decomposes both the target face and the warped reference face into an illumination-dependent part and an illumination-independent part; and finally exchanges the illumination-dependent parts of the two face images to obtain the target object's illumination-migration result. The method needs only a single reference object and can handle the illumination migration of both color and grayscale images. Its limitations are: 1. it assumes the target face and the reference face have similar complexions; 2. the illumination-migration result is strongly affected by the appearance difference between the target face and the reference face.
Summary of the invention
In view of the above practical needs and key problems, the object of the invention is to propose an object illumination migration method based on gradient operations. The method needs neither to estimate nor to assume the geometry of the target object, makes no assumption about the illumination model, and only needs a single reference object to transfer the illumination information of the reference object to the target object in the gradient field.
The invention processes only the objects of interest in an image; such an object of interest, for example the target object or the reference object, is called an object. An object is the set of pixels that make up that object in the image, i.e. a part of the image.
The technical scheme adopted by the invention constructs an object illumination migration method based on gradient operations, mainly comprising: image alignment — feature points are detected automatically or marked manually, and the target object and reference object are aligned by an image-warping method to establish the mapping between them; image layering — the image is converted from the RGB color space to the CIELAB color space and divided into a lightness layer and a chroma layer (the layering is applied to the object regions, i.e. both the target object and the reference object are split into lightness and chroma layers); only the lightness layer is processed, and the chroma layer is kept unchanged; weighted least-squares filtering — a weighted least-squares filter decomposes the lightness layer into a large-scale layer and a detail layer, leaving illumination-related information in the large-scale layer and illumination-independent information in the detail layer; gradient operation — gradients caused by illumination changes are transferred from the gradient field of the reference object's large-scale layer to that of the target object's large-scale layer; Poisson integration — a Poisson equation is solved under a gray-level constraint, turning the gradient image back into a gray-level image and yielding the target object's new large-scale layer; image recombination — the target object's new large-scale layer is combined with its detail layer to form the target object's new lightness layer, which is then combined with the target object's chroma layer to produce the illumination-migration result.
Image alignment addresses the possible inconsistency of geometry, pose, expression, etc. between the reference-object image and the target-object image; it comprises two steps, feature-point localization and image warping. An active contour model first produces rough feature-point coordinates, which the user then adjusts interactively; the reference object is subsequently warped using the feature points of the reference and target objects. The invention adopts an affine-transform-based image-warping method. First, the target object is triangulated from its feature points. Then, for each triangle in the target object, the corresponding triangle in the reference object is found, and the affine transform matrix between the two triangles is computed from their vertex coordinates in the target and reference objects. Next, backward warping is used to find, for each pixel in the target object, its corresponding coordinate in the reference object. Finally, each pixel in the target object takes the value at its corresponding coordinate in the reference object, which yields the warped reference object.
For color decomposition the invention chooses the CIELAB color space, decomposing a color image into a lightness channel L and color channels a and b, where the L channel carries the lightness information and the a and b channels carry the color information. This avoids the color inconsistencies that easily arise when an image is processed on the three RGB channels separately.
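The RGB-to-CIELAB conversion mentioned above can be sketched as follows — a minimal sRGB-to-Lab conversion under the standard D65 white point, so an image can be split into a lightness (L) layer and chroma (a, b) layers. The constants follow the usual sRGB/CIE definitions; the function name and parameterization are illustrative, not taken from the patent.

```python
import numpy as np

def srgb_to_lab(rgb):
    """rgb: float array in [0, 1], shape (..., 3). Returns a CIELAB array."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # linearize: inverse sRGB gamma
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    # normalize by the D65 white point
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # CIE f() nonlinearity
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16              # lightness layer
    a = 500 * (f[..., 0] - f[..., 1])     # chroma layers
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Only the L output would then be filtered and processed; a and b are carried through unchanged and recombined at the end.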
Detail layering uses the least-squares filter to decompose the lightness layer into a large-scale layer, and obtains the detail layer from the lightness layer and the large-scale layer by division. The detail layer can be regarded as illumination-independent, and the large-scale layer as illumination-dependent. During the detail decomposition the parameters of the weighted least-squares filter are computed adaptively, applying different levels of smoothing in different image regions so that the large-scale layer contains more illumination-related information and the detail layer contains more illumination-independent features. The filter parameter is computed as follows: a larger smoothing value is set in unsmooth image regions (for example beards and eyebrows), and a smaller smoothing value in smooth regions. The degree of unsmoothness is computed from the image gradient: for each local window of the gradient image, count the pixels whose gradient value exceeds a threshold, then normalize this statistic and use it to represent the unsmoothness of the image.
When operating in the gradient field, the reference object's large-scale layer is first bilaterally filtered with the target object's large-scale layer as guidance, preserving some edge information of the target object, and the filtered large-scale layer is transformed to the gradient field. To distinguish contour regions from non-contour regions and handle contour gradients better, the invention generates a mask image that fades gradually at contours; because the mask changes gradually near contour regions, the large gradients of the reference object's non-contour regions can blend smoothly with the target object near contours.
Contour regions and non-contour regions are processed separately: in non-contour regions, the large gradients of the non-contour region are transferred onto the target object; in contour regions, only the gradients caused by illumination are transferred onto the target object.
Image recombination mixes the processed large-scale layer with the target object's detail layer to obtain the lightness layer of the illumination-migration result, which is then combined with the target object's chroma layer to produce the final illumination-migration result.
Compared with the prior art, the invention is characterized in that: 1. the image is divided into a lightness layer and a chroma layer, and the lightness layer is further divided by the least-squares filter into a large-scale layer and a detail layer, so that illumination-related information is retained in the large-scale layer while the detail information of the image object stays in the detail layer; 2. when transferring the illumination-induced gradient changes of the reference image onto the target image, boundary and non-boundary regions are processed separately, making the generated result more realistic; 3. the illumination-migration method requires only a single reference object, needs no estimate of or assumption about the target object's geometry, and makes no assumption about the illumination model, yet can generate realistic illumination-migration results.
Description of drawings:
Fig. 1 is the main flow chart of the invention;
Fig. 2 is the flow chart of the user marking points on the target object;
Fig. 3 is a schematic diagram of the image affine transform;
Fig. 4(a) is the feature-point cloud for the affine-transform-based triangulation and pixel mapping; Fig. 4(b) is the triangulation diagram; Fig. 4(c) is the pixel-mapping diagram;
Fig. 5 is the flow chart of the affine-transform-based image-warping method;
Fig. 6 is the flow chart of the gradient operation.
Embodiments:
The invention is described in detail below with reference to the accompanying drawings.
Referring to the main flow chart in Fig. 1, the object illumination migration method based on gradient operations comprises the following basic steps: first, an active-contour-model face-localization tool and an image-warping method align the reference object (the reference-object region of the input image) to the target object (the target-object region of the input image); then both the reference object and the target object are decomposed into lightness and chroma layers; the least-squares filter splits the lightness layer into a large-scale layer and a detail layer, and all subsequent operations are carried out on the large-scale layer; the large-scale layers of the reference and target images are transformed from the gray-level domain to the gradient field, and the illumination-induced gradient changes of the reference image are transferred onto the target image, with different regions handled in different ways; Poisson integration then restores the large-scale layer from the gradient field to the gray-level domain, giving the target object's new large-scale layer, which is recombined with the target object's chroma and detail layers to yield the illumination-migration result of the target object under the reference object's illumination conditions.
Referring to Fig. 2, the flow chart of the user marking points on the target object: when the image object is a face, automatic or semi-automatic methods obtain the feature coordinates of the image object; when it is not a face, the user defines the feature points entirely by hand. The invention uses an active-contour-model face-localization tool to locate the feature points of a face image; the tool obtains the coordinates of 116 feature points on the face image and also provides manual adjustment of feature-point positions, so the user can adjust each feature point's coordinates.
Referring to the image affine transform in Fig. 3, the invention warps each triangle of the reference object according to the corresponding triangle of the target object. For each pixel coordinate (x, y) in the target object, the coordinate (x', y') of its corresponding pixel in the reference object is computed; the correspondence (x', y') = f(x, y) between pixel (x, y) and pixel (x', y') is determined by the vertices of the triangle containing the pixel. Let the three pairs of corresponding vertices of the two triangles be (x_1, y_1)–(x'_1, y'_1), (x_2, y_2)–(x'_2, y'_2) and (x_3, y_3)–(x'_3, y'_3); the affine transform matrix T_affine is defined as

$$T_{\text{affine}} = \begin{pmatrix} x'_1 & x'_2 & x'_3 \\ y'_1 & y'_2 & y'_3 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{pmatrix}^{-1}$$

For a pixel coordinate (x, y) in the target object, the coordinate (x', y') of its corresponding pixel in the reference object is computed with the affine transform matrix:

$$[x' \; y' \; 1]^T = T_{\text{affine}}\, [x \; y \; 1]^T$$

Assigning every pixel of the target object I_B the value of the pixel at its corresponding coordinate in the reference object I_A, i.e. I'_A(x, y) = I_A(f(x, y)), yields the warped reference object.
Referring to Figs. 4 and 5, the affine-transform-based image-warping method: there are no edges between the feature points, as shown in Fig. 4(a). First, a Delaunay triangulation of the target object's feature points is computed, so that the triangles do not overlap and cover the whole point cloud, as shown in Fig. 4(b). Then, for each triangle in the target object, its corresponding triangle in the reference object is found, as shown in Fig. 4(c), and the affine transform between the two triangles is computed from their vertex coordinates in the target and reference objects. Next, backward warping finds, for each pixel (x, y) of the target object, the corresponding coordinate (x', y') in the reference object. Finally, each pixel (x, y) of the target object takes the value at its corresponding coordinate (x', y') in the reference object, yielding the warped reference object.
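The triangle-to-triangle affine mapping above can be sketched as follows: T_affine maps the homogeneous target-triangle vertices onto the reference-triangle vertices, and backward warping then sends each target pixel (x, y) to its reference coordinate (x', y'). This is a minimal sketch; function and variable names are illustrative.

```python
import numpy as np

def affine_matrix(tri_target, tri_ref):
    """Each tri_* is a 3x2 sequence of (x, y) vertices. Returns the 3x3
    T_affine with T_affine @ [x, y, 1]^T = [x', y', 1]^T."""
    def homog(tri):
        # columns are homogeneous vertex coordinates, matching the patent's matrices
        return np.vstack([np.asarray(tri, dtype=float).T, np.ones(3)])
    return homog(tri_ref) @ np.linalg.inv(homog(tri_target))

def map_pixel(T, x, y):
    # backward warp: target pixel (x, y) -> reference coordinate (x', y')
    xp, yp, _ = T @ np.array([x, y, 1.0])
    return xp, yp
```

In the full method one such matrix is computed per Delaunay triangle, and each target pixel is mapped with the matrix of the triangle containing it.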
The invention uses the least-squares filter to decompose the lightness image l into a large-scale layer s and a detail layer. The least-squares filter preserves the detail information of the lightness layer well; it is solved by minimizing the following energy function:

$$E = |l - s|^2 + \lambda\, H(\nabla s, \nabla l)$$

$$H(\nabla s, \nabla l) = \sum_p \left( \frac{(\partial s/\partial x)_p^2}{(\partial l/\partial x)_p^\alpha + \epsilon} + \frac{(\partial s/\partial y)_p^2}{(\partial l/\partial y)_p^\alpha + \epsilon} \right)$$

where s is the large-scale image to be solved; |l − s|² is the data term that keeps l and s as close as possible; H(∇s, ∇l) is the regularization term that keeps s as smooth as possible by minimizing its partial derivatives; and the subscript p denotes a spatial pixel position in the image. Minimizing H causes a nonlinear scaling of the gradients of the input image l: regions with larger gradients are scaled less, and regions with smaller gradients are scaled more. α controls the overall sensitivity of this nonlinear scaling to the gradient magnitudes of l; increasing α makes the generated result keep sharper edges. ε is a very small number (typically 0.0001) that prevents division by zero in smooth regions of l. λ is the balance factor between the data term and the regularization term; increasing λ makes the filtered image smoother. During least-squares filtering the invention applies different levels of smoothing in different image regions by designing λ values of different sizes across the image.
To compute the λ value adaptively, the gradient image of the lightness layer is first computed, and for each pixel the number of pixels in its local window whose gradient exceeds a threshold is counted; this count is then normalized and used to compute the parameter of the least-squares filter. Concretely, different λ values are set in different image regions. First, the horizontal and vertical gradients of the lightness layer l, ∂l/∂x and ∂l/∂y, are computed, and a threshold t_1 is given. Then, for each pixel, the number of pixels inside the local window w_p centered on that pixel whose gradient magnitude exceeds the threshold is counted:

$$\gamma(p) = \sum_{i \in w_p} \left[ (\partial l/\partial x)_i^2 + (\partial l/\partial y)_i^2 \ge t_1 \right]$$

γ(p) is normalized to [0, 1], and the λ value is then set to

$$\lambda(p) = \lambda_s + (\lambda_l - \lambda_s)\,\gamma(p)$$

where λ_s and λ_l are the λ values corresponding, respectively, to the lowest and highest levels of smoothing applied to the image. In the implementation, α = 1.2, the local window radius is 8, λ_s = 1, λ_l = 4, and the threshold t_1 = 0.02.
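The adaptive smoothing parameter described above can be sketched as follows: count, in each local window, the pixels whose squared gradient magnitude reaches t_1, normalize the count to [0, 1], and interpolate λ between λ_s and λ_l. A plain sliding-window loop is used for clarity (window mean serves as the normalized count); parameter defaults follow the values quoted in the text, and the function name is illustrative.

```python
import numpy as np

def adaptive_lambda(l, t1=0.02, radius=8, lam_s=1.0, lam_l=4.0):
    """l: 2-D lightness layer. Returns a per-pixel lambda map."""
    gy, gx = np.gradient(l.astype(np.float64))
    unsmooth = (gx ** 2 + gy ** 2) >= t1      # pixels above the gradient threshold
    h, w = l.shape
    gamma = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = unsmooth[max(0, i - radius):i + radius + 1,
                           max(0, j - radius):j + radius + 1]
            gamma[i, j] = win.mean()          # normalized count in [0, 1]
    return lam_s + (lam_l - lam_s) * gamma
```

Unsmooth regions (beards, eyebrows) thus receive λ close to λ_l, smooth skin regions λ close to λ_s.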
To filter an image with the least-squares filter, first compute the gradient image and the affinities between each pixel and its neighbors; then assemble the sparse, spatially inhomogeneous Laplacian matrix and construct the linear system; finally, solve the sparse linear system to obtain the filtered result.
Referring to the gradient operation flow of Fig. 6, the gradient operation on the image proceeds as follows.
When operating in the gradient field, the reference object's large-scale layer is first bilaterally filtered with the target object's large-scale layer as guidance, preserving some edge information of the target object. The invention uses the following bilateral filter:

$$B(I)_p = \frac{1}{W_p} \sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\, G_{\sigma_r}(|I_p - I_q|)\, I_q$$

$$W_p = \sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\, G_{\sigma_r}(|I_p - I_q|)$$

$$G_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2\sigma^2}\right)$$

where I is the input image; p is the image coordinate index; B(I) is the filtered result; ω is the window centered on pixel p, and q is a pixel inside ω; G_{σ_d} is the spatial proximity function with window size σ_d; G_{σ_r} is the gray-level similarity function with window size σ_r; and W_p is the normalization term, which makes the filter weights at each pixel sum to one.
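The bilateral filter above can be sketched as a brute-force implementation: a spatial Gaussian on pixel distance multiplied by a range Gaussian on intensity difference, normalized by W_p. Parameter defaults here are illustrative, not the patent's.

```python
import numpy as np

def bilateral(I, radius=2, sigma_d=2.0, sigma_r=0.1):
    """I: 2-D image. Returns the bilaterally filtered image B(I)."""
    I = np.asarray(I, dtype=np.float64)
    h, w = I.shape
    out = np.zeros_like(I)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    G_spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_d ** 2))
    pad = np.pad(I, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: similarity of window intensities to the center pixel
            G_range = np.exp(-(win - I[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = G_spatial * G_range
            out[i, j] = (wgt * win).sum() / wgt.sum()   # W_p normalization
    return out
```

A small σ_r keeps strong edges (the weights of pixels across an edge vanish), which is what lets the filter preserve the target object's edge information.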
To distinguish contour regions from non-contour regions, the invention generates a mask image β that fades gradually at contours; because the β image changes gradually near contour regions, the large gradients of the reference object's non-contour regions can blend smoothly with the target object near contours.
The β image is computed as follows. Define a contour image k whose value is 0 in skin regions; 1 in the eyebrow, eye, lip-contour, and nose-outline regions; and 2 in the nostril regions. The mask image β is then

$$\beta(p) = \min_{q \in \omega} \left( 1 - k(q)\, e^{-\frac{(p-q)^2}{2\sigma^2}} \right)$$

where p is the image coordinate index, ω is the window centered on pixel p, q is a pixel index inside the window, and σ is the window size.
The filtered large-scale layer is transformed to the gradient field, and contour and non-contour regions are treated differently. For non-contour regions, a threshold is set; gradients larger than this threshold are taken to be caused by illumination, and the gradients of the reference object's non-contour regions that exceed the threshold are transferred onto the target object. For contour regions, the mask image β is used to transfer onto the target object only the gradient changes caused by non-contour content inside the reference object's contour regions.
For a given target object s_B and reference object s_A, with the corresponding β image, the operation in the gradient field is carried out according to

$$\nabla s'_B(p) = \begin{cases} \nabla s_A(p), & \text{if } \beta(p)\,\|\nabla s_A(p)\| > \|\nabla s_B(p)\| \\ \nabla s_B(p), & \text{otherwise} \end{cases}$$

where p is the image coordinate index; ∇s_B, ∇s_A and ∇s'_B denote, respectively, the gradient images of the target object's large-scale layer, the reference object's large-scale layer, and the target object's large-scale layer after illumination migration; and β(p) is the mask image, used to avoid transferring the gradients that form the main contours of the reference object onto the target object.
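The masked transfer rule above can be sketched directly: copy the reference gradient wherever the mask-weighted reference gradient magnitude exceeds the target's, otherwise keep the target gradient. Names are illustrative; gradients are passed as separate x/y components.

```python
import numpy as np

def transfer_gradients(gx_b, gy_b, gx_a, gy_a, beta):
    """(gx_b, gy_b): target gradients; (gx_a, gy_a): reference gradients;
    beta: mask image fading near contours. Returns the new target gradients."""
    mag_a = np.hypot(gx_a, gy_a)
    mag_b = np.hypot(gx_b, gy_b)
    take_ref = beta * mag_a > mag_b          # the case-selection of the formula
    gx_new = np.where(take_ref, gx_a, gx_b)
    gy_new = np.where(take_ref, gy_a, gy_b)
    return gx_new, gy_new
```

Where β(p) is small (near reference contours) the reference gradient is suppressed, so the reference object's main contours are not copied onto the target.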
The invention uses Poisson integration to transform the gradient image into a gray-level image, adding a gray-level constraint in the solving process so that the overall brightness of the resulting target-object large-scale layer is closer to the reference object's large-scale layer. Poisson integration minimizes the following energy function:

$$E(f) = \sum_{p \in I} \left[ E_d(p) + E_g(p) \right]$$

where I denotes the input image and p the image coordinate index. The gray-level constraint term E_d(p) and the gradient constraint term E_g(p) are, respectively,

$$E_d(p) = \lambda\,[f(p) - d(p)]^2$$

$$E_g(p) = [f_x(p) - g_x(p)]^2 + [f_y(p) - g_y(p)]^2$$

where f(p) is the large-scale layer to be solved and d(p) is the reference object's large-scale layer; f_x(p) and f_y(p) are the horizontal and vertical gradient images of the layer being solved; g_x(p) and g_y(p) are the horizontal and vertical gradient images of the processed large-scale layer, i.e. the two components of ∇s'_B; and the parameter λ controls the influence of the gray-level constraint on the integration result.
After the target object's large-scale layer is obtained by solving the gray-level-constrained Poisson equation, it is combined with the target object's detail layer and chroma layer to obtain the illumination-migrated target object image.
The above is only a basic description of the invention; any equivalent transformation made according to the technical scheme of the invention shall fall within the scope of protection of the invention.

Claims (6)

1. object illumination moving method based on gradient operation is characterized in that the method includes the steps of:
(1) according between destination object and the references object by the pixel corresponding relation, references object is snapped to destination object;
(2) destination object and references object all are decomposed into colourity layer and lightness layer;
(3) utilize the weighted least-squares wave filter of auto-adaptive parameter that the lightness layer is carried out filtering, obtain the large scale layer, utilize lightness layer and large scale layer to obtain levels of detail;
(4) illumination information is moved to the gradient image of destination object large scale layer from the gradient image of references object large scale layer;
(5) with the large scale layer by gradient domain transformation to the gray scale territory, obtain the new large scale layer of destination object, with destination object levels of detail combination, obtain the new lightness layer of destination object;
(6), mix to obtain the illumination migration results with the colourity layer of destination object with the new lightness layer of destination object.
2. The object illumination moving method based on gradient operation according to claim 1, characterized in that the image alignment described in step (1) adopts the following steps:
(1.1) using an active contour model to obtain rough feature point coordinates, then refining the feature point positions through user interaction;
(1.2) using the feature points of the reference object and the destination object as control points, and deforming the reference object according to the destination object.
3. The object illumination moving method based on gradient operation according to claim 2, characterized in that the deformation described in step (1.2) adopts the following steps:
(1.2.1) performing Delaunay triangulation on the destination object using the marked feature points;
(1.2.2) for each triangle in the destination object, finding its corresponding triangle in the reference object, and computing the affine transformation matrix between the two triangles from their vertex coordinates in the destination object and the reference object;
(1.2.3) using backward image warping to find, for each pixel in the destination object, the corresponding coordinate in the reference object;
(1.2.4) for each pixel in the destination object, taking the value at its corresponding coordinate in the reference object, thereby obtaining the deformed reference object.
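Step (1.2.2)'s per-triangle affine matrix follows directly from the three vertex pairs: a 2x3 matrix is fully determined by where it sends three non-collinear points. The sketch below (an illustrative reconstruction; function names are hypothetical) solves for that matrix and applies it, which is exactly what backward warping in step (1.2.3) needs per destination pixel.

```python
import numpy as np

def triangle_affine(dst_tri, ref_tri):
    """Affine matrix A (2x3) mapping destination-triangle coordinates to
    reference-triangle coordinates, determined by the three vertex pairs.
    Backward warping then uses A to find, for each destination pixel, the
    coordinate to sample in the reference image."""
    dst = np.asarray(dst_tri, float)              # 3 x 2 vertex coords
    ref = np.asarray(ref_tri, float)
    # [x y 1] @ M = [x' y']  ->  solve the 3x3 linear system for M (3x2)
    P = np.hstack([dst, np.ones((3, 1))])
    M = np.linalg.solve(P, ref)
    return M.T                                    # rows: [a b tx], [c d ty]

def apply_affine(A, pts):
    """Map points (N x 2) through the 2x3 affine matrix A."""
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]
```

By construction, `apply_affine(A, dst_tri)` reproduces `ref_tri` exactly, and any interior point of the destination triangle lands inside the reference triangle.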
4. The object illumination moving method based on gradient operation according to claim 1, characterized in that decomposing both the destination object and the reference object into a chromaticity layer and a lightness layer as described in step (2) is accomplished by converting the image from the RGB color space to the CIELAB color space.
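The decomposition in claim 4 amounts to a color-space conversion: the L channel becomes the lightness layer and the a*/b* channels form the chromaticity layer. The sketch below uses the standard sRGB-to-CIELAB formulas; the patent does not specify the companding curve or white point, so sRGB with a D65 white point is an assumption here.

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB (floats in [0, 1], last axis = 3) -> CIELAB under D65.
    L is the lightness layer the method filters; a*, b* are chromaticity."""
    rgb = np.asarray(rgb, float)
    # inverse sRGB companding
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ (sRGB / D65 matrix)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # D65 white point
    d = 6.0 / 29.0
    f = np.where(xyz > d ** 3, np.cbrt(xyz), xyz / (3 * d ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Pure white maps to L near 100 and pure black to L = 0, with a* and b* near zero for both, since neutral grays carry no chromaticity.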
5. The object illumination moving method based on gradient operation according to claim 1, characterized in that the process of migrating the illumination information from the gradient image of the reference object's large-scale layer to the gradient image of the destination object's large-scale layer described in step (4) comprises the following steps:
(4.1) using the feature points to generate a face region image and a contour region image, and generating a mask image;
(4.2) applying bilateral filtering to the reference object according to the destination object, and transforming the filtered result to the gradient domain;
(4.3) migrating the gradients of the non-contour region of the reference object that exceed a given threshold to the non-contour region of the destination object;
(4.4) migrating the gradients of the contour region of the reference object that exceed a given threshold to the contour region of the destination object.
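Steps (4.3) and (4.4) are the same masked, thresholded copy applied to two different regions. A minimal sketch of that operation on one gradient channel (an illustrative reconstruction; the magnitude test and replace-vs-keep rule are assumptions about how "greater than a specific threshold" is applied):

```python
import numpy as np

def migrate_gradients(ref_g, tgt_g, region_mask, thresh):
    """Inside `region_mask`, replace target gradients with reference
    gradients wherever the reference gradient magnitude exceeds `thresh`;
    everywhere else the target gradients are kept."""
    take = region_mask & (np.abs(ref_g) > thresh)
    return np.where(take, ref_g, tgt_g)
```

Calling this twice, once with the non-contour mask (step 4.3) and once with the contour mask (step 4.4, possibly with a different threshold), and doing so for both the x- and y-gradient channels, reproduces the migration step.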
6. The object illumination moving method based on gradient operation according to claim 1, characterized in that blending with the chromaticity layer of the destination object as described in step (6) means converting, in the CIELAB color space, the new lightness layer together with the chromaticity layer from the CIELAB color space to the RGB color space.
CN 201110299716 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation Expired - Fee Related CN102360513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110299716 CN102360513B (en) 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation

Publications (2)

Publication Number Publication Date
CN102360513A true CN102360513A (en) 2012-02-22
CN102360513B CN102360513B (en) 2013-02-06

Family

ID=45585838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110299716 Expired - Fee Related CN102360513B (en) 2011-09-30 2011-09-30 Object illumination moving method based on gradient operation

Country Status (1)

Country Link
CN (1) CN102360513B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080114379A (en) * 2007-06-27 2008-12-31 성균관대학교산학협력단 Method and apparatus for illumination normalization
US20090046099A1 (en) * 2006-11-13 2009-02-19 Bunkspeed Real-time display system
CN101441774A (en) * 2008-12-26 2009-05-27 北京航空航天大学 Dynamic scene real time double face refraction drafting method based on image mapping space
CN101950352A (en) * 2010-05-31 2011-01-19 北京智安邦科技有限公司 Target detection method capable of removing illumination influence and device thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiaowu Chen, Mengmeng Chen, Xin Jin and Qinping Zhao: "Face illumination transfer through edge-preserving filters", IEEE Conference on Computer Vision and Pattern Recognition 2011 *
Xin Jin et al: "Learning artistic lighting template from portrait photographs", ECCV 2010 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013135067A1 (en) * 2012-03-16 2013-09-19 华为技术有限公司 Method and apparatus for stabilizing target object image in video
CN102665062B (en) * 2012-03-16 2016-03-30 华为技术有限公司 Method and device for stabilizing a target object image in video
CN102665062A (en) * 2012-03-16 2012-09-12 华为技术有限公司 Method and device for stabilizing target object image in video
CN103337088A (en) * 2013-07-10 2013-10-02 北京航空航天大学 Human face image light and shadow editing method based on edge preserving
CN103337088B (en) * 2013-07-10 2016-07-06 北京航空航天大学 Face image light and shadow editing method based on edge preserving
CN103839220A (en) * 2014-02-28 2014-06-04 浙江大学 Seamless image cloning method based on matrix decomposition
CN105447829B (en) * 2015-11-25 2018-06-08 小米科技有限责任公司 Image processing method and device
CN105447829A (en) * 2015-11-25 2016-03-30 小米科技有限责任公司 Image processing method and device
CN107527334A (en) * 2017-08-28 2017-12-29 北京小米移动软件有限公司 Human face light moving method and device
CN107833198A (en) * 2017-11-09 2018-03-23 中共中央办公厅电子科技学院 Outdoor scene relighting method based on large-scale decomposition
CN107833198B (en) * 2017-11-09 2021-06-01 中共中央办公厅电子科技学院 Outdoor scene relighting method based on large-scale decomposition
CN111063017A (en) * 2018-10-15 2020-04-24 华为技术有限公司 Illumination estimation method and device
CN111063017B (en) * 2018-10-15 2022-04-12 华为技术有限公司 Illumination estimation method and device
CN109300170A (en) * 2018-10-18 2019-02-01 云南大学 Portrait photo shadow transmission method
CN110175255A (en) * 2019-05-29 2019-08-27 腾讯科技(深圳)有限公司 The method of image labeling, mark methods of exhibiting and device based on pathological image
CN110175255B (en) * 2019-05-29 2022-04-05 腾讯医疗健康(深圳)有限公司 Image annotation method, and annotation display method and device based on pathological image

Also Published As

Publication number Publication date
CN102360513B (en) 2013-02-06

Similar Documents

Publication Publication Date Title
CN102360513B (en) Object illumination moving method based on gradient operation
CN112509151B (en) Method for generating sense of reality of virtual object in teaching scene
CN103824253B (en) Figure five sense organ deformation method based on image local precise deformation
CN102509346A (en) Object illumination migration method based on edge retaining
CN103606186B (en) The virtual hair style modeling method of a kind of image and video
CN104376596B (en) A kind of three-dimensional scene structure modeling and register method based on single image
Liao et al. Automatic caricature generation by analyzing facial features
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN105719326A (en) Realistic face generating method based on single photo
CN109035388A (en) Three-dimensional face model method for reconstructing and device
CN105469407A (en) Facial image layer decomposition method based on improved guide filter
CN102509357B (en) Pencil sketch simulating and drawing system based on brush stroke
CN103258343B (en) A kind of eyes image disposal route based on picture editting
CN110458172A (en) A kind of Weakly supervised image, semantic dividing method based on region contrast detection
CN103646416A (en) Three-dimensional cartoon face texture generation method and device
CN108257139A (en) RGB-D three-dimension object detection methods based on deep learning
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
CN113160421B (en) Projection-based spatial real object interaction virtual experiment method
CN109410119A (en) Mask image distortion method and its system
CN107291346A (en) Drafting content processing method, device and the terminal device of terminal device
CN103914862A (en) Pencil sketch simulating method based on edge tangent stream
CN101329768B (en) Method for synthesizing cartoon based on background view
CN103337088B (en) A kind of facial image shadow edit methods kept based on edge
CN104392433A (en) Image-based digital oil painting canvas automatic generation method
Bui et al. 3D-look shading from contours and hatching strokes

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130206

Termination date: 20150930

EXPY Termination of patent right or utility model