CN107680071A - Method and system for fusion processing of human face and human body - Google Patents

Method and system for fusion processing of human face and human body

Info

Publication number
CN107680071A
CN107680071A CN201710994338.7A
Authority
CN
China
Prior art keywords
face
color
area
fusion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710994338.7A
Other languages
Chinese (zh)
Other versions
CN107680071B (en)
Inventor
芦爱余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wang Conghai
Original Assignee
Shenzhen Cloudream Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cloudream Information Technology Co., Ltd.
Priority to CN201710994338.7A
Publication of CN107680071A
Application granted
Publication of CN107680071B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a method for fusion processing of a human face and a human body, comprising: acquiring and processing the head region; transferring the template body color toward the face skin color; setting the fusion background region; and performing Poisson fusion. The present invention proposes a method and system for face-and-body fusion processing. On the one hand, because face skin color differs considerably from person to person while the color of the target body to be fused is fixed, the fusion looks unnatural when the face image and the target body differ too much in color (brightness); by migrating the template body skin color toward the face skin color, the pre-processing performed before fusion achieves a natural color transition. On the other hand, the boundary conditions are constrained so that only the face chin, the transition pixels of the head region, and the intersecting neck region of the human body are fused, removing the influence of the transition pixels at the face chin.

Description

Method and system for fusion processing of human face and human body
Technical field
The present invention relates to the field of computer graphics, and in particular to a method and system for fusion processing of a human face and a human body.
Background technology
With the development of society, people have more and more clothing choices. In a fast-paced life, people want to try on clothes without having to change, so virtual fitting products have emerged. Virtual fitting technology can broadly be divided into two kinds, 2D and 3D. In 3D virtual fitting, both the model and the garments are built in 3D; the advantage is that garments and the model body can be joined seamlessly in 3D and viewed from 360 degrees, while the drawback is that 3D-scanning every garment is time-consuming and laborious. 2D virtual fitting uses 2D image stitching: the template model either is projected from 3D to 2D or is used directly in the form of 2D pictures. Its advantage is that new garments can be brought online quickly; its drawback is the lack of a 360-degree view. 2D virtual fitting captures an image containing the head, extracts the face and hair region, and stitches it onto a template body prepared in advance. Because the face regions captured from different people are affected by skin color, illumination and other factors, a harmonious and natural skin-color transition cannot be guaranteed after the head is stitched onto the body. How to fuse the face region naturally into the body image so that the whole figure looks natural is therefore a difficult point in current 2D virtual fitting.
Regarding the fusion problem, the most widely used approach in the image processing field is Poisson fusion, i.e. the method of solving the Poisson equation. However, when the color difference at the fusion boundary is too large, Poisson fusion causes distortion after fusion.
The prior art proposes an improved iterative Poisson image fusion method (application publication number CN105096287A, 2015-11-25). Its purpose is to solve the problem that a large target region fuses unnaturally with the background. The method first converts the target image and the background image to the HSV color space and performs Poisson fusion on each channel, then extracts the edge gradient information of the target region, computes the interior sub-regions of the target, and performs Poisson fusion again between the interior sub-image regions and the initially fused image to obtain the final result. Its advantage is that, by fusing the interior sub-images with the initial fusion once more, it can reduce large color differences within the interior sub-images across the whole layer; its drawback is that the color difference is only alleviated, not eliminated.
In addition, the prior art proposes a facial expression synthesis method based on fast expression-information extraction and Poisson fusion (application publication number CN106056650A, 2016-10-26). This method mainly addresses the inability of existing techniques to extract expression detail quickly and effectively and to synthesize that detail onto a target face. First, an expression template is obtained from facial feature point data; then, according to the expression template of the target object, the neutral expression image of the target object and the non-neutral expression image of the source object are deformed to the non-neutral expression shape of the target object. Expression detail is extracted in the frequency domain from the deformed source expression image, the extracted source expression detail is filtered using Poisson fusion, and the filtered detail is synthesized, again by Poisson fusion, into the deformed expression of the target object to obtain the final result. The advantage of this method is that texture detail does not unduly affect the fusion, because the source expression detail is first extracted and removed in the frequency domain and only the structural information is fused. The drawback is that it does not account for the difference in expression fusion color under shadowed or strongly lit environments.
Both of the above prior-art techniques use Poisson fusion to solve fusion problems of different tasks, and both seamlessly composite the entire source object into the target image. Neither solves the problem of fusing only the intersecting region, i.e. the case where the face and the body are uncoordinated and only the skin color of the chin region should be fused; and if the skin color of the chin region is simply pulled toward the target body, the chin in turn becomes inconsistent with the other regions of the face.
Content of the invention
The present invention solves the problem that the skin-color transition between the face chin and the neck region of the template body is unnatural: because different people's face skin color and shooting illumination differ, different face skin colors cannot transition naturally to the fixed skin color of the template body; because the extracted head image contains transition pixels from the head to the original background, pasting these transition pixels directly onto the template body looks very abrupt, so handling the transition pixels of the head image is essential; and if Poisson fusion alone is used to fuse only the face chin region with the template body's neck region, the resulting change of the chin color can differ greatly from the colors of the other regions of the face, producing an unnatural effect.
In view of the above problems, the present invention provides a method for fusion processing of a human face and a human body, comprising:
acquiring and processing the head region;
transferring the template body color toward the face skin color;
setting the fusion background region; and
performing Poisson fusion.
Further, the head-region image, including the hair, is obtained from the captured human-body image to be processed by means of a head segmentation technique.
Further, the head segmentation technique is implemented with a head-segmentation convolutional neural network, whose training samples are based on annotated head-region images.
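By way of illustration only — the patent specifies neither the network architecture nor the runtime — the following sketch assumes the head-segmentation network has been exported to a hypothetical ONNX file head_seg.onnx that outputs a single-channel head probability map, and uses OpenCV's DNN module to obtain the head-with-hair region and its mask:

import cv2
import numpy as np

# Hypothetical model file and input size; the patent prescribes neither.
net = cv2.dnn.readNetFromONNX("head_seg.onnx")

def extract_head(image_bgr, input_size=(256, 256)):
    """Return the head-with-hair region and its binary mask."""
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, scalefactor=1.0 / 255.0,
                                 size=input_size, swapRB=True)
    net.setInput(blob)
    prob = net.forward()[0, 0]  # assumed output layout: 1 x 1 x H x W probability map
    mask = (cv2.resize(prob, (w, h)) > 0.5).astype(np.uint8) * 255
    head = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    return head, mask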
Further, the skin-color migration is realized by aligning the per-channel pixel mean and variance, comprising:
obtaining, from the face key points, the correct face skin region with the facial features removed;
using face detection and facial key-point detection to obtain the face region according to the face contour key points.
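As a sketch only — the patent names no particular detector or landmark model — the face region can be obtained from contour key points with dlib's standard 68-point landmark predictor; the model file path and the landmark indices chosen here are assumptions:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard dlib 68-point model (an assumption; any face key-point detector would do).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_region_mask(image_bgr):
    """Binary mask of the face region built from the face contour key points,
    with non-skin areas (eyes, mouth) removed."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], np.int32)
    mask = np.zeros(gray.shape, np.uint8)
    # Jaw line (points 0-16) and brows (17-26) outline the face; fill their convex hull.
    cv2.fillConvexPoly(mask, cv2.convexHull(pts[:27]), 255)
    # Remove non-skin areas: eyes (36-47) and outer lips (48-59).
    cv2.fillConvexPoly(mask, cv2.convexHull(pts[36:42]), 0)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts[42:48]), 0)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts[48:60]), 0)
    return mask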
Further, the face region contains non-skin-color areas such as the eyes, eyebrows and mouth; the region below the nose with the facial features excluded, i.e. the chin area of the face, is taken as the face skin region A, and the means AMeanR, AMeanG, AMeanB and variances AVarR, AVarG, AVarB of its R, G and B channels are computed respectively;
the skin-color region of the template body is denoted B, and the means BMeanR, BMeanG, BMeanB and variances BVarR, BVarG, BVarB of its R, G and B channels are likewise computed;
each pixel P of the body skin region B is migrated as follows:
PR' = AVarR/BVarR * (PR - BMeanR) + AMeanR
PG' = AVarG/BVarG * (PG - BMeanG) + AMeanG
PB' = AVarB/BVarB * (PB - BMeanB) + AMeanB
where PR' is the new R-channel value of pixel P of the body skin region B, i.e. its value after the skin-color migration, and PR is its original R-channel value; migrating every pixel of the body skin region on the three channels brings the color of the body skin region close to that of the face skin region.
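A minimal sketch of this per-channel mean/variance alignment, assuming the face skin mask (region A) and the template-body skin mask (region B) have already been obtained; following the formulas exactly as written, the ratio of variances AVar/BVar is applied directly (some color-transfer variants use the ratio of standard deviations instead):

import numpy as np

def transfer_body_to_face_skin(body_bgr, body_skin_mask, face_bgr, face_skin_mask):
    """Migrate the template-body skin pixels toward the face skin color by
    aligning the per-channel mean and variance, as in the formulas above."""
    out = body_bgr.astype(np.float32)
    a = face_bgr[face_skin_mask > 0].astype(np.float32)  # region A pixels (N x 3)
    b = body_bgr[body_skin_mask > 0].astype(np.float32)  # region B pixels (M x 3)
    a_mean, a_var = a.mean(axis=0), a.var(axis=0)
    b_mean, b_var = b.mean(axis=0), b.var(axis=0)
    # P' = AVar/BVar * (P - BMean) + AMean, applied independently to each channel.
    scale = a_var / np.maximum(b_var, 1e-6)
    skin = body_skin_mask > 0
    out[skin] = scale * (out[skin] - b_mean) + a_mean
    return np.clip(out, 0, 255).astype(np.uint8)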
Further, the principle of the Poisson fusion is to keep the boundary color unchanged while minimizing the change of the gradient values inside the fusion region, i.e. the following formula:
min_f ∬_Ω |∇f − v|²  with  f|_∂Ω = f*|_∂Ω
where f is the fusion region, Ω is the fusion domain and ∂Ω its boundary, ∇f denotes the gradient of the fusion region after fusion, v the gradient of the fusion region before fusion, and f* the background region; according to the formula, the Poisson fusion requires the boundary color after fusion to be consistent with the background color and the gradient of the fusion region to remain as unchanged as possible before and after fusion; keeping the gradient unchanged preserves the detail of the fused image, and keeping the color of the fusion region consistent with the background on the boundary ensures a natural transition.
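For reference, this is the standard Poisson image editing formulation (Pérez et al., cited in the non-patent literature below); its Euler-Lagrange equation yields the Poisson equation with Dirichlet boundary conditions that is actually solved on each channel — a restatement of the above, not an addition to the claimed method:

\min_{f}\iint_{\Omega}\lvert\nabla f-\mathbf{v}\rvert^{2}
\quad\text{with}\quad f\big|_{\partial\Omega}=f^{*}\big|_{\partial\Omega}
\;\;\Longrightarrow\;\;
\Delta f=\operatorname{div}\mathbf{v}\ \ \text{in}\ \Omega,
\qquad f\big|_{\partial\Omega}=f^{*}\big|_{\partial\Omega}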
Further, the Poisson fusion involves a foreground image, a background image, the Mask of the foreground region to be fused, and the position of the background region under the Mask, where the foreground image is the face image to be fused, the background image is the prepared fusion background image, the Mask image is the entire matted head image, and the position of the background region under the Mask is the position at which the fusion background places the face image.
Further, the Poisson fusion computes, on each of the R, G and B channels, the pixel values of the Mask region that satisfy the Poisson equation; through the Poisson fusion, the transition pixels at the edge of the matted head image are fused, at the face boundary, into the chin region of the template body, while the other regions of the face beyond the chin suffer no color distortion from the fusion.
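For illustration, OpenCV's built-in Poisson cloning can stand in for the per-channel Poisson solve described above; it is a generic implementation of Poisson image editing rather than the patent's exact restricted-boundary construction, and the mask and placement point below are assumptions:

import cv2
import numpy as np

def poisson_fuse(face_bgr, fusion_background_bgr, mask, center_xy):
    """Solve the Poisson equation per channel inside the mask and blend the
    matted face/head image into the prepared fusion background."""
    # mask: 8-bit, 255 inside the region to be fused (e.g. the matted head / chin area).
    # center_xy: where the centre of the mask region lands on the fusion background.
    return cv2.seamlessClone(face_bgr, fusion_background_bgr, mask,
                             center_xy, cv2.NORMAL_CLONE)

# Example call (variables are hypothetical):
# fused = poisson_fuse(face_img, fusion_bg, head_mask, (cx, cy))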
The present invention provides a system for fusion processing of a human face and a human body, comprising:
a head-region acquisition and processing module;
a module for transferring the template body color toward the face skin color;
a module for setting the fusion background region; and
a Poisson fusion module.
The present invention provides a product for fusion processing of a human face and a human body, suitable for virtual reality, virtual fitting, virtual social applications, photo beautification, image restoration and the like.
Beneficial effect
The present invention proposes a method and system for face-and-body fusion processing. On the one hand, because face skin color differs considerably from person to person while the color of the target body to be fused is fixed, the method migrates the template body skin color toward the face skin color to solve the problem of unnatural fusion when the face image and the target body differ too much in color (brightness); this pre-processing before fusion achieves a natural color transition. On the other hand, the boundary conditions are constrained so that only the face chin, the transition pixels of the head region, and the intersecting neck region of the human body are fused, removing the influence of the transition pixels at the face chin.
Brief description of the drawings
Fig. 1 is a schematic diagram of the original pasting effect.
Fig. 2 is a schematic diagram of the fusion background image.
Fig. 3 shows the fusion result.
The original pasting effect refers to pasting the matted head image directly onto the template body; it can be seen that the junction between the head and the chin is unnatural.
The fusion background image is the background-region image made in step 3; its purpose is to set the boundary conditions for the fusion.
After fusion, the transition from the face chin to the template body is clearly much smoother and more natural.
Embodiment
An embodiment of the present invention provides a method for fusion processing of a human face and a human body, comprising:
acquiring and processing the head region;
transferring the template body color toward the face skin color;
setting the fusion background region; and
performing Poisson fusion.
In a preferred embodiment of the present invention, the head-region image, including the hair, is obtained from the captured human-body image to be processed by means of a head segmentation technique.
In a preferred embodiment, the head segmentation technique of this embodiment is implemented with a head-segmentation convolutional neural network. In a preferred embodiment, the skin-color migration in this embodiment is realized by aligning the per-channel pixel mean and variance, comprising:
obtaining, from the face key points, the correct face skin region with the facial features removed;
using face detection and facial key-point detection to obtain the face region according to the face contour key points.
In a preferred embodiment, the face region in this embodiment contains non-skin-color areas such as the eyes, eyebrows and mouth; the region below the nose with the facial features excluded, i.e. the chin area of the face, is taken as the face skin region A, and the means AMeanR, AMeanG, AMeanB and variances AVarR, AVarG, AVarB of its R, G and B channels are computed respectively;
the skin-color region of the template body is denoted B, and the means BMeanR, BMeanG, BMeanB and variances BVarR, BVarG, BVarB of its R, G and B channels are likewise computed;
each pixel P of the body skin region B is migrated as follows:
PR' = AVarR/BVarR * (PR - BMeanR) + AMeanR
PG' = AVarG/BVarG * (PG - BMeanG) + AMeanG
PB' = AVarB/BVarB * (PB - BMeanB) + AMeanB
where PR' is the new R-channel value of pixel P of the body skin region B, i.e. its value after the skin-color migration, and PR is its original R-channel value; migrating every pixel of the body skin region on the three channels brings the color of the body skin region close to that of the face skin region.
In a preferred embodiment, the principle of the Poisson fusion in this embodiment is to keep the boundary color unchanged while minimizing the change of the gradient values inside the fusion region, i.e. the following formula:
min_f ∬_Ω |∇f − v|²  with  f|_∂Ω = f*|_∂Ω
where f is the fusion region, Ω is the fusion domain and ∂Ω its boundary, ∇f denotes the gradient of the fusion region after fusion, v the gradient of the fusion region before fusion, and f* the background region; according to the formula, the Poisson fusion requires the boundary color after fusion to be consistent with the background color and the gradient of the fusion region to remain as unchanged as possible before and after fusion; keeping the gradient unchanged preserves the detail of the fused image, and keeping the color of the fusion region consistent with the background on the boundary ensures a natural transition.
In a preferred embodiment, the Poisson fusion in this embodiment involves a foreground image, a background image, the Mask of the foreground region to be fused, and the position of the background region under the Mask, where the foreground image is the face image to be fused, the background image is the prepared fusion background image, the Mask image is the entire matted head image, and the position of the background region under the Mask is the position at which the fusion background places the face image. Because the head region to be fused and the template body are two separate images, they must first be composed into a single image; the composed image is the fusion background image shown in Fig. 2. Since the template body and the head satisfy a fixed proportion, the head image is scaled to the size of the template body and, according to the empirical position of the head relative to the neck, is placed in advance at the template body's neck region; in the region where the template body's neck and the head image overlap, i.e. where the neck and the chin overlap, the fusion background shows the template body's neck region. The purpose of making the fusion background region is to constrain the fusion boundary conditions: the boundary color of the head with the chin area removed is kept the same as before fusion, while the boundary color of the chin area of the head takes the template body color as its reference. The reason for setting the fusion boundary conditions this way is as follows: according to the Poisson fusion principle, the boundary color is held fixed and the gradient change inside the fusion region is kept as small as possible, so the boundary color of the face chin area becomes the color of the template body's neck region and, with the gradient kept unchanged, only the overall color of the face chin is pulled toward the template body color; and because step 2 has already migrated the template body color toward the face skin color, after the migration the boundary color of the template body differs little from the boundary color of the face chin. This is done, on the one hand, to increase robustness to face skin color captured under uneven illumination and, on the other hand, to solve the problem of unnatural transition pixels around the matted head region; this is why the color of the template body's neck region is also used as the boundary color of the face chin in the Poisson fusion.
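A minimal sketch of composing the fusion background described above: the head image is scaled to the template body's proportion and placed at the empirical neck position, and the template body's neck is kept on top in the overlap region; the scale factor, placement point and masks are assumptions supplied by the caller:

import cv2
import numpy as np

def make_fusion_background(body_bgr, body_neck_mask, head_bgr, head_mask,
                           head_top_left, scale):
    """Compose the fusion background image (Fig. 2): scaled head pasted at the
    neck position, with the template body's neck shown in the overlap region."""
    bg = body_bgr.copy()
    head = cv2.resize(head_bgr, None, fx=scale, fy=scale)
    hmask = cv2.resize(head_mask, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_NEAREST)
    x, y = head_top_left
    h, w = head.shape[:2]
    roi = bg[y:y + h, x:x + w]
    # Paste head pixels everywhere except where the template body's neck overlaps them.
    paste = (hmask > 0) & (body_neck_mask[y:y + h, x:x + w] == 0)
    roi[paste] = head[paste]
    return bg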
In a preferred embodiment, the Poisson fusion in this embodiment computes, on each of the R, G and B channels, the pixel values of the Mask region that satisfy the Poisson equation; through the Poisson fusion, the transition pixels at the edge of the matted head image are fused, at the face boundary, into the chin region of the template body, while the other regions of the face beyond the chin suffer no color distortion from the fusion.
An embodiment of the present invention provides a system for fusion processing of a human face and a human body, comprising:
a head-region acquisition and processing module;
a module for transferring the template body color toward the face skin color;
a module for setting the fusion background region; and
a Poisson fusion module.
An embodiment of the present invention provides a product for fusion processing of a human face and a human body, suitable for virtual reality, virtual fitting, virtual social applications, photo beautification, image restoration and the like.

Claims (10)

1. A method for fusion processing of a human face and a human body, characterized by comprising:
acquiring and processing the head region;
transferring the template body color toward the face skin color;
setting the fusion background region; and
performing Poisson fusion.
2. The method for fusion processing of a human face and a human body according to claim 1, characterized in that the head-region image, including the hair, is obtained from the captured human-body image to be processed by means of a head segmentation technique.
3. The method for fusion processing of a human face and a human body according to claim 1, characterized in that the head segmentation technique is implemented with a head-segmentation convolutional neural network, whose training samples are based on annotated head-region images.
4. The method for fusion processing of a human face and a human body according to claim 1, wherein the skin-color migration is realized by aligning the per-channel pixel mean and variance, comprising:
obtaining, from the face key points, the correct face skin region with the facial features removed;
using face detection and facial key-point detection to obtain the face region according to the face contour key points.
5. The method for fusion processing of a human face and a human body according to claim 4, wherein the face region contains non-skin-color areas such as the eyes, eyebrows and mouth; the region below the nose with the facial features excluded, i.e. the chin area of the face, is taken as the face skin region A, and the means AMeanR, AMeanG, AMeanB and variances AVarR, AVarG, AVarB of its R, G and B channels are computed respectively;
the skin-color region of the template body is denoted B, and the means BMeanR, BMeanG, BMeanB and variances BVarR, BVarG, BVarB of its R, G and B channels are likewise computed;
each pixel P of the body skin region B is migrated as follows:
PR' = AVarR/BVarR * (PR - BMeanR) + AMeanR
PG' = AVarG/BVarG * (PG - BMeanG) + AMeanG
PB' = AVarB/BVarB * (PB - BMeanB) + AMeanB
where PR' is the new R-channel value of pixel P of the body skin region B, i.e. its value after the skin-color migration, and PR is its original R-channel value; migrating every pixel of the body skin region on the three channels brings the color of the body skin region close to that of the face skin region.
6. The method for fusion processing of a human face and a human body according to claim 1, characterized in that the principle of the Poisson fusion is to keep the boundary color unchanged while minimizing the change of the gradient values inside the fusion region, i.e. the following formula:
min_f ∬_Ω |∇f − v|²  with  f|_∂Ω = f*|_∂Ω
where f is the fusion region, Ω is the fusion domain and ∂Ω its boundary, ∇f denotes the gradient of the fusion region after fusion, v the gradient of the fusion region before fusion, and f* the background region; according to the formula, the Poisson fusion requires the boundary color after fusion to be consistent with the background color and the gradient of the fusion region to remain as unchanged as possible before and after fusion; keeping the gradient unchanged preserves the detail of the fused image, and keeping the color of the fusion region consistent with the background on the boundary ensures a natural transition.
7. The method for fusion processing of a human face and a human body according to claim 1, characterized in that the Poisson fusion involves a foreground image, a background image, the Mask of the foreground region to be fused, and the position of the background region under the Mask, wherein the foreground image is the face image to be fused, the background image is the prepared fusion background image, the Mask image is the entire matted head image, and the position of the background region under the Mask is the position at which the fusion background places the face image.
8. The method for fusion processing of a human face and a human body according to claim 7, characterized in that the Poisson fusion computes, on each of the R, G and B channels, the pixel values of the Mask region that satisfy the Poisson equation; through the Poisson fusion, the transition pixels at the edge of the matted head image are fused, at the face boundary, into the chin region of the template body, while the other regions of the face beyond the chin suffer no color distortion from the fusion.
9. A system for fusion processing of a human face and a human body, characterized by comprising:
a head-region acquisition and processing module;
a module for transferring the template body color toward the face skin color;
a module for setting the fusion background region; and
a Poisson fusion module.
10. A product for fusion processing of a human face and a human body, characterized by being suitable for virtual reality, virtual fitting, virtual social applications, photo beautification, image restoration and the like, and characterized in that the product applies the method and system for fusion processing of a human face and a human body according to any one of claims 1 to 8.
CN201710994338.7A 2017-10-23 2017-10-23 Method and system for fusion processing of human face and human body Active CN107680071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710994338.7A CN107680071B (en) 2017-10-23 2017-10-23 Method and system for fusion processing of human face and human body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710994338.7A CN107680071B (en) 2017-10-23 2017-10-23 Method and system for fusion processing of human face and human body

Publications (2)

Publication Number Publication Date
CN107680071A true CN107680071A (en) 2018-02-09
CN107680071B CN107680071B (en) 2020-08-07

Family

ID=61141438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710994338.7A Active CN107680071B (en) 2017-10-23 2017-10-23 Method and system for fusion processing of human face and human body

Country Status (1)

Country Link
CN (1) CN107680071B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103607554A (en) * 2013-10-21 2014-02-26 无锡易视腾科技有限公司 Fully-automatic face seamless synthesis-based video synthesis method
CN105069746A (en) * 2015-08-23 2015-11-18 杭州欣禾圣世科技有限公司 Video real-time human face substitution method and system based on partial affine and color transfer technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Patrick Pérez et al.: "Poisson image editing", ACM Transactions on Graphics *
周漾 et al.: "面泊松融合结合色彩变换的无缝纹理辐射处理", 《中国图象图形学报》 (Journal of Image and Graphics) *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596839A (en) * 2018-03-22 2018-09-28 中山大学 A kind of human-face cartoon generation method and its device based on deep learning
CN110490029B (en) * 2018-05-15 2022-04-15 瑞昱半导体股份有限公司 Image processing method capable of performing differentiation processing on face data
CN110490029A (en) * 2018-05-15 2019-11-22 瑞昱半导体股份有限公司 The image treatment method of differentiation processing can be done to face data
CN108764143A (en) * 2018-05-29 2018-11-06 北京字节跳动网络技术有限公司 Image processing method, device, computer equipment and storage medium
CN108764143B (en) * 2018-05-29 2020-11-24 北京字节跳动网络技术有限公司 Image processing method, image processing device, computer equipment and storage medium
CN108932735A (en) * 2018-07-10 2018-12-04 广州众聚智能科技有限公司 A method of generating deep learning sample
CN110069125A (en) * 2018-09-21 2019-07-30 北京微播视界科技有限公司 The control method and device of virtual objects
CN110069125B (en) * 2018-09-21 2023-12-22 北京微播视界科技有限公司 Virtual object control method and device
CN109376618A (en) * 2018-09-30 2019-02-22 北京旷视科技有限公司 Image processing method, device and electronic equipment
CN109615593A (en) * 2018-11-29 2019-04-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109784301A (en) * 2019-01-28 2019-05-21 广州酷狗计算机科技有限公司 Image processing method, device, computer equipment and storage medium
CN109949207A (en) * 2019-01-31 2019-06-28 深圳市云之梦科技有限公司 Virtual objects synthetic method, device, computer equipment and storage medium
CN109949207B (en) * 2019-01-31 2023-01-10 深圳市云之梦科技有限公司 Virtual object synthesis method and device, computer equipment and storage medium
CN110084744A (en) * 2019-03-06 2019-08-02 深圳市云之梦科技有限公司 Image processing method, device, computer equipment and storage medium
CN110084744B (en) * 2019-03-06 2022-11-08 深圳市云之梦科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN110348496A (en) * 2019-06-27 2019-10-18 广州久邦世纪科技有限公司 A kind of method and system of facial image fusion
CN110348496B (en) * 2019-06-27 2023-11-14 广州久邦世纪科技有限公司 Face image fusion method and system
CN112949360A (en) * 2019-12-11 2021-06-11 广州市久邦数码科技有限公司 Video face changing method and device
CN111654622A (en) * 2020-05-28 2020-09-11 维沃移动通信有限公司 Shooting focusing method and device, electronic equipment and storage medium
CN111612897B (en) * 2020-06-05 2023-11-10 腾讯科技(深圳)有限公司 Fusion method, device and equipment of three-dimensional model and readable storage medium
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN113808003A (en) * 2020-06-17 2021-12-17 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
CN113808003B (en) * 2020-06-17 2024-02-09 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
CN112990134B (en) * 2021-04-29 2021-08-20 北京世纪好未来教育科技有限公司 Image simulation method and device, electronic equipment and storage medium
CN112990134A (en) * 2021-04-29 2021-06-18 北京世纪好未来教育科技有限公司 Image simulation method and device, electronic equipment and storage medium
CN113409329A (en) * 2021-06-03 2021-09-17 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal, and readable storage medium
CN113409329B (en) * 2021-06-03 2023-11-14 Oppo广东移动通信有限公司 Image processing method, image processing device, terminal and readable storage medium
CN113781292A (en) * 2021-08-23 2021-12-10 北京达佳互联信息技术有限公司 Image processing method and device, electronic device and storage medium
CN113870404A (en) * 2021-09-23 2021-12-31 聚好看科技股份有限公司 Skin rendering method and device of 3D model
CN113870404B (en) * 2021-09-23 2024-05-07 聚好看科技股份有限公司 Skin rendering method of 3D model and display equipment

Also Published As

Publication number Publication date
CN107680071B (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN107680071A (en) A kind of face and the method and system of body fusion treatment
CN102663766B (en) Non-photorealistic based art illustration effect drawing method
CN103914863B (en) A kind of coloured image abstract method for drafting
RU2019101747A (en) DIGITAL MAKEUP MIRROR SYSTEM AND METHOD FOR ITS IMPLEMENTATION
EP1030267B1 (en) Method of correcting face image, makeup simulation method, makeup method, makeup supporting device and foundation transfer film
US8325205B2 (en) Methods and files for delivering imagery with embedded data
CN103473780B (en) The method of portrait background figure a kind of
CN107993216A (en) A kind of image interfusion method and its equipment, storage medium, terminal
CN103942794B (en) A kind of image based on confidence level is collaborative scratches drawing method
CN102831584B (en) Data-driven object image restoring system and method
CN103258343B (en) A kind of eyes image disposal route based on picture editting
CN109829930A (en) Face image processing process, device, computer equipment and readable storage medium storing program for executing
CN102903135A (en) Method and apparatus for realistic simulation of wrinkle aging and de-aging
JPH11224329A (en) Photo booth for forming digitally processed image
JP2001057630A (en) Image processing unit and image processing method
CN104063888B (en) A kind of wave spectrum artistic style method for drafting based on feeling of unreality
Ye et al. Hybrid scheme of image’s regional colorization using mask r-cnn and Poisson editing
CN106940792A (en) The human face expression sequence truncation method of distinguished point based motion
Seo et al. Interactive painterly rendering with artistic error correction
JP2007144194A (en) Method for face image modification, method for makeup simulation, method for makeup, support equipment for makeup and cosmetic foundation transcription film
CN109345470A (en) Facial image fusion method and system
Wang et al. Facial image composition based on active appearance model
CN102355555B (en) Video processing method and system
Doyle et al. Painted stained glass
KR101340936B1 (en) Pop art portraiture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231127

Address after: Gao Lou Zhen Hong Di Cun, Rui'an City, Wenzhou City, Zhejiang Province, 325200

Patentee after: Wang Conghai

Address before: 10 / F, Yihua financial technology building, 2388 Houhai Avenue, high tech park, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: SHENZHEN CLOUDREAM INFORMATION TECHNOLOGY CO.,LTD.