CN110363170A - Video face-swapping method and apparatus - Google Patents
Video face-swapping method and apparatus
- Publication number
- CN110363170A CN110363170A CN201910660934.0A CN201910660934A CN110363170A CN 110363170 A CN110363170 A CN 110363170A CN 201910660934 A CN201910660934 A CN 201910660934A CN 110363170 A CN110363170 A CN 110363170A
- Authority
- CN
- China
- Prior art keywords
- image
- picture frame
- processed
- frame
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 71
- 230000001815 facial effect Effects 0.000 claims abstract description 272
- 210000000056 organ Anatomy 0.000 claims description 84
- 238000001514 detection method Methods 0.000 claims description 22
- 230000008859 change Effects 0.000 claims description 16
- 230000009466 transformation Effects 0.000 claims description 12
- 230000004927 fusion Effects 0.000 claims description 10
- 230000003362 replicative effect Effects 0.000 claims description 6
- 238000005461 lubrication Methods 0.000 claims description 3
- 230000000875 corresponding effect Effects 0.000 description 35
- 239000011159 matrix material Substances 0.000 description 25
- 239000011521 glass Substances 0.000 description 19
- 230000008569 process Effects 0.000 description 18
- 210000004709 eyebrow Anatomy 0.000 description 11
- 230000006870 function Effects 0.000 description 7
- 230000000007 visual effect Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 4
- 210000000887 face Anatomy 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000000354 decomposition reaction Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000017105 transposition Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000002620 method output Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000010008 shearing Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/164—Detection; Localisation; Normalisation using holistic features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a video face-swapping method and apparatus. An image frame to be processed is obtained; first outer contour points indicating the contour of a first face image and second outer contour points indicating the contour of a second face image are determined; a first replacement region and a second replacement region are divided in the frame to be processed according to the outer contour points; and a first target region and a second target region are determined in a duplicate frame, where the first target region has the same shape as the second replacement region and the second target region has the same shape as the first replacement region. Finally, the image in the first replacement region of the frame to be processed is replaced with the image in the second target region of the duplicate frame, and the image in the second replacement region of the frame to be processed is replaced with the image in the first target region of the duplicate frame, yielding the face-swapped frame. The invention achieves face swapping merely by cropping and replacing images, without time-consuming three-dimensional modeling, thereby improving the efficiency of video face swapping.
Description
Technical field
The present invention relates to the field of video technology, and in particular to a video face-swapping method and apparatus.
Background art
With the development of deep learning technology, more and more video applications now offer a face-swapping function. After the user enables this function, the application automatically identifies two faces in the video to be processed and exchanges them, achieving the effect of video face swapping.
Existing face-swapping techniques usually build a three-dimensional face model for each of the two faces in the video and perform the swap based on these two models. Because building three-dimensional face models takes a long time, face swapping under the prior art is time-consuming and can hardly meet the need for real-time face swapping during video playback.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a video face-swapping method and apparatus for increasing the speed of face swapping, so as to achieve real-time face swapping during video playback.
A first aspect of the present invention discloses a video face-swapping method, comprising:
obtaining an image frame to be processed in a video to be processed, wherein the frame to be processed includes at least two face images;
determining multiple first outer contour points and multiple second outer contour points of the frame to be processed, wherein the first outer contour points indicate the contour of a first face image in the frame and the second outer contour points indicate the contour of a second face image in the frame;
dividing a first replacement region and a second replacement region in the frame to be processed, wherein the first replacement region is divided according to the multiple first outer contour points and the second replacement region is divided according to the multiple second outer contour points;
determining a first target region and a second target region in a duplicate frame, wherein the first target region is determined according to the multiple first outer contour points, the second target region is determined according to the multiple second outer contour points, the shape of the first target region is identical to that of the second replacement region, the shape of the second target region is identical to that of the first replacement region, and the duplicate frame is obtained by replicating the frame to be processed; and
replacing the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame, and replacing the image in the second replacement region of the frame to be processed with the image in the first target region of the duplicate frame, to obtain a face-swapped frame.
Optionally, determining the first target region in the duplicate frame according to the multiple first outer contour points comprises:
applying an affine transformation to the first outer contour points in the duplicate frame so that the first face image of the duplicate frame is aligned with the second face image of the frame to be processed, obtaining an aligned first face image; and
projecting the second replacement region of the frame to be processed onto the aligned first face image along the direction perpendicular to the plane of the frame to be processed, obtaining the first target region of the duplicate frame.
Determining the second target region in the duplicate frame according to the multiple second outer contour points comprises:
applying an affine transformation to the second outer contour points in the duplicate frame so that the second face image of the duplicate frame is aligned with the first face image of the frame to be processed, obtaining an aligned second face image; and
projecting the first replacement region of the frame to be processed onto the aligned second face image along the direction perpendicular to the plane of the frame to be processed, obtaining the second target region of the duplicate frame.
Optionally, after obtaining the frame to be processed in the video to be processed, the method further comprises:
determining multiple first inner contour points and multiple second inner contour points of the frame using a feature point detection algorithm, wherein the first inner contour points indicate the contours of the facial organs of the first face image and the second inner contour points indicate the contours of the facial organs of the second face image.
Before replacing the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame and replacing the image in the second replacement region with the image in the first target region of the duplicate frame to obtain the face-swapped frame, the method further comprises:
judging, according to the multiple first inner contour points and the multiple second inner contour points, whether the first face image or the second face image contains the image of an object other than the facial organs;
if at least one of the first and second face images contains the image of an object other than the facial organs, dividing the frame to be processed into multiple first sub-images according to the first inner contour points and into multiple second sub-images according to the second inner contour points, wherein each facial organ of the first face image is contained in its corresponding first sub-image, each facial organ of the second face image is contained in its corresponding second sub-image, and the first and second sub-images corresponding to the same kind of facial organ have the same shape; and
for each facial organ, exchanging the position of the first sub-image containing that organ with the position of the second sub-image containing that organ, to obtain the face-swapped frame.
Optionally, before replacing the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame and replacing the image in the second replacement region with the image in the first target region of the duplicate frame to obtain the face-swapped frame, the method further comprises:
calculating the attitude angle of the first face image and the attitude angle of the second face image;
judging whether the attitude angle of the first face image is less than or equal to an attitude angle threshold, and whether the attitude angle of the second face image is less than or equal to the attitude angle threshold;
if at least one of the two attitude angles is greater than the threshold, obtaining a pre-saved first critical frame and a pre-saved second critical frame, wherein the attitude angle of the first face image of the first critical frame equals the threshold and the attitude angle of the second face image of the second critical frame equals the threshold;
determining a first critical target region according to the first outer contour points of the first critical frame, and a second critical target region according to the second outer contour points of the second critical frame, wherein the first critical target region has the same shape as the second replacement region of the frame to be processed and the second critical target region has the same shape as the first replacement region of the frame to be processed; and
replacing the image in the first replacement region of the frame to be processed with the image in the second critical target region of the second critical frame, and replacing the image in the second replacement region with the image in the first critical target region of the first critical frame, to obtain the face-swapped frame.
Optionally, dividing the first replacement region and the second replacement region in the frame to be processed comprises:
moving the multiple first outer contour points and the multiple second outer contour points according to preset weights, obtaining multiple moved first outer contour points and multiple moved second outer contour points; and
connecting the moved first outer contour points to obtain the first replacement region, and connecting the moved second outer contour points to obtain the second replacement region.
Optionally, replacing the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame, and replacing the image in the second replacement region with the image in the first target region of the duplicate frame, to obtain the face-swapped frame, comprises:
replacing the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame, and replacing the image in the second replacement region with the image in the first target region of the duplicate frame, to obtain an exchanged frame; and
performing image fusion on the exchanged frame using a preset image fusion algorithm, to obtain the face-swapped frame.
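The patent leaves the "preset image fusion algorithm" unspecified; Poisson (seamless) blending is a common choice in practice, but as a minimal self-contained sketch the following numpy-only feathered alpha-blend softens the seam of the replacement region. The function name, the box-blur feathering, and the mask convention are illustrative assumptions, not the patent's prescribed algorithm.

```python
import numpy as np

def feather_blend(swapped, original, mask, blur=7):
    """Fuse the exchanged frame with the original near the seam.

    swapped/original: (H, W, 3) uint8 frames after/before replacement.
    mask: (H, W) float array, 1 inside the replacement region, 0 outside.
    Feathering the mask turns the hard region boundary into a gradual
    alpha ramp, hiding the cut line between the two faces.
    """
    m = mask.astype(np.float32)
    k = np.ones(blur, np.float32) / blur  # uniform 1-D box kernel
    # Separable box blur: convolve every column, then every row.
    for axis in (0, 1):
        m = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), axis, m)
    alpha = m[..., None]  # (H, W, 1), values in [0, 1]
    return (alpha * swapped + (1 - alpha) * original).astype(np.uint8)
```

Deep inside the mask the swapped pixels pass through unchanged; well outside it the original frame is kept; only a band of width `blur` around the seam is mixed.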
A second aspect of the present invention discloses a video face-swapping device, comprising:
an acquiring unit, configured to obtain an image frame to be processed in a video to be processed, wherein the frame to be processed is a frame of the video that has not yet been face-swapped and that includes at least two face images;
a determination unit, configured to determine multiple first outer contour points and multiple second outer contour points of the frame to be processed using a feature point detection algorithm, wherein the first outer contour points indicate the contour of a first face image in the frame and the second outer contour points indicate the contour of a second face image in the frame;
a division unit, configured to divide a first replacement region and a second replacement region in the frame to be processed, wherein the first replacement region is divided according to the multiple first outer contour points and the second replacement region is divided according to the multiple second outer contour points;
the division unit being further configured to determine a first target region and a second target region in a duplicate frame, wherein the first target region is determined according to the multiple first outer contour points, the second target region is determined according to the multiple second outer contour points, the shape of the first target region is identical to that of the second replacement region, the shape of the second target region is identical to that of the first replacement region, and the duplicate frame is obtained by replicating the frame to be processed; and
a replacement unit, configured to replace the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame, and to replace the image in the second replacement region with the image in the first target region of the duplicate frame, to obtain a face-swapped frame.
Optionally, the determination unit is further configured to:
determine multiple first inner contour points and multiple second inner contour points of the frame to be processed using a feature point detection algorithm, wherein the first inner contour points indicate the contours of the facial organs of the first face image and the second inner contour points indicate the contours of the facial organs of the second face image.
The video face-swapping device further comprises:
a judging unit, configured to judge, according to the multiple first inner contour points and the multiple second inner contour points, whether the first face image or the second face image of the frame to be processed contains the image of an object other than the facial organs.
The division unit is further configured to:
if the judging unit determines that at least one of the first and second face images contains the image of an object other than the facial organs, divide the frame to be processed into multiple first sub-images according to the first inner contour points and into multiple second sub-images according to the second inner contour points, wherein each facial organ of the first face image is contained in its corresponding first sub-image, each facial organ of the second face image is contained in its corresponding second sub-image, and the first and second sub-images corresponding to the same kind of facial organ have the same shape.
The replacement unit is further configured to:
if the judging unit determines that at least one of the first and second face images contains the image of an object other than the facial organs, exchange, for each facial organ, the position of the first sub-image containing that organ with the position of the second sub-image containing that organ, to obtain the face-swapped frame.
Optionally, the video face-swapping device further comprises:
a computing unit, configured to calculate the attitude angle of the first face image and the attitude angle of the second face image; and
a judging unit, configured to judge whether the attitude angle of the first face image is less than or equal to an attitude angle threshold, and whether the attitude angle of the second face image is less than or equal to the attitude angle threshold.
The replacement unit is further configured to:
if at least one of the two attitude angles is greater than the threshold, obtain a pre-saved first critical frame and a pre-saved second critical frame, wherein the attitude angle of the first face image of the first critical frame equals the threshold and the attitude angle of the second face image of the second critical frame equals the threshold;
determine a first critical target region according to the first outer contour points of the first critical frame, and a second critical target region according to the second outer contour points of the second critical frame, wherein the first critical target region has the same shape as the second replacement region of the frame to be processed and the second critical target region has the same shape as the first replacement region of the frame to be processed; and
replace the image in the first replacement region of the frame to be processed with the image in the second critical target region of the second critical frame, and replace the image in the second replacement region with the image in the first critical target region of the first critical frame, to obtain the face-swapped frame.
Optionally, the replacement unit comprises:
an exchange subunit, configured to replace the image in the first replacement region of the frame to be processed with the image in the second target region of the duplicate frame, and to replace the image in the second replacement region with the image in the first target region of the duplicate frame, to obtain an exchanged frame; and
a fusion unit, configured to perform image fusion on the exchanged frame using a preset image fusion algorithm, to obtain the face-swapped frame.
The present invention provides a video face-swapping method and apparatus. An image frame to be processed is obtained; first outer contour points indicating the contour of a first face image and second outer contour points indicating the contour of a second face image are determined; a first replacement region and a second replacement region are divided in the frame to be processed according to the outer contour points; and a first target region and a second target region are determined in a duplicate frame, where the first target region has the same shape as the second replacement region and the second target region has the same shape as the first replacement region. Finally, the image in the first replacement region of the frame to be processed is replaced with the image in the second target region of the duplicate frame, and the image in the second replacement region is replaced with the image in the first target region of the duplicate frame, yielding the face-swapped frame. The invention achieves face swapping merely by cropping and replacing images, without time-consuming three-dimensional modeling, thereby improving the efficiency of video face swapping.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a video face-swapping method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the outer contour points of a face image provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of replacing the image in the first target region and the image in the second target region provided by an embodiment of the present application;
Fig. 4 is a flowchart of a video face-swapping method provided by another embodiment of the present application;
Fig. 5 is a schematic diagram of the outer contour points and inner contour points of a face image provided by another embodiment of the present application;
Fig. 6 is a flowchart of a video face-swapping method provided by yet another embodiment of the present application;
Fig. 7 is a structural schematic diagram of a video face-swapping device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present application provides a video face-swapping method. Referring to Fig. 1, the method includes the following steps:
S101. Obtain an image frame to be processed in a video to be processed.
Here, the frame to be processed is a frame of the video that includes at least two face images. That is, the video face-swapping method provided in this embodiment is mainly used to process frames that include two or more face images. For ease of understanding, this embodiment and the other embodiments of the present application are described using a frame containing exactly two face images, denoted the first face image and the second face image respectively. Based on the two-face method provided herein, those skilled in the art can directly extend it to frames containing three or more face images, so such extended methods also fall within the scope of protection of the present application.
Whether a frame contains at least two face images can be determined with existing face detection technology. A face detector processes a frame and marks each face image in it with a polygonal box, each box containing one complete face image; the number of face images can therefore be determined from the number of boxes.
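The counting logic above is simple enough to sketch directly. The detector itself is not prescribed by the patent, so the code below assumes only that some off-the-shelf face detector has already returned one rectangular box per face as `(x, y, w, h)` tuples; the function names are illustrative.

```python
def count_faces(detection_boxes):
    """Each box (x, y, w, h) from a face detector marks one complete
    face image, so the face count is simply the number of boxes."""
    return len(detection_boxes)

def frame_needs_swap(detection_boxes):
    # The face-swapping method only processes frames that contain
    # at least two face images (steps S102-S104 assume two faces).
    return count_faces(detection_boxes) >= 2
```

Frames with zero or one detected face are passed through unchanged; only frames for which `frame_needs_swap` is true enter the swapping pipeline.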
It should be noted that the polygonal box marked by a face detection algorithm only reflects the approximate position of a face image: the box is a region that contains the face image together with other image content. The detection algorithm can only determine that a face image exists within this region; it cannot give the precise location of the face. Therefore, after a face image has been detected in a frame, the subsequent steps of this embodiment still need to be executed to actually perform the face swap.
In addition, those skilled in the art will understand that a video is equivalent to a sequence of image frames. The face-swapping method of this application is described here using the processing of a single frame as an example; executing it on every frame of the video that needs processing completes the face swap for the whole video.
S102. Determine multiple first outer contour points and multiple second outer contour points of the frame to be processed.
Here, the first outer contour points indicate the contour of the first face image in the frame, and the second outer contour points indicate the contour of the second face image.
The outer contour points of a face image can also be regarded as a kind of feature point of the face image. The feature points of the first and second face images can be determined with any existing feature point detection algorithm.
The positions of the outer contour points in a face image are shown in Fig. 2. It should be noted that Fig. 2 is only a schematic diagram: in actual use the method determines far more outer contour points than are shown there, and the points actually generated need only reflect the contour of the corresponding face image rather than lie exactly at the positions shown in Fig. 2.
S103. Divide a first replacement region and a second replacement region in the frame to be processed.
Here, the first replacement region is divided according to the multiple first outer contour points, and the second replacement region is divided according to the multiple second outer contour points.
A specific implementation of step S103 is as follows: for each first outer contour point, move the point inward according to its preset weight, obtaining a moved first outer contour point; after all first outer contour points have been moved, connect the moved points to obtain the first replacement region.
The process for dividing the second replacement region is similar, and details are not described herein again.
In the above process, the weight of each outer profile point, it is believed that be that a position for external profile point carries out
The parameter of fine tuning, each outer profile point have a corresponding parameter, these parameters can be determined according to historical data.
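The inward movement described above can be sketched as follows. Using the contour centroid as the inward direction, and interpreting each preset weight as the fraction of the centroid distance to move, are illustrative assumptions; the embodiment only requires that each point move inward by its own parameter.

```python
import numpy as np

def shrink_contour(points, weights):
    """Move each outer profile point inward (here: toward the contour
    centroid) by its preset weight. weights[i] in [0, 1]: 0 keeps the
    point in place, 1 moves it all the way to the centroid. The moved
    points, connected in order, delimit the replacement region."""
    points = np.asarray(points, dtype=float)    # shape (N, 2)
    weights = np.asarray(weights, dtype=float)  # shape (N,)
    centroid = points.mean(axis=0)
    return points + weights[:, None] * (centroid - points)

# Example: a square contour shrunk uniformly by 10%.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
moved = shrink_contour(square, [0.1] * 4)
```

In practice the weights would differ per point (e.g. larger near the chin than near the hairline), which is exactly the per-point fine-tuning parameter mentioned above.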
S104: determining a first target region and a second target region in the duplicated frame.
The first target region is determined according to the multiple first outer profile points, and the second target region according to the multiple second outer profile points. The shape of the first target region is identical to that of the second replacement region of step S103, and the shape of the second target region is identical to that of the first replacement region of step S103.
The duplicated frame is obtained by replicating the picture frame to be processed.
A specific implementation of step S104 is as follows:
An affine transformation is applied to the first outer profile points in the duplicated frame, so that the first facial image of the duplicated frame is aligned with the second facial image of the picture frame to be processed, yielding the aligned first facial image.
The second replacement region of the picture frame to be processed is then projected, in the direction perpendicular to the plane of the picture frame to be processed, onto the aligned first facial image, yielding the first target region of the duplicated frame.
After the first target region is determined, an affine transformation is applied to the second outer profile points in the duplicated frame, so that the second facial image of the duplicated frame is aligned with the first facial image of the picture frame to be processed, yielding the aligned second facial image.
The first replacement region of the picture frame to be processed is then projected, in the direction perpendicular to the plane of the picture frame to be processed, onto the aligned second facial image, yielding the second target region of the duplicated frame.
The process of applying the affine transformation to the outer profile points comprises:
A first target matrix (denoted p) is generated from the coordinates of all first outer profile points, and a second target matrix (denoted q) is generated from the coordinates of all second outer profile points. Specifically, suppose there are N first outer profile points and N second outer profile points, where N is a positive integer, and the coordinates of each profile point are (x_i, y_i); then the first target matrix and the second target matrix are each a matrix of 2 rows and N columns obtained by combining the N coordinate pairs.
The first target matrix and the second target matrix are each mean-centered (decentralized), yielding the mean-centered first target matrix (denoted A) and the mean-centered second target matrix (denoted B).
Singular value decomposition (SVD) is applied to the matrix BA^T according to the following formula (1), where A^T denotes the transpose of the matrix A:
(1) BA^T = UCV^T
Here U, C and V^T are matrices. From the three matrices obtained by the SVD, the first rotation matrix R1 and the first scaling factor s1 are calculated by formula (2) and formula (3):
(2) R1 = UV^T
(3) s1 = trace(C) / trace(AA^T)
where trace(C) denotes the trace of the matrix C, that is, the sum of all diagonal entries of the matrix C.
Formula (1) is then applied to the matrix AB^T for singular value decomposition; based on that result, the second rotation matrix R2 and the second scaling factor s2 can be calculated with the above formulas (2) and (3). The transformed first coordinate matrix and the transformed second coordinate matrix can then be calculated according to the following formula (4):
(4) p1 = s1·R1·p
where p1 denotes the transformed first coordinate matrix. By substituting the corresponding second coordinate matrix, second rotation matrix and second scaling factor into the above formula, the transformed second coordinate matrix q1 is obtained.
The first outer profile points in the duplicated frame are moved according to the transformed first coordinate matrix, so that the first facial image in the moved duplicated frame is aligned with the second facial image of the picture frame to be processed. The second outer profile points in the duplicated frame are moved according to the transformed second coordinate matrix, so that the second facial image in the moved duplicated frame is aligned with the first facial image of the picture frame to be processed.
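The similarity alignment of formulas (1)–(4) can be sketched with NumPy as follows. Re-adding the target contour's mean as a final translation is an assumption made so the example returns coordinates in the target's frame; the formulas above operate on the mean-centered matrices only.

```python
import numpy as np

def align_contour(p, q):
    """Align the 2xN source contour p to the 2xN target contour q,
    following formulas (1)-(4): mean-center to A and B, take the SVD
    of B A^T, set R = U V^T and s = trace(C) / trace(A A^T)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mp = p.mean(axis=1, keepdims=True)
    mq = q.mean(axis=1, keepdims=True)
    A, B = p - mp, q - mq                    # mean-centered target matrices
    U, Cdiag, Vt = np.linalg.svd(B @ A.T)    # formula (1): B A^T = U C V^T
    R = U @ Vt                               # formula (2)
    s = Cdiag.sum() / np.trace(A @ A.T)      # formula (3): trace(C)/trace(A A^T)
    return s * R @ A + mq                    # formula (4), translated onto q

# Example: q is p rotated by 30 degrees, scaled by 2 and shifted.
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
p = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 1.]])
q = 2.0 * rot @ p + np.array([[3.0], [4.0]])
aligned = align_contour(p, q)   # recovers q up to numerical error
```

This is the classical orthogonal Procrustes / similarity-alignment solution, which is why the SVD appears in the formulas above.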
S105: replacing the image in the first replacement region of the picture frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the picture frame to be processed with the image in the first target region of the duplicated frame, obtaining the face-swapped picture frame.
With reference to Fig. 3, step S105 can be understood as follows: the image in the second target region is cut out of the duplicated picture frame and pasted into the first replacement region of the picture frame to be processed; then the image in the first target region is cut out of the duplicated picture frame and pasted into the second replacement region of the picture frame to be processed, yielding the face-swapped picture frame.
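The cut-and-paste exchange of step S105 can be sketched as follows. For brevity the example uses boolean masks of two same-shape rectangular regions in place of the polygonal replacement and target regions; that rectangular simplification is an assumption for illustration only.

```python
import numpy as np

def swap_regions(frame, mask1, mask2):
    """Exchange the pixels of two same-shape regions: the duplicated
    frame's second-region pixels are pasted into the first region of the
    frame to be processed, and vice versa. mask1 and mask2 are boolean
    masks covering the same number of pixels."""
    duplicated = frame.copy()           # the duplicated frame
    swapped = frame.copy()
    swapped[mask1] = duplicated[mask2]  # second target -> first replacement
    swapped[mask2] = duplicated[mask1]  # first target -> second replacement
    return swapped

# Example on a tiny 6x6 grayscale frame with two 2x2 regions.
frame = np.arange(36).reshape(6, 6)
m1 = np.zeros((6, 6), dtype=bool); m1[1:3, 1:3] = True  # first region
m2 = np.zeros((6, 6), dtype=bool); m2[3:5, 3:5] = True  # second region
out = swap_regions(frame, m1, m2)
```

Because the target regions of step S104 are constructed to have the same shapes as the replacement regions, the two masks always cover equal pixel counts and the exchange is well defined.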
Given the positions of the outer profile points of a facial image shown in Fig. 2, those skilled in the art will understand that the image of the first target region, delineated by the first outer profile points of the first facial image, substantially contains the main features of the first facial image, such as the shapes and relative positions of the facial organs; likewise, the image in the second target region contains the main features of the second facial image. Therefore, pasting the image in the first target region of the duplicated frame into the second replacement region of the picture frame to be processed, and pasting the image in the second target region of the duplicated frame into the first replacement region of the picture frame to be processed, is equivalent to replacing the first facial image of the picture frame to be processed with the second facial image and replacing the second facial image of the picture frame to be processed with the original first facial image, thereby achieving the visual effect of face swapping.
The embodiment of the present application provides a video face-swapping method: a picture frame to be processed is obtained; first outer profile points indicating the profile of the first facial image and second outer profile points indicating the profile of the second facial image are determined; according to the outer profile points, a first replacement region and a second replacement region are divided in the picture frame to be processed, and a first target region and a second target region are determined in the duplicated frame, where the first target region has the same shape as the second replacement region and the second target region has the same shape as the first replacement region; finally, the image in the first replacement region of the picture frame to be processed is replaced with the image in the second target region of the duplicated frame, and the image in the second replacement region of the picture frame to be processed is replaced with the image in the first target region of the duplicated frame, obtaining the face-swapped picture frame. The present invention achieves face swapping merely by cutting out and replacing images, without time-consuming three-dimensional modeling, thereby improving the efficiency of video face swapping.
The video face-swapping method provided by the foregoing embodiment uses a feature point detection algorithm to determine the outer profile points of a facial image. Many feature point detection algorithms in the prior art can be used to implement the above steps. One available feature point detection algorithm is the Supervised Descent Method (SDM). The realization principle of SDM is described below for reference; of course, embodiments of the present application can also be realized with other feature point detection algorithms, which will not be enumerated here.
SDM first needs to be trained with sample data to determine the parameters used for calculating the characteristic points. The sample data used for training comprises multiple facial images, each of which has manually calibrated outer profile points.
The training process is as follows: for the pre-calibrated outer profile points, the Scale-Invariant Feature Transform (SIFT) of each outer profile point is computed, yielding the characteristic value of each outer profile point (denoted h_i); the coordinates of an outer profile point are denoted x_i, where i is the index of the outer profile point, a positive integer greater than or equal to 1. The SIFT features of preset initial points are then computed, yielding the characteristic value corresponding to the current coordinates of each initial point (denoted k_j); the current coordinates of an initial point are denoted y_j, where j is the index of the initial point, and the number of initial points is identical to the number of calibrated outer profile points.
Then, according to the characteristic values of the initial points and of the outer profile points, multiple optimal moving parameters are determined as one group of optimal moving parameters. Each optimal moving parameter corresponds to one initial point; in practice an optimal moving parameter comprises two values, denoted (R_1j, B_1j), where the subscript 1 indicates that this optimal moving parameter belongs to the first group of optimal moving parameters and j indicates the corresponding initial point. An initial point can be moved to a new coordinate according to its corresponding optimal moving parameter and its own characteristic value; the calculation formula for the new coordinate is as follows:
y_1j = y_j + k_j × R_1j + B_1j
where y_1j denotes the new coordinate of initial point j calculated with the first group of optimal moving parameters, and y_j is the current coordinate of initial point j. The formula above can be understood as adding a coordinate update value to the current coordinates of initial point j to obtain the new coordinates.
Determining the optimal moving parameters is in fact a matter of repeatedly trying many groups of moving parameters and calculating, for each group, the corresponding loss function, until a group of moving parameters is found whose loss function reaches the minimum; that group of moving parameters is one group of optimal moving parameters. The loss function L can be expressed by the following formula:
L_m = Σ_j ‖Δx_j − (k_j × R_mj + B_mj)‖²
In this formula, j denotes the index of an initial point, and Δx_j denotes the current coordinate difference of initial point j, that is, the difference between the current coordinates of initial point j and the coordinates of its corresponding outer profile point; m denotes which group of optimal moving parameters is currently being calculated (if the first group is currently being determined, m equals 1). The formula can be understood as follows: for each initial point, the coordinate update value calculated from the current moving parameters is subtracted from the current coordinate difference of that initial point, giving the loss value of that initial point; the sum of the squares of all loss values is the loss function currently corresponding to this group of moving parameters.
After the first group of optimal moving parameters is obtained and the initial points are moved to the corresponding new coordinates, the foregoing process is executed again on the moved initial points to determine the second group of optimal moving parameters, then the third group, and so on, until the loss function corresponding to some group of optimal moving parameters is less than or equal to a preset threshold; the entire training process is then complete.
After training, given only the coordinates of the initial points, the coordinates of the outer profile points can be calculated with the multiple groups of optimal moving parameters in the SDM, thereby determining the outer profile points of the facial image.
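Once trained, SDM inference is simply the repeated application of the update y ← y + k·R + B with each group of learned parameters in turn. A minimal sketch follows; the one-dimensional stand-in for the SIFT feature value k_j and the fabricated parameter groups are illustrative assumptions, not values from this embodiment.

```python
import numpy as np

def sdm_predict(initial_points, feature_fn, parameter_groups):
    """Apply each group of optimal moving parameters in turn:
    y_mj = y_j + k_j * R_mj + B_mj, where k_j is the feature value
    extracted at the current coordinates of initial point j."""
    y = np.asarray(initial_points, dtype=float)   # shape (N, 2)
    for R, B in parameter_groups:                 # one group per cascade step
        k = feature_fn(y)                         # shape (N, 1): feature stand-in
        y = y + k * R + B                         # the SDM update formula
    return y

# Toy example: a constant "feature" and two fabricated parameter groups
# that together move every point by (+1.0, +0.5).
feature_fn = lambda y: np.ones((len(y), 1))
groups = [(np.array([0.5, 0.25]), np.zeros(2)),
          (np.array([0.5, 0.25]), np.zeros(2))]
pts = sdm_predict([[0.0, 0.0], [2.0, 3.0]], feature_fn, groups)
```

In a real SDM each R_m is a regression matrix over a high-dimensional SIFT descriptor, but the cascade structure is exactly the loop above.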
It should be noted that SDM can be used not only to determine the outer profile points of a facial image; after training with appropriate sample data, SDM can equally be used to determine the inner profile points of a facial image, namely the characteristic points indicating the facial organs of the facial image, on a principle similar to that of determining the outer profile points.
Another embodiment of the present application provides a video face-swapping method. On the basis of the foregoing embodiment, the present embodiment further determines the inner profile points of the facial images, then judges, according to the inner profile points, whether a facial image includes the image of an object other than the facial organs, and adopts different face-swapping strategies according to the judgment result.
It should first be explained that, when performing video face swapping, it is generally desirable to exchange only the positions of the faces, without changing the positions of objects other than the faces. For example, if the first facial image in the picture frame to be processed includes a glasses image while the second facial image does not, then in the face-swapped picture frame the position of the glasses image should remain unchanged while the positions of the first facial image and the second facial image are exchanged; equivalently, in the face-swapped picture frame the first facial image does not include the glasses image while the second facial image does. To achieve this effect, the present embodiment provides the video face-swapping method shown in Fig. 4, comprising:
S401: obtaining a picture frame to be processed in the video to be processed.
S402: determining the outer profile points and the inner profile points in the picture frame to be processed.
The outer profile points of the picture frame to be processed include the aforementioned multiple first outer profile points and multiple second outer profile points. The inner profile points of the picture frame to be processed include multiple first inner profile points and multiple second inner profile points; the first inner profile points indicate the facial organs of the first facial image, and the second inner profile points indicate the facial organs of the second facial image. The facial organs include the eyes, nose, mouth, eyebrows, ears, and so on.
The positions of the inner profile points in a facial image are illustrated in Fig. 5.
S403: judging whether the image of an object other than the facial organs is present in a facial image.
The judgment of step S403 is made mainly according to the inner profile points.
It should be noted that step S403 performs the above judgment on both the first facial image and the second facial image.
Taking glasses as an example, after the inner profile points are determined, the positions of the nose bridge in the first facial image and in the second facial image can be determined from the first inner profile points and the second inner profile points respectively. The region above the nose bridge and between the left and right eyes is taken as the glasses detection region, and the Haar-like features of the glasses detection region of the first facial image are extracted. Adaptive Boosting (AdaBoost) is then used to judge whether the correlated features of a glasses image are present in the glasses detection region of the first facial image, which is equivalent to judging whether the detection region contains the frame bridge connecting the left and right rims of a pair of glasses. If a frame bridge is detected in the glasses detection region of the first facial image, the first facial image is judged to include a glasses image; otherwise, the first facial image is judged not to include a glasses image. Similarly, the glasses detection region of the second facial image can be determined from the second inner profile points, and whether the second facial image includes a glasses image is judged by the foregoing method.
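A Haar-like feature of the kind used above can be computed cheaply from an integral image. The sketch below evaluates a single two-rectangle (left/right) feature over a detection window; the particular feature layout is an illustrative assumption, and in practice AdaBoost selects and weights many such features rather than thresholding one.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended, so that
    rectangle sums become four lookups."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) from the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half.
    A dark structure (e.g. a frame bridge) in one half of the glasses
    detection region gives a large-magnitude response."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Example: a 4x8 window whose left half is dark and right half bright.
window = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])
ii = integral_image(window)
feat = haar_two_rect(ii, 0, 0, 4, 8)   # strongly negative for this window
```

AdaBoost would combine many such weak responses (over varying positions, sizes and feature types) into the final present/absent decision for the glasses image.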
If neither the first facial image nor the second facial image includes the image of an object other than the facial organs, it is judged that no image of an object other than the facial organs is present in the facial images, and step S404 is executed. If at least one of the first facial image and the second facial image includes the image of an object other than the facial organs, it is judged that such an image is present in the facial images, and step S406 is executed.
Of course, step S403 can detect not only glasses images but also images of other objects that do not belong to the facial organs, such as earring images.
S404: dividing a first replacement region and a second replacement region in the picture frame to be processed, and determining a first target region and a second target region in the duplicated frame.
The method of dividing the first replacement region and the second replacement region is consistent with the foregoing embodiment, as is the method of determining the first target region and the second target region in the duplicated frame.
S405: replacing the image in the first replacement region of the picture frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the picture frame to be processed with the image in the first target region of the duplicated frame, obtaining the exchanged picture frame.
S406: dividing the picture frame to be processed into multiple first sub-images and multiple second sub-images.
The first sub-images are divided according to the first inner profile points of the picture frame to be processed, and the second sub-images according to the second inner profile points of the picture frame to be processed. Each first sub-image obtained by the division includes one and only one facial organ of the first facial image, and every facial organ of the first facial image is included in some first sub-image. Likewise, each second sub-image obtained by the division includes one and only one facial organ of the second facial image, and every facial organ of the second facial image is included in some second sub-image.
The first sub-images and the second sub-images correspond to each other pairwise, and any two corresponding sub-images have the same shape. For any first sub-image, if the facial organ it includes and the facial organ included in a second sub-image are the same facial organ, the two sub-images are said to correspond to each other. For example, the first sub-image including the left eye of the first facial image corresponds to the second sub-image including the left eye of the second facial image; the first sub-image including the mouth of the first facial image corresponds to the second sub-image including the mouth of the second facial image.
The process of determining a sub-image from the inner profile points is as follows:
Taking the left eye of the first facial image as an example: first, the width of the left eyebrow is calculated from the first inner profile points related to the profile of the left eyebrow, and the width of the right eyebrow from the first inner profile points related to the profile of the right eyebrow; the larger of the two is taken as the width of the initial eye image of the left eye.
It should be noted that, as shown in Fig. 2, among the outer profile points of the facial image calculated by the feature point detection algorithm, the outer profile points above the facial image substantially coincide with the contours of the eyebrows; these outer profile points can therefore be used directly as the inner profile points related to the eyebrows. Of course, it is also possible not to use these outer profile points and instead to recalculate inner profile points related to the eyebrow contours.
Then the height of the initial eye image of the left eye is calculated from the inner profile points related to the left eye, and the center point of the left eye is calculated from the inner profile point corresponding to the inner corner of the left eye and the inner profile point corresponding to the outer corner of the left eye. A rectangular region is determined, centered on the center point of the left eye, with the width of the initial eye image of the left eye and the height of the initial eye image of the left eye; the image in this rectangular region is the initial eye image of the left eye.
After the initial eye image of the left eye is obtained, Otsu's algorithm is first used to binarize the initial eye image of the left eye, yielding a binary image of the left eye; dilation is then applied to the binary image of the left eye, yielding the dilated binary image of the left eye. Binarization here refers to converting an image (typically after conversion to grayscale) into a two-valued, black-and-white image.
Dilating a binary image means convolving the binary image with a 3 × 3 kernel; at each convolution position, the pixel value of the center point of the region covered by the kernel is replaced with the local maximum of that region.
Horizontal projection of the dilated binary image of the left eye yields the start and end positions, in the horizontal direction, of the region of interest (ROI) of the left eye, denoted X_a and X_b respectively; vertical projection of the dilated binary image of the left eye yields the start and end positions of the ROI of the left eye in the vertical direction, denoted Y_a and Y_b respectively. The rectangular region centered on the point [(X_a + X_b)/2, (Y_a + Y_b)/2], with width X_b − X_a and height Y_b − Y_a, is the region of interest of the left eye, and the image in the region of interest of the left eye is the first sub-target image corresponding to the left eye.
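The binarize–dilate–project pipeline above can be sketched as follows. To keep the example self-contained, Otsu's threshold, the 3 × 3 dilation and the projections are implemented directly in NumPy; in practice library routines (e.g. OpenCV's thresholding and dilation functions) would be used instead.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, -1.0
    cum = cum_mean = 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def dilate3x3(binary):
    """3x3 dilation: each pixel becomes the maximum of its neighborhood."""
    padded = np.pad(binary, 1)
    h, w = binary.shape
    return np.max([padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)], axis=0)

def eye_roi(gray):
    """Binarize (dark pixels -> 1), dilate, then project horizontally and
    vertically to get the ROI bounds X_a..X_b and Y_a..Y_b."""
    binary = (gray <= otsu_threshold(gray)).astype(np.uint8)
    binary = dilate3x3(binary)
    cols = np.flatnonzero(binary.sum(axis=0))  # horizontal projection
    rows = np.flatnonzero(binary.sum(axis=1))  # vertical projection
    xa, xb = int(cols[0]), int(cols[-1]) + 1
    ya, yb = int(rows[0]), int(rows[-1]) + 1
    return xa, xb, ya, yb  # ROI center: ((xa+xb)/2, (ya+yb)/2)

# Example: a bright 9x9 patch with a dark 3x3 "pupil" at rows/cols 3..5;
# dilation expands the detected region by one pixel on each side.
patch = np.full((9, 9), 200, dtype=np.uint8)
patch[3:6, 3:6] = 10
xa, xb, ya, yb = eye_roi(patch)
```

Treating the dark (sub-threshold) pixels as foreground is an assumption suited to a dark pupil on bright skin; the embodiment does not fix the polarity.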
The first sub-target images corresponding to the other facial organs of the first facial image, and the second sub-target images of the second facial image, can be divided with reference to the above process.
S407: for each facial organ, exchanging the position of the first sub-image including that facial organ with the position of the second sub-image including that facial organ, obtaining the exchanged picture frame.
Step S407 includes: exchanging the position of the first sub-target image including the left eye of the first facial image with the position of the second sub-target image including the left eye of the second facial image; exchanging the position of the first sub-target image including the right eye of the first facial image with the position of the second sub-target image including the right eye of the second facial image; exchanging the position of the first sub-target image including the nose of the first facial image with the position of the second sub-target image including the nose of the second facial image; and so on. That is, the first sub-target image and the second sub-target image corresponding to each facial organ undergo the above position exchange, yielding the exchanged picture frame.
Optionally, the exchanged picture frame obtained after the foregoing steps can be used directly as the face-swapped picture frame. Alternatively, the exchanged picture frame can be further processed by step S408 to finally obtain the face-swapped picture frame.
S408: performing image fusion on the exchanged picture frame with a preset image fusion algorithm, obtaining a fused picture frame.
The fused picture frame can then serve as the face-swapped picture frame.
Specifically, if the exchanged picture frame was obtained by step S405, which directly exchanges the images of the entire first target region and second target region, the image fusion algorithm performs image fusion on the first target region and the second target region in the exchanged picture frame; if the exchanged picture frame was obtained by step S407, which exchanges the sub-target images separately, the image fusion algorithm performs image fusion on the first sub-target images and the second sub-target images in the exchanged picture frame respectively.
The video face-swapping method provided in this embodiment achieves face swapping mainly by exchanging the positions of partial images within a picture frame. After such transposition, the edge of an image at its new position may differ visually from the surrounding image. By performing image fusion on the edges of the transposed images in the exchanged picture frame, this visual difference can be eliminated, obtaining a face-swapped picture frame with a better visual effect.
Optionally, multiple fusion algorithms in the prior art can be used to execute step S408. One common algorithm is the gradient-domain (Poisson) seamless blending algorithm.
The basic principle of this blending algorithm is to compute the gradient field of the target image and the gradient field of the source image separately, then replace the gradient field in the target area of the source image with the gradient field of the target image; from the resulting new gradient field and the boundary constraints of the source image, the image fusing the target image into the source image can be solved, completing the image fusion.
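The principle above can be illustrated in one dimension: replace the interior gradients of the source with those of the target, keep the source's boundary values, and solve the resulting linear system. The reduction to 1-D is an assumption for brevity; the two-dimensional case solves the analogous discrete Poisson equation over the target area.

```python
import numpy as np

def blend_1d(source, target, a, b):
    """Gradient-domain blending on source[a:b]: interior second
    differences must match the target's, while the endpoints keep the
    source's values (the boundary constraint)."""
    n = b - a
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    lap = np.diff(target, 2)               # target's gradient field (Laplacian)
    for i in range(n):
        j = a + i
        if i == 0 or i == n - 1:
            A[i, i] = 1.0                  # boundary constraint from the source
            rhs[i] = source[j]
        else:
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
            rhs[i] = lap[j - 1]            # the replaced gradient field
    out = source.astype(float).copy()
    out[a:b] = np.linalg.solve(A, rhs)
    return out

# Example: paste a linear-ramp target into a constant source. The ramp's
# second differences are zero, so the blended interior stays flat at the
# source's boundary value: the seam disappears.
source = np.full(10, 5.0)
target = np.arange(10, dtype=float)
blended = blend_1d(source, target, 2, 8)
```

This is why blended edges look seamless: the solution is forced to agree with the surrounding source at the boundary while carrying the target's internal structure.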
Specifically in the present embodiment, the target image is the target image or sub-target image cut from the picture frame to be processed, and the source image can be the complete picture frame to be processed, or the picture frame to be processed after the target image or sub-target image has been cut out. The target area of the source image is the position in the source image where the target image is to be pasted; for example, when the positions of a first target image and a second target image are exchanged, the region covered by the first target image in the exchanged picture frame is the target area corresponding to the first target image.
Of course, the above image fusion step is applicable to the video face-swapping method provided by any embodiment of the present application.
In the video face-swapping method provided in this embodiment, inner profile points are determined in the facial images, and whether a facial image includes a glasses image is judged according to the inner profile points; when it is judged that at least one of the first facial image and the second facial image includes a glasses image, the positions of the sub-target images corresponding to each facial organ are exchanged separately, thereby achieving the face swap. With the above technical solution, the method achieves the effect of exchanging only the positions of the faces without changing the position of the glasses.
Further, the method provided in this embodiment also performs image fusion, with an image fusion algorithm, on the exchanged picture frame obtained after the position exchange, and uses the fused picture frame as the face-swapped picture frame, so that the face-swapped picture frames output by the video face-swapping method of this embodiment appear more natural, producing a better visual effect.
Yet another embodiment of the present application provides a video face-swapping method; referring to Fig. 6, the method comprises:
S601: obtaining a picture frame to be processed in the video to be processed.
S602: determining multiple first outer profile points and multiple second outer profile points of the picture frame to be processed.
S603: dividing a first replacement region and a second replacement region of the picture frame to be processed.
S604: calculating the attitude angles of the first facial image and the attitude angles of the second facial image.
A facial image has three different attitude angles, denoted the pitch angle, the yaw angle and the roll angle respectively.
The pitch angle is calculated from the distance between the glabella and the nose tip (denoted mb) and the distance between the nose tip and the chin point, i.e. the bottom of the chin, in other words the lowest point of the face (denoted bx). In general, mb and bx are considered equal when the pitch angle equals 0, i.e. the pitch distance ratio (mb/bx) equals 1. Based on this, by calculating the pitch distance ratio of the target image, the current pitch angle of the target image can be determined from a preset mapping between pitch distance ratio and pitch angle.
The yaw angle can be calculated from the horizontal distance between the glabella and the outer corner of the left eye (denoted Lm) and the distance between the glabella and the outer corner of the right eye (denoted Rm). Similarly to the pitch angle, it can be considered that when the yaw angle equals 0, Lm and Rm are equal, i.e. the yaw distance ratio (Lm/Rm) equals 1. Based on this, by calculating the yaw distance ratio of the target image, the current yaw angle of the target image can be determined from a preset mapping between yaw distance ratio and yaw angle.
The roll angle is defined as the angle between the line connecting the two outer eye corners and the horizontal direction. Accordingly, once the coordinates of the outer corner of the left eye and the outer corner of the right eye are determined, the roll angle can be obtained from the line connecting the two coordinates.
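The three attitude measurements can be sketched from landmark coordinates as follows. Since the conversion from distance ratio to an actual angle in degrees is left to the preset mapping described above, the sketch returns the two ratios together with the roll angle; the specific landmark names are taken from the description, while the image convention (y-axis pointing down) is an assumption.

```python
import numpy as np

def attitude_measurements(glabella, nose_tip, chin, left_outer_eye, right_outer_eye):
    """Return (pitch ratio mb/bx, yaw ratio Lm/Rm, roll angle in degrees).
    Both ratios equal 1 for a frontal face; the preset ratio-to-angle
    mappings convert them to the pitch and yaw angles."""
    g = np.asarray(glabella, dtype=float)
    nt = np.asarray(nose_tip, dtype=float)
    ch = np.asarray(chin, dtype=float)
    le = np.asarray(left_outer_eye, dtype=float)
    re = np.asarray(right_outer_eye, dtype=float)
    mb = np.linalg.norm(nt - g)           # glabella -> nose tip distance
    bx = np.linalg.norm(ch - nt)          # nose tip -> chin point distance
    lm = abs(le[0] - g[0])                # horizontal glabella -> eye corners
    rm = abs(re[0] - g[0])
    # roll: angle of the outer-eye-corner line against the horizontal
    roll = np.degrees(np.arctan2(re[1] - le[1], re[0] - le[0]))
    return mb / bx, lm / rm, roll

# Example: a symmetric, level frontal face.
pitch_ratio, yaw_ratio, roll = attitude_measurements(
    glabella=(50.0, 40.0), nose_tip=(50.0, 60.0), chin=(50.0, 80.0),
    left_outer_eye=(30.0, 45.0), right_outer_eye=(70.0, 45.0))
```

Step S605 would then compare the angles derived from these measurements against the attitude angle threshold.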
S605: judging whether the attitude angles of the first facial image and the attitude angles of the second facial image are less than or equal to an attitude angle threshold.
If all of the three attitude angles of the first facial image and the three attitude angles of the second facial image are less than or equal to the attitude angle threshold, step S606 is executed.
If any one or more of the three attitude angles of the first facial image and the three attitude angles of the second facial image are greater than the attitude angle threshold, step S607 is executed.
In general, the attitude angle threshold can be set to 30 degrees or 45 degrees.
S606: replacing the image in the first replacement region of the picture frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the picture frame to be processed with the image in the first target region of the duplicated frame, obtaining the face-swapped picture frame.
S607: Obtain a first critical frame and a second critical frame saved in advance.
Here, the attitude angle of the first face image of the first critical frame equals the attitude angle threshold, and the attitude angle of the second face image of the second critical frame equals the attitude angle threshold.
The first critical frame is another image frame that satisfies the following condition: the attitude angle judged in step S605 to be greater than the attitude angle threshold has, in the first face image of the first critical frame, a value equal to the attitude angle threshold.
The second critical frame is another image frame that satisfies the following condition: the attitude angle judged in step S605 to be greater than the attitude angle threshold has, in the second face image of the second critical frame, a value equal to the attitude angle threshold.
"Other image frames" refers to the image frames in the video to be processed other than the image frame to be processed.
For each image frame in the video to be processed, if some attitude angle of the first face image in this frame is calculated to be equal to the attitude angle threshold, this frame can be saved directly as the first critical frame; similarly, an image frame in which some attitude angle of the second face image equals the attitude angle threshold can also be saved directly as the second critical frame. In this way, when the other image frames are processed, the first critical frame and the second critical frame can be read directly, which improves the efficiency of video face swapping.
Specifically, if it is judged in step S605 that the pitch angle of the first face image is greater than the attitude angle threshold, then the first critical frame obtained is the image frame, among the other image frames, in which the pitch angle of the first face image equals the attitude angle threshold, and the second critical frame is the image frame, among the other image frames, in which the pitch angle of the second face image equals the attitude angle threshold.
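The caching strategy just described can be sketched as a single scan over the video. The `pose_angles` callback is hypothetical, and an equality tolerance replaces the patent's exact equality, since computed angles rarely hit the threshold exactly:

```python
THRESHOLD = 30.0  # attitude angle threshold, in degrees
TOL = 1.0         # tolerance for "equal to the threshold"

def scan_for_critical_frames(frames, pose_angles):
    """One pass over the video: per face, remember the first frame whose
    largest attitude angle is (approximately) equal to the threshold.
    `pose_angles(frame, face_idx) -> (pitch, yaw, roll)` is an assumed
    interface, not part of the patent."""
    critical = {0: None, 1: None}  # face index -> cached critical frame
    for frame in frames:
        for face in (0, 1):
            if critical[face] is None:
                worst = max(abs(a) for a in pose_angles(frame, face))
                if abs(worst - THRESHOLD) <= TOL:
                    critical[face] = frame
    return critical
```

Once cached, the critical frames can be read directly when any later frame fails the S605 check, which is the efficiency gain the text describes.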
S608: Determine a first critical target region according to the first outer contour points of the first critical frame, and determine a second critical target region according to the second outer contour points of the second critical frame.
Here, the shape of the first critical target region is identical to the shape of the second replacement region of the image frame to be processed, and the shape of the second critical target region is identical to the shape of the first replacement region of the image frame to be processed.
The method of determining the first critical target region is similar to the method of determining the first target region in the duplicated frame, and comprises:
performing an affine transformation on the first outer contour points in the first critical frame so that the first face image of the first critical frame is aligned with the second face image of the image frame to be processed, obtaining the aligned first face image; and
projecting the second replacement region of the image frame to be processed onto the aligned first face image, along the direction perpendicular to the plane of the image frame to be processed, obtaining the first critical target region of the first critical frame.
The method of determining the second critical target region is similar and is not repeated here.
For the affine transformation process, refer to the previous embodiments.
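As a sketch of the alignment step, an affine transform that maps one face's outer contour points onto the other's can be estimated by least squares. This uses plain NumPy; a production implementation would additionally warp the pixels of the face image with the estimated transform (e.g. via an image library), which is omitted here:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine A such that dst ~= A @ [x, y, 1]^T,
    fit over corresponding outer contour points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 solution
    return A.T                                     # 2 x 3 affine matrix

def apply_affine(A, pts):
    """Apply a 2x3 affine transform to an N x 2 point array."""
    pts = np.asarray(pts, dtype=float)
    return (A @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
```

Aligning one face's contour points to the other's this way is what makes the subsequent perpendicular projection of the replacement region meaningful: after the transform, both contours occupy the same image-plane coordinates.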
S609: Replace the image in the first replacement region of the image frame to be processed with the image in the second critical target region of the second critical frame, and replace the image in the second replacement region of the image frame to be processed with the image in the first critical target region of the first critical frame, obtaining the face-swapped image frame.
When face swapping is performed on the image frame to be processed, if the attitude angle of either the first face image or the second face image is too large (greater than the attitude angle threshold), the visual quality of the resulting face-swapped frame will be poor even if the two face images are first aligned with each other by an affine transformation. Therefore, before swapping, this embodiment calculates the attitude angles of the first face image and of the second face image in the image frame to be processed; upon judging that any attitude angle is greater than the attitude angle threshold, it obtains from a critical frame a face image whose corresponding attitude angle equals the attitude angle threshold, and performs the swap with the face image of that critical frame. Because the attitude angle of the critical frame's face image is exactly equal to the attitude angle threshold, a good visual result can be obtained by aligning the critical frame's face image and then swapping, regardless of how large the original attitude angle is.
It should be noted that the video face-swapping method provided by any of the foregoing embodiments of the present application may be combined with the corresponding steps of the video face-swapping methods provided by one or more other embodiments of the present application, so as to obtain new embodiments of the video face-swapping method provided herein; these embodiments all fall within the protection scope of the present application.
In conjunction with the video face-swapping method provided by any of the foregoing embodiments, an embodiment of the present application further provides a video face-swapping device. Referring to FIG. 7, the device of this embodiment includes the following units:
Acquiring unit 701, configured to obtain an image frame to be processed in a video to be processed; the image frame to be processed refers to an image frame in the video to be processed that includes at least two face images and has not yet been face-swapped.
Determination unit 702, configured to determine, using a feature point detection algorithm, multiple first outer contour points and multiple second outer contour points of the image frame to be processed; the multiple first outer contour points indicate the contour of the first face image of the image frame to be processed, and the multiple second outer contour points indicate the contour of the second face image of the image frame to be processed.
Division unit 703, configured to divide a first replacement region and a second replacement region in the image frame to be processed; the first replacement region is divided according to the multiple first outer contour points, and the second replacement region is divided according to the multiple second outer contour points.
The division unit 703 is further configured to determine a first target region and a second target region in a duplicated frame; the first target region is determined according to the multiple first outer contour points, the second target region is determined according to the multiple second outer contour points, the shape of the first target region is identical to the shape of the second replacement region, the shape of the second target region is identical to the shape of the first replacement region, and the duplicated frame is obtained by duplicating the image frame to be processed.
Replacement unit 704, configured to replace the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and to replace the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, obtaining the face-swapped image frame.
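The replacement unit's core operation can be sketched as a masked pixel copy. This assumes (as the earlier projection steps imply) that after alignment the duplicated frame's second target region occupies the same pixel coordinates as the first replacement region, and vice versa; the masks and function are illustrative, not from the patent:

```python
import numpy as np

def swap_regions(frame, duplicated, mask_first, mask_second):
    """Copy the duplicated frame's pixels into the two replacement
    regions of the frame to be processed. Boolean masks mark the first
    and second replacement regions."""
    out = frame.copy()
    out[mask_first] = duplicated[mask_first]    # first replacement <- second target
    out[mask_second] = duplicated[mask_second]  # second replacement <- first target
    return out
```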
Optionally, the determination unit 702 is further configured to:
determine, using a feature point detection algorithm, multiple first inner contour points and multiple second inner contour points of the image frame to be processed; the multiple first inner contour points indicate the contours of the facial organs of the first face image, and the multiple second inner contour points indicate the contours of the facial organs of the second face image.
The video face-swapping device further includes:
Judging unit 705, configured to judge, according to the multiple first inner contour points and the multiple second inner contour points, whether an image of an object other than the facial organs is present in the first face image and in the second face image of the image frame to be processed.
The division unit 703 is further configured to:
if the judging unit judges that an image of an object other than the facial organs is present in at least one of the first face image and the second face image, divide multiple first sub-images of the image frame to be processed according to the multiple first inner contour points, and divide multiple second sub-images of the image frame to be processed according to the multiple second inner contour points; each facial organ of the first face image is contained in the first sub-image corresponding to that organ, each facial organ of the second face image is contained in the second sub-image corresponding to that organ, and the first sub-image and the second sub-image corresponding to the same kind of facial organ have identical shapes.
The replacement unit 704 is further configured to:
if the judging unit judges that an image of an object other than the facial organs is present in at least one of the first face image and the second face image, swap, for each facial organ, the position of the first sub-image containing that organ with the position of the second sub-image containing that organ, obtaining the face-swapped image frame.
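The organ-by-organ position swap can be sketched with axis-aligned rectangular sub-images. Real sub-images would be polygons derived from the inner contour points; the bounding boxes here are a hypothetical simplification, and (as the text requires) the two sub-images corresponding to the same organ must have identical shapes:

```python
import numpy as np

def swap_organ_patches(frame, box_a, box_b):
    """Swap two same-shaped rectangular sub-images of a frame in place.
    Boxes are (top, left, height, width)."""
    ta, la, h, w = box_a
    tb, lb, hb, wb = box_b
    assert (h, w) == (hb, wb), "corresponding sub-images must share a shape"
    patch_a = frame[ta:ta + h, la:la + w].copy()
    frame[ta:ta + h, la:la + w] = frame[tb:tb + h, lb:lb + w]
    frame[tb:tb + h, lb:lb + w] = patch_a
    return frame
```

Repeating this for every facial organ (eyes, nose, mouth, ...) realizes the per-organ swap that preserves occluding objects such as glasses.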
The video face-swapping device further includes:
Computing unit 706, configured to calculate the attitude angle of the first face image and the attitude angle of the second face image.
The judging unit 705 is further configured to judge whether the attitude angle of the first face image is less than or equal to an attitude angle threshold, and to judge whether the attitude angle of the second face image is less than or equal to the attitude angle threshold.
The replacement unit 704 is further configured to:
if the judging unit judges that at least one of the attitude angle of the first face image and the attitude angle of the second face image is greater than the attitude angle threshold, determine a spare first target image and a spare second target image from the face-swapped image frames of the video to be processed; the spare first target image is a first target image whose attitude angle equals the attitude angle threshold, and the spare second target image is a second target image whose attitude angle equals the attitude angle threshold.
If the judging unit 705 judges that at least one of the attitude angle of the first face image and the attitude angle of the second face image is greater than the attitude angle threshold, the replacement unit obtains a first critical frame and a second critical frame saved in advance; the attitude angle of the first face image of the first critical frame equals the attitude angle threshold, and the attitude angle of the second face image of the second critical frame equals the attitude angle threshold;
determines a first critical target region according to the first outer contour points of the first critical frame, and determines a second critical target region according to the second outer contour points of the second critical frame; the shape of the first critical target region is identical to the shape of the second replacement region of the image frame to be processed, and the shape of the second critical target region is identical to the shape of the first replacement region of the image frame to be processed; and
replaces the image in the first replacement region of the image frame to be processed with the image in the second critical target region of the second critical frame, and replaces the image in the second replacement region of the image frame to be processed with the image in the first critical target region of the first critical frame, obtaining the face-swapped image frame.
Optionally, the division unit 703 is specifically configured to:
duplicate the image frame to be processed to obtain the duplicated frame;
move the multiple first outer contour points and the multiple second outer contour points according to a preset weight, obtaining multiple moved first outer contour points and multiple moved second outer contour points;
connect the multiple moved first outer contour points to obtain the first replacement region, and connect the multiple moved second outer contour points to obtain the second replacement region;
perform an affine transformation on the first outer contour points in the duplicated frame so that the first face image of the duplicated frame is aligned with the second face image of the image frame to be processed, obtaining the aligned first face image;
project the second replacement region of the image frame to be processed onto the aligned first face image, along the direction perpendicular to the plane of the image frame to be processed, obtaining the first target region of the duplicated frame;
perform an affine transformation on the second outer contour points in the duplicated frame so that the second face image of the duplicated frame is aligned with the first face image of the image frame to be processed, obtaining the aligned second face image; and
project the first replacement region of the image frame to be processed onto the aligned second face image, along the direction perpendicular to the plane of the image frame to be processed, obtaining the second target region of the duplicated frame.
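The "move the outer contour points by a preset weight" step above can be sketched as scaling each point about the contour centroid; a weight below 1 pulls the replacement region slightly inside the face contour. The weight value is illustrative, not specified by the patent:

```python
import numpy as np

def move_contour_points(points, weight=0.9):
    """Scale each outer contour point about the contour centroid by the
    preset weight (0.9 here is an illustrative value); connecting the
    moved points then yields the replacement region polygon."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return centroid + weight * (pts - centroid)
```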
Optionally, the replacement unit 704 comprises:
Swapping subunit, configured to replace the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and to replace the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, obtaining the swapped image frame;
Fusion unit, configured to perform image fusion on the swapped image frame using a preset image fusion algorithm, obtaining the face-swapped image frame.
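The fusion step can be sketched as feathered alpha blending along the pasted region's border. This is a simple stand-in for the "preset image fusion algorithm", which the patent does not pin down (it might instead be, e.g., Poisson blending):

```python
import numpy as np

def box_blur(mask, k=3):
    """Naive box blur used to feather a binary mask (k odd)."""
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode='edge')
    out = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def feather_blend(base, pasted, mask, k=3):
    """Alpha-blend the pasted face region into the base frame with a
    feathered edge so the replacement boundary is less visible."""
    alpha = box_blur(mask, k)
    return alpha * pasted + (1.0 - alpha) * base
```

Inside the region the result equals the pasted pixels; along the border the blur makes the alpha ramp smoothly from 1 to 0, which suppresses the visible seam that a hard copy-paste would leave.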
For the specific working principles of the video face-swapping device provided by the embodiments of the present application, refer to the corresponding steps of the video face-swapping method provided by any embodiment of the present application, which are not described in detail here.
The present invention provides a video face-swapping device. The acquiring unit 701 obtains the image frame to be processed; the determination unit 702 determines the first outer contour points that indicate the contour of the first face image and the second outer contour points that indicate the contour of the second face image; the division unit 703 divides the first replacement region and the second replacement region in the image frame to be processed according to the outer contour points, and determines the first target region and the second target region in the duplicated frame, where the first target region has the same shape as the second replacement region and the second target region has the same shape as the first replacement region; finally, the replacement unit 704 replaces the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and replaces the image in the second replacement region with the image in the first target region of the duplicated frame, obtaining the face-swapped image frame. The present invention only needs to crop images and replace them to achieve face swapping, without time-consuming three-dimensional modeling, thereby improving the efficiency of video face swapping.
A person skilled in the art can implement or use the present application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
- 1. A video face-swapping method, characterized by comprising:
obtaining an image frame to be processed in a video to be processed, wherein the image frame to be processed includes at least two face images;
determining multiple first outer contour points and multiple second outer contour points of the image frame to be processed, wherein the multiple first outer contour points indicate the contour of a first face image of the image frame to be processed, and the multiple second outer contour points indicate the contour of a second face image of the image frame to be processed;
dividing a first replacement region and a second replacement region in the image frame to be processed, wherein the first replacement region is divided according to the multiple first outer contour points, and the second replacement region is divided according to the multiple second outer contour points;
determining a first target region and a second target region in a duplicated frame, wherein the first target region is determined according to the multiple first outer contour points, the second target region is determined according to the multiple second outer contour points, the shape of the first target region is identical to the shape of the second replacement region, the shape of the second target region is identical to the shape of the first replacement region, and the duplicated frame is obtained by duplicating the image frame to be processed; and
replacing the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, to obtain a face-swapped image frame.
- 2. The method according to claim 1, wherein determining the first target region in the duplicated frame according to the multiple first outer contour points comprises:
performing an affine transformation on the first outer contour points in the duplicated frame so that the first face image of the duplicated frame is aligned with the second face image of the image frame to be processed, to obtain an aligned first face image; and
projecting the second replacement region of the image frame to be processed onto the aligned first face image, along a direction perpendicular to the plane of the image frame to be processed, to obtain the first target region of the duplicated frame;
and wherein determining the second target region in the duplicated frame according to the multiple second outer contour points comprises:
performing an affine transformation on the second outer contour points in the duplicated frame so that the second face image of the duplicated frame is aligned with the first face image of the image frame to be processed, to obtain an aligned second face image; and
projecting the first replacement region of the image frame to be processed onto the aligned second face image, along a direction perpendicular to the plane of the image frame to be processed, to obtain the second target region of the duplicated frame.
- 3. The video face-swapping method according to claim 1, characterized in that, after obtaining the image frame to be processed in the video to be processed, the method further comprises:
determining, using a feature point detection algorithm, multiple first inner contour points and multiple second inner contour points of the image frame to be processed, wherein the multiple first inner contour points indicate the contours of the facial organs of the first face image, and the multiple second inner contour points indicate the contours of the facial organs of the second face image;
and that, before replacing the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame and replacing the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame to obtain the face-swapped image frame, the method further comprises:
judging, according to the multiple first inner contour points and the multiple second inner contour points, whether an image of an object other than the facial organs is present in the first face image and in the second face image of the image frame to be processed;
if an image of an object other than the facial organs is present in at least one of the first face image and the second face image, dividing multiple first sub-images of the image frame to be processed according to the multiple first inner contour points, and dividing multiple second sub-images of the image frame to be processed according to the multiple second inner contour points, wherein each facial organ of the first face image is contained in the first sub-image corresponding to that organ, each facial organ of the second face image is contained in the second sub-image corresponding to that organ, and the first sub-image and the second sub-image corresponding to the same kind of facial organ have identical shapes; and
for each facial organ, swapping the position of the first sub-image containing that organ with the position of the second sub-image containing that organ, to obtain the face-swapped image frame.
- 4. The method according to claim 1, wherein, before replacing the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame and replacing the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame to obtain the face-swapped image frame, the method further comprises:
calculating the attitude angle of the first face image and the attitude angle of the second face image;
judging whether the attitude angle of the first face image is less than or equal to an attitude angle threshold, and judging whether the attitude angle of the second face image is less than or equal to the attitude angle threshold;
if at least one of the attitude angle of the first face image and the attitude angle of the second face image is greater than the attitude angle threshold, obtaining a first critical frame and a second critical frame saved in advance, wherein the attitude angle of the first face image of the first critical frame equals the attitude angle threshold, and the attitude angle of the second face image of the second critical frame equals the attitude angle threshold;
determining a first critical target region according to the first outer contour points of the first critical frame, and determining a second critical target region according to the second outer contour points of the second critical frame, wherein the shape of the first critical target region is identical to the shape of the second replacement region of the image frame to be processed, and the shape of the second critical target region is identical to the shape of the first replacement region of the image frame to be processed; and
replacing the image in the first replacement region of the image frame to be processed with the image in the second critical target region of the second critical frame, and replacing the image in the second replacement region of the image frame to be processed with the image in the first critical target region of the first critical frame, to obtain the face-swapped image frame.
- 5. The method according to claim 1, wherein dividing the first replacement region and the second replacement region in the image frame to be processed comprises:
moving the multiple first outer contour points and the multiple second outer contour points according to a preset weight, to obtain multiple moved first outer contour points and multiple moved second outer contour points; and
connecting the multiple moved first outer contour points to obtain the first replacement region, and connecting the multiple moved second outer contour points to obtain the second replacement region.
- 6. The method according to any one of claims 1 to 5, wherein replacing the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, to obtain the face-swapped image frame, comprises:
replacing the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and replacing the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, to obtain a swapped image frame; and
performing image fusion on the swapped image frame using a preset image fusion algorithm, to obtain the face-swapped image frame.
- 7. A video face-swapping device, characterized by comprising:
an acquiring unit, configured to obtain an image frame to be processed in a video to be processed, wherein the image frame to be processed refers to an image frame in the video to be processed that includes at least two face images and has not been face-swapped;
a determination unit, configured to determine, using a feature point detection algorithm, multiple first outer contour points and multiple second outer contour points of the image frame to be processed, wherein the multiple first outer contour points indicate the contour of a first face image of the image frame to be processed, and the multiple second outer contour points indicate the contour of a second face image of the image frame to be processed;
a division unit, configured to divide a first replacement region and a second replacement region in the image frame to be processed, wherein the first replacement region is divided according to the multiple first outer contour points, and the second replacement region is divided according to the multiple second outer contour points;
the division unit being further configured to determine a first target region and a second target region in a duplicated frame, wherein the first target region is determined according to the multiple first outer contour points, the second target region is determined according to the multiple second outer contour points, the shape of the first target region is identical to the shape of the second replacement region, the shape of the second target region is identical to the shape of the first replacement region, and the duplicated frame is obtained by duplicating the image frame to be processed; and
a replacement unit, configured to replace the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicated frame, and to replace the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicated frame, to obtain a face-swapped image frame.
- 8. The video face-swapping device according to claim 7, wherein the determination unit is further configured to:
determine, using a feature point detection algorithm, multiple first inner contour points and multiple second inner contour points of the image frame to be processed, wherein the multiple first inner contour points indicate the contours of the facial organs of the first face image, and the multiple second inner contour points indicate the contours of the facial organs of the second face image;
the video face-swapping device further comprises:
a judging unit, configured to judge, according to the multiple first inner contour points and the multiple second inner contour points, whether an image of an object other than the facial organs is present in the first face image and in the second face image of the image frame to be processed;
the division unit is further configured to:
if the judging unit judges that an image of an object other than the facial organs is present in at least one of the first face image and the second face image, divide multiple first sub-images of the image frame to be processed according to the multiple first inner contour points, and divide multiple second sub-images of the image frame to be processed according to the multiple second inner contour points, wherein each facial organ of the first face image is contained in the first sub-image corresponding to that organ, each facial organ of the second face image is contained in the second sub-image corresponding to that organ, and the first sub-image and the second sub-image corresponding to the same kind of facial organ have identical shapes; and
the replacement unit is further configured to:
if the judging unit judges that an image of an object other than the facial organs is present in at least one of the first face image and the second face image, swap, for each facial organ, the position of the first sub-image containing that organ with the position of the second sub-image containing that organ, to obtain the face-swapped image frame.
- 9. The video face changing device according to claim 7, wherein the video face changing device further comprises: a computing unit, configured to calculate a pose angle of the first facial image and a pose angle of the second facial image; a judging unit, configured to judge whether the pose angle of the first facial image is less than or equal to a pose angle threshold, and to judge whether the pose angle of the second facial image is less than or equal to the pose angle threshold; the replacement unit being further configured to: if at least one of the pose angle of the first facial image and the pose angle of the second facial image is greater than the pose angle threshold, obtain a pre-saved first critical frame and a pre-saved second critical frame, wherein the pose angle of the first facial image in the first critical frame is equal to the pose angle threshold, and the pose angle of the second facial image in the second critical frame is equal to the pose angle threshold; determine a first critical target region according to first outer contour points of the first critical frame, and determine a second critical target region according to second outer contour points of the second critical frame, wherein the shape of the first critical target region is identical to the shape of the second replacement region of the image frame to be processed, and the shape of the second critical target region is identical to the shape of the first replacement region of the image frame to be processed; and replace the image in the first replacement region of the image frame to be processed with the image in the second critical target region of the second critical frame, and replace the image in the second replacement region of the image frame to be processed with the image in the first critical target region of the first critical frame, to obtain the face-swapped image frame.
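The pose-angle fallback of claim 9 can be sketched as a small selection routine. This is an illustrative reconstruction only: the function and parameter names (`select_replacement_frames`, `pose_first`, and so on) are hypothetical and not from the patent, and pose-angle estimation is assumed to happen elsewhere.

```python
def select_replacement_frames(pose_first, pose_second, threshold,
                              frame_to_process, first_critical, second_critical):
    """Return (source_for_first_region, source_for_second_region).

    When both pose angles are within the threshold, the two face regions
    are swapped within the frame being processed. If either pose angle
    exceeds the threshold, the pre-saved critical frames (whose faces sit
    exactly at the threshold pose) supply the replacement regions instead:
    the second critical frame fills the first replacement region and the
    first critical frame fills the second, mirroring the claim.
    """
    if pose_first > threshold or pose_second > threshold:
        # fall back to the pre-saved critical frames
        return second_critical, first_critical
    # both faces are near-frontal enough: swap within the current frame
    return frame_to_process, frame_to_process
```

One face turned too far (50° > 30° threshold) triggers the fallback; two near-frontal faces do not.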
- 10. The video face changing device according to any one of claims 7 to 9, wherein the replacement unit comprises: an exchange subunit, configured to replace the image in the first replacement region of the image frame to be processed with the image in the second target region of the duplicate frame, and to replace the image in the second replacement region of the image frame to be processed with the image in the first target region of the duplicate frame, to obtain an exchanged image frame; and a fusion unit, configured to perform image fusion on the exchanged image frame using a preset image fusion algorithm, to obtain the face-swapped image frame.
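The region exchange and fusion of claim 10 can be approximated as follows. This is a hedged sketch, not the patented implementation: it swaps two same-shaped rectangular regions via a duplicate copy of the frame, and uses fixed-alpha blending as a stand-in for the unspecified "preset image fusion algorithm" (a real pipeline might use Poisson/seamless cloning instead). All names here are illustrative.

```python
import numpy as np

def swap_and_fuse(frame, region_a, region_b, alpha=0.85):
    """Swap two same-shaped rectangular regions of `frame` and blend.

    Each region is given as (y0, y1, x0, x1). A duplicate of the frame
    preserves the original pixels (as in the claim), so each pasted patch
    is read from the untouched copy, then alpha-blended over the
    destination to soften the seam.
    """
    (ay0, ay1, ax0, ax1), (by0, by1, bx0, bx1) = region_a, region_b
    dup = frame.copy()                          # the duplicate frame
    out = frame.astype(np.float32)
    patch_a = dup[ay0:ay1, ax0:ax1].astype(np.float32)
    patch_b = dup[by0:by1, bx0:bx1].astype(np.float32)
    # paste B's pixels into A's region and vice versa, alpha-blended
    out[ay0:ay1, ax0:ax1] = alpha * patch_b + (1 - alpha) * out[ay0:ay1, ax0:ax1]
    out[by0:by1, bx0:bx1] = alpha * patch_a + (1 - alpha) * out[by0:by1, bx0:bx1]
    return out.astype(frame.dtype)
```

With `alpha=1.0` this degenerates to a hard swap; lowering alpha trades swap fidelity for smoother transitions at the pasted boundaries.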
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910660934.0A CN110363170B (en) | 2019-07-22 | 2019-07-22 | Video face changing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363170A true CN110363170A (en) | 2019-10-22 |
CN110363170B CN110363170B (en) | 2022-02-01 |
Family
ID=68221275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910660934.0A Active CN110363170B (en) | 2019-07-22 | 2019-07-22 | Video face changing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363170B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100128927A1 (en) * | 2008-03-14 | 2010-05-27 | Sony Computer Entertainment Inc. | Image processing apparatus and image processing method |
CN106023063A (en) * | 2016-05-09 | 2016-10-12 | 西安北升信息科技有限公司 | Video transplantation face changing method |
CN108712603A (en) * | 2018-04-27 | 2018-10-26 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022006693A1 (en) * | 2020-07-06 | 2022-01-13 | Polycom Communications Technology (Beijing) Co. Ltd. | Videoconferencing systems with facial image rectification |
CN113160193A (en) * | 2021-04-28 | 2021-07-23 | 贵州电网有限责任公司 | Ultraviolet image segmentation method and system based on bat algorithm and Otsu method with Levy flight characteristics |
CN113222810A (en) * | 2021-05-21 | 2021-08-06 | 北京大米科技有限公司 | Image processing method and image processing apparatus |
CN114972623A (en) * | 2022-01-01 | 2022-08-30 | 昆明理工大学 | Efficient and accurate three-dimensional reconstruction modeling method for female pelvic floor support system |
CN114972623B (en) * | 2022-01-01 | 2024-05-03 | 昆明理工大学 | Efficient and accurate three-dimensional reconstruction model method for female pelvic floor support system |
CN118052723A (en) * | 2023-12-08 | 2024-05-17 | 深圳市石代科技集团有限公司 | Intelligent design system for face replacement |
Also Published As
Publication number | Publication date |
---|---|
CN110363170B (en) | 2022-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363170A (en) | Video face changing method and device | |
US11215845B2 (en) | Method, device, and computer program for virtually adjusting a spectacle frame | |
CN110443885B (en) | Three-dimensional human head and face model reconstruction method based on random human face image | |
CN111414798A (en) | Head posture detection method and system based on RGB-D image | |
CN106709947A (en) | RGBD camera-based three-dimensional human body rapid modeling system | |
CN106652015B (en) | Virtual character head portrait generation method and device | |
CN109671142A (en) | An intelligent makeup method and intelligent makeup mirror | |
CN109559371A (en) | A method and apparatus for three-dimensional reconstruction | |
CN111079676B (en) | Human eye iris detection method and device | |
CN109800653A (en) | A human-body characteristic parameter extraction method and system based on image analysis | |
CN105593896B (en) | Image processing apparatus, image display device, image processing method | |
CN111127642A (en) | Human face three-dimensional reconstruction method | |
CN109215085A (en) | An article counting algorithm using computer vision and image recognition | |
CN110179192A (en) | A measuring system for a three-dimensional human model and its measurement method | |
CN109118455A (en) | An interactive craniofacial restoration method for ancient human skulls based on modern soft-tissue distribution | |
Achenbach et al. | Accurate Face Reconstruction through Anisotropic Fitting and Eye Correction. | |
CN115294301A (en) | Head model construction method, device, equipment and medium based on face image | |
TWI393071B (en) | Image processing method for feature retention and the system of the same | |
CN106446805A (en) | Segmentation method and system for optic cup in eye ground photo | |
CN116966086A (en) | Human back acupoints calibrating method and system based on real-time image optimization | |
CN112381952B (en) | Face contour point cloud model reconstruction method and device based on multiple cameras | |
Gibson et al. | Optic nerve head registration via hemispherical surface and volume registration | |
Huysmans et al. | Automatic construction of correspondences for tubular surfaces | |
CN110514140A (en) | A three-dimensional imaging method, device, equipment and storage medium | |
CN114419255A (en) | Three-dimensional human head model generation method and device fusing real human faces, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||