CN109754364A - A deep-learning-based method for replacing a person's face in video - Google Patents
- Publication number: CN109754364A
- Application number: CN201910050734.3A
- Authority: CN (China)
- Prior art keywords: facial, replaced, expression, deep learning, face
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a deep-learning-based method for replacing a person's face in video. An operator first selects a number of facial images of the person to be replaced and of the target person; these training images are fed into a training model to obtain feature values for both persons. Using those feature values, further image data are retrieved autonomously from the internet, and the facial regions are cropped from the retrieved images. The facial features produced by the output layer are merged into the overall facial contour to form a facial expression, and an expression deep-learning model is built with the facial expression as the visible layer, the variation of the corresponding contour as the first hidden layer, and emotional features as the output layer. Each frame of the video to be processed is fed into the expression deep-learning model to infer the corresponding emotional feature, and the facial expression of the person to be replaced is then swapped for the target person's facial expression carrying the same emotional feature. The invention reduces the image-processing time required for replacing a person in video.
Description
Technical field
The present invention relates to the field of video image processing, and in particular to a deep-learning-based method for replacing a person's face in video.
Background art
After shooting of a film or television production is completed, an actor may need to be replaced for a variety of reasons. Re-shooting is extremely expensive, while the traditional cut-and-paste approach requires staff to matte and paste the replacement frame by frame, which is laborious and time-consuming.

With the growth in available computing power, performing the image replacement with the help of artificial intelligence can greatly shorten the working time.
Patent CN104376589A discloses a method for replacing a person in film and television productions, in which the images of the same person in different poses are replaced in each of the 25 frames per second of the film. The replacement comprises three steps: analysis and feature extraction for the person to be replaced, acquisition and feature extraction for the replacement person, and comparison and replacement. (1) Analysis and feature extraction for the person to be replaced: conventional person tracking and detection techniques first locate, detect, and segment the specified face contained in the original footage; conventional edge-detection and segmentation operators then extract the feature points of the facial features; finally, conventional triangular geometric projection, classification, and clustering methods record the deflection, illumination, color, and shadow characteristics of the person. (2) Acquisition and feature extraction for the replacement person: in a film studio, the replacement actor faces a display with a built-in camera, or a camera is placed toward the display; the actor repeatedly watches the original footage played on the display and imitates the performance of the person to be replaced. When the replacement actor formally performs, the camera records the performance; person-detection and facial-organ analysis techniques then measure and store the size and lighting of the recorded face, locate the feature points of the facial organs, and extract the direction, illumination, and color characteristics of the replacement person. (3) Comparison and replacement: software compares the stored facial image features of the person to be replaced in the original footage with those of the replacement person, performs feature matching, scaling, light compensation, and seamless smoothing one frame at a time, and replaces automatically, so that the replacement face becomes the face in the film.

However, this approach still requires the paste material to be re-shot manually, so the workload remains substantial.
Summary of the invention
The object of the present invention is to overcome the above problems in the prior art by providing a deep-learning-based method for replacing a person's face in video, which reduces the image-processing time required for the replacement.

To achieve the above technical purpose and effect, the present invention is realized through the following technical solution:
A deep-learning-based method for replacing a person's face in video comprises the following steps:
Step S1: collect multiple groups of facial image data of the person to be replaced and of the target person;
Step S2: train a deep-learning model of the facial feature regions, including:
Step S2.1: take the input pixels of a facial image as the visible layer of the model;
Step S2.2: take the boundaries of color blocks as the first hidden layer;
Step S2.3: take the contours composed of those boundaries as the second hidden layer;
Step S2.4: take the facial feature regions as the output layer;
Step S3: merge the facial features of the output layer into the overall facial contour to form a facial expression, and build and train an expression deep-learning model, including:
Step S3.1: take the facial expression as the visible layer;
Step S3.2: take the variation of the contour corresponding to the facial expression as the first hidden layer;
Step S3.3: take emotional features as the output layer;
Step S4: feed each frame of the video to be processed into the expression deep-learning model and infer the corresponding emotional feature;
Step S5: according to the emotional feature obtained in step S4, replace the facial expression of the person to be replaced with the target person's facial expression carrying the same emotional feature;
Step S6: recombine the frames replaced in step S5 into a video.
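The per-frame replacement of steps S4 to S6 can be sketched as a simple loop. The sketch below is illustrative only: `expression_model`, `target_expressions`, and `paste` are hypothetical stand-ins for the trained expression model, the target person's expression bank, and the pasting operation, none of which the patent names.

```python
def replace_faces_in_video(frames, expression_model, target_expressions, paste):
    """Steps S4-S6: classify each frame's emotional feature, paste the target
    person's expression with the same emotion, and recombine the frames."""
    output = []
    for frame in frames:                            # step S4: per-frame analysis
        emotion = expression_model(frame)           # infer the emotional feature
        replacement = target_expressions[emotion]   # target face, same emotion
        output.append(paste(frame, replacement))    # step S5: paste replacement
    return output                                   # step S6: recombined frames
```

Under these assumptions, each frame is classified once and pasted once, so the work grows linearly with the number of frames rather than requiring manual per-frame matting.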
Further, step S1 comprises:
Step S1.1: manually select a number of facial images of the person to be replaced and of the target person;
Step S1.2: feed the images from step S1.1 into the training model to obtain feature values for the person to be replaced and the target person;
Step S1.3: using those feature values, autonomously retrieve and extract further image data from the internet;
Step S1.4: crop the facial regions from the image data.
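Step S1.4 reduces to cutting a face bounding box out of each retrieved image. A minimal NumPy sketch, assuming the bounding box has already been produced by some face detector (the patent does not specify one); the clamping guards against boxes that extend past the image border.

```python
import numpy as np

def crop_face(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the facial region (step S1.4) given a (top, left, height, width)
    bounding box, clamped to the image borders."""
    top, left, h, w = box
    bottom = min(top + h, image.shape[0])
    right = min(left + w, image.shape[1])
    return image[max(top, 0):bottom, max(left, 0):right]
```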
Further, in step S2 the following learning parameters are chosen: a momentum factor of 0.65, a learning rate of 0.25, and initial weights and thresholds drawn at random from [-0.618, 0.618]; the activation functions of the hidden layers and the output layer are tangent-sigmoid and log-sigmoid respectively, and the learning algorithm is back-propagation (BP).
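The parameters above describe a classic momentum back-propagation network: uniform initialization on [-0.618, 0.618], tangent-sigmoid (tanh) hidden units, log-sigmoid output units, learning rate 0.25, momentum factor 0.65. The NumPy sketch below wires exactly those settings into a small two-layer network; the layer sizes and the squared-error loss are illustrative assumptions, since the patent fixes only the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
LR, MOMENTUM = 0.25, 0.65                      # learning rate and momentum factor

def init(shape):
    return rng.uniform(-0.618, 0.618, shape)   # weights/thresholds in [-0.618, 0.618]

W1, b1 = init((2, 4)), init(4)                 # hidden layer: tangent-sigmoid (tanh)
W2, b2 = init((4, 1)), init(1)                 # output layer: log-sigmoid
vW1, vb1, vW2, vb2 = 0.0, 0.0, 0.0, 0.0        # momentum buffers

def forward(X):
    H = np.tanh(X @ W1 + b1)                   # tangent-sigmoid activation
    Y = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # log-sigmoid activation
    return H, Y

def train_step(X, T):
    """One BP update with momentum on mean squared error."""
    global W1, b1, W2, b2, vW1, vb1, vW2, vb2
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)                 # output delta (log-sigmoid derivative)
    dH = (dY @ W2.T) * (1 - H ** 2)            # hidden delta (tanh derivative)
    vW2 = MOMENTUM * vW2 - LR * (H.T @ dY); W2 += vW2
    vb2 = MOMENTUM * vb2 - LR * dY.sum(0);  b2 += vb2
    vW1 = MOMENTUM * vW1 - LR * (X.T @ dH); W1 += vW1
    vb1 = MOMENTUM * vb1 - LR * dH.sum(0);  b1 += vb1
    return float(((Y - T) ** 2).mean())
```

Training on a toy XOR problem shows the momentum-BP update reducing the loss over repeated epochs.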
Further, in step S5 the facial expression of the person to be replaced is swapped, by pasting, for the target person's facial expression with the matching emotional feature.

Further, the face image after pasting is analyzed; if the facial contours do not fit after the face images are aligned, the replacement is instead performed as follows: the target person's facial expression is decomposed into the contours formed by boundaries within the face, and each contour is scaled proportionally and fitted onto the face of the person to be replaced.
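The proportional fitting described above can be illustrated by rescaling the target expression's contour so that its bounding box coincides with the bounding box of the corresponding contour on the face being replaced. Representing a contour as an N x 2 point array is an assumption made for the sketch; the patent does not specify a representation.

```python
import numpy as np

def fit_contour(target_contour: np.ndarray, dest_contour: np.ndarray) -> np.ndarray:
    """Scale and translate the target person's contour proportionally so that
    its bounding box matches the destination contour's bounding box.
    Assumes both contours have nonzero extent on each axis."""
    t_min, t_max = target_contour.min(0), target_contour.max(0)
    d_min, d_max = dest_contour.min(0), dest_contour.max(0)
    scale = (d_max - d_min) / (t_max - t_min)   # per-axis proportional scale
    return (target_contour - t_min) * scale + d_min
```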
The beneficial effect of the invention is that, by training on a large sample of film and television data with deep learning, it reduces the image-processing time required for replacing a person in video.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the video face replacement method of the present invention;
Fig. 2 is a training flow diagram of the deep-learning model of the facial feature regions of the present invention;
Fig. 3 is a training flow diagram of the expression deep-learning model of the present invention;
Fig. 4 is a schematic flow diagram of the collection of multiple groups of facial image data of the person to be replaced and of the target person.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments herein without creative effort fall within the protection scope of the present invention.
As shown in Figs. 1-4, the present invention provides:
A deep-learning-based method for replacing a person's face in video, comprising the following steps:
Step S1: collect multiple groups of facial image data of the person to be replaced and of the target person;
Step S2: train a deep-learning model of the facial feature regions, including:
Step S2.1: take the input pixels of a facial image as the visible layer of the model;
Step S2.2: take the boundaries of color blocks as the first hidden layer;
Step S2.3: take the contours composed of those boundaries as the second hidden layer;
Step S2.4: take the facial feature regions as the output layer;
Step S3: merge the facial features of the output layer into the overall facial contour to form a facial expression, and build and train an expression deep-learning model, including:
Step S3.1: take the facial expression as the visible layer;
Step S3.2: take the variation of the contour corresponding to the facial expression as the first hidden layer;
Step S3.3: take emotional features as the output layer;
Step S4: feed each frame of the video to be processed into the expression deep-learning model and infer the corresponding emotional feature;
Step S5: according to the emotional feature obtained in step S4, replace the facial expression of the person to be replaced with the target person's facial expression carrying the same emotional feature;
Step S6: recombine the frames replaced in step S5 into a video.
Preferably, step S1 comprises:
Step S1.1: manually select a number of facial images of the person to be replaced and of the target person;
Step S1.2: feed the images from step S1.1 into the training model to obtain feature values for the person to be replaced and the target person;
Step S1.3: using those feature values, autonomously retrieve and extract further image data from the internet;
Step S1.4: crop the facial regions from the image data.
Preferably, in step S2 the following learning parameters are chosen: a momentum factor of 0.65, a learning rate of 0.25, and initial weights and thresholds drawn at random from [-0.618, 0.618]; the activation functions of the hidden layers and the output layer are tangent-sigmoid and log-sigmoid respectively, and the learning algorithm is the BP algorithm.
Preferably, in step S5 the facial expression of the person to be replaced is swapped, by pasting, for the target person's facial expression with the matching emotional feature.

Preferably, the face image after pasting is analyzed; if the facial contours do not fit after the face images are aligned, the replacement is instead performed as follows: the target person's facial expression is decomposed into the contours formed by boundaries within the face, and each contour is scaled proportionally and fitted onto the face of the person to be replaced.
A concrete application of this embodiment is as follows:

A number of facial images of the person to be replaced and of the target person are manually selected; the training images are fed into the training model to obtain feature values for both persons; using those feature values, further image data are autonomously retrieved from the internet, and the facial regions are cropped from the retrieved images.

The deep-learning model of the facial feature regions is trained with the input pixels of the facial images as the visible layer of the model, the boundaries of color blocks as the first hidden layer, the contours composed of those boundaries as the second hidden layer, and the facial feature regions as the output layer. The following learning parameters are chosen: a momentum factor of 0.65, a learning rate of 0.25, and initial weights and thresholds drawn at random from [-0.618, 0.618]; the activation functions of the hidden layers and the output layer are tangent-sigmoid and log-sigmoid respectively, and the learning algorithm is the BP algorithm.

The facial features of the output layer are merged into the overall facial contour to form a facial expression, and the expression deep-learning model is built with the facial expression as the visible layer, the variation of the corresponding contour as the first hidden layer, and emotional features as the output layer.

Each frame of the video to be processed is fed into the expression deep-learning model, and the corresponding emotional feature is inferred.

The facial expression of the person to be replaced is swapped, by pasting, for the target person's facial expression carrying the same emotional feature. The face image after pasting is analyzed; if the facial contours do not fit after alignment, the target person's facial expression is decomposed into the contours formed by boundaries within the face, and each contour is scaled proportionally and fitted onto the face of the person to be replaced.

The replaced frames are then recombined into a video.

Compared with the traditional approach, these operations reduce the image-processing time required for replacing a person in video.
In the description of this specification, reference terms such as "one embodiment", "an example", or "a specific example" mean that the specific features, structures, or materials described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the present invention disclosed above are intended only to help illustrate the invention. The preferred embodiments do not describe all details, nor do they limit the invention to the specific embodiments described. Obviously, many modifications and variations can be made in light of this specification. These embodiments were chosen and described in detail in order to better explain the principles and practical application of the invention, so that those skilled in the art can better understand and use it. The invention is limited only by the claims, together with their full scope and equivalents.
Claims (5)
1. A deep-learning-based method for replacing a person's face in video, characterized by comprising the following steps:
Step S1: collecting multiple groups of facial image data of the person to be replaced and of the target person;
Step S2: training a deep-learning model of the facial feature regions, including:
Step S2.1: taking the input pixels of a facial image as the visible layer of the model;
Step S2.2: taking the boundaries of color blocks as the first hidden layer;
Step S2.3: taking the contours composed of those boundaries as the second hidden layer;
Step S2.4: taking the facial feature regions as the output layer;
Step S3: merging the facial features of the output layer into the overall facial contour to form a facial expression, and building and training an expression deep-learning model, including:
Step S3.1: taking the facial expression as the visible layer;
Step S3.2: taking the variation of the contour corresponding to the facial expression as the first hidden layer;
Step S3.3: taking emotional features as the output layer;
Step S4: feeding each frame of the video to be processed into the expression deep-learning model and inferring the corresponding emotional feature;
Step S5: according to the emotional feature obtained in step S4, replacing the facial expression of the person to be replaced with the target person's facial expression carrying the same emotional feature;
Step S6: recombining the frames replaced in step S5 into a video.
2. The deep-learning-based method for replacing a person's face in video according to claim 1, characterized in that step S1 comprises:
Step S1.1: manually selecting a number of facial images of the person to be replaced and of the target person;
Step S1.2: feeding the images from step S1.1 into the training model to obtain feature values for the person to be replaced and the target person;
Step S1.3: using those feature values, autonomously retrieving and extracting further image data from the internet;
Step S1.4: cropping the facial regions from the image data.
3. The deep-learning-based method for replacing a person's face in video according to claim 1, characterized in that in step S2 the following learning parameters are chosen: a momentum factor of 0.65, a learning rate of 0.25, and initial weights and thresholds drawn at random from [-0.618, 0.618]; the activation functions of the hidden layers and the output layer are tangent-sigmoid and log-sigmoid respectively, and the learning algorithm is the BP algorithm.
4. The deep-learning-based method for replacing a person's face in video according to claim 1, characterized in that in step S5 the facial expression of the person to be replaced is swapped, by pasting, for the target person's facial expression with the matching emotional feature.
5. The deep-learning-based method for replacing a person's face in video according to claim 4, characterized in that the face image after pasting is analyzed, and if the facial contours do not fit after the face images are aligned, the replacement is performed as follows:
the target person's facial expression is decomposed into the contours formed by boundaries within the face, and each contour is scaled proportionally and fitted onto the face of the person to be replaced.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910050734.3A CN109754364A (en) | 2019-01-20 | 2019-01-20 | A kind of video character face's replacement method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109754364A true CN109754364A (en) | 2019-05-14 |
Family
ID=66404698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910050734.3A Pending CN109754364A (en) | 2019-01-20 | 2019-01-20 | A kind of video character face's replacement method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754364A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376589A (en) * | 2014-12-04 | 2015-02-25 | 青岛华通国有资本运营(集团)有限责任公司 | Method for replacing movie and TV play figures |
CN107067429A (en) * | 2017-03-17 | 2017-08-18 | 徐迪 | Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced |
CN109063658A (en) * | 2018-08-08 | 2018-12-21 | 吴培希 | A method of it is changed face using deep learning in multi-mobile-terminal video personage |
CN109241889A (en) * | 2018-08-24 | 2019-01-18 | 合肥景彰科技有限公司 | A kind of facial image replacement method and device |
Worldwide applications (1)

- 2019-01-20: CN application CN201910050734.3A, published as CN109754364A, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | |
Application publication date: 20190514 |