CN110267079A - Method and device for replacing a face in a video to be played
- Publication number
- CN110267079A (application CN201810276537.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- key point
- video
- three-dimensional model
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T15/04: 3D image rendering; texture mapping
- G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06V40/165: Human faces; detection, localisation or normalisation using facial parts and geometric relationships
- H04N21/4307: Synchronising the rendering of multiple content streams or additional data on devices
- H04N21/44008: Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4402: Processing of video elementary streams, involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The present disclosure provides a method and device for replacing a face in a video to be played. The method comprises: identifying a first face from the decoded frame data of the video to be played; performing three-dimensional modeling with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played; obtaining a second face; and applying the obtained second face to the three-dimensional model as a texture. The present disclosure thus provides a technique for replacing a face in a video with the face of another person while reproducing features of the face in the original video, such as its orientation and expression.
Description
Technical field
The present disclosure relates to the field of image processing, and in particular to a method and device for replacing a face in a video to be played.
Background technique
In current facial-image editing, face replacement, commonly called "face swapping," is often required. Current face swapping is mainly performed between static images: user A's face is "cut out" of a static image of user A and pasted over user B's face in a static image of user B.
Applying this technique to face replacement in video gives poor results. Because it simply replaces static image content, it cannot reproduce features of the face in the original video, such as its orientation and expression.
Summary of the invention
One objective of the present disclosure is to provide a technique for replacing a face in a video with the face of another person that can reproduce features of the face in the original video, such as its orientation and expression.
According to a first aspect of the embodiments of the present disclosure, a method for replacing a face in a video to be played is disclosed, comprising:
identifying a first face from the decoded frame data of the video to be played;
performing three-dimensional modeling with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played;
obtaining a second face;
applying the obtained second face to the three-dimensional model as a texture.
According to a second aspect of the embodiments of the present disclosure, a device for replacing a face in a video to be played is disclosed, comprising:
a recognition unit for identifying a first face from the decoded frame data of the video to be played;
a three-dimensional modeling unit for performing three-dimensional modeling with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played;
an acquiring unit for obtaining a second face;
an applying unit for applying the obtained second face to the three-dimensional model as a texture.
According to a third aspect of the embodiments of the present disclosure, a device for replacing a face in a video to be played is disclosed, comprising:
a memory storing computer-readable instructions; and
a processor that reads the computer-readable instructions stored in the memory to execute the method described above.
According to a fourth aspect of the embodiments of the present disclosure, a computer program medium is disclosed, on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor of a computer, they cause the computer to execute the method described above.
In the embodiments of the present disclosure, a first face is identified from the decoded frame data of the video to be played. Three-dimensional modeling is performed with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played. As the orientation and expression of the first face change in the video, the three-dimensional model built from the first face's key points follows those changes. However, this model is only the contour figure formed by the key points; it has no color, that is, it lacks texture. The embodiments of the present disclosure therefore apply the second face to the three-dimensional model as its texture. In this way, the face in the resulting picture has the appearance of the second face but the orientation and expression of the first face. The face replacement thus achieves the effect of reproducing the orientation and expression of the original face in the video.
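One way such a texture can follow the model's changing vertices, offered here only as a hedged sketch since the patent does not specify the mapping, is a per-triangle affine warp: each key-point triangle of the second face is mapped onto the corresponding vertex triangle of the three-dimensional model. The function names below are illustrative assumptions.

```python
# Sketch (assumption, not the patent's prescribed algorithm): solve for the
# affine transform that carries a key-point triangle of the second face onto
# the corresponding vertex triangle of the model, so texture pixels can be
# re-sampled at the model's current pose.

def affine_from_triangles(src, dst):
    """Solve for (a,b,c,d,e,f) with dst_x = a*x + b*y + c, dst_y = d*x + e*y + f."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
    if det == 0:
        raise ValueError("degenerate source triangle")

    def solve(v0, v1, v2):
        # Cramer's rule on rows [x_i, y_i, 1] . [p, q, r]^T = v_i
        dp = v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)
        dq = x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)
        dr = x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1) + v0 * (x1 * y2 - x2 * y1)
        return dp / det, dq / det, dr / det

    a, b, c = solve(*(p[0] for p in dst))  # x-row of the affine matrix
    d, e, f = solve(*(p[1] for p in dst))  # y-row of the affine matrix
    return a, b, c, d, e, f

def warp_point(m, p):
    a, b, c, d, e, f = m
    return (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
```

Recomputing the transform each frame, as the model's vertex triangle moves with the first face's key points, is what lets a fixed second-face texture take on the first face's changing orientation and expression.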
Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part through practice of the disclosure.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
Fig. 1 shows an architecture diagram of an application environment of the method for replacing a face in a video to be played according to an example embodiment of the disclosure.
Fig. 2 shows a flowchart of the method for replacing a face in a video to be played according to an example embodiment of the disclosure.
Fig. 3 shows a flowchart of the method for replacing a face in a video to be played according to an example embodiment of the disclosure.
Fig. 4 shows a detailed flowchart of three-dimensional modeling according to an example embodiment of the disclosure.
Fig. 5 shows a detailed flowchart of applying the second face to the three-dimensional model as a texture according to an example embodiment of the disclosure.
Fig. 6A shows the key points of the first face identified in the decoded frame data of the video to be played according to an example embodiment of the disclosure.
Fig. 6B shows the three-dimensional model built from the key points identified in Fig. 6A according to an example embodiment of the disclosure.
Fig. 6C shows a schematic diagram of the second face with its key points according to an example embodiment of the disclosure.
Fig. 6D shows a schematic diagram of the result of applying the second face shown in Fig. 6C as a texture to the three-dimensional model shown in Fig. 6B, according to an example embodiment of the disclosure.
Fig. 7 shows a block diagram of the device for replacing a face in a video to be played according to an example embodiment of the disclosure.
Fig. 8 shows a block diagram of the device for replacing a face in a video to be played according to an example embodiment of the disclosure.
Fig. 9 shows a specific flowchart of the method for replacing a face in a video to be played, applied in a face-swap template scenario, according to an example embodiment of the disclosure.
Fig. 10A shows an interface that lets the user select a face-swap template video, in a face-swap template video scenario, according to an example embodiment of the disclosure.
Fig. 10B shows an interface that lets the user take a photo, in a face-swap template video scenario, according to an example embodiment of the disclosure.
Fig. 11 shows a structural diagram of the device for replacing a face in a video to be played according to an example embodiment of the disclosure.
Specific embodiments
Example embodiments are now described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in various forms and should not be understood as limited to the examples set forth herein; rather, these example embodiments are provided so that the description of the disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the disclosure and are not necessarily drawn to scale. Identical reference numerals in the figures denote identical or similar parts, so repeated description of them is omitted.
Furthermore, the described features, structures or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, many specific details are provided to give a full understanding of the example embodiments of the disclosure. Those skilled in the art will appreciate, however, that the technical solutions of the disclosure may be practiced while omitting one or more of those specific details, or by adopting other methods, constituent elements, steps, and so on. In other cases, well-known structures, methods, implementations or operations are not shown or described in detail, to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows an architecture diagram of an application environment of the method for replacing a face in a video to be played according to an example embodiment of the disclosure.
Replacing a face in a video to be played refers to substituting another face for a face in the video; specifically, in the present disclosure, it refers to substituting a second face for a first face in the video to be played.
The application environment shown in Fig. 1 includes the internet 1, user equipment 1 and user equipment 2. User equipment 2 is the supplier of the video whose face is to be replaced. User 2 records a video with user equipment 2 and puts it on the internet 1. User 1 sees the video on the internet 1 through user equipment 1 and wants to substitute his or her own face for user 2's face in the video, in order to send the result to friends. User 1 then takes a picture of himself or herself with the camera 11 of user equipment 1. The picture is transferred to the image processing unit 12 in user equipment 1. Meanwhile, the image processing unit 12 downloads user 2's video from the internet. Based on user 1's picture and user 2's video, the image processing unit 12 substitutes user 1's face for user 2's face in the video, thereby completing the face swap. The swapped video is shown on the display 13. User 1 can also send the video to his or her friends.
As shown in Fig. 2, according to an embodiment of the present disclosure, a method for replacing a face in a video to be played is provided, comprising:
step 110, identifying a first face from the decoded frame data of the video to be played;
step 120, performing three-dimensional modeling with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played;
step 125, obtaining a second face;
step 130, applying the obtained second face to the three-dimensional model as a texture.
These steps are described in detail below.
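Steps 110 through 130 can be sketched end to end as follows. The callback-based structure and the returned (vertices, texture) pairing are illustrative assumptions; the patent does not prescribe an API.

```python
# Sketch of the overall pipeline (structure is an assumption for illustration):
# step 110 finds the first face, step 120 keeps the model vertices at the
# tracked key points, step 130 pairs the second face's texture with the model.

def replace_face(decoded_frames, second_face_texture, detect_keypoints):
    """For each decoded frame: find the first face's key points, use them as
    model vertices, and pair them with the second face's texture."""
    rendered = []
    for frame in decoded_frames:
        keypoints = detect_keypoints(frame)      # step 110: identify first face
        if keypoints is None:
            rendered.append(frame)               # no face in frame: pass through
            continue
        model_vertices = keypoints               # step 120: vertices track key points
        rendered.append((model_vertices, second_face_texture))  # step 130
    return rendered
```

A caller would supply a real landmark detector as `detect_keypoints` and a photo of the second face as `second_face_texture`; both names here are hypothetical.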
In step 110, the first face is identified from the decoded frame data of the video to be played.
In one embodiment, step 110 includes:
decoding the frames of the video to be played, in order, into decoded frame data;
putting the decoded frame data into a buffer;
identifying the first face from the decoded frame data in the buffer.
Generally, when a video is played, the stored format of the video to be played differs from the format displayed on the screen, so the video must be decoded before it can be played on the display. To improve the smoothness of playback, the decoded video is usually first put into a buffer in decoding order, and the decoded frame data are then taken out of the buffer in order for on-screen playback. Accordingly, the frames of the video to be played are first decoded, in order, into decoded frame data and put into the buffer. Decoded frame data refers to the frame data obtained after decoding a frame of the video to be played. In this embodiment, the decoded frame data are not displayed on the screen directly; the face-swap process is completed first, i.e., the first face in the video to be played is substituted with the second face. Therefore, in this embodiment, the first face needs to be identified from the decoded frame data in the buffer, rather than displaying the decoded frame data on the screen directly.
Among the decoded frame data put into the buffer, some frames contain the first face and some do not. In one embodiment, the decoded frame data of each frame are taken out of the buffer in order and checked for the first face, until decoded frame data containing the first face are found; the first face is then considered to have been identified from the decoded frame data of the video to be played.
In one embodiment, the first face is identified from the decoded frame data in the buffer by face recognition technology.
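The decode-buffer-scan embodiment above can be sketched as follows; the stand-in `decode` and `contains_face` callbacks are assumptions for illustration, standing in for a real video decoder and face detector.

```python
# Sketch of the buffered-scan embodiment (callback names are assumptions):
# decode frames in order into a buffer, then take frames out of the buffer
# in order until decoded frame data containing the first face is found.
from collections import deque

def first_frame_with_face(frames, decode, contains_face):
    buffer = deque(decode(f) for f in frames)  # decoded frame data, in order
    while buffer:
        data = buffer.popleft()                # sequential take-out
        if contains_face(data):
            return data                        # first face identified here
    return None                                # no frame contained the first face
```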
In step 120, three-dimensional modeling is performed with the key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played.
In face recognition technology, key points are the points that matter most for distinguishing different faces: they best represent the differences between faces. Fig. 6A shows the key points of the first face identified in a frame of the video to be played according to an example embodiment of the disclosure. As shown in Fig. 6A, many key points are distributed on the face contour, eyebrow contours, eye contours, nose contour and mouth contour of the face. Taking the mouth contour as an example, as shown in Fig. 6A, key points are placed at the left mouth corner; at 1/3 and 2/3 of the way from the left mouth corner to the lowest point of the lower lip; at the lowest point of the lower lip; at 1/3 and 2/3 of the way from the lowest point of the lower lip to the right mouth corner; and at the right mouth corner. For example, key point 801 denotes the right mouth corner, and key point 802 denotes the point halfway between it and the lowest point vertically below the right ear.
Key points can be predefined by the user.
A three-dimensional model is a stereoscopic model built with three-dimensional software, such as a three-dimensional model of a building, a person, vegetation or machinery. It is a figure formed by connecting a number of vertices in a predetermined order. For example, the three-dimensional model of a building is the figure obtained by plotting each corner of the building as a vertex at an appropriate coordinate position in a three-dimensional coordinate system, and then connecting these vertices in a predetermined order. A three-dimensional model can be static or dynamic: in a static three-dimensional model the vertex coordinates are fixed, whereas in a dynamic three-dimensional model the vertex coordinates change. In the embodiments of the present disclosure, "three-dimensional model" refers to the latter. Vertices are the points through which the contour of the three-dimensional model passes; the model is drawn by passing its contour through these vertices. In a dynamic three-dimensional model, these vertices change dynamically, so that the shape of the model changes dynamically.
Three-dimensional modeling refers to the process of establishing a three-dimensional model.
In this embodiment, the key points of the first face identified in step 110 serve as the vertices. These vertices are connected in a predetermined order to construct the three-dimensional model shown in Fig. 6B. The key points in the model of Fig. 6B are not static: they change as the key points of the first face in the video to be played change from frame to frame.
In one embodiment, performing three-dimensional modeling with the key points of the identified first face as vertices specifically includes:
step 1201, identifying the key points of the first face;
step 1202, determining the coordinates of the identified key points;
step 1203, performing three-dimensional modeling with the coordinates of the identified key points as vertex coordinates.
In step 1201, the key points of the first face are identified by face recognition technology. In one embodiment, step 1202 includes: establishing x and y coordinates with the center of the first face as the origin of the coordinate system; the x and y coordinates of a key point in this coordinate system are the coordinates of that key point of the first face in the frame of the video to be played.
In one embodiment, the center of the first face can be set at the position of the nose tip. In another embodiment, the geometric center of the first face can serve as the center of the first face.
In one embodiment, step 1203 includes: taking the coordinates of the identified key points as vertex coordinates, and connecting the vertices in a predetermined order to establish the three-dimensional model.
The predetermined order is defined in advance. For example, for the lower-lip contour shown in Fig. 6A, the key points are connected in sequence: the left mouth corner; the points at 1/3 and 2/3 of the way from the left mouth corner to the lowest point of the lower lip; the lowest point of the lower lip; the points at 1/3 and 2/3 of the way from the lowest point of the lower lip to the right mouth corner; and the right mouth corner. The key points on the face contour, on the eye contours and so on each likewise have a predetermined connection order. The three-dimensional model connected according to the predetermined order is as shown in Fig. 6B. However, the model of Fig. 6B is not static: each vertex on it is not stationary, and its vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played.
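The face-centered coordinate convention of step 1202 can be sketched as follows; the helper name and the geometric-center default are assumptions for illustration.

```python
# Sketch of step 1202's convention (helper name is an assumption): express
# key points in a coordinate system whose origin is the face center, here the
# geometric center by default, per one embodiment (the nose tip in another).

def face_centered_coords(keypoints, center=None):
    if center is None:
        cx = sum(x for x, _ in keypoints) / len(keypoints)  # geometric center x
        cy = sum(y for _, y in keypoints) / len(keypoints)  # geometric center y
        center = (cx, cy)
    return [(x - center[0], y - center[1]) for x, y in keypoints]
```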
To achieve this synchronization, as shown in Fig. 4, in one embodiment, keeping the vertex coordinates of the three-dimensional model synchronized with the key point coordinates of the first face in the video to be played specifically includes:
step 1204, tracking the key points of the first face in the frames of the video to be played;
step 1205, tracking the coordinates of the key points of the first face in each frame;
step 1206, keeping the vertex coordinates of the three-dimensional model consistent with the tracked coordinates of the corresponding key points in each frame.
In step 1204, tracking the key points of the first face in the frames of the video to be played means that, after the key points of the first face are identified in the decoded frame data of a certain frame of the video to be played in step 1201, each key point is continuously identified in the decoded frame data of the subsequent frames.
For example, after the key point of the right mouth corner of the first face is identified in the decoded frame data of the 4th frame of the video to be played, the key point of the right mouth corner is continuously identified in the decoded frame data of the 5th, 6th, 7th, ... frames.
In one embodiment, step 1205 includes: establishing x and y coordinates with the center of the first face in the frame in which the key point is tracked as the origin of the coordinate system; the x and y coordinates of the tracked key point in this coordinate system serve as the coordinates of the key point of the first face in that frame.
In step 1206, the vertex coordinates of the three-dimensional model are kept consistent with the tracked coordinates of the corresponding key points in each frame.
For example, suppose the coordinates of the key point of the right mouth corner of the first face are identified as (2, -2) in the decoded frame data of the 4th frame, as (2, -2.1) in the decoded frame data of the 5th frame, and as (2, -2.2) in the decoded frame data of the 6th frame, and so on. Then the vertex coordinates at the right mouth corner of the three-dimensional model shown in Fig. 6B change from (2, -2) to (2, -2.1), and then to (2, -2.2), and so on.
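The per-frame update of step 1206 can be sketched as follows, reusing the right-mouth-corner coordinates from the example above; the dictionary-based model representation is an assumption for illustration.

```python
# Sketch of step 1206 (model representation is an assumption): each frame,
# overwrite every model vertex with the tracked coordinate of the
# corresponding key point, keeping the two consistent.

def sync_vertices(model, tracked_coords):
    for name, coord in tracked_coords.items():
        model[name] = coord  # vertex follows the tracked key point
    return model

# Right mouth corner at (2, -2) in the 4th frame, then tracked per frame:
model = {"right_mouth_corner": (2, -2)}
for frame_coords in [{"right_mouth_corner": (2, -2.1)},   # 5th frame
                     {"right_mouth_corner": (2, -2.2)}]:  # 6th frame
    sync_vertices(model, frame_coords)
```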
In step 125, the second face is obtained.
In one embodiment, the second face is a face identified from facial images, of the user or of other people, stored on the user equipment. In this case, step 125 can include:
retrieving a facial image stored on the user equipment;
identifying a face from the facial image;
obtaining the identified face from the facial image as the second face.
In one case, only one facial image is stored on the user equipment; it may be an image of the user performing the face replacement operation, or an image of someone else. The second face identified from the retrieved facial image is then the only face.
In another case, multiple facial images may be stored on the user equipment, but the user performing the face replacement operation needs to substitute a specific face among them (his or her own, or some other person's) for the face in the video. In this case, obtaining the identified face from the facial image includes: obtaining the identified face from the facial image only if it is the face of the specific user.
Here, the specific user can be the user performing the face replacement operation, or another user. For example, user A may want to use user C's face in a stored photo of user C to replace user B's face in the video. The specific user is then user C.
In one embodiment, the specific user can be designated by the user operating the face replacement. In this embodiment, obtaining the identified face from the facial image includes:
receiving a designation of the specific user;
obtaining the identified face from the facial image if it is the face of the designated specific user.
The benefit of this embodiment is increased flexibility of face selection during face replacement: the user performing the face replacement operation can designate different specific users to accomplish different face replacements.
In one embodiment, receiving the designation of the specific user includes:
displaying a face identity list corresponding to the faces identified from the facial images;
receiving a selection of a face identity from the face identity list as the designation of the specific user.
In this embodiment, each time the user takes a photo, the user equipment may prompt the user to enter the identity of the face in the photo. The user equipment stores the photo together with the corresponding face identity. Then, after faces are identified from the facial images stored on the user equipment, the corresponding face identities can be retrieved and placed in a face identity list for display. The user performing the face replacement can simply select a face identity from the list to designate the specific user. The advantage of this embodiment is that the user performing the face replacement operation can easily designate the second face used for replacement, improving replacement efficiency.
In another embodiment, receiving the designation of the specific user includes:
displaying a list of face thumbnails of the faces identified from the facial images;
receiving a designation of a face thumbnail in the face thumbnail list as the designation of the specific user.
The advantage of this embodiment is that the user does not need to enter the identity of the face in each photo on the user equipment at the time it is taken. After faces are identified from the facial images stored on the user equipment, they are turned into thumbnails and displayed in a face thumbnail list. The user performing the face replacement can designate a face thumbnail in the list, which likewise designates the specific user. In this way, even if the user never entered the face identities of the photos on the user equipment, the specific user can still be designated conveniently and efficiently via the thumbnails.
In another embodiment, the second face is a face extracted from a facial image that the user is prompted to shoot. That is, the second face is not extracted from a stored image; instead, an image is captured on the spot and the second face is extracted from it. The benefit of this embodiment is that when the user is not satisfied with the face photos stored on the user equipment, a facial image can be captured on the spot, enhancing the interactivity of the augmented reality experience.
In this embodiment, step 125 may include:
displaying a user photo option;
capturing an image in response to the user selecting the photo option;
identifying a face from the captured image;
obtaining the identified face from the captured image as the second face.
In one embodiment, displaying the user photo option includes displaying a camera icon. If the user performing the face replacement touches the camera icon displayed on the screen, image capture starts.
In another embodiment, displaying the user photo option includes displaying a text prompt to take a photo. If the user clicks or touches the text prompt, image capture starts.
Those skilled in the art will appreciate that the user photo option may also take other forms. For example, a predetermined region on the screen may represent the photo option; if the user touches that region, image capture starts. Those skilled in the art may conceive of still other forms of the photo option.
In addition, multiple faces may be identified from the captured image. In this case, in one embodiment, obtaining the identified face from the captured image as the second face specifically includes:
extracting and displaying the multiple identified faces;
in response to the user selecting one of the extracted faces, taking the selected face as the second face.
For example, user A wants to replace the face of user B in the video with the face of user C, but at the moment of shooting, user D happens to be standing close to user C and enters the captured image. The faces of both user C and user D are then identified from the captured image, and both are displayed to user A. User A selects, for example by touch, the listed face of user C rather than the face of user D. In this way, the face of user C, not that of user D, becomes the second face.
The advantage of this embodiment is that when someone else intrudes into the captured picture, the user performing the face replacement can still conveniently designate the face of the person actually intended for the replacement.
In step 130, the second face is applied to the three-dimensional model as a texture.
In computer graphics, the term "texture" in the ordinary sense covers both the uneven grooves of an object's surface and the colored patterns on a smooth object surface. In this disclosure it refers to the latter. A texture is a pixel array with a number of rows and columns; each intersection of a row and a column corresponds to a pixel. Each pixel has four values R, G, B, A, representing the red (R) value, green (G) value, blue (B) value, and opacity (α) at the corresponding position. In its outward form, a texture resembles a color image. The second face shown in Fig. 6C can be regarded as a picture, but it can equally be regarded as a texture, i.e. a pixel array of several rows and columns in which each pixel has the four values R, G, B, A.
Since the key points of the three-dimensional model established in step 120 change dynamically, the model reflects the changes in orientation and expression of the first face in the video to be played. However, the three-dimensional model is merely an outline connected by key points; it has no facial color and lacks texture. Therefore, the embodiment of the disclosure applies the second face to the three-dimensional model as a texture. In this way, the resulting face has the appearance of the second face but the orientation and expression of the first face, achieving the effect of preserving the orientation and expression of the original face in the video during face replacement.
In one embodiment, step 130 may include:
filling the face edge pixels of the three-dimensional model with the colors of the edge pixels of the second face in the texture;
filling the face interior pixels of the three-dimensional model with the colors of the interior pixels of the second face in the texture.
A texture can be regarded as a pixel array of several rows and columns. The face edge pixels in the texture are the pixels of this array through which the edge of the face passes. The three-dimensional model is a continuously changing model whose vertex coordinates are always kept synchronized with the key point coordinates of the first face in the video to be played; its outer contour is still the shape of a face, and that shape is likewise made up of pixels. The face edge pixels of the three-dimensional model are the pixels on its outer contour, except that before color filling these pixels are blank.
In one embodiment, if the number of edge pixels of the second face in the texture equals the number of face edge pixels of the three-dimensional model, the color of each edge pixel of the second face in the texture can be filled, one to one, into the blank face edge pixels of the three-dimensional model.
In another embodiment, if the number of edge pixels of the second face in the texture is not equal to the number of face edge pixels of the three-dimensional model, the colors of the edge pixels of the second face in the texture can be transformed by pixel interpolation. Pixel interpolation is an existing method and its details are not repeated here. After the transformation, the number of edge pixels of the second face in the texture equals the number of face edge pixels of the three-dimensional model, and the colors of the edge pixels of the second face in the transformed texture can then be filled, one to one, into the blank face edge pixels of the three-dimensional model. For example, suppose the texture has 100 edge pixels for the second face while the three-dimensional model has 200 face edge pixels. First, by pixel interpolation, a pixel is inserted between each pair of adjacent edge pixels of the second face in the texture, and the color of each inserted pixel (including its red, green, blue, and opacity values) is estimated by pixel interpolation. The number of edge pixels of the second face in the texture thus becomes 200, and their colors are filled, one to one, into the blank face edge pixels of the three-dimensional model. As another example, suppose the texture has 100 edge pixels for the second face while the three-dimensional model has 150 face edge pixels. Take 3 adjacent pixels on the edge of the second face in the texture, with 2 pixel intervals between them, and turn these 2 intervals into 3 intervals, i.e. replace the single middle pixel between the two endpoints with 2 inserted pixels. The colors of the 2 inserted pixels can then be computed from the color values of the 3 adjacent edge pixels. In this way, the number of edge pixels of the second face in the texture becomes 150, and their colors are filled, one to one, into the blank face edge pixels of the three-dimensional model.
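The resampling described above, stretching (or shrinking) a run of edge colors to match a different pixel count, amounts to linear interpolation of the R, G, B, α values along the edge. A minimal sketch, using plain Python lists of RGBA tuples and not tied to any particular graphics library:

```python
def resample_colors(colors, target_len):
    """Linearly interpolate a list of (R, G, B, A) tuples to target_len entries."""
    if target_len == len(colors):
        return list(colors)
    result = []
    for j in range(target_len):
        # Position of target pixel j in the source list's coordinate system.
        pos = j * (len(colors) - 1) / (target_len - 1)
        i = int(pos)
        frac = pos - i
        if i + 1 < len(colors):
            a, b = colors[i], colors[i + 1]
            result.append(tuple(x * (1 - frac) + y * frac for x, y in zip(a, b)))
        else:
            result.append(colors[i])
    return result

edge = [(0, 0, 0, 255), (100, 100, 100, 255)]  # 2 edge pixels for brevity
print(resample_colors(edge, 3)[1])  # midpoint: (50.0, 50.0, 50.0, 255.0)
```

The same routine covers both directions in the text's examples: 100 source pixels to 200 targets, or 100 to 150.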
Then, the filling of the face interior pixels of the three-dimensional model with the colors of the interior pixels of the second face in the texture begins.
A texture can be regarded as a pixel array of several rows and columns. The face interior pixels in the texture are the pixels of this array within the region enclosed by the edge of the face. The three-dimensional model is a continuously changing model whose vertex coordinates are always kept synchronized with the key point coordinates of the first face in the video to be played; its outer contour is still the shape of a face, and that shape is likewise made up of pixels. The face interior pixels of the three-dimensional model are the pixels in the region enclosed by its outer contour. Before color filling, these pixels are blank.
In one embodiment, filling the face interior pixels of the three-dimensional model with the colors of the interior pixels of the second face in the texture may likewise use pixel interpolation.
In one example, since the texture is a pixel array, all the pixels of a certain row inside the second face can be taken from the texture. The interior of the face of the three-dimensional model is also made up of pixels, so all the blank pixels of the corresponding row inside the face can likewise be taken. If the number of pixels in the row taken from the texture equals the number of pixels in the corresponding row inside the face of the three-dimensional model, the colors of the pixels of that row in the texture can be filled, one to one, into the blank pixels of the corresponding row inside the face of the three-dimensional model. If the two numbers are not equal, the colors of the pixels of the row taken from the texture can be transformed by pixel interpolation. After the transformation, the number of pixels in the row taken from the texture equals the number of pixels in the corresponding row inside the face of the three-dimensional model, and the colors of the pixels of that row in the transformed texture can be filled, one to one, into the blank pixels of the corresponding row inside the face of the three-dimensional model.
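The row-by-row interior fill can be sketched as a scanline loop: each texture row is stretched or shrunk to the width of the corresponding model row. For brevity this sketch uses single gray values per pixel and a nearest-neighbor pick instead of the full R, G, B, α interpolation; both simplifications are illustrative choices, not part of the disclosure:

```python
def fill_row(texture_row, model_row_len):
    """Resize one texture row to model_row_len samples by nearest-neighbor pick."""
    return [texture_row[round(j * (len(texture_row) - 1) / (model_row_len - 1))]
            for j in range(model_row_len)]

def fill_interior(texture_rows, model_row_lens):
    """Fill each blank model row from the corresponding texture row."""
    return [fill_row(t, n) for t, n in zip(texture_rows, model_row_lens)]

tex = [[10, 20, 30], [40, 50, 60]]   # two interior rows of the texture
print(fill_interior(tex, [5, 3]))
# [[10, 10, 20, 30, 30], [40, 50, 60]]
```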
In another embodiment, as shown in Fig. 5, step 130 includes:
Step 1301, filling the pixels of the corresponding vertices in the three-dimensional model with the colors of the key point pixels in the texture;
Step 1302, filling the pixels between the corresponding vertices in the three-dimensional model based on the colors of the pixels between key points in the texture.
In step 1301, the pixels of the corresponding vertices in the three-dimensional model are filled with the colors of the key point pixels in the texture. In one embodiment, the color of a pixel includes the red (R) value, green (G) value, blue (B) value, and opacity (α) of the pixel.
For example, the pixel at the right mouth corner of the three-dimensional model is filled with the R, G, B, α values of the pixel at the right mouth corner of the second face in the texture, so that the pixel takes on those R, G, B, α values. In this way, the pixel at the right mouth corner of the three-dimensional model built with the key points of the first face as vertices has the R, G, B, α values of the right mouth corner pixel in the texture of the second face.
After the pixels of all vertices in the three-dimensional model are filled in step 1301, the regions of the three-dimensional model other than the vertices remain without color. These regions are filled in step 1302. In step 1302, the pixels between the corresponding vertices in the three-dimensional model are filled based on the colors of the pixels between key points in the texture. For example, in the texture there are 2 pixels P1 and P2 between the pixels of two key points K1 and K2, with R, G, B, α values (R1, G1, B1, α1) and (R2, G2, B2, α2) respectively. In the three-dimensional model, however, there are 3 pixels between the pixels of the corresponding vertices. The R, G, B, α values of these 3 pixels can then be computed from (R1, G1, B1, α1) and (R2, G2, B2, α2) by pixel interpolation.
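The K1/K2 example can be worked through numerically. The sketch below linearly interpolates between two known in-between colors to produce the three in-between pixels the model requires; linear interpolation is one common choice of "pixel interpolation", which the disclosure leaves unspecified:

```python
def interp_between(c1, c2, n):
    """Return n RGBA colors evenly interpolated between c1 and c2,
    endpoints included."""
    out = []
    for j in range(n):
        t = j / (n - 1) if n > 1 else 0.0
        out.append(tuple(a * (1 - t) + b * t for a, b in zip(c1, c2)))
    return out

P1 = (200, 150, 120, 255)   # color of a pixel between key points, near K1
P2 = (100, 50, 20, 255)     # color of a pixel between key points, near K2
# The model needs 3 pixels between the corresponding vertices:
print(interp_between(P1, P2, 3))
# [(200.0, 150.0, 120.0, 255.0), (150.0, 100.0, 70.0, 255.0), (100.0, 50.0, 20.0, 255.0)]
```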
Compared with the preceding embodiment of edge pixel filling and face interior pixel filling, this embodiment of filling the key points first and then the pixels between key points can make the replaced facial expression more vivid and realistic. The key points are the points of the face that best represent its features; filling them with pixel colors first and then filling the pixels between them better reproduces the characteristic changes of the face under various expressions, making the displayed facial expression closer to reality.
In one embodiment, as shown in Fig. 3, the method further includes:
Step 140, drawing a display frame based on the decoded frame data;
Step 150, drawing the result of applying the second face to the three-dimensional model as a texture onto the display frame, covering the first face.
These steps are described in detail below.
In step 140, a display frame is drawn based on the decoded frame data. The decoded frame data are the frame data into which the video to be played containing the first face is decoded. A display frame is a picture for display, corresponding to a frame in the video to be played. The display frame drawn from the decoded frame data is therefore a display frame containing the first face.
In step 150, the result of applying the second face to the three-dimensional model as a texture is drawn onto the display frame, covering the first face. That result is a face that has the texture (color pattern) of the second face but the orientation and expression of the first face, and it has the same size as the first face on the display frame. Drawing it onto the display frame covers the first face, yielding a video to be played in which the first face is replaced by the second face, as shown in Fig. 6D. The second face in the video maintains the orientation and expression of the first face.
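Drawing the textured result over the display frame is, at the pixel level, alpha compositing: where the rendered face is opaque it covers the first face, and where it is transparent the display frame shows through. A minimal per-pixel sketch, assuming the standard "source over" rule, which the disclosure does not name explicitly:

```python
def over(src, dst):
    """Composite one RGBA src pixel over an RGB dst pixel (alpha in 0..255)."""
    a = src[3] / 255.0
    return tuple(round(s * a + d * (1 - a)) for s, d in zip(src[:3], dst))

face_px = (180, 120, 100, 255)   # opaque pixel of the rendered second face
frame_px = (30, 30, 30)          # pixel of the display frame (first face)
print(over(face_px, frame_px))   # (180, 120, 100): the first face is covered
```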
Fig. 9 shows a specific flowchart of the method for replacing a face in a video to be played according to an example embodiment of the disclosure, applied to a template face-swapping scenario.
In step 901, the terminal device displays face-swapping template videos for the user to choose from, as shown in Fig. 10A.
In step 902, the user selects a face-swapping template video.
In step 903, the terminal device displays an interface prompting the user to take a photo or select one. Fig. 10B shows an interface prompting the user to take a photo.
In step 904, the user takes a photo or selects one.
In step 905, the terminal device uses a decoder to decode the frames of the face-swapping template video into decoded frame data, which is placed in a buffer awaiting on-screen display.
In step 906, the terminal device identifies the face in the face-swapping template video from the decoded frame data in the buffer.
In step 907, the terminal device identifies the key points of the face in the face-swapping template video.
In step 908, the terminal device determines the coordinates of the identified key points.
In step 909, the terminal device performs three-dimensional modeling using the identified key point coordinates as vertex coordinates.
In step 910, the terminal device tracks the key points of the face in the face-swapping template video.
In step 911, the terminal device tracks the coordinates of the key points of the face in each frame.
In step 912, the terminal device keeps the vertex coordinates of the three-dimensional model consistent with the tracked coordinates of the corresponding key points in each frame.
In step 913, the terminal device uses the user's face in the photo taken or selected by the user as the texture, and fills the pixels of the corresponding vertices in the three-dimensional model with the colors of the key point pixels in the texture.
In step 914, the terminal device fills the pixels between the corresponding vertices in the three-dimensional model based on the colors of the pixels between key points in the texture.
In step 915, the terminal device draws a display frame based on the decoded frame data.
In step 916, the terminal device draws the result of applying the user's face to the three-dimensional model as a texture onto the display frame, covering the face in the template video.
In step 917, the terminal device displays on screen the result obtained after the result of applying the user's face to the three-dimensional model as a texture has been drawn onto the display frame.
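The flow of steps 905-917 can be summarized as a per-frame loop. Every function here is a hypothetical stand-in for the operation the corresponding step describes, not an API from the disclosure:

```python
class Model:
    def __init__(self, vertices):
        self.vertices = vertices

def face_swap_pipeline(frames, texture, detect_kps, fill_model, draw):
    """Sketch of steps 905-917: decode -> track -> sync model -> texture -> draw."""
    model = None
    for frame in frames:                    # steps 905-906: decoded frame data
        kps = detect_kps(frame)             # steps 907-908, 910-911: key points
        if model is None:
            model = Model(kps)              # step 909: build the model once
        else:
            model.vertices = kps            # step 912: keep vertices in sync
        fill_model(model, texture)          # steps 913-914: apply user face
        yield draw(frame, model)            # steps 915-917: draw and display

# Toy run with stand-ins:
frames = ["f0", "f1", "f2"]
out = list(face_swap_pipeline(
    frames, texture="user-face",
    detect_kps=lambda f: [(0, 0)],
    fill_model=lambda m, t: None,
    draw=lambda f, m: f + "-swapped"))
print(out)  # ['f0-swapped', 'f1-swapped', 'f2-swapped']
```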
As shown in Fig. 7, according to one embodiment of the disclosure, a device for replacing a face in a video to be played is provided, comprising:
a recognition unit 710 for identifying a first face from decoded frame data of a video to be played;
a three-dimensional modeling unit 720 for performing three-dimensional modeling using the key points of the identified first face as vertices to obtain a three-dimensional model, keeping the vertex coordinates of the three-dimensional model synchronized with the key point coordinates of the first face in the video to be played;
an acquiring unit 725 for obtaining a second face;
an applying unit 730 for applying the obtained second face to the three-dimensional model as a texture.
Optionally, as shown in Fig. 8, the device further includes:
a first drawing unit 740 for drawing a display frame based on the decoded frame data;
a second drawing unit 750 for drawing the result of applying the second face to the three-dimensional model as a texture onto the display frame, covering the first face.
Optionally, the three-dimensional modeling unit 720 is further configured to:
identify the key points of the first face;
determine the coordinates of the identified key points;
perform three-dimensional modeling using the coordinates of the identified key points as vertex coordinates.
Optionally, the three-dimensional modeling unit 720 is further configured to:
track the key points of the first face in the frames of the video to be played;
track the coordinates of the key points of the first face in each frame;
keep the vertex coordinates of the three-dimensional model consistent with the tracked coordinates of the corresponding key points in each frame.
Optionally, the applying unit 730 is further configured to:
fill the pixels of the corresponding vertices in the three-dimensional model with the colors of the key point pixels in the texture;
fill the pixels between the corresponding vertices in the three-dimensional model based on the colors of the pixels between key points in the texture.
Optionally, the color of a pixel includes the red (R) value, green (G) value, blue (B) value, and opacity (α) of the pixel.
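The unit structure of Figs. 7-8 maps naturally onto a small class whose methods mirror the units; the helpers passed to the constructor are hypothetical stand-ins for the operations of the corresponding method steps:

```python
class FaceReplacer:
    """Sketch of the device of Fig. 7; each attribute mirrors one unit."""

    def __init__(self, recognize, model_3d, acquire, apply_texture):
        self.recognize = recognize          # recognition unit 710
        self.model_3d = model_3d            # three-dimensional modeling unit 720
        self.acquire = acquire              # acquiring unit 725
        self.apply_texture = apply_texture  # applying unit 730

    def replace(self, decoded_frame):
        first_face = self.recognize(decoded_frame)
        model = self.model_3d(first_face)
        second_face = self.acquire()
        return self.apply_texture(model, second_face)

r = FaceReplacer(recognize=lambda f: "face1", model_3d=lambda f: "model",
                 acquire=lambda: "face2", apply_texture=lambda m, t: (m, t))
print(r.replace("frame"))  # ('model', 'face2')
```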
A device 9 for replacing a face in a video to be played according to an embodiment of the disclosure is described below with reference to Fig. 11. The device 9 shown in Fig. 11 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 11, the device 9 for replacing a face in a video to be played takes the form of a general-purpose computing device. Its components may include but are not limited to: at least one processing unit 810, at least one storage unit 820, and a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810).
The storage unit stores program code that can be executed by the processing unit 810, so that the processing unit 810 performs the steps of the various exemplary embodiments of the present invention described in the exemplary-method section of this specification. For example, the processing unit 810 may perform the steps shown in Fig. 2.
The storage unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set of (at least one) program modules 8205, including but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 830 may represent one or more of several classes of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus structures.
The device 9 for replacing a face in a video to be played may also communicate with one or more external devices 700 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the device 9, and/or with any device (such as a router, a modem, etc.) that enables the device 9 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 650. Moreover, the device 9 may also communicate, through a network adapter 860, with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet). As shown, the network adapter 860 communicates with the other modules of the device 9 through the bus 830. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used with the device 9, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to embodiments of the disclosure may be embodied as a software product, which may be stored on a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard disk) or on a network, and which includes instructions that cause a computing device (such as a personal computer, a server, a terminal apparatus, or a network device) to perform the method according to embodiments of the disclosure.
In an exemplary embodiment of the disclosure, a computer program medium is also provided, on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor of a computer, they cause the computer to perform the method described in the method embodiment section above.
According to one embodiment of the disclosure, a program product for implementing the method in the above method embodiments is also provided, which may take the form of a portable compact disc read-only memory (CD-ROM) containing program code and may run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in connection with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several modules or units of the device for performing actions are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied in multiple modules or units.
In addition, although the steps of the method of the disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the steps shown must be performed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step, and/or one step may be decomposed into multiple steps.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily conceive of other embodiments of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The description and examples are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the appended claims.
Claims (14)
1. A method for replacing a face in a video to be played, characterized by comprising:
identifying a first face from decoded frame data of the video to be played;
performing three-dimensional modeling using key points of the identified first face as vertices to obtain a three-dimensional model, keeping vertex coordinates of the three-dimensional model synchronized with key point coordinates of the first face in the video to be played;
obtaining a second face;
applying the obtained second face to the three-dimensional model as a texture.
2. The method according to claim 1, characterized by further comprising:
drawing a display frame based on the decoded frame data;
drawing the result of applying the second face to the three-dimensional model as a texture onto the display frame, covering the first face.
3. The method according to claim 1, characterized in that performing three-dimensional modeling using the key points of the identified first face as vertices specifically comprises:
identifying the key points of the first face;
determining the coordinates of the identified key points;
performing three-dimensional modeling using the coordinates of the identified key points as vertex coordinates.
4. The method according to claim 1, characterized in that keeping the vertex coordinates of the three-dimensional model synchronized with the key point coordinates of the first face in the video to be played specifically comprises:
tracking the key points of the first face in the frames of the video to be played;
tracking the coordinates of the key points of the first face in each frame; and
keeping the vertex coordinates of the three-dimensional model consistent with the tracked coordinates of the corresponding key points in each frame.
5. The method according to claim 1, characterized in that applying the acquired second face to the three-dimensional model as a texture specifically comprises:
filling the pixels at the corresponding vertices of the three-dimensional model with the colors of the pixels at the key points in the texture; and
filling the pixels between the corresponding vertices of the three-dimensional model based on the colors of the pixels between the key points in the texture.
6. The method according to claim 5, characterized in that the color of a pixel comprises the red (R) chrominance, green (G) chrominance, blue (B) chrominance, and opacity (α) of the pixel.
7. An apparatus for replacing a face in a video to be played, characterized in that it comprises:
a recognition unit, configured to identify a first face from decoded frame data of the video to be played;
a three-dimensional modeling unit, configured to perform three-dimensional modeling using key points of the identified first face as vertices, to obtain a three-dimensional model whose vertex coordinates are kept synchronized with the key point coordinates of the first face in the video to be played;
an acquiring unit, configured to acquire a second face; and
an applying unit, configured to apply the acquired second face to the three-dimensional model as a texture.
8. The apparatus according to claim 7, characterized in that it further comprises:
a first drawing unit, configured to render a display frame based on the decoded frame data; and
a second drawing unit, configured to draw, on the display frame, the result of applying the second face to the three-dimensional model as a texture, so as to cover the first face.
9. The apparatus according to claim 7, characterized in that the three-dimensional modeling unit is further configured to:
identify the key points of the first face;
determine the coordinates of the identified key points; and
perform three-dimensional modeling using the coordinates of the identified key points as vertex coordinates.
10. The apparatus according to claim 7, characterized in that the three-dimensional modeling unit is further configured to:
track the key points of the first face in the frames of the video to be played;
track the coordinates of the key points of the first face in each frame; and
keep the vertex coordinates of the three-dimensional model consistent with the tracked coordinates of the corresponding key points in each frame.
11. The apparatus according to claim 7, characterized in that the applying unit is further configured to:
fill the pixels at the corresponding vertices of the three-dimensional model with the colors of the pixels at the key points in the texture; and
fill the pixels between the corresponding vertices of the three-dimensional model based on the colors of the pixels between the key points in the texture.
12. The apparatus according to claim 11, characterized in that the color of a pixel comprises the red (R) chrominance, green (G) chrominance, blue (B) chrominance, and opacity (α) of the pixel.
13. An apparatus for replacing a face in a video to be played, characterized in that it comprises:
a memory storing computer-readable instructions; and
a processor configured to read the computer-readable instructions stored in the memory, to perform the method according to any one of claims 1-6.
14. A computer program medium having computer-readable instructions stored thereon, characterized in that the computer-readable instructions, when executed by a processor of a computer, cause the computer to perform the method according to any one of claims 1-6.
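Claims 1, 3, and 4 describe keeping a 3D model's vertex coordinates synchronized with the face key points tracked in each decoded frame. The following is a minimal illustrative sketch of that synchronization step only, not the patented implementation: the key-point detector/tracker itself (e.g. a facial-landmark model) is assumed and stubbed out with hypothetical coordinates.

```python
# Sketch of claims 1/3/4: each identified key point becomes a mesh
# vertex, and vertices are updated from the tracked key points of
# every frame so the model stays synchronized with the first face.

class FaceMesh:
    """Minimal 3D model whose vertices mirror face key points."""

    def __init__(self, keypoints):
        # Each (x, y) key point becomes a vertex; the depth (z) is a
        # placeholder that a real modeller would estimate per landmark.
        self.vertices = [(x, y, 0.0) for x, y in keypoints]

    def sync(self, tracked_keypoints):
        # Claim 4: keep vertex coordinates consistent with the tracked
        # coordinates of the corresponding key points in each frame.
        for i, (x, y) in enumerate(tracked_keypoints):
            z = self.vertices[i][2]  # retain the modelled depth
            self.vertices[i] = (x, y, z)

# Hypothetical key points for a two-frame clip (a real tracker would
# produce dozens of landmarks per frame).
frame1 = [(10.0, 20.0), (30.0, 20.0), (20.0, 35.0)]
frame2 = [(12.0, 21.0), (32.0, 21.0), (22.0, 36.0)]  # face shifted

mesh = FaceMesh(frame1)
mesh.sync(frame2)
print(mesh.vertices[0])  # -> (12.0, 21.0, 0.0): vertex follows the key point
```

Because the vertices track the key points frame by frame, a texture applied to this mesh (the second face) inherits the pose and expression of the first face, which is the effect the claims aim at.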
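Claims 5 and 6 fill the model's vertex pixels with the texture colors at the key points, and fill the pixels between vertices from the colors between key points, where a color is a red/green/blue/opacity tuple. As one possible reading of "based on the colors of the pixels between key points", the sketch below linearly interpolates RGBA colors along a single edge between two vertices; a full renderer would interpolate over whole triangles.

```python
# Sketch of claims 5/6: RGBA interpolation between two vertex colors.

def lerp_rgba(c0, c1, t):
    """Linearly interpolate two (R, G, B, alpha) colors; t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def fill_edge(c0, c1, n):
    """Colors for n pixels spanning the edge between two vertices,
    endpoints included (claim 5: vertices take key-point colors)."""
    if n == 1:
        return [c0]
    return [lerp_rgba(c0, c1, i / (n - 1)) for i in range(n)]

red_opaque  = (255, 0, 0, 255)   # texture color at one key point
blue_glassy = (0, 0, 255, 127)   # texture color at a neighbouring key point

pixels = fill_edge(red_opaque, blue_glassy, 3)
print(pixels)  # endpoint colors are preserved; the midpoint is blended
```

Note that the opacity (α) channel is interpolated like the chrominance channels, so a semi-transparent region of the second-face texture stays semi-transparent when drawn over the display frame (claim 2).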
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810276537.9A CN110267079B (en) | 2018-03-30 | 2018-03-30 | Method and device for replacing human face in video to be played |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110267079A true CN110267079A (en) | 2019-09-20 |
CN110267079B CN110267079B (en) | 2023-03-24 |
Family
ID=67911550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810276537.9A Active CN110267079B (en) | 2018-03-30 | 2018-03-30 | Method and device for replacing human face in video to be played |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110267079B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999942A (en) * | 2012-12-13 | 2013-03-27 | 清华大学 | Three-dimensional face reconstruction method |
CN103473804A (en) * | 2013-08-29 | 2013-12-25 | 小米科技有限责任公司 | Image processing method, device and terminal equipment |
CN103646416A (en) * | 2013-12-18 | 2014-03-19 | 中国科学院计算技术研究所 | Three-dimensional cartoon face texture generation method and device |
CN105118082A (en) * | 2015-07-30 | 2015-12-02 | 科大讯飞股份有限公司 | Personalized video generation method and system |
US20160335774A1 (en) * | 2015-02-06 | 2016-11-17 | Ming Chuan University | Method for automatic video face replacement by using a 2d face image to estimate a 3d vector angle of the face image |
CN107067429A (en) * | 2017-03-17 | 2017-08-18 | 徐迪 | Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced |
CN107146199A (en) * | 2017-05-02 | 2017-09-08 | 厦门美图之家科技有限公司 | A kind of fusion method of facial image, device and computing device |
CN107610209A (en) * | 2017-08-17 | 2018-01-19 | 上海交通大学 | Human face countenance synthesis method, device, storage medium and computer equipment |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728621A (en) * | 2019-10-17 | 2020-01-24 | 北京达佳互联信息技术有限公司 | Face changing method and device for face image, electronic equipment and storage medium |
CN110728621B (en) * | 2019-10-17 | 2023-08-25 | 北京达佳互联信息技术有限公司 | Face changing method and device of face image, electronic equipment and storage medium |
CN112188145A (en) * | 2020-09-18 | 2021-01-05 | 随锐科技集团股份有限公司 | Video conference method and system, and computer readable storage medium |
CN113988243A (en) * | 2021-10-19 | 2022-01-28 | 艾斯芸防伪科技(福建)有限公司 | Three-dimensional code generation and verification method, system, equipment and medium with verification code |
CN113988243B (en) * | 2021-10-19 | 2023-10-27 | 艾斯芸防伪科技(福建)有限公司 | Three-dimensional code generation and verification method, system, equipment and medium with verification code |
Also Published As
Publication number | Publication date |
---|---|
CN110267079B (en) | 2023-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102292537B1 (en) | Image processing method and apparatus, and storage medium | |
CN112348969B (en) | Display method and device in augmented reality scene, electronic equipment and storage medium | |
US20200020173A1 (en) | Methods and systems for constructing an animated 3d facial model from a 2d facial image | |
WO2018188499A1 (en) | Image processing method and device, video processing method and device, virtual reality device and storage medium | |
CN110465097B (en) | Character vertical drawing display method and device in game, electronic equipment and storage medium | |
US20210264139A1 (en) | Creating videos with facial expressions | |
CN111008927B (en) | Face replacement method, storage medium and terminal equipment | |
WO2015070668A1 (en) | Image processing method and apparatus | |
CN111602100A (en) | Method, device and system for providing alternative reality environment | |
WO2021098338A1 (en) | Model training method, media information synthesizing method, and related apparatus | |
WO2022095468A1 (en) | Display method and apparatus in augmented reality scene, device, medium, and program | |
CN110267079A (en) | The replacement method and device of face in video to be played | |
KR102353556B1 (en) | Apparatus for Generating Facial expressions and Poses Reappearance Avatar based in User Face | |
CN108762508A (en) | A kind of human body and virtual thermal system system and method for experiencing cabin based on VR | |
CN106648098A (en) | User-defined scene AR projection method and system | |
JP2024506014A (en) | Video generation method, device, equipment and readable storage medium | |
CN113965773A (en) | Live broadcast display method and device, storage medium and electronic equipment | |
JPWO2020161816A1 (en) | Mixed reality display device and mixed reality display method | |
CN113066189B (en) | Augmented reality equipment and virtual and real object shielding display method | |
CN111640190A (en) | AR effect presentation method and apparatus, electronic device and storage medium | |
CN116958344A (en) | Animation generation method and device for virtual image, computer equipment and storage medium | |
CN115907912A (en) | Method and device for providing virtual trial information of commodities and electronic equipment | |
WO2021155666A1 (en) | Method and apparatus for generating image | |
JP7011728B2 (en) | Image data output device, content creation device, content playback device, image data output method, content creation method, and content playback method | |
Shumaker et al. | Virtual, Augmented and Mixed Reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||