CN109147017A - Dynamic image generation method, device, equipment and storage medium - Google Patents
- Publication number
- CN109147017A (application CN201810985746.0A)
- Authority
- CN
- China
- Prior art keywords
- target point
- key point
- changes
- point
- virtual model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a dynamic image generation method, device, equipment and storage medium. The method comprises: acquiring a facial image of a user; choosing key points from the facial image; establishing a mapping relationship between the key points and target points of a virtual model; converting the coordinate change information of the key points in N consecutive frames of facial images into coordinate change information of the target points, where N is a natural number greater than 1; and driving the virtual model to generate a dynamic image according to the coordinate change information of the target points. The present invention can drive a virtual model to generate a dynamic image directly from the acquired user images; the production process is simple and production efficiency is high.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a dynamic image generation method, device, equipment and storage medium.
Background art
With the development of terminal technology, more and more terminals provide a video capture function, and people can use a terminal to make animated images or short videos.
At present, making an emoji pack requires pre-drawing or shooting multiple facial expression images and then setting a playback interval for them; only when the multiple expression images are played do they form a dynamic expression image.
However, the production process of such dynamic expression images is complex, a large number of expression images must be prepared in advance, and production efficiency is low.
Summary of the invention
The present invention provides a dynamic image generation method, device, equipment and storage medium, which can drive a virtual model to generate a dynamic image directly from acquired user images; the production process is simple and production efficiency is high.
In a first aspect, an embodiment of the present invention provides a dynamic image generation method, comprising:
acquiring a facial image of a user;
choosing key points from the facial image;
establishing a mapping relationship between the key points and target points of a virtual model;
converting the coordinate change information of the key points in N consecutive frames of facial images into coordinate change information of the target points, where N is a natural number greater than 1;
driving the virtual model to generate a dynamic image according to the coordinate change information of the target points.
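As an illustration only, the steps of the first aspect can be sketched as a single pipeline. Every callback below is a hypothetical stand-in for one claim step; the claims do not prescribe any concrete implementation:

```python
def generate_dynamic_image(frames, choose_key_points, build_mapping,
                           key_changes_to_target, drive_model):
    """Sketch of the claimed method; each argument after `frames`
    is a hypothetical callback for one claim step."""
    key_points = choose_key_points(frames[0])        # choose key points
    mapping = build_mapping(key_points)              # key point -> target point
    # convert key-point coordinate changes over N frames into
    # target-point coordinate changes
    target_changes = key_changes_to_target(frames, key_points, mapping)
    return drive_model(target_changes)               # generate the dynamic image
```

For instance, `drive_model` could render one virtual-model frame per entry of `target_changes` and assemble the frames into an animation.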
In one possible design, acquiring the facial image of the user comprises:
acquiring the facial image of the user in real time through a monocular or multi-view camera.
In one possible design, choosing key points from the facial image comprises:
dividing the facial image into multiple regions according to the facial features, and selecting key points from each region; the key points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region.
In one possible design, establishing the mapping relationship between the key points and the target points of the virtual model comprises:
dividing the virtual model into multiple regions according to the facial features, and selecting target points from each region; the target points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region;
establishing the mapping relationship between the key points and the target points of the virtual model according to the region where each target point is located and the distribution position of the target point within that region.
In one possible design, converting the coordinate change information of the key points in N consecutive frames of facial images into the coordinate change information of the target points comprises:
obtaining the coordinate change information of a key point according to the coordinate information of the same key point in N consecutive frames of facial images; the coordinate change information of the key point refers to the change in the coordinate values of the key point when it moves from one moment to the next;
determining, according to the mapping relationship between the key points and the target points of the virtual model, the coordinate change information of the corresponding target point in the virtual model; the coordinate change information of the target point refers to the change in the coordinate values of the target point when it moves from one moment to the next.
In one possible design, driving the virtual model to generate the dynamic image according to the coordinate change information of the target points comprises:
determining the interval duration for a target point in the virtual model to move from one coordinate position to the next;
driving the virtual model to generate the dynamic image according to the coordinate change information of the target points and the interval duration.
In one possible design, the virtual model includes a human face model or an animal face model.
In one possible design, after driving the virtual model to generate the dynamic image, the method further comprises:
saving the dynamic image;
segmenting the dynamic image into two or more sub dynamic images, wherein the set of sub dynamic images constitutes an emoji pack consistent with the changes in the user's facial image.
In one possible design, after segmenting the dynamic image into two or more sub dynamic images, the method further comprises:
editing the sub dynamic images, the editing including adding audio and/or text.
In a second aspect, an embodiment of the present invention provides a dynamic image generating apparatus, comprising:
an acquisition module, configured to acquire a facial image of a user;
a selection module, configured to choose key points from the facial image;
a mapping module, configured to establish a mapping relationship between the key points and target points of a virtual model;
a processing module, configured to convert the coordinate change information of the key points in N consecutive frames of facial images into coordinate change information of the target points, where N is a natural number greater than 1;
a driving module, configured to drive the virtual model to generate a dynamic image according to the coordinate change information of the target points.
In one possible design, the acquisition module is specifically configured to:
acquire the facial image of the user in real time through a monocular or multi-view camera.
In one possible design, the selection module is specifically configured to:
divide the facial image into multiple regions according to the facial features, and select key points from each region; the key points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region.
In one possible design, the mapping module is specifically configured to:
divide the virtual model into multiple regions according to the facial features, and select target points from each region; the target points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region;
establish the mapping relationship between the key points and the target points of the virtual model according to the region where each target point is located and the distribution position of the target point within that region.
In one possible design, the processing module is specifically configured to:
obtain the coordinate change information of a key point according to the coordinate information of the same key point in N consecutive frames of facial images; the coordinate change information of the key point refers to the change in the coordinate values of the key point when it moves from one moment to the next;
determine, according to the mapping relationship between the key points and the target points of the virtual model, the coordinate change information of the corresponding target point in the virtual model; the coordinate change information of the target point refers to the change in the coordinate values of the target point when it moves from one moment to the next.
In one possible design, the driving module is specifically configured to:
determine the interval duration for a target point in the virtual model to move from one coordinate position to the next;
drive the virtual model to generate the dynamic image according to the coordinate change information of the target points and the interval duration.
In one possible design, the virtual model includes a human face model or an animal face model.
In one possible design, the apparatus further comprises:
a storage module, configured to save the dynamic image after the virtual model is driven to generate the dynamic image;
a segmentation module, configured to segment the dynamic image into two or more sub dynamic images, wherein the set of sub dynamic images constitutes an emoji pack consistent with the changes in the user's facial image.
In one possible design, the apparatus further comprises:
an editing module, configured to edit the sub dynamic images after the dynamic image is segmented into two or more sub dynamic images, the editing including adding audio and/or text.
In a third aspect, an embodiment of the present invention provides a dynamic image generating device, comprising a processor and a memory, the memory storing instructions executable by the processor; the processor is configured to execute the executable instructions to perform the dynamic image generation method described in any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the dynamic image generation method described in any one of the first aspect is implemented.
In a fifth aspect, an embodiment of the present invention provides a program product, the program product including a computer program stored in a readable storage medium; at least one processor of a server can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the server implements any dynamic image generation method of the embodiments of the first aspect of the present invention.
In the dynamic image generation method, device, equipment and storage medium provided by the present invention, a facial image of a user is acquired; key points are chosen from the facial image; a mapping relationship is established between the key points and target points of a virtual model; the coordinate change information of the key points in N consecutive frames of facial images is converted into coordinate change information of the target points, where N is a natural number greater than 1; and the virtual model is driven to generate a dynamic image according to the coordinate change information of the target points. The present invention can drive a virtual model to generate a dynamic image directly from the acquired user images; the production process is simple and production efficiency is high.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a schematic diagram of an application scenario of the present invention;
Fig. 2 is a flowchart of the dynamic image generation method provided by Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the key point selection result in a facial image;
Fig. 4 is a flowchart of the dynamic image generation method provided by Embodiment 2 of the present invention;
Fig. 5 is a schematic structural diagram of the dynamic image generating apparatus provided by Embodiment 3 of the present invention;
Fig. 6 is a schematic structural diagram of the dynamic image generating apparatus provided by Embodiment 4 of the present invention;
Fig. 7 is a schematic structural diagram of the dynamic image generating device provided by Embodiment 5 of the present invention.
Specific embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", etc. (if present) in the description, claims and drawings of this specification are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can, for example, be implemented in an order other than that illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product or device.
The technical solution of the present invention is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic diagram of an application scenario of the present invention. As shown in Fig. 1, the facial image of a user is first acquired by a camera: camera 10 can record a short video of the changes in the user's facial expression, and N frames of facial images are then chosen from the short video; alternatively, the camera directly and continuously acquires N frames of facial images. Camera 10 sends the acquired N frames of facial images to image processor 20; image processor 20 chooses key points from the 1st frame image and then sequentially finds the positions of the same key points in the subsequent frames, completing the labeling of the N frames of facial images. The key points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region.
Further, image processor 20 sends the N frames of facial images with labeled key points to data converter 30. Data converter 30 establishes, according to a preset rule, the mapping relationship between the key points in the 1st frame image and the target points of virtual model 40. The target points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region.
Further, data converter 30 obtains the coordinate change information of each key point according to the coordinate information of the same key point in the N consecutive frames of facial images; the coordinate change information of a key point refers to the change in the coordinate values of the key point when it moves from one moment to the next. According to the mapping relationship between the key points and the target points of virtual model 40, the coordinate change information of the corresponding target point in virtual model 40 is determined; the coordinate change information of a target point refers to the change in the coordinate values of the target point when it moves from one moment to the next. Finally, the driver of virtual model 40 drives virtual model 40 to generate a dynamic image according to the coordinate change information of the target points.
It should be noted that this application scenario does not limit the way in which the mapping relationship between the key points and the target points of the virtual model is established.
In one optional way, the target points can be set in advance at chosen positions on the face of the virtual model; then, according to the setting rule of the target points, the positions of the key points corresponding to the target points are searched for in the facial image collected by camera 10.
In another optional way, the contour positions of the facial features can be distinguished by a face recognition algorithm and the key points chosen at these contour positions; the contour positions of the facial features of the virtual model are then distinguished by a similar method, and the mapping relationship between the key points and the target points is determined according to the correspondence of the facial contours.
Using the method in this application scenario, virtual model 40 can be driven directly from the user images collected by camera 10 to generate a dynamic image consistent with the user's facial expression, so that the production efficiency of emoji packs is greatly improved.
The technical solution of the present invention, and how it solves the above technical problem, are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described below in conjunction with the drawings.
Fig. 2 is a flowchart of the dynamic image generation method provided by Embodiment 1 of the present invention. As shown in Fig. 2, the method in this embodiment may include:
S101, acquiring the facial image of the user.
In this embodiment, the facial image of the user can be acquired in real time by a monocular or multi-view camera. When a monocular camera is used, a 2D facial image can be obtained; when a multi-view camera is used, a 3D facial image can be obtained. Of course, a 2D facial image can also be converted into a 3D facial image based on existing 2D-to-3D conversion technology, for example using PhotoAnim animation software, tikuwa software, etc. This embodiment does not limit the technology for converting 2D images into 3D images.
The facial image in this embodiment may also be a facial image extracted from an existing video; for example, the user can obtain a video resource from an online film or a webcast website, edit the video, and choose the image frames containing facial features.
S102, choosing key points from the facial image.
In an optional embodiment, the facial image can be divided into multiple regions according to the facial features, and key points selected from each region; the key points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region.
Specifically, Fig. 3 is a schematic diagram of the key point selection result in a facial image. As shown in Fig. 3, the face can be divided into eyebrow region 51, eye region 52, nose region 53, mouth region 54, ear region 55 and facial contour region 56. Marker points 57 are then added in each region; a marker point 57 is typically chosen in the region corresponding to pixels whose gray-value variation relative to neighboring pixels is greater than a preset threshold.
In a specific implementation, the facial image can be converted into a grayscale image, and the regions corresponding to pixels whose gray-value variation relative to neighboring pixels is greater than the preset threshold are obtained. This is because the gray-value variation of pixels in the transition region between the facial feature contours and the facial skin is large, and when the face makes an expression, the facial feature contours change accordingly. For example, when a person raises their eyebrows, the key points at the inner tip, peak and outer tip of the eyebrow in the eyebrow region change.
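The grayscale-threshold rule described above can be sketched in a few lines of plain Python. This is a minimal illustration only; the threshold value and the choice of right/lower neighbors are assumptions, not taken from the patent:

```python
def candidate_marker_mask(gray, threshold):
    """Mark pixels whose gray value differs from the right or lower
    neighbor by more than the threshold (rough contour candidates)."""
    h, w = len(gray), len(gray[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # compare with the right neighbor
            if x + 1 < w and abs(gray[y][x + 1] - gray[y][x]) > threshold:
                mask[y][x] = mask[y][x + 1] = True
            # compare with the lower neighbor
            if y + 1 < h and abs(gray[y + 1][x] - gray[y][x]) > threshold:
                mask[y][x] = mask[y + 1][x] = True
    return mask
```

Marker points would then be placed inside the masked regions, e.g. one per connected component of the mask within each facial-feature region.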
S103, establishing the mapping relationship between the key points and the target points of the virtual model.
In an optional embodiment, referring to Fig. 3, the virtual model can be divided into multiple regions according to the facial features, and target points selected from each region; the target points include marker points located on the eyebrow region, eye region, nose region, mouth region, ear region and facial contour region. The mapping relationship between the key points and the target points of the virtual model is established according to the region where each target point is located and the distribution position of the target point within that region.
In another optional embodiment, the target points can be set in advance at chosen positions on the face of the virtual model; then, according to the setting rule of the target points, the positions of the key points corresponding to the target points are searched for in the facial image collected by the camera. Alternatively, the contour positions of the facial features can be distinguished by a face recognition algorithm and the key points chosen at these contour positions; the contour positions of the facial features of the virtual model are then distinguished by a similar method, and the mapping relationship between the key points and the target points is determined according to the correspondence of the facial contours.
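The region-plus-position matching of the first optional embodiment can be sketched as follows. The `region`/`slot` record layout is hypothetical, chosen only to make the pairing rule concrete:

```python
def build_mapping(key_points, target_points):
    """Pair each face key point with the virtual-model target point that
    lies in the same region and occupies the same within-region slot."""
    index = {(p["region"], p["slot"]): p["id"] for p in target_points}
    mapping = {}
    for kp in key_points:
        tid = index.get((kp["region"], kp["slot"]))
        if tid is not None:
            mapping[kp["id"]] = tid
    return mapping
```

Key points with no counterpart on the model (e.g. if the model has no ears) are simply left unmapped.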
S104, converting the coordinate change information of the key points in N consecutive frames of facial images into the coordinate change information of the target points.
In this embodiment, the coordinate change information of a key point can be obtained according to the coordinate information of the same key point in N consecutive frames of facial images, where N is a natural number greater than 1. The coordinate change information of a key point refers to the change in the coordinate values of the key point when it moves from one moment to the next. According to the mapping relationship between the key points and the target points of the virtual model, the coordinate change information of the corresponding target point in the virtual model is determined; the coordinate change information of a target point refers to the change in the coordinate values of the target point when it moves from one moment to the next.
In an optional embodiment, when a dynamic expression is shown in the N acquired frames of facial images, the key point coordinate information can be extracted from any one frame; the coordinate information of the key points includes the coordinate values of the marker points located on the lips, eyes, nose, eyebrows and facial contour.
In another optional embodiment, for convenience of calculation, the coordinates of the key points can be recorded in matrices: the coordinate information of the key points in each frame of facial image corresponds to one matrix, so N matrices are obtained. Further, according to the mapping relationship between the key points and the target points of the virtual model, the elements of the N matrices are converted into the coordinate information of the target points.
Specifically, the process of obtaining the coordinate change information of the target point in the virtual model corresponding to the coordinate change information of a certain key point is described in detail below as an example.
When the acquired facial image is a 2D facial image: in the 1st frame, the coordinate of a certain key point is (2, 5) and the coordinate of the corresponding target point is (4, 10); in the 2nd frame, the coordinate of the same key point is (2.5, 6). It follows that the change in the coordinate values of the key point is: the X coordinate increases by 0.5 and the Y coordinate increases by 1. According to the proportional relationship between the facial image coordinate system and the virtual model coordinate system, the change in the coordinate values of the corresponding target point is: the X coordinate increases by 1 and the Y coordinate increases by 2; therefore, the coordinate of the target point after the change is (5, 12). Using a similar method, the coordinate change information of the same target point at N moments can be obtained.
When the acquired facial image is a 3D facial image: in the 1st frame, the coordinate of a certain key point is (2, 5, 7) and the coordinate of the corresponding target point is (4, 10, 14); in the 2nd frame, the coordinate of the same key point is (2.5, 6, 12). It follows that the change in the coordinate values of the key point is: the X coordinate increases by 0.5, the Y coordinate increases by 1, and the Z coordinate increases by 5. According to the proportional relationship between the facial image coordinate system and the virtual model coordinate system, the change in the coordinate values of the corresponding target point is: the X coordinate increases by 1, the Y coordinate increases by 2, and the Z coordinate increases by 10; therefore, the coordinate of the target point after the change is (5, 12, 24). Using a similar method, the coordinate change information of the same target point at N moments can be obtained.
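Both worked examples follow one rule: scale the key point's coordinate change by the ratio between the two coordinate systems (a factor of 2 in these examples) and add it to the target point's previous coordinate. A minimal sketch that works for 2D and 3D coordinates alike:

```python
def propagate_delta(key_prev, key_curr, target_prev, scale=2.0):
    """Scale a key point's coordinate change into the virtual-model
    coordinate system and apply it to the corresponding target point."""
    delta = tuple(c - p for p, c in zip(key_prev, key_curr))
    target_delta = tuple(d * scale for d in delta)   # proportional mapping
    return tuple(t + d for t, d in zip(target_prev, target_delta))
```

Called with the 2D numbers above it returns (5.0, 12.0), and with the 3D numbers (5.0, 12.0, 24.0), reproducing both examples.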
It should be noted that a single key point is used as an example in this embodiment, but the number of key points chosen in practical applications is not limited. In theory, the more key points are chosen, the finer the expression change they characterize.
In another optional embodiment, the conversion relationship between the coordinate system established in the facial image and the coordinate system of the virtual model is known in advance; therefore, when a certain key point in the facial image undergoes a coordinate change, the coordinate change of the corresponding target point in the virtual model can also be obtained according to the conversion relationship between the two coordinate systems.
It should be noted that this embodiment does not limit the coordinate system types of the facial image and the virtual model; the purpose of establishing the coordinate systems is to derive the coordinate change information of the corresponding target points from the coordinate change information of the key points.
S105, driving the virtual model to generate the dynamic image according to the coordinate change information of the target points.
In this embodiment, the interval duration for a target point in the virtual model to move from one coordinate position to the next can be determined; according to the coordinate change information of the target points and the interval duration, the virtual model is driven to generate the dynamic image.
It should be noted that the virtual model in this embodiment may be a human face model or an animal face model, such as the face model of a cartoon character or animal.
In addition, this embodiment does not limit the interval duration for a target point to move from one coordinate position to the next; the interval duration affects the playback speed of the dynamic image and can therefore be adjusted according to the actual situation. For example, the interval duration can be set to 1 s, 0.5 s, etc.
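How the interval duration controls playback can be sketched as a simple drive loop. The `apply_frame` callback and the use of `time.sleep` are assumptions for illustration, not part of the patent:

```python
import time

def drive_model(apply_frame, target_frames, interval_s, sleep=time.sleep):
    """Apply each frame of target-point coordinates to the model at the
    chosen interval; a shorter interval plays the dynamic image faster.
    `apply_frame` is a hypothetical callback that moves the model's
    target points and renders one image."""
    rendered = []
    for i, frame in enumerate(target_frames):
        if i > 0:
            sleep(interval_s)   # interval between coordinate positions
        rendered.append(apply_frame(frame))
    return rendered
```

The `sleep` parameter is injectable so that rendering can also run offline (e.g. when encoding the frames into a file rather than playing live).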
In this embodiment, a facial image of a user is acquired; key points are chosen from the facial image; a mapping relationship is established between the key points and target points of a virtual model; the coordinate change information of the key points in N consecutive frames of facial images is converted into coordinate change information of the target points, where N is a natural number greater than 1; and the virtual model is driven to generate a dynamic image according to the coordinate change information of the target points. The present invention can drive a virtual model to generate a dynamic image directly from the acquired user images; the production process is simple and production efficiency is high.
Fig. 4 is a flowchart of the dynamic image generation method provided by Embodiment 2 of the present invention. As shown in Fig. 4, the method in this embodiment may include:
S201, acquiring the facial image of the user.
S202, choosing key points from the facial image.
S203, establishing the mapping relationship between the key points and the target points of the virtual model.
S204, converting the coordinate change information of the key points in N consecutive frames of facial images into the coordinate change information of the target points.
S205, driving the virtual model to generate the dynamic image according to the coordinate change information of the target points.
In this embodiment, for the specific implementation process and technical principle of steps S201 to S205, see the method shown in Fig. 2, which is not repeated here.
S206: save the dynamic image.
In this embodiment, the image frames generated by the virtual model can be saved; when the saved frames are played back at the preset interval duration, they form the corresponding dynamic image.
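The saving-and-playback idea can be sketched as bundling the saved frames with the preset interval, in the shape most animation encoders (e.g. GIF writers) expect; the field names here are illustrative assumptions:

```python
def to_animation_spec(model_frames, interval_s):
    """Bundle saved model frames into a minimal animation description.

    Most encoders take a frame list plus a per-frame duration; the GIF
    format, for instance, expresses the delay in hundredths of a second.
    """
    return {
        "frames": list(model_frames),
        "duration_cs": round(interval_s * 100),  # GIF-style delay units
        "loop": 0,                               # 0 = loop forever
    }
```

Feeding such a spec to an encoder reproduces the "play frames at the preset interval" behaviour described above.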
S207: segment the dynamic image into two or more sub dynamic images.
In this embodiment, since the acquired facial images may come from a face video captured in real time, one dynamic image may contain a fairly long stretch of video covering several different expression changes, such as joy, anger, and sadness. In that case the dynamic image can be segmented, manually or automatically, into two or more sub dynamic images, where the set of sub dynamic images constitutes an expression pack consistent with the changes in the user's facial images.
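The manual or automatic segmentation could be sketched as cutting the frame sequence at given boundary indices; how those boundaries are chosen (by hand or by an expression-change detector) is left open, so they are simply an input here:

```python
def split_dynamic_image(frames, boundaries):
    """Split one long frame sequence into 2+ sub dynamic images.

    boundaries -- ascending cut indices, chosen manually or by some
    automatic expression-change detector (not specified here).
    The resulting set of sub-images forms the expression pack.
    """
    cuts = [0] + list(boundaries) + [len(frames)]
    subs = [frames[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]
    assert len(subs) >= 2, "a pack needs two or more sub dynamic images"
    return subs
```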
Optionally, after the dynamic image is segmented into two or more sub dynamic images, the sub dynamic images can be edited; the editing includes adding audio and/or text.
In an optional application scenario, a video containing facial-expression changes can be downloaded from the network. Combined with the scenario in Fig. 1, N frames of facial images are then selected from the video (the N frames may be consecutive or non-consecutive). Key points are selected starting from the 1st frame, and the positions of the same key points are then located in each subsequent frame in order, completing the labeling of the N frames of facial images.
Further, according to a preset mapping rule, a mapping relationship is established between the key points in the 1st frame of the labeled N frames of facial images and the target points of the virtual model. The coordinate information of each key point across the N consecutive frames is obtained to derive the coordinate-change information of the key points; according to the mapping relationship between the key points and the target points of the virtual model, the coordinate-change information of the key points is converted into the coordinate-change information of the corresponding target points in the virtual model. Finally, the driver in the virtual model drives the model to generate a dynamic image according to the coordinate-change information of the target points.
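The conversion from key-point coordinate changes to target-point coordinate changes described above can be sketched as follows. The optional `scale` factor, compensating for the size difference between the face image and the virtual model, is an assumption; the disclosure only fixes the mapping itself:

```python
def keypoint_deltas_to_target_deltas(kp_tracks, mapping, scale=1.0):
    """Convert per-frame key-point coordinate changes into target-point changes.

    kp_tracks -- {key_point_id: [(x, y), ...]} over N consecutive frames
    mapping   -- {key_point_id: target_point_id}
    scale     -- hypothetical factor bridging image and model coordinates
    """
    target_deltas = {}
    for kp_id, track in kp_tracks.items():
        # frame-to-frame coordinate-value change of this key point
        deltas = [
            ((x1 - x0) * scale, (y1 - y0) * scale)
            for (x0, y0), (x1, y1) in zip(track, track[1:])
        ]
        target_deltas[mapping[kp_id]] = deltas
    return target_deltas
```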
It should be noted that in this application scenario there is no need to record the user's own facial images in real time; any existing video containing a face can be used to drive the virtual model to generate a dynamic image consistent with the facial expressions in the video.
In another optional embodiment, after the expression pack generated from the dynamic image of the virtual model is obtained, the expression pack can be further edited, for example by adding text and setting the display position of the text within the image frames of the expression pack, or by adding audio to the dynamic image with editing software, such as adding background laughter to a laughing expression pack.
In this embodiment, a facial image of a user is acquired; key points are selected from the facial image; a mapping relationship is established between the key points and target points of the virtual model; the coordinate-change information of the key points across N consecutive frames of facial images is converted into coordinate-change information of the target points, where N is a natural number greater than 1; and the virtual model is driven to generate a dynamic image according to the coordinate-change information of the target points. The present invention can directly drive a virtual model to generate a dynamic image from captured user images; the production process is simple and production efficiency is high.
In addition, this embodiment can drive the virtual model to generate a dynamic image consistent with the changes in the user's face according to the coordinate-change information of the target points, and finally segment the dynamic image into multiple sub dynamic images, so that an expression pack is produced directly from real facial-expression changes and the production efficiency of expression packs is greatly improved.
Fig. 5 is a structural schematic diagram of the dynamic image generating apparatus provided by Embodiment 3 of the present invention. As shown in Fig. 5, the dynamic image generating apparatus of this embodiment may include:
an acquisition module 61, configured to acquire a facial image of a user;
a selection module 62, configured to select key points from the facial image;
a mapping module 63, configured to establish a mapping relationship between the key points and target points of a virtual model;
a processing module 64, configured to convert the coordinate-change information of the key points across N consecutive frames of facial images into coordinate-change information of the target points, where N is a natural number greater than 1; and
a driving module 65, configured to drive the virtual model to generate a dynamic image according to the coordinate-change information of the target points.
In a possible design, the acquisition module 61 is specifically configured to:
acquire the facial image of the user in real time through a monocular or multi-lens camera.
In a possible design, the selection module 62 is specifically configured to:
divide the facial image into multiple regions according to facial features and select key points from each region; the key points include mark points located on the eyebrow region, eye region, nose region, mouth region, ear region, and face-contour region.
In a possible design, the mapping module 63 is specifically configured to:
divide the virtual model into multiple regions according to facial features and select target points from each region, the target points including mark points located on the eyebrow region, eye region, nose region, mouth region, ear region, and face-contour region; and
establish the mapping relationship between the key points and the target points of the virtual model according to the region where each target point is located and the distribution position of the target point within that region.
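The region-plus-position mapping rule could be sketched like this; the region names and the pair-by-order rule are illustrative assumptions, since the disclosure does not fix the division or the point counts:

```python
# Hypothetical region layout; the real division is not fixed by the disclosure.
FACE_REGIONS = ["eyebrow", "eye", "nose", "mouth", "ear", "contour"]

def build_mapping(key_points, target_points):
    """Map key points to target points by region and in-region position.

    key_points / target_points -- {region: [point_id, ...]} with the
    points of each region listed in the same distribution order.
    """
    mapping = {}
    for region in FACE_REGIONS:
        # pair points that sit in the same region at the same position
        for kp, tp in zip(key_points.get(region, []),
                          target_points.get(region, [])):
            mapping[kp] = tp
    return mapping
```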
In a possible design, the processing module 64 is specifically configured to:
obtain the coordinate-change information of the key points according to the coordinate information of the same key points across N consecutive frames of facial images, where the coordinate-change information of a key point refers to the change in the key point's coordinate values as the key point moves from one moment to the next; and
determine, according to the mapping relationship between the key points and the target points of the virtual model, the coordinate-change information of the corresponding target points in the virtual model, where the coordinate-change information of a target point refers to the change in the target point's coordinate values as the target point moves from one moment to the next.
In a possible design, the driving module 65 is specifically configured to:
determine the interval duration for a target point in the virtual model to move from one coordinate position to the next; and
drive the virtual model to generate a dynamic image according to the coordinate-change information of the target points and the interval duration.
In a possible design, the virtual model includes a human face model or an animal face model.
The dynamic image generating apparatus of this embodiment can execute the technical solution of the method shown in Fig. 2; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a structural schematic diagram of the dynamic image generating apparatus provided by Embodiment 4 of the present invention. As shown in Fig. 6, on the basis of the apparatus shown in Fig. 5, the apparatus of this embodiment may further include:
a storage module 66, configured to save the dynamic image after the virtual model is driven to generate it; and
a segmentation module 67, configured to segment the dynamic image into two or more sub dynamic images, where the set of sub dynamic images constitutes an expression pack consistent with the changes in the user's facial images.
In a possible design, the apparatus further includes:
an editing module 68, configured to edit the sub dynamic images after the dynamic image is segmented into two or more sub dynamic images, the editing including adding audio and/or text.
The dynamic image generating apparatus of this embodiment can execute the technical solutions of the methods shown in Fig. 2 and Fig. 4; its implementation principle and technical effect are similar and are not repeated here.
Fig. 7 is a structural schematic diagram of the dynamic image generating device provided by Embodiment 5 of the present invention. As shown in Fig. 7, the dynamic image generating device 70 of this embodiment may include a processor 71 and a memory 72.
The memory 72 is configured to store computer programs (such as application programs and functional modules implementing the dynamic image generation method described above), computer instructions, and the like; the computer programs, computer instructions, data, and so on may be stored in partitions in one or more memories 72 and may be called by the processor 71.
The processor 71 is configured to execute the computer program stored in the memory 72 to implement the steps of the methods in the above embodiments; for details, refer to the related description in the foregoing method embodiments.
The processor 71 and the memory 72 may be separate structures or may be integrated into one structure. When the processor 71 and the memory 72 are separate structures, the memory 72 and the processor 71 may be coupled through a bus 73.
The device of this embodiment can execute the technical solution of any of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions; when at least one processor of a user equipment executes the computer-executable instructions, the user equipment performs any of the above possible methods.
The computer-readable media include computer storage media and communication media, where a communication medium includes any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user equipment; of course, the processor and the storage medium may also exist in a communication device as discrete components.
The present application further provides a program product comprising a computer program stored in a readable storage medium. At least one processor of a server can read the computer program from the readable storage medium, and executing the computer program causes the server to implement the dynamic image generation method of any of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage media include various media that can store program code, such as ROM, RAM, magnetic disks, or optical discs.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or equivalently replace some or all of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (20)
1. A dynamic image generation method, characterized by comprising:
acquiring a facial image of a user;
selecting key points from the facial image;
establishing a mapping relationship between the key points and target points of a virtual model;
converting coordinate-change information of the key points across N consecutive frames of facial images into coordinate-change information of the target points, wherein N is a natural number greater than 1; and
driving the virtual model to generate a dynamic image according to the coordinate-change information of the target points.
2. The method according to claim 1, wherein acquiring the facial image of the user comprises:
acquiring the facial image of the user in real time through a monocular or multi-lens camera.
3. The method according to claim 1, wherein selecting key points from the facial image comprises:
dividing the facial image into multiple regions according to facial features and selecting key points from each region, wherein the key points comprise mark points located on an eyebrow region, an eye region, a nose region, a mouth region, an ear region, and a face-contour region.
4. The method according to claim 3, wherein establishing the mapping relationship between the key points and the target points of the virtual model comprises:
dividing the virtual model into multiple regions according to facial features and selecting target points from each region, wherein the target points comprise mark points located on an eyebrow region, an eye region, a nose region, a mouth region, an ear region, and a face-contour region; and
establishing the mapping relationship between the key points and the target points of the virtual model according to the region where each target point is located and the distribution position of the target point within that region.
5. The method according to claim 1, wherein converting the coordinate-change information of the key points across N consecutive frames of facial images into the coordinate-change information of the target points comprises:
obtaining the coordinate-change information of the key points according to coordinate information of the same key points across the N consecutive frames of facial images, wherein the coordinate-change information of a key point refers to the change in the key point's coordinate values as the key point moves from one moment to the next; and
determining, according to the mapping relationship between the key points and the target points of the virtual model, the coordinate-change information of the corresponding target points in the virtual model, wherein the coordinate-change information of a target point refers to the change in the target point's coordinate values as the target point moves from one moment to the next.
6. The method according to claim 1, wherein driving the virtual model to generate the dynamic image according to the coordinate-change information of the target points comprises:
determining an interval duration for a target point in the virtual model to move from one coordinate position to a next coordinate position; and
driving the virtual model to generate the dynamic image according to the coordinate-change information of the target points and the interval duration.
7. The method according to claim 1, wherein the virtual model comprises a human face model or an animal face model.
8. The method according to any one of claims 1-7, further comprising, after driving the virtual model to generate the dynamic image:
saving the dynamic image; and
segmenting the dynamic image into two or more sub dynamic images, wherein the set of sub dynamic images constitutes an expression pack consistent with changes in the facial images of the user.
9. The method according to claim 8, further comprising, after segmenting the dynamic image into two or more sub dynamic images:
editing the sub dynamic images, the editing comprising adding audio and/or text.
10. A dynamic image generating apparatus, characterized by comprising:
an acquisition module, configured to acquire a facial image of a user;
a selection module, configured to select key points from the facial image;
a mapping module, configured to establish a mapping relationship between the key points and target points of a virtual model;
a processing module, configured to convert coordinate-change information of the key points across N consecutive frames of facial images into coordinate-change information of the target points, wherein N is a natural number greater than 1; and
a driving module, configured to drive the virtual model to generate a dynamic image according to the coordinate-change information of the target points.
11. The apparatus according to claim 10, wherein the acquisition module is specifically configured to:
acquire the facial image of the user in real time through a monocular or multi-lens camera.
12. The apparatus according to claim 10, wherein the selection module is specifically configured to:
divide the facial image into multiple regions according to facial features and select key points from each region, wherein the key points comprise mark points located on an eyebrow region, an eye region, a nose region, a mouth region, an ear region, and a face-contour region.
13. The apparatus according to claim 12, wherein the mapping module is specifically configured to:
divide the virtual model into multiple regions according to facial features and select target points from each region, wherein the target points comprise mark points located on an eyebrow region, an eye region, a nose region, a mouth region, an ear region, and a face-contour region; and
establish the mapping relationship between the key points and the target points of the virtual model according to the region where each target point is located and the distribution position of the target point within that region.
14. The apparatus according to claim 10, wherein the processing module is specifically configured to:
obtain the coordinate-change information of the key points according to coordinate information of the same key points across the N consecutive frames of facial images, wherein the coordinate-change information of a key point refers to the change in the key point's coordinate values as the key point moves from one moment to the next; and
determine, according to the mapping relationship between the key points and the target points of the virtual model, the coordinate-change information of the corresponding target points in the virtual model, wherein the coordinate-change information of a target point refers to the change in the target point's coordinate values as the target point moves from one moment to the next.
15. The apparatus according to claim 10, wherein the driving module is specifically configured to:
determine an interval duration for a target point in the virtual model to move from one coordinate position to a next coordinate position; and
drive the virtual model to generate the dynamic image according to the coordinate-change information of the target points and the interval duration.
16. The apparatus according to claim 10, wherein the virtual model comprises a human face model or an animal face model.
17. The apparatus according to any one of claims 10-16, further comprising:
a storage module, configured to save the dynamic image after the virtual model is driven to generate the dynamic image; and
a segmentation module, configured to segment the dynamic image into two or more sub dynamic images, wherein the set of sub dynamic images constitutes an expression pack consistent with changes in the facial images of the user.
18. The apparatus according to claim 17, further comprising:
an editing module, configured to edit the sub dynamic images after the dynamic image is segmented into two or more sub dynamic images, the editing comprising adding audio and/or text.
19. A dynamic image generating device, characterized by comprising a memory and a processor, the memory storing instructions executable by the processor, wherein the processor is configured to perform the dynamic image generation method according to any one of claims 1-9 by executing the executable instructions.
20. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the dynamic image generation method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810985746.0A CN109147017A (en) | 2018-08-28 | 2018-08-28 | Dynamic image generation method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109147017A true CN109147017A (en) | 2019-01-04 |
Family
ID=64828397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810985746.0A Pending CN109147017A (en) | 2018-08-28 | 2018-08-28 | Dynamic image generation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109147017A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101826217A (en) * | 2010-05-07 | 2010-09-08 | 上海交通大学 | Rapid generation method for facial animation |
CN103514432A (en) * | 2012-06-25 | 2014-01-15 | 诺基亚公司 | Method, device and computer program product for extracting facial features |
CN104170318A (en) * | 2012-04-09 | 2014-11-26 | 英特尔公司 | Communication using interactive avatars |
CN105678702A (en) * | 2015-12-25 | 2016-06-15 | 北京理工大学 | Face image sequence generation method and device based on feature tracking |
CN107154069A (en) * | 2017-05-11 | 2017-09-12 | 上海微漫网络科技有限公司 | A kind of data processing method and system based on virtual role |
CN108197533A (en) * | 2017-12-19 | 2018-06-22 | 迈巨(深圳)科技有限公司 | A kind of man-machine interaction method based on user's expression, electronic equipment and storage medium |
CN108256505A (en) * | 2018-02-12 | 2018-07-06 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN108335345A (en) * | 2018-02-12 | 2018-07-27 | 北京奇虎科技有限公司 | The control method and device of FA Facial Animation model, computing device |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111447379A (en) * | 2019-01-17 | 2020-07-24 | 百度在线网络技术(北京)有限公司 | Method and device for generating information |
GB2595094A (en) * | 2019-01-25 | 2021-11-17 | Beijing Bytedance Network Tech Co Ltd | Method and device for processing image having animal face |
GB2595094B (en) * | 2019-01-25 | 2023-03-08 | Beijing Bytedance Network Tech Co Ltd | Method and device for processing image having animal face |
WO2020151456A1 (en) * | 2019-01-25 | 2020-07-30 | 北京字节跳动网络技术有限公司 | Method and device for processing image having animal face |
CN109978975A (en) * | 2019-03-12 | 2019-07-05 | 深圳市商汤科技有限公司 | A kind of moving method and device, computer equipment of movement |
CN111985268A (en) * | 2019-05-21 | 2020-11-24 | 搜狗(杭州)智能科技有限公司 | Method and device for driving animation by human face |
CN111651033A (en) * | 2019-06-26 | 2020-09-11 | 广州虎牙科技有限公司 | Driving display method and device for human face, electronic equipment and storage medium |
CN111651033B (en) * | 2019-06-26 | 2024-03-05 | 广州虎牙科技有限公司 | Face driving display method and device, electronic equipment and storage medium |
CN110321008B (en) * | 2019-06-28 | 2023-10-24 | 北京百度网讯科技有限公司 | Interaction method, device, equipment and storage medium based on AR model |
CN110321008A (en) * | 2019-06-28 | 2019-10-11 | 北京百度网讯科技有限公司 | Exchange method, device, equipment and storage medium based on AR model |
CN110490162A (en) * | 2019-08-23 | 2019-11-22 | 北京搜狐新时代信息技术有限公司 | The methods, devices and systems of face variation are shown based on recognition of face unlocking function |
CN110580691A (en) * | 2019-09-09 | 2019-12-17 | 京东方科技集团股份有限公司 | dynamic processing method, device and equipment of image and computer readable storage medium |
CN110620884B (en) * | 2019-09-19 | 2022-04-22 | 平安科技(深圳)有限公司 | Expression-driven-based virtual video synthesis method and device and storage medium |
CN110620884A (en) * | 2019-09-19 | 2019-12-27 | 平安科技(深圳)有限公司 | Expression-driven-based virtual video synthesis method and device and storage medium |
CN110705094A (en) * | 2019-09-29 | 2020-01-17 | 深圳市商汤科技有限公司 | Flexible body simulation method and device, electronic equipment and computer readable storage medium |
WO2021083133A1 (en) * | 2019-10-29 | 2021-05-06 | 广州虎牙科技有限公司 | Image processing method and device, equipment and storage medium |
CN111063339A (en) * | 2019-11-11 | 2020-04-24 | 珠海格力电器股份有限公司 | Intelligent interaction method, device, equipment and computer readable medium |
CN111009024B (en) * | 2019-12-09 | 2024-03-26 | 咪咕视讯科技有限公司 | Method for generating dynamic image, electronic equipment and storage medium |
CN111009024A (en) * | 2019-12-09 | 2020-04-14 | 咪咕视讯科技有限公司 | Method for generating dynamic image, electronic equipment and storage medium |
CN111127603B (en) * | 2020-01-06 | 2021-06-11 | 北京字节跳动网络技术有限公司 | Animation generation method and device, electronic equipment and computer readable storage medium |
CN111127603A (en) * | 2020-01-06 | 2020-05-08 | 北京字节跳动网络技术有限公司 | Animation generation method and device, electronic equipment and computer readable storage medium |
CN111291674A (en) * | 2020-02-04 | 2020-06-16 | 清华珠三角研究院 | Method, system, device and medium for extracting expression and action of virtual character |
CN111368662A (en) * | 2020-02-25 | 2020-07-03 | 华南理工大学 | Method, device, storage medium and equipment for editing attribute of face image |
CN111368662B (en) * | 2020-02-25 | 2023-03-21 | 华南理工大学 | Method, device, storage medium and equipment for editing attribute of face image |
CN111638784A (en) * | 2020-05-26 | 2020-09-08 | 浙江商汤科技开发有限公司 | Facial expression interaction method, interaction device and computer storage medium |
CN111640183A (en) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | AR data display control method and device |
CN111669647A (en) * | 2020-06-12 | 2020-09-15 | 北京百度网讯科技有限公司 | Real-time video processing method, device, equipment and storage medium |
CN111901672A (en) * | 2020-06-12 | 2020-11-06 | 深圳市京华信息技术有限公司 | Artificial intelligence image processing method |
CN112634420A (en) * | 2020-12-22 | 2021-04-09 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN112634420B (en) * | 2020-12-22 | 2024-04-30 | 北京达佳互联信息技术有限公司 | Image special effect generation method and device, electronic equipment and storage medium |
CN114007099A (en) * | 2021-11-04 | 2022-02-01 | 北京搜狗科技发展有限公司 | Video processing method and device for video processing |
CN114281236A (en) * | 2021-12-28 | 2022-04-05 | 建信金融科技有限责任公司 | Text processing method, device, equipment, medium and program product |
CN114281236B (en) * | 2021-12-28 | 2023-08-15 | 建信金融科技有限责任公司 | Text processing method, apparatus, device, medium, and program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109147017A (en) | Dynamic image generation method, device, equipment and storage medium | |
CN109191548A (en) | Animation method, device, equipment and storage medium | |
US11321385B2 (en) | Visualization of image themes based on image content | |
KR102658960B1 (en) | System and method for face reenactment | |
CN110390704B (en) | Image processing method, image processing device, terminal equipment and storage medium | |
CN108335345B (en) | Control method and device of facial animation model and computing equipment | |
CN110490896B (en) | Video frame image processing method and device | |
CN111028330A (en) | Three-dimensional expression base generation method, device, equipment and storage medium | |
CN107343225B (en) | Method, apparatus and terminal device for displaying business objects in video images | |
CN109409274B (en) | Face image transformation method based on face three-dimensional reconstruction and face alignment | |
CN108986190A (en) | Method and system for a virtual newscaster based on a human-like non-real character in three-dimensional animation | |
US11282257B2 (en) | Pose selection and animation of characters using video data and training techniques | |
CN108111911B (en) | Video data real-time processing method and device based on self-adaptive tracking frame segmentation | |
CN109064548B (en) | Video generation method, device, equipment and storage medium | |
CN111145308A (en) | Sticker obtaining method and device | |
CN106056650A (en) | Facial expression synthesis method based on rapid expression information extraction and Poisson image fusion | |
CN112995534B (en) | Video generation method, device, equipment and readable storage medium | |
US20190206117A1 (en) | Image processing method, intelligent terminal, and storage device | |
CN115100334B (en) | Image edge tracing and image animation method, device and storage medium | |
CN115393480A (en) | Speaker synthesis method, device and storage medium based on dynamic neural texture | |
CN114708636A (en) | Dense face grid expression driving method, device and medium | |
CN114758027A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
US9092874B2 (en) | Method for determining the movements of an object from a stream of images | |
CN108109158B (en) | Video crossing processing method and device based on self-adaptive threshold segmentation | |
CN115937372B (en) | Facial expression simulation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190104 |