CN109948555A - Face super-resolution recognition method based on video stream - Google Patents
Face super-resolution recognition method based on video stream
- Publication number
- CN109948555A (application CN201910218852.0A)
- Authority
- CN
- China
- Prior art keywords
- eyebrow
- mouth
- vector
- eyes
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present application provides a face super-resolution recognition method based on a video stream. The method obtains consecutive multi-frame images containing a face; locates each facial feature in every frame, the facial features including the eyes, eyebrows, and mouth; dynamically tracks each facial feature according to the positioning results of the same facial feature across the consecutive frames; performs vector estimation on the consecutive frames according to the dynamic tracking results of each facial feature; and, according to the vector estimation results, performs super-resolution restoration on the multi-frame images to obtain a face image. By performing vector estimation on the consecutive frames according to the dynamic tracking results of each facial feature and then restoring the frames at super-resolution, the method provided by the present application can improve face recognition accuracy under motion conditions.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face super-resolution recognition method based on a video stream.
Background art
As computer technology and image recognition technology become more and more widely used, many real-life scenarios require recognizing the face in an image for identity verification.
However, in many scenarios, conditions such as a distant subject, bad weather, or poor mobile-terminal imaging lead to low image sharpness. This is especially true for camera footage: most monitoring cameras are mounted in high positions, so the captured face images are small and blurry. Moreover, in many cases the person is moving rather than stationary, and the eyes and lips are themselves in motion.
Traditional face recognition technology handles faces under motion conditions poorly, so in many such cases the accuracy of face recognition is low.
Summary of the invention
To solve the above problems, an embodiment of the present application proposes a face super-resolution recognition method based on a video stream. The main technical scheme adopted by the present invention includes:
A face super-resolution recognition method based on a video stream, the method comprising:
S101, obtaining consecutive multi-frame images containing a face;
S102, locating each facial feature in every frame, the facial features including the eyes, eyebrows, and mouth;
S103, dynamically tracking each facial feature according to the positioning results of the same facial feature across the consecutive multi-frame images;
S104, performing vector estimation on the consecutive multi-frame images according to the dynamic tracking results of each facial feature;
S105, performing super-resolution restoration on the multi-frame images according to the vector estimation results to obtain a face image.
Optionally, the face occupies no more than 50 × 50 pixels in any frame image.
Optionally, S102 includes:
For any frame image,
S201, identifying, according to a facial feature recognition model trained in advance on a deep neural network, the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame;
S202, determining, from the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame, the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width;
S203, taking the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame, together with the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width, as the positioning result for each facial feature in the frame.
Optionally, the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth identified in S201 include:
the inner-corner position, outer-corner position, and eyeball center of the left eye; the inner-corner position, outer-corner position, and eyeball center of the right eye; the head position, tail position, and peak position of the left eyebrow; the head position, tail position, and peak position of the right eyebrow; and the left corner position, right corner position, lip-peak position, and lip-bottom position of the mouth.
Optionally, S202 includes:
S202-1, determining the face area in the frame according to the facial feature recognition model;
S202-2, determining the image area of the frame;
S202-3, determining the frame's ratio = face area in the frame / image area of the frame;
S202-4, determining the eye adjustment coefficient = ratio × |outer-corner position of the left eye − inner-corner position of the left eye| / |outer-corner position of the right eye − inner-corner position of the right eye|;
S202-5, determining the eyebrow adjustment coefficient = ratio × |head position of the left eyebrow − tail position of the left eyebrow| / |head position of the right eyebrow − tail position of the right eyebrow|;
S202-6, determining the mouth adjustment coefficient = ratio × |left corner position of the mouth − inner-corner position of the left eye| / |right corner position of the mouth − inner-corner position of the right eye|;
S202-7, maximum spacing between the eyes = eye adjustment coefficient × |outer-corner position of the left eye − outer-corner position of the right eye| / ratio;
minimum spacing between the eyes = eye adjustment coefficient × |inner-corner position of the left eye − inner-corner position of the right eye|;
height difference between the eyes = eye adjustment coefficient × |eyeball center of the left eye − eyeball center of the right eye|;
maximum spacing between the eyebrows = eyebrow adjustment coefficient × |tail position of the left eyebrow − tail position of the right eyebrow|;
minimum spacing between the eyebrows = eyebrow adjustment coefficient × |head position of the left eyebrow − head position of the right eyebrow|;
height difference between the eyebrows = eyebrow adjustment coefficient × |peak position of the left eyebrow − peak position of the right eyebrow|;
mouth length = mouth adjustment coefficient × |left corner position of the mouth − right corner position of the mouth|;
mouth width = mouth adjustment coefficient × |lip-peak position of the mouth − lip-bottom position of the mouth|.
Optionally, S103 includes:
S103-1, forming, from the left eye position, right eye position, maximum spacing between the eyes, minimum spacing between the eyes, and height difference between the eyes in every frame, an eye vector in the format [left eye position, right eye position, maximum spacing between the eyes, minimum spacing between the eyes, height difference between the eyes];
S103-2, forming, from the left eyebrow position, right eyebrow position, maximum spacing between the eyebrows, minimum spacing between the eyebrows, and height difference between the eyebrows in every frame, an eyebrow vector in the format [left eyebrow position, right eyebrow position, maximum spacing between the eyebrows, minimum spacing between the eyebrows, height difference between the eyebrows];
S103-3, forming, from the mouth position, mouth length, and mouth width in every frame, a mouth vector in the format [mouth position, mouth length, mouth width];
S103-4, forming, from the eye vector, eyebrow vector, and mouth vector, a two-dimensional feature vector in the format [eye vector, eyebrow vector, mouth vector];
S103-5, arranging the two-dimensional feature vectors of the frames in frame order to form a vector sequence;
S103-6, taking the vector sequence as the dynamic tracking result of each facial feature.
Optionally, S104 includes:
S104-1, selecting in turn each two-dimensional feature vector other than the first in the vector sequence, and calculating the eye vector difference, eyebrow vector difference, and mouth vector difference between the selected two-dimensional feature vector and its predecessor;
S104-2, calculating the standard deviation of all eye vector differences, the standard deviation of all eyebrow vector differences, and the standard deviation of all mouth vector differences;
S104-3, selecting in turn each frame image other than the first, and calculating the difference between the face area of the selected frame and the face area of its preceding frame;
S104-5, calculating the standard deviation of all face area differences;
S104-6, determining the eye estimate as the standard deviation of all eye vector differences / the standard deviation of all face area differences, the eyebrow estimate as the standard deviation of all eyebrow vector differences / the standard deviation of all face area differences, and the mouth estimate as the standard deviation of all mouth vector differences / the standard deviation of all face area differences;
S104-7, taking the eye estimate, eyebrow estimate, and mouth estimate as the vector estimation result.
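The estimation steps S104-2 through S104-7 above reduce to three standard-deviation ratios. A minimal sketch, assuming the per-frame differences have already been computed as plain numbers and using the population standard deviation (the patent does not specify which variant):

```python
import numpy as np

def vector_estimates(eye_diffs, brow_diffs, mouth_diffs, area_diffs):
    """S104-2..S104-7: normalise the spread of each feature's frame-to-frame
    differences by the spread of the face-area differences."""
    area_std = np.std(area_diffs)            # S104-5
    return (np.std(eye_diffs) / area_std,    # eye estimate
            np.std(brow_diffs) / area_std,   # eyebrow estimate
            np.std(mouth_diffs) / area_std)  # mouth estimate
```

The function names and argument layout are illustrative, not from the patent.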
Optionally, S104-1 includes:
For any two-dimensional feature vector other than the first in the vector sequence,
S301, determining the eye vector, eyebrow vector, and mouth vector in that two-dimensional feature vector, denoted the first eye vector, first eyebrow vector, and first mouth vector;
S302, determining the eye vector, eyebrow vector, and mouth vector in the preceding two-dimensional feature vector, denoted the second eye vector, second eyebrow vector, and second mouth vector;
S303, determining the eye vector difference = min{left eye position in the first eye vector / left eye position in the second eye vector, right eye position in the first eye vector / right eye position in the second eye vector} × max{maximum eye spacing in the first eye vector − maximum eye spacing in the second eye vector, minimum eye spacing in the first eye vector − minimum eye spacing in the second eye vector, eye height difference in the first eye vector − eye height difference in the second eye vector};
S304, determining the eyebrow vector difference = min{left eyebrow position in the first eyebrow vector / left eyebrow position in the second eyebrow vector, right eyebrow position in the first eyebrow vector / right eyebrow position in the second eyebrow vector} × [(maximum eyebrow spacing in the first eyebrow vector − maximum eyebrow spacing in the second eyebrow vector) + (minimum eyebrow spacing in the first eyebrow vector − minimum eyebrow spacing in the second eyebrow vector) + (eyebrow height difference in the first eyebrow vector − eyebrow height difference in the second eyebrow vector)] / 3;
S305, determining the mouth vector difference = (mouth position in the first mouth vector / mouth position in the second mouth vector) × [(mouth length in the first mouth vector × mouth width in the first mouth vector) − (mouth length in the second mouth vector × mouth width in the second mouth vector)].
Optionally, S105 includes:
S105-1, performing super-resolution reconstruction on the multi-frame images a preset number of times according to the following formula to obtain the final reconstructed image:

X^(n) = X^(n−1) + (1/K) · Σ_{k=1..K} B_k( ( ( y_k − ( h ∗ F_k X^(n−1) ) ↓ s ) ↑ s ) ∗ P )

wherein n is the current reconstruction iteration, 1 ≤ n ≤ N, N is the preset number, K is the total number of frames of the multi-frame images, k is the frame index, y_k is the k-th frame image, ↑ is the up-sampling operation, s is a preset parameter, P is the back-projection kernel, F_k is the forward mapping operation of the k-th frame, B_k is the backward mapping operation of the k-th frame, h is the blur kernel, ∗ is the convolution operator, ↓ is the down-sampling operation, and X^(n) is the image after the n-th reconstruction.
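The update that the symbol list above describes is the classic multi-frame iterative back-projection scheme. A minimal sketch, under simplifying assumptions that are not from the patent: the motion mappings F_k and B_k are taken as identity, the blur kernel h and back-projection kernel P as deltas, and up-sampling as nearest-neighbour replication:

```python
import numpy as np

def ibp_reconstruct(frames, s=2, n_iters=10):
    """Iterative back-projection sketch of the S105-1 update.

    `frames` is a list of equally sized 2D low-resolution arrays; `s` is the
    up-sampling factor. F_k/B_k, h, and P are trivialised (see lead-in).
    """
    K = len(frames)
    # Initialise X^(0) by up-sampling the first frame (nearest-neighbour).
    X = np.kron(frames[0], np.ones((s, s)))
    for _ in range(n_iters):
        correction = np.zeros_like(X)
        for y_k in frames:
            simulated = X[::s, ::s]            # (h * F_k X) ↓ s with h, F_k trivial
            residual = y_k - simulated         # y_k minus the simulated low-res frame
            correction += np.kron(residual, np.ones((s, s)))  # (residual ↑ s) * P
        X = X + correction / K                 # averaged back-projection step
    return X
```

With real data the trivialised operators would be replaced by per-frame motion compensation and actual blur/back-projection kernels.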
S105-2, identifying the face image in the final reconstructed image according to the facial feature recognition model;
S105-3, adjusting the face image in the final reconstructed image according to the vector estimation result to obtain the final face image.
Optionally, adjusting the face image in the final reconstructed image according to the vector estimation result in S105-3 comprises:
For each pixel of the face image in the final reconstructed image,
if the pixel corresponds to an eye, adjusting its resolution to the pixel's resolution in the final reconstructed image × the eye estimate;
if the pixel corresponds to an eyebrow, adjusting its resolution to the pixel's resolution in the final reconstructed image × the eyebrow estimate;
if the pixel corresponds to the mouth, adjusting its resolution to the pixel's resolution in the final reconstructed image × the mouth estimate;
if the pixel corresponds to none of the eyes, eyebrows, or mouth, leaving its resolution unadjusted.
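The per-pixel branching above is a simple region-conditional scaling. A minimal sketch, with hypothetical region labels and a flat list of per-pixel resolution values standing in for the image (the data layout is an assumption, not the patent's):

```python
def adjust_face_pixels(labels, resolution, est):
    """S105-3 sketch: scale each pixel's resolution value by the estimate of
    the region it belongs to. `labels` tags each pixel 'eye', 'brow',
    'mouth', or None for pixels outside all three regions."""
    out = []
    for label, res in zip(labels, resolution):
        # Pixels outside the eyes/eyebrows/mouth keep their resolution (factor 1.0).
        factor = {"eye": est["eye"], "brow": est["brow"],
                  "mouth": est["mouth"]}.get(label, 1.0)
        out.append(res * factor)
    return out
```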
The beneficial effects of the present invention are: by performing vector estimation on the consecutive multi-frame images containing a face according to the dynamic tracking results of each facial feature, and then performing super-resolution restoration on the multi-frame images according to the vector estimation results to obtain a face image, face recognition accuracy under motion conditions can be improved.
Brief description of the drawings
Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
Fig. 1 shows a flow diagram of a face super-resolution recognition method based on a video stream provided by an embodiment of the present application;
Fig. 2 shows a facial feature recognition schematic diagram provided by an embodiment of the present application;
Fig. 3 shows an eye schematic diagram provided by an embodiment of the present application;
Fig. 4 shows an eyebrow position schematic diagram provided by an embodiment of the present application;
Fig. 5 shows a mouth position schematic diagram provided by an embodiment of the present application;
Fig. 6 shows a facial feature dynamic tracking schematic diagram provided by an embodiment of the present application.
Specific embodiment
In many scenarios, conditions such as a distant subject, bad weather, or poor mobile-terminal imaging lead to low image sharpness. This is especially true for camera footage: most monitoring cameras are mounted in high positions, so the captured face images are small and blurry. Moreover, in many cases the person is moving rather than stationary, and the eyes and lips are themselves in motion. Traditional face recognition technology handles faces under motion conditions poorly, so in many such cases face recognition accuracy is low.
On this basis, the present proposal provides a method that performs vector estimation on consecutive multi-frame images containing a face according to the dynamic tracking results of each facial feature, and then performs super-resolution restoration on the multi-frame images according to the vector estimation results to obtain a face image, improving face recognition accuracy under motion conditions.
Referring to Fig. 1, the face super-resolution recognition method based on a video stream provided in this embodiment is implemented as follows:
S101, obtaining consecutive multi-frame images containing a face.
Here, the face occupies no more than 50 × 50 pixels in every frame image.
In practice, a segment of video containing the face can be obtained.
S102, locating each facial feature in every frame.
Here, the facial features include the eyes (left eye and right eye), eyebrows (left eyebrow and right eyebrow), and mouth.
This step can use deep learning to train a deep neural network for facial feature recognition, perform feature recognition on the face in every frame, identify the eyes, eyebrows, and mouth, and locate the position of each facial feature.
Specifically, for any frame image (for example, the i-th frame),
S201, identifying, according to the facial feature recognition model trained in advance on a deep neural network, the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame.
As shown in Fig. 2, the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the i-th frame are first identified according to the facial feature recognition model trained in advance on a deep neural network.
Since the eyes, eyebrows, and mouth are each a region while a position is a point, the recognition can specifically identify only: the inner-corner position, outer-corner position, and eyeball center of the left eye; the inner-corner position, outer-corner position, and eyeball center of the right eye; the head position, tail position, and peak position of the left eyebrow; the head position, tail position, and peak position of the right eyebrow; and the left corner position, right corner position, lip-peak position, and lip-bottom position of the mouth.
Here, the inner eye corner is the corner of the eye at the bridge of the nose, such as position 1 in Fig. 3. The outer eye corner is the corner of the eye near the temple, such as position 2 in Fig. 3. The eyeball center is the center of the eyeball, such as position 3 in Fig. 3.
The eyebrow head is the tip of the eyebrow at the bridge of the nose, such as position 1 in Fig. 4. The eyebrow tail is the tip of the eyebrow at the temple, such as position 2 in Fig. 4. The eyebrow peak is the highest point of the eyebrow, such as position 3 in Fig. 4.
The left corner of the mouth is the left mouth corner, such as position 1 in Fig. 5; the right corner of the mouth is the right mouth corner, such as position 2 in Fig. 5. The lip peak is the highest point of the upper lip, such as position 3 in Fig. 5; the lip bottom is the lowest point of the lower lip, such as position 4 in Fig. 5.
S202, determining, from the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame, the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width.
Although the multi-frame images all capture the same face, hand shake, movement, and similar causes can change the distance between the image acquisition device (such as a video camera or webcam) and the face from frame to frame, sometimes farther and sometimes nearer. This makes the proportion of the face in the image differ between frames, slightly larger in some and slightly smaller in others. Such differences in face proportion make the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width incomparable across frames.
To address motion conditions, this step dynamically determines the eye adjustment coefficient, eyebrow adjustment coefficient, and mouth adjustment coefficient of every frame from the frame's ratio, and then determines from those coefficients the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width of every frame. This effectively removes the face-proportion differences caused by movement and ensures that these measurements are comparable across frames.
Specifically,
S202-1, determining the face area in the frame according to the facial feature recognition model.
S202-2, determining the image area of the frame.
S202-3, determining the frame's ratio = face area in the frame / image area of the frame.
S202-4, determining the eye adjustment coefficient = ratio × |outer-corner position of the left eye − inner-corner position of the left eye| / |outer-corner position of the right eye − inner-corner position of the right eye|.
S202-5, determining the eyebrow adjustment coefficient = ratio × |head position of the left eyebrow − tail position of the left eyebrow| / |head position of the right eyebrow − tail position of the right eyebrow|.
S202-6, determining the mouth adjustment coefficient = ratio × |left corner position of the mouth − inner-corner position of the left eye| / |right corner position of the mouth − inner-corner position of the right eye|.
S202-7, maximum spacing between the eyes = eye adjustment coefficient × |outer-corner position of the left eye − outer-corner position of the right eye| / ratio.
Minimum spacing between the eyes = eye adjustment coefficient × |inner-corner position of the left eye − inner-corner position of the right eye|.
Height difference between the eyes = eye adjustment coefficient × |eyeball center of the left eye − eyeball center of the right eye|.
Maximum spacing between the eyebrows = eyebrow adjustment coefficient × |tail position of the left eyebrow − tail position of the right eyebrow|.
Minimum spacing between the eyebrows = eyebrow adjustment coefficient × |head position of the left eyebrow − head position of the right eyebrow|.
Height difference between the eyebrows = eyebrow adjustment coefficient × |peak position of the left eyebrow − peak position of the right eyebrow|.
Mouth length = mouth adjustment coefficient × |left corner position of the mouth − right corner position of the mouth|.
Mouth width = mouth adjustment coefficient × |lip-peak position of the mouth − lip-bottom position of the mouth|.
S203, taking the positions of the left eye, right eye, left eyebrow, right eyebrow, and mouth in the frame, together with the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width, as the positioning result for each facial feature in the frame.
At this point, every frame's left eye position, right eye position, left eyebrow position, right eyebrow position, mouth position, maximum spacing between the eyes, minimum spacing between the eyes, height difference between the eyes, maximum spacing between the eyebrows, minimum spacing between the eyebrows, height difference between the eyebrows, mouth length, and mouth width are available.
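The S202 computations above can be sketched directly from a set of landmark coordinates. A minimal sketch, assuming hypothetical landmark names (`l_outer`, `l_inner`, etc., which are not from the patent) mapped to 2D NumPy points, with |·| read as Euclidean distance:

```python
import numpy as np

def frame_metrics(landmarks, face_area, image_area):
    """Compute the adjusted eye/eyebrow/mouth measurements of S202 for one frame."""
    d = lambda a, b: float(np.linalg.norm(landmarks[a] - landmarks[b]))
    ratio = face_area / image_area  # S202-3

    # Adjustment coefficients (S202-4 .. S202-6)
    eye_c = ratio * d("l_outer", "l_inner") / d("r_outer", "r_inner")
    brow_c = ratio * d("l_brow_head", "l_brow_tail") / d("r_brow_head", "r_brow_tail")
    mouth_c = ratio * d("mouth_l", "l_inner") / d("mouth_r", "r_inner")

    # Adjusted measurements (S202-7)
    return {
        "eye_max": eye_c * d("l_outer", "r_outer") / ratio,
        "eye_min": eye_c * d("l_inner", "r_inner"),
        "eye_h": eye_c * d("l_center", "r_center"),
        "brow_max": brow_c * d("l_brow_tail", "r_brow_tail"),
        "brow_min": brow_c * d("l_brow_head", "r_brow_head"),
        "brow_h": brow_c * d("l_brow_peak", "r_brow_peak"),
        "mouth_len": mouth_c * d("mouth_l", "mouth_r"),
        "mouth_w": mouth_c * d("lip_peak", "lip_bottom"),
    }
```

In practice the landmark dictionary would come from the deep-neural-network facial feature recognition model described in S201.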
S103, dynamically tracking each facial feature according to the positioning results of the same facial feature across the consecutive multi-frame images.
After step S102 is executed, the positions of the eyes, eyebrows, and mouth in every frame are known, so the eyes, eyebrows, and mouth can be tracked across frames, as shown in Fig. 6: the left side of Fig. 6 shows the positions of the eyes, eyebrows, and mouth in the previous frame, and the right side shows their positions in the following frame.
In practice, step S103 can record the dynamic tracking result in vector form. For example:
S103-1, forming, from the left eye position, right eye position, maximum spacing between the eyes, minimum spacing between the eyes, and height difference between the eyes in every frame, an eye vector in the format [left eye position, right eye position, maximum spacing between the eyes, minimum spacing between the eyes, height difference between the eyes].
Here, the eye vector is a one-dimensional vector.
S103-2, forming, from the left eyebrow position, right eyebrow position, maximum spacing between the eyebrows, minimum spacing between the eyebrows, and height difference between the eyebrows in every frame, an eyebrow vector in the format [left eyebrow position, right eyebrow position, maximum spacing between the eyebrows, minimum spacing between the eyebrows, height difference between the eyebrows].
Here, the eyebrow vector is a one-dimensional vector.
S103-3, forming, from the mouth position, mouth length, and mouth width in every frame, a mouth vector in the format [mouth position, mouth length, mouth width].
Here, the mouth vector is a one-dimensional vector.
S103-4, forming, from the eye vector, eyebrow vector, and mouth vector, a two-dimensional feature vector in the format [eye vector, eyebrow vector, mouth vector].
The two-dimensional feature vector here is a two-dimensional vector composed of three one-dimensional vectors.
S103-5, arranging the two-dimensional feature vectors of the frames in frame order to form a vector sequence.
In the vector sequence here, each element is a two-dimensional feature vector; in practice the sequence forms a three-dimensional vector.
S103-6, taking the vector sequence as the dynamic tracking result of each facial feature.
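The nesting described in S103-1 through S103-6 can be sketched with plain lists. A minimal sketch, with hypothetical key names (`left_eye`, `eye_max`, etc.) standing in for the positions and measurements produced by S102:

```python
def make_feature_vector(pos, metrics):
    """Assemble one frame's two-dimensional feature vector per S103-1..S103-4."""
    eye = [pos["left_eye"], pos["right_eye"],
           metrics["eye_max"], metrics["eye_min"], metrics["eye_h"]]      # S103-1
    brow = [pos["left_brow"], pos["right_brow"],
            metrics["brow_max"], metrics["brow_min"], metrics["brow_h"]]  # S103-2
    mouth = [pos["mouth"], metrics["mouth_len"], metrics["mouth_w"]]      # S103-3
    return [eye, brow, mouth]                                             # S103-4

def make_vector_sequence(frames):
    # S103-5/S103-6: per-frame feature vectors in frame order form the
    # vector sequence used as the dynamic tracking result.
    return [make_feature_vector(f["pos"], f["metrics"]) for f in frames]
```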
S104, performing vector estimation on the consecutive multi-frame images according to the dynamic tracking results of each facial feature.
This step determines, from the dynamic tracking results obtained in S103, a vector estimate corresponding to each of the eyes, eyebrows, and mouth. The vector estimate is in effect an adjustment coefficient by which the face image is fine-tuned during subsequent recognition. Because the vector estimate is derived from the dynamic tracking of all frames, it reflects the overall deviation of the whole image sequence; adjusting according to the vector estimate can therefore further eliminate the influence of motion conditions on face recognition accuracy.
Specifically, S104 is implemented as follows:
S104-1, selecting in turn each two-dimensional feature vector other than the first in the vector sequence, and calculating the eye vector difference, eyebrow vector difference, and mouth vector difference between the selected two-dimensional feature vector and its predecessor.
For example, suppose there are 5 frames and the vector sequence is {the two-dimensional feature vector of frame 1, the two-dimensional feature vector of frame 2, the two-dimensional feature vector of frame 3, the two-dimensional feature vector of frame 4, the two-dimensional feature vector of frame 5}.
This step selects, in turn, the two-dimensional feature vectors of frames 2, 3, 4, and 5.
For example, when the two-dimensional feature vector of frame 2 is selected, the eye vector difference, eyebrow vector difference, and mouth vector difference between it and its predecessor (the two-dimensional feature vector of frame 1) are calculated.
When determining the eye vector difference, eyebrow vector difference, and mouth vector difference, implementations include but are not limited to:
For any two-dimensional feature vector other than the first in the vector sequence (for example, the two-dimensional feature vector of frame 2),
S301, determining the eye vector, eyebrow vector, and mouth vector in that two-dimensional feature vector, denoted the first eye vector, first eyebrow vector, and first mouth vector.
For example, the eye vector, eyebrow vector, and mouth vector in the two-dimensional feature vector of frame 2 are denoted the first eye vector, first eyebrow vector, and first mouth vector.
S302, determining the eye vector, eyebrow vector, and mouth vector in the preceding two-dimensional feature vector, denoted the second eye vector, second eyebrow vector, and second mouth vector.
For example, the eye vector, eyebrow vector, and mouth vector in the two-dimensional feature vector of frame 1 are denoted the second eye vector, second eyebrow vector, and second mouth vector.
S303: determine the eye vector difference = min{left-eye position in the first eye vector / left-eye position in the second eye vector, right-eye position in the first eye vector / right-eye position in the second eye vector} × max{(maximum eye spacing in the first eye vector − maximum eye spacing in the second eye vector), (minimum eye spacing in the first eye vector − minimum eye spacing in the second eye vector), (eye height difference in the first eye vector − eye height difference in the second eye vector)}.
When determining the eye vector difference, both the ratio of the single-eye positions and the gap in the spacing between the eyes are considered. The ratio of the single-eye positions reflects the longitudinal offset of the eye positions between the two frames, while the gap in the eye spacing reflects their lateral offset; by taking both the longitudinal and the lateral offsets into account, the eye deviation between the two frames caused by movement can be effectively eliminated.
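As an illustration, the S303 computation can be sketched in Python. The list layout of the eye vector and the reduction of eye positions to scalar coordinates are assumptions made for this sketch, and the function name is ours, not the patent's:

```python
def eye_vector_diff(first, second):
    """Sketch of S303: eye vector difference between two consecutive frames.

    Each argument is an eye vector laid out as
    [left_eye_pos, right_eye_pos, max_spacing, min_spacing, height_diff],
    with positions reduced to scalar coordinates for illustration.
    """
    # Ratio of single-eye positions: longitudinal offset between the frames.
    ratio = min(first[0] / second[0], first[1] / second[1])
    # Largest spacing gap: lateral offset between the frames.
    spread = max(first[2] - second[2],
                 first[3] - second[3],
                 first[4] - second[4])
    return ratio * spread
```

For identical frames the spacing gaps are all zero, so the difference is zero, matching the intent that a static face yields no movement signal.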
S304: determine the eyebrow vector difference = min{left-eyebrow position in the first eyebrow vector / left-eyebrow position in the second eyebrow vector, right-eyebrow position in the first eyebrow vector / right-eyebrow position in the second eyebrow vector} × [(maximum eyebrow spacing in the first eyebrow vector − maximum eyebrow spacing in the second eyebrow vector) + (minimum eyebrow spacing in the first eyebrow vector − minimum eyebrow spacing in the second eyebrow vector) + (eyebrow height difference in the first eyebrow vector − eyebrow height difference in the second eyebrow vector)] / 3.
When determining the eyebrow vector difference, both the ratio of the single-eyebrow positions and the gap in the spacing between the eyebrows are considered. The ratio of the single-eyebrow positions reflects the longitudinal offset of the eyebrow positions between the two frames, while the gap in the eyebrow spacing reflects their lateral offset; by taking both the longitudinal and the lateral offsets into account, the eyebrow-position deviation between the two frames caused by movement can be effectively eliminated.
In addition, since the eyebrows are relatively fine compared with the eyes, the spacing gap between the eyebrows is described not by the method used for the eyes but by an averaging method, which effectively highlights the gap features of the eyebrows and facilitates the subsequent adjustment.
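The S304 formula differs from S303 only in averaging the three spacing gaps instead of taking their maximum. A matching sketch, under the same layout assumptions as before (the vector layout and function name are ours):

```python
def eyebrow_vector_diff(first, second):
    """Sketch of S304: eyebrow vector difference between consecutive frames.

    Each argument is an eyebrow vector laid out as
    [left_brow_pos, right_brow_pos, max_spacing, min_spacing, height_diff],
    with positions reduced to scalar coordinates for illustration.
    """
    ratio = min(first[0] / second[0], first[1] / second[1])
    # Average of the three spacing gaps (the "/3" in the patent formula).
    mean_gap = ((first[2] - second[2]) +
                (first[3] - second[3]) +
                (first[4] - second[4])) / 3
    return ratio * mean_gap
```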
S305: determine the mouth vector difference = (mouth position in the first mouth vector / mouth position in the second mouth vector) × [(mouth length in the first mouth vector × mouth width in the first mouth vector) − (mouth length in the second mouth vector × mouth width in the second mouth vector)].
When determining the mouth vector difference, both the ratio of the mouth positions and the gap between the mouth areas are considered. The ratio of the mouth positions reflects the longitudinal offset of the mouth position between the two frames, while the gap between the mouth areas reflects its lateral offset; by taking both offsets into account, the mouth-position deviation between the two frames caused by movement can be effectively eliminated.
In addition, since the mouth has a larger area than the eyes and eyebrows, the gap between the mouths is described not by the spacing method used for the eyes or eyebrows but by the area difference, which effectively highlights the gap features of the mouth and facilitates the subsequent adjustment.
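The S305 formula compares length × width products, i.e. approximate mouth areas. A sketch under the same assumptions as the previous two (layout and function name are ours):

```python
def mouth_vector_diff(first, second):
    """Sketch of S305: mouth vector difference between consecutive frames.

    Each argument is a mouth vector laid out as
    [mouth_pos, mouth_length, mouth_width],
    with the position reduced to a scalar coordinate for illustration.
    """
    ratio = first[0] / second[0]
    # Area gap: length x width in each frame, then their difference.
    area_gap = first[1] * first[2] - second[1] * second[2]
    return ratio * area_gap
```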
S104-2: calculate the standard deviation of all eye vector differences, the standard deviation of all eyebrow vector differences, and the standard deviation of all mouth vector differences.
S104-3: select one frame at a time from the non-first frames, and calculate the difference between the face area of the selected frame and the face area of the preceding frame.
S104-5: calculate the standard deviation of all face-area differences.
S104-6: determine (the standard deviation of all eye vector differences) / (the standard deviation of the face-area differences) as the eye estimate, (the standard deviation of all eyebrow vector differences) / (the standard deviation of the face-area differences) as the eyebrow estimate, and (the standard deviation of all mouth vector differences) / (the standard deviation of the face-area differences) as the mouth estimate.
When calculating the eye, eyebrow, and mouth estimates, normalizing by the standard deviation of the face-area differences further eliminates the proportional effect of movement on the eyes, eyebrows, and mouth. This guarantees the independence of the three estimates and reduces the influence of movement on them.
S104-7: take the eye estimate, the eyebrow estimate, and the mouth estimate as the vector estimation result.
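Steps S104-2 through S104-7 can be sketched as a single normalization step in Python. The patent does not say whether the population or the sample standard deviation is meant; the population form, the list inputs, and the function name are our assumptions:

```python
import statistics

def vector_estimates(eye_diffs, eyebrow_diffs, mouth_diffs, area_diffs):
    """Sketch of S104-2..S104-7: normalise each feature's standard
    deviation by the standard deviation of the frame-to-frame
    face-area differences, yielding (eye, eyebrow, mouth) estimates."""
    area_std = statistics.pstdev(area_diffs)  # S104-5
    return (statistics.pstdev(eye_diffs) / area_std,      # eye estimate
            statistics.pstdev(eyebrow_diffs) / area_std,  # eyebrow estimate
            statistics.pstdev(mouth_diffs) / area_std)    # mouth estimate
```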
S105: perform super-resolution restoration on the multi-frame images according to the vector estimation result to obtain the face image.
In this step, super-resolution restoration is first performed on the multi-frame images to obtain one high-resolution face image; the resolution is then fine-tuned according to the vector estimation result, which guarantees that the adjusted face image is no longer affected by the photographer's movement.
Specifically,
S105-1: perform the preset number of super-resolution reconstructions on the multi-frame images according to the following formula to obtain the final reconstructed image:
where n is the current reconstruction iteration with 1 ≤ n ≤ N, N is the preset number, K is the total number of frames of the multi-frame images, k is the frame index, y_k is the k-th frame image, ↑ denotes the upsampling operation, s is a preset parameter, P is the back-projection kernel, F_k is the forward mapping operation of the k-th frame, B_k is the backward mapping operation of the k-th frame, h is the blur kernel, * is the convolution operator, ↓ denotes the downsampling operation, and X^(n) is the image after the n-th reconstruction.
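The reconstruction formula itself is rendered as an image in the original and does not survive in this text. Purely as an illustration, a standard iterative back-projection update that is consistent with the symbols defined above (this is our reconstruction of a textbook form, not necessarily the patent's exact expression) would read:

```latex
X^{(n)} = X^{(n-1)} + \frac{1}{K} \sum_{k=1}^{K}
  B_k\!\left( \left[ \left( y_k - \left( h * F_k X^{(n-1)} \right)\!\downarrow s \right)\!\uparrow s \right] * P \right)
```

Here each low-resolution frame y_k is compared against the current estimate after forward mapping, blurring, and downsampling; the residual is upsampled, convolved with the back-projection kernel P, mapped back, and averaged over the K frames.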
S105-2: identify the face image in the final reconstructed image according to the face-feature recognition model.
S105-3: adjust the face image in the final reconstructed image according to the vector estimation result to obtain the final face image.
Specifically, the face image in the final reconstructed image is adjusted according to the vector estimation result as follows.
For each pixel of the face image in the final reconstructed image:
If the pixel corresponds to the eyes, adjust its resolution to the resolution of that point in the final reconstructed image × the eye estimate.
If the pixel corresponds to the eyebrows, adjust its resolution to the resolution of that point in the final reconstructed image × the eyebrow estimate.
If the pixel corresponds to the mouth, adjust its resolution to the resolution of that point in the final reconstructed image × the mouth estimate.
If the pixel corresponds to none of the eyes, eyebrows, or mouth, its resolution is not adjusted.
After the above adjustment is applied to the face image in the final reconstructed image, the final face image is obtained.
The above adjustment process is applied only to the pixels of the eyes, eyebrows, and mouth; pixels at other positions are not adjusted, which both improves adjustment efficiency and reduces the resources consumed by the adjustment, while also improving the adjustment effect. In addition, the targeted adjustment of the eyes, eyebrows, and mouth better matches personal characteristics and further improves the adjustment effect.
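The per-pixel rule above reduces to a lookup with a default of 1.0. A minimal sketch in Python, where the class labels, the function name, and the treatment of "resolution" as a plain per-pixel scalar are our assumptions:

```python
def adjust_resolution(pixel_class, resolution, estimates):
    """Sketch of the S105-3 per-pixel rule: scale a pixel's resolution
    by the matching estimate from the vector estimation result; pixels
    belonging to none of eyes/eyebrows/mouth are left unchanged."""
    return resolution * estimates.get(pixel_class, 1.0)
```

Using a dictionary default keeps the three special cases and the "no adjustment" case in one expression instead of four branches.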
In the method provided by this embodiment, vector estimation is performed on the consecutive multi-frame images containing a face according to the dynamic-tracking results of each facial feature; super-resolution restoration is then performed on the multi-frame images according to the vector estimation result to obtain the face image, which improves face-recognition accuracy when the subject or camera is moving.
It should be understood that the invention is not limited to the specific configurations and processes described above and shown in the figures. For brevity, detailed descriptions of known methods are omitted here. In the above embodiments, several specific steps are described and illustrated as examples, but the method flow of the invention is not limited to those specific steps; those skilled in the art may make various changes, modifications, and additions, or change the order of the steps, after understanding the spirit of the invention.
It should also be noted that the exemplary embodiments referred to in the invention describe certain methods or systems based on a series of steps or devices. However, the invention is not limited to the order of the above steps; that is, the steps may be executed in the order mentioned in the embodiments, in a different order, or several steps may be performed simultaneously.
Finally, it should be noted that the above embodiments are intended merely to illustrate the technical solutions of the invention rather than to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the various embodiments of the invention.
Claims (10)
1. A face super-resolution recognition method based on a video stream, characterized in that the method comprises:
S101: obtaining consecutive multi-frame images containing a face;
S102: locating each facial feature in every frame, the facial features including the eyes, the eyebrows, and the mouth;
S103: dynamically tracking each facial feature according to the locating results of the same facial feature in the consecutive multi-frame images;
S104: performing vector estimation on the consecutive multi-frame images according to the dynamic-tracking results of each facial feature;
S105: performing super-resolution restoration on the multi-frame images according to the vector estimation result to obtain the face image.
2. The method according to claim 1, characterized in that the face occupies no more than 50 × 50 pixels in any frame image.
3. The method according to claim 1, characterized in that S102 comprises:
for any frame image,
S201: identifying the positions of the left eye, the right eye, the left eyebrow, the right eyebrow, and the mouth in the frame according to a face-feature recognition model trained in advance on a deep neural network;
S202: determining, according to the positions of the left eye, the right eye, the left eyebrow, the right eyebrow, and the mouth in the frame, the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width;
S203: taking the positions of the left eye, the right eye, the left eyebrow, the right eyebrow, and the mouth in the frame, together with the maximum spacing between the eyes, the minimum spacing between the eyes, the height difference between the eyes, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, the height difference between the eyebrows, the mouth length, and the mouth width, as the locating results of each facial feature in the frame.
4. The method according to claim 3, characterized in that the positions of the left eye, the right eye, the left eyebrow, the right eyebrow, and the mouth identified in S201 include:
the inner-canthus position, outer-canthus position, and eyeball center of the left eye; the inner-canthus position, outer-canthus position, and eyeball center of the right eye; the head position, tail position, and peak position of the left eyebrow; the head position, tail position, and peak position of the right eyebrow; and the left-corner position, right-corner position, lip-peak position, and lip-bottom position of the mouth.
5. The method according to claim 4, characterized in that S202 comprises:
S202-1: determining the face area in the frame according to the face-feature recognition model;
S202-2: determining the image area of the frame;
S202-3: determining the ratio of the frame = the face area in the frame / the image area of the frame;
S202-4: determining the eye adjustment coefficient = the ratio of the frame × |outer-canthus position of the left eye − inner-canthus position of the left eye| / |outer-canthus position of the right eye − inner-canthus position of the right eye|;
S202-5: determining the eyebrow adjustment coefficient = the ratio of the frame × |head position of the left eyebrow − tail position of the left eyebrow| / |head position of the right eyebrow − tail position of the right eyebrow|;
S202-6: determining the mouth adjustment coefficient = the ratio of the frame × |left-corner position of the mouth − inner-canthus position of the left eye| / |right-corner position of the mouth − inner-canthus position of the right eye|;
S202-7: the maximum spacing between the eyes = the eye adjustment coefficient × |outer-canthus position of the left eye − outer-canthus position of the right eye| / the ratio of the frame;
the minimum spacing between the eyes = the eye adjustment coefficient × |inner-canthus position of the left eye − inner-canthus position of the right eye|;
the height difference between the eyes = the eye adjustment coefficient × |eyeball center of the left eye − eyeball center of the right eye|;
the maximum spacing between the eyebrows = the eyebrow adjustment coefficient × |tail position of the left eyebrow − tail position of the right eyebrow|;
the minimum spacing between the eyebrows = the eyebrow adjustment coefficient × |head position of the left eyebrow − head position of the right eyebrow|;
the height difference between the eyebrows = the eyebrow adjustment coefficient × |peak position of the left eyebrow − peak position of the right eyebrow|;
the mouth length = the mouth adjustment coefficient × |left-corner position of the mouth − right-corner position of the mouth|;
the mouth width = the mouth adjustment coefficient × |lip-peak position of the mouth − lip-bottom position of the mouth|.
6. The method according to claim 5, characterized in that S103 comprises:
S103-1: forming, for every frame, an eye vector from the left-eye position, the right-eye position, the maximum spacing between the eyes, the minimum spacing between the eyes, and the height difference between the eyes, in the format [left-eye position, right-eye position, maximum eye spacing, minimum eye spacing, eye height difference];
S103-2: forming, for every frame, an eyebrow vector from the left-eyebrow position, the right-eyebrow position, the maximum spacing between the eyebrows, the minimum spacing between the eyebrows, and the height difference between the eyebrows, in the format [left-eyebrow position, right-eyebrow position, maximum eyebrow spacing, minimum eyebrow spacing, eyebrow height difference];
S103-3: forming, for every frame, a mouth vector from the mouth position, the mouth length, and the mouth width, in the format [mouth position, mouth length, mouth width];
S103-4: forming a two-dimensional feature vector from the eye vector, the eyebrow vector, and the mouth vector, in the format [eye vector, eyebrow vector, mouth vector];
S103-5: arranging the two-dimensional feature vectors of the frames in frame order to form a vector sequence;
S103-6: taking the vector sequence as the dynamic-tracking result of each facial feature.
7. The method according to claim 6, characterized in that S104 comprises:
S104-1: selecting, in turn, one two-dimensional feature vector from the non-first two-dimensional feature vectors of the vector sequence, and calculating the eye vector difference, eyebrow vector difference, and mouth vector difference between the selected two-dimensional feature vector and the preceding two-dimensional feature vector;
S104-2: calculating the standard deviation of all eye vector differences, the standard deviation of all eyebrow vector differences, and the standard deviation of all mouth vector differences;
S104-3: selecting, in turn, one frame from the non-first frames, and calculating the difference between the face area of the selected frame and the face area of the preceding frame;
S104-5: calculating the standard deviation of all face-area differences;
S104-6: determining (the standard deviation of all eye vector differences) / (the standard deviation of the face-area differences) as the eye estimate, (the standard deviation of all eyebrow vector differences) / (the standard deviation of the face-area differences) as the eyebrow estimate, and (the standard deviation of all mouth vector differences) / (the standard deviation of the face-area differences) as the mouth estimate;
S104-7: taking the eye estimate, the eyebrow estimate, and the mouth estimate as the vector estimation result.
8. The method according to claim 7, characterized in that S104-1 comprises:
for any two-dimensional feature vector among the non-first two-dimensional feature vectors of the vector sequence,
S301: determining the eye vector, eyebrow vector, and mouth vector in that two-dimensional feature vector, and denoting them as the first eye vector, the first eyebrow vector, and the first mouth vector;
S302: determining the eye vector, eyebrow vector, and mouth vector in the preceding two-dimensional feature vector, and denoting them as the second eye vector, the second eyebrow vector, and the second mouth vector;
S303: determining the eye vector difference = min{left-eye position in the first eye vector / left-eye position in the second eye vector, right-eye position in the first eye vector / right-eye position in the second eye vector} × max{(maximum eye spacing in the first eye vector − maximum eye spacing in the second eye vector), (minimum eye spacing in the first eye vector − minimum eye spacing in the second eye vector), (eye height difference in the first eye vector − eye height difference in the second eye vector)};
S304: determining the eyebrow vector difference = min{left-eyebrow position in the first eyebrow vector / left-eyebrow position in the second eyebrow vector, right-eyebrow position in the first eyebrow vector / right-eyebrow position in the second eyebrow vector} × [(maximum eyebrow spacing in the first eyebrow vector − maximum eyebrow spacing in the second eyebrow vector) + (minimum eyebrow spacing in the first eyebrow vector − minimum eyebrow spacing in the second eyebrow vector) + (eyebrow height difference in the first eyebrow vector − eyebrow height difference in the second eyebrow vector)] / 3;
S305: determining the mouth vector difference = (mouth position in the first mouth vector / mouth position in the second mouth vector) × [(mouth length in the first mouth vector × mouth width in the first mouth vector) − (mouth length in the second mouth vector × mouth width in the second mouth vector)].
9. The method according to claim 8, characterized in that S105 comprises:
S105-1: performing the preset number of super-resolution reconstructions on the multi-frame images according to the following formula to obtain the final reconstructed image:
where n is the current reconstruction iteration with 1 ≤ n ≤ N, N is the preset number, K is the total number of frames of the multi-frame images, k is the frame index, y_k is the k-th frame image, ↑ denotes the upsampling operation, s is a preset parameter, P is the back-projection kernel, F_k is the forward mapping operation of the k-th frame, B_k is the backward mapping operation of the k-th frame, h is the blur kernel, * is the convolution operator, ↓ denotes the downsampling operation, and X^(n) is the image after the n-th reconstruction;
S105-2: identifying the face image in the final reconstructed image according to the face-feature recognition model;
S105-3: adjusting the face image in the final reconstructed image according to the vector estimation result to obtain the final face image.
10. The method according to claim 9, characterized in that adjusting the face image in the final reconstructed image according to the vector estimation result in S105-3 comprises:
for each pixel of the face image in the final reconstructed image,
if the pixel corresponds to the eyes, adjusting its resolution to the resolution of that point in the final reconstructed image × the eye estimate;
if the pixel corresponds to the eyebrows, adjusting its resolution to the resolution of that point in the final reconstructed image × the eyebrow estimate;
if the pixel corresponds to the mouth, adjusting its resolution to the resolution of that point in the final reconstructed image × the mouth estimate;
if the pixel corresponds to none of the eyes, eyebrows, or mouth, not adjusting its resolution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910218852.0A CN109948555B (en) | 2019-03-21 | 2019-03-21 | Face super-resolution identification method based on video stream |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109948555A true CN109948555A (en) | 2019-06-28 |
CN109948555B CN109948555B (en) | 2020-11-06 |
Family
ID=67010621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910218852.0A Expired - Fee Related CN109948555B (en) | 2019-03-21 | 2019-03-21 | Face super-resolution identification method based on video stream |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109948555B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738117A (en) * | 2019-09-16 | 2020-01-31 | 深圳市创捷科技有限公司 | method and device for extracting human face from video |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7933464B2 (en) * | 2006-10-17 | 2011-04-26 | Sri International | Scene-based non-uniformity correction and enhancement method using super-resolution |
CN102354397A (en) * | 2011-09-19 | 2012-02-15 | 大连理工大学 | Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs |
CN102663361A (en) * | 2012-04-01 | 2012-09-12 | 北京工业大学 | Face image reversible geometric normalization method facing overall characteristics analysis |
CN103218612A (en) * | 2013-05-13 | 2013-07-24 | 苏州福丰科技有限公司 | 3D (Three-Dimensional) face recognition method |
CN103903236A (en) * | 2014-03-10 | 2014-07-02 | 北京信息科技大学 | Method and device for reconstructing super-resolution facial image |
CN107958444A (en) * | 2017-12-28 | 2018-04-24 | 江西高创保安服务技术有限公司 | A kind of face super-resolution reconstruction method based on deep learning |
CN109063565A (en) * | 2018-06-29 | 2018-12-21 | 中国科学院信息工程研究所 | A kind of low resolution face identification method and device |
Non-Patent Citations (3)
Title |
---|
JIANGGANG YU et al.: "Super-resolution Restoration of Facial Images in Video", 18th International Conference on Pattern Recognition (ICPR '06) |
YU CHEN et al.: "FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |
ZHENG Meilan et al.: "Learning-based method for super-resolution reconstruction of face images", Computer Engineering and Applications |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738117A (en) * | 2019-09-16 | 2020-01-31 | 深圳市创捷科技有限公司 | method and device for extracting human face from video |
CN110738117B (en) * | 2019-09-16 | 2020-07-31 | 深圳市创捷科技有限公司 | Method and device for extracting face from video |
Also Published As
Publication number | Publication date |
---|---|
CN109948555B (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200226821A1 (en) | Systems and Methods for Automating the Personalization of Blendshape Rigs Based on Performance Capture Data | |
US10116901B2 (en) | Background modification in video conferencing | |
US9232189B2 (en) | Background modification in video conferencing | |
KR100571115B1 (en) | Method and system using a data-driven model for monocular face tracking | |
CN110363116B (en) | Irregular human face correction method, system and medium based on GLD-GAN | |
CN111105432B (en) | Unsupervised end-to-end driving environment perception method based on deep learning | |
US20060244757A1 (en) | Methods and systems for image modification | |
CN109657583A (en) | Face's critical point detection method, apparatus, computer equipment and storage medium | |
JPH10228544A (en) | Encoding and decoding of face based on model used characteristic detection and encoding of inherent face | |
CN110197462A (en) | A kind of facial image beautifies in real time and texture synthesis method | |
CN103443826A (en) | Mesh animation | |
CN110264396A (en) | Video human face replacement method, system and computer readable storage medium | |
CN115689869A (en) | Video makeup migration method and system | |
CN115170559A (en) | Personalized human head nerve radiation field substrate representation and reconstruction method based on multilevel Hash coding | |
KR100411760B1 (en) | Apparatus and method for an animation image synthesis | |
CN116648733A (en) | Method and system for extracting color from facial image | |
Isikdogan et al. | Eye contact correction using deep neural networks | |
Pandžić et al. | Real-time facial interaction | |
CN109948555A (en) | Human face super-resolution recognition methods based on video flowing | |
CN117157673A (en) | Method and system for forming personalized 3D head and face models | |
CN109635674A (en) | A kind of face alignment method of the dendron shape convolutional neural networks adapted to based on posture | |
US11954905B2 (en) | Landmark temporal smoothing | |
CN113449590B (en) | Speaking video generation method and device | |
CN114677312A (en) | Face video synthesis method based on deep learning | |
Li et al. | FAIVconf: Face enhancement for AI-based video conference with low bit-rate |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201106 |