CN108765264A - Image beautification method, apparatus, device and storage medium - Google Patents
Image beautification method, apparatus, device and storage medium
- Publication number
- CN108765264A CN108765264A CN201810487608.XA CN201810487608A CN108765264A CN 108765264 A CN108765264 A CN 108765264A CN 201810487608 A CN201810487608 A CN 201810487608A CN 108765264 A CN108765264 A CN 108765264A
- Authority
- CN
- China
- Prior art keywords
- face
- parameter
- current
- image
- lip
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/04
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The invention discloses an image beautification method, including: performing face detection on an image to be beautified and determining the face region in the image to be beautified; performing facial feature localization on the face region to obtain the lip blocks of the current face in the image to be beautified; calculating the lip aspect parameter set of the current face based on the lip blocks; determining the expression line intensity of the current face according to the lip aspect parameter set; determining the target beautification parameters of the current face based on the expression line intensity; and beautifying the current face according to the target beautification parameters. In the present invention, the lip aspect parameter set is obtained by analyzing the lip blocks of the current face, and the beautification parameters of the current face are adjusted according to the expression line intensity determined from the lip aspect parameter set, so that reasonable target beautification parameters are set for beautifying the current face. This prevents expression lines from impairing the beautification result and improves its visual comfort. The invention also provides an image beautification apparatus, device, and storage medium.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to an expression-line-based image beautification method, apparatus, device, and storage medium.
Background art
With the rapid development of artificial intelligence (AI) technologies, selfies, live streaming, and the like have become widespread in daily life. More and more people like to present themselves by sharing photos or videos, whether taken as selfies or live-streamed. To improve the self-image being shown, beautifying images has become an essential operation, and various beautification systems have emerged accordingly.
Existing beautification systems, when setting image beautification parameters, tend to ignore the influence of facial expressions. A face produces expression lines of varying degrees when making an expression, and these expression lines can impair the beautification result and reduce its visual comfort.
In summary, how to prevent the expression lines produced by facial expressions from affecting the image beautification result has become an urgent problem for those skilled in the art.
Summary of the invention
Embodiments of the present invention provide an image beautification method, apparatus, device, and storage medium that can prevent the expression lines produced by facial expressions from affecting the image beautification result and improve its visual comfort.
A first aspect of the embodiments of the present invention provides an image beautification method, including:
performing face detection on an image to be beautified, and determining the face region in the image to be beautified;
performing facial feature localization on the face region to obtain the lip blocks of the current face in the image to be beautified;
calculating the lip aspect parameter set of the current face based on the lip blocks;
determining the expression line intensity of the current face according to the lip aspect parameter set;
determining the target beautification parameters of the current face based on the expression line intensity;
beautifying the current face according to the target beautification parameters.
Further, the lip aspect parameter set includes a first lip aspect parameter and a second lip aspect parameter;
correspondingly, the first and second lip aspect parameters of the current face are calculated from the lip blocks by:

Hm = bimax − bimin + 1
Wm = bjmax − bjmin + 1

where Hm is the first lip aspect parameter, bimax and bimin are the maximum and minimum row numbers in the lip blocks, Wm is the second lip aspect parameter, and bjmax and bjmin are the maximum and minimum column numbers in the lip blocks.
Preferably, the expression line intensity of the current face is determined from the lip aspect parameter set by:

Fw = 1, if Hm − k1·Wm ≥ 0;
Fw = −1, if Wm − k2·Hm > 0;
Fw = 0, otherwise;

where Fw is the expression line intensity of the current face, and k1, k2 are scale parameters with 1 ≤ k2 < k1.
Optionally, before determining the target beautification parameters of the current face based on the expression line intensity, the method includes: obtaining the first initial beautification parameters of the current face;
correspondingly, determining the target beautification parameters of the current face based on the expression line intensity includes:
when the expression line intensity Fw = 1, raising the first initial beautification parameters by a first preset adjustment amplitude, and determining the raised first initial beautification parameters as the target beautification parameters of the current face;
when the expression line intensity Fw = −1, lowering the first initial beautification parameters by a second preset adjustment amplitude, and determining the lowered first initial beautification parameters as the target beautification parameters of the current face;
when the expression line intensity Fw = 0, determining the first initial beautification parameters as the target beautification parameters of the current face.
Further, the expression line intensity of the current face also includes an eye expression line intensity, and the lip aspect parameter set also includes a third lip aspect parameter;
correspondingly, performing facial feature localization on the face region further includes:
performing facial feature localization on the face region to obtain the eye blocks of the left or right eye of the current face;
calculating the lip aspect parameter set of the current face based on the lip blocks further includes: calculating the third lip aspect parameter of the current face by the following formula:

ASize = sum(max(j | bk(i, j) ∈ i) − min(j | bk(i, j) ∈ i) + 1);

where ASize is the third lip aspect parameter, bk(i, j) is the lip block in row i and column j, max(j | bk(i, j) ∈ i) is the maximum column number among the lip blocks of row i, and min(j | bk(i, j) ∈ i) is the minimum column number among the lip blocks of row i;
determining the expression line intensity of the current face according to the lip aspect parameter set further includes: determining the eye expression line intensity of the current face by the following formula:

where sig is the eye expression line intensity of the current face, ESize is the number of eye blocks of the left or right eye of the current face, and k3 is a scale parameter with k3 ≥ 1.5.
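The formula for ASize above sums, over the rows of the lip-block region, each row's column span. A minimal sketch, assuming (as a representation choice not fixed by the patent) that the blocks are given as (row, column) index pairs:

```python
def lip_third_param(blocks):
    """Third lip aspect parameter: for each row i that contains lip blocks,
    take (max column - min column + 1), then sum those per-row spans."""
    by_row = {}  # row index -> (min column, max column) seen so far
    for i, j in blocks:
        lo, hi = by_row.get(i, (j, j))
        by_row[i] = (min(lo, j), max(hi, j))
    return sum(hi - lo + 1 for lo, hi in by_row.values())
```

For example, blocks spanning columns 1 to 3 in one row and a single block in another row give a parameter of 3 + 1 = 4.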
Preferably, before determining the target beautification parameters of the current face based on the expression line intensity, the method further includes: obtaining the second initial beautification parameters of the current face, where the second initial beautification parameters include eye beautification parameters;
correspondingly, determining the target beautification parameters of the current face based on the expression line intensity includes:
when the expression line intensity Fw = 1, judging whether the eye expression line intensity sig equals 1; if sig = 1, raising the second initial beautification parameters of the current face by a third preset adjustment amplitude, and determining the raised second initial beautification parameters as the target beautification parameters of the current face; if sig ≠ 1, raising the eye beautification parameters of the current face by a fourth preset adjustment amplitude, and determining the resulting second initial beautification parameters as the target beautification parameters of the current face;
when the expression line intensity Fw = −1, lowering the second initial beautification parameters of the current face by a fifth preset adjustment amplitude, and determining the lowered second initial beautification parameters as the target beautification parameters of the current face;
when the expression line intensity Fw = 0, determining the second initial beautification parameters as the target beautification parameters of the current face.
Optionally, the image to be beautified includes each frame image of a dynamic image;
correspondingly, the image beautification method further includes:
after beautifying the current frame image of the dynamic image, judging whether a next frame image exists in the dynamic image;
if a next frame image exists in the dynamic image, judging whether the next frame image is a scene switching frame;
when the next frame image is a scene switching frame, determining the next frame image as a new image to be beautified, and returning to the step of performing face detection on the image to be beautified and determining the face region in the image to be beautified, and the subsequent steps;
when the next frame image is not a scene switching frame, beautifying the corresponding faces in the next frame image according to the target beautification parameters of the current frame image, while obtaining the newly added faces in the next frame image, determining each newly added face as a new current face, and executing for the new current face the step of calculating the lip aspect parameter set of the current face based on the lip blocks, and the subsequent steps.
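The frame-handling rule above (reuse parameters across non-scene-switch frames, recompute from scratch on scene switches, and compute fresh parameters only for newly appearing faces) can be sketched as follows. `is_scene_switch`, `detect_faces`, `compute_params`, and `apply_params` are stand-in callables, since the patent does not specify how scene switches or faces are detected:

```python
def beautify_video(frames, is_scene_switch, detect_faces, compute_params, apply_params):
    """Sketch of the claimed per-frame flow for a dynamic image."""
    params = {}  # face identifier -> target beautification parameter
    for idx, frame in enumerate(frames):
        if idx == 0 or is_scene_switch(frame):
            # Scene switch: treat this frame as a new image to be beautified,
            # i.e. redo detection and parameter computation for every face.
            params = {fid: compute_params(frame, fid) for fid in detect_faces(frame)}
        else:
            # Same scene: keep the previous frame's parameters and compute
            # them only for faces that newly appear in this frame.
            for fid in detect_faces(frame):
                if fid not in params:
                    params[fid] = compute_params(frame, fid)
        apply_params(frame, params)
    return params
```

With three frames where the last one is a scene switch, the parameters carried over from the first two frames are discarded and only the newly detected face remains parameterized.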
A second aspect of the embodiments of the present invention provides an image beautification apparatus, including:
a face detection module, configured to perform face detection on the image to be beautified and determine the face region in the image to be beautified;
a facial feature localization module, configured to perform facial feature localization on the face region and obtain the lip blocks of the current face in the image to be beautified;
a parameter set calculation module, configured to calculate the lip aspect parameter set of the current face based on the lip blocks;
an expression line determination module, configured to determine the expression line intensity of the current face according to the lip aspect parameter set;
a beautification parameter determination module, configured to determine the target beautification parameters of the current face based on the expression line intensity;
a beautification processing module, configured to beautify the current face according to the target beautification parameters.
A third aspect of the embodiments of the present invention provides an image beautification device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the image beautification method of the foregoing first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image beautification method of the foregoing first aspect.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, first, face detection is performed on the image to be beautified to determine the face region in the image to be beautified, and facial feature localization is performed on the face region to obtain the lip blocks of the current face in the image to be beautified; second, the lip aspect parameter set of the current face is calculated based on the lip blocks, and the expression line intensity of the current face is determined according to the lip aspect parameter set; then, the target beautification parameters of the current face are determined based on the expression line intensity, so that the current face is beautified according to the target beautification parameters. In the embodiments of the present invention, the lip aspect parameter set is obtained by analyzing the lip blocks of the current face, the expression line intensity of the current face is determined according to the lip aspect parameter set, and the beautification parameters of the current face are adjusted according to the expression line intensity, so that reasonable target beautification parameters are set for beautifying the current face. This prevents the expression lines produced by facial expressions from affecting the image beautification result and improves its visual comfort.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the image beautification method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the image beautification method of Embodiment 1 in one application scenario;
Fig. 3 is a structural diagram of the image beautification apparatus provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the image beautification device provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide an image beautification method, apparatus, device, and storage medium for setting reasonable target beautification parameters to beautify the current face, so as to prevent the expression lines produced by facial expressions from affecting the image beautification result and to improve its visual comfort.
To make the purpose, features, and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments. Apparently, the embodiments described below are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, Embodiment 1 of the present invention provides an image beautification method, which includes:
Step S101: performing face detection on the image to be beautified, and determining the face region in the image to be beautified.
In this embodiment, after the image to be beautified is obtained, face detection is first performed on it to detect whether a skin-color region exists in the image to be beautified. If a skin-color region exists, it is considered that a face exists in the image to be beautified, and the face region where the face is located is determined so that the face in that region can be beautified. Understandably, if no skin-color region exists in the image to be beautified, it is considered that no face exists in the image, and the beautification flow ends directly without beautifying the image.
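The skin-color test above is left unspecified in the patent (no color space or thresholds are given), so the RGB heuristic and the 5% coverage fraction below are purely illustrative assumptions:

```python
def has_skin_region(pixels, min_fraction=0.05):
    """Return True if enough pixels fall inside an assumed skin-tone range.
    `pixels` is a list of (r, g, b) tuples in 0..255; the rule used here is
    a common uniform-lighting heuristic, not the patent's own detector."""
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20 and
                r > g and r > b and abs(r - g) > 15)
    if not pixels:
        return False
    skin = sum(1 for p in pixels if is_skin(*p))
    return skin / len(pixels) >= min_fraction
```

An image with no pixels in the skin-tone range would skip beautification entirely, matching the early-exit behavior described above.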
Step S102: performing facial feature localization on the face region to obtain the lip blocks of the current face in the image to be beautified.
In this embodiment, after the face region in the image to be beautified is determined, facial feature localization is performed on the face region to obtain the lip blocks corresponding to each current face in the face region, where, when the facial feature localization succeeds in obtaining lip blocks, each lip block carries a corresponding row number and column number; when the facial feature localization fails, it can be considered that no face exists in the face region, and the beautification flow ends directly without beautifying the image.
Step S103: calculating the lip aspect parameter set of the current face based on the lip blocks.
Here, the lip aspect parameter set includes a first lip aspect parameter and a second lip aspect parameter, where the first lip aspect parameter is the number of rows of the lip blocks corresponding to each current face, and the second lip aspect parameter is the number of columns of the lip blocks corresponding to each current face. In this embodiment, after the lip blocks corresponding to each current face are obtained, isolated blocks labeled as lip are deleted from each current face to obtain the connected lip blocks of each current face, and within the connected lip blocks the blocks with the minimum column number, the maximum column number, the minimum row number, and the maximum row number are found. After these blocks have been found for each current face, the first and second lip aspect parameters of each current face, i.e. the row and column counts of each current face's lip blocks, are calculated by the following formula one:

Hm = bimax − bimin + 1 (formula one)
Wm = bjmax − bjmin + 1

where Hm is the first lip aspect parameter, bimax and bimin are the maximum and minimum row numbers in the lip blocks, Wm is the second lip aspect parameter, and bjmax and bjmin are the maximum and minimum column numbers in the lip blocks.
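The row and column counts Hm and Wm follow directly from the extreme row and column numbers of the connected lip blocks. A minimal sketch, assuming blocks are given as (row, column) index pairs (a representation choice, not something the patent fixes):

```python
def lip_aspect_params(lip_blocks):
    """First and second lip aspect parameters:
    Hm = max row - min row + 1, Wm = max column - min column + 1."""
    rows = [i for i, j in lip_blocks]
    cols = [j for i, j in lip_blocks]
    hm = max(rows) - min(rows) + 1
    wm = max(cols) - min(cols) + 1
    return hm, wm
```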
Step S104: determining the expression line intensity of the current face according to the lip aspect parameter set.
In this embodiment, after the lip aspect parameter set corresponding to each current face is calculated, the corresponding expression line intensity can be determined from each face's lip aspect parameter set. That is, the expression line intensity of the first current face can be determined from the lip aspect parameter set of the first current face, the expression line intensity of the second current face from the lip aspect parameter set of the second current face, and so on, so that the expression line intensities of all current faces in the image to be beautified are obtained.
Specifically, in this embodiment, the expression line intensity of the current face is determined by the following formula two:

Fw = 1, if Hm − k1·Wm ≥ 0;
Fw = −1, if Wm − k2·Hm > 0; (formula two)
Fw = 0, otherwise;

where Fw is the expression line intensity of the current face, and k1, k2 are scale parameters with 1 ≤ k2 < k1.
Understandably, the expression line intensity of the current face describes three states of the current face's lips, determined from the row and column counts of its lip blocks, so that the expression type of the current face can be predicted: for example, smile-like expressions, where the weighted difference Wm − k2·Hm between the column and row counts is greater than zero, or O-shaped-mouth or pouting expressions, where the weighted difference Hm − k1·Wm between the row and column counts is greater than or equal to zero.
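The patent's formula two is not reproduced as an image in this text; consistent with the surrounding description (Fw = 1 when the block height dominates, Fw = −1 when the width dominates, 0 otherwise), the classification can be sketched as follows. The concrete values of the scale parameters are illustrative; the patent only constrains 1 ≤ k2 < k1:

```python
def expression_line_intensity(hm, wm, k1=2.0, k2=1.5):
    """Classify the mouth state from its block height (hm) and width (wm).
    Fw = 1  -> height-dominant mouth (O-shaped / pouting),
    Fw = -1 -> width-dominant mouth (smile-like),
    Fw = 0  -> neither (no strong expression)."""
    if hm - k1 * wm >= 0:
        return 1
    if wm - k2 * hm > 0:
        return -1
    return 0
```

Since k1 > k2 ≥ 1, the two dominance conditions cannot hold at the same time, so the three states partition all (hm, wm) pairs.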
Step S105: determining the target beautification parameters of the current face based on the expression line intensity.
Understandably, the image beautification method in this embodiment can be used with any supporting beautification system. Thus, before determining the target beautification parameters of the current face based on the expression line intensity in order to beautify the current face, the method in this embodiment can first obtain the first initial beautification parameters that the beautification system determined for the current face in the image to be beautified, and then adjust the first initial beautification parameters based on the expression line intensity to obtain the target beautification parameters of the current face.
Specifically, when the expression line intensity Fw = 1, the first initial beautification parameters are raised by a first preset adjustment amplitude, and the raised first initial beautification parameters are determined as the target beautification parameters of the current face;
when the expression line intensity Fw = −1, the first initial beautification parameters are lowered by a second preset adjustment amplitude, and the lowered first initial beautification parameters are determined as the target beautification parameters of the current face;
when the expression line intensity Fw = 0, the first initial beautification parameters are determined as the target beautification parameters of the current face.
That is, when the expression line intensity Fw of a certain current face is 1, the row count of the current face's lip blocks can be considered clearly greater than the column count, i.e. the current face may be making an O-shaped-mouth or pouting expression. Such an expression may stretch the wrinkles the current face originally has, so that the current face shows fewer wrinkles than it actually has; the beautification system then judges the current face as younger and determines first initial beautification parameters that are too small to achieve the beautification effect. Therefore, the first initial beautification parameters of this current face should be raised by the first preset adjustment amplitude to obtain the target beautification parameters of this current face. When the expression line intensity Fw of a certain current face is −1, the column count of the current face's lip blocks can be considered clearly greater than the row count, i.e. the current face may be making a smile-like expression. Such an expression may produce wrinkle-like expression lines, which the beautification system easily mistakes for wrinkles; the system then judges the current face as older and determines first initial beautification parameters that are too large, making the beautification look uncomfortable and unnatural. Therefore, the first initial beautification parameters of this current face should be lowered by the second preset adjustment amplitude to obtain the target beautification parameters of this current face. When the expression line intensity Fw of a certain current face is 0, the current face is considered to have no facial expression; no misjudgment of the beautification parameters by the beautification system is caused, so the target beautification parameters of the current face can be determined to be the first initial beautification parameters that the beautification system determined for it.
Here, the first initial beautification parameters and the target beautification parameters may be the filtering strength of the beautification, i.e. the filtering strength of the current face is determined and adjusted to perform skin-smoothing beautification on the current face.
Understandably, the first preset adjustment amplitude and the second preset adjustment amplitude may be fixed-value adjustment amplitudes set by the user as needed, or, of course, adaptive adjustment amplitudes determined from Hm and Wm in the lip aspect parameter set.
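The Fw-driven adjustment described above can be sketched with fixed preset amplitudes (one of the two options the text allows, the other being Hm/Wm-derived adaptive amplitudes). Treating the parameter as a single filtering-strength value in [0, 1] and the step sizes are illustrative assumptions:

```python
def target_beauty_param(initial, fw, up_step=0.2, down_step=0.2):
    """Raise the initial parameter when Fw = 1 (wrinkles stretched flat,
    system under-beautifies), lower it when Fw = -1 (expression lines
    mistaken for wrinkles, system over-beautifies), keep it when Fw = 0."""
    if fw == 1:
        return initial + up_step
    if fw == -1:
        return max(0.0, initial - down_step)  # clamp at zero strength
    return initial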
Step S106: beautifying the current face according to the target beautification parameters.
In this embodiment, after the target beautification parameter of the current face, the target filtering strength, is determined, the current face in the image to be beautified can be beautified according to this target filtering strength, while the rest of the image to be beautified is still beautified according to the beautification parameters determined by the beautification system.
Understandably, in this embodiment, the current face refers to the face being positioned by the positioning label in the image to be beautified. For example, if a certain image to be beautified contains face A, face B, and face C, and the face currently positioned by the positioning label is face A, then face A is the current face at this time; the target beautification parameter A corresponding to face A is determined according to steps S101 to S105 above, and face A is beautified according to target beautification parameter A. After this processing completes, the positioning label moves automatically to face B, which then becomes the current face, and the target beautification parameter B corresponding to face B can likewise be determined according to the above steps and used to beautify face B. After that processing completes, the positioning label moves automatically to face C, and the same processing is carried out.
Alternatively, target beautification parameter A corresponding to face A, target beautification parameter B corresponding to face B, and target beautification parameter C corresponding to face C may all be determined first according to the above steps, and the three faces then beautified concurrently: face A according to target beautification parameter A, face B according to target beautification parameter B, and face C according to target beautification parameter C.
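The per-face flow described above (position a face, determine its target beautification parameters via steps S101 to S105, apply them via step S106, then move the localization marker to the next face) can be sketched as follows. This is an illustrative sketch only; `compute_target_params` and `apply_beautification` are hypothetical stand-ins for the patent's steps, not names from the source.

```python
def beautify_image(image, faces, compute_target_params, apply_beautification):
    """Process each located face in turn: determine its target
    beautification parameters (steps S101-S105), then apply them (S106).
    All callables are illustrative stand-ins for the patent's steps."""
    results = []
    for face in faces:  # the localization marker moves from face A to B to C
        params = compute_target_params(image, face)
        results.append(apply_beautification(image, face, params))
    return results
```

The concurrent variant mentioned above would first collect all parameters and then apply them in parallel; the sequential loop is shown here only for clarity.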
Further, to improve the accuracy with which the target beautification parameter is determined, in one application scenario the image beautification method provided in this embodiment adds an eye expression line intensity to assist in determining the target beautification parameter of the current face.
Specifically, in this scenario the expression line intensity of the current face further includes an eye expression line intensity, and the lip dimension parameter set further includes a third lip dimension parameter. Correspondingly, performing facial feature localization on the face region further includes: performing facial feature localization on the face region to obtain the eye block of the current face's left eye or right eye. Meanwhile, calculating the lip dimension parameter set of the current face based on the lip block further includes calculating the third lip dimension parameter of the current face according to the following Formula Three:
ASize = sum( max(j | bk(i,j) ∈ row i) − min(j | bk(i,j) ∈ row i) + 1 ) (Formula Three);
Here, max(variable | condition) denotes the maximum value of the variable satisfying the condition, and min(variable | condition) denotes the minimum value of the variable satisfying the condition. Thus, in Formula Three, ASize is the third lip dimension parameter, bk(i,j) is the lip block in row i and column j, max(j | bk(i,j) ∈ row i) is the maximum column number among the lip blocks in row i, and min(j | bk(i,j) ∈ row i) is the minimum column number among the lip blocks in row i;
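Formula Three sums, over the rows occupied by the lip, the per-row width of the lip region measured in blocks. A minimal sketch, assuming (purely for illustration) that the lip block is given as a collection of (row, column) coordinate pairs:

```python
def lip_area_blocks(lip_blocks):
    """Third lip dimension parameter ASize (Formula Three): for every row i
    that contains lip blocks, add (max column - min column + 1), i.e. the
    width of the lip region in that row, measured in blocks."""
    rows = {}
    for i, j in lip_blocks:
        lo, hi = rows.get(i, (j, j))
        rows[i] = (min(lo, j), max(hi, j))
    return sum(hi - lo + 1 for lo, hi in rows.values())
```

For example, lip blocks at (0,1), (0,3) and (1,2) give a row-0 width of 3 and a row-1 width of 1, so ASize is 4.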
Determining the expression line intensity of the current face according to the lip dimension parameter set further includes determining the eye expression line intensity of the current face by the following Formula Four:
where sig is the eye expression line intensity of the current face, ESize is the number of blocks in the eye block of the current face's left eye or right eye, and k3 is a scale parameter satisfying k3 ≥ 1.5.
It can be understood that the third lip dimension parameter denotes the lip area of the current face, which may in practice be expressed as the number of lip blocks of the current face. The eye expression line intensity of the current face denotes an eye state determined from the relationship between the number of blocks in the current face's eye block and the number of its lip blocks.
In this scenario, once the lip expression line intensity and the eye expression line intensity of the current face have been obtained, the target beautification parameter of the current face can be determined from both.
Specifically, before determining the target beautification parameter of the current face based on the expression line intensity, the method further includes obtaining a second initial beautification parameter of the current face, where the second initial beautification parameter includes an eye beautification parameter. Correspondingly, determining the target beautification parameter of the current face based on the expression line intensity includes:
When the expression line intensity Fw = 1, judging whether the eye expression line intensity sig equals 1. If sig = 1, the second initial beautification parameter of the current face is raised by a third preset adjustment amplitude, and the result is determined as the target beautification parameter of the current face. If sig ≠ 1, the eye beautification parameter of the current face is raised by a fourth preset adjustment amplitude, and the resulting second initial beautification parameter is determined as the target beautification parameter of the current face.
When the expression line intensity Fw = −1, the second initial beautification parameter of the current face is lowered by a fifth preset adjustment amplitude, and the result is determined as the target beautification parameter of the current face.
When the expression line intensity Fw = 0, the second initial beautification parameter is determined as the target beautification parameter of the current face.
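The branching rule above can be sketched as a small decision function. Formula Four itself is rendered as an image in the source text, so this sketch takes the eye expression line intensity `sig` as an input rather than computing it; the parameter-set representation and all names are assumptions made for illustration.

```python
def target_params(fw, sig, base_params, d3, d4, d5):
    """Select target beautification parameters from the second initial
    parameter set `base_params` (a dict including key 'eye'), given the lip
    expression line intensity fw and the eye expression line intensity sig.
    d3/d4/d5 stand in for the third/fourth/fifth preset adjustment amplitudes."""
    params = dict(base_params)  # never mutate the initial parameters
    if fw == 1:
        if sig == 1:
            # the expression stretched facial wrinkles: raise every parameter
            for k in params:
                params[k] += d3
        else:
            # the expression only stretched eye wrinkles: raise the eye
            # parameter and keep the other parameters unchanged
            params["eye"] += d4
    elif fw == -1:
        for k in params:
            params[k] -= d5
    # fw == 0: the initial parameters are used as-is
    return params
```

Whether the third amplitude applies to every parameter or to a single aggregate value is not spelled out in the source; applying it uniformly is one reasonable reading.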
When a face bears an expression such as an O-shaped mouth or a pout that stretches the facial wrinkles, the number of lip blocks increases and/or the number of blocks in a single eye's eye block decreases, so that the difference between the number of lip blocks and the number of single-eye blocks exceeds the preset threshold. When such an expression stretches only the eye wrinkles, the number of single-eye blocks increases, so that the difference between the number of lip blocks and the number of single-eye blocks falls below the preset threshold.
Here, when the lip expression line intensity of a certain current face is determined to be Fw = 1, the current face is considered to bear an expression such as an O-shaped mouth or a pout; whether that expression stretches the facial wrinkles, however, requires the eye expression line intensity of the current face to be judged further. If the eye expression line intensity is judged to be sig = 1, i.e. the difference between the number of the current face's lip blocks and the number of its single-eye blocks exceeds the preset threshold, the expression is deemed to have stretched the facial wrinkles of the current face; accordingly, the second initial beautification parameter corresponding to the current face should be raised by the third preset adjustment amplitude to obtain the target beautification parameter corresponding to the current face. If instead sig ≠ 1, i.e. the difference between the number of lip blocks and the number of single-eye blocks is below the preset threshold, the expression is deemed to stretch only the eye wrinkles of the current face, making the wrinkles around the eyes appear fewer than they actually are; the beautification system would consequently determine too small an eye beautification parameter for the eyes, and the intended beautification effect would not be achieved. Therefore, the eye beautification parameter within the second initial beautification parameter of the current face should be raised by the fourth preset adjustment amplitude while the other beautification parameters in the second initial beautification parameter are kept unchanged, yielding the target beautification parameter corresponding to the current face.
It can be understood that the third, fourth and fifth preset adjustment amplitudes may be fixed amplitudes set by the user as needed; these fixed amplitudes may be the same as, or different from, the fixed amplitudes used for the first and second preset adjustment amplitudes described above. Alternatively, an adaptive adjustment amplitude may be determined from Hm and Wm in the lip dimension parameter set.
As shown in Fig. 2, in one application scenario the image beautification method provided by Embodiment One of the present invention may be used to beautify faces in a dynamic image. In this scenario, the image to be beautified is each frame of the dynamic image, and the image beautification method includes steps S201 through S210.
Step S201, performing face detection on the image to be beautified to determine the face region in it, is similar to step S101 above; step S202, performing facial feature localization on the face region to obtain the lip block of the current face in the image to be beautified, is similar to step S102; step S203, calculating the lip dimension parameter set of the current face based on the lip block, is similar to step S103; step S204, determining the expression line intensity of the current face according to the lip dimension parameter set, is similar to step S104; step S205, determining the target beautification parameter of the current face based on the expression line intensity, is similar to step S105; and step S206, beautifying the current face according to the target beautification parameter, is similar to step S106. For brevity, these steps are not described again here.
Step S207: after beautification of the current frame of the dynamic image is completed, judging whether a next frame exists in the dynamic image.
When the image beautification method is used in this scenario to beautify faces in a dynamic image, the current frame of the dynamic image is first obtained and treated as the image to be beautified, and steps S201 through S206 are applied to it. After the current frame has been beautified, the method further judges whether the dynamic image contains a next frame. If no next frame exists, beautification of all frames of the dynamic image is considered complete and the beautification flow ends directly. If a next frame does exist, the dynamic image is considered to still contain an image to be beautified, and the next frame is obtained.
It can be understood that in this scenario the dynamic image may be a video, or an animated image such as a GIF.
Step S208: if a next frame exists in the dynamic image, judging whether the next frame is a scene switching frame.
As a dynamic image evolves, the next frame may be identical or similar to the current frame: for example, only the background may have changed while the faces appearing in the next frame are the same as those in the current frame, or the next frame may merely add some new faces. The next frame may also differ entirely from the current frame, as when its foreground has changed, which constitutes a scene switch. Therefore, in this scenario, after the next frame is obtained it is judged whether that frame is a scene switching frame, i.e. whether its foreground has changed or only its background has changed, so that different processing flows can be applied to the next frame according to the result.
Step S209: when the next frame is not a scene switching frame, beautifying the corresponding faces in the next frame according to the target beautification parameters of the current frame, while obtaining any newly appearing faces in the next frame, determining each newly appearing face as a new current face, and executing for it the step of calculating the lip dimension parameter set of the current face based on the lip block, together with the subsequent steps.
Here, when the next frame is judged not to be a scene switching frame, for example when only its background has changed, the faces that appeared in the current frame are considered to be present in the next frame as well, and the next frame may additionally contain newly appearing faces. It is therefore necessary to judge whether any newly appearing face exists in the next frame; if so, the newly appearing face is obtained and its target beautification parameter is determined.
Specifically, all intra-prediction blocks in the next frame are first found and taken as a new sub-image, and facial feature localization is performed on this sub-image; whether a newly appearing face exists is determined from the localization result. If a newly appearing face exists, it is determined as a new current face, and the step of calculating the lip dimension parameter set of the current face based on the lip block, together with the subsequent steps, is executed for it to determine the target beautification parameter of the newly appearing face. When the localization result indicates a newly appearing face, the lip block, eye block and so on of that face can be obtained during the localization process.
Once the target beautification parameter of a newly appearing face has been determined, the newly appearing face in the next frame can be beautified according to it. A face in the next frame that is not newly appearing, i.e. one that already appeared in the current frame, is beautified directly using the corresponding target beautification parameter determined for the current frame.
In this scenario, when the same face appears in both the current frame and the next frame, the next frame is beautified directly with the target beautification parameters determined for the current frame, so that during beautification of the next frame the target beautification parameters of recurring faces need not be recalculated; only those of newly appearing faces must be computed. This allows the beautification of a dynamic image to be completed quickly and improves the efficiency of dynamic-image beautification.
It can be understood that in video compression, a block coded in intra-prediction mode generally indicates low correlation between the current block and the previous frame, and is thus a high-probability region for a new face to appear. In this scenario, the compression information carried in the video can therefore be used to find the intra-prediction blocks in the remaining frames: for example, if a block's prediction mode is intra prediction, or the block contains an intra-predicted sub-block, it can be determined to be an intra-prediction block.
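The classification rule just described (a block is treated as an intra-prediction block if its own mode is intra, or if any of its sub-blocks is intra-coded) can be sketched as follows. The mode representation here is an assumption made for illustration; real bitstream parsing is codec-specific.

```python
def is_intra_block(block_mode, sub_block_modes=()):
    """A block counts as an intra-prediction block if its own prediction
    mode is intra, or any of its sub-blocks is intra-coded."""
    return block_mode == "intra" or "intra" in sub_block_modes

def find_new_face_candidates(blocks):
    """Return the blocks likely to contain newly appearing faces:
    intra-coded blocks correlate weakly with the previous frame."""
    return [b for b in blocks
            if is_intra_block(b["mode"], b.get("sub_modes", ()))]
```

The candidate blocks would then be assembled into the sub-image on which facial feature localization is performed, as described above.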
Step S210: when the next frame is a scene switching frame, determining the next frame as a new image to be beautified, and returning to the step of performing face detection on the image to be beautified to determine the face region in it, together with the subsequent steps.
Here, when the next frame is judged to be a scene switching frame, its foreground is considered to have changed, and whether the frame still contains faces to be beautified must be judged afresh. Therefore, in this scenario, when the next frame is a scene switching frame it is determined as a new image to be beautified, and execution returns to the step of performing face detection on the image to be beautified to determine the face region in it, together with the subsequent steps, so as to judge whether the next frame contains faces requiring beautification and to carry out the follow-up processing flow according to the result.
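Steps S207 through S210 can be sketched as one loop over the frames of a dynamic image: per-face parameters are reused across frames unless a scene switch occurs, in which case the frame is treated as a fresh image to be beautified. All callables are hypothetical stand-ins; in particular, finding newly appearing faces would in practice use the intra-prediction blocks described above.

```python
def beautify_dynamic(frames, is_scene_cut, detect_faces, beautify_frame,
                     find_new_faces, compute_params):
    """Sketch of steps S207-S210: carry target beautification parameters
    forward frame to frame; recompute everything only on a scene cut."""
    params_by_face = {}
    prev = None
    for frame in frames:
        if prev is None or is_scene_cut(prev, frame):
            # scene switch (or first frame): redo full face detection and
            # parameter estimation, as for a fresh image to be beautified
            params_by_face = {f: compute_params(frame, f)
                              for f in detect_faces(frame)}
        else:
            # same scene: only newly appearing faces need new parameters
            for f in find_new_faces(prev, frame):
                params_by_face[f] = compute_params(frame, f)
        beautify_frame(frame, params_by_face)
        prev = frame
    return params_by_face
```

This mirrors the efficiency argument in the text: recurring faces keep their parameters, so per-frame work is limited to newly appearing faces.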
In this scenario, the expression line intensity is determined solely from the lip dimension parameter set of the current face, the target beautification parameter is determined from that intensity, and the current face is beautified according to the target beautification parameter. The computation required is small, so the real-time requirements that video applications place on beautification can be met.
In this embodiment of the present invention, face detection is first performed on the image to be beautified to determine the face region in it, and facial feature localization is performed on the face region to obtain the lip block of the current face in the image to be beautified; next, the lip dimension parameter set of the current face is calculated based on the lip block, and the expression line intensity of the current face is determined according to the lip dimension parameter set; then, the target beautification parameter of the current face is determined based on the expression line intensity, and the current face is beautified according to the target beautification parameter. In this embodiment, the lip dimension parameter set is obtained by analyzing the lip block of the current face, the expression line intensity of the current face is determined from that parameter set, and the beautification parameters of the current face are adjusted according to the expression line intensity so as to set a reasonable target beautification parameter for beautifying the current face. This prevents the expression lines produced by facial expressions from impairing the beautification effect, and optimizes the visual comfort of the beautified image.
It should be understood that the step numbers in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and constitutes no limitation on the implementation of the embodiments of the present invention.
An image beautification method has been described above; an image beautification apparatus is described in detail below.
As shown in Fig. 3, Embodiment Two of the present invention provides an image beautification apparatus, which includes:
a face detection module 301, configured to perform face detection on an image to be beautified to determine the face region in the image to be beautified;
a facial feature localization module 302, configured to perform facial feature localization on the face region to obtain the lip block of the current face in the image to be beautified;
a parameter set computing module 303, configured to calculate the lip dimension parameter set of the current face based on the lip block;
an expression line determining module 304, configured to determine the expression line intensity of the current face according to the lip dimension parameter set;
a beautification parameter determining module 305, configured to determine the target beautification parameter of the current face based on the expression line intensity; and
a beautification processing module 306, configured to beautify the current face according to the target beautification parameter.
Further, the lip dimension parameter set includes a first lip dimension parameter and a second lip dimension parameter.
Correspondingly, the first and second lip dimension parameters of the current face are calculated based on the lip block as:
Hm = bimax − bimin + 1; Wm = bjmax − bjmin + 1;
where Hm is the first lip dimension parameter, bimax is the maximum row number in the lip block, bimin is the minimum row number in the lip block, Wm is the second lip dimension parameter, bjmax is the maximum column number in the lip block, and bjmin is the minimum column number in the lip block.
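A sketch of the two parameters, assuming the lip block is given as (row, column) coordinate pairs: Hm spans the rows and Wm spans the columns occupied by lip blocks. The +1 convention follows Formula Three; the exact formula image is not reproduced in the source text, so this rendering is an assumption.

```python
def lip_dimension_params(lip_blocks):
    """First and second lip dimension parameters: Hm is the row extent and
    Wm the column extent of the lip blocks, measured in blocks."""
    rows = [i for i, _ in lip_blocks]
    cols = [j for _, j in lip_blocks]
    hm = max(rows) - min(rows) + 1  # bimax - bimin + 1
    wm = max(cols) - min(cols) + 1  # bjmax - bjmin + 1
    return hm, wm
```

Intuitively, a wide closed mouth gives Wm much larger than Hm, while an O-shaped mouth or a pout narrows and heightens the lip region.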
Preferably, in the formula by which the expression line intensity of the current face is determined from the lip dimension parameter set, Fw is the expression line intensity of the current face, and k1 and k2 are scale parameters satisfying 1 ≤ k2 < k1.
Optionally, the image beautification apparatus further includes:
a first initial beautification parameter obtaining module, configured to obtain a first initial beautification parameter of the current face.
Correspondingly, the beautification parameter determining module 305 includes:
a first determining unit, configured to, when the expression line intensity Fw = 1, raise the first initial beautification parameter by a first preset adjustment amplitude and determine the result as the target beautification parameter of the current face;
a second determining unit, configured to, when the expression line intensity Fw = −1, lower the first initial beautification parameter by a second preset adjustment amplitude and determine the result as the target beautification parameter of the current face; and
a third determining unit, configured to, when the expression line intensity Fw = 0, determine the first initial beautification parameter as the target beautification parameter of the current face.
Further, the expression line intensity of the current face further includes an eye expression line intensity, and the lip dimension parameter set further includes a third lip dimension parameter.
Correspondingly, the facial feature localization module 302 further includes:
an eye block obtaining unit, configured to perform facial feature localization on the face region to obtain the eye block of the current face's left eye or right eye.
The parameter set computing module 303 is configured to calculate the third lip dimension parameter of the current face according to the following formula:
ASize = sum( max(j | bk(i,j) ∈ row i) − min(j | bk(i,j) ∈ row i) + 1 );
where ASize is the third lip dimension parameter, bk(i,j) is the lip block in row i and column j, max(j | bk(i,j) ∈ row i) is the maximum column number among the lip blocks in row i, and min(j | bk(i,j) ∈ row i) is the minimum column number among the lip blocks in row i.
The expression line determining module 304 is configured to determine the eye expression line intensity of the current face by a formula in which sig is the eye expression line intensity of the current face, ESize is the number of blocks in the eye block of the current face's left eye or right eye, and k3 is a scale parameter satisfying k3 ≥ 1.5.
Preferably, the image beautification apparatus further includes:
a second initial beautification parameter obtaining module, configured to obtain a second initial beautification parameter of the current face, where the second initial beautification parameter includes an eye beautification parameter.
Correspondingly, the beautification parameter determining module 305 includes:
a fourth determining unit, configured to, when the expression line intensity Fw = 1, judge whether the eye expression line intensity sig equals 1; if sig = 1, raise the second initial beautification parameter of the current face by a third preset adjustment amplitude and determine the result as the target beautification parameter of the current face; and if sig ≠ 1, raise the eye beautification parameter of the current face by a fourth preset adjustment amplitude and determine the resulting second initial beautification parameter as the target beautification parameter of the current face;
a fifth determining unit, configured to, when the expression line intensity Fw = −1, lower the second initial beautification parameter of the current face by a fifth preset adjustment amplitude and determine the result as the target beautification parameter of the current face; and
a sixth determining unit, configured to, when the expression line intensity Fw = 0, determine the second initial beautification parameter as the target beautification parameter of the current face.
Optionally, the image to be beautified is each frame of a dynamic image.
Correspondingly, the image beautification apparatus further includes:
a first image judging module, configured to judge, after the current frame of the dynamic image has been beautified, whether a next frame exists in the dynamic image;
a second image judging module, configured to, if a next frame exists in the dynamic image, judge whether the next frame is a scene switching frame;
an image determining module, configured to, when the next frame is a scene switching frame, determine the next frame as a new image to be beautified, and return to the step of performing face detection on the image to be beautified to determine the face region in it, together with the subsequent steps; and
a newly appearing face obtaining module, configured to, when the next frame is not a scene switching frame, beautify the corresponding faces in the next frame according to the target beautification parameters of the current frame, while obtaining the newly appearing faces in the next frame, determining each newly appearing face as a new current face, and executing for it the step of calculating the lip dimension parameter set of the current face based on the lip block, together with the subsequent steps.
Fig. 4 is a schematic diagram of the image beautification device provided by Embodiment Three of the present invention. As shown in Fig. 4, the image beautification device 400 of this embodiment includes a processor 401, a memory 402, and a computer program 403, such as an image beautification program, stored in the memory 402 and runnable on the processor 401. When executing the computer program 403, the processor 401 implements the steps of each image beautification method embodiment described above, such as steps S101 to S106 shown in Fig. 1; alternatively, the processor 401 implements the functions of each module/unit of the apparatus embodiments described above, such as modules 301 to 306 shown in Fig. 3.
Illustratively, the computer program 403 may be divided into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution of the computer program 403 in the image beautification device. For example, the computer program 403 may be divided into a face detection module, a facial feature localization module, a parameter set computing module, an expression line determining module, a beautification parameter determining module and a beautification processing module, whose specific functions are as follows:
a face detection module, configured to perform face detection on an image to be beautified to determine the face region in the image to be beautified;
a facial feature localization module, configured to perform facial feature localization on the face region to obtain the lip block of the current face in the image to be beautified;
a parameter set computing module, configured to calculate the lip dimension parameter set of the current face based on the lip block;
an expression line determining module, configured to determine the expression line intensity of the current face according to the lip dimension parameter set;
a beautification parameter determining module, configured to determine the target beautification parameter of the current face based on the expression line intensity; and
a beautification processing module, configured to beautify the current face according to the target beautification parameter.
The image beautification device 400 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The image beautification device 400 may include, but is not limited to, the processor 401 and the memory 402. Those skilled in the art will appreciate that Fig. 4 is merely an example of the image beautification device 400 and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the image beautification device 400 may further include input/output devices, network access devices, buses, and so on.
The processor 401 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 402 may be an internal storage unit of the image beautification device 400, such as its hard disk or memory. The memory 402 may also be an external storage device of the image beautification device 400, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card fitted to the image beautification device 400. Further, the memory 402 may include both an internal storage unit and an external storage device of the image beautification device 400. The memory 402 is used to store the computer program and the other programs and data required by the image beautification device 400, and may also be used to temporarily store data that has been or will be output.
Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes of the system, apparatus and units described above may be found in the corresponding processes of the foregoing method embodiments and are not repeated here. In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
Those of ordinary skill in the art will recognize that the modules, units and/or method steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and there may be other divisions in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place, or they may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flows of the above method embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. An image beautification method, characterized by comprising:
performing face detection on an image to be beautified, to determine a face region in the image to be beautified;
performing facial feature localization on the face region, to obtain a lip block of a current face in the image to be beautified;
calculating a lip extent parameter set of the current face based on the lip block;
determining an expression-line intensity of the current face according to the lip extent parameter set;
determining a target beautification parameter of the current face based on the expression-line intensity;
performing beautification processing on the current face according to the target beautification parameter.
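The sequence of steps in claim 1 can be sketched as a simple pipeline. All names here are placeholders (the patent names no implementation); in practice the detection and localization steps would come from a library such as OpenCV or dlib.

```python
# Hedged sketch of the claim-1 control flow. The six callables stand in
# for the claimed steps; none of their names come from the patent itself.

def beautify(image, detect_face, locate_lips, lip_params,
             line_intensity, pick_target, apply_beauty):
    face_region = detect_face(image)       # face detection
    lip_block = locate_lips(face_region)   # facial feature localization
    params = lip_params(lip_block)         # lip extent parameter set
    fw = line_intensity(params)            # expression-line intensity
    target = pick_target(fw)               # target beautification parameter
    return apply_beauty(image, target)     # beautification processing
```

Each stage feeds the next in the claim's order; the dependent claims only refine how `lip_params`, `line_intensity`, and `pick_target` are computed.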
2. The image beautification method according to claim 1, characterized in that the lip extent parameter set comprises a first lip extent parameter and a second lip extent parameter;
correspondingly, the formulas for calculating the first lip extent parameter and the second lip extent parameter of the current face based on the lip block are:
wherein Hm is the first lip extent parameter, bimax is the maximum row index in the lip block, bimin is the minimum row index in the lip block, Wm is the second lip extent parameter, bjmax is the maximum column index in the lip block, and bjmin is the minimum column index in the lip block.
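The variable definitions suggest that Hm and Wm are the row and column extents of the lip block's bounding box. The formula itself is an image in the source, so the exact form (for instance whether a +1 term is included) is an assumption in this sketch:

```python
def lip_extent_params(lip_pixels):
    """First (Hm) and second (Wm) lip extent parameters from a set of
    (row, column) lip-block coordinates.

    Assumes Hm = bimax - bimin and Wm = bjmax - bjmin, as implied by the
    claim-2 variable definitions; the omitted source formula may differ
    slightly (e.g. by a +1 term).
    """
    rows = [i for i, _ in lip_pixels]
    cols = [j for _, j in lip_pixels]
    hm = max(rows) - min(rows)  # bimax - bimin
    wm = max(cols) - min(cols)  # bjmax - bjmin
    return hm, wm
```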
3. The image beautification method according to claim 2, characterized in that the formula for determining the expression-line intensity of the current face according to the lip extent parameter set is:
wherein Fw is the expression-line intensity of the current face, k1 and k2 are scale parameters, and 1 ≤ k2 < k1.
4. The image beautification method according to claim 3, characterized in that before determining the target beautification parameter of the current face based on the expression-line intensity, the method comprises: obtaining a first initial beautification parameter of the current face;
correspondingly, determining the target beautification parameter of the current face based on the expression-line intensity comprises:
when the expression-line intensity Fw = 1, raising the first initial beautification parameter by a first preset adjustment amplitude, and determining the raised first initial beautification parameter as the target beautification parameter of the current face;
when the expression-line intensity Fw = −1, lowering the first initial beautification parameter by a second preset adjustment amplitude, and determining the lowered first initial beautification parameter as the target beautification parameter of the current face;
when the expression-line intensity Fw = 0, determining the first initial beautification parameter as the target beautification parameter of the current face.
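The three-way adjustment in claim 4 is mechanical once Fw is known. A sketch, with illustrative step sizes standing in for the unspecified "preset adjustment amplitudes":

```python
def target_beauty_param(initial, fw, up_step=1, down_step=1):
    """Claim-4 adjustment: raise, lower, or keep the first initial
    beautification parameter according to Fw in {-1, 0, 1}.

    The step sizes are illustrative placeholders; the patent leaves the
    adjustment amplitudes unspecified.
    """
    if fw == 1:      # pronounced expression lines: raise the parameter
        return initial + up_step
    if fw == -1:     # weak expression lines: lower the parameter
        return initial - down_step
    return initial   # fw == 0: keep the initial parameter unchanged
```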
5. The image beautification method according to claim 3, characterized in that the expression-line intensity of the current face further comprises an eye expression-line intensity, and the lip extent parameter set further comprises a third lip extent parameter;
correspondingly, performing facial feature localization on the face region further comprises:
performing facial feature localization on the face region to obtain an eye block of the left eye or the right eye of the current face;
calculating the lip extent parameter set of the current face based on the lip block further comprises:
calculating the third lip extent parameter of the current face according to the following formula:
ASize = sum(max(j | bk(i, j) ∈ i) − min(j | bk(i, j) ∈ i) + 1);
wherein ASize is the third lip extent parameter, bk(i, j) is the lip block element at row i and column j, max(j | bk(i, j) ∈ i) is the maximum column index in row i of the lip block, and min(j | bk(i, j) ∈ i) is the minimum column index in row i of the lip block;
determining the expression-line intensity of the current face according to the lip extent parameter set further comprises:
determining the eye expression-line intensity of the current face by the following formula:
wherein sig is the eye expression-line intensity of the current face, ESize is the block count of the eye block of the left eye or the right eye of the current face, k3 is a scale parameter, and k3 ≥ 1.5.
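The ASize formula in claim 5 sums, over every occupied row of the lip block, the per-row column span (max − min + 1), i.e. a row-wise area of the lip region. A direct transcription:

```python
from collections import defaultdict

def lip_area_param(lip_pixels):
    """Third lip extent parameter ASize (claim 5): for each row i present
    in the lip block, take max_j - min_j + 1 over that row's columns,
    then sum over all rows."""
    cols_by_row = defaultdict(list)
    for i, j in lip_pixels:
        cols_by_row[i].append(j)  # group column indices by row
    return sum(max(js) - min(js) + 1 for js in cols_by_row.values())
```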
6. The image beautification method according to claim 5, characterized in that before determining the target beautification parameter of the current face based on the expression-line intensity, the method further comprises: obtaining a second initial beautification parameter of the current face, wherein the second initial beautification parameter comprises an eye beautification parameter;
correspondingly, determining the target beautification parameter of the current face based on the expression-line intensity comprises:
when the expression-line intensity Fw = 1, judging whether the eye expression-line intensity sig is equal to 1; if sig = 1, raising the second initial beautification parameter of the current face by a third preset adjustment amplitude, and determining the raised second initial beautification parameter as the target beautification parameter of the current face; if sig ≠ 1, raising the eye beautification parameter of the current face by a fourth preset adjustment amplitude, and determining the raised second initial beautification parameter as the target beautification parameter of the current face;
when the expression-line intensity Fw = −1, lowering the second initial beautification parameter of the current face by a fifth preset adjustment amplitude, and determining the lowered second initial beautification parameter as the target beautification parameter of the current face;
when the expression-line intensity Fw = 0, determining the second initial beautification parameter as the target beautification parameter of the current face.
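Claim 6 refines the Fw = 1 branch using the eye expression-line intensity sig. A sketch with illustrative amplitudes, where `overall` stands for the second initial beautification parameter and `eye` for the eye beautification parameter it contains:

```python
def target_beauty_params(overall, eye, fw, sig, step3=1, step4=1, step5=1):
    """Claim-6 branching. `overall` is the second initial beautification
    parameter and `eye` the eye beautification parameter within it; the
    preset amplitudes step3..step5 are illustrative placeholders.
    Returns the adjusted (overall, eye) pair."""
    if fw == 1:
        if sig == 1:                  # eye expression lines detected too
            return overall + step3, eye
        return overall, eye + step4   # otherwise raise only the eye parameter
    if fw == -1:
        return overall - step5, eye
    return overall, eye               # fw == 0: unchanged
```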
7. The image beautification method according to any one of claims 1 to 6, characterized in that the image to be beautified comprises each frame image of a dynamic image;
correspondingly, the image beautification method further comprises:
after performing beautification processing on a current frame image of the dynamic image, judging whether a next frame image exists in the dynamic image;
if a next frame image exists in the dynamic image, judging whether the next frame image is a scene-switching frame;
when the next frame image is a scene-switching frame, determining the next frame image as a new image to be beautified, and returning to the step of performing face detection on the image to be beautified to determine the face region in the image to be beautified, and the subsequent steps;
when the next frame image is not a scene-switching frame, performing beautification processing on the corresponding face in the next frame image according to the target beautification parameter of the current frame image, while obtaining any newly appearing face in the next frame image, determining the newly appearing face as a new current face, and performing, for the new current face, the step of calculating the lip extent parameter set of the current face based on the lip block, and the subsequent steps.
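For video (claim 7), the full pipeline is rerun only at scene cuts; within a scene, the previous frame's target parameter is reused. A control-flow sketch with the heavy steps passed in as callables (all names are placeholders):

```python
def process_video(frames, is_scene_cut, full_pipeline, reuse_target):
    """Claim-7 flow: `full_pipeline(frame)` runs detection through
    beautification and returns (output, target_param); `reuse_target`
    applies a previous target parameter to the next frame's faces.
    Newly appearing faces would re-enter at the lip-block step, which
    this sketch does not model."""
    results, target = [], None
    for idx, frame in enumerate(frames):
        if idx == 0 or is_scene_cut(frame):
            out, target = full_pipeline(frame)   # new scene: start over
        else:
            out = reuse_target(frame, target)    # same scene: reuse params
        results.append(out)
    return results
```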
8. An image beautification apparatus, characterized by comprising:
a face detection module, configured to perform face detection on an image to be beautified to determine a face region in the image to be beautified;
a facial feature localization module, configured to perform facial feature localization on the face region to obtain a lip block of a current face in the image to be beautified;
a parameter set calculation module, configured to calculate a lip extent parameter set of the current face based on the lip block;
an expression-line determination module, configured to determine an expression-line intensity of the current face according to the lip extent parameter set;
a beautification parameter determination module, configured to determine a target beautification parameter of the current face based on the expression-line intensity;
a beautification processing module, configured to perform beautification processing on the current face according to the target beautification parameter.
9. An image beautification device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image beautification method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the image beautification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810487608.XA CN108765264B (en) | 2018-05-21 | 2018-05-21 | Image beautifying method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108765264A true CN108765264A (en) | 2018-11-06 |
CN108765264B CN108765264B (en) | 2022-05-20 |
Family
ID=64008633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810487608.XA Active CN108765264B (en) | 2018-05-21 | 2018-05-21 | Image beautifying method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108765264B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109274983A (en) * | 2018-12-06 | 2019-01-25 | 广州酷狗计算机科技有限公司 | The method and apparatus being broadcast live |
CN109685741A (en) * | 2018-12-28 | 2019-04-26 | 北京旷视科技有限公司 | A kind of image processing method, device and computer storage medium |
CN111445417A (en) * | 2020-03-31 | 2020-07-24 | 维沃移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN112669233A (en) * | 2020-12-25 | 2021-04-16 | 北京达佳互联信息技术有限公司 | Image processing method, image processing apparatus, electronic device, storage medium, and program product |
WO2022187997A1 (en) * | 2021-03-08 | 2022-09-15 | 深圳市大疆创新科技有限公司 | Video processing method, electronic device, and storage medium |
WO2023051664A1 (en) * | 2021-09-30 | 2023-04-06 | 北京字跳网络技术有限公司 | Image processing method and apparatus |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825486A (en) * | 2016-04-05 | 2016-08-03 | 北京小米移动软件有限公司 | Beautifying processing method and apparatus |
CN106056533A (en) * | 2016-05-26 | 2016-10-26 | 维沃移动通信有限公司 | Photographing method and terminal |
WO2017088432A1 (en) * | 2015-11-26 | 2017-06-01 | 腾讯科技(深圳)有限公司 | Image recognition method and device |
CN107249100A (en) * | 2017-06-30 | 2017-10-13 | 北京金山安全软件有限公司 | Photographing method and device, electronic equipment and storage medium |
WO2017177259A1 (en) * | 2016-04-12 | 2017-10-19 | Phi Technologies Pty Ltd | System and method for processing photographic images |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107995415A (en) * | 2017-11-09 | 2018-05-04 | 深圳市金立通信设备有限公司 | A kind of image processing method, terminal and computer-readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765264A (en) | Image beautification method, apparatus, device, and storage medium | |
WO2018188453A1 (en) | Method for determining human face area, storage medium, and computer device | |
CN108765278A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
Guo et al. | Underwater ranker: Learn which is better and how to be better | |
WO2022078041A1 (en) | Occlusion detection model training method and facial image beautification method | |
CN110310229A (en) | Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing | |
WO2012122682A1 (en) | Method for calculating image visual saliency based on color histogram and overall contrast | |
CN103440633B (en) | A kind of digital picture dispels the method for spot automatically | |
CN108734127A (en) | Age identifies value adjustment method, device, equipment and storage medium | |
CN112200736B (en) | Image processing method based on reinforcement learning and model training method and device | |
CN111383232A (en) | Matting method, matting device, terminal equipment and computer-readable storage medium | |
JP2021531571A (en) | Certificate image extraction method and terminal equipment | |
CN109343701A (en) | A kind of intelligent human-machine interaction method based on dynamic hand gesture recognition | |
DE112016005482T5 (en) | Object detection with adaptive channel features | |
RU2770748C1 (en) | Method and apparatus for image processing, device and data carrier | |
CN107506691B (en) | Lip positioning method and system based on skin color detection | |
CN109800659A (en) | A kind of action identification method and device | |
CN110910512A (en) | Virtual object self-adaptive adjusting method and device, computer equipment and storage medium | |
CN107945139A (en) | A kind of image processing method, storage medium and intelligent terminal | |
CN110163049B (en) | Face attribute prediction method, device and storage medium | |
CN109741300B (en) | Image significance rapid detection method and device suitable for video coding | |
JP2013196681A (en) | Method and device for extracting color feature | |
CN115471413A (en) | Image processing method and device, computer readable storage medium and electronic device | |
CN109451318A (en) | Convenient for the method, apparatus of VR Video coding, electronic equipment and storage medium | |
CN108932704A (en) | Image processing method, picture processing unit and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||