CN111340688B - Method and device for generating closed-eye image - Google Patents


Info

Publication number
CN111340688B
CN111340688B
Authority
CN
China
Prior art keywords
eyelid
eye
coordinates
connecting line
point
Prior art date
Legal status
Active
Application number
CN202010113899.3A
Other languages
Chinese (zh)
Other versions
CN111340688A (en
Inventor
吴家贤
朱英芳
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010113899.3A
Publication of CN111340688A
Application granted
Publication of CN111340688B
Active legal status
Anticipated expiration

Classifications

    • G06T3/04
    • G06F18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/25 Image preprocessing: determination of region of interest [ROI] or volume of interest [VOI]
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Eye characteristics: preprocessing; feature extraction

Abstract

An embodiment of the invention provides a method and device for generating a closed-eye image. The method includes: acquiring eye keypoint coordinates in an image to be processed; determining an operation region for the eye-closing transformation according to the eye keypoint coordinates; acquiring the coordinates of the current pixel to be operated on within the operation region; and, according to the positional relation between the coordinates of the current pixel and the canthus keypoint coordinates, replacing the pixel value of the current pixel with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region, so as to generate a closed-eye image, where the eyelid region and the skin region are regions determined from the eye keypoint coordinates. A closed-eye image can thus be generated from an ordinary image to be processed, avoiding the great expense of time and labor otherwise needed to collect closed-eye images for model training, and so reducing the cost of acquiring closed-eye samples.

Description

Method and device for generating closed-eye image
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a method and an apparatus for generating a closed-eye image.
Background
When building a deep-learning-based algorithm, a large number of labeled samples is needed to train the model so that it can learn sufficiently, and the diversity and quality of the samples determine how well the algorithm performs in use. Labeled samples are commonly obtained by: 1. downloading from publicly available datasets; 2. collecting data in-house, or entrusting collection and labeling to a third-party data vendor.
Facial keypoint localization is widely used in the field of image vision and is an important step in face-related problems such as face recognition, expression analysis and face reconstruction. Accurately localizing the keypoints near the eyes across various eye states matters for fatigue detection, expression analysis, eye beautification and the like. However, existing face keypoint datasets contain few samples of closed or closing eyes, so algorithms learn the closed-eye case insufficiently; as a result, eye-region keypoints are localized inaccurately when the eyes are closed, and the localized keypoints cannot adapt to the opening and closing of the eyes. When one wishes to compute expression or fatigue level, or to generate eye-region special effects, from the positions of the keypoints near the eyes, more closed-eye samples therefore need to be collected to solve this problem.
At present, face picture samples can be acquired by web crawling or offline shooting; however, additionally acquiring a large number of closed-eye picture samples is relatively costly. On the one hand, with web crawling, the eyes in most face pictures are open, so closed-eye samples must be filtered out manually. On the other hand, offline shooting is expensive, the collected samples lack diversity of subjects, and considerable manpower is needed to label the face keypoints after the pictures are collected.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention are proposed to provide a method of generating a closed-eye image, and a corresponding apparatus for generating a closed-eye image, that overcome or at least partially solve the foregoing problems.
In order to solve the above problems, an embodiment of the present invention discloses a method for generating a closed-eye image, including:
acquiring eye key point coordinates in an image to be processed, wherein the eye key point coordinates comprise eye corner key point coordinates, eyelid key point coordinates and eyebrow key point coordinates;
determining an operation area of eye closing transformation according to the eye key point coordinates;
Acquiring coordinates of a current pixel point to be operated in the operation area;
and, according to the positional relation between the coordinates of the current pixel to be operated on and the canthus keypoint coordinates, replacing the pixel value of the current pixel with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region, so as to generate a closed-eye image, wherein the eyelid region and the skin region are regions determined according to the eye keypoint coordinates.
Optionally, the method further comprises:
determining the canthus connecting line and the unit direction vector of the canthus connecting line according to the canthus keypoint coordinates;
and determining new eyelid keypoint coordinates using the canthus keypoint coordinates, the eyelid keypoint coordinates and the unit direction vector.
Optionally, before the step of determining the operation region of the eye-closure transformation according to the eye keypoint coordinates, the method further comprises:
determining a canthus connecting line according to the canthus keypoint coordinates;
determining an upper eyelid curve and a lower eyelid curve according to the eyelid key point coordinates;
calculating the maximum distance between the upper eyelid points of the upper eyelid curve and the canthus connecting line, the maximum distance between the lower eyelid points of the lower eyelid curve and the canthus connecting line, and the length of the canthus connecting line;
determining an eyebrow curve according to the eyebrow keypoint coordinates, and calculating the maximum distance between the eyebrow points of the eyebrow curve and the canthus connecting line;
calculating the eye opening degree according to the maximum distance between the upper eyelid points and the canthus connecting line, the maximum distance between the lower eyelid points and the canthus connecting line, and the length of the canthus connecting line;
and, if the eye opening degree is greater than a preset value, executing the step of determining the operation region of the eye-closing transformation according to the eye keypoint coordinates.
Optionally, the determining the operation area of the eye closing transformation according to the eye key point coordinates includes:
determining an upward extension length according to the maximum distance between the upper eyelid points and the canthus connecting line and the maximum distance between the eyebrow points and the canthus connecting line;
determining a downward extension length according to the maximum distance between the lower eyelid points and the canthus connecting line;
and determining the operation region of the eye-closing transformation according to the upward extension length, the downward extension length and the canthus connecting line.
Optionally, after the step of obtaining the coordinates of the pixel point to be currently operated in the operation area, the method further includes:
and judging whether the coordinates of the current pixel to be operated on lie within the operation region of the eye-closing transformation.
Optionally, the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region is obtained by:
determining the projection of the current pixel to be operated on onto the canthus connecting line and its projection onto the perpendicular of the canthus connecting line;
and calculating the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line.
Optionally, the calculating of the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line includes:
when the projection onto the perpendicular of the canthus connecting line is smaller than a preset value, determining that the replacement value for the current pixel to be operated on is the eyelid pixel value of the eyelid region, and calculating that eyelid pixel value according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line;
when the projection onto the perpendicular of the canthus connecting line is larger than or equal to the preset value, determining that the replacement value for the current pixel to be operated on is the skin pixel value of the skin region, and calculating that skin pixel value according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line.
The embodiment of the invention also discloses a device for generating a closed-eye image, including:
the eye key point acquisition module is used for acquiring eye key point coordinates in the image to be processed, wherein the eye key point coordinates comprise eye corner key point coordinates, eyelid key point coordinates and eyebrow key point coordinates;
the operation area determining module is used for determining an operation area of eye closing transformation according to the eye key point coordinates;
the coordinate acquisition module is used for acquiring the coordinates of the current pixel point to be operated in the operation area;
and the closed-eye image generation module, configured to replace, according to the positional relation between the coordinates of the current pixel to be operated on and the canthus keypoint coordinates, the pixel value of the current pixel with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region, so as to generate a closed-eye image, wherein the eyelid region and the skin region are regions determined according to the eye keypoint coordinates.
Optionally, the apparatus further comprises:
the unit direction vector determining module is used for determining an eye corner connecting line and a unit direction vector of the eye corner connecting line according to the eye corner key point coordinates;
and the key point coordinate determining module is used for determining the new eyelid key point coordinate by adopting the eye corner key point coordinate, the eyelid key point coordinate and the unit direction vector.
Optionally, the apparatus further comprises:
the canthus connecting line determining module is used for determining a canthus connecting line according to the canthus key point coordinates;
the curve determining module is used for determining an upper eyelid curve and a lower eyelid curve according to the eyelid key point coordinates;
a first distance calculation module, configured to calculate the maximum distance between the upper eyelid points of the upper eyelid curve and the canthus connecting line, the maximum distance between the lower eyelid points of the lower eyelid curve and the canthus connecting line, and the length of the canthus connecting line;
a second distance calculation module, configured to determine an eyebrow curve according to the eyebrow keypoint coordinates and to calculate the maximum distance between the eyebrow points of the eyebrow curve and the canthus connecting line;
an opening degree calculation module, configured to calculate the eye opening degree according to the maximum distance between the upper eyelid points and the canthus connecting line, the maximum distance between the lower eyelid points and the canthus connecting line, and the length of the canthus connecting line;
and a judging module, configured to execute the step of determining the operation region of the eye-closing transformation according to the eye keypoint coordinates if the eye opening degree is greater than a preset value.
Optionally, the operation region determining module includes:
a first length determination submodule, configured to determine an upward extension length according to the maximum distance between the upper eyelid points and the canthus connecting line and the maximum distance between the eyebrow points and the canthus connecting line;
a second length determination submodule, configured to determine a downward extension length according to the maximum distance between the lower eyelid points and the canthus connecting line;
and an operation region determination submodule, configured to determine the operation region of the eye-closing transformation according to the upward extension length, the downward extension length and the canthus connecting line.
Optionally, the apparatus further comprises:
and the judging module is used for judging whether the coordinates of the pixel point to be operated currently are in the operation area of the closed-eye transformation.
Optionally, the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region is obtained by the following submodules:
a projection determination submodule, configured to determine the projection of the current pixel to be operated on onto the canthus connecting line and its projection onto the perpendicular of the canthus connecting line;
and a target pixel value calculation submodule, configured to calculate the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line.
Optionally, the target pixel value calculating submodule includes:
a first calculation unit, configured to, when the projection onto the perpendicular of the canthus connecting line is smaller than a preset value, determine that the replacement value for the current pixel to be operated on is the eyelid pixel value of the eyelid region, and calculate that eyelid pixel value according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line;
and a second calculation unit, configured to, when the projection onto the perpendicular of the canthus connecting line is larger than or equal to the preset value, determine that the replacement value for the current pixel to be operated on is the skin pixel value of the skin region, and calculate that skin pixel value according to the projection onto the canthus connecting line and the projection onto the perpendicular of the canthus connecting line.
The embodiment of the invention also discloses an electronic device, which comprises:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the steps of one or more methods according to embodiments of the present invention.
Embodiments of the present invention also disclose a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the steps of one or more methods according to embodiments of the present invention.
The embodiment of the invention has the following advantages:
in the embodiments of the invention, an operation region for the eye-closing transformation is determined in the image to be processed, and the pixel value of each pixel to be operated on is replaced with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region to generate a closed-eye image. This avoids the great expense of time and labor otherwise needed to collect closed-eye images for model training, and so reduces the cost of acquiring closed-eye samples.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a method of generating a closed-eye image of the present invention;
FIG. 2 is a schematic illustration of interpolation points for a key point of the present invention;
FIG. 3 is a schematic illustration of the operational area of one eye-closing transformation of the present invention;
FIG. 4 is a schematic illustration of moving the upper eyelid curve down to the canthus connecting line according to the present invention;
FIG. 5A is a schematic representation of an eyelid area of the present invention;
FIG. 5B is a schematic illustration of the area covered by an eyelid area after stretching in accordance with the present invention;
FIG. 5C is a schematic representation of the present invention after stretching the eyelid;
fig. 6 is a block diagram showing the structure of an embodiment of a closed-eye image generating apparatus according to the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for generating a closed-eye image according to the present invention may specifically include the following steps:
step 101, acquiring eye key point coordinates in an image to be processed, wherein the eye key point coordinates comprise eye corner key point coordinates, eyelid key point coordinates and eyebrow key point coordinates;
the image to be processed may be a face image that needs to be converted into a closed-eye state. For the image to be processed, if the eyes in the image to be processed are in an open state, the eyes in the image to be processed are adjusted to be in a closed state.
The eye keypoint coordinates indicate the positions of the key regions of the eyes in a face, including the eyebrows, the canthi (eye corners) and the eyelids. The canthus keypoint coordinates may include the coordinates of the left and right canthus points of each of the two eyes, or of the inner and outer canthus points, respectively. The eyelid keypoint coordinates may include the coordinates of the upper and lower eyelid points of each of the two eyes, at least two each. The eyebrow keypoint coordinates may include the coordinates of the left-eyebrow and right-eyebrow points, at least three each.
In the embodiment of the invention, if the image to be processed already carries eye keypoint position labels, the eye keypoint coordinates can be obtained directly by reading those labels.
In addition, if the image to be processed does not carry eye keypoint labels, the face keypoints in the image can be detected with a face keypoint detection algorithm and the eye keypoint coordinates determined from the face keypoint coordinates, for example by an ASM (Active Shape Model), AAM (Active Appearance Model), CPR (Cascaded Pose Regression) or PFLD (Practical Facial Landmark Detector) algorithm.
Step 102, determining an operation region for the eye-closing transformation according to the eye keypoint coordinates;
the operation region is the eye region of the image to be processed on which the eye-closing transformation must operate; it may include the eyeball, the eyelids, the canthi and so on. By modifying the pixel values within the operation region, a closed-eye image can be generated.
In a specific implementation, the operation region of the eye-closing transformation is a rectangular region obtained by extending the canthus connecting line in the vertical direction, and an operation region is determined separately for each of the two eyes. For one eye, the canthus connecting line is generated from the coordinates of its left and right canthus points; this line is then translated a certain distance toward the upper eyelid until the upper eyelid region is covered, and translated a certain distance toward the lower eyelid until the lower eyelid region is covered. The operation region of the eye-closing transformation is the region bounded by the two translated lines.
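As a rough illustration of this construction, the rectangle can be computed from the two canthus points plus the two translation distances by moving the canthus line along its unit normal. This is a minimal sketch under our own naming (`closed_eye_region`) and our own assumption that image y grows downward, not the patent's implementation:

```python
import math

def closed_eye_region(v_left, v_right, up_len, down_len):
    """Translate the canthus connecting line up by up_len and down by
    down_len; the four returned points are the rectangle corners
    [top-left, top-right, bottom-left, bottom-right]. Image y grows
    downward, so "up" is the negative-normal direction."""
    dx, dy = v_right[0] - v_left[0], v_right[1] - v_left[1]
    d = math.hypot(dx, dy)            # length of the canthus connecting line
    nx, ny = -dy / d, dx / d          # unit normal of the line
    return [
        (v_left[0] - nx * up_len, v_left[1] - ny * up_len),
        (v_right[0] - nx * up_len, v_right[1] - ny * up_len),
        (v_left[0] + nx * down_len, v_left[1] + ny * down_len),
        (v_right[0] + nx * down_len, v_right[1] + ny * down_len),
    ]
```

For a horizontal canthus line from (0, 0) to (4, 0) with up_len = 1 and down_len = 2, the region spans y from -1 to 2 across the full width of the line.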
Step 103, obtaining coordinates of a current pixel point to be operated in the operation area;
the current pixel point to be operated may refer to a pixel point to be operated currently in the image to be processed.
Specifically, pixel points in the image to be processed can be sequentially traversed, the traversed pixel points located in the operation area are used as current pixel points to be operated, and coordinates of the current pixel points to be operated are further obtained.
Step 104, according to the positional relation between the coordinates of the current pixel to be operated on and the canthus keypoint coordinates, replacing the pixel value of the current pixel with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region to generate a closed-eye image, wherein the eyelid region and the skin region are regions determined according to the eye keypoint coordinates.
Specifically, the eyelid region may be the region containing the eyelid keypoints, determined from the eye keypoint coordinates, and the skin region may be the eye-area skin excluding the eyebrows and the eyelids, likewise determined from the eye keypoint coordinates.
In the embodiment of the invention, the pixel value of the current pixel to be operated on can be replaced, according to the positional relation between its coordinates and the canthus keypoint coordinates, with the eyelid pixel value of the corresponding eyelid region or the skin pixel value of the corresponding skin region, so as to generate the closed-eye image.
The positional relation between the coordinates of the current pixel and the canthus keypoint coordinates determines whether the current pixel lies near the canthus connecting line. Specifically, the canthus connecting line is determined from the canthus keypoint coordinates, and the distance of the current pixel from that line is determined from its coordinates: if the current pixel is close to the line, its pixel value is to be replaced with the eyelid pixel value of the eyelid region; if it is far from the line, its pixel value is to be replaced with the skin pixel value of the skin region.
If the current pixel lies near the canthus connecting line, the perpendicular of the canthus connecting line through the current pixel is determined, and the pixel value of the eyelid point lying on that perpendicular is taken as the eyelid pixel value of the corresponding eyelid region; if it does not lie near the line, the perpendicular through the point is determined likewise, and the pixel value of the skin point lying on that perpendicular is taken as the skin pixel value of the corresponding skin region.
In the embodiment of the invention, the replacement pixel value for the current pixel can thus be determined from the positional relation between its coordinates and the canthus keypoint coordinates: the closer the current pixel is to the canthus connecting line determined by the canthus keypoints, the closer to the eyelid its replacement value is sampled; the farther away it is, the farther from the eyelid the value is sampled, which achieves the effect of stretching the eyelid and the surrounding skin in step.
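The near/far rule above can be sketched as a perpendicular-projection test against the canthus connecting line. The function name, the region labels and the concrete threshold are illustrative assumptions; the patent only speaks of a "preset value":

```python
import math

def replacement_source(pixel, v_left, v_right, threshold):
    """Return which region the replacement pixel value is sampled from:
    "eyelid" when the pixel lies near the canthus connecting line,
    "skin" otherwise."""
    dx, dy = v_right[0] - v_left[0], v_right[1] - v_left[1]
    d = math.hypot(dx, dy)
    # projection of (pixel - vLeft) onto the perpendicular of the line
    perp = abs((pixel[0] - v_left[0]) * (-dy / d)
               + (pixel[1] - v_left[1]) * (dx / d))
    return "eyelid" if perp < threshold else "skin"
```

A pixel half a unit above a horizontal canthus line, with a threshold of 1, would therefore be painted with an eyelid value; a pixel three units above it would be painted with a skin value.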
In a preferred embodiment of the invention, the method may further comprise the steps of:
determining the canthus connecting line and its unit direction vector according to the canthus keypoint coordinates; and determining new eyelid keypoint coordinates using the canthus keypoint coordinates, the eyelid keypoint coordinates and the unit direction vector.
Specifically, denote the left canthus point coordinate of one eye as vLeft and the right canthus point coordinate as vRight; then the inter-canthus distance is EyeDis = norm(vRight - vLeft), and the unit direction vector of the canthus connecting line is lEye = (vRight - vLeft)/EyeDis. If an eyelid keypoint coordinate in the image to be processed is Ori_Pos, the new eyelid keypoint coordinate satisfies the formula:
New_Pos = vLeft + lEye * ((Ori_Pos - vLeft) · lEye).
Substituting the values of the eyelid keypoint coordinates, the left canthus point coordinate and the unit direction vector from the image to be processed into this formula yields the new eyelid keypoint coordinates, i.e. each eyelid keypoint projected onto the canthus connecting line.
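As an illustration, the projection formula can be evaluated with plain coordinate arithmetic; the result is simply the eyelid keypoint dropped perpendicularly onto the canthus connecting line. The function name is ours; vLeft, vRight and Ori_Pos follow the symbols above:

```python
import math

def project_to_canthus_line(v_left, v_right, ori_pos):
    """New_Pos = vLeft + lEye * ((Ori_Pos - vLeft) . lEye), with
    lEye the unit direction vector of the canthus connecting line."""
    dx, dy = v_right[0] - v_left[0], v_right[1] - v_left[1]
    eye_dis = math.hypot(dx, dy)            # EyeDis = norm(vRight - vLeft)
    ux, uy = dx / eye_dis, dy / eye_dis     # lEye
    # scalar projection of (Ori_Pos - vLeft) onto lEye
    s = (ori_pos[0] - v_left[0]) * ux + (ori_pos[1] - v_left[1]) * uy
    return (v_left[0] + ux * s, v_left[1] + uy * s)
```

For vLeft = (0, 0), vRight = (4, 0) and an eyelid point at (1, 2), the new coordinate is (1, 0): the point lands on the canthus line directly below its original position.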
In the embodiment of the invention, the canthus keypoint coordinates, the eyebrow keypoint coordinates and the newly calculated eyelid keypoint coordinates can be used as the keypoint labels of the generated closed-eye image, which facilitates subsequently using the closed-eye image for model training and other operations.
Prior to said step 102, the following steps may be included:
determining the canthus connecting line according to the canthus keypoint coordinates; determining the upper eyelid curve and the lower eyelid curve according to the eyelid keypoint coordinates; calculating the maximum distance between the upper eyelid points of the upper eyelid curve and the canthus connecting line, the maximum distance between the lower eyelid points of the lower eyelid curve and the canthus connecting line, and the length of the canthus connecting line; determining the eyebrow curve according to the eyebrow keypoint coordinates and calculating the maximum distance between the eyebrow points of the eyebrow curve and the canthus connecting line; calculating the eye opening degree from the maximum distance between the upper eyelid points and the canthus connecting line, the maximum distance between the lower eyelid points and the canthus connecting line, and the length of the canthus connecting line; and, if the eye opening degree is greater than a preset value, executing the step of determining the operation region of the eye-closing transformation according to the eye keypoint coordinates.
Fig. 2 shows a schematic diagram of the interpolation points of the keypoints in the present invention; the interpolation points include points interpolated from the eyelid keypoints and points interpolated from the eyebrow keypoints. The operation region may be determined from these interpolation points.
Specifically, denote the upper eyelid keypoint coordinates together with the canthus keypoint coordinates, from left to right, as [pt_1, pt_2, ..., pt_n], pt_i ∈ R^2, where R^2 = {(x, y) | x, y ∈ R}. Let t be a fraction between 0 and 1, with 0 representing the leftmost end, 1 the rightmost end and 0.5 the midpoint; pt_1 is the left canthus and pt_n the right canthus. Two end points are added: pt_0 = pt_1*2 - pt_2 and pt_(n+1) = pt_n*2 - pt_(n-1). The point at fraction t along the canthus connecting line satisfies the following equation 1:

LineLoc(t) = pt_1 + t*(pt_n - pt_1)    (1)
The eyelid curve is a piecewise curve; denote its number of pieces numSections = n - 1. Let currPt = floor(t*numSections), where floor(·) denotes rounding down, and let u = t*numSections - currPt; a = pt(currPt); b = pt(currPt+1); c = pt(currPt+2); d = pt(currPt+3). The point coordinates at fraction t along the upper eyelid curve may then satisfy the following Equation 2:
CurveLoc(t) = ((-a + b*3 - c*3 + d)*u^3 + (a*2 - b*5 + c*4 - d)*u^2 + (c - a)*u + b*2)*0.5    (2)
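Equation 2 is the standard uniform Catmull-Rom spline evaluated piecewise over the key points, using the mirrored phantom endpoints pt0 and pt(n+1) defined above. A minimal Python sketch (function name illustrative; currPt is clamped so that t = 1 still falls in the last section):

```python
import math

def curve_loc(pts, t):
    """Equation (2): evaluate the piecewise eyelid curve (a Catmull-Rom-style
    spline) at fraction t in [0, 1]. `pts` are the key points pt1..ptn as
    (x, y) tuples; pt0 and pt(n+1) are mirrored as in the text."""
    n = len(pts)
    # phantom endpoints: pt0 = 2*pt1 - pt2, pt(n+1) = 2*ptn - pt(n-1)
    ext = ([tuple(2 * u0 - u1 for u0, u1 in zip(pts[0], pts[1]))]
           + [tuple(p) for p in pts]
           + [tuple(2 * u0 - u1 for u0, u1 in zip(pts[-1], pts[-2]))])
    num_sections = n - 1
    curr = min(math.floor(t * num_sections), num_sections - 1)  # clamp for t = 1
    u = t * num_sections - curr
    a, b, c, d = ext[curr], ext[curr + 1], ext[curr + 2], ext[curr + 3]
    return tuple(((-ai + bi * 3 - ci * 3 + di) * u ** 3
                  + (ai * 2 - bi * 5 + ci * 4 - di) * u ** 2
                  + (ci - ai) * u + bi * 2) * 0.5
                 for ai, bi, ci, di in zip(a, b, c, d))
```

At u = 0 the formula reduces to b, so the spline passes through every key point; on collinear key points it degenerates to linear interpolation.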
Therefore, the distance between the canthus connecting line and the upper eyelid at fraction t may satisfy the following Equation 3:
UpLibDis(t) = norm(CurveLoc(t) - LineLoc(t))    (3)
Where norm represents the 2-norm of the vector, i.e. the euclidean distance.
The maximum distance between the upper eyelid points and the canthus connecting line on the upper eyelid curve may satisfy the following Equation 4:
UpLibL = max(UpLibDis(t)), t ∈ {0, 1/30, 2/30, ..., 29/30}    (4)
That is, substituting the series of values t = {0, 1/30, 2/30, ..., 29/30} into Equation 3 above and taking the maximum yields the maximum distance between the upper eyelid points and the canthus connecting line.
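The sampling procedure of Equations 3 and 4 can be sketched as follows, with the eyelid curve passed in as any callable mapping t to a 2-D point (names illustrative):

```python
import math

def max_line_distance(curve, pt1, ptn, n_samples=30):
    """Equations (3)-(4): sample norm(CurveLoc(t) - LineLoc(t)) at
    t = 0, 1/30, ..., 29/30 and return the maximum distance between the
    curve and the eye-corner connecting line."""
    best = 0.0
    for i in range(n_samples):
        t = i / n_samples
        lx = pt1[0] + t * (ptn[0] - pt1[0])            # LineLoc(t), Equation (1)
        ly = pt1[1] + t * (ptn[1] - pt1[1])
        cx, cy = curve(t)
        best = max(best, math.hypot(cx - lx, cy - ly))  # 2-norm (Euclidean)
    return best
```

For a parabolic "eyelid" bulging 1 unit above a horizontal corner line from (0, 0) to (10, 0), e.g. `curve = lambda t: (10*t, 4*t*(1 - t))`, the sampled maximum is 1.0 (reached at t = 15/30).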
Similarly, performing the same calculation on the lower eyelid key point coordinates and the canthus key point coordinates gives a distance formula DownLibDis(t) between the canthus connecting line and the lower eyelid at fraction t, and the maximum distance DownLibL between the lower eyelid points and the canthus connecting line can be estimated from the formula DownLibDis(t).
Performing the same calculation on the eyebrow key point coordinates gives a distance formula BowDis(t) between the eyebrow curve and the canthus connecting line, and the maximum distance BowL between the eyebrow points and the canthus connecting line can be estimated from the formula BowDis(t).
Computing the Euclidean distance between the two eye corner points pt1 and ptn gives the inter-corner distance EyeW.
Denote the degree of eye opening as EOR; it may satisfy the formula EOR = (DownLibL + UpLibL)/EyeW. When the degree of eye opening is greater than the preset value, the eye is considered to be in an open state. The preset value is a preset number used for determining the state of the human eye, for example 0.3; in a specific implementation, the preset value can be set as needed, and this is not limited in the embodiment of the present invention.
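The openness test then reduces to a ratio of the measured distances. A trivial sketch (names illustrative):

```python
def eye_openness(up_lib_l, down_lib_l, eye_w):
    """Degree of eye opening: EOR = (DownLibL + UpLibL) / EyeW."""
    return (down_lib_l + up_lib_l) / eye_w

def is_open(eor, preset=0.3):
    """The eye is treated as open when EOR exceeds the preset value."""
    return eor > preset
```

For an eye 50 px wide with UpLibL = 12 and DownLibL = 8, EOR = 20/50 = 0.4, so the eye is judged open against the example preset of 0.3.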
In the embodiment of the present invention, when the degree of eye opening is greater than the preset value, the eye is considered open and the subsequent step 102 can be executed; when the degree of eye opening is less than or equal to the preset value, the eye is considered closed, the process can exit, and the subsequent step 102 is not executed.
In an embodiment of the present invention, the step 102 may include the following sub-steps:
determining an upward extension length according to the maximum distance between the upper eyelid point and the corner connecting line and the maximum distance between the eyebrow point and the corner connecting line; determining a downward extension length according to the maximum distance between the lower eyelid point and the corner connecting line; and determining an operation area of closed-eye transformation according to the upward extension length, the downward extension length and the corner connecting line.
Specifically, the operation region may be a rectangle obtained by extending the canthus connecting line in its perpendicular direction. The operation region may extend upward to just below the eyebrow, and the upward extension length of the operation region may satisfy the formula UpWrapL = BowL - 0.1*UpLibL. The downward extension length of the operation region may satisfy the formula DownWrapL = 2*DownLibL.
As an example, let the left corner point coordinate of one eye be vLeft and the right corner point coordinate be vRight. Then the inter-corner distance is EyeDis = norm(vRight - vLeft), the unit direction vector of the canthus connecting line is lEye = (vRight - vLeft)/EyeDis, and the unit vector in the perpendicular direction of the canthus connecting line is vEye = (-lEye[1], lEye[0]).
The four corner coordinates of the operation region are RectCorner = (vLeft + vEye*DownWrapL, vLeft - vEye*UpWrapL, vRight + vEye*DownWrapL, vRight - vEye*UpWrapL), where RectCorner is a matrix of four rows and two columns, each row representing the coordinates of one corner point. Denote the maximum abscissa of the operation region xmax, the minimum abscissa xmin, the maximum ordinate ymax, and the minimum ordinate ymin. These satisfy the following formulas:
(xmax, ymax) = max(RectCorner, axis=0);
(xmin, ymin) = min(RectCorner, axis=0).
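The rectangle construction and its axis-aligned bounds can be sketched as follows (names illustrative; coordinates as (x, y) tuples):

```python
import math

def operation_region(v_left, v_right, up_wrap_l, down_wrap_l):
    """Build the RectCorner corner points of the eye-closing operation
    rectangle and their axis-aligned bounds (xmin, ymin, xmax, ymax)."""
    dx, dy = v_right[0] - v_left[0], v_right[1] - v_left[1]
    eye_dis = math.hypot(dx, dy)                 # EyeDis = norm(vRight - vLeft)
    l_eye = (dx / eye_dis, dy / eye_dis)         # unit vector along the corner line
    v_eye = (-l_eye[1], l_eye[0])                # unit vector perpendicular to it
    rect = [(p[0] + v_eye[0] * s, p[1] + v_eye[1] * s)
            for p in (v_left, v_right)
            for s in (down_wrap_l, -up_wrap_l)]  # RectCorner rows, same order as text
    xs, ys = [p[0] for p in rect], [p[1] for p in rect]
    return rect, (min(xs), min(ys), max(xs), max(ys))
```

For a horizontal eye from (0, 0) to (10, 0) with UpWrapL = 2 and DownWrapL = 3, the bounds come out as (0, -2, 10, 3).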
Further, the coordinates of the four corner points of the operation region of the closed-eye transformation can be determined: (xmin, ymin), (xmin, ymax), (xmax, ymin) and (xmax, ymax). A schematic diagram of the operation region of one closed-eye transformation of the present invention is shown in fig. 3.
In a preferred embodiment of the present invention, after said step 103, the following steps may be further included:
and judging whether the coordinates of the pixel point to be operated currently are in the operation area of the closed-eye transformation.
In the embodiment of the present invention, it is determined whether the coordinates of the current pixel to be operated are within the operation area, if the coordinates of the current pixel to be operated are within the operation area, step 103 is continuously executed, and if the coordinates of the current pixel to be operated are not within the operation area, the flow is exited, and step 103 is not executed.
Specifically, record the coordinates of the current pixel point to be operated as coord = (x, y). The coordinates of the current pixel point relative to the left eye corner point are rel_coord = coord - vLeft. The projection of the relative coordinates on the canthus connecting line is the dot product with the unit direction vector of the canthus connecting line, recorded as EyeProjX = rel_coord · lEye, and the projection of the relative coordinates on the perpendicular of the canthus connecting line is EyeProjY = rel_coord · vEye. If the projection EyeProjX on the canthus connecting line is greater than 0 and less than the inter-corner distance EyeDis, and the projection EyeProjY on the perpendicular of the canthus connecting line is greater than -UpWrapL and less than DownWrapL, the current pixel point to be operated is judged to be within the operation region.
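The inside test can be sketched directly from the two projections (names illustrative):

```python
import math

def in_operation_region(coord, v_left, v_right, up_wrap_l, down_wrap_l):
    """Project the pixel onto the corner line (EyeProjX) and its perpendicular
    (EyeProjY); the pixel is inside the operation region when
    0 < EyeProjX < EyeDis and -UpWrapL < EyeProjY < DownWrapL."""
    dx, dy = v_right[0] - v_left[0], v_right[1] - v_left[1]
    eye_dis = math.hypot(dx, dy)
    l_eye = (dx / eye_dis, dy / eye_dis)
    v_eye = (-l_eye[1], l_eye[0])
    rel = (coord[0] - v_left[0], coord[1] - v_left[1])   # rel_coord = coord - vLeft
    proj_x = rel[0] * l_eye[0] + rel[1] * l_eye[1]       # EyeProjX
    proj_y = rel[0] * v_eye[0] + rel[1] * v_eye[1]       # EyeProjY
    return 0 < proj_x < eye_dis and -up_wrap_l < proj_y < down_wrap_l
```

For example, with corners (0, 0) and (10, 0), UpWrapL = 2 and DownWrapL = 3, the pixel (5, 1) is inside, while (5, -5) falls above the upper bound and is rejected.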
In a preferred embodiment of the present invention, the step 104 may comprise the following sub-steps:
determining the projection of the current pixel point to be operated on the corner connecting line and the projection of the current pixel point to be operated on the vertical line of the corner connecting line; and calculating the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line.
In the embodiment of the invention, by determining the projection of the current pixel point to be operated on the canthus connecting line and its projection on the perpendicular of the canthus connecting line, the bilinear interpolation at the point extended from the current pixel point along the perpendicular of the canthus connecting line can be taken as the upper eyelid pixel value of the upper eyelid area or the lower eyelid pixel value of the lower eyelid area corresponding to the current pixel point to be operated.
In a preferred embodiment of the present invention, the calculating of the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line may include the following sub-steps:
when the projection on the perpendicular of the canthus connecting line is smaller than a preset value, determining that the pixel value corresponding to the current pixel point to be operated is the upper eyelid pixel value of the upper eyelid area, and calculating the upper eyelid pixel value of the corresponding upper eyelid area according to the projection on the canthus connecting line and the projection on the perpendicular of the canthus connecting line; when the projection on the perpendicular of the canthus connecting line is greater than or equal to the preset value, determining that the pixel value corresponding to the current pixel point to be operated is the lower eyelid pixel value of the lower eyelid area, and calculating the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the canthus connecting line and the projection on the perpendicular of the canthus connecting line.
The preset value may be a preset number used for judging whether the pixel point to be operated is located near the canthus connecting line; for example, the preset value may be 0.3 times the maximum distance between the eyelid and the canthus connecting line. If the maximum distance between the eyelid and the canthus connecting line is 1 cm, the preset value may be set to 0.3 cm.
In the embodiment of the invention, whether the current pixel point to be operated is located near the canthus connecting line can be determined from the projection on the perpendicular of the canthus connecting line and the preset value. If the projection on the perpendicular of the canthus connecting line is smaller than the preset value, the current pixel point to be operated is considered to be near the canthus connecting line, the pixel value corresponding to it is determined to be the upper eyelid pixel value of the upper eyelid area, and the upper eyelid pixel value of the corresponding upper eyelid area is calculated according to the projection on the canthus connecting line and the projection on its perpendicular. If the projection on the perpendicular of the canthus connecting line is greater than or equal to the preset value, the pixel point to be operated is not near the canthus connecting line, its pixel value is determined to be the lower eyelid pixel value of the lower eyelid area, and the lower eyelid pixel value of the corresponding lower eyelid area is calculated according to the projection on the canthus connecting line and the projection on its perpendicular.
Specifically, let I(x, y) be the pixel value at the coordinates (x, y) of the current pixel point to be operated, and let the coordinates of the replacement pixel in the upper or lower eyelid area be new_coord = (nx, ny), where new_coord is a floating-point coordinate given by new_coord = coord + vEye*vStep. The pixel value at new_coord is obtained by bilinear interpolation, so I(x, y) satisfies the formula I(x, y) = Bilinear(new_coord). Let nx1 = floor(nx), nx2 = ceil(nx), ny1 = floor(ny), ny2 = ceil(ny), where ceil(·) denotes rounding up and floor(·) denotes rounding down. The bilinear interpolation formula is:
Bilinear(new_coord)=(nx2-nx)*(ny2-ny)*I(nx1,ny1)+(nx-nx1)*(ny2-ny)*I(nx2,ny1)+(nx2-nx)*(ny-ny1)*I(nx1,ny2)+(nx-nx1)*(ny-ny1)*I(nx2,ny2)。
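The interpolation can be sketched as follows; here the image is any 2-D array indexed as img[x][y], and nx2 is taken as nx1 + 1 (rather than a literal ceiling) so that integer-valued coordinates do not produce a degenerate zero-weight cell — an assumption about the intended implementation:

```python
import math

def bilinear(img, nx, ny):
    """Bilinear interpolation of `img` at the floating-point coordinate
    (nx, ny), per the Bilinear(new_coord) formula. Assumes nx1 = floor(nx),
    nx2 = nx1 + 1 (and likewise for ny)."""
    nx1, ny1 = math.floor(nx), math.floor(ny)
    nx2, ny2 = nx1 + 1, ny1 + 1
    return ((nx2 - nx) * (ny2 - ny) * img[nx1][ny1]
            + (nx - nx1) * (ny2 - ny) * img[nx2][ny1]
            + (nx2 - nx) * (ny - ny1) * img[nx1][ny2]
            + (nx - nx1) * (ny - ny1) * img[nx2][ny2])
```

For a 2x2 image `[[10, 20], [30, 40]]`, interpolating at the cell center (0.5, 0.5) returns the average 25.0, and at an exact grid point (0.0, 0.0) it returns that pixel's value.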
The value of vStep in the above formula for new_coord is calculated case by case:
When EyeProjY < 0, the current pixel point to be operated is located above the canthus connecting line, since the perpendicular of the canthus connecting line points toward the lower eyelid. If the current pixel point to be operated is close to the canthus connecting line, i.e. -EyeProjY < pl*UpLibL, the pixel value of the upper eyelid is used to replace the pixel value of the current pixel point to be operated, and vStep satisfies the formula vStep = -UpLibDis(EyeProjX), where pl represents an estimate of the eyelid range proportion, e.g. pl = 0.3. The operation at this point in effect moves the upper eyelid curve down to the canthus connecting line, as shown in fig. 4.
Conversely, if the current pixel point to be operated is far from the canthus connecting line, the pixels at the upper eyelid can be stretched to that point, and vStep satisfies the formula vStep = UpSa*EyeProjY + UpSb, where UpSa = (UpWrapL - UpLibDis(EyeProjX) - pl*UpLibL)/(UpWrapL - pl*UpLibL) - 1 and UpSb = UpSa*UpWrapL. The operation at this point in effect stretches the eyelid area over the eyeball area, as shown in figs. 5A to 5C.
When EyeProjY > 0, the current pixel point to be operated is located below the canthus connecting line. Similarly, when the current pixel point to be operated is close to the canthus connecting line, i.e. EyeProjY < pl*DownLibL, the pixel on the lower eyelid curve is moved up to the canthus connecting line, and vStep satisfies the formula vStep = DownLibDis(EyeProjX). Otherwise, the pixels of the lower eyelid area are stretched, and vStep satisfies the formula vStep = DownSa*EyeProjY + DownSb, where DownSa = (DownWrapL - DownLibDis(EyeProjX) - pl*DownLibL)/(DownWrapL - pl*DownLibL) - 1 and DownSb = -DownSa*DownWrapL.
In the embodiment of the invention, by determining the operation region of the closed-eye transformation in the image to be processed and replacing the pixel value of the current pixel point to be operated with the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area to generate the closed-eye image, the great deal of time and labor otherwise required to collect closed-eye images for model training is avoided, and the cost of acquiring closed-eye image samples can be reduced.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 6, a block diagram illustrating an embodiment of a closed-eye image generating apparatus according to the present invention may specifically include the following modules:
The key point obtaining module 601 is configured to obtain eye key point coordinates in an image to be processed, where the eye key point coordinates include an eye corner key point coordinate, an eyelid key point coordinate and an eyebrow key point coordinate;
an operation region determining module 602, configured to determine an operation region of the eye-closure transformation according to the eye key point coordinates;
a coordinate acquiring module 603, configured to acquire coordinates of a current pixel to be operated in the operation area;
The closed-eye image generating module 605 is configured to replace, according to the position relation between the coordinates of the current pixel point to be operated and the coordinates of the eye corner key points, the pixel value of the current pixel point to be operated with the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area, so as to generate a closed-eye image, wherein the upper eyelid area and the lower eyelid area are areas determined according to the coordinates of the eye corner key points.
In a preferred embodiment of the invention, the apparatus may further comprise the following modules:
the unit direction vector determining module is used for determining an eye corner connecting line and a unit direction vector of the eye corner connecting line according to the eye corner key point coordinates;
and the key point coordinate determining module is used for determining the new eyelid key point coordinate by adopting the eye corner key point coordinate, the eyelid key point coordinate and the unit direction vector.
In a preferred embodiment of the invention, the device further comprises:
the canthus connecting line determining module is used for determining a canthus connecting line according to the canthus key point coordinates;
the curve determining module is used for determining an upper eyelid curve and a lower eyelid curve according to the eyelid key point coordinates;
a first distance calculating module, configured to calculate a maximum distance between an upper eyelid point and an eye corner connecting line in the upper eyelid curve, a maximum distance between a lower eyelid point and an eye corner connecting line in the lower eyelid curve, and a distance between the eye corner connecting lines;
the second distance calculation module is used for determining an eyebrow curve according to the coordinates of the eyebrow key points and calculating the maximum distance between the eyebrow points and the connecting line of the eye corners in the eyebrow curve;
the opening and closing degree calculation module is used for calculating the opening and closing degree of eyes according to the maximum distance between the upper eyelid point and the corner connecting line, the maximum distance between the lower eyelid point and the corner connecting line and the distance between the corner connecting lines;
and the judging module is used for executing the step of determining the operation area of the eye closing transformation according to the eye key point coordinates if the opening degree of the eyes is larger than a preset value.
In a preferred embodiment of the present invention, the operation region determining module 602 may include the following sub-modules:
The first length determining submodule is used for determining an upward extension length according to the maximum distance between the upper eyelid point and the eye corner connecting line and the maximum distance between the eyebrow point and the eye corner connecting line;
a second length determining submodule, configured to determine a downward extension length according to a maximum distance between the lower eyelid point and the canthus connecting line;
and the operation area determination submodule is used for determining an operation area of closed-eye transformation according to the upward extension length, the downward extension length and the corner connecting line.
In a preferred embodiment of the invention, the apparatus may further comprise the following modules:
and the judging module is used for judging whether the coordinates of the pixel point to be operated currently are in the operation area of the closed-eye transformation.
In a preferred embodiment of the invention, the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area is obtained by the following sub-modules:
a projection determining submodule, configured to determine a projection of the current pixel to be operated on the corner connecting line and a projection of the current pixel to be operated on a vertical line of the corner connecting line;
and the target pixel value calculation submodule is used for calculating the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line.
In a preferred embodiment of the present invention, the target pixel value calculation submodule includes:
the first calculation unit is used for determining, when the projection on the vertical line of the corner connecting line is smaller than a preset value, that the pixel value corresponding to the current pixel point to be operated is the upper eyelid pixel value of the upper eyelid area, and calculating the upper eyelid pixel value of the corresponding upper eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line;
and the second calculation unit is used for determining, when the projection on the vertical line of the corner connecting line is greater than or equal to the preset value, that the pixel value corresponding to the current pixel point to be operated is the lower eyelid pixel value of the lower eyelid area, and calculating the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the invention also provides electronic equipment, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the steps of the methods described by the embodiments of the present invention.
Embodiments of the present invention also provide a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the steps of the methods described in the embodiments of the present invention.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The above description of the method and the device for generating the closed-eye image provided by the invention applies specific examples to illustrate the principles and the implementation of the invention, and the description of the examples is only used for helping to understand the method and the core idea of the invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (10)

1. A method of generating an eye-closed image, comprising:
acquiring eye key point coordinates in an image to be processed, wherein the eye key point coordinates comprise eye corner key point coordinates, eyelid key point coordinates and eyebrow key point coordinates;
determining an operation area of eye closing transformation according to the eye key point coordinates; the operation region of the eye closing transformation is determined according to an upward extension length determined based on a maximum distance between an upper eyelid point and an eye corner connecting line and a maximum distance between an eyebrow point and an eye corner connecting line, a downward extension length determined based on a maximum distance between a lower eyelid point and an eye corner connecting line, and an eye corner connecting line;
acquiring coordinates of a current pixel point to be operated in the operation area;
and according to the position relation between the coordinates of the current pixel point to be operated and the coordinates of the eye corner key points, controlling to replace the pixel value of the current pixel point to be operated with the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area so as to generate a closed-eye image, wherein the upper eyelid area and the lower eyelid area are areas determined according to the coordinates of the eye key points.
2. The method as recited in claim 1, further comprising:
According to the coordinates of the corner key points, determining corner connecting lines and unit direction vectors of the corner connecting lines;
and determining new eyelid key point coordinates by adopting the eye corner key point coordinates, the eyelid key point coordinates and the unit direction vector.
3. The method of claim 1, further comprising, prior to the step of determining an operating region of an eye closure transformation based on the eye keypoint coordinates:
determining a canthus connecting line according to the canthus key point coordinates;
determining an upper eyelid curve and a lower eyelid curve according to the eyelid key point coordinates;
calculating the maximum distance between the upper eyelid point and the corner connecting line in the upper eyelid curve, the maximum distance between the lower eyelid point and the corner connecting line in the lower eyelid curve, and the distance between the corner connecting lines;
determining an eyebrow curve according to the coordinates of the eyebrow key points, and calculating the maximum distance between the eyebrow points and the connecting line of the eye corners in the eyebrow curve;
calculating the opening and closing degree of eyes according to the maximum distance between the upper eyelid point and the corner connecting line, the maximum distance between the lower eyelid point and the corner connecting line and the distance between the corner connecting lines;
and if the opening and closing degree of the eyes is larger than a preset value, executing the step of determining the operation area of the eye closing transformation according to the eye key point coordinates.
4. The method of claim 3, wherein said determining an operating region of an eye-closure transformation from the eye keypoint coordinates comprises:
determining an upward extension length according to the maximum distance between the upper eyelid point and the corner connecting line and the maximum distance between the eyebrow point and the corner connecting line;
determining a downward extension length according to the maximum distance between the lower eyelid point and the corner connecting line;
and determining an operation area of closed-eye transformation according to the upward extension length, the downward extension length and the corner connecting line.
5. The method according to claim 1, further comprising, after the step of acquiring coordinates of a pixel point currently to be operated in the operation area:
and judging whether the coordinates of the pixel point to be operated currently are in the operation area of the closed-eye transformation.
6. A method according to claim 3, wherein the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area is obtained by:
determining the projection of the current pixel point to be operated on the corner connecting line and the projection of the current pixel point to be operated on the vertical line of the corner connecting line;
and calculating the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line.
7. The method of claim 6, wherein calculating the upper eyelid pixel value of the corresponding upper eyelid area or the lower eyelid pixel value of the corresponding lower eyelid area from the projection onto the corner line and the projection onto the vertical of the corner line comprises:
when the projection on the vertical line of the corner connecting line is smaller than a preset value, determining that the pixel value corresponding to the current pixel point to be operated is the upper eyelid pixel value of the upper eyelid area, and calculating the upper eyelid pixel value of the corresponding upper eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line;
when the projection on the vertical line of the corner connecting line is greater than or equal to the preset value, determining that the pixel value corresponding to the current pixel point to be operated is the lower eyelid pixel value of the lower eyelid area, and calculating the lower eyelid pixel value of the corresponding lower eyelid area according to the projection on the corner connecting line and the projection on the vertical line of the corner connecting line.
9. A closed-eye image generation apparatus, comprising:
an eye key point acquisition module, configured to acquire eye key point coordinates in an image to be processed, wherein the eye key point coordinates comprise eye-corner key point coordinates, eyelid key point coordinates and eyebrow key point coordinates;
an operation region determination module, configured to determine an operation region for the eye-closing transformation according to the eye key point coordinates, wherein the operation region is determined from the eye-corner connecting line, an upward extension length determined from the maximum distance between the upper eyelid points and the eye-corner connecting line and the maximum distance between the eyebrow points and the eye-corner connecting line, and a downward extension length determined from the maximum distance between the lower eyelid points and the eye-corner connecting line;
a coordinate acquisition module, configured to acquire the coordinates of the current pixel point to be operated on within the operation region; and
a closed-eye image generation module, configured to replace the pixel value of the current pixel point to be operated on with the eyelid pixel value of the corresponding eyelid region or the eyelid pixel value of the eyelid region according to the positional relationship between the coordinates of the current pixel point to be operated on and the eye-corner key point coordinates, so as to generate a closed-eye image, wherein the eyelid region and the corresponding eyelid region are regions determined according to the eye key point coordinates.
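The operation region described in the apparatus claim can be pictured as a band around the eye-corner connecting line: it extends upward by a length derived from the larger of the maximum upper-eyelid and maximum eyebrow distances to that line, and downward by the maximum lower-eyelid distance. The sketch below is one plausible reading of that construction; the function names and the `margin` safety factor are assumptions not stated in the patent:

```python
import numpy as np

def point_line_distances(points, a, b):
    """Perpendicular distances from an array of points to the line
    through a and b (here, the two eye corners)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    u = (b - a) / np.linalg.norm(b - a)
    n = np.array([-u[1], u[0]])           # unit normal to the corner line
    return np.abs((np.asarray(points, dtype=float) - a) @ n)

def operation_region_extents(inner_corner, outer_corner,
                             upper_eyelid_pts, lower_eyelid_pts,
                             brow_pts, margin=1.0):
    """Upward/downward extension lengths of the operation region:
    up   = larger of max upper-eyelid and max eyebrow distances,
    down = max lower-eyelid distance, each scaled by an assumed margin."""
    up = margin * max(
        point_line_distances(upper_eyelid_pts, inner_corner, outer_corner).max(),
        point_line_distances(brow_pts, inner_corner, outer_corner).max(),
    )
    down = margin * point_line_distances(
        lower_eyelid_pts, inner_corner, outer_corner).max()
    return up, down
```

Taking the maximum over the eyebrow points as well as the upper eyelid points guarantees the band is tall enough to cover every pixel the eye-closing transformation may need to rewrite.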
9. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the electronic device to perform the method for generating a closed-eye image according to any one of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by one or more processors, cause the processors to perform the method for generating a closed-eye image according to any one of claims 1 to 7.
CN202010113899.3A 2020-02-24 2020-02-24 Method and device for generating closed-eye image Active CN111340688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010113899.3A CN111340688B (en) 2020-02-24 2020-02-24 Method and device for generating closed-eye image

Publications (2)

Publication Number Publication Date
CN111340688A CN111340688A (en) 2020-06-26
CN111340688B true CN111340688B (en) 2023-08-11

Family

ID=71187068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010113899.3A Active CN111340688B (en) 2020-02-24 2020-02-24 Method and device for generating closed-eye image

Country Status (1)

Country Link
CN (1) CN111340688B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381709B (en) * 2020-11-13 2022-06-21 北京字节跳动网络技术有限公司 Image processing method, model training method, device, equipment and medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN108259758A (en) * 2018-03-18 2018-07-06 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109376624A (en) * 2018-10-09 2019-02-22 三星电子(中国)研发中心 A kind of modification method and device of eye closing photo

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20070230794A1 (en) * 2006-04-04 2007-10-04 Logitech Europe S.A. Real-time automatic facial feature replacement


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant