CN114596602A - Image processing method and device, electronic equipment and readable storage medium - Google Patents

Image processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN114596602A
CN114596602A (application CN202011396250.3A)
Authority
CN
China
Prior art keywords
eye
key points
standard
eyelid
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011396250.3A
Other languages
Chinese (zh)
Inventor
李晓帆
徐子昱
李美娜
娄心怡
宋莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd filed Critical Soyoung Technology Beijing Co Ltd
Priority claimed from CN202011396250.3A
Publication of CN114596602A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4007: Interpolation-based scaling, e.g. bilinear interpolation

Abstract

The invention discloses an image processing method and device, an electronic device, and a readable storage medium. The method comprises: acquiring a target image containing a face region; locating face key points in the target image, and performing interpolation point supplementation on the eye key points among them to generate new eye key points; and fusing a pre-made standard double eyelid material with the eye region in the target image according to the new eye key points to obtain a fused eye region image. When a face image is acquired, interpolation point supplementation is performed on the located eye key points so that the eye contour points become denser. According to the newly generated eye key points, the pre-made standard double eyelid material can therefore be fused onto the eyes of the face image more accurately, so that the generated double eyelid fits and follows the eye more closely and the simulated double eyelid effect is more realistic and lifelike.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image processing method and device, electronic equipment and a readable storage medium.
Background
Eyes are among the most important organs of the human face, and modern aesthetic standards generally consider the double-fold eyelid more attractive than the single-fold eyelid. It is therefore desirable to simulate the appearance of double eyelids directly through image processing, without actually performing plastic surgery.
In the related art, a user manually draws a double eyelid fold line on an image and adjusts the position, brightness, and width of the line so that the "double eyelid" looks natural.
However, drawing a curve on the image to represent the double eyelid fold yields a simulation with poor realism.
Disclosure of Invention
The present invention provides an image processing method and apparatus, an electronic device, and a readable storage medium to address the above deficiencies of the prior art; the objective is achieved by the following technical solutions.
A first aspect of the present invention provides an image processing method, including:
acquiring a target image containing a face area;
positioning face key points in the target image, wherein the face key points in the target image comprise eye key points;
carrying out interpolation point filling on the eye key points to generate new eye key points;
and according to the new eye key points, fusing the pre-made standard double eyelid materials with the eye region in the target image to obtain a fused eye region image.
Optionally, before acquiring the target image containing the face region, the method may further include: acquiring face key points of a standard face image, wherein the face key points of the standard face image include standard eye key points; performing interpolation point supplementation on the standard eye key points to generate new standard eye key points; cropping the standard eye image corresponding to the new standard eye key points from the standard face image; and making the standard double eyelid material with the standard eye image as a template.
Optionally, the fusing the standard double eyelid material with the eye region in the target image according to the new eye key points may include: establishing a mapping relationship between the new standard eye key points and the new eye key points, and fusing the standard double eyelid material to the eye region in the target image according to the mapping relationship.
Optionally, the fusing the standard double eyelid material to the eye region in the target image according to the mapping relationship may include: acquiring a triangulation mesh of the new standard eye key points; acquiring, according to the mapping relationship, the pixel points covered by the triangulation mesh on the eye region in the target image; and, for each acquired pixel point, calculating a target pixel value from the first pixel value of the pixel point and the second pixel value of the corresponding position in the standard double eyelid material.
Optionally, the calculating the target pixel value by using the first pixel value of the pixel point and the second pixel value of the pixel point corresponding to the standard double-eyelid material may include: acquiring a preset rendering weight coefficient; and calculating the target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
Optionally, the calculating the target pixel value by using the first pixel value of the pixel point and the second pixel value of the pixel point corresponding to the standard double-eyelid material may include: receiving a rendering weight coefficient input by a user; and calculating a target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
Optionally, the making of the standard double eyelid material with the standard eye image as a template may include: receiving a standard double eyelid material drawn on the standard eye image by a user; and performing blurring and shadow-gradient processing on the standard double eyelid material.
Optionally, the performing interpolation and point supplementation on the eye key points to generate new eye key points may include: interpolating in the eye key points by adopting a preset interpolation algorithm; moving in a preset direction by taking the eye key points obtained by interpolation as starting points to generate new key points; and taking the eye key points obtained by interpolation and the generated new key points as new eye key points.
A second aspect of the present invention proposes an image processing apparatus, comprising:
the acquisition module is used for acquiring a target image containing a face area;
the positioning module is used for positioning the key points of the human face in the target image, and the key points of the human face in the target image comprise key points of eyes;
the point supplementing module is used for carrying out interpolation point supplementing on the eye key points so as to generate new eye key points;
and the material fusion module is used for fusing the pre-made standard double-eyelid material with the eye region in the target image according to the new eye key point to obtain a fused eye region image.
A third aspect of the present invention proposes an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the program.
A fourth aspect of the present invention proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to the first aspect as described above.
Based on the image processing method and the image processing device of the first aspect and the second aspect, the invention has the following beneficial effects:
when a face image is acquired, interpolation point supplementation is further performed on the located eye key points so that the eye contour points become denser. According to the newly generated eye key points, the pre-made standard double eyelid material can therefore be fused onto the eyes of the face image more accurately, so that the generated double eyelid fits and follows the eye more closely and the effect of simulating a double eyelid with the standard double eyelid material is more realistic and lifelike.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram illustrating an embodiment of a method of image processing according to an exemplary embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a data preparation flow according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating key points of a human face located on a standard face image according to an exemplary embodiment of the present invention;
FIG. 4 illustrates a standard eye keypoint image after interpolation for point patching according to an exemplary embodiment of the present invention;
FIG. 5 illustrates a triangulated standard eye region image in accordance with an exemplary embodiment of the present invention;
FIG. 6 illustrates a double eyelid material image according to one exemplary embodiment of the invention;
FIG. 7 is a schematic diagram illustrating a comparison between a target face and a target face after applying double eyelids according to the embodiment of FIG. 1;
FIG. 8 is a diagram illustrating a hardware configuration of an electronic device in accordance with an exemplary embodiment of the present invention;
fig. 9 is a schematic structural diagram illustrating an image processing apparatus according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
To solve the problem that drawing a curve on an image to simulate double eyelid folds yields poor realism, the invention provides an improved image processing method: when a face image is acquired, interpolation point supplementation is performed on the located eye key points so that the eye contour points become denser; according to the newly generated eye key points, a pre-made standard double eyelid material can then be fused onto the eyes of the face image more accurately, so that the generated double eyelid fits and follows the eye more closely and the simulated double eyelid effect is more realistic and lifelike.
The image processing method proposed by the present invention is explained in detail below with specific embodiments.
Fig. 1 is a flowchart illustrating an embodiment of an image processing method according to an exemplary embodiment of the present invention. The method may be applied to an electronic device, which may be a terminal such as a smart phone or tablet computer, a server, or an electronic system comprising both a terminal and a server. Referring to fig. 1, the image processing method includes the following steps:
step 101: and acquiring a target image containing the human face area.
In this embodiment, an application (app) may be installed on the electronic device, and a double eyelid function may be provided in the app; the function may be triggered by a function key or a set gesture, which is not limited herein.
In this embodiment, the electronic device may periodically or in real time detect whether the double eyelid function is triggered, and when not triggered, the electronic device continues to detect; when triggered, such as a function key is clicked or a set gesture is detected, the electronic device may capture a target image.
The manner of acquiring the target image may include:
and acquiring an image acquired by the camera, detecting a face in the image, and determining the image containing the face area as a target image if the face is detected.
In an example, the acquired image may be an image acquired by a camera in real time, and may also be an offline image, which is not limited in the present invention.
Before the double eyelid is added to the target image, the previous data preparation is required. Referring to fig. 2, the data preparation process may include the following steps:
step 201: and acquiring the face key points of the standard face image, wherein the face key points of the standard face image comprise standard eye key points.
In this embodiment, the electronic device may perform face key point location on a standard face image. In an example, the face key points located on the standard face image shown in fig. 3 may include points around the face contour, the eyes, the nose region, the mouth, the eyebrows, and the like. The key points around an eye include two key points at the left and right eye corners, one key point at the highest position of the upper eyelid with two key points to its left and right, and one key point at the lowest position of the lower eyelid with two key points to its left and right.
Step 202: and carrying out interpolation point supplementing on the key points of the standard eye to generate new key points of the standard eye.
In some embodiments, the standard eye key points may be obtained from the located key points, and a preset interpolation algorithm is used to interpolate among them; the interpolated standard eye key points are then used as starting points and moved in preset directions to generate new key points; finally, the interpolated standard eye key points and the generated new key points together serve as the new standard eye key points. This makes the eye contour points denser, which facilitates the subsequent fusion of the standard double eyelid material onto the eyes of the target image.
The preset interpolation algorithm may adopt a cubic bezier curve, and the formula is as follows:
B_3(t) = (1-t)^3 · P_0 + 3t(1-t)^2 · P_1 + 3t^2(1-t) · P_2 + t^3 · P_3,  t ∈ [0, 1]
where P_0, P_1, P_2 and P_3 are four consecutive standard eye key points and t is the interpolation coefficient. That is, a new standard eye key point can be interpolated from every four consecutive standard eye key points.
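The cubic Bezier evaluation above can be sketched in Python as follows; the control-point coordinates in the usage example are invented for illustration and are not values from the patent.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate B_3(t) = (1-t)^3*P0 + 3t(1-t)^2*P1 + 3t^2(1-t)*P2 + t^3*P3."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    u = 1.0 - t
    return u**3 * p0 + 3.0 * t * u**2 * p1 + 3.0 * t**2 * u * p2 + t**3 * p3

# Interpolate one new key point from four consecutive eye key points
# (illustrative 2-D coordinates, not real landmark positions).
new_pt = cubic_bezier((0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0), t=0.5)
```

At t = 0 the curve passes through P_0 and at t = 1 through P_3, so intermediate values of t yield new points lying between the four key points.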
Moving the standard eye key points obtained by interpolation to generate new key points, wherein the moving formula is as follows:
P_0' = P_0 + Dir · len
where P_0 is the eye key point to be moved (the starting point), Dir is the unit vector of the moving direction, and len is the moving distance.
In an example, fig. 4 shows the standard eye key point image after interpolation and point supplementation. Specifically, key points on the upper eyelid are moved upward by a preset distance, key points on the lower eyelid are moved downward by a preset distance, and key points at the eye corners are moved laterally outward by a preset distance.
Note that the distance moved up or down should not exceed the distance between the eyebrow and the eye.
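The move formula P_0' = P_0 + Dir · len might be sketched as below; the directions and distances are illustrative assumptions (image coordinates with y growing downward), not values prescribed by the patent.

```python
import numpy as np

def move_key_point(p0, direction, length):
    """Return P0' = P0 + Dir * len, where Dir is the unit vector of `direction`."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # normalise to a unit vector
    return np.asarray(p0, dtype=float) + d * length

# Illustrative moves (y axis points downward, as in image coordinates):
upper = move_key_point((5.0, 10.0), (0.0, -1.0), 2.0)   # upper-eyelid point moved up
lower = move_key_point((5.0, 20.0), (0.0, 1.0), 2.0)    # lower-eyelid point moved down
corner = move_key_point((0.0, 15.0), (-1.0, 0.0), 1.5)  # corner point moved outward
```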
Step 203: and intercepting the standard eye image corresponding to the new standard eye key point from the standard face image.
Step 204: and (4) making a standard double-eyelid material by taking the standard eye image as a template.
In an embodiment, the standard eye image may be output and displayed, the standard double eyelid material drawn on it by the user may then be received, and blurring and shadow-gradient processing applied to the material. This makes the combination of the double eyelid material and the standard eye look more natural, so that the material, once fused onto the eyes of the target image, fits and follows them more closely. Fig. 6 shows the generated double eyelid material.
In the standard double eyelid material, only the pixels at the double eyelid position are opaque; pixels at all other positions are transparent. Therefore, when the standard double eyelid material is fused with the eye region of the target image, only the double eyelid pixels are blended into the target image, and pixels at other positions do not affect the pixels of the target image.
Step 102: and positioning the key points of the face in the target image, and performing interpolation point-supplementing on the key points of the eye to generate new key points of the eye, wherein the key points of the face in the target image comprise the key points of the eye.
In step 102, the face in the target image is processed using the same principle as described for the standard face image in steps 201 to 202: a preset interpolation algorithm is used to interpolate among the eye key points; the interpolated eye key points are used as starting points and moved in preset directions to generate new key points; finally, the interpolated eye key points and the generated new key points together serve as the new eye key points. This makes the eye contour points denser, so that according to the new eye key points the standard double eyelid material can be fused onto the eyes of the face image more accurately, and the generated double eyelid fits and follows the eye more closely.
In this embodiment, since the standard face image and the target image adopt the same processing principle, the number of the acquired new standard eye key points is the same as the number of the new eye key points, that is, the number of the key points on the upper eyelid of the standard eye is the same as the number of the key points on the upper eyelid of the target image, and the number of the key points on the lower eyelid is also the same.
Step 103: and according to the new eye key points, fusing the pre-made standard double eyelid materials with the eye region in the target image to obtain a fused eye region image.
Before step 103 is executed, an eye image corresponding to the eye key points may be cropped from the target image and the eyelid type of the eyes in it determined; if the eyelid type is single eyelid, step 103 is executed.
In one example, an eye region image may be cropped from the target image and input to a pre-trained eyelid recognition model, which identifies the eyelid type of the eyes in the image.
In one embodiment, the fusion process may include: establishing a mapping relationship between the new standard eye key points and the new eye key points, and fusing the standard double eyelid material to the eye region in the target image according to that mapping. Because the standard double eyelid material is made from the standard eye image, it can be fused onto the eyes of the target image accurately through the mapping between the standard eye key points and the eye key points.
Since the number of new standard eye key points equals the number of new eye key points, a one-to-one mapping can be established between them; for example, the key points on the standard upper eyelid correspond one to one with the key points on the upper eyelid in the target image. In an example, index numbers may be assigned to each standard eye key point and each eye key point in the target image in a preset order, and a mapping established between key points sharing the same index number.
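The index-based mapping just described can be illustrated with a short sketch; the coordinates are made up, and only the equal-length, same-order assumption comes from the text.

```python
# Key points are indexed in the same preset order on both eyes, so equal
# indices correspond one to one (coordinates below are invented examples).
standard_eye_pts = [(10.0, 5.0), (12.0, 3.5), (14.0, 3.0), (16.0, 3.5), (18.0, 5.0)]
target_eye_pts = [(100.0, 52.0), (118.0, 40.0), (140.0, 36.0), (162.0, 41.0), (180.0, 53.0)]

assert len(standard_eye_pts) == len(target_eye_pts)  # guaranteed by identical processing
mapping = dict(enumerate(zip(standard_eye_pts, target_eye_pts)))
```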
In some embodiments, the process of fusing according to the mapping relationship may include: acquiring the triangulation mesh of the new standard eye key points; acquiring, according to the mapping relationship, the pixel points covered by the triangulation mesh on the eye region in the target image; and, for each acquired pixel point, calculating a target pixel value from the first pixel value of the pixel point and the second pixel value of the corresponding position in the standard double eyelid material. Because the triangulation mesh is built from the standard eye key points, which map one to one onto the eye key points of the target image, the covered pixel points can be obtained through the mapping and fused, making the resulting double eyelid effect more realistic.
The triangulation mesh of the standard eye key points may be generated with a triangulation method; it consists of multiple triangles whose vertices are standard eye key points. Fig. 5 shows the triangulated standard eye region image.
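The patent does not fix a particular triangulation routine; in practice a Delaunay implementation would typically be used. As a dependency-free sketch of the mesh data structure (a list of vertex-index triples), a simple fan triangulation of a convex eye contour can serve, noting that this is an illustration and not the patent's triangulation algorithm:

```python
def fan_triangulate(num_points):
    """Fan-triangulate a convex polygon whose vertices are key points 0..num_points-1.

    Returns index triples (i0, i1, i2); a contour of n points yields n-2 triangles.
    """
    return [(0, i, i + 1) for i in range(1, num_points - 1)]

triangles = fan_triangulate(6)  # a 6-point contour yields 4 triangles
```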
In an example, the calculation process for the target pixel value in the fusion process may include: and obtaining a preset rendering weight coefficient, and calculating a target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value. And the rendering weight coefficient is used for representing the rendering degree.
In another example, the rendering weight coefficient may also be adjusted by a user to meet an actual adjustment requirement of the user, so that the target pixel value of the pixel point may also be calculated according to the received rendering weight coefficient, the first pixel value, and the second pixel value after receiving the rendering weight coefficient input by the user. For example, the user adjusts the required rendering weight coefficients by triggering a slider bar.
That is, the rendering weight coefficient used in the calculation may be either a preset value or a value received from and adjusted by the user.
Further, a target pixel value of the pixel point is calculated according to the rendering weight coefficient, the first pixel value and the second pixel value, and the calculation formula is as follows:
P_target = α · (P_standard_eye · P_target_eye) / 255
where α is the rendering weight coefficient, P_standard_eye is the second pixel value (taken from the standard double eyelid material) and P_target_eye is the first pixel value (taken from the target image). In an example, since the standard double eyelid material is made with the standard eye as a template, the standard eye triangulation mesh can likewise be mapped onto the standard double eyelid material; the positions in the material that correspond to the pixel points covered by the mesh on the eye region of the target image can then be determined from the triangles of the mesh.
It should be noted that during pixel fusion only the pixels at the double eyelid position in the standard double eyelid material are opaque; pixels at all other positions are transparent. Consequently, only the pixels corresponding to the double eyelid position are fused into the eye pixels of the target image, while pixels at other positions leave the eye pixels of the target image unchanged.
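Assuming 8-bit NumPy arrays and a boolean opacity mask derived from the material's transparent pixels (an assumption about the representation, not stated in the patent), the weighted fusion formula could be sketched as:

```python
import numpy as np

def fuse_eye_region(target, material, opaque_mask, alpha):
    """Apply P_target = alpha * (P_standard_eye * P_target_eye) / 255 where the
    material is opaque; transparent material pixels leave the target unchanged."""
    t = target.astype(np.float64)
    m = material.astype(np.float64)
    fused = alpha * m * t / 255.0
    out = np.where(opaque_mask, fused, t)  # keep original pixels under transparency
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

# Single-channel toy example with invented values; the second pixel is
# transparent in the material and therefore stays unchanged.
target = np.array([[200, 200]], dtype=np.uint8)
material = np.array([[128, 128]], dtype=np.uint8)
mask = np.array([[True, False]])
result = fuse_eye_region(target, material, mask, alpha=1.0)
```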
In an exemplary scenario shown in fig. 7, diagram (a) is the acquired target face image, whose eyes are of the single eyelid type; after processing by the above embodiment, the double eyelid effect shown in diagram (b) is obtained, and the result is realistic.
This completes the processing procedure shown in fig. 1. Through this procedure, when a face image is acquired, interpolation point supplementation is further performed on the located eye key points so that the eye contour points become denser. According to the newly generated eye key points, the pre-made standard double eyelid material can then be fused onto the eyes of the face image more accurately, so that the generated double eyelid fits and follows the eye more closely and the effect of simulating a double eyelid with the standard double eyelid material is more realistic and lifelike.
Fig. 8 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present invention, the electronic device including: a communication interface 401, a processor 402, a machine-readable storage medium 403, and a bus 404; wherein the communication interface 401, the processor 402 and the machine-readable storage medium 403 communicate with each other via a bus 404. The processor 402 can execute the image processing method described above by reading and executing machine executable instructions corresponding to the control logic of the image processing method in the machine readable storage medium 403, and the specific content of the method is referred to the above embodiments, which will not be described herein again.
The machine-readable storage medium 403 referred to in this disclosure may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: volatile memory, non-volatile memory, or similar storage media. In particular, the machine-readable storage medium 403 may be a RAM (Random Access Memory), a flash Memory, a storage drive (e.g., a hard disk drive), any type of storage disk (e.g., an optical disk, a DVD, etc.), or similar storage medium, or a combination thereof.
Corresponding to the embodiment of the image processing method, the invention also provides an embodiment of the image processing device.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present invention, and referring to fig. 9, the image processing apparatus includes:
an obtaining module 910, configured to obtain a target image including a face region;
a positioning module 920, configured to position face key points in the target image, where the face key points in the target image include eye key points;
a point supplementing module 930, configured to perform interpolation point supplementing on the eye key points to generate new eye key points;
and a material fusion module 940, configured to fuse a pre-made standard double eyelid material with the eye region in the target image according to the new eye keypoints to obtain a fused eye region image.
In an alternative implementation, the apparatus further comprises (not shown in fig. 9):
a data preparation module, configured to obtain face key points of a standard face image before the obtaining module 910 obtains a target image including a face region, where the face key points of the standard face image include standard eye key points; perform interpolation point supplementation on the standard eye key points to generate new standard eye key points; crop a standard eye image corresponding to the new standard eye key points from the standard face image; and make the standard double eyelid material with the standard eye image as a template.
In an optional implementation manner, the material fusion module 940 is specifically configured to establish a mapping relationship between a new standard eye key point and a new eye key point, and fuse the standard double eyelid material to the eye region in the target image according to the mapping relationship.
In an optional implementation manner, the material fusion module 940 is specifically configured to obtain the new triangulation mesh of the standard eye keypoints in the process of fusing the standard double-eyelid material to the eye region in the target image according to the mapping relationship; acquiring pixel points covered by the triangulation grids on the eye region in the target image according to the mapping relation; and aiming at each acquired pixel point, calculating a target pixel value by using a first pixel value of the pixel point and a second pixel value of the pixel point, which corresponds to the standard double-fold eyelid material.
In an optional implementation, the material fusion module 940 is specifically configured to, in the process of calculating the target pixel value from the first pixel value of the pixel point and the second pixel value corresponding to the pixel point in the standard double-eyelid material: acquire a preset rendering weight coefficient; and calculate the target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
In an optional implementation, the material fusion module 940 is specifically configured to, in the process of calculating the target pixel value from the first pixel value of the pixel point and the second pixel value corresponding to the pixel point in the standard double-eyelid material: receive a rendering weight coefficient input by a user; and calculate the target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
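Whether the rendering weight coefficient is preset or user-supplied, the per-pixel combination can be sketched as a linear blend of the eye-region pixel (first pixel value) and the material pixel (second pixel value). The linear form is my assumption; the patent only states that the target value is computed from the coefficient and the two pixel values.

```python
def blend_pixel(first, second, weight):
    """first: eye-region pixel, second: material pixel (both RGB tuples);
    weight: rendering weight coefficient in [0, 1]."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("rendering weight coefficient must lie in [0, 1]")
    # Per-channel linear blend: a larger weight favours the material.
    return tuple(round((1.0 - weight) * a + weight * b)
                 for a, b in zip(first, second))

blended = blend_pixel((200, 150, 120), (80, 60, 40), 0.5)
```

Exposing `weight` directly to the user (claim 6) gives an adjustable "intensity" slider for the double-eyelid effect.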
In an optional implementation, the point supplementing module 930 is specifically configured to: interpolate among the eye key points by using a preset interpolation algorithm; take each eye key point obtained by interpolation as a starting point and move it in a preset direction to generate a new key point; and use the eye key points obtained by interpolation, together with the generated new key points, as the new eye key points.
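A minimal sketch of this point-supplementing step, assuming the simplest choices (midpoint interpolation between adjacent key points, and an upward shift so the supplemented points cover the band above the eye where the double-eyelid fold sits); the patent leaves the interpolation algorithm, direction and distance unspecified.

```python
def supplement_eye_keypoints(points, direction=(0.0, -1.0), distance=2.0):
    """points: ordered (x, y) eye key points. Returns the new eye key points:
    interpolated points plus copies shifted along `direction`."""
    interpolated = [
        ((x1 + x2) / 2.0, (y1 + y2) / 2.0)   # midpoint between neighbours
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    ]
    dx, dy = direction
    shifted = [(x + dx * distance, y + dy * distance) for x, y in interpolated]
    return interpolated + shifted

new_pts = supplement_eye_keypoints([(0.0, 0.0), (4.0, 2.0), (8.0, 0.0)])
```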
The implementation of the functions and roles of each unit in the above apparatus is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, refer to the relevant parts of the method-embodiment description. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the invention. One of ordinary skill in the art can understand and implement this without inventive effort.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the above-described embodiments.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a target image containing a face area;
positioning face key points in the target image, wherein the face key points in the target image comprise eye key points;
performing interpolation point supplementing on the eye key points to generate new eye key points;
and fusing a pre-made standard double-eyelid material with the eye region in the target image according to the new eye key points, to obtain a fused eye region image.
2. The method of claim 1, wherein prior to acquiring the target image containing the face region, the method further comprises:
acquiring face key points of a standard face image, wherein the face key points of the standard face image comprise standard eye key points;
performing interpolation point supplementing on the standard eye key points to generate new standard eye key points;
cropping a standard eye image corresponding to the new standard eye key points from the standard face image;
and creating the standard double-eyelid material by using the standard eye image as a template.
3. The method of claim 2, wherein said fusing the standard double-eyelid material with the eye region in the target image according to the new eye key points comprises:
establishing a mapping relationship between the new standard eye key points and the new eye key points, and fusing the standard double-eyelid material onto the eye region in the target image according to the mapping relationship.
4. The method of claim 3, wherein said fusing the standard double-eyelid material onto the eye region in the target image according to the mapping relationship comprises:
acquiring a triangulation mesh of the new standard eye key points;
acquiring, according to the mapping relationship, the pixel points covered by the triangulation mesh in the eye region of the target image;
and, for each acquired pixel point, calculating a target pixel value by using a first pixel value of the pixel point and a second pixel value corresponding to the pixel point in the standard double-eyelid material.
5. The method of claim 4, wherein calculating the target pixel value by using the first pixel value of the pixel point and the second pixel value corresponding to the pixel point in the standard double-eyelid material comprises:
acquiring a preset rendering weight coefficient;
and calculating the target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
6. The method of claim 4, wherein calculating the target pixel value by using the first pixel value of the pixel point and the second pixel value corresponding to the pixel point in the standard double-eyelid material comprises:
receiving a rendering weight coefficient input by a user;
and calculating the target pixel value of the pixel point according to the rendering weight coefficient, the first pixel value and the second pixel value.
7. The method of claim 2, wherein creating the standard double-eyelid material by using the standard eye image as a template comprises:
receiving a standard double-eyelid material drawn on the standard eye image by a user;
and performing blurring and shadow-gradient processing on the standard double-eyelid material.
8. The method of claim 1, wherein said interpolating said eye keypoints to generate new eye keypoints comprises:
interpolating among the eye key points by using a preset interpolation algorithm;
taking each eye key point obtained by interpolation as a starting point and moving it in a preset direction to generate a new key point;
and using the eye key points obtained by interpolation, together with the generated new key points, as the new eye key points.
9. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image containing a face area;
the positioning module is used for locating the face key points in the target image, wherein the face key points in the target image comprise eye key points;
the point supplementing module is used for performing interpolation point supplementing on the eye key points to generate new eye key points;
and the material fusion module is used for fusing a pre-made standard double-eyelid material with the eye region in the target image according to the new eye key points, to obtain a fused eye region image.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-8 are implemented when the processor executes the program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202011396250.3A 2020-12-03 2020-12-03 Image processing method and device, electronic equipment and readable storage medium Pending CN114596602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011396250.3A CN114596602A (en) 2020-12-03 2020-12-03 Image processing method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN114596602A true CN114596602A (en) 2022-06-07

Family

ID=81803115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011396250.3A Pending CN114596602A (en) 2020-12-03 2020-12-03 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114596602A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117334023A (en) * 2023-12-01 2024-01-02 四川省医学科学院·四川省人民医院 Eye behavior monitoring method and system


Similar Documents

Publication Publication Date Title
CN108305312B (en) Method and device for generating 3D virtual image
US20220189095A1 (en) Method and computer program product for producing 3 dimensional model data of a garment
CN106030661B (en) The independent 3D scene texture background in the visual field
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
CN108986016B (en) Image beautifying method and device and electronic equipment
AU2018241115A1 (en) Smart guide to capture digital images that align with a target image model
CN107610202B (en) Face image replacement method, device and storage medium
JP2019527410A (en) Method for hiding objects in images or videos and related augmented reality methods
CN107564080B (en) Face image replacement system
CN107993216A (en) A kind of image interfusion method and its equipment, storage medium, terminal
CN109801380A (en) A kind of method, apparatus of virtual fitting, storage medium and computer equipment
CN113628327B (en) Head three-dimensional reconstruction method and device
WO2020024569A1 (en) Method and device for dynamically generating three-dimensional face model, and electronic device
CN107507216A (en) The replacement method of regional area, device and storage medium in image
CN106910102A (en) The virtual try-in method of glasses and device
CN108463823A (en) A kind of method for reconstructing, device and the terminal of user's Hair model
CN107302694B (en) Method, equipment and the virtual reality device of scene are presented by virtual reality device
CN110223372A (en) Method, apparatus, equipment and the storage medium of model rendering
CN110275968A (en) Image processing method and device
CN107609490A (en) Control method, control device, Intelligent mirror and computer-readable recording medium
WO2018080849A1 (en) Simulating depth of field
CN108170282A (en) For controlling the method and apparatus of three-dimensional scenic
CN114283052A (en) Method and device for cosmetic transfer and training of cosmetic transfer network
KR20230085931A (en) Method and system for extracting color from face images
CN112749611A (en) Face point cloud model generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination