CN111104878A - Image processing method, device, server and storage medium - Google Patents


Info

Publication number
CN111104878A
Authority
CN
China
Prior art keywords
image
face
region
eye
nose
Prior art date
Legal status
Pending
Application number
CN201911243117.1A
Other languages
Chinese (zh)
Inventor
钟艺豪
陈维江
刘笑笑
李百川
赖奂升
Current Assignee
Jiangxi Kaixin Corn Network Technology Co Ltd
Original Assignee
Jiangxi Kaixin Corn Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangxi Kaixin Corn Network Technology Co Ltd
Priority to CN201911243117.1A
Publication of CN111104878A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present application provides an image processing method, apparatus, server, and storage medium. A face region of a first image is determined, and the user characteristics of the user to whom the face in that region belongs are acquired; a third image matching those characteristics is then selected from at least one preset second image, and the face region of the third image is replaced with the face region of the first image to generate a fourth image. The scheme swaps faces automatically, without relying on manual user operation, and thereby improves the efficiency of image face swapping.

Description

Image processing method, device, server and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a server, and a storage medium.
Background
With the development of network and computer technologies, image face swapping has gradually become a new hotspot of social entertainment, and various applications offering a face-swapping function have appeared. Although current face-swapping technology can replace the face in one image with the face in another, it relies on manual participation by the user, so its efficiency is low. In particular, when a user is unfamiliar with the face-swapping function provided by an application on an electronic device, the user experience suffers severely, which reduces the user's stickiness to the application.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, an image processing apparatus, a server, and a storage medium, so as to improve the efficiency of image face swapping.
In order to achieve the above object, the following solutions are proposed:
a first aspect of the present invention discloses an image processing method, including:
determining a face region of a first image;
acquiring user characteristics of the user to whom the face in the face region of the first image belongs;
selecting a third image matching the user characteristics from at least one preset second image;
and replacing the face region of the third image with the face region of the first image to generate a fourth image.
Optionally, the user characteristics include any one or more of gender, age, and face shape.
Optionally, the method further includes:
detecting whether the face in the face region of the first image is a side face;
if the face in the face region of the first image is a side face, generating prompt information indicating that the face in the face region of the first image is a side face;
the acquiring of the user characteristics of the user to whom the face in the face region of the first image belongs includes: if the face in the face region of the first image is not a side face, acquiring the user characteristics of the user to whom that face belongs.
Optionally, the acquiring of the user characteristics of the user to whom the face in the face region of the first image belongs includes:
acquiring, in the first image, an identification region covering the face region of the first image, the area of the identification region being larger than that of the face region;
and acquiring the user characteristics of the user to whom the face belongs based on the image content of the identification region in the first image.
Optionally, the replacing of the face region of the third image with the face region of the first image to generate a fourth image includes:
acquiring at least one key point of the face in the face region of the first image, the at least one key point representing the facial features of that face;
acquiring the eye-nose-mouth triangular region of the face in the face region of the first image according to the at least one key point;
and replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the eye-nose-mouth triangular region of the face in the face region of the first image to generate a fourth image.
Optionally, the method further includes:
performing a dilation operation on the eye-nose-mouth triangular region of the face in the face region of the first image to generate a first eye-nose-mouth triangular region;
acquiring the ratio of the face region of the third image to the face region of the first image;
scaling the first eye-nose-mouth triangular region according to the ratio to generate a second eye-nose-mouth triangular region;
the replacing of the eye-nose-mouth triangular region of the face in the face region of the third image with the eye-nose-mouth triangular region of the face in the face region of the first image to generate a fourth image then includes: replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the second eye-nose-mouth triangular region to generate the fourth image.
Optionally, the replacing of the eye-nose-mouth triangular region of the face in the face region of the third image with the second eye-nose-mouth triangular region to generate a fourth image includes:
acquiring at least one key point of the face in the preset face region of the third image;
constructing an affine transformation matrix based on the at least one key point of the face in the face region of the first image and the at least one key point of the face in the face region of the third image;
performing an affine transformation on the second eye-nose-mouth triangular region according to the affine transformation matrix to generate a third eye-nose-mouth triangular region;
and replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the third eye-nose-mouth triangular region to generate the fourth image.
Optionally, the method further includes:
performing color correction on the third eye-nose-mouth triangular region according to the third image to obtain a fourth eye-nose-mouth triangular region;
the replacing of the eye-nose-mouth triangular region of the face in the face region of the third image with the third eye-nose-mouth triangular region to generate a fourth image then includes:
performing Poisson fusion on the fourth eye-nose-mouth triangular region and the eye-nose-mouth triangular region of the face in the face region of the third image, replacing the latter with the fourth eye-nose-mouth triangular region to generate the fourth image.
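For illustration only, the affine-matrix construction at the heart of these optional steps can be sketched in numpy. The function names and the least-squares formulation are assumptions made for the sketch; the patent does not prescribe how the matrix is built, and a production implementation would more likely rely on OpenCV routines such as cv2.estimateAffinePartial2D for the fit and cv2.seamlessClone for the Poisson fusion.

```python
import numpy as np

def affine_from_keypoints(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Solve the 2x3 affine matrix M minimising ||[src | 1] @ M.T - dst||.

    src_pts, dst_pts: (N, 2) arrays of matched face keypoints, N >= 3.
    """
    n = len(src_pts)
    a = np.hstack([src_pts, np.ones((n, 1))])        # (N, 3) homogeneous coords
    m, *_ = np.linalg.lstsq(a, dst_pts, rcond=None)  # (3, 2) solution
    return m.T                                        # conventional (2, 3) layout

def warp_points(m: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine matrix to (N, 2) points."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ m.T
```

With exactly three correspondences the system is fully determined; with a full facial-landmark set, the least-squares fit averages out detection noise.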
A second aspect of the present invention discloses an image processing apparatus comprising:
a face region determination unit, configured to determine a face region of a first image;
a first acquisition unit, configured to acquire user characteristics of the user to whom the face in the face region of the first image belongs;
an image selection unit, configured to select a third image matching the user characteristics from at least one preset second image;
and an image processing unit, configured to replace the face region of the third image with the face region of the first image to generate a fourth image.
A third aspect of the present invention discloses a server, including at least one memory and at least one processor; the memory stores a program, and the processor invokes the program stored in the memory, the program being used to implement the image processing method disclosed in any implementation of the first aspect of the present invention.
A fourth aspect of the present invention discloses a computer-readable storage medium having stored thereon computer-executable instructions for performing the image processing method disclosed in any implementation of the first aspect of the present invention.
In summary, the present application provides an image processing method, apparatus, server, and storage medium. A face region of a first image is determined, the user characteristics of the user to whom the face in that region belongs are acquired, a third image matching those characteristics is selected from at least one preset second image, and the face region of the third image is replaced with the face region of the first image. Face swapping is thus achieved automatically, without manual user operation, which improves the efficiency of image face swapping.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present application;
Fig. 3 is a flowchart of yet another image processing method according to an embodiment of the present application;
fig. 4 is a flowchart of a method for generating a fourth image by replacing an eye-nose-mouth triangular region of a face in a face region of a third image with a second eye-nose-mouth triangular region according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of a hardware structure of a server to which an image processing method according to an embodiment of the present application is applied.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments herein without creative effort shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
Embodiments:
the current image face changing technology needs to depend on manual participation of a user, and the image face changing efficiency is low. For example, when the user is an old person, the user may not use the electronic device smoothly, and is unfamiliar with the image face changing function provided by the application of the electronic device, if the user needs to manually participate in the image face changing function, the user experience is poor, and the stickiness of the user to the application is reduced.
Therefore, the embodiments of the present application provide an image processing method, apparatus, server, and storage medium that perform image face swapping, and improve its efficiency, without relying on manual participation by the user. This also makes face swapping more convenient and increases user stickiness for applications that provide the function.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 1, the method includes:
s101, determining a face area of a first image;
according to the embodiment of the application, a face detection model is preset, a first image is input into the face detection model, and a face area in the first image is detected by the face detection model.
It should be noted that the region where the face is located in the image is a face region of the image. In the embodiment of the present application, the number of faces displayed in the face area of the first image is 1.
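As an illustrative sketch only, the single-face constraint of this step can be expressed as a check on the detector's output. The detector itself (e.g. a cascade or CNN model) and the (x, y, w, h) box convention are assumptions not fixed by the text:

```python
import numpy as np

def validate_face_region(image: np.ndarray, boxes):
    """Check that a face detector returned exactly one box inside the image.

    `boxes` is the detector output as (x, y, w, h) tuples; which detector
    produced it is left open, matching the text's unspecified model.
    """
    if len(boxes) != 1:
        raise ValueError(f"expected exactly one face, got {len(boxes)}")
    x, y, w, h = boxes[0]
    ih, iw = image.shape[:2]
    if not (x >= 0 and y >= 0 and w > 0 and h > 0 and x + w <= iw and y + h <= ih):
        raise ValueError("face box lies outside the image")
    return boxes[0]
```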
S102: acquiring user characteristics of a user to which a face belongs in a face region of a first image;
the user characteristics of the user may be any one or more of gender, age, and face shape of the user to which the face belongs.
In the specific execution of step S102, an identification region covering the face region of the first image is first acquired in the first image; the user characteristics of the user to whom the face in the face region belongs are then acquired based on the image content of that identification region.
In the embodiment of the present application, when the user characteristic is a gender, acquiring the user characteristic of the user to which the face belongs in the face region of the first image based on the image content of the identification region in the first image includes: and carrying out gender recognition on the face in the recognition area based on a pre-trained gender recognition model to obtain the gender of the user to which the face belongs in the face area of the first image.
As one preferred mode of this embodiment, a gender recognition model is preset; the face region of the first image is input into the gender recognition model, which recognizes the gender of the user to whom the face in the face region belongs. For ease of distinction, the recognized gender is referred to below as the target gender.
As another preferred implementation, after the face region of the first image is determined, the face region is marked in the first image, and the first image with the marked face region is input into the gender recognition model; the model performs gender recognition on the face in the marked region to obtain the target gender of the user to whom that face belongs.
As yet another preferred implementation, after the face region of the first image is determined, a first target image may be generated that displays only the image content of the face region of the first image. The first target image is input into the gender recognition model, which performs gender recognition on the face in the first target image to obtain the target gender of the user to whom the face in the face region of the first image belongs.
The above are merely preferred ways, provided by this embodiment, of recognizing the gender of the user to whom the face in the face region of the first image belongs based on a gender recognition model. The specific way of doing so can be chosen as needed and is not limited here.
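A minimal sketch of the third variant (cropping the face region into a "first target image" before classification). The gender recognition model is represented by an arbitrary stand-in callable, since the text does not specify its architecture:

```python
import numpy as np

def crop_face(image: np.ndarray, box):
    """Build the 'first target image': only the face-region content remains."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def classify_gender(image: np.ndarray, box, model):
    """Feed the cropped face region to a stand-in gender model.

    `model` is any callable mapping an image patch to 'male'/'female';
    the other variants would instead feed the full image, or the image
    with the face region marked, to the same kind of model.
    """
    return model(crop_face(image, box))
```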
S103: selecting a third image matching the user characteristics from at least one preset second image.
In this embodiment of the present application, at least one second image may be preset, each displaying one face. The set includes at least one second image in which the gender of the user to whom the displayed face belongs is "male" and at least one in which it is "female".
As a preferred implementation, when the user characteristic is gender, the third image matching the user characteristic may be selected from the second images as follows: after the target gender of the user to whom the face in the face region of the first image belongs has been recognized, every preset second image in which the gender of the user to whom the displayed face belongs is the target gender can be selected as a third image. For example, if the target gender is "male", each preset second image whose displayed face belongs to a "male" user may be determined to be a third image.
As another preferred implementation, when the user characteristic is gender, the third image may instead be selected as follows: after the target gender of the user to whom the face in the face region of the first image belongs has been recognized, the preset second images in which the gender of the user to whom the displayed face belongs is the target gender are determined, and one of them is then screened out as the third image. For example, if the target gender is "male", the second images whose displayed faces belong to "male" users may be determined, and one of them selected as the third image.
As another preferred implementation, when the user characteristics are age and face shape, the third image may be selected as follows: after the target face shape and target age of the user to whom the face in the face region of the first image belongs have been acquired, the preset second images in which the face shape of the user to whom the displayed face belongs is similar to the target face shape and whose age is close to the target age are determined, and one of them is screened out as the third image. For example, if the target age is "25" and the target face shape is "round face", the second images whose displayed faces have a shape similar to a round face and an age of about 25 may be determined, and one of them selected as the third image.
As another preferred implementation, when the user characteristics are gender and face shape, the third image may be selected as follows: after the target gender and target face shape of the user to whom the face in the face region of the first image belongs have been acquired, every preset second image in which the face shape of the displayed face is similar to the target face shape and the gender of the user to whom that face belongs is the target gender can be selected as a third image. For example, if the target gender is "male" and the target face shape is "round face", each second image whose displayed face belongs to a male user and has a shape similar to a round face may be determined to be a third image.
In this embodiment of the application, the second image that best matches the user's profile may also be screened out from the determined second images as the third image, where the user's profile can be generated from the user's historical behavior information.
The above are merely preferred ways, provided by this embodiment, of selecting the third image from the at least one preset second image. The specific way of selecting it can be chosen as needed and is not limited here.
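The gender-matching and age/face-shape-matching selections above can be sketched as simple filters over preset image metadata. The dictionary layout and the five-year age tolerance are illustrative assumptions; the text only requires matching or "similar" attributes:

```python
def select_third_images(second_images, target_gender):
    """Gender variant: keep every preset second image whose gender matches."""
    return [img for img in second_images if img["gender"] == target_gender]

def select_by_age_and_shape(second_images, target_age, target_shape, age_tol=5):
    """Age + face-shape variant: same shape, age within a tolerance.

    The text only says 'similar'; +/- age_tol years is one concrete reading.
    """
    return [img for img in second_images
            if img["shape"] == target_shape
            and abs(img["age"] - target_age) <= age_tol]
```

A final screening step, such as ranking the survivors against a user profile, could then pick a single third image from the returned list.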
S104: replacing the face region of the third image with the face region of the first image to generate a fourth image.
In this embodiment of the application, if there are multiple third images, the face region of each third image is replaced with the face region of the first image, so one fourth image is generated per third image. The generated fourth images are returned to the user, who selects the desired image from among them.
In this embodiment of the application, when the user characteristic is gender, the faces in all of the "male" second images may be set to belong to one person, and the faces in all of the "female" second images to another. After the target gender of the user to whom the face in the face region of the first image belongs is determined, several second images whose gender is the target gender can then be selected, all of their faces coming from the same person; each selected second image is used as a third image, and its face region is replaced with the face region of the first image to generate a fourth image. The generated fourth images can then be returned to the user, who picks the preferred ones.
It should be noted that even when the user is, for example, an elderly person, this scheme automatically recognizes the gender of the user to whom the face in the face region of the first image belongs, automatically selects the third image according to that gender, and then automatically performs the face-region replacement to generate the fourth image. Face swapping is thus achieved without any manual participation by the user, which effectively improves both the efficiency and the convenience of image face swapping.
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present application.
As shown in fig. 2, the method includes:
s201, determining a face area of a first image;
s202, detecting whether the face in the face area of the first image is a side face; if the face in the face region of the first image is a side face, executing step S206; if the face in the face region of the first image is not a side face, executing step S203;
in the embodiment of the present application, a process of detecting whether a human face in a human face region of a first image is a side face includes, but is not limited to, two embodiments.
In the first implementation, the detection is performed according to the contour points of the face in the face region of the first image and the relative positions of the mouth corner points.
In the second implementation, the detection is performed according to the relative positions of the nose tip point, the left eye, and the right eye of the face in the face region of the first image.
As a preferred implementation, a face detection model may be used to perform face detection on the first image to obtain its face region, and the face region may then be analyzed to obtain at least one key point of the face, the key points representing the facial features of the face in the face region of the first image. The region in which the face is located in the first image is regarded as the face region of the first image.
In this embodiment of the application, the number of key points obtained by detecting the face region may be 68, a common facial-landmark configuration. This is only a preferred choice; the specific number of key points can be set as needed and is not limited here.
In this embodiment of the application, if the side-face detection uses the relative positions of the nose tip point, the left eye, and the right eye, then a first coordinate of the leftmost point of the left eye, a second coordinate of the rightmost point of the right eye, and a third coordinate of the nose tip point are extracted from the at least one key point of the face in the face region of the first image, and side-face information is calculated from these three coordinates. Whether the side-face information is smaller than a preset side-face threshold is then judged: if it is smaller than the threshold, the face in the face region of the first image is determined to be a side face; otherwise, it is determined not to be a side face.
The above is merely a preferred way, provided by this embodiment, of judging whether the face in the face region of the first image is a side face. The specific way of judging it can be chosen as needed and is not limited here.
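The text leaves the "side-face information" formula open; one plausible realisation, sketched below, scores left/right symmetry as the ratio of the nose-tip distances to the two outer eye points. The formula and the 0.5 threshold are assumptions, not taken from the patent:

```python
import numpy as np

def side_face_score(left_eye_pt, right_eye_pt, nose_tip):
    """Symmetry score in (0, 1]: close to 1 for a frontal face, small for a side face.

    Uses the distances from the nose tip to the leftmost point of the left
    eye and the rightmost point of the right eye; on a frontal face the two
    distances are roughly equal, on a side face one shrinks sharply.
    """
    d_left = np.linalg.norm(np.asarray(nose_tip, float) - np.asarray(left_eye_pt, float))
    d_right = np.linalg.norm(np.asarray(nose_tip, float) - np.asarray(right_eye_pt, float))
    return min(d_left, d_right) / max(d_left, d_right)

def is_side_face(left_eye_pt, right_eye_pt, nose_tip, threshold=0.5):
    """Side face if the symmetry score falls below the preset threshold."""
    return side_face_score(left_eye_pt, right_eye_pt, nose_tip) < threshold
```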
S203: acquiring user characteristics of the user to whom the face in the face region of the first image belongs;
S204: selecting a third image matching the user characteristics from at least one preset second image;
S205: replacing the face region of the third image with the face region of the first image to generate a fourth image;
and S206, generating reminding information for reminding that the face in the face area of the first image is a side face.
In this embodiment of the application, if the face in the face region of the first image is a side face, prompt information can be generated indicating this and, further, reminding the user to use a frontal face image. By showing the prompt information, the user learns that the current first image is a side-face image that cannot be processed and needs to be replaced with a frontal-face image. A frontal-face image is one showing the front of the face; a side-face image is one showing the side of the face.
According to the image processing method provided by the embodiment of the application, by further detecting whether the face in the face area of the first image is a side face, it can be ensured that a front face image is adopted during image face changing, thereby guaranteeing the image face changing effect.
Fig. 3 is a flowchart of another image processing method according to an embodiment of the present application.
As shown in fig. 3, the method includes:
S301, determining a face area of the first image;
S302, detecting whether the face in the face area of the first image is a side face or not according to at least one key point of the face in the face area of the first image; if the face in the face region of the first image is a side face, executing step S311; if the face in the face region of the first image is not a side face, executing step S303;
According to the embodiment of the application, if whether the face in the face area of the first image is a side face is detected according to the relative positions of the nose tip point, the left eye and the right eye, then after the at least one key point of the face in the face area of the first image is obtained, the nose tip point, the left-side point of the left eye and the right-side point of the right eye can be extracted from the at least one key point, and whether the face in the face area of the first image is a side face can be detected according to the three extracted points.
S303, acquiring an identification region covering the face region of the first image in the first image, wherein the area of the identification region is larger than that of the face region of the first image;
In the embodiment of the application, a face detection model is used for performing face detection on the first image to obtain a face region in the first image, and an identification region in the first image is obtained based on the face region, wherein the area of the identification region is larger than that of the face region, and the identification region covers the face region. Therefore, the acquired identification region necessarily includes the face region in the first image and may also include an edge area around the face region (for example, where the face region displays only the face, the identification region displays the face together with the hair, the neck, and the like), so that a gender identification result obtained based on the identification region is more accurate than one obtained based on the face region alone.
In this embodiment of the present application, a manner of obtaining the identification region located in the first image based on the face region in the first image may be: acquiring a fourth coordinate of a point positioned at the upper left corner of the face area of the first image in the first image and a fifth coordinate of a point positioned at the lower right corner of the face area of the first image in the first image; and processing the fourth coordinate and the fifth coordinate to obtain a sixth coordinate and a seventh coordinate, wherein the sixth coordinate is the coordinate of the point at the upper left corner of the identification area of the first image in the first image, and the seventh coordinate is the coordinate of the point at the lower right corner of the identification area of the first image in the first image.
As a preferred implementation of the embodiment of the present application, the fourth coordinate includes a fourth abscissa and a fourth ordinate, and the fifth coordinate includes a fifth abscissa and a fifth ordinate; processing the fourth abscissa and the fifth abscissa based on a first processing rule to obtain a sixth abscissa; processing the fourth ordinate and the fifth ordinate based on a second processing rule to obtain a sixth ordinate; the sixth abscissa is an abscissa in the sixth coordinate, and the sixth ordinate is an ordinate in the sixth coordinate; processing the fourth abscissa and the fifth abscissa based on a third processing rule to obtain a seventh abscissa; processing the fourth ordinate and the fifth ordinate based on a fourth processing rule to obtain a seventh ordinate; the seventh abscissa is an abscissa in the seventh coordinate, and the seventh ordinate is an ordinate in the seventh coordinate.
The above is merely a preferred way, provided by the embodiment of the present application, of acquiring the identification region in the first image; a specific way of acquiring the identification region may be set according to actual needs, and is not limited herein.
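One concrete instance of the four processing rules is to push each corner of the face box outward by a fixed fraction of the box size and clamp to the image bounds. The margin value of 0.3 and the clamping behavior are assumptions for illustration, not the patented rules themselves.

```python
def expand_face_box(top_left, bottom_right, image_w, image_h, margin=0.3):
    """Hypothetical processing rules: move the fourth coordinate (upper-left)
    and fifth coordinate (lower-right) outward by `margin` times the box size,
    clamped to the image, yielding the sixth and seventh coordinates of an
    identification region that covers the face region plus its edge area."""
    x1, y1 = top_left          # fourth coordinate
    x2, y2 = bottom_right      # fifth coordinate
    dw = (x2 - x1) * margin
    dh = (y2 - y1) * margin
    sixth = (max(0, x1 - dw), max(0, y1 - dh))                 # new upper-left
    seventh = (min(image_w, x2 + dw), min(image_h, y2 + dh))   # new lower-right
    return sixth, seventh
```

For a 40x40 face box at (10, 10)-(50, 50) inside a 100x100 image with a 25% margin, the identification region becomes (0, 0)-(60, 60): strictly larger than, and covering, the face region.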
S304, acquiring user characteristics of a user to which the face belongs in the face area of the first image based on the image content of the identification area in the first image;
in the embodiment of the application, a trained gender identification model is preset, and when the user characteristic is gender, gender identification is performed on the face in the identification area based on the gender identification model, so that the gender of the user to which the face belongs in the face area of the first image can be obtained. For the purpose of distinction, the gender of the user to which the face belongs in the face region of the obtained first image is temporarily referred to as the target gender.
As a preferred implementation of the embodiment of the present application, the gender identification model may be pre-trained as follows: acquiring an image sample in which a human face is displayed; performing face detection on the image sample according to the face detection model to obtain a face region of the image sample; generating an identification region of the image sample according to the face region of the image sample; performing gender prediction on the face in the identification region of the image sample by using the gender identification model to be trained to obtain a gender prediction result; and training the gender identification model to be trained with the training target of making the gender prediction result approach the calibrated gender of the image sample, so as to obtain the trained gender identification model.
In the embodiment of the application, the calibrated gender of the image sample is the gender of the user to which the face belongs in the manually calibrated image sample. The image sample may be a half-body or full-body photograph of a single person in a natural scene.
The above is only a preferred way, provided by the embodiment of the present application, of obtaining the target gender of the user to whom the face belongs in the face region of the first image by performing gender recognition on the face in the identification region based on the pre-trained gender identification model; a specific way of identifying the target gender may be set according to actual needs, and is not limited herein.
S305, selecting a third image matched with the characteristics of the user from at least one preset second image;
S306, acquiring an eye-nose-mouth triangular region of the face in the face region of the first image according to at least one key point of the face in the face region of the first image;
According to the embodiment of the application, after the at least one key point of the face in the face area of the first image is obtained, target key points related to the eye-nose-mouth triangular region can be selected from the at least one key point, and the eye-nose-mouth triangular region of the face can then be cut out from the face area of the first image according to the target key points.
S307, performing expansion operation on the eye, nose and mouth triangular region of the face in the face region of the first image to generate a first eye, nose and mouth triangular region;
further, referring to fig. 3, the image processing method according to the embodiment of the present application further includes performing an expansion operation on the eye, nose and mouth triangular regions of the face in the face region of the first image to generate a first eye, nose and mouth triangular region.
In this embodiment of the application, performing an expansion operation on an eye-nose-mouth triangular region of a face in a face region of a first image may obtain a first eye-nose-mouth triangular region located in the first image, where the first eye-nose-mouth triangular region covers the eye-nose-mouth triangular region of the face in the face region of the first image in the first image, and an area of the first eye-nose-mouth triangular region in the first image is greater than an area of the eye-nose-mouth triangular region of the face in the face region of the first image.
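One simple way to realize such an expansion operation is to scale the region's vertices outward about their centroid, so the first eye-nose-mouth triangular region covers the original one and has a larger area. This sketch and its expansion factor of 1.15 are illustrative assumptions; a morphological dilation of the region mask would serve equally well.

```python
def expand_triangle(vertices, factor=1.15):
    """Scale an eye-nose-mouth triangle outward about its centroid.
    `vertices` is a list of three (x, y) points; `factor` > 1 produces the
    expanded (first) region, which covers the original region."""
    cx = sum(x for x, _ in vertices) / 3.0
    cy = sum(y for _, y in vertices) / 3.0
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in vertices]
```

Doubling the factor, for example, maps the triangle (0,0), (3,0), (0,3) to (-1,-1), (5,-1), (-1,5): the centroid stays fixed while every vertex moves away from it, so the new region strictly contains the old one.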
S308, acquiring the proportion of the face area of the third image relative to the face area of the first image;
further, referring to fig. 3, the image processing method according to the embodiment of the present application further includes adjusting the first eye-nose-mouth triangle area according to a ratio of the face area of the third image to the face area of the first image to generate a second eye-nose-mouth triangle area.
In this embodiment, since the size of the face region of the third image may be different from the size of the face region of the first image, before step S309 is executed, the ratio of the face region of the third image to the face region of the first image may be determined, and the first eye-nose-mouth triangular region is adjusted according to the ratio to obtain the second eye-nose-mouth triangular region.
The first area of the face region in the third image and the second area of the face region in the first image may be determined, a result of dividing the first area by the second area is used as a proportion of the face region of the third image relative to the face region of the first image, and the first eye-nose-mouth triangular region is scaled according to the proportion to obtain the second eye-nose-mouth triangular region.
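The area ratio and the subsequent scaling can be sketched as follows. Taking the square root of the area ratio as the linear scaling factor (so the scaled region's area matches the target face's proportion) is an assumption; the patent only specifies scaling "according to the proportion".

```python
import math

def face_area(box):
    """Area of an axis-aligned face box given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = box
    return (x2 - x1) * (y2 - y1)

def scale_triangle(vertices, ratio):
    """Scale the first eye-nose-mouth triangle about its centroid by
    sqrt(ratio), where ratio = area(third-image face) / area(first-image face),
    producing the second eye-nose-mouth triangular region."""
    factor = math.sqrt(ratio)
    cx = sum(x for x, _ in vertices) / 3.0
    cy = sum(y for _, y in vertices) / 3.0
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in vertices]
```

For instance, a 40x40 face in the third image against a 20x20 face in the first image gives a ratio of 4, so every edge of the triangle doubles in length.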
S309, adjusting the first eye-nose-mouth triangular region according to the proportion to generate a second eye-nose-mouth triangular region;
S310, replacing the eye, nose and mouth triangular area of the face in the face area of the third image with the second eye, nose and mouth triangular area to generate a fourth image;
in the embodiment of the application, the fourth image is an image obtained after the eye, nose and mouth triangular areas of the face in the face area of the third image are replaced by the second eye, nose and mouth triangular areas. Wherein the eye, nose and mouth triangular regions of the face in the face region of the third image may be replaced by the second eye, nose and mouth triangular regions by affine transformation to generate a fourth image. Please refer to fig. 4 for a manner of generating a fourth image by replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the second eye-nose-mouth triangular region, which is not described herein again.
S311, generating reminding information for reminding that the face in the face area of the first image is a side face.
Fig. 4 is a flowchart of a method for generating a fourth image by replacing an eye-nose-mouth triangular region of a human face in a human face region of a third image with a second eye-nose-mouth triangular region according to an embodiment of the present application.
As shown in fig. 4, the method includes:
S401, acquiring at least one key point of a face in a face area of a preset third image;
According to the embodiment of the application, at least one key point of the face in the face area of each of the at least one second image can be preset, so that when a second image is selected as the third image, an affine transformation matrix can be directly constructed according to the at least one key point of the face in the face area of the first image and the preset at least one key point of the face in the face area of the third image, and the eye, nose and mouth triangular area of the face in the face area of the third image can then be replaced with the second eye, nose and mouth triangular area based on the affine transformation matrix.
S402, an affine transformation matrix is constructed based on at least one key point of the face in the face area of the first image and at least one key point of the face in the face area of the third image;
in this embodiment of the application, the affine transformation matrix may be a rotation matrix and a translation matrix from at least one key point of the face in the face region of the first image to at least one key point of the face in the face region of the third image.
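Such a matrix can be estimated in least-squares fashion from the two key point sets. The sketch below uses numpy's `lstsq` to fit a general 2x3 affine matrix (rotation, scale, shear and translation together); restricting it to a pure rotation-plus-translation, as the paragraph above describes, would be a constrained variant of the same fit.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix M mapping the first image's key
    points onto the third image's key points: dst ~= M @ [x, y, 1]."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3 x 2 solution
    return M.T                                     # 2 x 3 affine matrix

def apply_affine(M, pts):
    """Apply the 2x3 affine matrix to an array of (x, y) points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

With three non-collinear key points the fit is exact: mapping (0,0), (1,0), (0,1) onto (1,1), (3,1), (1,3) recovers a scale of 2 and a translation of (1,1), and the midpoint (0.5, 0.5) lands on (2, 2).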
S403, performing affine transformation on the second eye, nose and mouth triangular region according to the affine transformation matrix to generate a third eye, nose and mouth triangular region;
S404, performing color correction on the third eye, nose and mouth triangular region according to the third image to obtain a fourth eye, nose and mouth triangular region;
In the embodiment of the application, in order to reduce the difference between the skin color of the human face in the third image and the skin color of the human face in the first image, 40% of the norm of the distance between the mean positions of the left eye region and the right eye region in the third image is taken as the kernel size of a Gaussian blur; Gaussian blur is performed on the second target image and on the third image respectively; the difference between the two blurred results is taken as the color difference value between the second target image and the third image; and the third eye-nose-mouth triangular region in the second target image is corrected by the color difference value, so as to obtain an image close to the color of the human face edge of the third image. The eye-nose-mouth triangular region in the obtained image can be regarded as the fourth eye-nose-mouth triangular region, whereby color correction of the third eye-nose-mouth triangular region in the second target image is achieved. The second target image displays second image content, and the second image content may be the image content of the first image in its third eye-nose-mouth triangular region.
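A heavily simplified stand-in for this color correction is a per-channel mean shift: pull the warped region's average color toward the target face's average color. The patented method uses a Gaussian blur whose kernel depends on the inter-eye distance; the global mean shift below is an assumption made to keep the sketch dependency-free, and captures only the lowest-frequency component of that correction.

```python
import numpy as np

def correct_color(warped, target):
    """Crude color correction: shift each channel of the warped
    eye-nose-mouth region by the per-channel mean difference against the
    target face region, clipped to the valid uint8 range. A simplified
    stand-in for the Gaussian-blur-based correction described above."""
    w = warped.astype(float)
    t = target.astype(float)
    diff = t.reshape(-1, 3).mean(axis=0) - w.reshape(-1, 3).mean(axis=0)
    return np.clip(w + diff, 0, 255).astype(np.uint8)
```

Shifting a uniformly gray (100) patch toward a uniformly brighter (120) target, for example, raises every pixel by 20 while staying within the 8-bit range.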
S405, Poisson fusion is carried out on the eye-nose-mouth triangular region of the face in the fourth eye-nose-mouth triangular region and the face region of the third image, and the eye-nose-mouth triangular region of the face in the face region of the third image is replaced by the fourth eye-nose-mouth triangular region to generate a fourth image.
According to the image processing method provided by the embodiment of the application, through side face detection, expansion of the eye-nose-mouth triangular region, color correction on the image and the like, the eye-nose-mouth triangular region of the human face in the first image can be better fused with the third image, and the generated image is more vivid.
The application provides an image processing method, which is characterized in that a face area of a first image is determined, user characteristics of a user to which the face belongs in the face area of the first image are obtained, a third image matched with the user characteristics is selected from at least one preset second image, and the face area of the third image is replaced by the face area of the first image, so that the purpose of face changing of the image is automatically achieved, manual operation of the user is not needed, and the face changing efficiency of the image is improved.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 5, the apparatus includes:
a face region determination unit 51 for determining a face region of the first image;
a first obtaining unit 52, configured to obtain a user feature of a user to whom a face belongs in a face region of the first image;
an image selecting unit 53, configured to select a third image matching the user characteristic from at least one preset second image;
and an image processing unit 54 for generating a fourth image by replacing the face region of the third image with the face region of the first image.
The specific principle and the execution process of each unit in the image processing apparatus disclosed in the above embodiment of the present invention are the same as those of the image processing method disclosed in the above embodiment of the present invention, and reference may be made to corresponding parts in the image processing method disclosed in the above embodiment of the present invention, which are not described herein again.
The application provides an image processing device, which is characterized in that the user characteristics of a user to which the face belongs in the face area of a first image are obtained by determining the face area of the first image, a third image matched with the user characteristics is selected from at least one preset second image, and then the face area of the third image is replaced by the face area of the first image, so that the purpose of face changing of the image is automatically achieved, the manual operation of the user is not required, and the face changing efficiency of the image is improved.
In the embodiment of the present application, preferably, the user characteristics include: any one or more of gender, age, and facial form.
Further, an image processing apparatus provided in an embodiment of the present application further includes:
the detection unit is used for detecting whether the face in the face area of the first image is a side face;
the reminding information generating unit is used for generating reminding information for reminding that the face in the face area of the first image is the side face if the face in the face area of the first image is the side face;
accordingly, the first obtaining unit 52 includes: and the second acquisition unit is used for acquiring the user characteristics of the user to which the face belongs in the face area of the first image if the face in the face area of the first image is not a side face.
In the embodiment of the application, the front face image can be adopted when the face of the image is changed by further detecting the side face of the face region of the first image, so that the face changing effect of the image is ensured.
In the embodiment of the present application, it is preferable that the first obtaining unit 52 includes:
the third acquisition unit is used for acquiring an identification region covering the face region of the first image in the first image, and the area of the identification region is larger than that of the face region of the first image;
and the fourth acquisition unit is used for acquiring the user characteristics of the user to which the face belongs in the face area of the first image based on the image content of the identification area in the first image.
In the embodiment of the present application, it is preferable that the image processing unit 54 includes:
the fifth acquiring unit is used for acquiring at least one key point of the face in the face area of the first image, and the at least one key point represents the face feature of the face in the face area of the first image;
the sixth acquisition unit is used for acquiring an eye, nose and mouth triangular region of the face in the face region of the first image according to at least one key point;
and the first image processing subunit is used for replacing the eye, nose and mouth triangular areas of the human face in the human face area of the third image with the eye, nose and mouth triangular areas of the human face in the human face area of the first image to generate a fourth image.
Furthermore, an image processing apparatus provided in an embodiment of the present application further includes:
the expansion unit is used for performing expansion operation on the eye-nose-mouth triangular region of the face in the face region of the first image to generate a first eye-nose-mouth triangular region;
a seventh acquiring unit, configured to acquire a ratio of the face region of the third image with respect to the face region of the first image;
the adjusting unit is used for adjusting the first eye-nose-mouth triangular area according to the proportion to generate a second eye-nose-mouth triangular area;
accordingly, a first image processing subunit comprises: and the second image processing subunit is used for replacing the eye, nose and mouth triangular areas of the human face in the human face area of the third image with the second eye, nose and mouth triangular areas to generate a fourth image.
In the embodiment of the present application, it is preferable that the second image processing subunit includes:
the eighth acquiring unit is used for acquiring at least one key point of the face in the face area of the preset third image;
the affine transformation processing unit is used for performing affine transformation on the second eye, nose and mouth triangular region according to the affine transformation matrix to generate a third eye, nose and mouth triangular region;
and the third image processing subunit is used for replacing the eye, nose and mouth triangular areas of the human face in the human face area of the third image with the third eye, nose and mouth triangular areas to generate a fourth image.
Furthermore, an image processing apparatus provided in an embodiment of the present application further includes:
the correction unit is used for carrying out color correction on the third eye-nose-mouth triangular area according to a third image to obtain a fourth eye-nose-mouth triangular area;
correspondingly, the third image processing subunit comprises: and the fourth image processing subunit is used for performing Poisson fusion on the eye-nose-mouth triangular region of the human face in the fourth eye-nose-mouth triangular region and the human face region of the third image, and replacing the eye-nose-mouth triangular region of the human face in the human face region of the third image with the fourth eye-nose-mouth triangular region to generate a fourth image.
According to the image processing device provided by the embodiment of the application, through side face detection, expansion of the eye-nose-mouth triangular region, color correction on the image and the like, the eye-nose-mouth triangular region of the human face in the first image can be better fused with the third image, and the generated image is more vivid.
The following describes in detail a hardware structure of a server to which an image processing method provided in an embodiment of the present application is applied, taking an example in which the image processing method is applied to the server.
The image processing method provided by the embodiment of the application can be applied to a server, and the server can be a service device which provides service for a user at a network side, can be a server cluster formed by a plurality of servers, and can also be a single server.
Optionally, fig. 6 is a block diagram illustrating a hardware structure of a server to which an image processing method provided in the embodiment of the present application is applied, and referring to fig. 6, the hardware structure of the server may include: a processor 61, a communication interface 62, a memory 63 and a communication bus 64;
in the embodiment of the present invention, the number of the processor 61, the communication interface 62, the memory 63, and the communication bus 64 may be at least one, and the processor 61, the communication interface 62, and the memory 63 complete mutual communication through the communication bus 64;
the processor 61 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement the embodiments of the present invention, or the like;
the memory 63 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one disk memory;
wherein the memory stores a program, the processor may invoke the program stored in the memory, and the program is operable to:
determining a face region of a first image;
acquiring user characteristics of a user to which a face belongs in a face region of a first image;
selecting a third image matched with the characteristics of the user from at least one preset second image;
and replacing the face area of the third image with the face area of the first image to generate a fourth image.
For the functions of the program, reference may be made to the above detailed description of an image processing method provided in the embodiments of the present application, which is not described herein again.
Further, an embodiment of the present application also provides a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and the computer-executable instructions are used for executing the image processing method.
For specific contents of the computer executable instructions, reference may be made to the above detailed description of an image processing method provided in the embodiments of the present application, which is not repeated herein.
The application provides an image processing method, an image processing device, a server and a storage medium, wherein the user characteristics of a user to which a face belongs in a face area of a first image are obtained by determining the face area of the first image, a third image matched with the user characteristics is selected from at least one preset second image, and the face area of the third image is replaced by the face area of the first image, so that the purpose of image face changing is automatically achieved, the manual operation of a user is not required, and the image face changing efficiency is improved.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (11)

1. An image processing method, comprising:
determining a face region of a first image;
acquiring user characteristics of a user to which the face belongs in the face area of the first image;
selecting a third image matched with the user characteristics from at least one preset second image;
and replacing the face area of the third image with the face area of the first image to generate a fourth image.
2. The method of claim 1, wherein the user characteristics comprise: any one or more of gender, age, and facial form.
3. The method of claim 1, further comprising:
detecting whether the face in the face area of the first image is a side face;
if the face in the face area of the first image is a side face, generating reminding information for reminding that the face in the face area of the first image is the side face;
the obtaining of the user characteristics of the user to which the face belongs in the face region of the first image includes: and if the face in the face area of the first image is not a side face, acquiring the user characteristics of the user to which the face in the face area of the first image belongs.
4. The method according to claim 1, wherein the obtaining of the user characteristics of the user to whom the face belongs in the face region of the first image comprises:
acquiring an identification region covering a face region of the first image in the first image, wherein the area of the identification region is larger than that of the face region of the first image;
and acquiring the user characteristics of the user to which the face belongs in the face area of the first image based on the image content of the identification area in the first image.
5. The method of claim 1, wherein the replacing the face region of the third image with the face region of the first image to generate a fourth image comprises:
acquiring at least one key point of a face in a face region of the first image, wherein the at least one key point represents the face feature of the face in the face region of the first image;
acquiring an eye, nose and mouth triangular region of a human face in the human face region of the first image according to the at least one key point;
and replacing the eye, nose and mouth triangular areas of the face in the face area of the third image with the eye, nose and mouth triangular areas of the face in the face area of the first image to generate a fourth image.
6. The method of claim 5, further comprising:
performing expansion operation on an eye-nose-mouth triangular region of a face in the face region of the first image to generate a first eye-nose-mouth triangular region;
acquiring the proportion of the face area of the third image relative to the face area of the first image;
adjusting the first eye-nose-mouth triangular area according to the proportion to generate a second eye-nose-mouth triangular area;
the replacing the eye, nose and mouth triangular regions of the face in the face region of the third image with the eye, nose and mouth triangular regions of the face in the face region of the first image to generate a fourth image includes: and replacing the eye, nose and mouth triangular areas of the face in the face area of the third image with the second eye, nose and mouth triangular areas to generate a fourth image.
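The dilation and scale adjustment of claim 6 can be sketched without OpenCV (a 3x3 binary dilation built from padded shifts; the ratio is taken as the linear width ratio of the two face boxes — both choices are illustrative assumptions):

```python
import numpy as np

def dilate3x3(mask):
    """One step of 3x3 binary dilation: a pixel becomes True if any
    neighbour (including itself) is True, computed via padded shifts."""
    p = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True
grown = dilate3x3(mask)             # first eye-nose-mouth region, enlarged
print(grown.sum())                  # 9: the single pixel grew to a 3x3 block

# Scale the triangle keypoints by the face-size ratio of claim 6
ratio = 80 / 100                    # third-image face width / first-image face width
pts = np.array([[30, 30], [70, 30], [50, 80]], dtype=float)
scaled = pts * ratio                # keypoints of the second eye-nose-mouth region
print(scaled[0])                    # [24. 24.]
```

Dilating before scaling gives the pasted region a small overlap margin, which hides seams at the triangle boundary after the later blending step.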
7. The method of claim 6, wherein the replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the second eye-nose-mouth triangular region to generate the fourth image comprises:
acquiring at least one key point of the face in a preset face region of the third image;
constructing an affine transformation matrix based on the at least one key point of the face in the face region of the first image and the at least one key point of the face in the face region of the third image;
performing an affine transformation on the second eye-nose-mouth triangular region according to the affine transformation matrix to generate a third eye-nose-mouth triangular region;
and replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the third eye-nose-mouth triangular region to generate the fourth image.
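The affine matrix of claim 7 maps the first image's key points onto the third image's key points; with three or more correspondences it can be solved by least squares (a sketch with made-up points; production code would typically use cv2.getAffineTransform or cv2.estimateAffinePartial2D instead):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Solve the 2x3 affine matrix M such that M @ [x, y, 1]^T ~= dst,
    by least squares over all keypoint correspondences."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3 x 2 solution
    return M.T                                     # 2 x 3 affine matrix

# Source triangle and a target that is the source shifted by (+10, +5)
src = [(30, 30), (70, 30), (50, 80)]
dst = [(40, 35), (80, 35), (60, 85)]
M = fit_affine(src, dst)
pt = M @ np.array([50.0, 80.0, 1.0])               # warp one source keypoint
print(np.round(pt))                                 # [60. 85.]
```

The same matrix, applied to every pixel coordinate of the second eye-nose-mouth region (e.g. with cv2.warpAffine), aligns it to the pose and scale of the face in the third image.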
8. The method of claim 7, further comprising:
performing color correction on the third eye-nose-mouth triangular region according to the third image to obtain a fourth eye-nose-mouth triangular region;
wherein the replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the third eye-nose-mouth triangular region to generate the fourth image comprises:
performing Poisson fusion on the fourth eye-nose-mouth triangular region and the eye-nose-mouth triangular region of the face in the face region of the third image, and replacing the eye-nose-mouth triangular region of the face in the face region of the third image with the fourth eye-nose-mouth triangular region to generate the fourth image.
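Claim 8's color correction can be approximated by matching the per-channel means of the swapped region to the target face region; the subsequent Poisson fusion is usually delegated to OpenCV's cv2.seamlessClone, so only the color step is sketched here (the region shapes and the mean-shift statistic are illustrative assumptions, not the claimed procedure):

```python
import numpy as np

def match_channel_means(patch, reference):
    """Shift each color channel of `patch` so its mean matches the
    corresponding channel mean of `reference` (a crude stand-in for the
    color-correction step that precedes Poisson blending)."""
    patch = patch.astype(float)
    shift = reference.mean(axis=(0, 1)) - patch.mean(axis=(0, 1))
    return np.clip(patch + shift, 0, 255).astype(np.uint8)

# Source triangle is too bright; the target face region is darker
src_patch = np.full((10, 10, 3), 220, dtype=np.uint8)
target_region = np.full((10, 10, 3), 120, dtype=np.uint8)
corrected = match_channel_means(src_patch, target_region)
print(corrected.mean())  # 120.0: channel means now agree with the target
```

Correcting color first matters because Poisson fusion preserves the source gradients; without it, a brightness mismatch at the triangle boundary would still bleed visibly into the blended result.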
9. An image processing apparatus, comprising:
a face region determination unit, configured to determine a face region of a first image;
a first acquisition unit, configured to acquire user characteristics of a user to whom a face in the face region of the first image belongs;
an image selection unit, configured to select, from at least one preset second image, a third image matching the user characteristics;
and an image processing unit, configured to replace a face region of the third image with the face region of the first image to generate a fourth image.
10. An image processing server, comprising at least one memory and at least one processor, wherein the memory stores a program, and the processor calls the program stored in the memory to implement the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon computer-executable instructions for performing the image processing method according to any one of claims 1 to 8.
CN201911243117.1A 2019-12-06 2019-12-06 Image processing method, device, server and storage medium Pending CN111104878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911243117.1A CN111104878A (en) 2019-12-06 2019-12-06 Image processing method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN111104878A true CN111104878A (en) 2020-05-05

Family

ID=70422268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911243117.1A Pending CN111104878A (en) 2019-12-06 2019-12-06 Image processing method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN111104878A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931145A (en) * 2020-06-29 2020-11-13 北京爱芯科技有限公司 Face encryption method, face recognition method, face encryption device, face recognition device, electronic equipment and storage medium
CN112069885A (en) * 2020-07-30 2020-12-11 深圳市优必选科技股份有限公司 Face attribute identification method and device and mobile terminal
CN113674139A (en) * 2021-08-17 2021-11-19 北京京东尚科信息技术有限公司 Face image processing method and device, electronic equipment and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN104424721A (en) * 2013-08-22 2015-03-18 辽宁科大聚龙集团投资有限公司 Face occlusion recognition method combined with an ATM
CN107330904A (en) * 2017-06-30 2017-11-07 北京金山安全软件有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN108550176A (en) * 2018-04-19 2018-09-18 咪咕动漫有限公司 Image processing method, device and storage medium
CN108876718A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Image fusion method, apparatus and computer storage medium
CN109325988A (en) * 2017-07-31 2019-02-12 腾讯科技(深圳)有限公司 Facial expression synthesis method, device and electronic device
US20190206101A1 * 2017-12-28 2019-07-04 Facebook, Inc. Systems and methods for swapping faces and face components based on facial recognition
CN110197462A (en) * 2019-04-16 2019-09-03 浙江理工大学 Real-time facial image beautification and texture synthesis method
CN110458751A (en) * 2019-06-28 2019-11-15 广东智媒云图科技股份有限公司 Face replacement method, device and medium based on Cantonese opera images

Non-Patent Citations (2)

Title
黄诚 [Huang Cheng]: "Face replacement technology in images based on the Candide-3 algorithm", 《计算技术与自动化》 [Computing Technology and Automation], no. 02, 15 June 2018 (2018-06-15), pages 100-104 *

Similar Documents

Publication Publication Date Title
JP7413400B2 (en) Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium
CN107771336B (en) Feature detection and masking in images based on color distribution
CN109952594B (en) Image processing method, device, terminal and storage medium
CN110929569B (en) Face recognition method, device, equipment and storage medium
CN112884637B (en) Special effect generation method, device, equipment and storage medium
CN111104878A (en) Image processing method, device, server and storage medium
CN105243371B Face beautification degree detection method, system and camera terminal
CN107507217B (en) Method and device for making certificate photo and storage medium
WO2016180224A1 (en) Method and device for processing image of person
CN107610202B (en) Face image replacement method, device and storage medium
KR101944112B1 (en) Method and apparatus for creating user-created sticker, system for sharing user-created sticker
WO2017206400A1 (en) Image processing method, apparatus, and electronic device
CN103778376A (en) Information processing device and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN109903291B (en) Image processing method and related device
CN112419170A (en) Method for training occlusion detection model and method for beautifying face image
CN105608699B Image processing method and electronic device
KR101743764B1 (en) Method for providing ultra light-weight data animation type based on sensitivity avatar emoticon
CN109005368A High dynamic range image generation method, mobile terminal and storage medium
WO2019142127A1 (en) Method and system of creating multiple expression emoticons
CN112581518A (en) Eyeball registration method, device, server and medium based on three-dimensional cartoon model
CN113658324A (en) Image processing method and related equipment, migration network training method and related equipment
WO2022262209A1 (en) Neural network training method and apparatus, computer device, and storage medium
CN108537162A Human body posture determination method and apparatus
CN110321821B (en) Human face alignment initialization method and device based on three-dimensional projection and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination