CN109255761B - Image processing method and device and electronic equipment - Google Patents

Info

Publication number: CN109255761B
Application number: CN201810966031.0A
Authority: CN (China)
Prior art keywords: face, adjustment, processed, target, image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109255761A
Inventor: 莊凱伃
Current Assignee: Beijing Jupiter Technology Co ltd
Original Assignee: Beijing Lemi Technology Co Ltd

Application filed by Beijing Lemi Technology Co Ltd
Priority to CN201810966031.0A
Publication of CN109255761A
Application granted
Publication of CN109255761B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method, an image processing device and electronic equipment. The method comprises the following steps: obtaining an image to be processed; identifying the face features of the face in the image to be processed as the face features to be processed; determining target processing effect information based on the face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises corresponding relations between different face features and processing effect information; performing first adjustment processing on the face in the image to be processed based on the target processing effect information; and displaying the image after the first adjustment processing. With the method and the device, the face in an image can be adjusted so as to better conform to the aesthetic preference of the user, so that the adjustment effect is better and the user experience is improved.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
At present, in order to meet users' aesthetic requirements, all kinds of beauty software have emerged one after another. Most beauty software is currently provided with a one-key beauty function, the purpose of which is to offer the user a set of adjustment parameters that conforms to the user's aesthetic preference. The adjustment parameters may include: adjustment data for the eye enlargement degree, adjustment data for the face thinning degree, adjustment data for the skin color, and the like. After the one-key beauty function of the beauty software is started, the face in the image is adjusted according to this set of adjustment parameters, so that the adjusted face better conforms to the user's aesthetic preference.
However, the adjustment parameters corresponding to the current one-key beauty function are fixed; that is, when the one-key beauty function of the beauty software is started, the beauty software uses the same set of adjustment parameters to perform one-key beauty adjustment on the face in the image for every user. Considering that different users may have different cultural backgrounds, their beauty requirements also differ. A single set of adjustment parameters may not conform to the aesthetic preference of some user groups, which causes inconvenience to these users and results in a poor user experience.
Therefore, how to provide an image processing method with better effect becomes an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention aims to provide an image processing method, an image processing device and electronic equipment, so that the face in an image can be adjusted to better conform to the aesthetic preference of the user, thereby achieving a better adjustment effect and improving the user experience. The specific technical scheme is as follows:
in one aspect, an embodiment of the present invention provides an image processing method, where the method includes:
obtaining an image to be processed;
recognizing the face features of the face in the image to be processed as the face features to be processed;
determining target processing effect information based on the human face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
and displaying the image after the first adjustment processing.
Optionally, the face features to be processed include skin color features to be processed; the preset corresponding relationship comprises: the corresponding relation between different skin color characteristics and processing effect information;
the step of determining target processing effect information based on the human face features to be processed and the preset corresponding relation comprises the following steps:
and determining target processing effect information based on the corresponding relation between different skin color characteristics and the processing effect information and the skin color characteristics to be processed.
Optionally, the preset corresponding relationship includes: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information;
the step of determining target processing effect information based on the human face features to be processed and the preset corresponding relation comprises the following steps:
determining a preset face class to which the face belongs in the image to be processed as a target face class based on the corresponding relation between different face features and preset face classes and the face features to be processed;
and determining processing effect information corresponding to the target face type as target processing effect information based on the corresponding relation between different preset face types and the processing effect information and the target face type.
Optionally, before the step of identifying the face features of the face in the image to be processed as the face features to be processed, the method further includes:
judging whether historical adjustment information for the face exists, wherein the historical adjustment information is: information on adjustments made by the user to the face in historical images;
and when judging that the historical adjustment information aiming at the human face does not exist, executing the step of identifying the human face characteristics of the human face in the image to be processed as the human face characteristics to be processed.
Optionally, before the step of identifying the face features of the face in the image to be processed as the face features to be processed, the method further includes:
judging whether historical adjustment information for the face exists, wherein the historical adjustment information is: information on adjustments made by the user to the face in historical images;
when judging that historical adjustment information aiming at the face exists, performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
and displaying the image after the second adjustment processing.
Optionally, after the step of presenting the first adjusted image, the method further includes:
receiving an adjusting instruction of a user for the face in the image after the first adjusting processing;
adjusting the face in the image after the first adjustment processing based on the adjustment instruction;
and recording adjustment information aiming at the adjusted human face.
Optionally, the adjustment instruction carries the facial features to be adjusted and the adjustment parameters corresponding to the facial features to be adjusted;
after the step of recording adjustment information for the adjusted face, the method further includes:
judging whether the adjustment frequency of a user for the same face feature exceeds a preset frequency or not, and judging whether the fluctuation range among all adjustment parameters corresponding to the face feature is within a preset range or not;
when it is determined that the adjustment frequency of the user for the same face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to the face feature is within the preset range, generating historical adjustment information for the face based on the adjustment information recorded for the adjusted face, wherein the historical adjustment information is: information used for performing second adjustment processing on the face in a subsequently obtained image.
Optionally, before the step of performing the second adjustment processing on the face in the image to be processed based on the historical adjustment information, the method further includes:
acquiring the generation time of the historical adjustment information and the current time;
judging whether the difference value of the current time and the generation time exceeds a preset time length or not;
and when the preset time length is not exceeded, executing the step of carrying out second adjustment processing on the human face in the image to be processed based on the historical adjustment information.
Optionally, before the step of performing the second adjustment processing on the face in the image to be processed based on the historical adjustment information, the method further includes:
acquiring the generation time of the historical adjustment information and the current time;
judging whether the difference value of the current time and the generation time exceeds a preset time length or not;
when it is judged that the preset time length is exceeded, outputting inquiry prompt information to ask the user whether to use the historical adjustment information to perform second adjustment processing on the face in the image to be processed;
and adjusting the human face in the image to be processed based on the selection result of the user.
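For illustration only, the following Python sketch shows how the comparison between the current time and the generation time of the historical adjustment information could drive these two branches; it is not part of the original disclosure, and the preset time length and the prompt mechanism are placeholders.

```python
# Illustrative sketch (not part of the patent): deciding whether the historical
# adjustment information may still be used based on how long ago it was
# generated. The preset time length and the prompt mechanism are placeholders.
from datetime import datetime, timedelta

PRESET_TIME_LENGTH = timedelta(days=30)  # assumed preset time length

def should_use_history(generated_at: datetime, ask_user) -> bool:
    """Return True if the second adjustment processing should use the history.

    generated_at: generation time of the historical adjustment information.
    ask_user:     callback that shows the inquiry prompt and returns the user's choice.
    """
    if datetime.now() - generated_at <= PRESET_TIME_LENGTH:
        # Preset time length not exceeded: use the historical adjustment directly.
        return True
    # Preset time length exceeded: ask the user whether to still use it.
    return bool(ask_user("Use your previous adjustment settings for this photo?"))
```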
Optionally, the facial features to be processed include: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information comprises target adjustment parameters corresponding to target skin color characteristics, target adjustment parameters corresponding to target eye characteristics and/or target adjustment parameters corresponding to target face characteristics;
the step of performing the first adjustment processing on the face in the image to be processed based on the target processing effect information includes:
when the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, performing skin color adjustment processing on the skin color feature to be processed by using the target adjustment parameter corresponding to the target skin color feature;
when the target processing effect information contains a target adjusting parameter corresponding to the target eye feature, performing eye adjusting processing on the eye feature point to be processed by using the target adjusting parameter corresponding to the target eye feature;
and when the target processing effect information contains target adjustment parameters corresponding to the target face shape features, performing face shape adjustment processing on the face contour feature points to be processed by using the target adjustment parameters corresponding to the target face shape features.
In another aspect, an embodiment of the present invention provides an image processing apparatus, including:
the first obtaining module is used for obtaining an image to be processed containing a human face;
the recognition module is used for recognizing the face features of the face in the image to be processed as the face features to be processed;
a determining module, configured to determine target processing effect information based on the facial features to be processed and a preset corresponding relationship, where the preset corresponding relationship includes: corresponding relations between different human face features and processing effect information;
and the first adjusting module is used for performing first adjusting processing on the face in the image to be processed based on the target processing effect information.
And the first display module is used for displaying the image after the first adjustment processing.
Optionally, the face features to be processed include skin color features to be processed; the preset corresponding relationship comprises: the corresponding relation between different skin color characteristics and processing effect information;
the determining module is specifically configured to:
determine target processing effect information based on the corresponding relation between different skin color characteristics and the processing effect information and the skin color characteristics to be processed.
Optionally, the preset corresponding relationship includes: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information;
the determining module is specifically configured to:
determine a preset face class to which the face in the image to be processed belongs, as a target face class, based on the corresponding relation between different face features and preset face classes and the face features to be processed;
and determine processing effect information corresponding to the target face class as the target processing effect information, based on the corresponding relation between different preset face classes and the processing effect information and the target face class.
Optionally, the apparatus further comprises:
a first determining module, configured to determine whether there is historical adjustment information for the face before the face features of the face in the image to be processed are identified as the face features to be processed, where the historical adjustment information is: information on adjustments made by the user to the face in historical images;
and when judging that the historical adjustment information aiming at the human face does not exist, triggering the identification module.
Optionally, the apparatus further comprises:
a second determining module, configured to determine whether there is historical adjustment information for the face before the face features of the face in the image to be processed are identified as the face features to be processed, where the historical adjustment information is: information on adjustments made by the user to the face in historical images;
the second adjusting module is used for performing second adjusting processing on the human face in the image to be processed based on the historical adjusting information when the historical adjusting information aiming at the human face is judged to exist;
and the second display module is used for displaying the image after the second adjustment processing.
Optionally, the apparatus further comprises:
the receiving module is used for receiving an adjusting instruction of a user for the face in the image after the first adjustment processing after the image after the first adjustment processing is displayed;
the third adjusting module is used for adjusting the face in the image after the first adjusting processing based on the adjusting instruction;
and the recording module is used for recording the adjustment information aiming at the adjusted human face.
Optionally, the adjustment instruction carries the facial features to be adjusted and the adjustment parameters corresponding to the facial features to be adjusted;
the device further comprises:
the second judgment module is used for judging whether the adjustment frequency of the user for the same face feature exceeds a preset frequency or not after the adjustment information is recorded for the adjusted face, and judging whether the fluctuation range of all adjustment parameters corresponding to the face feature is within a preset range or not;
a generating module, configured to generate historical adjustment information for the face based on the adjustment information recorded for the face after adjustment processing, when it is determined that the adjustment frequency of the user for the same face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to the face feature is within the preset range, where the historical adjustment information is: information used for performing second adjustment processing on the face in a subsequently obtained image.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
the third judging module is used for judging whether the difference value between the current time and the generating time exceeds a preset time length or not;
and when the preset time length is not exceeded, triggering the second adjusting module.
Optionally, the apparatus further comprises:
a third obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
the fourth judging module is used for judging whether the difference value between the current time and the generating time exceeds a preset time length or not;
the output module is used for outputting inquiry prompt information when it is judged that the preset time length is exceeded, so as to ask the user whether to use the historical adjustment information to perform second adjustment processing on the face in the image to be processed;
and the fourth adjusting module is used for adjusting the human face in the image to be processed based on the selection result of the user.
Optionally, the facial features to be processed include: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information comprises target adjustment parameters corresponding to target skin color characteristics, target adjustment parameters corresponding to target eye characteristics and/or target adjustment parameters corresponding to target face characteristics;
the first adjusting module is specifically used for
When the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, performing skin color adjustment processing on the skin color feature to be processed by using the target adjustment parameter corresponding to the target skin color feature;
when the target processing effect information contains a target adjusting parameter corresponding to the target eye feature, performing eye adjusting processing on the eye feature point to be processed by using the target adjusting parameter corresponding to the target eye feature;
and when the target processing effect information contains target adjustment parameters corresponding to the target face shape features, performing face shape adjustment processing on the face contour feature points to be processed by using the target adjustment parameters corresponding to the target face shape features.
On the other hand, the embodiment of the invention provides electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
the processor is configured to implement any of the image processing method steps provided in the embodiments of the present invention when executing the computer program stored in the memory.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any of the image processing method steps provided in the embodiment of the present invention.
According to the image processing method provided by the embodiment of the invention, an image to be processed is obtained; the face features of the face in the image to be processed are identified as the face features to be processed; target processing effect information is determined based on the face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises corresponding relations between different face features and processing effect information; first adjustment processing is performed on the face in the image to be processed based on the target processing effect information; and the image after the first adjustment processing is displayed.
Therefore, in the embodiment of the present invention, target processing effect information, that is, target adjustment parameters, can be determined based on the face features identified for the face in the image to be processed, and the face in the image to be processed can be adjusted based on the determined target processing effect information. In this way, the face in the image is adjusted with target adjustment parameters that correspond to its own face features, so that the adjusted face image better conforms to the user's aesthetic preference and the user experience is improved. Furthermore, different beauty effects can be provided for images containing different face features, so that users whose faces have different face features each obtain a face image that better conforms to their own aesthetic preference; the image adjustment effect is therefore better and the user's perception is improved. Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is another schematic flow chart illustrating an image processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image processing method, an image processing device and electronic equipment, which are used for adjusting the face in an image so that it better conforms to the aesthetic preference of the user, so that the image adjustment effect is better and the user experience is improved.
As shown in fig. 1, an embodiment of the present invention provides an image processing method, which may include the following steps:
s101: obtaining an image to be processed;
it can be understood that the image processing method provided by the embodiment of the present invention can be applied to any type of electronic device, which is not limited herein. In one implementation, the electronic device may be a server or a user terminal. The functional software for implementing the image processing method provided by the embodiment of the invention may exist in the form of dedicated client software or as a plug-in of existing application software.
In one case, when the electronic device is a user terminal, the electronic device may obtain an image including a human face through a camera, and use the image including the human face, that is, a human face image, as an image to be processed according to an embodiment of the present invention. In another case, when the electronic device is a server, the electronic device may obtain an image including a human face sent by a connected user terminal, and use the image of the human face as an image to be processed according to an embodiment of the present invention. In another case, the electronic device may obtain any locally stored image containing a human face, and use the human face image as the image to be processed according to the embodiment of the present invention. The image to be processed may be an image in any format, and the format type of the image to be processed is not limited in the embodiment of the present invention.
In an implementation manner, in order to better implement providing a humanized service for a user and improve user experience, a function switch with a one-key beauty function may be provided, and when the function switch is turned on and the one-key beauty function is turned on, the image processing procedure provided by the embodiment of the present invention is executed. When the function switch is turned off and the one-key beauty function is turned off, the image processing flow provided by the embodiment of the invention may not be executed.
S102: identifying the face features of the face in the image to be processed as the face features to be processed;
in the embodiment of the invention, after the electronic device obtains the image to be processed, the region where the face is located can be identified from the image to be processed based on a preset face identification algorithm, and then, the face feature of the face is identified from the region where the face is located in the image to be processed, so as to serve as the face feature to be processed. Or, the region where the face is located may be determined from the image to be processed based on the obtained position information of the region where the face is located in the image to be processed; and then recognizing the face features of the face from the region where the face is located in the image to be processed as the face features to be processed.
The preset face recognition algorithm may be any recognition algorithm that can recognize a face from an image. All recognition algorithms capable of recognizing faces from images can be applied to the embodiment of the invention to realize recognition of the areas where the faces are located from the images to be processed, and are not described herein again.
In one implementation, the facial features may include: a skin color feature of the face; an eye feature of the face, which includes eye feature points corresponding to the eyes; a face shape feature of the face, which includes face contour feature points; and/or a nose bridge feature of the face, which includes nose bridge feature points. The eye feature points can represent the size of the eyes in the face; the face contour feature points can represent the face shape of the face; and the nose bridge feature points can represent the height of the nose bridge of the face.
In one case, the skin color feature of the face may be identified by the pixel values of the pixel points representing the face in the image to be processed, such as RGB (Red Green Blue) values, or by the luminance values of those pixel points, such as the Y component values of an image in YUV format; both are possible.
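For illustration only, the following is a minimal Python sketch of how such a skin color feature might be computed for a detected face region; it is not part of the original disclosure, and the use of OpenCV/NumPy, the function name and the face-box format are assumptions.

```python
# Illustrative sketch (not part of the patent): computing a rough skin color
# feature for a detected face region, either as mean RGB values or as a mean
# luma (Y) value. Assumes OpenCV and NumPy; `face_box` comes from a face detector.
import cv2
import numpy as np

def skin_color_feature(image_bgr: np.ndarray, face_box, use_luma: bool = False):
    """Return a rough skin color feature for the face region.

    image_bgr: H x W x 3 image as loaded by cv2.imread (BGR channel order).
    face_box:  (x, y, w, h) rectangle of the detected face.
    use_luma:  if True, return the mean Y component instead of mean RGB values.
    """
    x, y, w, h = face_box
    face = image_bgr[y:y + h, x:x + w]
    if use_luma:
        # The Y channel of the YCrCb representation corresponds to the luma component.
        ycrcb = cv2.cvtColor(face, cv2.COLOR_BGR2YCrCb)
        return float(ycrcb[:, :, 0].mean())
    # Mean value per channel, reported in RGB order.
    b, g, r = face.reshape(-1, 3).mean(axis=0)
    return (float(r), float(g), float(b))
```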
In the embodiment of the present invention, the image to be processed may include one or more human faces. In one case, when a plurality of faces are included, the respective regions of the faces can be identified from the image to be processed, and the respective face features of each face can be identified from the respective identified regions of each face; and executing subsequent image processing flow aiming at each identified face based on the face characteristics of the face.
S103: determining target processing effect information based on the human face features to be processed and a preset corresponding relation;
wherein, the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
in one implementation, the electronic device or the external storage device connected to the electronic device may store a preset corresponding relationship, where the preset corresponding relationship may include: and (4) corresponding relations between different human face features and processing effect information. After the electronic equipment identifies the face features of the face to be processed, processing effect information matched with the face features to be processed is determined from the preset corresponding relation and is used as target processing effect information. In one case, when the electronic device is a user terminal, the preset corresponding relationship may be stored in a server connected to the user terminal, and after the electronic device identifies a face feature of a face to be processed in an image to be processed, the preset corresponding relationship may be obtained from the connected server, and S103 is executed.
In another implementation manner, the electronic device may store a preset corresponding relationship locally or in an external storage device connected to the electronic device, where the preset corresponding relationship may include: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information. After the electronic equipment identifies the face features of the face to be processed, determining a preset face class to which the face features to be processed belong from the corresponding relation between different face features and the preset face class as a target face class; and then determining processing effect information corresponding to the target face type from the corresponding relation between different preset face types and the processing effect information as target processing effect information. Specifically, the preset corresponding relationship may include: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information; s103, may include:
determining a preset face class to which the face in the image to be processed belongs as a target face class based on the corresponding relation between different face features and the preset face class and the face features to be processed;
and determining the processing effect information corresponding to the target face type as the target processing effect information based on the corresponding relation between the different preset face types and the processing effect information and the target face type.
In one implementation, the facial features to be processed may include skin color features to be processed; the preset correspondence may include: the corresponding relation between different skin color characteristics and processing effect information;
s103, may include: and determining target processing effect information based on the corresponding relation between different skin color characteristics and the processing effect information and the skin color characteristics to be processed.
In one case, the preset corresponding relationship may directly include the corresponding relation between different skin color features and processing effect information, which may be a one-to-one corresponding relation or a many-to-one corresponding relation. For example, when the skin color feature to be processed is identified by the pixel values of the pixel points representing the face in the image to be processed, and the corresponding relationship is many-to-one, the corresponding relationship may be expressed as: pixel values A to B correspond to processing effect information 1, pixel values C to D correspond to processing effect information 2, pixel values E to F correspond to processing effect information 3, and so on.
In another case, the preset face type to which the face belongs can be determined through the skin color characteristics of the face. Wherein, the preset face category may include: white skin face, black skin face, and yellow skin face. The preset corresponding relationship may include: the corresponding relation between different skin color characteristics and the preset human face types and the corresponding relation between different preset human face types and the processing effect information. The corresponding relation between different skin color characteristics and the preset human face category can be a one-to-one corresponding relation or a many-to-one corresponding relation; the corresponding relationship between different preset face categories and the processing effect information may be a one-to-one corresponding relationship, which is all possible. For example: the correspondence between the skin color feature to be processed and the processing effect information may be identified as: the pixel values A-B correspond to a white skin face, and the white skin face corresponds to processing effect information 1; the pixel values C-D correspond to black skin face, and the black skin face corresponds to processing effect information 2; the pixel values E-F correspond to a yellow skin color face, and the yellow skin color face corresponds to processing effect information 3.
Wherein A, B, C, D, E and F in this embodiment of the present invention may represent any pixel value, and the relationship between them may be A < B, C < D, and E < F.
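For illustration only, the following Python sketch shows one way such a lookup could be organized, first mapping the skin color feature to a preset face category and then mapping the category to processing effect information; it is not part of the original disclosure, and the luma thresholds, category names and parameter values are placeholders.

```python
# Illustrative sketch (not part of the patent): mapping a skin color feature to a
# preset face category and the category to processing effect information,
# mirroring the pixel-value ranges A-B, C-D, E-F described above. The luma
# thresholds, category names and parameter values are placeholders.
LUMA_RANGES = [
    # (low, high, preset face category)
    (0, 90, "black_skin_face"),
    (91, 170, "yellow_skin_face"),
    (171, 255, "white_skin_face"),
]

EFFECT_BY_CATEGORY = {
    # preset face category -> processing effect information (target adjustment parameters)
    "black_skin_face":  {"skin_brighten": 0.05, "eye_enlarge": 0.10, "face_thin": 0.05},
    "yellow_skin_face": {"skin_brighten": 0.20, "eye_enlarge": 0.20, "face_thin": 0.15},
    "white_skin_face":  {"skin_brighten": 0.10, "eye_enlarge": 0.15, "face_thin": 0.10},
}

def target_effect_info(skin_luma: float) -> dict:
    """Look up target processing effect information from a skin color feature."""
    for low, high, category in LUMA_RANGES:
        if low <= skin_luma <= high:
            return EFFECT_BY_CATEGORY[category]
    # Fall back to a neutral parameter set if no range matches.
    return {"skin_brighten": 0.0, "eye_enlarge": 0.0, "face_thin": 0.0}
```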
In another implementation, the facial features may include, but are not limited to: the face skin color characteristic of the face, the eye characteristic of the face, the face shape characteristic of the face and the nose bridge characteristic of the face. When the face features to be processed include a skin color feature to be processed, an eye feature to be processed, a face shape feature to be processed, and a nose bridge feature to be processed, the preset corresponding relationship may include: different skin color characteristics, different eye size degrees, different face shapes, different nose bridge height degrees and the corresponding relation of processing effect information. The eye feature points in the eye features can represent the size degree of eyes in the human face; the face contour feature points in the face shape features can represent the face shape of the face; the nose bridge feature points in the nose bridge features can represent the height degree of the nose bridge of the human face.
S104: performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
the target processing effect information may include a specific target adjustment parameter, and the electronic device may perform adjustment processing on the face in the image to be processed based on the target adjustment parameter, as first adjustment processing.
In one case, the target processing effect information may include, but is not limited to: a target adjustment parameter corresponding to the eye feature of the face, that is, to the eye feature points of the face; a target adjustment parameter corresponding to the skin color feature of the face; a target adjustment parameter corresponding to the face shape feature of the face, that is, to the face contour feature points (a target adjustment parameter for the face thinning degree); and/or a target adjustment parameter corresponding to the nose bridge feature of the face, that is, to the nose bridge feature points. The target adjustment parameter may include a target value to be adjusted to, for example a value Z, in which case the electronic device may directly adjust the original value corresponding to the face feature to the target value; it may include an adjustment amount, for example increasing by X or decreasing by Y, in which case the electronic device may increase or decrease the original value corresponding to the face feature by that amount; or it may include a proportional adjustment, for example increasing or decreasing by P percent, in which case the electronic device may adjust the original value to (1 plus P percent) or (1 minus P percent) times the original value. The original value corresponding to a face feature is its value before adjustment.
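For illustration only, the following Python sketch shows the three forms a target adjustment parameter may take (a target value, an increment or decrement, or a percentage) and how it could be applied to the original value of a face feature; it is not part of the original disclosure, and the class and field names are assumptions.

```python
# Illustrative sketch (not part of the patent): the three forms a target
# adjustment parameter may take, and how it is applied to the original value of
# a face feature. The class and field names are assumptions made for this example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdjustParam:
    target_value: Optional[float] = None  # adjust directly to this value (e.g. Z)
    delta: Optional[float] = None         # increase (+X) or decrease (-Y) by this amount
    percent: Optional[float] = None       # increase (+P) or decrease (-P) by this percentage

def apply_adjustment(original_value: float, param: AdjustParam) -> float:
    """Return the adjusted value for one face feature."""
    if param.target_value is not None:
        return param.target_value
    if param.delta is not None:
        return original_value + param.delta
    if param.percent is not None:
        # e.g. percent = +10 -> 1.10 x original value, percent = -10 -> 0.90 x original value
        return original_value * (1.0 + param.percent / 100.0)
    return original_value

# Example: thin the face by 10 percent of its original contour width.
new_width = apply_adjustment(320.0, AdjustParam(percent=-10.0))
```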
In one implementation, the electronic device may recognize, from an area where a face of the image to be processed is located, each part of the face and determine the area where each part is located, for example: eyes, nose, mouth, eyebrows, cheeks, face contours, and forehead. Furthermore, the electronic device may perform a first adjustment process on the eye feature, that is, the eye feature point, based on the target adjustment parameter corresponding to the eye feature and the region where the eye is located; performing first adjustment processing on the face features, namely face contour feature points, based on target adjustment parameters corresponding to the face features and the region where the face contour is located; performing first adjustment processing on the skin color characteristics of the face based on target adjustment parameters corresponding to the skin color characteristics, a region where the cheek is located, a region where the nose is located, a region where the forehead is located and the like; and performing first adjustment processing on the nose bridge feature of the face based on the target adjustment parameter corresponding to the nose bridge feature and the region where the nose is located.
The embodiment of the present invention may use any related face recognition algorithm to recognize each part of a face and the region where each part is located from an image, and the embodiment of the present invention does not limit this. Any feasible adjustment algorithm can be adopted in the embodiment of the invention to adjust the face, and the embodiment of the invention does not limit the adjustment algorithm.
In one case, in order to make the adjusted result more humanized and more aesthetically pleasing to the user, the target adjustment parameters corresponding to the skin color features may include target skin color adjustment sub-parameters corresponding to skin color features of different parts of the human face. For example: the target skin color adjustment sub-parameter corresponding to the forehead, the target skin color adjustment sub-parameter corresponding to the cheek, the target skin color adjustment sub-parameter corresponding to the nose and the like can be included. And then the electronic equipment can perform first adjustment processing on the skin color of each part of the face in the image to be processed based on the target skin color adjustment sub-parameters corresponding to different parts.
In another implementation, in order to better provide an image processing service for the user and improve the user experience, a first adjustment process may also be performed on the skin color of the user's exposed skin identified in the image, for example on the skin color of the user's neck and arms, so as to match the adjusted skin color of the face.
S105: and displaying the image after the first adjustment processing.
In the embodiment of the present invention, after the first adjustment processing is performed on the face in the image to be processed, the image to be processed including the adjusted face, that is, the image after the first adjustment processing, may be displayed to the user, so that the user obtains the image after the first adjustment processing.
When the electronic device is a user terminal, the electronic device can display the image after the first adjustment processing directly through its own display screen. When the electronic device is a server, the electronic device may send the image after the first adjustment processing to the corresponding terminal, so that the corresponding terminal displays it through its display screen. The corresponding terminal is the terminal that sent the image to be processed containing the face to the electronic device (server).
In the embodiment of the invention, target processing effect information, that is, target adjustment parameters, corresponding to the face features of the face can be determined based on the face features identified in the image to be processed, and the face in the image to be processed is adjusted based on the determined target processing effect information. In this way, corresponding target adjustment parameters are provided based on the face features of the face in the image, the adjusted face image better conforms to the user's aesthetic preference, and the user experience is improved. Furthermore, different beauty effects can be provided for images containing different face features, so that users whose faces have different face features each obtain a face image that better conforms to their own aesthetic preference; the image adjustment effect is therefore better and the user's perception is improved.
In one implementation, different users may have different aesthetic preferences: some users may feel that the image after one-key beauty still does not conform to their own aesthetic preference and are used to manually adjusting the image so that the face conforms to it. To improve the experience of users who are not satisfied with the one-key beauty result and are used to adjusting images manually, the electronic device can generate historical adjustment information corresponding to the user according to the user's adjustment habits. Then, after obtaining the image to be processed, the electronic device may first judge whether there is historical adjustment information for the user's face, and when it is judged that there is no historical adjustment information for the face, continue to perform the step of identifying the face features of the face in the image to be processed as the face features to be processed. Specifically, as shown in fig. 2, the method may include the following steps:
s201: obtaining an image to be processed;
s202: judging whether historical adjustment information aiming at the face exists or not, and executing S203 when judging that the historical adjustment information aiming at the face does not exist;
wherein the historical adjustment information is: information on adjustments made by the user to the face in historical images;
s203: identifying the face features of the face in the image to be processed as the face features to be processed;
s204: determining target processing effect information based on the human face features to be processed and a preset corresponding relation;
wherein, the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
s205: performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
s206: and displaying the image after the first adjustment processing.
Wherein S201 is the same as S101 shown in fig. 1, S203 is the same as S102 shown in fig. 1, S204 is the same as S103 shown in fig. 1, S205 is the same as S104 shown in fig. 1, and S206 is the same as S105 shown in fig. 1.
In another implementation, as shown in fig. 2, when it is determined that there is history adjustment information for a face, the method may further include:
s207: and performing second adjustment processing on the face in the image to be processed based on the historical adjustment information.
S208: and displaying the image after the second adjustment processing.
In one case, when the electronic device is a user terminal, the electronic device may locally store information on the user's adjustments to faces in images, and further generate the historical adjustment information based on that information. It can be understood that, when the electronic device is a user terminal, the user who adjusts a face in an image obtained by the electronic device will generally be the holder of the electronic device. Performing the second adjustment processing on the face in the image based on the historical adjustment information therefore generally produces a result that better suits the holder of the electronic device, that is, better conforms to the user's aesthetic preference. Specifically, after the electronic device obtains the image to be processed, it may first judge whether historical adjustment information for the face exists, and then determine the adjustment scheme based on the judgment result. For example: when it is judged that there is no historical adjustment information for the face, S203 is executed; when it is judged that there is historical adjustment information for the face, S207 is executed. After the second adjustment processing is performed on the face in the image to be processed based on the historical adjustment information, the image after the second adjustment processing is displayed. The display process for the image after the second adjustment processing is the same as that for the image after the first adjustment processing, and is not repeated here.
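For illustration only, the following Python sketch outlines this dispatch between the two branches of fig. 2; it is not part of the original disclosure, and all helper functions and the history store are assumptions.

```python
# Illustrative sketch (not part of the patent) of the dispatch between the two
# branches in fig. 2: if historical adjustment information for the face exists,
# apply it (second adjustment processing, S207); otherwise fall back to the
# feature-based first adjustment processing (S203-S205). All helper functions
# and the history store are assumed to exist elsewhere.
def process_image(image, history_store, user_id):
    history = history_store.get(user_id)                       # S202: look up historical adjustment info
    if history is not None:
        adjusted = apply_history_adjustment(image, history)    # S207: second adjustment processing
    else:
        features = extract_face_features(image)                # S203: identify face features
        effect = determine_target_effect(features)             # S204: determine target processing effect info
        adjusted = apply_first_adjustment(image, effect)       # S205: first adjustment processing
    return adjusted                                            # S206 / S208: display the result
```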
In another case, when the electronic device is a user terminal, the electronic device may store information of manual adjustment of a face in an image by a user in a server connected to the electronic device, and subsequently, the server may store information of adjustment of the face in the image by the user based on an identifier of the electronic device or an identifier of the user, and further generate history adjustment information based on information of adjustment of the face in the image by the user. The identifier of the electronic device may be information that can uniquely identify the electronic device, such as a model number and a serial number of the electronic device. The user identifier may be: when a user logs in functional software for implementing the image processing method provided by the embodiment of the invention, the used account number, nickname and the like can uniquely identify the identity information of the user.
In another case, when the electronic device is a server, the electronic device may store the information of the adjustment of the user to the face in the image based on an identifier of a user terminal that uploads the image to be processed or an identifier of the user, and further generate the history adjustment information based on the information of the adjustment of the user to the face in the image. The identifier of the user terminal may be information that can uniquely identify the user terminal, such as a model number and a serial number of the user terminal. The user identifier may be: when a user logs in functional software for implementing the image processing method provided by the embodiment of the invention, the used account number, nickname and the like can uniquely identify the identity information of the user.
The process of generating the history adjustment information based on the information of the adjustment of the face in the image by the user may be: when the adjustment frequency of the user for the same face feature is determined to exceed the preset frequency and the fluctuation range of all adjustment parameters corresponding to the face feature is within the preset range, calculating the average value of all adjustment parameters corresponding to the face feature, and taking the average value as historical adjustment information for the face feature.
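For illustration only, the following Python sketch shows how the historical adjustment information for one face feature could be generated from the recorded adjustments as described above; it is not part of the original disclosure, and the preset frequency and preset range values are placeholders.

```python
# Illustrative sketch (not part of the patent): generating the historical
# adjustment information for one face feature from the user's recorded manual
# adjustments. The preset frequency and preset range are placeholder values.
from typing import List, Optional

PRESET_FREQUENCY = 5   # assumed minimum number of manual adjustments
PRESET_RANGE = 0.1     # assumed maximum allowed fluctuation (max - min)

def history_value_for_feature(recorded_params: List[float]) -> Optional[float]:
    """Return the averaged adjustment parameter for a feature, or None.

    recorded_params: the adjustment parameters the user applied to the same
    face feature across historical images.
    """
    if len(recorded_params) <= PRESET_FREQUENCY:
        return None  # adjusted too rarely to infer a stable preference
    if max(recorded_params) - min(recorded_params) > PRESET_RANGE:
        return None  # the user's adjustments fluctuate too much
    # Stable preference: record the mean as the historical adjustment information.
    return sum(recorded_params) / len(recorded_params)
```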
It is to be understood that the history adjustment information may include a specific specified adjustment parameter, and the electronic device may perform the second adjustment process on the face in the image to be processed based on the specified adjustment parameter.
In one case, the historical adjustment information may include, but is not limited to: a specified adjustment parameter corresponding to the eye features of the face, that is, to the eye feature points; a specified adjustment parameter corresponding to the skin color feature of the face; a specified adjustment parameter corresponding to the face shape feature of the face, that is, to the face contour feature points (a specified adjustment parameter for the face thinning degree); and/or a specified adjustment parameter corresponding to the nose bridge feature of the face, that is, to the nose bridge feature points. The specified adjustment parameter may include a target value to be adjusted to, for example a value Z; it may include an adjustment amount, for example increasing by X or decreasing by Y; or it may include a proportional adjustment, for example increasing or decreasing by P percent.
In the embodiment of the invention, the second adjustment processing is performed on the face in the image to be processed based on the historical adjustment information for that face, so that personalized face adjustment is achieved: for each user, the adjustment processing, that is, the beauty processing, performed on images containing that user's face is tailored to that user, which improves the user experience and better increases the user's perception.
In one implementation, different users have different aesthetic preferences, and a user may be unsatisfied with the result of the one-key beautification, that is, with the image obtained after the first adjustment processing is performed on the face in the image to be processed based on the target processing effect information. To better serve users and meet their requirements, after the user obtains the image produced by the first adjustment processing, the user is allowed to manually adjust the face in that image; that is, the embodiment of the present invention can provide a manual image adjustment function. Specifically, after the step of displaying the image after the first adjustment processing, the method may further include:
receiving an adjusting instruction of a user for the face in the image after the first adjusting processing;
adjusting the face in the image after the first adjustment processing based on the adjustment instruction;
and recording adjustment information for the adjusted face.
The adjustment instruction may carry the face features to be adjusted and the adjustment parameters corresponding to those features. The face features may include, but are not limited to, eye features, skin color features, face shape features, and/or nose bridge features. The electronic device may adjust the face in the image after the first adjustment processing based on the adjustment parameters corresponding to the features to be adjusted. For example: when the features to be adjusted include eye features, the eye feature points of the face in the image after the first adjustment processing are adjusted; when they include skin color features, the skin color of the face is adjusted; when they include face shape features, the face contour feature points are adjusted; and when they include nose bridge features, the nose bridge feature points are adjusted.
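Dispatching an adjustment instruction to per-feature adjustment routines, as described above, might be organized as in the following sketch; the feature names and the adjuster callables are illustrative assumptions.

    def apply_manual_adjustment(image, instruction, adjusters):
        """Dispatch the features carried by an adjustment instruction to the
        corresponding adjustment routines.

        instruction: maps feature names to adjustment parameters.
        adjusters:   maps the same names to callables, e.g.
                     {"eyes": adjust_eye_points, "skin": adjust_skin_color,
                      "face_shape": adjust_contour_points,
                      "nose_bridge": adjust_nose_points}."""
        for feature, parameter in instruction.items():
            adjust = adjusters.get(feature)
            if adjust is not None:
                image = adjust(image, parameter)   # each adjuster returns the updated image
        return image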
In one case, the user may adjust the face in the image after the first adjustment processing several times, that is, the electronic device may receive multiple adjustment instructions from the user for the face in the same image. To save storage space, the electronic device may record the adjustment information for the adjusted face only after detecting a save instruction from the user for the image containing the adjusted face. In that case the adjustment information is determined, at the time the save instruction is detected, at least from the adjustment result corresponding to the face in the image to be processed.
The adjustment result may include a feature value for each face feature. For example, the adjustment information may include: the size of the adjusted eyes, which can be identified by the distance between the upper and lower eyelids, that is, between the eye feature point of the upper eyelid and that of the lower eyelid; the adjusted skin color of the face, which can be identified by a pixel value or a brightness value; the adjusted face shape, which can be identified by the distance between the face contour feature points on the left and right sides of the face; and the adjusted height of the nose bridge, which can be identified by the distance between the nose bridge feature point and the nose wing feature point. Alternatively, the adjustment information may include, relative to the face in the image to be processed before adjustment (that is, before the first or second adjustment processing), the adjusted values corresponding to the eyes, the skin color, the face shape, the nose bridge, and so on.
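The landmark-based feature values listed above (eyelid distance, contour width, nose bridge height) could be computed as in the following sketch; skin color would instead be read from pixel or brightness values, and the landmark key names are assumptions.

    import math

    def _distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def adjustment_result(landmarks):
        """Summarize an adjusted face by the feature values described above.

        `landmarks` maps illustrative key names to 2-D feature points."""
        return {
            # eye size: distance between upper- and lower-eyelid feature points
            "eye_size": _distance(landmarks["upper_eyelid"], landmarks["lower_eyelid"]),
            # face width: distance between left and right face contour feature points
            "face_width": _distance(landmarks["left_contour"], landmarks["right_contour"]),
            # nose bridge height: distance between nose bridge and nose wing feature points
            "nose_height": _distance(landmarks["nose_bridge"], landmarks["nose_wing"]),
        }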
In one implementation, when it is detected that a given user frequently performs manual adjustment on the face in the image after the first adjustment processing, and each time adjusts the value of a certain face feature into roughly the same range, it may be concluded that the manually adjusted face better matches that user's aesthetic preference. In this case, in order to automatically produce face images that match the user's preference, historical adjustment information for the face may be generated for the user when the number of manual adjustments and their results satisfy certain conditions, and that information is then used to adjust the face whenever an image containing the user's face is obtained again. Specifically: the adjustment instruction carries the face features to be adjusted and the adjustment parameters corresponding to those features;
after the step of recording the adjustment information for the face after the adjustment processing, the method may further include:
judging whether the adjustment frequency of a user for the same face feature exceeds a preset frequency or not, and judging whether the fluctuation range among all adjustment parameters corresponding to the face feature is within a preset range or not;
when it is determined that the user's adjustment frequency for the same face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to that feature is within the preset range, generating historical adjustment information for the face based on the adjustment information recorded for the adjusted face, where the historical adjustment information is information used to perform the second adjustment processing on the face in subsequently obtained images.
According to one implementation, when the electronic device determines that the adjustment frequency of a certain face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to that feature is within the preset range, it generates historical adjustment information for the face from that feature. For example, when the electronic device determines that the adjustment frequency of the eye features of the face exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to the eye features is within the preset range, it generates historical adjustment information for the face from the eye features. The historical adjustment information then includes a designated adjustment parameter for the eye features, determined from all the adjustment parameters recorded for them; in one case, the average of those adjustment parameters is used as the designated adjustment parameter. The process is the same for the other face features and is not repeated here.
In another implementation, the electronic device generates the historical adjustment information for the face only when the adjustment frequency of every face feature exceeds the preset frequency and the fluctuation range among all adjustment parameters corresponding to every face feature is within the preset range. In this case the historical adjustment information includes designated adjustment parameters for all the face features.
In one implementation, the same user may have different aesthetic preferences in different periods. When it is determined that historical adjustment information for the face exists, it may first be determined whether that information has existed for too long, and the second adjustment processing is performed on the face based on the historical adjustment information only when it has not. In one case, before the step of performing the second adjustment processing on the face in the image to be processed based on the historical adjustment information, the method may further include:
acquiring generation time of historical adjustment information and current time;
judging whether the difference value of the current time and the generation time exceeds a preset time length or not;
and when it is determined that the difference does not exceed the preset duration, executing the step of performing the second adjustment processing on the face in the image to be processed based on the historical adjustment information.
The preset duration may be a default set by the electronic device, or may be set by the user according to his or her own habits. The current time may be the time at which the electronic device obtains the image to be processed.
How long the historical adjustment information has existed can be determined from the current time and its generation time: when the difference between the current time and the generation time does not exceed the preset duration, the historical adjustment information can be considered not to have existed for too long; when the difference exceeds the preset duration, it can be considered to have existed for too long.
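A minimal freshness check following this rule might look like the sketch below; the 30-day default preset duration is purely illustrative.

    import time

    def history_is_fresh(generation_time, current_time=None,
                         preset_duration=30 * 24 * 3600):
        """True when the historical adjustment information has not existed too long.
        The 30-day default duration is purely illustrative."""
        if current_time is None:
            current_time = time.time()   # e.g. when the image to be processed is obtained
        return (current_time - generation_time) <= preset_duration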
In one implementation, when it is determined that the historical adjustment information has existed for too long, the second adjustment processing may not be performed on the face in the image to be processed directly based on that information. Specifically, the method may further include:
when the preset time length is judged to be exceeded, outputting inquiry prompt information to inquire whether a user adjusts the face by using historical adjustment information;
and adjusting the face in the image to be processed based on the selection result of the user.
When the user's selection is yes, the second adjustment processing may be performed on the face in the image to be processed based on the historical adjustment information. When the user's selection is no, the step of recognizing the face features of the face in the image to be processed as the face features to be processed may be executed instead.
In another implementation, the electronic device may increment a counter each time the step of performing the second adjustment processing on the face in the image to be processed based on the historical adjustment information is executed. Before executing that step, the electronic device determines whether the counter has reached a preset count; when it has, the historical adjustment information is considered to have existed for too long. At that point, query prompt information may be output to ask the user whether to adjust the face using the historical adjustment information, and the face in the image to be processed is then adjusted according to the user's selection.
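The counter-based variant might be sketched as follows; the preset count of 20 is an arbitrary placeholder.

    class HistoryUsageCounter:
        """Alternative staleness test: count how often the historical adjustment
        information has been applied; the preset count of 20 is a placeholder."""

        def __init__(self, preset_count=20):
            self.preset_count = preset_count
            self.count = 0

        def record_use(self):
            """Call each time the second adjustment processing is performed."""
            self.count += 1

        def should_ask_user(self):
            """True once the history has been applied preset_count times."""
            return self.count >= self.preset_count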
In one implementation, the face features to be processed include: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information may include a target adjustment parameter corresponding to a target skin color feature, a target adjustment parameter corresponding to a target eye feature, and/or a target adjustment parameter corresponding to a target facial form feature;
the step of performing the first adjustment processing on the face in the image to be processed based on the target processing effect information may include:
when the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, performing skin color adjustment processing on the skin color feature to be processed by using the target adjustment parameter corresponding to the target skin color feature;
when the target processing effect information contains target adjustment parameters corresponding to the target eye characteristics, performing eye adjustment processing on the eye characteristic points to be processed by using the target adjustment parameters corresponding to the target eye characteristics;
and when the target processing effect information contains target adjustment parameters corresponding to the target facial form features, performing facial form adjustment processing on the face contour feature points to be processed by using the target adjustment parameters corresponding to the target facial form features.
In the embodiment of the present invention, the electronic device can identify the parts of the face within the face region of the image to be processed and determine the region of each part, and can then perform the first adjustment processing on the different parts based on the target processing effect information. In one implementation, when the target processing effect information includes a target adjustment parameter corresponding to the target skin color feature, that parameter may include the pixel value or brightness value to which each part of the face needs to be adjusted. Accordingly, in the first adjustment processing the electronic device adjusts the pixel value or brightness value of each part of the face in the image to be processed to the value specified for that part in the target adjustment parameter corresponding to the target skin color feature.
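Assuming the image is held as an RGB array and each facial part has a boolean region mask, the brightness adjustment described above might be sketched as follows; the additive shift toward a target mean brightness is one simple realization, not the only one.

    import numpy as np

    def adjust_region_brightness(image, region_mask, target_brightness):
        """Shift one facial region toward the brightness value given by the target
        skin color adjustment parameter.

        image:       H x W x 3 uint8 array.
        region_mask: boolean H x W array marking the facial part."""
        out = image.astype(np.float32)
        current = out[region_mask].mean()                  # current mean brightness of the part
        out[region_mask] += target_brightness - current    # simple additive shift
        return np.clip(out, 0, 255).astype(np.uint8)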
In one implementation, when the target processing effect information includes a target adjustment parameter corresponding to the target eye feature, that parameter may include an adjustment proportion for the eyes, for example enlarging the eyes by five percent. The eye feature points to be processed corresponding to the eyes may be the edge points of the eyes (including the upper and lower eyelids) of the face in the image to be processed. The electronic device calculates the size to which the eyes need to be adjusted from the current eye size and the adjustment proportion, determines the positions to which the eye feature points to be processed need to be moved, and then moves them to those positions, thereby realizing the eye adjustment processing of the eye feature points to be processed.
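Computing the target positions of the eye feature points for a proportional enlargement might look like this sketch; scaling the points about the eye centre is an assumption, and the pixel warping that would accompany it is omitted.

    import numpy as np

    def enlarge_eye(eye_points, scale=1.05):
        """Target positions of the eye feature points for a proportional enlargement
        (scale=1.05 enlarges the eye by five percent).

        eye_points: N x 2 array of eyelid edge points."""
        pts = np.asarray(eye_points, dtype=np.float32)
        centre = pts.mean(axis=0)                 # approximate eye centre
        return centre + (pts - centre) * scale    # move points away from the centre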
In one implementation, when the target processing effect information includes a target adjustment parameter corresponding to the target face shape feature, that parameter may include an adjustment proportion for the face, for example thinning the face by ten percent. The electronic device calculates the size to which the face needs to be adjusted from the current face size and the adjustment proportion, determines the positions to which the face contour feature points to be processed need to be moved, and then moves them to those positions, thereby realizing the face shape adjustment processing of the face contour feature points to be processed.
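Similarly, the target positions of the face contour feature points for a face-thinning proportion might be computed as follows; moving the points toward the facial mid-line is an assumption of the sketch, and pixel warping is again omitted.

    import numpy as np

    def thin_face(contour_points, ratio=0.10):
        """Target positions of the face contour feature points for a face-thinning
        proportion (ratio=0.10 thins the face by ten percent).

        contour_points: N x 2 array of face contour points."""
        pts = np.asarray(contour_points, dtype=np.float32)
        mid_x = pts[:, 0].mean()                  # approximate facial mid-line
        thinned = pts.copy()
        thinned[:, 0] = mid_x + (thinned[:, 0] - mid_x) * (1.0 - ratio)
        return thinned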
In one implementation, when the target processing effect information includes target adjustment parameters for at least two of the target skin color feature, the target eye feature, and the target face shape feature, the first adjustment processing may be performed on the face in the image to be processed based on the parameters of those features either simultaneously or sequentially, one feature at a time.
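The sequential variant could be as simple as the following sketch; the feature names and the fixed ordering are illustrative.

    def first_adjustment(image, target_effect, adjusters):
        """Apply the target adjustment parameters one face feature at a time.

        target_effect: maps feature names to target adjustment parameters.
        adjusters:     maps the same names to adjustment routines."""
        for feature in ("skin_color", "eyes", "face_shape"):
            if feature in target_effect:
                image = adjusters[feature](image, target_effect[feature])
        return image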
Corresponding to the method embodiment described above, an embodiment of the present invention provides an image processing apparatus, as shown in fig. 3, the apparatus may include:
a first obtaining module 310, configured to obtain an image to be processed;
the recognition module 320 is configured to recognize a face feature of a face in the image to be processed as a face feature to be processed;
a determining module 330, configured to determine target processing effect information based on the facial features to be processed and a preset corresponding relationship, where the preset corresponding relationship includes: corresponding relations between different human face features and processing effect information;
a first adjusting module 340, configured to perform a first adjustment process on the face in the image to be processed based on the target processing effect information;
a first displaying module 350, configured to display the image after the first adjustment processing.
In the embodiment of the present invention, target processing effect information, that is, target adjustment parameters, corresponding to the different face features of a face can be determined based on the face features recognized in the image to be processed, and the face in the image is then adjusted based on the determined target processing effect information. In this way, the adjustment parameters applied to a face are chosen according to that face's own features, producing a face image that better matches the user's aesthetic preference and improving the user experience. Furthermore, different beautification effects can be provided for images containing different face features, so that users whose faces have different features each obtain adjusted images that better match their own preferences, which yields a better adjustment effect and improves user perception.
In one implementation, the face features to be processed include skin color features to be processed; the preset corresponding relationship comprises: the corresponding relation between different skin color characteristics and processing effect information;
the determining module 330 is specifically configured to:
determine the target processing effect information based on the correspondence between different skin color features and processing effect information and the skin color feature to be processed.
In one implementation, the preset correspondence includes: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information;
the determining module 330 is specifically configured to:
determine, as a target face category, the preset face category to which the face in the image to be processed belongs, based on the correspondence between different face features and preset face categories and the face features to be processed;
and determine, as the target processing effect information, the processing effect information corresponding to the target face category, based on the correspondence between different preset face categories and processing effect information and the target face category.
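As an illustration of the two-step lookup performed by the determining module, the following sketch assumes the first correspondence is stored as per-category reference feature values and that the nearest category is selected; both assumptions go beyond what the patent specifies.

    def determine_target_effect(face_features, category_references, category_effects):
        """Two-step lookup: pick the preset face category whose reference feature
        values are closest to the recognized features, then return the processing
        effect information stored for that category.

        face_features:       {feature name: value} recognized from the image.
        category_references: {category: {feature name: reference value}}.
        category_effects:    {category: processing effect information}."""
        def distance(category):
            reference = category_references[category]
            return sum(abs(face_features[name] - value) for name, value in reference.items())

        target_category = min(category_references, key=distance)
        return category_effects[target_category]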
In one implementation, the apparatus further comprises:
a first determining module, configured to determine, before the face features of the face in the image to be processed are recognized as the face features to be processed, whether there is historical adjustment information for the face, where the historical adjustment information is adjustment information for the user's face recorded from historical images;
when it is determined that there is no history adjustment information for the face, the recognition module 320 is triggered.
In one implementation, the apparatus further comprises:
a second determining module, configured to determine, before the face features of the face in the image to be processed are recognized as the face features to be processed, whether there is historical adjustment information for the face, where the historical adjustment information is adjustment information for the user's face recorded from historical images;
the second adjusting module is used for performing second adjusting processing on the human face in the image to be processed based on the historical adjusting information when the historical adjusting information aiming at the human face is judged to exist;
and the second display module is used for displaying the image after the second adjustment processing.
In one implementation, the apparatus further comprises:
the receiving module is used for receiving an adjusting instruction of a user for the face in the image after the first adjustment processing after the image after the first adjustment processing is displayed;
the third adjusting module is used for adjusting the face in the image after the first adjusting processing based on the adjusting instruction;
and the recording module is used for recording the adjustment information aiming at the adjusted human face.
In one implementation manner, the adjustment instruction carries a face feature to be adjusted and a manual adjustment parameter corresponding to the face feature to be adjusted;
the device further comprises:
a third judging module, configured to, after the adjustment information is recorded for the adjusted face, judge whether an adjustment frequency of a user for the same face feature exceeds a preset frequency, and judge whether a fluctuation range between all adjustment parameters corresponding to the face feature is within a preset range;
a generating module, configured to generate historical adjustment information for the face based on the adjustment information recorded for the adjusted face when the determination results indicate that the adjustment frequency of the user for the same face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to that feature is within the preset range, where the historical adjustment information is information for performing second adjustment processing on the face in a subsequently obtained image.
In one implementation, the apparatus further comprises:
a second obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
the fourth judging module is used for judging whether the difference value between the current time and the generating time exceeds a preset time length or not;
and when the preset time length is not exceeded, triggering the second adjusting module.
In one implementation, the apparatus further comprises:
a third obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
a fifth judging module, configured to judge whether a difference between the current time and the generation time exceeds a preset duration;
the output module is used for outputting inquiry prompt information when the preset time length is judged to be exceeded so as to inquire whether a user utilizes the historical adjustment information to carry out second adjustment processing on the face in the image to be processed;
and the fourth adjusting module is used for adjusting the human face in the image to be processed based on the selection result of the user.
In one implementation, the facial features to be processed include: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information comprises target adjustment parameters corresponding to target skin color characteristics, target adjustment parameters corresponding to target eye characteristics and/or target adjustment parameters corresponding to target face characteristics;
the first adjusting module 340 is specifically configured to:
when the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, perform skin color adjustment processing on the skin color feature to be processed by using that parameter;
when the target processing effect information contains a target adjustment parameter corresponding to the target eye feature, perform eye adjustment processing on the eye feature points to be processed by using that parameter;
and when the target processing effect information contains a target adjustment parameter corresponding to the target face shape feature, perform face shape adjustment processing on the face contour feature points to be processed by using that parameter.
Corresponding to the above method embodiments, an embodiment of the present invention further provides an electronic device, as shown in fig. 4, including a processor 410, a communication interface 420, a memory 430, and a communication bus 440, where the processor 410, the communication interface 420, and the memory 430 communicate with one another through the communication bus 440,
a memory 430 for storing computer programs;
the processor 410 is configured to execute the computer program stored in the memory 430 to implement any of the image processing method steps provided in the embodiments of the present invention, which may include the following steps:
obtaining an image to be processed;
recognizing the face features of the face in the image to be processed as the face features to be processed;
determining target processing effect information based on the human face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
and displaying the image after the first adjustment processing.
In the embodiment of the present invention, target processing effect information, that is, target adjustment parameters, corresponding to the different face features of a face can be determined based on the face features recognized in the image to be processed, and the face in the image is then adjusted based on the determined target processing effect information. In this way, the adjustment parameters applied to a face are chosen according to that face's own features, producing a face image that better matches the user's aesthetic preference and improving the user experience. Furthermore, different beautification effects can be provided for images containing different face features, so that users whose faces have different features each obtain adjusted images that better match their own preferences, which yields a better adjustment effect and improves user perception.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, any of the image processing method steps provided in the embodiments of the present invention is implemented, which may include the following steps:
obtaining an image to be processed;
recognizing the face features of the face in the image to be processed as the face features to be processed;
determining target processing effect information based on the human face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
and displaying the image after the first adjustment processing.
In the embodiment of the present invention, target processing effect information, that is, target adjustment parameters, corresponding to the different face features of a face can be determined based on the face features recognized in the image to be processed, and the face in the image is then adjusted based on the determined target processing effect information. In this way, the adjustment parameters applied to a face are chosen according to that face's own features, producing a face image that better matches the user's aesthetic preference and improving the user experience. Furthermore, different beautification effects can be provided for images containing different face features, so that users whose faces have different features each obtain adjusted images that better match their own preferences, which yields a better adjustment effect and improves user perception.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. An image processing method, characterized in that the method comprises:
obtaining an image to be processed;
recognizing the face features of the face in the image to be processed as the face features to be processed;
determining target processing effect information based on the human face features to be processed and a preset corresponding relation, wherein the preset corresponding relation comprises: corresponding relations between different human face features and processing effect information;
performing first adjustment processing on the face in the image to be processed based on the target processing effect information;
displaying the image after the first adjustment processing;
after the step of displaying the image after the first adjustment processing, the method further comprises:
receiving an adjusting instruction of a user for the face in the image after the first adjusting processing;
adjusting the face in the image after the first adjustment processing based on the adjustment instruction;
recording adjustment information aiming at the adjusted human face;
the adjusting instruction carries the human face features to be adjusted and adjusting parameters corresponding to the human face features to be adjusted;
after the step of recording adjustment information for the adjusted face, the method further includes:
judging whether the adjustment frequency of a user for the same face feature exceeds a preset frequency or not, and judging whether the fluctuation range among all adjustment parameters corresponding to the face feature is within a preset range or not;
when the judgment result is that the adjustment frequency of the user for the same face feature exceeds a preset frequency and the judgment result is that the fluctuation range of all adjustment parameters corresponding to the face feature is within a preset range, generating historical adjustment information for the face based on the adjustment information recorded for the adjusted face, wherein the historical adjustment information is information for performing second adjustment processing on the face in a subsequently obtained image.
2. The method of claim 1, wherein the facial features to be processed comprise skin color features to be processed; the preset corresponding relationship comprises: the corresponding relation between different skin color characteristics and processing effect information;
the step of determining target processing effect information based on the human face features to be processed and the preset corresponding relation comprises the following steps:
and determining target processing effect information based on the corresponding relation between different skin color characteristics and the processing effect information and the skin color characteristics to be processed.
3. The method according to claim 1, wherein the preset correspondence comprises: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information;
the step of determining target processing effect information based on the human face features to be processed and the preset corresponding relation comprises the following steps:
determining a preset face class to which the face belongs in the image to be processed as a target face class based on the corresponding relation between different face features and preset face classes and the face features to be processed;
and determining processing effect information corresponding to the target face type as target processing effect information based on the corresponding relation between different preset face types and the processing effect information and the target face type.
4. The method according to claim 1, wherein before the step of identifying the face feature of the face in the image to be processed as the face feature to be processed, the method further comprises:
judging whether historical adjustment information for the face exists, wherein the historical adjustment information is adjustment information for the user's face recorded from historical images;
and when judging that the historical adjustment information aiming at the human face does not exist, executing the step of identifying the human face characteristics of the human face in the image to be processed as the human face characteristics to be processed.
5. The method according to claim 1, wherein before the step of identifying the face feature of the face in the image to be processed as the face feature to be processed, the method further comprises:
judging whether historical adjustment information for the face exists, wherein the historical adjustment information is adjustment information for the user's face recorded from historical images;
when judging that historical adjustment information aiming at the face exists, performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
and displaying the image after the second adjustment processing.
6. The method according to claim 5, wherein before the step of performing the second adjustment processing on the face in the image to be processed based on the history adjustment information, the method further comprises:
acquiring the generation time of the historical adjustment information and the current time;
judging whether the difference value of the current time and the generation time exceeds a preset time length or not;
and when the preset time length is not exceeded, executing the step of carrying out second adjustment processing on the human face in the image to be processed based on the historical adjustment information.
7. The method according to claim 5, wherein before the step of performing the second adjustment processing on the face in the image to be processed based on the history adjustment information, the method further comprises:
acquiring the generation time of the historical adjustment information and the current time;
judging whether the difference value of the current time and the generation time exceeds a preset time length or not;
when the preset time length is judged to be exceeded, outputting inquiry prompt information to inquire whether a user utilizes the historical adjustment information to carry out second adjustment processing on the face in the image to be processed;
and adjusting the human face in the image to be processed based on the selection result of the user.
8. The method according to any one of claims 1 to 7, wherein the facial features to be processed comprise: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information comprises target adjustment parameters corresponding to target skin color characteristics, target adjustment parameters corresponding to target eye characteristics and/or target adjustment parameters corresponding to target face characteristics;
the step of performing the first adjustment processing on the face in the image to be processed based on the target processing effect information includes:
when the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, performing skin color adjustment processing on the skin color feature to be processed by using the target adjustment parameter corresponding to the target skin color feature;
when the target processing effect information contains a target adjusting parameter corresponding to the target eye feature, performing eye adjusting processing on the eye feature point to be processed by using the target adjusting parameter corresponding to the target eye feature;
and when the target processing effect information contains target adjustment parameters corresponding to the target face shape features, performing face shape adjustment processing on the face contour feature points to be processed by using the target adjustment parameters corresponding to the target face shape features.
9. An image processing apparatus, characterized in that the apparatus comprises:
the first obtaining module is used for obtaining an image to be processed containing a human face;
the recognition module is used for recognizing the face features of the face in the image to be processed as the face features to be processed;
a determining module, configured to determine target processing effect information based on the facial features to be processed and a preset corresponding relationship, where the preset corresponding relationship includes: corresponding relations between different human face features and processing effect information;
the first adjusting module is used for performing first adjusting processing on the face in the image to be processed based on the target processing effect information;
the first display module is used for displaying the image after the first adjustment processing;
the device further comprises:
the receiving module is used for receiving an adjusting instruction of a user for the face in the image after the first adjustment processing after the image after the first adjustment processing is displayed;
the third adjusting module is used for adjusting the face in the image after the first adjusting processing based on the adjusting instruction;
the recording module is used for recording adjustment information aiming at the adjusted human face;
the adjusting instruction carries the human face features to be adjusted and adjusting parameters corresponding to the human face features to be adjusted;
the device further comprises:
a third judging module, configured to, after the adjustment information is recorded for the adjusted face, judge whether an adjustment frequency of a user for the same face feature exceeds a preset frequency, and judge whether a fluctuation range between all adjustment parameters corresponding to the face feature is within a preset range;
a generating module, configured to generate historical adjustment information for the face based on the adjustment information recorded for the adjusted face when the determination results indicate that the adjustment frequency of the user for the same face feature exceeds the preset frequency and that the fluctuation range among all adjustment parameters corresponding to the face feature is within the preset range, wherein the historical adjustment information is information for performing second adjustment processing on the face in a subsequently obtained image.
10. The apparatus of claim 9, wherein the to-be-processed face features comprise to-be-processed skin color features; the preset corresponding relationship comprises: the corresponding relation between different skin color characteristics and processing effect information;
the determining module is specifically configured to:
determine the target processing effect information based on the corresponding relation between different skin color characteristics and the processing effect information and the skin color characteristics to be processed.
11. The apparatus of claim 9, wherein the predetermined correspondence comprises: the corresponding relation between different human face characteristics and the preset human face categories and the corresponding relation between different preset human face categories and the processing effect information;
the determining module is specifically configured to:
determine, as a target face category, the preset face category to which the face in the image to be processed belongs, based on the corresponding relation between different face features and preset face categories and the face features to be processed;
and determine, as the target processing effect information, the processing effect information corresponding to the target face category, based on the corresponding relation between different preset face categories and the processing effect information and the target face category.
12. The apparatus of claim 9, further comprising:
a first determining module, configured to determine whether there is historical adjustment information for a face before the face feature of the face in the image to be processed is identified as the face feature to be processed, where the historical adjustment information is: adjusting information according to the face of the user in the historical image;
and when judging that the historical adjustment information aiming at the human face does not exist, triggering the identification module.
13. The apparatus of claim 9, further comprising:
a second determining module, configured to determine whether there is historical adjustment information for a face before the face feature of the face in the image to be processed is identified as the face feature to be processed, where the historical adjustment information is: adjusting information according to the face of the user in the historical image;
the second adjusting module is used for performing second adjusting processing on the human face in the image to be processed based on the historical adjusting information when the historical adjusting information aiming at the human face is judged to exist;
and the second display module is used for displaying the image after the second adjustment processing.
14. The apparatus of claim 13, further comprising:
a second obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
the fourth judging module is used for judging whether the difference value between the current time and the generating time exceeds a preset time length or not;
and when the preset time length is not exceeded, triggering the second adjusting module.
15. The apparatus of claim 13, further comprising:
a third obtaining module, configured to obtain generation time of the historical adjustment information and current time before performing second adjustment processing on the face in the image to be processed based on the historical adjustment information;
a fifth judging module, configured to judge whether a difference between the current time and the generation time exceeds a preset duration;
the output module is used for outputting inquiry prompt information when the preset time length is judged to be exceeded so as to inquire whether a user utilizes the historical adjustment information to carry out second adjustment processing on the face in the image to be processed;
and the fourth adjusting module is used for adjusting the human face in the image to be processed based on the selection result of the user.
16. The apparatus according to any one of claims 9-15, wherein the facial features to be processed comprise: skin color features to be processed, eye feature points to be processed corresponding to the eyes and/or face contour feature points to be processed;
the target processing effect information comprises target adjustment parameters corresponding to target skin color characteristics, target adjustment parameters corresponding to target eye characteristics and/or target adjustment parameters corresponding to target face characteristics;
the first adjusting module is specifically configured to:
when the target processing effect information contains a target adjustment parameter corresponding to the target skin color feature, perform skin color adjustment processing on the skin color feature to be processed by using that parameter;
when the target processing effect information contains a target adjustment parameter corresponding to the target eye feature, perform eye adjustment processing on the eye feature points to be processed by using that parameter;
and when the target processing effect information contains a target adjustment parameter corresponding to the target face shape feature, perform face shape adjustment processing on the face contour feature points to be processed by using that parameter.
17. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the image processing method steps of any one of claims 1 to 8 when executing the computer program stored in the memory.
18. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the image processing method steps of any one of claims 1 to 8.
CN201810966031.0A 2018-08-23 2018-08-23 Image processing method and device and electronic equipment Active CN109255761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810966031.0A CN109255761B (en) 2018-08-23 2018-08-23 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109255761A CN109255761A (en) 2019-01-22
CN109255761B true CN109255761B (en) 2021-06-25

Family

ID=65050356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810966031.0A Active CN109255761B (en) 2018-08-23 2018-08-23 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109255761B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872273A (en) * 2019-02-26 2019-06-11 上海上湖信息技术有限公司 A kind of image processing method and device
CN110211063B (en) * 2019-05-20 2021-06-08 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and system
CN112785488A (en) * 2019-11-11 2021-05-11 宇龙计算机通信科技(深圳)有限公司 Image processing method and device, storage medium and terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966267A (en) * 2015-07-02 2015-10-07 广东欧珀移动通信有限公司 User image beautifying method and apparatus
CN105096241A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Face image beautifying device and method
CN105530435A (en) * 2016-02-01 2016-04-27 深圳市金立通信设备有限公司 Shooting method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9684987B1 (en) * 2015-02-26 2017-06-20 A9.Com, Inc. Image manipulation for electronic display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201123

Address after: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing 100123

Applicant after: Beijing LEMI Technology Co.,Ltd.

Address before: 100123 Building 8, Huitong Times Square, 1 South Road, Chaoyang District, Beijing.

Applicant before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230831

Address after: 3870A, 3rd Floor, Building 4, Courtyard 49, Badachu Road, Shijingshan District, Beijing, 100144

Patentee after: Beijing Jupiter Technology Co.,Ltd.

Address before: 100123 room 115, area C, 1st floor, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee before: Beijing LEMI Technology Co.,Ltd.