CN111639216A - Display method and device of face image, computer equipment and storage medium - Google Patents

Display method and device of face image, computer equipment and storage medium

Info

Publication number
CN111639216A
CN111639216A (application CN202010504375.7A)
Authority
CN
China
Prior art keywords
face image
face
state
score
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010504375.7A
Other languages
Chinese (zh)
Inventor
孙红亮
王子彬
李炳泽
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010504375.7A priority Critical patent/CN111639216A/en
Publication of CN111639216A publication Critical patent/CN111639216A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/538Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The present disclosure provides a method and an apparatus for displaying a face image, a computer device, and a storage medium. The method includes: acquiring, from a database, a second face image matched with an acquired first face image; determining a first avatar score of the first face image and a second avatar score of the second face image; determining a target face image from the first face image and the second face image based on the first avatar score and the second avatar score; and displaying the target face image. In this way, the face image with the higher avatar score can be selected from the first face image and the second face image for display, which improves the display effect of the face image.

Description

Display method and device of face image, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for displaying a face image, a computer device, and a storage medium.
Background
There are many scenarios that involve presenting a user's face image to the public. The face image currently presented is typically a photographed image. However, the quality of the captured face image may vary with the shooting angle or the state of the user. For example, only the side of the face may be captured; the user may be photographed with or without makeup, or in good or poor health; even the shooting light can affect the captured face image. As a result, the display effect of the face image is poor.
Disclosure of Invention
The embodiment of the disclosure at least provides a display method and device of a face image, computer equipment and a storage medium.
The present disclosure generally includes the following aspects:
in a first aspect, an embodiment of the present disclosure provides a display method of a face image, where the display method includes:
acquiring a second face image matched with the first face image from a database on the basis of the acquired first face image;
determining a first avatar score of the first facial image and a second avatar score of the second facial image;
determining a target face image from the first face image and the second face image based on the first avatar score and the second avatar score;
and displaying the target face image.
In an alternative embodiment, the determining the first avatar score for the first face image includes:
carrying out face state detection on the face included in the first face image to obtain state information corresponding to each face state in at least one face state;
determining a state score corresponding to each face state based on the state information corresponding to each face state;
and obtaining a first avatar score of the first face image based on the state score of each face state in the at least one face state.
In an alternative embodiment, the face state includes at least one of: eye pose, mouth pose, head pose, degree of occlusion, and expression.
In an optional embodiment, for a case that the face state includes an eye pose, the performing face state detection on the face included in the first face image includes: detecting eye postures of the human faces included in the first human face image to obtain state information corresponding to the eye postures; the state information corresponding to the eye posture comprises: degree of eye opening and closing;
for a case that the face state includes a mouth pose, the performing face state detection on the face included in the first face image includes: performing mouth-corner tilt detection on the face included in the first face image, and determining state information corresponding to the mouth pose based on a result of the mouth-corner tilt detection;
for a case that the face state includes a head pose, the performing face state detection on the face included in the first face image includes: performing head posture detection on the face in the first face image to obtain state information corresponding to the head posture; the state information corresponding to the head pose includes: the pitch angle and yaw angle of the human face;
for the case that the face state includes the occlusion degree, the detecting the face state of the face included in the first face image includes:
performing key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image respectively; and determining the state information corresponding to the shielding degree based on the key point detection results respectively corresponding to the plurality of face key points.
In an optional implementation, the display method further includes:
performing quality detection on the first face image to obtain a quality detection score of the first face image;
the obtaining a first avatar score of the first face image based on the state score of each of the at least one face state comprises:
and obtaining a first avatar score of the first face image based on the state score of each face state in the at least one face state and the quality detection score.
In an alternative embodiment, the quality check includes at least one of: brightness detection, blur level detection, and resolution detection.
In an alternative embodiment, the determining the second avatar score of the second face image includes:
reading, from the database, a second avatar score corresponding to the second face image.
In an optional implementation, the display method further includes:
in a case that no second face image matching the first face image exists in the database, displaying the first face image.
In an optional implementation, the display method further includes:
and under the condition that the target face image is the first face image, storing the first face image into the database, and deleting the second face image stored in the database.
In a second aspect, an embodiment of the present disclosure further provides a display device for a face image, where the display device includes:
the acquisition module is used for acquiring a second face image matched with the first face image from a database based on the acquired first face image;
a first determination module for determining a first avatar score of the first face image and a second avatar score of the second face image;
a second determination module for determining a target face image from the first face image and the second face image based on the first avatar score and the second avatar score;
and the first display module is used for displaying the target face image.
In an alternative embodiment, the first determining module includes:
the detection unit is used for detecting the face state of the face included in the first face image to obtain state information corresponding to each face state in at least one face state;
a first determination unit configured to determine a state score corresponding to each face state based on the state information corresponding to each face state;
and the second determining unit is used for obtaining a first avatar score of the first face image based on the state score of each face state in the at least one face state.
In an alternative embodiment, the face state includes at least one of: eye pose, mouth pose, head pose, degree of occlusion, and expression.
In an optional implementation manner, for a case that the face state includes an eye pose, the detection unit is specifically configured to: detecting eye postures of the human faces included in the first human face image to obtain state information corresponding to the eye postures; the state information corresponding to the eye posture comprises: degree of eye opening and closing;
for a case that the face state includes a mouth pose, the detection unit is specifically configured to: perform mouth-corner tilt detection on the face included in the first face image, and determine state information corresponding to the mouth pose based on a result of the mouth-corner tilt detection;
for a case that the face state includes a head pose, the detection unit is specifically configured to: performing head posture detection on the face in the first face image to obtain state information corresponding to the head posture; the state information corresponding to the head pose includes: the pitch angle and yaw angle of the human face;
for the case that the face state includes the occlusion degree, the detection unit is specifically configured to: performing key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image respectively; and determining the state information corresponding to the shielding degree based on the key point detection results respectively corresponding to the plurality of face key points.
In an optional implementation, the first determining module is further configured to: performing quality detection on the first face image to obtain a quality detection score of the first face image;
the second determining unit is specifically configured to:
and obtaining a first avatar score of the first face image based on the state score of each face state in the at least one face state and the quality detection score.
In an alternative embodiment, the quality check includes at least one of: brightness detection, blur level detection, and resolution detection.
In an optional implementation, the first determining module is further configured to:
and reading, from the database, a second avatar score corresponding to the second face image.
In an alternative embodiment, the display device further comprises:
and the second display module is used for displaying the first face image in a case that no second face image matching the first face image exists in the database.
In an alternative embodiment, the display device further comprises:
and the storage module is used for storing the first face image into the database and deleting the second face image stored in the database under the condition that the target face image is the first face image.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, and a memory storing machine-readable instructions executable by the processor; when the computer device runs, the machine-readable instructions are executed by the processor to perform the steps of the first aspect above, or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect above, or of any possible implementation of the first aspect.
According to the display method and apparatus, the computer device, and the storage medium for a face image described above, a second face image matching an acquired first face image can be obtained from a database; a target face image is then determined from the first face image and the second face image based on the determined first avatar score of the first face image and second avatar score of the second face image, and the target face image is displayed. In this way, the face image with the higher avatar score can be selected from the first face image and the second face image for display, which improves the display effect of the face image.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a presentation method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart for determining a first avatar score for a first face image in a presentation method provided by the disclosed embodiments;
FIG. 3 illustrates a schematic view of a display device provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a first determining module in a display apparatus according to an embodiment of the disclosure;
FIG. 5 shows a schematic view of another display device provided by embodiments of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research has shown that, in the prior art, most face images displayed on a screen are captured images, that is, face images acquired when a user checks in or clocks in. However, the effect of the captured face image may vary with the shooting angle or the condition of the person photographed. For example, only the side of the face may be captured; the person may be photographed with or without makeup, or in good or poor health; even the shooting light can affect the captured face image. The display effect of the face image is therefore relatively poor.
The display method and apparatus, the computer device, and the storage medium for a face image provided by the embodiments of the present disclosure can select the face image with the higher avatar score from a first face image and a second face image for display, thereby improving the display effect of the face image.
The above-mentioned drawbacks were identified by the inventors after practice and careful study; therefore, the discovery of the above problems and the solutions proposed below for them should all be regarded as contributions made by the inventors to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a display method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the display method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the presentation method may be implemented by a processor calling computer readable instructions stored in a memory.
The following describes a presentation method provided by the embodiment of the present disclosure by taking an execution subject as a user equipment as an example.
Referring to fig. 1, which is a flowchart of a display method provided in the embodiment of the present disclosure, the display method includes steps S101 to S104, where:
s101: acquiring a second face image matched with the first face image from a database on the basis of the acquired first face image;
s102: determining a first avatar score of the first facial image and a second avatar score of the second facial image;
s103: determining a target face image from the first face image and the second face image based on the first avatar score and the second avatar score;
s104: and displaying the target face image.
According to the embodiment of the present disclosure, a second face image matching an acquired first face image can be obtained from a database; a target face image is then determined from the first face image and the second face image based on the determined first avatar score of the first face image and second avatar score of the second face image, and the target face image is displayed. In this way, the face image with the higher avatar score can be selected from the first face image and the second face image for display, which improves the display effect of the face image.
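The four steps S101 to S104 can be sketched as a single pipeline. This is an illustrative sketch, not code from the disclosure; `match_fn` and `score_fn` are hypothetical placeholders for the database-matching and avatar-scoring steps, whose concrete implementations the disclosure leaves open.

```python
# Illustrative sketch of steps S101-S104. The matching and scoring
# functions are injected as parameters, since the disclosure does not fix
# their concrete implementations (recognition model, scoring rules, etc.).

def show_face_image(first_image, database, match_fn, score_fn):
    """Return the image to display: the higher-scoring of the captured
    first image and its database match (or the first image if no match)."""
    second_image = match_fn(first_image, database)   # S101: match from database
    if second_image is None:
        return first_image                           # no match: show as captured
    first_score = score_fn(first_image)              # S102: avatar scores
    second_score = score_fn(second_image)
    # S103: pick the higher-scoring image; S104: the caller displays it
    return first_image if first_score > second_score else second_image
```

For instance, with a scoring function that rates the captured image higher than the stored one, the captured image is the one displayed.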
Each of the above steps S101 to S104 is described in detail below.
First, regarding S101: taking the display of a face image in a check-in scenario as an example, the camera may be a dedicated camera, such as one attached to the terminal device or server that executes the check-in task, or an existing camera, such as a monitoring camera installed at the check-in site; the monitoring camera may be linked with the execution subject (the terminal device or server) to acquire the first face image.
When the execution subject of the embodiment of the present disclosure is a server, a camera controlled by the server transmits the acquired first face image to the server, which performs the subsequent image processing; the server may be a local server or a cloud server.
When the execution subject of the embodiment of the present disclosure is a terminal device, the terminal device may acquire the first face image with a camera installed on it. After the first face image is acquired, the terminal device may either process the face image directly, or send the first face image to a server; the server then determines whether a target visit record matching the first face image exists in the database and, after the matching is performed, returns the matching result to the terminal device.
Here, taking a server as an execution subject as an example, a detailed description is given to a specific process of performing matching in the embodiment of the present disclosure:
specifically, the first face image and the face images stored in the database are sequentially matched, and a second face image matched with the first face image is obtained from the database. Because a plurality of face images are stored in the database, when the first face image is acquired, the first face image can be subjected to image processing, and the method specifically comprises the following steps: and carrying out similarity detection on the first face image and the face images stored in the database, and if the similarity between any one of the face images and the first face image is greater than a preset similarity threshold value, taking the any one of the face images as a second face image matched with the first face image. The method for detecting the similarity can comprise the following steps: and comparing the first face image with the face images stored in the database by using a face recognition algorithm or a face recognition neural network model.
Second, regarding S102: the first avatar score of the first face image may be obtained by, for example, detecting the first face image.
Referring to fig. 2, fig. 2 is a flowchart illustrating determining a first avatar score of a first face image in a display method according to an embodiment of the disclosure, including the following steps S1021 to S1023, where:
s1021: carrying out face state detection on the face included in the first face image to obtain state information corresponding to each face state in at least one face state;
s1022: determining a state score corresponding to each face state based on the state information corresponding to each face state;
s1023: and obtaining a first image score of the first human face image based on the state score of each human face state in at least one human face state.
In S1021, the face state includes at least one of: eye pose, mouth pose, head pose, occlusion degree, and expression. The face state is detected to obtain the corresponding state information.
Specifically, for the case that the face state includes an eye posture, eye posture detection may be performed on the face included in the first face image, so as to obtain state information corresponding to the eye posture; the state information corresponding to the eye posture comprises: degree of eye opening and closing.
For a case where the face state includes a mouth pose, mouth-corner tilt detection may be performed on the face included in the first face image, and the state information corresponding to the mouth pose may be determined based on the result of the mouth-corner tilt detection;
for the condition that the face state comprises a head pose, performing head pose detection on the face in the first face image to obtain state information corresponding to the head pose; the state information corresponding to the head pose includes: the pitch angle and yaw angle of the human face;
for the condition that the face state comprises the shielding degree, performing key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image; and determining the state information corresponding to the shielding degree based on the key point detection results respectively corresponding to the plurality of face key points.
In S1022 to S1023, the state information corresponding to each of the obtained at least one face state is scored: the state score corresponding to each face state is determined, and the first avatar score of the first face image is obtained based on those state scores. For example, the state scores of the face states may be weighted and summed to obtain the first avatar score of the first face image, or they may be summed directly.
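The two combination rules just described (weighted sum and plain sum) can be sketched as one function; the particular weight values in the usage below are illustrative assumptions, not values given by the disclosure.

```python
# Sketch of S1022-S1023: combine per-state scores into a single avatar
# score, by weighted sum if weights are given, otherwise by plain sum.

def avatar_score(state_scores, weights=None):
    """state_scores: dict mapping face-state name -> state score.
    weights: optional dict of per-state weights (default weight 1.0)."""
    if weights is None:
        return sum(state_scores.values())
    return sum(score * weights.get(state, 1.0)
               for state, score in state_scores.items())
```

For example, `avatar_score({"eye": 5, "mouth": 3, "occlusion": 5})` performs the plain sum, while passing a weights dict emphasizes the states considered most important for display quality.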
In addition, the first avatar score may be determined in other specific ways, which are not limited herein.
Exemplarily, the face states involved in the above steps can be divided into several cases, which are exemplified as follows:
example one: for the opening and closing degree of the eyes, if the eyes of the photographer are closed, the posture of the eyes can be scored as 0; if the eye state of the photographer is squinting or glaring, the eye posture can be scored as 2 or 3; if the photographer's eye state is open, the eye posture can be scored as 5.
Example two: for the mouth-corner tilt, if the subject's mouth-corner tilt angle is greater than 10 degrees, the mouth pose may be scored as 0; if the tilt angle is less than 10 degrees but greater than 2 degrees, the mouth pose may be scored as 2 or 3; if the tilt angle is less than 2 degrees, the mouth pose may be scored as 5.
Example three: for the occlusion degree, if the corresponding key point detection result is complete occlusion, the occlusion degree may be scored as 0; if the result is partial occlusion, it may be scored as 2 or 3; if the result is no occlusion at all, it may be scored as 5. The specific values may be set according to the actual situation.
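The three threshold-based examples above can be sketched as simple scoring functions. The cut-offs (10 and 2 degrees; full/partial/no occlusion) come from the examples in the text; where the text allows a score of 2 or 3, a single illustrative value of 2 is used here.

```python
# Illustrative state-scoring functions matching examples one to three.
# The "2 or 3" mid-range scores from the text are collapsed to 2.

def score_eye_state(state):
    """'closed' -> 0, 'squinting'/'glaring' -> 2, 'open' -> 5."""
    return {"closed": 0, "squinting": 2, "glaring": 2, "open": 5}[state]

def score_mouth_tilt(degrees):
    """Mouth-corner tilt angle: >10 deg -> 0, 2..10 deg -> 2, <2 deg -> 5."""
    if degrees > 10:
        return 0
    if degrees > 2:
        return 2
    return 5

def score_occlusion(level):
    """Key-point result: 'full' -> 0, 'partial' -> 2, 'none' -> 5."""
    return {"full": 0, "partial": 2, "none": 5}[level]
```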
The above examples only provide a few practical ways, and should not be taken as a limitation on the way the status score is set according to the embodiments of the present disclosure.
In another embodiment of the present disclosure, quality detection may be performed on the first face image to obtain a quality detection score of the first face image, and the first avatar score of the first face image may then be obtained based on the state score of each of the at least one face state and the quality detection score.
Wherein the quality detection comprises at least one of: brightness detection, blur level detection, and resolution detection.
For example, a captured image may be dark because of lighting or shooting angle, affecting the image quality; the image may be blurred because the camera was not held steady; or face images of different resolutions may be acquired by different shooting devices. All of these factors affect the display effect of the first face image to some extent.
Therefore, the quality detection score of the first face image can be combined with the state scores of the face states to jointly determine the first avatar score of the first face image, improving the accuracy and reliability of the determined first avatar score.
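The combination of state scores and quality detection score can be sketched as below. The disclosure does not specify the combination rule, so a weighted sum with an assumed quality weight is used purely for illustration.

```python
# Hedged sketch of combining the quality-detection score with the state
# scores. The weighted-sum rule and the 0.5 default weight are assumptions,
# not values specified by the disclosure.

def combined_avatar_score(state_scores, quality_score, quality_weight=0.5):
    """Sum of the state scores plus a weighted quality-detection score."""
    return sum(state_scores.values()) + quality_weight * quality_score
```

The quality score here would come from brightness, blur, and resolution detection as listed above.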
In addition, the second avatar score may be obtained by reading, from the database, the second avatar score corresponding to the matched second face image, where the second face image is the highest-scoring face image among those captured previously.
In another embodiment, the same detection process applied to the first face image may also be applied directly to the second face image to obtain the second avatar score of the second face image.
Third, regarding S103: when the target face image is determined based on the acquired first avatar score and second avatar score, for example, the face image (the first face image or the second face image) with the higher score may be determined as the target face image.
Fourth, regarding S104: for example, the determined target face image may be displayed on a display screen installed at the check-in site; the target face image may also be displayed on the user's terminal device.
In another embodiment of the present disclosure, when the target face image is the first face image, the first face image is stored in the database and the second face image stored in the database is deleted, so as to update the user's face image stored in the database; in this way, the display effect of the user's face image gradually improves over multiple check-ins.
In another embodiment of the present disclosure, if no second face image matching the first face image exists in the database, the first face image is displayed directly.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written implies neither a strict execution order nor any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure also provide a display device corresponding to the display method. Since the principle by which the device solves the problem is similar to that of the display method described above, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, fig. 4 and fig. 5: fig. 3 is a schematic diagram of a display device provided in an embodiment of the present disclosure; fig. 4 is a schematic diagram of a first determining module in the display device provided in an embodiment of the present disclosure; fig. 5 is a schematic diagram of another display device provided in an embodiment of the present disclosure. The display device comprises: an obtaining module 310, a first determining module 320, a second determining module 330, and a first display module 340, wherein:
an obtaining module 310, configured to obtain, based on an obtained first face image, a second face image matched with the first face image from a database;
a first determining module 320, configured to determine a first appearance score of the first face image and a second appearance score of the second face image;
a second determining module 330, configured to determine a target face image from the first face image and the second face image based on the first appearance score and the second appearance score;
the first display module 340 is configured to display the target face image.
In an alternative embodiment, as shown in fig. 4, the first determining module 320 includes:
a detecting unit 321, configured to perform face state detection on a face included in the first face image, so as to obtain state information corresponding to each face state in at least one face state;
a first determining unit 322, configured to determine a state score corresponding to each face state based on the state information corresponding to each face state;
a second determining unit 323, configured to obtain a first appearance score of the first face image based on the state score of each of the at least one face state.
In an alternative embodiment, the face state includes at least one of: eye pose, mouth pose, head pose, degree of occlusion, and expression.
In an optional implementation, for a case that the face state includes an eye pose, the detecting unit 321 is specifically configured to: perform eye pose detection on the face included in the first face image to obtain state information corresponding to the eye pose, where the state information corresponding to the eye pose includes the degree of eye opening and closing;
for a case that the face state includes a mouth pose, the detecting unit 321 is specifically configured to: perform mouth corner inclination direction detection on the face included in the first face image, and determine the state information corresponding to the mouth pose based on the result of the mouth corner inclination direction detection;
for a case that the face state includes a head pose, the detecting unit 321 is specifically configured to: perform head pose detection on the face in the first face image to obtain state information corresponding to the head pose, where the state information corresponding to the head pose includes the pitch angle and yaw angle of the face;
for a case that the face state includes an occlusion degree, the detecting unit 321 is specifically configured to: perform key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image, respectively; and determine the state information corresponding to the occlusion degree based on the key point detection results respectively corresponding to the plurality of face key points.
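One possible reading of deriving the occlusion-degree state information from the key point detection results is sketched below, assuming each detection result carries a confidence value and treating low-confidence key points as occluded; the tuple layout and threshold are illustrative assumptions:

```python
# Hypothetical sketch: estimate an occlusion degree as the fraction of
# face key points whose detection confidence falls below a threshold.
# The (x, y, confidence) layout and the 0.5 threshold are assumptions.

def occlusion_degree(keypoint_results, conf_threshold=0.5):
    """keypoint_results: list of (x, y, confidence) tuples."""
    if not keypoint_results:
        return 1.0  # no key points detected: treat the face as fully occluded
    occluded = sum(1 for _, _, c in keypoint_results if c < conf_threshold)
    return occluded / len(keypoint_results)

# One of four key points is low-confidence, giving a degree of 0.25.
deg = occlusion_degree([(10, 12, 0.9), (30, 12, 0.2), (20, 40, 0.8), (20, 60, 0.95)])
```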
In an optional implementation, the first determining module 320 is further configured to: performing quality detection on the first face image to obtain a quality detection score of the first face image;
the second determining unit 323 is specifically configured to:
obtain a first appearance score of the first face image based on the state score of each face state in the at least one face state and the quality detection score.
In an alternative embodiment, the quality detection includes at least one of: brightness detection, blur degree detection, and resolution detection.
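The three detections listed can be illustrated with a rough sketch on a grayscale image represented as a list of pixel rows; the thresholds and the gradient-based blur proxy are illustrative assumptions, not the method of this disclosure:

```python
# Illustrative sketch of the three quality detections on a grayscale
# image given as a list of rows of 0-255 pixel values. All thresholds
# and the gradient-based sharpness proxy are assumptions.

def quality_checks(img, min_brightness=60, min_sharpness=5, min_pixels=64):
    h, w = len(img), len(img[0])
    flat = [p for row in img for p in row]
    brightness = sum(flat) / len(flat)                       # brightness detection
    # blur degree detection proxy: mean absolute horizontal gradient
    grads = [abs(row[i + 1] - row[i]) for row in img for i in range(w - 1)]
    sharpness = sum(grads) / len(grads)
    return {
        "bright_enough": brightness >= min_brightness,
        "sharp_enough": sharpness >= min_sharpness,
        "resolution_ok": h * w >= min_pixels,                # resolution detection
    }

# An 8x16 high-contrast test pattern passes all three assumed checks.
checks = quality_checks([[0, 200] * 8 for _ in range(8)])
```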
In an optional implementation, the first determining module 320 is further configured to:
read a second appearance score corresponding to the second face image from the database.
In an alternative embodiment, as shown in fig. 5, the display device further comprises:
a second display module 350, configured to display the first face image in a case that no second face image matching the first face image exists in the database.
In an alternative embodiment, the display device further comprises:
a storage module 360, configured to, if the target face image is the first face image, store the first face image in the database, and delete the second face image stored in the database.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and includes:
a processor 11 and a memory 12, where the memory 12 stores machine-readable instructions executable by the processor 11; when the computer device runs, the machine-readable instructions are executed by the processor 11 to perform the following steps:
acquiring, based on the acquired first face image, a second face image matching the first face image from a database;
determining a first appearance score of the first face image and a second appearance score of the second face image;
determining a target face image from the first face image and the second face image based on the first appearance score and the second appearance score;
and displaying the target face image.
In an alternative embodiment, in the instructions executed by the processor 11, determining the first appearance score of the first face image includes:
carrying out face state detection on the face included in the first face image to obtain state information corresponding to each face state in at least one face state;
determining a state score corresponding to each face state based on the state information corresponding to each face state;
obtaining a first appearance score of the first face image based on the state score of each face state in the at least one face state.
In an alternative embodiment, in the instructions executed by the processor 11, the face state includes at least one of: eye pose, mouth pose, head pose, occlusion degree, and expression.
In an alternative embodiment, in the instructions executed by the processor 11, for a case that the face state includes an eye pose, the performing face state detection on the face included in the first face image includes: performing eye pose detection on the face included in the first face image to obtain state information corresponding to the eye pose, where the state information corresponding to the eye pose includes the degree of eye opening and closing;
for a case that the face state includes a mouth pose, the performing face state detection on the face included in the first face image includes: performing mouth corner inclination direction detection on the face included in the first face image, and determining the state information corresponding to the mouth pose based on the result of the mouth corner inclination direction detection;
for a case that the face state includes a head pose, the performing face state detection on the face included in the first face image includes: performing head pose detection on the face in the first face image to obtain state information corresponding to the head pose, where the state information corresponding to the head pose includes the pitch angle and yaw angle of the face;
for a case that the face state includes an occlusion degree, the performing face state detection on the face included in the first face image includes:
performing key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image, respectively; and determining the state information corresponding to the occlusion degree based on the key point detection results respectively corresponding to the plurality of face key points.
In an optional implementation manner, in the instructions executed by the processor 11, the presentation method further includes:
performing quality detection on the first face image to obtain a quality detection score of the first face image;
the obtaining a first appearance score of the first face image based on the state score of each of the at least one face state includes:
obtaining a first appearance score of the first face image based on the state score of each face state in the at least one face state and the quality detection score.
In an alternative embodiment, in the instructions executed by the processor 11, the quality detection includes at least one of: brightness detection, blur degree detection, and resolution detection.
In an alternative embodiment, in the instructions executed by the processor 11, determining the second appearance score of the second face image includes:
reading a second appearance score corresponding to the second face image from the database.
In an optional implementation manner, in the instructions executed by the processor 11, the presentation method further includes:
displaying the first face image in a case that no second face image matching the first face image exists in the database.
In an optional implementation manner, in the instructions executed by the processor 11, the presentation method further includes:
in a case that the target face image is the first face image, storing the first face image in the database and deleting the second face image stored in the database.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the presentation method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the steps of the display method described in the above method embodiments, for which reference may be made to the above method embodiments; details are not described again here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A display method of a face image is characterized by comprising the following steps:
acquiring a second face image matched with the first face image from a database on the basis of the acquired first face image;
determining a first appearance score of the first face image and a second appearance score of the second face image;
determining a target face image from the first face image and the second face image based on the first appearance score and the second appearance score;
and displaying the target face image.
2. The presentation method according to claim 1, wherein the determining a first appearance score of the first face image comprises:
carrying out face state detection on the face included in the first face image to obtain state information corresponding to each face state in at least one face state;
determining a state score corresponding to each face state based on the state information corresponding to each face state;
and obtaining a first appearance score of the first face image based on the state score of each face state in the at least one face state.
3. The presentation method according to claim 2, wherein the face state comprises at least one of: eye pose, mouth pose, head pose, degree of occlusion, and expression.
4. The presentation method according to claim 3, wherein, for a case that the face state includes an eye pose, the performing face state detection on the face included in the first face image comprises: performing eye pose detection on the face included in the first face image to obtain state information corresponding to the eye pose, wherein the state information corresponding to the eye pose comprises the degree of eye opening and closing;
for a case that the face state includes a mouth pose, the performing face state detection on the face included in the first face image comprises: performing mouth corner inclination direction detection on the face included in the first face image, and determining the state information corresponding to the mouth pose based on the result of the mouth corner inclination direction detection;
for a case that the face state includes a head pose, the performing face state detection on the face included in the first face image comprises: performing head pose detection on the face in the first face image to obtain state information corresponding to the head pose, wherein the state information corresponding to the head pose comprises the pitch angle and yaw angle of the face;
for a case that the face state includes an occlusion degree, the performing face state detection on the face included in the first face image comprises:
performing key point detection on the first face image to obtain key point detection results corresponding to a plurality of face key points in the first face image, respectively; and determining the state information corresponding to the occlusion degree based on the key point detection results respectively corresponding to the plurality of face key points.
5. The presentation method according to any one of claims 2 to 4, further comprising:
performing quality detection on the first face image to obtain a quality detection score of the first face image;
the obtaining a first appearance score of the first face image based on the state score of each of the at least one face state comprises:
obtaining a first appearance score of the first face image based on the state score of each face state in the at least one face state and the quality detection score.
6. The presentation method according to claim 5, wherein the quality detection comprises at least one of: brightness detection, blur degree detection, and resolution detection.
7. The presentation method according to any one of claims 1 to 6, wherein the determining a second appearance score of the second face image comprises:
reading a second appearance score corresponding to the second face image from the database.
8. The presentation method according to any one of claims 1 to 7, further comprising:
displaying the first face image in a case that no second face image matching the first face image exists in the database.
9. The presentation method according to any one of claims 1 to 8, further comprising:
and under the condition that the target face image is the first face image, storing the first face image into the database, and deleting the second face image stored in the database.
10. A display device of face images, characterized in that the display device comprises:
the acquisition module is used for acquiring a second face image matched with the first face image from a database based on the acquired first face image;
a first determination module, configured to determine a first appearance score of the first face image and a second appearance score of the second face image;
a second determination module, configured to determine a target face image from the first face image and the second face image based on the first appearance score and the second appearance score;
and the first display module is used for displaying the target face image.
11. A computer device, comprising: a processor, a memory, said memory storing machine-readable instructions executable by said processor, said processor being configured to execute the machine-readable instructions stored in said memory, said machine-readable instructions, when executed by said processor, causing said processor to perform the steps of the method of presenting a face image according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which, when executed by a computer device, executes the steps of the method for presenting a face image according to any one of claims 1 to 9.
CN202010504375.7A 2020-06-05 2020-06-05 Display method and device of face image, computer equipment and storage medium Pending CN111639216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010504375.7A CN111639216A (en) 2020-06-05 2020-06-05 Display method and device of face image, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010504375.7A CN111639216A (en) 2020-06-05 2020-06-05 Display method and device of face image, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111639216A true CN111639216A (en) 2020-09-08

Family

ID=72332065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010504375.7A Pending CN111639216A (en) 2020-06-05 2020-06-05 Display method and device of face image, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111639216A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960974A (en) * 2017-12-22 2019-07-02 北京市商汤科技开发有限公司 Face critical point detection method, apparatus, electronic equipment and storage medium
WO2019128507A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium and electronic device
CN110232134A (en) * 2019-06-13 2019-09-13 上海商汤智能科技有限公司 Data-updating method, server and computer storage medium
CN110298310A (en) * 2019-06-28 2019-10-01 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王亚, 朱明, 刘成林: "基于CNN 的监控视频中人脸图像质量评估", 计算机系统应用, vol. 27, no. 11, pages 71 - 77 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination