CN108875638B - Face matching test method, device and system - Google Patents

Face matching test method, device and system

Info

Publication number
CN108875638B
CN108875638B (application CN201810634640.6A)
Authority
CN
China
Prior art keywords
face
face image
image
execution state
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810634640.6A
Other languages
Chinese (zh)
Other versions
CN108875638A (en)
Inventor
翟超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JD Digital Technology Holdings Co Ltd
Jingdong Technology Holding Co Ltd
Original Assignee
JD Digital Technology Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JD Digital Technology Holdings Co Ltd filed Critical JD Digital Technology Holdings Co Ltd
Priority to CN201810634640.6A
Publication of CN108875638A
Application granted
Publication of CN108875638B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Classification, e.g. identification

Abstract

The embodiment of the present application discloses an information processing method, apparatus, and system. One embodiment of the method comprises: acquiring a face image and feature information of a face contained in the face image; performing the following face matching step: searching a preset face image set, based on the feature information, for a preset face image that matches the face image, and, in response to determining that a matching preset face image is found, fusing the face image with the matched preset face image. The method further comprises writing execution state information of the face matching step into a log file, where the execution state information indicates whether the face matching step was executed successfully. According to this embodiment, the execution state of the face matching step can be determined from the log file, improving the accuracy of monitoring the step.

Description

Face matching test method, device and system
Technical Field
Embodiments of the present application relate to the field of computer technology, in particular to internet technology, and specifically to a face matching test method, apparatus, and system.
Background
Before face images are fused, several stages such as face acquisition and face matching are usually required. In the related art, testing the fusion of face images therefore often requires multiple test procedures.
Disclosure of Invention
The embodiment of the application provides a face matching test method, a face matching test device and a face matching test system.
In a first aspect, an embodiment of the present application provides a face matching test method, comprising: acquiring a face image and feature information of a face contained in the face image; performing the following face matching step: searching a preset face image set, based on the feature information, for a preset face image that matches the face image, and, in response to determining that a matching preset face image is found, fusing the face image with the matched preset face image. The method further comprises writing execution state information of the face matching step into a log file, where the execution state information indicates whether the face matching step was executed successfully.
In some embodiments, after writing the execution state information of the face matching step to the log file, the method further comprises: and in response to the fact that the face matching step is not successfully executed, determining the fault occurrence position of the face matching step based on the fault identification contained in the execution state information.
In some embodiments, obtaining the face image and the feature information of the face contained in it comprises: receiving a face image sent by a terminal device, together with feature information of the face detected in that image by the terminal device. After writing the execution state information of the face matching step into the log file, the method further comprises: in response to determining, based on the execution state information, that the face matching step was not successfully executed, sending a face detection instruction to the terminal device so that the terminal device performs face detection again.
In some embodiments, after writing the execution state information of the face matching step to the log file, the method further comprises: and generating and sending alarm information corresponding to the fault identification.
In some embodiments, the method further comprises: and responding to the determination that the fusion is successful, and sending a face image display instruction to the display so as to enable the display to display the face image to the camera for face acquisition.
In some embodiments, the method further comprises: and synchronizing the execution state information in the log file to the terminal equipment.
In some embodiments, the method further comprises: and in response to the failure of the terminal equipment in detecting the face in the face image or the failure in acquiring the characteristic information, writing execution state information of the face detection step into the log file.
In some embodiments, the method further comprises: determining the failure occurrence frequency and the failure occurrence position in the face matching step; and for one fault occurrence position, determining the ratio of the fault occurrence times of the fault occurrence position to the total fault occurrence times as the fault occurrence rate of the fault occurrence position.
In a second aspect, an embodiment of the present application provides a face matching test apparatus, comprising: an acquisition unit configured to acquire a face image and feature information of a face contained in the face image; and a face matching unit configured to perform the following face matching step: searching a preset face image set, based on the feature information, for a preset face image that matches the face image, and, in response to determining that a matching preset face image is found, fusing the face image with the matched preset face image. The apparatus further comprises a writing unit configured to write execution state information of the face matching step into a log file, where the execution state information indicates whether the face matching step was executed successfully.
In some embodiments, the apparatus further comprises: and the fault determining unit is configured to determine the fault occurrence position of the face matching step based on the fault identification contained in the execution state information in response to the fact that the face matching step is not successfully executed.
In some embodiments, the obtaining unit is further configured to receive a face image sent by the terminal device and feature information of a face detected in the face image by the terminal device; and the apparatus further comprises: and the sending unit is configured to respond to the situation that the face matching step is not successfully executed based on the execution state information, and send a face detection instruction to the terminal equipment so that the terminal equipment can carry out face detection again.
In some embodiments, the apparatus further comprises: and the generating unit is configured to generate and send alarm information corresponding to the fault identifier.
In some embodiments, the apparatus further comprises: and the sending instruction unit is configured to send a face image display instruction to the display in response to the determination that the fusion is successful, so that the display displays the face image to the camera for face acquisition.
In some embodiments, the apparatus further comprises: and the synchronization unit is configured to synchronize the execution state information in the log file to the terminal equipment.
In some embodiments, the apparatus further comprises: and the information writing unit is configured to write execution state information of the face detection step into the log file in response to failure of the terminal device in detecting the face in the face image or failure in acquiring the feature information.
In some embodiments, the apparatus further comprises: the statistical unit is configured for determining the failure occurrence frequency and the failure occurrence position in the face matching step; and for one fault occurrence position, determining the ratio of the fault occurrence times of the fault occurrence position to the total fault occurrence times as the fault occurrence rate of the fault occurrence position.
In a third aspect, an embodiment of the present application provides a face matching test system, where the system includes a terminal device and a server; the terminal device is used for executing the following human face detection steps: acquiring a face image displayed on a display; extracting feature information of the face image, and sending the face image and the feature information to the server; the server is used for executing the following face fusion steps: receiving a face image and feature information of the face image; searching a preset face image matched with the face image from a preset face image set based on the characteristic information; in response to determining that a preset face image matched with the face image is found, fusing the face image with the matched preset face image; the server is further configured to: and writing the execution state information of the face fusion step into the log file.
In some embodiments, the server is further configured to send a face detection instruction to the terminal device in response to determining that the face fusion step is not successfully executed based on the execution state information in the log file.
In some embodiments, the terminal device is further configured to write execution state information of the face detection step into a local log file, where the execution state information is used to indicate whether the face detection step is successfully executed; re-executing the face detection step in response to determining that the face detection step was not successfully executed based on the execution state information.
In some embodiments, the server is further configured to synchronize the execution state information in the log file to the terminal device; the terminal device is further configured to synchronize the execution state information in the local log file to the server.
In some embodiments, the system further comprises a display; the display is used to show the face images in rotation at regular intervals.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a method as in any embodiment of the face matching test method.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method as in any embodiment of a face matching test method.
According to the face matching test scheme provided by the embodiment of the application, firstly, a face image and feature information of a face contained in the face image are obtained. And then, executing a face matching step, and finally writing execution state information of the face matching step into the log file, wherein the execution state information is used for indicating whether the face matching step is successfully executed. According to the embodiment of the application, the execution state of the face matching step can be determined through the log file, and the accuracy of monitoring the face matching step is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a face matching test method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of the face matching test method according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a face matching test method according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a face matching test method according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a face matching test device according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the face matching test method or the face matching test apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, a display 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, the display 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101 and 102 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
Here, the terminal apparatuses 101 and 102 may be hardware or software. When the terminal devices 101, 102 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like.
The display 103 may be a display device that presents images to the cameras of the terminal apparatuses 101, 102. Specifically, the camera here may be mounted on the terminal device 101, 102, or may be a separate electronic device in the system architecture 100. When the camera is a stand-alone electronic device in the system architecture 100, the camera may capture an image displayed on the display and transmit the captured image to the terminal devices 101, 102.
The server 105 may be a server providing various services, for example performing face matching using the face images and face feature information uploaded by the terminal devices 101 and 102, fusing the face images, and feeding back the processing result to the terminal devices.
It should be noted that the face matching test method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the face matching test apparatus is generally disposed in the server 105. It should be noted that, in some application scenarios, the face matching test method provided in the embodiment of the present application may also be executed by the terminal devices 101 and 102, and accordingly, the face matching test apparatus may also be disposed in the terminal devices 101 and 102.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a face matching test method according to the present application is shown. The face matching test method comprises the following steps:
step 201, obtaining a face image and feature information of a face contained in the face image.
In this embodiment, an execution subject (for example, a server shown in fig. 1) on which the face matching test method operates may acquire a face image from a local or other electronic device. The face image is an image in which a face is presented. In addition, the execution subject may further obtain feature information of a face included in the face image. Specifically, the feature information is information representing the features of a face in an image, and is generally expressed in the form of a vector.
The following face matching steps 202-203 are performed:
step 202, based on the feature information, searching a preset face image matched with the face image from a preset face image set.
In this embodiment, the execution subject may search a preset face image matched with the face image from a preset face image set based on the acquired feature information.
The preset face image set may include a plurality of preset face images. In addition, the preset face image set may further include feature information of each preset face image.
In practice, the execution subject may determine the matched preset face image in various ways. For example, the execution subject may search for a matching preset face image by using the obtained feature information of the face image and the feature information of the images in the preset face image set.
Specifically, the executing entity may compare the feature information of the face image with the feature information of each image in the preset face image set to determine their similarity. From the feature information of the images in the set, it may then select the entries whose similarity to the feature information of the face image exceeds a preset similarity threshold, and take the corresponding images in the set as matched preset face images. Alternatively, it may select the feature information with the highest similarity to that of the face image, and take the corresponding image in the set as the matched preset face image.
The execution subject can perform traversal search on images in a preset face image set. In addition, the execution subject can add a label to each image in the preset face image set after comparing the image to avoid repeated searching.
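The traversal search described above can be sketched as a simple cosine-similarity scan over the preset set. This is an illustrative sketch only: the function names and the 0.8 similarity threshold are assumptions, not part of the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_matching_face(query_feature, preset_set, threshold=0.8):
    """Traverse the preset set and return the id of the most similar
    preset face whose similarity exceeds the threshold, or None if no
    preset face matches."""
    best_id, best_sim = None, threshold
    for face_id, feature in preset_set.items():
        sim = cosine_similarity(query_feature, feature)
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return best_id

# An identical feature vector matches preset entry "b"; an orthogonal
# set yields no match.
print(find_matching_face([1.0, 0.0], {"a": [0.0, 1.0], "b": [1.0, 0.0]}))
```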
Step 203, in response to determining that a preset face image matching the face image is found, fusing the face image with the matched preset face image.
In this embodiment, the executing agent fuses the face image and the preset face image matching with the face image in response to determining that the preset face image matching with the face image is found.
In practice, the execution subject may perform the fusion in a variety of ways. For example, it may first obtain the key points of the face, then align the key points, and finally use an affine transformation to map the face contour and key points of one of the two images onto the corresponding face contour and key points of the other image, forming a fused image.
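One minimal way to realize the affine alignment described above is to fit the six affine parameters exactly from three non-collinear keypoint pairs and then map points of one image into the coordinate frame of the other. This is a sketch of the alignment step only, not the patented fusion method; the function names are assumptions.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, rhs):
    """Solve a 3x3 linear system by Cramer's rule."""
    d = det3(m)
    sol = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]
        sol.append(det3(mj) / d)
    return sol

def affine_from_keypoints(src, dst):
    """Fit x' = a*x + b*y + tx, y' = c*x + d*y + ty exactly from three
    non-collinear keypoint pairs (src[i] maps onto dst[i])."""
    m = [[x, y, 1.0] for (x, y) in src]
    a, b, tx = solve3(m, [p[0] for p in dst])
    c, d, ty = solve3(m, [p[1] for p in dst])
    return a, b, tx, c, d, ty

def apply_affine(params, pt):
    """Apply the fitted affine transform to one point."""
    a, b, tx, c, d, ty = params
    x, y = pt
    return (a * x + b * y + tx, c * x + d * y + ty)

# Three keypoints shifted by (5, 2) recover a pure translation.
params = affine_from_keypoints([(0, 0), (1, 0), (0, 1)],
                               [(5, 2), (6, 2), (5, 3)])
print(apply_affine(params, (2.0, 3.0)))  # → (7.0, 5.0)
```

In a real system the transform would typically be estimated by least squares over many keypoints and applied to whole images, but the geometry is the same.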
And 204, writing the execution state information of the face matching step into the log file, wherein the execution state information is used for indicating whether the face matching step is successfully executed.
In this embodiment, the execution main body may write the execution status information of the face matching step into the log file. The execution state information is information indicating an execution state of the face matching step. For example, the execution status may be execution success or execution failure. Any position in the face matching step (such as any one of the steps 202-203) may be successfully executed or the execution may fail.
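Writing execution state information into the log file can be sketched as appending structured records, one per executed step. The JSON record layout and field names below are illustrative assumptions; the patent does not prescribe a log format.

```python
import json
import os
import tempfile
import time

def log_execution_state(log_path, step, success, fault_id=None):
    """Append one execution-state record for the face matching step.
    A None fault_id means the step executed successfully."""
    record = {
        "timestamp": time.time(),
        "step": step,
        "status": "success" if success else "failure",
        "fault_id": fault_id,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_execution_states(log_path):
    """Read all execution-state records back from the log file."""
    with open(log_path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Write one success and one failure record, then read them back.
path = os.path.join(tempfile.mkdtemp(), "face_match.log")
log_execution_state(path, "search", True)
log_execution_state(path, "fuse", False, fault_id="E_FUSE")
print([r["status"] for r in read_execution_states(path)])  # → ['success', 'failure']
```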
With continuing reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the face matching test method according to the present embodiment. In the application scenario of fig. 3, the electronic device 301 first obtains a face image 303, and feature information 304 of the face contained in it, from another electronic device 302; based on the feature information 304, it searches a preset face image set for a preset face image 305 matching the face image; in response to determining that a matching preset face image is found, it fuses 306 the face image with the matched preset face image; finally, it writes execution state information 307 of the face matching step, indicating whether the step was executed successfully, into the log file.
According to this embodiment, the execution state of the face matching step can be determined from the log file, improving the accuracy of monitoring the face matching step. At the same time, the execution state information improves the accuracy of testing the face matching step.
With further reference to fig. 4, a flow 400 of yet another embodiment of a face matching test method is shown. The process 400 of the face matching test method includes the following steps:
step 401, receiving a face image sent by a terminal device and feature information of a face detected in the face image by the terminal device.
In this embodiment, an execution subject (for example, a server shown in fig. 1) on which the face matching test method operates may receive a face image transmitted by a terminal device and receive feature information of a face detected in the face image by the terminal device. The terminal device can detect whether the face image contains a face or not and acquire feature information of the face. The feature information is information representing the features of a face in an image, and is generally expressed in a vector form.
The following face matching steps 402-403 are performed:
Step 402, based on the feature information, searching a preset face image set for a preset face image matching the face image.
In this embodiment, the execution subject may search a preset face image matched with the face image from a preset face image set based on the acquired feature information.
The preset face image set may include a plurality of preset face images. In addition, the preset face image set may further include feature information of each preset face image.
Step 403, in response to determining that a preset face image matching the face image is found, fusing the face image with the matched preset face image.
In this embodiment, the executing agent fuses the face image and the preset face image matching with the face image in response to determining that the preset face image matching with the face image is found.
Step 404, writing the execution status information of the face matching step into the log file, where the execution status information is used to indicate whether the face matching step is successfully executed.
In this embodiment, the execution main body may write the execution status information of the face matching step into the log file. The execution state information is information indicating an execution state of the face matching step. The execution status may be execution success or execution failure.
Step 405, in response to determining, based on the execution state information, that the face matching step was not successfully executed, sending a face detection instruction to the terminal device so that the terminal device performs face detection again.
In this embodiment, in response to determining that the face matching step was not successfully executed, the execution subject may send a face detection instruction to the terminal device so that the terminal device performs face detection again. Specifically, the execution subject may determine from the execution state information whether the face matching step was executed successfully. Thus, when execution fails at any position in the face matching step, the process can return to the terminal device to re-execute the face detection step.
And step 406, in response to determining that the face matching step is not successfully executed, determining a fault occurrence position of the face matching step based on the fault identification contained in the execution state information.
In this embodiment, in response to determining that the face matching step was not successfully executed, the execution subject determines the fault occurrence position of the face matching step based on the fault identifier contained in the execution state information. Different fault identifiers may indicate different fault occurrence positions. The fault occurrence position refers to the specific part of the face matching step that was executed but failed. For example, the fault occurrence position may be step 402, i.e., searching for the matching preset face image, which failed to execute.
In practice, each fault occurrence position corresponds to one fault identifier, so that the execution state of each fault occurrence position can be determined through the log file, and the accuracy of the step of monitoring the face matching can be further improved.
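A one-to-one mapping from fault identifiers to fault occurrence positions can be sketched as a lookup table. The identifiers and position strings below are hypothetical; the patent does not fix a concrete identifier scheme.

```python
# Hypothetical fault identifiers; the patent does not specify a concrete scheme.
FAULT_LOCATIONS = {
    "E_SEARCH": "searching the preset face image set (step 402)",
    "E_FUSE": "fusing the face image with the matched preset image (step 403)",
}

def locate_fault(execution_state):
    """Map the fault identifier in an execution-state record to the
    position where the face matching step failed; None on success."""
    if execution_state.get("status") == "success":
        return None
    fault_id = execution_state.get("fault_id")
    return FAULT_LOCATIONS.get(fault_id, "unknown fault position")

print(locate_fault({"status": "failure", "fault_id": "E_SEARCH"}))
```

Alarm information corresponding to a fault identifier could then simply embed the looked-up position string, making the alarm point directly at the failing step.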
In some optional implementation manners of this embodiment, this embodiment may further include:
and generating and sending alarm information corresponding to the fault identification.
In this implementation manner, the execution main body may generate alarm information corresponding to the fault identifier, and send the alarm information. Thus, the fault occurrence position can be judged through the alarm information. Then, the number of times of occurrence of the failure at the different failure occurrence positions may be counted. In practice, the execution subject may send the alarm information to a specified terminal device. The user of the designated terminal device can be a technician, so that the technician can know that the face matching process has a fault and accurately know the fault occurrence position.
Each fault occurrence position in this embodiment corresponds to one fault identifier, so the execution state at each fault occurrence position can be determined through the log file, further improving the accuracy of monitoring the face matching step. Moreover, when it is determined that the face matching step was not successfully executed, the terminal device can detect again in time, avoiding interruption of the face matching step and, in turn, of the testing step. In addition, this embodiment can generate alarm information; because the alarm information corresponds to the fault identifier, it is more targeted, and a technician can quickly determine the fault occurrence position from it.
With further reference to fig. 5, a flow 500 of yet another embodiment of a face matching test method is shown. The process 500 of the face matching test method includes the following steps:
step 501, obtaining a face image and feature information of a face contained in the face image.
In this embodiment, an execution subject (for example, a server shown in fig. 1) on which the face matching test method operates may acquire a face image from a local or other electronic device. The face image is an image in which a face is presented. In addition, the execution subject may further obtain feature information of a face included in the face image. Specifically, the feature information is information representing the features of a face in an image, and is generally expressed in the form of a vector.
Step 502, based on the feature information, searching a preset face image matched with the face image from a preset face image set.
In this embodiment, the execution subject may search a preset face image matched with the face image from a preset face image set based on the acquired feature information.
The preset face image set may include a plurality of preset face images. In addition, the preset face image set may further include feature information of each preset face image.
The execution subject can perform traversal search on images in a preset face image set. The execution main body can add a label to each image after comparing each image in the preset face image set so as to ensure the searching speed and avoid repeated searching.
Step 503, in response to determining that the preset face image matched with the face image is found, fusing the face image and the matched preset face image.
In this embodiment, the executing agent fuses the face image and the preset face image matching with the face image in response to determining that the preset face image matching with the face image is found.
Step 504, writing the execution state information of the face matching step into the log file, wherein the execution state information is used for indicating whether the face matching step is executed successfully.
In this embodiment, the execution main body may write the execution status information of the face matching step into the log file. The execution state information is information indicating an execution state of the face matching step. The execution status may be execution success or execution failure. Any position in the face matching step may be in a state of success or failure in execution (such as any one of steps 501 to 503).
Step 505, in response to determining that the fusion is successful, sending a face image display instruction to a display, so that the display presents the face image to a camera for face acquisition.
In this embodiment, if it is determined that the face image and the matched preset face image were successfully fused in step 503, the execution subject sends a face image display instruction to the display. The display can then show the face image so that the camera can capture it. The camera here is a camera mounted on the terminal device or a camera communicatively connected to the terminal device. After determining that the fusion succeeded, the execution subject can send information to the camera so that the terminal device continuously acquires and detects the images captured by the camera. The detected face image may be a replaced face image. The face image display instruction enables the display to change images promptly, so that the camera can collect richer samples. In addition, the display can also replace the displayed face image at regular intervals.
Step 506, determining the failure occurrence frequency and the failure occurrence position in the face matching step.
In this embodiment, the execution subject may determine the failure occurrence positions and failure occurrence frequencies over the course of executing the face matching step. Data from a preset number of executions of the face matching step may be used; for example, the number and location of failures over 100 executions of the face matching step.
Step 507, for each failure occurrence position, determining the ratio of the number of failures at that position to the total number of failures as the failure occurrence rate of that position.
In this embodiment, for each failure occurrence position, the execution subject may determine the ratio of the number of failures at that position to the total number of failures, and take this ratio as the failure occurrence rate of that position. In this way, the failure rate at each failure occurrence position can be obtained, and failure-prone positions can be identified. The total number of failures here is the sum, over all failure occurrence positions, of the failures that occurred while executing the face matching step.
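Steps 506 and 507 amount to counting failures per position over a window of executions (e.g. the last 100) and dividing each count by the total failure count. A sketch, assuming the fault positions have already been extracted from the log records:

```python
from collections import Counter

def failure_rates(fault_positions):
    """Given the fault position recorded for each failed execution of
    the face matching step, return each position's share of the total
    number of failures (its failure occurrence rate)."""
    counts = Counter(fault_positions)
    total = sum(counts.values())
    return {pos: n / total for pos, n in counts.items()}
```

The position with the largest rate is the failure-prone position the embodiment seeks to identify.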
In some optional implementations of this embodiment, the method may further include:
and synchronizing the execution state information in the log file to the terminal equipment.
In this implementation, the execution subject acting as the server may synchronize the execution state information in the server's log file to the terminal device. In this way, a technician can read the execution state information on either the terminal device or the server and determine the failure occurrence positions, failure counts, and so on.
In some optional implementations of this embodiment, the method may further include:
and in response to the failure of the terminal equipment in detecting the face in the face image or the failure in acquiring the characteristic information, writing execution state information of the face detection step into the log file.
In this implementation, the execution subject may write execution state information of the face detection step into the log file in response to the terminal device failing to detect a face in the face image, or failing to acquire the feature information. Here, when the terminal device determines that no face was detected in the face image, the execution subject acting as the server may receive a message from the terminal device indicating that face detection failed. Likewise, when the terminal device determines that feature information extraction failed, the execution subject receives a message from the terminal device indicating that feature extraction failed.
In practice, whether face detection in the face image fails or feature information acquisition fails, the terminal device may write the execution state information into its own log file, and may synchronize that information to the server by sending it a message.
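The terminal-side behavior described above (write to the local log, then notify the server) might be sketched as follows, with an in-memory queue standing in for the network channel to the server; the record fields and the message format are illustrative assumptions:

```python
import json
from queue import Queue

server_inbox = Queue()  # stands in for the network channel to the server

def report_detection_failure(local_log, reason):
    """Terminal device side: record a face detection failure in the
    local log, then synchronize it to the server by sending a message."""
    record = {"step": "face_detection", "success": False, "reason": reason}
    local_log.append(record)              # local log file write
    server_inbox.put(json.dumps(record))  # message to the server
    return record
```

On receipt, the server would write the same execution state information into its own log file, keeping both logs consistent.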
If the terminal device fails to detect a face in the face image or fails to acquire the feature information, the process can return to the face detection step to re-check whether the face image contains a face.
The face image display instruction of this embodiment enables the display to replace images promptly, so that the camera collects richer image samples. Meanwhile, failure-prone positions are identified by computing the failure rate at each failure occurrence position. In addition, by synchronizing the execution state information in the log file to the terminal device, a technician can read the execution state information on either the terminal device or the server and determine the failure occurrence positions, failure counts, and so on.
The application also discloses an embodiment of the face matching test system. The face matching test system comprises terminal equipment and a server;
the terminal device is used for executing the following human face detection steps: acquiring a face image displayed on a display; extracting feature information of the face image, and sending the face image and the feature information to the server; the server is used for executing the following face fusion steps: receiving a face image and feature information of the face image; searching a preset face image matched with the face image from a preset face image set based on the characteristic information; in response to determining that a preset face image matched with the face image is found, fusing the face image with the matched preset face image; the server is further configured to: and writing the execution state information of the face fusion step into the log file.
In some embodiments, the server is further configured to send a face detection instruction to the terminal device in response to determining that the face fusion step is not successfully executed based on the execution state information in the log file.
In some embodiments, the terminal device is further configured to write execution state information of the face detection step into a local log file, where the execution state information is used to indicate whether the face detection step is successfully executed; re-executing the face detection step in response to determining that the face detection step was not successfully executed based on the execution state information.
In some embodiments, the server is further configured to synchronize the execution state information in the log file to the terminal device; the terminal device is further configured to synchronize the execution state information in the local log file to the server.
In some embodiments, the system further comprises a display; the display is used for displaying the face images in rotation at regular intervals.
Through this embodiment, the execution state of the face matching step can be determined from the log file, improving the accuracy of monitoring the face matching step. Meanwhile, the execution state information improves the accuracy of testing the face matching step.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a face matching test apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the face matching test apparatus 600 of the present embodiment includes: an acquisition unit 601, a face matching unit 602, and a writing unit 603. The acquisition unit 601 is configured to acquire a face image and acquire feature information of a face contained in the face image; the face matching unit 602 is configured to perform the following face matching steps: searching a preset face image matched with the face image from a preset face image set based on the feature information; in response to determining that the preset face image matched with the face image is found, fusing the face image with the matched preset face image. The apparatus further includes: the writing unit 603, configured to write execution state information of the face matching step into the log file, where the execution state information is used to indicate whether the face matching step is successfully executed.
In some embodiments, the acquisition unit 601 may acquire a face image from a local source or from another electronic device. The face image is an image in which a face is presented. In addition, the acquisition unit 601 may further obtain feature information of the face contained in the face image. The feature information characterizes the features of a face in an image and is generally expressed in the form of a vector.
In some embodiments, the face matching unit 602 may find a preset face image matching the face image from a preset face image set based on the obtained feature information. The face matching unit 602 may fuse the face image with the matched preset face image in response to determining that the preset face image matched with the face image is found.
In some embodiments, the writing unit 603 may write the execution status information of the above-mentioned face matching step to the log file. The execution state information is information indicating an execution state of the face matching step.
In some optional implementations of this embodiment, the apparatus further includes: a fault determining unit, configured to determine, in response to determining that the face matching step was not successfully executed, the fault occurrence position of the face matching step based on a fault identifier contained in the execution state information.
In some optional implementations of this embodiment, the acquisition unit is further configured to receive a face image sent by the terminal device and feature information of a face detected in the face image by the terminal device; and the apparatus further comprises: a sending unit, configured to send a face detection instruction to the terminal device in response to determining, based on the execution state information, that the face matching step was not successfully executed, so that the terminal device performs face detection again.
In some optional implementations of this embodiment, the apparatus further includes: and the generating unit is configured to generate and send alarm information corresponding to the fault identifier.
In some optional implementations of this embodiment, the apparatus further includes: and the sending instruction unit is configured to send a face image display instruction to the display in response to the determination that the fusion is successful, so that the display displays the face image to the camera for face acquisition.
In some optional implementations of this embodiment, the apparatus further includes: and the synchronization unit is configured to synchronize the execution state information in the log file to the terminal equipment.
In some optional implementations of this embodiment, the apparatus further includes: an information writing unit, configured to write execution state information of the face detection step into the log file in response to a failure in detecting a face in the face image or a failure in acquiring the feature information.
In some optional implementations of this embodiment, the apparatus further includes: the statistical unit is configured for determining the failure occurrence frequency and the failure occurrence position in the face matching step; and for one fault occurrence position, determining the ratio of the fault occurrence times of the fault occurrence position to the total fault occurrence times as the fault occurrence rate of the fault occurrence position.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Connected to the I/O interface 705 are: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as necessary. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is installed into the storage section 708 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a face matching unit, a writing unit, and a transmitting unit. Here, the names of the units do not constitute a limitation to the unit itself in some cases, and for example, the writing unit may also be described as a "unit that writes execution state information of the face matching step to the log file".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: the following face matching steps are executed: acquiring a face image, and acquiring feature information of a face contained in the face image; searching a preset face image matched with the face image from a preset face image set based on the characteristic information; in response to determining that the preset face image matched with the face image is found, fusing the face image with the matched preset face image; the method further comprises the following steps: writing execution state information of the face matching step into the log file, wherein the execution state information is used for indicating whether the face matching step is successfully executed or not; and sending a face detection instruction in response to determining that the face matching step is not successfully executed based on the execution state information.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (21)

1. A face matching test method comprises the following steps:
acquiring a face image and feature information of a face contained in the face image;
the following face matching steps are executed:
searching a preset face image matched with the face image from a preset face image set based on the characteristic information;
in response to determining that a preset face image matched with the face image is found, fusing the face image with the matched preset face image;
the method further comprises the following steps:
writing execution state information of the face matching step into a log file, wherein the execution state information is used for indicating whether the face matching step is executed successfully or not;
after the log file is written with the execution state information of the face matching step, the method further comprises:
in response to the fact that the face matching step is not successfully executed, determining a fault occurrence position of the face matching step based on a fault identification contained in the execution state information, wherein the fault occurrence position refers to a part which is executed in the face matching step and is not successfully executed;
wherein, the fusing the face image with the matched preset face image comprises: and transforming the face contour and the key points of one image in the face image and the matched preset face image into the face contour and the key points in the other image by affine transformation to form a fused image.
2. The method of claim 1, wherein the obtaining of the face image and the feature information of the face included in the face image comprises:
receiving a face image sent by a terminal device and feature information of a face detected in the face image by the terminal device; and
after the log file is written with the execution state information of the face matching step, the method further comprises:
and sending a face detection instruction to the terminal equipment to enable the terminal equipment to carry out face detection again in response to the fact that the face matching step is not successfully executed based on the execution state information.
3. The method of claim 1, wherein after the writing of the log file with the execution state information of the face matching step, the method further comprises:
and generating and sending alarm information corresponding to the fault identification.
4. The method of claim 1, wherein the method further comprises:
and responding to the determination that the fusion is successful, and sending a face image display instruction to a display so that the display displays the face image to a camera for face acquisition.
5. The method of claim 2, wherein the method further comprises:
and synchronizing the execution state information in the log file to the terminal equipment.
6. The method of claim 2, wherein the method further comprises:
and in response to the failure of the terminal equipment in detecting the face in the face image or the failure in acquiring the characteristic information, writing execution state information of the face detection step into the log file.
7. The method of claim 1, wherein the method further comprises:
determining the failure occurrence frequency and the failure occurrence position in the face matching step;
and for one fault occurrence position, determining the ratio of the fault occurrence times of the fault occurrence position to the total fault occurrence times as the fault occurrence rate of the fault occurrence position.
8. A face matching test device, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is configured to acquire a face image and feature information of a face contained in the face image;
a face matching unit configured to perform the following face matching steps:
searching a preset face image matched with the face image from a preset face image set based on the characteristic information;
in response to determining that a preset face image matched with the face image is found, fusing the face image with the matched preset face image;
the device further comprises:
a writing unit configured to write execution state information of the face matching step to a log file, the execution state information being used to indicate whether the face matching step is successfully executed;
the device further comprises:
a fault determining unit configured to determine a fault occurrence position of the face matching step based on a fault identifier included in the execution state information in response to a determination that the face matching step is not successfully executed, wherein the fault occurrence position refers to a part which is executed in the face matching step and is not successfully executed;
wherein the face matching unit is further configured to perform the fusing of the face image with the matched preset face image as follows: and transforming the face contour and the key points of one image in the face image and the matched preset face image into the face contour and the key points in the other image by affine transformation to form a fused image.
9. The apparatus of claim 8, wherein,
the obtaining unit is further configured to: receiving a face image sent by a terminal device and feature information of a face detected in the face image by the terminal device; and
the device further comprises:
and the sending unit is configured to send a face detection instruction to the terminal equipment in response to the fact that the face matching step is not successfully executed based on the execution state information, so that the terminal equipment can perform face detection again.
10. The apparatus of claim 9, wherein the apparatus further comprises:
and the generating unit is configured to generate and send alarm information corresponding to the fault identifier.
11. The apparatus of claim 8, wherein the apparatus further comprises:
and the sending instruction unit is configured to send a face image display instruction to a display in response to the determination that the fusion is successful, so that the display displays the face image to a camera for face acquisition.
12. The apparatus of claim 9, wherein the apparatus further comprises:
and the synchronization unit is configured to synchronize the execution state information in the log file to the terminal equipment.
13. The apparatus of claim 9, wherein the apparatus further comprises:
and the information writing unit is configured to write execution state information of the face detection step into the log file in response to failure of the terminal device in detecting the face in the face image or failure in acquiring the feature information.
14. The apparatus of claim 8, wherein the apparatus further comprises:
the statistical unit is configured to determine the failure occurrence frequency and the failure occurrence position of the face matching step; and for one fault occurrence position, determining the ratio of the fault occurrence times of the fault occurrence position to the total fault occurrence times as the fault occurrence rate of the fault occurrence position.
15. A face matching test system comprises a terminal device and a server;
the terminal device is used for executing the following human face detection steps: acquiring a face image displayed on a display; extracting feature information of the face image, and sending the face image and the feature information to the server;
the server is used for executing the following face fusion steps: receiving a face image and feature information of the face image; searching a preset face image matched with the face image from a preset face image set based on the characteristic information; in response to determining that a preset face image matched with the face image is found, fusing the face image with the matched preset face image;
the server is further configured to: writing the execution state information of the face fusion step into a log file;
the system is further configured to:
in response to the fact that the face matching step is not successfully executed, determining a fault occurrence position of the face matching step based on a fault identification contained in the execution state information, wherein the fault occurrence position refers to a part which is executed in the face matching step and is not successfully executed;
the server is further configured to: and transforming the face contour and the key points of one image in the face image and the matched preset face image into the face contour and the key points in the other image by affine transformation to form a fused image.
16. The system of claim 15, wherein,
and the server is also used for responding to the situation that the face fusion step is not successfully executed based on the execution state information in the log file and sending a face detection instruction to the terminal equipment.
17. The system of claim 15, wherein,
the terminal device is further configured to write execution state information of the face detection step into a local log file, where the execution state information is used to indicate whether the face detection step is successfully executed; re-executing the face detection step in response to determining that the face detection step was not successfully executed based on the execution state information.
18. The system of claim 15, wherein,
the server is also used for synchronizing the execution state information in the log file to the terminal equipment;
the terminal device is further configured to synchronize the execution state information in the local log file to the server.
19. The system of claim 15, wherein the system further comprises a display;
the display is used for displaying the face images in rotation at regular intervals.
20. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
21. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201810634640.6A 2018-06-20 2018-06-20 Face matching test method, device and system Active CN108875638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810634640.6A CN108875638B (en) 2018-06-20 2018-06-20 Face matching test method, device and system

Publications (2)

Publication Number Publication Date
CN108875638A CN108875638A (en) 2018-11-23
CN108875638B true CN108875638B (en) 2020-07-31


Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN109308490B (en) Method and apparatus for generating information
CN108337505B (en) Information acquisition method and device
CN109255337B (en) Face key point detection method and device
CN109214501B (en) Method and apparatus for identifying information
CN108235004B (en) Video playing performance test method, device and system
CN110211121B (en) Method and device for pushing model
US20210264198A1 (en) Positioning method and apparatus
CN112200173A (en) Multi-network model training method, image labeling method and face image recognition method
CN110673717A (en) Method and apparatus for controlling output device
CN110008926B (en) Method and device for identifying age
CN115272182A (en) Lane line detection method, lane line detection device, electronic device, and computer-readable medium
CN111160410A (en) Object detection method and device
CN108470179B (en) Method and apparatus for detecting an object
CN108875638B (en) Face matching test method, device and system
US11121912B2 (en) Method and apparatus for processing information
CN109947526B (en) Method and apparatus for outputting information
CN109542743B (en) Log checking method and device, electronic equipment and computer readable storage medium
CN110334763B (en) Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium
CN112699272B (en) Information output method and device and electronic equipment
CN111898529B (en) Face detection method and device, electronic equipment and computer readable medium
CN115187510A (en) Loop detection method, device, electronic equipment and medium
CN110084298B (en) Method and device for detecting image similarity
CN113837986A (en) Method, apparatus, electronic device, and medium for recognizing tongue picture
CN113656286A (en) Software testing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant after: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant before: BEIJING JINGDONG FINANCIAL TECHNOLOGY HOLDING Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang 11th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Patentee before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

CP01 Change in the name or title of a patent holder