CN114373209A - Video-based face recognition method and device, electronic equipment and storage medium - Google Patents

Video-based face recognition method and device, electronic equipment and storage medium

Info

Publication number
CN114373209A
Authority
CN
China
Prior art keywords
recognized
face
template image
similarity
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111657465.0A
Other languages
Chinese (zh)
Inventor
尹义
冷鹏宇
张宁
刁俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202111657465.0A priority Critical patent/CN114373209A/en
Publication of CN114373209A publication Critical patent/CN114373209A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing and provides a video-based face recognition method and device, an electronic device, and a storage medium. The video-based face recognition method includes: capturing, from a plurality of video frames in a video, several face images to be recognized of the same target person; for each face image to be recognized, calculating its similarity to each pre-stored template image to determine a target similarity; and when the proportion, among all the face images to be recognized, of those whose target similarity is greater than the first threshold corresponding to the target template image is greater than a first proportion, determining the recognition result of the face images to be recognized. The method and device can improve the accuracy of video face recognition.

Description

Video-based face recognition method and device, electronic equipment and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for face recognition based on video, an electronic device, and a storage medium.
Background
With the development of image processing and artificial intelligence technology, face recognition is widely applied, and various face recognition products have been introduced. Generally, a face recognition product sets a single global threshold, and the current face recognition result is determined according to that global threshold. However, this approach to face recognition has limited accuracy.
In particular, when recognizing faces in a video, the faces appear at many different angles and the video resolution is generally low, so how to accurately recognize faces in a video is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for face recognition based on a video, an electronic device, and a storage medium, so as to solve the problem of how to improve accuracy of face recognition in the prior art.
A first aspect of an embodiment of the present application provides a video-based face recognition method, comprising:
intercepting a plurality of face images to be recognized of the same target person appearing in a plurality of video frames in the video;
for each face image to be recognized, calculating the similarity between the face image to be recognized and each pre-stored template image to determine a target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is the template image, among the pre-stored template images, with the maximum similarity to the face image to be recognized;
when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than a first threshold corresponding to the target template image is greater than a first proportion, determining the recognition result of the face images to be recognized, wherein each pre-stored template image has its own corresponding first threshold, and the first threshold is generated according to the recognition records of successful recognitions of the template image in a historical time period;
and when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than the first threshold corresponding to the target template image is not greater than the first proportion, and the proportion of face images whose target similarity is greater than a second threshold corresponding to the target template image is greater than a second proportion, determining the recognition result, wherein the second threshold is smaller than the first threshold and is generated according to the recognition records of successful recognitions of the template image in the historical time period.
A second aspect of the embodiments of the present application provides a face recognition apparatus based on a video, including:
the acquisition unit is used for intercepting a plurality of face images to be recognized of the same target person appearing in a plurality of video frames in the video;
the target similarity determining unit is used for calculating the similarity between the face image to be recognized and each pre-stored template image to determine a target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is the template image, among the pre-stored template images, with the maximum similarity to the face image to be recognized;
the recognition result determining unit is used for determining the recognition result of the face images to be recognized when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than a first threshold corresponding to the target template image is greater than a first proportion; and for determining the recognition result when that proportion is not greater than the first proportion but the proportion of face images whose target similarity is greater than a second threshold corresponding to the target template image is greater than a second proportion, wherein the second threshold is smaller than the first threshold;
the pre-stored template images each have their own corresponding first threshold and second threshold, and the first threshold and the second threshold are generated according to the recognition records of successful recognitions of the template image in a historical time period.
A third aspect of embodiments of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the electronic device is enabled to implement the steps of the video-based face recognition method.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, causes an electronic device to implement the steps of the video-based face recognition method described above.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the steps of the video-based face recognition method as described in the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages. In the embodiments, face images to be recognized are obtained, the similarity between each face image to be recognized and each pre-stored template image is calculated, the template image with the maximum similarity to the face images to be recognized is determined as the target template image, and the similarity corresponding to the target template image is determined as the target similarity. It is then judged whether the proportion, among all the face images to be recognized, of those whose target similarity is greater than the first threshold corresponding to the target template image is greater than the first proportion; if so, recognition succeeds; if not, it is further judged whether the proportion of those whose target similarity is greater than the second threshold corresponding to the target template image is greater than the second proportion; if so, recognition succeeds; if not, recognition fails. Because each template image has its own first threshold and second threshold, determining the recognition result based on the target similarity and the first and second thresholds of the target template image personalizes these thresholds, so that face recognition is performed more accurately with thresholds suited to the current face images to be recognized; compared with the existing approach of setting only one uniform threshold, the accuracy of face recognition can be improved. In addition, the first and second thresholds are generated according to the recognition records of successful recognitions of the template image in a historical time period, that is, they are dynamically generated from the actual recognition situation of that period, so they better match the actual face recognition conditions and further improve the accuracy of video face recognition. Finally, by capturing similar face images from the video and screening them more than once rather than only once, the present application avoids the deviation in the recognition result that a single screening is prone to produce, improving the accuracy of video face image recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a schematic flow chart illustrating an implementation process of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a face recognition method according to another embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a face recognition method according to another embodiment of the present application;
fig. 4 is a flowchart of a specific implementation of step S107 in a face recognition method according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a specific implementation of step S102 in a face recognition method according to an embodiment of the present application;
fig. 6 is a schematic flow chart illustrating an implementation of a face recognition method according to another embodiment of the present application;
fig. 7 is a schematic diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In general, a face recognition product sets a uniform global threshold, and the current face recognition result is determined according to that global threshold. When a certain person's face is easily misrecognized, the problem is usually addressed by raising the global threshold; however, raising the global threshold easily makes other persons' faces difficult to recognize, lowering the success rate of face recognition. Therefore, face recognition based on a uniform global threshold has low accuracy.
In order to solve this technical problem, an embodiment of the present application provides a video-based face recognition method, including: capturing several face images to be recognized of the same target person appearing in a plurality of video frames in the video; and, for each face image to be recognized, performing the following steps: calculating the similarity between the face image to be recognized and each pre-stored template image to determine a target similarity, the target similarity being the similarity corresponding to a target template image, and the target template image being the template image, among the pre-stored template images, with the maximum similarity to the face image to be recognized; when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than a first threshold corresponding to the target template image is greater than a first proportion, determining the recognition result of the face images to be recognized, wherein each pre-stored template image has its own corresponding first threshold, generated according to the recognition records of successful recognitions of the template image in a historical time period; and when that proportion is not greater than the first proportion but the proportion of face images whose target similarity is greater than a second threshold corresponding to the target template image is greater than a second proportion, determining the recognition result, wherein the second threshold is smaller than the first threshold and is likewise generated according to the recognition records of successful recognitions of the template image in the historical time period.
Because each template image has its own first and second thresholds, determining the recognition result based on the target similarity and the first and second thresholds of the target template image personalizes the thresholds, so that face recognition is performed more accurately with thresholds suited to the face images to be recognized; compared with the existing approach of setting only one uniform threshold, the accuracy of video face recognition can be improved. In addition, since the first and second thresholds are generated according to the recognition records of successful recognitions of the template image in a historical time period, that is, dynamically generated from the actual recognition situation of that period, they better match the actual face recognition conditions and further improve the accuracy of video face recognition.
The first embodiment is as follows:
fig. 1 shows a schematic flow chart of a video-based face recognition method provided in an embodiment of the present application, where an execution subject of the face recognition method is an electronic device. The electronic device may be a device having a camera module, such as a monitoring device, a mobile phone, a camera, etc.; alternatively, the electronic device may be another computing device connected to a device having a camera module, such as a computer connected to a monitoring camera, a video camera, or the like. The face recognition method shown in fig. 1 is detailed as follows:
in S101, several face images to be recognized of the same target person appearing in a plurality of video frames in the video are captured. And intercepting face images with the positions within a reasonable range in a plurality of video frames in the video aiming at the plurality of video frames in the video to form a group of face images to be recognized. Namely, according to the continuity of the face images in the video at the space-time positions, the embodiment of the application captures a group of face images to be recognized. The face image to be recognized is preset as a face image of the same person, and whether the face image is the same target person or not and the identity of the target person need to be determined in the subsequent steps.
More specifically, the embodiment of the application can acquire the face images preset to belong to the same target person in a plurality of video frames of a video, and determine whether the face position in the previous video frame and the face position in the next video frame in the face images reasonably appear in a specific range, if so, it can be presumed that the front and back face images belong to the face image of the same target person. In this way, a plurality of face images are obtained to form a group of face images to be recognized.
Regarding continuity in time: in this embodiment, a face image of the same target person appearing in at least two successively played video frames is, by way of example, regarded as continuous in time; in other cases the frames considered need not be strictly consecutive, for example every 5th video frame may be sampled. Regarding continuity in spatial position: in this embodiment, face images of the same target person appearing at substantially the same position in two successively played video frames are, by way of example, regarded as spatially continuous. In other words, when a face image of the same target person appears in two adjacent video frames, and the distance between its position in the previous frame and its position in the following frame is within a reasonable range, the face image is considered to have continuity in spatio-temporal position. Here, "face images of the same target person" only means face images whose facial-feature similarity meets a preset similarity requirement; they are not yet confirmed to belong to the same target person.
In this embodiment, the reasonable range refers to the distance range over which a face can plausibly move between the preceding and following frames. If the displacement is within the reasonable range, that is, the same target person could plausibly have moved that distance in the time between the two frames, the image is classified into the face images to be recognized; if the displacement is beyond the reasonable range, the same target person could not have moved such a long distance in that time, and in this case the image is excluded from that group of face images to be recognized. In this way, face images that may belong to the same target person are collected by judging spatio-temporal continuity.
For example, when a video frame in which a face image appears for the first time is detected, a face image to be recognized is created for each face image in that video frame. Suppose that in a video no face image appears in the 1st to 100th video frames, but 2 face images, belonging to 2 different persons and continuous in space-time, appear in the 101st to 300th video frames; these are defined as the first face image and the second face image. Then 3 face images appear in the 301st to 400th video frames; after judging spatio-temporal continuity, 2 of these 3 persons are found to be the same as the 2 persons appearing in the 101st to 300th frames, so their face images are classified into the first face image and the second face image respectively. The first face image to be recognized contains the facial features and facial key points of the first face image, and the second face image to be recognized contains the facial features and facial key points of the second face image. For the 3rd person, who appears only in the 301st to 400th video frames, a third face image to be recognized is additionally created.
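By way of illustration only, the following Python sketch shows one way the spatio-temporal grouping described above could be implemented; the frame gap of 5, the distance limit max_move, and the FaceDetection/Track structures are assumptions made for the example and are not prescribed by this embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class FaceDetection:
    frame_idx: int   # index of the video frame the face was detected in
    center: tuple    # (x, y) position of the face box center in the frame
    crop: object     # the cropped face image (e.g. a numpy array)

@dataclass
class Track:
    detections: list = field(default_factory=list)  # one track = one group of face images to be recognized

def group_by_spatiotemporal_continuity(detections, max_frame_gap=5, max_move=80.0):
    """Group per-frame face detections into tracks of (presumably) the same target person.

    A detection joins an existing track if it appears within max_frame_gap frames of the
    track's last detection and its position moved no more than max_move pixels (the
    "reasonable range" of motion between neighbouring frames); otherwise a new track is created.
    """
    tracks = []
    for det in sorted(detections, key=lambda d: d.frame_idx):
        best = None
        for trk in tracks:
            last = trk.detections[-1]
            frame_gap = det.frame_idx - last.frame_idx
            move = ((det.center[0] - last.center[0]) ** 2 +
                    (det.center[1] - last.center[1]) ** 2) ** 0.5
            if 0 < frame_gap <= max_frame_gap and move <= max_move:
                best = trk
                break
        if best is None:
            best = Track()
            tracks.append(best)
        best.detections.append(det)
    return tracks
```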
A group of face images to be recognized is thus obtained. Then, for each face image to be recognized, the following steps are performed.
In S102, calculating the similarity between the face image to be recognized and each pre-stored template image to determine the target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is the template image with the maximum similarity with the face image to be recognized in each pre-stored template image.
In the embodiment of the application, the face images of each authorized person collected in advance are pre-stored as template images in the storage unit of the local terminal of the electronic device or the storage unit of a third party which can be accessed by the electronic device.
After the face images to be recognized are obtained, the similarity between each face image to be recognized and each template image is calculated. For example, for a certain face image to be recognized, if there are currently N template images (N being a positive integer greater than 1), the similarity between the face image to be recognized and each of the N template images is calculated one by one, giving N similarity values t1~tN for that face image; for N face images to be recognized, N×N similarity values are obtained. Then, the maximum similarity value is determined from the N×N similarity values, the template image corresponding to that maximum value is taken as the target template image, and the similarity of each face image to be recognized to the target template image is taken as its target similarity.
In an embodiment, the similarity between the face image to be recognized and the template image may be a cosine similarity. As a possible implementation mode, the similarity between the face image to be recognized and the template image can be calculated through a pre-trained neural network model.
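As a non-authoritative sketch of this step, cosine similarity between face feature vectors and template feature vectors could be computed as below; the use of NumPy and the names face_feats/template_feats are assumptions made for illustration, and the feature vectors themselves would come from whatever face feature extractor is used.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pick_target_template(face_feats, template_feats):
    """For a group of face images (their feature vectors), find the target template.

    Returns the index of the template with the single highest similarity to any face image
    in the group, plus each face image's similarity to that template (the target similarities).
    """
    sims = np.array([[cosine_similarity(f, t) for t in template_feats] for f in face_feats])
    _, best_template = np.unravel_index(np.argmax(sims), sims.shape)
    target_similarities = sims[:, best_template]   # each face's similarity to the target template
    return int(best_template), target_similarities
```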
In S103, when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than the first threshold corresponding to the target template image is greater than the first proportion, the recognition result of the face images to be recognized is determined. The pre-stored template images each have their own corresponding first threshold, generated according to the recognition records of successful recognitions of the template image in a historical time period.
In the embodiment of the present application, each pre-stored template image has at least one corresponding first threshold. For example, the face images of N authorized persons may be stored in advance as N template images, numbered 1 to N, with corresponding first thresholds T1~TN. The first threshold corresponding to each template image is dynamically generated according to the recognition records of successful recognitions of that template image in a historical time period. The historical time period may be a time period of preset length, such as a day, a week, or a month. For example, for template image i, its current first threshold Ti may be obtained by a calculation (for example, averaging) over the similarities between template image i and the face images that were recognized as template image i during the past week. It should be understood that, in the initial state where no corresponding recognition record has yet been generated, the first threshold of each template image may be a preset initial value.
After the target similarity and the target template image are determined, a first threshold value corresponding to the target template image is obtained in advance. In one embodiment, each template image has corresponding identification information, which may be an Identity Identifier (ID), and the identification information of the template image is stored in correspondence with the first threshold, such as in a mapping table storing the first threshold. The identification information corresponding to the target template image is the target identification information, the target identification information is used as an index, and the first threshold of the target template image can be obtained by querying in the mapping table storing the first threshold.
Then, the target similarities determined in step S102 are compared with the first threshold of the target template image obtained above, and the recognition result of the face images to be recognized is determined. In one embodiment, when the proportion, among all the face images to be recognized, of those whose target similarity is greater than the first threshold corresponding to the target template image is greater than the first proportion, the recognition result of the current face images to be recognized is determined to be: recognition is successful.
In S104, when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than the first threshold corresponding to the target template image is not greater than the first proportion, and the proportion of face images whose target similarity is greater than the second threshold corresponding to the target template image is greater than the second proportion, the recognition result is determined, where the second threshold is smaller than the first threshold and is generated according to the recognition records of successful recognitions of the template image in a historical time period.
In the embodiment of the present application, each pre-stored template image likewise has at least one corresponding second threshold. Similarly, the face images of N authorized persons may be stored in advance as N template images, numbered 1 to N, with corresponding second thresholds S1~SN. The second threshold corresponding to each template image is dynamically generated according to the recognition records of successful recognitions of that template image in a historical time period. The historical time period may be a time period of preset length, such as a day, a week, or a month. For example, for template image i, its current second threshold Si may be obtained by a calculation (for example, averaging) over the similarities between template image i and the face images that were recognized as template image i during the past week. It should be understood that, in the initial state where no corresponding recognition record has yet been generated, the second threshold of each template image may be a preset initial value.
The face images to be recognized that were not successfully recognized in step S103 are compared again, this time against the second threshold. In one embodiment, given that recognition did not succeed in S103, if the proportion, among all the face images to be recognized, of those whose target similarity is greater than the second threshold corresponding to the target template image is greater than the second proportion, the recognition result of the current face images to be recognized may still be determined to be: recognition is successful. In another embodiment, if that proportion is not greater than the second proportion, the recognition result of the current face images to be recognized is determined to be: recognition failed.
Since the second threshold value is smaller than the first threshold value, it is possible that the face image to be recognized, which is determined to have been unsuccessfully recognized based on the first threshold value, is successfully recognized in the determination based on the second threshold value.
For example, suppose the first proportion is set to 60% and the second proportion to 80%. In the first case, if the proportion of face images to be recognized whose target similarity is greater than the first threshold is 65%, and the proportion whose target similarity is greater than the second threshold corresponding to the target template image is 75%, then since 65% is greater than the first proportion of 60%, the recognition result is successful and the second threshold need not be consulted. In the second case, if the proportion exceeding the first threshold is 55% and the proportion exceeding the second threshold is 75%, then since 55% is not greater than 60% and 75% is not greater than 80%, the recognition result is a failure. In the third case, if the proportion exceeding the first threshold is 55% and the proportion exceeding the second threshold is 90%, then although 55% is not greater than 60%, 90% is greater than 80%, so the recognition result is successful. Successful recognition indicates that the face images to be recognized and the target template image belong to the same person; failed recognition indicates that they do not.
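A minimal sketch of the two-stage proportion test of steps S103 and S104 follows; the function and parameter names are hypothetical, and the per-template thresholds are assumed to be supplied by the threshold mapping described above. The usage line reproduces the third case of the example.

```python
def decide_recognition(target_similarities, first_threshold, second_threshold,
                       first_proportion=0.6, second_proportion=0.8):
    """Apply the two-stage proportion test to a group of target similarities.

    Stage 1: if more than first_proportion of the face images exceed the stricter first
    threshold, recognition succeeds. Stage 2 (only reached otherwise): if more than
    second_proportion exceed the looser second threshold, recognition still succeeds.
    Otherwise recognition fails.
    """
    n = len(target_similarities)
    if n == 0:
        return False
    ratio_first = sum(s > first_threshold for s in target_similarities) / n
    if ratio_first > first_proportion:
        return True
    ratio_second = sum(s > second_threshold for s in target_similarities) / n
    return ratio_second > second_proportion

# Third case from the example above: 55% pass the first threshold (not enough),
# but 90% pass the second threshold, so recognition succeeds.
sims = [0.90] * 11 + [0.70] * 7 + [0.50] * 2   # 20 face images in the group
print(decide_recognition(sims, first_threshold=0.86, second_threshold=0.67))  # True
```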
Because each template image has its own first and second thresholds, determining the recognition result based on the target similarity and the first and second thresholds of the target template image personalizes the thresholds, so that face recognition is performed more accurately with thresholds suited to the face images to be recognized; compared with the existing approach of setting only one uniform threshold, the accuracy of video face recognition can be improved. In addition, since the first and second thresholds are generated according to the recognition records of successful recognitions of the template image in a historical time period, that is, dynamically generated from the actual recognition situation of that period, they better match the actual face recognition conditions and further improve the accuracy of video face recognition. Finally, by capturing similar face images from the video and screening them more than once rather than only once, the present application avoids deviation in the recognition result and improves the accuracy of video face image recognition.
Fig. 2 is a schematic flow chart of a video face recognition method according to another embodiment of the present application, where the video face recognition method further includes, after the step S104:
s105: and if the recognition result of the face image to be recognized is successful, correspondingly storing the target similarity and the target template image.
In this embodiment of the application, in step S104, when the recognition result of the face image to be recognized is that the recognition is successful, the target similarity and the target template image may be stored correspondingly, so as to perform query tracking or statistical analysis according to the target similarity when the recognition is successful. In an embodiment, the target similarity and the target template image are stored correspondingly, specifically, the current recognition time point, the target similarity and the target identification information corresponding to the target template image may be stored correspondingly, for example, may be stored in a mapping table, so as to facilitate subsequent review and analysis.
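A minimal in-memory sketch of this correspondence is given below; a real system might use a database or the mapping table mentioned above, and all names here are illustrative assumptions. The same store also supports the per-period retrieval used later when regenerating the thresholds.

```python
import time
from collections import defaultdict

# template_id -> list of (timestamp, target_similarity) records for successful recognitions
recognition_records = defaultdict(list)

def store_successful_recognition(template_id, target_similarity, timestamp=None):
    """Store the target similarity together with the target template's identification
    information and the recognition time, for later query, tracking or statistics."""
    recognition_records[template_id].append((timestamp or time.time(), float(target_similarity)))

def records_in_period(template_id, start_time, end_time):
    """Return the similarities recorded for template_id whose time point lies in
    [start_time, end_time] (the historical time period)."""
    return [sim for ts, sim in recognition_records[template_id] if start_time <= ts <= end_time]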
Fig. 3 is a schematic flow chart illustrating a face recognition method according to another embodiment of the present application, where after step S104, the face recognition method further includes:
s106: for each template image, acquiring an identification record of the template image when the template image is successfully identified in a historical time period, wherein the identification record comprises the similarity between each face image to be identified and the template image when the face image to be identified is successfully identified as the template image;
s107: and generating a first threshold value and a second threshold value corresponding to the template image according to the similarity contained in the identification record.
In the embodiment of the application, in the process of face recognition, for each template image, when one face image to be recognized is successfully recognized as the template image, the template image and the similarity between the face image to be recognized and the template image are correspondingly recorded to obtain a recognition record.
In an embodiment, the identification information of the template image, the time point of the successful recognition, and the similarity may be stored in correspondence, for example in a mapping table. For each template image, the mapping table is queried using the identification information of the template image and the start and end time points of the historical time period; the stored items whose time point falls between that start and end and whose identification information matches the template image are retrieved, yielding the recognition record of successful recognitions of the template image in the historical time period.
For each template image, the first threshold and second threshold currently corresponding to the template image can be dynamically generated according to the similarities contained in the recognition record corresponding to that template image. In one embodiment, an average of the similarities contained in the recognition record may be calculated and used as the newly generated first and second thresholds of the template image. Further, the similarities contained in the recognition record may be sorted from high to low; a first average value is obtained from the similarities ranked before a reference position; a first threshold corresponding to the template image is generated from the first average value; a second average value is obtained from the similarities ranked after the reference position; and a second threshold corresponding to the template image is generated from the second average value.
For example, assume that the recognition record contains 10 normalized similarities: 0.71, 0.79, 0.66, 0.93, 0.82, 0.67, 0.66, 0.92, 0.82 and 0.67. To obtain the first threshold, the similarities contained in the recognition record are first sorted from high to low: 0.93, 0.92, 0.82, 0.82, 0.79, 0.71, 0.67, 0.67, 0.66, 0.66. Then a first average value is obtained from the similarities ranked before the reference position; with the reference position set at 50%, these are the top 50% of the sorted similarities, namely 0.93, 0.92, 0.82, 0.82 and 0.79, whose average is (0.93+0.92+0.82+0.82+0.79)/5 ≈ 0.86, so the first average value is 0.86. Finally, the first threshold corresponding to the template image is generated from the first average value; in this embodiment the first average value is used directly as the first threshold, so the first threshold is 0.86.
To obtain the second threshold, a second average value is first obtained from the similarities ranked after the reference position, that is, the average of the bottom 50% of the sorted similarities: (0.71+0.67+0.67+0.66+0.66)/5 ≈ 0.67, so the second average value is 0.67. The second threshold corresponding to the template image is then generated from the second average value; in this embodiment the second average value is used directly as the second threshold, so the second threshold is 0.67.
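The worked example above can be reproduced with a short sketch, assuming (as in the example) that the reference position splits the sorted similarities at 50% and that the averages are used directly as the thresholds; the function name is hypothetical.

```python
def generate_thresholds(similarities, reference=0.5):
    """Generate (first_threshold, second_threshold) from the similarities in a template
    image's recognition record over the historical time period.

    The similarities are sorted from high to low; the mean of the portion before the
    reference position becomes the first threshold, and the mean of the portion after it
    becomes the second threshold."""
    ordered = sorted(similarities, reverse=True)
    split = max(1, int(len(ordered) * reference))
    first_threshold = sum(ordered[:split]) / split
    tail = ordered[split:] or ordered[-1:]   # guard for very short records
    second_threshold = sum(tail) / len(tail)
    return first_threshold, second_threshold

record = [0.71, 0.79, 0.66, 0.93, 0.82, 0.67, 0.66, 0.92, 0.82, 0.67]
print(generate_thresholds(record))   # approximately (0.856, 0.674), i.e. 0.86 and 0.67
```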
In another embodiment, the similarity that occurs most often in the recognition record (that is, the mode of the similarities) may be counted, and the first threshold and second threshold obtained by subtracting preset values from this mode.
In the embodiment of the present application, for each template image, the first and second thresholds corresponding to the template image can be generated according to the similarities in the recognition record of successful recognitions of the template image in a historical time period; that is, the corresponding first and second thresholds can be dynamically generated according to the actual face recognition situation in the historical time period, so that they better match the actual face recognition conditions and improve the accuracy of face recognition.
As shown in fig. 4, the step S107 specifically includes steps S1071 to S1075:
s1071: sorting the similarity contained in the identification records from high to low;
s1072: obtaining a first average value from the similarities ranked before the reference position;
s1073: generating a first threshold corresponding to the template image according to the first average value;
s1074: obtaining a second average value from the similarities ranked after the reference position;
s1075: and generating a second threshold corresponding to the template image according to the second average value.
After the recognition record corresponding to the template image is obtained, the similarities in each time period may be stored in correspondence with that time period; for example, according to the time point of each record, the similarities of a time period may be stored in a mapping table corresponding to that time period. Different time periods may use different mapping tables, or the similarities may be stored in the same mapping table; this is not limited here.
In the embodiment of the application, the first and second average values corresponding to each time period in the historical time period can be obtained, and then the first threshold value and the second threshold value are updated according to the first and second average values, so that the generation efficiency of the first threshold value and the second threshold value is improved.
During the face recognition process, the first/second average values are continuously computed, with one time period as the unit length. In this process, when it is detected that the time interval between the current time and the most recent generation time of the first and second thresholds corresponding to the template image reaches a preset time threshold (that is, the interval is greater than or equal to the length of the historical time period), it indicates that the similarity recording for one historical time period has been completed; at this point the first and second thresholds are updated according to the first/second average values of each time period in the current historical time period.
In the embodiment of the application, the first/second average value can be obtained when the time interval between the current time and the latest generation time of the first threshold and the second threshold reaches the preset time threshold, so that the dynamic generation of the first threshold and the second threshold can be realized immediately, and the accuracy of face recognition can be improved.
Optionally, as shown in fig. 5, the step S102 specifically includes steps S1021 to S1022:
s1021: respectively calculating the similarity between the face image to be recognized and each pre-stored template image;
s1022: and determining the maximum similarity based on a preset basic threshold and the similarity with the maximum value in the similarities.
In the embodiment of the application, the storage unit of the local terminal or the third party further stores a preset basic threshold in advance, and the preset basic threshold is a lowest threshold of face recognition set in advance.
The similarity between the face image to be recognized and each pre-stored template image is calculated through a preset algorithm. It is then checked whether any of these similarities is greater than the preset basic threshold.
Specifically, the similarity with the largest value may be selected from these similarities and compared with the preset basic threshold. If this largest similarity is greater than the preset basic threshold, it is directly taken as the maximum similarity. Conversely, if the largest similarity is smaller than the preset basic threshold, the similarities between the current face image to be recognized and all the pre-stored template images are small, and the face image to be recognized may belong to an unauthorized person; in this case it is directly determined that recognition of the face image to be recognized has failed.
In the embodiment of the application, a preset basic threshold can be set, and the maximum similarity can be further efficiently and accurately determined based on the preset basic threshold and the similarity with the maximum value in the similarities, so that the accuracy and efficiency of face recognition are further improved.
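A possible sketch of the base-threshold check is given below; the value 0.45 for the preset basic threshold and the function name are assumptions made only for illustration.

```python
def check_base_threshold(similarities, base_threshold=0.45):
    """Return (ok, max_similarity). If even the largest similarity is below the preset
    basic threshold, the face image likely belongs to an unauthorized person and
    recognition fails immediately, without consulting the per-template thresholds."""
    max_similarity = max(similarities)
    if max_similarity < base_threshold:
        return False, max_similarity     # recognition fails directly
    return True, max_similarity          # proceed with the per-template decision
```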
Fig. 6 is a schematic flow chart illustrating a face recognition method according to another embodiment of the present application, where after step S104, the face recognition method further includes:
s108: and sending out prompt information matched with the identification result.
In the embodiment of the application, after the recognition result is determined, the prompt information matched with the recognition result can be sent out in any form of characters, voice, images and the like.
In one embodiment, if the current recognition result is that the recognition is successful, the first prompt message is sent out. The first prompt message may include a prompt indicating "recognition is successful". Further, the first prompt message may further include identity information corresponding to the target template image. The storage unit stores the corresponding relation between the template image and the corresponding identity information. The identity information may be an account number or a job number of the user, or may include any one or more of a name, an identification number, an age, and a gender of the user. After the target template image corresponding to the current face image to be recognized is determined, the identity information corresponding to the target template image can be obtained. And then, sending out first preset prompt information in a text display or voice broadcasting mode. The first preset prompt message at least contains identity information corresponding to the target image, so that a manager can know the personnel information in the video, and the intelligence of video face recognition is further improved.
In another embodiment, if the current recognition result is recognition failure, a second prompt message is sent out. The second prompt message may include a prompt indicating "recognition failed". Further, the second prompt information may further include indication information for prompting the user to apply for the authority, so that the user can enter the authority application information according to the indication information, so that after the authority application information is verified by the home terminal of the electronic device or the server, the face image of the user is stored in the preset storage unit as the template image, and a first threshold and a second threshold corresponding to individuation are set for the template image.
In the embodiment of the application, after the recognition result is obtained, the prompt information corresponding to the recognition result is sent out, so that the current face recognition result can be fed back to a user or a manager in time, and the intelligence of the video face recognition is improved.
Optionally, each template image has at least two first threshold values and second threshold values respectively corresponding to different illumination conditions; the determining the recognition result of the facial image to be recognized based on the target similarity and the first threshold and the second threshold corresponding to the target template image includes:
acquiring a current target illumination condition;
determining a target first threshold and a target second threshold corresponding to the target template image according to the target illumination condition, where the target first threshold and target second threshold are those matching the target illumination condition among the at least two first thresholds and second thresholds corresponding to different illumination conditions;
and determining the recognition result of the face image to be recognized based on the target similarity and the target first threshold and second threshold.
In the embodiment of the application, at least two illumination conditions exist in a face recognition application scene. Illustratively, the lighting conditions may include: the solar illumination in the daytime and the night light illumination. By way of example, the daytime solar illumination may be further subdivided into: the illumination condition under different weather conditions such as sunny illumination, cloudy illumination, rainy illumination and the like.
For the same target person, the effect of shooting different face images obtained by the person under different lighting conditions is different, so that the corresponding first threshold value and the second threshold value under different lighting conditions are stored in advance for the template image of the same target person in the embodiment of the application.
In the process of face recognition, the current illumination condition is acquired as a target illumination condition.
After the target similarity and the target template image are determined, the first threshold and second threshold matching the target illumination condition are selected, according to the target illumination condition, from the multiple first and second thresholds stored in correspondence with the target identification information of the target template image, and used as the target first threshold and target second threshold.
And then, determining the recognition result of the face image to be recognized under the target illumination condition based on the target similarity and the target first threshold and second threshold under the target illumination condition.
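One possible arrangement of the illumination-dependent thresholds is sketched below; the condition labels, threshold values, and lookup structure are assumptions made for illustration and are not prescribed by this embodiment.

```python
# Per-template thresholds keyed by (template_id, lighting_condition).
# The condition labels below ("daytime_sunny", "night") are only examples.
thresholds_by_condition = {
    ("template_001", "daytime_sunny"): (0.86, 0.67),
    ("template_001", "night"):         (0.80, 0.62),
}

def select_target_thresholds(template_id, target_lighting, default=(0.80, 0.60)):
    """Select the target first/second thresholds matching the current lighting condition
    for the given target template image; fall back to a default pair if no entry exists."""
    return thresholds_by_condition.get((template_id, target_lighting), default)

first_t, second_t = select_target_thresholds("template_001", "night")
```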
In the embodiment of the application, besides the personalized first threshold and the personalized second threshold corresponding to the current personnel can be flexibly obtained, the corresponding target first threshold and the target second threshold can be obtained according to the current illumination condition, so that the accuracy of determining the first threshold and the second threshold can be further ensured, and the accuracy of video face recognition is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
fig. 7 is a schematic structural diagram of a video face recognition apparatus provided in an embodiment of the present application, and for convenience of description, only parts related to the embodiment of the present application are shown:
the video face recognition device comprises: an acquisition unit 71, a target similarity determination unit 72, and a recognition result determination unit 73. Wherein:
and the acquisition unit 71 is used for intercepting a plurality of face images to be recognized of the same target person appearing in a plurality of video frames in the video.
A target similarity determining unit 72, configured to calculate similarities between the face image to be recognized and each pre-stored template image, respectively, so as to determine a target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is the template image with the maximum similarity with the face image to be recognized in each pre-stored template image.
A recognition result determining unit 73, configured to determine the recognition result of the face images to be recognized when the proportion, among all the face images to be recognized, of face images whose target similarity is greater than the first threshold corresponding to the target template image is greater than the first proportion, where each pre-stored template image has its own corresponding first threshold generated according to the recognition records of successful recognitions of the template image in a historical time period; and to determine the recognition result when that proportion is not greater than the first proportion but the proportion of face images whose target similarity is greater than the second threshold corresponding to the target template image is greater than the second proportion, where the second threshold is smaller than the first threshold; the pre-stored template images each have their own corresponding first threshold and second threshold, generated according to the recognition records of successful recognitions of the template image in the historical time period.
Optionally, the plurality of face images to be recognized of the same target person are determined according to the continuity of the face images at spatio-temporal positions in the video, where the continuity means that: the face image of the same target person appears in two adjacent video frames, and the difference between the position of the face image of the target person in the previous video frame and its position in the next video frame is within a reasonable range.
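Illustratively, the continuity check at spatio-temporal positions can be sketched as follows; the bounding-box representation and the max_shift value standing in for the "reasonable range" are assumptions for illustration, not values fixed by this description.

```python
def same_target_person(prev_box, curr_box, max_shift=50.0):
    """Treat two face detections in adjacent video frames as the same target
    person when their centre positions differ by less than max_shift pixels,
    i.e. the distance difference is within a 'reasonable range'."""
    (px, py, pw, ph), (cx, cy, cw, ch) = prev_box, curr_box
    prev_centre = (px + pw / 2.0, py + ph / 2.0)
    curr_centre = (cx + cw / 2.0, cy + ch / 2.0)
    shift = ((prev_centre[0] - curr_centre[0]) ** 2 +
             (prev_centre[1] - curr_centre[1]) ** 2) ** 0.5
    return shift < max_shift

# Example: boxes given as (x, y, width, height) in pixel coordinates.
print(same_target_person((100, 80, 60, 60), (108, 84, 60, 60)))  # True
```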
Optionally, the video-based face recognition apparatus further includes:
The storage unit is configured to store the target similarity in correspondence with the target template image if the recognition result of the face image to be recognized indicates successful recognition.
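Illustratively, the storage unit can be sketched as keeping, per target template image, the target similarities of successful recognitions, so that later threshold generation can read them back; the data structure and identifiers shown are assumptions for illustration.

```python
from collections import defaultdict

# Maps a target template image identifier to the list of target similarities
# recorded each time a face image was successfully recognized as that template.
recognition_records = defaultdict(list)

def store_successful_recognition(template_id, target_similarity):
    """Store the target similarity in correspondence with the target template
    image after a successful recognition; the stored record later serves as
    the input for generating the first and second thresholds."""
    recognition_records[template_id].append(target_similarity)

store_successful_recognition("template_0001", 0.91)
store_successful_recognition("template_0001", 0.87)
```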
Optionally, the video-based face recognition apparatus further includes:
The recognition record acquisition unit is configured to acquire, for each template image, a recognition record of the template image when the template image is successfully recognized in a historical time period, where the recognition record includes the similarity between each face image to be recognized and the template image when that face image is successfully recognized as the template image.
The threshold generating unit is configured to generate a first threshold and a second threshold corresponding to the template image according to the similarities contained in the recognition record.
Optionally, the threshold generating unit is specifically configured to: sort the similarities contained in the recognition record from high to low; obtain a first average value according to the similarities that are ranked before a reference value in the sorted order; and generate the first threshold corresponding to the template image according to the first average value.
Optionally, the threshold generating unit is specifically configured to: obtain a second average value according to the similarities that are ranked after the reference value in the sorted order; and generate the second threshold corresponding to the template image according to the second average value.
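Illustratively, the threshold generating unit can be sketched as follows. Using the median rank as the default reference value is an assumption made for illustration, since the reference value itself is not fixed by this description.

```python
def generate_thresholds(similarities, reference_rank=None):
    """Generate the first and second thresholds of one template image from the
    similarities in its recognition record: sort from high to low, average the
    similarities ranked before the reference value for the first threshold and
    those ranked after it for the second threshold."""
    ordered = sorted(similarities, reverse=True)
    if reference_rank is None:
        reference_rank = len(ordered) // 2  # illustrative default: median rank
    head, tail = ordered[:reference_rank], ordered[reference_rank:]
    first_threshold = sum(head) / len(head) if head else 0.0
    second_threshold = sum(tail) / len(tail) if tail else first_threshold
    return first_threshold, second_threshold

# Example with a small recognition record.
print(generate_thresholds([0.92, 0.85, 0.88, 0.79, 0.83, 0.90]))
```

Because the similarities are sorted from high to low, the first threshold obtained in this way is never smaller than the second threshold, which matches the requirement that the second threshold be smaller than the first threshold.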
Optionally, the video-based face recognition apparatus further includes:
The prompting unit is configured to send out prompt information matched with the recognition result.
It should be noted that the information interaction between the above devices/units, their execution processes, and their specific functions and technical effects are based on the same concept as the method embodiments of the present application; for details, reference may be made to the method embodiment section, which is not repeated here.
Example three:
Fig. 8 is a schematic diagram of an electronic device provided in an embodiment of the present application. As shown in Fig. 8, the electronic device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as a face recognition program, stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in the embodiments of the video-based face recognition method described above, such as steps S101 to S104 shown in Fig. 1. Alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules/units in the apparatus embodiments described above, such as the functions of the acquisition unit 71 to the recognition result determining unit 73 shown in Fig. 7.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the electronic device 8.
The electronic device 8 may be a monitoring device, a mobile phone, a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The electronic device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that Fig. 8 is merely an example of the electronic device 8 and does not constitute a limitation on the electronic device 8, which may include more or fewer components than shown, or a combination of certain components, or different components; for example, the electronic device may also include input and output devices, network access devices, buses, and the like.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the electronic device 8, such as a hard disk or an internal memory of the electronic device 8. The memory 81 may also be an external storage device of the electronic device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the electronic device 8. The memory 81 is used to store the computer program and other programs and data required by the electronic device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A video-based face recognition method, characterized by comprising the following steps:
intercepting a plurality of face images to be recognized of the same target person appearing in a plurality of video frames in the video;
for each face image to be recognized, calculating the similarity between the face image to be recognized and each pre-stored template image, so as to determine a target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is the template image, among the pre-stored template images, with the greatest similarity to the face image to be recognized;
when the proportion, among all the face images to be recognized, of the face images to be recognized whose target similarity is greater than a first threshold corresponding to the target template image is greater than a first proportion, determining a recognition result of the face images to be recognized, wherein each pre-stored template image has a corresponding first threshold, and the first threshold is generated according to a recognition record of the template image when the template image is successfully recognized in a historical time period;
and when the proportion, among all the face images to be recognized, of the face images to be recognized whose target similarity is greater than the first threshold corresponding to the target template image is not greater than the first proportion, and the proportion, among all the face images to be recognized, of the face images to be recognized whose target similarity is greater than a second threshold corresponding to the target template image is greater than a second proportion, determining the recognition result, wherein the second threshold is smaller than the first threshold and is generated according to the recognition record of the template image when the template image is successfully recognized in the historical time period.
2. The face recognition method of claim 1, wherein the plurality of face images to be recognized of the same target person are determined according to the continuity of the face images at spatio-temporal positions in the video, and the continuity means that: the face image of the same target person appears in two adjacent video frames, and the difference between the position of the face image of the target person in the previous video frame and its position in the next video frame is within a reasonable range.
3. The face recognition method of claim 1, wherein the method further comprises:
and if the recognition result of the face image to be recognized is successful, correspondingly storing the target similarity and the target template image.
4. The face recognition method of claim 1, wherein the method further comprises:
for each template image, acquiring a recognition record of the template image when the template image is successfully recognized in a historical time period, wherein the recognition record includes the similarity between each face image to be recognized and the template image when that face image to be recognized is successfully recognized as the template image;
and generating a first threshold and a second threshold corresponding to the template image according to the similarities contained in the recognition record.
5. The face recognition method of claim 4, wherein generating the first threshold corresponding to the template image according to the similarity included in the recognition record comprises:
sorting the similarities contained in the recognition record from high to low;
obtaining a first average value according to the similarities that are ranked before a reference value in the sorted order;
and generating a first threshold corresponding to the template image according to the first average value.
6. The face recognition method of claim 5, wherein generating a second threshold corresponding to the template image according to the similarity included in the recognition record comprises:
obtaining a second average value according to the similarities that are ranked after the reference value in the sorted order;
and generating a second threshold corresponding to the template image according to the second average value.
7. The face recognition method according to any one of claims 1 to 6, wherein after determining the recognition result, the method further comprises:
and sending out prompt information matched with the identification result.
8. A video-based face recognition apparatus, comprising:
the acquisition unit is used for intercepting a plurality of face images to be recognized of the same target person appearing in a plurality of video frames in the video;
the target similarity determining unit is used for respectively calculating the similarity between the face image to be recognized and each pre-stored template image so as to determine the target similarity; the target similarity is the similarity corresponding to a target template image, and the target template image is a template image with the maximum similarity with the face image to be recognized in each pre-stored template image;
the recognition result determining unit is configured to determine the recognition result of the face images to be recognized when the proportion, among all the face images to be recognized, of the face images to be recognized whose target similarity is greater than a first threshold corresponding to the target template image is greater than a first proportion; and to determine the recognition result when that proportion is not greater than the first proportion and the proportion, among all the face images to be recognized, of the face images to be recognized whose target similarity is greater than a second threshold corresponding to the target template image is greater than a second proportion, wherein the second threshold is smaller than the first threshold;
wherein each pre-stored template image has a corresponding first threshold and a corresponding second threshold, and the first threshold and the second threshold are generated according to a recognition record of the template image when the template image is successfully recognized in a historical time period.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, causes the electronic device to carry out the steps of the video-based face recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes an electronic device to carry out the steps of the video-based face recognition method according to any one of claims 1 to 7.
CN202111657465.0A 2021-12-30 2021-12-30 Video-based face recognition method and device, electronic equipment and storage medium Pending CN114373209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111657465.0A CN114373209A (en) 2021-12-30 2021-12-30 Video-based face recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111657465.0A CN114373209A (en) 2021-12-30 2021-12-30 Video-based face recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114373209A true CN114373209A (en) 2022-04-19

Family

ID=81142106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111657465.0A Pending CN114373209A (en) 2021-12-30 2021-12-30 Video-based face recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114373209A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935479A (en) * 2023-09-15 2023-10-24 纬领(青岛)网络安全研究院有限公司 Face recognition method and device, electronic equipment and storage medium
CN116935479B (en) * 2023-09-15 2023-12-15 纬领(青岛)网络安全研究院有限公司 Face recognition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109783685B (en) Query method and device
CN109740004B (en) Filing method and device
CN108228792B (en) Picture retrieval method, electronic device and storage medium
CN109710780A (en) A kind of archiving method and device
CN113762106B (en) Face recognition method and device, electronic equipment and storage medium
CN109426785B (en) Human body target identity recognition method and device
CN108446681B (en) Pedestrian analysis method, device, terminal and storage medium
CN109784274A (en) Identify the method trailed and Related product
CN109815370A (en) A kind of archiving method and device
CN109784220B (en) Method and device for determining passerby track
CN111724496A (en) Attendance checking method, attendance checking device and computer readable storage medium
CN109800664B (en) Method and device for determining passersby track
CN111368619A (en) Method, device and equipment for detecting suspicious people
CN109857891A (en) A kind of querying method and device
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
CN112328820A (en) Method, system, terminal and medium for searching vehicle image through face image
CN111368867A (en) Archive classification method and system and computer readable storage medium
CN110263830B (en) Image processing method, device and system and storage medium
CN114373209A (en) Video-based face recognition method and device, electronic equipment and storage medium
CN110796014A (en) Garbage throwing habit analysis method, system and device and storage medium
CN113568934A (en) Data query method and device, electronic equipment and storage medium
CN109801394B (en) Staff attendance checking method and device, electronic equipment and readable storage medium
CN115391596A (en) Video archive generation method and device and storage medium
CN111368115B (en) Data clustering method, device, clustering server and storage medium
CN114817518A (en) License handling method, system and medium based on big data archive identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination