CN110765938A - Identity recognition method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN110765938A
Authority
CN
China
Prior art keywords
image
recognized
library
target face
acquiring
Prior art date
Legal status
Pending
Application number
CN201911008076.8A
Other languages
Chinese (zh)
Inventor
任超
李大虎
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201911008076.8A
Publication of CN110765938A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides an identity recognition method, an identity recognition device, an electronic device and a computer-readable storage medium, wherein the method comprises: acquiring an image to be recognized, a first geographic position where the image acquisition device corresponding to the image to be recognized is located, and an image acquisition time corresponding to the image to be recognized; performing image comparison on the image to be recognized in a static full-scale library; if a target face image matching the image to be recognized is obtained through the image comparison, acquiring the ID and the communication identifier corresponding to the target face image from the static full-scale library; acquiring a corresponding second geographic position from a dynamic trajectory library based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image; and if the first geographic position and the second geographic position satisfy a preset condition, taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized. The scheme enables real-time and accurate identity recognition of all captured images to be recognized.

Description

Identity recognition method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an identity recognition method, an identity recognition apparatus, an electronic device, and a computer-readable storage medium.
Background
The dynamic deployment and control function of the existing face recognition system is to utilize a base database containing a small number of face images to match with snapshot image data in real time, and if the matching is successful, an alarm result is generated.
Because the base library containing a small number of face images does not contain the identity information of all passers-by, identity recognition of each snapshot image collected by the front-end camera cannot be realized. Increasing the number of face images in the base library allows snapshot images of more passers-by to be recognized, but as the number of face images grows, the real-time performance and accuracy of identity recognition cannot be guaranteed due to algorithm and hardware limitations, and recognition may even fail.
Therefore, it is desirable to provide a method capable of real-time and accurate identification of all real-time captured images.
Disclosure of Invention
The purpose of the present application is to solve at least one of the above technical drawbacks, and to provide the following solutions:
in a first aspect, an embodiment of the present application provides an identity identification method, including:
acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and an image acquisition time corresponding to the image to be identified;
comparing images to be recognized in a static full-scale library, wherein the static full-scale library stores face images of people in a preset area, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
if a target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID and a communication identifier corresponding to the target face image from a static full-scale library;
acquiring a corresponding second geographical position from a dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, wherein the communication identification of personnel in a preset area, the geographical position corresponding to each communication identification and the acquisition time corresponding to each geographical position are stored in the dynamic track library;
and if the first geographic position and the second geographic position meet the preset condition, taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized.
Optionally, based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image, the method obtains a corresponding second geographic position from the dynamic trajectory library, and specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
taking, among the geographic positions, the geographic position whose acquisition time is the same as the image acquisition time as the second geographic position; accordingly,
the preset condition is that the first geographic position is matched with the second geographic position.
Optionally, based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image, the method obtains a corresponding second geographic position from the dynamic trajectory library, and specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, among the geographic positions, at least two geographic positions that precede the image acquisition time and are closest to it, and taking the at least two geographic positions as second geographic positions; accordingly,
the preset condition is that the distance between the first geographic position and the geographic position, closest to the image acquisition time, of the at least two geographic positions is not greater than the distance between the first geographic position and each of the other geographic positions of the at least two geographic positions.
Optionally, based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image, the method obtains a corresponding second geographic position from the dynamic trajectory library, and specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, among the geographic positions, the geographic positions whose acquisition times fall within a preset time period, and taking those geographic positions as second geographic positions, wherein the preset time period contains the image acquisition time; accordingly,
the preset condition is that the distance between the first geographical position and each second geographical position is not more than a preset distance.
In a second aspect, an embodiment of the present application provides an identity identification method, including:
acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is contained in a preset time period;
comparing images to be recognized in a first dynamic track library, wherein the first dynamic track library stores face images of persons in a sub-region within a preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
and if a first target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID corresponding to the first target face image from the first dynamic track library, and taking the ID corresponding to the first target face image as an identity recognition result of the image to be recognized.
Optionally, the method further comprises:
if the face image matched with the image to be recognized is not obtained through image comparison, image comparison is carried out on the image to be recognized in a static full-scale library, and the face image of a person in a preset area, the ID corresponding to each face image and the communication identification corresponding to each face image are stored in the static full-scale library;
and if a second target face image matched with the image to be recognized is obtained through image comparison in the static full-scale library, acquiring an ID corresponding to the second target face image from the static full-scale library, and taking the ID corresponding to the second target face image as an identity recognition result of the image to be recognized.
Optionally, the method further includes:
the method comprises the steps of obtaining a first dynamic track library based on a first geographical position, image acquisition time and a second dynamic track library, wherein communication identifications of all personnel in a preset area, geographical positions corresponding to the communication identifications and acquisition moments corresponding to the geographical positions are stored in the second dynamic track library.
Optionally, the obtaining the first dynamic trajectory library based on the first geographic location, the image acquisition time, and the second dynamic trajectory library specifically includes:
acquiring corresponding communication identifiers in a sub-area within a preset time period from a second dynamic track library;
acquiring IDs and face images corresponding to the communication identifiers in the sub-region within the preset time period from a static full-scale library based on the corresponding communication identifiers in the sub-region within the preset time period; alternatively,
based on each communication identifier in the sub-region within the preset time period, acquiring a face image corresponding to each communication identifier corresponding to the sub-region within the preset time period from a static full-scale library, and acquiring an ID corresponding to each communication identifier corresponding to the sub-region within the preset time period from a real-name database of a telecom operator;
and obtaining the first dynamic track library based on each communication identifier corresponding to the sub-region in a preset time period, the ID corresponding to each communication identifier and the face image corresponding to each communication identifier.
In a third aspect, an embodiment of the present application provides an identity recognition apparatus, including:
the image acquisition module is used for acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and an image acquisition time corresponding to the image to be identified;
the image comparison module is used for comparing images to be recognized in a static full-scale library, and the static full-scale library stores face images of people in a preset area, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
the information acquisition module is used for acquiring an ID and a communication identifier corresponding to the target face image from the static full-scale library if the target face image matched with the image to be recognized is obtained through image comparison;
the geographic position checking module is used for acquiring a corresponding second geographic position from the dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, and the dynamic track library stores the communication identification of personnel in a preset area, the geographic position corresponding to each communication identification and the acquisition time corresponding to each geographic position;
and the recognition result output module is used for taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized if the preset condition is met between the first geographic position and the second geographic position.
In a fourth aspect, an embodiment of the present application provides an identity recognition apparatus, including:
the image acquisition module is used for acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is contained in a preset time period;
the image comparison module is used for carrying out image comparison on images to be recognized in a first dynamic track library, and the first dynamic track library stores face images of persons in a sub-region within a preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
and the recognition result output module is used for acquiring the ID corresponding to the first target face image from the first dynamic track library and taking the ID corresponding to the first target face image as the identity recognition result of the image to be recognized if the first target face image matched with the image to be recognized is obtained through image comparison.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor;
the memory has a computer program stored therein;
a processor for executing a computer program to implement the method provided by the first aspect or the second aspect.
In a sixth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method provided in the first aspect or the second aspect.
The technical solutions provided by the present application bring the following beneficial effects:
(1) Image comparison is first performed on the image to be recognized in a static full-scale library containing the relevant information of all persons, obtaining the ID and the communication identifier corresponding to the image to be recognized; the corresponding geographic position is then obtained from a dynamic trajectory library using the communication identifier, and the image comparison result is verified through the relationship between the geographic position obtained from the dynamic trajectory library and the geographic position of the image acquisition device corresponding to the image to be recognized; if the verification passes, the corresponding ID is taken as the identity recognition result of the image to be recognized. Because a static full-scale library containing the relevant information of all persons is used in the image comparison process, the completeness of the base library data is guaranteed; meanwhile, the data in the dynamic trajectory library is used to verify the image comparison result, which guarantees the accuracy of the recognition result, so that real-time and accurate identity recognition of all captured images to be recognized can be achieved.
(2) A first target face image corresponding to the image to be recognized is first sought in a first dynamic trajectory library corresponding to a preset time period and a sub-area; if the first target face image is obtained, the identity recognition result of the image to be recognized is obtained from the first dynamic trajectory library according to the first target face image; if the first target face image is not obtained, a second target face image of the image to be recognized is sought in a static full-scale library, and the identity recognition result of the image to be recognized is obtained from the static full-scale library according to the second target face image. Because the library used for the first image comparison corresponds to the preset time period and the sub-area, its data volume is small, which guarantees the real-time performance of identity recognition; meanwhile, for images to be recognized whose comparison in the first dynamic trajectory library fails, the static full-scale library with more complete data is further used for comparison, so that real-time and accurate identity recognition of all captured images to be recognized can be achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of an identity recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of an example in an embodiment of the present application;
fig. 3 is a schematic flowchart of an identity recognition method according to an embodiment of the present application;
FIG. 4 is a flow chart of an example in an embodiment of the present application;
fig. 5 is a block diagram of an identification apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an identification apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The dynamic deployment and control function of the existing face recognition system is to utilize a base database containing a small number of face images to match with snapshot image data in real time, and if the matching is successful, an alarm result is generated.
Because the base library containing a small number of face images does not contain the identity information of all passers-by, identity recognition of each snapshot image collected by the front-end camera cannot be realized. Existing related technologies attempt to recognize the snapshot images of more people by increasing the number of face images in the base library, but the maximum dynamic deployment and control data volume currently known to be supportable is only on the order of millions; when the required dynamic deployment and control volume increases further, the real-time performance and accuracy of identity recognition cannot be guaranteed due to algorithm and hardware limitations, and recognition may even fail.
In view of the foregoing problems, an embodiment of the present application provides an identity identification method, as shown in fig. 1, the method may include:
step S101, obtaining an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located, and an image acquisition time corresponding to the image to be identified.
The image to be recognized at least includes a face of a person to be recognized, and is used as an object for face comparison in the following, for example, the image to be recognized may be a face image including a face, a whole body image including a face, or a half body image. The image to be identified can be directly acquired by front-end image acquisition equipment, or can be acquired by extracting a corresponding image from a video stream acquired by the front-end image acquisition equipment.
The first geographic position is the geographic position where the front-end image acquisition equipment is located, and the first geographic position also represents the geographic position where the person to be identified is located. In practical applications, each image capturing device may be numbered, each number corresponding to a specific geographic location, and then the corresponding first geographic location may be obtained by obtaining the number of the image capturing device.
The image acquisition time is used for representing the time when the image acquisition equipment acquires the image to be identified.
And S102, comparing the images to be recognized in a static full-scale library, wherein the static full-scale library stores the face images of the persons in the preset area, the identity recognition information IDs corresponding to the face images and the communication identifications corresponding to the face images.
The static full-scale library stores relevant information of all persons in a preset area, wherein the preset area can be set according to the actual monitoring range, and for example, countries, regions, provinces and cities and the like can be respectively used as the preset area.
It is to be understood that "all persons" referred to herein are persons who have records with the relevant departments and whose identities are to be recognized; persons who have no records with the relevant departments (for example, unregistered persons), persons who have records but no mobile phone (for example, children), and whitelisted persons designated by the relevant departments are not included in "all persons" as described herein.
In practical applications, all the persons in the preset area may generally include the permanent resident population and the floating population; the permanent resident population may be obtained from databases of the relevant departments, and the floating population may be obtained from face recognition systems deployed at the airports, stations and other points of entry into or exit from the preset area.
In the static full-scale library, because the face image, the ID and the communication identifier correspond to one another, for any person in the preset area, obtaining any one of the three pieces of information allows the other two to be looked up in the library. In practical applications, the ID may be an identity card number, and the communication identifier may be a mobile phone number.
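For illustration only, the following Python sketch shows one way such a record structure could be organised so that any one of the three pieces of information indexes the other two; the class and field names (PersonRecord, id_number, phone_number) are assumptions made for this example and are not part of the application.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class PersonRecord:
    face_feature: bytes      # stored face image (or its extracted feature)
    id_number: str           # identity recognition information ID, e.g. an ID-card number (assumption)
    phone_number: str        # communication identifier, e.g. a mobile phone number (assumption)

class StaticFullLibrary:
    """Static full-scale library sketch: any one of the three pieces of
    information can be used to look up the other two."""

    def __init__(self, records_by_id: Dict[str, PersonRecord]):
        self._by_id = records_by_id                                   # keyed by id_number
        self._by_phone = {r.phone_number: r for r in records_by_id.values()}

    def by_id(self, id_number: str) -> Optional[PersonRecord]:
        return self._by_id.get(id_number)

    def by_phone(self, phone_number: str) -> Optional[PersonRecord]:
        return self._by_phone.get(phone_number)
```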
And step S103, if a target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID and a communication identifier corresponding to the target face image from the static full-scale library.
In theory, because the static full-scale library contains the face images of all persons in the preset area, for any person to be recognized there should exist in the library a face image matching the image to be recognized; that is, a target face image can be obtained from the static full-scale library for any image to be recognized. However, it can be understood that, due to the quality of the acquired image to be recognized or the accuracy of the image comparison algorithm, some images to be recognized may fail to yield a corresponding target face image. In this case, another image to be recognized containing the same person to be recognized may be acquired, and image comparison may be performed again in the static full-scale library to obtain the corresponding target face image.
Specifically, according to the target face image, the corresponding ID and communication identifier may be obtained from the static full-scale library through the correspondence relationship; the ID information corresponding to the image to be recognized is thereby obtained. However, because the data volume in the static full-scale library is large, the accuracy of the image comparison result cannot be guaranteed; that is, the target face image may not actually be a face image of the person to be recognized, in which case the ID obtained from the target face image is not the ID of the person to be recognized. Therefore, in the embodiment of the present application, the ID obtained in step S103 is further verified through steps S104 and S105.
And step S104, acquiring a corresponding second geographic position from a dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, wherein the communication identification of personnel in a preset area, the geographic position corresponding to each communication identification and the acquisition time corresponding to each geographic position are stored in the dynamic track library.
The geographical position corresponding to each communication identifier in the dynamic track library can be acquired by relevant equipment of a telecom operator. It can be understood that, due to the characteristic of mobility of the personnel, the communication identifier corresponding to each personnel may correspond to different geographical locations at different times, that is, in the dynamic trajectory library, each communication identifier may respectively have a corresponding relationship with a plurality of pairs of geographical locations and corresponding acquisition times. For example, for the communication identifier "123456", the corresponding geographic locations may include "geographic location a at time T1", "geographic location B at time T2", and "geographic location C at time T3", etc., that is, the person corresponding to the communication identifier "123456" is at location a at time T1, location B at time T2, and location C at time T3.
It should be noted that the data stored in the dynamic trajectory library may be determined according to the image acquisition time of the image to be recognized; that is, the dynamic trajectory library may contain only data within a certain time period, as long as that time period includes the image acquisition time. In this way, the verification of the image to be recognized is not affected, while the data volume of the dynamic trajectory library is reduced, which guarantees the accuracy of identity recognition and increases its speed.
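As a purely illustrative sketch of how such a dynamic trajectory library might be held in memory, the Python fragment below associates each communication identifier with a list of (acquisition time, geographic position) pairs and looks up the position recorded at a given moment; all names and the (latitude, longitude) representation are assumptions of the example.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional, Tuple

GeoPosition = Tuple[float, float]   # (latitude, longitude) - illustrative representation

@dataclass
class TrackPoint:
    acquisition_time: datetime       # moment at which the carrier recorded the position
    position: GeoPosition

# dynamic trajectory library: communication identifier -> chronological track points
DynamicTrajectoryLibrary = Dict[str, List[TrackPoint]]

def lookup_position(library: DynamicTrajectoryLibrary,
                    comm_id: str,
                    acquisition_time: datetime) -> Optional[GeoPosition]:
    """Return the position recorded for comm_id exactly at acquisition_time, if any."""
    for point in library.get(comm_id, []):
        if point.acquisition_time == acquisition_time:
            return point.position
    return None
```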
And step S105, if the first geographic position and the second geographic position meet the preset condition, taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized.
Specifically, as can be seen from the foregoing analysis, the second geographic position may include one or more geographic positions and reflects the movement trajectory (the position at each moment) of the person corresponding to the target face image, while the first geographic position reflects the geographic position of the person corresponding to the image to be recognized at the time of image acquisition. Therefore, through the relationship between the second geographic position(s) and the first geographic position, it can be verified whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized; in other words, the result of the image comparison in step S103, that is, the obtained ID, can be verified. If the verification passes, the person corresponding to the target face image is determined to be the person to be recognized corresponding to the image to be recognized, the result of the image comparison in step S103 is correct, the obtained ID of the person to be recognized is also correct, and the ID is taken as the identity recognition result of the person to be recognized. If the verification fails, the person corresponding to the target face image is determined not to be the person to be recognized corresponding to the image to be recognized, and the identity recognition fails.
The identity recognition method comprises the steps of firstly carrying out image matching on an image to be recognized through a static full-scale library containing relevant information of all personnel, obtaining an ID and a communication identifier corresponding to the image to be recognized, then obtaining a corresponding geographic position in a dynamic track library by utilizing the communication identifier, verifying an image comparison result by utilizing the relationship between the geographic position obtained from the dynamic track library and the geographic position of an image acquisition device corresponding to the image to be recognized, and taking the corresponding ID as an identity recognition result of the image to be recognized if the verification is passed. According to the scheme, a static full-scale library containing relevant information of all personnel is adopted in the image comparison process, the integrity of data of a base library is guaranteed, meanwhile, the data in a dynamic track library is used for verifying the image comparison result, the accuracy of the identification result is guaranteed, and then real-time and accurate identification of all snap-shot images to be identified can be achieved.
In an alternative embodiment of the present application, the target face image is obtained by:
respectively comparing the image to be recognized with each face image in the static full-scale library to obtain the similarity between the image to be recognized and each face image;
and taking the face image with the similarity between the face image and the image to be recognized larger than a preset threshold value as a target face image.
The preset threshold may be set according to actual requirements. Its value affects the speed and accuracy of acquiring the target face image: the larger the preset threshold, the slower the acquisition of the target face image and the higher its accuracy; conversely, the smaller the preset threshold, the faster the acquisition and the lower its accuracy. In practical applications, a suitable preset threshold can be selected so that both speed and accuracy are taken into account when acquiring the target face image.
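The embodiment specifies only that the face image whose similarity to the image to be recognized exceeds the preset threshold is taken as the target face image. The following Python sketch assumes, for illustration, that faces are represented by feature vectors compared with cosine similarity; the feature extraction step, the threshold value and all names are assumptions of the example rather than requirements of this application.

```python
from typing import Dict, Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors (assumed similarity measure)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_target_face(query_feature: np.ndarray,
                     library_features: Dict[str, np.ndarray],   # person key -> feature vector
                     preset_threshold: float = 0.8) -> Optional[str]:
    """1:N comparison: return the key of the best-matching face whose similarity
    to the image to be recognized exceeds the preset threshold, or None."""
    best_key, best_score = None, preset_threshold
    for key, feature in library_features.items():
        score = cosine_similarity(query_feature, feature)
        if score > best_score:
            best_key, best_score = key, score
    return best_key
```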
In an optional embodiment of the present application, based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image, obtaining a corresponding second geographic location from the dynamic trajectory library specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
taking, among the geographic positions, the geographic position whose acquisition time is the same as the image acquisition time as the second geographic position; accordingly,
the preset condition is that the first geographic position is matched with the second geographic position.
Whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized or not can be verified through the relationship between the second geographic position and the first geographic position, in other words, the obtained ID can be verified through the relationship between one or more second geographic positions and the first geographic position.
And if the preset condition is met, the verification of the image comparison is passed, and the verification of the obtained ID is passed.
Specifically, the embodiment of the application can verify whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized by determining whether the geographic position of the person to be recognized at the same time is matched with the geographic position of the person corresponding to the target face image. The same time can be an image acquisition time, at the image acquisition time, the person to be recognized is located at the first geographic position, and the second geographic position where the person corresponding to the target face image is located at the image acquisition time is acquired. The first geographical position and the second geographical position are matched, namely, the distance between the first geographical position and the second geographical position is smaller than a preset distance, and the preset distance can be set according to actual conditions and requirements.
Whether the first geographic position matches the second geographic position is determined. If they match, the verification passes: the person corresponding to the target face image is determined to be the person to be recognized corresponding to the image to be recognized, the image comparison result is determined to be correct, the ID of the person to be recognized is obtained, and the ID is taken as the identity recognition result of the person to be recognized. If they do not match, the verification fails, the person corresponding to the target face image is determined not to be the person to be recognized corresponding to the image to be recognized, and the identity recognition fails.
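A minimal sketch of this matching check is given below; it assumes geographic positions are (latitude, longitude) pairs compared with the haversine distance and uses an illustrative 500-metre preset distance, none of which is mandated by the embodiment.

```python
from math import radians, sin, cos, asin, sqrt
from typing import Tuple

GeoPosition = Tuple[float, float]   # (latitude, longitude)

def haversine_m(p1: GeoPosition, p2: GeoPosition) -> float:
    """Great-circle distance in metres between two (latitude, longitude) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def positions_match(first_position: GeoPosition,
                    second_position: GeoPosition,
                    preset_distance_m: float = 500.0) -> bool:
    """Preset condition of this embodiment: the positions match when their
    distance is smaller than the preset distance (500 m is an illustrative value)."""
    return haversine_m(first_position, second_position) < preset_distance_m
```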
In an optional embodiment of the present application, based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image, obtaining a corresponding second geographic location from the dynamic trajectory library specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, among the geographic positions, at least two geographic positions that precede the image acquisition time and are closest to it, and taking the at least two geographic positions as second geographic positions; accordingly,
the preset condition is that the distance between the first geographic position and the geographic position, closest to the image acquisition time, of the at least two geographic positions is not greater than the distance between the first geographic position and each of the other geographic positions of the at least two geographic positions.
Whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized or not can be verified through the relationship between the second geographic position and the first geographic position, in other words, the obtained ID can be verified through the relationship between one or more second geographic positions and the first geographic position.
And if the preset condition is met, the verification of the image comparison is passed, and the verification of the obtained ID is passed.
Specifically, the embodiment of the application can verify whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized by determining whether the geographic position of the person to be recognized is on the movement track of the person corresponding to the target face image. The method comprises the steps of obtaining at least two geographical positions of a target face image before an image acquisition moment and closest to the image acquisition moment, and determining whether a first geographical position is located on a motion track of the target face image according to the change of a distance between the first geographical position where a person to be identified is located and the at least two geographical positions. Specifically, the distances between the first geographic position and the second geographic positions can be sequentially calculated according to the sequence of the acquisition time of each second geographic position, if the distances are gradually reduced, that is, the distance between the first geographic position and the geographic position of the at least two geographic positions, of which the acquisition time is closest to the image acquisition time, is not greater than the distances between the first geographic position and other geographic positions of the at least two geographic positions, it is determined that the first geographic position is located on the movement track of the person corresponding to the target face image, and otherwise, it is determined that the first geographic position is not located on the movement track of the person corresponding to the target face image.
Whether the first geographic position lies on the movement trajectory of the person corresponding to the target face image is determined. If it does, the verification passes: the person corresponding to the target face image is determined to be the person to be recognized corresponding to the image to be recognized, the image comparison result is determined to be correct, the ID of the person to be recognized is obtained, and the ID is taken as the identity recognition result of the person to be recognized. If the first geographic position does not lie on the movement trajectory of the person corresponding to the target face image, the verification fails, the person corresponding to the target face image is determined not to be the person to be recognized corresponding to the image to be recognized, and the identity recognition fails.
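The following sketch illustrates this trajectory check under the assumption that the trajectory is a list of (acquisition time, position) pairs and that a distance function such as the haversine helper above is supplied; the parameter names and the default of two track points are illustrative.

```python
from datetime import datetime
from typing import Callable, List, Tuple

GeoPosition = Tuple[float, float]
TrackPoint = Tuple[datetime, GeoPosition]     # (acquisition time, position)

def on_movement_track(first_position: GeoPosition,
                      track: List[TrackPoint],
                      image_time: datetime,
                      distance_fn: Callable[[GeoPosition, GeoPosition], float],
                      n_points: int = 2) -> bool:
    """Preset condition of this embodiment: among the n_points positions recorded
    before the image acquisition time and closest to it, the distance from the
    first geographic position to the most recent one must not be greater than
    its distance to any of the other, earlier positions."""
    earlier = sorted(p for p in track if p[0] < image_time)   # chronological order
    recent = earlier[-n_points:]                              # the points closest to image_time
    if len(recent) < 2:
        return False                                          # not enough history to verify
    distances = [distance_fn(first_position, pos) for _, pos in recent]
    return all(distances[-1] <= d for d in distances[:-1])
```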
In an optional embodiment of the present application, the obtaining a corresponding second geographic location from the dynamic trajectory library based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, among the geographic positions, the geographic positions whose acquisition times fall within a preset time period, and taking those geographic positions as second geographic positions, wherein the preset time period contains the image acquisition time; accordingly,
the preset condition is that the distance between the first geographical position and each second geographical position is not more than a preset distance.
Whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized or not can be verified through the relationship between the second geographic position and the first geographic position, in other words, the obtained ID can be verified through the relationship between one or more second geographic positions and the first geographic position.
And if the preset condition is met, the verification of the image comparison is passed, and the verification of the obtained ID is passed.
Specifically, if the person to be recognized is the person corresponding to the target face image, the second geographic position where the person corresponding to the target face image is located and the first geographic position where the person to be recognized is located in the same specific area within a preset time period including the image acquisition time.
According to the embodiment of the application, whether the person corresponding to the target face image is the person to be recognized corresponding to the image to be recognized can be verified by determining whether the first geographic position where the person to be recognized is located and the geographic position where the person corresponding to the target face image is located in the same specific area within the preset time period. Specifically, the second geographic position of the target face image in the preset time period can be obtained, the distance between the first geographic position and each second geographic position is obtained through calculation, if each distance is not greater than the preset distance, the geographic positions of the first geographic position and the person corresponding to the target face image in the preset time period are determined to belong to the same specific area, and otherwise, the geographic positions of the first geographic position and the person corresponding to the target face image in the preset time period are determined not to belong to the same specific area.
Whether the first geographic position and the geographic positions of the person corresponding to the target face image within the preset time period belong to the same specific area is determined. If they do, the verification passes: the person corresponding to the target face image is determined to be the person to be recognized corresponding to the image to be recognized, the image comparison result is determined to be correct, the ID of the person to be recognized is obtained, and the ID is taken as the identity recognition result of the person to be recognized. If they do not belong to the same specific area, the verification fails, the person corresponding to the target face image is determined not to be the person to be recognized corresponding to the image to be recognized, and the identity recognition fails.
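A possible rendering of this check is sketched below; the symmetric five-minute window around the image acquisition time and the 500-metre preset distance are illustrative assumptions, as is the representation of the trajectory as (acquisition time, position) pairs.

```python
from datetime import datetime, timedelta
from typing import Callable, List, Tuple

GeoPosition = Tuple[float, float]
TrackPoint = Tuple[datetime, GeoPosition]     # (acquisition time, position)

def within_same_area(first_position: GeoPosition,
                     track: List[TrackPoint],
                     image_time: datetime,
                     distance_fn: Callable[[GeoPosition, GeoPosition], float],
                     window: timedelta = timedelta(minutes=5),
                     preset_distance_m: float = 500.0) -> bool:
    """Preset condition of this embodiment: every position whose acquisition time
    falls within the preset time period containing the image acquisition time must
    be no farther than the preset distance from the first geographic position."""
    second_positions = [pos for t, pos in track
                        if image_time - window <= t <= image_time + window]
    if not second_positions:
        return False                          # no trajectory data in the period
    return all(distance_fn(first_position, pos) <= preset_distance_m
               for pos in second_positions)
```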
The embodiments of the present application are further illustrated by an example, and as shown in fig. 2, the implementation process of the embodiment may include the following steps:
firstly, carrying out image comparison on an image to be identified in a static full-scale library by using a 1: N comparison algorithm, and judging whether a target face image is obtained (namely whether a comparison result exists);
if no comparison result exists, the identification fails;
if the comparison result exists, acquiring the geographic position corresponding to the target face image from the dynamic track library;
determining whether the geographic position corresponding to the target face image is matched with the geographic position corresponding to the image to be recognized;
if not, the identification fails;
and if the matching is successful, the identification is successful.
In this example, the identity recognition method provided by the embodiment of the present application uses only the 1:N comparison algorithm in the image comparison stage, which saves computing power and improves the real-time performance of identity recognition; meanwhile, the dynamic trajectory library is used to verify the image comparison result, which improves the accuracy of identity recognition.
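The overall flow of Fig. 2 can be summarised by the following Python sketch, in which the 1:N comparison, the second-position lookup and the preset condition are passed in as placeholders for the steps described above; the data layout of the static full-scale library is an assumption of the example.

```python
def identify(image_to_recognize, first_position, image_time,
             static_full_library, dynamic_trajectory_library,
             compare_1_to_n, get_second_position, preset_condition):
    """Sketch of the Fig. 2 flow. compare_1_to_n returns the key of the matched
    target face image (or None); static_full_library maps that key to a record
    holding the person's ID and communication identifier; get_second_position and
    preset_condition stand for the lookup and check described above."""
    # Step 1: 1:N comparison in the static full-scale library
    target_key = compare_1_to_n(image_to_recognize, static_full_library)
    if target_key is None:
        return None                                  # no comparison result: recognition fails

    record = static_full_library[target_key]
    person_id, comm_id = record["id"], record["comm_id"]

    # Step 2: fetch the second geographic position for the matched person
    second_position = get_second_position(dynamic_trajectory_library, comm_id, image_time)
    if second_position is None:
        return None                                  # no trajectory data: cannot verify

    # Step 3: verify the image comparison result with the geographic check
    if preset_condition(first_position, second_position):
        return person_id                             # verification passed: recognition succeeds
    return None                                      # verification failed: recognition fails
```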
Fig. 3 is a schematic flow chart of an identity recognition method provided in an embodiment of the present application, and as shown in fig. 3, the method may include:
step S301, acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located, and an image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is included in a preset time period.
The image to be recognized at least includes a face of a person to be recognized, and is used as an object for face comparison in the following, for example, the image to be recognized may be a whole body image or a half body image including the face. The image to be identified can be directly acquired by front-end image acquisition equipment, or can be acquired by extracting a corresponding image from a video stream acquired by the front-end image acquisition equipment.
The first geographic position is the geographic position where the front-end image acquisition equipment is located, and the first geographic position also represents the geographic position where the person to be identified is located. In practical applications, each image capturing device may be numbered, each number corresponding to a specific geographic location, and then the corresponding first geographic location may be obtained by obtaining the number of the image capturing device.
The preset area may be set according to the actual range to be monitored; for example, a country, a region, a province or the like may each be used as a preset area. In practical applications, all the persons in the preset area may generally include the permanent resident population and the floating population; the permanent resident population may be obtained from databases of the relevant departments, and the floating population may be obtained from face recognition systems deployed at the airports, stations and other points of entry into or exit from the preset area. The preset area may comprise a plurality of sub-areas, and the first geographic position corresponds to a sub-area within the preset area, i.e. the first geographic position lies in the corresponding sub-area.
The image acquisition time is used for representing the time when the image acquisition equipment acquires the image to be identified.
Step S302, image comparison is carried out on images to be recognized in a first dynamic track library, and the first dynamic track library stores face images of all people in a sub-area within a preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images.
The preset time period is determined according to the image capturing time, and for example, the preset time period may be set to a time period between 5 minutes before and after the image capturing time. The sub-area may be determined according to the first geographical location, for example, an area covered by a base station of a telecommunications carrier corresponding to the first location may be set as the corresponding sub-area.
Due to the corresponding relation between the face image and the ID, the corresponding ID can be obtained from the first dynamic track library according to the face image.
Specifically, because the person to be recognized is located at the first geographic position at the image acquisition time, and the first geographic position belongs to the corresponding sub-region, all persons in the sub-region within the preset time period should include the person to be recognized, the face image corresponding to the person to be recognized is stored in the first dynamic trajectory library, and the image to be recognized can be subjected to image comparison in the first dynamic trajectory library to obtain the corresponding target face image.
Step S303, if a first target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID corresponding to the first target face image from a first dynamic track library, and taking the ID corresponding to the first target face image as an identity recognition result of the image to be recognized.
Specifically, after a first target face image corresponding to the image to be recognized is acquired, a corresponding ID can be acquired in the first dynamic trajectory library according to the first target face image, and the ID is an identity recognition result of the person to be recognized.
Theoretically, as can be seen from the foregoing analysis, as long as the image acquisition time and the first geographic location of the image to be recognized meet the requirements, the target image corresponding to the image to be recognized can be acquired from the corresponding first dynamic trajectory library. However, it can be understood that due to the influence of the quality of the acquired image to be recognized, the influence of the accuracy of the image comparison algorithm, or the influence of the integrity of the first dynamic trajectory library, a part of the image to be recognized may not be able to obtain the corresponding target face image. Then, when the first target image corresponding to the image to be recognized is not acquired, the image to be recognized needs to be further recognized.
In an optional embodiment of the present application, the method may further comprise:
and if the face image matched with the image to be recognized is not obtained through image comparison, performing image comparison on the image to be recognized in a static full-scale library, wherein the face image of the person in the preset area, the ID corresponding to each face image and the communication identification corresponding to each face image are stored in the static full-scale library.
And if a second target face image matched with the image to be recognized is obtained through image comparison in the static full-scale library, acquiring an ID corresponding to the second target face image from the static full-scale library, and taking the ID corresponding to the second target face image as an identity recognition result of the image to be recognized.
In the static full-scale library, because the face image, the ID and the communication identifier correspond to one another, for any person in the preset area, obtaining any one of the three pieces of information allows the other two to be looked up in the library. In practical applications, the ID may be an identity card number, and the communication identifier may be a mobile phone number.
Specifically, after a second target face image corresponding to the image to be recognized is obtained, a corresponding ID can be obtained in the static full-scale library according to the second target face image, and the ID is an identity recognition result of the person to be recognized.
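The fallback logic of this embodiment, comparing first in the regional sub-library and only then in the static full-scale library, might be sketched as follows; the record layout and the behaviour of the 1:N comparison callable are assumptions of the example.

```python
def identify_with_regional_sublibrary(image_to_recognize,
                                      first_dynamic_library,
                                      static_full_library,
                                      compare_1_to_n):
    """Sketch of the fallback flow: compare first in the small regional sub-library
    (first dynamic trajectory library); only on a miss fall back to the static
    full-scale library. compare_1_to_n stands for the 1:N similarity search and is
    assumed to return the matched record (with an "id" field) or None."""
    # Fast path: the sub-library only holds persons seen in this sub-area and time period
    match = compare_1_to_n(image_to_recognize, first_dynamic_library)
    if match is not None:
        return match["id"]                 # first target face image found

    # Fallback: the static full-scale library covers all persons in the preset area
    match = compare_1_to_n(image_to_recognize, static_full_library)
    if match is not None:
        return match["id"]                 # second target face image found
    return None                            # recognition fails
```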
In an optional embodiment of the present application, the method further comprises:
the method comprises the steps of obtaining a first dynamic track library based on a first geographical position, image acquisition time and a second dynamic track library, wherein communication identifications of all personnel in a preset area, geographical positions corresponding to the communication identifications and acquisition moments corresponding to the geographical positions are stored in the second dynamic track library.
Compared with the second dynamic trajectory library, the first dynamic trajectory library additionally stores face images as a data type, and the data in the first dynamic trajectory library is the data of persons in the sub-area within the preset time period.
In an optional embodiment of the present application, the obtaining the first dynamic trajectory library based on the first geographic location, the image capturing time, and the second dynamic trajectory library specifically includes:
acquiring corresponding communication identifiers in a sub-area within a preset time period from a second dynamic track library;
acquiring IDs and face images corresponding to the communication identifiers in the sub-regions within the preset time period from a static full-scale library based on the corresponding communication identifiers in the sub-regions within the preset time period; alternatively,
based on each communication identifier in the sub-region within the preset time period, acquiring a face image corresponding to each communication identifier corresponding to the sub-region within the preset time period from a static full-scale library, and acquiring an ID corresponding to each communication identifier corresponding to the sub-region within the preset time period from a real-name database of a telecom operator;
and obtaining a first dynamic track library based on each communication identifier corresponding to the sub-region in a preset time period, the ID corresponding to each communication identifier and the face image corresponding to each communication identifier.
Specifically, the acquisition process of the first dynamic track library may be understood as follows: firstly, the communication identifiers corresponding to the persons in the sub-area within the preset time period are screened out from the second dynamic track library; then, the communication identifiers are used as indexes to pull the corresponding IDs and the corresponding face images from the static full-scale library, or the communication identifiers are used as indexes to pull the corresponding face images from the static full-scale library and the corresponding IDs from the real-name database of the telecom operator; and then, the first dynamic track library is constructed by using the communication identifiers corresponding to the persons in the sub-area within the preset time period, the corresponding IDs, and the corresponding face images.
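A minimal sketch of this construction step is given below. All function and field names are illustrative assumptions; the second dynamic track library is represented as a list of (communication identifier, location, timestamp) records, and the operator's real-name database as a simple dictionary:

```python
def build_first_dynamic_library(second_dynamic_lib, static_full_lib, operator_realname_db,
                                sub_area, time_window, use_operator_db=False):
    """Build the first dynamic track library for one sub-area and time window (illustrative sketch).

    second_dynamic_lib: iterable of (comm_id, location, timestamp) records
    static_full_lib: object exposing lookup_by_comm(comm_id) -> (person_id, face_image) or None
    operator_realname_db: dict mapping comm_id -> person_id (telecom operator's real-name records)
    sub_area: set of location identifiers belonging to the sub-area
    time_window: (t_start, t_end) tuple covering the preset time period
    """
    t_start, t_end = time_window
    # Step 1: screen out communication identifiers seen in the sub-area within the time window.
    comm_ids = {comm for comm, loc, ts in second_dynamic_lib
                if loc in sub_area and t_start <= ts <= t_end}

    first_lib = []
    for comm in comm_ids:
        entry = static_full_lib.lookup_by_comm(comm)
        if entry is None:
            continue
        person_id, face_image = entry
        # Alternative branch: take the ID from the operator's real-name database
        # and only the face image from the static full-scale library.
        if use_operator_db:
            person_id = operator_realname_db.get(comm, person_id)
        first_lib.append({"comm_id": comm, "person_id": person_id, "face_image": face_image})
    return first_lib
```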
In an alternative embodiment of the present application, the first target face image is obtained by:
respectively comparing the image to be recognized with each face image in the first dynamic track library to obtain the similarity between the image to be recognized and each face image;
and taking the face image with the similarity between the face image and the image to be recognized larger than a preset threshold value as a first target face image.
The preset threshold can be set according to actual requirements, and its value affects the speed and accuracy of target face image acquisition: the larger the preset threshold is, the slower the target face acquisition is and the higher the accuracy is; conversely, the smaller the preset threshold is, the faster the target face acquisition is and the lower the accuracy is. In practical application, a suitable preset threshold can be selected so that both speed and accuracy are taken into account when the target face image is acquired.
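The 1:N comparison itself is not specified in detail here; the following sketch assumes that the face images have already been converted to feature vectors and that cosine similarity is the similarity measure, which is an assumption for illustration rather than part of the original scheme:

```python
import numpy as np

def find_target_faces(query_feature, library_features, threshold=0.8):
    """1:N comparison sketch: return indices of library faces whose similarity to the
    query exceeds the preset threshold, sorted from most to least similar.

    query_feature: 1-D feature vector of the image to be recognized
    library_features: 2-D array, one feature vector per library face image
    threshold: preset similarity threshold (cosine similarity in [0, 1] assumed)
    """
    lib = np.asarray(library_features, dtype=float)
    q = np.asarray(query_feature, dtype=float)
    # Cosine similarity between the query and every library feature vector.
    sims = lib @ q / (np.linalg.norm(lib, axis=1) * np.linalg.norm(q) + 1e-12)
    hits = np.where(sims > threshold)[0]
    return hits[np.argsort(-sims[hits])]
```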
The embodiment of the present application is further described below by an example, and as shown in fig. 4, a specific implementation process of the scheme may include the following steps:
firstly, acquiring a corresponding first dynamic track library (regional sub-library) according to a first geographical position and image acquisition time corresponding to an image to be identified;
comparing the image to be recognized in the regional sub-library by using a 1:N comparison algorithm, and judging whether a first target face image is obtained (namely, whether a first comparison result exists);
if the first comparison result exists, acquiring an ID corresponding to the first target face image from a first dynamic track library, and taking the ID as an identification result of the image to be identified;
if the first comparison result does not exist, comparing the image to be recognized in a static full-scale library by using a 1:N comparison algorithm, and judging whether a second target face image is obtained (namely, whether a second comparison result exists);
if the second comparison result is obtained, acquiring an ID corresponding to the second target face image from the static full-scale library, and taking the ID as the identification result of the image to be identified;
if the second comparison result does not exist, the identification fails.
In this example, according to the method provided by the embodiment of the application, the data volume of the regional sub-library used in the first image comparison is far smaller than that of the static full-scale library, which reduces the computational power consumption and improves the real-time performance of the identity recognition; meanwhile, images to be recognized for which the first image comparison yields no comparison result are re-recognized using the static full-scale library, which ensures that all images to be recognized are identified.
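The two-stage flow of fig. 4 can be summarized in the sketch below, where match_in_library stands in for the 1:N comparison described above and all names are illustrative:

```python
def identify(image_to_recognize, regional_sub_library, static_full_library, match_in_library):
    """Two-stage identification sketch: regional sub-library first, static full-scale library as fallback.

    match_in_library(image, library) is assumed to return the matched record
    (with a "person_id" field) or None when no comparison result exists.
    """
    # First comparison: the regional sub-library (first dynamic track library).
    first_hit = match_in_library(image_to_recognize, regional_sub_library)
    if first_hit is not None:
        return first_hit["person_id"]          # first comparison result exists
    # Second comparison: fall back to the static full-scale library.
    second_hit = match_in_library(image_to_recognize, static_full_library)
    if second_hit is not None:
        return second_hit["person_id"]         # second comparison result exists
    return None                                # recognition fails
```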
Fig. 5 is a block diagram of an identification apparatus according to an embodiment of the present disclosure, and as shown in fig. 5, the apparatus 500 may include: an image acquisition module 501, an image comparison module 502, an information acquisition module 503, a geographic position verification module 504 and an identification result output module 505. Wherein:
the image acquisition module 501 is configured to acquire an image to be identified, a first geographic location where an image acquisition device corresponding to the image to be identified is located, and an image acquisition time corresponding to the image to be identified;
the image comparison module 502 is configured to perform image comparison on images to be recognized in a static full-scale library, where face images of people in a preset area, identity recognition information IDs corresponding to the face images, and communication identifiers corresponding to the face images are stored in the static full-scale library;
the information obtaining module 503 is configured to, if a target face image matched with the image to be recognized is obtained through image comparison, obtain an ID and a communication identifier corresponding to the target face image from the static full-scale library;
the geographic position checking module 504 is configured to obtain a corresponding second geographic position from a dynamic trajectory library based on an acquisition time corresponding to the image to be recognized and a communication identifier corresponding to the target face image, where the dynamic trajectory library stores communication identifiers of people in a preset area, geographic positions corresponding to the communication identifiers, and acquisition times corresponding to the geographic positions;
the recognition result output module 505 is configured to, if the first geographic position and the second geographic position meet a preset condition, take an ID corresponding to the target face image as an identity recognition result of the image to be recognized.
The application provides an identity recognition device. Firstly, image comparison is performed on the image to be recognized in the static full-scale library containing the relevant information of all persons, and the ID and the communication identifier corresponding to the image to be recognized are obtained; then, the communication identifier is used to obtain the corresponding geographic position from the dynamic trajectory library, and the relationship between the geographic position obtained from the dynamic trajectory library and the geographic position of the image acquisition device corresponding to the image to be recognized is used to check the image comparison result; if the check passes, the corresponding ID is taken as the identity recognition result of the image to be recognized. In this scheme, the static full-scale library containing the relevant information of all persons is used in the image comparison process, which ensures the completeness of the base library data; meanwhile, the data in the dynamic trajectory library is used to verify the image comparison result, which ensures the accuracy of the recognition result, so that real-time and accurate identification of all snapshot images to be recognized can be achieved.
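Under the illustrative names used earlier, the chain formed by the five modules of fig. 5 might look roughly like the sketch below; the exact-time way of selecting the second geographic position is used here only for brevity, and the alternative selection methods are described after this sketch:

```python
def recognize_with_location_check(image, capture_time, first_location,
                                  static_full_lib, dynamic_track_lib,
                                  match_in_library, positions_satisfy):
    """Sketch of the module chain in fig. 5 (illustrative names, not the original API).

    match_in_library(image, lib) -> record with "person_id" and "comm_id", or None
    dynamic_track_lib.locations(comm_id) -> list of (location, timestamp) records
    positions_satisfy(first_location, second_locations) -> bool (the preset condition)
    """
    hit = match_in_library(image, static_full_lib)           # image comparison module
    if hit is None:
        return None
    track = dynamic_track_lib.locations(hit["comm_id"])      # geographic position check module
    second_locations = [loc for loc, ts in track if ts == capture_time]
    if positions_satisfy(first_location, second_locations):  # preset condition met
        return hit["person_id"]                              # recognition result output module
    return None
```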
In an optional embodiment of the present application, the image comparison module is specifically configured to:
respectively comparing the image to be recognized with each face image in the static full-scale library to obtain the similarity between the image to be recognized and each face image;
and taking the face image with the similarity between the face image and the image to be recognized larger than a preset threshold value as a target face image.
In an optional embodiment of the present application, the geographic location checking module is specifically configured to:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
taking, from the geographic positions, the geographic position whose acquisition time is the same as the image acquisition time as a second geographic position; correspondingly,
the preset condition is that the first geographic position is matched with the second geographic position.
In an optional embodiment of the present application, the geographic location checking module is specifically configured to:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, from the geographical positions, at least two geographical positions whose acquisition times are before and closest to the image acquisition time, and taking the at least two geographical positions as second geographical positions; correspondingly,
the preset condition is that the distance between the first geographic position and the geographic position, closest to the image acquisition time, of the at least two geographic positions is not greater than the distance between the first geographic position and each of the other geographic positions of the at least two geographic positions.
In an optional embodiment of the present application, the geographic location checking module is specifically configured to:
acquiring a geographical position corresponding to a communication identifier corresponding to a target face image from a dynamic track library;
acquiring, from the geographical positions, the geographical positions whose acquisition times fall within a preset time period, and taking these geographical positions as second geographical positions, wherein the preset time period comprises the image acquisition time; correspondingly,
the preset condition is that the distance between the first geographical position and each second geographical position is not more than a preset distance.
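The three ways of selecting the second geographic position and their corresponding preset conditions can be sketched as follows; the track records, the distance function, and all other names are assumptions for illustration:

```python
def check_same_time(track, capture_time, first_loc):
    """Variant 1: second positions have the same acquisition time; condition is a location match."""
    seconds = [loc for loc, ts in track if ts == capture_time]
    return any(loc == first_loc for loc in seconds)

def check_nearest_before(track, capture_time, first_loc, distance, k=2):
    """Variant 2: take at least two positions acquired before and closest to the capture time;
    the position closest in time must be no farther from the first position than the others."""
    before = sorted((rec for rec in track if rec[1] < capture_time),
                    key=lambda rec: capture_time - rec[1])[:k]
    if len(before) < 2:
        return False
    closest_loc = before[0][0]
    others = [loc for loc, _ in before[1:]]
    return all(distance(first_loc, closest_loc) <= distance(first_loc, loc) for loc in others)

def check_within_window(track, window, first_loc, distance, max_distance):
    """Variant 3: take every position acquired within a preset time period containing the
    capture time; each must lie within a preset distance of the first position."""
    t_start, t_end = window
    seconds = [loc for loc, ts in track if t_start <= ts <= t_end]
    return bool(seconds) and all(distance(first_loc, loc) <= max_distance for loc in seconds)
```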
Fig. 6 is a block diagram of an identification apparatus according to an embodiment of the present disclosure, and as shown in fig. 6, the apparatus 600 may include: an image acquisition module 601, an image comparison module 602, and an identification result output module 603. Wherein:
the image acquisition module 601 is configured to acquire an image to be identified, a first geographic position where an image acquisition device corresponding to the image to be identified is located, and an image acquisition time corresponding to the image to be identified, where the first geographic position corresponds to a sub-area in a preset area, and the image acquisition time is included in a preset time period;
the image comparison module 602 is configured to perform image comparison on images to be recognized in a first dynamic trajectory library, where the first dynamic trajectory library stores face images of people in sub-regions within a preset time period, identity identification information IDs corresponding to the face images, and communication identifiers corresponding to the face images;
the recognition result output module 603 is configured to, if a first target face image matched with the image to be recognized is obtained through image comparison, obtain an ID corresponding to the first target face image from the first dynamic trajectory library, and use the ID corresponding to the first target face image as an identity recognition result of the image to be recognized.
In an optional embodiment of the present application, the apparatus further includes a second image comparison module and a second recognition result output module, wherein:
the second image comparison module is used for comparing the images to be recognized in a static full-scale library if the face images matched with the images to be recognized are not obtained through image comparison, and the face images of the persons in the preset area, the IDs corresponding to the face images and the communication identifications corresponding to the face images are stored in the static full-scale library;
and the second recognition result output module is used for acquiring an ID corresponding to the second target face image from the static full-scale library and taking the ID corresponding to the second target face image as an identity recognition result of the image to be recognized if the second target face image matched with the image to be recognized is obtained through image comparison in the static full-scale library.
According to the identity recognition device, a first dynamic trajectory library corresponding to the preset time period and the sub-area is used to obtain a first target face image corresponding to the image to be recognized; if the first target face image is obtained, the identity recognition result of the image to be recognized is obtained in the first dynamic trajectory library according to the first target face image; if the first target face image is not obtained, a second target face image of the image to be recognized is obtained in the static full-scale library, and the identity recognition result of the image to be recognized is obtained in the static full-scale library according to the second target face image. Because the library used for the first image comparison corresponds to the preset time period and the sub-area, the data volume is small, which ensures the real-time performance of the identity recognition; meanwhile, for an image to be recognized for which the comparison in the first dynamic trajectory library fails, the static full-scale library with more complete data is further used for comparison, so that real-time and accurate identification of all snapshot images to be recognized can be achieved.
In an optional embodiment of the present application, the apparatus may further include a first dynamic trajectory library obtaining module, configured to:
the method comprises the steps of obtaining a first dynamic track library based on a first geographical position, image acquisition time and a second dynamic track library, wherein communication identifications of all personnel in a preset area, geographical positions corresponding to the communication identifications and acquisition moments corresponding to the geographical positions are stored in the second dynamic track library.
In an optional embodiment of the present application, the first dynamic trajectory library obtaining module is specifically configured to:
acquiring corresponding communication identifiers in a sub-area within a preset time period from a second dynamic track library;
acquiring IDs and face images corresponding to the communication identifiers in the sub-regions within the preset time period from a static full-scale library based on the corresponding communication identifiers in the sub-regions within the preset time period; alternatively,
based on each communication identifier in the sub-region within the preset time period, acquiring a face image corresponding to each communication identifier corresponding to the sub-region within the preset time period from a static full-scale library, and acquiring an ID corresponding to each communication identifier corresponding to the sub-region within the preset time period from a real-name database of a telecom operator;
and obtaining a first dynamic track library based on each communication identifier corresponding to the sub-region in a preset time period, the ID corresponding to each communication identifier and the face image corresponding to each communication identifier.
In an optional embodiment of the present application, the first image comparing module is specifically configured to:
respectively comparing the image to be recognized with each face image in the first dynamic track library to obtain the similarity between the image to be recognized and each face image;
and taking the face image with the similarity between the face image and the image to be recognized larger than a preset threshold value as a first target face image.
Based on the same principle, an embodiment of the present application further provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method provided in any optional embodiment of the present application is implemented, and the following specific cases may be implemented:
the first condition is as follows: acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and an image acquisition time corresponding to the image to be identified; comparing images to be recognized in a static full-scale library, wherein the static full-scale library stores face images of all persons in a preset area, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images; if a target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID and a communication identifier corresponding to the target face image from a static full-scale library; acquiring a corresponding second geographical position from a dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, wherein the communication identifications of all personnel in a preset area, the geographical positions corresponding to the communication identifications and the acquisition time corresponding to the geographical positions are stored in the dynamic track library; and if the first geographic position and the second geographic position meet the preset condition, taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized.
Case two: acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is contained in a preset time period; comparing images to be recognized in a first dynamic track library, wherein the first dynamic track library stores face images of all persons in a sub-area within a preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images; and if a first target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID corresponding to the first target face image from the first dynamic track library, and taking the ID corresponding to the first target face image as an identity recognition result of the image to be recognized.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method shown in any embodiment of the present application.
It is understood that the medium may store a computer program corresponding to the identification method.
Fig. 7 is a schematic structural diagram of an electronic device to which the embodiment of the present application is applied, and as shown in fig. 7, the electronic device 700 includes: a processor 701 and a memory 703. The processor 701 is coupled to the memory 703, such as via a bus 702. Further, the electronic device 700 may also include a transceiver 704, and the electronic device 700 may interact with other electronic devices through the transceiver 704. It should be noted that the transceiver 704 is not limited to one in practical applications, and the structure of the electronic device 700 does not constitute a limitation on the embodiment of the present application.
The processor 701, applied in this embodiment of the present application, may be configured to implement the functions of the image obtaining module, the image comparing module, the information obtaining module, the geographic position checking module, and the recognition result output module shown in fig. 5, and may also be configured to implement the functions of the image obtaining module, the first image comparing module, the first recognition result output module, the second image comparing module, and the second recognition result output module shown in fig. 6.
The processor 701 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 701 may also be a combination of devices implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 702 may include a path that transfers information between the above components. The bus 702 may be a PCI bus or an EISA bus, etc. The bus 702 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
The memory 703 may be, but is not limited to, a ROM or another type of static storage device that can store static information and instructions, a RAM or another type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 703 is used for storing the application program code for executing the solution of the present application, and the execution is controlled by the processor 701. The processor 701 is configured to execute the application program code stored in the memory 703 to implement the actions of the identification apparatus provided in the embodiment shown in fig. 5 or fig. 6.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least a part of the steps in the flowcharts of the figures may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (12)

1. An identity recognition method, comprising:
acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and an image acquisition time corresponding to the image to be identified;
comparing the images to be recognized in a static full-scale library, wherein the static full-scale library stores face images of people in a preset area, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
if a target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID and a communication identifier corresponding to the target face image from the static full-scale library;
acquiring a corresponding second geographic position from a dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, wherein the communication identification of personnel in the preset area, the geographic position corresponding to each communication identification and the acquisition time corresponding to each geographic position are stored in the dynamic track library;
and if the first geographic position and the second geographic position meet preset conditions, taking the ID corresponding to the target face image as an identity recognition result of the image to be recognized.
2. The method according to claim 1, wherein the obtaining a corresponding second geographic location from the dynamic trajectory library based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to the target face image from the dynamic track library;
taking the geographic position with the same acquisition time as the image acquisition time in each geographic position as the second geographic position; correspondingly,
the preset condition is that the first geographic position is matched with the second geographic position.
3. The method according to claim 1, wherein the obtaining a corresponding second geographic location from the dynamic trajectory library based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to the target face image from the dynamic track library;
acquiring at least two geographical positions which are in the geographical positions and have the acquisition time before the image acquisition time and are closest to the image acquisition time, and taking the at least two geographical positions as second geographical positions; correspondingly,
the preset condition is that the distance between the first geographical position and the geographical position, closest to the image acquisition time, of the at least two geographical positions at the acquisition time is not greater than the distance between the first geographical position and other geographical positions of the at least two geographical positions.
4. The method according to claim 1, wherein the obtaining a corresponding second geographic location from the dynamic trajectory library based on the acquisition time corresponding to the image to be recognized and the communication identifier corresponding to the target face image specifically includes:
acquiring a geographical position corresponding to a communication identifier corresponding to the target face image from the dynamic track library;
acquiring the geographical position of the acquisition time in each geographical position within a preset time period, and taking the geographical position of the acquisition time within the preset time period as the second geographical position, wherein the preset time period comprises the image acquisition time; correspondingly,
the preset condition is that the distance between the first geographical position and each second geographical position is not more than a preset distance.
5. An identity recognition method, comprising:
acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is contained in a preset time period;
comparing the images to be recognized in a first dynamic track library, wherein the first dynamic track library stores face images of persons in the sub-area within the preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
and if a first target face image matched with the image to be recognized is obtained through image comparison, acquiring an ID corresponding to the first target face image from the first dynamic track library, and taking the ID corresponding to the first target face image as an identity recognition result of the image to be recognized.
6. The method of claim 5, further comprising:
if the face image matched with the image to be recognized is not obtained through image comparison, image comparison is carried out on the image to be recognized in a static full-scale library, and the face image of a person in a preset area, the ID corresponding to each face image and the communication identification corresponding to each face image are stored in the static full-scale library;
and if a second target face image matched with the image to be recognized is obtained through image comparison in the static full-scale library, acquiring an ID corresponding to the second target face image from the static full-scale library, and taking the ID corresponding to the second target face image as an identity recognition result of the image to be recognized.
7. The method of claim 5, further comprising:
and acquiring the first dynamic track library based on the first geographical position, the image acquisition time and a second dynamic track library, wherein the second dynamic track library stores communication identifiers of personnel in the preset area, geographical positions corresponding to the communication identifiers and acquisition moments corresponding to the geographical positions.
8. The method according to claim 7, wherein the obtaining the first dynamic trajectory library based on the first geographic location, the image acquisition time, and a second dynamic trajectory library specifically comprises:
acquiring corresponding communication identifiers in the sub-area within the preset time period from the second dynamic track library;
acquiring IDs and face images corresponding to the communication identifiers in the sub-regions in the preset time period from the static full-scale library based on the corresponding communication identifiers in the sub-regions in the preset time period; alternatively,
based on each communication identifier in the sub-region in the preset time period, acquiring a face image corresponding to each communication identifier corresponding to the sub-region in the preset time period from the static full-scale library, and acquiring an ID corresponding to each communication identifier corresponding to the sub-region in the preset time period from a real-name database of a telecom operator;
and obtaining the first dynamic track library based on each communication identifier corresponding to the sub-region in the preset time period, the ID corresponding to each communication identifier and the face image corresponding to each communication identifier.
9. An identification device, comprising:
the image acquisition module is used for acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and an image acquisition time corresponding to the image to be identified;
the image comparison module is used for carrying out image comparison on the images to be recognized in a static full-scale library, and the static full-scale library stores face images of people in a preset area, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
the information acquisition module is used for acquiring an ID and a communication identifier corresponding to the target face image from the static full-scale library if the target face image matched with the image to be recognized is obtained through image comparison;
the geographic position checking module is used for acquiring a corresponding second geographic position from a dynamic track library based on the acquisition time corresponding to the image to be recognized and the communication identification corresponding to the target face image, wherein the communication identification of the personnel in the preset area, the geographic position corresponding to each communication identification and the acquisition time corresponding to each geographic position are stored in the dynamic track library;
and the recognition result output module is used for taking the ID corresponding to the target face image as the identity recognition result of the image to be recognized if the preset condition is met between the first geographic position and the second geographic position.
10. An identification device, comprising:
the image acquisition module is used for acquiring an image to be identified, a first geographical position where an image acquisition device corresponding to the image to be identified is located and image acquisition time corresponding to the image to be identified, wherein the first geographical position corresponds to a sub-area in a preset area, and the image acquisition time is contained in a preset time period;
the image comparison module is used for carrying out image comparison on the images to be recognized in a first dynamic track library, and the first dynamic track library stores face images of all persons in the sub-area within the preset time period, identity recognition information IDs corresponding to the face images and communication identifications corresponding to the face images;
and the recognition result output module is used for acquiring the ID corresponding to the first target face image from the first dynamic track library and taking the ID corresponding to the first target face image as the identity recognition result of the image to be recognized if the first target face image matched with the image to be recognized is obtained through image comparison.
11. An electronic device comprising a memory and a processor;
the memory has stored therein a computer program;
the processor for executing the computer program to implement the method of any one of claims 1 to 8.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method of any one of claims 1 to 8.
CN201911008076.8A 2019-10-22 2019-10-22 Identity recognition method and device, electronic equipment and computer readable storage medium Pending CN110765938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911008076.8A CN110765938A (en) 2019-10-22 2019-10-22 Identity recognition method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110765938A true CN110765938A (en) 2020-02-07

Family

ID=69331345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911008076.8A Pending CN110765938A (en) 2019-10-22 2019-10-22 Identity recognition method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110765938A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105890586A (en) * 2016-03-30 2016-08-24 上海河广信息科技有限公司 User location marking system and method
CN106469296A (en) * 2016-08-30 2017-03-01 北京旷视科技有限公司 Face identification method, device and gate control system
CN109102531A (en) * 2018-08-21 2018-12-28 北京深瞐科技有限公司 A kind of target trajectory method for tracing and device
CN110276298A (en) * 2019-06-21 2019-09-24 腾讯科技(深圳)有限公司 Determination method, device, storage medium and the computer equipment of user behavior

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652173A (en) * 2020-06-10 2020-09-11 杭州十域科技有限公司 Acquisition method suitable for people flow management and control in comprehensive mall
CN111652173B (en) * 2020-06-10 2023-07-18 杭州十域科技有限公司 Acquisition method suitable for personnel flow control in comprehensive market
CN111898451A (en) * 2020-07-01 2020-11-06 中国建设银行股份有限公司 Identity recognition method and device, electronic equipment and computer readable storage medium
CN112036894A (en) * 2020-09-01 2020-12-04 中国银行股份有限公司 Method and system for identity confirmation by using iris characteristics and motion characteristics
CN112036894B (en) * 2020-09-01 2023-08-18 中国银行股份有限公司 Method and system for identity confirmation by utilizing iris characteristics and action characteristics
WO2022227562A1 (en) * 2021-04-27 2022-11-03 北京市商汤科技开发有限公司 Identity recognition method and apparatus, and electronic device, storage medium and computer program product
CN113177490A (en) * 2021-05-07 2021-07-27 中国铁道科学研究院集团有限公司 Passenger travel information query method and device
CN113423089A (en) * 2021-05-18 2021-09-21 西安艾润物联网技术服务有限责任公司 Equipment identity identification control method and device
CN113423089B (en) * 2021-05-18 2024-04-19 西安艾润物联网技术服务有限责任公司 Equipment identity recognition control method and device
CN113283410A (en) * 2021-07-26 2021-08-20 浙江宇视科技有限公司 Face enhancement recognition method, device and equipment based on data association analysis

Similar Documents

Publication Publication Date Title
CN110765938A (en) Identity recognition method and device, electronic equipment and computer readable storage medium
CN109086669B (en) Face recognition identity verification method and device and electronic equipment
CN110505583B (en) Trajectory matching method based on bayonet data and signaling data
CN109325964B (en) Face tracking method and device and terminal
CN109635857B (en) Human-vehicle track monitoring and analyzing method, device, equipment and storage medium
CN108091140B (en) Method and device for determining fake-licensed vehicle
CN110969215B (en) Clustering processing method and device, storage medium and electronic device
CN111160243A (en) Passenger flow volume statistical method and related product
CN108563651B (en) Multi-video target searching method, device and equipment
CN103839308A (en) Population obtaining method, device and system
CN112770265B (en) Pedestrian identity information acquisition method, system, server and storage medium
CN103996026A (en) Fingerprint feature extraction method, device and system
CN111125382A (en) Personnel track real-time monitoring method and terminal equipment
CN109784220B (en) Method and device for determining passerby track
US20160105763A1 (en) Framework and method for tracking lost or stolen electronic device
CN110245573A (en) A kind of register method, apparatus and terminal device based on recognition of face
CN111126411B (en) Abnormal behavior identification method and device
CN110557722B (en) Target group partner identification method and related device
CN113627334A (en) Object behavior identification method and device
CN110889346B (en) Intelligent tracking method, system, equipment and readable medium
CN109063622B (en) Positioning method and device
CN111079469B (en) Face image processing method, device, equipment and readable storage medium
CN115830342A (en) Method and device for determining detection frame, storage medium and electronic device
WO2020114128A1 (en) Map drawing method and apparatus, storage medium and electronic apparatus
CN112990156B (en) Optimal target capturing method and device based on video and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200207