CN110705469A - Face matching method and device and server - Google Patents


Info

Publication number
CN110705469A
CN110705469A
Authority
CN
China
Prior art keywords
image
human body
feature information
face
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910940622.5A
Other languages
Chinese (zh)
Inventor
王亚
向秋敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN201910940622.5A priority Critical patent/CN110705469A/en
Publication of CN110705469A publication Critical patent/CN110705469A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Abstract

Embodiments of the application provide a face matching method, a face matching device, and a server, relating to the field of face recognition search. Feature extraction is performed on an acquired image to be recognized, which has a first resolution, to obtain its human body feature information. The human body feature information of the image to be recognized is then matched against the stored human body feature information of at least one comparison image, where the comparison image has a second resolution and the first resolution is smaller than the second resolution. When the human body feature information of the image to be recognized matches the human body feature information of at least one comparison image, the human face feature information of the matched comparison image(s) can be acquired, and the corresponding face image obtained. A low-resolution image is thereby matched to a high-resolution face image, so that footage from ordinary cameras is used effectively.

Description

Face matching method and device and server
Technical Field
The application relates to the field of face recognition search, in particular to a face matching method, a face matching device and a server.
Background
In recent years, face recognition search has demonstrated prominent value in security applications: from pursuing fugitives, finding missing persons, and confirming the identity of suspects, to big-data analysis centered on face data, it assists case investigation and plays a major role in the work of public security organs.
Generally, public security agencies collect suspect information through ordinary surveillance cameras. However, ordinary cameras are mounted high, and the images they capture have low resolution compared with a snapshot machine (a dedicated face-capture camera), so a pedestrian's identity cannot be directly confirmed in a face inspection system. The information collected by ordinary cameras therefore cannot be fully used in the public security inspection system, which wastes resources.
Disclosure of Invention
In view of this, an object of the present application is to provide a face matching method that makes full use of the information collected by ordinary cameras and reduces resource waste.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, the present application provides a face matching method, including:
acquiring an image to be identified, wherein the image to be identified has a first resolution;
extracting the characteristics of the image to be recognized to obtain the human body characteristic information of the image to be recognized;
matching the human body feature information of the image to be identified with the stored human body feature information of at least one comparison image, wherein the comparison image has a second resolution, and the first resolution is smaller than the second resolution;
and when the human body characteristic information of the image to be identified is matched with the human body characteristic information of at least one comparison image, acquiring the human face characteristic information of at least one matched comparison image.
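The four claimed steps can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record layout, the attribute-overlap similarity, and the 0.8 threshold are all assumptions introduced for the sketch.

```python
from typing import Dict, List

def body_similarity(a: Dict[str, str], b: Dict[str, str]) -> float:
    # Fraction of shared body attributes (gender, upper-body color, ...)
    # that agree; the application does not fix a similarity measure.
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys) if keys else 0.0

def match_faces(query_body: Dict[str, str],
                gallery: List[dict],
                threshold: float = 0.8) -> List[dict]:
    # Match the query body features against every stored comparison image,
    # and return the face feature info of each match that carries one.
    return [rec["face"] for rec in gallery
            if body_similarity(query_body, rec["body"]) >= threshold
            and rec.get("face") is not None]
```

A gallery record here pairs the body features extracted from one high-resolution snapshot with the face features from the same image, when both exist.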
With reference to the first aspect, in a first possible implementation manner, the method further includes:
obtaining at least one snap-shot image; the at least one snapshot image is from a snapshot machine;
extracting the features of the snap-shot image to obtain human body feature information and human face feature information of the snap-shot image;
and storing the snapshot image as the comparison image.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, before the step of performing feature extraction on the captured image to obtain human body feature information and human face feature information of the captured image, the method further includes:
determining whether a human body and a human face exist in the snapshot image at the same time;
if yes, after the step of performing feature extraction on the snap-shot image to obtain the human body feature information and the human face feature information of the snap-shot image, the method further comprises the following steps:
and establishing an incidence relation between the human body characteristic information and the human face characteristic information of the snapshot image.
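The association step might be kept as a simple mapping built only from snapshots in which both a human body and a human face were detected; the dict layout below is an assumption for illustration.

```python
def build_association(snapshots):
    # Only snapshots containing both a human body and a human face yield
    # an association record (the "incidence relation" of the claim).
    assoc = {}
    for snap in snapshots:
        if snap.get("body") is not None and snap.get("face") is not None:
            assoc[snap["id"]] = (snap["body"], snap["face"])
    return assoc
```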
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the step of matching the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image includes:
comparing the human body feature information of the image to be identified with the stored human body feature information of at least one comparison image to obtain the similarity between the human body feature information of the image to be identified and the human body feature information of each comparison image;
when at least one matched similarity is greater than or equal to a similarity threshold, acquiring the human face feature information of the at least one matched comparison image according to the association relationship; the matched similarity belongs to the obtained similarities.
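If the stored body features are embedding vectors, the similarity comparison of this implementation manner could look like the sketch below; cosine similarity and the 0.9 threshold are assumptions, not specified by the application.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def matched_comparison_images(query_vec, gallery_vecs, threshold=0.9):
    # Compare against every stored comparison image; keep the indices
    # whose similarity meets the threshold (the "matched similarities").
    return [i for i, g in enumerate(gallery_vecs)
            if cosine_similarity(query_vec, g) >= threshold]
```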
With reference to the first aspect, in a fourth possible implementation manner, the step of acquiring an image to be identified includes:
obtaining video data to be identified, wherein the video data to be identified has the first resolution;
and extracting at least one frame of image from the video data to be identified as the image to be identified.
With reference to the first aspect, in a fifth possible implementation manner, the step of obtaining at least one snapshot image includes:
obtaining video data of a snapshot machine, wherein the video data of the snapshot machine has the second resolution;
and extracting at least one frame of image from the video data of the snapshot machine to serve as the comparison image.
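Both the fourth and fifth implementation manners reduce video to individual frames. A decoder-agnostic sketch (the frame source and the sampling stride are assumptions):

```python
def extract_frames(frames, every_n=25):
    # Take every n-th decoded frame as an image to be recognized (or,
    # for snapshot-machine video, as a comparison image), so feature
    # extraction is not run on all ~25 frames per second.
    return [f for i, f in enumerate(frames) if i % every_n == 0]
```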
In a second aspect, the present application provides a face matching device, including:
an acquisition module, configured to acquire an image to be recognized, wherein the image to be recognized has a first resolution;
the characteristic extraction module is used for extracting the characteristics of the image to be recognized to obtain the human body characteristic information of the image to be recognized;
the matching module is used for matching the human body characteristic information of the image to be identified with the stored human body characteristic information of at least one comparison image, the comparison image has a second resolution, and the first resolution is smaller than the second resolution;
the acquisition module is further configured to acquire the face feature information of at least one matched comparison image when the body feature information of the image to be recognized is matched with the body feature information of at least one comparison image.
With reference to the second aspect, in a first possible implementation manner, the apparatus further includes a storage module;
the acquisition module is also used for acquiring at least one snapshot image sent by the snapshot machine;
the characteristic extraction module is also used for extracting the characteristics of the snapshot image to obtain the human body characteristic information and the human face characteristic information of the snapshot image;
and the storage module is used for storing the snapshot image as the comparison image.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the matching module is specifically configured to compare the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image, and obtain a similarity between the human body feature information of the image to be recognized and the human body feature information of each comparison image; the matching module is further specifically configured to determine whether the face feature information of the at least one matched comparison image exists according to the association relationship when the at least one matched similarity is greater than or equal to a similarity threshold; the matched similarity belongs to the similarity.
In a third aspect, the present application provides a server, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor can execute the machine executable instructions to implement the face matching method of the first aspect.
Embodiments of the application provide a face matching method, device, and server: an image to be recognized with a first resolution is acquired, and feature extraction is performed on it to obtain its human body feature information; the human body feature information of the image to be recognized is then matched against the stored human body feature information of at least one comparison image, where the comparison image has a second resolution and the first resolution is smaller than the second resolution; when the human body feature information of the image to be recognized matches that of at least one comparison image, the human face feature information of the matched comparison image(s) can be acquired, and thus the corresponding face image obtained. A low-resolution image is thereby matched to a high-resolution face image, and ordinary cameras are used effectively.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a face matching system architecture;
fig. 2 is a schematic view of point location design of a common camera and a snapshot machine according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a face matching method according to an embodiment of the present application;
fig. 4 is a schematic flow chart of another face matching method according to an embodiment of the present application;
fig. 5 is a schematic flow chart of another face matching method according to an embodiment of the present application;
fig. 6 is a schematic flow chart of another face matching method according to an embodiment of the present application;
fig. 7 is a block diagram of a structure of a face matching apparatus according to an embodiment of the present application;
fig. 8 is a block diagram of another structure of a face matching apparatus according to an embodiment of the present application.
Reference numerals: 10-server; 11-terminal; 12-camera device; 101-processor; 102-memory; 1021-view library; 1022-cloud storage; 30-face matching apparatus; 301-obtaining module; 302-feature extraction module; 303-matching module; 304-storage module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that the terms "upper", "lower", "inner", "outer", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings or orientations or positional relationships conventionally found in use of products of the application, and are used only for convenience in describing the present application and for simplification of description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
In the description of the present application, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly, e.g., as being either fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case for a person of ordinary skill in the art.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Currently, a conventional face matching system generally includes a camera, a server, and a terminal. Specifically, fig. 1 is a schematic view of a face matching system architecture. Referring to fig. 1, the face matching system includes a server 10, a terminal 11, and a camera device 12; the server 10 further includes a processor 101 and a memory 102, and the memory 102 further includes a view library 1021 and a cloud storage 1022.
The server 10 may be configured to process the video or image captured by the camera 12 through the processor 101 and maintain the image and the feature information corresponding to the image through the memory 102.
The processor 101 may be used to read/write data or programs stored in the memory 102 and perform corresponding functions. Specifically, the processor 101 may be configured to parse the video or images captured by the camera device 12, perform feature extraction on the obtained images, and push the resulting human face feature information and human body feature information to the view library 1021 in the memory 102. The human face feature information may include, but is not limited to, local components such as the eyes, nose, mouth, and chin; the human body feature information may include, but is not limited to, attributes such as gender, age bracket, upper-body color, lower-body color, glasses, hat, backpack, and mask.
The memory 102 is used for storing programs or data. In particular, the view library 1021 in the memory 102 may be used for storing the human body feature information and human face feature information, and the cloud storage 1022 may be used for storing the images corresponding to that feature information. The memory 102 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
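The split between the view library 1021 (structured feature records) and the cloud storage 1022 (raw images keyed by the same id) could be modeled as below; the class interfaces are illustrative assumptions, not the patent's design.

```python
class ViewLibrary:
    # Holds structured body/face feature information (view library 1021).
    def __init__(self):
        self.records = {}

    def put(self, image_id, body=None, face=None):
        self.records[image_id] = {"body": body, "face": face}

class CloudStorage:
    # Holds the image bytes themselves (cloud storage 1022), keyed so a
    # matched feature record can be resolved back to its image.
    def __init__(self):
        self.blobs = {}

    def put(self, image_id, data):
        self.blobs[image_id] = data

    def get(self, image_id):
        return self.blobs[image_id]
```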
The terminal 11 may be a computer, a portable device, or a mobile terminal such as a tablet computer and a mobile phone, and is mainly used for sending an image to be recognized to a server or sending a face matching request to the server.
The camera device 12 may be, but is not limited to, a conventional video camera or a snapshot machine, and is used for capturing video and images. The snapshot machine offers high resolution and clear imaging, and the faces it captures can be used directly against an identity check library to determine identity. However, its coverage is small, and many experienced criminals deliberately avoid such locations. An ordinary camera is mounted on a high pole and has a wider shooting range and a larger scene than a snapshot machine, but its images have lower resolution, making it difficult for police officers to obtain a criminal's identity information from a traditional ordinary camera alone.
In existing face matching technology, the images captured by ordinary cameras have low resolution, so a pedestrian's identity cannot be directly confirmed in a face inspection system; the information collected by ordinary cameras therefore cannot be fully used in the public security inspection system, causing resource waste.
For example, referring to fig. 1, when the camera device 12 is an ordinary camera, it provides low-resolution video or images to the server 10. After the processor 101 in the server 10 obtains the video or images, the face pixels are, because of the low resolution, often smaller than the face feature extraction threshold, so the face feature information of the image cannot be extracted and no matching face information can be obtained.
To solve the above problems, the present application provides a method that makes full use of already-installed ordinary cameras by combining them with the snapshot images of snapshot machines. An ordinary camera captures a low-resolution human body image; its human body feature information is extracted and used to retrieve matching human body feature information from the snapshot images; the relationship between the retrieved human body feature information and the human face feature information then yields the face feature information and hence the face image, so that the information collected by ordinary cameras can be used.
The technical solution provided by the embodiments of the application is explained below. First, to combine images captured by ordinary cameras with those captured by snapshot machines, the application proposes a possible arrangement of the two. Fig. 2 is a schematic diagram of the point-location design of ordinary cameras and snapshot machines; its core is to ensure that any human body within the shooting range of an ordinary camera also passes through the shooting point of some snapshot machine. As shown in fig. 2, the design includes ordinary cameras 1-2, snapshot machines 1-8, and pedestrians 1-5.
Specifically, the snapshot machines 1-8 are deployed along the main pedestrian routes to ensure that faces and human bodies can be captured clearly, while the ordinary cameras 1-2 have larger coverage and shooting ranges than the snapshot machines so that as many people as possible are captured. The resolution of the video or images shot by the ordinary cameras 1-2 is lower than that of the video or images shot by the snapshot machines 1-8.
By designing the shooting point positions, the face images matched with the low-resolution human body images shot by the common cameras 1-2 can be retrieved by utilizing the high-resolution images shot by the snapshot machines 1-8, so that the images collected by the common cameras can be utilized.
For example, referring to fig. 2, take ordinary camera 2: pedestrians 2 and 4 are within its shooting range, and it can capture only low-resolution images of human body 2 and human body 4. When pedestrian 2 passes snapshot machine 4, that machine captures face 2 and human body 2 at high resolution; when pedestrian 4 passes snapshot machine 6, that machine captures face 4 and human body 4 at high resolution. For the footage of ordinary camera 2, when the low-resolution human body 2 needs face matching, retrieving the high-resolution human body 2 from snapshot machine 4 yields the matching face 2; similarly, retrieving the high-resolution human body 4 from snapshot machine 6 yields the face 4 matching the low-resolution human body 4. The matched face image can then be used for further face search or face recognition, so the images collected by ordinary cameras are put to use.
Further, with reference to the scenario of fig. 2, a face matching method provided by the present application is introduced. Optionally, fig. 3 is a schematic flow chart of the face matching method provided by an embodiment of the present application; referring to fig. 3, the method includes:
Step 205, acquiring an image to be recognized.
Specifically, the image to be recognized has a first resolution.
And step 206, performing feature extraction on the image to be recognized to obtain the human body feature information of the image to be recognized.
Specifically, because the resolution of the image to be recognized is low, the face pixels are often smaller than the face feature extraction threshold, and therefore only the human body feature information of the image to be recognized is extracted.
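The decision to extract only body features when the face is too small might be expressed as below; the 40-pixel minimum face width is a made-up threshold for illustration, not a value from the application.

```python
MIN_FACE_PIXELS = 40  # hypothetical extractor minimum, not from the patent

def extract_feature_info(detection):
    # Low-resolution footage: faces below the extraction threshold yield
    # no face feature info, so only body features are kept (step 206).
    feats = {"body": detection["body_feat"]}
    if detection.get("face_width", 0) >= MIN_FACE_PIXELS:
        feats["face"] = detection["face_feat"]
    return feats
```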
And step 207, matching the human body characteristic information of the image to be identified with the stored human body characteristic information of at least one comparison image.
Specifically, the comparison image has a second resolution, and the first resolution is smaller than the second resolution. The comparison image comprises human body characteristic information and human face characteristic information.
And step 208, when the human body feature information of the image to be recognized is matched with the human body feature information of the at least one comparison image, acquiring the human face feature information of the at least one matched comparison image.
For example, referring to fig. 2, all the snapshot images from snapshot machines 1-8 can serve as comparison images, and feature extraction on them yields the human body feature information and face feature information of pedestrians 1-5. For instance, when pedestrian 4 passes snapshot machine 6, that machine captures pedestrian 4, yielding the human body feature information and face feature information of pedestrian 4. If the image to be recognized is an image of pedestrian 4 shot by ordinary camera 2, then after the human body feature information of pedestrian 4 is obtained through feature extraction and compared with the human body feature information of all snapshot images, the human body feature information of pedestrian 4 in the snapshot images is matched, and the corresponding face feature information can then be obtained.
The embodiment of the application provides a face matching method: an image to be recognized with a first resolution is acquired, and feature extraction is performed on it to obtain its human body feature information; the human body feature information of the image to be recognized is then matched against the stored human body feature information of at least one comparison image, where the comparison image has a second resolution and the first resolution is smaller than the second resolution. When the human body feature information of the image to be recognized matches that of at least one comparison image, the human face feature information of the matched comparison image(s) can be acquired, and thus the corresponding face image obtained. A low-resolution image is thereby matched to a high-resolution face image, and ordinary cameras are used effectively.
Optionally, in order to retrieve a face image matched with the image to be recognized, a structured database needs to be established first. The structured database contains the human body feature information and human face feature information of all comparison images, and an association relationship exists between part of the human body feature information and part of the human face feature information. When the human body feature information of the image to be recognized is compared against the database and the human body feature information of a comparison image is matched, the face feature information can then be obtained through the association relationship.
Therefore, an embodiment of the present application provides a possible implementation manner for establishing a structured database, specifically, on the basis of fig. 3, fig. 4 is a schematic flow chart of another face matching method provided in the embodiment of the present application, see fig. 4, where before obtaining an image to be recognized, the method includes:
step 200, at least one snapshot image is obtained.
Specifically, the snapshot images come from a snapshot machine. Because the capture range of a snapshot machine is small, an image containing a complete human body often has a very low face resolution, while an image with a high face resolution often contains an incomplete human body. The obtained snapshot images therefore fall into three cases: clear face with incomplete body, complete body with unclear face, and clear face with complete body.
For example, referring to fig. 2, when pedestrian 4 approaches snapshot machine 6 from a distance, the first image captured shows the complete human body 4 but a relatively low-resolution face 4; as pedestrian 4 gets closer, an image with both a clear face and a complete body is captured; and when pedestrian 4 passes right by snapshot machine 6, an image with a clear face but an incomplete body is captured.
Step 202, performing feature extraction on the snapshot image to obtain human body feature information and human face feature information of the snapshot image.
Specifically, because the captured images fall into the three situations above, only part of the obtained human body feature information and part of the obtained human face feature information come from the same captured image, namely an image in which the face is clear and the human body is complete.
And step 204, storing the snapshot image as the comparison image.
According to the above description, a structured database can be established using the human body feature information and the human face feature information of the snapshot images. When a face image matching the image to be recognized needs to be retrieved, the human body feature information of the image to be recognized can be compared with the human body feature information in the structured database so as to retrieve matching human body feature information, and the human face feature information can then be obtained through the matched human body feature information.
According to the description of the foregoing embodiment, not every human body picture has a corresponding human face, so a match does not always yield matched human face feature information. Only part of the human body feature information and part of the human face feature information in the structured database come from the same snapshot image; during human body feature retrieval, the corresponding human face feature information, and hence the matching face image, can be obtained only when that part of the human body feature information is retrieved.
Therefore, in order to accurately implement face matching, the human body feature information and the human face feature information need to be associated. For this purpose a possible implementation manner is provided. On the basis of fig. 4, fig. 5 is a schematic flow diagram of another face matching method provided in the embodiment of the present application; that is, before performing feature extraction on a snapshot image to obtain the human body feature information and the human face feature information of the snapshot image, the method further includes:
step 201, determining whether a human body and a human face exist in the snapshot image at the same time.
If both exist, the method further includes the following step after the step of performing feature extraction on the snapshot image to obtain the human body feature information and the human face feature information of the snapshot image:
step 203, establishing an association relation between the human body feature information and the human face feature information of the snapshot image.
Specifically, in order to obtain, through the human body feature information of a snapshot image, the human face feature information related to it, the human body feature information and the human face feature information of the snapshot image can be associated by assigning them a unique association identifier. After matched human body feature information is retrieved, it can be determined whether that human body feature information carries the association identifier; if so, the human face feature information sharing the same association can be obtained, and the corresponding face image can then be accessed through the association identifier.
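The association step can be sketched as follows. The record layout (a dictionary with `body`, `face`, and `assoc_id` fields) and the use of a random UUID as the unique association identifier are assumptions for illustration; the patent only requires that the identifier be unique and shared by the body and face features of one snapshot.

```python
import uuid


def ingest_snapshot(database, body_feature, face_feature=None):
    """Store one snapshot's features in the structured database.

    When both a human body and a human face were extracted from the same
    snapshot, they receive a shared, unique association identifier; a
    body-only snapshot is stored without one.
    """
    assoc_id = uuid.uuid4().hex if face_feature is not None else None
    database.append({
        "body": body_feature,   # human body feature vector
        "face": face_feature,   # human face feature vector, or None
        "assoc_id": assoc_id,   # shared identifier linking body and face
    })
    return assoc_id
```

A later retrieval that matches the body feature of such a record can follow `assoc_id` to the face feature, exactly as step 203 requires.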
In other embodiments, the feature information of a newly captured human body is extracted, and if the matching degree between this feature information and human body feature information for which an association between human body and human face has already been established is high, an association relation between the human body feature information of the captured image and that human face feature information is established, so that a plurality of human bodies can be associated with the same human face in the structured database.
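One way to realize this many-bodies-to-one-face linking is sketched below. The similarity function and the threshold are assumptions (the patent only says the matching degree must be "high"), and the dictionary record layout is invented for the example.

```python
def link_new_body(database, new_body, similarity_fn, threshold=0.85):
    """Attach a newly captured human body to an existing body/face
    association when its body features match an already-associated body
    closely enough, so that several bodies share one face.
    """
    best_sim, best_id = 0.0, None
    for record in database:
        if record.get("assoc_id") is None:
            continue  # this body was never linked to a face
        sim = similarity_fn(new_body, record["body"])
        if sim >= threshold and sim > best_sim:
            best_sim, best_id = sim, record["assoc_id"]
    # store the new body, reusing the best association identifier (if any)
    database.append({"body": new_body, "face": None, "assoc_id": best_id})
    return best_id
```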
Optionally, in order to describe in detail the process of obtaining the face feature information through human body feature comparison, a possible implementation manner is given, on the basis of fig. 5, fig. 6 is a schematic flow diagram of another face matching method provided in the embodiment of the present application, see fig. 6, where a possible implementation manner of step 207 is:
step 207-1, comparing the human body characteristic information of the image to be identified with the stored human body characteristic information of at least one comparison image to obtain the similarity between the human body characteristic information of the image to be identified and the human body characteristic information of each comparison image.
Specifically, the similarity between the human body feature information and the human body feature information of each comparison image may be obtained by a similarity algorithm, for example, but not limited to, the Euclidean distance similarity algorithm and the cosine similarity algorithm. A greater similarity value between the two pieces of human body feature information indicates a better match between them.
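The two similarity algorithms named above can be sketched as follows; mapping the Euclidean distance into (0, 1] via 1/(1+d) is one common convention assumed here, not a formula given in the patent.

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction).

    Assumes non-zero vectors, as is the case for real feature embeddings.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def euclidean_similarity(a, b):
    """Turn Euclidean distance into a similarity: larger means more alike."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)
```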
And step 207-2, when the at least one matched similarity is greater than or equal to the similarity threshold, acquiring the face feature information of at least one comparison image according to the association relationship.
Specifically, the matched similarity is one of the computed similarities, and the similarity threshold may be set according to an empirical value or obtained through experiment. For example, when the similarity threshold is 85%, that is, when the similarity between the human body feature information of the image to be recognized and the human body feature information of a comparison image is greater than or equal to 85%, it is determined that the two are matched, and the corresponding human face feature information can then be obtained through the matched human body feature information. If the snapshot machine has captured clear, complete body images of the person with a frontal face, the corresponding at least one face image is determined through the association, and the top N face images with high face image quality whose corresponding human body similarity exceeds 85% can be selected.
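Step 207 can then be sketched end to end: compare the query body feature against every stored record, keep records above the threshold, and return the top-N associated faces. The per-image `quality` score and the ranking of hits by quality before similarity are assumptions for illustration; the patent only states that the top N high-quality faces above the 85% threshold can be selected.

```python
def retrieve_faces(query_body, records, similarity_fn,
                   threshold=0.85, top_n=3):
    """Return the face features of records whose body similarity clears
    the threshold, best first.

    `records` are (body_feature, face_feature, quality) triples; a record
    whose face_feature is None was never associated with a face and is
    skipped.
    """
    hits = []
    for body, face, quality in records:
        if face is None:
            continue  # body with no associated face cannot yield a match
        sim = similarity_fn(query_body, body)
        if sim >= threshold:
            hits.append((quality, sim, face))
    hits.sort(reverse=True)  # rank by quality, then by similarity
    return [face for _, _, face in hits[:top_n]]
```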
For example, referring to fig. 2, assume that face matching needs to be performed on the image of the human body 4 shot by the general camera 2, that the snapshot machine 6 has captured a complete human body image with a clear face of the pedestrian 4, and that the human body feature information and the human face feature information of that snapshot image have an association relationship with a unique association identifier. When human body feature retrieval is performed, the similarity between the human body feature information of the image of the human body 4 and the human body feature information from the snapshot image of the snapshot machine 6 is greater than the similarity threshold, so the complete human body image with a clear face captured by the snapshot machine 6 can be considered to match the image of the human body 4 shot by the general camera 2.
Optionally, the manner of obtaining the image to be recognized may be by obtaining video data to be recognized, where the video data to be recognized has the first resolution; and extracting at least one frame of image from the video data to be identified as an image to be identified.
Optionally, the at least one snapshot image may be obtained by obtaining video data of a snapshot machine, the video data of the snapshot machine having the second resolution; and extracting at least one frame of image from the video data of the snapshot machine to serve as a comparison image.
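Both optional steps above extract frames from video data. A minimal frame-sampling sketch is given below; with a real camera or snapshot machine the frames would come from a decoder such as OpenCV's `cv2.VideoCapture`, but any iterable of decoded frames works here, and `every_n=25` (one frame per second at 25 fps) is an assumed sampling rate, not a value specified in the patent.

```python
def sample_frames(frames, every_n=25):
    """Yield every `every_n`-th frame of a decoded video stream.

    The yielded frames serve as the images to be recognized (first
    resolution) or as comparison images (second resolution).
    """
    for index, frame in enumerate(frames):
        if index % every_n == 0:
            yield frame
```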
In order to implement the steps of the foregoing embodiment to achieve the corresponding technical effect, an implementation manner of a face matching device is provided below, and optionally, fig. 7 is a structural block diagram of a face matching device provided in the embodiment of the present application, and referring to fig. 7, the face matching device 30 includes: an acquisition module 301, a feature extraction module 302 and a matching module 303.
An obtaining module 301, configured to obtain an image to be identified, where the image to be identified has a first resolution;
the feature extraction module 302 is configured to perform feature extraction on the image to be recognized, so as to obtain human body feature information of the image to be recognized.
The matching module 303 is configured to match the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image, where the comparison image has a second resolution, and the first resolution is smaller than the second resolution.
The obtaining module 301 is further configured to obtain face feature information of at least one matched comparison image when the human body feature information of the image to be recognized is matched with the human body feature information of the at least one comparison image.
Specifically, the obtaining module 301, the feature extracting module 302, and the matching module 303 may perform step 205, step 206, step 207, and step 208 to achieve the corresponding technical effect.
The face matching device comprises an acquisition module, a feature extraction module and a matching module. The acquisition module is used for acquiring an image to be recognized with a first resolution, and the feature extraction module is used for performing feature extraction on the image to be recognized to obtain the human body feature information of the image to be recognized. The matching module is used for matching the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image, where the comparison image has a second resolution and the first resolution is smaller than the second resolution. When the human body feature information of the image to be recognized matches the human body feature information of at least one comparison image, the acquisition module can acquire the human face feature information of the at least one matched comparison image, so that the corresponding face image can be obtained. The purpose of matching a low-resolution image to a high-resolution face image is thereby achieved, and the common camera is effectively utilized.
Optionally, in order to implement the function of storing and comparing images, a possible implementation is given on the basis of fig. 7, referring to fig. 8, fig. 8 is a block diagram of a structure of another face matching apparatus provided in the embodiment of the present application, and specifically, the face matching apparatus 30 further includes a storage module 304;
the acquisition module 301 is further configured to acquire at least one snapshot image sent by the snapshot machine;
the feature extraction module 302 is further configured to perform feature extraction on the captured image to obtain human body feature information and human face feature information of the captured image.
The storage module 304 is configured to store the captured image as a comparison image.
In particular, the storage module 304 may also be used to store human body feature information and human face feature information of the captured image.
Specifically, the obtaining module 301 may perform step 200, the feature extraction module 302 may perform step 202, and the storage module 304 may perform step 204 to achieve the corresponding technical effects.
Optionally, the matching module 303 is specifically configured to compare the human body feature information of the image to be identified with the stored human body feature information of at least one comparison image, so as to obtain a similarity between the human body feature information of the image to be identified and the human body feature information of each comparison image; the matching module is further specifically configured to determine whether there is face feature information of at least one matched comparison image according to the association relationship when the at least one matched similarity is greater than or equal to the similarity threshold; the matched similarity belongs to the similarity.
In particular, the matching module 303 may perform steps 207-1, 207-2 to achieve a corresponding technical effect.
As described in the above embodiments, the face matching device provided in the embodiment of the present application is mainly used for executing the face matching method provided above, and can achieve the same technical effects as that method.
Alternatively, the modules may be stored in the form of software or firmware in the memory 102 shown in fig. 1 or solidified in an Operating System (OS) of the server 10, and may be executed by the processor 101 in fig. 1. Meanwhile, data, codes of programs, and the like required to execute the above modules may be stored in the memory 102.
The embodiment of the present application further provides a face matching server, which is a server 10 shown in fig. 1 and can be used to implement possible implementation manners of the foregoing embodiments. In the several embodiments provided in this application, it should be understood that the disclosed structures and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of structures, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A face matching method is characterized by comprising the following steps:
acquiring an image to be identified, wherein the image to be identified has a first resolution;
extracting the characteristics of the image to be recognized to obtain the human body characteristic information of the image to be recognized;
matching the human body feature information of the image to be identified with the stored human body feature information of at least one comparison image, wherein the comparison image has a second resolution, and the first resolution is smaller than the second resolution;
and when the human body characteristic information of the image to be identified is matched with the human body characteristic information of at least one comparison image, acquiring the human face characteristic information of at least one matched comparison image.
2. The face matching method according to claim 1, further comprising:
obtaining at least one snap-shot image; the at least one snapshot image is from a snapshot machine;
carrying out feature extraction on the snap-shot image to obtain human body feature information and human face feature information of the snap-shot image;
and storing the snapshot image as the comparison image.
3. The face matching method according to claim 2, wherein before the step of extracting the features of the captured image to obtain the human body feature information and the face feature information of the captured image, the method further comprises:
determining whether a human body and a human face exist in the snapshot image at the same time;
if yes, after the step of performing feature extraction on the snap-shot image to obtain the human body feature information and the human face feature information of the snap-shot image, the method further comprises the following steps:
and establishing an incidence relation between the human body characteristic information and the human face characteristic information of the snapshot image.
4. The face matching method according to claim 3, wherein the step of matching the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image comprises:
comparing the human body feature information of the image to be identified with the stored human body feature information of at least one comparison image to obtain the similarity between the human body feature information of the image to be identified and the human body feature information of each comparison image;
when the at least one matched similarity is larger than or equal to a similarity threshold, determining to acquire the face feature information of at least one matched comparison image according to the incidence relation; the matched similarity belongs to the similarity.
5. The face matching method according to claim 1, wherein the step of obtaining the image to be recognized comprises:
obtaining video data to be identified, wherein the video data to be identified has the first resolution;
and extracting at least one frame of image from the video data to be identified as the image to be identified.
6. The face matching method according to claim 2, wherein the step of obtaining at least one snapshot comprises:
obtaining video data of a snapshot machine, wherein the video data of the snapshot machine has the second resolution;
and extracting at least one frame of image from the video data of the snapshot machine to serve as the comparison image.
7. A face matching apparatus, comprising:
the device comprises an acquisition module, a recognition module and a processing module, wherein the acquisition module is used for acquiring an image to be recognized, and the image to be recognized has a first resolution;
the characteristic extraction module is used for extracting the characteristics of the image to be recognized to obtain the human body characteristic information of the image to be recognized;
the matching module is used for matching the human body characteristic information of the image to be identified with the stored human body characteristic information of at least one comparison image, the comparison image has a second resolution, and the first resolution is smaller than the second resolution;
the acquisition module is further configured to acquire the face feature information of at least one matched comparison image when the body feature information of the image to be recognized is matched with the body feature information of at least one comparison image.
8. The face matching device according to claim 7, further comprising a storage module;
the acquisition module is also used for acquiring at least one snapshot image sent by the snapshot machine;
the characteristic extraction module is also used for extracting the characteristics of the snapshot image to obtain the human body characteristic information and the human face characteristic information of the snapshot image;
and the storage module is used for storing the snapshot image as the comparison image.
9. The face matching device according to claim 8, wherein the matching module is specifically configured to compare the human body feature information of the image to be recognized with the stored human body feature information of at least one comparison image, and obtain a similarity between the human body feature information of the image to be recognized and the human body feature information of each comparison image; the matching module is further specifically configured to determine whether the face feature information of the at least one matched comparison image exists according to the association relationship when the at least one matched similarity is greater than or equal to a similarity threshold; the matched similarity belongs to the similarity.
10. A server comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the face matching method of any one of claims 1 to 6.
CN201910940622.5A 2019-09-30 2019-09-30 Face matching method and device and server Pending CN110705469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910940622.5A CN110705469A (en) 2019-09-30 2019-09-30 Face matching method and device and server


Publications (1)

Publication Number Publication Date
CN110705469A true CN110705469A (en) 2020-01-17

Family

ID=69197915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910940622.5A Pending CN110705469A (en) 2019-09-30 2019-09-30 Face matching method and device and server

Country Status (1)

Country Link
CN (1) CN110705469A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680131A (en) * 2015-01-29 2015-06-03 深圳云天励飞技术有限公司 Identity authentication method based on identity certificate information and human face multi-feature recognition
CN106845432A (en) * 2017-02-07 2017-06-13 深圳市深网视界科技有限公司 The method and apparatus that a kind of face is detected jointly with human body
CN107292240A (en) * 2017-05-24 2017-10-24 深圳市深网视界科技有限公司 It is a kind of that people's method and system are looked for based on face and human bioequivalence
CN108319930A (en) * 2018-03-09 2018-07-24 百度在线网络技术(北京)有限公司 Identity identifying method, system, terminal and computer readable storage medium
CN108921008A (en) * 2018-05-14 2018-11-30 深圳市商汤科技有限公司 Portrait identification method, device and electronic equipment
CN109117803A (en) * 2018-08-21 2019-01-01 腾讯科技(深圳)有限公司 Clustering method, device, server and the storage medium of facial image
US20190073520A1 (en) * 2017-09-01 2019-03-07 Percipient.ai Inc. Identification of individuals in a digital file using media analysis techniques
CN109766755A (en) * 2018-12-06 2019-05-17 深圳市天彦通信股份有限公司 Face identification method and Related product
CN109977832A (en) * 2019-03-19 2019-07-05 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium


Non-Patent Citations (1)

Title
张枝军: "《图形与图像处理技术》", 31 August 2018 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN111476820A (en) * 2020-04-01 2020-07-31 深圳力维智联技术有限公司 Method and device for positioning tracked target
CN111476820B (en) * 2020-04-01 2023-11-03 深圳力维智联技术有限公司 Method and device for positioning tracked target
CN112541384A (en) * 2020-07-30 2021-03-23 深圳市商汤科技有限公司 Object searching method and device, electronic equipment and storage medium
WO2022021711A1 (en) * 2020-07-30 2022-02-03 深圳市商汤科技有限公司 Surveillance method and apparatus, electronic device, and storage medium
CN112883214A (en) * 2021-01-07 2021-06-01 浙江大华技术股份有限公司 Feature retrieval method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN109117714B (en) Method, device and system for identifying fellow persons and computer storage medium
CN107292240B (en) Person finding method and system based on face and body recognition
CN109166261B (en) Image processing method, device and equipment based on image recognition and storage medium
Kumar et al. The p-destre: A fully annotated dataset for pedestrian detection, tracking, and short/long-term re-identification from aerial devices
US11017215B2 (en) Two-stage person searching method combining face and appearance features
CN110705469A (en) Face matching method and device and server
CN108875476B (en) Automatic near-infrared face registration and recognition method, device and system and storage medium
CN108875484B (en) Face unlocking method, device and system for mobile terminal and storage medium
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
JP2022518459A (en) Information processing methods and devices, storage media
US20200258236A1 (en) Person segmentations for background replacements
CN111931548B (en) Face recognition system, method for establishing face recognition data and face recognition method
US11657623B2 (en) Traffic information providing method and device, and computer program stored in medium in order to execute method
CN113269091A (en) Personnel trajectory analysis method, equipment and medium for intelligent park
CN111860346A (en) Dynamic gesture recognition method and device, electronic equipment and storage medium
CN111860313A (en) Information query method and device based on face recognition, computer equipment and medium
CN105590113A (en) Information-acquiring method based on law enforcement recorder
Gunawan et al. Design of automatic number plate recognition on android smartphone platform
CN111429476A (en) Method and device for determining action track of target person
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
US9286707B1 (en) Removing transient objects to synthesize an unobstructed image
CN105989063B (en) Video retrieval method and device
CN114863364B (en) Security detection method and system based on intelligent video monitoring
CN108875472B (en) Image acquisition device and face identity verification method based on image acquisition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200117