CN113255399A - Target matching method and system, server, cloud, storage medium and equipment - Google Patents


Info

Publication number
CN113255399A
Authority
CN
China
Prior art keywords
reference face
image
detected
matching
face set
Prior art date
Legal status
Pending
Application number
CN202010084446.2A
Other languages
Chinese (zh)
Inventor
费优亮
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN202010084446.2A priority Critical patent/CN113255399A/en
Publication of CN113255399A publication Critical patent/CN113255399A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Abstract

The embodiments of the disclosure disclose a target matching method and system, a server, a cloud, a storage medium, and a device. The method includes: receiving a first reference face set obtained by a server splitting a second reference face set, where the reference faces in the first reference face set are a subset of those in the second reference face set; determining an image to be detected; matching the image to be detected against the reference faces in the first reference face set to obtain a first matching result; and determining, from the first matching result, the reference face that matches the image to be detected. By combining the electronic device and the server to perform two-stage retrieval, and by splitting the second reference face set stored on the server into first reference face sets that occupy less space and sending them to electronic devices, the embodiments greatly reduce the deployment cost of electronic devices in a given scenario and improve retrieval efficiency.

Description

Target matching method and system, server, cloud, storage medium and equipment
Technical Field
The present disclosure relates to computer vision technologies, and in particular, to a target matching method and system, a server, a cloud, a storage medium, and a device.
Background
Object recognition refers to the process by which a particular object (or type of object) is distinguished from other objects (or other types of objects). It covers both telling apart two very similar objects and telling apart one type of object from another type. Mainstream target recognition systems mainly adopt one of two schemes. In the first, a front-end device tracks and captures the target in a target area and uploads the captured pictures to a server for recognition and retrieval. In the second, the front-end device itself has recognition capability and performs recognition and retrieval directly on the device.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. Embodiments of the disclosure provide a target matching method and system, a server, a cloud, a storage medium, and a device.
According to an aspect of the embodiments of the present disclosure, there is provided a target matching method applied to an electronic device, including:
receiving a first reference face set obtained by splitting a second reference face set by a server; wherein the reference faces included in the first reference face set are part of the reference faces in the second reference face set;
determining an image to be detected;
matching the image to be detected with the reference face in the first reference face set to obtain a first matching result;
and determining a reference face matched with the image to be detected according to the first matching result.
According to another aspect of the embodiments of the present disclosure, there is provided a target matching method applied to a server, including:
splitting the second reference face set to obtain at least two first reference face sets; wherein the reference face included in the first reference face set is a part of the reference face in the second reference face set;
respectively sending each first reference face set to an electronic device;
responding to the situation that no reference face matched with the image to be detected exists in the first reference face set in the electronic equipment, and receiving the image to be detected sent by the electronic equipment;
matching the image to be detected with the reference face in the second reference face set to obtain a second matching result;
and determining a reference face matched with the image to be detected according to the second matching result.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
the sub-data receiving module is used for receiving a first reference face set obtained by the server splitting the second reference face set; wherein the reference faces included in the first reference face set are part of the reference faces in the second reference face set;
the image determining module is used for determining an image to be detected;
the first matching module is used for matching the image to be detected determined by the image determining module with the reference faces in the first reference face set received by the sub-data receiving module, to obtain a first matching result;
and the face determining module is used for determining a reference face matched with the image to be detected according to the first matching result determined by the first matching module.
According to still another aspect of the embodiments of the present disclosure, there is provided a server including:
the data splitting module is used for splitting the second reference face set to obtain at least two first reference face sets; wherein the reference face included in the first reference face set is a part of the reference face in the second reference face set;
the data set distribution module is used for respectively sending each first reference face set obtained by the data splitting module to an electronic device;
the image receiving module is used for responding to the situation that no reference face matched with the image to be detected exists in a first reference face set in the electronic equipment, and receiving the image to be detected sent by the electronic equipment;
the second matching module is used for matching the image to be detected received by the image receiving module with the reference face in the second reference face set to obtain a second matching result;
and the face determining module is used for determining a reference face matched with the image to be detected based on a second matching result determined by the second matching module.
According to still another aspect of the embodiments of the present disclosure, there is provided a target matching system including: a server as described in the above embodiments and at least two electronic devices as described in the above embodiments.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the object matching method of the above-described embodiments.
According to still another aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the target matching method according to the above embodiment.
With the target matching method and system, server, cloud, storage medium, and device provided by the embodiments of the disclosure, a first reference face set obtained by the server splitting a second reference face set is received, where the reference faces in the first reference face set are a subset of those in the second reference face set; an image to be detected is determined; the image to be detected is matched against the reference faces in the first reference face set to obtain a first matching result; and the reference face matched with the image to be detected is determined from the first matching result. By combining the electronic device and the server to perform two-stage retrieval, and by splitting the second reference face set stored on the server into first reference face sets that occupy less space and sending them to electronic devices, the embodiments greatly reduce the deployment cost of electronic devices in a given scenario and improve retrieval efficiency.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a block diagram of a target matching system according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of a target matching method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a target matching method according to another exemplary embodiment of the present disclosure.
Fig. 4 is a schematic flow chart of step 303 in the embodiment shown in fig. 3 of the present disclosure.
Fig. 5 is a flowchart illustrating a target matching method according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a schematic flow chart of step 504 in the embodiment shown in fig. 5 according to the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device according to another exemplary embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a server according to an exemplary embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of a server according to another exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram of an electronic device provided in another exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" in the present disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with such electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the process of implementing the present disclosure, the inventor found that, in the prior art, target matching is implemented on either the device side or the server side, but these solutions have at least the following problems. On the device side, limited by storage size and computing power, the face library that can be stored on the device is limited in size, so the scheme cannot serve scenarios with a large face library. On the server side, as the number of front-end devices increases, the load borne by the server grows and continuous capacity expansion is required, so the deployment cost is too high.
Exemplary System
The target matching system provided by the embodiments of the disclosure comprises primary retrieval on the electronic device side and secondary retrieval on the server side. The electronic device is a device with face recognition and retrieval capability, such as a smart face recognition camera, a face recognition panel machine, or a vehicle-mounted face recognition camera. A face image library is imported into the electronic device through offline registration or server-side registration. The electronic device captures faces in the target area and compares them with the face library stored on the device: if a captured face image is matched, the recognition result is pushed to the server; if not, the captured face image is packaged and uploaded to the server for secondary retrieval. The server-side face recognition system receives the face snapshot reported by the device, extracts face features from the snapshot with the server's large model, and compares them against the server-side face library to obtain a recognition result. The face library stored on the front-end device is a subset of the server-side face library, and the recognition threshold set on the device side is strict, which ensures the accuracy of device-side recognition results. Placing recognition capability at the front makes full use of device-side computing power, reduces the service load on the server, and lowers server deployment cost. Meanwhile, the large face library can be split and the resulting small face libraries synchronized to the device side; retrieval then searches the small library on the device rather than the large library on the server, which improves retrieval efficiency.
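As an illustration of the two-stage retrieval described above, the following is a minimal sketch in Python; the cosine-distance metric, the threshold values, and all function names are assumptions made for this example, not the implementation disclosed in this application:

```python
import numpy as np

DEVICE_THRESHOLD = 0.35   # strict device-side threshold (assumed value)
SERVER_THRESHOLD = 0.45   # looser server-side threshold (assumed value)

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, gallery):
    """Return (identity, distance) of the closest record in a face set."""
    return min(((k, cosine_distance(query, v)) for k, v in gallery.items()),
               key=lambda kv: kv[1])

def two_tier_match(query, device_set, server_set):
    # Primary retrieval against the small face set synced to the device.
    ident, dist = best_match(query, device_set)
    if dist <= DEVICE_THRESHOLD:
        return ident, "matched on device"
    # Miss on the device: fall back to secondary retrieval on the server.
    ident, dist = best_match(query, server_set)
    if dist <= SERVER_THRESHOLD:
        return ident, "matched on server"
    return None, "no match"
```

The stricter device-side threshold reflects the design choice above: device-side hits must be high-confidence, while uncertain snapshots fall through to the larger server-side library.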
Fig. 1 is a block diagram of a target matching system according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the system includes at least a plurality of electronic devices 101 and a server 103, and the process of implementing target matching by the target matching system includes:
the electronic device 101 can detect, snapshot and identify the face in the target area and perform primary retrieval with the face set stored in the electronic device through the embedded AI chip and the integrated deep learning identification model.
If no record is matched in the primary retrieval on the electronic device, the face structured data 102 computed on the device is packaged and sent to the server. The face structured data includes, but is not limited to: face attribute information (gender, age, etc.), face key point information, and face feature information.
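A hedged sketch of how such a face structured data package might be represented; the field names and JSON encoding are illustrative assumptions based on the attributes listed above, not a format defined by this application:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FaceStructuredData:
    gender: str            # face attribute information
    age: int
    keypoints: list        # face key point information, [(x, y), ...]
    feature: list          # face feature vector from the device-side model
    model_version: str     # lets the server pick a processing flow
    snapshot_url: str      # where the raw snapshot is stored

def pack_for_upload(record: FaceStructuredData) -> bytes:
    # Serialize the structured data for the upload to the server.
    return json.dumps(asdict(record)).encode("utf-8")
```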
The server 103 selects the corresponding processing flow via the search-library name and the model version information reported by the electronic device. The model versions corresponding to the face features on the electronic device and the server may or may not be consistent; for example, the device side may use a low-compute model while the server uses a high-compute model. If the model version on the device side is consistent with that in the server, the face features extracted on the device can be used directly for secondary retrieval against the server's face set; if not, the server first recognizes and extracts the face feature information itself, then performs secondary retrieval using the features it extracted.
The server 103 performs recognition and retrieval through a cluster formed of recognition services and retrieval services: the recognition service extracts information such as face attributes, face key points, and face features; the retrieval service uses the face features to match the most similar records from the face library.
Fig. 2 is a schematic flowchart of a target matching method according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the object matching method includes the steps of:
201: The electronic device tracks and captures faces in the target area and selects one captured picture, or a group of captured pictures, of good quality.
202: and the electronic equipment terminal reports the captured face image to a storage system of the electronic equipment terminal.
203: Using a deep learning algorithm, the recognition model on the electronic device extracts the attributes and features of the face in the snapshot. The attributes may include gender, age, key points, and the like; the features are represented by a high-dimensional vector of the face and are used for subsequent retrieval and comparison. Constrained by device-side computing power, the recognition model on the device may be a small, low-compute model.
204: Perform a primary search in the first face set stored on the electronic device using the face feature information extracted in step 203. The search process may include: calculating the distance (such as cosine distance or Euclidean distance) between the feature to be retrieved and the feature of each face record in the face set, and sorting in ascending order of distance to find the nearest face record (a face record includes face features or a face image); see the sketch after step 213.
205: Judge whether the captured face is matched in the first face set. An optional criterion: compare the minimum distance obtained in step 204 with a set distance threshold; if the minimum distance is less than the threshold, a face record in the face set is considered matched, otherwise no record is matched. If a face record is matched in the first face set on the device, execute step 213; if not, package the captured face information and send it to the server for secondary recognition and retrieval.
206: and the electronic equipment end packs the calculated face structural data and sends the packed face structural data to the cloud.
207: and the server receives the identification request reported by the electronic equipment terminal, and extracts the snapshot face information and the model version information of the electronic equipment terminal.
208: and judging whether the model version of the electronic equipment side is consistent with the model version corresponding to the face set in the server, if so, turning to a step 209, and if not, turning to a step 210.
209: extracting the face feature information stored in the structured data reported by the electronic device, and going to step 212.
210: and extracting the snapshot URL information stored in the structured data reported by the electronic device, downloading the corresponding snapshot from the storage system, and going to step 211.
211: The server extracts face attributes, face key points, and face features from the captured face picture using a deep learning algorithm to obtain face feature information. The recognition model integrated in the server's recognition module may be the same as or different from that on the device side; since the cloud server has more computing power, it can integrate a higher-compute recognition model and improve recognition accuracy. Go to step 212.
212: Based on the face feature information, find the face record in the second face set in the server with the smallest distance to the feature to be retrieved, and compare that smallest distance with a distance threshold. If it is smaller than the threshold, the record is considered matched; execute step 213. Otherwise no record is matched and the process ends.
213: and displaying the matching result. Optionally, the matching result and the matched face record are displayed.
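The following sketch illustrates the primary search of steps 204 and 205 referenced above: distances from the query feature to every record in the device-side face set are computed, sorted in ascending order, and the nearest record is tested against the threshold. The helper names and the threshold value are assumptions for illustration:

```python
import numpy as np

def primary_search(query: np.ndarray, gallery: np.ndarray, threshold: float = 0.35):
    """gallery: (N, D) matrix of reference features, one row per face record."""
    # L2-normalise so that Euclidean and cosine orderings coincide.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    dists = np.linalg.norm(g - q, axis=1)      # distance per record (step 204)
    order = np.argsort(dists)                  # ascending, nearest record first
    nearest, d_min = int(order[0]), float(dists[order[0]])
    # Step 205: match only if the minimum distance is within the threshold;
    # otherwise the snapshot is packaged and sent to the server (step 206).
    return (nearest, d_min) if d_min <= threshold else None
```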
Exemplary method
Fig. 3 is a flowchart illustrating a target matching method according to another exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 3, and includes the following steps:
step 301, receiving a first reference face set obtained by splitting a second reference face set by a server.
The reference faces included in the first reference face set are part of the reference faces in the second reference face set.
In this embodiment, because the storage space of the electronic device is limited, the first reference face set stored on the electronic device includes only part of the second reference face set. The second reference face set stored on the server is split into a plurality of first reference face sets, which are allocated to a plurality of different electronic devices. For example, the second reference face set includes a plurality of face features, and a corresponding first reference face set includes a subset of those face features.
Step 302, determining an image to be detected.
Optionally, the process of determining the image to be detected may refer to step 201 in the embodiment shown in fig. 2: capture and screen faces to obtain a face image of better quality as the image to be detected. Quality criteria may include the face angle, the proportion of the image occupied by the face, and the image clarity. Correspondingly, a face image of better quality is one whose face is more amenable to processing such as detection or recognition, for example an image with a smaller face angle, a larger face proportion, and higher clarity; the screening process may simply delete face images of poorer quality.
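A minimal sketch of the quality screening just described, assuming a weighted score over face angle, face proportion, and image clarity; the weights and the scoring formula are illustrative assumptions, not values from this application:

```python
def quality_score(angle_deg: float, area_ratio: float, sharpness: float) -> float:
    # Smaller pose angle, larger face proportion, higher sharpness score better.
    angle_term = max(0.0, 1.0 - abs(angle_deg) / 90.0)
    return 0.4 * angle_term + 0.3 * area_ratio + 0.3 * min(sharpness, 1.0)

def pick_best_snapshot(candidates):
    """candidates: iterable of (image, angle_deg, area_ratio, sharpness) tuples."""
    best = max(candidates, key=lambda c: quality_score(c[1], c[2], c[3]))
    return best[0]    # becomes the image to be detected
```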
Step 303, matching the image to be detected with the reference face in the first reference face set to obtain a first matching result.
Alternatively, the process of face matching may refer to step 204 in the embodiment shown in fig. 2, and determine whether there is a reference face matching the image to be detected by calculating the distance between the image to be detected and each reference face in the first reference face set.
And step 304, determining a reference face matched with the image to be detected according to the first matching result.
Optionally, the first matching result may fall into two cases. As provided in step 205 of the embodiment shown in fig. 2, the obtained minimum distance is compared with a set distance threshold: if the minimum distance is smaller than the threshold, the record is considered matched; otherwise no record is matched. When nothing is matched in the first reference face set, the image to be detected is transmitted to the server for matching, so as to obtain a matched reference face.
The target matching method provided by the above embodiment of the disclosure receives a first reference face set obtained by the server splitting a second reference face set, where the reference faces in the first reference face set are a subset of those in the second reference face set; determines an image to be detected; matches the image to be detected against the reference faces in the first reference face set to obtain a first matching result; and determines the reference face matched with the image to be detected from the first matching result. By combining the electronic device and the server to perform two-stage retrieval, and by splitting the second reference face set stored on the server into first reference face sets that occupy less space and sending them to electronic devices, the embodiment greatly reduces the deployment cost of electronic devices in a given scenario and improves retrieval efficiency.
In some alternative embodiments, step 304 includes:
in response to the first matching result indicating that a reference face matched with the image to be detected exists in the first reference face set, taking the matched reference face as a reference face matched with the image to be detected;
in response to the first matching result indicating that no reference face matching the image to be detected exists in the first reference face set, sending the image to be detected to the server and receiving a second matching result obtained by the server matching the image to be detected against the reference faces in the second reference face set; and determining the reference face matched with the image to be detected according to the second matching result.
For example, the process of this step may refer to steps 205 and 212 in the embodiment provided in fig. 2. The criterion for judging whether a reference face matching the image to be detected exists in the first reference face set may include: determine the distances between the features of the image to be detected and the features in the first reference face set, and compare the minimum of those distances with a set distance threshold. If the minimum distance is less than the threshold, a reference face matching the image to be detected exists in the first reference face set; if the minimum distance is greater than the threshold, then all the distances exceed the threshold, no reference face in the first reference face set matches the image to be detected, and matching must continue against the larger reference face set in the server.
Optionally, this embodiment matches the image to be detected against both the first reference face set on the electronic device side and the second reference face set on the server side. In one of the two cases, a reference face matching the image to be detected exists in the first reference face set stored in the electronic device, and the matched reference face is obtained directly; in the other, no matching reference face exists in the first reference face set, and matching proceeds through the second reference face set in the server, for which refer to step 205 in the embodiment provided in fig. 2.
As shown in fig. 4, based on the embodiment shown in fig. 3, step 303 may include the following steps:
step 3031, processing the image to be detected to obtain first structured data.
The face structured data includes but is not limited to: face attribute information (gender, age, etc.), face key point information, face feature information, and the like.
Optionally, the first structured data comprises: a first target feature;
step 3031 may include:
and performing feature extraction processing on the image to be detected by using the first neural network model to obtain a first target feature.
Constrained by the computing power of the electronic device, the recognition model (first neural network model) on the device may be a small, low-compute model. In this embodiment, the first neural network model extracts features of the face in the snapshot, and the resulting first target feature is represented by a high-dimensional vector of the face for subsequent retrieval and comparison.
Step 3032, determining whether a reference face matched with the image to be detected exists in the first reference face set based on the first structured data, and obtaining a first matching result.
In this embodiment, the image to be retrieved is matched locally on the electronic device. For the specific matching process, refer to step 204 in the embodiment provided in fig. 2: calculate the distance between the structured data of the image to be retrieved and each reference face in the first reference face set, sort all reference faces by distance, and determine the reference face with the smallest distance. Optionally, whether a matching reference face exists in the first reference face set can be judged from the relationship between that smallest distance and a threshold.
Optionally, a plurality of reference faces are stored in the first reference face set, or a plurality of reference faces and reference features corresponding to the reference faces are stored in the first reference face set;
step 3032, comprising:
determining whether a reference face matched with the image to be detected exists based on the distance between the first target feature and the reference features stored in the first reference face set; or, alternatively,
and performing feature extraction on the reference face stored in the first reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the first target features.
In this embodiment, the face features extracted from the image to be detected undergo primary retrieval against the first reference face set stored on the electronic device, where the first reference face set may store reference faces or the reference features corresponding to them. If reference features are stored, the distance between the first target feature and each reference feature is calculated directly; if reference faces are stored, features must first be extracted from the faces in the first reference face set, and the distances between those extracted reference features and the first target feature are then calculated to determine whether a reference face matching the image to be detected exists.
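The two storage layouts described above can be sketched as follows; extract_feature is an assumed stand-in for the first neural network model, and the record layout and threshold are illustrative assumptions, not an API from this application:

```python
import numpy as np

def extract_feature(face_image: np.ndarray) -> np.ndarray:
    # Toy stand-in for the first neural network model: a normalised
    # column-mean embedding. A real deployment would run the device model.
    v = face_image.astype(np.float32).mean(axis=0).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def match_first_set(first_target_feature: np.ndarray, reference_set: list,
                    threshold: float = 0.35):
    """reference_set: list of dicts with either a 'feature' or a 'face_image' key."""
    best = None
    for record in reference_set:
        # Use the stored reference feature when present; otherwise extract
        # one from the stored reference face image first.
        ref = record.get("feature")
        feat = np.asarray(ref) if ref is not None else extract_feature(record["face_image"])
        d = float(np.linalg.norm(feat - first_target_feature))
        if best is None or d < best[1]:
            best = (record, d)
    # First matching result: the nearest record if within the threshold.
    return best[0] if best is not None and best[1] <= threshold else None
```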
Fig. 5 is a flowchart illustrating a target matching method according to still another exemplary embodiment of the present disclosure. The present embodiment can be applied to a server, as shown in fig. 5, and includes the following steps:
step 501, splitting the second reference face set to obtain at least two first reference face sets.
The reference faces included in each first reference face set are part of the reference faces in the second reference face set.
The storage space of the server is larger than that of the electronic device, so the first reference face set on the electronic device includes only part of the reference faces in the second reference face set. To let the electronic devices operate normally, the server splits the second reference face set into a plurality of first reference face sets before sending them to the electronic devices.
Step 502, each first reference face set is sent to an electronic device.
Optionally, the plurality of first reference face sets are sent to different electronic devices respectively; the multiple devices holding first reference face sets can then each perform face matching on the device side, which improves the processing efficiency of face matching.
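A minimal sketch of steps 501 and 502 under the assumption of a round-robin split; the transport (device.send) is an illustrative placeholder, not part of this application:

```python
def split_reference_set(second_set: list, num_devices: int) -> list:
    # Step 501: round-robin split into disjoint first reference face sets.
    return [second_set[i::num_devices] for i in range(num_devices)]

def distribute(second_set: list, devices: list) -> None:
    # Step 502: send one first reference face set to each electronic device.
    for device, first_set in zip(devices, split_reference_set(second_set, len(devices))):
        device.send(first_set)    # assumed transport, e.g. an HTTPS sync endpoint
```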
Step 503, in response to that no reference face matched with the image to be detected exists in the first reference face set in the electronic device, receiving the image to be detected sent by the electronic device.
When the first reference face set on the electronic device, which contains fewer reference faces, holds no reference face matching the image to be detected, this embodiment performs secondary matching through the second reference face set in the server. Optionally, refer to step 206 in the embodiment provided in fig. 2.
And step 504, matching the image to be detected with the reference face in the second reference face set to obtain a second matching result.
Optionally, the process of matching the image to be detected against the reference faces in the second reference face set is similar to matching against the first reference face set, and may likewise be decided by calculating distances between features.
And 505, determining a reference face matched with the image to be detected according to the second matching result.
This embodiment implements two-stage retrieval through the electronic device and the server and, combined with splitting the reference face set, reduces the deployment cost of the server in a given scenario and improves retrieval efficiency.
In some optional embodiments, step 504 includes:
and determining to match the image to be detected with the reference face stored in the second reference face set based on the matching result of the second neural network model and the first neural network model in the electronic equipment to obtain a second matching result.
Optionally, before the image to be detected is matched against the second reference face set, the first neural network model on the electronic device side is compared with the second neural network model in the server. If they match, the server can directly use the first target feature produced by the first neural network model; if they do not match, the second neural network model must process the image to be detected again.
As shown in fig. 6, based on the embodiment shown in fig. 5, step 504 may include the following steps:
at step 5041, it is determined whether the second neural network model is the same as the first neural network model.
Optionally, whether the second neural network model is the same as the first neural network model may be determined from whether the model version number of the second neural network is consistent with that of the first neural network; alternatively, it may be determined by judging whether the network structure and network parameters of the second neural network model are the same as those of the first neural network model.
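Both criteria of step 5041 can be sketched as follows; the field names are assumptions made for this illustration:

```python
def same_model(device_info: dict, server_info: dict) -> bool:
    # Cheap path: compare the reported model version numbers.
    if device_info.get("version") and server_info.get("version"):
        return device_info["version"] == server_info["version"]
    # Fallback: compare the network structure and a digest of the parameters.
    return (device_info.get("structure") == server_info.get("structure")
            and device_info.get("param_hash") == server_info.get("param_hash"))
```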
Step 5042, in response to the second neural network model being the same as the first neural network model, matching the first structured data corresponding to the image to be detected with the reference face.
Wherein the first structured data comprises a first target feature; the second reference face set stores a plurality of reference faces, or the second reference face set stores a plurality of reference faces and corresponding reference features thereof. For example, the plurality of reference faces included in the second reference face set may be face images of all employees of a certain company; the reference features corresponding to the reference face are face features obtained by performing feature extraction (for example, performing feature extraction by using a neural network) based on the face image, that is, the face images of all employees of the company may be stored in the second reference face set, or the face images of all employees of the company and the face features corresponding to the face images may be stored.
Optionally, matching the first structured data corresponding to the image to be detected with the reference face includes:
determining whether a reference face matched with the image to be detected exists based on the distance between the first target feature and the reference features stored in the second reference face set; or, alternatively,
and performing feature extraction on the reference face stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the first target features.
In this embodiment, the face features extracted from the image to be detected undergo secondary retrieval against the second reference face set stored on the server, where the second reference face set may store reference faces or the reference features corresponding to them. If reference features are stored, the distance between the first target feature and each reference feature can be calculated directly; if reference faces are stored, features must first be extracted from the faces in the second reference face set, and the distances between those extracted reference features and the first target feature are then calculated to determine whether a reference face matching the image to be detected exists.
Step 5043, matching based on the image to be detected and the reference face in response to the second neural network model being different from the first neural network model.
Optionally, performing feature extraction processing on the image to be detected by using a second neural network model to obtain second structured data; and matching with the reference face based on the second structured data.
When the second neural network model is inconsistent with the first, the face features extracted by the first neural network model in the electronic device are no longer applicable on the server. In this embodiment, the server-side second neural network model performs feature extraction on the image to be detected so that the resulting second structured data can be matched against the reference faces. When matching the second structured data against a reference face, the reference face is also put through the second neural network model to obtain its corresponding face features; the distance between the second structured data and those features is then calculated to decide, from that distance, whether they match.
In this embodiment, the model versions corresponding to the face features on the electronic device side and on the server may be inconsistent; for example, a low-compute model is used on the device side while a high-compute model is used on the server. If the device-side model version is consistent with the server's, the face features extracted on the device can be used directly for secondary retrieval against the second reference face set in the server; if not, the server recognizes and extracts the face feature information itself and performs the secondary retrieval with the features it extracted.
Optionally, the second structured data comprises a second target feature; a plurality of reference faces are stored in the second reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the second reference face set;
matching with the reference face based on the second structured data, comprising:
determining whether a reference face matched with the image to be detected exists based on the distance between the second target feature and the reference features stored in the second reference face set; or, alternatively,
and performing feature extraction on the reference face stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the second target features.
This embodiment realizes secondary retrieval of the image to be detected against the second reference face set stored on the server: before matching, the second neural network model in the server extracts features from the image to be detected to obtain the second target feature, which is matched against the reference features in the second reference face set so that the reference face can be determined from the matching result, as sketched below.
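A hedged end-to-end sketch of steps 5041 through 5043: the device-extracted first target feature is reused when the model versions match, otherwise the server model re-extracts a second target feature before searching the second reference face set. The request fields, the toy extraction function, and the threshold are assumptions for illustration:

```python
import numpy as np

def server_extract_feature(image: np.ndarray) -> np.ndarray:
    # Toy stand-in for the higher-capacity server-side recognition model.
    v = image.astype(np.float32).mean(axis=0).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def secondary_retrieval(request: dict, second_set_features: np.ndarray,
                        server_model_version: str, threshold: float = 0.45):
    """second_set_features: (N, D) matrix of reference features on the server."""
    if request["model_version"] == server_model_version:
        feature = np.asarray(request["first_target_feature"])    # step 5042
    else:
        feature = server_extract_feature(request["snapshot"])    # step 5043
    dists = np.linalg.norm(second_set_features - feature, axis=1)
    idx = int(np.argmin(dists))
    # Second matching result: index of the matched reference face, or None.
    return idx if float(dists[idx]) <= threshold else None
```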
Any of the target matching methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capability, including but not limited to terminal devices, servers, and the like. Alternatively, any target matching method provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor may execute any target matching method mentioned in the embodiments by calling corresponding instructions stored in a memory. This is not elaborated below.
Exemplary devices
Fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 7, the electronic device provided in this embodiment includes:
the sub-data receiving module 71 is configured to receive a first reference face set obtained by splitting the second reference face set by the server.
The reference faces included in the first reference face set are part of the reference faces in the second reference face set.
An image determination module 72 for determining an image to be detected.
The first matching module 73 is configured to match the to-be-detected image determined by the image determining module 72 with the reference face in the first reference face set determined by the sub-data receiving module 71, so as to obtain a first matching result.
And the face determining module 74 is configured to determine a reference face matched with the image to be detected according to the first matching result determined by the first matching module 73.
The electronic device provided by the above embodiment of the disclosure receives a first reference face set obtained by the server splitting a second reference face set, where the reference faces in the first reference face set are a subset of those in the second reference face set; determines an image to be detected; matches the image to be detected against the reference faces in the first reference face set to obtain a first matching result; and determines the reference face matched with the image to be detected from the first matching result. By combining the electronic device and the server to perform two-stage retrieval, and by splitting the second reference face set stored on the server into first reference face sets that occupy less space and sending them to electronic devices, the embodiment greatly reduces the deployment cost of electronic devices in a given scenario and improves retrieval efficiency.
Fig. 8 is a schematic structural diagram of an electronic device according to another exemplary embodiment of the present disclosure. As shown in fig. 8, the electronic device provided in this embodiment includes:
the image determining module 74 is specifically configured to, in response to that the first matching result indicates that a reference face matching the image to be detected exists in the first reference face set, take the matching reference face as a reference face matching the image to be detected; responding to the fact that no first image which is matched with the target image in the first reference face set does not exist in the first database, indicating that no reference face which is matched with the image to be detected exists in the first reference face set, sending the image to be detected of the target or the first structured data to a cloud server, and receiving a second matching result which is matched by the server based on the image to be detected and the reference face in the second reference face set; and determining a reference face matched with the image to be detected according to the second matching result.
A first matching module 73 comprising:
an image processing unit 731, configured to process the image to be detected to obtain the first structured data.
The first set matching unit 732 is configured to determine whether a reference face matched with the image to be detected exists in the first reference face set based on the first structured data, so as to obtain a first matching result.
Optionally, the first structured data comprises: a first target feature;
the image processing unit 731 is specifically configured to perform feature extraction processing on the image to be detected by using the first neural network model, so as to obtain the first target feature.
A plurality of reference faces are stored in the first reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the first reference face set;
a first set matching unit 732, configured to determine whether a reference face matching the image to be detected exists based on the distances between the first target feature and the reference features stored in the first reference face set; or, alternatively,
and performing feature extraction on the reference face stored in the first reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the first target features.
Fig. 9 is a schematic structural diagram of a server according to an exemplary embodiment of the present disclosure. As shown in fig. 9, the server according to this embodiment includes:
and the data splitting module 91 is configured to split the second reference face set to obtain at least two first reference face sets.
The reference faces included in each first reference face set are part of the reference faces in the second reference face set.
And the data set distribution module 92 is configured to send each first reference face set obtained by the data splitting module 91 to an electronic device.
The image receiving module 93 is configured to receive an image to be detected sent by the electronic device in response to that no reference face matching the image to be detected exists in the first reference face set in the electronic device.
And the second matching module 94 is configured to match the to-be-detected image received by the image receiving module 93 with the reference face in the second reference face set to obtain a second matching result.
And a face determining module 95, configured to determine a reference face matched with the image to be detected based on the second matching result determined by the second matching module 94.
This embodiment implements two-stage retrieval through the electronic device side and the server side and, combined with splitting the reference face set, reduces the deployment cost of the server in a given scenario and improves retrieval efficiency.
Fig. 10 is a schematic structural diagram of a server according to another exemplary embodiment of the present disclosure. As shown in fig. 10, the server according to this embodiment includes:
the second matching module 94 is specifically configured to determine, based on whether the second neural network model matches the first neural network model in the electronic device, how to match the image to be detected against the reference faces stored in the second reference face set, so as to obtain a second matching result.
A second matching module 94 comprising:
a model determining unit 941, configured to determine whether the second neural network model is the same as the first neural network model.
A face matching unit 942, configured to match the first structured data corresponding to the image to be detected against the reference faces in response to the second neural network model being the same as the first neural network model; and, in response to the second neural network model differing from the first neural network model, to match based on the image to be detected and the reference faces.
Optionally, the first structured data comprises a first target feature; a plurality of reference faces are stored in the second reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the second reference face set;
the face matching unit 942 is configured to determine whether a reference face matching the image to be detected exists based on a distance between the first target feature and a reference feature stored in the second reference face set when matching the first structured data corresponding to the image to be detected with the reference face; or, performing feature extraction on the reference face stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the first target features.
The face matching unit 942 is configured to, when matching based on the image to be detected itself, perform feature extraction processing on the image to be detected by using the second neural network model to obtain second structured data, and match against the reference faces based on the second structured data.
Optionally, the second structured data comprises a second target feature; a plurality of reference faces are stored in the second reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the second reference face set;
the face matching unit 942 is configured to determine whether a reference face matching the image to be detected exists based on a distance between the second target feature and a reference feature stored in the second reference face set when matching is performed on the basis of the second structured data and the reference face; or, performing feature extraction on the reference face stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matched with the image to be detected exists or not based on the distances between the plurality of reference features and the second target features.
The present disclosure also provides a target matching system, the system comprising: a server as provided in any of the embodiments above and at least two electronic devices as provided in any of the embodiments above.
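For the system as a whole, the server's split-and-distribute step might be sketched as below. The round-robin assignment and the dictionary-of-features representation are illustrative assumptions; the disclosure requires only that each first reference face set contain part of the second reference face set:

    def split_reference_set(second_set, num_devices):
        # Partition the second reference face set into num_devices
        # disjoint first reference face sets, one per electronic device,
        # so that each device stores only part of the full set.
        first_sets = [{} for _ in range(num_devices)]
        for i, (face_id, feature) in enumerate(sorted(second_set.items())):
            first_sets[i % num_devices][face_id] = feature
        return first_sets

Each returned set holds roughly 1/num_devices of the server-side entries, which is what lets a lower-capacity electronic device store its share and keeps per-device retrieval fast.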
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 11. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device separate from them that may communicate with the first device and the second device to receive the collected input signals therefrom.
FIG. 11 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 11, electronic device 110 includes one or more processors 111 and memory 112.
Processor 111 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in electronic device 110 to perform desired functions.
Memory 112 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 111 to implement the target matching methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 110 may further include: an input device 113 and an output device 114, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 113 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 113 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
The input device 113 may also include, for example, a keyboard, a mouse, and the like.
The output device 114 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 114 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 110 relevant to the present disclosure are shown in fig. 11, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 110 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the target matching method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the target matching method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, it is noted that the advantages, effects, and the like mentioned in the present disclosure are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not limited to the specific details described above.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (16)

1. A target matching method is applied to electronic equipment and comprises the following steps:
receiving a first reference face set obtained by splitting a second reference face set by a server; wherein the reference faces included in the first reference face set are a part of the reference faces in the second reference face set;
determining an image to be detected;
matching the image to be detected with the reference face in the first reference face set to obtain a first matching result;
and determining a reference face matched with the image to be detected according to the first matching result.
2. The method according to claim 1, wherein determining a reference face matching the image to be detected according to the first matching result comprises:
in response to the first matching result indicating that a reference face matching the image to be detected exists in the first reference face set, taking the matched reference face as the reference face matching the image to be detected;
in response to the first matching result indicating that no reference face matching the image to be detected exists in the first reference face set, sending the image to be detected to the server, and receiving a second matching result obtained by the server matching the image to be detected against the reference faces in the second reference face set; and determining a reference face matching the image to be detected according to the second matching result.
3. The method according to claim 1 or 2, wherein matching the image to be detected with the reference faces in the first reference face set to obtain the first matching result comprises:
processing the image to be detected to obtain first structured data;
and determining, based on the first structured data, whether a reference face matching the image to be detected exists in the first reference face set, to obtain the first matching result.
4. The method of claim 3, wherein the first structured data comprises: a first target feature;
wherein processing the image to be detected to obtain the first structured data comprises:
and performing feature extraction processing on the image to be detected by using a first neural network model to obtain a first target feature.
5. The method according to claim 4, wherein a plurality of reference faces are stored in the first reference face set, or a plurality of reference faces and their corresponding reference features are stored in the first reference face set;
wherein determining, based on the first structured data, whether a reference face matching the image to be detected exists in the first reference face set comprises:
determining whether a reference face matching the image to be detected exists based on the distances between the first target feature and the reference features stored in the first reference face set; or
performing feature extraction on the reference faces stored in the first reference face set to obtain a plurality of reference features, and determining whether a reference face matching the image to be detected exists based on the distances between the plurality of reference features and the first target feature.
6. A target matching method is applied to a server and comprises the following steps:
splitting the second reference face set to obtain at least two first reference face sets; wherein the reference faces included in each first reference face set are a part of the reference faces in the second reference face set;
respectively sending each first reference face set to an electronic device;
receiving, in response to no reference face matching the image to be detected existing in the first reference face set on the electronic device, the image to be detected sent by the electronic device;
matching the image to be detected with the reference face in the second reference face set to obtain a second matching result;
and determining a reference face matched with the image to be detected according to the second matching result.
7. The method of claim 6, wherein matching the image to be detected with the reference faces in the second reference face set to obtain the second matching result comprises:
determining, based on a comparison between a second neural network model and the first neural network model in the electronic device, how to match the image to be detected with the reference faces stored in the second reference face set, so as to obtain the second matching result.
8. The method of claim 7, wherein determining, based on the comparison between the second neural network model and the first neural network model in the electronic device, how to match the image to be detected with the reference faces stored in the second reference face set comprises:
determining whether the second neural network model is the same as the first neural network model;
in response to the second neural network model being the same as the first neural network model, matching the first structured data corresponding to the image to be detected with the reference faces;
and in response to the second neural network model being different from the first neural network model, matching based on the image to be detected and the reference faces.
9. The method of claim 8, wherein the first structured data comprises a first target feature; a plurality of reference faces are stored in the second reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the second reference face set;
wherein matching the first structured data corresponding to the image to be detected with the reference faces comprises:
determining whether a reference face matching the image to be detected exists based on the distances between the first target feature and the reference features stored in the second reference face set; or
performing feature extraction on the reference faces stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matching the image to be detected exists based on the distances between the plurality of reference features and the first target feature.
10. The method according to claim 8 or 9, wherein said matching based on said image to be detected and said reference face comprises:
performing feature extraction processing on the image to be detected by using the second neural network model to obtain second structured data;
and matching with the reference face based on the second structured data.
11. The method of claim 10, wherein the second structured data comprises a second target feature; a plurality of reference faces are stored in the second reference face set, or a plurality of reference faces and corresponding reference features thereof are stored in the second reference face set;
the matching with the reference face based on the second structured data comprises:
determining whether a reference face matching the image to be detected exists based on the distances between the second target feature and the reference features stored in the second reference face set; or
performing feature extraction on the reference faces stored in the second reference face set to obtain a plurality of reference features, and determining whether a reference face matching the image to be detected exists based on the distances between the plurality of reference features and the second target feature.
12. An electronic device, comprising:
the subdata receiving module is used for receiving a first reference face set obtained by splitting the second reference face set by the server; wherein the reference faces included in the first reference face set are part of the reference faces in the second reference face set;
the image determining module is used for determining an image to be detected;
the first matching module is used for matching the image to be detected determined by the image determining module with the reference faces in the first reference face set received by the subdata receiving module to obtain a first matching result;
and the face determining module is used for determining a reference face matching the image to be detected according to the first matching result determined by the first matching module.
13. A server, comprising:
the data splitting module is used for splitting the second reference face set to obtain at least two first reference face sets; wherein the reference faces included in each first reference face set are a part of the reference faces in the second reference face set;
the data set distribution module is used for respectively sending each first reference face set obtained by the data splitting module to an electronic device;
the image receiving module is used for receiving, in response to no reference face matching the image to be detected existing in a first reference face set on the electronic device, the image to be detected sent by the electronic device;
the second matching module is used for matching the image to be detected received by the image receiving module with the reference face in the second reference face set to obtain a second matching result;
and the face determining module is used for determining a reference face matched with the image to be detected based on a second matching result determined by the second matching module.
14. A target matching system, comprising: a server according to claim 13 and at least two electronic devices according to claim 12.
15. A computer-readable storage medium storing a computer program for executing the target matching method of any one of claims 1 to 11.
16. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the target matching method of any one of claims 1 to 11.
CN202010084446.2A 2020-02-10 2020-02-10 Target matching method and system, server, cloud, storage medium and equipment Pending CN113255399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010084446.2A CN113255399A (en) 2020-02-10 2020-02-10 Target matching method and system, server, cloud, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010084446.2A CN113255399A (en) 2020-02-10 2020-02-10 Target matching method and system, server, cloud, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN113255399A true CN113255399A (en) 2021-08-13

Family

ID=77219343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010084446.2A Pending CN113255399A (en) 2020-02-10 2020-02-10 Target matching method and system, server, cloud, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN113255399A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090324020A1 (en) * 2007-02-13 2009-12-31 Kabushiki Kaisha Toshiba Person retrieval apparatus
US20090141950A1 (en) * 2007-11-05 2009-06-04 Olaworks, Inc. Method, system, and computer-readable recording medium for recognizing face of person included in digital data by using feature data
US20140104441A1 (en) * 2012-10-16 2014-04-17 Vidinoti Sa Method and system for image capture and facilitated annotation
CN105117860A (en) * 2015-09-22 2015-12-02 镇江锐捷信息科技有限公司 Internet work attendance system and method on the basis of a facial recognition device
US20190364249A1 (en) * 2016-12-22 2019-11-28 Nec Corporation Video collection system, video collection server, video collection method, and program
CN110741377A (en) * 2017-06-30 2020-01-31 Oppo广东移动通信有限公司 Face image processing method and device, storage medium and electronic equipment
CN107688781A (en) * 2017-08-22 2018-02-13 北京小米移动软件有限公司 Face identification method and device
CN107992822A (en) * 2017-11-30 2018-05-04 广东欧珀移动通信有限公司 Image processing method and device, computer equipment, computer-readable recording medium
CN110399763A (en) * 2018-04-24 2019-11-01 深圳奥比中光科技有限公司 Face identification method and system
CN110688510A (en) * 2018-06-20 2020-01-14 浙江宇视科技有限公司 Face background image acquisition method and system
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
US8107689B2 (en) Apparatus, method and computer program for processing information
US9204112B2 (en) Systems, circuits, and methods for efficient hierarchical object recognition based on clustered invariant features
US20150026182A1 (en) Systems and methods for generation of searchable structures respective of multimedia data content
US20150186405A1 (en) System and methods for generation of a concept based database
US20140324840A1 (en) System and method for linking multimedia data elements to web pages
US8805123B2 (en) System and method for video recognition based on visual image matching
CN111897875A (en) Fusion processing method and device for urban multi-source heterogeneous data and computer equipment
US9424466B2 (en) Shoe image retrieval apparatus and method using matching pair
US9471675B2 (en) Automatic face discovery and recognition for video content analysis
WO2012141655A1 (en) In-video product annotation with web information mining
US11966436B2 (en) Method and system for determining product similarity in digital domains
CN110705475B (en) Method, apparatus, medium, and device for target object recognition
CN111931548B (en) Face recognition system, method for establishing face recognition data and face recognition method
CN114708578A (en) Lip action detection method and device, readable storage medium and electronic equipment
KR20170082025A (en) Apparatus and Method for Identifying Video with Copyright using Recognizing Face based on Machine Learning
US20130191368A1 (en) System and method for using multimedia content as search queries
KR20190018274A (en) Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image
CN112307199A (en) Information identification method, data processing method, device and equipment, information interaction method
US11580721B2 (en) Information processing apparatus, control method, and program
KR20080046490A (en) Method for identifying face using montage and apparatus thereof
TW201435627A (en) System and method for optimizing search results
CN113255399A (en) Target matching method and system, server, cloud, storage medium and equipment
CN113111354A (en) Target retrieval method and system, terminal device, cloud server, medium and device
JP6651085B1 (en) Attribute recognition system, learning server, and attribute recognition program
CN113569860A (en) Example segmentation method, training method of example segmentation network and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination