CN114385993A - Identity detection method, device and readable medium

Info

Publication number: CN114385993A
Application number: CN202111621064.XA
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 何剑, 朱丹, 赵雷, 刘奎龙, 杨昌源
Assignee: Alibaba China Co Ltd
Related application: PCT/CN2022/120593 (WO2023124295A1)
Prior art keywords: detection, image, condition, feature vector, target object

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication


Abstract

The embodiments of the application provide an identity detection method, an identity detection device, and a readable medium. The method includes: providing a detection page, where the detection page includes an upload control; in response to triggering of the upload control, determining at least one detection image of an object to be detected; inputting the at least one detection image into a detection model and determining a corresponding feature vector, where the feature vector includes a multi-level feature vector; comparing the feature vector with a reference feature vector to determine a detection result, where the detection result includes success or failure in detecting the target object; and displaying the detection result in the detection page. Cascade feature detection is performed through the multi-level feature vectors, so that objects can be quickly retrieved and identified through the multi-level features, improving processing efficiency.

Description

Identity detection method, device and readable medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an identity detection method, a terminal device, and a machine-readable medium.
Background
With the development of deep learning technology, Artificial Intelligence (AI) image recognition has advanced greatly, and the image recognition capability of AI has come to exceed that of humans. AI also plays a major role in many fields, such as the game of Go, autonomous driving, and computer-aided diagnosis, promoting social progress.
At present, the identity of an object can be recognized from an image, but such recognition is often insufficiently accurate.
Disclosure of Invention
The embodiments of the application provide an identity detection method for accurately identifying different objects.
Correspondingly, the embodiments of the application also provide a detection method, an electronic device, and a machine-readable medium to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses an identity detection method, including:
providing a detection page, where the detection page includes an upload control;
in response to triggering of the upload control, determining at least one detection image of an object to be detected;
inputting the at least one detection image into a detection model and determining a corresponding feature vector, where the feature vector includes: a multi-level feature vector;
comparing the feature vector with a reference feature vector to determine a detection result, where the detection result includes: success or failure in detecting the target object;
and displaying the detection result in the detection page.
Optionally, the inputting the at least one detection image into the detection model and determining the corresponding feature vector includes:
inputting a whole-body image in the at least one detection image into a first detection model and determining a whole-body feature vector;
and cropping a face image from the at least one detection image and inputting the face image into a second detection model to obtain a facial feature vector.
Optionally, the comparing the feature vector with the reference feature vector to determine the detection result includes:
comparing the whole-body feature vector with a whole-body reference feature vector to determine a first comparison result;
when the first comparison result does not satisfy a first condition, determining that the detection result is failure to detect the target object;
when the first comparison result satisfies the first condition, comparing the facial feature vector with a facial reference feature vector to determine a second comparison result;
when the second comparison result does not satisfy a second condition, determining that the detection result is failure to detect the target object;
and when the second comparison result satisfies the second condition, determining that the detection result is that the target object is successfully detected, where the target object is the object corresponding to the reference feature vector.
Optionally, the inputting the at least one detection image into the detection model and determining the corresponding feature vector further includes:
cropping a key feature image from the at least one detection image when the second comparison result satisfies the second condition;
and inputting the key feature image into a third detection model to obtain a key feature vector, where the key feature is determined according to the kind of the object.
Optionally, the comparing the feature vector with the reference feature vector to determine the detection result includes:
comparing the key feature vector with a key reference feature vector of the target object to determine a third comparison result;
when the third comparison result does not satisfy a third condition, determining that the detection result is failure to detect the target object;
and when the third comparison result satisfies the third condition, determining that the detection result is that the target object is successfully detected, where the target object is the object corresponding to the reference feature vector.
Optionally, the method further includes: in response to triggering of an identification control in the detection page, acquiring target information of the target object, and determining a reference feature vector of the target object according to the target information.
Optionally, the method further includes: displaying a target image of the target object and a detection image of the object to be detected in the detection page; and marking the compared positions in the target image and the detection image respectively.
The embodiment of the application also discloses a detection method, which comprises the following steps:
providing a detection page, where the detection page includes an upload control;
in response to triggering of the upload control, determining at least one detection image of an object to be detected;
uploading the at least one detection image to a server, so that the server determines a feature vector of the object to be detected according to a detection model, compares the feature vector with a reference feature vector, and determines a detection result;
and receiving the detection result and displaying the detection result in the detection page.
The embodiment of the application also discloses a detection method, which comprises the following steps:
receiving at least one detection image of an object to be detected, and determining a target object;
inputting the at least one detection image into a detection model and determining corresponding feature vectors, where the feature vectors include: whole-body and facial feature vectors;
comparing the feature vectors with reference feature vectors to determine a detection result, where the detection result indicates that the objects are the same or different;
and sending the detection result.
The embodiments of the application also disclose an electronic device, including: a processor; and a memory having executable code stored thereon which, when executed, causes the processor to perform the method described in the embodiments of the present application.
The embodiments of the application also disclose one or more machine-readable media having executable code stored thereon which, when executed, causes a processor to perform the method described in the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, a detection page may be provided, and in response to triggering of the upload control, at least one detection image of an object to be detected is determined, so that images can be conveniently uploaded for object detection. The at least one detection image may be input into a detection model to determine a corresponding feature vector, where the feature vector includes multi-level feature vectors used for cascade feature detection. The feature vector is compared with a reference feature vector to determine a detection result, and the detection result is then displayed in the detection page. Objects can thus be quickly retrieved and identified through the multi-level features, improving processing efficiency.
Drawings
Fig. 1 is a flowchart illustrating steps of an embodiment of an identity detection method according to an embodiment of the present application;
FIG. 2A is a diagram illustrating an example of a detection page according to an embodiment of the present application;
FIG. 2B is a flow chart illustrating steps of another method for identity detection according to an embodiment of the present application;
FIG. 3A is a diagram illustrating another exemplary detection page according to an embodiment of the present application;
FIG. 3B is a flow chart of steps in another embodiment of an identity detection method of the present application;
FIG. 4A is a diagram illustrating a further example of a detection page according to an embodiment of the present application;
FIG. 4B is a flow chart of steps in yet another embodiment of an identity detection method of the present application;
FIG. 5 is a diagram illustrating an example of detection interaction in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The embodiments of the application can be applied to object identity detection scenarios, such as animal detection, person detection, and product detection. Cascade feature detection combines multiple features and analyzes them serially according to certain rules to determine whether two objects are the same object, so that cascaded detection of multiple features can be realized and the corresponding object detected. Multiple detection models can be trained, with different detection models processing the detection images of the object to be detected and determining the corresponding feature vectors. The features detected by the different models form the cascade features, for example whole-body features followed by facial features, so that the features are detected stage by stage and detection accuracy is improved.
The detection model in the embodiments of the present application is a feature detection model, which may be any of various neural networks and machine learning models; for example, Residual Network (ResNet) models such as ResNet18 and ResNet50 may be adopted, or other classification models and feature extraction models, which is not limited in the embodiments of the present application. In a cascade feature detection scenario, multiple detection models can be trained, with different detection models detecting different object features, so that cascade feature detection over multiple features improves detection accuracy. During model training, a training set can be prepared for each feature, with the training samples related to the feature to be detected. The features may be determined based on the identified subject; for an animal, for example, they may include whole-body features, facial features, and other key features, where the key features depend on the kind of object: the key feature of a dog is its nose print, the key feature of a cat is its coat pattern, and so on. The models may include a first detection model, a second detection model, a third detection model, etc., determined according to the desired features; the different detection models can be trained from the same base model or from different base models.
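As a concrete illustration of this multi-model setup, the following Python sketch builds one feature extractor per cascade stage from a ResNet18 backbone. It assumes PyTorch/torchvision and separately trained weight files; the helper name load_extractor, the file names, and the choice of ResNet18 are illustrative assumptions, not part of the embodiment.

```python
import torch
import torch.nn as nn
import torchvision.models as models


def load_extractor(weight_path: str) -> nn.Module:
    """Build a ResNet18 backbone and strip the classifier head so the
    model outputs a 512-dimensional feature vector per image."""
    backbone = models.resnet18(weights=None)
    # Keep everything up to and including global average pooling.
    extractor = nn.Sequential(*list(backbone.children())[:-1])
    state = torch.load(weight_path, map_location="cpu")
    extractor.load_state_dict(state, strict=False)  # trained weights assumed
    extractor.eval()
    return extractor


# One extractor per cascade stage: whole body, face, key feature.
whole_body_model = load_extractor("whole_body.pt")
face_model = load_extractor("face.pt")
key_feature_model = load_extractor("key_feature.pt")


@torch.no_grad()
def embed(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) float tensor, already normalized."""
    return model(image.unsqueeze(0)).flatten(1).squeeze(0)  # -> (512,)
```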
After model training is completed, cascade feature detection can be performed based on the multiple models and applied to scenarios such as retrieval, identification, and comparison of objects. The following embodiments are discussed taking such objects as examples.
Referring to fig. 1, a flowchart illustrating steps of an identity detection method according to an embodiment of the present application is shown.
Step 102, providing a detection page, where the detection page includes an upload control.
Step 104, in response to triggering of the upload control, determining at least one detection image of the object to be detected.
The detection page is used for detecting objects and can provide an upload control for uploading detection images of the object to be detected. The upload control may include a capture control and an acquisition control: the capture control can invoke a camera to shoot a detection image of the object to be detected, while the acquisition control can obtain a detection image from the device or from the network side, for example by calling the interface of an album application to acquire an image from it, or by acquiring an image from the application's local album.
At least one detection image of the object to be detected is acquired through the upload control. The number of detection images can be determined according to the cascade features to be detected: if one detection image can provide all the required features, only one image needs to be provided; if one image cannot provide all the required features, corresponding images can be determined and uploaded according to the required features. For example, in animal detection, if the cascade features are whole body, face, and nose print in sequence, a whole-body image and a face image can be provided, with the nose print obtained from the face image, or a nose print image can be provided separately.
In the embodiments of the application, the detection page can provide functions such as retrieval, identification, and comparison of objects. Under the retrieval function, only at least one detection image of the object to be detected needs to be provided. Under the identification and comparison functions, target information of the target object, such as a name, identifier, or image, is also provided. Correspondingly, the detection page can include an identification control for acquiring the target information, such as the name or identifier, of the target object to be identified. Taking animal identification as an example, it is possible to identify whether two animals are the same animal, and the target information of the target object includes the object name, object identifier, and so on, as well as the animal breed. In the comparison scenario, the upload control may also receive a target image of the target object, making it convenient to judge whether two objects are the same object.
Step 106, inputting the at least one detection image into a detection model and determining a corresponding feature vector, where the feature vector includes: a multi-level feature vector.
Step 108, determining a reference feature vector of the target object, comparing the feature vector with the reference feature vector, and determining a detection result, where the detection result includes: the target object is detected successfully, or detection of the target object fails.
After at least one detection image is acquired, the detection images can be input into the corresponding detection models to obtain the corresponding feature vectors. The embodiments of the application realize cascade feature detection, with the corresponding detection models processed stage by stage. For example, a first detection image is input into the first detection model to obtain a first-level feature vector, which is compared with the stored first-level feature vectors; if the first condition is not satisfied, the detection result is determined to be that the target object is not detected. If the first condition is satisfied, a second detection image is input into the second detection model to obtain a second-level feature vector, which is compared with the stored second-level feature vectors to judge whether the second condition is satisfied. Detection proceeds stage by stage in the same manner: if the corresponding condition is not satisfied, the result is that the target object is not detected, and detection can stop, reducing wasted resources; if the corresponding condition is satisfied, the next stage of detection continues, and the detection result is obtained through multi-stage detection, improving accuracy. The detection result includes: the target object is detected successfully or detection of the target object fails. For example, if the object to be detected matches a specific object in the database, i.e., matches the target object, a detection result of successfully detecting the target object is obtained; if no matching object is detected in the database, i.e., the object to be detected does not match the target object, a detection result of failing to detect the target object is obtained.
In the embodiments of the present application, for an object the cascade features are, in sequence: whole-body features, facial features, and key features, where the key features may be determined by the kind of object. In an optional embodiment, inputting the at least one detection image into the detection model and determining the corresponding feature vector includes: inputting a whole-body image in the at least one detection image into the first detection model and determining a whole-body feature vector; and cropping a face image from the at least one detection image and inputting the face image into the second detection model to obtain a facial feature vector. Comparing the feature vector with the reference feature vector to determine the detection result includes: comparing the whole-body feature vector with a whole-body reference feature vector to determine a first comparison result; when the first comparison result does not satisfy the first condition, determining that the detection result is failure to detect the target object, i.e., the target object is not detected; when the first comparison result satisfies the first condition, comparing the facial feature vector with a facial reference feature vector to determine a second comparison result; when the second comparison result does not satisfy the second condition, determining that the detection result is failure to detect the target object; and when the second comparison result satisfies the second condition, determining that the detection result is that the target object is successfully detected, i.e., the object corresponding to the reference feature vector is the target object. Inputting the at least one detection image into the detection model and determining the corresponding feature vector further includes: cropping a key feature image from the at least one detection image when the second comparison result satisfies the second condition; and inputting the key feature image into the third detection model to obtain a key feature vector, where the key feature is determined according to the kind of the object. Comparing the feature vector with the reference feature vector to determine the detection result then includes: comparing the key feature vector with a key reference feature vector of the target object to determine a third comparison result; when the third comparison result does not satisfy the third condition, determining that the detection result is failure to detect the target object; and when the third comparison result satisfies the third condition, determining that the detection result is that the target object is successfully detected.
The feature comparison may be performed in various ways, such as computing the distance between two feature vectors or computing their similarity. The corresponding comparison conditions can likewise vary, for example the distance being smaller than a distance threshold, or the similarity being greater than a similarity threshold, and can be determined according to requirements.
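As a minimal sketch of the two comparison modes just described, the following NumPy code computes a vector distance and a cosine similarity and checks one of them against a threshold; the threshold value is an illustrative assumption.

```python
import numpy as np


def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(a - b))


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def condition_satisfied(feature: np.ndarray, reference: np.ndarray,
                        dist_threshold: float = 0.8) -> bool:
    """Distance-based condition: satisfied when the distance is below the
    threshold. A similarity-based condition (similarity above a threshold)
    could be used instead, as the text notes."""
    return l2_distance(feature, reference) < dist_threshold
```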
In cascade detection, whether to perform the next stage of detection is decided based on the comparison result of the previous stage. In an optional embodiment, a whole-body image in the at least one detection image is input into the first detection model and a whole-body feature vector is determined; the whole-body feature vector is compared with the whole-body reference feature vector to determine a first comparison result. When the first comparison result does not satisfy the first condition, the detection result is determined to be failure to detect the target object, i.e., the target object is not detected. When the first comparison result satisfies the first condition, a face image is cropped from the at least one detection image and input into the second detection model to obtain a facial feature vector, which is compared with the facial reference feature vector to determine a second comparison result. When the second comparison result does not satisfy the second condition, the detection result is determined to be failure to detect the target object; when the second comparison result satisfies the second condition, the detection result is determined to be that the target object is successfully detected, i.e., the object corresponding to the reference feature vector is the target object.
Taking an animal as an example, the whole-body feature can serve as the first-level feature, so the whole-body image can be input into the first detection model and the whole-body feature vector of the object to be detected extracted through it. The whole-body feature vector is then compared with the whole-body reference feature vectors, where, if no target object has been designated, these may be all the whole-body reference features stored in the database.
To speed up retrieval, a codebook of the reference features and a feature vector library can be established. Cascade feature detection can be performed on known objects in advance: their images are input into the multiple detection models and the corresponding feature vectors are detected in sequence; a codebook can then be established for each level of features, built from the feature vectors (for example from their content), and a feature vector library created based on the codebook. The corresponding reference feature vectors can later be retrieved from the feature vector library for comparison.
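One plausible realization of this codebook plus vector-library idea is sketched below with FAISS, where the coarse quantizer of an IVF index plays the role of the codebook; FAISS itself, the dimensionality, and the cluster count are illustrative choices rather than anything specified by the embodiment.

```python
import numpy as np
import faiss

dim, n_clusters = 512, 64
rng = np.random.default_rng(0)
# Stand-in reference features; in practice these come from the detection models.
reference_vectors = rng.standard_normal((10_000, dim)).astype("float32")

quantizer = faiss.IndexFlatL2(dim)                      # codebook centroids
index = faiss.IndexIVFFlat(quantizer, dim, n_clusters)  # feature vector library
index.train(reference_vectors)  # learn the codebook from the reference features
index.add(reference_vectors)    # populate the library

query = rng.standard_normal((1, dim)).astype("float32")
index.nprobe = 8                         # number of codebook cells to scan
distances, ids = index.search(query, 5)  # top-5 nearest reference vectors
```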
The whole-body feature vector is compared with a whole-body reference feature vector to obtain the corresponding first comparison result, such as the distance or similarity between the two vectors; the result is then compared with the corresponding threshold to determine whether the corresponding condition is satisfied (alternatively, the result of the threshold comparison itself can serve as the comparison result). When the first comparison result does not satisfy the first condition, the detection result is determined to be failure to detect the target object, i.e., the target object is not detected. When the first comparison result satisfies the first condition, the target object may possibly be detected, and the next-level judgment is performed. In a retrieval scenario, one or more candidate target objects may be found at this point, and comparison then proceeds on the next-level reference feature vectors of those objects, narrowing the comparison range. A face image can be cropped from the at least one detection image and input into the second detection model to obtain a facial feature vector, which is compared with the facial reference feature vector to determine a second comparison result. When the second comparison result does not satisfy the second condition, the detection result is failure to detect the target object; when it satisfies the second condition, the detection result is that the target object is successfully detected, i.e., the object corresponding to the reference feature vector is the target object. For more accurate judgment, a third stage of detection can follow the second: a key feature image is cropped from the at least one detection image and input into the third detection model to obtain a key feature vector, where the key feature is determined according to the kind of the object. The key feature vector is compared with the key reference feature vector of the target object to determine a third comparison result; when the third comparison result does not satisfy the third condition, the detection result is failure to detect the target object, and when it satisfies the third condition, the detection result is that the target object is successfully detected. Three levels of features are taken as the example above; in actual processing, fourth, fifth, and further levels of detection and comparison can be configured as required, which is not limited in the embodiments of the present application.
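The staged flow above, with its early exit when any condition fails, can be summarized in a short self-contained sketch; the extractors are stubbed with random vectors purely so the control flow runs, and the stage names and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512


def fake_extract(image_id: str) -> np.ndarray:
    """Stand-in for running a stage's detection model on an image."""
    return rng.standard_normal(DIM)


def distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))


def cascade_detect(detection_images: dict, reference: dict,
                   thresholds: dict) -> str:
    """detection_images: stage name -> image id; reference: stage name ->
    reference feature vector. Later stages run only when earlier ones pass."""
    for stage in ("whole_body", "face", "key_feature"):
        feature = fake_extract(detection_images[stage])
        if distance(feature, reference[stage]) >= thresholds[stage]:
            # Early exit: remaining stages are skipped, saving resources.
            return "failed to detect the target object"
    return "target object detected successfully"
```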
Corresponding detection results, such as failure to detect the target object or successful detection of the target object, are determined through the cascade feature comparison. When the target object is detected, its target information, such as basic information (object name, object identifier, object breed) and other data such as images, can be acquired and added to the detection result.
Step 110, displaying the detection result in the detection page.
The detection result may be displayed in the detection page. In other embodiments, positions in the images can be located based on the features during detection and marked in the images. A target image of the target object and a detection image of the object to be detected can then be displayed in the detection page, with the compared positions marked in each image, thereby showing why the two objects are the same object or different objects. For example, locating and marking can be based on the key features, so that the reason for the corresponding detection result can be understood.
In summary, a detection page may be provided, and in response to triggering of the upload control, at least one detection image of the object to be detected is determined, so that images can be conveniently uploaded for object detection. The at least one detection image may be input into a detection model to determine a corresponding feature vector, where the feature vector includes multi-level feature vectors used for cascade feature detection. The feature vector is compared with a reference feature vector to determine a detection result, which is then displayed in the detection page. Objects can thus be quickly retrieved and identified through the multi-level features, improving processing efficiency.
On the basis of the above embodiments, the present application further provides an object identity retrieval system, which provides a retrieval service for objects and can detect whether an object is a registered known object. For example, after a lost object is found, an image of it can be taken and uploaded on the retrieval page of the object identity retrieval system to determine whether it is a registered known object.
Referring to fig. 2A, a schematic diagram of an example of a detection page according to an embodiment of the present application is shown.
Referring to FIG. 2B, a flowchart illustrating steps of an embodiment of an object retrieval method of the present application is shown.
Step 202, displaying a detection page, where the detection page includes an upload control.
Step 204, in response to triggering of the upload control, determining at least one detection image of the object to be detected.
The object identity retrieval system provides a retrieval page, which can be accessed from various terminals such as mobile phones, tablet computers, and notebook computers. An example of the retrieval page is shown in fig. 2A: the retrieval page on the left provides an upload control, and in response to triggering of the upload control, at least one detection image of the object to be detected is acquired.
Step 206, inputting a whole-body image in the at least one detection image into the first detection model, and determining a whole-body feature vector.
The first detection model detects whole-body features; for example, a residual network model may be used, trained on sample data containing whole-body images. The whole-body image is analyzed by the first detection model to obtain the corresponding whole-body feature vector.
Step 208, comparing the whole-body feature vector with the whole-body reference feature vector to determine a first comparison result.
When no object is specifically designated, the whole-body feature vector can be used to search the whole-body reference feature vectors in the feature vector library. The feature vector library can also include a codebook of whole-body features, based on which one or more whole-body reference feature vectors can be quickly retrieved for comparison, yielding the corresponding first comparison result. During comparison, the vector distance between the whole-body feature vector and a whole-body reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the first comparison result.
Step 210, determining whether the first comparison result satisfies a first condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the first condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the first condition is not satisfied. If yes, go to step 212; if not, go to step 226.
Step 212, cropping a face image from the at least one detection image, and inputting the face image into a second detection model to obtain a facial feature vector.
The second detection model detects facial features; for example, a residual network model may be used, trained on sample data containing facial images. The face image is analyzed by the second detection model to obtain the corresponding facial feature vector. In some examples, the face image can be cropped from the whole-body image and then input into the second detection model for detection. In other examples, the face image can be uploaded separately, so that the image of the region where the face is located is cropped and the facial feature vector then detected.
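For illustration, cropping the face region before the second-stage model might look like the sketch below, assuming a bounding box supplied by some face detector; the hard-coded box and file names are placeholders.

```python
from PIL import Image


def crop_region(image_path: str, box: tuple) -> Image.Image:
    """box = (left, upper, right, lower) in pixels, e.g. from a face detector."""
    with Image.open(image_path) as img:
        return img.crop(box)


# Assumed whole-body photo and an assumed detector-provided face box.
face = crop_region("whole_body.jpg", (120, 40, 360, 280))
face = face.resize((224, 224))  # match the second model's expected input size
```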
Step 214, comparing the facial feature vector with a facial reference feature vector to determine a second comparison result.
Through the first-level feature comparison, the partially similar target objects can be determined, and the facial reference feature vectors of those target objects can then be acquired. During comparison, the vector distance between the facial feature vector and a facial reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the second comparison result.
Step 216, determine whether the second comparison result satisfies a second condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the second condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the second condition is not satisfied. If yes, go to step 218; if not, go to step 226.
Step 218, cropping a key feature image from the at least one detection image, and inputting the key feature image into a third detection model to obtain a key feature vector.
The third detection model detects key features, which are determined by the kind of object. For example, for a dog the key feature is the nose print; for a cat, the nose area is small and its lines are not distinctive, so the nose print is less reliable as a criterion, and the coat pattern on the cat's body can be used as the key feature instead. Similarly, for a leopard, a specific combination of spot patterns can be used as the key feature.
When the whole-body features and the facial features have both been found similar, the key features can be further detected and compared. Key feature images can therefore be extracted, for example by extracting a coat-pattern image from the whole-body image or cropping a nose-print image from the face image, or the key feature image can be acquired directly. The third detection model processes it to obtain the key feature vector.
Step 220, comparing the key feature vector with the key reference feature vector of the target object, and determining a third comparison result.
The facial feature comparison further narrows the range of similar target objects, and the key reference feature vectors of the target objects whose facial features still match can then be acquired. During comparison, the vector distance between the key feature vector and a key reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the third comparison result.
Step 222, determining whether the third comparison result satisfies a third condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the third condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the third condition is not satisfied. If yes, go to step 224; if not, go to step 226.
Step 224, determining that the detection result is that the target object is successfully detected, where the target object is the object corresponding to the reference feature vector.
If the third comparison result satisfies the third condition, the object with the smallest vector distance (or the greatest similarity) can be selected as the target object and fed back as the detection result. Feedback can of course take other forms in other examples, which is not limited in this application. The object information stored in the database for the target object, such as its name, identifier, and images, can be acquired and added to the detection result.
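The final selection among candidates that passed all three stages amounts to an argmin over distances (or an argmax over similarities); a tiny sketch follows, with fabricated candidate data purely for illustration.

```python
import numpy as np

candidate_ids = ["obj_17", "obj_42", "obj_88"]        # assumed candidates
third_stage_distances = np.array([0.41, 0.23, 0.37])  # assumed distances

best = candidate_ids[int(np.argmin(third_stage_distances))]
print(f"detected target object: {best}")  # -> obj_42
```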
In step 226, the detection result is determined to be failure to detect the target object, i.e., the target object is not detected.
Once it is determined at any stage that the target object is not detected, feature extraction and comparison for the subsequent stages are not performed, saving resources.
Step 228, displaying the detection result in the detection page.
The detection result may be displayed in the detection page, as shown in the page example on the right of fig. 2A. Target information of the target object, such as its name, can be displayed, as can an image of the target object. The detection page may also provide a notification control, such as the "notify owner" control in fig. 2A, so that the owner can be contacted quickly when an animal or article is lost. The notification control can be provided on the basis that the user has reported the animal or article lost, as determined by requirements.
On the basis of the above embodiments, the present application further provides an object identity retrieval system, which provides an identification service for objects and can detect whether an object is a specific known object: a comparison is performed against a designated object in the system to determine whether the two are the same object.
Referring to fig. 3A, a schematic diagram of an example of a detection page according to an embodiment of the present application is shown.
Referring to FIG. 3B, a flowchart illustrating steps of an embodiment of an object recognition method of the present application is shown.
Step 302, displaying a detection page, where the detection page includes an upload control.
Step 304, in response to triggering of the upload control, determining at least one detection image of the object to be detected.
The object identity retrieval system provides a retrieval page, which can be accessed from various terminals such as mobile phones, tablet computers, and notebook computers. An example of the retrieval page is shown in fig. 3A: the retrieval page on the left provides an upload control, and in response to triggering of the upload control, at least one detection image of the object to be detected is acquired.
Step 306, in response to triggering of the identification control in the detection page, acquiring the target information of the target object.
When a designated object in the system is to be identified, the target information of the designated target object can be provided through the identification control, such as the object name, object identifier, or object breed. The object identifier is the object's unique identifier in the system and corresponds uniquely to one object. In some situations, the user may not be able to easily determine which designated object is involved; therefore, if more than one designated object exists, the information can be provided through the identification control so that retrieval is performed among the at least one designated target object.
Step 308, determining a reference feature vector of the target object according to the target information, where the reference feature vector includes: a whole-body reference feature vector, a face reference feature vector, and a key reference feature vector.
An object identifier is determined based on the target information, and the reference feature vectors of the target object can then be determined in the system, including: a whole-body reference feature vector, a facial reference feature vector, and a key reference feature vector.
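Resolving the target information to the stored per-stage reference vectors could be as simple as the lookup sketched below; the registry layout, the identifier, and the random placeholder vectors are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Object identifier -> per-stage reference feature vectors (placeholders here;
# in practice these are extracted at registration time and stored).
registry = {
    "pet_0001": {
        "whole_body": rng.standard_normal(512),
        "face": rng.standard_normal(512),
        "key_feature": rng.standard_normal(512),
    },
}


def reference_vectors(object_id: str) -> dict:
    """Return the whole-body, facial, and key reference vectors for an object."""
    return registry[object_id]
```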
Step 310, inputting a whole-body image in the at least one detection image into the first detection model, and determining a whole-body feature vector.
The first detection model detects whole-body features; for example, a residual network model may be used, trained on sample data containing whole-body images. The whole-body image is analyzed by the first detection model to obtain the corresponding whole-body feature vector.
Step 312, comparing the whole-body feature vector with the whole-body reference feature vector to determine a first comparison result.
During comparison, the vector distance between the whole-body feature vector and the whole-body reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the first comparison result.
Step 314, determining whether the first comparison result satisfies a first condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the first condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the first condition is not satisfied. If yes, go to step 316; if not, go to step 330.
Step 316, cropping a face image from the at least one detection image, and inputting the face image into a second detection model to obtain a facial feature vector.
The second detection model detects facial features; for example, a residual network model may be used, trained on sample data containing facial images. The face image is analyzed by the second detection model to obtain the corresponding facial feature vector. In some examples, the face image can be cropped from the whole-body image and then input into the second detection model for detection. In other examples, the face image can be uploaded separately, so that the image of the region where the face is located is cropped and the facial feature vector then detected.
Step 318, comparing the facial feature vector with the facial reference feature vector to determine a second comparison result.
Through the first-level feature comparison, the partially similar target objects can be determined, and the facial reference feature vectors of those target objects can then be acquired. During comparison, the vector distance between the facial feature vector and the facial reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the second comparison result.
Step 320, determining whether the second comparison result satisfies a second condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the second condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the second condition is not satisfied. If yes, go to step 322; if not, go to step 330.
Step 322, cropping a key feature image from the at least one detection image, and inputting the key feature image into a third detection model to obtain a key feature vector, where the key features are determined according to the kind of the object.
The third detection model detects key features, which are determined by the kind of object. For example, the key feature of a dog is the nose print; for a cat, the nose is proportionally small and its lines are not distinctive, so the nose print is less reliable as a criterion, and the coat pattern on the cat's body can be used as the key feature instead. Similarly, for a leopard, a specific combination of spot patterns can be used as the key feature.
When the whole-body features and the facial features have both been found similar, the key features can be further detected and compared. Key feature images can therefore be extracted, for example by extracting a coat-pattern image from the whole-body image or cropping a nose-print image from the face image, or the key feature image can be acquired directly. The third detection model processes it to obtain the key feature vector.
Step 324, comparing the key feature vector with the key reference feature vector of the target object, and determining a third comparison result.
The facial feature comparison further narrows the range of similar target objects, and the key reference feature vectors of the target objects whose facial features still match can then be acquired. During comparison, the vector distance between the key feature vector and the key reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the third comparison result.
Step 326, determine whether the third comparison result satisfies a third condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the third condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the third condition is not satisfied. If yes, go to step 328; if not, go to step 330.
Step 328, determining that the detection result is that the target object is successfully detected, where the target object is the object corresponding to the reference feature vector. The object information stored in the database for the target object, such as its name, identifier, and images, can be acquired and added to the detection result.
In step 330, the detection result is determined to be failure to detect the target object, i.e., the target object is not detected.
Once it is determined at any stage that the target object is not detected, feature extraction and comparison for the subsequent stages are not performed, saving resources.
Step 332, displaying the detection result in the detection page.
The detection result may be displayed in the detection page, as shown in the page example on the right of fig. 3A, where a successful comparison can be shown. If the target object was determined from among a plurality of target objects, the target information, such as the name, of the determined target object can be displayed, as can an image of the target object. The detection page may also provide a notification control, such as the "notify owner" control in fig. 3A, so that the owner can be contacted quickly if the object is lost. The notification control can be provided on the basis that the user has reported the object missing, as determined by requirements.
On the basis of the above embodiments, the embodiments of the present application further provide an object identity retrieval system, which provides a comparison service for objects and can detect whether an object is a designated object: an image of the target object to be compared is uploaded so that the two objects can be compared.
Referring to fig. 4A, a schematic diagram of an example of a detection page according to an embodiment of the present application is shown.
Referring to FIG. 4B, a flowchart illustrating steps of an embodiment of an object recognition method of the present application is shown.
Step 402, displaying a detection page, where the detection page includes an upload control.
Step 404, in response to triggering of the upload control, determining at least one detection image of the object to be detected.
The object identity retrieval system provides a retrieval page, which can be accessed from various terminals such as mobile phones, tablet computers, and notebook computers. An example of the retrieval page is shown in fig. 4A: the retrieval page on the left provides an upload control, and in response to triggering of the upload control, at least one detection image of the object to be detected is acquired.
Step 406, in response to triggering of the comparison control in the detection page, uploading at least one detection image of the target object.
Since the two objects are compared directly and the system has not previously analyzed the target object's features, a comparison control can be provided, through which at least one detection image of the target object is uploaded.
Step 408, inputting the whole-body image of the object to be detected and the whole-body image of the target object into the first detection model, and determining the corresponding whole-body feature vector and whole-body reference feature vector.
The first detection model detects whole-body features; for example, a residual network model may be used, trained on sample data containing whole-body images. Each whole-body image is analyzed by the first detection model to obtain the corresponding feature vector, thereby yielding the whole-body feature vector of the object to be detected and the whole-body reference feature vector of the target object.
Step 410, comparing the whole-body feature vector with the whole-body reference feature vector to determine a first comparison result.
During comparison, the vector distance between the whole-body feature vector and the whole-body reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the first comparison result.
In step 412, it is determined whether the first comparison result satisfies a first condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the first condition is satisfied. If the vector distance is not less than the distance threshold, or the similarity is not greater than the similarity threshold, it may be determined that the first condition is not satisfied. If yes, go to step 414; if not, go to step 428.
Step 414, cropping the face image of the object to be detected and the face image of the target object respectively, and inputting them into the second detection model to obtain the corresponding facial feature vector and facial reference feature vector.
The second detection model detects facial features; for example, a residual network model may be used, trained on sample data containing facial images. Each face image is analyzed by the second detection model to obtain the corresponding facial feature vector. In some examples, the face image can be cropped from the whole-body image and then input into the second detection model for detection. In other examples, the face image can be uploaded separately, so that the image of the region where the face is located is cropped and the facial feature vector then detected. The facial feature vector of the object to be detected and the facial reference feature vector of the target object are thereby obtained.
Step 416, comparing the facial feature vector with the facial reference feature vector to determine a second comparison result.
During comparison, the vector distance between the facial feature vector and the facial reference feature vector, or their similarity, can be calculated and then compared with the corresponding threshold to determine the second comparison result.
Step 418, determine whether the second comparison result satisfies a second condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the second condition is satisfied; otherwise, the second condition is not satisfied. If it is satisfied, step 420 is performed; if not, step 428 is performed.
Step 420, cropping the key feature image of the object to be detected and the key feature image of the target object respectively, and inputting each into a third detection model to obtain the corresponding key feature vector and key reference feature vector. The key features are determined according to the category of the object.
The third detection model detects key features, which depend on the kind of object. For example, the key feature of a dog is the nose print; for a cat, the nose occupies a small proportion of the face and its lines are not distinct enough, which lowers the value of the nose print as a basis for judgment, so the coat-color pattern on the cat's body can be used as the key feature instead. Likewise, for a leopard, a specific combination of spot patterns can serve as the key feature.
When the whole-body features and the facial features are both found to be similar, the key features can be further detected and compared. To this end a key feature image can be extracted, for example by extracting a coat-pattern image from the whole-body image, cropping a nose-print image from the face image, or directly acquiring the key feature image, and so on. The third detection model then processes it to obtain the key feature vector.
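The selection of the key feature by object category, and the cropping of the corresponding region, might be organized as in the following sketch. The category names, feature kinds, and the crop box are illustrative assumptions, not values specified by this embodiment.

    from PIL import Image

    # Which region serves as the key feature depends on the object's category.
    # The category names and feature kinds below are illustrative assumptions.
    KEY_FEATURE_BY_CATEGORY = {
        "dog": "nose_print",      # nose-print texture, cropped from the face image
        "cat": "coat_pattern",    # coat-color pattern, cropped from the whole-body image
        "leopard": "spot_combo",  # specific combination of spot patterns
    }

    def crop_key_feature(category: str, face_img: Image.Image,
                         body_img: Image.Image, box: tuple) -> Image.Image:
        """Crop the key-feature region; `box` is a (left, upper, right, lower)
        pixel box assumed to come from an upstream region detector."""
        kind = KEY_FEATURE_BY_CATEGORY.get(category, "coat_pattern")
        source = face_img if kind == "nose_print" else body_img
        return source.crop(box)  # fed to the third detection model afterwards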
Step 422, comparing the key feature vector with the key reference feature vector of the target object, and determining a third comparison result.
During the comparison, the vector distance between the key feature vector and the key reference feature vector may be calculated, or the similarity between the two may be computed, and so on; the third comparison result is then determined by comparing the distance or similarity against the corresponding threshold.
Step 424, determine whether the third comparison result satisfies a third condition.
If the vector distance is less than the distance threshold, or the similarity is greater than the similarity threshold, it may be determined that the third condition is satisfied; otherwise, the third condition is not satisfied. If it is satisfied, step 426 is performed; if not, step 428 is performed.
Step 426, determining that the detection result is that the target object is successfully detected, wherein the target object is the object corresponding to the reference feature vector.
In step 428, it is determined that the detection result is failure to detect the target object, i.e. the target object is not detected.
Once it is determined at any stage that the target object is not detected, feature extraction and comparison at subsequent stages are skipped, which saves resources.
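A minimal sketch of this early-exit cascade is shown below. Each stage is assumed to be a closure that performs its own feature extraction and comparison and returns a boolean, so that failing an early stage prevents any later extraction from running.

    from typing import Callable, List, Tuple

    def cascade_detect(stages: List[Tuple[str, Callable[[], bool]]]) -> str:
        """Run comparison stages in order; stop at the first failing stage so
        the more expensive later feature extraction is never performed."""
        for name, stage_passes in stages:
            if not stage_passes():
                return f"failure: target object not detected ({name} stage failed)"
        return "success: target object detected"

A call might look like cascade_detect([("whole-body", ...), ("face", ...), ("key-feature", ...)]), with the stages ordered from the most global feature to the most specific one.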
And step 430, displaying the detection result in the detection page.
The detection result may be displayed in the detection page. As in the example of fig. 3A, the detection result is displayed as shown in the page example on the right, where the comparison can be shown as successful.
Step 432, displaying a target image of the target object and a detection image of the object to be detected in the detection page, and marking comparison positions in the target image and the detection image respectively.
In the embodiment of the present application, the positions of the features in the image may also be located; for example, the position in the image corresponding to the key feature vector can be determined and marked in the image. The positions corresponding to the matched features can thus be marked both in the target image of the target object and in the detection image of the object to be detected, so that a user can intuitively see why the two are the same object or different objects.
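Marking the comparison positions could be sketched with Pillow as follows. The box coordinates and the label are purely illustrative and are assumed to come from the feature-location step described above.

    from PIL import Image, ImageDraw

    def mark_position(image: Image.Image, box: tuple, label: str) -> Image.Image:
        """Draw a rectangle and a label at the position of a matched feature."""
        annotated = image.copy()
        draw = ImageDraw.Draw(annotated)
        draw.rectangle(box, outline="red", width=3)
        draw.text((box[0], box[1] - 12), label, fill="red")
        return annotated

    # e.g. mark the matched nose-print region in both images:
    # target_marked = mark_position(target_img, (120, 80, 200, 150), "nose print")
    # probe_marked = mark_position(probe_img, (110, 90, 190, 160), "nose print")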
The above embodiments take object detection as an example; the actual processing can also be applied to other detection scenarios, for example comparison and recognition scenes in which the group appearance is similar but individual characteristics differ, such as for handmade products, antiques, and the like. The above embodiments may be executed independently by the server or the terminal device, or implemented through interaction between the server and the terminal device, as specifically set according to requirements; the embodiments of the present application do not limit this.
The following provides an embodiment of an interactive implementation, as shown in fig. 5:
step 502, a server provides a detection page, and the detection page comprises an uploading control. The detection page may be displayed at the client.
Step 504, the client determines at least one detection image of the object to be detected in response to the trigger of the uploading control.
Step 506, the client uploads the at least one detection image to the server, so that the server determines the feature vector of the object to be detected according to the detection model and compares it with the reference feature vector to determine the detection result.
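On the client side, the upload in step 506 might be sketched as below using the requests library. The endpoint URL and form-field names are assumptions, since the embodiment does not specify a transport format.

    import requests

    def upload_detection_images(paths, server_url="https://example.com/api/detect"):
        """Upload one or more detection images and return the server's result."""
        files = [("images", open(p, "rb")) for p in paths]
        try:
            resp = requests.post(server_url, files=files, timeout=30)
            resp.raise_for_status()
            return resp.json()  # e.g. {"result": "success"}
        finally:
            for _, f in files:
                f.close()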
Step 508, the server inputs the at least one detection image into a detection model, and determines a corresponding feature vector, where the feature vector includes: whole-body feature vectors and facial feature vectors.
Step 510, the server compares the feature vector with a reference feature vector to determine a detection result, wherein the detection result indicates whether the objects are the same or different.
Cascade feature detection can be carried out: a whole-body image in the at least one detection image is input into the first detection model to determine a whole-body feature vector, and the whole-body feature vector is compared with the whole-body reference feature vector to determine a first comparison result. If the first comparison result does not satisfy the first condition, the detection result is determined to be failure in detecting the target object, i.e. the target object is not detected. If the first comparison result satisfies the first condition, a face image is cropped from the at least one detection image and input into the second detection model to obtain a facial feature vector, which is compared with the facial reference feature vector to determine a second comparison result. If the second comparison result does not satisfy the second condition, the detection result is failure in detecting the target object; if it satisfies the second condition, the detection result is that the target object is successfully detected, i.e. the object corresponding to the reference feature vector is the target object.
After the second-level detection and comparison, a third-level detection can be executed: a key feature image is cropped from the at least one image and input into the third detection model to obtain a key feature vector, the key feature being determined according to the kind of the object. The key feature vector is compared with the key reference feature vector of the target object to determine a third comparison result. If the third comparison result does not satisfy the third condition, the detection result is failure in detecting the target object; if it satisfies the third condition, the detection result is that the target object is successfully detected, i.e. the object corresponding to the reference feature vector is the target object. Three levels of features are taken as an example above; in actual processing, a fourth level, a fifth level, and further levels of detection and comparison can be configured according to requirements, which the embodiments of the present application do not limit.
And step 512, the server side sends the detection result.
In step 514, the client displays the detection result in the detection page.
Therefore, the object comparison can be conveniently carried out based on the image, and whether the objects are the same object or not can be determined.
The embodiment of the application provides a cascading feature comparison approach that comprehensively utilizes image information of a pet's whole body, face, and nose, alleviating the problem of relying on the nose-print feature alone and improving the robustness and accuracy of the recognition technique. For a cat, the nose occupies a small proportion and its lines are not distinct enough, so the importance of the nose print as a basis for judgment is reduced; however, the coat-color pattern and facial features of a cat are distinctive, and using all three features together strengthens the constraints during recognition, so that the identities of objects such as cats can also be recognized and recognition can be carried out flexibly on the basis of an object's key features.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
On the basis of the above embodiments, the present embodiment further provides an identity detection apparatus, which is applied to electronic devices such as a terminal device and a server device.
And the page providing module is used for providing a detection page, and the detection page comprises an uploading control.
And the page response module is used for responding to the trigger of the uploading control and determining at least one detection image of the object to be detected.
A feature extraction module, configured to input the at least one detection image into a detection model, and determine a corresponding feature vector, where the feature vector includes: a multi-level feature vector.
And the comparison module is used for comparing the characteristic vector with the reference characteristic vector to determine a detection result.
And the result display module is used for displaying the detection result in the detection page.
In summary, a detection page may be provided, and at least one detection image of the object to be detected is determined in response to the triggering of the upload control, so that images can conveniently be uploaded to detect the object. The at least one detection image may be input into a detection model to determine a corresponding feature vector comprising multi-level feature vectors for cascade feature detection. The feature vector is compared with the reference feature vector to determine the detection result, which is then displayed in the detection page, so that objects can be quickly retrieved and recognized through the multi-level features, improving processing efficiency.
In an optional embodiment, the feature extraction module includes:
and the first feature extraction submodule is used for inputting a whole body image in the at least one detection image into the first detection model and determining a whole body feature vector.
And the second feature extraction submodule is used for cropping a face image from the at least one detection image and inputting the face image into a second detection model to obtain a facial feature vector.
The third feature extraction submodule is used for cropping a key feature image from the at least one detection image under the condition that the second comparison result satisfies the second condition; and inputting the key feature image into a third detection model to obtain a key feature vector, wherein the key feature is determined according to the type of the object.
The comparison module comprises:
the first comparison sub-module is used for comparing the whole-body characteristic vector with the whole-body reference characteristic vector to determine a first comparison result; under the condition that the first comparison result is judged not to meet the first condition, determining that the detection result is that the target object is not detected; and under the condition that the first comparison result meets the first condition, triggering a second comparison module.
The second comparison sub-module is used for comparing the facial feature vector with the facial reference feature vector to determine a second comparison result; determining that the detection result is that the target object is not detected under the condition that the second comparison result does not satisfy the second condition; and triggering the third comparison sub-module under the condition that the second comparison result satisfies the second condition.
The third comparison sub-module is used for comparing the key feature vector with the key reference feature vector of the target object to determine a third comparison result; determining that the detection result is that the target object is not detected under the condition that the third comparison result does not satisfy the third condition; and determining, under the condition that the third comparison result satisfies the third condition, that the detection result is that the target object is successfully detected, the target object being the object corresponding to the reference feature vector.
The page response module is further configured to respond to triggering of the identification control in the detection page, and acquire target information of the target object, so as to determine a reference feature vector of the target object according to the target information.
The result display module is also used for displaying a target image of a target object and a detection image of an object to be detected in the detection page; and marking comparison positions in the target image and the detection image respectively.
The embodiment of the application further provides a detection device applied to the electronic equipment of the client.
The display module is used for providing a detection page, wherein the detection page comprises an uploading control; and for receiving a detection result and displaying the detection result in the detection page.
The uploading module is used for responding to the triggering of the uploading control and determining at least one detection image of the object to be detected; and uploading the at least one detection image to a server, so that the server determines the feature vector of the object to be detected according to the detection model and compares it with the reference feature vector to determine the detection result.
The embodiment of the application also provides a detection device which is applied to the electronic equipment of the server side.
The communication module is used for receiving at least one detection image of an object to be detected and determining a target object; and sending the detection result;
a detection module, configured to input the at least one detection image into a detection model and determine a corresponding feature vector, where the feature vector includes: a whole-body feature vector and a facial feature vector; and to compare the feature vector with a reference feature vector to determine a detection result, where the detection result indicates whether the objects are the same or different.
The present application further provides a non-transitory readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the device is caused to execute the instructions of the method steps described in this application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an electronic device to perform the methods as described in one or more of the above embodiments. In the embodiment of the present application, the electronic device includes various types of devices such as a terminal device and a server (cluster).
Embodiments of the present disclosure may be implemented as an apparatus, which may include electronic devices such as a terminal device, a server (cluster), etc. within a data center, using any suitable hardware, firmware, software, or any combination thereof, in a desired configuration. Fig. 6 schematically illustrates an example apparatus 600 that may be used to implement various embodiments described herein.
For one embodiment, fig. 6 illustrates an exemplary apparatus 600 having one or more processors 602, a control module (chipset) 604 coupled to at least one of the processor(s) 602, a memory 606 coupled to the control module 604, a non-volatile memory (NVM)/storage 608 coupled to the control module 604, one or more input/output devices 610 coupled to the control module 604, and a network interface 612 coupled to the control module 604.
The processor 602 may include one or more single-core or multi-core processors, and the processor 602 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 600 can be used as a terminal device, a server (cluster), or the like in the embodiments of the present application.
In some embodiments, apparatus 600 may include one or more computer-readable media (e.g., memory 606 or NVM/storage 608) having instructions 614, and one or more processors 602 coupled to the one or more computer-readable media and configured to execute the instructions 614 to implement modules that perform the actions described in this disclosure.
For one embodiment, control module 604 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 602 and/or any suitable device or component in communication with control module 604.
Control module 604 may include a memory controller module to provide an interface to memory 606. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 606 may be used, for example, to load and store data and/or instructions 614 for device 600. For one embodiment, memory 606 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 606 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, control module 604 may include one or more input/output controllers to provide an interface to NVM/storage 608 and input/output device(s) 610.
For example, NVM/storage 608 may be used to store data and/or instructions 614. NVM/storage 608 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 608 may include storage resources that are physically part of the device on which apparatus 600 is installed, or it may be accessible by the device and need not be part of the device. For example, NVM/storage 608 may be accessible over a network via input/output device(s) 610.
Input/output device(s) 610 may provide an interface for apparatus 600 to communicate with any other suitable device; input/output devices 610 may include communication components, audio components, sensor components, and so forth. The network interface 612 may provide an interface for the device 600 to communicate over one or more networks; the device 600 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example a communication-standard-based wireless network such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 602 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of the control module 604. For one embodiment, at least one of the processor(s) 602 may be packaged together with logic for one or more controller(s) of the control module 604 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 602 may be integrated on the same die with logic for one or more controller(s) of the control module 604. For one embodiment, at least one of the processor(s) 602 may be integrated on the same die with logic of one or more controllers of the control module 604 to form a system on a chip (SoC).
In various embodiments, the apparatus 600 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, apparatus 600 may have more or fewer components and/or different architectures. For example, in some embodiments, device 600 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
The detection device can adopt a main control chip as the processor or control module; sensor data, position information, and the like can be stored in the memory or the NVM/storage device; the sensor group can serve as the input/output device; and the communication interface can include the network interface.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The identity detection method, device, and machine-readable medium provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An identity detection method, the method comprising:
providing a detection page, wherein the detection page comprises an uploading control;
responding to the trigger of the uploading control, and determining at least one detection image of the object to be detected;
inputting the at least one detection image into a detection model, and determining a corresponding feature vector, wherein the feature vector comprises: a multi-level feature vector;
comparing the feature vector with a reference feature vector to determine a detection result, wherein the detection result comprises: success or failure in detecting the target object;
and displaying the detection result in the detection page.
2. The method of claim 1, wherein the inputting the at least one detection image into a detection model and determining a corresponding feature vector comprises:
inputting a whole-body image in the at least one detection image into a first detection model, and determining a whole-body feature vector;
and cropping a face image from the at least one detection image, and inputting the face image into a second detection model to obtain a facial feature vector.
3. The method of claim 2, wherein the determining the detection result by comparing the feature vector with a reference feature vector comprises:
comparing the whole-body feature vector with a whole-body reference feature vector to determine a first comparison result;
determining that the detection result is failure to detect the target object under the condition that the first comparison result does not satisfy the first condition;
comparing the facial feature vector with a facial reference feature vector to determine a second comparison result under the condition that the first comparison result satisfies the first condition;
determining that the detection result is failure to detect the target object under the condition that the second comparison result does not satisfy the second condition;
and determining that the detection result is that the target object is successfully detected under the condition that the second comparison result satisfies the second condition, wherein the target object is the object corresponding to the reference feature vector.
4. The method of claim 3, wherein the inputting the at least one detection image into a detection model and determining a corresponding feature vector further comprises:
cropping a key feature image from the at least one detection image under the condition that the second comparison result satisfies the second condition;
and inputting the key feature image into a third detection model to obtain a key feature vector, wherein the key feature is determined according to the type of the object.
5. The method of claim 4, wherein the determining the detection result by comparing the feature vector with a reference feature vector comprises:
comparing the key feature vector with a key reference feature vector of the target object to determine a third comparison result;
determining that the detection result is failure to detect the target object under the condition that the third comparison result does not satisfy the third condition;
and determining that the detection result is that the target object is successfully detected under the condition that the third comparison result satisfies the third condition, wherein the target object is the object corresponding to the reference feature vector.
6. The method of claim 1, further comprising:
and responding to a trigger of the identification control in the detection page, acquiring target information of the target object, and determining a reference feature vector of the target object according to the target information.
7. The method of claim 1, further comprising:
displaying a target image of a target object and a detection image of an object to be detected in the detection page;
and marking comparison positions in the target image and the detection image respectively.
8. A method of detection, the method comprising:
providing a detection page, wherein the detection page comprises an uploading control;
responding to the trigger of the uploading control, and determining at least one detection image of the object to be detected;
uploading the at least one detection image to a server, so that the server determines a feature vector of the object to be detected according to the detection model and compares the feature vector with a reference feature vector to determine a detection result;
and receiving a detection result, and displaying the detection result in the detection page.
9. An electronic device, comprising: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of claims 1-8.
10. One or more machine-readable media having executable code stored thereon that, when executed, causes a processor to perform the method of any of claims 1-8.
CN202111621064.XA 2021-12-27 2021-12-27 Identity detection method, device and readable medium Pending CN114385993A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111621064.XA CN114385993A (en) 2021-12-27 2021-12-27 Identity detection method, device and readable medium
PCT/CN2022/120593 WO2023124295A1 (en) 2021-12-27 2022-09-22 Identity detection method and device, and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111621064.XA CN114385993A (en) 2021-12-27 2021-12-27 Identity detection method, device and readable medium

Publications (1)

Publication Number Publication Date
CN114385993A true CN114385993A (en) 2022-04-22

Family

ID=81198571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111621064.XA Pending CN114385993A (en) 2021-12-27 2021-12-27 Identity detection method, device and readable medium

Country Status (2)

Country Link
CN (1) CN114385993A (en)
WO (1) WO2023124295A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463123A (en) * 2014-12-11 2015-03-25 南威软件股份有限公司 B/S-based face recognition method and system
CN108090433B (en) * 2017-12-12 2021-02-19 厦门集微科技有限公司 Face recognition method and device, storage medium and processor
CN111262887B (en) * 2020-04-26 2020-08-28 腾讯科技(深圳)有限公司 Network risk detection method, device, equipment and medium based on object characteristics
CN112528265A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Identity recognition method, device, equipment and medium based on online conference
CN114385993A (en) * 2021-12-27 2022-04-22 阿里巴巴(中国)有限公司 Identity detection method, device and readable medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124295A1 (en) * 2021-12-27 2023-07-06 阿里巴巴(中国)有限公司 Identity detection method and device, and readable medium

Also Published As

Publication number Publication date
WO2023124295A1 (en) 2023-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination