CN112906728A - Feature comparison method, device and equipment - Google Patents

Feature comparison method, device and equipment

Publication number
CN112906728A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201911227516.9A
Other languages
Chinese (zh)
Other versions
CN112906728B (en)
Inventor
王贤礼
方家乐
徐楠
王鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911227516.9A priority Critical patent/CN112906728B/en
Publication of CN112906728A publication Critical patent/CN112906728A/en
Application granted granted Critical
Publication of CN112906728B publication Critical patent/CN112906728B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a feature comparison method, apparatus and device, where the method includes: obtaining M features to be detected and N feature sets; selecting one feature set from the N feature sets as the current feature set; for each of the M features to be detected, determining the comparison result between the feature to be detected and each feature model in the current feature set, and determining candidate feature models corresponding to the feature to be detected according to the comparison results; if the current feature set is not the last of the N feature sets, selecting the next feature set from the N feature sets as the current feature set, and returning to the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set; and if the current feature set is the last of the N feature sets, obtaining a target feature model corresponding to the feature to be detected from the candidate feature models. This technical solution improves the operation speed of feature comparison.

Description

Feature comparison method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for comparing features.
Background
Machine learning is one way to realize artificial intelligence. It is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, computational complexity theory and other subjects. Machine learning studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to improve their performance. It focuses on algorithm design, enabling a computer to learn rules from data automatically and to use those rules to predict unknown data.
In machine learning, feature comparison is an effective technique that is widely applied in fields such as face recognition, human body recognition and vehicle recognition. Feature comparison means comparing a feature to be detected with the feature models in a feature library one by one, and then screening out the feature models that satisfy the requirements from the feature library.
Because the number of feature models in the feature library is very large, comparing a feature to be detected with the feature models one by one involves a very large amount of computation, so the operation speed of feature comparison is very slow. How to improve the operation speed of feature comparison has therefore become a problem to be solved.
Disclosure of Invention
The application provides a feature comparison method, which includes the following steps:
obtaining M features to be detected and N feature sets, where M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as the current feature set;
for each feature to be detected in the M features to be detected, determining the comparison result between the feature to be detected and each feature model in the current feature set, and determining candidate feature models corresponding to the feature to be detected according to the comparison results;
if the current feature set is not the last of the N feature sets, selecting the next feature set after the current feature set from the N feature sets as the current feature set, and returning to the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, obtaining a target feature model corresponding to the feature to be detected from the candidate feature models.
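The steps above can be sketched as follows. This is a minimal illustration in which `compare` and the candidate/target selection rules are hypothetical placeholders for the comparison and screening operations the method leaves abstract:

```python
# Minimal sketch of the claimed method. `compare` and the candidate/target
# selection rules are placeholders; the application does not fix them.

def compare(feature, model):
    # Placeholder similarity: negative absolute difference (higher is better).
    return -abs(feature - model)

def feature_comparison(features, feature_sets, k=2):
    """For each of the M features, scan the N feature sets in order,
    keep up to k candidates per set, then pick the best overall."""
    candidates = {f: [] for f in features}
    for feature_set in feature_sets:            # steps for iterating the N sets
        for f in features:                      # the batch of M features
            scored = [(compare(f, m), m) for m in feature_set]
            scored.sort(reverse=True)
            candidates[f].extend(m for _, m in scored[:k])
    # After the last set, choose the target from the accumulated candidates.
    return {f: max(candidates[f], key=lambda m: compare(f, m))
            for f in features}

targets = feature_comparison([5, 42], [[1, 6, 9], [40, 44, 100]])
```

Note that the outer loop visits each feature set once while all M features are compared against it, which is the batching idea developed in the detailed description below.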
The application provides a feature comparison apparatus, which includes:
an acquisition module, configured to obtain M features to be detected and N feature sets, where M and N are both positive integers greater than 1;
a selection module, configured to select a first feature set from the N feature sets as the current feature set;
a determining module, configured to determine, for each feature to be detected in the M features to be detected, the comparison result between the feature to be detected and each feature model in the current feature set, and to determine candidate feature models corresponding to the feature to be detected according to the comparison results;
the selection module is further configured to select the next feature set after the current feature set from the N feature sets as the current feature set if the current feature set is not the last of the N feature sets;
the acquisition module is further configured to obtain a target feature model corresponding to the feature to be detected from the candidate feature models if the current feature set is the last of the N feature sets.
The application provides a feature comparison device, which includes a CPU and a machine-readable storage medium storing machine-executable instructions executable by the CPU;
the CPU is configured to execute the machine-executable instructions to perform the following steps:
obtaining M features to be detected and N feature sets, where M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as the current feature set;
for each feature to be detected in the M features to be detected, determining the comparison result between the feature to be detected and each feature model in the current feature set, and determining candidate feature models corresponding to the feature to be detected according to the comparison results;
if the current feature set is not the last of the N feature sets, selecting the next feature set after the current feature set from the N feature sets as the current feature set, and returning to the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, obtaining a target feature model corresponding to the feature to be detected from the candidate feature models.
According to the above technical solution, M features to be detected can be obtained, and the comparison result between each of the M features to be detected and each feature model in the current feature set is determined; that is, the comparison is performed for all M features to be detected together, so that M features are processed at a time, which improves the operation speed of feature comparison and the data processing speed of the CPU.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings.
FIG. 1 is a flow chart of a feature comparison method in one embodiment of the present application;
FIG. 2 is a flow chart of a feature comparison method in another embodiment of the present application;
FIG. 3 is a schematic diagram of a data processing method in one embodiment of the present application;
FIG. 4 is a flow chart of a feature comparison method in another embodiment of the present application;
FIG. 5 is a structural diagram of a feature comparison apparatus in an embodiment of the present application;
FIG. 6 is a structural diagram of a feature comparison device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Moreover, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
Feature comparison is widely applied in fields such as face recognition, human body recognition and vehicle recognition. Feature comparison means comparing a feature to be detected with the feature models in a feature library one by one, and then screening out the feature models that satisfy the requirements from the feature library. Because the number of feature models in the feature library is very large, comparing a feature to be detected with the feature models one by one involves a very large amount of computation and is therefore very slow; how to improve the operation speed of feature comparison has become a problem to be solved.
In one possible implementation, feature comparison may be implemented by a GPU (Graphics Processing Unit). Since a large number of GPUs may be deployed on a physical device, each GPU can compare the feature to be detected with a plurality of feature models in the feature library, thereby increasing the operation speed linearly. However, this approach is highly dependent on the physical device: the device must include both a CPU (Central Processing Unit) and a GPU, so it cannot be used on physical devices that have no GPU.
In another possible implementation, feature comparison may be implemented by the CPU, that is, the CPU compares the feature to be detected with a plurality of feature models in the feature library, but the operation speed of this approach is slow. For example, suppose feature comparison is performed between feature 1 to be detected and all feature models in the feature library (taking feature models 1-1000 as an example; in practice the number of feature models is much larger than 1000). Then:
The feature models 1-1000 are stored in memory. To perform the feature comparison, the CPU loads feature models 1-100 into the cache (because the storage space of the cache is limited, it can hold only part of the feature models in memory; feature models 1-100 are taken as an example), loads feature models 1-10 into the registers (because the storage space of the registers is limited, they can hold only part of the feature models in the cache; feature models 1-10 are taken as an example), and loads feature 1 to be detected into a register.
Then the CPU compares feature 1 to be detected with each of feature models 1-10 in the registers. After this comparison is completed, the CPU loads feature models 11-20 into the registers, replacing feature models 1-10. The CPU compares feature 1 with each of feature models 11-20, and so on, until feature 1 has been compared with each of feature models 91-100. The CPU then loads feature models 101-200 into the cache, replacing feature models 1-100, loads feature models 101-110 into the registers, replacing feature models 91-100, and compares feature 1 with each of feature models 101-110, and so on, until feature 1 has been compared with all of feature models 1-1000 and a feature comparison result is obtained.
Similarly, the other features to be detected, such as feature 2 and feature 3, are processed in the same manner, and the details are not repeated here. Clearly, for each feature to be detected, the CPU needs to perform 10 copy operations from memory to cache and 100 copy operations from cache to register. If there are 8 features to be detected, the CPU needs to perform 80 (8 × 10) copy operations from memory to cache and 800 (8 × 100) copy operations from cache to register. In summary, the large number of copy operations from memory to cache and from cache to register makes feature comparison slow, and it becomes slower as the number of features to be detected increases.
In view of the above problems, in the embodiments of the present application, M features to be detected may be obtained and compared together, so that the M features to be detected are processed at a time and the operation speed of feature comparison is improved. Suppose M is 8 and the 8 features to be detected are features 1-8. Then:
in order to realize the feature comparison, the CPU stores the feature models 1-100 into the buffer, the CPU stores the feature models 1-10 into the register, and the CPU stores the features to be detected 1-8 into the register.
Then the CPU compares feature 1 to be detected with each of feature models 1-10 in the registers, compares feature 2 with each of feature models 1-10, and so on, until feature 8 has been compared with each of feature models 1-10; that is, the comparison of all 8 features to be detected against feature models 1-10 is completed in one pass.
After this comparison is completed, the CPU loads feature models 11-20 into the registers, replacing feature models 1-10. The CPU compares feature 1 with each of feature models 11-20, and so on, until feature 8 has been compared with each of feature models 11-20; again, the comparison of all 8 features to be detected is completed together.
This continues until features 1-8 have been compared with each of feature models 91-100. The CPU then loads feature models 101-200 into the cache, replacing feature models 1-100, and loads feature models 101-110 into the registers, replacing feature models 91-100. The CPU can then compare features 1-8 with each of feature models 101-110.
This process is repeated until features 1-8 have been compared with all of feature models 1-1000, yielding the feature comparison results of the 8 features to be detected.
In summary, for 8 features to be detected, the CPU performs only 10 copy operations from memory to cache and 100 copy operations from cache to register. Compared with the approach above, the number of copy operations from memory to cache is reduced significantly (from 80 to 10), and the number of copy operations from cache to register is reduced significantly (from 800 to 100). Clearly, reducing the number of memory-to-cache and cache-to-register copy operations improves the operation speed of feature comparison.
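The copy-count arithmetic above can be checked with a short sketch. The block sizes used here (a cache block of 100 models and a register block of 10) come from the example and are not values fixed by the application:

```python
# Sketch of the copy-operation accounting in the example above: one pass over
# 1000 feature models, moved in cache blocks of 100 and register blocks of 10.

MODELS = 1000
CACHE_BLOCK = 100
REG_BLOCK = 10

def copy_ops(num_features, batched):
    """Return (memory-to-cache copies, cache-to-register copies)."""
    # Unbatched: the whole library is streamed once per feature.
    # Batched: all features ride along on a single pass over the library.
    passes = 1 if batched else num_features
    mem_to_cache = passes * (MODELS // CACHE_BLOCK)
    cache_to_reg = passes * (MODELS // REG_BLOCK)
    return mem_to_cache, cache_to_reg

# One feature at a time: 8 features cost 80 and 800 copy operations.
assert copy_ops(8, batched=False) == (80, 800)
# Batched: the same 8 features cost only 10 and 100 copy operations.
assert copy_ops(8, batched=True) == (10, 100)
```

The reduction factor is exactly M, which is why processing M features at a time speeds up the comparison.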
The technical solutions of the embodiments of the present application will be described in detail below with reference to specific embodiments.
The embodiment of the present application provides a feature comparison method, which may be applied to a feature comparison device, where the feature comparison device may include a CPU (such as a single-core CPU or a multi-core CPU), and the feature comparison device may be any type of device, such as a personal computer, a server, an imaging device, a storage device, a notebook computer, a smart phone, a mobile terminal, and the like, and the feature comparison device is not limited to this, as long as the feature comparison device includes a CPU. Referring to fig. 1, a schematic flow chart of the feature comparison method is shown, and the method may be applied to a CPU, and the method may include:
Step 101, obtaining M features to be detected and N feature sets, where M and N are both positive integers greater than 1; the values of M and N may be configured empirically.
Illustratively, each time the CPU obtains a feature to be detected, it adds the feature to a processing queue. The CPU periodically checks whether the number of features to be detected in the processing queue has reached M. If so, it obtains M features to be detected from the processing queue; if not, it continues to wait until the number of features to be detected in the processing queue reaches M, and then obtains M features to be detected from the processing queue.
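A sketch of this batching behaviour (the structure is hypothetical; the application does not fix the queue or polling details):

```python
# Hypothetical sketch of batching features to be detected into groups of M.
from collections import deque

M = 8

class FeatureBatcher:
    def __init__(self):
        self.queue = deque()

    def add(self, feature):
        """Called each time a new feature to be detected is obtained."""
        self.queue.append(feature)

    def poll(self):
        """Periodic check: return a batch of M features, or None to keep waiting."""
        if len(self.queue) >= M:
            return [self.queue.popleft() for _ in range(M)]
        return None

batcher = FeatureBatcher()
for i in range(10):
    batcher.add(i)
batch = batcher.poll()   # 10 >= 8, so a batch of the first 8 features is returned
```

A second `poll()` at this point would return `None`, since only 2 features remain queued.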
For example, a feature library may be created by using a conventional modeling algorithm, and the feature library may include a large number of feature models, and the creation process of the feature library is not limited. For example, in the field of face recognition, a feature library for face recognition is established, and the feature library includes a large number of feature models for face recognition. In the field of human body recognition, a feature library for human body recognition is established, and the feature library comprises a large number of feature models for human body recognition. In the field of vehicle identification, a feature library for vehicle identification is created, which includes a plurality of feature models for vehicle identification. Of course, the above are only a few examples and are not limiting.
Each feature model in the feature library is a string of binary data that describes a target through an algorithm, and can be represented by a large number of feature points.
For example, all feature models in the feature library may be divided into N feature sets, and the number of feature models in different feature sets may be the same or different. For example, the feature library includes feature models 1 to 1000, the feature models 1 to 100 are divided into a feature set 1, the feature models 101 to 200 are divided into a feature set 2, and so on, the feature models 901 to 1000 are divided into a feature set 10. Of course, the above is only an example, and the feature model may be divided in any manner without limitation.
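One way to realize this division, assuming roughly equal set sizes (a hypothetical helper; the embodiment allows any manner of division):

```python
# Sketch of dividing a feature library into N feature sets of roughly equal
# size. The application permits any division; this is just one option.

def partition(feature_library, n):
    size = (len(feature_library) + n - 1) // n   # ceiling division
    return [feature_library[i:i + size]
            for i in range(0, len(feature_library), size)]

library = list(range(1, 1001))    # stand-ins for feature models 1-1000
sets = partition(library, 10)     # feature set 1 holds models 1-100, etc.
```

With 1000 models and N = 10 this reproduces the example: set 1 holds models 1-100, set 2 holds models 101-200, and so on through set 10.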
Step 102, selecting a first feature set from the N feature sets as the current feature set.
Step 103, determining a comparison result between the feature to be detected and each feature model in the current feature set for each feature to be detected in the M features to be detected, and determining a candidate feature model corresponding to the feature to be detected according to the comparison result between the feature to be detected and each feature model.
Step 104, if the current feature set is not the last of the N feature sets, selecting the next feature set after the current feature set from the N feature sets as the current feature set, returning to step 103, and executing the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set.
Step 105, if the current feature set is the last of the N feature sets, obtaining the target feature model corresponding to the feature to be detected from the candidate feature models corresponding to the feature to be detected.
In one possible implementation, determining the candidate feature models corresponding to the feature to be detected according to the comparison results between the feature to be detected and each feature model may include: selecting K feature models from the current feature set according to the comparison results, and determining the selected K feature models as candidate feature models corresponding to the feature to be detected.
Obtaining the target feature model corresponding to the feature to be detected from the candidate feature models corresponding to the feature to be detected may include: selecting a candidate feature model from all of the candidate feature models as the target feature model corresponding to the feature to be detected, according to the comparison result between the feature to be detected and each candidate feature model.
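A sketch of this first implementation, using a hypothetical `score` function to stand in for the comparison result (higher means more similar):

```python
# Sketch of the first implementation: keep the K best feature models from each
# set as candidates, then choose the target among all accumulated candidates.
# `score` is a hypothetical comparison result, not the patent's actual measure.
import heapq

def score(feature, model):
    return -abs(feature - model)     # placeholder similarity

def candidates_per_set(feature, feature_set, k=2):
    """Select the K feature models of this set that compare best."""
    return heapq.nlargest(k, feature_set, key=lambda m: score(feature, m))

def target_from_candidates(feature, candidates):
    """After the last set, pick the best candidate as the target model."""
    return max(candidates, key=lambda m: score(feature, m))

feature = 50
sets = [[10, 48, 90], [52, 200, 300]]
cands = [m for s in sets for m in candidates_per_set(feature, s)]
target = target_from_candidates(feature, cands)
```

With N sets and K candidates per set, the final selection runs over only N × K candidates rather than the whole library.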
The following describes steps 102 to 105 with reference to a specific application scenario.
In step 102, N feature sets may be sorted, the sorting manner is not limited, and sorting may be performed in any manner, such as feature set 1, feature set 2, …, and feature set 10, and then a first feature set (e.g., feature set 1) is selected from the N feature sets as the current feature set.
In step 103, for the feature 1 to be detected in the M features to be detected, a comparison result between the feature 1 to be detected and each feature model in the feature set 1 may be determined. Based on the comparison result between the feature 1 to be detected and each feature model, K feature models may be selected from the feature set 1, for example, if K is 2, the feature model 11 and the feature model 12 may be selected from the feature set 1, and the selected feature model 11 and the selected feature model 12 are determined as candidate feature models of the feature 1 to be detected.
For other features to be detected in the M features to be detected, the processing procedure is similar to that of the feature to be detected 1, and is not repeated here, so that a candidate feature model of each feature to be detected can be obtained.
In step 104, since the feature set 1 is not the last of the N feature sets, a next feature set (e.g., feature set 2) of the feature set 1 may be selected from the N feature sets as the current feature set, and the process returns to step 103. When step 103 is executed again, for the feature 1 to be detected in the M features to be detected, a comparison result between the feature 1 to be detected and each feature model in the feature set 2 is determined. Based on the comparison result between the feature 1 to be detected and each feature model, 2 feature models can be selected from the feature set 2, and if the feature model 21 and the feature model 22 are selected, the selected feature model 21 and the selected feature model 22 can be determined as candidate feature models of the feature 1 to be detected. For other features to be detected in the M features to be detected, the processing procedure is similar to that of feature 1 to be detected.
In step 104, since the feature set 2 is not the last of the N feature sets, a next feature set (e.g., feature set 3) of the feature set 2 may be selected from the N feature sets as a current feature set, and the process returns to step 103, and so on until the current feature set is the feature set 10, 2 feature models are selected from the feature set 10 as candidate feature models of the features to be detected.
Since the feature set 10 is the last of the N feature sets, the target feature model corresponding to the feature to be detected is obtained from all candidate feature models corresponding to the feature to be detected. For example, the feature 1 to be detected corresponds to 20 candidate feature models (such as the feature model 11, the feature model 12, the feature model 21, the feature model 22, and the like), and at least one candidate feature model may be selected from all candidate feature models according to a comparison result between the feature 1 to be detected and each candidate feature model to serve as a target feature model corresponding to the feature 1 to be detected, for example, 2 candidate feature models (such as the feature model 21, the feature model 62) may be selected from all candidate feature models to serve as a target feature model corresponding to the feature 1 to be detected.
For other features to be detected in the M features to be detected, the processing procedure is similar to that of the feature to be detected 1, and is not repeated here, so that a target feature model of each feature to be detected can be obtained.
In summary, the target feature model of each feature to be detected in the M features to be detected can be obtained.
In another possible implementation, determining the candidate feature models corresponding to the feature to be detected according to the comparison results may include: selecting K feature models from the current feature set according to the comparison result between the feature to be detected and each feature model; then, according to the comparison results between the feature to be detected and the K selected feature models and between the feature to be detected and the previously recorded candidate feature models, obtaining at least one feature model from the K feature models and the recorded candidate feature models, and recording the obtained feature models as the new candidate feature models.
Obtaining the target feature model corresponding to the feature to be detected from the candidate feature models may include: determining the recorded candidate feature models as the target feature model corresponding to the feature to be detected.
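A sketch of this second implementation, again with a hypothetical `score` standing in for the comparison result: after each feature set, the K newly selected models are merged with the recorded candidates and only the best K are kept, so the candidates surviving the last set are already the target feature models:

```python
# Sketch of the second implementation: maintain a running candidate list of at
# most K models across the feature sets. `score` is a hypothetical similarity.
import heapq

def score(feature, model):
    return -abs(feature - model)     # placeholder similarity

def update_candidates(feature, recorded, feature_set, k=2):
    """Merge the K best models of this set with the recorded candidates,
    and keep only the K best of the merged list."""
    new_k = heapq.nlargest(k, feature_set, key=lambda m: score(feature, m))
    merged = recorded + new_k
    return heapq.nlargest(k, merged, key=lambda m: score(feature, m))

feature = 50
recorded = []
for feature_set in [[10, 48, 90], [52, 200, 300]]:
    recorded = update_candidates(feature, recorded, feature_set)
# After the last set, `recorded` already holds the target feature models.
```

Compared with the first implementation, no separate final selection pass is needed, at the cost of a small merge after every feature set.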
The following describes steps 102 to 105 with reference to a specific application scenario.
In step 102, N feature sets, for example, feature set 1, feature set 2, …, and feature set 10, are ranked, and feature set 1 is selected from the N feature sets as a current feature set.
In step 103, for the feature 1 to be detected in the M features to be detected, a comparison result between the feature 1 to be detected and each feature model in the feature set 1 may be determined. Based on the comparison result between the feature 1 to be detected and each feature model, K feature models may be selected from the feature set 1, for example, if K is 2, the feature models 11 and 12 may be selected from the feature set 1. Since there is no candidate feature model that has been recorded at present, the feature models 11 and 12 are directly recorded as candidate feature models of the feature 1 to be detected. For other features to be detected in the M features to be detected, the processing procedure is similar to that of the feature to be detected 1, and the description is not repeated here.
In step 104, since the feature set 1 is not the last of the N feature sets, the next feature set 2 of the feature set 1 is selected from the N feature sets as the current feature set, and the process returns to step 103.
When step 103 is executed again, for feature 1 to be detected among the M features to be detected, a comparison result between feature 1 to be detected and each feature model in feature set 2 is determined, and, based on these comparison results, feature model 21 and feature model 22 may be selected from feature set 2. Since candidate feature models have already been recorded (feature model 11 and feature model 12), at least one feature model (e.g., 2 feature models, such as feature model 11 and feature model 22) is obtained from among feature model 21, feature model 22, feature model 11, and feature model 12 according to the comparison results between feature 1 to be detected and each of these four feature models, and feature model 11 and feature model 22 are recorded as the new candidate feature models of feature 1 to be detected. The processing procedure of the other features to be detected is similar to that of feature 1 to be detected and is not repeated here.
In step 104, since feature set 2 is not the last of the N feature sets, the next feature set, feature set 3, is selected from the N feature sets as the current feature set, and the process returns to step 103.
In step 103, for feature 1 to be detected among the M features to be detected, feature model 31 and feature model 32 are selected from feature set 3. Since candidate feature models have already been recorded (feature model 11 and feature model 22), 2 feature models (such as feature model 22 and feature model 31) are obtained from among feature model 31, feature model 32, feature model 11, and feature model 22 according to the comparison results between feature 1 to be detected and each of these four feature models, and feature model 22 and feature model 31 are recorded as the candidate feature models of feature 1 to be detected. The processing procedure of the other features to be detected is similar to that of feature 1 to be detected and is not repeated here.
The above steps are repeated until the current feature set is feature set 10, at which point two feature models (such as feature model 31 and feature model 62) are recorded as the candidate feature models of feature 1 to be detected. Since feature set 10 is the last of the N feature sets, these candidate feature models (feature model 31 and feature model 62) are directly used as the target feature models corresponding to feature 1 to be detected.
For the other features to be detected among the M features to be detected, the processing procedure is similar to that of feature 1 to be detected and is not repeated here; in this way, a target feature model can be obtained for each feature to be detected.
In summary, the target feature model of each feature to be detected in the M features to be detected can be obtained.
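The running candidate-merge over feature sets described above can be sketched as follows. All names are illustrative, and the comparison result is abstracted as a score function where higher means more similar:

```python
import heapq

def merge_top_k(feature_sets, score_of, k=2):
    """For one feature to be detected: scan the N feature sets in order,
    keeping only the K best-scoring feature models seen so far as the
    recorded candidates; after the last set they are the target models."""
    candidates = []  # at most k pairs of (score, model_id)
    for feature_set in feature_sets:
        # select the K best models of the current set, then merge them
        # with the previously recorded candidates and keep the K best
        best_of_set = heapq.nlargest(k, ((score_of(m), m) for m in feature_set))
        candidates = heapq.nlargest(k, candidates + best_of_set)
    return [model for _, model in candidates]
```

With the scenario's sets, this keeps models 11 and 12 after feature set 1, then replaces the weaker candidate whenever a later set yields a better-scoring model, matching the walkthrough above.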
According to the above technical solution, M features to be detected can be acquired, and for each of the M features to be detected, the comparison result between that feature and each feature model in the current feature set is determined; that is, the comparison process is performed on the M features to be detected as a batch. Processing the M features to be detected at one time increases the operation speed of the feature comparison and thus the data processing speed of the CPU.
Optionally, the feature to be detected may be a matrix and the feature model may also be a matrix, so that a matrix multiplication (an inner-product style operation) can be performed between the feature to be detected and the feature model to obtain a feature comparison value (i.e., the matrix multiplication result, such as a float-type or int-type value). The feature comparison value is then normalized to a value in a specified interval (such as 0-1) according to a preset strategy, and the value in the specified interval is the similarity between the feature to be detected and the feature model.
For example, the preset strategy represents a functional relationship between the feature comparison value and the normalized value, and this functional relationship is not limited. For example, it may include a mapping between feature comparison value 1 and normalized value 1, a mapping between feature comparison value 2 and normalized value 2, and so on.
Illustratively, the larger the feature comparison value, the larger the normalized value. The normalized value lies in the specified interval (e.g., 0-1); for example, a normalized value of 0.8 means the similarity is 80%.
In a possible implementation manner, when determining the comparison result between the feature to be detected and the feature model, the comparison result may be a similarity, that is, a feature comparison value between the feature to be detected and the feature model may be determined first, then the feature comparison value is normalized to a numerical value in a specified interval, and based on the numerical value in the specified interval, the similarity between the feature to be detected and the feature model may be obtained.
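The comparison-value-then-normalize flow can be sketched as follows. The linear map in `normalize` is only an assumed example of the preset strategy (the text leaves the strategy open), and the raw range [-1, 1] assumes unit-length feature vectors:

```python
import numpy as np

def comparison_values(features, models):
    """M features to be detected (rows) against P feature models (rows):
    one matrix multiplication yields all M x P feature comparison values."""
    return features @ models.T  # shape (M, P)

def normalize(value, lo=-1.0, hi=1.0):
    """Map a raw comparison value into the specified interval [0, 1]
    via an assumed linear strategy over the raw range [lo, hi]."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))
```

With unit-length vectors the raw values fall in [-1, 1], so a model pointing the same way as the feature normalizes to 1.0 and an orthogonal one to 0.5.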
Referring to the above embodiment, when the candidate feature models corresponding to the feature to be detected are determined according to the comparison results, the feature models in the current feature set may be sorted in descending order of their similarity to the feature to be detected, and the top-ranked feature models are selected as the candidate feature models corresponding to the feature to be detected.
When a candidate feature model is selected from all candidate feature models as the target feature model according to the comparison results, all candidate feature models may be sorted in descending order of their similarity to the feature to be detected, and the top-ranked candidate feature models are selected as the target feature models corresponding to the feature to be detected.
When at least one feature model is obtained from the K feature models and the recorded candidate feature models, these feature models may be sorted in descending order of their similarity to the feature to be detected, and the at least one top-ranked feature model is selected.
In another possible embodiment, the comparison result between the feature to be detected and the feature model may be the feature comparison value itself; that is, the feature comparison value is determined, but the similarity is not derived from it at this stage. Since the similarities between the features to be detected and a large number of feature models no longer need to be determined, the amount of similarity calculation is reduced, the number of normalization operations is reduced, and the operation speed of the CPU is improved.
Illustratively, when the candidate feature models corresponding to the feature to be detected are determined according to the comparison results, the feature models in the current feature set are sorted in descending order of their feature comparison values with the feature to be detected, and the top-ranked feature models are selected as the candidate feature models corresponding to the feature to be detected.
When a candidate feature model is selected from all candidate feature models as the target feature model according to the comparison results, all candidate feature models are sorted in descending order of their feature comparison values with the feature to be detected, and the top-ranked candidate feature models are selected as the target feature models corresponding to the feature to be detected.
When at least one feature model is obtained from the K feature models and the recorded candidate feature models, these feature models are sorted in descending order of their feature comparison values with the feature to be detected, and the at least one top-ranked feature model is selected.
For example, when the comparison result is the feature comparison value, after the target feature model corresponding to the feature to be detected is obtained from the candidate feature models, the similarity between the feature to be detected and the target feature model may be determined according to the feature comparison value between them, and the target feature model and the similarity are output; that is, the target feature model and the similarity between the feature to be detected and the target feature model are the final result of the feature comparison.
For example, when the target feature model corresponding to the feature 1 to be detected is the feature model 31 and the feature model 62, the similarity 1 between the feature 1 to be detected and the feature model 31 may be determined according to the feature comparison value between the feature 1 to be detected and the feature model 31, and the similarity 2 between the feature 1 to be detected and the feature model 62 may be determined according to the feature comparison value between the feature 1 to be detected and the feature model 62. In this way, the feature model 31, the similarity 1, the feature model 62, and the similarity 2 can be output, that is, the final result of the feature comparison is output.
Obviously, in the above manner, only the similarity between the feature 1 to be detected and the feature model 31 and the similarity between the feature 1 to be detected and the feature model 62 need to be determined, and the similarities between the feature 1 to be detected and a large number of feature models in the feature set need not be determined, so that the calculation amount of the similarities can be saved.
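This deferral — select on raw comparison values, normalize only the selected targets — can be sketched as follows, again assuming an illustrative linear normalization from a raw range of [-1, 1] (the text does not fix a strategy):

```python
def select_then_normalize(raw_values, k=2, lo=-1.0, hi=1.0):
    """raw_values: {model_id: feature comparison value}. Selection is done
    on the raw values; only the k selected target models are normalized
    into a similarity, saving normalization work on all the rest."""
    top = sorted(raw_values.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return [(model, (value - lo) / (hi - lo)) for model, value in top]
```

However many feature models the raw dictionary holds, only k normalizations are performed.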
In one possible implementation, the N feature sets may be stored in a memory. For each feature set, the storage space occupied by all feature models in the feature set is not larger than the storage space of the buffer, that is, the buffer can store all feature models in the feature set.
For each feature set, the feature set may be divided into a plurality of sub-feature sets, and for each sub-feature set, storage space occupied by all feature models in the sub-feature set is not greater than storage space of a register, that is, the register can store all feature models in the sub-feature set.
For example, feature models 1-1000 may be included in the feature library, with feature models 1-100 being partitioned into feature set 1, and so on, with feature models 901-1000 being partitioned into feature set 10. The storage space occupied by all the feature models in the feature set 1 is not more than that of the buffer, and so on, and the storage space occupied by all the feature models in the feature set 10 is not more than that of the buffer.
For feature set 1, the feature models 1-10 may be partitioned into a sub-feature set 11, the feature models 11-20 into a sub-feature set 12, and so on. The storage space occupied by all the feature models in the sub-feature set 11 is not more than that of the register, the storage space occupied by all the feature models in the sub-feature set 12 is not more than that of the register, and so on.
For feature set 2, the feature models 101-110 may be partitioned into the sub-feature set 21, the feature models 111-120 into the sub-feature set 22, and so on. The storage space occupied by all the feature models in the sub-feature set 21 is not more than that of the register, the storage space occupied by all the feature models in the sub-feature set 22 is not more than that of the register, and so on.
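The two-level partitioning above (feature sets sized for the buffer, sub-feature sets sized for the registers) can be sketched as follows; counts stand in for byte sizes here, and the numbers mirror the example of 1000 models, sets of 100, and sub-sets of 10:

```python
def partition(models, set_size, sub_size):
    """Split the feature library into feature sets that fit the buffer,
    then split each feature set into sub-feature sets that fit the
    registers. A real implementation would compare byte sizes."""
    feature_sets = [models[i:i + set_size] for i in range(0, len(models), set_size)]
    return [[fs[j:j + sub_size] for j in range(0, len(fs), sub_size)]
            for fs in feature_sets]
```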
In the above application scenario, referring to fig. 2 for step 103, the implementation process may include:
step 1031, storing the current feature set in a buffer, and selecting one sub-feature set from a plurality of sub-feature sets included in the current feature set as the current sub-feature set.
For example, the data to be deleted may be determined from the stored data according to the access time of the stored data in the buffer, and the data to be deleted may be deleted from the buffer. Then, the current feature set in the memory is stored in the buffer. Wherein, the memory stores N characteristic sets.
And 1032, storing the current sub-feature set into a register, and determining a comparison result between the feature to be detected and each feature model in the current sub-feature set through the register.
For example, the data to be deleted may be determined from the stored data according to the access time of the stored data in the register, and the data to be deleted may be deleted from the register. Then, the current sub-feature set in the buffer is stored into the register, i.e. the register includes the current sub-feature set.
Illustratively, before step 1032, the M features to be detected may also all be stored to a register.
In summary, the register includes the current sub-feature set and the M features to be detected, so that for each feature to be detected in the M features to be detected, the register may determine a comparison result between the feature to be detected and each feature model in the current sub-feature set, and the specific determination manner refers to the above embodiments.
And 1033, if the current sub-feature set is not the last of the plurality of sub-feature sets, selecting the next sub-feature set from the plurality of sub-feature sets as the current sub-feature set, and returning to execute the step 1032.
Step 1034, if the current sub-feature set is the last of the plurality of sub-feature sets, ending the feature comparison process between the feature to be detected and the current feature set, i.e. ending step 103.
The following describes steps 1031 to 1034 with reference to specific application scenarios.
In step 1031, assuming that the current feature set is feature set 1, feature set 1 may be stored in the buffer. Feature set 1 may include 10 sub-feature sets, such as sub-feature set 11 and sub-feature set 12. The 10 sub-feature sets are sorted in any manner (the sorting manner is not limited), such as sub-feature set 11, sub-feature set 12, …; then the first sub-feature set (such as sub-feature set 11) is selected from the 10 sub-feature sets as the current sub-feature set.
In step 1032, the sub-feature set 11 is stored in the register, and before step 1032, M features to be detected (e.g., feature to be detected 1-feature to be detected 8) have been stored in the register.
For the feature 1 to be detected in the M features to be detected, the register determines a comparison result between the feature 1 to be detected and each feature model in the sub-feature set 11, and the specific manner is as in the above embodiment. For other features to be detected in the M features to be detected, the processing procedure is similar to that of feature 1 to be detected.
In step 1033, the sub-feature set 11 is not the last of all sub-feature sets, so the next sub-feature set 12 of the sub-feature set 11 is selected as the current sub-feature set, and the process returns to step 1032.
In step 1032, the sub-feature set 12 needs to be stored in the register, which may be done in either of the following ways. In the first way, all feature models 1-10 of sub-feature set 11 are deleted from the register and sub-feature set 12 is stored in the register. Then, for feature 1 to be detected among the M features to be detected, the register determines the comparison result between feature 1 to be detected and each feature model in sub-feature set 12; the processing of the other features to be detected is similar to that of feature 1 to be detected.
In the second way, the data to be deleted is determined from the stored data (such as feature models 1-10) according to the access time of the stored data in the register, where the data to be deleted is part of feature models 1-10. For example, the several entries (say, 5) whose last access time is furthest from the current time are taken as the data to be deleted: assuming the interval between the current time and the last access time of feature models 1-5 is larger than that of feature models 6-10, feature models 1-5 may be taken as the data to be deleted and removed from the register. The partial feature models 11-15 of sub-feature set 12 stored in the buffer may then be stored in the register.
For feature 1 to be detected among the M features to be detected, the register may determine the comparison result between feature 1 to be detected and each of feature models 11-15. Then, the data to be deleted is determined from the stored data (e.g., feature models 6-10 and feature models 11-15) according to its access time in the register; for example, assuming the interval between the current time and the last access time of feature models 6-10 is larger than that of feature models 11-15, feature models 6-10 may be taken as the data to be deleted and removed from the register. The partial feature models 16-20 of sub-feature set 12 stored in the buffer may then be stored in the register; at this point, all feature models 11-20 of sub-feature set 12 have been stored in the register.
Aiming at the feature 1 to be detected in the M features to be detected, the register determines a comparison result between the feature 1 to be detected and each feature model in the feature models 16-20. For other features to be detected in the M features to be detected, since the register already has all the feature models of the sub-feature set 12, the comparison result between each feature to be detected and all the feature models of the sub-feature set 12 is determined, and details are not repeated here.
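The access-time-based eviction used in the second way can be sketched as follows (illustrative names; this is an LRU-style policy where the least recently accessed feature models are deleted first):

```python
def evict_least_recent(stored, last_access, count):
    """Delete from `stored` the `count` feature models whose last access
    time is furthest from the current time, freeing space for new data."""
    victims = sorted(stored, key=lambda model: last_access[model])[:count]
    for model in victims:
        stored.remove(model)
    return victims
```

For instance, feature models 1-5 are evicted when their last access times are older than those of feature models 6-10.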
In step 1033, the sub-feature set 12 is not the last of all sub-feature sets, so the next sub-feature set 13 of the sub-feature set 12 is selected as the current sub-feature set, and the process returns to step 1032.
And repeating the steps until the current sub-feature set is the last of all the sub-feature sets, ending the feature comparison process of the features to be detected and the feature set 1, and obtaining a candidate feature model of each feature to be detected.
When the current feature set is the feature set 2, the step 103 needs to be executed again, which will be described below.
In step 103, feature set 2 needs to be stored in the buffer, which may be done in either of the following ways. In the first way, all feature models 1-100 of feature set 1 are deleted from the buffer, and feature set 2 is stored in the buffer. Then, steps 1031 to 1034 are performed based on feature set 2; the specific execution process refers to the above embodiments and is not repeated here.
In the second way, the data to be deleted is determined from the stored data (such as feature models 1-100) according to the access time of the stored data in the buffer, where the data to be deleted is part of feature models 1-100. For example, the several entries (say, 50) whose last access time is furthest from the current time are taken as the data to be deleted: assuming the interval between the current time and the last access time of feature models 1-50 is larger than that of feature models 51-100, feature models 1-50 may be taken as the data to be deleted and removed from the buffer. The partial feature models 101-150 of feature set 2 stored in memory may then be stored in the buffer.
Then, the above steps 1031 to 1034 are performed based on the partial feature models 101 to 150 of the feature set 2, and the specific execution process refers to the above embodiments, which is not described herein again. After the comparison process of the partial feature models 101 to 150 is completed, data to be deleted is determined from the stored data according to the access time of the stored data (e.g., the feature models 51 to 100, the feature models 101 to 150) in the buffer, for example, a plurality of data (e.g., 50 data) having a relatively long interval between the current time and the last access time are used as the data to be deleted, and assuming that the interval between the current time and the last access time of the feature models 51 to 100 is greater than the interval between the current time and the last access time of the feature models 101 to 150, the feature models 51 to 100 may be used as the data to be deleted, and the feature models 51 to 100 may be deleted from the buffer. The partial feature models 151-200 of the feature set 2 stored in memory are then stored in a buffer. So far, all feature models of the feature set 2 are successfully stored in the buffer. Then, the above steps 1031 to 1034 are performed based on the partial feature models 151 to 200 of the feature set 2, and the specific execution process refers to the above embodiments, which is not described herein again.
When the current feature set is the feature set 3-10, the step 103 needs to be repeatedly executed, and the implementation process of the step 103 may refer to the processing of the feature set 2, which is not described herein again.
Referring to fig. 3, a schematic diagram of a data processing method is shown, where the data processing method may include:
step 301, for the data to be processed, determining whether the data to be processed is in the register.
If yes, go to step 302; if not, step 303 is performed.
Step 302, the register processes the data to be processed.
Step 303, determine whether the data to be processed is in the buffer.
If so, go to step 304; if not, step 305 is performed.
Step 304, taking out the data to be processed from the buffer, and storing the data to be processed into the register.
For example, data to be deleted is determined from stored data in the register, and if data with a relatively long interval between the current time and the last access time is used as the data to be deleted, the data to be deleted is deleted from the register, and the data to be processed is stored in the register. After step 304, return to step 301.
In step 305, it is determined whether the data to be processed is in the memory.
If so, go to step 306; if not, the data to be processed cannot be inquired.
Step 306, the data to be processed is taken out from the memory, and the data to be processed is stored in the buffer.
For example, data to be deleted is determined from the stored data in the buffer, and if data with a relatively long interval between the current time and the last access time is used as the data to be deleted, the data to be deleted is deleted from the buffer, and the data to be processed is stored in the buffer. After step 306, return to step 303.
Illustratively, the access speed of a register (namely, a CPU register) is greater than that of the buffer, and the access speed of the buffer is greater than that of the memory; therefore, minimizing the copy operations from memory to buffer and from buffer to register improves the speed at which the CPU processes data.
Based on the above, in the embodiment of the present application, by performing the feature comparison on the M features to be detected as a batch, the number of copy operations from memory to buffer and from buffer to register is significantly reduced, so the operation speed of the feature comparison is increased and the processing efficiency of the CPU is improved.
For example, in this embodiment, the data to be processed in steps 301 to 306 may be a feature model, and when the register processes the data to be processed, a comparison result between the feature to be detected and the feature model is determined.
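Steps 301-306 amount to a two-level cache lookup in front of memory. A minimal sketch, where dicts stand in for the register, buffer, and memory tiers and the eviction described above is omitted for brevity:

```python
def lookup(key, register, buffer, memory):
    """Find the data to be processed, promoting it one tier at a time."""
    if key in register:               # steps 301-302: process in the register
        return register[key]
    if key in buffer:                 # steps 303-304: promote buffer -> register
        register[key] = buffer[key]
        return register[key]
    if key in memory:                 # steps 305-306: promote memory -> buffer,
        buffer[key] = memory[key]     # then buffer -> register on the retry
        register[key] = buffer[key]
        return register[key]
    return None                       # the data to be processed cannot be found
```

After a miss is served from memory, the data resides in all three tiers, so a repeated lookup hits the register directly.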
Based on the same concept as the above method, another feature comparison method is proposed in the embodiments of the present application, and as shown in fig. 4, is a schematic flow chart of the feature comparison method, and the method may include:
step 401, divide the feature library into N feature sets, where the N feature sets are all stored in the memory.
Illustratively, for each feature set, the storage space occupied by all feature models in the feature set is not greater than the storage space of the buffer, and is as close as possible to the storage space of the buffer so that the buffer is fully utilized.
For example, the number of feature models in the first N-1 feature sets may be the same, such as P, and the number of feature models in the N-th feature set may be P or less than P.
Step 402, selecting a feature set from the N feature sets as a current feature set.
Step 403, storing the current feature set in a buffer, dividing the current feature set into a plurality of sub-feature sets, and selecting one sub-feature set from the plurality of sub-feature sets as the current sub-feature set.
Step 404, store the current sub-feature set to a register.
Illustratively, before step 404, M features to be detected may also be stored in a register.
In a possible implementation manner, the sum of the storage space occupied by all the feature models in the sub-feature set and the storage space occupied by the M features to be detected is not greater than the storage space of the register. The sum of the storage space occupied by all the feature models in the sub-feature set and the storage space occupied by the M features to be detected is as close as possible to the storage space of the register, so that the storage space of the register is fully utilized.
Step 405, determining, by a register, a feature comparison value between each feature model in the current sub-feature set and each feature to be detected in the M features to be detected.
Step 406, determine whether the current sub-feature set is the last sub-feature set of the current feature set.
If not, step 407 is performed, and if yes, step 408 is performed.
Step 407, selecting a next sub-feature set from the plurality of sub-feature sets of the current feature set as the current sub-feature set. After step 407, execution returns to step 404.
Step 408, for each feature to be detected, determining a candidate feature model corresponding to the feature to be detected according to a feature comparison value between the feature to be detected and each feature model in the current feature set.
Step 409, determine whether the current feature set is the last of the N feature sets.
If not, go to step 410; if so, step 411 is performed.
Step 410, selecting the next feature set from the N feature sets as the current feature set.
After step 410, return to performing step 403.
Step 411, for each feature to be detected, obtaining a target feature model corresponding to the feature to be detected from the candidate feature model corresponding to the feature to be detected, and determining a similarity between the feature to be detected and the target feature model according to a feature comparison value between the feature to be detected and the target feature model.
Step 412, for each feature to be detected, outputting a target feature model corresponding to the feature to be detected, and outputting a similarity between the feature to be detected and the target feature model.
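Steps 401-412 can be sketched end to end as follows (illustrative; NumPy arrays stand in for the buffer/register tiling, and a linear map from an assumed raw range [-1, 1] stands in for the unspecified normalization strategy):

```python
import numpy as np

def feature_compare(features, feature_sets, k=2):
    """features: M x D array of features to be detected.
    feature_sets: for each feature set, a list of sub-feature sets, each a
    list of (model_id, vector) pairs -- mirroring the two-level tiling."""
    F = np.asarray(features, dtype=float)
    candidates = [[] for _ in range(len(F))]     # per feature: (value, model_id)
    for feature_set in feature_sets:             # step 403: next set into buffer
        for sub_set in feature_set:              # step 404: sub-set into register
            ids = [model_id for model_id, _ in sub_set]
            V = np.asarray([vec for _, vec in sub_set], dtype=float)
            values = F @ V.T                     # step 405: feature comparison values
            for i, row in enumerate(values):
                candidates[i].extend(zip(row, ids))
        for i in range(len(F)):                  # step 408: keep top-k candidates
            candidates[i] = sorted(candidates[i], reverse=True)[:k]
    # steps 411-412: normalize only the target models and output them
    return [[(model_id, (value + 1.0) / 2.0) for value, model_id in c]
            for c in candidates]
```

Only the final target models are normalized, matching the deferred-similarity variant described earlier.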
Based on the same application concept as the method, the embodiment of the present application further provides a feature comparison apparatus, as shown in fig. 5, which is a structural diagram of the feature comparison apparatus, and the apparatus includes:
an obtaining module 51, configured to obtain M to-be-detected features and N feature sets; wherein M and N are both positive integers greater than 1;
a selecting module 52, configured to select a first feature set from the N feature sets as a current feature set;
a determining module 53, configured to determine, for each feature to be detected in the M features to be detected, a comparison result between the feature to be detected and each feature model in the current feature set, and determine, according to the comparison result, a candidate feature model corresponding to the feature to be detected;
the selecting module 52 is further configured to select a next feature set of the current feature set from the N feature sets as the current feature set if the current feature set is not the last feature set of the N feature sets;
the obtaining module 51 is further configured to obtain a target feature model corresponding to the feature to be detected from the candidate feature models if the current feature set is the last one of the N feature sets.
The determining module 53 is specifically configured to, when determining the candidate feature model corresponding to the feature to be detected according to the comparison result: selecting K feature models from the current feature set according to the comparison result, and determining the K feature models as candidate feature models corresponding to the features to be detected;
the obtaining module 51 is specifically configured to, when obtaining the target feature model corresponding to the feature to be detected from the candidate feature model: selecting a candidate feature model from all candidate feature models as the target feature model corresponding to the feature to be detected according to the comparison result between the feature to be detected and each candidate feature model.
The determining module 53 is specifically configured to, when determining the candidate feature model corresponding to the feature to be detected according to the comparison result: selecting K feature models from the current feature set according to the comparison result; acquiring at least one feature model from the K feature models and the recorded candidate feature models according to the comparison result between the feature to be detected and the K feature models and the comparison result between the feature to be detected and the recorded candidate feature models, and recording the acquired feature models as new candidate feature models;
the obtaining module 51 is specifically configured to, when obtaining the target feature model corresponding to the feature to be detected from the candidate feature models: determining the recorded candidate feature model as the target feature model corresponding to the feature to be detected.
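One way to realize this "recorded candidates" variant, in which at most K candidates are kept at any time and the K best models of each new set are merged with the recorded ones, is a bounded min-heap. A sketch under the same inner-product assumption as above; all names are hypothetical.

```python
import heapq
import numpy as np

def running_topk(q, feature_sets, k=3):
    """Keep a running record of at most k candidate models while
    scanning the N feature sets; after the last set, the recorded
    candidates ARE the final candidates (best one is the target)."""
    recorded = []                            # min-heap of (score, model id)
    for set_id, fset in enumerate(feature_sets):
        scores = fset @ q
        for i in np.argsort(scores)[-k:]:    # K best of the current set
            heapq.heappush(recorded, (float(scores[i]), (set_id, int(i))))
            if len(recorded) > k:
                heapq.heappop(recorded)      # drop the current worst
    return sorted(recorded, reverse=True)    # best candidate first
```

Compared with collecting K candidates per set and selecting at the end, this keeps memory bounded by K regardless of N.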
In one possible embodiment, the comparison result is a feature comparison value or a similarity;
if the comparison result is a feature comparison value, the determining module 53 is further configured to: determine the similarity between the feature to be detected and the target feature model according to the feature comparison value between the feature to be detected and the target feature model; and output the target feature model and the similarity.
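If the comparison result is a distance-style feature comparison value, one plausible mapping into a similarity is shown below. Both the squared-L2 comparison value and the `1 / (1 + d)` mapping are assumptions for illustration; the patent does not fix either choice.

```python
import numpy as np

def l2_comparison(a, b):
    """Feature comparison value: a squared L2 distance between the
    feature to be detected and a feature model (one possible metric)."""
    return float(np.sum((a - b) ** 2))

def to_similarity(cmp_value):
    """Map a distance-style comparison value into (0, 1]; a smaller
    distance yields a higher similarity. This mapping is an assumption."""
    return 1.0 / (1.0 + cmp_value)
```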
In a possible implementation manner, when determining the comparison result between the feature to be detected and each feature model in the current feature set, the determining module 53 is specifically configured to:
storing the current feature set into a buffer, and selecting one sub-feature set from a plurality of sub-feature sets included in the current feature set as a current sub-feature set;
storing the current sub-feature set into a register, and determining a comparison result between the feature to be detected and each feature model in the current sub-feature set through the register;
if the current sub-feature set is not the last sub-feature set of the plurality of sub-feature sets, selecting a next sub-feature set from the plurality of sub-feature sets as a current sub-feature set;
and if the current sub-feature set is the last one of the plurality of sub-feature sets, ending the feature comparison process of the feature to be detected and the current feature set.
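The sub-feature-set iteration above amounts to processing the current feature set one chunk at a time, with only one chunk resident in the fastest storage at any moment. The sketch below stands in for that staging with array slices; the chunk size, the NumPy representation, and all names are assumptions, and real register/buffer management is hardware-specific.

```python
import numpy as np

def compare_in_chunks(q, feature_set, chunk=256):
    """Walk the current feature set one sub-feature set (chunk) at a
    time; each chunk plays the role of the register-resident
    sub-feature set while it is being compared against q."""
    scores = []
    for start in range(0, len(feature_set), chunk):
        sub = feature_set[start:start + chunk]   # current sub-feature set
        scores.append(sub @ q)                   # compare within the chunk
    return np.concatenate(scores)                # results for the whole set
```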
When the determining module 53 stores the current sub-feature set in a register, the determining module is specifically configured to:
determining data to be deleted from the stored data according to the access time of the stored data in the register, and deleting the data to be deleted from the register;
storing the current set of sub-features in the buffer to the register.
When the determining module 53 stores the current feature set in the buffer, the determining module is specifically configured to:
determining data to be deleted from the stored data according to the access time of the stored data in the buffer, and deleting the data to be deleted from the buffer;
storing the current feature set in the memory to the buffer;
wherein the memory stores the N feature sets.
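The eviction policy described for both the register and the buffer, deleting the data whose access time is oldest before loading new data, is a least-recently-used (LRU) policy. A minimal sketch using a Python dict; an actual implementation would manage hardware buffers rather than a dictionary, and the class name is hypothetical.

```python
from collections import OrderedDict

class LRUStore:
    """Fixed-capacity store that evicts the least recently accessed
    entry before admitting a new one, as the buffer/register
    management above describes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)            # refresh the access time
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)     # delete least recently used
        self.data[key] = value
```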
The determining module 53 is further configured to: store the M features to be detected into the register.
Based on the same application concept as the method, an embodiment of the present application further provides a feature comparison device. From a hardware level, a schematic diagram of the hardware architecture of the feature comparison device may be as shown in fig. 6. The feature comparison device may include: a CPU 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the CPU 61; the CPU 61 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the CPU 61 is configured to execute machine-executable instructions to perform the following steps:
obtaining M features to be detected and N feature sets, wherein M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as a current feature set;
for each feature to be detected in the M features to be detected, determining a comparison result between the feature to be detected and each feature model in the current feature set, and determining a candidate feature model corresponding to the feature to be detected according to the comparison result;
if the current feature set is not the last of the N feature sets, selecting the next feature set of the current feature set from the N feature sets as the current feature set, and returning to execute the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, acquiring a target feature model corresponding to the feature to be detected from the candidate feature models.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where the machine-readable storage medium stores thereon several computer instructions, and when the computer instructions are executed by a CPU, the method disclosed in the above example of the present application can be implemented.
For example, the computer instructions, when executed by the CPU, cause the following steps to be performed:
obtaining M features to be detected and N feature sets, wherein M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as a current feature set;
for each feature to be detected in the M features to be detected, determining a comparison result between the feature to be detected and each feature model in the current feature set, and determining a candidate feature model corresponding to the feature to be detected according to the comparison result;
if the current feature set is not the last of the N feature sets, selecting the next feature set of the current feature set from the N feature sets as the current feature set, and returning to execute the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, acquiring a target feature model corresponding to the feature to be detected from the candidate feature models.
The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for feature comparison, the method comprising:
obtaining M features to be detected and N feature sets, wherein M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as a current feature set;
for each feature to be detected in the M features to be detected, determining a comparison result between the feature to be detected and each feature model in the current feature set, and determining a candidate feature model corresponding to the feature to be detected according to the comparison result;
if the current feature set is not the last of the N feature sets, selecting the next feature set of the current feature set from the N feature sets as the current feature set, and returning to execute the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, acquiring a target feature model corresponding to the feature to be detected from the candidate feature models.
2. The method of claim 1,
the determining the candidate feature model corresponding to the feature to be detected according to the comparison result includes:
selecting K feature models from the current feature set according to the comparison result, and determining the K feature models as candidate feature models corresponding to the feature to be detected;
the acquiring a target feature model corresponding to the feature to be detected from the candidate feature models comprises:
and selecting, according to a comparison result between the feature to be detected and each candidate feature model, one candidate feature model from all the candidate feature models as the target feature model corresponding to the feature to be detected.
3. The method of claim 1,
the determining the candidate feature model corresponding to the feature to be detected according to the comparison result includes:
selecting K feature models from the current feature set according to the comparison result;
acquiring at least one feature model from the K feature models and the recorded candidate feature models according to the comparison result between the feature to be detected and the K feature models and the comparison result between the feature to be detected and the recorded candidate feature models, and recording the acquired feature models as new candidate feature models;
the acquiring a target feature model corresponding to the feature to be detected from the candidate feature models comprises:
and determining the candidate feature model as a target feature model corresponding to the feature to be detected.
4. The method of any one of claims 1 to 3, wherein the comparison result is a feature comparison value or a similarity; if the comparison result is a feature comparison value, after the acquiring the target feature model corresponding to the feature to be detected from the candidate feature models, the method further comprises:
determining the similarity between the feature to be detected and the target feature model according to the feature comparison value between the feature to be detected and the target feature model;
and outputting the target feature model and the similarity.
5. The method according to claim 1, wherein the determining the comparison result between the feature to be detected and each feature model in the current feature set comprises:
storing the current feature set into a buffer, and selecting one sub-feature set from a plurality of sub-feature sets included in the current feature set as a current sub-feature set;
storing the current sub-feature set into a register, and determining a comparison result between the feature to be detected and each feature model in the current sub-feature set through the register;
if the current sub-feature set is not the last of the plurality of sub-feature sets, selecting the next sub-feature set from the plurality of sub-feature sets as the current sub-feature set, and returning to execute the operation of storing the current sub-feature set in a register;
and if the current sub-feature set is the last one of the plurality of sub-feature sets, ending the feature comparison process of the feature to be detected and the current feature set.
6. The method of claim 5,
the storing the current sub-feature set to a register comprises:
determining data to be deleted from the stored data according to the access time of the stored data in the register, and deleting the data to be deleted from the register;
storing the current set of sub-features in the buffer to the register.
7. The method of claim 5,
the storing the current feature set to a buffer includes:
determining data to be deleted from the stored data according to the access time of the stored data in the buffer, and deleting the data to be deleted from the buffer;
storing the current feature set in the memory to the buffer;
wherein the memory stores the N feature sets.
8. The method of claim 5,
before determining, by the register, a comparison result between the feature to be detected and each feature model in the current sub-feature set, the method further includes:
and storing the M features to be detected into the register.
9. A feature comparison apparatus, comprising:
the acquisition module is used for acquiring M to-be-detected features and N feature sets; wherein M and N are both positive integers greater than 1;
the selection module is used for selecting a first feature set from the N feature sets as a current feature set;
the determining module is used for determining, for each feature to be detected in the M features to be detected, a comparison result between the feature to be detected and each feature model in the current feature set, and determining, according to the comparison result, a candidate feature model corresponding to the feature to be detected;
the selection module is further configured to select a next feature set of the current feature set from the N feature sets as the current feature set if the current feature set is not the last of the N feature sets;
the obtaining module is further configured to obtain a target feature model corresponding to the feature to be detected from the candidate feature models if the current feature set is the last one of the N feature sets.
10. A feature comparison device, comprising: a CPU and a machine-readable storage medium storing machine-executable instructions executable by the CPU;
the CPU is configured to execute machine executable instructions to perform the following steps:
obtaining M features to be detected and N feature sets, wherein M and N are both positive integers greater than 1;
selecting a first feature set from the N feature sets as a current feature set;
for each feature to be detected in the M features to be detected, determining a comparison result between the feature to be detected and each feature model in the current feature set, and determining a candidate feature model corresponding to the feature to be detected according to the comparison result;
if the current feature set is not the last of the N feature sets, selecting the next feature set of the current feature set from the N feature sets as the current feature set, and returning to execute the operation of determining the comparison result between the feature to be detected and each feature model in the current feature set;
and if the current feature set is the last of the N feature sets, acquiring a target feature model corresponding to the feature to be detected from the candidate feature models.
CN201911227516.9A 2019-12-04 2019-12-04 Feature comparison method, device and equipment Active CN112906728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911227516.9A CN112906728B (en) 2019-12-04 2019-12-04 Feature comparison method, device and equipment


Publications (2)

Publication Number Publication Date
CN112906728A true CN112906728A (en) 2021-06-04
CN112906728B CN112906728B (en) 2023-08-25

Family

ID=76110767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911227516.9A Active CN112906728B (en) 2019-12-04 2019-12-04 Feature comparison method, device and equipment

Country Status (1)

Country Link
CN (1) CN112906728B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963197A (en) * 2021-09-29 2022-01-21 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09251534A (en) * 1996-03-18 1997-09-22 Toshiba Corp Device and method for authenticating person
CN102831441A (en) * 2011-05-10 2012-12-19 阿自倍尔株式会社 Comparing device
CN103020990A (en) * 2012-12-06 2013-04-03 华中科技大学 Moving object detecting method based on graphics processing unit (GPU)
WO2014034557A1 (en) * 2012-08-31 2014-03-06 日本電気株式会社 Text mining device, text mining method, and computer-readable recording medium
CN106067158A (en) * 2016-05-26 2016-11-02 东方网力科技股份有限公司 A kind of feature comparison method based on GPU and device
CN109492560A (en) * 2018-10-26 2019-03-19 深圳力维智联技术有限公司 Facial image Feature fusion, device and storage medium based on time scale
CN109977859A (en) * 2019-03-25 2019-07-05 腾讯科技(深圳)有限公司 A kind of map logo method for distinguishing and relevant apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QUAN QIAN et al.: "The Comparison of the Relative Entropy for Intrusion Detection on CPU and GPU", 2010 IEEE/ACIS 9th International Conference on Computer and Information Science *
WEI Yu, ZENG Fanping, JIANG Fan: "Distributed denial-of-service attack detection system based on similarity analysis", Computer Aided Engineering, no. 02


Also Published As

Publication number Publication date
CN112906728B (en) 2023-08-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant