CN114417959A - Correlation method for feature extraction, target identification method, correlation device and apparatus - Google Patents

Correlation method for feature extraction, target identification method, correlation device and apparatus

Info

Publication number
CN114417959A
Authority
CN
China
Prior art keywords
feature
conversion
features
model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111478775.6A
Other languages
Chinese (zh)
Other versions
CN114417959B (en)
Inventor
张坤
朱树磊
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202111478775.6A
Publication of CN114417959A
Application granted
Publication of CN114417959B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a feature extraction method, a target identification method, and related devices and apparatus. The feature extraction method includes: acquiring a training sample and performing first feature extraction and second feature extraction on it to obtain a first feature and a second feature of different feature types; performing feature conversion on the first feature, based on the feature type of the second feature, using a feature conversion model in training to obtain a conversion feature of the first feature; and adjusting the feature conversion model in training based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model. In this way, the invention can convert feature types using the trained feature conversion model, widening the range of applications and scenarios in which features can be used.

Description

Correlation method for feature extraction, target identification method, correlation device and apparatus
Technical Field
The present invention relates to the field of feature extraction, and in particular, to a method, a device, and a program product for feature extraction.
Background
Currently, with the development of deep learning methods, feature processing techniques have found their way into everyday life. Feature identification, feature comparison, and feature extraction, for example, are widely used in security, access control, big data, and other scenarios.
However, different feature extraction algorithm models extract features of different types, and features of different types are incompatible with one another. For example, image features extracted by image algorithm A cannot be compared with image features extracted by image algorithm B for recognition, and voice features extracted by voice algorithm C cannot be compared with voice features extracted by voice algorithm D for recognition.
Therefore, once the algorithm used to extract existing features differs from the feature extraction algorithm of the target application scenario, the corresponding features must be extracted anew, which greatly wastes storage space, increases the difficulty of applying features, and limits their application range.
Disclosure of Invention
The invention provides a feature extraction method, a target identification method, and related devices and apparatus, which reduce the difficulty of applying features and widen their application range and scenarios.
In order to solve the above technical problem, the present invention provides a feature extraction method, including: acquiring a training sample, and performing first feature extraction and second feature extraction on it to obtain a first feature and a second feature of different feature types; performing feature conversion on the first feature based on the feature type of the second feature using a feature conversion model in training, to obtain a conversion feature of the first feature; and adjusting the feature conversion model in training based on the difference between the second feature and the conversion feature, to obtain the trained feature conversion model.
The number of training samples is N, where N is an integer greater than 1. Adjusting the feature conversion model in training based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model includes: determining, for each of the N training samples, the individual feature difference between its second feature and its conversion feature; combining the individual feature differences of the training samples to determine the overall feature difference of the N training samples; and adjusting the feature conversion model in training using the overall feature difference to obtain the trained feature conversion model.
Combining the individual feature differences of the training samples to determine the overall feature difference of the N training samples includes: calculating the mean of the individual feature differences of the training samples.
Determining, for each of the N training samples, the individual feature difference between its second feature and its conversion feature includes: transposing the conversion feature of the first feature to obtain the transpose of the conversion feature, and multiplying the conversion feature by this transpose to obtain a first product; transposing the second feature to obtain the transpose of the second feature, and multiplying the second feature by this transpose to obtain a second product; and determining the individual feature difference based on the difference between the first product and the second product.
Performing feature conversion on the first feature based on the feature type of the second feature using the feature conversion model in training, to obtain the conversion feature of the first feature, includes: adjusting the feature dimension of the first feature based on the feature type of the second feature using the feature conversion model in training, and performing nonlinear mapping on the adjusted first feature to obtain the conversion feature.
In order to solve the above technical problem, the present invention further provides an image feature extraction method, including: acquiring an image to be processed, and performing feature extraction on it to obtain image features of the image to be processed; and converting the image features through an image feature conversion model to obtain converted image features of a feature type different from that of the image features. The image feature conversion model includes the feature conversion model of any of the feature extraction methods above.
In order to solve the above technical problem, the present invention further provides a target identification method, including: acquiring an object to be recognized, and performing feature extraction on it to obtain target features of the target to be recognized in the object to be recognized; inputting the target features into a feature conversion model for feature conversion to obtain conversion features of a feature type different from that of the target features; and identifying the target to be recognized using the conversion features. The feature conversion model includes the feature conversion model of any of the feature extraction methods above.
Identifying the target to be recognized using the conversion features includes: identifying the target based on the similarity between the conversion feature and at least one standard feature, where the feature type of the conversion feature is the same as that of the standard feature.
The standard features include features with preset labels. Identifying the target to be recognized based on the similarity between the conversion feature and the at least one standard feature includes: determining the similarity between the conversion feature and each standard feature; and, in response to a target similarity greater than a preset threshold existing among these similarities, determining the label of the standard feature corresponding to the target similarity as the label of the target to be recognized. The preset label includes one or more of identity information, operation authority, and identity authority.
Identifying the target to be recognized using the conversion features may also include: identifying the target using a recognition algorithm or a recognition model, where the recognition algorithm or recognition model matches the feature type of the conversion features.
In order to solve the above technical problem, the present invention further provides an electronic device, including: a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement any one of the feature extraction methods or the image feature extraction method or the object recognition method described above.
In order to solve the above technical problem, the present invention also provides a computer-readable storage medium storing program data that can be executed to implement the feature extraction method or the image feature extraction method or the object recognition method according to any one of the above.
The invention has the following beneficial effects. Different from the prior art, the invention performs first feature extraction and second feature extraction on a training sample to obtain a first feature and a second feature of different feature types, uses the feature conversion model in training to perform feature conversion on the first feature based on the feature type of the second feature to obtain a conversion feature of the first feature, and then adjusts the feature conversion model in training based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model. The feature conversion model can thus convert feature types, widening the range of applications and scenarios in which features can be used. When the feature extraction algorithm is replaced, features need not be re-extracted; instead, existing features are converted by the feature conversion model into conversion features that can be applied directly. This reduces the difficulty of applying features, saves the storage space otherwise needed for multiple kinds of features, shortens the difficulty and cycle of replacing feature extraction algorithms, and improves the efficiency and reliability of feature application.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a feature extraction method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a feature extraction method provided by the present invention;
FIG. 3 is a schematic diagram of the structure of an embodiment of the feature transformation model in training in the embodiment of FIG. 2;
FIG. 4 is a flowchart illustrating an embodiment of an image feature extraction method according to the present invention;
FIG. 5 is a flowchart illustrating an embodiment of a target recognition method according to the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of an electronic device provided in the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a feature extraction method according to an embodiment of the present invention.
Step S11: acquiring a training sample, and performing first feature extraction and second feature extraction on it to obtain a first feature and a second feature of different feature types.
Training samples are obtained first. The training samples of the present embodiment may include data in any form, such as images, audio, fingerprints, and smells.
First feature extraction and second feature extraction are then performed on the training sample to obtain a first feature and a second feature of different feature types. The two extractions use different feature extraction algorithms; specifically, the training sample may be processed by feature extraction models, feature recognition models, or feature extraction algorithms of different types to obtain the first and second features.
Because different feature extraction algorithms follow different steps during extraction, the resulting feature data, such as the dimension, presentation form, or emphasized content of the features, also differ; the feature types of this embodiment are divided on the basis of this feature data.
The feature extraction algorithm must correspond to the type of the training sample. In a specific application scenario, when the training sample is image data, feature extraction may be performed through corresponding image feature extraction algorithms to obtain a first image feature and a second image feature. In another specific application scenario, when the training sample is voice data, feature extraction may be performed through corresponding voice feature extraction algorithms to obtain a first voice feature and a second voice feature. The type of the training sample and the feature extraction model or algorithm may be set based on actual requirements, and are not limited herein.
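As an illustration of this step (not part of the patent), the sketch below uses two off-the-shelf PyTorch backbones, resnet18 and resnet50, as stand-ins for the unspecified first and second extraction algorithms; the model choices, batch size, and feature dimensions are all assumptions.

```python
import torch
import torchvision.models as models

def make_extractor(backbone):
    # Drop the final classification layer so the model emits a feature vector.
    return torch.nn.Sequential(*list(backbone.children())[:-1], torch.nn.Flatten())

extractor_a = make_extractor(models.resnet18(weights=None))  # 512-dim features
extractor_b = make_extractor(models.resnet50(weights=None))  # 2048-dim features

samples = torch.randn(8, 3, 224, 224)  # a small batch of training images
with torch.no_grad():
    first_features = extractor_a(samples)   # "first features",  shape (8, 512)
    second_features = extractor_b(samples)  # "second features", shape (8, 2048)
# The two feature sets differ in dimension and distribution, so they cannot
# be compared directly; this is the gap the conversion model bridges.
```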
Step S12: performing feature conversion on the first feature based on the feature type of the second feature using the feature conversion model in training, to obtain the conversion feature of the first feature.
After the first and second features of the training sample are obtained, the feature conversion model in training performs feature conversion on the first feature based on the feature type of the second feature to obtain the conversion feature of the first feature.
The feature conversion model in training converts the first feature based on the feature type of the second feature, or on the feature extraction algorithm corresponding to that type; the trained feature conversion model is obtained by training until the in-training model can perform this conversion.
Step S13: adjusting the feature conversion model in training based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model.
After the conversion feature of the first feature is obtained, the feature conversion model in training is adjusted based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model.
In a specific application scenario, the difference between the second feature and the conversion feature may be calculated through a loss function, and a feature conversion model in training may be adjusted, specifically, a distribution loss function, a cross entropy loss function, an exponential loss function, an L2 loss function, a Log-Cosh loss function, or other loss functions may be adopted, which is not limited herein. In another specific application scenario, the difference between the second feature and the transformed feature may also be determined by a correlation coefficient or other mathematical tool.
After the difference between the second feature and the conversion feature of the first feature is calculated, the feature conversion model in training is adjusted based on this difference to obtain the trained feature conversion model.
In a specific application scenario, the feature conversion model in training may be adjusted by an optimization algorithm based on the difference until the algorithm converges, yielding the trained feature conversion model; the optimization may use gradient descent, a regression loss, a mean square error criterion, or other methods, which are not limited herein.
In another specific application scenario, the feature transformation model in the training may also be adjusted based on the difference until the feature transformation model meets a preset condition, where the preset condition may include that the difference between the transformed feature transformed by the feature transformation model and the corresponding second feature is within a preset difference range or is smaller than a preset threshold.
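A minimal sketch of such an adjustment loop is given below, assuming an L2-style difference, plain gradient descent, and a stand-in linear conversion model; the patent leaves the loss, optimizer, and stopping threshold open, so all of those choices are illustrative.

```python
import torch

# Stand-in tensors and a placeholder linear model; in practice these come
# from steps S11/S12 and the two-layer model described in the next embodiment.
first_features = torch.randn(8, 512)
second_features = torch.randn(8, 2048)
conversion_model = torch.nn.Linear(512, 2048)

optimizer = torch.optim.SGD(conversion_model.parameters(), lr=1e-3)
preset_threshold, max_epochs = 1e-3, 1000

for _ in range(max_epochs):
    converted = conversion_model(first_features)
    difference = torch.nn.functional.mse_loss(converted, second_features)  # L2-style difference
    optimizer.zero_grad()
    difference.backward()
    optimizer.step()
    if difference.item() < preset_threshold:  # preset condition reached
        break
```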
The trained feature conversion model can convert first features based on the feature type of the second features, realizing conversion between features extracted by different feature extraction algorithms; this widens the application range and scenarios of features and reduces their limitations.
The converted features can be applied to application scenarios such as feature recognition or feature comparison.
Through the above steps, the feature extraction method of this embodiment performs first and second feature extraction on the training sample to obtain a first feature and a second feature of different feature types, uses the feature conversion model in training to convert the first feature, based on the feature type of the second feature, into a conversion feature, and then adjusts the model based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model. The feature conversion model can thus convert feature types, widening the application range and scenarios of features: when the feature extraction algorithm is replaced, features need not be re-extracted; existing features are converted into directly applicable conversion features. This reduces the difficulty of applying features, saves the storage space otherwise needed for multiple kinds of features, shortens the difficulty and cycle of replacing feature extraction algorithms, and improves the efficiency and reliability of feature application.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a feature extraction method according to the present invention.
Step S21: acquiring a training sample, and performing first feature extraction and second feature extraction on it to obtain a first feature and a second feature of different feature types.
A training sample is acquired, and first feature extraction and second feature extraction are performed on it to obtain a first feature and a second feature of different feature types. In a specific application scenario, the training samples may be input into a first feature extraction model and a second feature extraction model, respectively, to obtain the first and second features; the two models use different feature extraction algorithms. In another specific application scenario, the first and second feature extraction may be performed directly through different feature extraction algorithms.
The present embodiment addresses feature conversion between two different feature extraction algorithms. For conversion among more than two algorithms, feature extraction may be performed with each newly added algorithm or model, and a new feature conversion model trained on the difference between the extracted features and the target features, yielding a set of feature conversion models covering the different algorithms. In a specific application scenario, when feature conversion among three feature extraction algorithms A, B, and C is needed, repeating the feature extraction method of this embodiment can produce one model realizing A-to-B conversion and another realizing C-to-B conversion, thereby enabling conversion among A, B, and C. Feature conversion between other feature extraction algorithms is analogous and is not repeated here.
The number of training samples in this embodiment is N, where N is an integer greater than 1. Specifically, there may be 10000, 20000, 50000, or more training samples; the more training samples, the higher the accuracy of the resulting feature conversion model, and the exact number is not limited. Training samples may also be gathered according to attributes such as gender and age to improve their diversity, and thus the conversion accuracy of the final model during actual recognition, for example: 5000 male samples, 5000 female samples, 1000 samples aged 0-10, 1000 aged 10-20, 1000 aged 20-30, and so on.
In this step, first feature extraction and second feature extraction are performed on the N training samples to obtain N first features and N second features corresponding to them.
Step S22: adjusting the feature dimension of the first feature based on the feature type of the second feature using the feature conversion model in training, and performing nonlinear mapping on the adjusted first feature to obtain the conversion feature.
After the N first features and N second features are obtained, the feature conversion model in training adjusts the feature dimension of each first feature based on the feature type of the corresponding second feature and applies nonlinear mapping to the adjusted first feature, yielding the N conversion features.
In a specific implementation manner, the feature transformation model of the embodiment includes a first fully connected layer, a first activation function, a second fully connected layer, and a second activation function which are hierarchically interconnected.
First, feature dimension adjustment and nonlinear mapping are performed on the first feature of each training sample through the first fully connected layer and first activation function of the feature conversion model in training, producing an adjusted feature. The adjusted feature is then passed through the second fully connected layer and second activation function in turn, which adjust the feature dimension and apply nonlinear mapping again, yielding the conversion feature corresponding to that first feature. The fully connected layers adjust the feature dimension of the first feature based on the dimension of the second feature, or on the extraction dimension of the feature extraction algorithm corresponding to the second feature; the activation functions perform the nonlinear mapping on the same basis.
In a specific application scenario, when the dimension of the second feature is 16 × 16 and that of the first feature is 20 × 20, the fully connected layers of the feature conversion model convert the dimension of the first feature to 16 × 16 based on the dimension of the second feature. The activation functions operate analogously.
By pairing two fully connected layers with two activation functions, this embodiment performs feature dimension adjustment and nonlinear mapping on the first feature twice in succession, which improves the accuracy and precision with which the feature conversion model adjusts features.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a feature transformation model in training in the embodiment of fig. 2.
The feature conversion model 30 in training of the present embodiment includes a first fully connected layer 31, a first activation function 32, a second fully connected layer 33, and a second activation function 34, interconnected layer by layer.
When the feature conversion model 30 in training receives the first feature of a training sample, feature dimension adjustment and nonlinear mapping are first performed on it through the first fully connected layer 31 and the first activation function 32 to obtain an adjusted feature; feature dimension adjustment and nonlinear mapping are then performed again on the adjusted feature through the second fully connected layer 33 and the second activation function 34 to obtain the conversion feature.
The trained feature conversion model finally obtained has the same structure as the feature conversion model 30 in training.
The activation function may be a PReLU activation function; that is, both the first activation function 32 and the second activation function 34 may be PReLU activations, whose learnable parameters help the model converge faster and improve the efficiency and reliability of feature conversion. In other embodiments, other activation functions may also be used.
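A minimal PyTorch rendering of this structure is sketched below; the patent fixes only the ordering of the two fully connected layers and two PReLU activations, so the input, hidden, and output widths are assumed for illustration.

```python
import torch
import torch.nn as nn

class FeatureConversionModel(nn.Module):
    def __init__(self, first_dim=512, hidden_dim=1024, second_dim=2048):
        super().__init__()
        self.fc1 = nn.Linear(first_dim, hidden_dim)   # first fully connected layer 31
        self.act1 = nn.PReLU()                        # first activation function 32
        self.fc2 = nn.Linear(hidden_dim, second_dim)  # second fully connected layer 33
        self.act2 = nn.PReLU()                        # second activation function 34

    def forward(self, first_feature):
        adjusted = self.act1(self.fc1(first_feature))  # first dimension adjustment + nonlinear map
        return self.act2(self.fc2(adjusted))           # second pass yields the conversion feature

model = FeatureConversionModel()
conversion = model(torch.randn(8, 512))  # -> shape (8, 2048)
```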
Step S23: determining, for each of the N training samples, the individual feature difference between its second feature and its conversion feature; combining the individual feature differences to determine the overall feature difference of the N training samples; and adjusting the feature conversion model in training using the overall feature difference to obtain the trained feature conversion model.
After the N conversion features are obtained, the individual feature difference between the second feature and the conversion feature of each of the N training samples is determined; the individual feature differences are then combined to determine the overall feature difference of the N training samples; finally, the feature conversion model in training is adjusted using the overall feature difference to obtain the trained feature conversion model.
Specifically, determining the individual feature difference between the second feature and the conversion feature of a training sample may include: transposing the conversion feature of the sample's first feature to obtain the transpose of the conversion feature, and multiplying the conversion feature by this transpose to obtain a first product; transposing the corresponding second feature to obtain the transpose of the second feature, and multiplying the second feature by this transpose to obtain a second product; and determining the individual feature difference of the sample based on the difference between the first product and the second product.
After the individual feature differences of the N training samples are obtained in this way, the overall feature difference of the N training samples is determined by calculating the mean of the individual feature differences.
In a specific embodiment, the overall feature difference between each second feature and the corresponding conversion feature may be calculated by the following formula:
$$L_{dis} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{ABS}\big(f(F_1)\, f(F_1)' - F_2\, F_2'\big)$$

where $L_{dis}$ is the overall feature difference between the second features and the conversion features of the corresponding first features, $N$ is the number of training samples, $f(F_1)$ is the conversion feature of the first feature, $f(F_1)'$ is the transpose of the conversion feature of the first feature, $F_2$ is the second feature, $F_2'$ is the transpose of the second feature, ABS denotes the absolute value, and the sum runs over the N training samples. $\mathrm{ABS}\big(f(F_1)\, f(F_1)' - F_2\, F_2'\big)$ is the individual feature difference.
Here N, the number of training samples, is also the number of first features and of second features: when there are 10 training samples, there are likewise 10 first features and 10 second features, and N is 10.
In other embodiments, the step of calculating the overall characteristic difference between each second characteristic and the corresponding conversion characteristic may also be performed by using other formulas, which are not limited herein.
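For illustration, the following NumPy sketch evaluates the formula above under the assumption that each feature is a row vector, so that multiplying a feature by its transpose yields one scalar per sample; the shapes and random inputs are placeholders.

```python
import numpy as np

def overall_feature_difference(converted, second):
    # converted, second: arrays of shape (N, D), one feature row per sample.
    first_products = np.sum(converted * converted, axis=1)   # f(F1) * f(F1)'
    second_products = np.sum(second * second, axis=1)        # F2 * F2'
    individual = np.abs(first_products - second_products)    # per-sample ABS(...)
    return individual.mean()                                 # mean over the N samples

N, D = 10, 2048
L_dis = overall_feature_difference(np.random.randn(N, D), np.random.randn(N, D))
print(L_dis)
```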
Finally, the parameters of the feature conversion model in training are adjusted using the overall feature difference to obtain the trained feature conversion model. Because training in this step uses the overall feature difference derived from the individual feature differences, it adds overall-difference supervision while still accounting for the individual feature differences; this double supervision improves the precision of the final feature conversion model.
In a specific application scenario, after the conversion feature corresponding to each first feature is obtained, the difference between each training sample's second feature and its conversion feature may instead be computed with a distribution loss function, which can characterize the distributional difference between each conversion feature and its second feature, or between the conversion features and the second features as wholes. Supervising the training of the feature conversion model with a distribution loss thus adds whole-sample distribution training on top of per-sample difference training, and this double supervision can improve the precision of the final feature conversion model.
A trained image feature conversion model of this kind can accurately convert image features extracted by two different image recognition algorithms, widening the application range and scenarios of the corresponding image feature extraction models: image recognition can be performed either directly on the features a model extracts, or on those features after conversion.
Through the above steps, the feature extraction method of this embodiment performs first and second feature extraction on the training sample to obtain a first feature and a second feature of different feature types, then uses the feature conversion model in training to adjust the feature dimension of the first feature based on the feature type of the second feature and to apply nonlinear mapping to the adjusted first feature, obtaining the conversion feature; performing the dimension adjustment and nonlinear mapping twice, through paired fully connected layers and activation functions, improves the conversion precision and accuracy of the model. The method then determines the individual feature difference between the second feature and the conversion feature of each of the N training samples, combines these individual differences into the overall feature difference, and adjusts the feature conversion model in training with the overall difference to obtain the trained model; training on the overall feature difference while still accounting for individual differences provides double supervision that improves the precision of the final feature conversion model. Because the feature conversion model can convert feature types, the application range and scenarios of features are widened: when the feature extraction algorithm is replaced, features need not be re-extracted; existing features are converted into directly applicable conversion features, which reduces application difficulty, saves the storage space otherwise needed for multiple kinds of features, shortens the difficulty and cycle of algorithm replacement, and improves the efficiency and reliability of feature application. Moreover, the training samples need no identity labels, which avoids the impact of erroneous identity labels on model precision, simplifies the training data, and improves training efficiency and accuracy. The final feature conversion model is simple, easy to train and deploy, and fast in actual use.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an image feature extraction method according to an embodiment of the present invention.
Step S41: acquiring an image to be processed, and performing feature extraction on it to obtain the image features of the image to be processed.
Firstly, an image to be processed is obtained, and feature extraction is carried out on the image to be processed to obtain the image features of the image to be processed. The image feature extraction algorithm for feature extraction may include an LBP (Local Binary Pattern) algorithm, a Haar-like feature algorithm, or a Histogram of Oriented Gradients (HOG) feature algorithm, and the like, and is not limited herein.
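As a hedged example, the sketch below computes HOG and LBP descriptors, two of the classical algorithms named above, using scikit-image; the image source, parameter values, and histogram binning are assumptions rather than anything the patent prescribes.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

image = np.random.rand(128, 128)  # stand-in grayscale image to be processed

# HOG: a single descriptor vector for the whole image.
hog_features = hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

# LBP: per-pixel codes, summarized as a histogram feature vector.
lbp_codes = local_binary_pattern(image, P=8, R=1.0)
lbp_features, _ = np.histogram(lbp_codes, bins=256, range=(0, 256))
```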
Step S42: converting the image features through the image feature conversion model to obtain converted image features of a feature type different from that of the image features.
The image features are converted through the image feature conversion model to obtain converted image features whose feature type differs from that of the original image features. The converted image features can then be used in a variety of application scenarios, such as feature recognition or feature comparison.
The image feature conversion model of this embodiment includes the feature conversion model of any of the feature extraction method embodiments above, so it can convert image feature types and thereby widen the application range and scenarios of image features. When the image feature extraction algorithm is replaced, features need not be re-extracted; the existing image features are converted by the image feature conversion model into directly applicable converted features, which reduces the difficulty of applying image features, saves the storage space otherwise needed for multiple kinds of image features, shortens the difficulty and cycle of replacing image feature extraction algorithms, and improves the efficiency and reliability of image feature application.
Referring to fig. 5, fig. 5 is a flowchart illustrating a target identification method according to an embodiment of the present invention.
Step S51: acquiring an object to be recognized, and performing feature extraction on it to obtain target features of the target to be recognized in the object to be recognized.
An object to be recognized is first acquired and its features extracted to obtain the target features of the target to be recognized within it. The object to be recognized in this embodiment may be data in any form, such as an image, audio, a fingerprint, or a smell; the target to be recognized may be chosen based on actual requirements, for example a human face, a target frequency, or a target molecule, and is not limited herein.
In a specific application scenario, when the object to be recognized is an image, an image of the target may be captured in real time by a camera installed at a designated place, and feature extraction performed on it to obtain the target features in real time. In another specific application scenario, feature extraction may be deferred until the number of images of the target captured by the camera reaches a preset number, at which point features are extracted from that preset number of images to obtain the target features.
Step S52: inputting the target features into the feature conversion model for feature conversion to obtain conversion features of a feature type different from that of the target features.
The target features of the object to be recognized are input into the feature conversion model for feature conversion to obtain conversion features. The feature conversion model here is a model trained as in any of the embodiments above; it converts the target features into conversion features whose feature type differs from that of the target features.
Step S53: identifying the target to be recognized using the conversion features.
In a specific embodiment, identifying the target to be recognized using the conversion features may include: identifying the target based on the similarity between the conversion feature and at least one standard feature, where the at least one standard feature may be a standard feature stored in a recognition base library.
The feature type of the conversion feature is the same as that of the standard features: converting the target feature through the feature conversion model yields a conversion feature of the same type as the standard features, which makes it convenient to recognize the target using the at least one standard feature.
In a specific embodiment, the similarity between the conversion feature and each standard feature may be determined separately; in response to a target similarity greater than a preset threshold existing among these similarities, the label of the standard feature corresponding to the target similarity is determined as the label of the target to be recognized, completing the recognition. The preset threshold may be set based on actual conditions and is not limited herein.
The preset label includes one or more of identity information, operation authority, and identity authority, or other label information.
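A minimal sketch of this matching step follows, assuming cosine similarity and an in-memory base library; the names, the threshold value, and the choice to return the best match above the threshold are illustrative.

```python
import numpy as np

def identify(conversion_feature, base_library, preset_threshold=0.8):
    # base_library: (standard_feature, preset_label) pairs whose feature
    # type matches that of the conversion feature.
    best_label, best_similarity = None, preset_threshold
    for standard_feature, label in base_library:
        similarity = np.dot(conversion_feature, standard_feature) / (
            np.linalg.norm(conversion_feature) * np.linalg.norm(standard_feature))
        if similarity > best_similarity:  # a target similarity above the threshold
            best_similarity, best_label = similarity, label
    return best_label  # e.g. identity information or operation/identity authority

library = [(np.random.randn(2048), {"identity": "person_a"})]
print(identify(np.random.randn(2048), library))
```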
In another specific embodiment, identifying the target to be recognized using the conversion features may instead include: recognizing the target directly from the conversion features with a recognition algorithm or recognition model matched to the feature type of the conversion features, to obtain the recognition result.
Through the above steps, the target recognition method of this embodiment uses the feature conversion model to convert the feature type of the target features, widening their application range: recognition can be performed either on the target features themselves or on their converted conversion features. This avoids repeated feature extraction when the recognition algorithm is replaced, reduces recognition complexity, and saves storage space.
In other embodiments, before the step of acquiring the object to be recognized and extracting its features, the method further includes: acquiring a plurality of standard images, extracting features from them to obtain standard features, and storing the standard features together with their corresponding label information to form the recognition base library. When the features of the standard images are extracted in this step, the feature extraction algorithm used differs from the one used in step S51.
The feature conversion model of this embodiment converts the target features to the feature type of the standard features so that the two match, which facilitates comparison and recognition, improves recognition efficiency, and reduces recognition difficulty.
Based on the same inventive concept, the present invention further provides an electronic device, which can be executed to implement the feature extraction method or the image feature extraction method or the object recognition method of any of the above embodiments, please refer to fig. 6, where fig. 6 is a schematic structural diagram of an embodiment of the electronic device provided by the present invention, and the electronic device includes a processor 61 and a memory 62.
The processor 61 is configured to execute program instructions stored in the memory 62 to implement the steps of any of the feature extraction method, image feature extraction method, or target recognition method embodiments above. In a specific implementation scenario, the electronic device may include, but is not limited to, a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 61 is configured to control itself and the memory 62 to implement the steps of any of the above embodiments. The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip having signal processing capabilities. The processor 61 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or any conventional processor. In addition, the processor 61 may be implemented jointly by multiple integrated circuit chips.
According to the scheme, the type conversion of the features can be realized by using the trained feature conversion model, and the application range and the application scene of the features are improved.
Based on the same inventive concept, the present invention further provides a computer-readable storage medium; please refer to fig. 7, a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present invention. The computer-readable storage medium 70 stores at least one piece of program data 71, which can be executed to implement any of the methods described above. In one embodiment, the computer-readable storage medium 70 may be a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the embodiments provided in the present invention, it should be understood that the disclosed method and apparatus can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (12)

1. A method of feature extraction, comprising:
acquiring a training sample, and respectively performing first feature extraction and second feature extraction on the training sample to obtain first features and second features with different feature types;
performing feature conversion on the first feature based on the feature type of the second feature by using a feature conversion model in training to obtain a conversion feature of the first feature;
and adjusting the feature conversion model in the training based on the difference between the second feature and the conversion feature to obtain the trained feature conversion model.
2. The feature extraction method according to claim 1, wherein the number of the training samples includes N, where N is an integer greater than 1;
adjusting the feature conversion model in the training based on the difference between the second feature and the conversion feature to obtain a trained feature conversion model, including:
respectively determining individual feature differences between second features and conversion features corresponding to each training sample in the N training samples;
the individual characteristic difference of each training sample is integrated, and the overall characteristic difference of the N training samples is determined;
and adjusting the feature conversion model in the training by using the overall feature difference to obtain the trained feature conversion model.
3. The feature extraction method according to claim 2, wherein the determining the overall feature difference of the N training samples by integrating the individual feature difference of each training sample comprises:
and determining the overall characteristic difference of the N training samples by calculating the average value of the individual characteristic differences of the training samples.
4. The method according to claim 2, wherein the determining the individual feature difference between the second feature and the transformed feature corresponding to each of the N training samples comprises:
transposing the conversion feature of the first feature to obtain a transpose of the conversion feature, and multiplying the conversion feature by the transpose of the conversion feature to obtain a first product; and
transposing the second feature to obtain a transpose of the second feature, and multiplying the second feature by the transpose of the second feature to obtain a second product;
determining the individual feature difference based on a difference between the first product and the second product.
5. The feature extraction method according to any one of claims 1 to 4, wherein the step of performing feature conversion on the first feature based on the feature type of the second feature by using the feature conversion model in training to obtain the conversion feature of the first feature comprises:
adjusting the feature dimension of the first feature based on the feature type of the second feature by using the feature conversion model in training, and performing nonlinear mapping on the adjusted first feature to obtain the conversion feature.
6. An image feature extraction method, characterized by comprising:
acquiring an image to be processed, and performing feature extraction on the image to be processed to obtain image features of the image to be processed;
converting the image features through an image feature conversion model to obtain conversion image features different from feature types of the image features;
wherein the image feature conversion model comprises the feature conversion model in the feature extraction method of any one of claims 1 to 5.
7. An object recognition method, characterized in that the object recognition method comprises:
acquiring an object to be identified, and performing feature extraction on the object to be identified to obtain target features of the target to be identified in the object to be identified;
inputting the target features into a feature conversion model for feature conversion to obtain conversion features different from feature types of the target features;
identifying the target to be identified by using the conversion characteristics;
wherein the feature transformation model comprises the feature transformation model in the feature extraction method of any one of claims 1 to 5.
8. The target identification method according to claim 7, wherein the identifying the target to be identified by using the conversion feature comprises:
identifying the target to be identified based on a similarity between the conversion feature and at least one standard feature;
wherein the feature type of the conversion feature is the same as the feature type of the standard feature.
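Claim 8 does not fix a similarity measure; cosine similarity is one common, assumed choice:

    import numpy as np

    def cosine_similarity(conversion_feature, standard_feature):
        # Both features are assumed to share the same feature type and dimension
        num = float(np.dot(conversion_feature, standard_feature))
        den = float(np.linalg.norm(conversion_feature) * np.linalg.norm(standard_feature))
        return num / den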
9. The target identification method according to claim 8, wherein the standard features comprise features having preset labels;
the identifying the target to be identified based on the similarity between the conversion feature and the at least one standard feature comprises:
respectively determining the similarity between the conversion feature and each standard feature; and
in response to a target similarity greater than a preset threshold existing among the similarities between the conversion feature and the standard features, determining the label of the standard feature corresponding to the target similarity as the label of the target to be identified;
wherein the preset labels comprise one or more of identity information, operation permissions, and identity permissions.
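The matching rule of claim 9 could look like the following sketch, again assuming cosine similarity; identify, standard_features, labels, and threshold are illustrative names:

    import numpy as np

    def identify(conversion_feature, standard_features, labels, threshold):
        # Similarity between the conversion feature and each standard feature
        sims = [
            float(np.dot(conversion_feature, s)
                  / (np.linalg.norm(conversion_feature) * np.linalg.norm(s)))
            for s in standard_features
        ]
        best = int(np.argmax(sims))
        # A target similarity greater than the preset threshold yields the
        # label of the corresponding standard feature
        if sims[best] > threshold:
            return labels[best]  # e.g. identity information or a permission
        return None              # no standard feature matched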
10. The target identification method according to claim 7, wherein the identifying the target to be identified by using the conversion feature comprises:
identifying the target to be identified by using the conversion feature based on an identification algorithm or an identification model;
wherein the identification algorithm or the identification model matches the feature type of the conversion feature.
11. An electronic device, characterized in that the electronic device comprises a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the feature extraction method according to any one of claims 1 to 5, the image feature extraction method according to claim 6, or the target identification method according to any one of claims 7 to 10.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program data executable to implement the feature extraction method according to any one of claims 1 to 5, the image feature extraction method according to claim 6, or the target identification method according to any one of claims 7 to 10.
CN202111478775.6A 2021-12-06 2021-12-06 Correlation method for feature extraction, target identification method, correlation device and apparatus Active CN114417959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111478775.6A CN114417959B (en) 2021-12-06 2021-12-06 Correlation method for feature extraction, target identification method, correlation device and apparatus

Publications (2)

Publication Number Publication Date
CN114417959A true CN114417959A (en) 2022-04-29
CN114417959B CN114417959B (en) 2022-12-02

Family

ID=81264806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111478775.6A Active CN114417959B (en) 2021-12-06 2021-12-06 Correlation method for feature extraction, target identification method, correlation device and apparatus

Country Status (1)

Country Link
CN (1) CN114417959B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1161753A (en) * 1994-10-24 1997-10-08 Olin Corporation Model predictive control apparatus and method
US20200085382A1 (en) * 2017-05-30 2020-03-19 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
CN111914908A (en) * 2020-07-14 2020-11-10 浙江大华技术股份有限公司 Image recognition model training method, image recognition method and related equipment
CN112633154A (en) * 2020-12-22 2021-04-09 云南翼飞视科技有限公司 Method and system for converting heterogeneous face feature vectors
CN112801014A (en) * 2021-02-08 2021-05-14 深圳市华付信息技术有限公司 Feature comparison identification method compatible with models of different versions
CN112990432A (en) * 2021-03-04 2021-06-18 北京金山云网络技术有限公司 Target recognition model training method and device and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
VIET-DUY NGUYEN et al.: "Exploring Facial Differences in European Countries Boundary by Fine-Tuned Neural Networks", 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) *
YAN BAI et al.: "Dual-Tuning: Joint Prototype Transfer and Structure Regularization for Compatible Feature Learning", https://arxiv.org/pdf/2108.02959.pdf *
JI Zhong et al.: "Few-shot learning based on self-attention and auto-encoder", Journal of Tianjin University (Science and Technology) *

Also Published As

Publication number Publication date
CN114417959B (en) 2022-12-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant