CN111488936B - Feature fusion method and device and storage medium - Google Patents

Feature fusion method and device and storage medium

Info

Publication number
CN111488936B
Authority
CN
China
Prior art keywords
feature
features
identified
fusion
tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010290730.5A
Other languages
Chinese (zh)
Other versions
CN111488936A (en)
Inventor
朱金华
徐�明
熊凡
陈婷
徐丽华
王强
裴卫斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Original Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZNV Technology Co Ltd, Nanjing ZNV Software Co Ltd filed Critical Shenzhen ZNV Technology Co Ltd
Priority to CN202010290730.5A
Publication of CN111488936A
Application granted
Publication of CN111488936B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a feature fusion method, a device, and a storage medium. If, in a preset tag feature database, at least two features have a similarity to a feature to be identified that is greater than or equal to a preset threshold, the at least two features may all belong to one tag class. The at least two features are therefore scored, and the tag of the highest-scoring feature among them is used as the tag of the feature to be identified, so that features that may belong to the same class gradually concentrate under one tag, and the multiple tag classes that should be a single class are gradually corrected into one tag class.

Description

Feature fusion method and device and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a feature fusion method and device and a storage medium.
Background
With the development of artificial intelligence technology, the demand for intelligent processing of information such as images, texts, and speech keeps increasing, and labeling features in such information with tags provides a basis for subsequent intelligent processing. For example, in the security and social-governance industries, deep neural networks are constantly extracting countless face and human-body features, and their real identity IDs in a real-name list library, or their archive IDs in a virtual archive, are identified by feature comparison: the similarity between the face or body feature and each feature in a candidate database is computed, and the tag (real identity ID or archive ID) of the feature with the largest similarity is used as the tag of that face or body feature, providing a basic step for later retrieval or recognition of face or body images.
For a new feature extracted from an image whose tag is still to be assigned, the prior art classifies the new feature by computing its similarity to features that already carry class tags; the class tag of the most similar feature becomes the tag of the new feature. This can fail for face features: two face features of the same person in two different forms (for example, frontal versus side face, or eyes open versus eyes closed), or two face features extracted from two photos of the same person by an insufficiently strong feature extractor, may have low similarity. If similarity alone is the basis of classification, these two face features cannot be labeled with the same tag and end up under tags of different classes. When new face features of the same person appear later, their similarity to each of the two features is computed and each of the two may score highest for some of the new features, so the number of features under both classes grows and the final feature recognition effect suffers. Therefore, multiple tag classes that are originally one class need to be corrected into a single tag class.
Disclosure of Invention
The invention mainly addresses the technical problem of accurately labeling features with tags.
According to a first aspect, in one embodiment, a feature fusion method is provided, including:
acquiring an image and extracting features to be identified in the image;
if the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, finding the highest-scoring feature among the at least two features, labeling the feature to be identified with the tag of the highest-scoring feature, fusing the highest-scoring feature with the feature to be identified, and adding the fused feature to the preset tag feature database.
Further, the method also includes:
if the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, adding the feature to be identified to the preset tag feature database and labeling the feature to be identified with a new tag;
if exactly one feature in the preset tag feature database has a similarity to the feature to be identified that is greater than or equal to the preset threshold, labeling the feature to be identified with the tag of that feature, fusing the feature with the feature to be identified, and adding the fused feature to the preset tag feature database.
Further, finding the highest-scoring feature among the at least two features includes:
scoring each feature according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its most recent fusion.
Further, the score for each of the at least two features is obtained by the following formula:
V_k = s_k + Δ_k

wherein V_k is the score of the k-th feature of the at least two features, and s_k is the similarity between the k-th feature of the at least two features and the feature to be identified;

Δ_k is the gain of the k-th feature of the at least two features, Δ_k = (1 - s_k) * (C_k / Sum) + (-ln(I_k / 5)), wherein C_k is the fused-feature count of the tag corresponding to the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's most recent fusion time when the at least two features are ordered from most recently fused to least recently fused.
Further, if the preset tag feature database contains no features, the feature to be identified is added to the preset tag feature database and labeled with a new tag.
Further, the image is a face image, and the feature to be identified is a face feature.
Further, the similarity between a feature in the preset tag feature database and the feature to be identified is computed with a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
According to a second aspect, in one embodiment there is provided a feature fusion device comprising:
an acquisition module, configured to acquire an image and extract the feature to be identified from the image;
a feature fusion module, configured to, when the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, find the highest-scoring feature among the at least two features, label the feature to be identified with the tag of the highest-scoring feature, fuse the highest-scoring feature with the feature to be identified, and add the fused feature to the preset tag feature database.
According to a third aspect, an embodiment provides a product comprising:
a memory for storing a program;
and a processor, configured to implement the method according to the above embodiment by executing the program stored in the memory.
According to a fourth aspect, an embodiment provides a computer readable storage medium including a program executable by a processor to implement the method described in the above embodiments.
According to the feature fusion method, device, and storage medium of the above embodiments, if at least two features in the preset tag feature database have a similarity to the feature to be identified that is greater than or equal to the preset threshold, the at least two features may belong to one tag class. Because the at least two features are scored and the tag of the highest-scoring feature is used as the tag of the feature to be identified, features that may belong to the same class gradually concentrate under one tag, so that multiple tag classes that should be a single class are gradually corrected into one tag class.
Drawings
FIG. 1 is a flow chart of a feature fusion method of an embodiment;
FIG. 2 is a flow chart of a feature fusion method according to another embodiment;
FIG. 3 is a schematic diagram of a feature fusion device of an embodiment.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments, in which like elements in different embodiments share associated reference numerals. In the following embodiments, numerous specific details are set forth to provide a better understanding of the present application. However, one skilled in the art will readily recognize that, in different situations, some of these features may be omitted or replaced by other elements, materials, or methods. In some instances, certain operations related to the present application are not shown or described in the specification, to avoid obscuring the core of the present application; a detailed description of such operations is unnecessary, since a person skilled in the art can fully understand them from the description herein and general knowledge in the field.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner in various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or modified in a manner apparent to those skilled in the art. Thus, the orders given in the description and drawings are only for clearly describing certain embodiments and are not required orders unless otherwise indicated.
The numbering of components herein, e.g. "first", "second", etc., is used merely to distinguish the described objects and carries no sequential or technical meaning. The terms "connected" and "coupled", as used herein, encompass both direct and indirect connection (coupling) unless otherwise indicated.
In the security field, face images often need to be collected, and newly collected face images are classified against the already-classified face images in a database; face images of the same class represent the same person. This embodiment takes the face features in face images as an example and describes the method of fusing face features in detail.
Referring to FIG. 1, FIG. 1 is a flowchart of a feature fusion method according to an embodiment, taking face features as an example. The method includes:
s10, acquiring an image containing a human face; the embodiment can collect the image containing the human face through the image collecting devices such as the monitoring camera.
S20: extract the face feature to be identified from the image containing the human face. This embodiment may extract the face feature to be identified with an existing face-feature extraction method, such as geometric-feature and template-matching methods, subspace-analysis methods, wavelet-theory-based face recognition methods, neural-network-based methods, or hidden-Markov-model-based methods; the extracted face feature is data in vector form.
S30: classify the face feature to be identified. In the prior art, the similarity between the face feature to be identified and each feature in a preset tag feature database is computed first, and the face feature to be identified is grouped with the database feature whose similarity exceeds a preset threshold. However, there may be more than one feature in the preset tag feature database whose similarity exceeds the threshold, there may be none at all, or there may be at least two; these cases are handled separately below.
In addition, the preset tag feature database in this embodiment is empty initially, so at first no similarity needs to be computed: the face feature to be identified is added directly to the preset tag feature database and labeled with a new tag, which may be a virtual ID. Any existing method may be used to compute feature similarity, such as a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
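For concreteness, the following is a minimal sketch of one of the similarity measures named above, cosine similarity between two feature vectors; the use of NumPy and the function name are illustrative choices, not prescribed by the patent:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two feature vectors, in [-1, 1]; larger means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```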
Embodiment one:
If the similarity between every feature in the preset tag feature database and the face feature to be identified is smaller than the preset threshold, the face feature to be identified is added to the preset tag feature database and labeled with a new tag. That is, no face image of the same person has been collected before, so the face feature to be identified is treated as a new class: it is labeled with a new tag and stored in the preset tag feature database, so that face images of the same person collected later can be classified against it.
Embodiment two:
This embodiment describes the case in which exactly one feature in the preset tag feature database has a similarity to the face feature to be identified that is greater than or equal to the preset threshold. In this case, the feature and the face feature to be identified whose similarity exceeds the threshold belong to one class, so the face feature to be identified is directly placed in that class, that is, it is labeled with the tag of that feature.
Face features belonging to the same class may have been collected while the same person was in different poses, for example a feature collected from the frontal face and a feature collected from the side face. If both the frontal-face feature and the side-face feature of the same person are stored in the preset tag database, the number of features in the database grows. Since the frontal and side features belong to the same person, they can be merged into one feature by feature fusion, which reduces the number of features in the database. Therefore, this embodiment also fuses the matching feature with the face feature to be identified and adds the fused feature to the preset tag feature database; a sketch of one possible fusion operator follows.
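The patent does not prescribe a concrete fusion operator. The sketch below uses one common convention for embedding vectors, a weighted average followed by L2 re-normalization, in which the stored feature is weighted by the number of original features it already aggregates; this operator and its parameterization are an assumption, not taken from the patent:

```python
import numpy as np

def fuse_features(stored: np.ndarray, new: np.ndarray, n_stored: int) -> np.ndarray:
    """Fold `new` into `stored`, which already aggregates `n_stored` original
    features; returns an L2-normalized merged vector (assumed operator)."""
    merged = (n_stored * stored + new) / (n_stored + 1)
    return merged / np.linalg.norm(merged)
```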
Embodiment III:
This embodiment describes the case in which at least two features in the preset tag feature database have a similarity to the face feature to be identified that is greater than or equal to the preset threshold. If the face feature to be identified were simply grouped with every feature at or above the threshold, occasional errors would inflate the number of classes and ultimately degrade the feature recognition effect. Therefore, this embodiment finds the highest-scoring feature among the at least two features, labels the face feature to be identified with the tag of the highest-scoring feature, fuses the highest-scoring feature with the face feature to be identified, and adds the fused feature to the preset tag feature database. Here, the at least two features are all of the features in the preset tag feature database whose similarity to the face feature to be identified is greater than or equal to the preset threshold.
When n (n >= 2) features (f1, f2, …, fk, …, fn) in the preset tag database have similarities (s1, s2, …, sk, …, sn) to the face feature to be identified that all reach the preset threshold, the features are scored, and the tag corresponding to the highest-scoring feature is used as the tag of the face feature to be identified. The tags corresponding to the features f1, f2, …, fk, …, fn are T1, T2, …, Tk, …, Tn.
In this embodiment, each feature is scored according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its most recent fusion. The fused-feature count of a tag is the number of feature fusions performed on the feature stored under that tag; each time a fusion is performed, the count increases by 1. Assume the fused-feature counts of tags T1, T2, …, Tk, …, Tn are C1, C2, …, Ck, …, Cn. The score of each of the at least two features is obtained by formula (1):

V_k = s_k + Δ_k    (1)

wherein V_k is the score of the k-th feature of the at least two features, and s_k is the similarity between the k-th feature of the at least two features and the feature to be identified;

Δ_k is the gain of the k-th feature of the at least two features, Δ_k = (1 - s_k) * (C_k / Sum) + (-ln(I_k / 5)), wherein C_k is the fused-feature count of the k-th feature's tag, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's most recent fusion time when the at least two features are ordered from most recently fused to least recently fused. For example, when the k-th feature is the most recently fused of the at least two features, I_k is 1; when it is the second most recently fused, I_k is 2, and so on.
Here, C_k / Sum provides positive excitation proportional to the fused-feature count: the larger the count, the larger C_k / Sum. The term -ln(I_k / 5) gives more weight to a more recent last fusion: for I_k = 1, 2, 3, 4, 5, 6 it takes the values 1.61, 0.92, 0.51, 0.22, 0, and -0.18, so the more recent the fusion, the stronger the positive excitation; the effect shrinks rapidly from the 2nd rank onward and turns into a weakening effect by the 6th rank.
This embodiment selects the feature whose score V_k (k = 1, 2, …, n) is the maximum: its tag is used as the tag of the face feature to be identified, and the face feature to be identified is fused with the feature corresponding to the maximum score.
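The scoring formula can be transcribed directly into code. The following Python sketch is illustrative, not part of the patent; the function and variable names are hypothetical:

```python
import math

def score(s_k: float, c_k: int, sum_c: int, i_k: int) -> float:
    """V_k = s_k + delta_k, where
    delta_k = (1 - s_k) * (C_k / Sum) + (-ln(I_k / 5))."""
    delta_k = (1.0 - s_k) * (c_k / sum_c) - math.log(i_k / 5.0)
    return s_k + delta_k
```

For example, score(0.98, 180, 2000, 5) evaluates to 0.98 + 0.02 * 0.09 + 0 = 0.9818, which is the score V_k of tag T5 in Table 1 below.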
In this embodiment, each time a feature fusion is performed, the fused-feature count of the corresponding tag is incremented by 1 and the time of its most recent fusion is updated.
In a specific example, six features in the preset tag feature database (with tags T1 through T6) have similarities to the feature x to be identified that are greater than or equal to the preset threshold, as shown in Table 1.
TABLE 1

                                  T1      T2      T3      T4      T5      T6
Similarity s_k                    0.95    0.96    0.97    0.96    0.98    0.98
Fused-feature count C_k           700     100     200     800     180     20
(total Sum = 2000)
C_k / Sum                         0.35    0.05    0.10    0.40    0.09    0.01
1 - s_k                           0.05    0.04    0.03    0.04    0.02    0.02
Last-fusion-time rank I_k         1       2       3       4       5       6
-ln(I_k / 5)                      1.609   0.916   0.510   0.223   0       -0.182
(1 - s_k) * (C_k / Sum)           0.0175  0.002   0.003   0.016   0.0018  0.0002
Δ_k                               0.0496  0.0203  0.0132  0.0204  0.0018  -0.0034
V_k                               0.9996  0.9803  0.9832  0.9804  0.9818  0.9765
As Table 1 shows, the features corresponding to tags T5 and T6 have the highest similarity (0.98) to the feature x to be identified, but their fused-feature counts are small and their last fusions are relatively old, so their final scores are only 0.9818 and 0.9765, respectively. The feature corresponding to tag T1 has the lowest similarity to x, but it has just participated in a fusion and its fused-feature count is large, so its final score is the highest.
Based on the above embodiments, refer to FIG. 2, which is a specific flowchart of a feature fusion method according to an embodiment. The method includes:
s11, acquiring an image through image acquisition equipment such as a camera.
S12, extracting the feature to be identified from the acquired image, for example, extracting the feature of the face to be identified from the face image.
S13: count the features in the preset tag database whose similarity to the feature to be identified is greater than or equal to a preset threshold. According to this embodiment, there are three cases, each handled by one of the following three steps (a combined sketch of this dispatch logic follows step S16):
s14, if no feature with similarity to the feature to be identified being greater than or equal to a preset threshold value exists in the preset tag database, adding the feature to be identified into the preset feature database, and labeling a new tag to the feature to be identified.
S15: if exactly one feature in the preset tag database has a similarity to the feature to be identified that is greater than or equal to the preset threshold, label the feature to be identified with the tag of that feature and perform feature fusion.
S16: if at least two features in the preset tag database have a similarity to the feature to be identified that is greater than or equal to the preset threshold, find the highest-scoring feature among the at least two features, label the feature to be identified with the tag of the highest-scoring feature, and perform feature fusion.
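Putting the three cases together, the following is a sketch of the S13-S16 dispatch. It reuses the cosine_similarity, fuse_features, and score sketches above; the database layout (a dict keyed by tag) and all names are assumptions, not taken from the patent:

```python
import time

def classify(feature, db, threshold, next_tag):
    """db maps tag -> dict(vec=..., fused_count=..., last_fused=...)."""
    sims = {tag: cosine_similarity(feature, e["vec"]) for tag, e in db.items()}
    matches = {tag: s for tag, s in sims.items() if s >= threshold}
    if not matches:
        # S14: no feature reaches the threshold -> store under a new tag
        db[next_tag] = dict(vec=feature, fused_count=0, last_fused=time.time())
        return next_tag
    # S15/S16: rank matches by recency of last fusion (most recent: I_k = 1),
    # score them, and fuse into the highest-scoring tag. With a single match
    # the max is trivially that tag, which is exactly case S15.
    order = sorted(matches, key=lambda t: db[t]["last_fused"], reverse=True)
    i_k = {tag: rank + 1 for rank, tag in enumerate(order)}
    sum_c = sum(db[t]["fused_count"] for t in matches) or 1
    best = max(matches,
               key=lambda t: score(matches[t], db[t]["fused_count"], sum_c, i_k[t]))
    entry = db[best]
    entry["vec"] = fuse_features(entry["vec"], feature, entry["fused_count"] + 1)
    entry["fused_count"] += 1           # each fusion increments the tag's count
    entry["last_fused"] = time.time()   # and refreshes its last-fusion time
    return best
```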
Embodiment four:
referring to fig. 3, the present embodiment further provides a feature fusion device, which includes an obtaining module 10 and a feature fusion module 20.
The acquisition module 10 is configured to obtain a captured image and extract the feature to be identified from the image with an existing feature extraction method. The acquisition module 10 obtains the captured image from an image acquisition device such as a camera and extracts the face feature to be identified from the face image with an existing feature extraction algorithm; the face feature to be identified in this embodiment is a data vector.
The feature fusion module 20 is configured to, when the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, add the feature to be identified to the preset tag feature database and label it with a new tag. In this embodiment, the similarity between each face feature in the preset tag feature database and the face feature to be identified is computed; if every similarity is below the preset threshold, none of the database features belongs to the same class as the face feature to be identified, so the face feature to be identified is labeled with a new tag and placed in a class of its own.
When exactly one feature in the preset tag feature database has a similarity to the feature to be identified that is greater than or equal to the preset threshold, the feature to be identified is labeled with the tag of that feature, the feature is fused with the feature to be identified, and the fused feature is added to the preset tag feature database; that is, if only one database feature is similar to the face feature to be identified, the two are placed in the same class.
When at least two features in the preset tag feature database have a similarity to the feature to be identified that is greater than or equal to the preset threshold, the highest-scoring feature is found among the at least two features, the feature to be identified is labeled with the tag of the highest-scoring feature, the highest-scoring feature is fused with the feature to be identified, and the fused feature is added to the preset tag feature database. That is, when several features reach the threshold, one of them is selected by computing scores and the feature to be identified is grouped with it; the scoring method is described in the above embodiments and is not repeated here.
In this embodiment, each feature is scored according to the fused-feature count of its tag and the time of its most recent fusion. The fused-feature count of a tag is the number of fusions performed on the feature under that tag; the more fusions a feature has undergone and the more recent its last fusion, the more active the feature, so it is given a higher weight and a higher score. Thus, even if errors occur occasionally, the features spread across multiple tags of the same class concentrate into one class over time, so the errors are gradually corrected.
The method in the above embodiments may be implemented by hardware; this embodiment provides a product including a memory and a processor. The processor may be an integrated circuit chip with signal-processing capability, or a general-purpose microprocessor (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor or any conventional processor. For further functions and steps performed by the processor in this embodiment, refer to the description of the feature fusion method embodiments, which is not repeated here.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions are implemented by a computer program, the program may be stored in a computer-readable storage medium, such as a read-only memory, a random access memory, a magnetic disk, an optical disc, or a hard disk, and the functions are realized when the program is executed by a computer. For example, the program may be stored in the memory of a device, and the functions are realized when the program in the memory is executed by a processor. Alternatively, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash disk, or a removable hard disk, and downloaded or copied into the memory of a local device, or used to update the local device's system version; the functions are realized when the program in the local memory is executed by a processor.
The foregoing description of the invention has been presented for purposes of illustration and description and is not intended to be limiting. A person skilled in the art to which the invention pertains may make several simple deductions, modifications, or substitutions based on the idea of the invention.

Claims (8)

1. A method of feature fusion, comprising:
acquiring an image and extracting features to be identified in the image;
if the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, finding the highest-scoring feature among the at least two features, labeling the feature to be identified with the tag of the highest-scoring feature, fusing the highest-scoring feature with the feature to be identified, and adding the fused feature to the preset tag feature database; if the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, adding the feature to be identified to the preset tag feature database and labeling the feature to be identified with a new tag;
if exactly one feature in the preset tag feature database has a similarity to the feature to be identified that is greater than or equal to the preset threshold, labeling the feature to be identified with the tag of that feature, fusing the feature with the feature to be identified, and adding the fused feature to the preset tag feature database;
the score for each of the at least two features is obtained by the following formula:
V_k = s_k + Δ_k

wherein V_k is the score of the k-th feature of the at least two features, and s_k is the similarity between the k-th feature of the at least two features and the feature to be identified;

Δ_k is the gain of the k-th feature of the at least two features, Δ_k = (1 - s_k) * (C_k / Sum) + (-ln(I_k / 5)), wherein C_k is the fused-feature count of the tag corresponding to the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's most recent fusion time when the at least two features are ordered from most recently fused to least recently fused.
2. The feature fusion method of claim 1, wherein finding the highest-scoring feature among the at least two features comprises:
scoring each feature according to the fused-feature count of the tag corresponding to each of the at least two features and the time of its most recent fusion.
3. The feature fusion method of claim 1, wherein, if the preset tag feature database contains no features, the feature to be identified is added to the preset tag feature database and labeled with a new tag.
4. A method of feature fusion as claimed in any one of claims 1 to 3, in which the image is a face image and the feature to be identified is a face feature.
5. A feature fusion method according to any one of claims 1 to 3, wherein the similarity between a feature in the preset tag feature database and the feature to be identified is computed with a Euclidean distance algorithm, a Pearson correlation coefficient algorithm, or a cosine distance algorithm.
6. A feature fusion device, comprising:
an acquisition module, configured to acquire an image and extract the feature to be identified from the image;
a feature fusion module, configured to, when the similarity between at least two features in a preset tag feature database and the feature to be identified is greater than or equal to a preset threshold, find the highest-scoring feature among the at least two features, label the feature to be identified with the tag of the highest-scoring feature, fuse the highest-scoring feature with the feature to be identified, and add the fused feature to the preset tag feature database;
if the similarity between every feature in the preset tag feature database and the feature to be identified is smaller than the preset threshold, add the feature to be identified to the preset tag feature database and label the feature to be identified with a new tag;
if exactly one feature in the preset tag feature database has a similarity to the feature to be identified that is greater than or equal to the preset threshold, label the feature to be identified with the tag of that feature, fuse the feature with the feature to be identified, and add the fused feature to the preset tag feature database;
the score for each of the at least two features is obtained by the following formula:
V_k = s_k + Δ_k

wherein V_k is the score of the k-th feature of the at least two features, and s_k is the similarity between the k-th feature of the at least two features and the feature to be identified;

Δ_k is the gain of the k-th feature of the at least two features, Δ_k = (1 - s_k) * (C_k / Sum) + (-ln(I_k / 5)), wherein C_k is the fused-feature count of the tag corresponding to the k-th feature, Sum is the sum of the fused-feature counts of all of the at least two features, and I_k is the rank of the k-th feature's most recent fusion time when the at least two features are ordered from most recently fused to least recently fused.
7. A product, characterized by comprising:
a memory for storing a program;
a processor for implementing the method according to any one of claims 1-5 by executing a program stored in said memory.
8. A computer readable storage medium comprising a program executable by a processor to implement the method of any one of claims 1-5.
CN202010290730.5A 2020-04-14 2020-04-14 Feature fusion method and device and storage medium Active CN111488936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010290730.5A CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010290730.5A CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Publications (2)

Publication Number   Publication Date
CN111488936A (en)    2020-08-04
CN111488936B (en)    2023-07-28

Family

ID=71797998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010290730.5A Active CN111488936B (en) 2020-04-14 2020-04-14 Feature fusion method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111488936B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993102A (en) * 2019-03-28 2019-07-09 北京达佳互联信息技术有限公司 Similar face retrieval method, apparatus and storage medium
CN110222566A (en) * 2019-04-30 2019-09-10 北京迈格威科技有限公司 A kind of acquisition methods of face characteristic, device, terminal and storage medium
CN110263703A (en) * 2019-06-18 2019-09-20 腾讯科技(深圳)有限公司 Personnel's flow statistical method, device and computer equipment
CN110348362A (en) * 2019-07-05 2019-10-18 北京达佳互联信息技术有限公司 Label generation, method for processing video frequency, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111488936A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
US20240168993A1 (en) Analyzing content of digital images
US9798956B2 (en) Method for recognizing target object in image, and apparatus
CN110909725B (en) Method, device, equipment and storage medium for recognizing text
Xian et al. Latent embeddings for zero-shot classification
Zhang et al. Zero-shot kernel learning
JP6397986B2 (en) Image object region recognition method and apparatus
US10204283B2 (en) Image recognizing apparatus, image recognizing method, and storage medium
US20160260014A1 (en) Learning method and recording medium
US8805752B2 (en) Learning device, learning method, and computer program product
CN103824052A (en) Multilevel semantic feature-based face feature extraction method and recognition method
Chagas et al. Evaluation of convolutional neural network architectures for chart image classification
CN113033438B (en) Data feature learning method for modal imperfect alignment
Azcarraga et al. Keyword extraction using backpropagation neural networks and rule extraction
Li et al. Image classification based on SIFT and SVM
CN113221918B (en) Target detection method, training method and device of target detection model
Crowley et al. Of gods and goats: Weakly supervised learning of figurative art
CN115082659A (en) Image annotation method and device, electronic equipment and storage medium
CN111488936B (en) Feature fusion method and device and storage medium
CN108882033B (en) Character recognition method, device, equipment and medium based on video voice
Ma et al. Text detection in medical images using local feature extraction and supervised learning
Monay et al. Constructing visual models with a latent space approach
CN107092875B (en) Novel scene recognition method
Karakuş et al. A deep learning based fast face detection and recognition algorithm for forensic analysis
CN109034040B (en) Character recognition method, device, equipment and medium based on cast
Oikonomopoulos et al. Discriminative space-time voting for joint recognition and localization of actions.

Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant