CN108197660A - Multi-model feature fusion method/system, computer-readable storage medium and device - Google Patents


Info

Publication number
CN108197660A
CN108197660A
Authority
CN
China
Prior art keywords
model
pattern set
feature
accuracy
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810044482.9A
Other languages
Chinese (zh)
Inventor
汪宏
叶浩
邵蔚元
郑莹斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Advanced Research Institute of CAS
Original Assignee
Shanghai Advanced Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Advanced Research Institute of CAS filed Critical Shanghai Advanced Research Institute of CAS
Priority to CN201810044482.9A priority Critical patent/CN108197660A/en
Publication of CN108197660A publication Critical patent/CN108197660A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data

Abstract

The present invention provides a multi-model feature fusion method/system, a computer-readable storage medium, and a device. The method includes: training multiple recognition models and forming a candidate model set from the trained models; defining a selected model set; combining one recognition model chosen from the candidate model set with the models in the selected model set to form a temporary model set; testing the accuracy of the temporary model set on a test set; traversing the candidate model set, selecting the model that yields the highest temporary-model-set accuracy, removing that model from the candidate model set, and adding it to the selected model set; and repeating the combination step through the traversal step until the number of models in the selected model set reaches a preset quantity. The present invention performs efficient model fusion; in particular, when the number of models is large, a limited number of models can be quickly selected from the model set such that the chosen combination is close to optimal, making the method fast and efficient.

Description

Multi-model feature fusion method/system, computer-readable storage medium and device
Technical field
The present invention belongs to the field of artificial intelligence and relates to a fusion method/system, and more particularly to a multi-model feature fusion method/system, a computer-readable storage medium, and a device.
Background technology
One classic task in the field of artificial intelligence is object classification or recognition. In this task, one or more models extract features from an image containing an object; each feature is typically represented mathematically as a vector of fixed length, and the recognition result is obtained by classifying the features or computing distances between them.
With the rise of deep network models and the growth of computing power, object recognition may involve multiple models extracting different features from multiple global or local regions of an image; these features must be fused into a single feature before the subsequent recognition step. Two basic model fusion approaches are currently typical:
First, all models are selected and fused;
Second, the number of models to fuse is fixed, and the optimal model combination is found by exhaustive search.
The first fusion approach suits cases with few features; when there are many features, direct fusion requires extracting too many model features, making computation highly inefficient. The second approach reduces the number of models, but the cost of exhaustive search is excessive: in particular, when there are many candidate models, the generally long testing time per model and the combinatorial growth of the number of model combinations with the number of candidates make the time cost of exhaustive search prohibitive.
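As a rough illustration with hypothetical counts (not figures from this disclosure), the gap between exhaustive search over fixed-size combinations and a round-by-round greedy pick can be counted directly:

```python
from math import comb

n, m = 50, 10  # hypothetical: 50 candidate models, 10 to be fused

# Exhaustive search must evaluate every m-sized subset of the candidates.
exhaustive_evals = comb(n, m)

# A greedy pick (one model added per round) evaluates at most the
# remaining candidates in each of the m rounds.
greedy_evals = sum(n - i for i in range(m))

print(exhaustive_evals)  # 10272278170 combinations
print(greedy_evals)      # 455 evaluations
```

Each evaluation stands for one accuracy test of a fused combination, which is the expensive step identified above.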
Therefore, providing a multi-model feature fusion method/system, computer-readable storage medium, and device that overcome the low computational efficiency and excessive search time of the prior art has become an urgent technical problem for those skilled in the art.
Summary of the invention
In view of the above deficiencies of the prior art, an object of the present invention is to provide a multi-model feature fusion method/system, a computer-readable storage medium, and a device that solve the problems of low computational efficiency and excessive search time in the prior art.
To achieve the above and other related objects, one aspect of the present invention provides a multi-model feature fusion method, including: step 1, training multiple recognition models and forming a candidate model set from the trained models; step 2, defining a selected model set; step 3, combining one recognition model chosen from the candidate model set with the models in the selected model set to form a temporary model set; step 4, testing the accuracy of the temporary model set on a test set; step 5, traversing the candidate model set, selecting the model that yields the highest temporary-model-set accuracy, removing that model from the candidate model set, and adding it to the selected model set; step 6, repeating steps 3 to 5 until the number of models in the selected model set reaches a preset quantity.
In one embodiment of the invention, the selected model set is initially an empty set.
In one embodiment of the invention, step 1 includes training on the input training dataset by updating model parameters via backpropagation, so as to train multiple recognition models.
In one embodiment of the invention, step 4 includes: extracting image features from the test set with each model in the temporary model set; concatenating all image features of the same test image to form a concatenated feature; applying dimensionality reduction to the concatenated feature to obtain a reduced feature; computing the cosine distance between the reduced feature and a pre-stored standard image feature; and comparing the cosine distance with several classification thresholds, where a cosine distance below a threshold indicates a correct classification. The threshold with the highest classification accuracy is selected, and the classification accuracy under that threshold is the accuracy of the temporary model set.
In one embodiment of the invention, each image feature is represented as a feature vector, and the feature vectors are concatenated in the order in which their models joined the selected model set, forming the concatenated feature.
In one embodiment of the invention, the dimensionality reduction includes principal component analysis (PCA).
In one embodiment of the invention, the cosine distance is computed as cos A = (a · b) / (|a| |b|), where A is the angle between the reduced feature and the pre-stored standard image feature, a is the reduced feature, b is the pre-stored standard image feature, and cos A is the cosine of the angle between them.
Another aspect of the present invention provides a multi-model feature fusion system, including: a training module for training multiple recognition models and forming a candidate model set from the trained models; a definition module for defining a selected model set; a combination module for combining one recognition model chosen from the candidate model set with the models in the selected model set into a temporary model set; a test module for testing the accuracy of the temporary model set on a test set; a traversal module for traversing the candidate model set, selecting the model that yields the highest temporary-model-set accuracy, removing that model from the candidate model set, and adding it to the selected model set; and a loop module for repeatedly invoking the combination module, the test module, and the traversal module until the number of models in the selected model set reaches a preset quantity.
In one embodiment of the invention, the training module is configured to train on the input training dataset by updating model parameters via backpropagation, so as to train multiple recognition models.
In one embodiment of the invention, the test module is configured to extract image features from each image in the test set with each model in the temporary model set; concatenate all image features of the same test image to form a concatenated feature; apply dimensionality reduction to the concatenated feature to obtain a reduced feature; compute the cosine distance between the reduced feature and the pre-stored standard image feature; and compare the cosine distance with several classification thresholds, where a cosine distance below a threshold indicates a correct classification. The threshold with the highest classification accuracy is selected, and the classification accuracy under that threshold is the accuracy of the temporary model set.
Another aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the multi-model feature fusion method.
A final aspect of the present invention provides a device, including a processor and a memory; the memory stores a computer program, and the processor executes the computer program stored in the memory so that the device performs the multi-model feature fusion method.
As described above, the multi-model feature fusion method/system, computer-readable storage medium, and device of the present invention have the following advantageous effects:
they perform efficient model fusion; in particular, when the number of models is large, a limited number of models can be quickly selected from the model set such that the chosen combination is close to optimal, making model selection fast and efficient.
Description of the drawings
Fig. 1 shows a flow diagram of the multi-model feature fusion method of the present invention in an embodiment.
Fig. 2 shows a flow diagram of step S14 of the multi-model feature fusion method of the present invention.
Fig. 3 shows a schematic structural diagram of the multi-model feature fusion system of the present invention in an embodiment.
Description of reference numerals
3 multi-model feature fusion system
31 training module
32 definition module
33 combination module
34 test module
35 traversal module
36 loop module
S11–S16 steps
S141–S146 steps
Specific embodiment
The following describes embodiments of the present invention through specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other different specific embodiments, and the details of this specification may be modified or altered in various ways from different viewpoints and for different applications without departing from the spirit of the present invention. It should be noted that, where no conflict arises, the following embodiments and the features in the embodiments may be combined with one another.
It should be noted that the drawings provided with the following embodiments only schematically illustrate the basic idea of the present invention; they show only the components related to the present invention rather than being drawn according to the actual number, shapes, and sizes of components in a real implementation, in which the form, quantity, and proportion of each component may vary arbitrarily and the component layout may be more complex.
The technical principle of the multi-model feature fusion method/system, computer-readable storage medium, and device proposed by the present invention is as follows:
Let the candidate model set be C, the test set be V, and the finally selected model set be S, with S initially empty. The goal is to quickly pick m models from C such that the m-model combination achieves high accuracy on V. A model is selected from C and combined with all models in S to form a temporary model set T. The accuracy of T is tested on the test set V; how accuracy is tested depends on the recognition task. C is traversed, and the model M that makes the temporary model set T most accurate is selected. M is removed from the candidate model set C and added to the selected model set S. The above steps are repeated until S contains m models; S is then the model set picked out by this method.
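The procedure just described amounts to greedy forward selection. Below is a minimal sketch under assumed interfaces: `evaluate(models, test_set)` stands in for the task-dependent accuracy test of a fused combination, which this disclosure leaves open.

```python
def greedy_select(candidates, m, evaluate, test_set):
    """Greedily pick m models whose fused combination scores highest.

    `evaluate(models, test_set)` is assumed to return the accuracy of
    the fused model combination on the test set (task-dependent).
    """
    C = list(candidates)   # candidate model set C
    S = []                 # selected model set S, initially empty
    while len(S) < m and C:
        # Traverse C: find the model M that maximizes the accuracy of
        # the temporary set T = S + [M].
        best = max(C, key=lambda M: evaluate(S + [M], test_set))
        C.remove(best)     # remove M from C ...
        S.append(best)     # ... and add it to S
    return S

# Toy usage: "models" are numbers and "accuracy" is the sum of the set.
picked = greedy_select([3, 1, 4, 1, 5], 2, lambda T, _: sum(T), None)
print(picked)  # [5, 4]
```

With a real accuracy test in place of the toy `evaluate`, each round performs at most one test per remaining candidate, avoiding the combinatorial blow-up of exhaustive search.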
Embodiment one
This embodiment provides a multi-model feature fusion method, including:
Step 1: train multiple recognition models, and form a candidate model set from the trained models;
Step 2: define a selected model set;
Step 3: combine one recognition model chosen from the candidate model set with the models in the selected model set into a temporary model set;
Step 4: test the accuracy of the temporary model set on a test set;
Step 5: traverse the candidate model set, select the model that yields the highest temporary-model-set accuracy, remove it from the candidate model set, and add it to the selected model set;
Step 6: repeat steps 3 to 5 until the number of models in the selected model set reaches a preset quantity.
The multi-model feature fusion method provided by this embodiment is described in detail below with reference to the drawings. The method applies to selecting model combinations for multi-model fusion; a practical example is a face recognition task, in which a limited number of models whose fusion yields high accuracy are quickly picked from multiple face recognition models.
Referring to Fig. 1, which shows a flow diagram of the multi-model feature fusion method in an embodiment, the method specifically includes the following steps:
S11: train multiple face recognition models, and form a candidate model set from the trained models. The candidate model set is denoted C.
In this embodiment, the models are trained on the input training dataset by updating model parameters via backpropagation, so as to obtain multiple trained recognition models.
For example, to train a deep-network face recognition model, a training dataset is input (the dataset includes a large number of face images and labels representing the identities of the different faces), and training updates the model parameters via backpropagation so that the model's accuracy on the training dataset keeps improving.
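As a minimal illustration of the backpropagation-style parameter update described above (a toy logistic classifier on synthetic data, not the deep face network, which this disclosure does not specify), gradient descent drives training accuracy upward:

```python
import math
import random

random.seed(0)
# Toy data: 2-D points; label 1 when x0 + x1 > 0 (stand-in for identity labels).
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1.0 if x0 + x1 > 0 else 0.0 for x0, x1 in data]

w, b = [0.0, 0.0], 0.0
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for (x0, x1), y in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))  # forward pass
        gw[0] += (p - y) * x0          # backpropagated gradient of the loss
        gw[1] += (p - y) * x1
        gb += (p - y)
    w[0] -= 0.5 * gw[0] / len(data)    # parameter update
    w[1] -= 0.5 * gw[1] / len(data)
    b -= 0.5 * gb / len(data)

acc = sum(
    ((1.0 / (1.0 + math.exp(-(w[0] * x0 + w[1] * x1 + b))) > 0.5) == (y > 0.5))
    for (x0, x1), y in zip(data, labels)
) / len(data)
print(acc)  # close to 1.0 on this separable toy problem
```

The same loop shape (forward pass, gradient, update) underlies the deep-network training the embodiment refers to, with the gradient computed by automatic differentiation instead of by hand.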
S12: define a selected model set. In this embodiment, the selected model set is denoted S, and S is initially empty. In this embodiment, m models need to be quickly picked from C such that the m-model combination is highly complementary, i.e., its accuracy on V is high and close to that of the optimal combination. m is a preset quantity that can be set as appropriate.
S13: select a face recognition model from the candidate model set C and combine it with the models in the selected model set S into a temporary model set, denoted T. In this embodiment, S11 and S12 run concurrently.
S14: test the accuracy of the temporary model set T on a test set V. Referring to Fig. 2, which shows a flow diagram of S14, S14 specifically includes the following steps:
S141: extract image features from each test image in the test set with each model in the temporary model set. In this embodiment, image features are extracted with an extraction model, which is a function whose input is an image and whose output is the image's feature.
S142: concatenate all image features of the same test image to form a concatenated feature. In this embodiment, each image feature is represented as a feature vector, and the feature vectors are concatenated in the order in which their models joined the selected model set, forming the concatenated feature.
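A minimal sketch of S141 and S142, with the per-model feature extractors stood in by simple list-valued functions (the real extractors are trained recognition models, which this disclosure does not specify):

```python
# Hypothetical per-model feature extractors: each maps an image to a vector.
def model_a(image):  # stand-in for one trained recognition model
    return [float(p) for p in image[:2]]

def model_b(image):  # stand-in for another
    return [float(p) * 0.5 for p in image]

selected_order = [model_a, model_b]   # the order the models joined set S

def concatenated_feature(image, models):
    """S142: splice each model's feature vector, in selection order."""
    feature = []
    for m in models:                  # selection order is preserved
        feature.extend(m(image))
    return feature

image = [4, 2, 6]
print(concatenated_feature(image, selected_order))  # [4.0, 2.0, 2.0, 1.0, 3.0]
```

The concatenated vector's layout thus depends only on the selection order, so the same standard features can be compared against it consistently.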
S143: apply dimensionality reduction to the concatenated feature to obtain a reduced feature. In this embodiment, the dimensionality reduction includes principal component analysis (PCA).
Specifically, PCA turns a long concatenated feature into a short one, which reduces redundancy and noise in the concatenated feature and speeds up subsequent computation by shortening the vector.
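A minimal PCA sketch via the SVD of centered data (a generic illustration; the number of retained components and the data are assumptions, not values from this disclosure):

```python
import numpy as np

def pca_reduce(features, k):
    """Project concatenated feature rows onto their top-k principal components."""
    X = np.asarray(features, dtype=float)
    X_centered = X - X.mean(axis=0)           # PCA requires centered data
    # Rows of Vt are principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T              # reduced features, shape (n, k)

# Toy: four concatenated features of length 3, reduced to length 2.
feats = [[2.0, 0.0, 1.0],
         [4.0, 0.1, 2.0],
         [6.0, -0.1, 3.0],
         [8.0, 0.0, 4.0]]
reduced = pca_reduce(feats, 2)
print(reduced.shape)  # (4, 2)
```

Because the principal directions are sorted by variance, the first retained component carries most of the signal while the discarded tail holds mostly redundancy and noise, which is exactly the effect S143 relies on.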
S144: compute the cosine distance between the reduced feature and the pre-stored standard image feature. In this embodiment, the cosine is computed as:
cos A = (a · b) / (|a| |b|)
where A is the angle between the reduced feature and the pre-stored standard image feature, a is the reduced feature, b is the pre-stored standard image feature, and cos A is the cosine of the angle between them.
S145: compare the cosine distance with several classification thresholds. A cosine distance below a threshold indicates a correct classification; a cosine distance greater than or equal to the threshold indicates a misclassification.
S146: select the classification threshold with the highest classification accuracy; the classification accuracy under that threshold is taken as the accuracy of the temporary model set T on the test set V.
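S144 through S146 can be sketched as follows; the candidate thresholds and the pairing of distances with ground-truth labels are illustrative assumptions:

```python
import math

def cosine(a, b):
    """cos A = (a . b) / (|a| |b|), as in S144."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_threshold_accuracy(pairs, thresholds):
    """S145-S146: score each threshold, return the best accuracy.

    `pairs` is a list of (distance, same_identity) tuples; a distance
    below the threshold is predicted as a match (correct classification).
    """
    def accuracy(t):
        return sum((d < t) == same for d, same in pairs) / len(pairs)
    return max(accuracy(t) for t in thresholds)

# Toy distances: small for same-identity pairs, large for different ones.
pairs = [(0.1, True), (0.2, True), (0.8, False), (0.3, True), (0.9, False)]
print(best_threshold_accuracy(pairs, [0.25, 0.5, 0.75]))  # 1.0
```

The maximum over thresholds is what the method reports as the accuracy of the temporary model set, so each candidate combination is scored at its own best operating point.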
S15: traverse the candidate model set C, select the model M that yields the highest accuracy for the temporary model set T, remove M from the candidate model set C, and add it to the selected model set S.
S16: repeat S13 to S15 until the number of models in the selected model set reaches the preset quantity m. Once the selected models reach the preset quantity, the selected model set S is the model set picked out by the invention.
This embodiment also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the multi-model feature fusion method. Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware driven by a computer program; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes media capable of storing program code, such as ROM, RAM, magnetic disks, or optical discs.
The multi-model feature fusion method of this embodiment performs efficient model fusion; in particular, when the number of models is large, a limited number of models can be quickly selected from the model set such that the chosen combination is close to optimal, making model selection fast and efficient.
Embodiment two
This embodiment provides a multi-model feature fusion system 3. Referring to Fig. 3, which shows a schematic structural diagram of the system in an embodiment, the multi-model feature fusion system 3 includes a training module 31, a definition module 32, a combination module 33, a test module 34, a traversal module 35, and a loop module 36. The system applies to selecting model combinations for multi-model fusion; a practical example is a face recognition task, in which a limited number of models whose fusion yields high accuracy are quickly picked from multiple face recognition models. It should be noted that the division of the system into the above modules is only a division of logical functions; in an actual implementation the modules may be fully or partially integrated on one physical entity or physically separated. The modules may all be implemented in software invoked by a processing element, all in hardware, or partly in software invoked by a processing element and partly in hardware. For example, module x may be a separately established processing element, may be integrated into a chip of the above device, or may be stored in the memory of the above device in the form of program code, with a processing element of the device invoking and executing the module's function. The implementation of the other modules is similar. These modules may also be fully or partially integrated, or implemented independently. The processing element here may be an integrated circuit with signal-processing capability. During implementation, each step of the above method, or each of the above modules, can be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be configured as one or more integrated circuits implementing the above method, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the modules is implemented as program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU), or another processor capable of invoking program code. For another example, the modules may be integrated and implemented in the form of a system-on-a-chip (SoC).
The training module 31 trains multiple face recognition models and forms a candidate model set from the trained models. The candidate model set is denoted C.
In this embodiment, the models are trained on the input training dataset by updating model parameters via backpropagation, so as to obtain multiple trained recognition models.
The definition module 32, coupled with the training module 31, defines a selected model set. In this embodiment, the selected model set is denoted S, and S is initially empty. In this embodiment, m models need to be quickly picked from C such that the m-model combination is highly complementary, i.e., its accuracy on V is high and close to that of the optimal combination. m is a preset quantity that can be set as appropriate.
The combination module 33, coupled with the training module 31 and the definition module 32, selects a face recognition model from the candidate model set C and combines it with the models in the selected model set S into a temporary model set, denoted T.
The test module 34, coupled with the combination module 33, tests the accuracy of the temporary model set T on a test set V.
Specifically, the test module 34 extracts image features from each test image in the test set with each model in the temporary model set; concatenates all image features of the same test image to form a concatenated feature; applies dimensionality reduction to the concatenated feature to obtain a reduced feature; computes the cosine distance between the reduced feature and the pre-stored standard image feature; and compares the cosine distance with several classification thresholds, where a cosine distance below a threshold indicates a correct classification and a cosine distance greater than or equal to the threshold indicates a misclassification. The threshold with the highest classification accuracy is selected, and the classification accuracy under that threshold is the accuracy of the temporary model set T on the test set.
The traversal module 35, coupled with the test module 34, traverses the candidate model set C, selects the model M that yields the highest accuracy for the temporary model set T, removes M from the candidate model set C, and adds it to the selected model set S.
The loop module 36, coupled with the traversal module 35, repeatedly invokes the combination module 33, the test module 34, and the traversal module 35 until the number of models in the selected model set reaches the preset quantity. Once the selected models reach the preset quantity, the selected model set is the model set picked out by the invention.
Embodiment three
This embodiment provides a device including a processor, a memory, a transceiver, a communication interface, and a system bus. The memory and the communication interface are connected with the processor and the transceiver through the system bus and communicate with one another; the memory stores a computer program; the communication interface communicates with other devices; and the processor and the transceiver run the computer program so that the device performs the steps of the multi-model feature fusion method of embodiment one.
The system bus mentioned above may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one thick line is shown in the figure, which does not mean there is only one bus or one type of bus. The communication interface implements communication between the database access device and other devices (such as a client, a read-write repository, and a read-only repository). The memory may include random access memory (RAM) and may further include non-volatile memory, for example at least one disk memory.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In conclusion multi-model Feature fusion of the present invention/system, computer readable storage medium and equipment can To carry out efficient Model Fusion, particularly when model quantity is more, can quickly be selected from model set limited Several models so that the nearly optimal combination of model group splice grafting chosen, and have the characteristics that fast and efficiently in model selection.Institute With the present invention effectively overcomes various shortcoming of the prior art and has high industrial utilization.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or alter the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the disclosed spirit and technical ideas shall be covered by the claims of the present invention.

Claims (12)

1. A multi-model feature fusion method, characterized by comprising:
Step 1: training a plurality of recognition models, and forming a candidate model set from the trained recognition models;
Step 2: defining a selected model set;
Step 3: selecting one recognition model from the candidate model set and combining it with the models in the selected model set into a temporary model set;
Step 4: testing the accuracy of the temporary model set on a test set;
Step 5: traversing the candidate model set, selecting the model that yields the highest accuracy for the temporary model set, removing that model from the candidate model set, and adding it to the selected model set;
Step 6: repeating Steps 3 to 5 until the number of models in the selected model set reaches a preset quantity.
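The greedy loop of Steps 1 to 6 can be sketched in a few lines of Python (a minimal illustration, not the patent's implementation; `evaluate_accuracy` is a hypothetical callback standing in for the accuracy test of Step 4):

```python
def greedy_model_selection(candidate_models, evaluate_accuracy, preset_quantity):
    """Greedily pick models: each round adds the candidate whose inclusion
    gives the temporary model set the highest test accuracy (Steps 3-6)."""
    selected = []  # Step 2: the selected model set starts empty (claim 2)
    while len(selected) < preset_quantity and candidate_models:
        # Step 5: traverse the candidates, scoring each temporary set
        best = max(candidate_models,
                   key=lambda m: evaluate_accuracy(selected + [m]))
        candidate_models.remove(best)   # reject from the candidate set
        selected.append(best)           # add to the selected set
    return selected
```

Each round costs one accuracy evaluation per remaining candidate, so selecting k models from n candidates needs on the order of k·n evaluations instead of testing all 2^n combinations — the source of the "fast and efficient" selection claimed above.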
2. The multi-model feature fusion method according to claim 1, wherein the selected model set is initially an empty set.
3. The multi-model feature fusion method according to claim 1, wherein Step 1 comprises training on an input training data set using back-propagation to update the model parameters, so as to obtain a plurality of trained recognition models.
4. The multi-model feature fusion method according to claim 1, wherein Step 4 comprises:
extracting image features from each test sample in the test set using the models in the temporary model set;
concatenating all image features of the same test sample to form a concatenated feature;
performing dimensionality reduction on the concatenated feature to obtain a reduced-dimension feature;
calculating the cosine distance between the reduced-dimension feature and a pre-stored standard image feature;
comparing the cosine distance with several classification thresholds, wherein if the cosine distance is less than a classification threshold, the classification is considered correct;
taking the classification threshold with the highest classification accuracy, the classification accuracy under that threshold being the accuracy of the temporary model set.
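The accuracy test of Step 4 can be sketched as follows (a minimal pure-Python illustration under assumed inputs; the dimensionality-reduction step of claim 6 is omitted for brevity, and `set_accuracy`, `samples`, and the tuple layout are hypothetical names, not from the patent):

```python
import math

def cosine(a, b):
    """Cosine of the angle between vectors a and b (claim 7's cosA)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def set_accuracy(samples, thresholds):
    """samples: list of (per_model_features, standard_feature, is_match).
    Concatenates the per-model features of each sample, classifies by
    cosine distance against each threshold, and returns the best
    classification accuracy over the thresholds."""
    best = 0.0
    for t in thresholds:
        correct = 0
        for feats, std, is_match in samples:
            concat = [v for f in feats for v in f]  # splice features in order
            dist = 1.0 - cosine(concat, std)        # cosine distance
            if (dist < t) == is_match:              # below threshold => same class
                correct += 1
        best = max(best, correct / len(samples))
    return best
```

The returned best accuracy plays the role of the temporary model set's score that the traversal of Step 5 maximizes.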
5. The multi-model feature fusion method according to claim 4, wherein each image feature is represented as a feature vector, and the feature vectors are concatenated in the order in which their models were added to the selected model set, forming the concatenated feature.
6. The multi-model feature fusion method according to claim 4, wherein
the dimensionality reduction comprises principal component analysis (PCA) dimensionality reduction.
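The principal component analysis of claim 6 can be sketched with an SVD (a generic PCA illustration, assuming NumPy is available; `pca_reduce` and `k` are hypothetical names, not from the patent):

```python
import numpy as np

def pca_reduce(features, k):
    """Project the row vectors in `features` onto their top-k principal
    components (a minimal PCA sketch; not the patent's implementation)."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                   # center each dimension
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T                      # reduced-dimension features
```

Projecting the concatenated features onto the top-k components shortens the vectors that enter the cosine-distance comparison of claim 7.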
7. The multi-model feature fusion method according to claim 4, wherein
the cosine distance is calculated from:
cosA = (a · b) / (|a| · |b|)
where A represents the angle between the reduced-dimension feature and the pre-stored standard image feature; a represents the reduced-dimension feature and b represents the pre-stored standard image feature; and cosA is the cosine of the angle between the reduced-dimension feature and the pre-stored standard image feature.
8. A multi-model feature fusion system, characterized by comprising:
a training module for training a plurality of recognition models and forming a candidate model set from the trained recognition models;
a definition module for defining a selected model set;
a combination module for selecting one recognition model from the candidate model set and combining it with the models in the selected model set into a temporary model set;
a test module for testing the accuracy of the temporary model set on a test set;
a traversal module for traversing the candidate model set, selecting the model that yields the highest accuracy for the temporary model set, removing that model from the candidate model set, and adding it to the selected model set;
a loop module for repeatedly invoking the combination module, the test module and the traversal module until the number of models in the selected model set reaches a preset quantity.
9. The multi-model feature fusion system according to claim 8, wherein the training module is configured to train on an input training data set using back-propagation to update the model parameters, so as to obtain a plurality of trained recognition models.
10. The multi-model feature fusion system according to claim 8, wherein the test module is configured to: extract image features from each test sample in the test set using the models in the temporary model set; concatenate all image features of the same test sample to form a concatenated feature; perform dimensionality reduction on the concatenated feature to obtain a reduced-dimension feature;
calculate the cosine distance between the reduced-dimension feature and a pre-stored standard image feature; compare the cosine distance with several classification thresholds, wherein if the cosine distance is less than a classification threshold, the classification is considered correct; and take the classification threshold with the highest classification accuracy, the classification accuracy under that threshold being the accuracy of the temporary model set.
11. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the multi-model feature fusion method according to any one of claims 1 to 7.
12. A device, characterized by comprising: a processor and a memory;
the memory being configured to store a computer program, and the processor being configured to execute the computer program stored in the memory, so that the device performs the multi-model feature fusion method according to any one of claims 1 to 7.
CN201810044482.9A 2018-01-17 2018-01-17 Multi-model Feature fusion/system, computer readable storage medium and equipment Pending CN108197660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810044482.9A CN108197660A (en) 2018-01-17 2018-01-17 Multi-model Feature fusion/system, computer readable storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810044482.9A CN108197660A (en) 2018-01-17 2018-01-17 Multi-model Feature fusion/system, computer readable storage medium and equipment

Publications (1)

Publication Number Publication Date
CN108197660A true CN108197660A (en) 2018-06-22

Family

ID=62589943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810044482.9A Pending CN108197660A (en) 2018-01-17 2018-01-17 Multi-model Feature fusion/system, computer readable storage medium and equipment

Country Status (1)

Country Link
CN (1) CN108197660A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046673A (en) * 2015-07-13 2015-11-11 哈尔滨工业大学 Self-learning based hyperspectral image and visible image fusion classification method
CN107358143A (en) * 2017-05-17 2017-11-17 广州视源电子科技股份有限公司 Sweep forward model integrated method, apparatus, storage device and face identification system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472240A (en) * 2018-11-12 2019-03-15 北京影谱科技股份有限公司 Recognition of face multi-model self-adapting Fusion Features Enhancement Method and device
CN112183830A (en) * 2020-09-16 2021-01-05 新奥数能科技有限公司 Method and device for predicting temperature of chilled water
WO2022116522A1 (en) * 2020-12-01 2022-06-09 广州橙行智动汽车科技有限公司 Trip fusion method and apparatus, and vehicle
CN114757630A (en) * 2022-06-16 2022-07-15 阿里健康科技(杭州)有限公司 Storage management model determining method and device and computer equipment
CN114757630B (en) * 2022-06-16 2022-10-14 阿里健康科技(杭州)有限公司 Storage management model determining method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN112184508B (en) Student model training method and device for image processing
CN108197660A (en) Multi-model Feature fusion/system, computer readable storage medium and equipment
CN106294344B (en) Video retrieval method and device
CN111160140B (en) Image detection method and device
CN109241528A (en) A kind of measurement of penalty prediction of result method, apparatus, equipment and storage medium
CN111143578B (en) Method, device and processor for extracting event relationship based on neural network
Chen et al. Vectorization of historical maps using deep edge filtering and closed shape extraction
CN110866930A (en) Semantic segmentation auxiliary labeling method and device
He et al. Aggregating local context for accurate scene text detection
CN115344805A (en) Material auditing method, computing equipment and storage medium
CN111353504B (en) Source camera identification method based on image block diversity selection and residual prediction module
Kausar et al. Multi-scale deep neural network for mitosis detection in histological images
CN110197213B (en) Image matching method, device and equipment based on neural network
CN108460038A (en) Rule matching method and its equipment
CN113723352A (en) Text detection method, system, storage medium and electronic equipment
CN111966836A (en) Knowledge graph vector representation method and device, computer equipment and storage medium
CN116310688A (en) Target detection model based on cascade fusion, and construction method, device and application thereof
CN116468702A (en) Chloasma assessment method, device, electronic equipment and computer readable storage medium
CN116206334A (en) Wild animal identification method and device
CN113688263B (en) Method, computing device, and storage medium for searching for image
CN112801045B (en) Text region detection method, electronic equipment and computer storage medium
CN111539853B (en) Standard case routing determination method, device and equipment
CN111738213B (en) Person attribute identification method and device, computer equipment and storage medium
CN112614134A (en) Image segmentation method and device, electronic equipment and storage medium
CN113792132A (en) Target answer determination method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180622