CN110147840A - Fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division - Google Patents

Fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division

Info

Publication number
CN110147840A
CN110147840A (application CN201910427847.0A)
Authority
CN
China
Prior art keywords
data set
saliency
feature
unsupervised
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910427847.0A
Other languages
Chinese (zh)
Inventor
庞程
周李
蓝如师
刘振丙
罗笑南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201910427847.0A priority Critical patent/CN110147840A/en
Publication of CN110147840A publication Critical patent/CN110147840A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines


Abstract

The invention discloses a fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division. The steps are as follows: obtain a training data set D; apply foreground segmentation to D to obtain the processed data set D1; apply the saliency-based unsupervised part division method to D1, using saliency information to divide the flower-image foreground into unsupervised parts, then perform local feature extraction and encoding-pooling to generate the feature data set D2; combine D2 with the multiple features SIFT, dense SIFT and Lab color to compute a fused middle-level feature representation, and classify the image data set together with globally extracted and pooled features to obtain fine-grained classification results. The method simulates how humans observe an object and effectively improves the discriminative power of the feature encoding. For weakly structured object classification it requires no new feature types, is complementary to global classification methods, and extends readily to any classification method built on global features.

Description

Fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division
Technical field
The present invention relates to the technical field of medical image processing, and specifically to a fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division.
Background technique
Objects of the same basic category generally share similar body parts, which makes fine part detection and local feature extraction feasible. However, some target objects in fine-grained classification exhibit significant shape differences between the corresponding parts of samples from different classes, so a unified part detector cannot be trained, which greatly increases the difficulty of part detection and local feature extraction. We call such objects weakly structured objects; flowers are an example. General classification methods for weakly structured objects classify by extracting global visual features, which inevitably ignores some important local features and reduces the discriminative power of the features. A fine-grained classification method for weakly structured objects is therefore proposed that uses a saliency-based unsupervised part division strategy to improve the discriminative power of features.
Benefiting from part analysis, general fine-grained classification methods have achieved good results on objects such as birds, dogs and vehicles. The success of these methods relies on the parts of these objects having similar appearance, so that the parts can be aligned into a common feature space. For the same reason, however, such methods perform poorly on weakly structured objects such as plants. Specifically, in addition to the traditional difficulties of fine-grained classification, such as illumination, pose variation and highly localized features, the large shape variation of weakly structured objects further aggravates the difficulty of image analysis. For example, the pistils of some flowers are very tiny and show large intra-class variation due to pose changes; some flowers have five broad petals while others have clusters of finely divided petals, making it extremely difficult to align these parts. For these reasons, even some very advanced fine-grained classification methods cannot be directly applied to the classification of weakly structured objects.
Most classification methods for weakly structured objects follow the same pipeline: first segment the foreground, then extract low-level visual features from it, and then encode the extracted global features with the bag-of-visual-words technique (BoW) to obtain the final classification vector. Existing work mostly focuses on foreground segmentation and on detecting and extracting discriminative features. Angelova et al. proposed a fine-grained flower classification method that combines region segmentation with foreground segmentation. Similarly, Chai et al. proposed a two-level segmentation method for segmenting and classifying flowers. Nilsback et al. not only segmented the flower foreground using a color model, but also used a shape-based model to describe the structure of each flower part. Other work has concentrated on designing and evaluating features such as shape, angle and texture descriptors, and some methods have investigated how to fuse multiple features and evaluate their effect on flower classification. However, all of the above methods extract features globally from the foreground and then perform global feature encoding and pooling, obtaining the final classification vector via BoW. None of these schemes further explores the structure of weakly structured objects, which limits further performance improvement: visual words describing tiny flower parts can be flooded by other, widely distributed visual words, weakening their contribution to the final classification vector and reducing the discriminative power of the features. We therefore propose to use image saliency to automatically discover and divide the parts of weakly structured objects. The method simulates how humans observe an object, and designs a local feature extraction and local encoding-pooling strategy, which solves the above problems to some extent and improves the classification accuracy of weakly structured objects.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division. The method simulates how humans observe an object and effectively improves the discriminative power of the final feature encoding. For weakly structured object classification it requires no new feature types, is complementary to global classification methods, and extends easily to any classification method built on global features.
The technical solution realizing the object of the invention is as follows:
A fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division comprises the following steps:
S1. Obtain a training data set D;
S2. Apply foreground segmentation to the training data set obtained in step S1. The structure prior of weakly structured images guides the segmentation: by adding the structure prior and fusing several segmentation methods, an improved saliency-detection-based segmentation method is obtained, which is applied to the data set to segment the foreground, yielding the processed data set D1;
S3. Apply the saliency-based unsupervised part division method to the data set D1 obtained after the foreground segmentation of step S2: use saliency information to divide the flower-image foreground into unsupervised parts, then perform local feature extraction and encoding-pooling to generate the feature data set D2;
S4. From the feature data set D2 obtained in step S3, combined with the multiple features SIFT, dense SIFT and Lab color, compute a middle-level representation fusing the multiple features; together with globally extracted and pooled features, classify the image data set to obtain fine-grained classification results.
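The four steps can be read as a composition of stages. The sketch below only makes that data flow explicit; the stage names and signatures are hypothetical, chosen for illustration, and each stage is detailed separately later in the description:

```python
def classify_image(image, segment, divide_parts, extract_and_pool, classify):
    """End-to-end sketch of steps S1-S4; the four stages are passed in
    as callables (hypothetical signatures, for illustration only)."""
    fg_mask = segment(image)                      # S2: saliency-based foreground segmentation
    part_map = divide_parts(image, fg_mask)       # S3: unsupervised part division
    features = extract_and_pool(image, part_map)  # S3: per-part encoding and pooling
    return classify(features)                     # S4: middle-level features + softmax
```

Passing the stages in as callables keeps the sketch agnostic to how each one is implemented.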
In step S1, the training data set is the Oxford Flowers 102 data set, which is used to test the weakly structured object fine-grained classification method. The data set contains 8189 images of 102 flower species and also provides ground-truth segmentation masks and the training/test split.
In step S2, a segmentation method based on saliency detection is used. Although the Saliency Cut method is simple, efficient and inherently multi-scale, it can still fail, so the method is improved as follows. A superpixel segmentation algorithm samples a large number of local image patches from the foreground and background regions of correctly segmented images; the patch labels indicate whether each patch belongs to the foreground or the background. These patches train an SVM classifier that judges whether a query patch belongs to the foreground or the background. Given an image segmented by Saliency Cut, patches are sampled from its background region and judged by the SVM classifier. If most of these patches are classified as foreground, the Saliency Cut result is inaccurate and may have discarded foreground regions belonging to the object; those regions are then added back into the foreground set, yielding the processed data set D1.
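A minimal sketch of this patch classifier, under a deliberate simplification: the patent does not specify the patch descriptor, so mean patch color stands in for it here:

```python
import numpy as np
from sklearn.svm import SVC

def patch_feature(patch):
    # Mean color of an (h, w, 3) patch -- a simple stand-in descriptor;
    # the actual patch features are not specified in the description.
    return patch.reshape(-1, 3).mean(axis=0)

def train_patch_classifier(fg_patches, bg_patches):
    # Fit an SVM labelling a local patch as foreground (1) or background (0),
    # from patches sampled out of confidently segmented images.
    X = np.array([patch_feature(p) for p in fg_patches + bg_patches])
    y = np.array([1] * len(fg_patches) + [0] * len(bg_patches))
    return SVC(kernel='rbf').fit(X, y)
```

At query time, patches from the background of a Saliency Cut result are fed to the classifier; a majority of foreground verdicts flags the segmentation as inaccurate.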
In step S3, the saliency-based unsupervised part division method is applied to the data set D1 produced by the foreground segmentation of step S2, followed by local feature extraction and encoding-pooling. The method re-ranks the saliency of the part regions according to the number of pixels each region contains, so that regions with fewer pixels receive a higher saliency rank. In the feature extraction and pooling stage, a feature type is chosen (e.g. SIFT or HOG) and extracted separately in each part region labeled by the saliency map; the features extracted in each region are used to learn, from all sample images, a visual-word dictionary exclusive to that region, generating the feature data set D2.
In step S4, from the feature data set D2 obtained in step S3, combined with the multiple features SIFT, dense SIFT and Lab color, a middle-level representation fusing the multiple features is computed. Specifically, several one-vs-rest support vector machine classifiers (SVMs) are considered; for each sample, the prediction scores of all these classifiers are collected into a high-dimensional vector, and these vectors fuse the multiple features into the middle-level representation. In the classification stage, both globally extracted and pooled features and the unsupervised part division with local feature extraction and pooling are used, yielding the fine-grained classification results.
Advantageous effects: the fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division provided by the invention has the following advantages:
(1) The invention uses a salient-region detection algorithm based on global contrast and iteratively updates the part detection hypotheses and foreground segmentation hypotheses obtained, thereby retaining as many parts useful for fine-grained visual analysis as possible;
(2) The invention proposes a feature extraction method based on the saliency map. It learns region-local visual words, which better highlight discriminative local features and thus improve the expressive power of local features in the overall feature encoding;
(3) In the multi-feature middle-level representation of the invention, global and local features describe different aspects of the object and are therefore complementary. The invention reorganizes existing features into a complementary representation for weakly structured object classification, rather than gaining performance by introducing new feature types as other methods do;
(4) The invention detects and segments the object in the image using the unsupervised part division method. The segmented foreground is then further divided into several regions according to pixel saliency values. Finally, low-level visual features are extracted in each region, and region-specific visual words are learned and used to encode and pool the features.
Brief description of the drawings
Fig. 1 is the flowchart of the fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division of the invention;
Fig. 2 is a schematic diagram of the local visual-word learning algorithm in the embodiment;
Fig. 3 is a schematic diagram of the local feature extraction and pooling algorithm in the embodiment;
Fig. 4 is a schematic diagram of the unsupervised part division and feature extraction-pooling in the embodiment.
Specific embodiment
The present invention is further elaborated below with reference to the accompanying drawings and an embodiment, which do not limit the invention.
Embodiment:
As shown in Fig. 1, the fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division comprises the following steps:
S1. Obtain a training data set D;
S2. Apply foreground segmentation to the training data set obtained in step S1. The structure prior of weakly structured images guides the segmentation: by adding the structure prior and fusing several segmentation methods, an improved saliency-detection-based segmentation method is obtained, which is applied to the data set to segment the foreground, yielding the processed data set D1;
S3. Apply the saliency-based unsupervised part division method to the data set D1 obtained after the foreground segmentation of step S2: use saliency information to divide the flower-image foreground into unsupervised parts, then perform local feature extraction and encoding-pooling to generate the feature data set D2;
S4. From the feature data set D2 obtained in step S3, combined with the multiple features SIFT, dense SIFT and Lab color, compute a middle-level representation that fuses the multiple features; extract and pool features globally, then classify the image data set to obtain fine-grained classification results.
In step S1, the training data set is the Oxford Flowers 102 data set, which is used to test the weakly structured object fine-grained classification method. The data set contains 8189 images of 102 flower species and also provides ground-truth segmentation masks and the training/test split.
In step S2, a segmentation method based on saliency detection is used. To detect and segment the foreground of weakly structured objects, the images in the data set are first examined: in general, the color and texture of the object differ significantly from those of the background, so the object usually occupies the most salient position in the whole image. Saliency Cut therefore first computes a saliency map, estimating salient regions over the whole image; these saliency maps then initialize an improved GrabCut automatic segmentation algorithm. Saliency Cut introduces the concept of region contrast, combining spatial relationships with the computed region contrast. First, the image is divided into several regions with a graph-cut segmentation method and the color histogram of each region is computed. For any region r_q, its saliency value is obtained by computing its contrast to every other region r_i; the saliency value is expressed as:

S(r_q) = Σ_{r_i ≠ r_q} w(r_i) D_r(r_q, r_i)  (1)

In formula (1), w(r_i) is the weight of region r_i and D_r(·,·) returns the color distance between two regions. Taking the number of pixels in r_i as w(r_i) emphasizes the color contrast against larger regions. The color distance between any two regions r_1 and r_2 can be expressed as:

D_r(r_1, r_2) = Σ_{i=1..n_1} Σ_{j=1..n_2} p(v_{1,i}) p(v_{2,j}) D(v_{1,i}, v_{2,j})  (2)

In formula (2), n_q is the number of color categories in region q, and p(v_{q,i}) is the probability of the i-th color v_{q,i} among all n_q colors of the q-th region, where q = 1, 2. The probability of each color in the region's probability density function serves as its weight, emphasizing the differences between dominant colors. Biological vision research has found that the visual system is more sensitive to contrast in signals, so this computation reflects, to some extent, the habits of human observation, and its results also match people's subjective expectations.
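Formulas (1) and (2) can be sketched directly. The region representation below, a pixel count plus a color histogram over a few representative colors, is an assumption made for illustration; a real implementation would build it from the graph-cut regions:

```python
import numpy as np

def color_distance(hist_a, colors_a, hist_b, colors_b):
    # Probability-weighted color distance between two regions, formula (2):
    # D_r(r_a, r_b) = sum_i sum_j p(v_ai) p(v_bj) ||v_ai - v_bj||
    d = np.linalg.norm(colors_a[:, None, :] - colors_b[None, :, :], axis=2)
    return float(hist_a @ d @ hist_b)

def region_saliency(regions):
    # Saliency of each region as its contrast to every other region,
    # weighted by the other region's pixel count, formula (1):
    # S(r_q) = sum_{i != q} w(r_i) D_r(r_q, r_i)
    scores = []
    for q, rq in enumerate(regions):
        s = 0.0
        for i, ri in enumerate(regions):
            if i != q:
                s += ri['n_pixels'] * color_distance(
                    rq['hist'], rq['colors'], ri['hist'], ri['colors'])
        scores.append(s)
    return scores
```

A region whose colors differ from everything else receives the largest score, matching the intuition that the odd-colored object stands out.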
Although Saliency Cut is simple, efficient and inherently multi-scale, it still tends to fail in the following two situations:
(1) when the foreground object occupies most of the image area, the algorithm tends to segment part of the background into the salient region;
(2) if some parts of the object (e.g. the pistil) are much more salient than the remaining regions or the background, Saliency Cut tends to discard those remaining regions and segment only the most salient part as foreground, causing the loss of some parts.
To address the mis-segmentation of small parts in the second situation above, an SVM classifier is trained using the objects that were segmented correctly (those whose detected salient region occupies a ratio R of the image area greater than t2). A superpixel segmentation algorithm samples a large number of local patches from the foreground and background regions of the correctly segmented images; the patch labels indicate whether each patch belongs to the foreground or the background, and the patches train a classifier that judges whether a query patch is foreground or background. Given an image segmented by Saliency Cut, patches are sampled from its background region and judged by the SVM classifier. If most of these patches are classified as foreground, the Saliency Cut segmentation is inaccurate and has discarded foreground regions belonging to the object, so those regions are added back into the foreground set; otherwise the segmentation is considered accurate. The Saliency Cut foreground segmentation is optimized by the following three steps:
1) compute, for every image, the ratio R of the Saliency Cut foreground to the image area; use the threshold t2 to divide the segmented images into reasonably segmented ones and ones needing optimization, and train the foreground/background SVM classifier on the reasonable results;
2) optimize with GrabCut the segmentation results whose R value from the previous step is below t1;
3) apply the SVM to the segmentation results whose R value from the first step lies between t1 and t2.
Applying the Saliency Cut algorithm improved in this way to the data set segments the foreground, yielding the processed data set D1.
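The three-step refinement reduces to a small per-image decision rule. In this sketch the thresholds t1 and t2 and the return labels are hypothetical (the description does not publish threshold values):

```python
def refine_segmentation(r_value, patch_is_foreground, t1=0.15, t2=0.75):
    """Decide how to post-process one Saliency Cut result.
    r_value: ratio R of segmented foreground to image area.
    patch_is_foreground: SVM verdicts for patches sampled from the
    *background* of the segmentation (True = patch looks like foreground).
    t1, t2: hypothetical thresholds."""
    if r_value > t2:
        return 'accept'          # confidently segmented; also usable as SVM training data
    if r_value < t1:
        return 'grabcut'         # too little foreground kept: refine with GrabCut
    # t1 <= R <= t2: consult the patch classifier
    if sum(patch_is_foreground) > len(patch_is_foreground) / 2:
        return 'restore-foreground'  # background patches look like foreground: add them back
    return 'accept'
```

The rule mirrors the three numbered steps: confident results are accepted (and supply training patches), starved results go to GrabCut, and the ambiguous middle band is arbitrated by the patch SVM.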
In step S3, for the data set D1 obtained after the foreground segmentation of step S2, saliency information is used to divide the flower-image foreground into unsupervised parts, followed by local feature extraction and encoding-pooling. The unsupervised part division proceeds as follows: a saliency map is first computed for the foreground of each image, and the foreground is divided into several regions that are regarded as different parts. Then, in each region, with the other regions masked out, low-level visual features are extracted to describe that region (part); these low-level features are used to learn the visual words describing each region, and those visual words finally encode and pool the local features of the region. The overall local feature extraction and pooling processes are shown in Fig. 2 and Fig. 3, respectively.
The global-contrast-based salient-region detection algorithm is used to compute the saliency map of the foreground of a weakly structured object image. It iteratively estimates global contrast differences and their spatial correlation, achieving higher efficiency and detection accuracy than existing saliency detection methods, and it simulates the movement of human fixation points when computing the foreground saliency map. Although other methods compute similar saliency maps, the saliency map used here differs in three respects:
1) a general saliency map indicates the most salient region of the whole image, whereas the method of the invention focuses on the saliency map within the image foreground, giving the saliency differences between the regions (parts) of the flower;
2) the computation of the saliency map is not restricted to per-pixel saliency values: structural constraints suited to weakly structured objects are added, regions are divided by pixel saliency values, and the regions are then re-ranked by the number of pixels each contains, giving smaller regions stronger saliency in preparation for the subsequent local feature encoding and pooling;
3) each region of the saliency map has its own visual-word dictionary, learned from the low-level visual features extracted separately in each region.
Existing methods, by contrast, usually extract features globally and learn a single shared visual-word dictionary, which often hurts the representation of some small regions in the feature encoding: larger regions or parts supply more features when sampling for visual-word learning, so their share of the learned dictionary may also be large, which amounts to diluting the effect of the visual words from tiny parts.
Suppose the process of a person observing an object is divided into k steps: the most salient part is observed first, and the less salient parts only later (the order may vary with the object type, since flowers of different classes can differ greatly in appearance, and some parts of some flowers are often occluded by other parts and become invisible). This phenomenon implies that the saliency of a part affects the order of observation. All foreground pixels are therefore clustered by their saliency values into k classes in total, and the pixel classes are sorted by saliency value. The saliency ranks of the part regions are then reorganized according to how many pixels each region contains, so that regions with fewer pixels receive a higher saliency rank; for example, region 1 is the most salient part region and has the fewest pixels. This yields the saliency map of the object foreground, with the object divided into k parts.
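A sketch of this clustering-and-reranking step, using plain 1-D k-means on saliency values (the description does not name a specific clustering algorithm, so Lloyd's iterations with evenly spaced initial centers are an assumption):

```python
import numpy as np

def divide_parts(saliency, k=3, iters=50):
    # Cluster foreground pixels into k parts by saliency value (1-D k-means),
    # then re-rank the parts so the part with the fewest pixels gets rank 0
    # (most salient), per the observation-order heuristic above.
    saliency = np.asarray(saliency, dtype=float)
    centers = np.linspace(saliency.min(), saliency.max(), k)
    for _ in range(iters):                        # Lloyd iterations
        labels = np.argmin(np.abs(saliency[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = saliency[labels == j].mean()
    sizes = np.bincount(labels, minlength=k)
    order = np.argsort(sizes)                     # smallest part first
    rank = np.empty(k, dtype=int)
    rank[order] = np.arange(k)
    return rank[labels]                           # per-pixel part rank
```

After re-ranking, a tiny but highly salient region such as a pistil keeps the top rank even though it contributes few pixels.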
In the feature extraction and pooling stage, a feature type (e.g. SIFT or HOG) is denoted f and extracted separately in each part region labeled by the saliency map; the features of each region are encoded with the exclusive visual-word dictionary learned for that region from all sample images. After pooling, k feature histograms are obtained, instead of the single histogram of traditional global pooling. A histogram vector H^(f) is computed for each feature type by the above method, obtained by concatenating the k histograms of that feature:

H^(f) = [H_1, H_2, ..., H_k]  (3)

In formula (3), H_k denotes the histogram obtained in the k-th region. If the algorithm uses more than one feature, the corresponding H^(f) is simply computed for each feature f, where f ∈ {1, 2, ..., m}, and the feature vectors of these different features are finally fused into the final classification vector. For a tiny part such as a pistil, the visual words learned are usually far fewer than those acquired from larger parts, so in the feature encoding after pooling, the visual words from small parts would contribute less to the overall feature vector than those from large parts. The saliency-based feature extraction and pooling strategy in fact incorporates the structural information of the object and balances the expressive effect of the features of different parts in the pooled feature encoding. As shown in Fig. 4, in a higher-dimensional space the features can distinguish information that was previously inseparable, improving their discriminative power. This step produces the feature data set D2.
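Per-region encoding and the concatenation of formula (3) can be sketched with hard vector quantisation; this is a simplification, and the codebooks below stand in for the learned per-region visual-word dictionaries:

```python
import numpy as np

def pool_per_region(descriptors_by_region, codebooks):
    # Encode each region's local descriptors against that region's own
    # visual-word dictionary (hard assignment), L1-normalise, and
    # concatenate the k histograms: H = [H_1, ..., H_k], formula (3).
    hists = []
    for desc, words in zip(descriptors_by_region, codebooks):
        h = np.zeros(len(words))
        for d in desc:
            h[np.argmin(np.linalg.norm(words - d, axis=1))] += 1.0
        if h.sum() > 0:
            h /= h.sum()
        hists.append(h)
    return np.concatenate(hists)
```

Because each region gets its own slice of the final vector, words from a tiny region can no longer be drowned out by words from large regions.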
In step S4, from the feature data set D2 obtained in step S3, combined with the multiple features SIFT, dense SIFT and Lab color, a middle-level representation fusing the multiple features is computed; the features used are SIFT, dense SIFT and Lab color. Consider m one-vs-rest support vector machine classifiers SVM^(f), each trained with feature f for the m object classes, f ∈ {1, 2, ..., m}. These classifiers then predict the classes of the training and test samples; for each sample, collecting the prediction scores of all these classifiers gives an m-dimensional vector s_f, and the middle-level representation fusing the multiple features is:

S = [s_1, s_2, ..., s_n]  (4)

In the classification stage, both globally extracted and pooled features and the unsupervised part division with local feature extraction and pooling are used, combining the advantages of the two. First, the vectors from both methods are converted to middle-level representations by the method above and concatenated; then the middle-level representations of the training samples train a regression function; finally, the trained softmax function predicts the classes of the test samples. The global method and the saliency-detection-based division method are combined because global and local features describe different aspects of the object, and the results show that they are complementary. For the first time, existing features are reorganized into a complementary representation for weakly structured object classification, rather than performance being gained by introducing new feature types as in other methods.
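A sketch of the middle-level representation and softmax stage using scikit-learn. The synthetic "feature views" in the test stand in for the real SIFT/dense SIFT/Lab encodings, and the concatenation of per-view SVM scores mirrors formula (4):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

def middle_level_classify(train_views, y_train, test_views):
    # One-vs-rest linear SVMs per feature view; their class scores form
    # the middle-level representation (formula (4)), which a softmax
    # (multinomial logistic regression) classifier consumes.
    svms = [LinearSVC(C=1.0, max_iter=5000).fit(X, y_train) for X in train_views]
    s_train = np.hstack([m.decision_function(X) for m, X in zip(svms, train_views)])
    s_test = np.hstack([m.decision_function(X) for m, X in zip(svms, test_views)])
    softmax = LogisticRegression(max_iter=1000).fit(s_train, y_train)
    return softmax.predict(s_test)
```

Score vectors from the global pipeline and from the part-based pipeline can both be stacked into `s_train`/`s_test`, which is how the two methods are combined above.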

Claims (5)

1. A fine-grained classification method for weakly structured objects based on saliency-driven unsupervised part division, characterized by comprising the following steps:
S1, data set training sample D is obtained;
S2, foreground segmentation process is carried out to the data set training sample that step S1 is obtained, uses the structure priori of weak structure image Cutting procedure is instructed, by the way that structure priori is added and merges a variety of dividing methods, improves a kind of point based on conspicuousness detection Segmentation method carries out foreground segmentation, data set D1 after being handled to data set using this method;
S3, the unsupervised component division methods of conspicuousness are based on, to the data set D1 after foreground segmentation of step S2, using aobvious Work property information carries out the division of unsupervised modular construction to flower image prospect, and to local shape factor and coding pond Change, generates characteristic data set D2;
S4, according to the feature data set D2 obtained in step S3, combining the multiple features SIFT, dense SIFT and Lab color, a mid-level representation fusing these features is computed; together with the globally extracted and pooled features, the image data set is then classified to obtain the fine-grained classification result of the object.
2. The fine-grained classification method for weakly structured objects based on saliency-guided unsupervised part division according to claim 1, characterized in that, in step S1, the training data set is the Oxford Flowers 102 dataset.
3. The fine-grained classification method for weakly structured objects based on saliency-guided unsupervised part division according to claim 1, characterized in that, in step S2, the saliency-detection-based segmentation method is an improvement of the Saliency Cut method: a superpixel segmentation algorithm is used to collect a large number of local image patches from the foreground and background regions of correctly segmented images, and each patch is labeled as belonging to foreground or background; an SVM classifier is then trained on these labeled patches to judge whether a query image patch belongs to the foreground or the background; given an image segmented by Saliency Cut, local patches are sampled from its background region and judged with the trained SVM classifier; if most of these patches are classified as foreground, the Saliency Cut result is inaccurate and may have discarded foreground regions belonging to the object, so those regions are added back to the foreground set, yielding the processed data set D1.
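A hedged sketch of the verification idea in claim 3: an SVM trained on labeled foreground/background patches judges patches sampled from the background of a Saliency Cut result; if most are classified as foreground, the cut is deemed inaccurate. The patch descriptors below are random placeholders standing in for real local-appearance features, and the two-cluster data layout is an assumption made for illustration only.

```python
# Sketch of SVM-based verification of a Saliency Cut segmentation.
# Descriptors are synthetic; in practice they would be appearance
# statistics of superpixel patches.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
d = 32  # assumed patch-descriptor dimensionality

# Training patches from correctly segmented images: label 1 = foreground.
fg = rng.normal(loc=1.0, size=(150, d))
bg = rng.normal(loc=-1.0, size=(150, d))
X = np.vstack([fg, bg])
y = np.array([1] * 150 + [0] * 150)
svm = LinearSVC().fit(X, y)

# Patches sampled from the *background* region of a query segmentation.
# Here they are deliberately foreground-like, simulating a bad cut.
query_patches = rng.normal(loc=1.0, size=(40, d))
fg_ratio = svm.predict(query_patches).mean()

# If most background-sampled patches look like foreground, the
# Saliency Cut result is judged inaccurate, and per claim 3 those
# regions would be added back to the foreground set.
segmentation_ok = fg_ratio < 0.5
print(fg_ratio, segmentation_ok)
```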
4. The fine-grained classification method for weakly structured objects based on saliency-guided unsupervised part division according to claim 1, characterized in that, in step S3, the saliency-based unsupervised part division operates on the foreground-segmented data set D1 from step S2 and performs local feature extraction and coding pooling; the component regions are reordered by saliency according to how many pixels they contain, so that regions with fewer pixels receive a higher degree of saliency; in the feature extraction and pooling stage, a given feature is extracted separately within each component region of the saliency map, and the features extracted from each region are used to learn, from all sample images, a visual-word dictionary specific to that region, thereby generating the feature data set D2.
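The two mechanisms in claim 4 can be sketched as follows: regions are ranked so that fewer pixels means higher saliency, and a separate visual-word dictionary is learned per region. The region names, pixel counts, and descriptors are all illustrative placeholders, and k-means stands in for whatever dictionary-learning procedure the method actually uses.

```python
# Sketch of claim 4: saliency ranking of component regions by pixel
# count, plus one k-means visual-word dictionary per region learned
# from that region's local descriptors (synthetic here).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Pixel counts of four hypothetical component regions of the saliency map.
pixel_counts = {"region_a": 5000, "region_b": 1200,
                "region_c": 300, "region_d": 2400}

# Fewer pixels -> higher saliency (rank 1 is most salient).
ranked = sorted(pixel_counts, key=pixel_counts.get)
saliency_rank = {name: r + 1 for r, name in enumerate(ranked)}

# Learn one visual-word dictionary per region from the local
# descriptors pooled over all sample images (stand-in data).
dictionaries = {}
for name in pixel_counts:
    descriptors = rng.normal(size=(200, 16))  # stand-in local features
    dictionaries[name] = KMeans(n_clusters=8, n_init=10,
                                random_state=0).fit(descriptors)

print(saliency_rank["region_c"])  # smallest region -> most salient
```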
5. The fine-grained classification method for weakly structured objects based on saliency-guided unsupervised part division according to claim 1, characterized in that, in step S4, according to the feature data set D2 obtained in step S3, the multiple features SIFT, dense SIFT and Lab color are combined to compute a mid-level representation fusing them; specifically, several one-vs-rest support vector machine classifiers (SVMs) are considered, and for each sample a high-dimensional vector is obtained by collecting the prediction scores of all these classifiers; these high-dimensional vectors constitute the fused mid-level representation; in the classification stage, both the globally extracted and pooled features and the pooling strategy based on unsupervised part division and local feature extraction are used to obtain the fine-grained classification result of the object.
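The mid-level encoding in claim 5 can be sketched with scikit-learn's `OneVsRestClassifier`: each sample's vector of decision scores from all one-vs-rest SVMs becomes its mid-level representation. The input features here are random placeholders standing in for the SIFT / dense SIFT / Lab color descriptors, and the class count is assumed for illustration.

```python
# Sketch of claim 5: one-vs-rest SVMs are trained, and for each sample
# the prediction scores of all classifiers are collected into a
# high-dimensional vector serving as the mid-level representation.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n, d, n_classes = 300, 48, 10
X = rng.normal(size=(n, d))          # stand-in fused low-level features
y = rng.integers(0, n_classes, size=n)

ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)

# decision_function returns one score per one-vs-rest classifier;
# each row is a sample's mid-level feature vector.
midlevel = ovr.decision_function(X)
print(midlevel.shape)  # (n_samples, n_classes)
```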
CN201910427847.0A 2019-05-22 2019-05-22 The weak structure object fine grit classification method divided based on the unsupervised component of conspicuousness Withdrawn CN110147840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910427847.0A CN110147840A (en) 2019-05-22 2019-05-22 The weak structure object fine grit classification method divided based on the unsupervised component of conspicuousness


Publications (1)

Publication Number Publication Date
CN110147840A true CN110147840A (en) 2019-08-20

Family

ID=67592636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910427847.0A Withdrawn CN110147840A (en) 2019-05-22 2019-05-22 The weak structure object fine grit classification method divided based on the unsupervised component of conspicuousness

Country Status (1)

Country Link
CN (1) CN110147840A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050391A1 (en) * 2012-08-17 2014-02-20 Nec Laboratories America, Inc. Image segmentation for large-scale fine-grained recognition
US20160132750A1 (en) * 2014-11-07 2016-05-12 Adobe Systems Incorporated Local feature representation for image recognition
WO2019018063A1 (en) * 2017-07-19 2019-01-24 Microsoft Technology Licensing, Llc Fine-grained image recognition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHENG PANG等: "Rediscover flowers structurally", 《MULTIMEDIA TOOLS AND APPLICATIONS》 *
JUNTAN ZHANG等: "Fine-Grained Image Classification via Spatial Saliency Extraction", 《2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA)》 *
尹红: "基于深度学习的花卉图像分类算法研究", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826629A (en) * 2019-11-08 2020-02-21 华南理工大学 Otoscope image auxiliary diagnosis method based on fine-grained classification
CN111080562A (en) * 2019-12-06 2020-04-28 合肥科大智能机器人技术有限公司 Substation suspender identification method based on enhanced image contrast
CN111080562B (en) * 2019-12-06 2022-12-20 合肥科大智能机器人技术有限公司 Substation suspender identification method based on enhanced image contrast

Similar Documents

Publication Publication Date Title
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN103514456B (en) Image classification method and device based on compressed sensing multi-core learning
CN109670528B (en) Data expansion method facing pedestrian re-identification task and based on paired sample random occlusion strategy
CN109359684A (en) Fine granularity model recognizing method based on Weakly supervised positioning and subclass similarity measurement
CN109948425A (en) A kind of perception of structure is from paying attention to and online example polymerize matched pedestrian's searching method and device
CN108875595A (en) A kind of Driving Scene object detection method merged based on deep learning and multilayer feature
CN111695482A (en) Pipeline defect identification method
CN109800736A (en) A kind of method for extracting roads based on remote sensing image and deep learning
CN107341517A (en) The multiple dimensioned wisp detection method of Fusion Features between a kind of level based on deep learning
CN107886117A (en) The algorithm of target detection merged based on multi-feature extraction and multitask
CN109523520A (en) A kind of chromosome automatic counting method based on deep learning
CN106504255B (en) A kind of multi-Target Image joint dividing method based on multi-tag multi-instance learning
CN103106265B (en) Similar image sorting technique and system
CN106250874A (en) A kind of dress ornament and the recognition methods of carry-on articles and device
CN105825502B (en) A kind of Weakly supervised method for analyzing image of the dictionary study based on conspicuousness guidance
CN108537117A (en) A kind of occupant detection method and system based on deep learning
CN104992142A (en) Pedestrian recognition method based on combination of depth learning and property learning
CN105046197A (en) Multi-template pedestrian detection method based on cluster
CN103164694A (en) Method for recognizing human motion
CN106023145A (en) Remote sensing image segmentation and identification method based on superpixel marking
CN105513066B (en) It is a kind of that the generic object detection method merged with super-pixel is chosen based on seed point
CN105825233B (en) A kind of pedestrian detection method based on on-line study random fern classifier
CN110263712A (en) A kind of coarse-fine pedestrian detection method based on region candidate
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN107092884A (en) Rapid coarse-fine cascade pedestrian detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190820