CN110363101A - Flower recognition method based on a CNN feature fusion framework - Google Patents
Flower recognition method based on a CNN feature fusion framework
- Publication number
- CN110363101A (application number CN201910548293.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- flowers
- gradient
- cnn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/045 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06V10/267 — Image or video recognition or understanding; image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/462 — Image or video recognition or understanding; extraction of image or video features; salient features, e.g. scale invariant feature transforms [SIFT]
- G06V20/20 — Scenes; scene-specific elements in augmented reality scenes
Abstract
The invention discloses a flower recognition method based on a CNN feature fusion framework. The method combines several effective image features with CNNs, which perform well at image recognition, and trains a recognition model for each feature. The main steps are: Step 1, image preprocessing and feature extraction; Step 2, CNN model training for each feature; Step 3, training a fusion framework on the model outputs to obtain the flower fusion-recognition framework. The invention uses background segmentation to separate the target flower from a complex background, which prevents background interference and allows the effective features of the flower to be used more efficiently. The invention uses a fusion framework assembled from individual flower recognition models, exploiting the different characteristics of flowers and improving the accuracy of flower recognition.
Description
Technical field
The invention belongs to the fields of computer vision and pattern recognition, and in particular relates to a flower recognition method based on a CNN feature fusion framework.
Background art
In recent years, the classification and recognition of flower images has become an important direction in computer vision and pattern recognition. With the spread of digital cameras, smartphones and other mobile image-capture terminals, acquiring flower images has become very convenient, and identifying and classifying these images with computer techniques has high research and application value. Flower image classification, however, belongs to fine-grained image classification, where an image is usually represented by multiple visual features, so accurate classification of flower images has always been a challenging problem.

Among existing flower classification techniques, Nilsback et al. compute four different features of a flower, including texture, boundary shape, and the overall spatial distribution of petals and color. They combine an SVM classifier with a kernel learning framework and determine the optimal weight of each feature based on a specific training set and prior constraints.

Cha proposes a co-segmentation algorithm based on the joint distribution of similar flower shapes: SIFT and Lab features are extracted from the whole image, the corresponding BoW feature vectors are computed, and images are classified with an SVM classifier.

Angelova uses a method that removes background interference and obtains satisfactory results in fine-grained classification experiments. In addition, Saitoh et al. extract flower and leaf features from images using a clustering method and then identify flowers with a piecewise linear discriminant function. Mishra proposes a multi-class classification and recognition algorithm based on color, shape, volume and cell features.

These methods, however, usually extract only one to four kinds of image features. They neither make full use of the various features of an image nor combine them with CNNs (convolutional neural networks), which perform well at image recognition, especially when large numbers of images must be recognized.

The present invention combines five features extracted from an image (color, texture, local features, gradient and depth) with CNNs, trains a recognition model for each feature, and combines the models into a single fused model with high recognition accuracy.
Summary of the invention
To overcome the above shortcomings and deficiencies of the prior art, the present invention proposes a flower recognition method based on a CNN feature fusion framework. Several effective image features are combined with CNNs, which perform well at image recognition; a recognition model is trained for each feature, and the models are then combined so as to improve recognition accuracy.

The object of the present invention is achieved through the following technical solutions:
Step 1: image preprocessing and feature extraction;
1-1. Background segmentation: background segmentation is first performed on the training sample images, using the method proposed by Nilsback.
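An illustrative background-removal sketch is given below. The patent specifies Nilsback's segmentation method; since that method is not detailed here, OpenCV's GrabCut is used as a stand-in, and the centered initial rectangle is an assumption.

```python
# Background-removal sketch for step 1-1 (GrabCut as a stand-in for Nilsback's method).
import cv2
import numpy as np

def segment_flower(bgr_image):
    h, w = bgr_image.shape[:2]
    rect = (w // 8, h // 8, 3 * w // 4, 3 * h // 4)   # assumed rough foreground box
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return bgr_image * fg[:, :, None]                 # background zeroed out
```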
1-2.HSV feature extraction:
The training sample image is converted from the original RGB (red, green, blue) image into the HSV (hue, saturation, value) color space, and the V channel is then discarded. The conversion follows the standard RGB-to-HSV formula, where R, G and B are the values of each pixel.

The converted image is then divided into an appropriate number of color sub-intervals, each sub-interval becoming one bin of a histogram; that is, the image is quantized. Counting the pixels falling in each color sub-interval yields the HSV color histogram, which is taken as the HSV color feature.
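A minimal sketch of the HSV color-histogram feature, assuming OpenCV and NumPy; the bin counts are an illustrative choice, since the text only asks for "an appropriate number" of color sub-intervals.

```python
# HSV color-histogram feature (step 1-2): convert to HSV, drop V, build an H-S histogram.
import cv2
import numpy as np

def hsv_histogram(segmented_bgr, h_bins=32, s_bins=32):
    hsv = cv2.cvtColor(segmented_bgr, cv2.COLOR_BGR2HSV)
    h, s, _v = cv2.split(hsv)                         # the V channel is discarded
    hist = cv2.calcHist([h, s], [0, 1], None,
                        [h_bins, s_bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()        # quantized HSV color feature
```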
1-3. texture feature extraction:
Image texture features are translation invariant, but texture may exist at multiple scales. The two-dimensional discrete wavelet transform (DWT) can extract features at different scales and has the advantage of multi-resolution analysis, so the DWT is used to extract the texture feature of the image. In the two-dimensional wavelet transform, f(x, y) is the information of the original image (here the saturation (S) component of the HSV color space) and ψ is a wavelet function.

When processing an image, the wavelet transform decomposes it into four sub-images: the low-resolution sub-image A, the horizontal-direction sub-image H, the vertical-direction sub-image V and the diagonal sub-image D, arranged as in Table 1 below:
Table 1
A | WH |
WV | WD |
The low-resolution sub-image A concentrates the main component of the signal, while the other three parts carry the high-frequency information of the signal, i.e. the detail information. The sum of the three high-frequency sub-band outputs of the wavelet transform is used as the texture feature:

T = WH + WV + WD
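A minimal sketch of the DWT texture feature, assuming the PyWavelets library; the Haar basis and the use of absolute detail coefficients are illustrative choices, since the wavelet function itself is not reproduced above.

```python
# DWT texture feature (step 1-3): one-level 2-D DWT of the S channel, detail bands summed.
import numpy as np
import pywt

def dwt_texture_feature(saturation_channel):
    cA, (cH, cV, cD) = pywt.dwt2(saturation_channel.astype(float), 'haar')
    texture = np.abs(cH) + np.abs(cV) + np.abs(cD)    # T = WH + WV + WD (high-frequency detail)
    return texture.flatten()
```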
1-4. HOG feature extraction: the HOG (Histogram of Oriented Gradients) method is based on computing normalized local gradient-orientation histograms on a dense grid.
The image is first converted into a grayscale image and Gamma-normalized, then divided into several blocks, each of which is divided into several cells. Each cell is then convolved with a gradient operator such as Sobel or Laplacian to obtain the gradient direction and magnitude. With Ix and Iy denoting the gradient values in the horizontal and vertical directions, the gradient magnitude is M(x, y) = √(Ix² + Iy²) and the gradient direction is θ(x, y) = arctan(Iy / Ix).
For each cell, the gradient directions are divided into 36 bins of 10 degrees each, so the whole histogram has 36 dimensions. Each gradient is then mapped by weighted projection into the corresponding angular range to obtain the cell's weighted histogram over gradient directions. Finally, the result is normalized to give a 36-dimensional gradient descriptor. The features of all cells within the same block are concatenated to form the block feature, and the features of all blocks are concatenated to form the HOG feature.
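A minimal sketch of the HOG feature with the 36-bin, 10-degree layout described above, assuming OpenCV and NumPy; the cell size and the per-cell L2 normalization are illustrative choices.

```python
# HOG feature (step 1-4): per-cell 36-bin orientation histograms, concatenated.
import cv2
import numpy as np

def hog_feature(bgr_image, cell=8):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray = np.sqrt(gray / 255.0)                            # simple gamma normalization
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)         # Ix
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)         # Iy
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True) # M(x, y), theta(x, y)
    h, w = gray.shape
    cells = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=36, range=(0, 360), weights=m)
            cells.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(cells)                             # concatenated HOG feature
```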
1-5. SIFT (Scale Invariant Feature Transform) feature extraction: SIFT features are invariant to rotation, scaling, brightness changes and so on, and are highly stable local features, so SIFT is taken as the most important feature for flower recognition.
Extremum detection in scale space: an image pyramid is used to represent the image at multiple scales. The image is first smoothed and then down-sampled. A Gaussian convolution kernel is needed to build the scale space; the scale space of a two-dimensional image is defined as

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)

where G(x, y, σ) is a variable-scale Gaussian function, I(x, y) is the image at spatial coordinates (x, y), and σ is the scale-space factor, i.e. the standard deviation of the Gaussian distribution. It reflects the degree to which the image is blurred: the larger the value, the blurrier the image and the larger the corresponding scale.
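The definition of the variable-scale Gaussian is not reproduced in the text above; the standard form used in SIFT, assumed here, is

```latex
G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)
```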
The image pyramid is then built upward from the original image by operating on adjacent Gaussian-smoothed images: the original image forms the bottom layer, and each layer above is obtained from the layer below by Gaussian convolution with gradually increasing σ, giving the Gaussian pyramid. Subtracting adjacent levels of the Gaussian pyramid yields the DoG (Difference of Gaussian) pyramid.
Detecting DoG scale-space extrema: each sample point is compared with its 8 neighbors at the same scale and the 9 × 2 points at the corresponding positions in the two adjacent scales, 26 points in total, which ensures that extrema are detected both in scale space and in the two-dimensional image space. This step is repeated until all extremum points are detected.
Determining the principal direction of a feature point: the direction parameter is determined from the gradient distribution of the pixels in the neighborhood of the feature point, and the gradient histogram of the image is used to find the stable direction of the local structure around the key point. Centered on the feature point, the gradient argument and magnitude of each pixel within a radius of 3 × 1.5σ are computed, and the gradient arguments are accumulated into a histogram. The horizontal axis of the histogram is the gradient direction and the vertical axis is the accumulated gradient magnitude for that direction; the direction corresponding to the highest peak is the direction of the feature point. After the principal direction is obtained, each feature point carries three pieces of information (x, y, σ, θ), i.e. position, scale and direction, which determine a SIFT feature region.
Generating the feature descriptor: the coordinate axes are first rotated to the direction of the feature point. The gradient magnitudes and directions of the pixels in a 16 × 16 window centered on the feature point are computed; the window is divided into 16 sub-blocks, and for each sub-block a histogram over 8 directions is accumulated from its pixels, giving a 128-dimensional feature vector, which is the SIFT feature.
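A minimal sketch of SIFT extraction, assuming OpenCV's built-in implementation (cv2.SIFT_create, OpenCV 4.4+) rather than a from-scratch DoG pyramid; each keypoint carries the position, scale and orientation described above, together with a 128-dimensional descriptor.

```python
# SIFT local feature (step 1-5) via OpenCV.
import cv2

def sift_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors                      # descriptors: N x 128
```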
1-6. Depth feature extraction: extracting the depth feature first requires building two CNN-based neural networks: a global coarse-scale network (Global Coarse-Scale Network) and a local fine-scale network (Local Fine-Scale Network).
The task of the global coarse-scale network is to predict the overall depth-map structure from the global view of the scene; the local fine-scale network locally refines the feature map so that the depth feature becomes more pronounced.
The global coarse-scale network contains five convolutional layers, a max-pooling layer and two fully connected layers, and its final output is 1/4 of the original image resolution. The local fine-scale network contains three convolutional layers and a max-pooling layer; the output of the coarse-scale network is fed into the fine-scale network as an additional low-level feature map, and the fine-scale network outputs the depth feature. For the specific parameter choices and the dimensions of the feature maps and output maps of the two networks, refer to the paper "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network".
The image is first fed simultaneously into the global coarse-scale network and the local fine-scale network; the output of the coarse-scale network is then added to the second layer of the fine-scale network as one of its inputs, and after passing through the fine-scale network the depth feature is output.
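A minimal two-stream sketch of the coarse/fine depth extractor, assuming PyTorch; the layer sizes are illustrative placeholders and do not reproduce the exact parameters of the cited multi-scale network.

```python
# Coarse/fine depth extractor sketch (step 1-6).
import torch
import torch.nn as nn

class CoarseNet(nn.Module):                      # global coarse-scale network
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 5, padding=2), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.predict = nn.Conv2d(128, 1, 3, padding=1)   # coarse depth map

    def forward(self, x):
        return self.predict(self.features(x))

class FineNet(nn.Module):                        # local fine-scale network
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 63, 9, stride=4, padding=4), nn.ReLU())
        self.refine = nn.Sequential(             # coarse output is injected here
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 1, 5, padding=2),
        )

    def forward(self, image, coarse_depth):
        x = self.conv1(image)
        coarse = nn.functional.interpolate(coarse_depth, size=x.shape[2:])
        x = torch.cat([x, coarse], dim=1)        # coarse prediction as an extra channel
        return self.refine(x)                    # refined depth feature
```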
Step 2: CNN model training for each feature
2-1. Five models are built on the AlexNet neural network. The AlexNet model has five convolutional layers and three fully connected layers; the local response normalization layers (Local Response Normalization) in the original model are replaced with batch normalization layers (Batch Normalization).
2-2. The training images are resized to 224 × 224, and the models are initialized with the parameters of a fine-tuned ImageNet-1000 pre-trained model.
2-3. For each of the five features extracted in step 1, the corresponding model is trained with the CNN. Training is repeated several times, and the model with the best generalization ability is kept.
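A minimal sketch of one per-feature classifier, assuming PyTorch: an AlexNet-style network (five convolutional and three fully connected layers) in which batch normalization takes the place of local response normalization; num_classes and the channel counts are illustrative.

```python
# Per-feature AlexNet-style classifier (step 2) with BatchNorm replacing LRN.
import torch.nn as nn

def alexnet_bn(num_classes, in_channels=3):
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 11, stride=4, padding=2), nn.BatchNorm2d(64), nn.ReLU(),
        nn.MaxPool2d(3, 2),
        nn.Conv2d(64, 192, 5, padding=2), nn.BatchNorm2d(192), nn.ReLU(),
        nn.MaxPool2d(3, 2),
        nn.Conv2d(192, 384, 3, padding=1), nn.BatchNorm2d(384), nn.ReLU(),
        nn.Conv2d(384, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
        nn.Conv2d(256, 256, 3, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
        nn.MaxPool2d(3, 2),
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
        nn.Linear(4096, num_classes),            # last fully connected layer (its output is L_i)
    )
```

One such model would be trained per feature (color, texture, HOG, SIFT, depth) on 224 × 224 inputs, initialized from the pre-trained parameters.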
Step 3: training the fusion framework on the model outputs
3-1. The CNN models obtained in step 2, one per feature, are combined linearly using multiple-response linear regression (MLR) as an auxiliary learning algorithm. In MLR the standard (squared) error is used as the loss function, where Wi and Bi are the weight and offset of the corresponding individual learner, Li is the output of the last fully connected layer of each model, and F is the prediction-label matrix. The learning task is to compute the model parameters by minimizing this error; Wi and Bi, the weight and bias corresponding to each model, are obtained by minimizing the standard error.
3-2. The fusion framework obtained from MLR is tested, and its parameters are then adjusted according to the test results to find the fusion framework with the best test performance.
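A minimal sketch of the MLR fusion step, assuming NumPy and taking the loss to be the squared error between the linear combination of the models' last-layer outputs Li and the label matrix F (the original formulas are not reproduced above); the weights and biases are obtained by least squares.

```python
# MLR fusion of per-feature model outputs (step 3).
import numpy as np

def fit_mlr_fusion(model_outputs, labels_one_hot):
    """model_outputs: list of (n_samples, n_classes) score matrices, one per model;
    labels_one_hot: (n_samples, n_classes) ground-truth matrix F."""
    X = np.hstack(model_outputs)                      # stacked L_i
    X = np.hstack([X, np.ones((X.shape[0], 1))])      # bias column (B_i)
    W, *_ = np.linalg.lstsq(X, labels_one_hot, rcond=None)
    return W                                          # fused weights

def fuse_predict(model_outputs, W):
    X = np.hstack(model_outputs)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ W).argmax(axis=1)                     # fused class prediction
```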
Step 4: identification
4-1. The flower image to be recognized is subjected to the background segmentation described in step 1 and to the extraction of the various effective features.
4-2. The effective features extracted from the image are fed into the flower fusion-recognition framework for recognition.
4-3. The name of the object in the image to be recognized is output.
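A minimal end-to-end recognition sketch, assuming the segmentation function, feature extractors, per-feature models and fused MLR weights from the sketches above are supplied as callables and arrays; all names are illustrative.

```python
# End-to-end recognition flow (step 4).
import numpy as np

def recognize(image, segment, extractors, base_models, W, class_names):
    """segment: background-removal function; extractors: list of feature functions;
    base_models: list of callables mapping a feature vector to class scores."""
    region = segment(image)                           # step 1: background segmentation
    scores = [np.atleast_2d(model(fx(region)))        # L_i for each feature/model pair
              for fx, model in zip(extractors, base_models)]
    X = np.hstack(scores)
    X = np.hstack([X, np.ones((X.shape[0], 1))])      # bias column, as used in fitting
    fused = X @ W                                     # MLR-fused scores
    return class_names[int(fused.argmax())]           # name of the recognized flower
```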
The beneficial effects of the present invention are as follows:
Using background segmentation, the flower to be recognized is separated from its complex background, which prevents background interference and allows the effective features of the flower to be used more efficiently.
Using a fusion framework assembled from individual flower recognition models, the different characteristics of flowers are exploited and the accuracy of flower recognition is improved.
Description of the drawings
Fig. 1 is the overall framework diagram of the present invention;
Fig. 2 is the image color-space conversion diagram of the present invention;
Fig. 3 is the image texture feature extraction process of the present invention;
Fig. 4 is the image HOG feature extraction process of the present invention;
Fig. 5 is the image SIFT feature extraction process of the present invention;
Fig. 6 is the image depth feature extraction process of the present invention;
Fig. 7 is the image recognition operation flow chart of the present invention.
Specific embodiment
The present invention will now be described in detail with reference to specific embodiments.
As shown in Fig. 1, the flower recognition method based on a CNN feature fusion framework proposed by the present invention is implemented in the following steps:
Step 1: processing the training sample images
As shown in Figs. 2 and 3, color feature extraction: background segmentation is first performed on the training sample image, and the segmented image is converted into an HSV color-space image according to the color-space conversion formula. The image is divided into several color sub-intervals, the number of pixels in each sub-interval is counted, and the HSV color histogram is produced as the HSV color feature.
Texture feature extraction: a wavelet function is defined and the HSV sample image is processed with the two-dimensional discrete wavelet transform, yielding the low-resolution sub-image A, the vertical-direction sub-image V, the diagonal sub-image D and the horizontal-direction sub-image H. The information of the vertical, diagonal and horizontal sub-images is then summed to give the texture feature, which is output.
As shown in Fig. 4, HOG feature extraction: the training sample image is first Gamma-normalized, the gradient of each pixel in the processed image is computed, the image is divided into blocks of a certain area and each block is divided into cell units. After division, a weighted histogram is projected and obtained in each cell; the features of all cells in a block are concatenated to form the block feature, and the block features are concatenated to construct the HOG feature, which is output.
As shown in Fig. 5, SIFT feature extraction: the training sample image is first smoothed and down-sampled, the DoG pyramid is built with Gaussian convolution kernels, and the extrema of the scale space are detected. For each extremum point, the principal direction of the feature point and the other relevant attributes are determined, the SIFT feature region of each feature point is generated, and the SIFT features of the whole image are aggregated and output.
As shown in Fig. 6, depth feature extraction: the image is first fed simultaneously into the global coarse-scale network and the local fine-scale network; the output of the coarse-scale network is then added to the second layer of the fine-scale network as one of its inputs, and after passing through the fine-scale network the depth feature is output.
Step 2: CNN model training for each feature
First, a neural network based on AlexNet is built, with a batch normalization layer added before each activation layer.
The image size is adjusted to 224 × 224, the parameters are initialized with those of the ImageNet-1000 pre-trained model, and part of the structure is fine-tuned.
The different image features extracted above are fed into their corresponding models and trained repeatedly, and the feature models with good generalization ability are selected.
Step 3: fusion of the multi-feature models and its adjustment
The selected models with good generalization ability are combined linearly using multiple-response linear regression (MLR).
Each trained fusion framework is tested, its parameters are adjusted according to its generalization ability, and the fusion framework with the best generalization ability is obtained.
Step 4: recognizing the flower to be identified, as shown in Fig. 7:
The image to be recognized is first background-segmented, then converted into the HSV color space, and then resized to 224 × 224.
Color, texture, HOG, SIFT and depth features are extracted from the processed image.
The effective features are fed into the fusion framework model for recognition, and the recognition result for the flower image is obtained.
Claims (5)
1. A flower recognition method based on a CNN feature fusion framework, the method combining several effective image features with CNNs, which perform well at image recognition, and training a recognition model based on each feature, characterized by comprising the following steps:
Step 1: image preprocessing and feature extraction;
Step 2: CNN model training based on each feature;
Step 3: training a fusion framework on the model outputs to obtain the flower fusion-recognition framework;
The step 1 is implemented as follows:
1-1. Background segmentation: background segmentation is first performed on the training sample image;
1-2. HSV feature extraction:
The training sample image is converted from the original RGB image into the HSV color space and the V channel is discarded; the conversion follows the standard RGB-to-HSV formula, where R, G and B are the values of each pixel;
The converted image is then divided into a specified number of color sub-intervals, each sub-interval becoming one bin of a histogram, i.e. the image is quantized; the HSV color histogram is obtained by counting the pixels in each color sub-interval and is taken as the HSV color feature;
1-3. Texture feature extraction:
The image texture feature is extracted with the two-dimensional discrete wavelet transform (DWT), in which f(x, y) is the information of the original image, here the saturation (S) component, and ψ is a wavelet function;
When processing the image, the wavelet transform decomposes it into four sub-images: the low-resolution sub-image A, the horizontal-direction sub-image H, the vertical-direction sub-image V and the diagonal sub-image D; the low-resolution sub-image A concentrates the main component of the signal, while the remaining sub-images H, V and D carry the high-frequency information of the signal, i.e. the detail information; the sum of the three high-frequency sub-band outputs of the wavelet transform is used as the texture feature:
T = WH + WV + WD
1-4. HOG feature extraction:
The image is first converted into a grayscale image and Gamma-normalized; the image is divided into multiple blocks and each block is divided into multiple cells; each cell is then convolved with a gradient operator such as Sobel or Laplacian to obtain the gradient direction and magnitude, where, with Ix and Iy denoting the gradient values in the horizontal and vertical directions, the gradient magnitude is M(x, y) = √(Ix² + Iy²) and the gradient direction is θ(x, y) = arctan(Iy / Ix);
1-5. SIFT feature extraction:
Extremum detection in scale space: an image pyramid is used to represent the image at multiple scales; the image is first smoothed and then down-sampled; a Gaussian convolution kernel is needed to build the scale space, the scale space of a two-dimensional image being defined as
L(x, y, σ) = G(x, y, σ) ∗ I(x, y)
where G(x, y, σ) is a variable-scale Gaussian function, I(x, y) is the image at spatial coordinates (x, y), and σ is the scale-space factor;
The image pyramid is then built upward from the original image by operating on adjacent Gaussian-smoothed images: the original image forms the bottom layer and each layer above is obtained from the layer below by Gaussian convolution, giving the Gaussian pyramid; subtracting adjacent levels of the Gaussian pyramid yields the DoG pyramid;
Detecting DoG scale-space extrema: each sample point is compared with its 8 neighbors at the same scale and the 9 × 2 points at the corresponding positions in the two adjacent scales, 26 points in total, which ensures that extrema are detected both in scale space and in the two-dimensional image space; this step is repeated until all extremum points are detected;
Determining the principal direction of a feature point: the direction parameter is determined from the gradient distribution of the pixels in the neighborhood of the feature point, and the gradient histogram of the image is used to find the stable direction of the local structure around the key point; centered on the feature point, the gradient argument and magnitude of each pixel within a radius of 3 × 1.5σ are computed, and the gradient arguments are accumulated into a histogram; the horizontal axis of the histogram is the gradient direction and the vertical axis is the accumulated gradient magnitude for that direction, the direction corresponding to the highest peak being the direction of the feature point; after the principal direction is obtained, each feature point carries three pieces of information (x, y, σ, θ), i.e. position, scale and direction, which determine a SIFT feature region;
Generating the feature descriptor: the coordinate axes are first rotated to the direction of the feature point; the gradient magnitudes and directions of the pixels in a 16 × 16 window centered on the feature point are computed, the window is divided into 16 sub-blocks, and for each sub-block a histogram over 8 directions is accumulated from its pixels, forming a 128-dimensional feature vector, which is the SIFT feature;
1-6. Depth feature extraction: extracting the depth feature first requires building two CNN-based neural networks: a global coarse-scale network and a local fine-scale network;
The image is first fed simultaneously into the global coarse-scale network and the local fine-scale network; the output of the coarse-scale network is then added to the second layer of the fine-scale network as one of its inputs, and after passing through the fine-scale network the depth feature is output.
2. The flower recognition method based on a CNN feature fusion framework according to claim 1, characterized in that, for each cell, the gradient directions are divided into 36 bins of 10 degrees each, so the whole histogram has 36 dimensions; each gradient is then mapped by weighted projection into the corresponding angular range to obtain the cell's weighted histogram over gradient directions; finally, the result is normalized to give a 36-dimensional gradient descriptor; the features of all cells within the same block are then concatenated to form the block feature, and the features of all blocks are concatenated to form the HOG feature.
3. The flower recognition method based on a CNN feature fusion framework according to claim 2, characterized in that the global coarse-scale network contains five convolutional layers, one max-pooling layer and two fully connected layers, and its final output is 1/4 of the original image resolution; the local fine-scale network contains three convolutional layers and one max-pooling layer; the output of the global coarse-scale network is fed into the local fine-scale network as an additional low-level feature map, and the local fine-scale network outputs the depth feature.
4. The flower recognition method based on a CNN feature fusion framework according to claim 3, characterized in that the CNN model training based on each feature in step 2 is implemented as follows:
2-1. Five models are built on the AlexNet neural network; the AlexNet model has five convolutional layers and three fully connected layers, and the local response normalization layers in the original model are replaced with batch normalization layers;
2-2. The training images are resized to 224 × 224 and the models are initialized with the parameters of a fine-tuned ImageNet-1000 pre-trained model;
2-3. For each of the five features extracted in step 1, the corresponding model is trained with the CNN; training is repeated several times and the model with the best generalization ability is kept.
5. The flower recognition method based on a CNN feature fusion framework according to claim 4, characterized in that training the fusion framework on the model outputs in step 3 to obtain the flower fusion-recognition framework is implemented as follows:
3-1. The CNN models obtained in step 2, one per feature, are combined linearly using multiple-response linear regression (MLR) as an auxiliary learning algorithm; in MLR the standard (squared) error is used as the loss function, where Wi and Bi are the weight and offset of the corresponding individual learner, Li is the output of the last fully connected layer of each model and F is the prediction-label matrix; the learning task is to compute the model parameters by minimizing this error, and Wi and Bi, the weight and bias corresponding to each model, are obtained by minimizing the standard error;
3-2. The flower fusion-recognition framework obtained from MLR is tested, and the parameters of the fusion framework are then adjusted according to the test results to find the fusion framework with the best test performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910548293.XA CN110363101A (en) | 2019-06-24 | 2019-06-24 | A flower recognition method based on a CNN feature fusion framework
Publications (1)
Publication Number | Publication Date |
---|---|
CN110363101A true CN110363101A (en) | 2019-10-22 |
Family
ID=68216766
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910548293.XA Pending CN110363101A (en) | 2019-06-24 | 2019-06-24 | A flower recognition method based on a CNN feature fusion framework
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363101A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021933A (en) * | 2017-11-23 | 2018-05-11 | Neural network recognition model and recognition method |
CN109190752A (en) * | 2018-07-27 | 2019-01-11 | Image semantic segmentation method based on deep-learning global and local features |
CN109859238A (en) * | 2019-03-14 | 2019-06-07 | Online multi-object tracking method based on optimal multi-feature association |
Non-Patent Citations (3)
Title |
---|
BUZHEN HUANG ET AL.: "A Flower Classification Framework Based on Ensemble of CNNs", 《Springer Link》 * |
柳杨 (Liu Yang): 《数字图像物体识别理论详解与实战》 (Digital Image Object Recognition: Theory and Practice), 31 January 2018 * |
王金龙 等 (Wang Jinlong et al.): "基于SIFT图像特征提取与FLANN匹配算法的研究" (Research on SIFT image feature extraction and the FLANN matching algorithm), 《计算机测量与控制》 (Computer Measurement & Control) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287919A (en) * | 2020-07-07 | 2021-01-29 | 国网江苏省电力有限公司常州供电分公司 | Power equipment identification method and system based on infrared image |
CN112287919B (en) * | 2020-07-07 | 2022-08-30 | 国网江苏省电力有限公司常州供电分公司 | Power equipment identification method and system based on infrared image |
CN114403023A (en) * | 2021-12-20 | 2022-04-29 | 北京市农林科学院智能装备技术研究中心 | Pig feeding method, device and system based on terahertz fat thickness measurement |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191022 |