CN110163293A - Red meat classification method, device, equipment and storage medium based on deep learning - Google Patents
- Publication number: CN110163293A
- Application number: CN201910454174.8A
- Authority
- CN
- China
- Prior art keywords
- information
- red meat
- sorted
- default
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/2135—Feature extraction by transforming the feature space; subspace methods based on approximation criteria, e.g. principal component analysis
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/24—Classification techniques
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
Abstract
The invention discloses a red meat classification method, apparatus, equipment and storage medium based on deep learning. The method comprises: obtaining red meat hyperspectral image information to be classified; inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information between the feature information items in the red meat hyperspectral image information to be classified; extracting reference feature information in the red meat hyperspectral image information to be classified according to the similarity information; performing dimensionality reduction on the reference feature information to obtain target feature information; and performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information. The feature information of the red meat hyperspectral image is thus first extracted with a suitable preset metric function, and spatial-spectral joint feature information is then extracted based on deep learning, improving the classification accuracy of red-meat hyperspectral images.
Description
Technical field
The present invention relates to the field of remote sensing technology, and more particularly to a red meat classification method, apparatus, equipment and storage medium based on deep learning.
Background art
Hyperspectral technology, developed in the remote sensing field over the last decade, has been applied to non-destructive testing in many fields. Compared with other images, a hyperspectral image contains rich spectral information in addition to spatial information; combined with spectral processing methods and image processing algorithms, it is well suited to non-destructive quality detection of red meat. At present, the detection of beef, mutton and pork based on hyperspectral technology, including tenderness, pH value, water-holding capacity, marbling and microbiological and chemical properties, has achieved certain results.
However, hyperspectral images are characterized by high dimensionality, similarity between spectra and mixed pixels, and the data volume is huge. These characteristics lead to high time and space complexity of the classification algorithms, and consequently to low classification accuracy.
Summary of the invention
The main object of the present invention is to propose a red meat classification method, apparatus, equipment and storage medium based on deep learning, intended to solve the technical problem of low recognition accuracy when classifying hyperspectral images.
To achieve the above object, the present invention provides a red meat classification method based on deep learning, the method comprising the following steps:
obtaining red meat hyperspectral image information to be classified;
inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information between the feature information items in the red meat hyperspectral image information to be classified;
extracting reference feature information in the red meat hyperspectral image information to be classified according to the similarity information;
performing dimensionality reduction on the reference feature information to obtain target feature information;
performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
Preferably, obtaining the red meat hyperspectral image information to be classified comprises:
obtaining red meat hyperspectral sample image information;
performing kernel principal component analysis on the red meat hyperspectral sample image information to obtain analyzed red meat hyperspectral sample image information;
taking the analyzed red meat hyperspectral sample image information as the red meat hyperspectral image information to be classified.
Preferably, performing kernel principal component analysis on the red meat hyperspectral sample image information to obtain the analyzed red meat hyperspectral sample image information comprises:
selecting a preset kernel function, and mapping the nonlinear features in the red meat hyperspectral sample image information to a high-dimensional feature space according to the selected preset kernel function to obtain correlation matrix information;
obtaining a principal component decision condition, and substituting the correlation matrix information into the principal component decision condition to obtain a principal component decision function;
solving the principal component decision function to obtain the analyzed red meat hyperspectral sample image information.
Preferably, inputting the red meat hyperspectral image information to be classified into the preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified comprises:
extracting the unitary matrix and the diagonal matrix of the red meat hyperspectral image information to be classified;
obtaining the transpose of the unitary matrix according to the unitary matrix;
inputting the unitary matrix, the diagonal matrix and the transpose of the unitary matrix into the preset metric function to obtain the distance information between the feature information items in the red meat hyperspectral image information to be classified;
taking the distance information as the similarity information.
Correspondingly, before inputting the red meat hyperspectral image information to be classified into the preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified, the method further comprises:
judging whether the preset metric function satisfies a preset condition and, when the preset metric function satisfies the preset condition, executing the step of inputting the red meat hyperspectral image information to be classified into the preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified.
Preferably, before performing dimensionality reduction on the reference feature information to obtain the target feature information, the method further comprises:
extracting the number information of the reference feature information;
training a preset three-layer feedforward neural network according to the reference feature information and the number information to obtain first loss function information, wherein the preset three-layer feedforward neural network comprises an input layer, a hidden layer and a reconstruction layer;
obtaining preset learning rate information, and updating the first loss function information according to the preset learning rate information to obtain the updated input-layer-to-hidden-layer weight, the updated input-layer-to-hidden-layer bias coefficient, the updated hidden-layer-to-reconstruction-layer weight and the updated hidden-layer-to-reconstruction-layer bias coefficient.
Correspondingly, performing dimensionality reduction on the reference feature information to obtain the target feature information comprises:
performing dimensionality reduction on the reference feature information according to the updated input-layer-to-hidden-layer weight, the updated input-layer-to-hidden-layer bias coefficient, the updated hidden-layer-to-reconstruction-layer weight and the updated hidden-layer-to-reconstruction-layer bias coefficient to obtain the target feature information.
Preferably, performing spatial-spectral joint classification on the target feature information based on deep learning to obtain the target category information comprises:
extracting the target spectral dimension in the target feature information;
taking the target feature information within a preset rectangular region of the target feature information as the spatial-spectral joint feature information to be classified;
inputting the target spectral dimension and the feature information to be classified into a preset deep-learning-based convolutional neural network model for classification to obtain the target category information.
Preferably, inputting the target spectral dimension and the feature information to be classified into the preset deep-learning-based convolutional neural network model for classification to obtain the target category information comprises:
inputting the feature information to be classified into a convolutional layer in the preset deep-learning-based convolutional neural network, and generating a feature map from the feature information to be classified through the convolutional layer and a preset weight matrix;
adding a preset bias term to the feature map to obtain the feature map with the bias term added;
extracting each pixel of the feature map with the bias term added, obtaining a preset nonlinear activation function, and performing nonlinear processing on the pixels through the preset nonlinear activation function to obtain a processed feature map;
extracting the local main features of the processed feature map through a pooling layer in the preset convolutional neural network;
stretching the local main features into a preset vector, and feeding the preset vector into a fully connected layer in the preset convolutional neural network for linear transformation to obtain transformed vector information;
feeding the transformed vector information and the target spectral dimension into a classification layer in the preset convolutional neural network for classification to obtain the target category information.
Correspondingly, before inputting the feature information to be classified into the convolutional layer in the preset deep-learning-based convolutional neural network and generating the feature map from the feature information to be classified through the convolutional layer and the preset weight matrix, the method further comprises:
obtaining a reference weight matrix and a reference bias term;
obtaining second loss function information of the preset convolutional neural network according to the feature information to be classified, the reference weight matrix and the reference bias term;
updating the second loss function information to obtain the updated reference weight matrix and the updated reference bias term;
taking the updated reference weight matrix as the preset weight matrix and the updated reference bias term as the preset bias term.
In addition, to achieve the above object, the present invention also proposes a red meat classification apparatus based on deep learning, the apparatus comprising:
an acquisition module for obtaining red meat hyperspectral image information to be classified;
a metric module for inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified;
an extraction module for extracting the reference feature information in the red meat hyperspectral image information to be classified according to the similarity information;
a dimensionality reduction module for performing dimensionality reduction on the reference feature information to obtain target feature information;
a classification module for performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
In addition, to achieve the above object, the present invention also proposes a red meat classification equipment based on deep learning, the equipment comprising a memory, a processor and a red meat classification program based on deep learning that is stored on the memory and runnable on the processor, the red meat classification program based on deep learning being configured to implement the steps of the red meat classification method based on deep learning described above.
In addition, to achieve the above object, the present invention also proposes a storage medium on which a red meat classification program based on deep learning is stored; when executed by a processor, the red meat classification program based on deep learning implements the steps of the red meat classification method based on deep learning described above.
The red meat classification method based on deep learning proposed by the present invention obtains red meat hyperspectral image information to be classified; inputs the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information between the feature information items in the red meat hyperspectral image information to be classified; extracts the reference feature information in the red meat hyperspectral image information to be classified according to the similarity information; performs dimensionality reduction on the reference feature information to obtain target feature information; and performs spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information. The feature information of the red meat hyperspectral image is thus first extracted with a suitable preset metric function, and spatial-spectral joint feature information is then extracted based on deep learning, improving the classification accuracy of red-meat hyperspectral images.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the red meat classification equipment based on deep learning in a hardware operating environment according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a first embodiment of the red meat classification method based on deep learning according to the present invention;
Fig. 3 is a schematic overall flowchart of an embodiment of red meat classification based on deep learning according to the present invention;
Fig. 4 is a schematic flowchart of a second embodiment of the red meat classification method based on deep learning according to the present invention;
Fig. 5 is a schematic flowchart of a third embodiment of the red meat classification method based on deep learning according to the present invention;
Fig. 6 is a schematic flowchart of a fourth embodiment of the red meat classification method based on deep learning according to the present invention;
Fig. 7 is a functional block diagram of a first embodiment of the red meat classification apparatus based on deep learning according to the present invention.
The realization of the object, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the red meat classification equipment based on deep learning in the hardware operating environment according to an embodiment of the present invention.
As shown in Fig. 1, the red meat classification equipment based on deep learning may include: a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as keys, and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed random access memory (RAM), or a stable non-volatile memory such as a magnetic disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the equipment structure shown in Fig. 1 does not constitute a limitation on the red meat classification equipment based on deep learning; the equipment may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in Fig. 1, the memory 1005, as a kind of storage medium, may include an operating system, a network communication module, a user interface module and a red meat classification program based on deep learning.
In the red meat classification equipment based on deep learning shown in Fig. 1, the network interface 1004 is mainly used to connect to an external network and perform data communication with other network devices; the user interface 1003 is mainly used to connect to user equipment and perform data communication with that user equipment; and the equipment of the present invention calls, through the processor 1001, the red meat classification program based on deep learning stored in the memory 1005 and executes the implementation of the red meat classification based on deep learning provided by the embodiments of the present invention.
Based on the above hardware structure, embodiments of the red meat classification method based on deep learning of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a first embodiment of the red meat classification method based on deep learning according to the present invention.
In the first embodiment, the red meat classification method based on deep learning comprises the following steps:
Step S10: obtain red meat hyperspectral image information to be classified.
It should be noted that the executing subject of this embodiment may be the red meat classification equipment based on deep learning, or other equipment capable of realizing the same or similar functions; this embodiment imposes no restriction on this, and the red meat classification equipment based on deep learning is taken as an example for illustration.
Step S20: input the red meat hyperspectral image information to be classified into a preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified.
It should be noted that the preset metric function is a preset metric function obtained on the basis of the Mahalanobis distance. For the processed hyperspectral image data of the corresponding quality samples, a Mahalanobis-distance multi-kernel learning feature extraction algorithm is used to select the wavebands and images that best reflect each kind of quality and characterize its attributes, and the corresponding spectral features and classification features are then extracted.
It can be understood that, among all distance metrics, the Euclidean distance is generally used for distance measurement. However, the Euclidean distance treats every dimension of the feature values equally, i.e. the differences between the different attributes of a sample are all treated alike, which amounts to regarding the scale of each component as the same quantity. The Euclidean distance therefore ignores the relative importance of the individual feature dimensions, so that the influence of different features on the classification cannot be brought out. In contrast, the Mahalanobis distance does not treat the components of a sample equally, and thus produces a better classification result.
Step S30: extract the reference feature information in the red meat hyperspectral image information to be classified according to the similarity information.
In a concrete implementation, the distance function information of two samples in the new feature space can be obtained through the Mahalanobis distance, and the similarity information between the two samples is obtained from this distance function information: the closer the distance between two samples, the higher their similarity; the farther the distance, the lower their similarity. The obtained Mahalanobis distance information can be compared with a preset distance threshold, and the similarity information between the two samples obtained from the comparison result.
Step S40: perform dimensionality reduction on the reference feature information to obtain target feature information.
In this embodiment, a stacked autoencoder is used to extract features from the data in an unsupervised manner and to effectively extract the nonlinear features in the data, so as to reduce the spectral dimension of the hyperspectral data. The autoencoder network is formed by stacking its basic unit, the autoencoder; an autoencoder is a three-layer feedforward neural network consisting of an input layer, a hidden layer and a reconstruction layer, and may also be a multilayer feedforward neural network; this embodiment imposes no restriction on this, and a three-layer feedforward neural network is taken as an example for illustration.
Step S50: perform spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
In this embodiment, a deep-learning convolutional neural network is used as the classifier; the hyperspectral data cube within a rectangle centred on the pixel to be classified is taken as the input of the convolutional neural network, and spatial-spectral joint classification is performed to obtain the final classification result.
As shown in the overall flowchart of an embodiment of red meat classification based on deep learning in Fig. 3, after the hyperspectral image of a red meat sample is acquired, redundant spatial and spectral information is removed with the kernel principal component analysis method, features are then extracted with the Mahalanobis-distance multi-kernel learning method, the hyperspectral data are reduced in dimension based on a stacked autoencoder network, and spatial-spectral joint classification is finally performed with a convolutional neural network, realizing the classification of red-meat hyperspectral images.
Through the above scheme, this embodiment obtains red meat hyperspectral image information to be classified; inputs the red meat hyperspectral image information to be classified into a preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified; extracts the reference feature information in the red meat hyperspectral image information to be classified according to the similarity information; performs dimensionality reduction on the reference feature information to obtain target feature information; and performs spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information. The feature information of the red meat hyperspectral image is thus first extracted with a suitable preset metric function, and spatial-spectral joint feature information is then extracted based on deep learning, improving the classification accuracy of red-meat hyperspectral images.
In an embodiment, as shown in Fig. 4, a second embodiment of the red meat classification method based on deep learning of the present invention is proposed on the basis of the first embodiment. Step S10 includes:
Step S101: obtain red meat hyperspectral sample image information.
In this embodiment, the red meat hyperspectral sample image information is the unprocessed red meat hyperspectral sample information of the initial samples; the hyperspectral image information of the red meat can be acquired by remote sensing techniques.
Step S102: perform kernel principal component analysis on the red meat hyperspectral sample image information to obtain the analyzed red meat hyperspectral sample image information.
In this embodiment, redundant spatial and spectral information in the hyperspectral image of the red meat sample is removed with the kernel principal component analysis method to obtain an accurate hyperspectral image. Since the original hyperspectral data contain nonlinear structure, an ordinary linear transformation would lose the nonlinear factors in the data; kernel principal component analysis makes it possible to analyse the nonlinear factors in the data and improves the accuracy of the data processing.
Step S103: take the analyzed red meat hyperspectral sample image information as the red meat hyperspectral image information to be classified.
Further, as shown in Fig. 5, a third embodiment of the red meat classification method based on deep learning of the present invention is proposed on the basis of the second embodiment. Step S102 includes:
Step S104: select a preset kernel function, and map the nonlinear features in the red meat hyperspectral sample image information to a high-dimensional feature space according to the selected preset kernel function to obtain correlation matrix information.
In a concrete implementation, the raw data set is {x_1, x_2, ..., x_n}, where the dimension of each sample is N. The original data are first mapped into the high-dimensional feature space with the selected kernel function as Φ(x_1), Φ(x_2), ..., Φ(x_n), and the correlation matrix of the high-dimensional space is:
R = (1/n) Σ_{j=1..n} Φ(x_j) Φ(x_j)^T;
where R denotes the correlation matrix, n denotes the number of samples, j indexes the current sample and T denotes the transpose.
Step S105: obtain a principal component decision condition, and substitute the correlation matrix information into the principal component decision condition to obtain a principal component decision function.
It should be noted that the principal component decision condition is RV = λV, where λ denotes an eigenvalue and the eigenvector V is a linear combination of the mapped data Φ(x_1), Φ(x_2), ..., Φ(x_n); the nonzero eigenvalues and eigenvectors satisfying the characteristic equation RV = λV need to be found. Since this amounts to solving a positive semidefinite matrix under the given basis, the solution can be converted into a diagonal matrix whose diagonal consists of its eigenvalues. The data set of the high-dimensional space after the mapping is {Φ(x_1), Φ(x_2), ..., Φ(x_n)}, so the transformed characteristic equation is:
Φ(x_k) · (RV) = λ Φ(x_k) · V, k = 1, 2, ..., n;
where k indexes the current sample data.
Step S106: solve the principal component decision function to obtain the analyzed red meat hyperspectral sample image information.
Since V lies in the span of the mapped samples, there exist coefficients a_1, a_2, ..., a_n such that the following holds:
V = Σ_{i=1..n} a_i Φ(x_i);
where V gives the analyzed red meat hyperspectral sample image information and i indexes the corresponding red meat sample.
With the scheme provided in this embodiment, the redundant spatial and spectral information is removed with the kernel principal component analysis method; the kernel method is introduced into linear principal component analysis to give kernel principal component analysis, avoiding the loss of the nonlinear factors in the data.
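By way of illustration, the kernel principal component analysis step can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the RBF kernel as the preset kernel function, NumPy as the numerical library, and the function name and parameters are illustrative and not part of the patent disclosure.

```python
import numpy as np

def kernel_pca(X: np.ndarray, n_components: int, gamma: float = 1.0) -> np.ndarray:
    """Kernel PCA sketch: X has shape (n_samples, n_bands).

    Solves the kernel form of RV = lambda*V via the centred kernel matrix and
    returns the projection of the samples onto the leading components.
    """
    n = X.shape[0]
    # RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)  (assumed kernel choice)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Centre the kernel matrix, equivalent to centring Phi(x) in feature space.
    one_n = np.full((n, n), 1.0 / n)
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigen-decomposition gives the coefficients a_i of V = sum_i a_i * Phi(x_i).
    eigvals, eigvecs = np.linalg.eigh(K_c)
    order = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    # Projection of each training sample onto the leading kernel principal components.
    return K_c @ alphas
```

Reshaping a hyperspectral cube of shape (rows, cols, bands) to (rows·cols, bands) before the call, and reshaping the returned scores back, yields an image whose bands are the leading kernel principal components; in practice a subsample of pixels would be used to keep the n × n kernel matrix tractable.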
In an embodiment, as shown in Fig. 6, a fourth embodiment of the red meat classification method based on deep learning of the present invention is proposed on the basis of the first to third embodiments; in this embodiment the description is based on the first embodiment. Step S20 includes:
in order to obtain the similarity information, extracting the unitary matrix and the diagonal matrix of the red meat hyperspectral image information to be classified; obtaining the transpose of the unitary matrix according to the unitary matrix; inputting the unitary matrix, the diagonal matrix and the transpose of the unitary matrix into the preset metric function to obtain the distance information between the feature information items in the red meat hyperspectral image information to be classified; and taking the distance information as the similarity information.
In a concrete implementation, according to the definition of the Mahalanobis distance, given two samples x_1 = (x_11, x_12, ..., x_1n) and x_2 = (x_21, x_22, ..., x_2n) drawn from the same distribution, mapping the two samples into a new feature space as Lx_1 and Lx_2 gives the distance function of the two samples in the new feature space:
D(x_1, x_2) = sqrt((x_1 − x_2)^T M (x_1 − x_2));
Unlike the Euclidean distance, the Mahalanobis distance takes the correlation between the individual feature dimensions into account and is independent of the measurement scale. In the above formula, M = L^T L is the Mahalanobis matrix, and the Mahalanobis distance can be viewed as a hyperellipsoidal metric. In order to satisfy the four properties required of a distance metric function, M must be a positive semidefinite symmetric matrix, that is, M ≥ 0; this is equivalent to finding a matrix L as a mapping matrix that maps the original data x into a new space, L: R^n → R^m. To distinguish it from the Euclidean distance in notation, the Mahalanobis distance is written D_M(x_1, x_2) and expressed as above. In metric learning research, the squared Mahalanobis distance d_M(x_1, x_2) is also frequently used, that is:
d_M(x_1, x_2) = (D_M(x_1, x_2))^2 = (x_1 − x_2)^T M (x_1 − x_2).
From the above it is known that the Mahalanobis distance has the properties of decoupling and scale independence. When a singular value decomposition is applied to the Mahalanobis matrix M, M can be decomposed as M = HΣH^T, where H is a unitary matrix; because the Mahalanobis matrix M is symmetric, the left unitary matrix of the decomposition is the transpose of the right unitary matrix, and Σ is a diagonal matrix whose diagonal elements are the singular values of the matrix. The squared Mahalanobis distance can therefore also be written in the following form:
d_M(x_1, x_2) = (x_1 − x_2)^T HΣH^T (x_1 − x_2) = (H^T x_1 − H^T x_2)^T Σ (H^T x_1 − H^T x_2);
From this, two characteristics of the Mahalanobis distance can be seen. First, the Mahalanobis distance can find an optimal orthogonal matrix H that maps the original features into a new space, which can be used to remove the coupling between sample features. Second, new features are obtained by mapping the original features into the new space, and the diagonal weight matrix Σ is allocated according to the relation between the new features and the classes, which removes the problems caused by differences in scale. These two properties enable the Mahalanobis distance to measure the similarity of the feature space effectively in a wide variety of applications.
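The two ways of writing the squared Mahalanobis distance above can be checked with a short sketch. This is an illustration only: estimating M from the inverse covariance of a sample set is an assumption made here for the example, whereas the patent obtains M through metric learning.

```python
import numpy as np

def squared_mahalanobis(x1: np.ndarray, x2: np.ndarray, M: np.ndarray) -> float:
    """d_M(x1, x2) = (x1 - x2)^T M (x1 - x2)."""
    diff = x1 - x2
    return float(diff @ M @ diff)

def squared_mahalanobis_svd(x1: np.ndarray, x2: np.ndarray, M: np.ndarray) -> float:
    """Same distance through M = H Sigma H^T: map x -> H^T x, weight by Sigma."""
    H, sigma, _ = np.linalg.svd(M)            # M is symmetric PSD, so M = H diag(sigma) H^T
    u = H.T @ x1 - H.T @ x2                   # decoupled features in the new space
    return float(np.sum(sigma * u**2))        # diagonal weighting removes scale effects

# Example: estimate M from data (illustrative assumption; the patent learns M by metric learning).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([1.0, 5.0, 0.5, 2.0, 1.0])
M = np.linalg.inv(np.cov(X, rowvar=False))    # classical Mahalanobis matrix
a, b = X[0], X[1]
assert np.isclose(squared_mahalanobis(a, b, M), squared_mahalanobis_svd(a, b, M))
```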
Correspondingly, before step S20, the method further comprises:
judging whether the preset metric function satisfies a preset condition and, when the preset metric function satisfies the preset condition, executing step S20.
At present, many applications in computer vision and pattern recognition involve the problem of measuring the similarity between samples, and a metric function established on the input feature vectors of the samples is needed to describe the similarity between samples and to carry out the similarity measurement. Methods that use distance metric learning for image classification are currently becoming more and more common. Distance metric learning learns a metric function D and uses it to measure the distance between samples, and the function must satisfy certain properties. Suppose a data set X ∈ R^n is given, with sample feature vectors x_1 = (x_11, x_12, ..., x_1n), x_2 = (x_21, x_22, ..., x_2n) and x_3 = (x_31, x_32, ..., x_3n); then the distance between two of the samples computed by the metric function D(x_1, x_2) must satisfy:
1. Non-negativity: D(x_1, x_2) ≥ 0;
2. Reflexivity: D(x_1, x_2) = 0 if and only if x_1 = x_2;
3. Symmetry: D(x_1, x_2) = D(x_2, x_1);
4. Triangle inequality: D(x_1, x_2) ≤ D(x_1, x_3) + D(x_3, x_2).
It follows that the preset condition is that the function satisfies non-negativity, reflexivity, symmetry and the triangle inequality.
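As a small illustrative sketch (not part of the patented procedure), the preset condition can be verified numerically for a candidate metric given as a Python callable; note that only the D(x, x) = 0 direction of reflexivity is checked here.

```python
import numpy as np

def satisfies_metric_properties(D, samples, tol: float = 1e-9) -> bool:
    """Check non-negativity, reflexivity (D(x, x) = 0 direction only), symmetry
    and the triangle inequality for a candidate metric D on all triples of samples."""
    for x1 in samples:
        if abs(D(x1, x1)) > tol:                       # reflexivity: D(x, x) = 0
            return False
        for x2 in samples:
            d12 = D(x1, x2)
            if d12 < -tol:                             # non-negativity
                return False
            if abs(d12 - D(x2, x1)) > tol:             # symmetry
                return False
            for x3 in samples:
                if d12 > D(x1, x3) + D(x3, x2) + tol:  # triangle inequality
                    return False
    return True

# Example with the Euclidean distance, which satisfies all four properties.
pts = [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([3.0, 1.0])]
print(satisfies_metric_properties(lambda a, b: float(np.linalg.norm(a - b)), pts))
```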
In an embodiment, before step S40, the method further comprises the following.
Before performing dimensionality reduction, a three-layer feedforward neural network must first be established. In this embodiment the reference feature information is used as the input layer; the input-layer-to-hidden-layer weight, the input-layer-to-hidden-layer bias coefficient, the hidden-layer-to-reconstruction-layer weight and the hidden-layer-to-reconstruction-layer bias coefficient are set; the hidden layer is constructed according to the input-layer-to-hidden-layer weight and bias coefficient; the reconstruction layer is constructed according to the hidden-layer-to-reconstruction-layer weight and bias coefficient; and the three-layer feedforward neural network is constructed from the input layer, the hidden layer and the reconstruction layer.
It should be noted that in this embodiment a three-layer feedforward neural network is constructed from an autoencoder and consists of an input layer, a hidden layer and a reconstruction layer. To realize the construction of the three-layer feedforward neural network, the encoding and decoding process of the autoencoder is as follows:
y = δ(W_y x + b_y),
z = σ(W_z y + b_z);
where y denotes the hidden-layer representation of the original data, z denotes the reconstruction-layer representation of the original data, and W_y, b_y and W_z, b_z are respectively the input-layer-to-hidden-layer and hidden-layer-to-reconstruction-layer weights and bias coefficients; it is common to take W_y = W and W_z = W^T (tied weights). δ(·) is a nonlinear mapping function, usually taken to be the sigmoid function, that is:
δ(t) = 1 / (1 + e^(−t)).
In a concrete implementation, W_y, W_z, b_y and b_z are adjusted so that the input x and the reconstruction z are as similar as possible; the cross-entropy function is generally used to measure the distance between x and z. When training in mini-batches with m samples per batch, the first loss function information is:
J = −(1/m) Σ_{i=1..m} Σ_{k=1..d} [ x_k^(i) ln z_k^(i) + (1 − x_k^(i)) ln(1 − z_k^(i)) ];
where m is the number of samples in each batch of training samples, d is the dimension of the input layer and the reconstruction layer, i indexes the current sample, k indexes the current dimension, and i and k range over 1..m and 1..d respectively. The network parameters are trained by stochastic gradient descent, the learning rate controlling the parameter updates.
In this embodiment, in order to improve the efficiency of data processing, dimensionality reduction needs to be performed on the extracted feature information: the number information of the reference feature information is extracted; the preset three-layer feedforward neural network is trained according to the reference feature information and the number information to obtain the first loss function information, wherein the preset three-layer feedforward neural network comprises an input layer, a hidden layer and a reconstruction layer; and preset learning rate information is obtained and the first loss function information is updated according to the preset learning rate information to obtain the updated input-layer-to-hidden-layer weight, the updated input-layer-to-hidden-layer bias coefficient, the updated hidden-layer-to-reconstruction-layer weight and the updated hidden-layer-to-reconstruction-layer bias coefficient.
Here η denotes the learning rate used when the network parameters are updated by stochastic gradient descent. In this embodiment, the autoencoder network is formed by stacking several autoencoders, the output of the hidden layer of one autoencoder serving as the input of the next autoencoder. Its training is divided into three stages: pre-training, unfolding and fine-tuning. In pre-training, the autoencoders composing the autoencoder network are trained layer by layer, the hidden-layer output of a lower autoencoder being used as the input of the autoencoder above it. In unfolding, after pre-training is completed, the output units of a lower autoencoder and the autoencoder above it are merged into one layer, connecting the autoencoders into a deep autoencoding network. In fine-tuning, the unfolded autoencoding network further adjusts the initial weights obtained by pre-training using the back-propagation algorithm, further reducing the error.
Correspondingly, step S40 comprises:
performing dimensionality reduction on the reference feature information according to the updated input-layer-to-hidden-layer weight, the updated input-layer-to-hidden-layer bias coefficient, the updated hidden-layer-to-reconstruction-layer weight and the updated hidden-layer-to-reconstruction-layer bias coefficient to obtain the target feature information.
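The encoder, decoder and cross-entropy training described above can be sketched as a single tied-weight autoencoder in NumPy. This is a minimal sketch under assumptions: the tied weights W_z = W_y^T, the initialisation, the learning rate and the omission of the stacking and fine-tuning stages are all simplifications for illustration, and inputs are assumed scaled to [0, 1].

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

class Autoencoder:
    """Three-layer feedforward autoencoder: input layer -> hidden layer -> reconstruction layer."""

    def __init__(self, d_in: int, d_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(d_hidden, d_in))  # tied weights: W_y = W, W_z = W^T
        self.b_y = np.zeros(d_hidden)
        self.b_z = np.zeros(d_in)

    def encode(self, X):
        # y = delta(W_y x + b_y), applied row-wise to a batch X of shape (m, d_in)
        return sigmoid(X @ self.W.T + self.b_y)

    def reconstruct(self, Y):
        # z = sigma(W_z y + b_z) with W_z = W^T, applied row-wise to Y of shape (m, d_hidden)
        return sigmoid(Y @ self.W + self.b_z)

    def train_batch(self, X, eta: float = 0.1) -> float:
        """One stochastic-gradient-descent step on a mini-batch X of shape (m, d_in)."""
        m = X.shape[0]
        Y = self.encode(X)
        Z = self.reconstruct(Y)
        dZ = (Z - X) / m                          # gradient at the reconstruction pre-activation
        dY = (dZ @ self.W.T) * Y * (1.0 - Y)      # back-propagated to the hidden pre-activation
        grad_W = dY.T @ X + Y.T @ dZ              # tied-weight gradient (encoder + decoder paths)
        self.W -= eta * grad_W
        self.b_y -= eta * dY.sum(axis=0)
        self.b_z -= eta * dZ.sum(axis=0)
        # First loss function: cross-entropy between input x and reconstruction z.
        eps = 1e-12
        return float(-np.mean(np.sum(X * np.log(Z + eps) + (1 - X) * np.log(1 - Z + eps), axis=1)))
```

Stacking then amounts to feeding the hidden-layer output of one trained autoencoder into the next as its input, with fine-tuning of the unfolded network by back-propagation as described above.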
In an embodiment, step S50 comprises:
Step S501: extract the target spectral dimension in the target feature information.
In this embodiment, a convolutional neural network is used as the classifier; the hyperspectral data cube within a rectangle centred on the pixel to be classified is taken as the input of the convolutional neural network, and spatial-spectral joint classification is performed to obtain the final classification result.
Step S502: take the target feature information within a preset rectangular region of the target feature information as the spatial-spectral joint feature information to be classified.
To perform spatial-spectral joint classification on the dimension-reduced hyperspectral data, the preset rectangular region is the data cube within a rectangle centred on the pixel to be classified, taken as the input of the convolutional neural network. The rectangle size is taken to be 7 × 7, so the size of the network input is 7 × 7 × N, where N is the spectral dimension of the hyperspectral image after dimensionality reduction; other rectangle sizes may also be used, and this embodiment imposes no restriction on this.
Step S503: input the target spectral dimension and the feature information to be classified into the preset deep-learning-based convolutional neural network model for classification to obtain the target category information.
In this embodiment, the convolutional neural network consists of three convolutional layers, one fully connected layer and one classification layer; the convolution kernel size of each convolutional layer is 3 × 3 and the number of convolution kernels is N, the input dimension of the fully connected layer is N and its output dimension is 30, the input number of the classification layer is N, and its output is the target category information.
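A sketch of the described classifier in PyTorch follows. The framework, the padding, the placement of the pooling layers and the number of output classes are assumptions made for illustration; the patent itself specifies only three 3 × 3 convolutional layers with N kernels, a fully connected layer with output dimension 30, a classification layer and 7 × 7 × N input patches.

```python
import torch
import torch.nn as nn

class SpatialSpectralCNN(nn.Module):
    """Three 3x3 conv layers, one fully connected layer and one classification layer,
    applied to 7 x 7 x N patches centred on the pixel to be classified."""

    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, n_bands, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.MaxPool2d(2),                       # 7x7 -> 3x3 (assumed pooling placement)
            nn.Conv2d(n_bands, n_bands, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.Conv2d(n_bands, n_bands, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.AdaptiveMaxPool2d(1),               # local main features -> 1x1
        )
        self.fc = nn.Linear(n_bands, 30)           # fully connected layer, output dimension 30
        self.classify = nn.Linear(30, n_classes)   # classification layer

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, N, 7, 7) spatial-spectral cubes after dimensionality reduction
        h = self.features(patches)
        h = torch.flatten(h, start_dim=1)          # stretch into a vector
        h = torch.sigmoid(self.fc(h))
        return self.classify(h)                    # class scores (target category)

# Example: N = 10 reduced bands, 4 hypothetical red-meat categories, a batch of 7x7 patches.
model = SpatialSpectralCNN(n_bands=10, n_classes=4)
scores = model(torch.randn(8, 10, 7, 7))
print(scores.shape)  # torch.Size([8, 4])
```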
Further, step S503 includes the following.
In order to perform classification through the preset convolutional neural network, the feature information to be classified is input into the convolutional layer in the preset deep-learning-based convolutional neural network, and a feature map is generated from the feature information to be classified through the convolutional layer and the preset weight matrix; a preset bias term is added to the feature map to obtain the feature map with the bias term added; each pixel of the feature map with the bias term added is extracted, a preset nonlinear activation function is obtained, and nonlinear processing is performed on the pixels through the preset nonlinear activation function to obtain a processed feature map; the local main features of the processed feature map are extracted through the pooling layer in the preset convolutional neural network; the local main features are stretched into a preset vector which is fed into the fully connected layer in the preset convolutional neural network for linear transformation to obtain transformed vector information; and the transformed vector information and the target spectral dimension are fed into the classification layer in the preset convolutional neural network for classification to obtain the target category information.
In a concrete implementation, the convolutional neural network consists of convolutional layers, pooling layers, a fully connected layer and a classification layer. In general, a convolutional layer performs a convolution operation on the input of the network or the output of the previous hidden layer to generate feature maps; each feature map generated by the convolution operation is added to a bias term b^l, and a nonlinear activation function then acts on each pixel of the feature map. Next, the pooling layer selects the local main features from each feature map in a non-overlapping manner, i.e. it performs a dimensionality reduction operation on the feature map. The whole process can be expressed as:
H^l = pool(g(H^(l−1) * W^l + b^l));
where * denotes the convolution operation, pool denotes the max-pooling operation, W^l denotes the weight matrix and b^l denotes the bias term.
Several alternately stacked convolutional layers and pooling layers extract features layer by layer from the input. The extracted features are then stretched into a vector and fed into the fully connected layer, where a linear transformation is first performed by multiplication with the weight matrix W^l and addition of the bias b^l, after which a nonlinear function acts on each transformed component:
H^l = g(W^l H^(l−1) + b^l);
The activation function g(·) is taken to be the sigmoid function, and the output of the fully connected layer is sent to the classification layer for classification.
Correspondingly, before inputting the feature information to be classified into the convolutional layer in the preset deep-learning-based convolutional neural network and generating the feature map from the feature information to be classified through the convolutional layer and the preset weight matrix, the method further comprises the following.
In order to obtain the most suitable parameters and allow the convolutional neural network to best fit the previously dimension-reduced data, a reference weight matrix and a reference bias term are obtained; the second loss function information of the preset convolutional neural network is obtained according to the feature information to be classified, the reference weight matrix and the reference bias term; the second loss function information is updated to obtain the updated reference weight matrix and the updated reference bias term; and the updated reference weight matrix is taken as the preset weight matrix and the updated reference bias term as the preset bias term. The parameters of the convolutional neural network are the weight matrices {W^1, ..., W^L} and the biases {b^1, ..., b^L}, and the network is trained by the back-propagation algorithm. The second loss function information measures the difference between the predicted output and the actual classification, where x denotes an input sample, y denotes the actual classification, a^L denotes the predicted output and L denotes the maximum number of layers of the neural network. The optimal parameter values W and b, i.e. the preset weight matrix and the preset bias term, are obtained by the stochastic gradient descent algorithm.
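Continuing the sketch above, the training of the preset weight matrix and preset bias term by back-propagation and stochastic gradient descent might look as follows; the cross-entropy form of the second loss function, the learning rate and the epoch count are assumptions, since the patent only states that the optimal W and b are obtained by stochastic gradient descent.

```python
import torch
import torch.nn as nn

def train_cnn(model: nn.Module, patches: torch.Tensor, labels: torch.Tensor,
              epochs: int = 20, lr: float = 0.01) -> nn.Module:
    """Minimise the second loss function between the predicted output a^L and the actual
    class y; the trained weights and biases become the preset weight matrices
    {W^1..W^L} and preset bias terms {b^1..b^L}."""
    criterion = nn.CrossEntropyLoss()              # assumed form of the second loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(patches), labels)   # a^L versus actual classification y
        loss.backward()                            # back-propagation algorithm
        optimizer.step()                           # stochastic gradient descent update
    return model

# Usage with the sketch above (hypothetical data):
# model = train_cnn(SpatialSpectralCNN(10, 4), torch.randn(64, 10, 7, 7),
#                   torch.randint(0, 4, (64,)))
```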
With the scheme provided in this embodiment, when spatial-spectral joint features are extracted based on deep learning, suitable parameters are selected through the analysis of multiple comparative experiments, with emphasis on the spatial neighbourhood size and the scale, structure and complexity of the network, improving the classification accuracy of red-meat hyperspectral images.
The present invention further provides a red meat classification apparatus based on deep learning.
Referring to Fig. 7, Fig. 7 is a functional block diagram of a first embodiment of the red meat classification apparatus based on deep learning according to the present invention.
In this first embodiment of the red meat classification apparatus based on deep learning according to the present invention, the apparatus comprises:
an acquisition module 10 for obtaining red meat hyperspectral image information to be classified;
a metric module 20 for inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain the similarity information between the feature information items in the red meat hyperspectral image information to be classified.
It should be noted that the preset metric function is a preset metric function obtained on the basis of the Mahalanobis distance. For the processed hyperspectral image data of the corresponding quality samples, a Mahalanobis-distance multi-kernel learning feature extraction algorithm is used to select the wavebands and images that best reflect each kind of quality and characterize its attributes, and the corresponding spectral features and classification features are then extracted. As explained for the method, the Euclidean distance treats every feature dimension equally and therefore ignores the relative importance of the individual feature dimensions, whereas the Mahalanobis distance does not treat the components of a sample equally and thus produces a better classification result.
The apparatus further comprises an extraction module 30 for extracting the reference feature information in the red meat hyperspectral image information to be classified according to the similarity information.
In a concrete implementation, the distance function information of two samples in the new feature space is obtained through the Mahalanobis distance, and the similarity information between the two samples is obtained from this distance function information: the closer the distance, the higher the similarity; the farther the distance, the lower the similarity. The obtained Mahalanobis distance information can be compared with a preset distance threshold and the similarity information between the two samples obtained from the comparison result.
The apparatus further comprises a dimensionality reduction module 40 for performing dimensionality reduction on the reference feature information to obtain target feature information.
In this embodiment, a stacked autoencoder is used to extract features from the data in an unsupervised manner and to effectively extract the nonlinear features in the data, so as to reduce the spectral dimension of the hyperspectral data; the autoencoder is a three-layer feedforward neural network consisting of an input layer, a hidden layer and a reconstruction layer, and may also be a multilayer feedforward neural network, this embodiment imposing no restriction on this.
The apparatus further comprises a classification module 50 for performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
In this embodiment, a deep-learning convolutional neural network is used as the classifier; the hyperspectral data cube within a rectangle centred on the pixel to be classified is taken as the input of the convolutional neural network, and spatial-spectral joint classification is performed to obtain the final classification result. As shown in the flowchart of Fig. 3, after the hyperspectral image of a red meat sample is acquired, redundant spatial and spectral information is removed with the kernel principal component analysis method, features are extracted with the Mahalanobis-distance multi-kernel learning method, the hyperspectral data are reduced in dimension based on a stacked autoencoder network, and spatial-spectral joint classification is finally performed with a convolutional neural network, realizing the classification of red-meat hyperspectral images.
Through the above scheme, this embodiment obtains red meat hyperspectral image information to be classified, obtains the similarity information between the feature information items through a preset metric function, extracts the reference feature information according to the similarity information, performs dimensionality reduction on the reference feature information to obtain target feature information, and performs spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information, so that the feature information of the red meat hyperspectral image is first extracted with a suitable preset metric function and spatial-spectral joint feature information is then extracted based on deep learning, improving the classification accuracy of red-meat hyperspectral images.
In addition, an embodiment of the present invention also proposes a storage medium on which a red meat classification program based on deep learning is stored; when executed by a processor, the red meat classification program based on deep learning performs the steps of the red meat classification method based on deep learning described above.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a computer-readable storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing an intelligent terminal device (which may be a mobile phone, a computer, a terminal device, an air conditioner or a network terminal device, etc.) to execute the method of each embodiment of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A red meat classification method based on deep learning, characterized in that the red meat classification method based on deep learning comprises:
obtaining red meat hyperspectral image information to be classified;
inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information between the feature information items in the red meat hyperspectral image information to be classified;
extracting reference feature information in the red meat hyperspectral image information to be classified according to the similarity information;
performing dimensionality reduction on the reference feature information to obtain target feature information;
performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
2. The red meat classification method based on deep learning according to claim 1, characterized in that the obtaining red meat hyperspectral image information to be classified comprises:
obtaining red meat hyperspectral sample image information;
performing kernel principal component analysis on the red meat hyperspectral sample image information to obtain analyzed red meat hyperspectral sample image information;
taking the analyzed red meat hyperspectral sample image information as the red meat hyperspectral image information to be classified.
3. The red meat classification method based on deep learning according to claim 2, characterized in that the performing kernel principal component analysis on the red meat hyperspectral sample image information to obtain analyzed red meat hyperspectral sample image information comprises:
selecting a preset kernel function, and mapping nonlinear features in the red meat hyperspectral sample image information to a high-dimensional feature space according to the selected preset kernel function to obtain correlation matrix information;
obtaining a principal component determination condition, and substituting the correlation matrix information into the principal component determination condition to obtain a principal component determination function;
solving the principal component determination function to obtain the analyzed red meat hyperspectral sample image information.
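As an aside on claims 2 and 3: the claims leave the preset kernel function and the principal component determination condition open. The sketch below shows one conventional realisation, assuming an RBF kernel, kernel-matrix centring and an eigendecomposition; it is for orientation only and is not the claimed implementation.

```python
import numpy as np

def kernel_pca(X, n_components=5, gamma=1.0):
    """Toy kernel PCA: X is (n_samples, n_bands); returns projected samples."""
    # RBF (Gaussian) kernel as one possible "preset kernel function".
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Centre the kernel matrix in the implicit high-dimensional feature space.
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # The eigendecomposition plays the role of solving the principal component
    # determination function; keep the leading components.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return Kc @ alphas                     # projections of the training samples

X = np.random.rand(100, 60)                # 100 sample spectra, 60 bands
Z = kernel_pca(X, n_components=5)
print(Z.shape)                             # (100, 5)
```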
4. The red meat classification method based on deep learning according to any one of claims 1 to 3, characterized in that the inputting the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information among the feature information in the red meat hyperspectral image information to be classified comprises:
extracting a unitary matrix and a diagonal matrix of the red meat hyperspectral image information to be classified;
obtaining the transpose of the unitary matrix according to the unitary matrix;
inputting the unitary matrix, the diagonal matrix and the transpose of the unitary matrix into the preset metric function to obtain distance information between the feature information in the red meat hyperspectral image information to be classified;
taking the distance information as the similarity information;
correspondingly, before the inputting the red meat hyperspectral image information to be classified into the preset metric function to obtain the similarity information among the feature information in the red meat hyperspectral image information to be classified, the method further comprises:
judging whether the preset metric function satisfies a preset condition, and when the preset metric function satisfies the preset condition, executing the step of inputting the red meat hyperspectral image information to be classified into the preset metric function to obtain the similarity information among the feature information in the red meat hyperspectral image information to be classified.
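The unitary matrix, diagonal matrix and transpose named in claim 4 are the ingredients of an eigendecomposition. One plausible (assumed, not claimed) realisation of such a preset metric is a Mahalanobis-type distance built from the eigendecomposition of a covariance matrix, as sketched below.

```python
import numpy as np

def mahalanobis_from_decomposition(X):
    """X: (n_samples, n_features). Returns an (n, n) matrix of pairwise
    distances computed from the decomposition Sigma = U diag(lam) U^T."""
    Xc = X - X.mean(axis=0)
    sigma = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])

    # "Unitary matrix" U and "diagonal matrix" diag(lam) of the decomposition.
    lam, U = np.linalg.eigh(sigma)

    # Whitening transform built from U, diag(lam)^(-1/2) and U^T; Euclidean
    # distance in the whitened space equals the Mahalanobis distance.
    W = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T
    Z = Xc @ W
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.sqrt(np.maximum(d2, 0.0))

X = np.random.rand(50, 10)                  # 50 feature vectors, 10 dimensions
D = mahalanobis_from_decomposition(X)
print(D.shape, float(D[0, 0]))              # (50, 50) 0.0
```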
5. The red meat classification method based on deep learning according to any one of claims 1 to 3, characterized in that before the performing dimensionality reduction on the reference feature information to obtain target feature information, the method further comprises:
extracting quantity information of the reference feature information;
training a preset three-layer neural network according to the reference feature information and the quantity information to obtain first loss function information, wherein the preset three-layer neural network comprises an input layer, a hidden layer and a reconstruction layer;
obtaining preset learning rate information, and updating the first loss function information according to the preset learning rate information to obtain an updated input-layer-to-hidden-layer weight, an updated input-layer-to-hidden-layer bias, an updated hidden-layer-to-reconstruction-layer weight and an updated hidden-layer-to-reconstruction-layer bias;
correspondingly, the performing dimensionality reduction on the reference feature information to obtain target feature information comprises:
performing dimensionality reduction on the reference feature information according to the updated input-layer-to-hidden-layer weight, the updated input-layer-to-hidden-layer bias, the updated hidden-layer-to-reconstruction-layer weight and the updated hidden-layer-to-reconstruction-layer bias, to obtain the target feature information.
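Claim 5 trains an input-hidden-reconstruction network and then reuses the updated weights and biases for dimensionality reduction. The sketch below is a minimal NumPy version of that pattern under assumed choices (sigmoid activation, squared-error loss, fixed learning rate); the claim itself does not fix these details.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden=3, lr=0.1, epochs=500, seed=0):
    """Minimal input->hidden->reconstruction network trained by gradient
    descent. Returns the hidden-layer encoding of X (reduced features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)  # input -> hidden
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)         # hidden -> reconstruction

    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # hidden activations
        R = H @ W2 + b2                     # reconstruction
        err = R - X                         # derivative of the squared-error loss w.r.t. R

        # Backpropagate and update weights/biases with the preset learning rate.
        gW2 = H.T @ err / n
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1 - H)
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    return sigmoid(X @ W1 + b1)             # dimensionality-reduced features

X = np.random.rand(200, 10)
Z = train_autoencoder(X, n_hidden=3)
print(Z.shape)                              # (200, 3)
```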
6. The red meat classification method based on deep learning according to any one of claims 1 to 3, characterized in that the performing spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information comprises:
extracting a target spectral dimension from the target feature information;
taking the target feature information within a preset rectangular area of the target feature information as spatial-spectral joint feature information to be classified;
inputting the target spectral dimension and the feature information to be classified into a preset convolutional neural network model based on deep learning for classification to obtain the target category information.
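A common way to obtain the spatial-spectral joint feature information of claim 6, used here purely as an assumed illustration, is to pair each pixel's spectral vector with a fixed-size patch cut around it after padding the image borders:

```python
import numpy as np

def extract_spatial_spectral(cube, patch=5):
    """cube: (H, W, C) reduced feature cube. Returns, for every pixel, the
    spectral vector and the patch x patch x C neighbourhood around it."""
    H, W, C = cube.shape
    r = patch // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")

    spectra = cube.reshape(-1, C)                        # per-pixel spectral dimension
    patches = np.empty((H * W, patch, patch, C), dtype=cube.dtype)
    idx = 0
    for i in range(H):
        for j in range(W):
            patches[idx] = padded[i:i + patch, j:j + patch, :]
            idx += 1
    return spectra, patches

spectra, patches = extract_spatial_spectral(np.random.rand(16, 16, 3), patch=5)
print(spectra.shape, patches.shape)   # (256, 3) (256, 5, 5, 3)
```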
7. The red meat classification method based on deep learning according to claim 6, characterized in that the inputting the target spectral dimension and the feature information to be classified into a preset convolutional neural network model based on deep learning for classification to obtain the target category information comprises:
inputting the feature information to be classified into a convolutional layer of the preset convolutional neural network based on deep learning, and generating a feature map from the feature information to be classified through the convolutional layer and a preset weight matrix;
adding a preset bias term to the feature map to obtain a feature map with the bias term added;
extracting each piece of pixel information in the feature map with the bias term added, obtaining a preset nonlinear activation function, and performing nonlinear processing on the pixel information through the preset nonlinear activation function to obtain a processed feature map;
extracting local main features of the processed feature map through a pooling layer of the preset convolutional neural network;
stretching the local main features into a preset vector, and feeding the preset vector into a fully connected layer of the preset convolutional neural network for linear transformation to obtain transformed vector information;
feeding the transformed vector information and the target spectral dimension into a classification layer of the preset convolutional neural network for classification to obtain the target category information;
correspondingly, before the inputting the feature information to be classified into the convolutional layer of the preset convolutional neural network based on deep learning and generating the feature map from the feature information to be classified through the convolutional layer and the preset weight matrix, the method further comprises:
obtaining a reference weight matrix and a reference bias term;
obtaining second loss function information of the preset convolutional neural network according to the feature information to be classified, the reference weight matrix and the reference bias term;
updating the second loss function information to obtain an updated reference weight matrix and an updated reference bias term;
taking the updated reference weight matrix as the preset weight matrix, and taking the updated reference bias term as the preset bias term.
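Claim 7's forward pass (convolution with a weight matrix and bias term, nonlinear activation, pooling, flattening, a fully connected layer, and a classification layer that also receives the target spectral dimension) maps onto a very small convolutional network. The PyTorch sketch below mirrors that shape with assumed sizes (16 filters, ReLU, 2x2 max pooling, 4 output categories); the training loop that updates the reference weight matrix and bias term, described in the second half of the claim, is omitted and would follow a standard optimiser loop.

```python
import torch
import torch.nn as nn

class SpatialSpectralCNN(nn.Module):
    """Illustrative network following the shape of claim 7: convolution with a
    weight matrix and bias, nonlinear activation, pooling, flattening, a fully
    connected layer, then a classification layer that also sees the spectral vector."""
    def __init__(self, in_channels=3, n_bands=30, n_classes=4, patch=5):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)  # weight matrix + bias term
        self.act = nn.ReLU()                                              # preset nonlinear activation
        self.pool = nn.MaxPool2d(2)                                       # keeps local main features
        self.flatten = nn.Flatten()
        flat = 16 * (patch // 2) * (patch // 2)
        self.fc = nn.Linear(flat, 64)                                     # fully connected linear transform
        self.classifier = nn.Linear(64 + n_bands, n_classes)              # classification layer sees both inputs

    def forward(self, patches, spectra):
        x = self.pool(self.act(self.conv(patches)))
        x = self.fc(self.flatten(x))
        x = torch.cat([x, spectra], dim=1)      # join spatial features with the spectral dimension
        return self.classifier(x)               # class scores ("target category information")

model = SpatialSpectralCNN()
patches = torch.rand(8, 3, 5, 5)      # 8 pixels, 3 reduced channels, 5x5 neighbourhood
spectra = torch.rand(8, 30)           # 8 spectral vectors with 30 bands
print(model(patches, spectra).shape)  # torch.Size([8, 4])
```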
8. A red meat classification apparatus based on deep learning, characterized in that the red meat classification apparatus based on deep learning comprises:
an obtaining module, configured to obtain red meat hyperspectral image information to be classified;
a metric module, configured to input the red meat hyperspectral image information to be classified into a preset metric function to obtain similarity information among the feature information in the red meat hyperspectral image information to be classified;
an extraction module, configured to extract reference feature information from the red meat hyperspectral image information to be classified according to the similarity information;
a dimensionality reduction module, configured to perform dimensionality reduction on the reference feature information to obtain target feature information;
a classification module, configured to perform spatial-spectral joint classification on the target feature information based on deep learning to obtain target category information.
9. A red meat classification device based on deep learning, characterized in that the red meat classification device based on deep learning comprises: a memory, a processor, and a deep-learning-based red meat classification program that is stored on the memory and executable on the processor, wherein the deep-learning-based red meat classification program is configured to implement the steps of the red meat classification method based on deep learning according to any one of claims 1 to 7.
10. A storage medium, characterized in that a deep-learning-based red meat classification program is stored on the storage medium, and when the deep-learning-based red meat classification program is executed by a processor, the steps of the red meat classification method based on deep learning according to any one of claims 1 to 7 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910454174.8A CN110163293A (en) | 2019-05-28 | 2019-05-28 | Red meat classification method, device, equipment and storage medium based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110163293A true CN110163293A (en) | 2019-08-23 |
Family
ID=67629436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910454174.8A Pending CN110163293A (en) | 2019-05-28 | 2019-05-28 | Red meat classification method, device, equipment and storage medium based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163293A (en) |
2019-05-28: Application CN201910454174.8A filed in China (CN); published as CN110163293A (en); legal status: Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103439271A (en) * | 2013-08-29 | 2013-12-11 | 华南理工大学 | Visual detection method for pork maturity condition |
CN108009559A (en) * | 2016-11-02 | 2018-05-08 | 哈尔滨工业大学 | Hyperspectral data classification method based on spatial-spectral joint information |
CN106815601A (en) * | 2017-01-10 | 2017-06-09 | 西安电子科技大学 | Hyperspectral image classification method based on recurrent neural network |
CN106845418A (en) * | 2017-01-24 | 2017-06-13 | 北京航空航天大学 | Hyperspectral image classification method based on deep learning |
CN108388917A (en) * | 2018-02-26 | 2018-08-10 | 东北大学 | Hyperspectral image classification method based on an improved deep learning model |
CN109295159A (en) * | 2018-10-26 | 2019-02-01 | 北京工商大学 | Intelligent detection method for sausage quality |
Non-Patent Citations (8)
Title |
---|
Mahmoud Al-Sarayre, et al.: "Detection of Red-Meat Adulteration by Deep Spectral-Spatial Features in Hyperspectral Images", Journal of Imaging *
亓呈明, 胡立栓: "Machine Learning, Intelligent Computing and Applied Research on Hyperspectral Remote Sensing Image Classification", 31 May 2018, China Fortune Press *
徐宝才, et al.: "Development and Progress of China's Pig Industry over 40 Years of Reform and Opening-up: Pork Product Processing and Circulation", 31 October 2018, China Agricultural University Press *
王学民: "Applied Multivariate Analysis, 4th Edition", 30 September 2014, Shanghai University of Finance and Economics Press *
董小栋, et al.: "Research on cured meat classification and retrieval algorithms fusing hyperspectral and deep image features", Science and Technology of Food Industry *
解洪胜: "Several Issues in Image Retrieval Based on Support Vector Machines", 31 October 2013, Shandong People's Publishing House *
高巍, et al.: "Hyperspectral image classification based on Mahalanobis distance multiple kernel learning", Chinese Journal of Scientific Instrument *
黄鸿, et al.: "Spatial-spectral joint feature extraction of hyperspectral images based on deep learning", Laser & Optoelectronics Progress *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111242228A (en) * | 2020-01-16 | 2020-06-05 | 武汉轻工大学 | Hyperspectral image classification method, device, equipment and storage medium |
CN111259805A (en) * | 2020-01-16 | 2020-06-09 | 武汉轻工大学 | Meat detection method, device, equipment and storage medium |
CN111242228B (en) * | 2020-01-16 | 2024-02-27 | 武汉轻工大学 | Hyperspectral image classification method, hyperspectral image classification device, hyperspectral image classification equipment and storage medium |
CN111476174A (en) * | 2020-04-09 | 2020-07-31 | 北方工业大学 | Face image-based emotion recognition method and device |
CN111476174B (en) * | 2020-04-09 | 2023-04-04 | 北方工业大学 | Face image-based emotion recognition method and device |
CN113420640A (en) * | 2021-06-21 | 2021-09-21 | 深圳大学 | Mangrove hyperspectral image classification method and device, electronic equipment and storage medium |
WO2022267388A1 (en) * | 2021-06-21 | 2022-12-29 | 深圳大学 | Mangrove hyperspectral image classification method and apparatus, and electronic device and storage medium |
CN113420640B (en) * | 2021-06-21 | 2023-06-20 | 深圳大学 | Mangrove hyperspectral image classification method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163293A (en) | Red meat classification method, device, equipment and storage medium based on deep learning | |
CN109840531B (en) | Method and device for training multi-label classification model | |
CN108256544B (en) | Picture classification method and device, robot | |
CN108062551A (en) | Graph feature extraction system, graph classification system and method based on adjacency matrix | |
CN109840530A (en) | Method and apparatus for training a multi-label classification model | |
CN108520275A (en) | Adjacency-matrix-based link information regularization system, graph feature extraction system, graph classification system and method | |
CN107742107A (en) | Facial image classification method, device and server | |
CN104933428B (en) | A kind of face identification method and device based on tensor description | |
Shi et al. | Image manipulation detection and localization based on the dual-domain convolutional neural networks | |
CN109344698A (en) | Hyperspectral band selection method based on separable convolution and hard threshold function | |
Borwarnginn et al. | Breakthrough conventional based approach for dog breed classification using CNN with transfer learning | |
CN104463202A (en) | Multi-class image semi-supervised classifying method and system | |
CN112949738B (en) | Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm | |
CN103077399B (en) | Biological microscopic image classification method based on integrated cascade | |
CN103177265B (en) | High-definition image classification method based on kernel function and sparse coding | |
CN108090472A (en) | Pedestrian re-identification method and system based on multi-channel consistency features | |
CN111784665B (en) | OCT image quality evaluation method, system and device based on Fourier transform | |
Chen et al. | Automated design of neural network architectures with reinforcement learning for detection of global manipulations | |
CN113159067A (en) | Fine-grained image identification method and device based on multi-grained local feature soft association aggregation | |
Dong et al. | A combined deep learning model for the scene classification of high-resolution remote sensing image | |
CN109978074A (en) | Image aesthetics and emotion joint classification method and system based on deep multi-task learning | |
CN116883726B (en) | Hyperspectral image classification method and system based on multi-branch and improved Dense2Net | |
CN114238659A (en) | Method for intelligently designing network security architecture diagram | |
Zhu et al. | Learning reconfigurable scene representation by tangram model | |
CN109816030A (en) | Image classification method and device based on restricted Boltzmann machine | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190823 |