CN110100774A - Method for identifying the sex of river crabs based on a convolutional neural network - Google Patents
Method for identifying the sex of river crabs based on a convolutional neural network
- Publication number
- CN110100774A CN110100774A CN201910380544.8A CN201910380544A CN110100774A CN 110100774 A CN110100774 A CN 110100774A CN 201910380544 A CN201910380544 A CN 201910380544A CN 110100774 A CN110100774 A CN 110100774A
- Authority
- CN
- China
- Prior art keywords
- layer
- neural networks
- convolutional neural
- river crab
- male
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K61/00—Culture of aquatic animals
- A01K61/50—Culture of aquatic animals of shellfish
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/80—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
- Y02A40/81—Aquaculture, e.g. of fish
Abstract
The invention discloses a method for identifying the sex of river crabs based on a convolutional neural network, and relates to the field of pattern recognition. The method comprises the following steps: collect original images of the abdomens and shells of female and male river crabs, build an original image data set, and divide it into a training set and a test set; apply data augmentation to the river crab images in the training set to increase their number; preprocess the augmented training set; construct a convolutional neural network model; feed the preprocessed data into the constructed model and train it; feed the test set into the trained model and identify the sex of the river crabs in the test set. The invention has the advantage of effectively improving both the accuracy of river crab sex identification and production efficiency.
Description
Technical field
The present invention relates to the field of pattern recognition, and more particularly to a method for identifying the sex of river crabs based on a convolutional neural network.
Background technique
River crabs are an important aquatic product in China. Their delicious meat and high nutritional value make them very popular with consumers; the industry is growing rapidly and represents a large market. At present, however, crabs are sexed mainly by eye. Manual identification is labour-intensive and costly, limits production scale, and is prone to misclassification caused by fatigue; such inefficient manual work can no longer meet the demands of the seasonal crab trade and the need to deliver the product fresh.
Existing methods for identifying the sex of river crabs typically use machine vision to extract the triangular feature of the crab's abdomen and a template-matching algorithm to model the abdomen and distinguish male from female; their identification accuracy is relatively low. Moreover, crabs are live animals, and capturing an image of the abdomen on an automatic sorting line is difficult. Methods that identify sex from the abdomen therefore have obvious limitations in automatic sorting applications, and their production efficiency is low.
In recent years, deep learning, and convolutional neural networks in particular, has been widely applied to image recognition tasks such as classification, object detection, and image segmentation. At the same time, as society progresses and industry becomes increasingly automated, computer-assisted production can both save labour and production costs and improve production efficiency.
Patent document CN201610691336 discloses an image recognition method for river crab identification that determines the kinship of river crabs by image comparison. The prior art still lacks a reliable method for identifying the sex of river crabs.
Summary of the invention
The technical problem to be solved by the invention is to provide a convolutional-neural-network-based method for identifying the sex of river crabs that improves both identification accuracy and production efficiency.
The invention solves the above technical problem with the following technical scheme: a method for identifying the sex of river crabs based on a convolutional neural network, comprising the following steps:
Step A: collect original images of the abdomens and shells of female and male river crabs, build an original image data set, and divide it into a training set and a test set;
Step B: apply data augmentation to the river crab images in the training set obtained in step A to increase their number;
Step C: preprocess the augmented training set from step B;
Step D: construct a convolutional neural network model;
Step E: feed the data preprocessed in step C into the model constructed in step D and train the model;
Step F: feed the test set obtained in step A into the model trained in step E and identify the sex of the river crabs in the test set.
In a preferred embodiment, the data augmentation applied to the river crab images in the training set in step B includes translation, rotation, and flipping.
In a preferred embodiment, the preprocessing in step C comprises the following steps: first move the crab to the centre of the image, then crop and resize the image, and finally remove the background of the image using a Gaussian mixture model.
In a preferred embodiment, the convolutional neural network model constructed in step D comprises an input layer, 3 convolutional layers, 3 pooling layers, a fully connected layer, and an output layer. The input layer is the entry point of the model. The first, third, and fifth layers are convolutional layers, which convolve the input image data to produce feature maps. The second, fourth, and sixth layers are pooling layers, which downsample the feature maps. The seventh layer is a fully connected layer, in which every neuron is fully connected to all neurons of the previous layer. The eighth layer is the output layer, which performs the classification.
In a preferred embodiment, the input layer normalises the input data.
In a preferred embodiment, the first convolutional layer has 4 convolution kernels, the third has 8, and the fifth has 16; each kernel is 3 × 3.
In a preferred embodiment, the first, third, and fifth (convolutional) layers each apply a ReLU activation after the convolution operation.
In a preferred embodiment, the output layer computes the predicted value of the convolutional neural network model using a sigmoid function, and the sex of the crab is judged from the predicted value.
In a preferred embodiment, the sex of the crab is judged according to:

y = 1 if f(x) >= 0.7, y = 0 if f(x) < 0.7

where f(x) is the predicted value. When the predicted value is greater than or equal to 0.7, y = 1 and the crab is judged male; when it is less than 0.7, y = 0 and the crab is judged female.
In a preferred embodiment, training the convolutional neural network model in step E comprises the following steps:
Step 1: initialise all weights of the network from a Gaussian distribution;
Step 2: set the number of iterations and the learning parameters of the network;
Step 3: input the data preprocessed in step C into the model constructed in step D;
Step 4: propagate the input data forward layer by layer to obtain the output value;
Step 5: compute the error between the output value and the true target value;
Step 6: propagate the error backwards from the last layer using the back-propagation algorithm, update the parameters of every layer, and propagate forward again with the updated parameters, repeating this cycle until the model converges;
Step 7: stop training when the error is at or below the target value or the set number of training iterations is reached.
The present invention has the following advantages:
1. The invention uses deep learning, specifically a convolutional neural network algorithm, to identify the sex of river crabs. This avoids the errors of identification by eye, reduces the labour intensity of workers, improves work efficiency, and lowers labour costs.
2. Compared with traditional machine vision algorithms, the invention makes full use of convolutional neural networks and does not require the complicated feature-extraction preprocessing of conventional machine learning, effectively improving the accuracy of river crab sex identification.
3. The test process is stable, shows no large disturbances, and has high robustness.
4. The invention can identify sex from either the abdomen or the shell, overcoming the drawback of earlier methods that could only use the abdomen, and greatly improving production efficiency.
Detailed description of the invention
Fig. 1 is the flow chart of the river crab sex identification method based on a convolutional neural network according to the embodiment of the present invention.
Fig. 2 shows samples of river crab images from the preprocessed training set of the embodiment.
Fig. 3 is the structure diagram of the convolutional neural network model of the embodiment.
Specific embodiment
As shown in Fig. 1, the method for identifying the sex of river crabs based on a convolutional neural network comprises the following steps.
Step A, collect river crab images: following the principle of class balance, collect original images of the abdomens and shells of female and male river crabs, build an original image data set, and divide it into a training set and a test set.
Images of Yangcheng Lake hairy crabs were acquired by photographing the crabs with a camera. Following the principle of class balance, 180 male and 180 female crabs were selected, and images of each crab's abdomen and shell were acquired to build the original image data set. The data set contains 360 images of male crabs and 360 images of female crabs; each group of 360 comprises 180 shell images and 180 abdomen images. All images were randomly divided into a training set of 360 images and a test set of 360 images.
Step B, augment the data set: apply data augmentation to the training-set images obtained in step A to increase their number. The full experimental data set consists of the augmented training set and the test set from step A.
Because the original data set built in step A contains only 720 images, and in deep learning a small data set often causes the model to overfit, the training set must be enlarged by data augmentation.
The augmentation operations include translation, rotation, and flipping. Translation means shifting every image in the training set left and right by one pixel in a batch operation; rotation means rotating every image in the training set by a given angle in a batch operation.
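The augmentation step can be sketched as follows in Python/NumPy. This is an illustrative sketch, not the patented implementation: the one-pixel shift is taken from the text, but the 90-degree rotation angle is an assumed example (the text only says "a given angle"), and np.roll wraps shifted pixels around the edge for simplicity, where a real pipeline would pad instead.

```python
import numpy as np

def augment(image):
    """Return shifted, rotated, and flipped copies of one grayscale image."""
    return [
        np.roll(image, 1, axis=1),   # translate right by one pixel (circular shift)
        np.roll(image, -1, axis=1),  # translate left by one pixel (circular shift)
        np.rot90(image),             # rotate (90 degrees here, as an example angle)
        np.fliplr(image),            # horizontal flip
        np.flipud(image),            # vertical flip
    ]

image = np.arange(16, dtype=np.uint8).reshape(4, 4)
augmented = augment(image)  # five extra training images from one original
```

Applied to each of the 360 training images, such batch operations multiply the size of the training set several times over.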
Step C, preprocess the augmented training set from step B.
The preprocessing comprises the following steps: first move the crab to the centre of the image, then crop the image and resize it to 64 × 64 pixels, and finally remove the background using a Gaussian mixture model.
As shown in Fig. 2, for the sample river crab images in the preprocessed training set: (a) is the shell of a male crab; (b) is the abdomen of a male crab; (c) is the shell of a female crab; (d) is the abdomen of a female crab.
Step D, construct the convolutional neural network model using the Keras deep learning library.
As shown in Fig. 3, the constructed model comprises an input layer, 3 convolutional layers, 3 pooling layers, a fully connected layer, and an output layer. The model architecture is as follows.
The input layer is the entry point of the model.
The data preprocessed in step C enter through the input layer, which normalises them; this reduces the interference caused by differences in the value ranges of the individual dimensions. All images are grayscale: each pixel is an integer gray value in the range 0 to 255, and each pixel is normalised by dividing it by 255 in a batch operation.
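The normalisation described above is a single batch operation; a minimal NumPy sketch:

```python
import numpy as np

# A batch of 8-bit grayscale images is normalised in one operation by
# dividing every pixel by 255, mapping gray values 0..255 into [0, 1].
batch = np.random.default_rng(0).integers(0, 256, size=(4, 64, 64), dtype=np.uint8)
normalised = batch.astype(np.float32) / 255.0
```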
The first, third, and fifth layers are convolutional layers; each convolves the input image data and produces feature maps.
The size of a convolutional layer's feature maps is given by:

W_o = (W_i - W_f + 2P) / S + 1,  H_o = (H_i - H_f + 2P) / S + 1

where W_o and H_o are the width and height of this layer's feature maps, W_i and H_i are the width and height of the previous layer's feature maps, W_f and H_f are the width and height of the convolution kernel, P is the padding, and S is the stride.
The first convolutional layer has 4 convolution kernels, the third has 8, and the fifth has 16; each kernel is 3 × 3 pixels, so the three layers produce 4, 8, and 16 distinct feature maps respectively after convolution. Kernel size matters greatly for feature extraction: a kernel that is too small cannot capture useful local features, while for a kernel that is too large the complexity of the extracted features may far exceed the kernel's expressive capacity.
The padding P is the number of zero-valued pixels added around the edge of the input feature map; here P = 0, meaning no padding, and the stride S is 1.
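The feature-map size formula above can be checked with a few lines of code (integer division is assumed for the non-divisible case):

```python
def conv_output_size(w_in, kernel=3, padding=0, stride=1):
    """W_o = (W_i - W_f + 2P) / S + 1, with integer division."""
    return (w_in - kernel + 2 * padding) // stride + 1

# First convolutional layer: 64 x 64 input, 3 x 3 kernel, P = 0, S = 1.
first_layer_width = conv_output_size(64)
```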
After the convolution operation, the first, third, and fifth (convolutional) layers each apply a ReLU activation; a bias is added to each feature map before it is passed to the ReLU. The ReLU is computed as:

R(x) = max(0, x)

where R(x) is the activation value and x is the input. The nonlinear ReLU activation mitigates the vanishing-gradient problem and accelerates model convergence.
The second, fourth, and sixth layers are pooling layers, which downsample the feature maps.
Pooling retains the main features of the image while reducing the parameters and computation of the next layer, which helps prevent overfitting. There are two common pooling methods, max pooling and average pooling; this embodiment uses max pooling.
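Max pooling over one feature map can be sketched as follows. This illustration assumes non-overlapping 2 × 2 windows (each output pixel is the maximum of one window, with a trailing odd row or column dropped), which matches the feature-map sizes reported further below.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Non-overlapping 2 x 2 max pooling of a single 2-D feature map."""
    h, w = feature_map.shape
    h2, w2 = h - h % 2, w - w % 2          # drop a trailing odd row/column
    trimmed = feature_map[:h2, :w2]
    # Group pixels into 2 x 2 windows, then take the max of each window.
    return trimmed.reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))

feature_map = np.arange(36, dtype=np.float32).reshape(6, 6)
pooled = max_pool_2x2(feature_map)  # 6 x 6 -> 3 x 3
```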
The size of a pooling layer's feature maps is given by:

W_o = (W_i - W_f) / S + 1,  H_o = (H_i - H_f) / S + 1

where W_o and H_o are the width and height of this layer's feature maps, W_i and H_i are the width and height of the previous layer's feature maps, W_f and H_f are the width and height of the pooling kernel, and S is the stride.
Each pooling layer uses a 2 × 2 pixel kernel with stride S = 2.
The image input to the network is 64 × 64 pixels. Applying the formulas above: the first (convolutional) layer produces 62 × 62 pixel feature maps; the second (pooling) layer reduces them to 31 × 31 pixels; the third (convolutional) layer produces 29 × 29 pixel maps; the fourth (pooling) layer reduces them to 14 × 14 pixels; the fifth (convolutional) layer produces 12 × 12 pixel maps; and the sixth (pooling) layer reduces them to 6 × 6 pixels.
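The whole chain of feature-map sizes can be reproduced by composing the two size formulas. Note this assumes a pooling stride of 2 (with integer division rounding down), which the stated sizes 62 → 31 → 29 → 14 → 12 → 6 require:

```python
def conv_size(w, kernel=3, padding=0, stride=1):
    return (w - kernel + 2 * padding) // stride + 1

def pool_size(w, kernel=2, stride=2):
    return (w - kernel) // stride + 1

w = 64            # input image: 64 x 64
w = conv_size(w)  # layer 1, convolution: 62
w = pool_size(w)  # layer 2, pooling:     31
w = conv_size(w)  # layer 3, convolution: 29
w = pool_size(w)  # layer 4, pooling:     14
w = conv_size(w)  # layer 5, convolution: 12
w = pool_size(w)  # layer 6, pooling:      6
```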
The seventh layer is a fully connected layer with 256 output neurons; each neuron in this layer is fully connected to all neurons of the previous layer.
Because the fully connected layer has many parameters, it often causes overfitting. To mitigate this, dropout is applied to the fully connected layer: during training, neural network units are temporarily dropped from the network with a certain probability. In this embodiment, each neuron of the fully connected layer is randomly dropped with probability 0.5.
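Dropout on the 256 fully connected outputs can be sketched as follows. The sketch uses inverted dropout (survivors scaled by 1/(1-p)), a common variant assumed here; the patent only specifies the drop probability of 0.5.

```python
import numpy as np

def dropout(activations, p=0.5, rng=None):
    """Zero each unit with probability p; scale survivors by 1/(1-p)
    so the expected activation is unchanged (inverted dropout)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep = rng.random(activations.shape) >= p
    return activations * keep / (1.0 - p)

activations = np.ones(256)      # the 256 fully connected outputs
dropped = dropout(activations)  # roughly half the units are zeroed
```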
The eighth layer is the output layer, which performs the classification.
The number of output neurons equals the number of classes to be identified; here the task is a binary male/female classification, so there are 2 neurons. Because the task is binary, a sigmoid function is used as the objective function.
The output layer computes the model's predicted value using the sigmoid function and judges the sex of the crab from it. The sigmoid is computed as:

f(x) = 1 / (1 + e^(-x))

where f(x) is the predicted value and x is the input.
The sex of the crab is judged according to:

y = 1 if f(x) >= 0.7, y = 0 if f(x) < 0.7

When the predicted value is greater than or equal to 0.7, y = 1 and the crab is judged male; when it is less than 0.7, y = 0 and the crab is judged female.
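The sigmoid and the 0.7 decision threshold together make a two-line classifier:

```python
import math

def sigmoid(x):
    """f(x) = 1 / (1 + e^(-x))"""
    return 1.0 / (1.0 + math.exp(-x))

def judge_sex(x, threshold=0.7):
    """y = 1 (male) when f(x) >= 0.7, y = 0 (female) otherwise."""
    return 1 if sigmoid(x) >= threshold else 0
```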
Step E, feed the data preprocessed in step C into the model constructed in step D and train the model, as follows:
Step 1: initialise all weights of the network from a Gaussian distribution;
Step 2: set the number of iterations of the network to 1000 and set its learning parameters. The model is trained with the Adam optimisation algorithm, a learning algorithm commonly used in deep learning that iteratively updates the network weights from the training data. The momentum and weight decay of Adam are 0.9 and 0.001 respectively. The model is trained for about 1000 epochs on the training data set of 14400 images, with a mini-batch size of 256 and an initial learning rate of 0.01.
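A single Adam weight update with these hyperparameters can be sketched as follows. The learning rate (0.01), momentum/beta1 (0.9), and weight decay (0.001) come from the values stated above; beta2 and epsilon are the usual Adam defaults and are assumptions here, as is treating the weight decay as an L2 penalty added to the gradient.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=0.001):
    """One Adam update of a weight vector w given its gradient."""
    grad = grad + weight_decay * w            # L2 weight decay (assumed form)
    m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.array([1.0, -1.0])
m, v = np.zeros(2), np.zeros(2)
w, m, v = adam_step(w, grad=np.array([0.5, -0.5]), m=m, v=v, t=1)
```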
Step 3: input the data preprocessed in step C into the model constructed in step D;
Step 4: propagate the input data forward layer by layer to obtain the output value;
Step 5: compute the error between the output value and the true target value;
Step 6: propagate the error backwards from the last layer using the back-propagation algorithm, update the parameters of every layer, and propagate forward again with the updated parameters, repeating this cycle until the model converges;
Step 7: stop training when the error is at or below the target value or the set number of training iterations is reached.
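Steps 3-7 amount to the following loop skeleton, where `step_fn` is a stand-in for one forward pass, error computation, backward pass, and parameter update:

```python
def train(step_fn, max_iters=1000, target_error=1e-3):
    """Repeat one training step until the error reaches the target
    value or the iteration budget is exhausted (steps 3-7 above)."""
    error = float("inf")
    for iteration in range(1, max_iters + 1):
        error = step_fn(iteration)
        if error <= target_error:
            break
    return iteration, error

# Toy stand-in whose error halves each iteration.
iterations, final_error = train(lambda t: 0.5 ** t)
```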
Step F, feed the test set obtained in step A into the model trained in step E and identify the sex of the river crabs in the test set, as follows:
Step 1: input the test set obtained in step A into the model trained in step E;
Step 2: propagate the test-set data forward layer by layer through the trained model to obtain predicted values;
Step 3: judge a crab male when its predicted value is greater than or equal to 0.7, and female when it is less than 0.7;
Step 4: tally the recognition results to complete the river crab sex-identification test.
The performance of several common machine learning algorithms on the river crab data set was tested and compared with the present invention; the comparison results are given in Table 1.
Table 1: comparison of river crab sex-identification accuracy across algorithms
As the table shows, the present invention identifies the sex of river crabs with an accuracy of up to 98.90%, effectively improving on traditional machine vision algorithms.
The above is only a preferred embodiment of the present invention and is not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (10)
1. A method for identifying the sex of river crabs based on a convolutional neural network, characterised by comprising the following steps:
Step A: collect original images of the abdomens and shells of female and male river crabs, build an original image data set, and divide it into a training set and a test set;
Step B: apply data augmentation to the river crab images in the training set obtained in step A to increase their number;
Step C: preprocess the augmented training set from step B;
Step D: construct a convolutional neural network model;
Step E: feed the data preprocessed in step C into the model constructed in step D and train the model;
Step F: feed the test set obtained in step A into the model trained in step E and identify the sex of the river crabs in the test set.
2. The method of claim 1, characterised in that in step B the data augmentation applied to the river crab images in the training set includes translation, rotation, and flipping.
3. The method of claim 1, characterised in that the preprocessing in step C comprises the following steps: first moving the crab to the centre of the image, then cropping and resizing the image, and finally removing the background using a Gaussian mixture model.
4. The method of claim 1, characterised in that the convolutional neural network model constructed in step D comprises an input layer, 3 convolutional layers, 3 pooling layers, a fully connected layer, and an output layer; the input layer is the entry point of the model; the first, third, and fifth layers are convolutional layers, which convolve the input image data to produce feature maps; the second, fourth, and sixth layers are pooling layers, which downsample the feature maps; the seventh layer is a fully connected layer in which every neuron is fully connected to all neurons of the previous layer; and the eighth layer is the output layer, which performs the classification.
5. The method of claim 4, characterised in that the input layer normalises the input data.
6. The method of claim 4, characterised in that the first convolutional layer has 4 convolution kernels, the third has 8, and the fifth has 16, each kernel being 3 × 3.
7. The method of claim 4, characterised in that the first, third, and fifth (convolutional) layers each apply a ReLU activation after the convolution operation.
8. The method of claim 4, characterised in that the output layer computes the predicted value of the convolutional neural network model using a sigmoid function, and the sex of the crab is judged from the predicted value.
9. The method of claim 8, characterised in that the sex of the crab is judged according to:

y = 1 if f(x) >= 0.7, y = 0 if f(x) < 0.7

where f(x) is the predicted value: when the predicted value is greater than or equal to 0.7, y = 1 and the crab is judged male; when it is less than 0.7, y = 0 and the crab is judged female.
10. The method of claim 1, characterised in that training the convolutional neural network model in step E comprises the following steps:
Step 1: initialise all weights of the network from a Gaussian distribution;
Step 2: set the number of iterations and the learning parameters of the network;
Step 3: input the data preprocessed in step C into the model constructed in step D;
Step 4: propagate the input data forward layer by layer to obtain the output value;
Step 5: compute the error between the output value and the true target value;
Step 6: propagate the error backwards from the last layer using the back-propagation algorithm, update the parameters of every layer, and propagate forward again with the updated parameters, repeating this cycle until the model converges;
Step 7: stop training when the error is at or below the target value or the set number of training iterations is reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910380544.8A CN110100774A (en) | 2019-05-08 | 2019-05-08 | River crab male and female recognition methods based on convolutional neural networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910380544.8A CN110100774A (en) | 2019-05-08 | 2019-05-08 | River crab male and female recognition methods based on convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110100774A true CN110100774A (en) | 2019-08-09 |
Family
ID=67488819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910380544.8A Pending CN110100774A (en) | 2019-05-08 | 2019-05-08 | River crab male and female recognition methods based on convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110100774A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056102A (en) * | 2016-07-01 | 2016-10-26 | 哈尔滨工业大学 | Video-image-analysis-based road vehicle type classification method |
CN206794173U (en) * | 2017-05-08 | 2017-12-26 | 中国计量大学 | Full-automatic male and female silk cocoon separator |
CN108256571A (en) * | 2018-01-16 | 2018-07-06 | 佛山市顺德区中山大学研究院 | A kind of Chinese meal food recognition methods based on convolutional neural networks |
CN108427920A (en) * | 2018-02-26 | 2018-08-21 | 杭州电子科技大学 | A kind of land and sea border defense object detection method based on deep learning |
CN108776793A (en) * | 2018-06-08 | 2018-11-09 | 江南大学 | A kind of crab male and female recognition methods based on region segmentation |
WO2019002880A1 (en) * | 2017-06-28 | 2019-01-03 | Observe Technologies Limited | Data collection system and method for feeding aquatic animals |
CN109684967A (en) * | 2018-12-17 | 2019-04-26 | 东北农业大学 | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network |
2019-05-08: application CN201910380544.8A (CN), published as CN110100774A, status: Pending
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738648A (en) * | 2019-10-12 | 2020-01-31 | 山东浪潮人工智能研究院有限公司 | camera shell paint spraying detection system and method based on multilayer convolutional neural network |
CN111860689A (en) * | 2020-07-31 | 2020-10-30 | 中国矿业大学 | Coal and gangue identification method based on phase consistency and light-weight convolutional neural network |
CN111860689B (en) * | 2020-07-31 | 2023-11-03 | 中国矿业大学 | Coal gangue identification method based on phase consistency and lightweight convolutional neural network |
CN113610540A (en) * | 2021-07-09 | 2021-11-05 | 北京农业信息技术研究中心 | River crab anti-counterfeiting tracing method and system |
CN113610540B (en) * | 2021-07-09 | 2024-02-02 | 北京农业信息技术研究中心 | River crab anti-counterfeiting tracing method and system |
CN114241248A (en) * | 2022-02-24 | 2022-03-25 | 北京市农林科学院信息技术研究中心 | River crab origin tracing method and system |
CN114241248B (en) * | 2022-02-24 | 2022-07-01 | 北京市农林科学院信息技术研究中心 | River crab origin tracing method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kumar et al. | Resnet-based approach for detection and classification of plant leaf diseases | |
CN110100774A (en) | River crab male and female recognition methods based on convolutional neural networks | |
Nasiri et al. | An automatic sorting system for unwashed eggs using deep learning | |
Sakib et al. | Implementation of fruits recognition classifier using convolutional neural network algorithm for observation of accuracies for various hidden layers | |
CN108388896A (en) | A kind of licence plate recognition method based on dynamic time sequence convolutional neural networks | |
CN110363253A (en) | A kind of Surfaces of Hot Rolled Strip defect classification method based on convolutional neural networks | |
CN108268860A (en) | A kind of gas gathering and transportation station equipment image classification method based on convolutional neural networks | |
CN108805833A (en) | Miscellaneous minimizing technology of copybook binaryzation ambient noise of network is fought based on condition | |
CN109376625A (en) | A kind of human facial expression recognition method based on convolutional neural networks | |
Chen et al. | Cell nuclei detection and segmentation for computational pathology using deep learning | |
Bjørlykhaug et al. | Vision system for quality assessment of robotic cleaning of fish processing plants using CNN | |
Yin et al. | Recognition of grape leaf diseases using MobileNetV3 and deep transfer learning | |
Feng et al. | Classification of shellfish recognition based on improved faster r-cnn framework of deep learning | |
Monigari et al. | Plant leaf disease prediction | |
CN114820471A (en) | Visual inspection method for surface defects of intelligent manufacturing microscopic structure | |
CN110866547A (en) | Automatic classification system and method for traditional Chinese medicine decoction pieces based on multiple features and random forest | |
CN110321922A (en) | A kind of CT image classification method for distinguishing convolutional neural networks based on space correlation | |
CN110334747A (en) | Based on the image-recognizing method and application for improving convolutional neural networks | |
US11682111B2 (en) | Semi-supervised classification of microorganism | |
CN112907503B (en) | Penaeus vannamei Boone quality detection method based on self-adaptive convolutional neural network | |
Dey et al. | Shape Segmentation and Matching from Noisy Point Clouds. | |
CN109272004B (en) | Influenza strain egg embryo viability detection method based on convolutional neural network model | |
Saffari et al. | On Improving Breast Density Segmentation Using Conditional Generative Adversarial Networks. | |
Yu et al. | Evaluation of fiber degree for fish muscle based on the edge feature attention net | |
Chen et al. | Classification of flour types based on PSO-BP neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-08-09