CN108921057B - Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device - Google Patents

Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device

Info

Publication number
CN108921057B
Authority
CN
China
Prior art keywords
prawn
picture
sample
generate
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810630720.4A
Other languages
Chinese (zh)
Other versions
CN108921057A (en)
Inventor
刘向荣
毛勇
龚瑞
柳娟
曾湘祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201810630720.4A priority Critical patent/CN108921057B/en
Publication of CN108921057A publication Critical patent/CN108921057A/en
Application granted granted Critical
Publication of CN108921057B publication Critical patent/CN108921057B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis


Abstract

The invention discloses a prawn form measuring method based on a convolutional neural network, which comprises the following steps: photographing a prawn sample together with a reference object to obtain sample pictures; calibrating the target areas in each sample picture, generating a description file corresponding to the target areas, and associating the description file with the sample picture; establishing a data set from the sample pictures and description files, and generating a prawn form measurement model from the data set; preprocessing the sample pictures in the test set to generate test pictures; inputting the test pictures into the prawn form measurement model to generate a generalization performance score; and determining the final prawn form measurement model according to the generalization performance score and measuring prawn form with the final model. The invention also discloses a computer-readable storage medium, a terminal device, and a prawn form measuring device based on a convolutional neural network. Efficient and accurate measurement of the morphological parameters of prawns is thereby achieved, saving the manpower and material resources required during prawn breeding.

Description

Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
Technical Field
The invention relates to the technical field of image recognition, in particular to a prawn form measuring method, medium, terminal equipment and device based on a convolutional neural network.
Background
Prawns are an important component of China's aquatic products, and eating prawns supplies people with a large amount of protein. China is also the world's largest prawn-consuming country and its second-largest prawn importer. Prawns are therefore essential to people's lives.
During prawn breeding, the prawns need to be measured to obtain their morphological parameters (such as carapace length, weight, and body length), which serve as a basis for researchers to select breeding schemes. The main prior-art method for measuring prawn morphological parameters is manual measurement, i.e., measuring the parameters by hand with tools such as vernier calipers. This method is inefficient and labor-intensive, and human factors introduce large errors into the measurement results, seriously affecting subsequent data analysis and the selection of breeding schemes.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the art described above. Therefore, one purpose of the invention is to provide a prawn form measuring method based on a convolutional neural network, which can realize efficient and accurate measurement of prawn form parameters, save manpower and material resources required by form parameter measurement in a prawn breeding process, and ensure the accuracy of prawn breeding scheme selection.
A second object of the invention is to propose a computer-readable storage medium.
A third object of the present invention is to provide a terminal device.
The fourth purpose of the invention is to provide a prawn shape measuring device based on a convolutional neural network.
In order to achieve the above object, an embodiment of the first aspect of the present invention provides a prawn shape measurement method based on a convolutional neural network, including the following steps: shooting a prawn sample and a reference object, and carrying out normalization processing on a shot picture to obtain a sample picture; calibrating a target area according to the sample picture, generating a description file corresponding to the target area, and associating the description file with the sample picture; establishing a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set; training a reference model according to the training set; inputting the verification set into a reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate a prawn form measurement model; preprocessing the sample pictures in the test set to generate test pictures; inputting the test picture into the prawn shape measurement model to generate a second estimation result, and generating a generalization performance score of the prawn shape measurement model according to the second estimation result; and determining a final prawn shape measurement model according to the generalization performance score, and performing prawn shape measurement according to the final prawn shape measurement model.
According to the prawn form measuring method based on the convolutional neural network of the embodiment of the invention, a prawn sample and a reference object are photographed, and the photographed picture is normalized to obtain a sample picture; then, the target area is calibrated according to the sample picture, a description file corresponding to the target area is generated, and the description file is associated with the sample picture; then, a data set is established according to each sample picture and its corresponding description file, wherein the data set is divided into a training set, a verification set, and a test set; after the data set is divided, a reference model is trained according to the training set, the verification set is input into the reference model to generate a first estimation result, and the parameters of the reference model are adjusted according to the first estimation result to generate the prawn form measurement model; finally, the sample pictures in the test set are preprocessed to generate test pictures, the test pictures are input into the prawn form measurement model to generate a second estimation result, a generalization performance score of the prawn form measurement model is generated according to the second estimation result, the final prawn form measurement model is determined according to the generalization performance score, and prawn form measurement is performed according to the final prawn form measurement model. Efficient and accurate measurement of prawn morphological parameters is thereby achieved, the manpower and material resources required for morphological parameter measurement during prawn breeding are saved, and the accuracy of prawn breeding scheme selection is ensured.
In addition, the prawn form measurement method based on the convolutional neural network provided by the embodiment of the invention can also have the following additional technical characteristics:
optionally, adjusting parameters of a reference model according to the first estimation result to generate a prawn shape measurement model, including: judging whether the first estimation result is consistent with a description file associated with the corresponding sample picture so as to obtain the accuracy of the first estimation result, and judging whether the accuracy of the first estimation result reaches a preset accuracy threshold value; and if the accuracy of the first estimation result does not reach a preset accuracy threshold, adjusting the parameters of the reference model so as to carry out iterative training on the reference model according to the verification set until the reference model with the accuracy of the first estimation result reaching the preset accuracy threshold is used as a prawn shape measurement model.
Optionally, the training of the reference model according to the training set includes: extracting image features of the sample pictures in the training set to generate feature pictures, and associating the feature pictures with the sample pictures; training a region generation network according to the feature pictures to obtain all candidate regions in each feature picture and the possibility score of each candidate region; and training the reference model according to the feature pictures, all the candidate regions, and the possibility score of each candidate region.
Optionally, extracting image features of sample pictures in the training set to generate a feature picture, including: performing convolution calculation on the sample picture through a ZF network to extract the characteristic information of the sample picture; and performing pooling processing on the characteristic information to generate a characteristic picture.
Optionally, preprocessing the sample picture in the test set to generate a test picture, including: removing reference object pixels of the sample pictures in the test set to generate a first preprocessed picture; inputting the first preprocessed picture into a prawn measuring model to generate a second preprocessed picture; and horizontally correcting the second preprocessed picture according to the target area in the second preprocessed picture to generate a test picture.
Optionally, the reference model is a Fast RCNN model.
Optionally, the data set is partitioned to generate a combined training-and-validation (trainval) text file, a training text file, a validation text file, and a test text file.
In order to achieve the above object, a computer-readable storage medium according to a second aspect of the present invention is provided, on which a prawn shape measurement program based on a convolutional neural network is stored, and when being executed by a processor, the prawn shape measurement program based on the convolutional neural network implements the prawn shape measurement method based on the convolutional neural network as described above.
In order to achieve the above object, a terminal device according to an embodiment of a third aspect of the present invention includes a memory, a processor, and a prawn shape measurement program based on a convolutional neural network, where the prawn shape measurement program based on a convolutional neural network is stored in the memory and is operable on the processor, and when the processor executes the prawn shape measurement program based on a convolutional neural network, the prawn shape measurement method based on a convolutional neural network is implemented.
In order to achieve the above object, a shrimp morphology measuring device based on a convolutional neural network according to a fourth aspect of the present invention includes: the acquisition module is used for shooting a prawn sample and a reference object and carrying out normalization processing on a shot picture to obtain a sample picture; the calibration module is used for calibrating a target area according to the sample picture, generating a description file corresponding to the target area, and associating the description file with the sample picture; the data processing module is used for establishing a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set; the model training module is used for training a reference model according to the training set; the model verification module is used for inputting the verification set into a reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate a prawn form measurement model; the test picture generation module is used for preprocessing the sample pictures in the test set to generate test pictures; the model testing module is used for inputting the test picture into the prawn shape measurement model to generate a second estimation result and generating a generalization performance score of the prawn shape measurement model according to the second estimation result; and the recognition module is used for determining a final prawn shape measurement model according to the generalization performance score and performing prawn shape measurement according to the final prawn shape measurement model.
According to the prawn form measuring device based on the convolutional neural network, provided by the embodiment of the invention, firstly, a prawn sample and a reference object are shot through an acquisition module, and a shot picture is normalized to obtain a sample picture; then, the calibration module calibrates the target area according to the sample picture, generates a description file corresponding to the target area, and associates the description file with the sample picture; then, the data processing module establishes a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set; after the data set division is completed, the model training module trains a reference model according to the training set; the model verification module inputs the verification set into a reference model to generate a first estimation result, and adjusts parameters of the reference model according to the first estimation result to generate a prawn form measurement model; then, a test picture generation module preprocesses the sample picture in the test set to generate a test picture; the model testing module inputs the testing picture into the prawn shape measuring model to generate a second estimation result, and generates a generalization performance score of the prawn shape measuring model according to the second estimation result; and finally, the identification module determines a final prawn shape measurement model according to the generalization performance score and performs prawn shape measurement according to the final prawn shape measurement model. Therefore, the high-efficiency and accurate measurement of the morphological parameters of the prawns is realized, the manpower and material resources required by the morphological parameter measurement in the breeding process of the prawns are saved, and the accuracy of the selection of the breeding scheme of the prawns is ensured.
Drawings
FIG. 1 is a schematic flow chart of a prawn shape measurement method based on a convolutional neural network according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of a prawn morphology measurement method based on a convolutional neural network according to another embodiment of the present invention;
FIG. 3 is a block diagram of a prawn shape measuring device based on a convolutional neural network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a camera according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a sample picture generated according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a ZF network according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a convolution calculation method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a pooling process according to an embodiment of the present invention;
FIG. 9 is a flow chart illustrating a region-based network according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of candidate regions according to an embodiment of the invention;
FIG. 11 is a schematic diagram of a process for initializing reference model parameters according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of training data construction according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a data structure according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of a fully-connected layer ramp-up according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of joint training of a region-forming network and a reference model according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, intended to explain the invention, and are not to be construed as limiting it.
In the process of prawn breeding, morphological parameters of prawns need to be measured. In the prior art, manual measurement is adopted, the efficiency is low, the measurement result is inaccurate, and the subsequent data analysis and the selection of a breeding scheme are seriously influenced. The prawn form measuring method based on the convolutional neural network provided by the embodiment of the invention comprises the steps of firstly obtaining a sample picture, calibrating a target area on the sample picture, and generating a description file corresponding to the target area to generate a data set; then, training a model according to the data set to obtain a prawn form measurement model, and measuring the prawn form according to the prawn form measurement model; therefore, the high-efficiency and accurate measurement of the morphological parameters of the prawns is realized, the manpower and material resources required by the morphological parameter measurement in the breeding process of the prawns are saved, and the accuracy of the selection of the breeding scheme of the prawns is ensured.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Fig. 1 is a schematic flow diagram of a prawn morphology measurement method based on a convolutional neural network according to an embodiment of the present invention, and as shown in fig. 1, the prawn morphology measurement method based on a convolutional neural network includes the following steps:
s101, shooting a prawn sample and a reference object, and normalizing the shot picture to obtain a sample picture.
There are various ways of photographing a prawn sample and a reference object. For example, the prawn sample and the reference object can be photographed directly with a smart camera to obtain a prawn picture.
As an example, a prawn sample is video-captured to generate a prawn video; and extracting the image frames of the prawn video according to a preset acquisition frequency to generate a prawn picture.
The reference object is an article with known geometric data, and the arrangement mode can be various.
As an example, when the background is light-colored, a dark rectangular scale is selected as the reference object. The length and width of the rectangular scale are known, and its color is chosen to differ markedly from the color of the prawns.
As an example, the shooting environment can be normalized before shooting, which is beneficial to improving the quality of the shot picture and reducing the workload of subsequent picture processing.
As another example, as shown in fig. 4, the photographing apparatus is first specified and set up as a fixed photographing platform, where the background is a light color clearly distinct from the color of the prawn. A fixed light source is then provided beneath the photographing platform; the standard illumination environment can be a D65 light source with a color temperature of 6500 K.
Normalizing the captured pictures means preprocessing them into sample pictures so that the format and pixel dimensions of every sample picture are consistent.
As an example, the pictures are first resized to a computer-friendly resolution, and software processing keeps the size, background color, and so on of each picture as consistent as possible. It should be noted that pictures captured by devices such as smart cameras have a resolution of around 3648 × 2736; such large pictures reduce the efficiency with which a computer processes them and cannot be displayed in full on screen, which is inconvenient to work with. Preprocessing the pictures therefore speeds up subsequent operations.
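By way of illustration only, this normalization step could be scripted roughly as follows with OpenCV; the 1024-pixel target width and the directory names are assumptions for the example, not values fixed by this embodiment.

```python
import cv2
import os

def normalize_picture(src_path, dst_path, target_width=1024):
    """Resize a raw photo (e.g. 3648 x 2736) to a computer-friendly size,
    keeping the aspect ratio, so all sample pictures share one format."""
    img = cv2.imread(src_path)
    scale = target_width / img.shape[1]
    resized = cv2.resize(img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
    cv2.imwrite(dst_path, resized)

for name in os.listdir("raw_photos"):          # hypothetical input directory
    normalize_picture(os.path.join("raw_photos", name),
                      os.path.join("samples", name))
```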
S102, calibrating the target area according to the sample picture, generating a description file corresponding to the target area, and associating the description file with the sample picture.
The target regions include, but are not limited to, a head region, a carapace (cephalothorax) region, and a tail region. There are various ways to calibrate the target regions from the sample picture; for example, the head region, the carapace region, and the tail region may be determined manually, and a sample picture so calibrated is shown in fig. 5.
It should be noted that the description file includes, but is not limited to, a target area category, a file directory, a picture name, target box information, prawn information, and the like.
S103, establishing a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set.
That is, after the target area in the sample picture is calibrated and the corresponding description file is generated, a data set is established according to the calibrated sample picture and the corresponding description file thereof, and the data set is divided into a training set, a verification set and a test set.
The data set may be divided in various ways, for example, the data set is divided into three equal parts to form a training set, a verification set and a test set.
As an example, the data set is partitioned to generate a combined training-and-validation (trainval) text file, a training text file, a validation text file, and a test text file.
The trainval text file is the union of the training text file and the validation text file; it covers 70% of the data set, while the test text file covers the remaining 30%. Within the trainval file, the training text file accounts for 70% and the validation text file for 30%.
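A minimal sketch of this split, assuming the pictures are referenced by plain string IDs; the random seed and file names are illustrative:

```python
import random

def split_dataset(ids, seed=42):
    """70/30 split into trainval/test, then 70/30 within trainval."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n_trainval = int(0.7 * len(ids))
    trainval, test = ids[:n_trainval], ids[n_trainval:]
    n_train = int(0.7 * len(trainval))
    train, val = trainval[:n_train], trainval[n_train:]
    for name, subset in [("trainval", trainval), ("train", train),
                         ("val", val), ("test", test)]:
        with open(f"{name}.txt", "w") as fh:
            fh.write("\n".join(subset))

split_dataset(f"prawn_{i:04d}" for i in range(500))
```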
And S104, training a reference model according to the training set.
There are various ways to train the reference model according to the training set, for example, extracting the image features of the prawn pictures in the training set by using a pre-training model, and training the reference model by combining with an SVM classification algorithm.
As an example, the image features of the prawn pictures in the training set are extracted through a ZF network to generate feature pictures, and a region generation network is trained according to the feature pictures; the reference model is then trained with the Fast RCNN algorithm according to the feature pictures and the region generation network.
And S105, inputting the verification set into a reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate a prawn shape measurement model.
Namely, the verification set is input into a reference model, the reference model outputs a first estimation result according to a sample picture and a corresponding description file contained in the verification set, and then parameters of the reference model are adjusted according to the first estimation result to generate the prawn shape measurement model.
There are various ways to adjust the parameters of the reference model according to the first estimation result. For example, whether the first estimation result is correct or not is judged, the first estimation result with the judged result of no is stored, and the parameter of the reference model is adjusted according to the incorrect first estimation result.
And S106, preprocessing the sample pictures in the test set to generate test pictures.
For example, the sample pictures are cropped according to the speed and format requirements of the processor, and program adjustments keep the background color, size, and target areas of the sample pictures consistent.
And S107, inputting the test picture into the prawn shape measurement model to generate a second estimation result, and generating the generalization performance score of the prawn shape measurement model according to the second estimation result.
Generalization performance refers to how well a machine learning algorithm adapts to previously unseen samples. Checking the generalization performance of the prawn shape measurement model on the test pictures verifies the model's quality, so that the optimal prawn shape measurement model can be screened out.
And S108, determining a final prawn shape measurement model according to the generalization performance score, and identifying the prawn according to the final prawn shape measurement model.
That is, after the final prawn shape measurement model is determined, prawn identification can be performed according to it. Specifically, a prawn picture can be obtained and input into the final prawn shape measurement model, which outputs the target areas and the corresponding prawn shape measurement data according to the picture. The prawn shape measurement data include, but are not limited to, prawn head data, prawn carapace data, prawn tail data, and the like.
According to the prawn form measuring method based on the convolutional neural network of the embodiment of the invention, a prawn sample and a reference object are photographed, and the photographed picture is normalized to obtain a sample picture; then, the target area is calibrated according to the sample picture, a description file corresponding to the target area is generated, and the description file is associated with the sample picture; then, a data set is established according to each sample picture and its corresponding description file, wherein the data set is divided into a training set, a verification set, and a test set; after the data set is divided, a reference model is trained according to the training set, the verification set is input into the reference model to generate a first estimation result, and the parameters of the reference model are adjusted according to the first estimation result to generate the prawn form measurement model; finally, the sample pictures in the test set are preprocessed to generate test pictures, the test pictures are input into the prawn form measurement model to generate a second estimation result, a generalization performance score of the prawn form measurement model is generated according to the second estimation result, the final prawn form measurement model is determined according to the generalization performance score, and prawn form measurement is performed according to the final prawn form measurement model. Efficient and accurate measurement of prawn morphological parameters is thereby achieved, the manpower and material resources required for morphological parameter measurement during prawn breeding are saved, and the accuracy of prawn breeding scheme selection is ensured.
Fig. 2 is a schematic flow diagram of a prawn morphology measurement method based on a convolutional neural network according to another embodiment of the present invention, and as shown in fig. 2, the prawn morphology measurement method based on a convolutional neural network includes the following steps:
s201, shooting a prawn sample and a reference object, and normalizing the shot picture to obtain a sample picture.
S202, manually calibrating the target area according to the sample picture, generating an xml file corresponding to the target area, and associating the xml file with the sample picture.
There are various ways to manually calibrate the target area according to the prawn picture.
As an example, the target boxes of the target area may be manually marked and a corresponding label may be marked for each target box. The label includes, but is not limited to, category information of each target area and category information of each sample picture. Therefore, different types of sample pictures can be conveniently and respectively stored.
S203, establishing a data set according to each sample picture and the corresponding xml file, wherein the data set is divided into a training set, a verification set and a test set.
And S204, extracting the image characteristics of the sample pictures in the training set to generate characteristic pictures, and associating the characteristic pictures with the sample pictures.
There are various ways to extract the image features of the sample pictures in the training set. For example, the image features of a sample picture can be extracted with histogram of oriented gradients (HOG) features.
As an example, as shown in fig. 6, the extraction of image features is performed by a ZF network. Wherein, the ZF network comprises 5 convolution layers and 2 full-connection layers.
Convolution is a common calculation. To describe the convolution calculation clearly, we first number each pixel of the image, using $p_{i,j}$ to denote the element in row $i$, column $j$ of the image. Each weight of the convolution kernel is numbered, with $w_{k,m}$ denoting the weight in row $k$, column $m$, and $b$ denoting the bias term of the kernel; the kernel size is $n \times n$. We use $g_{i,j}$ to denote the element in row $i$, column $j$ of the output matrix, and $f$ to denote the activation function (the ReLU function is typically chosen). The convolution is then calculated with the following formula:

$$g_{i,j} = f\left(\sum_{k=1}^{n}\sum_{m=1}^{n} w_{k,m}\, p_{i+k-1,\; j+m-1} + b\right)$$
As shown in fig. 7, we illustrate how to compute the convolution with a simple example, and then abstract out some important concepts and calculations of convolutional layers. Assume an image of size 5 × 5 is convolved with a 3 × 3 convolution kernel (filter), and a 3 × 3 feature map is desired. The output size follows the standard relation between image width $W$, kernel size $F$, padding $P$, and stride $S$; with $P = 0$ and $S = 1$ it gives $(5 - 3)/1 + 1 = 3$:

$$W_{out} = \frac{W - F + 2P}{S} + 1$$
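The formula above can be checked numerically. The following NumPy sketch implements the valid-mode convolution as indexed here (technically a cross-correlation, which is what CNN "convolution" layers compute); the averaging kernel is only an example:

```python
import numpy as np

def conv2d(image, kernel, bias=0.0, f=lambda x: np.maximum(x, 0)):
    """g[i,j] = f(sum over k,m of w[k,m] * p[i+k, j+m] + b), ReLU by default."""
    n = kernel.shape[0]
    h = image.shape[0] - n + 1
    w = image.shape[1] - n + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = f(np.sum(kernel * image[i:i + n, j:j + n]) + bias)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # the 5 x 5 example image
kernel = np.ones((3, 3)) / 9.0                     # a 3 x 3 averaging filter
print(conv2d(image, kernel).shape)                 # -> (3, 3) feature map
```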
As shown in fig. 8, the main role of the pooling layer placed after the convolutional layer is downsampling: it discards unimportant sample data in the output, further reducing the number of parameters. Max pooling takes the maximum value within each n × n window as the sampled value.
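A corresponding sketch of non-overlapping n × n max pooling:

```python
import numpy as np

def max_pool(feature, n=2):
    """Take the maximum inside each non-overlapping n x n window."""
    h, w = feature.shape[0] // n, feature.shape[1] // n
    return feature[:h * n, :w * n].reshape(h, n, w, n).max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(fm))        # 2 x 2 map of window maxima
```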
Each neuron in the fully connected layer is connected to all neurons of the previous layer and performs the final classification; the preceding layers mainly perform feature extraction, which is why this layer is placed at the end of the pipeline.
And S205, training a region generation network according to the feature picture to obtain all candidate regions in the feature picture and the possibility score of each candidate region.
As an example, as shown in fig. 9, the input image is passed through several convolutional layers to obtain a feature image, and candidate regions are then generated from that feature image. Specifically, a 3 × 3 sliding window first converts the local feature image into a low-dimensional feature (256 dimensions; "low" here is relative not to the window size but to the size of the largest convolution kernel, i.e., the product of its width and height, compared with which 256 dimensions is clearly smaller). For k anchors in total, the cls prediction layer has 2k outputs (whether each anchor is a candidate region: each anchor is scored as target or background), and the reg layer has 4k outputs (the k boxes corresponding to the candidate regions, each anchor having 4 offsets [x, y, w, h]).
As shown in fig. 10, in order to specify anchor sizes and positions, Faster RCNN only needs to find a rough location first; the exact position and size are determined later on this basis. Faster RCNN fixes three aspects: first, the scale variation is fixed, with three scales in total; second, the aspect-ratio variation is fixed, with three aspect ratios in total; third, the sampling scheme is fixed, i.e., sampling follows the first two points. The method therefore does not need to first train a target-detection network and then slide a window over the whole picture as earlier methods did, which reduces task complexity and speeds up training. Each anchor corresponds to a rectangular box sharing its center with the sliding window but differing in size and aspect ratio, and this feature-extraction and region-proposal scheme has the advantage of translation invariance.
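To make the fixed scales and aspect ratios concrete, the following sketch generates k = 9 anchors (three scales × three aspect ratios) at one sliding-window center; the specific scale values (128, 256, 512) are assumptions for illustration, since this embodiment does not list them:

```python
import numpy as np

def anchors_at(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return k = 9 boxes (x1, y1, x2, y2) sharing the center (cx, cy)."""
    boxes = []
    for s in scales:                 # fixed scale changes (three scales)
        for r in ratios:             # fixed aspect ratios (three ratios)
            w = s * np.sqrt(r)       # keep area ~ s*s while w/h equals r
            h = s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

print(anchors_at(100, 100).shape)    # (9, 4): 2k cls and 4k reg outputs follow
```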
After the region generation network (RPN) is designed, it must be trained. To divide the positive and negative samples, consider each picture in the training set, whose actual target boxes have been marked manually in advance, and perform the following operations (an IoU-labelling sketch in code follows this list):
(1) for each manually calibrated target box, compute the anchor with the largest overlap ratio and mark it as a positive sample; this guarantees that every label corresponds to at least one positive anchor;
(2) for the remaining anchors, if the overlap ratio with some label box exceeds 0.7, mark the anchor as a positive sample (anchors and labels are in a many-to-one relationship); if the ratio is below 0.3, mark the anchor as a negative sample;
(3) discard the anchors remaining after the first two steps;
(4) discard anchors that cross image boundaries.
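The sketch referenced above illustrates the 0.7/0.3 overlap rule with intersection-over-union; the box format and helper names are assumptions:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def label_anchors(anchors, gt_boxes, hi=0.7, lo=0.3):
    """+1 positive, 0 negative, -1 discarded, per steps (1)-(3) above.
    Step (4), discarding anchors that cross image borders, is omitted."""
    overlaps = np.array([[iou(a, g) for g in gt_boxes] for a in anchors])
    labels = np.full(len(anchors), -1)
    labels[overlaps.max(axis=1) > hi] = 1      # rule (2): IoU > 0.7 -> positive
    labels[overlaps.max(axis=1) < lo] = 0      # rule (2): IoU < 0.3 -> negative
    labels[overlaps.argmax(axis=0)] = 1        # rule (1): best anchor per label
    return labels
```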
After the positive and negative sample sets are divided, formal training of the region generation network can begin. As with other networks, the loss function of the RPN combines the classification error with the window-position deviation of positive samples; once the loss function is defined, the RPN can be trained with the conventional BP algorithm.
Loss function of RPN:
$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i,\,p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^{*}\, L_{reg}(t_i,\,t_i^*)$$

where $p_i$ denotes the predicted probability that anchor $i$ is an object (versus not an object), and $t_i$ denotes the 4 coordinate values $(x, y, w, h)$ of the predicted output for anchor $i$; $p_i^* = 1$ when the anchor is a positive sample and $0$ when it is a negative sample, and $t_i^*$ denotes the coordinates of the manually marked target region associated with the positive anchor. Here $x, y, w, h$ denote the center coordinates, width, and height of a box, and $x$, $x_a$, $x^*$ denote the predicted box, the anchor box, and the ground-truth box respectively (likewise for $y$, $w$, $h$). $t_i$ denotes the offset of the predicted box relative to the anchor box, and $t_i^*$ the offset of the ground-truth box relative to the anchor box; the learning objective is naturally to bring the former close to the latter.
And S206, training a reference model according to the feature picture, all the candidate regions and the possibility score of each candidate region.
As an example, the parameters may be initialized first, and the data may be processed hierarchically; then training the composition of the data, and finally, adjusting the data classification and the position.
Specifically, the method comprises the following steps:
(a) Parameter initialization
As shown in fig. 11, the network has its tail removed and is trained as a 1000-class classifier on ImageNet. The resulting parameters serve as the initialization parameters of the corresponding layers; the remaining parameters are initialized randomly. Here stages 1 to 5 are the convolution and pooling steps, and the conv map is the feature picture generated after convolution and pooling.
(b) Hierarchical data
During tuning training, each mini-batch first takes N complete pictures and then adds R candidate boxes selected from those N pictures. The R candidate boxes can reuse the network features of the first 5 stages computed for the N pictures. In practice, N = 2 and R = 128 are selected.
(c) Training data composition
N full pictures are flipped horizontally with 50% probability. The configuration of the R candidate frames is as shown in fig. 12.
(d) Classification and location adjustment
Data structure
As shown in fig. 13, the features of the fifth stage are input into two parallel fully-connected layers (called multi-task).
The cls_score layer is used for classification and outputs a (K+1)-dimensional array p representing the probabilities of belonging to the K classes and the background.
The bbox_predict layer is used for adjusting the positions of candidate regions and outputs a 4 × K-dimensional array t representing the translation and scaling parameters to be applied when the candidate region belongs to each of the K classes.
The cost function is:
The loss_cls layer evaluates the classification cost, determined by the probability corresponding to the true class $u$:

$$L_{cls} = -\log p_u$$

The loss_bbox layer evaluates the detection-box localization cost, comparing the difference between the prediction parameter $t^u$ corresponding to the true class and the true translation-scaling parameter $v$:

$$L_{loc} = \sum_{i \in \{x, y, w, h\}} g\!\left(t_i^{u} - v_i\right)$$

Here $g$ is the smooth L1 error, which is insensitive to outliers:

$$g(x) = \begin{cases} 0.5\,x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$

The total cost is a weighted sum of the two; if the classification result is background, the localization cost is not considered:

$$L = L_{cls} + \lambda\,[u \geq 1]\, L_{loc}$$
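A NumPy sketch of these two cost terms for a single candidate region (classification cost plus smooth-L1 localization cost, the latter skipped when the true class u is the background, u = 0):

```python
import numpy as np

def smooth_l1(x):
    """g(x): 0.5 x^2 inside [-1, 1], |x| - 0.5 outside (outlier-insensitive)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def multitask_loss(p, u, t_u, v, lam=1.0):
    """L = L_cls + lambda * [u >= 1] * L_loc for one candidate region."""
    l_cls = -np.log(p[u])                     # -log of true-class probability
    l_loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() if u >= 1 else 0.0
    return l_cls + lam * l_loc

p = np.array([0.1, 0.8, 0.1])                 # background + 2 object classes
print(multitask_loss(p, u=1, t_u=[0.1, 0.2, 0.0, -0.1], v=[0, 0, 0, 0]))
```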
Full-connection layer acceleration
Classification and position adjustment are both implemented by fully connected (fc) layers. Let the input data be $x$, the output data be $y$, and the fully connected layer's parameter matrix be $W$ of size $u \times v$. The forward propagation is:

$$y = Wx$$

The computational complexity is $u \times v$.
Performing SVD on $W$ and approximating it with the first $t$ singular values:

$$W = U \Sigma V^{T} \approx U(:, 1\!:\!t)\; \Sigma(1\!:\!t, 1\!:\!t)\; V(:, 1\!:\!t)^{T}$$

The original forward propagation is thus decomposed into two steps:

$$y = Wx = U \cdot \left(\Sigma V^{T} \cdot x\right) = U \cdot z$$

The computational complexity becomes $u \times t + v \times t$.
In implementation, this is equivalent to splitting one fully connected layer into two layers connected by a low-dimensional vector, as shown in fig. 14.
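The factorization is easy to verify with NumPy; the matrix sizes below are arbitrary assumptions:

```python
import numpy as np

u_dim, v_dim, t = 1024, 4096, 256
W = np.random.randn(u_dim, v_dim)       # fc weight matrix, u x v
x = np.random.randn(v_dim)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_t = U[:, :t]                          # u x t factor
Z = np.diag(s[:t]) @ Vt[:t, :]          # t x v : the inserted low-dim layer

y_full = W @ x                          # cost ~ u * v multiplications
y_fast = U_t @ (Z @ x)                  # cost ~ v * t + u * t multiplications
print(np.linalg.norm(y_full - y_fast) / np.linalg.norm(y_full))  # approx. error
```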
S207, inputting the verification set into a reference model to generate a first estimation result.
S208, judging whether the first estimation result is consistent with the description file associated with the corresponding sample picture so as to obtain the accuracy of the first estimation result, and judging whether the accuracy of the first estimation result reaches a preset accuracy threshold value.
S209, if the accuracy of the first estimation result does not reach the preset accuracy threshold, adjusting the parameters of the reference model so as to carry out iterative training on the reference model according to the verification set until the reference model with the accuracy of the first estimation result reaching the preset accuracy threshold is used as the prawn shape measurement model.
There are various ways to verify the reference model by the verification set, for example: leave a cross validation mode, K-fold cross validation mode, etc.
As an example, as shown in fig. 15, the image features of the sample pictures are extracted through the ZF network to generate feature pictures, and the resulting parameters are recorded as parameter set $S_0$. Iterative training of the region generation network and the Fast RCNN model then begins.
First, the region generation network is trained from parameter set $S_0$, and candidate regions are extracted with it. The Fast RCNN model is then trained on $S_0$ and those candidate regions, and the resulting parameter set is recorded as $S_1$. Next, the region generation network is trained from $S_1$ and its candidate regions are extracted; the Fast RCNN model is then trained on $S_1$ and the new candidate regions, and the resulting parameter set is recorded as $S_2$. In this way the region generation network and the Fast RCNN model are iteratively trained twice, completing the training of both.
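A control-flow sketch of this two-round alternating schedule; train_rpn, extract_proposals, and train_fast_rcnn are hypothetical stand-ins for the real training routines, shown only to make the ordering of the steps explicit:

```python
# Minimal stand-ins: they just thread a parameter "set" through the schedule
# so the control flow can be executed and inspected.
def train_rpn(params):
    return {**params, "rpn_rounds": params.get("rpn_rounds", 0) + 1}

def extract_proposals(params):
    return f"proposals_from_round_{params['rpn_rounds']}"

def train_fast_rcnn(params, proposals):
    return {**params, "frcnn_rounds": params.get("frcnn_rounds", 0) + 1}

params = {"name": "S0"}              # S0: ZF-network feature-extraction weights
for round_idx in (1, 2):             # two alternating rounds, as in the text
    params = train_rpn(params)                   # train RPN from current set
    proposals = extract_proposals(params)        # extract candidate regions
    params = train_fast_rcnn(params, proposals)  # train Fast RCNN -> S1, S2
    params["name"] = f"S{round_idx}"
print(params)                        # after two rounds: parameter set S2
```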
S210, removing the reference object pixels of the sample pictures in the test set to generate a first preprocessed picture.
There are various ways to remove the reference pixels of the sample pictures in the test set, for example, cutting out the reference part directly by using a drawing tool.
As an example, removing the pixels of the reference object (the rectangular scale) specifically includes the following. First, the sample picture is converted into a single-channel image; a binarization threshold for the gray-level image is obtained with the maximum between-class variance (Otsu) method, a binarized image is obtained with a binarization function, and morphological operations are applied to smooth the connected domains, followed by Gaussian smoothing. Contour detection is then performed on the binarized image to obtain the contour points $K_i$ of each connected domain. For each set of contour points $K_i$, a polygon $K_c$ is obtained with a polygonal contour-approximation function, and the area of $K_c$ is calculated,

$$S(K_c) = \frac{1}{2}\left|\sum_{i=1}^{m}\left(x_i\, y_{i+1} - x_{i+1}\, y_i\right)\right|,$$

together with the area $S(I)$ of the entire binarized image. It is then judged whether $K_c$ has four sides; if so, it is further judged whether

$$\frac{S(K_c)}{S(I)}$$

lies between 0.2 and 0.8, and if that judgment is also yes, whether the cosine values of the four corner angles of $K_c$ are all smaller than a preset threshold. If they are, the target is determined to be the rectangular scale, and the pixels corresponding to the rectangular scale are removed.
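A sketch of this scale-detection pipeline with OpenCV (version 4 assumed); the 0.2 to 0.8 area-ratio test mirrors the text, while the kernel sizes and the corner-cosine threshold are assumptions:

```python
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    """Cosine of the angle at p1 formed by points p0-p1-p2."""
    d1, d2 = (p0 - p1).astype(float), (p2 - p1).astype(float)
    return abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

def find_scale(img_bgr, cos_thresh=0.3):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)        # single channel
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu threshold
    bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    bw = cv2.GaussianBlur(bw, (5, 5), 0)                    # Gaussian smoothing
    _, bw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY)  # re-binarize
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    img_area = bw.shape[0] * bw.shape[1]
    for c in contours:
        poly = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(poly) != 4:                                  # four sides?
            continue
        if not 0.2 < cv2.contourArea(poly) / img_area < 0.8:  # area-ratio test
            continue
        pts = poly.reshape(4, 2)
        if all(angle_cos(pts[i - 1], pts[i], pts[(i + 1) % 4]) < cos_thresh
               for i in range(4)):                          # near-right corners
            return poly                                     # the rectangular scale
    return None
```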
And S211, inputting the first preprocessed picture into the prawn measuring model to generate a second preprocessed picture.
That is to say, the first preprocessed picture is input into the prawn measuring model, and the prawn measuring model generates a second preprocessed picture which is calibrated with target areas and the possibility scores of the target areas according to the first preprocessed picture.
S212, horizontally correcting the second preprocessed picture according to the target area in the second preprocessed picture to generate a test picture.
That is, the second preprocessed picture is a picture marked with target areas and the possibility score of each target area. A symmetry axis of the object is determined from the target areas, and the horizontal inclination angle of the prawn is then calculated from the inclination angle of that symmetry axis. Whether the prawn's horizontal inclination angle exceeds a preset angle threshold is judged, and if so, the second preprocessed picture is horizontally corrected.
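A sketch of the horizontal-correction step, assuming the symmetry-axis inclination angle has already been computed in degrees; the 5-degree threshold is an illustrative assumption:

```python
import cv2

def correct_horizontal(img, axis_angle_deg, angle_thresh_deg=5.0):
    """Rotate the picture so the prawn's symmetry axis becomes horizontal,
    but only when its inclination exceeds the preset threshold."""
    if abs(axis_angle_deg) <= angle_thresh_deg:
        return img
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), axis_angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```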
And S213, inputting the test picture into the prawn shape measurement model to generate a second estimation result, and generating the generalization performance score of the prawn shape measurement model according to the second estimation result.
As an example, the test picture is input into the prawn shape measurement model to generate the second estimation result, specifically:
(1) The test picture is identified with the trained prawn shape measurement model, obtaining the specific rectangular positions P1, P2, and P3 of the prawn's head, cephalothorax, and tail in the test picture. If the shrimp head is on the left, the midpoints of the right edges of P1, P2, and P3 give the sought coordinates of the eye base (x0, y0), the first abdominal segment (x1, y1), and the tail end (x2, y2); otherwise the opposite edges are used. The height of P2 is the coordinate-space length C of the carapace width.
(2) From the earlier detection of the rectangular scale, its actual width $W$ and height $H$ are known and its coordinate-space width $W_S$ and height $H_S$ are obtained; the width ratio $\alpha_W$ and height ratio $\alpha_H$ of the two are calculated respectively:

$$\alpha_W = \frac{W}{W_S}, \qquad \alpha_H = \frac{H}{H_S}$$

(3) According to $\alpha_W$ and $\alpha_H$, the actual carapace (cephalothorax) length $C_R$, carapace width $C_H$, and body length $L$ are calculated:

$$C_R = \sqrt{\left(\alpha_W (x_1 - x_0)\right)^2 + \left(\alpha_H (y_1 - y_0)\right)^2}$$

$$C_H = C \cdot \alpha_H$$

$$L = \sqrt{\left(\alpha_W (x_2 - x_0)\right)^2 + \left(\alpha_H (y_2 - y_0)\right)^2}$$
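A sketch stringing these formulas together; all coordinate values and scale measurements in the example call are hypothetical:

```python
import math

def prawn_measurements(eye, abdomen, tail, C,
                       scale_actual_wh, scale_coord_wh):
    """Return (carapace length C_R, carapace width C_H, body length L)."""
    (W, H), (W_s, H_s) = scale_actual_wh, scale_coord_wh
    a_w, a_h = W / W_s, H / H_s                # width and height ratios
    def dist(p, q):                            # per-axis scaled distance
        return math.hypot(a_w * (q[0] - p[0]), a_h * (q[1] - p[1]))
    C_R = dist(eye, abdomen)                   # eye base -> 1st abdominal segment
    L = dist(eye, tail)                        # eye base -> tail end
    C_H = C * a_h                              # carapace width
    return C_R, C_H, L

print(prawn_measurements(eye=(120, 200), abdomen=(340, 210), tail=(560, 205),
                         C=95, scale_actual_wh=(100.0, 20.0),
                         scale_coord_wh=(400, 80)))
```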
and S214, determining a final prawn shape measurement model according to the generalization performance score, and performing prawn shape measurement according to the final prawn shape measurement model.
In summary, according to the prawn shape measurement method based on the convolutional neural network of the embodiment of the invention, firstly, the image features of the sample picture are extracted through the ZF network, and a feature picture is generated; then, training a region generation network according to the feature picture, and training a reference model according to the feature picture and the region generation network; judging the accuracy of the estimated result output by the reference model through a verification set, adjusting the parameters of the reference model according to the accuracy of the estimated result, and determining the reference model with the accuracy meeting the requirement as a prawn form measurement model; finally, testing the generalization performance of the prawn shape measurement model through the test picture, determining a final prawn shape measurement model according to the generalization performance, and identifying the prawn through the final prawn shape measurement model; therefore, the high-efficiency and accurate measurement of the morphological parameters of the prawns is realized, the manpower and material resources required by the morphological parameter measurement in the breeding process of the prawns are saved, and the accuracy of the selection of the breeding scheme of the prawns is ensured.
In order to implement the foregoing embodiments, an embodiment of the present invention provides a computer-readable storage medium, on which a prawn shape measurement program based on a convolutional neural network is stored, and when being executed by a processor, the prawn shape measurement program based on the convolutional neural network implements the prawn shape measurement method based on the convolutional neural network as described above.
In order to implement the foregoing embodiment, the terminal device provided in an embodiment of the present invention includes a memory, a processor, and a prawn shape measurement program based on a convolutional neural network, where the prawn shape measurement program is stored in the memory and is capable of running on the processor, and when the processor executes the prawn shape measurement program based on the convolutional neural network, the prawn shape measurement method based on the convolutional neural network is implemented.
Fig. 3 is a schematic block diagram of a shrimp morphology measuring apparatus based on a convolutional neural network according to an embodiment of the present invention, and as shown in fig. 3, the shrimp morphology measuring apparatus based on a convolutional neural network includes: the system comprises an acquisition module 10, a calibration module 20, a data processing module 30, a model training module 40, a model verification module 50, a test picture generation module 60, a model test module 70 and an identification module 80.
The acquisition module 10 is configured to take a prawn sample and a reference object, and normalize the taken picture to obtain a sample picture.
The calibration module 20 is configured to calibrate the target area according to the sample picture, generate a description file corresponding to the target area, and associate the description file with the sample picture.
And the data processing module 30 is configured to establish a data set according to each sample picture and the corresponding description file, where the data set is divided into a training set, a verification set, and a test set.
And the model training module 40 is used for training the reference model according to the training set.
And the model verification module 50 is used for inputting the verification set into the reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate the prawn form measurement model.
And a test picture generating module 60, configured to preprocess the sample picture in the test set to generate a test picture.
And the model testing module 70 is used for inputting the test picture into the prawn shape measurement model to generate a second estimation result, and generating the generalization performance score of the prawn shape measurement model according to the second estimation result.
And the identification module 80 is used for determining a final prawn shape measurement model according to the generalization performance score and performing prawn shape measurement according to the final prawn shape measurement model.
According to the prawn form measuring device based on the convolutional neural network, provided by the embodiment of the invention, firstly, a prawn sample and a reference object are shot through an acquisition module, and a shot picture is normalized to obtain a sample picture; then, the calibration module calibrates the target area according to the sample picture, generates a description file corresponding to the target area, and associates the description file with the sample picture; then, the data processing module establishes a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set; after the data set division is completed, the model training module trains a reference model according to the training set; the model verification module inputs the verification set into a reference model to generate a first estimation result, and adjusts parameters of the reference model according to the first estimation result to generate a prawn form measurement model; then, a test picture generation module preprocesses the sample picture in the test set to generate a test picture; the model testing module inputs the testing picture into the prawn shape measuring model to generate a second estimation result, and generates a generalization performance score of the prawn shape measuring model according to the second estimation result; and finally, the identification module determines a final prawn shape measurement model according to the generalization performance score and performs prawn shape measurement according to the final prawn shape measurement model. Therefore, the high-efficiency and accurate measurement of the morphological parameters of the prawns is realized, the manpower and material resources required in the breeding process of the prawns are saved, and the accuracy of the selection of the breeding scheme of the prawns is ensured.
It should be noted that the foregoing explanation of the convolutional neural network-based prawn morphology measurement method described in the embodiment of fig. 1 also applies to the convolutional neural network-based prawn morphology measurement device of this embodiment, and is not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through intervening media; and it may be an internal communication between two elements or an interaction between them. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact or in indirect contact through an intermediate. Moreover, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply mean that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," "some examples," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic references do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples. Furthermore, those skilled in the art may combine the different embodiments or examples, and the features of different embodiments or examples, described in this specification, provided they do not contradict one another.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A convolutional neural network-based prawn morphology measurement method, characterized by comprising the following steps:
photographing a prawn sample and a reference object, and normalizing the captured picture to obtain a sample picture;
calibrating a target area according to the sample picture, generating a description file corresponding to the target area, and associating the description file with the sample picture;
establishing a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set;
training a reference model according to the training set;
inputting the verification set into a reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate a prawn morphology measurement model;
preprocessing the sample pictures in the test set to generate test pictures;
inputting the test picture into the prawn morphology measurement model to generate a second estimation result, and generating a generalization performance score of the prawn morphology measurement model according to the second estimation result;
determining a final prawn morphology measurement model according to the generalization performance score, and performing prawn morphology measurement according to the final prawn morphology measurement model;
wherein the prawn morphology comprises: the cephalothorax (carapace) length, body weight and body length of the prawn.
2. The convolutional neural network-based prawn morphology measurement method of claim 1, wherein adjusting parameters of a reference model according to the first estimation result to generate a prawn morphology measurement model comprises:
judging whether the first estimation result is consistent with a description file associated with the corresponding sample picture so as to obtain the accuracy of the first estimation result, and judging whether the accuracy of the first estimation result reaches a preset accuracy threshold value;
and if the accuracy of the first estimation result does not reach the preset accuracy threshold, adjusting the parameters of the reference model and iteratively training it according to the verification set until the accuracy of the first estimation result reaches the preset accuracy threshold, whereupon that reference model is taken as the prawn morphology measurement model.
3. The convolutional neural network-based prawn morphology measurement method of claim 1, wherein the training of the reference model according to the training set comprises:
extracting image features of sample pictures in the training set to generate feature pictures, and associating the feature pictures with the sample pictures;
training a region proposal network according to the feature pictures to obtain all candidate regions in each feature picture and a likelihood score for each candidate region;
and training the reference model according to the feature pictures, all the candidate regions and the likelihood score of each candidate region.
4. The convolutional neural network-based prawn morphology measurement method of claim 3, wherein extracting image features of sample pictures in the training set to generate feature pictures comprises:
performing convolution calculation on the sample picture through a ZF network to extract the feature information of the sample picture;
and performing pooling processing on the feature information to generate a feature picture.
5. The convolutional neural network-based prawn morphology measurement method of claim 1, wherein preprocessing the sample pictures in the test set to generate a test picture comprises:
removing reference object pixels of the sample pictures in the test set to generate a first preprocessed picture;
inputting the first preprocessed picture into the prawn morphology measurement model to generate a second preprocessed picture;
and horizontally correcting the second preprocessed picture according to the target area in the second preprocessed picture to generate a test picture.
6. The convolutional neural network-based prawn morphology measurement method according to any one of claims 1 to 5, wherein the reference model is a Faster RCNN model.
7. The convolutional neural network-based prawn morphology measurement method of claim 1, wherein the data set is partitioned to generate a combined training-and-validation (trainval) text file, a training text file, a validation text file and a test text file.
8. A computer-readable storage medium, on which a convolutional neural network-based prawn morphology measurement program is stored, which, when executed by a processor, implements the convolutional neural network-based prawn morphology measurement method according to any one of claims 1 to 7.
9. A terminal device, characterized by comprising a memory, a processor, and a convolutional neural network-based prawn morphology measurement program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the convolutional neural network-based prawn morphology measurement method according to any one of claims 1 to 7.
10. A convolutional neural network-based prawn morphology measurement device, characterized by comprising:
the acquisition module is used for photographing a prawn sample and a reference object and normalizing the captured picture to obtain a sample picture;
the calibration module is used for calibrating a target area according to the sample picture, generating a description file corresponding to the target area, and associating the description file with the sample picture;
the data processing module is used for establishing a data set according to each sample picture and the corresponding description file, wherein the data set is divided into a training set, a verification set and a test set;
the model training module is used for training a reference model according to the training set;
the model verification module is used for inputting the verification set into a reference model to generate a first estimation result, and adjusting parameters of the reference model according to the first estimation result to generate a prawn morphology measurement model;
the test picture generation module is used for preprocessing the sample pictures in the test set to generate test pictures;
the model testing module is used for inputting the test picture into the prawn morphology measurement model to generate a second estimation result and generating a generalization performance score of the prawn morphology measurement model according to the second estimation result;
the recognition module is used for determining a final prawn morphology measurement model according to the generalization performance score and performing prawn morphology measurement according to the final prawn morphology measurement model;
wherein the prawn morphology comprises: the cephalothorax (carapace) length, body weight and body length of the prawn.
CN201810630720.4A 2018-06-19 2018-06-19 Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device Expired - Fee Related CN108921057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810630720.4A CN108921057B (en) 2018-06-19 2018-06-19 Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810630720.4A CN108921057B (en) 2018-06-19 2018-06-19 Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device

Publications (2)

Publication Number Publication Date
CN108921057A CN108921057A (en) 2018-11-30
CN108921057B (en) 2021-06-01

Family

ID=64419916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810630720.4A Expired - Fee Related CN108921057B (en) 2018-06-19 2018-06-19 Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device

Country Status (1)

Country Link
CN (1) CN108921057B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009055B (en) * 2019-04-15 2020-12-29 中国计量大学 Soft-shell shrimp feature extraction method based on improved AlexNet
CN110000116B (en) * 2019-04-19 2021-04-23 福建铂格智能科技股份公司 Free-fall fruit and vegetable sorting method and system based on deep learning
CN110220466A (en) * 2019-07-05 2019-09-10 中国科学院海洋研究所 A method of body weight gain is estimated based on prawn eyeball diameter
CN110347134A (en) * 2019-07-29 2019-10-18 南京图玩智能科技有限公司 A kind of AI intelligence aquaculture specimen discerning method and cultivating system
CN111597476B (en) * 2020-05-06 2023-08-22 北京金山云网络技术有限公司 Image processing method and device
CN111397709A (en) * 2020-05-18 2020-07-10 扬州大学 Rapid measurement method for thousand-grain weight of wheat
CN111968096B (en) * 2020-08-21 2024-01-02 青岛海米飞驰智能科技有限公司 Method and system for detecting white spot syndrome virus of prawns based on surface features
CN111985477B (en) * 2020-08-27 2024-06-28 平安科技(深圳)有限公司 Animal on-line core claim method, device and storage medium based on monocular camera
CN112070761B (en) * 2020-09-18 2022-09-16 福州大学 Prawn freshness nondestructive testing method based on deep learning
CN112232978B (en) * 2020-10-20 2022-11-04 青岛丰禾星普科技有限公司 Aquatic product length and weight detection method, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005027015A2 (en) * 2003-09-10 2005-03-24 Bioimagene, Inc. Method and system for quantitatively analyzing biological samples

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944231A (en) * 2010-08-19 2011-01-12 北京农业智能装备技术研究中心 Method for extracting wheatear morphological parameters
CN103801520A (en) * 2014-01-27 2014-05-21 浙江大学 Method and device for automatically carefully sorting and grading shrimps
CN105066885A (en) * 2015-07-11 2015-11-18 浙江大学宁波理工学院 Fish body dimension and weight rapid acquisition apparatus and acquisition method
CN105160400A (en) * 2015-09-08 2015-12-16 西安交通大学 L21 norm based method for improving convolutional neural network generalization capability
CN105389586A (en) * 2015-10-20 2016-03-09 浙江大学 Method for automatically detecting integrity of shrimp body based on computer vision
CN106469304A (en) * 2016-09-22 2017-03-01 西安理工大学 Handwritten signature location positioning method in bill based on depth convolutional neural networks
CN106504233A (en) * 2016-10-18 2017-03-15 国网山东省电力公司电力科学研究院 Image electric power widget recognition methodss and system are patrolled and examined based on the unmanned plane of Faster R CNN
CN107330451A (en) * 2017-06-16 2017-11-07 西交利物浦大学 Clothes attribute retrieval method based on depth convolutional neural networks
CN108009591A (en) * 2017-12-14 2018-05-08 西南交通大学 A kind of contact network key component identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Harbitz, A., et al. "Estimation of shrimp (Pandalus borealis) carapace length by image analysis." ICES Journal of Marine Science, Vol. 64, No. 5, 2007, pp. 939-944. *
Luo Yan. "Research on prawn size detection method based on machine vision technology." China Master's Theses Full-text Database, Information Science and Technology, No. 6, June 15, 2013, I138-1084. *

Also Published As

Publication number Publication date
CN108921057A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
CN108921057B (en) Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN110837870B (en) Sonar image target recognition method based on active learning
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN109583483B (en) Target detection method and system based on convolutional neural network
WO2020177432A1 (en) Multi-tag object detection method and system based on target detection network, and apparatuses
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN111160269A (en) Face key point detection method and device
CN111666855B (en) Animal three-dimensional parameter extraction method and system based on unmanned aerial vehicle and electronic equipment
CN111723691B (en) Three-dimensional face recognition method and device, electronic equipment and storage medium
CN111985376A (en) Remote sensing image ship contour extraction method based on deep learning
CN110909618B (en) Method and device for identifying identity of pet
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN110443279B (en) Unmanned aerial vehicle image vehicle detection method based on lightweight neural network
CN113420643B (en) Lightweight underwater target detection method based on depth separable cavity convolution
CN107766864B (en) Method and device for extracting features and method and device for object recognition
CN110163798B (en) Method and system for detecting damage of purse net in fishing ground
CN111882555B (en) Deep learning-based netting detection method, device, equipment and storage medium
CN111626241B (en) Face detection method and device
CN115937552A (en) Image matching method based on fusion of manual features and depth features
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN114358279A (en) Image recognition network model pruning method, device, equipment and storage medium
CN113674205A (en) Method and system for measuring human body based on monocular depth camera
CN115690546B (en) Shrimp length measuring method, device, electronic equipment and storage medium
CN112132137A (en) FCN-SPP-Focal Net-based method for identifying correct direction of abstract picture image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210601