CN115511803B - Broken rice detection method and system - Google Patents


Info

Publication number
CN115511803B
CN115511803B (application CN202211118842.8A)
Authority
CN
China
Prior art keywords
convolution
image
multiplied
rice
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211118842.8A
Other languages
Chinese (zh)
Other versions
CN115511803A (en)
Inventor
赵公方
李新奇
樊春晓
沈红艳
严金欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Anjiete Optoelectronic Co ltd
Original Assignee
Hefei Anjiete Optoelectronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Anjiete Optoelectronic Co ltd filed Critical Hefei Anjiete Optoelectronic Co ltd
Priority to CN202211118842.8A
Publication of CN115511803A
Application granted
Publication of CN115511803B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G01N 15/10: Investigating individual particles
    • G01N 21/255: Colour/spectral properties; details, e.g. specially adapted sources, lighting or optical systems
    • G01N 21/27: Colour/spectral properties using photo-electric detection
    • G06N 3/02, G06N 3/08: Neural networks; learning methods
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/70: Denoising; smoothing
    • G06T 7/11: Region-based segmentation
    • G06T 7/187: Segmentation involving region growing, region merging or connected component labelling
    • G06V 10/267: Segmentation of patterns by performing operations on regions, e.g. watersheds
    • G06V 10/30: Noise filtering
    • G06V 10/36: Applying a local operator, e.g. median filtering
    • G06V 10/764: Recognition using classification
    • G06V 10/82: Recognition using neural networks
    • G06T 2207/10024: Color image
    • G06T 2207/20032: Median filtering
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02A 40/10: Adaptation technologies in agriculture


Abstract

The invention discloses a method and a system for detecting broken rice, the method comprising the following steps: S1, image acquisition; S2, preprocessing the image acquired in step S1; S3, calculating the broken rice rate from the image data preprocessed in step S2. The invention uses a marker-controlled watershed algorithm based on the morphological gradient to segment adhered rice grains and count them. In the image preprocessing stage, the data set is expanded by image enhancement, which prevents over-fitting during model training. Because the method uses a full convolution network, the acquired image can be input into the network directly; complex feature extraction is avoided, the difficulty of data preprocessing and the complexity of algorithm design are reduced, misjudgment is avoided, and the recognition accuracy is improved.

Description

Broken rice detection method and system
Technical Field
The invention relates to the field of broken rice detection, in particular to a method and a system for detecting broken rice.
Background
The existing broken rice monitoring methods are based on machine vision: the rice falling from the discharge chute is irradiated by an external light source, an image is collected with a CCD camera, and the grain region is obtained through preprocessing operations including graying, background segmentation, binarization, and erosion/dilation. The detection criteria are as follows: the image is processed by edge detection and the average perimeter is calculated; a grain whose perimeter is smaller than three quarters of the average perimeter is judged to be broken rice. Likewise, the average area is calculated, and a grain smaller than three quarters of the average area is judged to be broken. Aspect-ratio detection is also used, since the aspect ratio of broken rice is far smaller than that of normal grains. These methods rely on manually selected target features, and different features describe brokenness with varying fidelity, so the accuracy of the final detection result is not ideal.
Disclosure of Invention
In order to solve the problem of misjudgment in traditional machine-learning detection, the invention provides a method and a system for detecting broken rice; the specific scheme is as follows:
a broken rice detection method comprises the following steps:
s1, acquiring at least 3 different types of rice grain images, wherein the visible spectrum image information of rice grains in three wave bands near 700nm of an R channel, 550nm of a G channel and 440nm of a B channel;
s2, preprocessing the image acquired in the step S1;
wherein the step of preprocessing comprises:
s21, rotating, overturning and adjusting the image data acquired in the step S1, so as to enhance the data and expand the sample data set; the rotation formula is y 2 =x 1 ×sin(θ)+y 1 ×cos(θ);x 2 =x 1 ×cos(θ)-y 1 X sin (θ), θ is the rotation angle, (x 1, y 1) is the current coordinate, (x 2, y 2) is the post-rotation coordinate,adjusting contrast uses gamma transformation s=cr γ S is the output gray level, r is the input gray level, and γ is the gamma value;
s22, carrying out gray scale processing on the data obtained in the step S21; gray=R0.299+G0.587+B0.114, gray is the Gray value of the point, R, G, B is the RGB three channel value of the point;
s23, removing noise points through median filtering, and taking a matrix of 3 multiplied by 3 when the median filtering is performed, and replacing the value of a center point with the median value of all pixel points of the matrix to obtain a new matrix;
s24, adopting a watershed algorithm based on morphological gradient of marker control to segment adhered rice, setting f as an input image, b as a structural element, the morphological gradient of the image being expressed as g,
Figure GDA0004212578030000021
for the expansion operation, the expansion formula is
Figure GDA0004212578030000022
Figure GDA0004212578030000023
For the corrosion operation, the corrosion formula is +.>
Figure GDA0004212578030000024
Then there is
Figure GDA0004212578030000025
The watershed algorithm converts the gray level image into a gradient image, and the conversion formula is as follows
Figure GDA0004212578030000026
Figure GDA0004212578030000027
gx is the gradient of the point in the x direction, gy is the gradient of the point in the y direction, M (x, y) is the gradient of the point, and the additional mark is utilized to search the mark in the original image, so that the algorithm is guided to divide.
S3, calculating the broken rice rate according to the image data preprocessed in the step S2;
the method comprises the following steps of:
s31, counting the total number of rice grains by using a journey marking algorithm;
s32, constructing a full convolution neural network to divide the particle area, and counting the number of particles;
the built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4;
wherein stage1: the input is the gray-level image; it passes through a residual convolution module C1-1, where C1-1 comprises 64 convolution kernels of 3×3; the convolved image is duplicated into two copies, one of which is passed to the residual convolution modules C2-1, C3-1 and C4-1, where C2-1 comprises 16 convolution kernels of 1×1, C3-1 comprises 1 convolution kernel of 1×1 and C4-1 comprises 1 convolution kernel of 1×1, and the image generated after convolution is output_1; the other copy passes through a hole convolution layer S1-1, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage2;
stage2: the input is the image passed from stage1; it passes through a residual convolution module C1-2, where C1-2 comprises 64 convolution kernels of 3×3, and is duplicated into two copies, one of which is passed to the residual convolution modules C2-2, C3-2 and C4-2, where C2-2 comprises 16 convolution kernels of 1×1, C3-2 comprises 1 convolution kernel of 1×1 and C4-2 comprises 1 convolution kernel of 1×1, and the image generated after convolution is output_2; the other copy passes through a hole convolution layer S1-2, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage3;
stage3: the input is the image passed from stage2; it passes through a residual convolution module C1-3, where C1-3 comprises 64 convolution kernels of 3×3, and is duplicated into two copies, one of which is passed to the residual convolution modules C2-3, C3-3 and C4-3, where C2-3 comprises 16 convolution kernels of 1×1, C3-3 comprises 1 convolution kernel of 1×1 and C4-3 comprises 1 convolution kernel of 1×1, and the image generated after convolution is output_3; the other copy passes through a hole convolution layer S1-3, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage4;
stage4: the input is the image passed from stage3; it passes through a residual convolution module C1-4, where C1-4 comprises 64 convolution kernels of 3×3; the convolved image is passed to the residual convolution modules C2-4 and C3-4, where C2-4 comprises 16 convolution kernels of 1×1 and C3-4 comprises 1 convolution kernel of 1×1, and the image generated after convolution is output_4;
in order to improve the segmentation accuracy, output_1, output_2, output_3 and output_4 are directly superimposed and then input into a residual convolution module C1, where C1 comprises 1 convolution kernel of 1×1; the final segmented image is Output;
s33, calculating the broken rice rate, wherein
Figure GDA0004212578030000031
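The rate in S33 is a single ratio; a minimal sketch (the function name and error handling are illustrative, not from the patent):

```python
def broken_rice_rate(broken_count: int, total_count: int) -> float:
    """Broken rice rate = broken grains / total grains * 100%."""
    if total_count <= 0:
        raise ValueError("total_count must be positive")
    return broken_count / total_count * 100.0
```

For example, 12 broken grains out of 200 detected grains gives a rate of 6.0%.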
Preferably, the specific steps of counting the broken rice in step S32 include:
s321, extracting rice grain characteristics through a residual deformable convolution module; the variable convolution formula is:
Figure GDA0004212578030000032
w(p n ) Is the value of the original image pixel point, x (p 0 +p n +Δp n ) Δp, which is the value of the convolution kernel pixel point n The additional addition of direction parameters, y (p) n ) Is a convolved value;
s322, connecting the images processed in the step S321 through a cavity convolution layer, wherein the expansion rate of the convolution kernels is 2, 4 and 8 respectively; the actual convolution kernel size of the cavity convolution layer is K=k+ (K-1) (r-1), K is the original convolution kernel size, and r is the cavity rate of the cavity convolution parameter;
s323, performing convolution operation on rice grain characteristics obtained in four stages included in the full convolution neural network by using 16 filters with the size of 1 multiplied by 1; the convolution formula is:
Figure GDA0004212578030000041
w(p n ) Is the value of the original image pixel point, x (p 0 +p n ) For the value of the convolution kernel pixel, y (p n ) Is a convolved value;
s324, fusing the 16 feature maps obtained in each stage by using a filter with the size of 1 multiplied by 1;
s325, comparing the fused characteristic diagram with a manually calibrated result, and calculating a loss value;
s326, fusing the feature graphs at different stages to obtain a final segmentation graph, so as to count the broken rice.
The invention also discloses a computer system, which comprises a processor and a storage medium, wherein the storage medium is provided with a computer program, and the processor reads and runs the computer program from the storage medium to execute the broken rice detection method according to any one of claims 1 to 2.
Preferably, the system for the broken rice detection method comprises a dark box, a CCD camera with an integrated image acquisition card, a computer, light sources and an object stage;
the light sources are arranged in an annular array at the top of the dark box, so as to provide illumination of different wave bands inside the box and avoid casting shadows;
the object stage is arranged at the center of the bottom of the dark box and is used for placing rice samples of different types; the CCD camera is arranged outside the dark box directly above the object stage and is used for imaging the rice samples irradiated by light sources of different wavelengths; the images are uploaded through the image acquisition card to the computer for further processing of the samples;
background paper is pasted on the inner side walls of the dark box to avoid specular reflection.
The invention has the beneficial effects that:
according to the invention, the camera bellows is used in the image acquisition process, the light source is placed at the top and the inner surface of the camera bellows is treated, so that misjudgment caused by strong reflection and shadow generated by a normal light source is reduced. Meanwhile, the invention uses a watershed algorithm based on morphological gradient of marker control to realize the segmentation of adhered rice and count the number of rice grains. In the image preprocessing stage, the data set is expanded by image enhancement, so that the phenomenon of over-fitting in the model training process is prevented. In the rice grain classification process, the full convolution neural network is used for replacing manual feature extraction, and feature graphs with different depths are fused, so that complicated feature algorithm selection and feature design are avoided, and the recognition accuracy is improved. Compared with a judging method using the circumference, the area and the length-width ratio, the method uses the full convolution network, and because the acquired image can be directly input into the network, complex feature extraction is avoided, the difficulty of data preprocessing is reduced, the complexity of algorithm design is reduced, misjudgment is avoided, and the recognition accuracy is improved. Compared with the method for identifying by using a neural network, the method for identifying the adhered rice by using the watershed algorithm based on the morphological gradient of the marker control in the pretreatment stage has the advantages that the number of rice grains is obtained, statistics of the broken rice rate is completed after identification, the detection rate is improved, and the method has practical application value in practical rice processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a system architecture according to the present invention;
fig. 3 is a block diagram of a full convolutional neural network of the present invention.
The reference numerals are as follows: 1. dark box; 2. light source; 3. CCD camera; 4. object stage.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a method for detecting broken rice comprises the following steps:
s1, image acquisition.
Wherein the acquired images include at least 3 different types of rice grains, and the visible-spectrum image information of the rice grains is collected in three wave bands: near 700 nm (R channel), 550 nm (G channel) and 440 nm (B channel). The experimental samples use several different kinds of rice grains to improve the robustness of the training result.
S2, preprocessing the image acquired in the step S1.
Wherein the step of preprocessing comprises:
s21, rotating, overturning and adjusting the image data acquired in the step S1.
In the image preprocessing process, all data come from experiments; the collection workload is large and the data set is insufficient, so the original image data set is first subjected to rotation, flipping and contrast-adjustment transformations for data enhancement, in order to expand the sample data set.
The rotation formulas are x2 = x1 × cos(θ) - y1 × sin(θ) and y2 = x1 × sin(θ) + y1 × cos(θ), where θ is the rotation angle, (x1, y1) is the current coordinate and (x2, y2) is the post-rotation coordinate. Contrast adjustment uses the gamma transformation s = c·r^γ, where s is the output gray level, r is the input gray level, γ is the gamma value used to adjust the degree of the gray-level conversion, and c is the gray scale coefficient used to stretch the image gray levels as a whole. This achieves data enhancement, expands the sample data set, improves the overall learning performance of the network, and avoids over-fitting of the deep convolution neural network.
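A minimal numpy sketch of the two transforms just described, the coordinate rotation and the gamma transformation of gray levels (function names and the [0, 255] normalization are illustrative assumptions, not from the patent):

```python
import math

import numpy as np

def rotate_point(x1, y1, theta):
    """Rotate (x1, y1) about the origin by angle theta (radians)."""
    x2 = x1 * math.cos(theta) - y1 * math.sin(theta)
    y2 = x1 * math.sin(theta) + y1 * math.cos(theta)
    return x2, y2

def gamma_transform(img, c=1.0, gamma=0.5):
    """s = c * r**gamma, applied to gray levels normalized to [0, 1]."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)
```

With gamma < 1 dark regions are brightened; with gamma > 1 they are darkened, which is how the contrast of the training images is varied.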
S22, performing gray scale processing on the data obtained in the step S21.
In the image preprocessing process the original image is a color image with 3 channels (red, green, blue); such images are soft, of moderate contrast and easy for the human eye to observe, but for processing they are first converted to gray scale. The gray formula is Gray = R × 0.299 + G × 0.587 + B × 0.114, where Gray is the gray value of the point and R, G, B are the values of the point's RGB channels.
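The weighted-average graying above can be sketched directly (an H×W×3 array in RGB channel order is assumed):

```python
import numpy as np

def to_gray(rgb):
    """Gray = R*0.299 + G*0.587 + B*0.114, computed per pixel."""
    rgb = rgb.astype(np.float64)
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return np.round(gray).astype(np.uint8)
```

The weights sum to 1.0, so a pure white pixel (255, 255, 255) maps to gray level 255.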
S23, removing noise points through median filtering: a 3×3 matrix is taken, and the value of the center point is replaced with the median of all pixel points in the matrix to obtain a new matrix.
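The 3×3 median filtering step can be sketched in plain numpy (reflective border padding is an assumption; the patent does not specify border handling):

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel by the median of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

A single salt-noise pixel is outvoted by its eight neighbours and removed, while flat regions pass through unchanged.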
S24, during the falling process of the rice grains some grains overlap, making the total number difficult to count, so the adhered rice must be further segmented. A marker-controlled watershed algorithm based on the morphological gradient is adopted to segment the adhered rice.
Let f be the input image and b the structuring element; the morphological gradient of the image is expressed as g:

g = (f ⊕ b) - (f ⊖ b),

where ⊕ denotes the dilation operation, whose formula is (f ⊕ b)(x, y) = max{ f(x - x', y - y') + b(x', y') }, and ⊖ denotes the erosion operation, whose formula is (f ⊖ b)(x, y) = min{ f(x + x', y + y') - b(x', y') }.

The watershed algorithm converts the gray-level image into a gradient image; the conversion formulas are gx = ∂f/∂x, gy = ∂f/∂y and M(x, y) = √(gx² + gy²), where gx is the gradient of the point in the x direction, gy is the gradient of the point in the y direction and M(x, y) is the gradient magnitude of the point. Markers additionally searched in the original image are used to guide the segmentation and prevent over-segmentation.
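The morphological-gradient step that feeds the watershed can be sketched with scipy's gray-scale morphology (a flat 3×3 structuring element is assumed; the marker selection and flooding steps are omitted here):

```python
import numpy as np
from scipy import ndimage

def morphological_gradient(f, size=3):
    """g = (f dilated by b) - (f eroded by b), b a flat size x size element."""
    f = f.astype(np.int32)
    dilated = ndimage.grey_dilation(f, size=(size, size))
    eroded = ndimage.grey_erosion(f, size=(size, size))
    return dilated - eroded
```

On a step edge the gradient is large on both sides of the boundary and zero inside flat regions, which is exactly the relief the watershed floods.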
S3, calculating the broken rice rate according to the image data preprocessed in the step S2.
The calculation of the broken rice rate in step S3 comprises the following steps:
s31, counting the total number of rice grains by using a journey marking algorithm.
S32, constructing a full convolution neural network to divide the particle area, and counting the number of particles.
A full convolution neural network is constructed; the network design is based on the VGG network, and the network structure is shown in figure 3. Because rice grains differ in shape and size, a standard convolution kernel can hardly extract all the features, so the network uses a residual deformable convolution module; the size and position of the deformable convolution kernel can be adjusted dynamically according to the image content currently being identified, so that image features of different sizes and shapes are learned.
The built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4. Stage1, stage2 and stage3 have the same structure, each comprising the convolution layers C1, C2, C3 and C4 plus a hole convolution layer S1 that replaces the pooling layer; stage4 comprises the convolution layers C1, C2 and C3.
The built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4.
Wherein stage1: the input is the grayscale image, which passes through residual convolution module C1-1 (64 convolution kernels of 3×3). The convolved image is copied into two branches: one branch is passed through residual convolution modules C2-1 (16 kernels of 1×1), C3-1 (1 kernel of 1×1) and C4-1 (1 kernel of 1×1), and the convolved result is output_1; the other branch passes through hole convolution layer S1-1, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage2.
stage2: the input is the image passed on by stage1. It goes through residual convolution module C1-2 (64 convolution kernels of 3×3) and is copied into two branches: one branch is passed through residual convolution modules C2-2 (16 kernels of 1×1), C3-2 (1 kernel of 1×1) and C4-2 (1 kernel of 1×1), and the image produced by the convolution is output_2; the other branch passes through hole convolution layer S1-2, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage3.
stage3: the input is the image passed on by stage2. It goes through residual convolution module C1-3 (64 convolution kernels of 3×3) and is copied into two branches: one branch is passed through residual convolution modules C2-3 (16 kernels of 1×1), C3-3 (1 kernel of 1×1) and C4-3 (1 kernel of 1×1), and the image produced by the convolution is output_3; the other branch passes through hole convolution layer S1-3, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage4.
stage4: the input is the image passed on by stage3. It goes through residual convolution module C1-4 (64 convolution kernels of 3×3); the convolved image is passed to residual convolution modules C2-4 (16 kernels of 1×1) and C3-4 (1 kernel of 1×1), and the image produced by the convolution is output_4.
To improve the segmentation accuracy, output_1, output_2, output_3 and output_4 are directly overlapped and then fed into residual convolution module C1 (1 convolution kernel of 1×1); the final segmented image is Output.
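The fusion step above, stacking the four stage outputs and applying a 1×1 convolution, reduces for single-channel maps to a per-pixel weighted sum. A minimal numpy sketch, assuming "overlapped" means channel-wise stacking; the weights here are illustrative placeholders, not the learned weights of C1:

```python
import numpy as np

def fuse_outputs(outputs, w, b=0.0):
    """1x1 convolution over stacked single-channel stage outputs:
    a per-pixel linear combination out = sum_i w[i] * outputs[i] + b."""
    stack = np.stack(outputs, axis=0)            # (4, H, W)
    return np.tensordot(w, stack, axes=1) + b    # (H, W)

# Toy stage maps: constant images with values 1..4 (hypothetical data).
outs = [np.full((4, 4), float(i + 1)) for i in range(4)]
weights = np.array([0.25, 0.25, 0.25, 0.25])     # hypothetical weights
print(fuse_outputs(outs, weights)[0, 0])          # 2.5
```

In a trained network the four weights (and bias) of C1 would be learned; the sketch only shows why a 1×1 kernel suffices to merge the stage outputs without altering spatial resolution.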
Wherein the specific steps for counting the broken rice number include:
S321, extracting rice grain features through a residual deformable convolution module; the deformable convolution formula is:
y(p0) = Σ_{pn∈R} w(pn) · x(p0 + pn + Δpn)
where w(pn) is the convolution kernel weight, x(p0 + pn + Δpn) is the value of the input image pixel sampled at the offset position, Δpn is the additionally added direction (offset) parameter, and y(p0) is the convolved value. The pooling layer is eliminated by enlarging the convolution kernel to increase the receptive field.
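The sampling rule y(p0) = Σ w(pn)·x(p0 + pn + Δpn) can be sketched for one output location. Integer offsets are assumed here for simplicity (real deformable convolutions use fractional offsets with bilinear interpolation), and the offset grid is illustrative, not the patent's learned values:

```python
import numpy as np

def deformable_conv_point(x, w, p0, offsets):
    """y(p0) = sum_n w(pn) * x(p0 + pn + dpn): each kernel tap pn is
    shifted by its own offset dpn before sampling the input. offsets has
    the same grid shape as w, holding integer (dy, dx) pairs; samples
    falling outside the image are skipped (treated as zero)."""
    k = w.shape[0]
    c = k // 2
    y = 0.0
    for i in range(k):
        for j in range(k):
            dy, dx = offsets[i, j]
            yy = p0[0] + (i - c) + dy        # p0 + pn + dpn, row
            xx = p0[1] + (j - c) + dx        # p0 + pn + dpn, col
            if 0 <= yy < x.shape[0] and 0 <= xx < x.shape[1]:
                y += w[i, j] * x[yy, xx]
    return y

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3))
zero = np.zeros((3, 3, 2), dtype=int)        # no offsets -> ordinary conv
print(deformable_conv_point(x, w, (2, 2), zero))   # 108.0, sum of centre 3x3
```

With all offsets zero the result coincides with an ordinary 3×3 convolution, which is the property the residual deformable module relies on: the offsets only deform the sampling grid around each grain.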
S322, connecting the images processed in step S321 through hole convolution layers whose convolution kernels have dilation rates of 2, 4 and 8 respectively. The hole convolution layer is used instead of the conventional pooling layer because pooling not only reduces spatial resolution during segmentation but may also lose important edge information, which directly results in under-segmentation of the image. The effective kernel size of a hole convolution layer is K = k + (k − 1)(r − 1), where k is the original kernel size and r is the hole (dilation) rate.
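The effective kernel size formula can be checked directly; the function name is illustrative:

```python
def effective_kernel_size(k: int, r: int) -> int:
    """Effective size K of a k x k kernel with hole (dilation) rate r,
    per the formula K = k + (k - 1)(r - 1)."""
    return k + (k - 1) * (r - 1)

# The three hole-convolution layers use a 3x3 kernel with rates 2, 4, 8:
for r in (2, 4, 8):
    print(r, effective_kernel_size(3, r))   # 2 -> 5, 4 -> 9, 8 -> 17
```

So the rate-8 layer covers a 17×17 receptive field while still computing only 9 products per output pixel, which is why the cascade of rates 2, 4, 8 can replace pooling without shrinking the feature map.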
S323, performing a convolution operation with 16 filters of size 1×1 on the rice grain features obtained in the four stages of the full convolution neural network; the convolution formula is:
y(p0) = Σ_{pn∈R} w(pn) · x(p0 + pn)
where w(pn) is the convolution kernel weight, x(p0 + pn) is the value of the input image pixel, and y(p0) is the convolved value;
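The per-location sum y(p0) = Σ_{pn∈R} w(pn)·x(p0 + pn) corresponds to the following direct (unflipped, CNN-style) 2-D convolution. A naive numpy sketch, not an optimized implementation:

```python
import numpy as np

def conv2d_single(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Direct 'valid' 2-D convolution y(p0) = sum_n w(pn) * x(p0 + pn),
    with pn running over the kernel grid R (kernel applied unflipped,
    as is conventional in CNN frameworks)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(w * x[i:i + kh, j:j + kw])
    return y

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((3, 3)) / 9.0            # averaging kernel
print(conv2d_single(x, w))            # 2x2 output of local means
```

With a 1×1 kernel, as in S323, the loop degenerates to scaling every pixel by a single weight, so the 16 filters simply produce 16 weighted copies of each stage's feature map.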
S324, fusing the 16 feature maps obtained in each stage with a filter of size 1×1, so that the extracted features do not lose specificity as the convolution depth grows;
S325, comparing the fused feature map with the manually calibrated result and calculating the loss value;
S326, fusing the feature maps of the different stages to obtain the final segmentation map, from which the broken rice grains are counted.
S33, calculating the broken rice rate, wherein
broken rice rate = (number of broken rice grains / total number of rice grains) × 100%.
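The rate computation itself is a one-line ratio; a minimal sketch with illustrative counts:

```python
def broken_rice_rate(broken_count: int, total_count: int) -> float:
    """Broken rice rate = broken grains / total grains * 100 (percent)."""
    if total_count <= 0:
        raise ValueError("total grain count must be positive")
    return broken_count / total_count * 100.0

# Hypothetical counts: 12 broken grains detected among 480 grains total.
print(broken_rice_rate(12, 480))   # 2.5
```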
Referring to fig. 2, a system for detecting broken rice comprises a dark box 1, a CCD camera 3 with an integrated image acquisition card, a computer, light sources 2 and an object stage 4.
The light sources 2 are arranged in an annular array at the top of the dark box 1 to provide illumination in different wave bands inside the dark box 1 while avoiding shadows.
The object stage 4 is arranged at the centre of the bottom of the dark box 1 to hold rice samples of different types. The CCD camera 3 is arranged outside the dark box 1 directly above the object stage 4 to capture images of the rice samples illuminated by the light sources 2 at different wavelengths; the images are uploaded through the image acquisition card to the computer for further processing.
Background paper is pasted on the inner side walls of the dark box 1 to avoid specular reflection.
The invention also discloses a computer system comprising a processor and a storage medium on which a computer program is stored; the processor reads and runs the computer program from the storage medium to execute the broken rice detection method.
According to the invention, a dark box 1 is used in the image acquisition process, the light sources 2 are placed at its top and its inner surface is treated, which reduces misjudgment caused by the strong reflections and shadows produced by an ordinary light source 2.
Meanwhile, the invention uses a marker-controlled morphological-gradient watershed algorithm to segment adhered rice grains and count the number of grains. In the image preprocessing stage, the data set is expanded by image enhancement, which prevents over-fitting during model training. In the rice grain classification process, a full convolution neural network replaces manual feature extraction and feature maps of different depths are fused, which avoids complicated feature-algorithm selection and feature design and improves recognition accuracy.
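The morphological gradient g = (f ⊕ b) − (f ⊖ b) that feeds the marker-controlled watershed can be sketched with a flat structuring element in plain numpy. This is a naive loop version for illustration; production code would typically use OpenCV's `morphologyEx` with `MORPH_GRADIENT`:

```python
import numpy as np

def morph_gradient(f: np.ndarray, k: int = 3) -> np.ndarray:
    """Grayscale morphological gradient g = (f (+) b) - (f (-) b) with a
    flat k x k structuring element b: dilation is a local max filter,
    erosion a local min filter (border pixels use edge padding)."""
    pad = k // 2
    fp = np.pad(f, pad, mode="edge")
    h, w = f.shape
    dil = np.zeros_like(f)
    ero = np.zeros_like(f)
    for i in range(h):
        for j in range(w):
            win = fp[i:i + k, j:j + k]
            dil[i, j] = win.max()     # dilation value at (i, j)
            ero[i, j] = win.min()     # erosion value at (i, j)
    return dil - ero

# A step edge: the gradient is non-zero only around the boundary,
# which is exactly where watershed lines between adhered grains form.
f = np.zeros((5, 6)); f[:, 3:] = 10.0
g = morph_gradient(f)
print(g[0, 0], g[0, 2], g[0, 3])   # 0.0 10.0 10.0
```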
Compared with judging methods that use perimeter, area and aspect ratio, the method uses a full convolution network: because the acquired image can be fed directly into the network, complex feature extraction is avoided, the difficulty of data preprocessing and the complexity of algorithm design are reduced, misjudgment is avoided, and recognition accuracy is improved.
Compared with identification using a neural network alone, the method applies the marker-controlled morphological-gradient watershed algorithm in the preprocessing stage to separate adhered rice grains and obtain the grain count, then completes the statistics of the broken rice rate after identification; this improves the detection rate and has practical application value in actual rice processing.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A broken rice detection method, characterized by comprising the following steps:
S1, acquiring at least 3 different types of rice grain images, comprising visible-spectrum image information of the rice grains in three wave bands: near 700 nm (R channel), 550 nm (G channel) and 440 nm (B channel);
s2, preprocessing the image acquired in the step S1;
wherein the step of preprocessing comprises:
S21, rotating, flipping and adjusting the image data acquired in step S1, thereby enhancing the data and expanding the sample data set; the rotation formulas are x2 = x1 × cos(θ) − y1 × sin(θ) and y2 = x1 × sin(θ) + y1 × cos(θ), where θ is the rotation angle, (x1, y1) is the current coordinate and (x2, y2) is the rotated coordinate; contrast is adjusted with the gamma transform s = c·r^γ, where s is the output gray level, r is the input gray level, c is a constant and γ is the gamma value;
S22, converting the data obtained in step S21 to grayscale: Gray = R × 0.299 + G × 0.587 + B × 0.114, where Gray is the gray value of the point and R, G, B are the RGB three-channel values of the point;
S23, removing noise points by median filtering: a 3×3 window is taken and the value of the centre point is replaced with the median of all pixel points in the window, yielding the filtered image;
S24, segmenting the adhered rice with a marker-controlled morphological-gradient watershed algorithm: let f be the input image and b the structuring element, and denote the morphological gradient of the image by g;
f ⊕ b denotes the dilation operation, with dilation formula
(f ⊕ b)(x, y) = max{ f(x − x′, y − y′) + b(x′, y′) };
f ⊖ b denotes the erosion operation, with erosion formula
(f ⊖ b)(x, y) = min{ f(x + x′, y + y′) − b(x′, y′) };
then
g = (f ⊕ b) − (f ⊖ b);
The watershed algorithm converts the grayscale image into a gradient image; the conversion formulas are
gx = ∂f/∂x, gy = ∂f/∂y,
M(x, y) = √(gx² + gy²),
where gx is the gradient of the point in the x direction, gy is the gradient of the point in the y direction, and M(x, y) is the gradient magnitude of the point; additional markers found in the original image are used to guide the segmentation algorithm;
s3, calculating the broken rice rate according to the image data preprocessed in the step S2;
the method comprises the following steps of:
S31, counting the total number of rice grains with a run-length (connected-component) labeling algorithm;
s32, constructing a full convolution neural network to divide the particle area, and counting the number of particles;
the built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4;
wherein stage1: the input is the grayscale image, which passes through residual convolution module C1-1 (64 convolution kernels of 3×3); the convolved image is copied into two branches: one branch is passed through residual convolution modules C2-1 (16 kernels of 1×1), C3-1 (1 kernel of 1×1) and C4-1 (1 kernel of 1×1), and the convolved result is output_1; the other branch passes through hole convolution layer S1-1, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage2;
stage2: the input is the image passed on by stage1; it goes through residual convolution module C1-2 (64 convolution kernels of 3×3) and is copied into two branches: one branch is passed through residual convolution modules C2-2 (16 kernels of 1×1), C3-2 (1 kernel of 1×1) and C4-2 (1 kernel of 1×1), and the image produced by the convolution is output_2; the other branch passes through hole convolution layer S1-2, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage3;
stage3: the input is the image passed on by stage2; it goes through residual convolution module C1-3 (64 convolution kernels of 3×3) and is copied into two branches: one branch is passed through residual convolution modules C2-3 (16 kernels of 1×1), C3-3 (1 kernel of 1×1) and C4-3 (1 kernel of 1×1), and the image produced by the convolution is output_3; the other branch passes through hole convolution layer S1-3, which replaces the pooling layer of a conventional network and downsamples the image, and the convolved image is input into stage4;
stage4: the input is the image passed on by stage3; it goes through residual convolution module C1-4 (64 convolution kernels of 3×3), the convolved image is passed to residual convolution modules C2-4 (16 kernels of 1×1) and C3-4 (1 kernel of 1×1), and the image produced by the convolution is output_4;
to improve the segmentation accuracy, output_1, output_2, output_3 and output_4 are directly overlapped and then fed into residual convolution module C1 (1 convolution kernel of 1×1), and the final segmented image is Output;
S33, calculating the broken rice rate, wherein
broken rice rate = (number of broken rice grains / total number of rice grains) × 100%.
2. The method according to claim 1, wherein the step S32 of counting the number of broken rice comprises:
S321, extracting rice grain features through a residual deformable convolution module; the deformable convolution formula is:
y(p0) = Σ_{pn∈R} w(pn) · x(p0 + pn + Δpn)
where w(pn) is the convolution kernel weight, x(p0 + pn + Δpn) is the value of the input image pixel sampled at the offset position, Δpn is the additionally added direction (offset) parameter, and y(p0) is the convolved value;
S322, connecting the images processed in step S321 through hole convolution layers whose convolution kernels have dilation rates of 2, 4 and 8 respectively; the effective kernel size of a hole convolution layer is K = k + (k − 1)(r − 1), where k is the original kernel size and r is the hole (dilation) rate;
S323, performing a convolution operation with 16 filters of size 1×1 on the rice grain features obtained in the four stages of the full convolution neural network; the convolution formula is:
y(p0) = Σ_{pn∈R} w(pn) · x(p0 + pn)
where w(pn) is the convolution kernel weight, x(p0 + pn) is the value of the input image pixel, and y(p0) is the convolved value;
S324, fusing the 16 feature maps obtained in each stage with a filter of size 1×1;
S325, comparing the fused feature map with the manually calibrated result and calculating the loss value;
S326, fusing the feature maps of the different stages to obtain the final segmentation map, from which the broken rice grains are counted.
3. A computer system, characterized in that: it comprises a processor and a storage medium on which a computer program is stored, the processor reading and running the computer program from the storage medium to perform the broken rice detection method according to any one of claims 1 to 2.
4. A system for the broken rice detection method according to any one of claims 1 to 2, characterized in that: it comprises a dark box (1), a CCD camera (3) with an integrated image acquisition card, a computer, light sources (2) and an object stage (4);
the light sources (2) are arranged in an annular array at the top of the dark box (1) to provide illumination in different wave bands inside the dark box (1) while avoiding shadows;
the object stage (4) is arranged at the centre of the bottom of the dark box (1) to hold rice samples of different types; the CCD camera (3) is arranged outside the dark box (1) directly above the object stage (4) to capture images of the rice samples illuminated by the light sources (2) at different wavelengths, the images being uploaded through the image acquisition card to the computer for further processing;
background paper is pasted on the inner side walls of the dark box (1) to avoid specular reflection.
CN202211118842.8A 2022-09-15 2022-09-15 Broken rice detection method and system Active CN115511803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211118842.8A CN115511803B (en) 2022-09-15 2022-09-15 Broken rice detection method and system

Publications (2)

Publication Number Publication Date
CN115511803A CN115511803A (en) 2022-12-23
CN115511803B true CN115511803B (en) 2023-06-27

Family

ID=84503844

Country Status (1)

Country Link
CN (1) CN115511803B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111879735A (en) * 2020-07-22 2020-11-03 武汉大学 Rice appearance quality detection method based on image

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN110097549A (en) * 2019-05-08 2019-08-06 广州中国科学院沈阳自动化研究所分所 Based on morphologic land, water and air boundary line detecting method, system, medium and equipment
CN114066916A (en) * 2021-11-01 2022-02-18 浙江工商大学 Detection and segmentation method of adhered rice grains based on deep learning
CN114689527A (en) * 2022-05-31 2022-07-01 合肥安杰特光电科技有限公司 Rice chalkiness detection method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant