CN115511803B - Broken rice detection method and system - Google Patents
- Publication number: CN115511803B (application CN202211118842.8A)
- Authority
- CN
- China
- Legal status: Active (status is an assumption by Google Patents, not a legal conclusion)
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G01N15/10 — Investigating individual particles
- G01N21/255 — Colour/spectral properties: details, e.g. specially adapted sources, lighting or optical systems
- G01N21/27 — Colour/spectral properties using photo-electric detection
- G06N3/02 — Neural networks
- G06N3/08 — Learning methods
- G06T5/20 — Image enhancement or restoration using local operators
- G06T5/70 — Denoising; smoothing
- G06T7/11 — Region-based segmentation
- G06T7/187 — Segmentation involving region growing/merging or connected component labelling
- G06V10/267 — Segmentation of patterns by operations on regions, e.g. growing, shrinking or watersheds
- G06V10/30 — Noise filtering
- G06V10/36 — Local operators, e.g. median filtering
- G06V10/764 — Recognition using classification
- G06V10/82 — Recognition using neural networks
- G06T2207/10024 — Color image
- G06T2207/20032 — Median filtering
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- Y02A40/10 — Adaptation technologies in agriculture
Abstract
The invention discloses a broken rice detection method and system. The method comprises the following steps: S1, image acquisition; S2, preprocessing the image acquired in step S1; S3, calculating the broken rice rate from the image data preprocessed in step S2. The invention uses a marker-controlled watershed algorithm based on morphological gradients to segment adhered (touching) rice grains and count the number of grains. In the image preprocessing stage, the data set is expanded by image augmentation, preventing over-fitting during model training. Because a fully convolutional network is used, the acquired image can be fed into the network directly, avoiding complex hand-crafted feature extraction, reducing the difficulty of data preprocessing and the complexity of algorithm design, avoiding misjudgment, and improving recognition accuracy.
Description
Technical Field
The invention relates to the field of broken rice detection, in particular to a method and a system for detecting broken rice.
Background
Existing broken rice monitoring methods are based on machine vision: rice sliding down a chute is irradiated by an external light source, images are collected with a CCD camera, and the grain region is obtained through preprocessing operations including graying, background segmentation, binarization, and erosion/dilation. Detection then proceeds by one of the following rules: process the image with edge detection, compute the average perimeter, and judge grains whose perimeter is smaller than three quarters of the average to be broken rice; compute the average area and judge grains whose area is smaller than three quarters of the average to be broken rice; or use aspect-ratio detection, since the aspect ratio of broken rice is far smaller than that of normal grains. These existing methods rely on manually selected target features, and the features express brokenness with differing effectiveness, so the accuracy of the final detection result is not ideal.
Disclosure of Invention
In order to solve the misjudgment problem of traditional machine learning approaches, the invention provides a broken rice detection method and system. The specific scheme is as follows:
a broken rice detection method comprises the following steps:
s1, acquiring images of at least 3 different types of rice grains, the visible-spectrum image information of the rice grains being captured in three wave bands near 700 nm (R channel), 550 nm (G channel) and 440 nm (B channel);
s2, preprocessing the image acquired in the step S1;
wherein the step of preprocessing comprises:
s21, rotating, flipping and contrast-adjusting the image data acquired in step S1 to enhance the data and expand the sample data set; the rotation formulas are x2 = x1×cos(θ) − y1×sin(θ) and y2 = x1×sin(θ) + y1×cos(θ), where θ is the rotation angle, (x1, y1) is the current coordinate and (x2, y2) is the rotated coordinate; contrast is adjusted with the gamma transformation s = c·r^γ, where s is the output gray level, r is the input gray level and γ is the gamma value;
s22, graying the data obtained in step S21: Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the gray value of the pixel and R, G, B are the values of its RGB channels;
s23, removing noise through median filtering: a 3×3 window is taken and the value of the centre pixel is replaced with the median of all pixels in the window, giving the filtered image;
s24, segmenting the adhered rice with a marker-controlled watershed algorithm based on morphological gradients; let f be the input image, b the structuring element and g the morphological gradient of the image, with ⊕ denoting the dilation operation and ⊖ the erosion operation; then g = (f ⊕ b) − (f ⊖ b). The watershed algorithm converts the gray image into a gradient image via M(x, y) = √(gx² + gy²), where gx is the gradient of the pixel in the x direction, gy its gradient in the y direction and M(x, y) its gradient magnitude; additional markers found in the original image guide the segmentation.
S3, calculating the broken rice rate according to the image data preprocessed in the step S2;
wherein the calculation of the broken rice rate comprises the following steps:
s31, counting the total number of rice grains using a run-marking (connected-component labelling) algorithm;
s32, constructing a full convolution neural network to divide the particle area, and counting the number of particles;
the built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4;
wherein stage1: the gray level image is input, the gray level image is subjected to a residual convolution module C1-1, the C1-1 comprises 64 convolution kernels of 3 multiplied by 3, the convolved image is duplicated into two parts, one part is transmitted to the residual convolution modules C2-1, C3-1 and C4-1, the C2-1 comprises 16 convolution kernels of 1 multiplied by 1, the C3-1 comprises 1 convolution kernel of 1 multiplied by 1, the C4-1 comprises 1 convolution kernel of 1 multiplied by 1, and the convolved image is output_1; the other part of the images passes through a hole convolution layer S1-1, the S1-1 is used for replacing a pooling layer in a conventional network, downsampling is carried out on the images, and the convolved images are input into a stage2;
stage2: the input is an image transmitted by stage1, the image is duplicated into two parts through a residual convolution module C1-2, wherein the C1-2 comprises 64 convolution kernels of 3 multiplied by 3, one part is transmitted to the residual convolution modules C2-2, C3-2 and C4-2, the C2-2 comprises 16 convolution kernels of 1 multiplied by 1, the C3-2 comprises 1 convolution kernel of 1 multiplied by 1, the C4-2 comprises 1 convolution kernel of 1 multiplied by 1, and the image generated after convolution is output_2; the other part of the images passes through a hole convolution layer S1-2, the S1-2 is used for replacing a pooling layer in a conventional network, downsampling is carried out on the images, and the convolved images are input into a stage3;
stage3: the input is an image transmitted by stage2, the image is duplicated into two parts through a residual convolution module C1-3, wherein the C1-3 comprises 64 convolution kernels of 3 multiplied by 3, one part is transmitted to the residual convolution modules C2-3, C3-3 and C4-3, the C2-3 comprises 16 convolution kernels of 1 multiplied by 1, the C3-3 comprises 1 convolution kernel of 1 multiplied by 1, the C4-3 comprises 1 convolution kernel of 1 multiplied by 1, and the image generated after convolution is output_3; the other part of the images passes through a hole convolution layer S1-3, the S1-3 is used for replacing a pooling layer in a conventional network, downsampling is carried out on the images, and the convolved images are input into a stage4;
stage4: the input is an image transmitted by stage3, the image after convolution is transmitted to residual convolution modules C2-4 and C3-4 through residual convolution modules C1-4 and C1-4 comprising 64 convolution kernels of 3 multiplied by 3, C2-4 comprises 16 convolution kernels of 1 multiplied by 1, C3-4 comprises 1 convolution kernel of 1 multiplied by 1, and the image generated after convolution is output_4;
in order to improve the segmentation accuracy, after output_1, output_2, output_3 and output_4 are directly overlapped, a residual convolution module C1 is input, C1 comprises 1 convolution kernel of 1×1, and a final segmentation image is Output;
Preferably, the specific steps of counting the broken rice in step S32 include:
s321, extracting rice grain features through a residual deformable convolution module; the deformable convolution formula is y(p0) = Σ w(pn) × x(p0 + pn + Δpn), where w(pn) is the convolution kernel weight, x(p0 + pn + Δpn) is the image pixel value sampled at the offset position, Δpn is the additionally learned direction (offset) parameter and y(p0) is the convolved value;
s322, connecting the images processed in step S321 through dilated convolution layers whose kernels have dilation rates of 2, 4 and 8 respectively; the effective kernel size of a dilated convolution is K = k + (k − 1)(r − 1), where k is the original kernel size and r is the dilation rate;
s323, performing a convolution operation on the rice grain features obtained in the four stages of the fully convolutional neural network using 16 filters of size 1×1; the convolution formula is y(p0) = Σ w(pn) × x(p0 + pn), where w(pn) is the convolution kernel weight, x(p0 + pn) is the image pixel value and y(p0) is the convolved value;
s324, fusing the 16 feature maps obtained in each stage with a filter of size 1×1;
s325, comparing the fused feature map with the manually calibrated result and computing the loss value;
s326, fusing the feature maps of the different stages to obtain the final segmentation map, from which the number of broken rice grains is counted.
The invention also discloses a computer system comprising a processor and a storage medium storing a computer program; the processor reads the computer program from the storage medium and runs it to execute the broken rice detection method according to any one of claims 1 to 2.
Preferably, the system for the broken rice detection method comprises a dark box, a CCD camera with an integrated image acquisition card, a computer, light sources and an object stage.
The light sources are arranged in an annular array at the top of the dark box to provide illumination of different wave bands inside the box and to avoid casting shadows.
The object stage is arranged at the centre of the bottom of the dark box and holds rice samples of different types. The CCD camera is mounted outside the dark box directly above the object stage; it captures images of the rice samples illuminated by the light sources of different wavelengths and uploads them through the image acquisition card to the computer for further processing.
Background paper is pasted on the inner side walls of the dark box to avoid specular reflection.
The beneficial effects of the invention are as follows:
During image acquisition, a dark box is used, the light sources are placed at the top, and the inner surface of the box is treated, reducing misjudgments caused by the strong reflections and shadows produced by an ordinary light source. A marker-controlled watershed algorithm based on morphological gradients segments the adhered rice so that the grains can be counted. In the image preprocessing stage, the data set is expanded by image augmentation, preventing over-fitting during model training. In grain classification, a fully convolutional neural network replaces manual feature extraction and fuses feature maps of different depths, avoiding cumbersome feature-algorithm selection and feature design and improving recognition accuracy. Compared with judgment methods based on perimeter, area or aspect ratio, the fully convolutional network takes the acquired image directly as input, avoiding complex feature extraction, reducing the difficulty of data preprocessing and the complexity of algorithm design, avoiding misjudgment, and improving recognition accuracy. Compared with recognition by a plain neural network, segmenting the adhered rice with the marker-controlled watershed algorithm in the preprocessing stage yields the grain count, so the broken rice rate statistics are completed immediately after recognition; this improves the detection rate and has practical application value in rice processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a system architecture according to the present invention;
fig. 3 is a block diagram of a full convolutional neural network of the present invention.
The reference numerals are as follows: 1. dark box; 2. light source; 3. CCD camera; 4. object stage.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a method for detecting broken rice comprises the following steps:
s1, image acquisition.
The acquired images comprise at least 3 different types of rice grains, with visible-spectrum image information captured in three wave bands near 700 nm (R channel), 550 nm (G channel) and 440 nm (B channel). Several different kinds of rice grains are used as experimental samples to improve the robustness of the training result.
S2, preprocessing the image acquired in the step S1.
Wherein the step of preprocessing comprises:
s21, rotating, overturning and adjusting the image data acquired in the step S1.
During image preprocessing, all data come from experiments; the workload is large and the data set is insufficient, so the original image data set is first enhanced by rotation, flipping and contrast-adjustment transformations to expand the sample data set.
The rotation formulas are x2 = x1×cos(θ) − y1×sin(θ) and y2 = x1×sin(θ) + y1×cos(θ), where θ is the rotation angle, (x1, y1) is the current coordinate and (x2, y2) is the rotated coordinate. Contrast is adjusted with the gamma transformation s = c·r^γ, where s is the output gray level, r is the input gray level, γ is the gamma value controlling the degree of gray-level conversion, and c is the gray scale coefficient used to stretch the image gray levels as a whole. This achieves data enhancement, expands the sample data set, improves the overall learning performance of the network, and avoids over-fitting of the deep convolutional neural network.
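As an illustration, the rotation and gamma formulas above can be sketched in plain Python (the function names `rotate_point` and `gamma_transform` are illustrative, not part of the patent):

```python
import math

def rotate_point(x1, y1, theta):
    """Rotate a pixel coordinate (x1, y1) by theta radians about the
    origin, using the rotation formulas from step S21."""
    x2 = x1 * math.cos(theta) - y1 * math.sin(theta)
    y2 = x1 * math.sin(theta) + y1 * math.cos(theta)
    return x2, y2

def gamma_transform(r, c=1.0, gamma=0.5, max_level=255):
    """Gamma transformation s = c * r**gamma, applied to a gray level
    normalized to [0, 1] and returned as an integer gray level."""
    s = c * (r / max_level) ** gamma
    return round(s * max_level)
```

With γ < 1 the transform brightens dark regions (stretching low gray levels), which is one way the contrast of the augmented samples can be varied.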
S22, performing gray scale processing on the data obtained in the step S21.
During preprocessing, the original image is a colour image containing the 3 colour channels red, green and blue, which makes the image soft, of moderate contrast and easy for the human eye to observe; it is therefore converted to grayscale. The graying formula is Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the gray value of the pixel and R, G, B are the values of its RGB channels.
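A minimal sketch of the weighted-average graying step (the helper name `rgb_to_gray` is illustrative):

```python
def rgb_to_gray(r, g, b):
    # Weighted-average graying from step S22:
    # Gray = R*0.299 + G*0.587 + B*0.114
    return round(r * 0.299 + g * 0.587 + b * 0.114)
```

The weights sum to 1.0, so pure white (255, 255, 255) maps to gray level 255 and pure black to 0.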
S23, noise is removed by median filtering: a 3×3 window is taken and the value of the centre pixel is replaced with the median of all pixels in the window, giving the filtered image.
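The 3×3 median filter of step S23 can be sketched as follows (pure Python; borders are left unchanged for simplicity, as the patent does not specify border handling):

```python
def median_filter_3x3(img):
    """3x3 median filter as in step S23: replace each interior pixel
    with the median of its 3x3 neighbourhood (borders unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 sorted values
    return out
```

A single salt-noise spike surrounded by uniform pixels is replaced by the neighbourhood value, which is why median filtering suppresses impulse noise without blurring edges as much as mean filtering.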
S24, during the fall of the rice grains some grains overlap, making the total number difficult to count, so the adhered rice is further segmented using a marker-controlled watershed algorithm based on morphological gradients.
Let f be the input image, b the structuring element and g the morphological gradient of the image, with ⊕ denoting the dilation operation and ⊖ the erosion operation; then g = (f ⊕ b) − (f ⊖ b). The watershed algorithm converts the gray image into a gradient image via M(x, y) = √(gx² + gy²), where gx is the gradient of the pixel in the x direction, gy its gradient in the y direction and M(x, y) its gradient magnitude. Additional markers found in the original image guide the segmentation and prevent over-segmentation.
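A minimal sketch of the morphological-gradient computation that feeds the watershed, assuming a flat 3×3 structuring element (the patent does not fix the element's size or shape, so this choice is illustrative):

```python
def morphological_gradient(img):
    """Morphological gradient g = (f dilate b) - (f erode b) with a flat
    3x3 structuring element b; borders are handled by clamping."""
    h, w = len(img), len(img[0])
    def neighbourhood(y, x):
        return [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    # flat-element dilation is the local maximum, erosion the local minimum
    return [[max(neighbourhood(y, x)) - min(neighbourhood(y, x))
             for x in range(w)] for y in range(h)]
```

Flat regions produce zero gradient while grain boundaries produce ridges, which is exactly the relief the watershed floods from the markers.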
S3, calculating the broken rice rate according to the image data preprocessed in the step S2.
The broken rice rate calculation in step S3 comprises the following steps:
s31, counting the total number of rice grains using a run-marking (connected-component labelling) algorithm.
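The run-marking count of step S31 amounts to counting connected foreground regions in the binarized image; here is a simple stand-in using breadth-first connected-component labelling (the patent's exact run-length variant is not spelled out, so this is an assumption-laden sketch):

```python
from collections import deque

def count_grains(binary):
    """Count 4-connected foreground regions in a binary image,
    a stand-in for the run-marking grain count of step S31."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1          # new grain found; flood-fill it
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```

This is why the watershed segmentation of step S24 matters: without it, two touching grains form one connected component and the count is too low.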
S32, constructing a full convolution neural network to divide the particle area, and counting the number of particles.
A fully convolutional neural network is constructed; the network design is based on the VGG network, and the network structure is shown in Fig. 3. Because rice grains differ in shape and size, a standard convolution kernel struggles to extract all the features, so the network uses residual deformable convolution modules: the size and position of the deformable convolution kernel adjust dynamically to the image content currently being recognised, allowing image features of different sizes and shapes to be learned.
The constructed fully convolutional neural network comprises four stages: stage1, stage2, stage3 and stage4. Stages 1–3 share the same structure, each comprising convolution layers C1, C2, C3 and C4 plus a dilated convolution layer S1 that replaces the pooling layer; stage4 comprises convolution layers C1, C2 and C3.
Stage1: the input is the gray-level image. The image passes through residual convolution module C1-1, which comprises 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies. One copy is passed through residual convolution modules C2-1, C3-1 and C4-1, where C2-1 comprises 16 convolution kernels of 1×1, C3-1 comprises 1 convolution kernel of 1×1 and C4-1 comprises 1 convolution kernel of 1×1; the convolved image is output_1. The other copy passes through hole convolution layer S1-1, which replaces the pooling layer of a conventional network and downsamples the image; the downsampled image is input into stage2.
Stage2: the input is the image transmitted by stage1. The image passes through residual convolution module C1-2, which comprises 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies. One copy is passed through residual convolution modules C2-2, C3-2 and C4-2, where C2-2 comprises 16 convolution kernels of 1×1, C3-2 comprises 1 convolution kernel of 1×1 and C4-2 comprises 1 convolution kernel of 1×1; the convolved image is output_2. The other copy passes through hole convolution layer S1-2, which replaces the pooling layer of a conventional network and downsamples the image; the downsampled image is input into stage3.
Stage3: the input is the image transmitted by stage2. The image passes through residual convolution module C1-3, which comprises 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies. One copy is passed through residual convolution modules C2-3, C3-3 and C4-3, where C2-3 comprises 16 convolution kernels of 1×1, C3-3 comprises 1 convolution kernel of 1×1 and C4-3 comprises 1 convolution kernel of 1×1; the convolved image is output_3. The other copy passes through hole convolution layer S1-3, which replaces the pooling layer of a conventional network and downsamples the image; the downsampled image is input into stage4.
Stage4: the input is the image transmitted by stage3. The image passes through residual convolution module C1-4, which comprises 64 convolution kernels of 3×3, and the convolved image is passed to residual convolution modules C2-4 and C3-4, where C2-4 comprises 16 convolution kernels of 1×1 and C3-4 comprises 1 convolution kernel of 1×1; the convolved image is output_4.
To improve segmentation accuracy, output_1, output_2, output_3 and output_4 are directly overlapped and fed into residual convolution module C1, which comprises 1 convolution kernel of 1×1; the final segmented image is Output.
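The fusion step described above (overlap the four stage outputs, then apply the single 1×1 convolution C1) can be sketched in NumPy. A 1×1 convolution over stacked channels is just a per-pixel weighted sum across those channels; the weights below are illustrative stand-ins for the learned kernel, and the stage outputs are assumed to have been brought to a common resolution before being overlapped.

```python
import numpy as np

def fuse_outputs(outputs, weights, bias=0.0):
    """Fuse same-sized stage outputs (each H x W) with one 1x1 convolution:
    a per-pixel weighted sum across the stacked channels."""
    stacked = np.stack(outputs, axis=0)                      # (C, H, W)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)   # (C, 1, 1)
    return (stacked * w).sum(axis=0) + bias                  # (H, W)

# Four toy 2x2 maps standing in for output_1 .. output_4.
outs = [np.full((2, 2), float(v)) for v in (1, 2, 3, 4)]
fused = fuse_outputs(outs, weights=[0.25, 0.25, 0.25, 0.25])  # per-pixel mean
```

In the trained network the four weights and the bias come from learning rather than being fixed averages; the point of the sketch is only the mechanics of a 1×1 fusion.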
The specific steps for counting the number of broken rice grains are as follows:
S321, extracting rice grain features through the residual deformable convolution module. The deformable convolution formula is y(p0) = Σ_{pn∈R} w(pn)·x(p0 + pn + Δpn), where w(pn) is the convolution kernel weight, x(p0 + pn + Δpn) is the value of the sampled image pixel, Δpn is the additionally learned direction offset, and y(p0) is the convolved value. The pooling layer is eliminated by enlarging the convolution kernel to increase the receptive field.
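A minimal pure-NumPy sketch of this sampling rule follows. Because the learned offsets Δpn make the sample locations fractional, bilinear interpolation is needed; the kernel weights and offsets below are illustrative placeholders, not the trained module.

```python
import numpy as np

def bilinear(img, y, x):
    """Sample a 2-D array at fractional location (y, x) by bilinear interpolation."""
    h, w = img.shape
    y0 = min(max(int(np.floor(y)), 0), h - 1)
    x0 = min(max(int(np.floor(x)), 0), w - 1)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy

def deformable_conv_at(img, p0, weights, offsets):
    """y(p0) = sum_n w(p_n) * x(p0 + p_n + Δp_n) over a 3x3 sampling grid R."""
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = 0.0
    for pn, wn, dp in zip(grid, weights, offsets):
        out += wn * bilinear(img, p0[0] + pn[0] + dp[0], p0[1] + pn[1] + dp[1])
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
w = [1.0 / 9.0] * 9          # averaging kernel (illustrative weights)
zero = [(0.0, 0.0)] * 9      # Δp_n = 0 reduces to a standard 3x3 convolution
val = deformable_conv_at(img, (2, 2), w, zero)  # mean of the 3x3 patch around (2, 2)
```

With all offsets zero the result equals a standard convolution at p0, which is a convenient sanity check on the implementation.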
S322, connecting the images processed in step S321 through hole convolution layers, with convolution-kernel dilation rates of 2, 4 and 8 respectively. The hole convolution layer is used instead of a pooling layer because pooling not only reduces spatial resolution during segmentation but may also lose important edge information, which directly leads to under-segmentation of the image. The effective convolution kernel size of a hole convolution layer is K = k + (k − 1)(r − 1), where k is the original kernel size and r is the dilation (hole) rate.
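The effective-kernel-size relation K = k + (k − 1)(r − 1) is easy to verify directly; for the 3×3 kernels and dilation rates 2, 4 and 8 used here it gives effective sizes of 5, 9 and 17:

```python
def effective_kernel_size(k, r):
    """Effective size of a dilated (hole) convolution kernel:
    K = k + (k - 1) * (r - 1)."""
    return k + (k - 1) * (r - 1)

# A 3x3 kernel at the dilation rates used in step S322.
sizes = [effective_kernel_size(3, r) for r in (2, 4, 8)]
```

A rate of 1 recovers the ordinary kernel, so the receptive field grows with r without adding parameters or downsampling.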
S323, performing a convolution operation with 16 filters of size 1×1 on the rice grain features obtained in the four stages of the full convolution neural network. The convolution formula is y(p0) = Σ_{pn∈R} w(pn)·x(p0 + pn), where w(pn) is the convolution kernel weight, x(p0 + pn) is the image pixel value, and y(p0) is the convolved value;
S324, fusing the 16 feature maps obtained in each stage with a filter of size 1×1, so that the extracted features do not become insufficiently specific when the convolution depth is large;
S325, comparing the fused feature map with the manually calibrated result and calculating a loss value;
S326, fusing the feature maps of the different stages to obtain the final segmentation map, from which the broken rice grains are counted.
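The patent does not name the loss function used in S325 when the fused feature map is compared with the manually calibrated result; per-pixel binary cross-entropy is a common choice for segmentation maps and is sketched here purely as an assumption:

```python
import math

def bce_loss(pred, target, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted segmentation
    map and a manually calibrated mask (both flattened, values in [0, 1])."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(pred)
```

A perfect prediction drives the loss toward zero, while a maximally uncertain prediction of 0.5 on a foreground pixel costs log 2 per pixel.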
Referring to fig. 2, the system for detecting broken rice comprises a camera bellows 1, a CCD camera 3 with an integrated image acquisition card, a computer, light sources 2 and an object stage 4.
The light sources 2 are arranged in an annular array at the top end of the camera bellows 1 to provide illumination of different wave bands inside the camera bellows 1 and to avoid casting shadows.
The object stage 4 is arranged at the center of the bottom of the camera bellows 1 for placing rice samples of different types. The CCD camera 3 is arranged outside the camera bellows 1, directly above the object stage 4, and collects images of the rice samples illuminated by the light sources 2 of different wavelengths; the images are uploaded to the computer through the image acquisition card for further processing of the samples.
Background paper is stuck on the inner side walls of the camera bellows 1 to avoid specular reflection.
The invention also discloses a computer system comprising a processor and a storage medium on which a computer program is stored; the processor reads the computer program from the storage medium and runs it to execute the broken rice detection method.
Because a camera bellows 1 is used during image acquisition, with the light sources 2 placed at the top and the inner surface of the camera bellows 1 treated, misjudgment caused by the strong reflections and shadows produced by an ordinary light source is reduced.
Meanwhile, the invention uses a marker-controlled watershed algorithm based on morphological gradient to segment adhered rice grains and count the number of grains. In the image preprocessing stage, the data set is expanded by image enhancement, preventing over-fitting during model training. In the rice grain classification process, the full convolution neural network replaces manual feature extraction, and feature maps of different depths are fused; complicated feature-algorithm selection and feature design are thereby avoided, and recognition accuracy is improved.
Compared with judging methods that use perimeter, area and aspect ratio, the method uses a full convolution network: the acquired image can be input into the network directly, so complex feature extraction is avoided, the difficulty of data preprocessing and the complexity of algorithm design are reduced, misjudgment is avoided, and recognition accuracy is improved.
Compared with methods that identify with a neural network alone, the marker-controlled morphological-gradient watershed algorithm used in the preprocessing stage separates adhered rice grains and yields the grain count, so that statistics of the broken rice rate can be completed immediately after identification; this improves the detection rate and has practical value in actual rice processing.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (4)
1. A broken rice detection method, characterized by comprising the following steps:
S1, acquiring rice grain images of at least 3 different types, the images containing visible-spectrum image information of the rice grains in three wave bands near 700 nm (R channel), 550 nm (G channel) and 440 nm (B channel);
S2, preprocessing the image acquired in step S1;
wherein the step of preprocessing comprises:
S21, rotating, flipping and adjusting the image data acquired in step S1, so as to enhance the data and expand the sample data set; the rotation formulas are x2 = x1×cos(θ) − y1×sin(θ) and y2 = x1×sin(θ) + y1×cos(θ), where θ is the rotation angle, (x1, y1) is the current coordinate and (x2, y2) is the rotated coordinate; contrast is adjusted with the gamma transformation s = c×r^γ, where s is the output gray level, r is the input gray level, c is a constant and γ is the gamma value;
S22, performing gray-scale processing on the data obtained in step S21: Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the gray value of the point and R, G, B are the RGB three-channel values of the point;
S23, removing noise points by median filtering: a 3×3 window is taken and the value of the center point is replaced with the median of all pixel points in the window, giving the filtered image;
S24, segmenting adhered rice grains with a marker-controlled watershed algorithm based on morphological gradient; let f be the input image and b the structuring element, the morphological gradient of the image being denoted g; ⊕ denotes the dilation operation, with dilation formula (f ⊕ b)(x, y) = max{f(x − x′, y − y′) + b(x′, y′)}; ⊖ denotes the erosion operation, with erosion formula (f ⊖ b)(x, y) = min{f(x + x′, y + y′) − b(x′, y′)}; then g = (f ⊕ b) − (f ⊖ b); the watershed algorithm converts the gray-level image into a gradient image, the conversion formula being M(x, y) = sqrt(gx² + gy²), where gx is the gradient of the point in the x direction, gy is the gradient of the point in the y direction and M(x, y) is the gradient magnitude of the point; additional markers found in the original image are then used to guide the segmentation algorithm;
S3, calculating the broken rice rate from the image data preprocessed in step S2;
wherein the step of calculating the broken rice rate comprises:
S31, counting the total number of rice grains using a run-length labeling algorithm;
S32, constructing a full convolution neural network to segment the grain regions and count the number of grains;
the built full convolution neural network comprises four stages, namely stage1, stage2, stage3 and stage4;
wherein stage1: the input is the gray-level image; the image passes through residual convolution module C1-1, the C1-1 comprising 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies; one copy is passed through residual convolution modules C2-1, C3-1 and C4-1, the C2-1 comprising 16 convolution kernels of 1×1, the C3-1 comprising 1 convolution kernel of 1×1 and the C4-1 comprising 1 convolution kernel of 1×1, the convolved image being output_1; the other copy passes through hole convolution layer S1-1, the S1-1 replacing the pooling layer of a conventional network and downsampling the image, and the downsampled image is input into stage2;
stage2: the input is the image transmitted by stage1; the image passes through residual convolution module C1-2, the C1-2 comprising 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies; one copy is passed through residual convolution modules C2-2, C3-2 and C4-2, the C2-2 comprising 16 convolution kernels of 1×1, the C3-2 comprising 1 convolution kernel of 1×1 and the C4-2 comprising 1 convolution kernel of 1×1, the image generated after convolution being output_2; the other copy passes through hole convolution layer S1-2, the S1-2 replacing the pooling layer of a conventional network and downsampling the image, and the downsampled image is input into stage3;
stage3: the input is the image transmitted by stage2; the image passes through residual convolution module C1-3, the C1-3 comprising 64 convolution kernels of 3×3, and the convolved image is duplicated into two copies; one copy is passed through residual convolution modules C2-3, C3-3 and C4-3, the C2-3 comprising 16 convolution kernels of 1×1, the C3-3 comprising 1 convolution kernel of 1×1 and the C4-3 comprising 1 convolution kernel of 1×1, the image generated after convolution being output_3; the other copy passes through hole convolution layer S1-3, the S1-3 replacing the pooling layer of a conventional network and downsampling the image, and the downsampled image is input into stage4;
stage4: the input is the image transmitted by stage3; the image passes through residual convolution module C1-4, the C1-4 comprising 64 convolution kernels of 3×3, and the convolved image is passed to residual convolution modules C2-4 and C3-4, the C2-4 comprising 16 convolution kernels of 1×1 and the C3-4 comprising 1 convolution kernel of 1×1, the image generated after convolution being output_4;
in order to improve the segmentation accuracy, output_1, output_2, output_3 and output_4 are directly overlapped and then input into residual convolution module C1, the C1 comprising 1 convolution kernel of 1×1, and the final segmented image is Output;
2. The method according to claim 1, wherein the step S32 of counting the number of broken rice grains comprises:
S321, extracting rice grain features through the residual deformable convolution module, the deformable convolution formula being y(p0) = Σ_{pn∈R} w(pn)·x(p0 + pn + Δpn), where w(pn) is the convolution kernel weight, x(p0 + pn + Δpn) is the value of the sampled image pixel, Δpn is the additionally learned direction offset, and y(p0) is the convolved value;
S322, connecting the images processed in step S321 through hole convolution layers, the dilation rates of the convolution kernels being 2, 4 and 8 respectively; the effective convolution kernel size of a hole convolution layer is K = k + (k − 1)(r − 1), where k is the original kernel size and r is the dilation (hole) rate of the hole convolution;
S323, performing a convolution operation with 16 filters of size 1×1 on the rice grain features obtained in the four stages of the full convolution neural network, the convolution formula being y(p0) = Σ_{pn∈R} w(pn)·x(p0 + pn), where w(pn) is the convolution kernel weight, x(p0 + pn) is the image pixel value and y(p0) is the convolved value;
S324, fusing the 16 feature maps obtained in each stage with a filter of size 1×1;
S325, comparing the fused feature map with the manually calibrated result and calculating a loss value;
S326, fusing the feature maps of the different stages to obtain the final segmentation map, from which the broken rice grains are counted.
3. A computer system, characterized by comprising a processor and a storage medium on which a computer program is stored, the processor reading the computer program from the storage medium and running it to perform the broken rice detection method according to any one of claims 1 to 2.
4. A system for the broken rice detection method according to any one of claims 1 to 2, characterized by comprising a camera bellows (1), a CCD camera (3) with an integrated image acquisition card, a computer, light sources (2) and an object stage (4);
the light sources (2) are arranged in an annular array at the top end of the camera bellows (1) to provide illumination of different wave bands inside the camera bellows (1) and to avoid casting shadows;
the object stage (4) is arranged at the center of the bottom of the camera bellows (1) for placing rice samples of different types; the CCD camera (3) is arranged outside the camera bellows (1), directly above the object stage (4), and collects images of the rice samples illuminated by the light sources (2) of different wavelengths, the images being uploaded to the computer through the image acquisition card for further processing of the samples;
background paper is stuck on the inner side walls of the camera bellows (1) to avoid specular reflection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211118842.8A CN115511803B (en) | 2022-09-15 | 2022-09-15 | Broken rice detection method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115511803A CN115511803A (en) | 2022-12-23 |
CN115511803B true CN115511803B (en) | 2023-06-27 |
Family
ID=84503844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211118842.8A Active CN115511803B (en) | 2022-09-15 | 2022-09-15 | Broken rice detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115511803B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111879735A (en) * | 2020-07-22 | 2020-11-03 | 武汉大学 | Rice appearance quality detection method based on image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097549A (en) * | 2019-05-08 | 2019-08-06 | 广州中国科学院沈阳自动化研究所分所 | Based on morphologic land, water and air boundary line detecting method, system, medium and equipment |
CN114066916A (en) * | 2021-11-01 | 2022-02-18 | 浙江工商大学 | Detection and segmentation method of adhered rice grains based on deep learning |
CN114689527A (en) * | 2022-05-31 | 2022-07-01 | 合肥安杰特光电科技有限公司 | Rice chalkiness detection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||