CN114881984A - Detection method and device for rice processing precision, electronic equipment and medium - Google Patents


Info

Publication number
CN114881984A
CN114881984A
Authority
CN
China
Prior art keywords: rice, target, image, grain, characteristic parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210547479.5A
Other languages
Chinese (zh)
Inventor
陈卫东
李宛玉
李智
王莹
Current Assignee
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202210547479.5A
Publication of CN114881984A
Legal status: Pending


Classifications

    • G06T 7/0004: Industrial image inspection
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06T 7/45: Analysis of texture based on statistical description using co-occurrence matrix computation
    • G06T 7/90: Determination of colour characteristics
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 20/68: Food, e.g. fruit or vegetables
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30128: Food products

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application provides a method, an apparatus, an electronic device, and a medium for detecting rice processing precision. The detection method comprises the following steps: acquiring an initial rice image comprising multiple randomly placed target rice grains, and extracting a target image of each single target rice grain from the initial rice image; extracting multiple characteristic parameters of each single-grain target rice from its target image, where the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, the latter extracted from the gray level and gradient of pixel points in the target image; performing dimensionality-reduction fusion processing on the characteristic parameters to obtain multiple target fusion features whose cumulative contribution rate is greater than a preset cumulative contribution rate threshold; and inputting the target fusion features into a trained detection model to obtain a processing precision detection result for each single-grain target rice, thereby determining the processing precision of the rice rapidly, nondestructively, objectively, and accurately.

Description

Detection method and device for rice processing precision, electronic equipment and medium
Technical Field
The application relates to the field of rice production detection, in particular to a method and a device for detecting rice processing precision, electronic equipment and a medium.
Background
China is a major world producer, consumer, and trader of rice: its rice output ranks first in the world, accounting for 27.7% of global rice output and about one third of total domestic grain output, so rice processing and production play a very important role in China.
At present, rice detection methods and levels in China lag behind. Under the standard for inspecting rice processing precision by grain and oil (GB/T 5502-2018), the prescribed methods are an eosin Y-methylene blue staining method combined with manual comparative observation, and instrument-based image analysis. Meanwhile, existing rice processing precision detection in China usually relies on manual sensory evaluation or combined human-machine methods; the detection process is time-consuming, and the results are subjective, of low accuracy, and poorly repeatable. Image-processing-based detection is mostly combined with a staining method, which is cumbersome, requires mastery of a staining technique, damages the test sample to some extent, and therefore cannot be applied to practical detection. The resulting excessive processing precision not only reduces the nutritive value of the rice but also brings huge losses to enterprises.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method, an apparatus, an electronic device and a medium for detecting rice processing accuracy, which can determine the rice processing accuracy quickly, nondestructively, objectively and accurately according to the color and texture of a rice image.
The detection method for the rice processing precision provided by the embodiment of the application comprises the following steps:
acquiring an initial rice image comprising a plurality of randomly placed target rice grains, and extracting a target image of each single target rice grain from the initial rice image;
extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, and the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in the target image;
performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain target rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and inputting the target fusion characteristics of each single-grain target rice into a trained detection model to obtain a processing precision detection result of each single-grain target rice.
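As an illustration of the color-feature portion of the extraction step, the embodiments below use the first, second, and third moments of each color channel (in HSV and RGB space). A minimal pure-Python sketch of the three moments for one channel; the function and variable names are illustrative and not taken from the patent:

```python
import math

def color_moments(channel):
    """First three color moments of one channel: mean, standard
    deviation, and the (signed) cube root of the third central moment."""
    n = len(channel)
    mean = sum(channel) / n                              # first moment
    var = sum((v - mean) ** 2 for v in channel) / n
    std = math.sqrt(var)                                 # second moment
    third = sum((v - mean) ** 3 for v in channel) / n
    skew = math.copysign(abs(third) ** (1 / 3), third)   # third moment
    return mean, std, skew

# Example: a tiny "channel" of 4 pixel values
m, s, k = color_moments([10, 10, 20, 20])
```

In the full method, this would be computed per grain and per color component, giving the set of color characteristic parameters.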
In some embodiments, in the method for detecting rice processing accuracy, the extracting, according to the target image of each single-grain target rice, a plurality of characteristic parameters of each single-grain target rice includes:
extracting a first moment, a second moment and a third moment of a target image of each single-grain target rice on a plurality of color components to serve as a plurality of color characteristic parameters of the target image; the color components are color components of the target image in HSV color space and color components of the target image in RGB color space;
and acquiring a gray-gradient co-occurrence matrix of the target image according to the gray level and the gradient of the pixel points in the target image of each single-grain target rice, and extracting a plurality of texture characteristic parameters of the target image through the gray-gradient co-occurrence matrix.
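A hedged sketch of the gray-gradient co-occurrence matrix idea described above: each pixel contributes a (quantised gray level, quantised gradient level) pair, the joint histogram is normalised, and texture parameters are read off the matrix. Gradient computation and quantisation are omitted, `energy` is only one of many possible texture parameters, and all names are illustrative:

```python
def gray_gradient_matrix(gray, grad, gray_levels, grad_levels):
    """Joint frequency of (gray level, gradient level) over all pixels,
    normalised to a probability distribution."""
    h = [[0] * grad_levels for _ in range(gray_levels)]
    for g, d in zip(gray, grad):
        h[g][d] += 1
    total = len(gray)
    return [[c / total for c in row] for row in h]

def energy(p):
    """One example texture parameter: energy (angular second moment)."""
    return sum(v * v for row in p for v in row)

# Toy image of 4 pixels with pre-quantised gray and gradient values
p = gray_gradient_matrix([0, 0, 1, 1], [0, 1, 0, 1], 2, 2)
e = energy(p)
```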
In some embodiments, in the method for detecting rice processing accuracy, extracting a target image of each single grain of target rice from the initial rice image includes:
converting the initial rice image of the randomly placed multiple grains of target rice into a rice gray image;
determining a gray segmentation threshold value of the rice gray image according to the rice gray and the background gray in the rice gray image, and processing the rice gray image according to the gray segmentation threshold value to obtain a binary rice image; the gray segmentation threshold values corresponding to different rice gray images are different;
and extracting a target image of each single-grain target rice from the binarized rice image.
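The steps above only require an image-dependent gray segmentation threshold separating rice gray from background gray; the patent does not name an algorithm. Otsu's method, which maximises between-class variance, is one standard way to obtain such an adaptive threshold. A pure-Python sketch over a flat list of gray values, with illustrative names:

```python
def otsu_threshold(pixels, levels=256):
    """Adaptive threshold maximising between-class variance (Otsu).
    One standard choice of image-dependent threshold; the patent does
    not prescribe this particular algorithm."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(pixels, t):
    return [1 if v > t else 0 for v in pixels]

# Bright rice grains (~200) on a dark background (~20)
img = [20, 22, 21, 200, 198, 202, 19, 201]
t = otsu_threshold(img)
mask = binarize(img, t)
```

Because the threshold is recomputed per image, different rice gray images get different segmentation thresholds, as the embodiment requires.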
In some embodiments, in the method for detecting rice processing accuracy, extracting a target image of each single grain of target rice from the binarized rice image includes:
removing rice boundary noise and isolated noise in the binarized rice image through morphological open operation to obtain a denoised rice image;
filling holes of the single-grain target rice in the de-noised rice image through morphological closed operation to obtain a smooth rice image;
and extracting a target image of each single-grain target rice from the smooth rice image.
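The open and close operations above can be sketched with binary erosion and dilation under a 3x3 structuring element. This is a minimal pure-Python illustration on a list-of-lists binary image with simplified border handling; a real implementation would use an image-processing library:

```python
def erode(img):
    """Binary erosion, 3x3 structuring element (border treated as 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """Binary dilation, 3x3 structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(img):   # erosion then dilation: removes isolated noise
    return dilate(erode(img))

def closing(img):   # dilation then erosion: fills small holes
    return erode(dilate(img))

# An isolated noise pixel in an empty 5x5 image vanishes under opening
noise = [[0] * 5 for _ in range(5)]
noise[2][2] = 1
cleaned = opening(noise)

# A one-pixel hole inside a solid 5x5 region is filled by closing
holed = [[1] * 5 for _ in range(5)]
holed[2][2] = 0
filled = closing(holed)
```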
In some embodiments, in the method for detecting rice processing accuracy, converting the initial rice image of the randomly placed multiple target rice grains into a rice gray image includes:
converting the initial rice image of the randomly placed multiple grains of target rice into a first gray image by adopting image gray conversion;
determining the target gray level of each pixel point according to the gray levels of a plurality of pixel points in a preset neighborhood of each pixel point of the first gray level image;
and updating the gray level of each pixel point according to the target gray level of each pixel point in the first gray level image so as to obtain a rice gray level image.
In some embodiments, in the method for detecting rice processing accuracy, extracting a target image of each single grain of target rice from the smoothed rice image includes:
separating the adhered target rice in the smooth rice image through a concave point detection algorithm so as to enable the target rice in the rice image to be single grains;
rotating the initial rice image according to the inclination angle of the single-grain target rice in the smooth rice image and the coordinate of the first minimum circumscribed rectangle so as to adjust the single-grain target rice to a target vertical posture, and determining a second minimum circumscribed rectangle of the single-grain target rice in the rotated initial rice image;
and expanding a preset number of pixel points outside the second minimum circumscribed rectangle, determining a cutting area of the single-grain target rice in the rotated initial rice image, and cutting to obtain a target image of the single-grain target rice.
In some embodiments, in the method for detecting rice processing accuracy, the trained detection model is obtained by training according to the following training method:
constructing a sample data set of the rice, wherein the sample data set comprises a target image of the single-grain sample rice and the processing precision of the single-grain sample rice; the target image of the single-grain sample rice is extracted from an initial rice image of a plurality of randomly placed sample rice;
extracting a plurality of characteristic parameters of each single-grain sample rice according to the target image of each single-grain sample rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain sample rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and inputting the multiple target fusion features of the single-grain sample rice, together with its processing precision, into a pre-constructed detection model, and training until the pre-constructed detection model meets the training end condition, so as to obtain the trained detection model.
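The dimensionality-reduction fusion step with a cumulative contribution rate threshold matches principal component analysis, which the classification codes also point to. A minimal two-feature sketch, assuming PCA as the reduction method; the closed-form 2x2 eigenvalue solution and all names are illustrative:

```python
import math

def covariance(data):
    """Sample covariance matrix of 2-D feature vectors."""
    n = len(data)
    mx = sum(v[0] for v in data) / n
    my = sum(v[1] for v in data) / n
    sxx = sum((v[0] - mx) ** 2 for v in data) / (n - 1)
    syy = sum((v[1] - my) ** 2 for v in data) / (n - 1)
    sxy = sum((v[0] - mx) * (v[1] - my) for v in data) / (n - 1)
    return sxx, syy, sxy

def contribution_rates(data):
    """Eigenvalues of the 2x2 covariance matrix (closed form), sorted
    descending and expressed as variance contribution rates."""
    sxx, syy, sxy = covariance(data)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam = sorted([tr / 2 + d, tr / 2 - d], reverse=True)
    total = sum(lam)
    return [l / total for l in lam]

def components_for(rates, threshold):
    """Smallest number of leading components whose cumulative
    contribution rate exceeds the preset threshold."""
    cum, k = 0.0, 0
    for r in rates:
        cum += r
        k += 1
        if cum > threshold:
            break
    return k

# Two strongly correlated features: one principal component already
# explains nearly all variance, so it alone exceeds a 0.95 threshold
rates = contribution_rates([(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.0)])
k = components_for(rates, 0.95)
```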
In some embodiments, there is also provided a detection apparatus for processing accuracy of rice, the detection apparatus comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring an initial rice image comprising a plurality of randomly placed target rice grains and extracting a target image of each single target rice grain from the initial rice image;
the extraction module is used for extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
the dimensionality reduction module is used for carrying out dimensionality reduction fusion processing on the characteristic parameters of each single-grain target rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion characteristics is greater than a preset accumulated contribution rate threshold value;
and the input module is used for inputting the target fusion characteristics of each single-grain target rice into the trained detection model to obtain the processing precision detection result of each single-grain target rice.
In some embodiments, there is also provided an electronic device comprising: the device comprises a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory are communicated through the bus when the electronic device runs, and the machine-readable instructions are executed by the processor to execute the steps of the method for detecting the processing precision of the rice.
In some embodiments, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for detecting processing accuracy of rice.
The application provides a method, an apparatus, an electronic device, and a medium for detecting rice processing precision. After an image of multiple randomly placed rice grains is processed, color characteristic parameters and texture characteristic parameters reflecting the processing characteristics of the rice are extracted as the characteristic parameters of each single grain and input into a trained detection model, which outputs the processing precision of that grain. The whole process requires no staining, does not damage the test sample, and involves no manual judgment; the detection results are objective, accurate, and repeatable, and detection takes little time, so the processing precision of rice can be detected rapidly, nondestructively, objectively, and accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flow chart of a method for detecting rice processing accuracy according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for extracting a target image of each single piece of target rice from the initial rice image according to an embodiment of the present application;
fig. 3 shows a flowchart of a method for extracting a target image of each single grain of target rice from the binarized rice image according to the embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for extracting a target image of each single grain of target rice from the smoothed rice image according to an embodiment of the present application;
fig. 5 shows a flowchart of a method for extracting a plurality of characteristic parameters of each single-grain target rice according to the embodiment of the application;
FIG. 6 is a flow chart of a method of training a test model according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a detection device for processing precision of rice according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The processing precision of rice includes precision grades such as fine grinding and proper grinding. Because rice processing excessively pursues bright, white, and fine rice while existing processing remains at a rough, low level, processing precision cannot be evaluated accurately enough to mill rice to the proper grade, and rice is therefore over-processed in pursuit of brightness, whiteness, and fineness. China is a major world producer, consumer, and trader of rice, and because of such over-processing, more than 130 million jin of grain is lost every year.
Therefore, in order to prevent excessive processing and avoid loss of grains, the processing precision of the rice must be accurately and quickly measured.
However, rice detection methods and levels in China currently lag behind. Under the standard for inspecting rice processing precision by grain and oil (GB/T 5502-2018), the prescribed methods are an eosin Y-methylene blue staining method combined with manual comparative observation, and instrument-based image analysis. Meanwhile, existing rice processing precision detection in China usually relies on manual sensory evaluation or combined human-machine methods; the detection process is time-consuming, and the results are subjective, of low accuracy, and poorly repeatable. Image-processing-based detection is mostly combined with a staining method, which is cumbersome, requires mastery of a staining technique, damages the test sample to some extent, and therefore cannot be applied to practical, real-time detection. The excessive processing precision caused by these lagging detection means not only reduces the nutritive value of the rice but also brings huge losses to enterprises.
Based on the above, the application provides a detection method in which, after an image of multiple randomly placed rice grains is processed, color characteristic parameters and texture characteristic parameters reflecting the processing characteristics of the rice are extracted as the characteristic parameters of each single grain, and the dimensionality-reduced characteristic parameters of each single grain are input into a trained detection model to obtain its processing precision. The whole process requires no staining, does not damage the test sample, and involves no manual judgment; the results are objective, accurate, and repeatable, and detection takes little time, so the processing precision of rice is detected rapidly, nondestructively, objectively, and accurately.
The following describes a method, an apparatus, an electronic device, and a medium for detecting rice processing accuracy according to embodiments of the present application in detail.
As shown in fig. 1, the method for detecting the processing accuracy of rice provided by the embodiment of the application comprises the following steps S101-S104; specifically, the method comprises the following steps:
s101, obtaining an initial rice image comprising a plurality of randomly placed target rice grains, and extracting a target image of each single target rice grain from the initial rice image;
s102, extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, and the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in the target image;
s103, performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain target rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
s104, inputting the target fusion characteristics of each single-grain target rice into a trained detection model to obtain a processing precision detection result of each single-grain target rice.
In the embodiment of the application, the method for detecting rice processing precision runs on a terminal device or a server. The terminal device may be a local terminal device; when the detection method is executed on a server, it may be implemented and executed based on a cloud interaction system, which comprises at least the server and a client device (i.e., the terminal device).
Specifically, when the detection method is applied to a terminal device, it is used for detecting the processing precision of the individual target rice grains in the rice image.
Here, in step S101, the obtained initial rice image is acquired by an image acquisition device, specifically, the image acquisition device may be a camera, a scanner, or the like.
Specifically, data transmission and interaction can be performed between at least one image acquisition device and the terminal device over a wired or wireless network according to a preset communication protocol (such as the Real Time Streaming Protocol, RTSP); during data interaction, the terminal device can control the image acquisition device to acquire images of the processed rice.
Here, in step S101, the image capturing device is used to characterize the image capturing device installed in the rice processing production line or the rice detecting system, wherein the specific number of the installed image capturing devices is not specifically limited in the embodiment of the present application in consideration of the difference in the number of times of detecting the rice processing precision and the difference in the sampling point during the rice processing.
Specifically, when the image acquisition device is a scanner, the multiple target rice grains are placed randomly on the flatbed scanner surface and need not be arranged in order, which facilitates automatic detection, increases detection speed, and shortens detection time; a black non-reflective background is selected, and imaging is realized using the scanner's reflective scanning mode.
Illustratively, in the embodiment of the present application, the size of the rice image scanned by the flatbed scanner is 7000 × 5000 pixels, and the scanner resolution is 600 dpi.
In the embodiment of the present application, in step S101, as shown in fig. 2, extracting a target image of each single-grain target rice from the initial rice image includes the following steps S201 to S203:
s201, converting the initial rice image of the randomly placed multiple target rice into a rice gray image;
s202, determining a gray segmentation threshold value of the rice gray image according to the rice gray and the background gray in the rice gray image, and processing the rice gray image according to the gray segmentation threshold value to obtain a binary rice image; the gray segmentation threshold values corresponding to different rice gray images are different;
and S203, extracting a target image of each single-grain target rice from the binarized rice image.
The obtained initial rice image is preprocessed, where the preprocessing adopts image gray-level conversion and a median filter to convert the initial rice image into a rice gray image. In the embodiment of the present application, converting the initial rice image of the randomly placed multiple grains of target rice into the rice gray image in step S201 includes:
converting the initial rice image of the randomly placed multiple grains of target rice into a first gray image by adopting image gray conversion;
determining the target gray level of each pixel point according to the gray levels of a plurality of pixel points in a preset neighborhood of each pixel point of the first gray level image;
and updating the gray level of each pixel point according to the target gray level of each pixel point in the first gray level image so as to obtain a rice gray level image.
That is to say, when the initial rice image is converted into the rice gray image, the gray value of each pixel point is adjusted and determined according to the gray values of a plurality of pixel points in the preset neighborhood around the pixel point.
The preset neighborhood refers to a neighborhood window centered on the pixel point and containing a preset number of pixel points, for example a window of size 5 × 5.
Specifically, a median filter with a window size of 5 × 5 is used to smooth the image: the gray value of each pixel point of the rice image is set to the median of the gray values of all pixel points in the 5 × 5 neighborhood window of that point, which eliminates noise while retaining the edge information of the rice image.
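The 5 × 5 median smoothing described above can be sketched with SciPy; the image here is random stand-in data, not a real rice scan:

```python
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical 8-bit grayscale rice image (random values stand in for real data).
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# 5x5 median smoothing: each pixel becomes the median of its 5x5 neighborhood,
# which removes impulse noise while preserving grain edges better than mean filtering.
smoothed = median_filter(gray, size=5)
```

At interior pixels the output is exactly the median of the 25 neighborhood values, so edge steps survive while isolated outliers are discarded.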
After preprocessing, a gray segmentation threshold of the rice gray image needs to be determined, so that the separation of a rice area and a background area in the rice gray image is realized according to the determined gray segmentation threshold.
Here, the binarized rice image represents the rice image after the background region is separated, and the binarized rice image includes only the image of the rice region.
The gray segmentation threshold of the rice gray image is determined, and there are various implementations, for example, a preset gray segmentation threshold is adopted.
In step S202 in the embodiment of the application, the gray segmentation threshold of the rice gray image is determined according to the gray of the rice area and the gray of the background area in each rice gray image; that is, the gray segmentation threshold is determined adaptively for each rice gray image, and the thresholds corresponding to different rice gray images differ, so that the rice area and the background area are separated more accurately for each rice gray image.
In step S202 in the embodiment of the present application, the gray segmentation threshold of the rice gray image is determined according to the rice gray and the background gray in the rice gray image; specifically, a histogram bimodal method is adopted to determine the gray segmentation threshold of each rice gray image. A gray histogram is drawn for each rice gray image, and the gray level corresponding to the valley between the two peaks (the rice area and the background area) in the histogram is selected as the gray segmentation threshold, so that a threshold separating the rice area from the background area is determined for each rice gray image. Even for rice with residual bran, the histogram bimodal method is computationally simple, separates the rice area from the background area more accurately, requires little computation and few computing resources, and thus helps improve the detection speed.
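A minimal NumPy sketch of the histogram bimodal idea on synthetic data; the smoothing width, the ±30-level peak-exclusion window, and the synthetic image are illustrative assumptions, not values from the application:

```python
import numpy as np

def valley_threshold(gray, min_peak_gap=30):
    """Gray level at the valley between the two dominant histogram peaks."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    kernel = np.ones(9) / 9.0
    for _ in range(10):                     # smooth away spurious local maxima
        hist = np.convolve(hist, kernel, mode="same")
    p1 = int(np.argmax(hist))               # first peak (largest mode)
    masked = hist.copy()
    masked[max(0, p1 - min_peak_gap):p1 + min_peak_gap + 1] = -1.0
    p2 = int(np.argmax(masked))             # second peak, away from the first
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(hist[lo:hi + 1]))   # valley between the peaks

# Synthetic scan: dark background near gray 40, bright rice pixels near gray 200.
rng = np.random.default_rng(1)
bright = rng.random((64, 64)) < 0.3
img = np.where(bright, rng.normal(200.0, 10.0, (64, 64)),
               rng.normal(40.0, 10.0, (64, 64))).clip(0, 255).astype(np.uint8)
t = valley_threshold(img)
binary = img > t      # True marks the (brighter) rice region
```

Because the threshold is recomputed per image, different scans get different thresholds, matching the adaptive behavior described above.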
In the embodiment of the present application, as shown in fig. 3, extracting a target image of each single grain of target rice from the binarized rice image includes the following steps S301 to S303:
s301, removing rice boundary noise and isolated noise in the binarized rice image through morphological open operation to obtain a denoised rice image;
s302, filling holes of the rice with the single target in the de-noised rice image through morphological closed operation to obtain a smooth rice image;
and S303, extracting a target image of each single-grain target rice from the smooth rice image.
Because the rice texture in the rice image is uneven, the edges are not smooth, and impurities exist, holes appear on the surface of the rice grains in the binarized rice image, which affects the subsequent adhesion segmentation. Therefore, a morphological opening operation (erosion followed by dilation) is applied to the binarized rice image to remove rice boundary noise and other isolated noise, and the opposite operation, a morphological closing operation, is applied to fill the holes in the single-grain target rice binary image. This improves the image quality and smooths the image while keeping the overall position and shape of the rice unchanged.
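The open-then-close sequence can be sketched on a toy binary mask; the grain, hole, and noise speck below are fabricated for illustration:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

# Toy binarized rice mask: a solid grain with a hole plus an isolated noise pixel.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True     # the grain
mask[9, 9] = False          # a hole inside the grain
mask[1, 1] = True           # isolated noise speck

structure = np.ones((3, 3), dtype=bool)
# Opening (erode then dilate) removes the isolated speck and boundary noise...
opened = binary_opening(mask, structure=structure)
# ...closing (dilate then erode) fills the hole, smoothing the grain
# while leaving its overall position and shape unchanged.
smooth = binary_closing(opened, structure=structure)
```

The grain's footprint is unchanged afterwards; only the speck is gone and the hole is filled.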
In the embodiment of the application, as shown in fig. 4, extracting a target image of each single-grain target rice from the smoothed rice image includes the following steps S401 to S403;
s401, separating the adhered target rice in the smooth rice image through a concave point detection algorithm so as to enable the target rice in the rice image to be single grains;
s402, rotating the initial rice image according to the inclination angle of the single-grain target rice in the smooth rice image and the coordinate of the first minimum circumscribed rectangle to adjust the single-grain target rice to a target vertical posture, and determining a second minimum circumscribed rectangle of the single-grain target rice in the rotated initial rice image;
s403, expanding a preset number of pixel points outside the second minimum circumscribed rectangle, determining the cutting area of the single-grain target rice in the rotated initial rice image, and cutting to obtain the target image of the single-grain target rice.
Here, the binarized rice image from step S203 may also be separated by the concave point detection algorithm.
That is, the step S401 may also be: and separating the adhered target rice in the binarized rice image through a concave point detection algorithm so as to enable the target rice in the rice image to be single grains.
In step S401, a concave point detection algorithm is specifically used to segment the adhered rice grains in the rice image: the concave regions are obtained by subtracting the adhered target from its minimum convex hull; the two largest regions, ranked by area, are selected as the regions where the concave points are located; the minimum distance between the two regions is computed to match the adhesion points and obtain a segmentation line; and the adhered rice grains are segmented along this line.
The concave point detection algorithm avoids the mis-segmentation that gray-value-based adhesion-segmentation algorithms suffer when the gray values of residual-bran areas and the grain surface differ greatly. The adhered rice grains are therefore segmented more accurately, each target rice in the rice image becomes a single grain, and the target image of each single-grain rice is conveniently extracted by cropping.
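The first step of the concave point method — hull minus blob — can be sketched as below. Only the concave-region extraction is shown; the matching of concave points and the drawing of the split line are omitted, and the two touching "grains" are fabricated squares:

```python
import numpy as np
from scipy.ndimage import label
from scipy.spatial import ConvexHull

def convex_deficiency(mask):
    """Concave regions of a blob: its convex hull minus the blob itself."""
    pts = np.argwhere(mask).astype(float)
    hull = ConvexHull(pts)
    # hull.equations rows [a, b, c] satisfy a*y + b*x + c <= 0 for interior points.
    yy, xx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    inside = (coords @ hull.equations[:, :2].T + hull.equations[:, 2] <= 1e-9).all(axis=1)
    return inside.reshape(mask.shape) & ~mask

# Two touching "grains": squares joined at a narrow neck, creating two notches.
mask = np.zeros((16, 16), dtype=bool)
mask[2:8, 2:8] = True
mask[8:14, 6:12] = True
deficiency = convex_deficiency(mask)
labels, n_regions = label(deficiency)                  # one label per notch
sizes = np.sort(np.bincount(labels.ravel())[1:])[::-1]  # notch areas, largest first
```

The two largest labeled regions play the role of the concave-point regions from which the split line would be matched.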
In step S402, the initial rice image is rotated according to the inclination angle of the single-grain target rice in the rice image to adjust the single-grain target rice to a target vertical posture. Specifically, the first minimum circumscribed rectangle of each grain of target rice in the image after adhesion segmentation is detected to obtain the four vertex coordinates and the inclination angle of the first minimum circumscribed rectangle, and the rotation angle is obtained from the inclination angle. Then, taking the upper-left vertex (when the width of the first minimum circumscribed rectangle is smaller than its height) or the upper-right vertex (when the width is larger than the height) of the minimum circumscribed rectangle as the center, the rice image is rotated by the obtained rotation angle so as to adjust the single-grain target rice to be cropped to the target vertical posture.
And determining a second minimum circumscribed rectangle of the single target rice in the rotated initial rice image according to the vertex coordinates of the first minimum circumscribed rectangle.
Here, the rice image includes a plurality of single-grain target rice, and each single-grain target rice corresponds to one inclination angle.
In step S403, a preset number of pixel points are expanded outside the second minimum circumscribed rectangle of the single-grain target rice in the target vertical posture to determine the cropping region of the single-grain target rice, and the target image of the single-grain target rice is obtained by cropping. Specifically, after the vertex coordinates of the rotated second minimum circumscribed rectangle are obtained, the rectangle is expanded outward on each side by a preset number of pixel points; the expanded circumscribed rectangle is taken as the cropping region of the single-grain target rice, and cropping is performed according to this region to segment out the single-grain target rice, thereby obtaining the target image of the single-grain target rice in the target vertical posture.
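The rotate-to-vertical-then-crop-with-margin idea from steps S402 and S403 can be sketched as follows. This is a simplified stand-in: the tilt is estimated from the principal axis of the grain's pixel coordinates rather than from a minimum-area rectangle, and the rotated mask is re-rasterized directly:

```python
import numpy as np

def upright_grain(mask, margin=2):
    """Rotate a single-grain mask so its long axis is vertical, then crop with a margin."""
    ys, xs = np.nonzero(mask)
    y0, x0 = ys - ys.mean(), xs - xs.mean()
    cov = np.cov(np.stack([y0, x0]))
    evals, evecs = np.linalg.eigh(cov)
    vy, vx = evecs[:, np.argmax(evals)]          # long-axis direction of the grain
    theta = np.arctan2(vx, vy)                   # tilt away from vertical
    c, s = np.cos(theta), np.sin(theta)
    ry = c * y0 + s * x0                         # rotated coordinates:
    rx = -s * y0 + c * x0                        # the long axis is now vertical
    rows = np.round(ry - ry.min()).astype(int) + margin
    cols = np.round(rx - rx.min()).astype(int) + margin
    out = np.zeros((rows.max() + margin + 1, cols.max() + margin + 1), dtype=bool)
    out[rows, cols] = True                       # bounding box expanded by `margin`
    return out

# A grain tilted 45 degrees: a 2-pixel-wide diagonal bar.
grain = np.zeros((40, 40), dtype=bool)
for i in range(20):
    grain[10 + i, 10 + i] = True
    grain[10 + i, 11 + i] = True
upright = upright_grain(grain)
```

After rotation the crop is tall and narrow, i.e. the grain sits in the vertical posture, with the `margin`-pixel border corresponding to the expanded cropping region.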
In this embodiment of the application, in the step S102, as shown in fig. 5, extracting a plurality of characteristic parameters of each single piece of target rice according to a target image of each single piece of target rice includes the following steps S501 and S502:
s501, extracting a first moment, a second moment and a third moment of a target image of each single-grain target rice on a plurality of color components to serve as a plurality of color characteristic parameters of the target image; the color components are color components of the target image in HSV color space and color components of the target image in RGB color space;
s502, obtaining a gray-gradient co-occurrence matrix of each target image according to the gray level and the gradient of pixel points in the target image of each single-grain target rice, and extracting a plurality of texture characteristic parameters of the target image through the gray-gradient co-occurrence matrix.
The characteristic parameters of the single-grain target rice are divided into two types, namely color characteristic parameters extracted based on the color characteristics of the single-grain target rice and texture characteristic parameters extracted based on the texture characteristics of the single-grain target rice.
In step S501, a plurality of color characteristic parameters are extracted based on a plurality of dimensions of the target image of the single-grain target rice on a plurality of color components, so that the color characteristic parameters represent the color characteristics of the single-grain target rice more comprehensively. In the embodiment of the present application, the plurality of dimensions of the target image include: the overall intensity of the image, the spread of the color distribution, and the symmetry of the color distribution.
Specifically, in the present application, the first moment, the second moment and the third moment of the target image of each single-grain target rice on a plurality of color components are extracted as the plurality of color characteristic parameters of the target image. The first, second and third moments on a color component are respectively the mean, the standard deviation and the skewness of the target image on that component, and respectively represent the overall intensity of the image, the spread of the color distribution, and the symmetry of the color distribution.
Here, the first, second and third moments are extracted for each of the R, G, B components in the RGB color space and the H, S, V components in the HSV color space, giving a total of 18 color feature parameters.
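The 18-dimensional color-moment vector can be sketched as below; the random patch stands in for a cropped single-grain image, and the cube root of the third central moment is used as the skewness-like third moment:

```python
import numpy as np
import colorsys

def color_moments(rgb):
    """Mean, standard deviation and skew on each of R, G, B, H, S, V (18 values)."""
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    hsv = np.array([colorsys.rgb_to_hsv(*p)
                    for p in zip(r.ravel(), g.ravel(), b.ravel())])
    channels = [r.ravel(), g.ravel(), b.ravel(), hsv[:, 0], hsv[:, 1], hsv[:, 2]]
    feats = []
    for ch in channels:
        mean = ch.mean()                          # first moment
        std = ch.std()                            # second moment
        skew = np.cbrt(((ch - mean) ** 3).mean()) # third moment (signed cube root)
        feats += [mean, std, skew]
    return np.array(feats)

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, size=(16, 16, 3)).astype(np.uint8)  # fake grain crop
features = color_moments(patch)
```

Three moments on six channels yields the 18 color features described above.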
In step S502, texture features of the target image in multiple dimensions are respectively represented according to multiple texture feature parameters obtained by extracting gray levels and gradients of pixel points in the target image.
In step S502, the gray-gradient co-occurrence matrix combines the joint statistical distribution of pixel gray levels and edge gradients and reflects the correlation between the gray level and the gradient (edge) of each pixel point. The plurality of texture features extracted from the gray-gradient co-occurrence matrix therefore effectively alleviate the misclassification caused by unclear rice boundaries due to dark brown bran remaining on part of the rice.
Specifically, 15 texture characteristic parameters including small gradient advantage, large gradient advantage, gray distribution nonuniformity, gradient distribution nonuniformity, energy, gray average, gradient average, gray mean square error, gradient mean square error, correlation, gray entropy, gradient entropy, mixed entropy, inertia and inverse difference moment are obtained through a gray-gradient co-occurrence matrix.
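A sketch of the matrix construction and a subset of those statistics; the bin count, the gradient estimator (`np.gradient`), and the exact dominance formulas are illustrative assumptions, and only 4 of the 15 parameters are shown:

```python
import numpy as np

def ggcm_features(gray, bins=16):
    """A few statistics from a gray-gradient co-occurrence matrix."""
    g = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(g)
    grad = np.hypot(gx, gy)                      # gradient magnitude per pixel
    # Quantize gray level and gradient magnitude to `bins` levels each.
    qg = (g / (g.max() + 1e-12) * (bins - 1)).astype(int)
    qd = (grad / (grad.max() + 1e-12) * (bins - 1)).astype(int)
    H = np.zeros((bins, bins))
    np.add.at(H, (qg.ravel(), qd.ravel()), 1)    # joint gray/gradient histogram
    P = H / H.sum()                              # normalized co-occurrence matrix
    j = np.arange(bins, dtype=float)             # gradient-level index (columns)
    return {
        "small_grad": (P / (j + 1.0) ** 2).sum(),   # small-gradient dominance
        "large_grad": (P * (j + 1.0) ** 2).sum(),   # large-gradient dominance
        "energy": (P ** 2).sum(),
        "mixed_entropy": -(P[P > 0] * np.log2(P[P > 0])).sum(),
    }

rng = np.random.default_rng(5)
feats = ggcm_features(rng.random((32, 32)) * 255.0)  # noisy stand-in texture
flat = ggcm_features(np.full((8, 8), 7.0))           # featureless patch
```

A featureless patch concentrates all mass in one cell of the matrix (energy 1, entropy 0), while a textured patch spreads it out, which is exactly what these statistics measure.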
In the embodiment of the present application, 33-dimensional feature parameters are extracted in total, which is a high dimensionality. Based on this, as described in step S103, the present application reconstructs, from the feature parameters in the plurality of dimensions, a plurality of low-dimensional target fusion features that retain the high-dimensional features as much as possible, and uses these low-dimensional target fusion features to represent the plurality of high-dimensional feature parameters. The cumulative contribution rate of the target fusion features is greater than a preset cumulative contribution rate threshold, where the cumulative contribution rate represents the degree to which the target fusion features represent the rice characteristic information. For example, a preset cumulative contribution rate threshold of 99% indicates that the target fusion features can represent at least 99% of the rice characteristic information.
Specifically, a principal component analysis method is adopted to reduce the dimensionality of the strongly correlated characteristic parameters: through a corresponding mathematical transformation, the original 33-dimensional rice characteristic parameters are combined into a new group of mutually uncorrelated target fusion features (i.e., principal components). This reduces data redundancy and the calculation amount while minimizing the loss of the original rice characteristic information (i.e., the high-dimensional rice characteristic parameters), saving a large amount of time.
In the embodiment of the present application, the principal components whose cumulative contribution rate, obtained by the principal component analysis method, reaches 99% are taken as the preferred target fusion features.
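With scikit-learn, retaining components up to a 99% cumulative contribution rate is a one-liner; the 200 × 33 feature matrix below is synthetic (built from 5 hidden factors), not real rice data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical feature matrix: 200 grains x 33 correlated parameters,
# generated from 5 underlying factors plus a little noise.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 33))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 33))

# A fractional n_components keeps just enough principal components to reach
# 99% cumulative explained variance (the "cumulative contribution rate").
pca = PCA(n_components=0.99)
fused = pca.fit_transform(X)   # the low-dimensional target fusion features
```

Because the 33 columns are driven by only 5 factors, far fewer than 33 components survive while still covering 99% of the variance.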
In the step S104, the multiple target fusion features of each single-grain target rice are input to the trained detection model, so as to obtain a processing accuracy detection result of each single-grain target rice.
Specifically, the processing precision detection result of each single-grain target rice is one of three grades of processing precision, such as fine grinding, proper grinding and the like.
When determining the processing precision of rice such as a target batch of rice or rice on a production line, a detection sample can be obtained, and the processing precision of each target rice in the detection sample is detected by the detection method of the rice processing precision according to the embodiment of the present application. Then, the processing precision of the detection sample is determined according to the processing precision of each target rice in the detection sample, and the processing precision of the detection sample represents the processing precision of the target batch of rice or the batch of rice on the production line.
For example, the processing precision of the detection sample may be determined from the processing precision of each target rice in the detection sample according to the proportion of target rice at each processing precision grade in the sample.
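One possible ratio rule is a simple majority share; the per-grain results and the majority rule below are fabricated for illustration, since the application does not fix a specific formula:

```python
from collections import Counter

# Hypothetical per-grain detection results for one sampled batch.
grain_grades = (["fine grinding"] * 70
                + ["proper grinding"] * 25
                + ["other"] * 5)
counts = Counter(grain_grades)
# Take the grade holding the largest share of grains as the batch grade.
batch_grade, n = counts.most_common(1)[0]
share = n / len(grain_grades)   # proportion of grains at the batch grade
```

Here 70% of the sampled grains fall in one grade, so that grade is reported for the whole batch.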
In the embodiment of the present application, the trained detection model in the detection method of rice processing accuracy is obtained by training through the following training method as shown in fig. 6:
s601, constructing a sample data set of rice, wherein the sample data set comprises a target image of single-grain sample rice and the processing precision of the single-grain sample rice; the target image of the single-grain sample rice is extracted from an initial rice image of a plurality of randomly placed sample rice;
s602, extracting a plurality of characteristic parameters of each single-grain sample rice according to the target image of each single-grain sample rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
s603, performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain sample rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
s604, inputting a plurality of target fusion characteristics and processing precision of the single-grain sample rice into a pre-constructed detection model until the pre-constructed detection model meets training end conditions to obtain a trained detection model.
In step S601, the sample data set includes the target image of each single-grain sample rice and the processing precision of the single-grain sample rice. The single-grain sample rice is rice whose processing precision meets the processing precision standard sample of early long-grain rice (LS/T15121); the sample data set includes single-grain sample rice with processing precisions such as fine grinding, proper grinding and the like, and there are a plurality of single-grain sample rice at each processing precision.
That is to say, the sample data set includes a plurality of rice samples at each processing precision, and each rice sample includes a target image of a single piece of sample rice and the processing precision of the single piece of sample rice.
Wherein the sample data set comprises test samples and training samples. The training sample is used for training the detection model, so that the detection model learns the rice characteristic information; the test sample is used for testing the detection performance of the detection model.
The training end condition may be that the number of times of training reaches a preset number of times, the error rate of the detection model is reduced to a preset error rate, and the like.
In step S604, the pre-constructed detection model adopts a BP neural network model. The weight parameters of the constructed BP neural network model are initialized, the number of iterations is set, and a 3-layer BP neural network model is constructed to classify the 3 grades of rice processing precision. The number of nodes in the input layer of the BP neural network model is the number of fusion characteristic parameters input to the model; the number of nodes in the hidden layer is determined by the Kolmogorov theorem; and the number of nodes in the output layer is the number of rice processing precision classes.
The fusion characteristic parameters of each rice sample in the training samples are input into the BP neural network model to train it, and the structure and parameters of the BP neural network model are optimized according to the training results. The weights, biases, learning rate and other parameters that bring the BP neural network model to its optimum are stored to obtain the trained detection model, which is used to detect the rice processing precision.
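A 3-layer network of this shape can be sketched with scikit-learn's MLP (a backpropagation-trained network); the three feature clusters below are synthetic stand-ins for the fused features of the three precision grades, and the 2n+1 hidden-node count follows the Kolmogorov-style rule of thumb mentioned above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the fused features: three well-separated clusters playing
# the roles of the three processing-precision grades.
rng = np.random.default_rng(4)
centers = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0], [0.0, 5.0, 5.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(60, 3)) for c in centers])
y = np.repeat([0, 1, 2], 60)                 # grade labels

# 3-layer BP network: input nodes = number of fused features, one hidden
# layer with 2*n + 1 nodes, output nodes = 3 precision grades.
model = MLPClassifier(hidden_layer_sizes=(2 * X.shape[1] + 1,),
                      max_iter=2000, random_state=0)
model.fit(X, y)
accuracy = model.score(X, y)                 # training-set accuracy
```

On such cleanly separated clusters the network classifies essentially all training samples correctly; real fused rice features would of course overlap more.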
In some embodiments, there is also provided a detection apparatus for processing accuracy of rice, as shown in fig. 7, the detection apparatus comprising:
the acquisition module 701 is used for acquiring an initial rice image comprising a plurality of randomly placed target rice grains and extracting a target image of each single target rice grain from the initial rice image;
the extraction module 702 is configured to extract a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
the dimension reduction module 703 is configured to perform dimension reduction fusion processing on the multiple feature parameters of each single-grain target rice to obtain multiple target fusion features after dimension reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and the input module 704 is used for inputting the target fusion characteristics of each single-grain target rice into the trained detection model to obtain the processing precision detection result of each single-grain target rice.
According to the detection device for the rice processing precision, after a rice image with a plurality of grains of rice placed at random is processed, color characteristic parameters and texture characteristic parameters capable of reflecting rice processing characteristics are extracted as the characteristic parameters of each single grain of rice, and the dimension-reduced characteristic parameters of each single grain of rice are input into the trained detection model to obtain the processing precision of the single grain of rice. The whole process requires no dyeing, does not damage the detection sample, and involves no manual judgment; the detection results are objective, highly accurate and repeatable, and the detection consumes little time, so the processing precision of rice can be detected quickly, nondestructively, objectively and accurately.
In some embodiments, the extraction module in the detection device extracts a plurality of characteristic parameters of each single-grain target rice according to a target image of each single-grain target rice, and is specifically configured to:
extracting a first moment, a second moment and a third moment of a target image of each single-grain target rice on a plurality of color components to serve as a plurality of color characteristic parameters of the target image; the color components are color components of the target image in HSV color space and color components of the target image in RGB color space;
and acquiring a gray-gradient co-occurrence matrix of the target image according to the gray level and the gradient of the pixel points in the target image of each single-grain target rice, and extracting a plurality of texture characteristic parameters of the target image through the gray-gradient co-occurrence matrix.
In some embodiments, when the target image of each single-grain target rice is extracted from the initial rice image, the obtaining module in the detection apparatus is specifically configured to:
converting the initial rice image of the randomly placed multiple grains of target rice into a rice gray image;
determining a gray segmentation threshold value of the rice gray image according to the rice gray and the background gray in the rice gray image, and processing the rice gray image according to the gray segmentation threshold value to obtain a binary rice image; the gray segmentation threshold values corresponding to different rice gray images are different;
and extracting a target image of each single-grain target rice from the binarized rice image.
In some embodiments, when extracting the target image of each single grain of target rice from the binarized rice image, the obtaining module in the detection apparatus is specifically configured to:
removing rice boundary noise and isolated noise in the binarized rice image through morphological open operation to obtain a denoised rice image;
filling holes of the single-grain target rice in the de-noised rice image through morphological closed operation to obtain a smooth rice image;
and extracting a target image of each single-grain target rice from the smooth rice image.
In some embodiments, the obtaining module in the detecting device, when converting the initial rice image of the randomly placed multiple target rice grains into a rice gray scale image, is specifically configured to:
converting the initial rice image of the randomly placed multiple grains of target rice into a first gray image by adopting image gray conversion;
determining the target gray level of each pixel point according to the gray levels of a plurality of pixel points in a preset neighborhood of each pixel point of the first gray level image;
and updating the gray level of each pixel point according to the target gray level of each pixel point in the first gray level image so as to obtain a rice gray level image.
In some embodiments, when the target image of each single-grain target rice is extracted from the smoothed rice image, the obtaining module in the detection device is specifically configured to:
separating the adhered target rice in the rice image through a concave point detection algorithm so as to enable the target rice in the rice image to be single grains;
rotating the initial rice image according to the inclination angle of the single-grain target rice in the smooth rice image and the coordinate of the first minimum circumscribed rectangle so as to adjust the single-grain target rice to a target vertical posture, and determining a second minimum circumscribed rectangle of the single-grain target rice in the rotated initial rice image;
and expanding a preset number of pixel points outside the second minimum external rectangle, determining a cutting area of the single-grain target rice in the rotated initial rice image, and cutting to obtain a target image of the single-grain target rice.
In some embodiments, the device for detecting the processing accuracy of the rice further comprises a training module; the training module is specifically configured to:
constructing a sample data set of the rice, wherein the sample data set comprises a target image of the single-grain sample rice and the processing precision of the single-grain sample rice; the target image of the single-grain sample rice is extracted from an initial rice image of a plurality of randomly placed sample rice;
extracting a plurality of characteristic parameters of each single-grain sample rice according to the target image of each single-grain sample rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain sample rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and inputting the fusion characteristics and the processing precision of the multiple targets of the single-grain sample rice into a pre-constructed detection model until the pre-constructed detection model meets the training end condition to obtain the trained detection model.
In some embodiments, there is also provided an electronic device 800, as shown in fig. 8, the electronic device 800 comprising: a processor 802, a memory 801 and a bus, wherein the memory 801 stores machine-readable instructions executable by the processor 802, when the electronic device 800 operates, the processor 802 communicates with the memory 801 through the bus, and the machine-readable instructions are executed by the processor 802 to perform the steps of the method for detecting the processing accuracy of rice.
In some embodiments, there is also provided a computer-readable storage medium having stored thereon a computer program for executing the steps of the method for detecting rice processing accuracy when being executed by a processor.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a non-transitory computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A detection method for rice processing precision is characterized by comprising the following steps:
acquiring an initial rice image comprising a plurality of randomly placed target rice grains, and extracting a target image of each single target rice grain from the initial rice image;
extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, and the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in the target image;
performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain target rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and inputting the target fusion characteristics of each single-grain target rice into a trained detection model to obtain a processing precision detection result of each single-grain target rice.
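The dimensionality-reduction fusion step of claim 1 is commonly realized with principal component analysis, where the "accumulated contribution rate" is the cumulative explained-variance ratio of the retained components. A minimal sketch, assuming PCA via SVD; the claim does not fix the algorithm, and the function name and default threshold below are illustrative:

```python
import numpy as np

def reduce_by_contribution(features, threshold=0.85):
    """Keep the fewest principal components whose cumulative
    contribution rate (explained-variance ratio) exceeds the
    preset threshold, then project the features onto them."""
    X = features - features.mean(axis=0)          # center each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / (len(X) - 1)                   # component variances
    cum = np.cumsum(var / var.sum())              # cumulative contribution
    k = int(np.searchsorted(cum, threshold)) + 1  # smallest sufficient k
    return X @ Vt[:k].T, cum[k - 1]
```

The returned matrix holds the target fusion features fed to the detection model; the second value is the achieved cumulative contribution rate.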
2. The method for detecting the processing accuracy of rice according to claim 1, wherein extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice comprises:
extracting a first moment, a second moment and a third moment of a target image of each single-grain target rice on a plurality of color components to serve as a plurality of color characteristic parameters of the target image; the color components are color components of the target image in HSV color space and color components of the target image in RGB color space;
and acquiring a gray-gradient co-occurrence matrix of the target image according to the gray level and the gradient of the pixel points in the target image of each single-grain target rice, and extracting a plurality of texture characteristic parameters of the target image through the gray-gradient co-occurrence matrix.
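The color characteristic parameters of claim 2 are the first, second and third moments (mean, standard deviation and skewness) of each color component. A minimal sketch for one channel, assuming the conventional signed-cube-root form of the third moment; the helper name is illustrative:

```python
import numpy as np

def color_moments(channel):
    """First, second and third moments of one color component:
    mean, standard deviation, and the signed cube root of the
    third central moment (skewness)."""
    c = channel.astype(np.float64).ravel()
    mean = c.mean()
    std = c.std()
    m3 = ((c - mean) ** 3).mean()                 # third central moment
    skew = np.sign(m3) * abs(m3) ** (1.0 / 3.0)   # keep the sign
    return mean, std, skew
```

Applying this to the H, S, V, R, G and B components of the target image yields the 18 color parameters implied by the claim.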
3. The method for detecting rice processing accuracy according to claim 1, wherein extracting a target image of each single-grain target rice from the initial rice image comprises:
converting the initial rice image of the randomly placed multiple grains of target rice into a rice gray image;
determining a gray segmentation threshold value of the rice gray image according to the rice gray and the background gray in the rice gray image, and processing the rice gray image according to the gray segmentation threshold value to obtain a binary rice image; the gray segmentation threshold values corresponding to different rice gray images are different;
and extracting a target image of each single-grain target rice from the binarized rice image.
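The per-image gray segmentation threshold of claim 3 (different for different rice gray images) is typically obtained by maximizing the between-class variance of rice and background, i.e. Otsu's method. The claim does not name the algorithm; this is a sketch under that assumption:

```python
import numpy as np

def otsu_threshold(gray):
    """Gray segmentation threshold maximizing the between-class
    variance of the two gray populations (rice vs. background),
    so each rice gray image gets its own threshold."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                         # class-0 probability
    mu = np.cumsum(p * np.arange(256))           # class-0 cumulative mean
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))
```

Binarizing with `gray > threshold` then yields the binarized rice image of the claim.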
4. The method for detecting rice processing accuracy according to claim 3, wherein extracting a target image of each single-grain target rice from the binarized rice image comprises:
removing rice boundary noise and isolated noise in the binarized rice image through morphological open operation to obtain a denoised rice image;
filling holes of the single-grain target rice in the de-noised rice image through morphological closed operation to obtain a smooth rice image;
and extracting a target image of each single-grain target rice from the smooth rice image.
5. The method for detecting rice processing accuracy according to claim 3, wherein converting the initial rice image of the randomly placed plurality of target rice grains into a rice gray scale image comprises:
converting the initial rice image of the randomly placed multiple grains of target rice into a first gray image by adopting image gray conversion;
determining the target gray level of each pixel point according to the gray levels of a plurality of pixel points in a preset neighborhood of each pixel point of the first gray level image;
and updating the gray level of each pixel point according to the target gray level of each pixel point in the first gray level image so as to obtain a rice gray level image.
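Determining a target gray level from a preset neighborhood and updating each pixel, as in claim 5, describes a neighborhood smoothing filter; one plausible reading is a 3×3 median filter, which suppresses impulse noise while preserving grain edges. A sketch under that assumption:

```python
import numpy as np

def median_filter3(gray):
    """Replace each pixel with the median gray level of its 3x3
    neighborhood (border pixels use edge replication)."""
    p = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    # Stack the nine shifted views, then take the per-pixel median
    stack = [p[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0).astype(gray.dtype)
```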
6. The method for detecting rice processing accuracy according to claim 4, wherein extracting a target image of each single-grain target rice from the smoothed rice image comprises:
separating the adhered target rice in the smooth rice image through a concave point detection algorithm so as to enable the target rice in the rice image to be single grains;
rotating the initial rice image according to the inclination angle of the single-grain target rice in the smooth rice image and the coordinate of the first minimum circumscribed rectangle so as to adjust the single-grain target rice to a target vertical posture, and determining a second minimum circumscribed rectangle of the single-grain target rice in the rotated initial rice image;
and expanding a preset number of pixel points outside the second minimum external rectangle, determining a cutting area of the single-grain target rice in the rotated initial rice image, and cutting to obtain a target image of the single-grain target rice.
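Claim 6 derives the grain's inclination angle and bounding rectangle, rotates the grain upright, and crops with a pixel margin. A minimal sketch: the angle is estimated here from second-order central moments of the binary mask (a stand-in for a minimum-circumscribed-rectangle routine such as OpenCV's `minAreaRect`, which the claim does not name), and the crop expands the mask's bounding box by a preset margin:

```python
import numpy as np

def tilt_angle(mask):
    """Grain inclination angle in degrees, from the second-order
    central moments of its binary mask."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu11 = (x * y).mean()
    mu20, mu02 = (x * x).mean(), (y * y).mean()
    return 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))

def crop_with_margin(img, mask, margin=5):
    """Bounding box of the mask expanded by `margin` pixels,
    clipped to the image, as in the claimed cutting step."""
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, img.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, img.shape[1])
    return img[y0:y1, x0:x1]
```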
7. The method for detecting rice processing accuracy according to claim 1, wherein the trained detection model is obtained by the following training method:
constructing a sample data set of the rice, wherein the sample data set comprises a target image of the single-grain sample rice and the processing precision of the single-grain sample rice; the target image of the single-grain sample rice is extracted from an initial rice image of a plurality of randomly placed sample rice;
extracting a plurality of characteristic parameters of each single-grain sample rice according to the target image of each single-grain sample rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
performing dimensionality reduction fusion processing on the characteristic parameters of each single-grain sample rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and inputting the fusion characteristics and the processing precision of the multiple targets of the single-grain sample rice into a pre-constructed detection model until the pre-constructed detection model meets the training end condition to obtain the trained detection model.
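Claim 7 does not fix the detection model, only that it is fitted on fused features labeled with processing precision. As a hedged illustration of that training loop, a minimal nearest-centroid classifier stand-in (not the patent's model; class and method names are illustrative):

```python
import numpy as np

class CentroidDetector:
    """Stand-in detection model: fits one centroid per
    processing-precision grade and predicts the nearest one."""

    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0)
                                   for c in self.labels])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each grade centroid
        d = np.linalg.norm(X[:, None] - self.centroids[None], axis=2)
        return self.labels[d.argmin(axis=1)]
```

In practice the fused features of the sample rice and their precision grades play the roles of `X` and `y`, and training stops once the model meets the end condition (e.g. validation accuracy).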
8. A detection device for rice processing precision, characterized in that the detection device comprises:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring an initial rice image comprising a plurality of randomly placed target rice grains and extracting a target image of each single target rice grain from the initial rice image;
the extraction module is used for extracting a plurality of characteristic parameters of each single-grain target rice according to the target image of each single-grain target rice; the characteristic parameters comprise color characteristic parameters and texture characteristic parameters, wherein the texture characteristic parameters are obtained by extracting according to the gray level and the gradient of pixel points in a target image;
the dimensionality reduction module is used for carrying out dimensionality reduction fusion processing on the characteristic parameters of each single-grain target rice to obtain a plurality of target fusion characteristics subjected to dimensionality reduction; the accumulated contribution rate of the target fusion features is larger than a preset accumulated contribution rate threshold value;
and the input module is used for inputting the target fusion characteristics of each single-grain target rice into the trained detection model to obtain the processing precision detection result of each single-grain target rice.
9. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor communicates with the memory through the bus when the electronic device runs, and the machine-readable instructions, when executed by the processor, perform the steps of the method for detecting the processing accuracy of rice according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when executed by a processor, performs the steps of the method for detecting rice processing accuracy according to any one of claims 1 to 7.
CN202210547479.5A 2022-05-18 2022-05-18 Detection method and device for rice processing precision, electronic equipment and medium Pending CN114881984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210547479.5A CN114881984A (en) 2022-05-18 2022-05-18 Detection method and device for rice processing precision, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210547479.5A CN114881984A (en) 2022-05-18 2022-05-18 Detection method and device for rice processing precision, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN114881984A true CN114881984A (en) 2022-08-09

Family

ID=82676996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210547479.5A Pending CN114881984A (en) 2022-05-18 2022-05-18 Detection method and device for rice processing precision, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114881984A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423702A (en) * 2022-08-23 2022-12-02 自然资源部国土卫星遥感应用中心 Method and system for manufacturing large-area space-borne optical and SAR (synthetic Aperture Radar) image DOM (document object model)


Similar Documents

Publication Publication Date Title
CN108776140B (en) Machine vision-based printed matter flaw detection method and system
Yiyang The design of glass crack detection system based on image preprocessing technology
CN113592861B (en) Bridge crack detection method based on dynamic threshold
CN110599552B (en) pH test paper detection method based on computer vision
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN109447945B (en) Quick counting method for basic wheat seedlings based on machine vision and graphic processing
CN107157447B (en) Skin surface roughness detection method based on image RGB color space
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN116645367B (en) Steel plate cutting quality detection method for high-end manufacturing
CN112149543B (en) Building dust recognition system and method based on computer vision
CN111415339B (en) Image defect detection method for complex texture industrial product
CN116721391B (en) Method for detecting separation effect of raw oil based on computer vision
CN114926410A (en) Method for detecting appearance defects of brake disc
CN114549441A (en) Sucker defect detection method based on image processing
CN108665468B (en) Device and method for extracting tangent tower insulator string
CN115511814A (en) Image quality evaluation method based on region-of-interest multi-texture feature fusion
CN114881984A (en) Detection method and device for rice processing precision, electronic equipment and medium
CN116805302A (en) Cable surface defect detection device and method
CN109682821B (en) Citrus surface defect detection method based on multi-scale Gaussian function
CN114332079A (en) Plastic lunch box crack detection method, device and medium based on image processing
CN111738984B (en) Skin image spot evaluation method and system based on watershed and seed filling
CN110490868B (en) Nondestructive counting method based on computer vision corn cob grain number
CN116433978A (en) Automatic generation and automatic labeling method and device for high-quality flaw image
CN112837271B (en) Melon germplasm resource character extraction method and system
CN114723728A (en) Method and system for detecting CD line defects of silk screen of glass cover plate of mobile phone camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination