CN110782025B - Rice processing online process detection method - Google Patents


Info

Publication number
CN110782025B
CN110782025B (application CN201911406271.6A)
Authority
CN
China
Prior art keywords
neural network
layer
pixels
network system
rice
Prior art date
Legal status
Active
Application number
CN201911406271.6A
Other languages
Chinese (zh)
Other versions
CN110782025A (en)
Inventor
蒋志荣
Current Assignee
Changsha Rongye Intelligent Manufacturing Co ltd
Original Assignee
Changsha Rongye Intelligent Manufacturing Co ltd
Priority date
Filing date
Publication date
Application filed by Changsha Rongye Intelligent Manufacturing Co., Ltd.
Priority to CN201911406271.6A
Publication of CN110782025A
Application granted
Publication of CN110782025B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a rice processing online process detection method, which comprises the following steps. Step S1: construct an artificial neural network system based on machine vision; the system comprises an input layer, a hidden layer and an output layer, where the first layer is the input layer, each input vector occupying one neuron, and the last layer is the output layer. Step S2: use image information acquired by machine vision as the input vector of the artificial neural network system. Step S3: after the output of the last layer of the artificial neural network system, fuzzify each object with a membership function, then discriminate with a hard polarity function to complete defuzzification. Step S4: train the artificial neural network system with physical samples. Step S5: once training is finished, use the neural network online. The invention has the advantages of a simple principle, easy realization and high detection precision.

Description

Rice processing online process detection method
Technical Field
The invention mainly relates to the technical field of intelligent rice processing, and in particular to a rice processing online process detection method.
Background
Intelligent manufacturing must be deployed around the smart factory, whose core is being "data driven". The core of an intelligent rice processing factory is likewise data-driven operation, and online process detection of each processing procedure is the core of that core: it provides the original driving force and the fundamental basis for intelligent control of rice processing production equipment.
In fact, online process detection is precisely the bottleneck in realizing intelligent rice processing factories. Currently, for the different processing procedures, the content of online process detection includes the following aspects:
(a) cleaning and stone removal: detecting inorganic impurities (such as mud and stones) and organic impurities (straw and other organic matter such as wheat grains, corn and barnyard grass) in rice (including brown rice);
(b) rice hulling: detecting paddy, crushed brown rice, immature brown rice, cracked brown rice, etc. in brown rice;
(c) rice milling: detecting the brown rice, open-husk brown rice, rice-bran layer, germ-remaining rice and husk-remaining rice required by each rice milling pass;
(d) color sorting: detecting abnormal grains such as diseased grains, yellow grains and chalky grains;
(e) polishing: detecting the degree of bran powder attachment on the grain surface, the smoothness of the grain surface, the transparency of the grains, the whiteness of the grains and the grinding level of the grain structure;
(f) finished product: detecting skin-remaining grains, embryo-remaining grains, diseased grains, yellow grains, chalky grains, broken grains, needlepoint-diseased grains, immature grains, etc.
The detection items across the processing and production flow are therefore numerous, and because naturally grown crops are so rich in expression that every grain differs from the next, online process detection for rice processing is extremely difficult and challenging; this is the root cause of it remaining a worldwide problem.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the technical problems in the prior art, the invention provides a rice processing online process detection method that is simple in principle, easy to realize and high in detection precision.
In order to solve the technical problems, the invention adopts the following technical scheme:
a rice processing on-line process detection method comprises the following steps:
step S1: constructing an artificial neural network system based on machine vision;
the artificial neural network system comprises an input layer, a hidden layer and an output layer, wherein the first layer is the input layer, and each input vector occupies one neuron; the last layer is an output layer; the former part of the hidden layer is a convolutional neural network, and the latter part is a BP feedback neural network; the hidden layer comprises the following components: the second layer and the third layer of the entire artificial neural network system are convolutional layers of the neural network, and the S function is used as an activation function, and f (X) = X ethanol + b is used as a filtering function; the fourth layer and the fifth layer are pooling layers of a neural network, and the S function is used as an activation function to reduce the dimension of the feature space of the detection object; the sixth layer to the tenth layer are BP layers, supervised learning of neural network training is realized by applying a feedback algorithm, a deformed L function is used as an activation function, and the deformation rule of the L function is that f (X) = aX + b, when X >0, a =3, and when X <0, a = 0.2;
step S2: rice image information acquired based on machine vision is used as an input vector of an artificial neural network system;
step S3: after the output of the last layer of the artificial neural network system, fuzzifying each object by using a membership function, and finally judging through a hard polarity function to finish the solution of fuzzification;
step S4: training an artificial neural network system by using a rice physical sample;
step S5: and finishing the training of the neural network and using the neural network on line.
As a further improvement of the invention: after step S4 is completed, the method further includes: providing physical samples that have not been machine-learned to the artificial neural network system for discrimination, marking the misjudged samples, and providing them to the artificial neural network system for reinforcement learning.
As a further improvement of the invention: in step S2, the pixels in the image are scanned one by one and each pixel is compared with its adjacent pixels to obtain ΔV values in four directions, where ΔV is the difference in brightness between two adjacent pixels; pixels whose ΔV conforms to the "abrupt change" characteristic are marked and connected to form the outer contour of the detected object;
each obtained object is then segmented to obtain six curve sections A, B, C, D, E, F, the six curve sections are respectively converted into curve functions f(x) in rectangular coordinates, and the six curve functions are converted into six input vectors of the neural network, q1 through q6, corresponding to six neurons of the neural network input layer.
As a further improvement of the invention: the pixels within the outer contour of the detected object are scanned one by one and each pixel is compared with its adjacent pixels to obtain its ΔH value, where H is "hue" in machine vision; pixels whose ΔH conforms to the "mutation" characteristic are marked, forming points conforming to the mutation characteristic. If no other point exists within a certain distance threshold around a point, it remains a point; if one other point is within the distance threshold, the two are connected into a line; if more than two other points are within the distance threshold, the points are connected into a plane. By extraction, several values are randomly drawn from the number of extracted points, the length of the lines, the width of the planes, the H, S, V values within the color blocks and the number of pixels contained in each color block, and are respectively organized into input vectors corresponding to neurons of the neural network input layer.
As a further improvement of the invention: the pixels within the outer contour of the detected object are scanned one by one to obtain the ΔV value of each pixel against its adjacent pixels, ΔV again being the difference in brightness values; points whose ΔV conforms to the "secondary mutation" characteristic are marked and connected into lines, and these lines are the "texture" of the detected object. From the textures obtained from the image information, the number, length, trend and distribution position of the textures, several H, S, V values inside the textures and several H, S, V values along the texture edges are respectively organized into input vectors corresponding to input-layer neurons of the neural network.
As a further improvement of the invention: within the outer contour of the detected object, adjacent pixels are grouped into small units according to a preset value, the average of H, S, V within each small unit is computed, and several small units are randomly extracted from the contour to obtain input vectors.
As a further improvement of the invention: a rectangular coordinate system is defined and divided into eight quadrants marked 1, 2, 3, 4, 5, 6, 7 and 8; all constituent pixels of the contour boundary of the detected object are scanned one by one, and whenever the positional relation between each adjacent next pixel and the previous pixel falls into a certain quadrant in the figure, the numerical value corresponding to that quadrant is marked; the proportions of 1, 2, 3, 4, 5, 6, 7 and 8 among all adjacency relations are then respectively counted to form 8 input vectors.
Compared with the prior art, the invention has the following advantages: the rice processing online process detection method is simple in principle, easy to realize and high in detection precision, and by constructing the machine-vision-based artificial neural network system and relying on the post-scanning processing strategy it can truly realize efficient, rapid and accurate online process detection of rice processing.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
FIG. 2 is a schematic diagram of the effect of acquiring the outer contour of the detected object in a specific application example of the invention; in the figure, (a), (b) and (c) are outer contours of the detected object acquired from three different viewing angles.
Fig. 3 is a schematic diagram illustrating an effect of segmenting each acquired object in a specific application example of the present invention.
FIG. 4 is a schematic diagram of the effect of again scanning the pixels within the outer contour of the detected object one by one in a specific application example of the invention; in the figure, (a) shows points conforming to the mutation characteristic; (b) shows points connected into a line; (c) shows points connected into a plane.
FIG. 5 is a schematic diagram illustrating the effect of scanning pixels within the outline of an object to be inspected one by one to obtain Δ V values of each pixel and adjacent pixels in an exemplary embodiment of the present invention; in the figure, (a), (b), and (c) are schematic diagrams of acquiring textures in a detection object from three different viewing angles, respectively.
FIG. 6 is a diagram illustrating the effect of defining a rectangular coordinate in a specific application example of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1, the rice processing online process detection method of the present invention includes the steps of:
step S1: constructing an artificial neural network system based on machine vision;
the neural network system is composed of an input layer, a hidden layer and an output layer. The first layer is an input layer, and each input vector occupies one neuron; the last, eleventh, layer is the output layer.
Step S2: rice image information acquired based on machine vision is used as an input vector of the artificial neural network system.
Step S3: after the output of the last layer of the neural network, each object is fuzzified with a membership function, and finally the hard polarity function is used for discrimination (defuzzification).
Step S4: the neural network is trained using a physical sample of rice.
Step S5: and finishing the training of the neural network and using the neural network on line.
As a preferred embodiment, the method of the present invention may further include, after step S4: providing physical samples that have not been machine-learned to the neural network for discrimination, marking the misjudged samples, and feeding them back to the neural network for reinforcement learning.
In a specific application example, the second through tenth layers are the hidden layers of the neural network. To accommodate the complexity of the detected (i.e., classified) objects, the invention designs the artificial neural network with full consideration of extracting features sensitive to the classification of the detected objects and of applying prior knowledge, and designs it as a hybrid neural network: the front part of the hidden layer is a convolutional neural network and the rear part is a BP feedback neural network, each using a different activation function, so as to bring the advantages of both network types into full play and to reduce neural network training time as far as possible while achieving accurate, complex classification.
The hidden layer is composed as follows: the second and third layers (of the entire neural network) are convolutional layers, using the S function as the activation function and f(X) = X∗W + b as the filtering function, where ∗ denotes convolution with the kernel W; the fourth and fifth layers are pooling layers, still using the S function as the activation function, which reduce the dimensionality of the feature space of the detected object; the sixth through tenth layers are BP layers, which realize the supervised learning of neural network training with a feedback algorithm and, to prevent the gradient from vanishing, use a deformed L function as the activation function, the deformation rule being f(X) = aX + b with a = 3 when X > 0 and a = 0.2 when X < 0.
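To make the topology concrete, the following is a minimal PyTorch sketch of the eleven-layer hybrid network described above. The channel counts, kernel sizes and fully connected widths are hypothetical (the patent does not fix them); the 1724-dimensional input corresponds to the 6 + 87 + 123 + 1500 + 8 input vectors enumerated in steps S201-S205 below, and the constant b of the deformed L function is folded into the preceding linear layers' biases.

```python
import torch
import torch.nn as nn

class DeformedL(nn.Module):
    """Deformed L activation: f(X) = aX with a = 3 for X > 0 and a = 0.2 for X < 0
    (the constant b is absorbed into the preceding Linear layer's bias)."""
    def forward(self, x):
        return torch.where(x > 0, 3.0 * x, 0.2 * x)

class HybridRiceNet(nn.Module):
    """Layers 2-3: convolution with sigmoid 'S' activation; layers 4-5: pooling
    with sigmoid; layers 6-10: BP (fully connected) layers with DeformedL;
    layer 11: output. Widths and kernel sizes are illustrative assumptions."""
    def __init__(self, n_inputs=1724, n_classes=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.Sigmoid(),   # layer 2
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.Sigmoid(),  # layer 3
            nn.MaxPool1d(2), nn.Sigmoid(),                             # layer 4
            nn.MaxPool1d(2), nn.Sigmoid(),                             # layer 5
        )
        self.bp = nn.Sequential(                                       # layers 6-10
            nn.Linear(16 * (n_inputs // 4), 512), DeformedL(),
            nn.Linear(512, 256), DeformedL(),
            nn.Linear(256, 128), DeformedL(),
            nn.Linear(128, 64), DeformedL(),
            nn.Linear(64, 32), DeformedL(),
        )
        self.out = nn.Linear(32, n_classes)                            # layer 11

    def forward(self, x):                  # x: (batch, n_inputs) feature vector
        x = self.features(x.unsqueeze(1))  # add a channel axis for Conv1d
        return self.out(self.bp(x.flatten(1)))
```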
It is understood that different hidden layer configurations can be selected according to actual detection requirements, the above example is only a preferred embodiment of the present invention, and other embodiments should also be within the scope of the present invention.
In a specific application example, since the neural network constructed by the invention is based on machine vision, the design of its input vectors is likewise based on machine vision. To this end, in a specific application, the specific flow of step S2 may be:
Step S201: as shown in fig. 2, the pixels in the image are scanned one by one and each pixel is compared with its adjacent pixels to obtain the ΔV values of the scanned pixel in four directions (note: H, S, V are general concepts and standard parameters in machine vision, where H is hue, S is saturation and V is value, i.e., brightness or gray level; ΔV is the difference in brightness between two adjacent pixels). Pixels whose ΔV conforms to the "abrupt change" characteristic are marked, and these pixels are connected to form the outer contour of the detected object.
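As an illustration of this scan, the following sketch marks contour pixels with OpenCV and NumPy; the fixed ΔV threshold standing in for the "abrupt change" criterion is a hypothetical constant (the patent decides abruptness with a fuzzy membership test, see step S203).

```python
import cv2
import numpy as np

def mark_contour_pixels(bgr_image, dv_threshold=40):
    """Mark pixels whose brightness difference (dV) to any 4-neighbour exceeds a
    threshold; the threshold value is an illustrative stand-in."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.int16)          # V channel (brightness)
    mask = np.zeros(v.shape, dtype=bool)
    # Compare each pixel with its four neighbours (up, down, left, right).
    mask[1:, :]  |= np.abs(v[1:, :]  - v[:-1, :]) > dv_threshold
    mask[:-1, :] |= np.abs(v[:-1, :] - v[1:, :])  > dv_threshold
    mask[:, 1:]  |= np.abs(v[:, 1:]  - v[:, :-1]) > dv_threshold
    mask[:, :-1] |= np.abs(v[:, :-1] - v[:, 1:])  > dv_threshold
    return mask  # True where a pixel belongs to the outer contour
```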
As shown in fig. 3, each obtained object is then segmented to obtain six curve sections A, B, C, D, E, F; the six curve sections are respectively converted into curve functions f(x) in rectangular coordinates, and the six curve functions are further converted into six input vectors of the neural network, q1 through q6, corresponding to six neurons of the input layer of the neural network.
Step S202: as shown in fig. 4, the pixels in the contour of the detected object are further scanned one by one, and each pixel is compared with the adjacent pixels to obtain the Δ H value (note: H is "hue" in machine vision), and the pixels with Δ H conforming to the "abrupt change" feature are marked to form the points conforming to the abrupt change feature; if no other point exists in a certain distance threshold range around a certain point, the point is determined; if there is another point within a certain distance threshold around a certain point, then connecting to a line; if more than two other points are within a certain distance threshold range around a certain point, connecting the points into a plane; as shown in fig. 4, points (as in fig. 4 left), lines (as in fig. 4), faces (as in fig. 4 right) are presented, where a is a face, b and c are lines, e, f, g, h.. are points; by means of decimation, the number of points to be decimated, the length of the line, the width of the plane, the respective randomly decimated 20 values of H, S, V in the color patch (including point, line, plane), and the number of pixels contained in each color patch are organized into input vectors, q7, q8, q9, q10, q11, q12, q13.. q80, respectively, corresponding to 87 neurons in the input layer of the neural network.
Step S203: as shown in fig. 5, further, the pixels in the contour of the object to be detected are scanned one by one to obtain the Δ V value of each pixel and the adjacent pixels, which is still the difference between the brightness values, the Δ V is matched with the "secondary mutation" feature (note: the outer contour is obtained by searching for the point mark matched with the "mutation feature", here, the point mark matched with the "secondary mutation" feature, meaning that the mutation degree is small, the difference between the "mutation" and the "secondary mutation" is determined by the fuzzy theory "membership degree" algorithm, the membership degree is greater than 0.8, the point mark is a mutation, and the membership degree is 0.4-0.8, the point mark is a secondary mutation), and the point mark is connected to form a line, the line in the image information is the "texture" of the object to be detected, the "texture" is actually a wide or narrow "line" in the image information, and the "line" actually has a certain area from the pixels of the image information, the texture "edge" is the contour of the texture and the texture "inner" is the region within the texture contour. The number, length, direction, distribution position, and H, S, V values of the textures "in" are 10, and H, S, V values of texture "edge" are 20 (10 on the left and right sides), which are respectively organized into 123 input vectors, corresponding to 123 input layer neurons of the neural network.
Step S204: further, every adjacent 9 pixels in the outline of the detected object form a small unit, the average value of H, S, V in each small unit is counted, 500 small units are randomly extracted from the outline, and 1500 input vectors are obtained.
Step S205: as shown in fig. 6, a rectangular coordinate is defined, the coordinate is divided into eight quadrants, which are respectively marked as 1, 2, 3, 4, 5, 6, 7, and 8, all the constituent pixels on the contour boundary of the detected object are scanned one by one, the position relationship between each adjacent next pixel and the previous pixel conforms to a certain direction (quadrant) in the graph, i.e., the numerical value corresponding to the quadrant is marked, and the proportions of 1, 2, 3, 4, 5, 6, 7, and 8 in all the adjacent relationships are respectively counted to form 8 input vectors.
In the specific application example, in step S3 the last layer of the neural network outputs a feature description for each specific sample, and this description is numerical and multidimensional. Each "dimension" of the multiple dimensions is concretely expressed as a "quantity" along one direction of the feature Euclidean space; in effect, it is the "value" of the sample in this "feature dimension" as understood by the neural network. This value cannot directly identify and classify samples; it needs to be converted into a "conformity" in the feature dimension, expressed as a decimal between 0 and 1, and this process is called "fuzzification". The hard polarity function uses f(x) = 1 when x is greater than a and f(x) = 0 when x is less than a, where the value of a is likewise learned through training of the neural network. Completing a discrimination of 1 or 0, i.e., a classification, is "defuzzification".
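A sketch of this fuzzification and hard-polarity defuzzification; the Gaussian membership shape is an assumption, since the patent only requires a membership function whose threshold a is learned:

```python
import numpy as np

def fuzzify(values, centers, widths):
    """Convert raw feature-dimension 'quantities' into conformity degrees in
    [0, 1] using a Gaussian membership function (shape assumed)."""
    x, c, w = (np.asarray(a, dtype=float) for a in (values, centers, widths))
    return np.exp(-((x - c) / w) ** 2)

def hard_polarity(membership, a):
    """Defuzzification: f(x) = 1 when x > a, else 0; the threshold a is itself
    obtained through neural-network training."""
    return (np.asarray(membership) > a).astype(int)
```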
In a specific application example, the invention physically relies on a particle dynamic detection device (patent number 201720785155X, titled: an automatic particulate matter detection mechanism). Actual operation was carried out according to the method of the invention, with the following specific flow:
1. All kinds of samples were obtained by the online detection device, including: stones, mud blocks, rice straw, wheat, corn, barnyard grass, rice stems, rice, brown rice, crushed brown rice, immature brown rice, standard samples of each rice milling pass (I, II, III and IV), rough grains, husk-remaining grains, germ-remaining grains, diseased grains, yellow grains, broken rice, rice grains with bran powder on the grain surface, clean rice, smooth-surfaced rice and unpolished rice.
2. Training the neural network (a code sketch of this staged scheme follows after step (15) below):
(1) The rice image information acquired based on machine vision in the method is used as the input vector of the artificial neural network system: unpolished rice grains are used as negative samples B and smooth-surfaced grains as positive samples A, and the neural network is trained. After training is finished, samples that were not learned are provided to the neural network for discrimination; any A misjudged as B is relearned as a positive sample, and any B misjudged as A is relearned as a B sample, until both the misjudgment rate and the missed-judgment rate are below 0.1%. This completes the learning of the polished standard sample.
(2) Taking a sample with smooth grain surface as a negative sample B, taking a sample with bran powder left on the grain surface as a positive sample A, training a neural network, and repeating the action of (1); and finishing the learning of the grain flour bran powder sample.
(3) Taking all the samples (including the positive sample and the negative sample, the same in the following) in the steps (1) and (2) as a negative sample B, taking a final grinding standard sample as a positive sample A, training a neural network, and repeating the action of the step (1); and finishing the learning of the final grinding standard-reaching sample.
(4) Taking all the samples in (1), (2) and (3) as negative samples B and the skin-remaining grains as positive samples A, training the neural network and repeating the action of (1); this completes the learning of skin-remaining grains.
(5) Taking all the samples in (1), (2) and (3) as negative samples B and the embryo-remaining grains as positive samples A, training the neural network and repeating the action of (1); this completes the learning of embryo-remaining grains.
(6) Following the same procedure as (3), sample learning for each rice milling pass is completed, whatever the number of passes used for multi-machine light milling (2, 3, 4, 5 or 6 rice milling passes).
(7) Taking all the samples from (1) to (6) together with brown rice as negative samples B and the diseased grains as positive samples A, training the neural network and repeating the action of (1); this completes the learning of diseased grains.
In the same way, the learning of yellow grains is completed.
In the same way, the learning of chalky grains is completed.
(8) Taking all the samples from (1) to (7) and brown rice as negative samples B, taking the rough grains as positive samples A, training a neural network, and repeating the action of (1); the study of the rough grains is completed.
(9) Taking all the samples from (1) to (8) as negative samples B and broken rice grains meeting the national-standard broken rice characteristics as positive samples A, training the neural network and repeating the action of (1); this completes the learning of broken rice.
(10) Taking all the samples from (1) to (9) as a negative sample B, taking the unripe brown rice as a positive sample A, training a neural network, and repeating the action of (1); completing the learning of the unripe brown rice.
(11) Taking all the samples from (1) to (10) as negative samples B, taking rice as positive samples A, training a neural network, and repeating the action of (1); and finishing the learning of the rice.
(12) Taking all the samples from (1) to (11) as negative samples B, taking straw, rice stems, corn, wheat and barnyard grass as positive samples A respectively, training a neural network, and repeating the action of (1); the study of the straw, the rice stalk, the corn, the wheat and the barnyard grass is completed.
Unlike the prior art, the straw, rice stems, corn, wheat and barnyard grass are put in as learning samples sequentially, rather than mixed together or put in all at once.
(13) Taking all the samples from (1) to (12) as negative samples B, taking inorganic impurities such as stones and mud blocks as positive samples A, training a neural network, and repeating the action of (1); and finishing the learning of inorganic impurities.
(14) The training effect of the neural network is tested, and the training steps are repeated whenever deviations exist, until the classification misjudgment rate and missed-judgment rate of every class are below 0.1%.
(15) After training is finished, when the system is used online, the discrimination and classification order is the reverse of the learning order.
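The staged scheme of steps (1)-(15) amounts to a one-vs-rest cascade in which every finished class joins the negative pool of the next stage. A sketch follows, with `train_fn` and `judge_fn` as hypothetical callbacks standing in for the network training step and the held-out discrimination test:

```python
def train_cascade(initial_negatives, stages, train_fn, judge_fn, max_rate=0.001):
    """Sketch of steps (1)-(15): each stage learns one new class as positives A
    against all previously learned samples as negatives B, relearning misjudged
    samples until both error rates fall below 0.1%."""
    learned = list(initial_negatives)      # e.g. unpolished grains for stage (1)
    for positives in stages:               # polished, bran-dusted, final-milled, ...
        negatives = list(learned)
        while True:
            train_fn(positives, negatives)
            miss_a, miss_b = judge_fn(positives, negatives)  # A judged B, B judged A
            if (len(miss_a) / max(len(positives), 1) < max_rate
                    and len(miss_b) / max(len(negatives), 1) < max_rate):
                break
            positives = positives + miss_a  # A misjudged as B relearned as A
            negatives = negatives + miss_b  # B misjudged as A relearned as B
        learned += positives                # finished class joins the negative pool
    return learned
```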
After the method was adopted and put into practical application, online process detection of rice processing was carried out on the dynamic detection device after training; the error of each index is less than or equal to 0.2%, and the repeatability error is less than or equal to 0.3%.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (2)

1. A rice processing online process detection method, characterized by comprising the following steps:
step S1: constructing an artificial neural network system based on machine vision;
the artificial neural network system comprises an input layer, a hidden layer and an output layer, wherein the first layer is the input layer, and each input vector occupies one neuron; the last layer is an output layer; the former part of the hidden layer is a convolutional neural network, and the latter part is a BP feedback neural network; the hidden layer comprises the following components: the second layer and the third layer of the entire artificial neural network system are convolutional layers of the neural network, and the S function is used as an activation function, and f (X) = X ethanol + b is used as a filtering function; the fourth layer and the fifth layer are pooling layers of a neural network, and the S function is used as an activation function to reduce the dimension of the feature space of the detection object; the sixth layer to the tenth layer are BP layers, supervised learning of neural network training is realized by applying a feedback algorithm, a deformed L function is used as an activation function, and the deformation rule of the L function is that f (X) = aX + b, when X >0, a =3, and when X <0, a = 0.2; b is a constant;
step S2: rice image information acquired based on machine vision is used as an input vector of an artificial neural network system;
in step S2, the pixels in the image are scanned one by one and each pixel is compared with adjacent pixels to obtain ΔV values in four directions, where ΔV is the difference in brightness between two adjacent pixels; pixels whose ΔV conforms to the "abrupt change" characteristic are marked and connected to form the outer contour of the detected object;
each obtained object is segmented to obtain six curve sections A, B, C, D, E, F, the six curve sections are respectively converted into curve functions f(x) in rectangular coordinates, and the six curve functions are converted into six input vectors of the neural network, q1 through q6, corresponding to six neurons of the neural network input layer;
the pixels within the outer contour of the detected object are scanned one by one and each pixel is compared with adjacent pixels to obtain its ΔH value, where H is "hue" in machine vision; pixels whose ΔH conforms to the "mutation" characteristic are marked, forming points conforming to the mutation characteristic; if no other point exists within a certain distance threshold around a point, it remains a point; if one other point is within the distance threshold, the two are connected into a line; if more than two other points are within the distance threshold, the points are connected into a plane; by extraction, several values are randomly extracted from the number of extracted points, the length of the lines, the width of the planes, the H, S, V values within the color blocks and the number of pixels contained in each color block, and are respectively organized into input vectors corresponding to neurons of the neural network input layer;
the pixels within the outer contour of the detected object are scanned one by one to obtain the ΔV value of each pixel against adjacent pixels, again the difference in brightness values; points whose ΔV conforms to the "secondary mutation" characteristic are marked and connected into lines, these lines being the "texture" of the detected object; from the textures obtained from the image information, the number, length, trend and distribution position of the textures, several H, S, V values inside the textures and several H, S, V values along the texture edges are respectively organized into input vectors corresponding to input-layer neurons of the neural network;
adjacent pixels within the outer contour of the detected object are grouped into small units according to a preset value, the average of H, S, V within each small unit is computed, and several small units are randomly extracted from the contour to obtain input vectors;
a rectangular coordinate system is defined and divided into eight quadrants marked 1, 2, 3, 4, 5, 6, 7 and 8; all constituent pixels of the contour boundary of the detected object are scanned one by one, and whenever the positional relation between each adjacent next pixel and the previous pixel falls into a certain quadrant in the figure, the numerical value corresponding to that quadrant is marked; the proportions of 1, 2, 3, 4, 5, 6, 7 and 8 among all adjacency relations are respectively counted to form 8 input vectors;
step S3: after the output of the last layer of the artificial neural network system, fuzzifying each object by using a membership function, and finally judging through a hard polarity function to finish the solution of fuzzification;
step S4: training an artificial neural network system by using a rice physical sample;
step S5: and finishing the training of the neural network and using the neural network on line.
2. The rice processing online process detection method as claimed in claim 1, further comprising, after completion of step S4: providing physical samples that have not been machine-learned to the artificial neural network system for discrimination, marking the misjudged samples, and providing them to the artificial neural network system for reinforcement learning.
CN201911406271.6A 2019-12-31 2019-12-31 Rice processing online process detection method Active CN110782025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911406271.6A CN110782025B (en) 2019-12-31 2019-12-31 Rice processing online process detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911406271.6A CN110782025B (en) 2019-12-31 2019-12-31 Rice processing online process detection method

Publications (2)

Publication Number Publication Date
CN110782025A CN110782025A (en) 2020-02-11
CN110782025B (en) 2020-04-14

Family

ID=69394805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911406271.6A Active CN110782025B (en) 2019-12-31 2019-12-31 Rice processing online process detection method

Country Status (1)

Country Link
CN (1) CN110782025B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112986275A (en) * 2021-03-17 2021-06-18 哈尔滨工程大学 Germ rice germ integrity on-line measuring system
CN113426709B (en) * 2021-07-21 2023-04-25 长沙荣业软件有限公司 Intelligent detection robot for grain purchase and grain classification method
CN114160234B (en) * 2021-11-17 2022-11-01 长沙荣业软件有限公司 Rice milling production process control method and rice-pearl production line
CN114721270B (en) * 2022-04-11 2022-11-01 中南林业科技大学 Rice hulling and milling cooperative control method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning
CN108197636A (en) * 2017-12-06 2018-06-22 云南大学 A kind of paddy detection and sorting technique based on depth multiple views feature
CN108333936A (en) * 2018-01-30 2018-07-27 山西机电职业技术学院 A method of asynchronous machine positioning accuracy is improved based on neural network
CN109086886A (en) * 2018-08-02 2018-12-25 工极(北京)智能科技有限公司 A kind of convolutional neural networks learning algorithm based on extreme learning machine
CN110186924A (en) * 2019-07-24 2019-08-30 长沙荣业智能制造有限公司 A kind of rice variety intelligent detecting method, system and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100378351B1 (en) * 2000-11-13 2003-03-29 삼성전자주식회사 Method and apparatus for measuring color-texture distance, and method and apparatus for sectioning image into a plurality of regions using the measured color-texture distance
CN101556611B (en) * 2009-05-08 2014-05-28 白青山 Image searching method based on visual features
US11256982B2 (en) * 2014-07-18 2022-02-22 University Of Southern California Noise-enhanced convolutional neural networks
US10497089B2 (en) * 2016-01-29 2019-12-03 Fotonation Limited Convolutional neural network
US10140392B1 (en) * 2017-06-29 2018-11-27 Best Apps, Llc Computer aided systems and methods for creating custom products
CN110503645A (en) * 2019-08-29 2019-11-26 国合通用(青岛)测试评价有限公司 The method that metallograph grain size is determined based on convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning
CN108197636A (en) * 2017-12-06 2018-06-22 云南大学 A kind of paddy detection and sorting technique based on depth multiple views feature
CN108333936A (en) * 2018-01-30 2018-07-27 山西机电职业技术学院 A method of asynchronous machine positioning accuracy is improved based on neural network
CN109086886A (en) * 2018-08-02 2018-12-25 工极(北京)智能科技有限公司 A kind of convolutional neural networks learning algorithm based on extreme learning machine
CN110186924A (en) * 2019-07-24 2019-08-30 长沙荣业智能制造有限公司 A kind of rice variety intelligent detecting method, system and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mechanical Property Prediction of Strip Model Based on PSO-BP Neural Network; WANG Ping et al.; Journal of Iron and Steel Research; 2006-12-21; pp. 87-90 *
Research on Improvement and Application of Neural Networks Based on the MapReduce Parallel Framework; QU Hongfeng; China Master's Theses Full-text Database (Information Science and Technology); 2018-02-15; No. 02; pp. I140-82 *

Also Published As

Publication number Publication date
CN110782025A (en) 2020-02-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant