CN111753707B - Method and system for detecting imperfect grains of granular crops - Google Patents


Info

Publication number
CN111753707B
CN111753707B (application CN202010564912.7A)
Authority
CN
China
Prior art keywords
image
crops
classification
features
images
Prior art date
Legal status
Active
Application number
CN202010564912.7A
Other languages
Chinese (zh)
Other versions
CN111753707A (en)
Inventor
柴新禹
陈坚品
李恒
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202010564912.7A
Publication of CN111753707A
Application granted
Publication of CN111753707B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/38 Outdoor scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/02 Agriculture; Fishing; Mining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

A method and system for detecting imperfect grains of granular crops: the crops are regularly arranged, image information of their upper and lower sides is collected and preprocessed, and the images are then classified by a classification model to produce a corresponding crop imperfect-grain classification report; at the same time, a corresponding imperfect-grain sorting prompt array image is generated from the image classification results, and the imperfect grains are sorted and collected according to that array image. The invention constructs the classification model with deep-learning-based artificial intelligence, which improves the accuracy, repeatability, and generalization of the imperfection classification results. It can be used for imperfect-grain detection of various granular crops, greatly shortens inspectors' sorting time, eliminates the inconsistency of screening standards that differ from person to person, and avoids the increase in error rate caused by inspector fatigue.

Description

Method and system for detecting imperfect grains of granular crops
Technical Field
The invention relates to a technology in the field of image-processing applications, and in particular to a method and system for detecting imperfect grains of granular crops.
Background
Existing imperfect-grain detection of granular crops relies mainly on human visual inspection, which is time-consuming; the error rate rises with working time as inspectors fatigue, and because each inspector's detection standard is subjectively different, detection results for samples from the same batch are inconsistent. Some research institutions use traditional image-processing methods to extract hand-designed features from collected images of imperfect crop grains and analyze those features to obtain an imperfect-grain classification. However, the image features of imperfect regions are complex and small in area, and the visual differences between different kinds of imperfect grains are usually slight, so such methods often struggle to achieve satisfactory accuracy, repeatability, and generalization.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method and system for detecting imperfect grains of granular crops. It constructs a classification model with deep-learning-based artificial intelligence, improving the accuracy, repeatability, and generalization of imperfection classification results; it can be used for imperfect-grain detection of various granular crops, greatly shortens inspectors' sorting time, eliminates screening standards that differ from person to person, and avoids the increased error rate caused by inspector fatigue.
The invention is realized by the following technical scheme:
the invention relates to a method for detecting defective grains of granular crops, which comprises the steps of regularly arranging the crops, collecting upper and lower image information of the crops, preprocessing the upper and lower image information, classifying the images through a classification model to obtain a corresponding classification report of the defective grains of the crops, generating a corresponding sorting prompt array image of the defective grains according to the image classification result, and sorting and collecting the defective grains of the crops according to the corresponding sorting prompt array image.
The regular arrangement means that the granular crops to be detected are laid out regularly and uniformly, with no crops overlapping.
The image acquisition uses an object stage, movable along the X and Y axes, that carries the granular crops, together with cameras placed above and below the stage, to acquire the front and back, i.e., upper- and lower-side, image information of each grain.
The classification categories include, but are not limited to, perfect grains, broken grains, germinated grains, insect-damaged grains, and other imperfect grains.
The image classification means: primary features are extracted from the preprocessed upper- and lower-side images to obtain upper- and lower-side primary features; key-region amplification produces enlarged key-region images, from which secondary features are further extracted to obtain upper- and lower-side secondary features; the primary and secondary features are fused to obtain fusion features; a loss function unit then derives the classification loss, central losses, and same-grain loss from the received outputs of the primary feature extraction units, key-region amplification units, secondary feature extraction units, and feature fusion unit, and computes their weighted sum as the total loss function; image classification is finally realized after several rounds of back-propagation training.
The imperfect-grain sorting prompt array image means: marker images in different colors are shown on an image display device to represent the different perfectness categories; the category and position of each marker image on the display correspond to the classification result and position of each grain, and together the marker images form the imperfect-grain sorting prompt array image.
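The mapping from classification results to marker colors can be sketched as follows. The specific color assignments and category names are illustrative assumptions; the patent only requires that categories be distinguishable by color and that each marker's grid position match the grain's position on the stage.

```python
# Assumed color coding; any distinct per-category colors would do.
CLASS_COLORS = {
    "perfect": "green",
    "broken": "red",
    "germinated": "yellow",
    "insect_damaged": "orange",
    "other_imperfect": "purple",
}

def prompt_array(class_grid):
    """Map per-groove classification results (arranged like the stage's
    groove array) to the color each marker image should display, so that
    each marker lines up with the kernel sitting above it."""
    return [[CLASS_COLORS[c] for c in row] for row in class_grid]
```

Because the transparent stage is clamped directly onto the display, each colored marker appears under its grain, so the inspector picks out imperfect grains by color alone.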
The invention also relates to a system for realizing the method, comprising: a crop image acquisition device, an image information processing device, and an auxiliary crop sorting device.
The crop image acquisition device is connected with the image information processing device, and the image information processing device is also connected with the crop auxiliary sorting device.
The crop image acquisition device regularly arranges the granular crops, acquires their upper- and lower-side images, and transmits the acquired images to the image information processing device. It comprises: an object stage, movable along the X and Y axes, on which the crops to be detected are regularly arranged; an upper camera and a lower camera, located above and below the stage, for acquiring the crops' upper- and lower-side image information; an upper light source and a lower light source, located above and below the stage, for illumination during image acquisition; two stepping motors and two pairs of slide bars for moving the stage; a stepping motor controller for controlling and driving the stepping motors; a light-blocking box for blocking external light; and a power module for supplying power to the upper light source, the lower light source, and the stepping motor controller.
The image information processing device classifies the imperfect grains in the received crop images, generates a crop imperfect-grain classification report, generates the corresponding imperfect-grain sorting prompt array image, and transmits it to the auxiliary crop sorting device. It can be any equipment capable of image transmission and image processing that can run a program carrying the classification model.
The classification model comprises: an upper-and-lower-side image pair generating unit; a pair of structurally identical primary feature extraction units for extracting features from the upper- and lower-side images; a pair of structurally identical key-region amplification units for generating the enlarged upper- and lower-side key-region images; a pair of structurally identical secondary feature extraction units for extracting features from those enlarged images; a feature fusion unit connected to the two primary and two secondary feature extraction units; and a loss function unit connected to the primary feature extraction units, the key-region amplification units, the secondary feature extraction units, and the feature fusion unit. The image pair generating unit generates upper-and-lower-side image pairs of the crops and controls them so that, with a certain probability, a pair consists of the upper- and lower-side images of the same grain; the remaining pairs combine the upper-side image of one grain with the lower-side image of another to form different-grain pairs. The two primary feature extraction units use a convolutional neural network as their backbone and share weights; from the upper- or lower-side image each outputs primary features to a key-region amplification unit, the feature fusion unit, and the loss function unit. The two key-region amplification units reduce the dimensionality of the primary features of the upper- and lower-side images to obtain attention features, which are output to the loss function unit; they also compute weights for the attention features, form the weighted sum of the attention features, and compress the dimensions to obtain a pair of attention feature maps corresponding to the upper- and lower-side images. Pixels below a threshold are set to zero to obtain a pair of attention feature activation maps; the minimal rectangular region containing all non-zero pixels in each activation map is extracted, enlarged to the size of the corresponding input image, and multiplied pixel by pixel with that image to obtain a pair of enlarged upper- and lower-side key-region images, which are output to the two secondary feature extraction units. The two secondary feature extraction units likewise use a convolutional neural network backbone and share weights, and output secondary features to the feature fusion unit from the enlarged key-region images. The feature fusion unit has two layers: the first layer is divided into four fully connected branches that respectively receive the two primary and two secondary features; the branch outputs are spliced and fed to the second, fully connected layer, which produces a predicted value that is passed to the loss function unit as the output of the feature fusion unit. The loss function unit obtains the same-grain loss by splicing the primary features, obtains the first and second central losses by bilinear global pooling of the corresponding primary features and attention features, and obtains the classification loss from the feature fusion unit; it computes the weighted sum of the four losses as the total loss function and back-propagates it to update the system parameters. When the classification model is tested, the output value of the feature fusion unit is taken as the predicted classification of the grain in the input image pair.
The auxiliary crop sorting device assists the inspector in sorting the imperfect grains of the granular crops and comprises a color display device capable of displaying the imperfect-grain sorting prompt array image.
Technical effects
The invention as a whole solves the problem that the prior art cannot accurately classify the perfectness of granular crops.
Compared with the prior art, the invention uses deep-learning-based artificial intelligence to construct a crop-perfectness classification model that classifies and counts the crops to be detected; the model has high accuracy, fast judgment, strong generalization, and high repeatability. The object stage is customized to the crop variety: the size and shape of its grooves match the size and shape of a single grain of the corresponding variety, which widens the invention's range of application. The movable upper and lower cameras can collect sufficiently rich image information of the granular crops. Combining the object stage, which keeps the grains regularly arranged, with the imperfect-grain sorting prompt array image lets the inspector sort the crops quickly and efficiently.
Drawings
FIGS. 1 and 2 are schematic diagrams of the system of the present invention;
FIG. 3 is a schematic view of the mounting of the object stage and the display panel according to the present invention;
FIG. 4 is a schematic diagram of the system operation of the present invention;
FIG. 5 is a schematic diagram of the classification method of the present invention;
FIG. 6 is a schematic diagram of a classification model according to the present invention;
in the figure: the crop image acquisition device comprises a crop image acquisition device 1, an image information processing device 2, a crop sorting device 3, a computer 4, a display screen 5, an object stage 6, an upper camera 7, a lower camera 8, an upper light source 9, a lower light source 10, an X-axis stepping motor 11, a Y-axis stepping motor 12, an X-axis sliding rod 13, a Y-axis sliding rod 14, a fixing component 15, a stepping motor controller 16, a light blocking box 17, a power supply module 18, a supporting column 19, a camera control line 20, a control line 21, a light source control line 22, a fixing column 23, a display screen connecting line 24, a clamping groove 25 and an object carrying groove 26.
Detailed Description
In this embodiment, wheat is used as the crop to be detected. As shown in fig. 1, the embodiment relates to a system for detecting imperfect grains of granular crops, comprising: a crop image acquisition device 1, an image information processing device 2, and an auxiliary crop sorting device 3, wherein: the crop image acquisition device 1 acquires image information of the crops to be detected and outputs it to the computer 4 serving as the image information processing device 2; the image information processing device 2 analyzes the image information and outputs the processing result to the display screen 5 serving as the auxiliary crop sorting device 3; and the auxiliary crop sorting device 3 displays the imperfect-grain sorting prompt array image.
As shown in fig. 2, the crop image acquisition device 1 comprises: object stage 6, upper camera 7, lower camera 8, upper light source 9, lower light source 10, X-axis stepping motor 11, Y-axis stepping motor 12, X-axis slide bar 13, Y-axis slide bar 14, fixing parts 15, stepping motor controller 16, light-blocking box 17, power module 18, and support columns 19, wherein: the image information processing device 2 is connected to the upper camera 7 and the lower camera 8 through camera control lines 20, to the stepping motor controller 16 through a control line 21, and to the upper light source 9 and the lower light source 10 through light source control lines 22; the power module 18 supplies power to the upper light source 9, the lower light source 10, and the stepping motor controller 16; the object stage 6, upper camera 7, lower camera 8, upper light source 9, lower light source 10, X-axis stepping motor 11, Y-axis stepping motor 12, X-axis slide bar 13, and Y-axis slide bar 14 are all arranged inside the light-blocking box 17, which blocks light from outside the crop image acquisition device 1; the support columns 19 support the light-blocking box 17.
The auxiliary crop sorting device 3 comprises the display screen 5 and fixing columns 23, the display screen 5 being connected to the computer 4 through a display screen connecting line 24.
The crop image acquisition device 1 is fixedly connected to the auxiliary crop sorting device 3. As shown in fig. 3, when sorting, the inspector fixes the object stage 6 on the display screen 5 by fitting the clamping grooves 25 of the stage over the fixing columns 23 of the screen.
As shown in figs. 2 and 3, the object stage 6 is transparent, and its surface carries a plurality of carrying grooves 26 into which wheat grains can fall. The bottom area of each carrying groove 26 is slightly larger than the cross-section of a normal single wheat grain lying flat, and the depth of each groove is 50%-100% of the average lying-flat height of a single grain, so that each carrying groove 26 can hold only one normally sized grain.
If the carrying groove 26 is too shallow, grains that have fallen in slip out easily when the inspector shakes the stage 6; if it is too deep, several grains easily stack in one groove.
Gaps are left between adjacent carrying grooves 26, so that when the inspector shakes the stage 6, wheat grains that have not fallen into a groove can move along the gaps into other empty grooves, and when the inspector tilts the stage 6, surplus grains can slide along the gaps and collect in a corner where they do not interfere with image acquisition.
The object stage 6 is further provided with two clamping grooves 25; once the two fixing columns 23 of the display screen 5 pass through the corresponding clamping grooves 25, the stage 6 is fixed on the display screen 5, which is convenient for the inspector during sorting.
The upper camera 7 and the lower camera 8 are located above and below the object stage 6 respectively, on the same straight line perpendicular to the stage; their resolution and field of view are large enough to capture sufficiently informative images of the upper and lower sides of the wheat to be detected. On receiving a signal to start acquisition, they capture the front and back images simultaneously, and the acquired upper- and lower-side image information is transmitted to the computer 4 through the camera control lines 20.
The object stage 6 is connected to the fixing part 15 and the Y-axis stepping motor 12 through its two clamping grooves 25, and the fixing parts 15 are connected to the X-axis stepping motor 11 through the Y-axis slide bars 14. When the X-axis stepping motor 11 or the Y-axis stepping motor 12 moves along the X-axis slide bars 13 or the Y-axis slide bars 14 respectively, it drives the stage 6 in the corresponding X- or Y-axis direction.
There are four fixing parts 15 in total. The fixing part 15 connected to the clamping grooves 25 forms a through hole in the Y-axis direction; threaded onto a Y-axis slide bar 14, it drives the object stage 6 along the Y-axis slide bars 14. The other three fixing parts 15 are fixedly connected to the Y-axis slide bars 14 and form through holes in the X-axis direction; threaded onto the X-axis slide bars 13, they drive the stage 6 along the X-axis slide bars 13.
The stepping motor controller 16 receives a start instruction from the computer 4 through the control line 21, and on receiving the controller's control signal the X-axis stepping motor 11 or Y-axis stepping motor 12 moves along the corresponding X-axis slide bar 13 or Y-axis slide bar 14. Specifically: the computer 4 sends a start instruction to the stepping motor controller 16 and, at the same time, an acquisition start instruction to the upper camera 7 and the lower camera 8 through the camera control lines 20. The controller's signals make the stepping motors drive the object stage 6 so that, with an image ROI set in the program carrying the classification model, the array of carrying grooves 26 is scanned through the ROI row by row or column by column. According to the size of the stage 6 and the spacing of the carrying grooves 26, the motors pause after each fixed moving distance; during each pause the upper camera 7 and the lower camera 8 capture images simultaneously and transmit them to the computer 4 through the camera control lines 20. The per-step distance ensures that, over all the ROIs finally obtained, the upper- and lower-side image information of every carrying groove 26 is captured exactly once, neither missed nor repeated. After the images of all the carrying grooves 26 have been acquired, the stage 6 returns to its initial position, and the upper camera 7 and lower camera 8 stop image acquisition and transmission.
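The stage-movement rule above can be sketched as a generator of pause positions. The serpentine (boustrophedon) row ordering and the final return to the origin are assumptions; the patent only requires row-by-row or column-by-column coverage with every groove imaged exactly once.

```python
def scan_positions(n_rows, n_cols, dx, dy):
    """Generate stage pause positions for a row-by-row scan of the
    carrying-groove array. dx and dy are the groove pitches along X and
    Y, so each pause frames exactly one groove in the image ROI with no
    groove missed or imaged twice."""
    positions = []
    for r in range(n_rows):
        # Serpentine order (assumed) minimizes travel between pauses.
        cols = range(n_cols) if r % 2 == 0 else range(n_cols - 1, -1, -1)
        for c in cols:
            positions.append((c * dx, r * dy))
    positions.append((0, 0))  # return the stage to its initial position
    return positions
```

In the real device each emitted position corresponds to one pause during which both cameras fire and transfer their frames before the controller issues the next step.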
The upper light source 9 and the lower light source 10 are located above and below the object stage 6, positioned around the stage where they do not block the upper camera 7 or the lower camera 8, and illuminate the front and back of the wheat evenly so that the captured images have uniform brightness and no shadows. The inspector can control the brightness of each light source: the computer 4 transmits a brightness control signal to the upper light source 9 or the lower light source 10 through the light source control lines 22.
When the invention is applied to imperfect-grain classification of other crop varieties, the heights of the upper camera 7 and lower camera 8, the positions, sizes, and angles of the upper light source 9 and lower light source 10, and the size, shape, number, arrangement, and spacing of the carrying grooves 26 must likewise be customized to the variety to be detected.
As shown in fig. 6, this embodiment relates to a method for detecting imperfect grains of granular crops based on the above system: the crops are regularly arranged; their upper- and lower-side image information is collected and preprocessed; image classification is then performed through the classification model to obtain the corresponding crop imperfect-grain classification report; meanwhile, the corresponding imperfect-grain sorting prompt array image is generated from the classification results, and the imperfect grains are sorted and collected accordingly.
The classification categories include, but are not limited to, perfect grains, broken grains, germinated grains, insect-damaged grains, and other imperfect grains.
The image classification means: the preprocessed images are input to the image pair generating unit to generate upper-and-lower-side image pairs; primary features are extracted from the upper- and lower-side images to obtain upper- and lower-side primary features; key-region amplification produces enlarged key-region images, from which secondary features are further extracted to obtain upper- and lower-side secondary features; the primary and secondary features are fused to obtain fusion features; finally, the loss function unit derives the first central loss, second central loss, same-grain loss, and classification loss from the received outputs of the primary feature extraction units, key-region amplification units, secondary feature extraction units, and feature fusion unit, and computes their weighted sum as the total loss function. Image classification is realized after several rounds of back-propagation training.
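The training objective assembled by the loss function unit is a weighted sum of the four losses. The weights themselves are hyperparameters the patent leaves unspecified; equal weights are assumed here as a default.

```python
def total_loss(l_center1, l_center2, l_same, l_class,
               w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four losses used as the training objective:
    first central loss, second central loss, same-grain loss, and
    classification loss. The weights w are unspecified hyperparameters."""
    return (w[0] * l_center1 + w[1] * l_center2
            + w[2] * l_same + w[3] * l_class)
```

This scalar is what gets back-propagated to update the parameters of all units jointly.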
The image pair generating unit generates upper-and-lower-side image pairs of the crops and controls them so that, with a certain probability, a pair consists of the upper- and lower-side images of the same grain, while the remaining pairs combine the upper-side image of one grain with the lower-side image of another to form different-grain pairs.
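A minimal sketch of this pair-generation rule follows. The probability parameter p_same is an assumption (the patent only says "a certain probability"), and the sample tuples are hypothetical stand-ins for the real image pairs.

```python
import random

def make_pair(samples, p_same=0.5, rng=random):
    """Return ((top_img, bottom_img), same_flag). `samples` is a list of
    (top_img, bottom_img) tuples, one per kernel. With probability
    p_same the pair shows the two sides of the same kernel; otherwise
    the top of one kernel is paired with the bottom of another."""
    i = rng.randrange(len(samples))
    if rng.random() < p_same:
        return (samples[i][0], samples[i][1]), True
    j = rng.randrange(len(samples))
    while j == i:  # force a genuinely different kernel for the bottom side
        j = rng.randrange(len(samples))
    return (samples[i][0], samples[j][1]), False
```

The returned flag is the label the same-grain loss is trained against.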
The primary feature extraction is performed on the upper- and lower-side images by two structurally identical primary feature extraction units. Each unit consists of several convolutional layers, each formed by convolution, activation, and pooling in sequence; the backbone can be a mainstream convolutional neural network such as VGG, ResNet, or DenseNet. The two units share weights, and each produces a set of feature maps as its output.
The key-region amplification unit comprises a dimension reduction module, an SE module, an attention map module, a key region module, and a cropping-and-enlarging module, wherein: the dimension reduction module applies several 1 x 1 convolution kernels to obtain attention features of depth m, which are output both to the SE module and to the loss function unit; the SE module, which performs the function of the Squeeze-and-Excitation block in SENet, applies convolution and activation to the attention features to obtain their weights, computes the weighted sum of the attention features, and compresses the dimensions to obtain the attention feature map of the upper- or lower-side image, which it outputs to the attention map module; the attention map module sets pixels of the attention feature map below a set threshold to zero, obtaining an attention feature activation map that it outputs to the key region module; the key region module extracts the minimal rectangular region containing all non-zero pixels of the activation map, enlarges it to the size of the upper- or lower-side input image, and multiplies it pixel by pixel with the corresponding upper- or lower-side image to obtain the enlarged key-region image.
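The thresholding, minimal-rectangle extraction, enlargement, and pixel-wise multiplication steps can be sketched in plain Python on 2-D lists. Nearest-neighbour enlargement and a single-channel image are simplifying assumptions; a real model would use a framework tensor and proper interpolation.

```python
def key_region_enlarge(attn_map, image, threshold):
    """Key-region amplification sketch: zero attention pixels below
    `threshold`, take the minimal rectangle covering the remaining
    non-zero pixels, enlarge it to the input-image size by
    nearest-neighbour repetition, and multiply pixel-wise with the
    original image."""
    H, W = len(image), len(image[0])
    act = [[v if v >= threshold else 0.0 for v in row] for row in attn_map]
    coords = [(y, x) for y in range(len(act)) for x in range(len(act[0]))
              if act[y][x] != 0.0]
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    y0, y1 = min(ys), max(ys) + 1          # minimal bounding rectangle
    x0, x1 = min(xs), max(xs) + 1
    crop = [row[x0:x1] for row in act[y0:y1]]
    ch, cw = len(crop), len(crop[0])
    # nearest-neighbour enlargement back to the input size, then masking
    return [[crop[y * ch // H][x * cw // W] * image[y][x]
             for x in range(W)] for y in range(H)]
```

The result emphasizes the attended region of the grain before the secondary feature extractor sees it.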
The secondary feature extraction is applied to the enlarged upper- and lower-side key-region images in the same way as the primary feature extraction, through two structurally identical secondary feature extraction units whose backbone can likewise be a mainstream convolutional neural network such as VGG, ResNet, or DenseNet. Each takes the output of the corresponding key-region amplification unit as its input; the two secondary feature extraction units share weights, and each produces a set of feature maps as its output.
the feature fusion unit has two layers: the first fully connected layer is divided into four partial fully connected layers, which receive the 2 primary features and 2 secondary features respectively and produce four first-layer outputs; these four outputs are spliced and passed to the fully connected layer of the second layer, which produces the predicted value ŷ as the output of the feature fusion unit, passed on to the loss function unit.
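The two-layer fusion described above can be sketched as follows, using plain NumPy matrices as stand-ins for the fully connected layers; the function names, the sigmoid output and the layer widths are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_fusion(primaries, secondaries, first_layers, second_layer):
    """First layer: one partial fully connected map per input feature
    (2 primary + 2 secondary); the four outputs are spliced and fed to
    the second fully connected layer, yielding the predicted value."""
    feats = list(primaries) + list(secondaries)            # four feature vectors
    firsts = [W @ f for W, f in zip(first_layers, feats)]  # four partial FC outputs
    spliced = np.concatenate(firsts)                       # splice before layer 2
    return float(sigmoid(second_layer @ spliced))          # predicted value ŷ
```

With 16-dimensional features and four 8x16 partial layers, `second_layer` is a 32-vector and the result is a single probability-like score.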
The loss function unit is divided into a center loss module 1, a center loss module 2, a same-grain loss module and a classification loss module, which respectively produce a first center loss L_center1, a second center loss L_center2, a same-grain loss L_same and a classification loss L_class.
The center loss module 1 and the center loss module 2 have the same structure and consist of bilinear global pooling and center L2 regularization: bilinear global pooling is performed on the respective primary features and attention features to obtain f_1 and f_2, each composed of m one-dimensional vectors f_1k and f_2k, k ∈ {1, 2, …, m}; a center vector c_k is maintained for f_1k and f_2k respectively, and the L2 regularization terms are computed and summed to obtain the first center loss L_center1 = Σ_{k=1}^{m} ‖f_1k − c_k‖² and the second center loss L_center2 = Σ_{k=1}^{m} ‖f_2k − c_k‖². c_k is initialized to 0; within the same training iteration, c_k is successively updated in the first center module and the second center module according to c_k^new = c_k + μ·(f_k − c_k), where μ is a set update coefficient, and f_k denotes f_1k in the first center module and f_2k in the second center module.
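The centre regularisation and the update rule c_k^new = c_k + μ·(f_k − c_k) can be sketched in NumPy; the function name and the (m, d) stacking of the m pooled vectors are illustrative assumptions.

```python
import numpy as np

def center_loss_and_update(f, c, mu):
    """Centre-loss sketch: f is an (m, d) stack of pooled feature
    vectors f_k, c the matching (m, d) stack of centre vectors c_k.
    Returns the summed L2 term sum_k ||f_k - c_k||^2 and the updated
    centres c_new_k = c_k + mu * (f_k - c_k)."""
    loss = float(np.sum((f - c) ** 2))   # centre L2 regularisation
    c_new = c + mu * (f - c)             # moving-average style centre update
    return loss, c_new
```

The same routine would be called once with the pooled primary features (first centre module) and once with the pooled attention features (second centre module).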
The same-grain loss module consists of a splicing module and a fully connected layer: the splicing module splices the pair of primary features and outputs the result to the fully connected layer to obtain the prediction result x̂; the cross entropy is then calculated to obtain L_same = −(x·log x̂ + (1 − x)·log(1 − x̂)), where x is the true label of the upper and lower image information pair: when the image pair generating unit generates the upper and lower images of the same wheat kernel, x = 1, otherwise x = 0.
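The cross entropy used for the same-grain loss is the standard binary form; a minimal sketch follows (the helper name and the numerical clamp, added to avoid log(0), are assumptions):

```python
import math

def binary_cross_entropy(x, x_hat, eps=1e-12):
    """L = -(x*log(x_hat) + (1-x)*log(1-x_hat)), with x the true
    same-kernel label (1 for a matching upper/lower pair, else 0) and
    x_hat the network's prediction."""
    x_hat = min(max(x_hat, eps), 1.0 - eps)   # clamp to avoid log(0)
    return -(x * math.log(x_hat) + (1 - x) * math.log(1.0 - x_hat))
```

The classification loss L_class described below has exactly the same form, with the category label y and prediction ŷ in place of x and x̂.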
The classification loss module obtains the classification loss by calculating the cross entropy of the output of the feature fusion unit, L_class = −(y·log ŷ + (1 − y)·log(1 − ŷ)), where y is the true category label and ŷ is the predicted category label.
The weighted total loss function is L = α·(L_center1 + L_center2) + β·L_same + L_class, where α and β are set weight coefficients. After the weighted total loss is obtained, it is back-propagated through the classification model to update the learnable parameters; when the classification model is tested, only ŷ needs to be computed to obtain the predicted value of whether the wheat corresponding to the input image pair belongs to the classification.
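Reading the garbled printed formula as the weighted sum of the four losses described in the claims, the total loss can be sketched as below; the exact grouping of α and β is unclear in the translation, so this follows the "weighted sum of the four losses" wording.

```python
def total_loss(l_center1, l_center2, l_same, l_class, alpha, beta):
    """Weighted sum of the four losses used for back-propagation;
    alpha and beta are the set weight coefficients from the
    description."""
    return alpha * (l_center1 + l_center2) + beta * l_same + l_class
```

Only the forward prediction ŷ is needed at test time; the loss terms are computed during training alone.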
For each category, a corresponding classification model is obtained through training to judge whether an image pair input at test time belongs to that category; for example, the perfect-kernel classification model judges whether the wheat corresponding to the input image pair is a perfect kernel, the broken-kernel classification model judges whether it is a broken kernel, and so on.
In the back-propagation update training, the training set comprises the upper and lower side images of a plurality of single wheat kernels together with their label information; the labels comprise at least the kernel number and classification of the wheat in the corresponding image and a front/back side label of the image.
When the classification model is tested, the output value ŷ of the feature fusion unit serves as the predicted value of whether the wheat corresponding to the input image pair belongs to the classification.
The imperfect grain sorting prompt array image is as follows: marker images of different colors are displayed on the image display device to represent different categories; the category and the position of each marker image on the display device correspond to the classification result and position of each crop kernel, and the plurality of marker images together form the imperfect grain sorting prompt array image.
The encoding strategy of the image markers can adopt, but is not limited to, the following modes: different classification options can correspond to different colors, different patterns, or image flicker prompts of different frequencies.
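As an illustrative sketch of such a colour-coding strategy (the category-to-colour mapping below merely follows the white/green/black/red markers mentioned in the experimental section, and both the mapping and the names are assumptions):

```python
# Hypothetical colour coding; the described system leaves the mapping
# configurable (colours, patterns, or flicker frequencies).
CATEGORY_COLOURS = {
    "perfect": "white",
    "broken": "green",
    "black-tip": "black",
    "germinated": "red",
}

def marker_array(results, colours=CATEGORY_COLOURS):
    """One marker per carrying groove: map a 2-D grid of per-groove
    classification results to the colour grid shown on the display;
    grooves with no recognised category are switched off."""
    return [[colours.get(cell, "off") for cell in row] for row in results]
```

The resulting grid mirrors the size, shape and arrangement of the carrying grooves, so the inspector can overlay the transparent stage on the display.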
The method for detecting imperfect grains of granular crops specifically comprises the following steps:
1) the inspector selects, in the classification model program loaded on the computer 4, the crop variety of the batch to be detected (here wheat), and selects the classification options required for the batch, including perfect kernels, broken kernels, germinated kernels, insect-damaged kernels and the like; each classification option corresponds to a classification model (a perfect-kernel classification model, a broken-kernel classification model, a germinated-kernel classification model, an insect-damaged-kernel classification model, etc.), and each classification model judges whether the input image of the wheat to be detected belongs to the classification of that model;
2) the inspector starts the classification model, which sends a start instruction to the stepping motor controller 16 and obtains the images collected by the upper camera 7 and the lower camera 8; each front-side sub-image corresponds to the front image of one carrying groove 26 and each back-side sub-image to the back image of that carrying groove 26, the back image being captured by the lower camera 8 through the object stage 6;
3) the acquired images are preprocessed, the preprocessing comprising homomorphic filtering to even out image brightness, background noise filtering, rejection of image pairs containing no wheat, and image distortion correction;
4) image classification is started; when the classification options of the batch include the perfect-kernel option, all image pairs of the batch first pass through the perfect-kernel classification model, the image pairs judged "yes" are finished with classification, and all remaining image pairs judged "no" pass in turn through the other selected classification models; when the classification options of the batch do not include the perfect-kernel option, all image pairs of the batch pass through the selected classification models in turn;
5) a classification report of the batch and the corresponding imperfect grain sorting prompt array image are generated and transmitted to the display screen 5, the report including the number of wheat kernels in each category and their proportion of the total number of kernels in the batch;
6) the inspector selects whether the program should classify the next batch of wheat to be detected: if the inspector selects "yes", the program returns to step 1); if "no", a classification report of the round is generated, which contains the classification report of each batch as well as the counts of all classifications selected in the round and their proportions of the total number of kernels in the round.
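The cascade of step 4) — the perfect-kernel model first when selected, with remaining pairs flowing through the other selected models in turn — can be sketched as follows; the function and the predicate-per-model interface are illustrative assumptions.

```python
def classify_batch(image_pairs, models, order):
    """Sketch of the sequential classification in step 4): image pairs
    flow through the selected models in order; a pair accepted by a
    model is finished, the rest continue to the next model.  `models`
    maps a category name to a predicate returning True when the pair
    belongs to that category."""
    labels = {}
    remaining = list(image_pairs)
    for category in order:
        accept = models[category]
        labels.update({p: category for p in remaining if accept(p)})
        remaining = [p for p in remaining if not accept(p)]
    for p in remaining:
        labels[p] = "unclassified"   # matched none of the selected models
    return labels
```

When the perfect-kernel option is selected, it is simply placed first in `order`, so most pairs exit the cascade early.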
As shown in fig. 4, this embodiment relates to a specific implementation process of the above method for detecting imperfect grains of granular crops, which specifically comprises:
Step 1: the inspector turns on the power supply of the intelligent detection system for imperfect grains of granular crops, takes out the object stage 6 used for classifying wheat, and pours the wheat to be detected onto the upper surface of the object stage 6;
Step 2: the object stage 6 is shaken manually to remove impurities, so that at most one wheat kernel to be detected falls into each carrying groove 26; when surplus kernels remain, the object stage 6 is tilted slightly, or auxiliary tools such as a plastic scraper or a soft brush are used, so that the surplus kernels are piled entirely at a corner of the stage surface without interfering with image acquisition;
when more than one wheat kernel falls into a carrying groove 26, the extra kernels can be picked out with tweezers and placed into other empty carrying grooves 26 or at a corner of the object stage 6; to improve efficiency, each carrying groove 26 should receive one wheat kernel wherever possible;
Step 3: the object stage 6 is fixed in the light blocking box 17, the program carrying the classification model is opened on the computer 4, and the relevant settings and operations of the program are carried out;
Step 4: after image classification is finished, the crop auxiliary sorting device 3 displays on the display screen 5, according to the image coding of the image information processing device 2, the classification marker image of the wheat corresponding to each carrying groove 26, with the same size, shape and arrangement as the carrying grooves 26; the classification marker images together form the imperfect grain sorting prompt array image;
Step 5: the inspector fixes the object stage 6 on the display screen 5 via the clamping grooves 25 of the stage and the fixing columns 23 of the screen, so that each carrying groove 26 coincides one-to-one with its corresponding classification marker image; the light of the imperfect grain sorting prompt array image on the display screen 5 passes through the transparent carrying grooves 26 and is observed by the inspector, who sorts the wheat at the corresponding positions on the object stage 6 accordingly;
Step 6: the inspector stores the corresponding classification report and images as required;
Step 7: if wheat to be detected remains on the object stage 6, return to Step 2; otherwise the imperfect grain detection of this batch of wheat ends.
In a specific practical experiment, two 20-megapixel industrial cameras with 1-inch sensors were used as the upper and lower cameras, each 12 cm from the upper and lower planes of a transparent acrylic object stage 6; the carrying grooves 26 of the stage were arranged in 14 rows by 30 columns, with a transverse span of 21 cm and a longitudinal span of 14 cm; the computer 4 was an i7 desktop with a 2080Ti graphics card; DenseNet169 served as the primary feature extraction unit, ResNet34 as the secondary feature extraction unit, and m of the dimension reduction module was set to 10. Upper and lower image information of 200 kernels each of perfect kernels, broken kernels, black-tip kernels and germinated kernels was acquired for wheat in the sampling manner described above, and the image pairs of each category were split 3:1 for training and testing, achieving a classification accuracy of 95% and a repetition rate of 99%, with an average classification time of about 0.09 s per kernel; the four classification categories were marked in white, green, black and red on a common 1K color display screen 5 to aid sorting.
Compared with the prior art, the invention classifies granular crops quickly: the average classification time per wheat kernel is about 0.09 s, whereas an inspector needs about 1 s to classify one kernel manually. The classification accuracy on granular crops is high, reaching 95% on wheat, while VGG, ResNet and DenseNet used directly as classification models do not exceed 90%. The repeatability of the image classification is high, with a repetition rate of up to 99%, while the repetition rates of VGG, ResNet and DenseNet classification models do not reach 90%. The generalization ability of the image classification is strong: for the four categories of perfect kernels, broken kernels, black-tip kernels and germinated kernels, the test-stage accuracy exceeds 95% and is close to the training-stage accuracy, whereas the most accurate DenseNet-based perfect-kernel model reaches only 89% accuracy in the test stage, and the accuracy of such models drops markedly for hard-to-identify categories such as black-tip and germinated kernels. The sampling mode of the invention acquires more image information per kernel: approximating each wheat kernel as an ellipsoid, the proportion of its surface area captured in the acquired images (hereinafter the coverage ratio) can exceed 99%; in the same acquisition environment, with cameras fixed directly above and below the middle of the object stage 6 and the vertical distance adjusted so that the camera field of view covers all carrying grooves 26 for fixed-point acquisition, the coverage ratio for the carrying grooves 26 at the four corner positions of the stage is below 65%; if a single-side camera is used to collect the wheat images, the coverage ratio is below 50%. Compared with the traditional manual sorting mode, the assisted sorting mode reduces total time consumption by more than 60% on average.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (6)

1. A method for detecting imperfect grains of granular crops, characterized in that upper and lower side image information of the crops is collected after the crops are regularly arranged; after the collected images are preprocessed, image classification is carried out through a classification model to obtain a corresponding imperfect grain classification report of the crops; meanwhile, a corresponding imperfect grain sorting prompt array image is generated according to the image classification result, and the imperfect grains of the crops are sorted and collected according to the sorting prompt array image;
the classification model comprises: an upper and lower side image pair generating unit; a pair of primary feature extraction units of identical structure for extracting features of the upper and lower images respectively; a pair of key region amplification units of identical structure for generating the upper and lower key region enlarged images respectively; a pair of secondary feature extraction units of identical structure for extracting features of the upper and lower key region enlarged images respectively; a feature fusion unit connected with the two primary feature extraction units and the two secondary feature extraction units; and a loss function unit connected with the two primary feature extraction units, the key region amplification units, the secondary feature extraction units and the feature fusion unit, wherein: the image pair generating unit generates upper and lower side image pairs of crops, controlling a certain probability that a generated pair consists of the upper and lower images of the same crop kernel, while the remaining pairs combine the upper image of one kernel with the lower image of another to form non-matching pairs; the two primary feature extraction units take a convolutional neural network as backbone and share weights, and output the primary features of the upper or lower image to the key region amplification units, the feature fusion unit and the loss function unit respectively; the two key region amplification units respectively perform dimension reduction on the primary features of the upper and lower images to obtain attention features, which are output to the loss function unit on one hand; on the other hand, corresponding weights of the attention features are obtained, the weighted sum of the attention features is computed and its dimensions are compressed to obtain a pair of attention feature maps corresponding to the upper and lower images; all pixels below a threshold are set to zero to obtain a pair of attention activation maps, the minimum rectangular region containing all non-zero pixels of each attention activation map is extracted and enlarged to the size of the upper/lower input image, and finally multiplied pixel by pixel with the corresponding upper or lower image to obtain a pair of upper and lower key region enlarged images, which are output to the two secondary feature extraction units respectively; the two secondary feature extraction units likewise take a convolutional neural network as backbone and share weights, and output secondary features to the feature fusion unit according to the upper and lower key region enlarged images; the feature fusion unit has two layers, the first layer being divided into four fully connected layers whose inputs are the two primary features and the two secondary features respectively, producing four outputs which are spliced and passed to the fully connected layer of the second layer to obtain a predicted value as the output of the feature fusion unit, passed to the loss function unit; the loss function unit obtains the same-grain loss by splicing the primary features, obtains the first and second center losses by bilinear global pooling of the corresponding primary features and attention features respectively, obtains the classification loss from the feature fusion unit, computes the weighted sum of the four losses as the total loss function, and back-propagates the total loss function to update the system parameters; when the classification model is tested, the output value of the feature fusion unit is taken as the predicted value of whether the crop corresponding to the input image pair belongs to the classification.
2. The method for detecting imperfect grains of granular crops as claimed in claim 1, wherein the same-grain loss is calculated by the cross entropy L_same = −(x·log x̂ + (1 − x)·log(1 − x̂)), wherein: x is the true label of the upper and lower image information pair and x̂ is the prediction result.
3. The method for detecting imperfect grains of granular crops as claimed in claim 1, wherein the image acquisition is carried out by collecting the upper and lower side image information of each crop kernel by means of a stage which supports the granular crops and is movable in the X and Y axis directions, and cameras disposed above and below the stage;
the image classification means: primary features are extracted from the preprocessed upper and lower images respectively to obtain upper and lower primary features; key region amplification yields the key region enlarged images, from which secondary features are further extracted to obtain upper and lower secondary features; the upper and lower primary and secondary features are fused to obtain the fusion features; the loss function unit obtains the classification loss, the center losses and the same-grain loss from the received outputs of the primary feature extraction units, the key region amplification units, the secondary feature extraction units and the feature fusion unit, and calculates their weighted sum as the total loss function; after a number of rounds of back-propagation update training, image classification is finally realized.
4. The method as claimed in claim 1, wherein the imperfect grain sorting prompt array image means: marker images of different colors are displayed on the image display device to represent different perfectness categories; the category and the position of each marker image on the display device correspond to the classification result and position of each crop kernel, and the plurality of marker images form the imperfect grain sorting prompt array image.
5. A granular crop imperfect grain detection system for implementing the method of any one of the preceding claims, comprising: a crop image acquisition device, an image information processing device and a crop auxiliary sorting device, wherein: the crop image acquisition device and the crop auxiliary sorting device are respectively connected with the image information processing device; the crop image acquisition device regularly arranges the granular crops, acquires images of their upper and lower sides, and transmits the acquired images to the image information processing device;
the crop image acquisition device comprises: an object stage for regularly arranging the crops to be detected, movable in the X and Y axis directions, an upper camera, a lower camera, an upper light source, a lower light source, two stepping motors, two pairs of slide bars, a stepping motor controller, a light blocking box and a power supply module;
the image information processing device classifies the received crop images to generate a crop imperfect grain classification report, generates the imperfect grain sorting prompt array image corresponding to the report, and transmits it to the crop auxiliary sorting device; the image information processing device is provided with equipment for transmitting and processing image information and can run a program carrying the classification model;
the crop auxiliary sorting device assists the inspector in completing the imperfect grain sorting of the granular crops and comprises a color display device for displaying the imperfect grain sorting prompt array image.
6. The system for detecting imperfect grains of granular crops as claimed in claim 5, wherein the object stage has carrying grooves customized according to the size and shape of the granular crop to be detected; the stage moves horizontally while the upper camera and the lower camera simultaneously acquire the upper and lower image information; an ROI is taken from each acquired image, the size of the ROI being determined by the height and field of view of the cameras; during acquisition, the midpoints of the two cameras and the midpoint of the carrying groove in the ROI lie on the same straight line perpendicular to the plane of the object stage.
CN202010564912.7A 2020-06-19 2020-06-19 Method and system for detecting imperfect grains of granular crops Active CN111753707B (en)


Publications (2)

Publication Number Publication Date
CN111753707A CN111753707A (en) 2020-10-09
CN111753707B true CN111753707B (en) 2021-06-29






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant