CN112560748A - Crop shape analysis subsystem and method - Google Patents

Crop shape analysis subsystem and method

Info

Publication number
CN112560748A
CN112560748A (application CN202011541364.2A)
Authority
CN
China
Prior art keywords: crop, image, crops, images, dimensional characteristic
Prior art date
Legal status: Pending
Application number
CN202011541364.2A
Other languages
Chinese (zh)
Inventor
武勇
周金旺
武祥
范磊
范冬冬
丁益文
董军军
仇国庆
Current Assignee
Anhui Gaozhe Information Technology Co ltd
Original Assignee
Anhui Gaozhe Information Technology Co ltd
Application filed by Anhui Gaozhe Information Technology Co ltd
Priority to CN202011541364.2A
Publication of CN112560748A

Classifications

    • G06V 20/188: Vegetation (terrestrial scenes; image or video recognition)
    • G06F 18/24: Classification techniques (pattern recognition)
    • G06N 3/04: Neural network architecture, e.g. interconnection topology
    • G06N 3/08: Neural network learning methods
    • G06Q 10/04: Forecasting or optimisation for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/06395: Quality analysis or management
    • G06Q 50/02: Agriculture; Fishing; Mining
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V 10/267: Segmentation of patterns by operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44: Local feature extraction, e.g. edges, contours, corners; connectivity analysis

Abstract

The invention provides a crop shape analysis subsystem and method. The system includes a computer vision processing module, which acquires multi-angle crop appearance scan images and preprocesses them to obtain images of individual crops, and an artificial intelligence processing module, which processes the single-crop images to obtain high-dimensional feature values of the images, statistically combines them into multi-dimensional feature values, further calculates accurate template information for the crops in the images, and finally fits and displays the crop contour information according to the template information. Based on artificial intelligence and computer vision technology, the system can accurately identify the shapes of a variety of crops and can run in real time on multiple operating systems with extremely low computation and parameter counts.

Description

Crop shape analysis subsystem and method
Technical Field
The invention relates to the technical field of analysis and detection of crops, in particular to a crop shape analysis subsystem and a crop shape analysis method.
Background
Grain is the cornerstone of human survival. The quality of a crop directly determines its value for the current season: whether at a grain station or a factory, crops must be priced according to their quality during grain collection. In addition, crop quality directly affects the next season's breeding. Seed quality is one of the prerequisites for a good grain harvest, and breeding is an important link in ensuring seed quality. Selecting crops of superior quality as seeds is a basic requirement of breeding, which places strict demands on seed characteristics such as size and shape.
In the prior art, however, whether for crop pricing or for breeding, crop shape assessment is usually performed manually; the working efficiency is low and the results vary greatly from person to person.
Disclosure of Invention
Based on the technical problems in the background art, the invention provides a crop shape analysis subsystem and a crop shape analysis method.
A crop shape analysis subsystem comprises:
a computer vision processing module for acquiring multi-angle crop appearance scan images and preprocessing them to obtain images of individual crops;
and an artificial intelligence processing module for processing the single-crop images to obtain high-dimensional feature values of the images, statistically combining them into multi-dimensional feature values, further calculating template information of the crops in the images, and finally fitting and displaying the crop contour information according to the template information.
Specifically, the deep convolutional network model further comprises a channel pruning subunit, which is used for compressing redundant feature channels of the image.
A crop shape analysis method comprising the steps of:
S1, after multi-angle crop appearance scan images are obtained, the scan images are preprocessed to obtain images of individual crops;
S2, the single-crop images are processed to obtain high-dimensional feature values of the images, which are statistically combined into multi-dimensional feature values; template information of the crops in the images is then calculated, and finally the crop contour information is fitted and displayed according to the template information.
Specifically, step S1 comprises the following steps:
S11, an image scanning device in the computer vision processing module scans a set amount of crop sample to obtain multi-angle crop appearance scan images;
S12, an image processing unit in the computer vision processing module preprocesses the crop appearance scan images and finally divides the images containing multiple crops into multiple single-crop images; the image processing unit comprises a filtering algorithm and an edge signal enhancement algorithm, where the filtering algorithm filters interference signals in the images and the multi-angle crop appearance scan images are then registered with the edge signal enhancement algorithm.
Specifically, step S2 comprises the following steps:
S21, the deep convolutional network model converts the acquired single-crop image into the corresponding high-dimensional feature values;
S22, the neural network unit statistically combines the high-dimensional feature values into multi-dimensional feature values;
and S23, the calculation and display unit calculates template information of the crops in the image from the multi-dimensional feature values, and fits and displays the crop contour information according to the template information.
Specifically, in step S21 the deep convolutional network model compresses redundant feature channels of the image using a channel pruning method.
Specifically, step S12 comprises the following steps:
S121, labeling the crop appearance scan images;
S122, training the deep convolutional network model; based on the labeled data, the deep convolutional network model is trained with a pycaffe-based training framework to obtain multiple groups of model weights;
S123, performing weight optimization on the deep convolutional network model to obtain an optimal model;
S124, in actual use, processing the collected images with the optimal model to obtain multi-angle images of the crops;
and S125, performing shape analysis on each single-crop image sample.
Specifically, in step S121, labeling the crop grains means marking the edge contours on the crop appearance scan images with the labeling tool CVAT.
The technical effects of the invention are as follows:
Compared with manual or other identification technologies, the analysis subsystem and method are based mainly on artificial intelligence and computer vision, can accurately identify the shapes of a variety of crops, and can run in real time on multiple operating systems with extremely low computation and parameter counts. The computer vision techniques used in the system and method lay a solid foundation for prediction accuracy and keep the recognition error below 5 pixels, while the artificial intelligence techniques can analyze special scenarios such as complex crop shapes and the spatial combination of multiple crops.
Drawings
FIG. 1 is a diagram of an artificial intelligence crop shape analysis subsystem.
Fig. 2 is a diagram of an artificial intelligent crop quality analysis subsystem.
Fig. 3 is a flow chart of a reconstruction method of an artificial intelligent crop three-dimensional image subsystem.
FIG. 4 is a flow chart of a detection and positioning method of the artificial intelligent crop surface defect detection and positioning subsystem.
Fig. 5 is a structure diagram of the automatic crop image classification subsystem.
Fig. 6 is a schematic diagram of a deep neural network model in the crop image automatic classification subsystem.
FIG. 7 is a frame diagram of a crop grade online review subsystem.
Fig. 8A is a schematic diagram of a segmentation algorithm.
FIG. 8B is a diagram of a quality detection algorithm.
FIG. 9 is a sample annotation system diagram.
Fig. 10 is a flow chart of picture sample segmentation.
FIG. 11 is a flowchart of deep learning model optimization.
Detailed Description
An artificial intelligence crop processing system comprises
A shape analysis subsystem for analyzing the shape of the crop;
the quality analysis subsystem is used for analyzing the quality of the crops;
the three-dimensional image reconstruction subsystem is used for carrying out three-dimensional modeling on the collected front and back images of the crops; converting the two-dimensional plane image information into a three-dimensional image for electronic archiving;
the surface defect detection positioning subsystem is used for detecting the specific position of the imperfect grain defects of the crops;
the crop image automatic classification subsystem is used for classifying imperfect grains and perfect grains of crops;
the crop grade online evaluation subsystem is used for realizing the online evaluation function of the grain quality grade;
and the sample marking subsystem is used for marking the pictures of the crops and making a data set required by an algorithm.
The subsystems of the system and their corresponding methods are described in detail below:
1A. Shape analysis subsystem
As shown in FIG. 1, the shape analysis subsystem is used to obtain two-dimensional coordinate information for multiple crops and includes:
a computer vision processing module for acquiring multi-angle crop appearance scan images and preprocessing them to obtain images of individual crops.
In detail, the computer vision processing module comprises, arranged in sequence:
an image scanning device for scanning a set amount of crop sample; in this scheme a 50 g sample is used to obtain multi-angle crop appearance scan images;
and an image processing unit for preprocessing the crop appearance scan images and finally dividing an image containing multiple crops into multiple single-crop images; the image processing unit comprises a filtering algorithm subunit and an edge signal enhancement subunit, where the filtering algorithm subunit filters interference signals in the image and obtains a rough localization of the crops, and the edge signal enhancement subunit registers the multi-angle crop appearance scan images. The rough crop localization is obtained by computing the gradient information between pixel values in the image with an edge filtering operator and fitting the rough crop positions, as sketched below.
The subsystem further includes an artificial intelligence processing module for processing the single-crop images to obtain high-dimensional feature values of the images, statistically combining them into multi-dimensional feature values, further calculating accurate template information of the crops in the images, and finally fitting and displaying the crop contour information according to the template information.
In detail, the artificial intelligence processing module extracts feature information from the picture and, together with the rough crop localization, fits the accurate crop shape. It comprises, arranged in sequence:
a deep convolutional network model for converting the acquired single-crop image into the corresponding high-dimensional feature values; the deep convolutional network model further comprises a channel pruning subunit for compressing redundant feature channels of the images, which greatly compresses the parameters and computation used by the segmentation method of FIG. 10 so that the shapes of multiple crops can be analyzed efficiently in real time (a pruning sketch follows this list);
a neural network unit for statistically combining the high-dimensional feature values into multi-dimensional feature values;
and a calculation and display unit for calculating template information of the crops in the image from the multi-dimensional feature values, and fitting and displaying the crop contour information according to the template information.
1B. Crop shape analysis method
A crop shape analysis method comprises the following steps:
S1, after multi-angle crop appearance scan images are obtained, the scan images are preprocessed to obtain images of individual crops:
S11, the image scanning device in the computer vision processing module scans a set amount of crop sample to obtain multi-angle crop appearance scan images;
S12, the image processing unit in the computer vision processing module preprocesses the crop appearance scan images and finally divides the images containing multiple crops into multiple single-crop images; the image processing unit comprises a filtering algorithm and an edge signal enhancement algorithm, where the filtering algorithm filters interference signals in the images and the multi-angle crop appearance scan images are then registered with the edge signal enhancement algorithm (a registration sketch follows this step).
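The patent does not detail the registration step. Below is a minimal illustrative sketch assuming ORB feature matching on edge-enhanced images with OpenCV; the helper name `register_views` and its parameters are ours, not the patent's.

```python
import cv2
import numpy as np

def register_views(ref_bgr, other_bgr, max_matches=200):
    """Align one appearance-scan view to a reference view (sketch only).

    Enhance edges, match ORB keypoints, and estimate a partial affine
    transform that maps `other_bgr` onto `ref_bgr`.
    """
    def edge_enhanced(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.addWeighted(gray, 1.0, cv2.Laplacian(gray, cv2.CV_8U), 1.0, 0)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(edge_enhanced(ref_bgr), None)
    kp2, des2 = orb.detectAndCompute(edge_enhanced(other_bgr), None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:max_matches]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    warp, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = ref_bgr.shape[:2]
    return cv2.warpAffine(other_bgr, warp, (w, h))
```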
As shown in FIG. 10, the segmentation method in step S12 specifically includes:
S121, labeling the crop appearance scan images; the existing labeling tool CVAT can be used to label the edge contours in the crop appearance scan images, i.e., the contours of all crop grains in the image are manually enclosed in boxes that completely surround each grain; labeling can also be done with the sample labeling subsystem of this application.
S122, target-detection training of the deep convolutional network model; after the labeled data are obtained, the deep convolutional network model is trained with a pycaffe-based training framework to obtain multiple groups of model weights;
S123, optimizing the deep convolutional network model to obtain an optimal model; specifically, after a (representative) validation sample set is prepared, the performance of each group of model weights is tested on the validation set and the optimal model is selected;
S124, in actual use, processing the sampled image with the optimal model to obtain the multiple single-crop images it contains;
and S125, performing shape analysis on each single-crop image sample.
As shown in FIG. 11, the optimization in step S123 comprises the following steps:
S1231, selecting an optimization data set; this is a very important step in the system and directly affects the quality of the resulting optimal model; the optimization data set should be broadly representative and applicable, independent of the training and test sets, and labeled so that the performance of different models can be evaluated.
S1232, setting an evaluation criterion; the evaluation criterion directly determines how the model is optimized and should be chosen for the system's specific application field; in the present system, recall is the primary criterion for model evaluation.
And S1233, running inference with each set of deep learning model weights on the optimization data set in turn to obtain the model with the highest performance index; a selection sketch follows these steps.
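As a minimal sketch of S1231 to S1233 (not the patent's code), the following Python assumes each candidate weight set yields per-image detections that are compared to the labels by recall; `run_inference` and the IoU threshold are hypothetical stand-ins.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def recall(predictions, ground_truth, iou_thresh=0.5):
    """Fraction of labeled grains matched by at least one prediction."""
    hits = sum(1 for gt in ground_truth
               if any(iou(gt, p) >= iou_thresh for p in predictions))
    return hits / float(len(ground_truth) or 1)

def select_best_weights(weight_sets, optimization_set, run_inference):
    """S1233 sketch: evaluate every weight set on the optimization set and
    keep the one with the highest mean recall. `run_inference(weights, image)`
    is a hypothetical callable returning predicted boxes for one image."""
    best, best_score = None, -1.0
    for weights in weight_sets:
        scores = [recall(run_inference(weights, img), gt)
                  for img, gt in optimization_set]
        mean_recall = sum(scores) / len(scores)
        if mean_recall > best_score:
            best, best_score = weights, mean_recall
    return best, best_score
```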
S2, the single-crop images are processed to obtain high-dimensional feature values of the images, which are statistically combined into multi-dimensional feature values; accurate template information of the crops in the images is then calculated, and finally the crop contour information is fitted and displayed according to the template information:
S21, the deep convolutional network model converts the acquired single-crop image into the corresponding high-dimensional feature values; the deep convolutional network model is further optimized by compressing redundant feature channels of the image with a channel pruning method, which greatly reduces the model's parameters and computation so that the greatly simplified model can run in real time on multiple operating systems;
S22, the neural network unit statistically combines the high-dimensional feature values into multi-dimensional feature values;
and S23, the calculation and display unit calculates template information of the crops in the image from the multi-dimensional feature values, and fits and displays the crop contour information according to the template information.
The subsystem and method combine an artificial intelligence processing module with a computer vision processing module for segmenting shape information from crop pictures. Specifically, the crop picture is processed by the computer vision processing module, which filters noise signals from the picture and obtains a rough localization of the crops; the artificial intelligence processing module then extracts feature information from the picture and, together with the rough localization, segments the accurate crop shape. In addition, the method uses model pruning to greatly compress the parameters and computation of the shape analysis model, so that the shapes of multiple crops can be analyzed efficiently in real time.
Compared with manual or other identification technologies, the computer vision techniques used in the system and method lay a solid foundation for prediction accuracy and keep the recognition error below 5 pixels, while the artificial intelligence techniques can analyze special scenarios such as complex crop shapes and the spatial combination of multiple crops.
2A. Quality analysis subsystem
As shown in FIG. 2, the quality analysis subsystem includes:
a crop image processing module for processing a scan image containing information of multiple crops and analyzing the shape and category information of the crops in the image.
Specifically, it comprises:
an image scanning device for multi-angle image scanning of a set amount of crop sample; in this scheme a sample of less than 50 g is used to obtain multi-angle crop appearance scan images;
and an image processing unit for processing the crop appearance scan images, including image normalization and multi-image matching, and calculating the shape and category information of the multiple crops in the images.
The subsystem further includes a crop quality judgment module for predicting the actual quality of the multiple crops; based on strongly supervised learning, the actual crop quality is calculated from the shape and category information of the multiple crops and the labeling information provided by the sample labeling subsystem.
2B. Quality analysis method
The crop quality analysis method comprises the following steps:
S1, the crop image processing module processes the multi-angle scan images containing multiple crops and analyzes the shape and category information of the multiple crops in the images; the specific steps are:
S11, the image scanning device performs multi-angle image scanning on a set amount of crop sample; in this scheme a sample of less than 50 g is used to obtain multi-angle crop appearance scan images;
and S12, the image processing unit processes the crop appearance scan images, including image normalization and multi-image matching, and calculates the shape and category information of the multiple crops in the images.
S2, the crop quality judgment module is trained, based on strongly supervised learning, by combining the shape and category information of the multiple crops with the labeling information from the sample labeling subsystem;
and S3, after training of the crop quality judgment module is complete, it receives the shape and category information of the multiple crops output by the crop image processing module and outputs the crop quality information.
The subsystem comprises a crop image processing module and a crop quality judgment module. The crop image processing module, based on artificial intelligence, processes multi-angle scan images containing information of multiple crops; the crop quality judgment module, based on strongly supervised learning, is trained to obtain a high-precision quality analysis model. In actual calculation it can directly receive the crop shape and category information output by the crop image processing program and quickly and accurately calculate the crop quality information; a minimal training sketch follows.
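The patent does not specify the supervised model. As a minimal sketch under the assumption that per-grain quality can be regressed from shape and category features, the following uses scikit-learn; the feature layout, function names, and the choice of `GradientBoostingRegressor` are illustrative assumptions, not the patent's method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row: shape features (area, length, width, perimeter, ...) plus a
# category id; the target is the labeled quality value for that grain.
# All names and the feature layout here are illustrative assumptions.
def train_quality_model(shape_features, category_ids, labeled_quality):
    X = np.hstack([np.asarray(shape_features, dtype=float),
                   np.asarray(category_ids, dtype=float).reshape(-1, 1)])
    y = np.asarray(labeled_quality, dtype=float)
    model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    model.fit(X, y)                      # strongly supervised training
    return model

def predict_quality(model, shape_features, category_ids):
    X = np.hstack([np.asarray(shape_features, dtype=float),
                   np.asarray(category_ids, dtype=float).reshape(-1, 1)])
    return model.predict(X)              # per-grain quality estimate
```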
3A. Three-dimensional image reconstruction subsystem
As shown in FIG. 3, the three-dimensional image reconstruction subsystem includes:
an RPN network unit, obtained by inputting two-dimensional images with labeling information into the neural network unit for training; the labeled two-dimensional images are front and back color images of labeled crops, and the RPN network unit outputs the coordinate information of specific positions; in this embodiment, 3 million labeled two-dimensional grain images to be reconstructed are input;
a crop shape and posture prediction subunit for predicting the shape parameters and posture parameters of the grains in three-dimensional space from the input region;
a shape sampling sub-network for decoding, from the grain's shape parameters, a point cloud model of the corresponding grain shape in space;
a rigid transformation unit for applying a rigid transformation to the generated point cloud model according to the predicted shape and posture parameters of the grains in three-dimensional space;
and a conversion and storage unit for performing three-dimensional reconstruction of structure and posture on the rigid transformation result to obtain the volume information of the crops.
3B. Three-dimensional image reconstruction method
The three-dimensional image reconstruction method comprises the following steps:
S1, inputting front and back color images of the crop with labeling information, and outputting two-dimensional coordinate information of specific positions with an RPN (region proposal network) trained on the basis of the neural network unit;
S2, inputting the two-dimensional coordinate information into the crop shape and posture prediction subunit, which predicts the shape parameters and posture parameters of the grains in three-dimensional space from the input positions;
S3, the shape sampling sub-network decodes, from the crop's shape parameters, a point cloud model of the corresponding crop shape in space;
S4, applying a rigid transformation to the point cloud model generated in step S3 according to the posture parameters from step S2 (see the sketch after these steps);
and S5, performing three-dimensional reconstruction of structure and posture on the rigid transformation result to obtain the volume information of the crops.
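A minimal numpy sketch of the rigid transformation in S4, assuming the posture parameters are a rotation (as Euler angles) plus a translation; this parameterization is an assumption for illustration, not taken from the patent.

```python
import numpy as np

def euler_to_rotation(rx, ry, rz):
    """Rotation matrix from Euler angles (radians), applied as Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rigid_transform(points, posture):
    """Apply the predicted posture to a decoded point cloud (sketch).

    points:  (N, 3) array from the shape sampling sub-network.
    posture: dict with assumed Euler angles 'rx', 'ry', 'rz' and translation 't' (3,).
    """
    R = euler_to_rotation(posture["rx"], posture["ry"], posture["rz"])
    t = np.asarray(posture["t"], dtype=float)
    return points @ R.T + t      # rotate each point, then translate
```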
The three-dimensional image reconstruction subsystem is based on computer vision and deep learning. Its main function is to build three-dimensional models from the collected front and back images of crops (soybean, corn, wheat, rice and the like) and to convert the two-dimensional plane image information into a three-dimensional image for electronic archiving. It can electronically archive the skin and volume information of the grains, so that the recorded skin and volume do not change as the physical samples age in storage, and the grain quality can be derived from the three-dimensional volume.
4A. Surface defect detection and positioning subsystem
As shown in FIG. 4, the surface defect detection and positioning subsystem comprises, arranged in sequence:
a deep convolutional network model for extracting high-dimensional features from the normalized image information so that each ROI produces a feature map of fixed size;
an RPN region-proposal network for generating proposed regions and their coordinate information, which are mapped onto the last convolutional feature map extracted by the deep convolutional network model; in this embodiment, 500 proposed regions are generated per picture;
and a classification-probability and bounding-box-regression evaluation model, which jointly trains the classification probability and the bounding-box regression.
4B. Surface defect detection and positioning method
The surface defect detection and positioning method comprises the following specific steps:
S1: inputting training images with defects;
S2: normalizing the images and feeding them into the deep convolutional network model for high-dimensional feature extraction;
S3: generating proposed regions with the RPN region-proposal network, 500 proposed regions per picture;
S4: mapping the generated proposed regions onto the last convolutional feature map extracted by the deep convolutional network model;
S5: using a pooling layer in the deep convolutional network model so that each ROI produces a feature map of fixed size;
S6: jointly training the classification probability and the bounding-box regression (see the sketch after these steps);
S7: deploying the trained detection network in the cloud or on the client.
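Steps S1 to S6 describe a Faster R-CNN-style pipeline (backbone feature extraction, RPN proposals, ROI pooling, joint classification and box regression). As a stand-in illustration only (the patent uses its own pycaffe-trained models), a minimal sketch of such joint training with a recent torchvision might look as follows; the two-class label set and the data loader are assumptions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: background + "imperfect-grain defect" (an illustrative label set).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2,
                                rpn_post_nms_top_n_train=500)  # ~500 proposals per picture, as in S3
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """Joint training of RPN proposals, classification probability and
    bounding-box regression (steps S2 to S6)."""
    model.to(device).train()
    for images, targets in loader:        # targets: dicts with 'boxes' and 'labels'
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)          # RPN + ROI-head losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```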
The subsystem and method are based mainly on deep-learning object detection and locate the imperfect-grain defects of crops. This differs from traditional imperfect-grain classification, which only makes a preliminary distinction of the imperfect-grain class and cannot accurately give the specific defect positions.
5A. Crop image automatic classification subsystem
As shown in FIG. 5, the crop image automatic classification subsystem includes:
an input module that receives an image of the target crop to be detected; the image format is not restricted, but an image in a lossless compression format such as PNG is recommended, and if the input image format is not supported by the module, a codec plug-in for that format should be installed; the module searches for the edge contours of the grains in the image so as to filter out useless image data outside the grains;
a classification module that analyzes the data from the input module with a deep neural network model and gives the corresponding analysis result; the classification module supports plug-in installation, which facilitates system expansion, and corresponding plug-ins can be made and installed for different grains;
and an output module that displays the input image, marks the edge contours of the grains in the image, and gives the crop classification result based on the image; the output module supports saving the output images, in which the grain contours are drawn and the classification result of each grain is marked; in addition, the module supports outputting result data, so the system can serve as an extension module of other systems, and the output can be customized based on the data output by the system.
5B. Crop image automatic classification method
The crop image automatic classification method comprises the following steps:
S1, receiving an image of the target crop to be detected, searching for the edge contours of the grains in the image, filtering out useless image data outside the grains, and sending the image data to the classification module (a contour sketch follows these steps);
S2, analyzing the data from the input module with the deep neural network model and giving the corresponding analysis result;
and S3, displaying the input image, marking the edge contours of the grains in the image, and giving the classification result of each grain based on the image.
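The patent does not give the contour-search implementation of S1. A minimal OpenCV sketch under the assumption of a roughly uniform background follows; the threshold choice, minimum area, and crop-out behaviour are illustrative.

```python
import cv2
import numpy as np

def extract_grain_regions(image_bgr, min_area=100):
    """S1 sketch: find grain edge contours and drop image data outside them."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold separates grains from a roughly uniform background
    # (use THRESH_BINARY_INV instead if the grains are darker than the background).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    grains = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                                  # ignore noise specks
        x, y, w, h = cv2.boundingRect(c)
        region_mask = np.zeros_like(gray)
        cv2.drawContours(region_mask, [c], -1, 255, thickness=cv2.FILLED)
        grain = cv2.bitwise_and(image_bgr, image_bgr, mask=region_mask)
        grains.append((c, grain[y:y + h, x:x + w]))   # contour + cropped grain
    return grains
```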
Based on the strong image-analysis capability of artificial intelligence, the subsystem and method assist workers involved in crop classification (for example, imperfect-grain detection of wheat and rice) by giving a classification judgment from the crop images.
6A. Crop grade online review subsystem
As shown in FIG. 7, the crop grade online review subsystem includes:
a data acquisition module for acquiring grain image data, integrated into the hardware platform for grain data acquisition; specifically, this module requires an operator to place the target grain on the stage of the grain data acquisition hardware platform and press the start button, after which the hardware automatically acquires image data of the target grain and recovers the grain sample;
a network module for supporting communication between the hardware acquisition device and the server; the communication includes transmitting grain image data to the server and receiving the server's analysis results; the module is implemented on the open-source library Boost ASIO, and the network transport supports fragmenting oversized data so that the server can receive it correctly and completely; it also supports caching network requests, so the currently executing task can be cached if the software exits abnormally, ready for later recovery;
an algorithm module for analyzing the grain image data received by the server and giving the corresponding analysis and review results; the algorithm module is deployed on the server; each grain is segmented from the grain image with a deep-neural-network-based segmentation algorithm, each segmented grain is then quality-checked with a deep-neural-network-based quality detection algorithm, and finally all detection results are statistically analyzed and the data recorded;
a data module for unified storage and management of the grain data received by the server, the result data analyzed by the server, and so on; this module can serve as the basis for later big-data analysis of grain reviews;
and a result query and display module for retrieving and displaying the analysis and review results of specified grains from the data module; the module supports querying all historical grain analysis and review results, supports statistical analysis of selected results, and provides basic big-data analysis conclusions.
6B. Crop grade online review method
S1, the data acquisition module acquires the grain image data and sends it to the algorithm module through the network module;
S2, the algorithm module first segments each grain from the grain image with a deep-neural-network-based segmentation algorithm, then performs quality detection on each segmented grain with a deep-neural-network-based quality detection algorithm, and finally statistically analyzes all detection results and records the data;
as shown in FIG. 8A, the segmentation algorithm separates each grain from an image containing multiple grains and extracts it individually;
as shown in FIG. 8B, the quality detection algorithm feeds the individually extracted grain image data into a deep learning neural network for computation to obtain the grain's quality parameters (a pipeline sketch follows these steps);
S3, the algorithm module outputs the grain data to the data module through the network module, and the data module stores and manages the result data analyzed by the server in a unified way;
and S4, the result query and display module retrieves, through the network module, the analysis and review results of specified grains from the data module for display; the module supports querying all historical grain analysis and review results, supports statistical analysis of selected results, and provides basic big-data analysis conclusions.
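As a minimal end-to-end sketch of S2 (segmentation, per-grain quality detection, statistics), the following assumes a `segment_grains` function like the contour sketch in section 5B, a trained per-grain classifier `quality_net`, and a `to_tensor` preprocessing callable; all of these names and the quality label set are illustrative, not the patent's.

```python
import collections
import torch
import torch.nn.functional as F

QUALITY_CLASSES = ["sound", "broken", "moldy", "sprouted"]   # assumed label set

def review_sample(image_bgr, segment_grains, quality_net, to_tensor):
    """S2 sketch: segment grains, score each with a quality network,
    then statistically summarise the sample."""
    counts = collections.Counter()
    for contour, grain_img in segment_grains(image_bgr):
        x = to_tensor(grain_img).unsqueeze(0)        # 1 x C x H x W
        with torch.no_grad():
            probs = F.softmax(quality_net(x), dim=1)[0]
        counts[QUALITY_CLASSES[int(probs.argmax())]] += 1
    total = sum(counts.values()) or 1
    # Per-class percentages feed the grade review and are recorded as data.
    return {cls: 100.0 * counts[cls] / total for cls in QUALITY_CLASSES}
```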
This subsystem mainly implements the online review of the crop quality grade based on artificial intelligence and computer vision. Crop images are acquired with the hardware device and transmitted to the server over the network; the artificial intelligence analysis algorithm configured on the server analyzes the acquired crop images to obtain the corresponding analysis and review results, which are then displayed over the network.
7A. Sample labeling subsystem
As shown in FIG. 9, the sample labeling subsystem comprises:
a registration and login unit covering user registration, user deregistration, and user permissions;
a file unit comprising an upload subunit and a download subunit, where the upload subunit uploads picture samples of a set size and format to be labeled, and the download subunit downloads the revised labeling data file in a specified format for detection-algorithm training;
and a sample labeling unit for intelligently recognizing the picture samples uploaded by the upload subunit together with the xml files obtained by processing the corresponding picture samples with a python script; the resulting sample labeling information only needs manual fine-tuning before it can be used for algorithm model training. The label type can be selected, for example box, point, or polygon, and the line color of labeled crop grains can be set (if not set, a color is shown at random). Labeled sample pictures can be zoomed in and out, which greatly improves labeling accuracy; the unit can also review missed labels in real time by covering labeled crop grains so that unlabeled grains stand out, greatly reducing the effort of manual labeling.
7B. Sample labeling method
The sample labeling method comprises the following specific steps:
S1, registering user information in the registration and login unit and granting the user the relevant permissions;
S2, uploading the picture samples to be labeled through the upload subunit of the file unit;
S3, intelligently recognizing the sample position data of the uploaded picture samples, selecting the label type, setting the line color of the labeled crop grains, and obtaining the final sample labeling file after manual re-inspection;
and S4, downloading and outputting the sample labeling file through the download subunit of the file unit. The sample position data uploaded into the software can first be recognized intelligently; the label type can then be chosen as a box, point, polygon and so on, and the line colors of the labeled wheat grains are chosen at random.
The method labels grain picture samples based mainly on artificial intelligence; it integrates a tensorflow-based semi-automatic labeling model that can preprocess part of the data to ease manual labeling, and the software can intelligently randomize the label colors to prevent missed labels. Its main function is to label crop pictures (wheat, rice and the like) to build the data set for the detection algorithm. The more accurate the sample labeling, the better the final performance of the algorithm model.
The crop picture sample labeling subsystem provides member management, such as creating and deleting users, and covers the basic image labeling modes such as box, point, and polygon; labeled samples can be distinguished with different colors, and the subsystem can also cover labeled samples in color, making omissions and errors easy to check. The crop picture sample labeling software integrates a tensorflow-based semi-automatic labeling model that can preprocess part of the data; for example, an xml coordinate data file obtained by processing a picture with python-opencv can be imported into the labeling software, greatly reducing the time spent labeling samples (a sketch of producing such a file follows).
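The xml schema the labeling software expects is not specified in the patent. As an illustrative sketch, the following python-opencv script pre-detects grain bounding boxes and writes them to a Pascal-VOC-style xml file that a labeler could then fine-tune; the schema, class name, and helper name are assumptions.

```python
import cv2
import xml.etree.ElementTree as ET

def pre_annotate(image_path, xml_path, min_area=100):
    """Detect rough grain boxes with python-opencv and write a VOC-style xml
    that can be imported into the labeling software and fine-tuned by hand."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_path
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(img.shape[1])
    ET.SubElement(size, "height").text = str(img.shape[0])

    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "grain"      # placeholder class label
        box = ET.SubElement(obj, "bndbox")
        ET.SubElement(box, "xmin").text = str(x)
        ET.SubElement(box, "ymin").text = str(y)
        ET.SubElement(box, "xmax").text = str(x + w)
        ET.SubElement(box, "ymax").text = str(y + h)

    ET.ElementTree(root).write(xml_path)
```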
The above subsystems may work individually or in combination; some of them may use the subsystems and methods disclosed in this application, or other systems and methods known in the art.
The above description covers only the preferred embodiments of the present invention, but the scope of the present invention is not limited thereto; any equivalent or modified technical solution that a person skilled in the art could readily conceive within the technical scope disclosed herein, according to the technical solutions and inventive concept of the present invention, shall fall within the scope of the present invention.

Claims (10)

1. A crop shape analysis subsystem, comprising:
a computer vision processing module for acquiring multi-angle crop appearance scan images and preprocessing them to obtain images of individual crops;
and an artificial intelligence processing module for processing the single-crop images to obtain high-dimensional feature values of the images, statistically combining them into multi-dimensional feature values, further calculating template information of the crops in the images, and finally fitting and displaying the crop contour information according to the template information.
2. The crop shape analysis subsystem of claim 1, wherein the computer vision processing module comprises, arranged in sequence:
an image scanning device for scanning a set amount of crop sample to obtain multi-angle crop appearance scan images;
and an image processing unit for preprocessing the crop appearance scan images and finally dividing an image containing multiple crops into multiple single-crop images; the image processing unit comprises a filtering algorithm subunit and an edge signal enhancement subunit, where the filtering algorithm subunit filters interference signals in the images and the edge signal enhancement subunit registers the multi-angle crop appearance scan images.
3. The crop shape analysis subsystem according to claim 1 or 2, wherein the artificial intelligence processing module comprises, arranged in sequence:
a deep convolutional network model for converting the acquired single-crop image into the corresponding high-dimensional feature values;
a neural network unit for statistically combining the high-dimensional feature values into multi-dimensional feature values;
and a calculation and display unit for calculating template information of the crops in the image from the multi-dimensional feature values, and fitting and displaying the crop contour information according to the template information.
4. The crop shape analysis subsystem of claim 3, wherein the deep convolutional network model further comprises a channel pruning subunit for compressing redundant feature channels of the image.
5. A crop shape analysis method, characterized by comprising the following steps:
S1, after multi-angle crop appearance scan images are obtained, the scan images are preprocessed to obtain images of individual crops;
S2, the single-crop images are processed to obtain high-dimensional feature values of the images, which are statistically combined into multi-dimensional feature values; template information of the crops in the images is then calculated, and finally the crop contour information is fitted and displayed according to the template information.
6. The crop shape analysis method according to claim 5, wherein step S1 specifically comprises the following steps:
S11, an image scanning device in the computer vision processing module scans a set amount of crop sample to obtain multi-angle crop appearance scan images;
S12, an image processing unit in the computer vision processing module preprocesses the crop appearance scan images and finally divides the images containing multiple crops into multiple single-crop images; the image processing unit comprises a filtering algorithm and an edge signal enhancement algorithm, where the filtering algorithm filters interference signals in the images and the multi-angle crop appearance scan images are then registered with the edge signal enhancement algorithm.
7. The crop shape analysis method of claim 6, wherein step S2 comprises the following steps:
S21, the deep convolutional network model converts the acquired single-crop image into the corresponding high-dimensional feature values;
S22, the neural network unit statistically combines the high-dimensional feature values into multi-dimensional feature values;
and S23, the calculation and display unit calculates template information of the crops in the image from the multi-dimensional feature values, and fits and displays the crop contour information according to the template information.
8. The method for crop shape analysis according to claim 7, wherein in step S21, the deep convolutional network model compresses redundant feature channels of the image using a channel pruning method.
9. The crop shape analysis method of claim 6, wherein step S12 comprises the following steps:
S121, labeling the crop appearance scan images;
S122, training the deep convolutional network model; based on the labeled data, the deep convolutional network model is trained with a pycaffe-based training framework to obtain multiple groups of model weights;
S123, performing weight optimization on the deep convolutional network model to obtain an optimal model;
S124, in actual use, processing the collected images with the optimal model to obtain multi-angle images of the crops;
and S125, performing shape analysis on each single-crop image sample.
10. The method of claim 9, wherein in step S121 labeling the crop grains means marking the edge contours on the crop appearance scan images with the labeling tool CVAT.
CN202011541364.2A (published as CN112560748A): filed 2020-12-23, priority date 2020-12-23, "Crop shape analysis subsystem and method", status Pending.

Priority Applications (1)

CN202011541364.2A (CN112560748A): priority date 2020-12-23, filing date 2020-12-23, "Crop shape analysis subsystem and method"

Publications (1)

CN112560748A, published 2021-03-26

Family ID: 75031783

Family Applications (1)

CN202011541364.2A (CN112560748A, Pending): priority date 2020-12-23, filing date 2020-12-23, "Crop shape analysis subsystem and method"

Country Status (1)

CN: CN112560748A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895192A (en) * 2017-12-06 2018-04-10 广州华多网络科技有限公司 Depth convolutional network compression method, storage medium and terminal
CN108875747A (en) * 2018-06-15 2018-11-23 四川大学 A kind of wheat unsound grain recognition methods based on machine vision
CN109087241A (en) * 2018-08-22 2018-12-25 东北农业大学 A kind of agricultural crops image data nondestructive collection method
CN109357630A (en) * 2018-10-30 2019-02-19 南京工业大学 A kind of polymorphic type batch workpiece vision measurement system and method
CN111652292A (en) * 2020-05-20 2020-09-11 贵州电网有限责任公司 Similar object real-time detection method and system based on NCS and MS

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627386A (en) * 2021-08-30 2021-11-09 山东新一代信息产业技术研究院有限公司 Visual video abnormity detection method
CN114037835A (en) * 2022-01-11 2022-02-11 安徽高哲信息技术有限公司 Grain quality estimation method, device, equipment and storage medium
CN116311231A (en) * 2023-05-25 2023-06-23 安徽高哲信息技术有限公司 Grain insect etching identification method, device, electronic equipment and readable storage medium
CN116311231B (en) * 2023-05-25 2023-09-19 安徽高哲信息技术有限公司 Grain insect etching identification method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-03-26)