CN114078228A - Target identification method and system based on lightweight network and agricultural machine


Info

Publication number: CN114078228A
Authority: CN (China)
Prior art keywords: information, data, frame, target, processing
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN202010797595.3A
Other languages: Chinese (zh)
Inventors: 杨强荣, 方小永, 何振军, 贡军, 高一平
Current Assignee: Zhonglian Agricultural Machinery Co ltd
Original Assignee: Zhonglian Agricultural Machinery Co ltd
Application filed by Zhonglian Agricultural Machinery Co ltd
Priority to CN202010797595.3A
Publication of CN114078228A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a target identification method based on a lightweight network, comprising the following steps: establishing a target database, the target database comprising target images; performing a data preprocessing operation on the target images to obtain corresponding preprocessed data; establishing a lightweight network model, and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data; and acquiring frame information, and analyzing the processed data based on the frame information to obtain target identification information. The invention also discloses a target identification system based on the lightweight network. By constructing a lightweight network framework to identify, analyze, and process agricultural targets, the method reduces both computational complexity and computational load. At the same time, the image data is augmented by combining a generative adversarial network, which improves the diversity and effectiveness of the data, and the training model is continuously optimized through the annotation information during identification, so that identification and positioning accuracy is greatly improved.

Description

Target identification method and system based on lightweight network and agricultural machine
Technical Field
The invention relates to the technical field of agricultural image identification, and in particular to a target identification method and system based on a lightweight network, and to an agricultural machine.
Background
Artificial intelligence technology is increasingly widely applied in the field of agricultural machinery. Target identification and detection technology, which combines electronic technology, sensing technology, computer technology, intelligent control technology, and the like, plays a key role in automating agricultural machinery and is increasingly applied throughout agricultural production, for example in weed identification, pest and disease identification, crop lodging identification, harvest object identification, bundling object identification, and fertilization object identification.
In the prior art, technicians have developed perception methods for agricultural targets that identify them through image recognition and deep learning. On the one hand, existing agricultural target identification methods depend heavily on expert experience in the design stage of the feature extractor and generalize poorly. On the other hand, identifying an agricultural target requires a large amount of image preprocessing, which places extremely high demands on the computing capability of the processing device and makes the processing flow time-consuming and inefficient. In addition, existing target identification methods position agricultural targets poorly.
Disclosure of Invention
In order to solve the technical problems of high computational complexity, low identification efficiency, and poor positioning accuracy in prior-art methods for identifying agricultural targets, embodiments of the invention provide a target identification method based on a lightweight network, a target identification system based on the lightweight network, and an agricultural machine.
In order to achieve the above object, an embodiment of the present invention provides an object identification method based on a lightweight network, where the identification method includes: establishing a target database, wherein the target database comprises a target image; performing data preprocessing operation on the target image to obtain corresponding preprocessing data; establishing a lightweight network model, and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data; and acquiring frame information, and analyzing the processed data based on the frame information to acquire target identification information.
Preferably, the establishing a target database includes: acquiring a target image; performing effectiveness screening on the target image to obtain a screened image; labeling the screened image to obtain labeling information corresponding to the screened image, wherein the labeling information comprises first target information and second target information; and establishing a target database based on the screened image and the labeling information.
Preferably, the performing a data preprocessing operation on the target image to obtain corresponding preprocessed data includes: performing a first data enhancement operation on the screened image to obtain first enhancement data; executing a second data enhancement operation on the first enhancement data to obtain second enhancement data; and processing the second enhancement data according to a preset requirement to obtain the preprocessed data.
Preferably, the processing the second enhancement data according to a preset requirement to obtain the preprocessed data includes: acquiring preset picture format information; performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data; and carrying out normalization processing on the formatted data to obtain the preprocessed data.
Preferably, the establishing a lightweight network model, processing the preprocessed data based on the lightweight network model, and obtaining corresponding processed data includes: acquiring a base network architecture, the base network architecture being a network architecture of lightweight design; acquiring first channel information, and performing data expansion processing on the preprocessed data in the base network architecture based on the first channel information to obtain expanded data; acquiring second channel information, and performing feature extraction processing on the expanded data in the base network architecture based on the second channel information to obtain extracted data; and performing a fusion operation on the extracted data in the base network architecture based on the first channel information to obtain fused data, the fused data serving as the processed data.
Preferably, the annotation information further includes size information of the screened images, and the identification method further includes: after the frame information is obtained, generating corresponding size statistical information based on the size information; and adjusting the frame information based on the base network architecture and the size statistical information to obtain adjusted frame information.
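The size-statistics step described in the preceding paragraph can be sketched in Python as follows. The patent only says "size statistical information" is generated and used to adjust the frame information; using width/height percentiles of the labeled boxes as that statistic is an assumption made here for illustration.

```python
import numpy as np

# Sketch of the size-statistics adjustment: collect the size information
# of the labeled boxes in the screened images, then derive adjusted frame
# (anchor) sizes from the distribution. Percentiles are an assumed choice
# of statistic; the patent does not specify one.

def adjust_anchors(box_sizes, num_anchors=3):
    sizes = np.asarray(box_sizes, dtype=float)       # (N, 2): width, height
    qs = np.linspace(10, 90, num_anchors)            # spread over the range
    return [tuple(np.percentile(sizes, q, axis=0)) for q in qs]

# Hypothetical (width, height) pairs taken from annotation information:
anchors = adjust_anchors([(30, 40), (32, 44), (60, 80), (64, 90), (120, 160)])
```

The returned anchor sizes then replace the preset frame information before analysis.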
Preferably, the analyzing the processed data based on the frame information to obtain target identification information includes: analyzing the processed data based on the frame information to obtain a plurality of candidate prediction boxes; screening the candidate prediction boxes according to a preset screening rule to obtain at least one screened prediction box, the screened prediction box serving as the analysis result; and extracting classification information and positioning information of each screened prediction box, and generating target identification information corresponding to the screened prediction boxes based on the classification information and the positioning information.
Preferably, the identification method further comprises: generating loss calculation information based on the frame information, the second channel information and a preset loss calculation algorithm before analyzing the processed data based on the frame information; and optimizing the frame body information based on the loss calculation information to obtain the optimized frame body information.
Preferably, the optimizing the frame information based on the loss calculation information to obtain optimized frame information includes: judging whether the loss calculation information is greater than a preset loss threshold; and, when the loss calculation information is less than or equal to the preset loss threshold: processing the second channel information based on the loss calculation information to obtain processed second channel information, and optimizing the frame information based on the processed second channel information to obtain optimized frame information. The identification method further includes, in the case that the loss calculation information is greater than the preset loss threshold: judging whether the screened image corresponding to the loss calculation information is a qualified image; if so, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information; and if not, deleting the screened image corresponding to the loss calculation information.
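The threshold-driven branch described above reduces to a small decision function. The sketch below follows only the decision flow stated in the text; the `sample` dictionary and its `qualified` flag are hypothetical stand-ins for the screened image and its quality judgment.

```python
# Sketch of the loss-based optimization branch: compare the loss
# calculation information against a preset loss threshold, then either
# optimize the frame information (low loss) or audit the training
# sample (high loss). Data structures are illustrative assumptions.

def handle_loss(loss, threshold, sample):
    if loss <= threshold:
        # loss acceptable: optimize frame info via processed channel info
        return ("optimize_frames", sample)
    if sample.get("qualified", False):
        # qualified image with high loss: adjust its annotation information
        return ("adjust_annotation", sample)
    # unqualified image with high loss: delete it from the database
    return ("delete_image", sample)

decision, _ = handle_loss(0.8, 0.5, {"qualified": True})
```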
Preferably, the screening the candidate prediction boxes according to a preset screening rule to obtain at least one screened prediction box includes: screening the candidate prediction boxes with a non-maximum suppression algorithm to obtain first-stage prediction boxes; screening the first-stage prediction boxes against a score threshold to obtain second-stage prediction boxes; screening the second-stage prediction boxes against a minimum size threshold to obtain third-stage prediction boxes; and judging whether any third-stage prediction boxes overlap by more than a maximum overlap threshold: if so, acquiring the target score information of the overlapping boxes, deleting the overlapping box with the smaller target score, and obtaining at least one screened prediction box; otherwise, taking the third-stage prediction boxes as the screened prediction boxes.
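The four-stage screening cascade above can be sketched directly in Python. The stage order follows the text; all threshold values and the `(x1, y1, x2, y2)` box format are illustrative assumptions.

```python
import numpy as np

# Sketch of the screening cascade: 1) non-maximum suppression,
# 2) score threshold, 3) minimum-size threshold, 4) deletion of the
# lower-scoring box in any pair exceeding the maximum overlap threshold.

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def screen(boxes, scores, nms_thr=0.5, score_thr=0.3, min_size=4, max_overlap=0.8):
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:                                # 1) non-maximum suppression
        if all(iou(boxes[i], boxes[j]) <= nms_thr for j in keep):
            keep.append(i)
    keep = [i for i in keep if scores[i] > score_thr]            # 2) score threshold
    keep = [i for i in keep if (boxes[i][2] - boxes[i][0]) >= min_size
            and (boxes[i][3] - boxes[i][1]) >= min_size]         # 3) minimum size
    final = []
    for i in keep:                                 # 4) max-overlap de-duplication
        if all(iou(boxes[i], boxes[j]) <= max_overlap for j in final):
            final.append(i)
    return [int(i) for i in final]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 52, 52], [20, 20, 40, 40]])
scores = np.array([0.9, 0.8, 0.7, 0.2])
kept = screen(boxes, scores)
```

Here the second box is suppressed by NMS, the fourth falls below the score threshold, and the third fails the minimum-size check, leaving only the first box.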
Accordingly, the present invention also provides a target identification system based on a lightweight network, the identification system comprising: a library construction unit for establishing a target database, the target database comprising target images; a preprocessing unit for performing a data preprocessing operation on the target images to obtain corresponding preprocessed data; a lightweight network unit for establishing a lightweight network model and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data; and an identification unit for acquiring frame information and analyzing the processed data based on the frame information to obtain target identification information.
Preferably, the library construction unit includes: the image acquisition module is used for acquiring a target image; the effectiveness screening module is used for carrying out effectiveness screening on the target image to obtain a screened image; the marking module is used for marking the screened image to obtain marking information corresponding to the screened image, and the marking information comprises first target information and second target information; and the library establishing module is used for establishing a target database based on the screened images and the marking information.
Preferably, the preprocessing unit includes: the first enhancement module is used for executing a first data enhancement operation on the screened image to obtain first enhancement data; the second enhancement module is used for executing second data enhancement operation on the first enhancement data to obtain second enhancement data; and the preprocessing module is used for processing the second enhanced data according to a preset requirement to obtain the preprocessed data.
Preferably, the processing the second enhancement data according to a preset requirement to obtain the preprocessed data includes: acquiring preset picture format information; performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data; and carrying out normalization processing on the formatted data to obtain the preprocessed data.
Preferably, the lightweight network unit comprises: a base network module for acquiring a base network architecture, the base network architecture being a network architecture of lightweight design; an expansion module for acquiring first channel information and performing data expansion processing on the preprocessed data in the base network architecture based on the first channel information to obtain expanded data; an extraction module for acquiring second channel information and performing feature extraction processing on the expanded data in the base network architecture based on the second channel information to obtain extracted data; and a fusion module for performing a fusion operation on the extracted data in the base network architecture based on the first channel information to obtain fused data, the fused data serving as the processed data.
Preferably, the annotation information further includes size information of the screened images, and the identification unit includes: a statistics module for generating corresponding size statistical information based on the size information after the frame information is obtained; and an adjustment module for adjusting the frame information based on the base network architecture and the size statistical information to obtain adjusted frame information.
Preferably, the identification unit includes: an analysis module for analyzing the processed data based on the frame information to obtain a plurality of candidate prediction boxes; a screening module for screening the candidate prediction boxes according to a preset screening rule to obtain at least one screened prediction box; and an identification module for extracting classification information and positioning information of each screened prediction box and generating target identification information corresponding to the screened prediction boxes based on the classification information and the positioning information.
Preferably, the identification unit further comprises: a loss calculation module, configured to generate loss calculation information based on the frame information, the second channel information, and a preset loss calculation algorithm before analyzing the processed data based on the frame information; and the frame body optimization module is used for optimizing the frame body information based on the loss calculation information to obtain the optimized frame body information.
Preferably, the optimizing the frame information based on the loss calculation information to obtain optimized frame information includes: judging whether the loss calculation information is greater than a preset loss threshold; and, when the loss calculation information is less than or equal to the preset loss threshold: processing the second channel information based on the loss calculation information to obtain processed second channel information, and optimizing the frame information based on the processed second channel information to obtain optimized frame information. The identification system further operates, in the case that the loss calculation information is greater than the preset loss threshold, by: judging whether the screened image corresponding to the loss calculation information is a qualified image; if so, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information; and if not, deleting the screened image corresponding to the loss calculation information.
Preferably, the screening the candidate prediction boxes according to a preset screening rule to obtain at least one screened prediction box includes: screening the candidate prediction boxes with a non-maximum suppression algorithm to obtain first-stage prediction boxes; screening the first-stage prediction boxes against score threshold information to obtain second-stage prediction boxes; screening the second-stage prediction boxes against a minimum size threshold to obtain third-stage prediction boxes; and judging whether any third-stage prediction boxes overlap by more than a maximum overlap threshold: if so, acquiring the target score information of the overlapping boxes, deleting the overlapping box with the smaller target score, and obtaining at least one screened prediction box; otherwise, taking the third-stage prediction boxes as the screened prediction boxes.
In another aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method provided by the present invention.
In another aspect, the present invention also provides an agricultural machine including the identification system provided by the invention and/or the computer-readable storage medium provided by the invention.
Through the technical scheme provided by the invention, the invention at least has the following technical effects:
through the above technical scheme, agricultural targets are identified, analyzed, and processed by constructing a lightweight network architecture, which greatly reduces computational complexity and computational load; at the same time, the image data is augmented by innovatively combining a generative adversarial network, effectively improving the diversity and effectiveness of the data; furthermore, the training model is continuously optimized through the annotation information during identification, so that identification accuracy is greatly improved.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 is a flowchart of a specific implementation of a lightweight-network target identification method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of data preprocessing in the lightweight-network target identification method according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific implementation of processing preprocessed data based on the lightweight network in the target identification method according to an embodiment of the present invention;
fig. 4 is a flowchart of a specific implementation of analyzing processed data based on frame information in the lightweight-network target identification method according to an embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation of screening a plurality of candidate prediction boxes in the lightweight-network target identification method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a lightweight-network target identification system according to an embodiment of the present invention.
Detailed Description
In order to solve the technical problems of high computational complexity, low identification efficiency, and poor positioning accuracy in prior-art methods for identifying agricultural targets, embodiments of the invention provide a target identification method based on a lightweight network, a target identification system based on the lightweight network, and an agricultural machine.
The agricultural target recognition of the present invention refers to recognition of relevant objects of agricultural working machines, including weed recognition, pest recognition, crop lodging recognition, harvest object recognition, bundling object recognition, fertilization object recognition, crop growth situation recognition, and the like.
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
The terms "system" and "network" may be used interchangeably in embodiments of the present invention. "Plurality" means two or more, and may therefore also be understood as "at least two" in the embodiments of the present invention. "And/or" describes an association between objects and covers three cases: for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" generally indicates an "or" relationship between the preceding and following objects, unless otherwise specified. It should also be understood that the terms "first", "second", etc. in the description of embodiments of the invention are used to distinguish between descriptions and are not to be construed as indicating or implying relative importance or order.
Referring to fig. 1, an embodiment of the present invention provides an object identification method based on a lightweight network, where the identification method includes:
S10) establishing a target database, the target database comprising target images;
S20) performing a data preprocessing operation on the target images to obtain corresponding preprocessed data;
S30) establishing a lightweight network model, and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data;
S40) acquiring frame information, and analyzing the processed data based on the frame information to obtain target identification information.
In an embodiment of the present invention, the establishing a target database includes: acquiring a target image; performing effectiveness screening on the target image to obtain a screened image; labeling the screened image to obtain labeling information corresponding to the screened image, wherein the labeling information comprises first target information and second target information; and establishing a target database based on the screened image and the labeling information.
In one possible implementation, to identify weeds in a farmland or farm area, a database of agricultural targets must first be established; for example, the target database is a database of crops and weeds. In the embodiment of the present invention, a technician flies a plant-protection machine automatically over the farmland area to be identified and captures farmland images with a camera mounted on it. After the farmland images are obtained, the preliminarily acquired images are screened for validity; for example, the technician inspects each acquired image manually and deletes images containing non-farmland scenes, highly repetitive content, blur, or no target crops, so that only images meeting the identification requirements or identification scene are retained. The technician then labels each screened image, for example marking crops and weeds with label frames and recording information such as the corresponding size information for each label frame.
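The database-building flow just described can be sketched as a small Python structure. The field names (`image`, `annotations`, `label`, `box`, `size`) are illustrative assumptions for this sketch, not the patent's actual schema.

```python
# Minimal sketch of the target database: each screened image is stored
# together with its annotation information, i.e. labeled boxes for
# crops (first target information) and weeds (second target
# information), plus per-box size information for later statistics.

def build_target_database(screened_images):
    """screened_images: list of (image_id, list of raw annotations)."""
    database = []
    for image_id, raw_boxes in screened_images:
        annotations = []
        for label, x, y, w, h in raw_boxes:
            annotations.append({
                "label": label,          # e.g. "crop" or "weed"
                "box": (x, y, w, h),     # label-frame position and size
                "size": (w, h),          # size information
            })
        database.append({"image": image_id, "annotations": annotations})
    return database

# Hypothetical screened image with two labeled targets:
db = build_target_database([
    ("field_001.jpg", [("crop", 10, 20, 64, 48), ("weed", 80, 30, 32, 32)]),
])
```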
Referring to fig. 2, in the embodiment of the present invention, the performing a data preprocessing operation on the target image to obtain corresponding preprocessed data includes:
S201) performing a first data enhancement operation on the screened images to obtain first enhancement data;
S202) performing a second data enhancement operation on the first enhancement data to obtain second enhancement data;
S203) processing the second enhancement data according to a preset requirement to obtain the preprocessed data.
In one possible embodiment, after the agricultural target database is created (for example, a database of lodged wheat), preprocessing of the screened images in that database begins. First, a preset operation probability is acquired, and whether to perform the first data enhancement operation on a screened image is decided according to that probability. For example, if the preset operation probability is 33% and the database of lodged wheat contains 35 screened images, the 35 screened images are subjected in turn to a randomized first data enhancement operation according to the preset operation probability.
For example, for the 1st screened image the random draw selects no first data enhancement operation, so that image is skipped; for the 2nd screened image the random draw selects the first data enhancement operation, and an operation such as left-right flipping, cropping, brightness conversion, or color conversion is then chosen at random to enhance the image, yielding an enhanced screened image …, until all screened images have completed the first data enhancement operation according to the preset operation probability. Finally, the second enhancement data is processed according to a preset requirement; for example, the preset requirement is the input image size required by the lightweight network, i.e., the second enhancement data is cropped to that input size to obtain the final preprocessed data.
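The probabilistic first-enhancement pass above can be sketched as follows. The 33% probability and the specific operations (flip, crop, brightness) come from the text; the crop margins and brightness factor are illustrative assumptions.

```python
import random

import numpy as np

# Sketch of the first data-enhancement pass: each screened image is
# enhanced with a preset operation probability, and the enhancement
# itself is chosen at random among left-right flip, crop, and
# brightness conversion.

def first_enhancement(images, op_probability=0.33, rng=None):
    rng = rng or random.Random(0)
    out = []
    for img in images:
        if rng.random() >= op_probability:
            out.append(img)          # skip: no enhancement for this image
            continue
        op = rng.choice(["flip", "crop", "brightness"])
        if op == "flip":
            out.append(img[:, ::-1])                 # left-right flip
        elif op == "crop":
            h, w = img.shape[:2]
            out.append(img[h // 8: h - h // 8, w // 8: w - w // 8])
        else:
            out.append(np.clip(img * 1.2, 0, 255))   # brightness conversion
    return out

# 35 screened images, matching the example in the text:
batch = [np.full((32, 32), 100.0) for _ in range(35)]
enhanced = first_enhancement(batch)
```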
In traditional agricultural target identification methods, the acquired image is usually fed directly into image recognition or deep learning to produce an identification result. The input images in such methods are usually captured in a single way or collected through similar channels, so the input data often lacks diversity and effectiveness.
Further, in this embodiment of the present invention, the processing the second enhancement data according to a preset requirement to obtain the preprocessed data includes: acquiring preset picture format information; performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data; and carrying out normalization processing on the formatted data to obtain the preprocessed data.
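The format-processing and normalization step above can be sketched in a few lines. The 224×224 target size (a common MobileNet input) and the center-crop-then-pad strategy and [0, 1] normalization are assumptions; the patent only states "format processing" and "normalization".

```python
import numpy as np

# Sketch of processing the second enhancement data per the preset
# requirement: bring each image to a preset picture format, then
# normalize it to obtain the preprocessed data.

def preprocess(image, target_hw=(224, 224)):
    th, tw = target_hw
    h, w = image.shape[:2]
    # center-crop to the preset picture format, padding if too small
    top = max((h - th) // 2, 0)
    left = max((w - tw) // 2, 0)
    cropped = image[top:top + th, left:left + tw]
    padded = np.zeros((th, tw), dtype=np.float32)
    padded[:cropped.shape[0], :cropped.shape[1]] = cropped
    return padded / 255.0            # normalization to [0, 1]

x = preprocess(np.full((300, 260), 255.0))
```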
In one possible implementation, the crop growth situation of a farmland or farm area is identified. To address the technical problem that, in existing agricultural target identification methods, the collected images are strongly affected by the ambient illumination of the identification scene (for example, in the embodiment of the present invention, crop images collected under different ambient illumination of the current scene vary, which in turn affects the subsequent identification and analysis of the crop growth situation), a further data enhancement operation is performed on the first enhancement data produced in step S201. First, a Generative Adversarial Network (GAN) is created. The GAN includes a first training model and a second training model: the first training model extracts and identifies the color and shape distribution of the first enhancement data according to preset identification parameters and generates new picture data from that distribution; the second training model then estimates the probability that the generated picture data comes from the screened images and optimizes the preset identification parameters of the first training model according to that probability. Through this adversarial training between the first and second training models, the first enhancement data is further enhanced to obtain the second enhancement data.
In the embodiment of the invention, a GAN is innovatively adopted to further enhance the first enhancement data, which greatly reduces the influence of color differences and shape variations in the input data on the identification process. This effectively avoids the problems of low identification accuracy and large identification deviation caused by large shape differences of the agricultural target due to external illumination or shooting angle, and greatly improves the accuracy and effectiveness of agricultural target identification.
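The adversarial setup of the two training models can be sketched as below. The network sizes, the tanh/sigmoid activations, and the loss form are generic GAN assumptions for illustration, not details disclosed by this embodiment:

```python
import numpy as np

rng = np.random.default_rng(42)

def generator(z, w):
    """First training model: maps noise z to a synthetic feature vector."""
    return np.tanh(z @ w)

def discriminator(x, v):
    """Second training model: probability that x comes from the screened images."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Toy dimensions and parameters (assumptions, not from this embodiment).
z = rng.normal(size=(8, 16))          # batch of 8 noise vectors
w = rng.normal(size=(16, 32)) * 0.1   # generator parameters (to be optimised)
v = rng.normal(size=(32,)) * 0.1      # discriminator parameters

fake = generator(z, w)
p_real = discriminator(fake, v)       # discriminator's judgement on the fakes

# The discriminator maximises log D(real) + log(1 - D(fake)); the generator
# maximises log D(fake), pushing its outputs toward the real distribution.
g_loss = -np.log(p_real + 1e-8).mean()
```

In the "mutual game" described above, gradient steps on `w` and `v` alternate; each model's improvement tightens the objective faced by the other.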
Referring to fig. 3, in the embodiment of the present invention, the establishing a lightweight network model, and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data includes:
S301) acquiring a basic network architecture, wherein the basic network architecture is a network architecture based on lightweight design;
S302) acquiring first channel information, and performing data expansion processing on the preprocessed data based on the first channel information in the basic network architecture to obtain expanded data;
S303) acquiring second channel information, and performing feature extraction processing on the expanded data based on the second channel information in the basic network architecture to obtain extracted data;
S304) performing a fusion operation on the extracted data in the basic network architecture based on the first channel information to obtain fused data, and taking the fused data as the processed data.
In order to solve the above technical problems, in a possible implementation, before the input data is identified, a basic network architecture is further constructed. The basic network architecture uses the lightweight MobileNet design as its basis; specifically, a MobileNetV2-based architecture is adopted. For an input image of size A×B×C, the number of channels of the basic network is first expanded through a 1×1 convolution kernel, and the expanded channels generate the corresponding expanded data from the preprocessed data. Then, feature extraction is performed on the expanded data through a 3×3 convolution kernel to realize pattern analysis of the expanded data and obtain the extracted data. Further, information fusion is performed on the extracted data across the different channels through a 1×1 convolution kernel, thereby obtaining the final fused data, where the fused data is the processed data.
In the embodiment of the invention, by adopting the lightweight network architecture, the preprocessed data is first processed by a 1×1 convolution kernel instead of a 3×3 convolution kernel, which greatly reduces the parameter count of the data processing, lowers the operation complexity and the data processing difficulty, increases the nonlinear capability of the network, improves the operation efficiency and accuracy, and reduces the model training loss.
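A quick parameter count illustrates the saving from factorizing a dense 3×3 convolution into a per-channel 3×3 filter plus a 1×1 cross-channel fusion, as MobileNet-style architectures do. The channel numbers below are assumed example values:

```python
# Weight counts of a standard 3x3 convolution versus the depthwise-separable
# factorization used by MobileNet-style lightweight networks.
# Channel counts are illustrative assumptions, not values from the patent.
C_in, C_out = 32, 32

standard = 3 * 3 * C_in * C_out   # dense 3x3 convolution: 9216 weights
depthwise = 3 * 3 * C_in          # one 3x3 filter per channel: 288 weights
pointwise = 1 * 1 * C_in * C_out  # 1x1 channel fusion: 1024 weights
separable = depthwise + pointwise # 1312 weights in total

print(standard, separable, standard // separable)  # roughly a 7x reduction
```

The ratio grows with the channel count, which is why the saving matters most in the deeper, wider layers of the network.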
In an embodiment of the present invention, the annotation information further includes filtered size information of the filtered image, and step S40 of the identification method further includes: after the frame body information is obtained, generating corresponding size statistical information based on the screened size information; and adjusting the frame information based on the basic network architecture and the size statistical information to obtain the adjusted frame information.
In a traditional agricultural target identification method, many labeling frames are generated during identification, and each labeling frame is generated randomly without a fixed size. As a result, a large number of labeling frames must be processed, which raises the operation complexity, slows the convergence of the identification network on agricultural target identification, and lowers the positioning accuracy of the agricultural target.
Therefore, in order to solve the above technical problem, in a possible embodiment, the harvested object of a harvester is identified, for example an apple. Before the input processed data is identified, a statistical analysis is further performed on the annotation information of the screened images, specifically on the screened size information: the size and aspect-ratio information in all the screened size information is statistically analyzed, and the screened size information is further cluster-analyzed using, for example, the K-Means algorithm. This improves the accuracy of the analysis, allows the receptive-field size of the apple on each layer of the screened image to be analyzed, and yields the optimal size and optimal aspect ratio of the labeling frame, that is, the adjusted frame information.
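A minimal sketch of such size clustering follows, using the common 1 − IoU distance and median cluster updates. That particular distance and update rule are the usual anchor-clustering practice and are assumptions here, not details fixed by the embodiment:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating boxes as sharing one corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + anchors[None, :, 0] * anchors[None, :, 1] - inter)
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster annotated (w, h) sizes into k anchor shapes with 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # best-IoU cluster
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(boxes[assign == j], axis=0)
    return anchors
```

The returned anchor shapes play the role of the "optimal size and optimal aspect ratio" above: fixed frame sizes that match the statistics of the annotated data set.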
In the embodiment of the invention, the frames used in the agricultural target identification process are further optimized. On one hand, this reduces the number of frames in the identification and analysis process, lowers the operation complexity, and increases the convergence rate of the identification network on agricultural target identification; on the other hand, the optimized frames better match the actual characteristics of the data sets (a first target set and a second target set, for example a crop set and a weed set), so the identification accuracy and positioning accuracy of the agricultural target are further improved.
Referring to fig. 4, in the embodiment of the present invention, analyzing the processed data based on the frame information to obtain target identification information includes:
S401) analyzing the processed data based on the frame information to obtain a plurality of candidate prediction frames;
S402) screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame;
S403) extracting the classification information and positioning information of each screened prediction frame, and generating target identification information corresponding to the screened prediction frames based on the classification information and the positioning information.
In one possible embodiment, insect pests in the current farmland or farm area are identified. After appropriate frame information is designed, the processed data is analyzed according to it. For example, the processed data is a plurality of images containing a first target (for example, a crop) and a second target (for example, a pest); at least one candidate prediction frame is marked on each image according to the distribution of the crops and pests on that image. The candidate prediction frames are then screened according to a preset screening rule to obtain at least one screened prediction frame, which contains the relevant information of the crops or pests on each image. Finally, the classification information and positioning information of each screened prediction frame are extracted, and the target identification information of the current farmland is generated; in the embodiment of the present invention, for example, pest identification information for the current field is generated.
In the embodiment of the present invention, the identification method further includes: generating loss calculation information based on the frame information, the second channel information and a preset loss calculation algorithm before analyzing the processed data based on the frame information; and optimizing the frame body information based on the loss calculation information to obtain the optimized frame body information.
Further, in this embodiment of the present invention, the optimizing the frame information based on the loss calculation information to obtain the optimized frame information includes: judging whether the loss calculation information is larger than a preset loss threshold value or not; when the loss calculation information is less than or equal to the preset loss threshold: processing the second channel information based on the loss calculation information to obtain processed second channel information; optimizing the frame information based on the processed second channel information to obtain optimized frame information; the identification method further comprises the following steps: in the case that the loss calculation information is greater than the preset loss threshold: judging whether the screened image corresponding to the loss calculation information is a qualified image; if the screened image corresponding to the loss calculation information is a qualified image, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information; and if not, deleting the screened image corresponding to the loss calculation information.
In order to further optimize the above frame information so that each marked prediction frame is as close as possible to the labeling frame, that is, so that each prediction result is as close as possible to the actual result, in the embodiment of the present invention the identification method is further trained through deep learning before the processed data is recognized, so as to enhance the accuracy of the prediction result.
In a possible implementation, a preset loss calculation algorithm is first obtained, and a loss calculation is performed on the plurality of screened prediction frames generated in the training process, in combination with the frame information and the second channel information, so as to evaluate the loss calculation information between the prediction frames and the labeled screened size information. It is then determined whether the obtained loss calculation information is greater than a preset loss threshold; for example, if the preset loss threshold is 20% and the current loss calculation information is 18%, the currently calculated loss calculation information is determined to be less than the preset loss threshold, so the second channel information is optimized and adjusted according to the loss calculation information to obtain the processed second channel information, and the frame information is optimized according to the processed second channel information to obtain the optimized frame information. In the subsequent training process of identifying the agricultural target, the second channel information and/or the frame information are continuously optimized according to the loss calculation information generated between the prediction frame and the labeling frame, so that the final prediction frame better matches the actual labeling frame.
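The threshold branching described above, together with the qualified-image check from the preceding paragraphs, can be summarised in a small decision helper. The string labels are illustrative names, not terms from the patent:

```python
def optimize_step(loss, threshold, image_qualified):
    """Decide the training action for one sample, mirroring the embodiment:
    optimise when the loss is acceptable, otherwise repair or drop the sample.
    The returned string labels are illustrative, not patent terminology.
    """
    if loss <= threshold:
        # Refine the second channel information, then the frame information.
        return "optimize_channels_and_frames"
    if image_qualified:
        # Large loss but a qualified image: adjust its annotation instead.
        return "adjust_annotation"
    # Large loss and an unqualified image: remove it from the training set.
    return "delete_image"
```

With the worked example from the text (threshold 20%, current loss 18%), the helper takes the optimisation branch.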
In the embodiment of the invention, during the deep-learning training stage of the agricultural target identification method, learning proceeds continuously according to the preset loss calculation algorithm, and the second channel information and the frame information are continuously optimized. This further improves the matching degree between the prediction frames generated in the identification process and the actual labeling frames, and thus the identification accuracy of the agricultural target. On the other hand, during this optimization the annotation information can be further adjusted and optimized so that it becomes more accurate and reasonable, or unreasonable screened images can be removed, which reduces the training complexity of the agricultural target identification method and improves the operation accuracy and efficiency.
Referring to fig. 5, in an embodiment of the present invention, the screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame includes:
S4021) screening the candidate prediction frames based on a non-maximum suppression algorithm to obtain first processed prediction frames;
S4022) screening the first processed prediction frames based on a score threshold to obtain second processed prediction frames;
S4023) screening the second processed prediction frames based on a minimum size threshold to obtain third processed prediction frames;
S4024) judging whether overlapping third processed prediction frames with an overlap area larger than the maximum overlap threshold exist;
S40241) if so, acquiring the target score information of the overlapping third processed prediction frames, deleting the overlapping third processed prediction frame with the smaller target score information, and obtaining at least one screened prediction frame;
S40242) if not, taking the third processed prediction frames as the screened prediction frames.
After a plurality of candidate prediction frames for agricultural target identification on the screened image are obtained, the candidate prediction frames are further screened to improve the accuracy of the identification result. In one possible embodiment, the fertilization targets of the current field or farm area are identified, for example corn crops. After the candidate prediction frames for identifying corn crops are obtained, they are first screened with a Non-Maximum Suppression (NMS) algorithm to identify and eliminate redundant prediction frames, yielding the first processed prediction frames. The score threshold information and the target score information of each first processed prediction frame are then acquired, and the first processed prediction frames whose target score information is below the score threshold are deleted to obtain the second processed prediction frames; in the embodiment of the invention, the score threshold information may be calculated based on the overlap between the prediction frames and the ground-truth frames. Next, according to the acquired minimum size threshold information and the size information of each second processed prediction frame, the second processed prediction frames whose size is smaller than the minimum size threshold are deleted, so as to exclude prediction frames that are too small and obtain the third processed prediction frames. Finally, each third processed prediction frame is analyzed against the acquired maximum overlap threshold information to judge whether overlapping third processed prediction frames with an overlap area larger than the maximum overlap threshold exist; if so, the third processed prediction frame with the smaller target score information among the overlapping frames is deleted to avoid repeated operation. This greatly improves the accuracy of identifying and positioning the agricultural target and improves the operation efficiency.
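The four screening stages can be sketched compactly as below. The concrete thresholds (NMS IoU 0.7, score 0.3, minimum side 4, maximum overlap 0.5) are assumed example values, not thresholds disclosed by the embodiment:

```python
import numpy as np

def iou(a, b):
    """IoU between one box a and an array of boxes b, format (x1, y1, x2, y2)."""
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.maximum(x2 - x1, 0) * np.maximum(y2 - y1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter)

def filter_boxes(boxes, scores, nms_iou=0.7, score_thr=0.3,
                 min_size=4.0, max_overlap=0.5):
    """Four-stage screening: NMS, score threshold, minimum size, overlap dedup."""
    # Stage 1: non-maximum suppression on the candidate prediction frames.
    keep = []
    for i in np.argsort(scores)[::-1]:
        if not keep or iou(boxes[i], boxes[keep]).max() < nms_iou:
            keep.append(i)
    boxes, scores = boxes[keep], scores[keep]
    # Stage 2: delete frames whose target score is below the score threshold.
    m = scores >= score_thr
    boxes, scores = boxes[m], scores[m]
    # Stage 3: delete frames smaller than the minimum size threshold.
    wh = boxes[:, 2:] - boxes[:, :2]
    m = (wh >= min_size).all(axis=1)
    boxes, scores = boxes[m], scores[m]
    # Stage 4: among frames overlapping beyond the maximum overlap threshold,
    # keep only the one with the higher target score.
    keep = []
    for i in np.argsort(scores)[::-1]:
        if not keep or iou(boxes[i], boxes[keep]).max() < max_overlap:
            keep.append(i)
    return boxes[keep], scores[keep]
```

Note that stage 4 only has an effect when its overlap threshold is stricter than the NMS threshold of stage 1, as in the defaults chosen here.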
In the embodiment of the invention, the prediction frame generated after agricultural target identification is subjected to multi-level screening, so that the accuracy of agricultural target identification is further improved, the calculation amount in the subsequent calculation process is reduced, the calculation complexity is reduced, and the calculation efficiency is improved.
An object recognition system based on a lightweight network according to an embodiment of the present invention will be described with reference to the drawings.
Referring to fig. 6, based on the same inventive concept, an embodiment of the present invention provides an object recognition system based on a lightweight network, where the object recognition system includes: a library construction unit, configured to establish a target database, the target database including a target image; a preprocessing unit, configured to perform a data preprocessing operation on the target image to obtain corresponding preprocessed data; a lightweight network unit, configured to establish a lightweight network model and process the preprocessed data based on the lightweight network model to obtain corresponding processed data; and an identification unit, configured to acquire frame information and analyze the processed data based on the frame information to obtain target identification information.
In an embodiment of the present invention, the library construction unit includes: the image acquisition module is used for acquiring a target image; the effectiveness screening module is used for carrying out effectiveness screening on the target image to obtain a screened image; the marking module is used for marking the screened image to obtain marking information corresponding to the screened image, and the marking information comprises first target information and second target information; and the library establishing module is used for establishing a target database based on the screened images and the marking information.
In an embodiment of the present invention, the preprocessing unit includes: the first enhancement module is used for executing a first data enhancement operation on the screened image to obtain first enhancement data; the second enhancement module is used for executing second data enhancement operation on the first enhancement data to obtain second enhancement data; and the preprocessing module is used for processing the second enhanced data according to a preset requirement to obtain the preprocessed data.
In this embodiment of the present invention, the processing the second enhancement data according to a preset requirement to obtain the preprocessed data includes: acquiring preset picture format information; performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data; and carrying out normalization processing on the formatted data to obtain the preprocessed data.
In an embodiment of the present invention, the lightweight network unit includes: the basic network module is used for acquiring a basic network architecture, and the basic network architecture is a network architecture based on lightweight design; the expansion module is used for acquiring first channel information, and performing data expansion processing on the preprocessed data based on the first channel information in the basic network architecture to obtain expanded data; the extraction module is used for acquiring second channel information and performing feature extraction processing on the expanded data based on the second channel information in the basic network architecture to obtain extracted data; and the fusion module is used for executing fusion operation on the extracted data in the basic network architecture based on the first channel information to obtain fused data, and taking the fused data as the processed data.
In an embodiment of the present invention, the annotation information further includes filtered size information of the filtered image, and the identifying unit includes: the statistical module is used for generating corresponding size statistical information based on the screened size information after the frame body information is obtained; and the adjusting module is used for adjusting the frame body information based on the basic network architecture and the size statistical information to obtain the adjusted frame body information.
In an embodiment of the present invention, the identification unit includes: the analysis module is used for analyzing the processed data based on the frame body information to obtain a plurality of candidate prediction frames; the screening module is used for screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame; and the identification module is used for extracting the classification information and the positioning information of each screened prediction frame and generating target identification information corresponding to the screened prediction frames based on the classification information and the positioning information.
In an embodiment of the present invention, the identification unit further includes: a loss calculation module, configured to generate loss calculation information based on the frame information, the second channel information, and a preset loss calculation algorithm before analyzing the processed data based on the frame information; and the frame body optimization module is used for optimizing the frame body information based on the loss calculation information to obtain the optimized frame body information.
In this embodiment of the present invention, the optimizing the frame information based on the loss calculation information to obtain the optimized frame information includes: judging whether the loss calculation information is larger than a preset loss threshold value or not; when the loss calculation information is less than or equal to the preset loss threshold: processing the second channel information based on the loss calculation information to obtain processed second channel information; optimizing the frame information based on the processed second channel information to obtain optimized frame information; the identification unit is further configured such that: in the case that the loss calculation information is greater than the preset loss threshold: judging whether the screened image corresponding to the loss calculation information is a qualified image; if the screened image corresponding to the loss calculation information is a qualified image, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information; and if not, deleting the screened image corresponding to the loss calculation information.
In this embodiment of the present invention, the screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame includes: screening the candidate prediction frames based on a non-maximum suppression algorithm to obtain a prediction frame after first processing; screening the first processed prediction frame based on score threshold information to obtain a second processed prediction frame; screening the second processed prediction frame based on the minimum size threshold to obtain a third processed prediction frame; judging whether an overlapped third-processing prediction frame with an overlapped area larger than a maximum overlapping threshold exists or not, if so, acquiring target score information of the overlapped third-processing prediction frame, deleting the overlapped third-processing prediction frame with smaller target score information, and acquiring at least one screened prediction frame; otherwise, the prediction frame after the third processing is used as the prediction frame after screening.
Further, an embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of the present invention.
Further, the embodiment of the invention also provides an agricultural machine, and the agricultural machine comprises the identification system provided by the embodiment of the invention and/or the computer-readable storage medium provided by the embodiment of the invention.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions that cause a single-chip microcomputer, a chip, or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (22)

1. An object identification method based on a lightweight network, characterized in that the identification method comprises:
establishing a target database, wherein the target database comprises a target image;
performing data preprocessing operation on the target image to obtain corresponding preprocessing data;
establishing a lightweight network model, and processing the preprocessed data based on the lightweight network model to obtain corresponding processed data;
and acquiring frame information, and analyzing the processed data based on the frame information to acquire target identification information.
2. The identification method according to claim 1, wherein the establishing a target database comprises:
acquiring a target image;
performing effectiveness screening on the target image to obtain a screened image;
labeling the screened image to obtain labeling information corresponding to the screened image, wherein the labeling information comprises first target information and second target information;
and establishing a target database based on the screened image and the labeling information.
3. The identification method according to claim 2, wherein the performing a data preprocessing operation on the target image to obtain corresponding preprocessed data comprises:
performing a first data enhancement operation on the screened image to obtain first enhancement data;
executing a second data enhancement operation on the first enhancement data to obtain second enhancement data;
and processing the second enhancement data according to a preset requirement to obtain the preprocessed data.
4. The identification method according to claim 3, wherein the processing the second enhancement data according to a preset requirement to obtain the preprocessed data comprises:
acquiring preset picture format information;
performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data;
and carrying out normalization processing on the formatted data to obtain the preprocessed data.
5. The identification method according to claim 2, wherein the establishing a lightweight network model, and processing the pre-processed data based on the lightweight network model to obtain corresponding processed data comprises:
acquiring a basic network architecture, wherein the basic network architecture is a network architecture based on lightweight design;
acquiring first channel information, and performing data expansion processing on the preprocessed data based on the first channel information in the basic network architecture to obtain expanded data;
acquiring second channel information, and performing feature extraction processing on the expanded data based on the second channel information in the basic network architecture to obtain extracted data;
and performing fusion operation on the extracted data based on the first channel information in the basic network architecture to obtain fused data, and taking the fused data as the processed data.
6. The identification method according to claim 5, wherein the labeling information further includes filtered size information of the filtered image, the identification method further comprising:
after the frame body information is obtained, generating corresponding size statistical information based on the screened size information;
and adjusting the frame information based on the basic network architecture and the size statistical information to obtain the adjusted frame information.
7. The identification method according to claim 5 or 6, wherein the analyzing the processed data based on the frame information to obtain target identification information comprises:
analyzing the processed data based on the frame information to obtain a plurality of candidate prediction frames;
screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame;
and extracting the classification information and the positioning information of each screened prediction box, and generating target identification information corresponding to the screened prediction boxes on the basis of the classification information and the positioning information.
8. The identification method according to claim 7, characterized in that the identification method further comprises:
generating loss calculation information based on the frame information, the second channel information and a preset loss calculation algorithm before analyzing the processed data based on the frame information;
and optimizing the frame body information based on the loss calculation information to obtain the optimized frame body information.
9. The identification method according to claim 8, wherein the optimizing the frame information based on the loss calculation information to obtain optimized frame information comprises:
judging whether the loss calculation information is larger than a preset loss threshold value or not;
when the loss calculation information is less than or equal to the preset loss threshold:
processing the second channel information based on the loss calculation information to obtain processed second channel information;
optimizing the frame information based on the processed second channel information to obtain optimized frame information;
the identification method further comprises the following steps:
in the case that the loss calculation information is greater than the preset loss threshold:
judging whether the screened image corresponding to the loss calculation information is a qualified image;
if the screened image corresponding to the loss calculation information is a qualified image, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information;
and if not, deleting the screened image corresponding to the loss calculation information.
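The branching described in claims 8 and 9 amounts to a simple threshold test on the loss. A minimal Python sketch of that flow follows; all names (`apply_loss_feedback`, the record fields) and the threshold value are illustrative assumptions, not fixed by the claims:

```python
# Hypothetical sketch of the loss-threshold decision flow of claims 8-9.
# Field names and the threshold value are illustrative, not from the patent.
LOSS_THRESHOLD = 0.5  # preset loss threshold (assumed value)

def apply_loss_feedback(loss, record):
    """Route one training sample based on its loss value.

    record: dict with hypothetical keys 'image', 'annotation', 'qualified',
            'channel_info', 'frame_info'.
    Returns the updated record, or None if the image is discarded.
    """
    if loss <= LOSS_THRESHOLD:
        # Small loss: refine the second-channel info, then the frame info.
        record['channel_info'] = [c * (1.0 - loss) for c in record['channel_info']]
        record['frame_info'] = [f * (1.0 - loss) for f in record['frame_info']]
        return record
    if record['qualified']:
        # Large loss on a qualified image: the annotation is suspect; flag it
        # for adjustment rather than discarding the image.
        record['annotation'] = {**record['annotation'],
                                'needs_review': True, 'loss': loss}
        return record
    # Large loss on an unqualified image: drop the sample entirely.
    return None
```

In this reading, the loss signal either fine-tunes the detector's frame (anchor) configuration or flows back into data curation, which matches the two branches of claim 9.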
10. The identification method according to claim 7, wherein the screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame comprises:
screening the candidate prediction frames based on a non-maximum suppression algorithm to obtain a prediction frame after first processing;
screening the first processed prediction frame based on a score threshold value to obtain a second processed prediction frame;
screening the second processed prediction frame based on the minimum size threshold to obtain a third processed prediction frame;
judging whether there are overlapping third-processed prediction frames whose overlap area is greater than a maximum overlap threshold; if so, acquiring the target score information of the overlapping third-processed prediction frames, deleting the frame with the smaller target score, and obtaining at least one screened prediction frame; otherwise, taking the third-processed prediction frames as the screened prediction frames.
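The four screening stages of claim 10 can be sketched as follows. The threshold values, the `[x1, y1, x2, y2, score]` box format, and the use of IoU as the overlap measure are assumptions for illustration, not values taken from the claim:

```python
# Illustrative four-stage screening per claim 10; all constants are assumed.
SCORE_THRESHOLD = 0.3   # minimum score to keep a box
MIN_SIZE = 4.0          # minimum box width/height in pixels
NMS_IOU = 0.5           # IoU threshold for non-maximum suppression
MAX_OVERLAP = 0.8       # overlap threshold for the final stage

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2, ...] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def screen_boxes(boxes):
    # Stage 1: non-maximum suppression (greedy, highest score first).
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = [b for i, b in enumerate(boxes)
            if all(iou(b, k) <= NMS_IOU for k in boxes[:i] if k in boxes)]
    kept = []
    for b in boxes:
        if all(iou(b, k) <= NMS_IOU for k in kept):
            kept.append(b)
    # Stage 2: score threshold.
    kept = [b for b in kept if b[4] >= SCORE_THRESHOLD]
    # Stage 3: minimum size threshold.
    kept = [b for b in kept
            if (b[2] - b[0]) >= MIN_SIZE and (b[3] - b[1]) >= MIN_SIZE]
    # Stage 4: among heavily overlapping survivors, keep the higher score
    # (kept is still sorted by descending score, so earlier boxes win).
    final = []
    for b in kept:
        if all(iou(b, f) <= MAX_OVERLAP for f in final):
            final.append(b)
    return final
```

Because the list stays sorted by descending score after stage 1, stage 4 automatically deletes the lower-scoring member of any heavily overlapping pair, as the claim requires.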
11. A target identification system based on a lightweight network, characterized in that the identification system comprises:
the library construction unit is used for establishing a target database, and the target database comprises a target image;
the preprocessing unit is used for executing data preprocessing operation on the target image to obtain corresponding preprocessing data;
the light-weight network unit is used for establishing a light-weight network model, processing the preprocessed data based on the light-weight network model and obtaining corresponding processed data;
and the identification unit is used for acquiring frame information and analyzing the processed data based on the frame information to acquire target identification information.
12. The identification system according to claim 11, wherein the library construction unit comprises:
the image acquisition module is used for acquiring a target image;
the effectiveness screening module is used for carrying out effectiveness screening on the target image to obtain a screened image;
the marking module is used for marking the screened image to obtain marking information corresponding to the screened image, and the marking information comprises first target information and second target information;
and the library establishing module is used for establishing a target database based on the screened images and the marking information.
13. The identification system of claim 12, wherein the preprocessing unit comprises:
the first enhancement module is used for executing a first data enhancement operation on the screened image to obtain first enhancement data;
the second enhancement module is used for executing second data enhancement operation on the first enhancement data to obtain second enhancement data;
and the preprocessing module is used for processing the second enhanced data according to a preset requirement to obtain the preprocessed data.
14. The identification system of claim 13, wherein the processing the second enhanced data according to the preset requirement to obtain the preprocessed data comprises:
acquiring preset picture format information;
performing format processing on the second enhanced data based on the preset picture format information to obtain formatted data;
and carrying out normalization processing on the formatted data to obtain the preprocessed data.
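The format-then-normalize preprocessing of claim 14 might look like the sketch below. The 320x320 preset size, nearest-neighbour resizing, and scaling to [0, 1] are assumptions; the claim fixes none of these values:

```python
# Minimal sketch of claim 14's preprocessing: resize to a preset picture
# format, then normalize. Target size and scaling are assumed values.
import numpy as np

PRESET_SIZE = (320, 320)  # assumed preset picture format (height, width)

def preprocess(image):
    """image: HxWx3 uint8 array. Returns a PRESET_SIZE float32 array in [0, 1]."""
    h, w = image.shape[:2]
    th, tw = PRESET_SIZE
    # Format processing: nearest-neighbour resize to the preset format
    # (pure NumPy, no external dependencies).
    rows = (np.arange(th) * h // th).clip(0, h - 1)
    cols = (np.arange(tw) * w // tw).clip(0, w - 1)
    formatted = image[rows][:, cols]
    # Normalization: scale uint8 pixel values to the [0, 1] range.
    return formatted.astype(np.float32) / 255.0
```

Normalizing to a fixed range keeps the lightweight network's inputs numerically well-conditioned regardless of camera exposure, which matters on the embedded hardware an agricultural machine typically carries.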
15. The identification system according to claim 12, wherein the lightweight network unit comprises:
the basic network module is used for acquiring a basic network architecture, and the basic network architecture is a network architecture based on lightweight design;
the expansion module is used for acquiring first channel information, and performing data expansion processing on the preprocessed data based on the first channel information in the basic network architecture to obtain expanded data;
the extraction module is used for acquiring second channel information, and performing feature extraction processing on the expanded data based on the second channel information in the basic network architecture to obtain extracted data;
and the fusion module is used for executing fusion operation on the extracted data in the basic network architecture based on the first channel information to obtain fused data, and taking the fused data as the processed data.
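The expand / extract / fuse sequence of claim 15 is reminiscent of an inverted residual block (MobileNetV2-style: 1x1 expansion, per-channel extraction, 1x1 projection). A toy NumPy sketch on a single per-pixel feature vector, with illustrative channel counts that are not taken from the patent:

```python
# Toy sketch of claim 15's expand/extract/fuse pipeline, modelled as an
# inverted-residual-style block. Channel counts and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
C_IN, C_EXPAND = 16, 64   # assumed first-channel-info expansion: 16 -> 64

W_expand = rng.standard_normal((C_EXPAND, C_IN)) * 0.1   # 1x1 expansion conv
W_extract = rng.standard_normal(C_EXPAND) * 0.1          # per-channel (depthwise-style) weights
W_fuse = rng.standard_normal((C_IN, C_EXPAND)) * 0.1     # 1x1 fusion/projection conv

def block(x):
    """x: length-C_IN feature vector for one spatial location."""
    expanded = np.maximum(W_expand @ x, 0.0)          # data expansion + ReLU
    extracted = np.maximum(W_extract * expanded, 0.0) # per-channel feature extraction
    fused = W_fuse @ extracted                        # fuse back to C_IN channels
    return fused + x                                  # residual connection

y = block(rng.standard_normal(C_IN))
```

The design rationale is the usual lightweight one: the expensive spatial work happens per-channel in the expanded space, while cheap 1x1 maps handle channel mixing, keeping parameter count and FLOPs low.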
16. The identification system of claim 15, wherein the annotation information further comprises screened size information of the screened image, the identification unit comprising:
the statistical module is used for generating corresponding size statistical information based on the screened size information after the frame information is obtained;
and the adjusting module is used for adjusting the frame information based on the basic network architecture and the size statistical information to obtain the adjusted frame information.
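One plausible realization of claim 16's size statistics is to cluster the annotated box sizes and use the cluster centres as anchor (frame) sizes, as YOLO-style detectors do. The claim does not name a clustering method, so the simple k-means below is an assumption:

```python
# Hypothetical size-statistics step for claim 16: k-means over annotated
# (width, height) pairs; cluster centres become the adjusted frame sizes.
def anchor_sizes(box_sizes, k=3, iters=20):
    """box_sizes: list of (w, h) tuples. Returns k (w, h) anchor sizes."""
    # Seed centres by sampling the sorted sizes at regular intervals.
    centres = sorted(box_sizes)[::max(1, len(box_sizes) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centres]
        for w, h in box_sizes:
            # Assign each box to the nearest centre (squared distance in w, h).
            i = min(range(len(centres)),
                    key=lambda j: (w - centres[j][0]) ** 2 + (h - centres[j][1]) ** 2)
            groups[i].append((w, h))
        # Move each centre to the mean of its group (keep it if the group is empty).
        centres = [(sum(p[0] for p in grp) / len(grp),
                    sum(p[1] for p in grp) / len(grp)) if grp else c
                   for grp, c in zip(groups, centres)]
    return centres
```

Anchors matched to the dataset's actual object sizes let a small detector localize crops or obstacles accurately without extra network capacity.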
17. The identification system according to claim 15 or 16, characterized in that the identification unit comprises:
the analysis module is used for analyzing the processed data based on the frame body information to obtain a plurality of candidate prediction frames;
the screening module is used for screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame;
and the identification module is used for extracting the classification information and the positioning information of each screened prediction frame and generating target identification information corresponding to the screened prediction frames based on the classification information and the positioning information.
18. The identification system of claim 17, wherein the identification unit further comprises:
a loss calculation module, configured to generate loss calculation information based on the frame information, the second channel information, and a preset loss calculation algorithm before analyzing the processed data based on the frame information;
and the frame optimization module is used for optimizing the frame information based on the loss calculation information to obtain the optimized frame information.
19. The identification system of claim 18, wherein the optimizing the frame information based on the loss calculation information to obtain optimized frame information comprises:
judging whether the loss calculation information is greater than a preset loss threshold;
when the loss calculation information is less than or equal to the preset loss threshold:
processing the second channel information based on the loss calculation information to obtain processed second channel information;
optimizing the frame information based on the processed second channel information to obtain optimized frame information;
the identification system further comprises:
when the loss calculation information is greater than the preset loss threshold:
judging whether the screened image corresponding to the loss calculation information is a qualified image;
if the screened image corresponding to the loss calculation information is a qualified image, adjusting the annotation information corresponding to the screened image based on the loss calculation information to obtain adjusted annotation information;
and if not, deleting the screened image corresponding to the loss calculation information.
20. The identification system according to claim 17, wherein the screening the candidate prediction frames according to a preset screening rule to obtain at least one screened prediction frame comprises:
screening the candidate prediction frames based on a non-maximum suppression algorithm to obtain a prediction frame after first processing;
screening the first processed prediction frame based on a score threshold to obtain a second processed prediction frame;
screening the second processed prediction frame based on the minimum size threshold to obtain a third processed prediction frame;
judging whether there are overlapping third-processed prediction frames whose overlap area is greater than a maximum overlap threshold; if so, acquiring the target score information of the overlapping third-processed prediction frames, deleting the frame with the smaller target score, and obtaining at least one screened prediction frame; otherwise, taking the third-processed prediction frames as the screened prediction frames.
21. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 10.
22. An agricultural machine comprising an identification system according to any one of claims 11 to 20 and/or a computer-readable storage medium according to claim 21.
CN202010797595.3A 2020-08-10 2020-08-10 Target identification method and system based on lightweight network and agricultural machine Pending CN114078228A (en)

Publications (1)

Publication Number Publication Date
CN114078228A true CN114078228A (en) 2022-02-22

Family

ID=80279923

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115292334A (en) * 2022-10-10 2022-11-04 江西电信信息产业有限公司 Intelligent planting method and system based on vision, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110148120B (en) Intelligent disease identification method and system based on CNN and transfer learning
da Silva et al. Estimating soybean leaf defoliation using convolutional neural networks and synthetic images
AU2015265625B2 (en) Methods, systems, and devices relating to real-time object identification
CN108921105B (en) Method and device for identifying target number and computer readable storage medium
CN111400536B (en) Low-cost tomato leaf disease identification method based on lightweight deep neural network
KR102526846B1 (en) Fruit tree disease Classification System AND METHOD Using Generative Adversarial Networks
CN115512238A (en) Method and device for determining damaged area, storage medium and electronic device
CN113627248A (en) Method, system, lawn mower and storage medium for automatically selecting recognition model
CN114972208A (en) YOLOv 4-based lightweight wheat scab detection method
CN114078228A (en) Target identification method and system based on lightweight network and agricultural machine
CN114898359A (en) Litchi pest and disease detection method based on improved EfficientDet
CN117173400B (en) Low-carbon treatment scheme recommendation method and system for litchi insect pest
KR102393265B1 (en) System for detecting pests of shiitake mushrooms
CN113221913A (en) Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion
CN117274674A (en) Target application method, electronic device, storage medium and system
Liu et al. “Is this blueberry ripe?”: a blueberry ripeness detection algorithm for use on picking robots
CN116071653A (en) Automatic extraction method for multi-stage branch structure of tree based on natural image
Chiatti et al. Surgical fine-tuning for Grape Bunch Segmentation under Visual Domain Shifts
CN113673340B (en) Pest type image identification method and system
CN114782969A (en) Image table data extraction method and device based on generation countermeasure network
CN114972264A (en) Method and device for identifying mung bean leaf spot based on MS-PLNet model
CN115272862A (en) Audio-visual cooperation based winged insect tracking and identifying method and device
CN113344009A (en) Light and small network self-adaptive tomato disease feature extraction method
CN114663652A (en) Image processing method, image processing apparatus, management system, electronic device, and storage medium
Chang et al. Improved deep learning-based approach for real-time plant species recognition on the farm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination