CN117079270A - Picking method and device based on intelligent machine vision recognition and positioning of mushrooms - Google Patents

Picking method and device based on intelligent machine vision recognition and positioning of mushrooms

Info

Publication number
CN117079270A
Authority
CN
China
Prior art keywords
mushroom
picking
mushrooms
recognition
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311075825.5A
Other languages
Chinese (zh)
Inventor
张超
张海恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Golden Mushroom Nanjing Intelligent Technology Co ltd
Original Assignee
Golden Mushroom Nanjing Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Golden Mushroom Nanjing Intelligent Technology Co ltd filed Critical Golden Mushroom Nanjing Intelligent Technology Co ltd
Priority to CN202311075825.5A
Publication of CN117079270A
Legal status: Pending (current)

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 20/00 Scenes; Scene-specific elements
            • G06V 20/60 Type of objects
              • G06V 20/68 Food, e.g. fruit or vegetables
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
              • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
              • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/0464 Convolutional networks [CNN, ConvNet]
                • G06N 3/048 Activation functions
              • G06N 3/08 Learning methods
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a picking method and device based on intelligent machine vision recognition and positioning of mushrooms, belonging to the technical field of mushroom picking. The method comprises the following steps: acquiring image data of a plurality of picking areas in a target mushroom bed; processing the image data with a trained mushroom recognition model to obtain a mushroom recognition result for each picking area; and determining the size and spatial position of each mushroom in the picking area from the recognition result, the intrinsic and extrinsic parameters of the depth camera, and the corrected depth map, thereby determining a mushroom picking strategy. The device comprises an image acquisition module, a mushroom recognition module, a mushroom positioning module and a path planning module. With the application, the timeliness of mushroom picking can be ensured, and the problems of mis-grasping and off-center grasping that occur when existing automatic mushroom picking equipment grasps mushrooms on a mushroom bed can be solved.

Description

Picking method and device based on intelligent machine vision recognition and positioning of mushrooms
Technical Field
The application belongs to the technical field of mushroom picking, and particularly relates to a method and device for picking mushrooms by means of intelligent machine vision recognition and positioning.
Background
Mushrooms are popular with consumers because they are delicious, nutritious and moderately priced. In mushroom cultivation and production, harvesting is currently the most labor-intensive and costly link. If mushrooms are not picked in time, problems such as cap opening, overgrown fruiting bodies and overly dense clusters arise, which affect the growth of the next flush and cause serious economic loss.
Mushroom picking currently depends entirely on manual labor. Workers do not pick in a uniform way, and mushrooms are damaged and contaminated during picking, which affects fruiting quality to some extent. At the same time, the labor force is shrinking, recruitment is difficult, the working environment inside the mushroom house is poor, and labor costs keep rising. Because of the constraints of the mushroom growth cycle, harvesting often has to run around the clock; the heavy harvesting workload then degrades fruiting efficiency and quality and leads to waste and damage.
Therefore, there is a need for an efficient mushroom picking method, suitable for automatic mushroom picking equipment, that can intelligently identify and locate the mushrooms on a mushroom bed that are ready for picking and then pick them.
Disclosure of Invention
The application provides a picking method and device based on intelligent machine vision recognition and positioning of mushrooms, which address two problems: in existing manual picking the optimal picking time cannot be reliably judged, and existing automatic picking equipment mis-grasps mushrooms on the mushroom bed. Before a picking task starts, an image of the mushroom bed is acquired, the mushrooms on the bed are identified and positioned by combining artificial intelligence with a depth camera, and the picking task is then planned in detail according to the size and position of the mushrooms, achieving efficient, high-quality picking.
In order to achieve the above purpose, the present application proposes the following technical scheme:
a picking method based on intelligent machine vision recognition and positioning mushrooms is characterized by comprising the following steps:
acquiring image data of a mushroom bed in a target picking area;
identifying mushrooms in the mushroom bed image data with a trained mushroom recognition model to obtain a mushroom recognition result for each picking area;
determining the size and spatial position of the mushrooms based on the recognition result for each picking area, the intrinsic and extrinsic parameters of the depth camera, and the corrected depth map;
determining a picking strategy for each picking area based on the size and spatial position of the mushrooms in that area;
wherein the mushroom recognition model is trained as follows:
constructing a mushroom dataset, which includes:
acquiring a plurality of mushroom group images under different angles, different densities and different brightness conditions;
applying random scaling, cropping, flipping and color transformation to the images, so that every picture in the dataset ends up with the same aspect ratio;
annotating each mushroom group image with bounding boxes, so that each image carries a plurality of 2D mushroom boxes, adjacent boxes being allowed to overlap, and recording the class labels and the pixel coordinates of the 2D boxes for subsequent training;
adding a plurality of mushroom-free images to the mushroom dataset as negative samples;
constructing a mushroom recognition model based on a neural network structure;
dividing the mushroom dataset into a training set and a validation set;
training the mushroom recognition model on the training set obtained from the split, validating it on the validation set, and stopping training when the accuracy on the validation set exceeds a preset value to obtain the trained mushroom recognition model; the mushroom recognition model processes input sample image data and outputs the position of a detection box and a corresponding confidence, the detection box indicating where a predicted mushroom lies in the image and the confidence indicating the probability that a mushroom is present; the detection boxes and confidences output by the trained model for sample image data match the annotated boxes and confidences of the corresponding samples;
and mapping the position of the detection box in the sample image onto the corrected depth map of the depth camera to obtain the depth at that position, and then computing coordinates from this depth, the intrinsic parameters of the depth camera, and the extrinsic parameters given by its mounting position and angle, to obtain the three-dimensional position of the mushroom inside the detection box in the physical world.
Further, the mushroom dataset is split into training and validation sets at a sample ratio of 7:3.
Further, the neural-network-based mushroom recognition model comprises one input layer, one output layer and 7 hidden layers, and uses ReLU as the activation function, a convolutional structure as the image feature extraction module, and cross entropy as the loss function.
In a second aspect, the application provides a picking device based on intelligent machine vision recognition and positioning of mushrooms, characterized by comprising the following modules:
an image acquisition module, used to acquire the mushroom bed image data of the picking areas;
a mushroom recognition module, which processes the mushroom bed image data with the trained mushroom recognition model to obtain the mushroom detection boxes for each picking area on the image;
a mushroom positioning module, which, based on the detection boxes for each picking area, computes coordinates from the intrinsic parameters of the depth camera and the extrinsic parameters given by its mounting position and angle, so as to determine the size and spatial position of each mushroom suitable for picking;
and a path planning module, which determines a mushroom picking strategy and path plan for the picking area based on the size and spatial position of each mushroom suitable for picking.
Compared with the prior art, the application has the beneficial effects that:
(1) Image data from a plurality of picking areas in a target mushroom bed are processed by the mushroom recognition model, which yields recognition and positioning results for each picking area; the positions of the mushrooms that meet the picking requirements are then determined from these results, giving a complete picking strategy for the mushroom bed. In this way the growth state of the picking areas is monitored in real time and a matching picking strategy is drawn up, so the optimal picking time is no longer missed because it was judged from experience or from occasional local manual observation, and the timeliness of mushroom picking is ensured.
(2) When the mushroom dataset is constructed, the mushroom group images undergo random scaling, cropping, flipping and color transformation, which enlarges the range of mushroom states that may appear on the mushroom bed, improves the robustness and accuracy of the mushroom recognition model in use, and solves the mis-grasping problem of existing automatic mushroom picking equipment.
Drawings
Further details, features and advantages of the application are disclosed in the following description of exemplary embodiments with reference to the following drawings, in which:
FIG. 1 illustrates a flow chart of a mushroom picking method provided in accordance with an exemplary embodiment of the present application;
FIG. 2 illustrates a flowchart of a training method for a mushroom recognition model provided in accordance with an exemplary embodiment of the present application;
FIG. 3 illustrates a flowchart of a method of constructing a mushroom dataset provided in accordance with an exemplary embodiment of the present application;
FIG. 4 shows a schematic block diagram of a mushroom picking device according to an exemplary embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings; the embodiments described are only some, not all, of the embodiments of the application. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the application, its use or its applications. The drawings and embodiments of the application are for illustration only and are not intended to limit the scope of the application.
The method will be described with reference to a flowchart of the mushroom picking method shown in fig. 1. As shown in fig. 1, the method may include the following steps 101-104.
Step 101, acquiring mushroom bed image data of a picking area.
In one possible embodiment, a plurality of image acquisition devices may be disposed over the target mushroom bed, each acquiring image data of the mushrooms in its own picking area. Each image acquisition device may acquire image data of its picking area and upload it to the system that determines the mushroom picking strategy. The system can store the received image data and associate it with the information of the corresponding picking area.
Alternatively, once the target picking time is reached, image data of the plurality of picking areas in the target mushroom bed may be acquired periodically. As an example, each flush of mushrooms grows over a cycle of about 6 days: day 1 is pre-flush, day 2 is bud thinning (combing), day 3 is the small harvest, day 4 is the large harvest, day 5 is the half-cleared bed and day 6 is the cleared bed. Different stages of the cycle correspond to different mushroom densities and bed states; during the harvest stages one image can be acquired for detection every hour, while the other periods need no detection, which saves computing resources. The specific target picking time and image capture period are not limited in this embodiment; for example, the target picking time may be set to the morning and the capture period to 15 minutes.
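Purely as an illustrative sketch of such a capture schedule (the camera interface with a read() method, the harvest-stage day numbers and the hourly period below are assumptions used for illustration, not details fixed by the application), the periodic acquisition could look roughly like this:

```python
import time
from datetime import datetime

HARVEST_STAGE_DAYS = {3, 4}   # assumed: the small-harvest and large-harvest days of the ~6-day flush

def run_capture_schedule(cameras, current_flush_day, is_target_picking_time, period_s=3600):
    """Periodically grab one frame per picking area, but only during harvest-stage days
    and the configured picking time window, so computing resources are not wasted.

    cameras: dict mapping picking-area id -> camera object with a read() method (hypothetical API).
    """
    while True:
        if current_flush_day() in HARVEST_STAGE_DAYS and is_target_picking_time(datetime.now()):
            frames = {area_id: cam.read() for area_id, cam in cameras.items()}
            # ...hand `frames` to the recognition pipeline (steps 102-104)...
        time.sleep(period_s)
```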
In order to ensure timeliness of mushroom picking, the image data used in the subsequent processing refers to real-time images.
Step 102, processing the mushroom bed image data with the trained mushroom recognition model to obtain the mushroom detection boxes for each picking area on the image.
Once mushrooms in a picking area have emerged, the image acquisition equipment can capture a mushroom bed image of the growing mushrooms; after this image is processed by the mushroom recognition model, the detection boxes and confidences for the picking area are obtained. Then, as in step 103, coordinates are computed for the detection boxes of each picking area from the intrinsic parameters of the depth camera and the extrinsic parameters given by its mounting position and angle, so as to determine the size and spatial position of each mushroom suitable for picking, and, as in step 104, a mushroom picking strategy and path plan for the picking area are determined.
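A minimal, non-authoritative sketch of this inference step is shown below; it assumes a PyTorch-style detector whose forward pass returns box coordinates and scores (the output convention and the 0.5 score threshold are illustrative assumptions, not details given in the application):

```python
import torch

def detect_mushrooms(model, image_tensor, score_threshold=0.5):
    """Run the trained recognition model on one mushroom-bed image.

    image_tensor: float tensor of shape (3, H, W), values in [0, 1].
    Returns a list of (box, confidence) pairs, box = (x1, y1, x2, y2) in pixels.
    """
    model.eval()
    with torch.no_grad():
        boxes, scores = model(image_tensor.unsqueeze(0))   # assumed output convention
    keep = scores > score_threshold
    return list(zip(boxes[keep].tolist(), scores[keep].tolist()))
```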
In one possible embodiment, the picking order of the target picking areas can be determined from the confidences: the higher the confidence, the earlier the pick. The picking path is then determined by combining this order with the position of each target picking area in the mushroom bed.
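As one possible way to realize this ordering (a sketch only; the 0.05 confidence band and the greedy nearest-neighbour walk are illustrative choices, not prescribed by the application):

```python
import math

def plan_picking_path(area_detections, start_xy=(0.0, 0.0)):
    """area_detections: list of dicts {"area": id, "xy": (x, y) position in the bed, "conf": float}.

    Picking order follows confidence (higher confidence is picked earlier); among areas
    with nearly equal confidence the nearer one is visited first, which keeps the path
    short while respecting the confidence-based priority.
    """
    current = start_xy
    ordered = []
    remaining = list(area_detections)
    while remaining:
        best_conf = max(d["conf"] for d in remaining)
        tier = [d for d in remaining if best_conf - d["conf"] < 0.05]   # near-equal confidence
        nxt = min(tier, key=lambda d: math.dist(current, d["xy"]))      # nearest of that tier
        ordered.append(nxt)
        current = nxt["xy"]
        remaining.remove(nxt)
    return ordered
```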
In order that the mushroom recognition model may perform the downstream tasks described above, the mushroom recognition model may be trained in advance. Referring to the training method flowchart of the mushroom recognition model shown in fig. 2, the method includes the following steps 201-203.
Step 201, a mushroom dataset is constructed.
Specifically, referring to the flowchart of the method for constructing the mushroom dataset shown in fig. 3, the processing of step 201 described above may be as follows steps 301-304.
Step 301, obtaining a plurality of mushroom group images under different angles, different densities and different brightness conditions.
Step 302, applying random scaling, cropping, flipping and color transformation to the images, so that every picture in the dataset ends up with the same aspect ratio.
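A sketch of such an augmentation pipeline using torchvision is given below; the 512x512 output size and the jitter strengths are illustrative assumptions, and the fixed-size random crop is what gives every picture the same aspect ratio:

```python
from torchvision import transforms

# Random scaling + cropping to a fixed square, flipping and colour jitter.
augment = transforms.Compose([
    transforms.RandomResizedCrop(size=512, scale=(0.6, 1.0)),   # random scale and crop, fixed 512x512 output
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
])
# augmented = augment(pil_image)   # pil_image: one mushroom group photo as a PIL.Image
```

When the 2D annotation boxes of step 303 are used together with these images, the same geometric transforms (scaling, cropping, flipping) must also be applied to the box coordinates.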
Step 303, annotating each mushroom group image with bounding boxes to obtain a plurality of 2D mushroom boxes per image, adjacent boxes being allowed to overlap, and recording the class labels and the pixel coordinates of the 2D boxes for subsequent training.
Step 304, adding several images without mushrooms to the mushroom dataset as negative samples.
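The records produced by steps 303 and 304 could be stored, for example, in the following shape; the field names and file names are hypothetical, since the application does not prescribe an annotation format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MushroomAnnotation:
    box: Tuple[int, int, int, int]   # 2D box in pixels: (x1, y1, x2, y2); adjacent boxes may overlap
    label: str                       # classification label from the annotation information

@dataclass
class MushroomSample:
    image_path: str
    annotations: List[MushroomAnnotation] = field(default_factory=list)  # empty list = mushroom-free negative sample

dataset = [
    MushroomSample("bed_001.jpg", [MushroomAnnotation((120, 80, 210, 176), "mushroom")]),
    MushroomSample("empty_bed_017.jpg"),   # negative sample added per step 304
]
```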
Step 202, constructing a mushroom recognition model based on a neural network structure.
Optionally, the neural-network-based mushroom recognition model may comprise one input layer, one output layer and 7 hidden layers, using ReLU as the activation function, a convolutional structure as the image feature extraction module, and cross entropy as the loss function.
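A minimal PyTorch sketch consistent with that description is shown below; the channel widths, kernel sizes and number of classes are illustrative assumptions, and only the overall shape (one input layer, 7 hidden layers, one output layer, ReLU activations, convolutional feature extraction, cross-entropy loss) follows the text:

```python
import torch.nn as nn

class MushroomNet(nn.Module):
    """One input conv layer, 7 hidden layers (6 conv + 1 fully connected), one output layer,
    ReLU activations throughout; trained with cross-entropy loss. Widths are illustrative."""
    def __init__(self, num_classes: int = 2):            # e.g. mushroom / background
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU())
        self.features = nn.Sequential(                    # input layer + 6 hidden conv layers
            block(3, 16), block(16, 32), block(32, 64),
            block(64, 64), block(64, 128), block(128, 128), block(128, 128),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.hidden_fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # 7th hidden layer
        self.out = nn.Linear(64, num_classes)                           # output layer

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.out(self.hidden_fc(x))

criterion = nn.CrossEntropyLoss()   # cross entropy as the loss function
```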
Step 203, training the mushroom recognition model on the training set obtained by splitting the mushroom dataset, validating it on the validation set, and stopping training when the accuracy on the validation set exceeds a preset value, which yields the trained mushroom recognition model.
Optionally, the mushroom dataset may be split into training and validation sets at a sample ratio of 7:3.
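The 7:3 split could be performed, for instance, as follows (the fixed random seed is an added assumption for reproducibility):

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=42):
    """Shuffle the mushroom dataset and split it 7:3 into training and validation sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# train_set, val_set = split_dataset(dataset)
```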
After processing the input sample image data, the mushroom recognition model outputs the position of each detection box together with a corresponding confidence; the detection box indicates where a predicted mushroom lies, the confidence indicates the probability that a mushroom is present, and together they describe the recognized mushrooms.
Before training has converged, the detection boxes output by the mushroom recognition model deviate strongly from the corresponding annotated boxes and the confidences are low.
As training proceeds, the detection boxes output by the model gradually approach the corresponding annotated boxes and the confidences gradually rise.
After training, the detection boxes and confidences output by the model essentially match the annotated boxes, so the model can be used to recognize whether the image data of a picking area contains mushrooms.
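A hedged sketch of the training procedure of step 203, stopping once the validation accuracy exceeds a preset value, might look as follows; the optimizer, learning rate and 0.95 threshold are illustrative assumptions:

```python
import torch

def train_until_accurate(model, train_loader, val_loader, criterion,
                         target_accuracy=0.95, max_epochs=100, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # accuracy on the held-out 30 % validation split
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        accuracy = correct / max(total, 1)
        if accuracy > target_accuracy:      # stop when accuracy exceeds the preset value
            break
    return model
```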
The embodiment of the application provides a mushroom picking device which is used for realizing the mushroom picking method. As shown in the schematic block diagram of fig. 4, the mushroom picking apparatus 400 includes: an image acquisition module 401, a mushroom identification module 402, a mushroom positioning module 403 and a path planning module 404.
The image acquisition module 401 is used for acquiring the image data of the mushroom bed in the picking area.
The mushroom recognition module 402 processes the mushroom bed image data with the trained mushroom recognition model to obtain the mushroom detection boxes for each picking area on the image.
The mushroom positioning module 403 computes coordinates for the detection boxes of each picking area from the intrinsic parameters of the depth camera and the extrinsic parameters given by its mounting position and angle, thereby determining the size and spatial position of each mushroom suitable for picking.
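A sketch of the pinhole back-projection such a module performs is given below; the layout of the intrinsic matrix K and the extrinsics R, t follows the usual camera model, while the cap-size estimate from the box width is an illustrative approach rather than a formula stated in the application:

```python
import numpy as np

def box_to_world_position(box, depth_map, K, R, t):
    """Back-project the centre of a detection box to a 3D point in the world/bed frame.

    box:       (x1, y1, x2, y2) detection box in pixels
    depth_map: corrected depth map aligned with the colour image, in metres
    K:         3x3 intrinsic matrix of the depth camera
    R, t:      extrinsics (rotation, translation) from camera frame to world frame,
               derived from the mounting position and angle
    """
    u = int((box[0] + box[2]) / 2)
    v = int((box[1] + box[3]) / 2)
    z = float(depth_map[v, u])                                 # depth at the box centre
    xyz_cam = z * np.linalg.inv(K) @ np.array([u, v, 1.0])     # pixel -> camera frame
    return R @ xyz_cam + t                                     # camera frame -> world frame

def box_to_cap_diameter(box, depth_map, K):
    """Estimate the mushroom cap size (metres) from the box width and its depth."""
    z = float(depth_map[int((box[1] + box[3]) / 2), int((box[0] + box[2]) / 2)])
    fx = K[0, 0]
    return (box[2] - box[0]) * z / fx
```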
The path planning module 404 determines a mushroom picking strategy and path plan for the picking area based on the size and spatial location of each mushroom that is suitable for picking.
In the embodiment of the application, the image data of a plurality of picking areas in a target mushroom bed are processed by the mushroom recognition model, the mushrooms in each picking area are recognized, and the size and spatial position of each mushroom in each picking area are determined from the recognition result, the intrinsic and extrinsic parameters of the depth camera, and the corrected depth map, so that a mushroom picking strategy is determined. Through this processing, the size and position of the mushrooms in the picking areas can be monitored in real time and a matching picking strategy can be drawn up, which ensures the timeliness and accuracy of mushroom picking.
An embodiment of the application also provides a computer device comprising one or more processors and a storage unit storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the mushroom picking method of the embodiments of the application.
An embodiment of the application also provides a computer-readable storage medium storing one or more programs; when the one or more programs are executed by one or more processors, they cause the one or more processors to implement the mushroom picking method of the embodiments of the application.
It should be appreciated by those skilled in the art that embodiments of the application may be provided as a method, system, computer device, or computer-readable storage medium. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of preferred embodiments of the application is not intended to limit the application in any way; all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the application are intended to be covered by it.

Claims (4)

1. A picking method based on intelligent machine vision recognition and positioning of mushrooms, characterized by comprising the following steps:
acquiring image data of a mushroom bed in a target picking area;
identifying mushrooms in the mushroom bed image data with a trained mushroom recognition model to obtain a mushroom recognition result for each picking area;
determining the size and spatial position of the mushrooms based on the recognition result for each picking area, the intrinsic and extrinsic parameters of the depth camera, and the corrected depth map;
determining a picking strategy for each picking area based on the size and spatial position of the mushrooms in that area;
wherein the mushroom recognition model is trained as follows:
constructing a mushroom dataset, which includes:
acquiring a plurality of mushroom group images under different angles, different densities and different brightness conditions;
applying random scaling, cropping, flipping and color transformation to the images, so that every picture in the dataset ends up with the same aspect ratio;
annotating each mushroom group image with bounding boxes, so that each image carries a plurality of 2D mushroom boxes, adjacent boxes being allowed to overlap, and recording the class labels and the pixel coordinates of the 2D boxes for subsequent training;
adding a plurality of mushroom-free images to the mushroom dataset as negative samples;
constructing a mushroom recognition model based on a neural network structure;
dividing the mushroom dataset into a training set and a validation set;
training the mushroom recognition model on the training set obtained from the split, validating it on the validation set, and stopping training when the accuracy on the validation set exceeds a preset value to obtain the trained mushroom recognition model; the mushroom recognition model processes input sample image data and outputs the position of a detection box and a corresponding confidence, the detection box indicating where a predicted mushroom lies in the image and the confidence indicating the probability that a mushroom is present; the detection boxes and confidences output by the trained model for sample image data match the annotated boxes and confidences of the corresponding samples;
and mapping the position of the detection box in the sample image onto the corrected depth map of the depth camera to obtain the depth at that position, and then computing coordinates from this depth, the intrinsic parameters of the depth camera, and the extrinsic parameters given by its mounting position and angle, to obtain the three-dimensional position of the mushroom inside the detection box in the physical world.
2. The picking method based on intelligent machine vision recognition and positioning of mushrooms according to claim 1, wherein the mushroom dataset is split into training and validation sets at a sample ratio of 7:3.
3. The picking method based on intelligent machine vision recognition and positioning of mushrooms according to claim 1, wherein the neural-network-based mushroom recognition model comprises one input layer, one output layer and 7 hidden layers, and uses ReLU as the activation function, a convolutional structure as the image feature extraction module, and cross entropy as the loss function.
4. A picking device based on intelligent machine vision recognition and positioning of mushrooms, comprising an image acquisition module, a mushroom recognition module, a mushroom positioning module and a path planning module, and used to implement the picking method based on intelligent machine vision recognition and positioning of mushrooms according to any one of claims 1-3.
CN202311075825.5A 2023-08-25 2023-08-25 Picking method and device based on intelligent machine vision recognition and positioning mushrooms Pending CN117079270A (en)

Priority Applications (1)

Application Number: CN202311075825.5A (published as CN117079270A)
Title: Picking method and device based on intelligent machine vision recognition and positioning of mushrooms

Applications Claiming Priority (1)

Application Number: CN202311075825.5A (published as CN117079270A)
Title: Picking method and device based on intelligent machine vision recognition and positioning of mushrooms

Publications (1)

Publication Number: CN117079270A
Publication Date: 2023-11-17

Family

ID=88707751

Family Applications (1)

Application Number: CN202311075825.5A (Pending; published as CN117079270A)
Priority Date: 2023-08-25    Filing Date: 2023-08-25
Title: Picking method and device based on intelligent machine vision recognition and positioning of mushrooms

Country Status (1)

Country Link
CN (1) CN117079270A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117426255A * 2023-12-07 2024-01-23 Nanjing Agricultural University (南京农业大学) Automatic Agaricus bisporus picking system and method based on vision and force sense feedback
CN117426255B * 2023-12-07 2024-04-12 Nanjing Agricultural University (南京农业大学) Automatic Agaricus bisporus picking system and method based on vision and force sense feedback

Similar Documents

Publication Title
CN109863874B (en) Fruit and vegetable picking method, picking device and storage medium based on machine vision
WO2020007363A1 (en) Method and apparatus for identifying number of targets, and computer-readable storage medium
CN117079270A (en) Picking method and device based on intelligent machine vision recognition and positioning mushrooms
CN111476149A (en) Plant cultivation control method and system
CN110610506B (en) Image processing technology-based agaricus blazei murill fruiting body growth parameter detection method
CN111727457A (en) Cotton crop row detection method and device based on computer vision and storage medium
CN112990103A (en) String mining secondary positioning method based on machine vision
CN114581816A (en) Real-time detection and counting method for solanaceous vegetables and fruits in plant factory
CN110288623A (en) The data compression method of unmanned plane marine cage culture inspection image
CN116051996A (en) Two-stage crop growth prediction method based on multi-mode information
WO2024045749A1 (en) Method for spatio-temporal prediction of growth state of mushrooms
CN115661650A (en) Farm management system based on data monitoring of Internet of things
CN114399664A (en) Intelligent monitoring and control method and system for growth state of plant seedlings
CN114863311A (en) Automatic tracking method and system for inspection target of transformer substation robot
CN113344035A (en) Banana phenological period monitoring module and planting system
CN111369497B (en) Walking type tree fruit continuous counting method and device
CN117197595A (en) Fruit tree growth period identification method, device and management platform based on edge calculation
CN112381028A (en) Target feature detection method and device
CN107064159A (en) A kind of apparatus and system that growth tendency is judged according to the detection of plant yellow leaf
CN112329697B (en) Improved YOLOv 3-based on-tree fruit identification method
CN115861768A (en) Honeysuckle target detection and picking point positioning method based on improved YOLOv5
CN114937078A (en) Automatic weeding method, device and storage medium
CN105574853A (en) Method and system for calculating number of wheat grains based on image identification
Story et al. Automated machine vision guided plant monitoring system for greenhouse crop diagnostics
CN113361520A (en) Transmission line equipment defect detection method based on sample offset network

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination