WO2023050783A1 - Weeding robot, and planning method, device and medium for weeding path thereof - Google Patents

Weeding robot, and planning method, device and medium for weeding path thereof

Info

Publication number
WO2023050783A1
WO2023050783A1 (application PCT/CN2022/088072, CN2022088072W)
Authority
WO
WIPO (PCT)
Prior art keywords
weed
weeding
camera
image
target feature
Prior art date
Application number
PCT/CN2022/088072
Other languages
English (en)
French (fr)
Inventor
崔龙飞
薛新宇
乐飞翔
孙涛
张宋超
陈晨
金永奎
徐阳
孙竹
丁素明
周晴晴
蔡晨
顾伟
孔伟
Original Assignee
农业农村部南京农业机械化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 农业农村部南京农业机械化研究所
Priority to AU2022256171A (AU2022256171B2)
Publication of WO2023050783A1

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D34/00 Mowers; Mowing apparatus of harvesters
    • A01D34/01 Mowers; Mowing apparatus of harvesters characterised by features relating to the type of cutting apparatus
    • A01D34/412 Mowers; Mowing apparatus of harvesters characterised by features relating to the type of cutting apparatus having rotating cutters
    • A01D34/63 Mowers; Mowing apparatus of harvesters characterised by features relating to the type of cutting apparatus having rotating cutters having cutters rotating about a vertical axis
    • A01D34/64 Mowers; Mowing apparatus of harvesters characterised by features relating to the type of cutting apparatus having rotating cutters having cutters rotating about a vertical axis mounted on a vehicle, e.g. a tractor, or drawn by an animal or a vehicle
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D34/00 Mowers; Mowing apparatus of harvesters
    • A01D34/006 Control or measuring arrangements
    • A01D34/008 Control or measuring arrangements for automated or remotely controlled operation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present application relates to the technical field of weeding robots, for example, to a planning method, device and medium for a weeding robot and its weeding path.
  • the present application provides a weeding robot and a planning method, device and medium for its weeding path. Based on tracking within and between cameras, the positions of weeds can be reliably recorded and traced, so as to alleviate the impact of image-processing delay and improve the accuracy of weeding.
  • This application proposes a method for planning a weeding path for a weeding robot, including:
  • an image segmentation model is obtained, wherein the image segmentation model is used to identify and segment weed target features, soil target features and crop target features;
  • based on the weed target features, the weed target features are acquired by means of intra-camera tracking and by means of multi-camera tracking, and the weeding path of the weeding robotic arm of the weeding robot is planned, so that the weeding robotic arm weeds according to the weeding path.
  • This application proposes a planning device for a weeding path of a weeding robot, which is applied to the planning method for a weeding path of a weeding robot as described above.
  • the device includes:
  • a model acquisition module configured to train based on a neural network model to obtain an image segmentation model, wherein the image segmentation model is used to identify and segment weed target features, soil target features and crop target features;
  • the path planning module is configured to, based on the weed target features, acquire the weed target features by means of intra-camera tracking and by means of multi-camera tracking, and plan the weeding path of the weeding robotic arm of the weeding robot, so that the weeding robotic arm performs weeding according to the weeding path.
  • the application also proposes a weeding robot, including the planning device for the weeding path of the weeding robot as described above; also includes:
  • one or more processors;
  • a data storage device configured to store one or more programs, and to store the images collected by each camera, the image templates and the corresponding coordinate information in the three-dimensional scene graph;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method for planning the weeding path of the weeding robot as described above.
  • the present application also proposes a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the path planning method for automatic weeding by a weeding robot as described above is implemented.
  • Fig. 1 is a flow chart of a method for planning a weeding path of a weeding robot proposed in an embodiment of the present application;
  • Fig. 2 is a schematic structural diagram of a weeding robot proposed in the embodiment of the present application.
  • FIG. 3 is a schematic structural view of a weeding robot arm of a weeding robot proposed in an embodiment of the present application;
  • Fig. 4 is a side view of a weeding robot arm of a weeding robot proposed in an embodiment of the present application;
  • Fig. 5 is a planning diagram of a weeding path of a weeding robot proposed in the embodiment of the present application.
  • Fig. 6 is a flow chart of another weeding robot weeding path planning method proposed by the embodiment of the present application.
  • Fig. 7 is a schematic block diagram of a planning device for a weeding path of a weeding robot proposed in an embodiment of the present application;
  • Fig. 8 is a structural block diagram of a weeding robot proposed in the embodiment of the present application.
  • a visual classification method is generally used to track weeds. This visual classification method is usually implemented using a complex convolutional neural network, which introduces a long and indeterminate time delay; as a result, by the time the weeding camera has captured and processed an image, the robot may already have passed the position of the weeding target while the weeding robotic arm has not yet reached the designated position, which greatly reduces weeding accuracy.
  • the target recognition algorithm in the related art only recognizes the weeds and does not track the position changes of the weeds.
  • traditional single-camera target tracking usually uses a motion-based contour tracker; because of their limited field of view, single-camera weed tracking systems often cannot accommodate long weed-detection (image segmentation) delays and robotic-arm actuation delays.
  • meanwhile, as the robot walks, ground undulation makes the chassis bump, which changes the camera's viewing angle and viewpoint. With traditional methods this may cause significant changes in the appearance of objects (distortion, blur); the image changes are especially large for tall weeds, so it is easy to lose track of the target and accidentally injure crops.
  • the embodiment of the present application proposes a method for planning the weeding path of a weeding robot: based on neural network model training, an image segmentation model is obtained, the image segmentation model being used to identify and segment weed target features, soil target features and crop target features; based on the weed target features, the weed target features are acquired by intra-camera tracking and by multi-camera tracking, and the weeding path of the weeding robotic arm of the weeding robot is planned so that the weeding robotic arm weeds according to the weeding path. The weeding robotic arm can thus move to the designated weeding position in advance, which alleviates the impact of image-processing delay and improves the accuracy of weeding.
  • FIG. 1 is a flow chart of a method for planning a weeding path of a weeding robot proposed in an embodiment of the present application.
  • the planning method for the weeding path of the weeding robot includes:
  • an image segmentation model is acquired, and the image segmentation model is used to identify and segment weed target features, soil target features, and crop target features.
  • an image segmentation model is obtained, including:
  • multiple original images are collected through the camera under different parameters, where the parameters include the crop growth stage, light intensity, temperature, humidity, soil type and acquisition time; the original images are classified according to the crop growth stage, and the crops, weeds and soil in each original image are labelled to form label images; the original images and label images of each growth stage are split into a training set, a verification set and a test set; a deep neural network architecture is defined, the training parameters are set based on the training, verification and test sets of each growth stage, and the optimization algorithm is called to train the deep neural network architecture, yielding trained model weight files; based on the model weight files, the image segmentation model for each growth stage of the crop is obtained.
  • the growth stages of crops include the seedling stage, the growth stage and the maturity stage; that is to say, under the corresponding light intensity, temperature, humidity, soil type and acquisition time, multiple original images of crops, weeds and soil at the seedling stage, multiple original images of crops, weeds and soil at the growth stage, and multiple original images of crops, weeds and soil at the maturity stage can be obtained respectively.
  • taking the seedling-stage images as an example, the original images are classified and labelled to form label images (for example, soil is labelled 0, crop 1 and weed 2); the labelling can be done manually with labelling software, segmenting the crops, weeds and soil in each picture and producing an annotated image, i.e. a single-channel grayscale map whose gray values represent the class labels. The seedling-stage label images and original images are then split into a training set, a verification set and a test set, a neural network architecture is defined and trained with a deep learning method, and the image segmentation model for seedling-stage crops is obtained.
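  • As a minimal sketch of the labelling and splitting step above (the directory layout, file naming and 70/15/15 split ratios are illustrative assumptions, not details given in the source), the dataset could be prepared as follows:

      import random
      from pathlib import Path

      # Assumed layout: dataset/seedling/images/*.png and dataset/seedling/labels/*.png,
      # where each label is a single-channel grayscale mask (0 = soil, 1 = crop, 2 = weed).
      def split_dataset(stage_dir, train=0.7, val=0.15, seed=0):
          images = sorted(Path(stage_dir, "images").glob("*.png"))
          pairs = [(img, Path(stage_dir, "labels", img.name)) for img in images]
          random.Random(seed).shuffle(pairs)
          n_train = int(len(pairs) * train)
          n_val = int(len(pairs) * val)
          return (pairs[:n_train],                 # training set
                  pairs[n_train:n_train + n_val],  # verification set
                  pairs[n_train + n_val:])         # test set

      train_set, val_set, test_set = split_dataset("dataset/seedling")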
  • three deep learning architectures, namely fully convolutional networks (FCN), U-NET and Deeplab v3+, are used in combination to train the image segmentation model and obtain the image segmentation model for seedling-stage crops.
  • the configuration parameters of the three deep learning architectures include the learning rate, learning-rate decay, batch size, and the number of graphics processing units (GPUs) to be used. During training, multi-GPU training is started, the accuracy and the Jaccard index are reported periodically, the weight file of the model performing best on the verification set is stored, and the architecture with the highest image-segmentation accuracy for seedling-stage crops is selected.
  • in actual operation, an edge computer with a GPU can be installed on the weeding robot; the edge computer runs Ubuntu and the Robot Operating System (ROS) development environment, and a new ROS node for real-time semantic segmentation of images is created, so that the model deployed on the edge computer can run online and segment crops and weeds in real time.
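  • A minimal sketch of such a ROS node is given below; the topic names, the rospy/cv_bridge usage and the segment() helper are illustrative assumptions rather than the actual implementation described here:

      import rospy
      from sensor_msgs.msg import Image
      from cv_bridge import CvBridge

      bridge = CvBridge()

      def segment(bgr_image):
          # Placeholder for the trained crop/weed/soil segmentation model loaded
          # from the model weight file; should return a single-channel class mask.
          raise NotImplementedError

      def on_image(msg):
          frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
          mask = segment(frame)                      # 0 = soil, 1 = crop, 2 = weed
          pub.publish(bridge.cv2_to_imgmsg(mask, encoding="mono8"))

      rospy.init_node("realtime_segmentation")       # assumed node name
      pub = rospy.Publisher("/segmentation/mask", Image, queue_size=1)
      rospy.Subscriber("/camera1/image_raw", Image, on_image, queue_size=1)
      rospy.spin()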
  • In-camera tracking refers to the identification of weeds, soil and crops in the images captured by the camera itself.
  • multi-camera tracking means that the initial camera identifies the weeds, soil and crops in its captured image based on the image segmentation model, and each subsequent camera tracks the recognition results of the previous camera to identify the weeds, soil and crops in its own image; finally, the weeding path of the weeding robotic arm is planned according to the identified weed features.
  • because each subsequent camera tracks the previous camera's recognition results, the weeding path of the weeding robotic arm can be predicted in advance, so the arm swings precisely to the weeding position; this alleviates the impact of image-processing delay and improves the accuracy of weeding.
  • the weed target feature in this embodiment mainly includes the size and location information of the weed.
  • the first camera, ..., the i-th camera, ..., the N-th camera are arranged at intervals along the direction opposite to the forward direction of the weeding robot, and the first to N-th cameras move together with the weeding robot;
  • based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking and planning the weeding path of the weeding robotic arm of the weeding robot includes: acquiring the first image collected by the first camera, calling the image segmentation model to identify and segment the first image, and generating the first weed target feature and the weed label of the first weed target feature; when the i-th camera is the second camera, obtaining the i-th image collected by the i-th camera, extracting the target features in the i-th image according to the density-based spatial clustering method, matching them with the (i-1)-th weed target feature to obtain the i-th weed target feature and its weed label, and planning the weeding path of the weeding robotic arm accordingly; when the i-th camera is in turn the third camera to the N-th camera, extracting and matching in the same way and correcting the weeding path accordingly; where N ≥ 2, 1 < i ≤ N, and i and N are positive integers.
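  • A minimal sketch of the density-based spatial clustering and matching step, assuming scikit-learn's DBSCAN on the weed pixels of the segmentation mask and nearest-centroid matching (the thresholds are illustrative assumptions):

      import numpy as np
      from sklearn.cluster import DBSCAN

      def extract_targets(mask, weed_value=2, eps=5, min_samples=30):
          """Cluster the weed pixels of a segmentation mask into individual targets."""
          ys, xs = np.nonzero(mask == weed_value)
          pts = np.column_stack([xs, ys])
          if len(pts) == 0:
              return []
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
          return [pts[labels == k].mean(axis=0) for k in set(labels) if k != -1]

      def match_targets(prev_centroids, new_centroids, max_dist=40.0):
          """Associate each previous weed target with the nearest new centroid."""
          matches = {}
          for i, p in enumerate(prev_centroids):
              d = [np.linalg.norm(p - c) for c in new_centroids]
              if d and min(d) < max_dist:
                  matches[i] = int(np.argmin(d))   # carry the weed label forward
          return matches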
  • the leftward arrow in Fig. 2 indicates the direction in which the robot is advancing
  • taking N = 3, as shown in Figs. 2 to 5, the first image collected by the first camera 1 is obtained and the image segmentation model is invoked to identify and segment it (a lightweight target detector can be used to identify the crop type and growth stage, so that the appropriate image segmentation model is called), generating the first weed target feature and its weed label, and the weeding robotic arm 4 of the weeding robot is started; then the second image collected by the second camera 2 is obtained, the target features in the second image are extracted according to the density-based spatial clustering method and matched with the first weed target feature to obtain the second weed target feature corresponding to the first weed target feature and the weed label of the second weed target feature; according to the second weed target feature and its weed label, the weeding path of the weeding robotic arm is planned, by which time the arm 4 has essentially reached the preset position. The third image collected by the third camera 3 is then processed in the same way to obtain the third weed target feature and its weed label, and the weeding path is corrected accordingly, so that when the third camera 3 has moved to the position of the first camera 1 the weeding robotic arm 4 has already been prepared and only fine adjustment is needed before weeding. This avoids the time-delay problem of a single camera with model-based recognition, in which the robot has already passed the weed before the arm reaches the preset position and crops are accidentally injured.
  • the weed target feature can be one or a combination of the outline, color, size, and coordinates of the weed.
  • the sum of the time for the N-th camera to move with the weeding robot to the original position of the first camera (which depends on the walking speed of the weeding robot) and the time for image processing from the first camera to the N-th camera is greater than or equal to the time for the weeding robotic arm of the weeding robot to swing to the position of the weed corresponding to the first weed target feature.
  • the weeding robot arm can be precisely swung to a preset position, so as to achieve more precise weeding.
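  • The timing condition above can be checked with a simple inequality; the sketch below assumes evenly spaced cameras and uses illustrative variable names:

      def timing_ok(camera_spacing_m, robot_speed_mps, per_camera_proc_s, n_cameras, arm_swing_s):
          """True if the Nth camera's travel time to the first camera's original position
          plus the total image-processing time is at least the arm's swing time."""
          travel_time_s = camera_spacing_m * (n_cameras - 1) / robot_speed_mps
          return travel_time_s + per_camera_proc_s * n_cameras >= arm_swing_s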
  • the first camera 1 to the third camera 3 can also be cameras installed on a drone, not limited to being installed on a weeding robot, and image information can be transmitted between the drone and the robot through wireless communication.
  • in the foregoing embodiment, the first camera can be a red-green-blue near-infrared (Red-Green-Blue Near Infrared, RGB+NIR) camera, and the second and third cameras can be ordinary RGB cameras.
  • the first camera, ..., the j-th camera, ..., the N-th camera are arranged at intervals along the direction opposite to the forward direction of the weeding robot, and the first to N-th cameras move together with the weeding robot. Based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking and planning the weeding path of the weeding robotic arm so that the arm weeds along the weeding path includes: acquiring the first image collected by the first camera, calling the image segmentation model to identify and segment the first image, and generating the first weed target feature and the weed label of the first weed target feature; when the j-th camera is the second camera, obtaining the j-th image collected by the j-th camera and, based on a 3D reconstruction method such as simultaneous localization and mapping (SLAM), reconstructing the 3D scene graph of the weeds from the j-th image with the j-th camera, where the j-th camera is a sensor with 3D scene-perception capability, for example a binocular camera, a depth camera or a multi-line lidar sensor.
  • when the j-th camera is a binocular camera, the 3D reconstruction method can be a visual odometry method; when the j-th camera is a lidar sensor, the 3D reconstruction method can be a lidar 3D scene reconstruction method.
  • the (j-1)-th weed target feature and the weed label of the (j-1)-th weed target feature are matched with the weeds in the 3D scene graph, the 3D map of the weeds is separated from the matching result, the position coordinates of the crop's main stalk are extracted, and an image template is formed from the 3D map and the position coordinates; the (j+1)-th image collected by the (j+1)-th camera is acquired, the target features in the (j+1)-th image are extracted according to the density-based spatial clustering method and matched with the image template to obtain the (j+1)-th weed target feature and its weed label; according to the (j+1)-th weed target feature and its weed label, the weeding path of the weeding robotic arm is planned (or, for the third to (N-1)-th cameras, corrected). When the j-th camera is the N-th camera, the j-th image is acquired, its target features are extracted by the density-based spatial clustering method and matched with the (j-1)-th weed target feature to obtain the j-th weed target feature and its weed label, and the weeding path of the weeding robotic arm is corrected accordingly; where N ≥ 2, 1 < j ≤ N, and j and N are positive integers.
  • the three-dimensional scene graph of soil, weeds and crops can be reconstructed according to the images collected by the second camera 2, and then the following steps are performed:
  • when crops or weeds move out of the second camera's line of sight, the edge computer runs a lightweight target detection model (for example YOLO Nano) and verifies the classification result again against the labels accumulated for the image, to prevent crops from being damaged by misclassification.
  • a third camera is arranged above the weeding device, mounted vertically and facing down toward the ground. Once the third camera finds that a new weed/crop object has moved into its field of view, target features of the weed/crop are extracted and matched with the weed/crop features in the previous image template to determine its label.
  • the control computer starts the fill light through the digital switch, and the control computer runs the weed tracking algorithm to track the position changes of the weeds in the camera field of view.
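  • A sketch of how a newly appearing object could inherit its label from the stored image template, assuming a simple nearest-feature comparison (the data structure and distance threshold are illustrative assumptions):

      import numpy as np

      def assign_label(new_feature, template_entries, max_dist=0.10):
          """template_entries: list of (feature_vector, label) pairs stored in the image template.
          Return the label of the closest stored feature, or None if nothing is close enough."""
          best_label, best_dist = None, max_dist
          for feature, label in template_entries:
              d = np.linalg.norm(np.asarray(new_feature, float) - np.asarray(feature, float))
              if d < best_dist:
                  best_label, best_dist = label, d
          return best_label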
  • a phenotype detection module is installed on the upper part of the cutter motor, which is composed of a laser radar, an inertial measurement unit (IMU), and a camera.
  • the target detection algorithm runs on a second computer (computer II) on the weeding robot and detects crop stalks in real time; the multi-line lidar estimates in real time the distance between the weeding blade and the crop's main stalk, so as to avoid damaging the main stalk of the crop.
  • in the foregoing embodiment, the first camera may be an RGB+NIR camera, the second camera may be a binocular camera with an IMU, and the third camera may be an ordinary RGB camera.
  • the distance between the camera and the ground is 80cm, and the ground resolution is about 3 pixels/cm.
  • the weeding robotic arm performs actions according to the planned weeding path. As shown in Figures 3 and 4, the weeding robotic arm 4 of the weeding robot includes a first motor 10, a second motor 13, a third motor 15, a fourth motor 17, a first rotary joint 11, a second rotary joint 19, a large arm 12, a small arm 16, a lead screw and guide rail 14, a cutter 18, an electric cylinder 20, and a phenotype detection module 21.
  • the small arm 16 includes a first connecting rod 16-1, a second connecting rod 16-2 and the electric cylinder 20; the first connecting rod 16-1 and the second connecting rod 16-2 form a parallel linkage, and the electric cylinder 20 adjusts the height of the cutter 18 so that it follows the undulation of the soil surface. One end of the electric cylinder 20 is hinged to the first connecting rod 16-1 and the other end to the second connecting rod 16-2, so that the cutter 18 floats up and down with the four-bar linkage under the drive of the electric cylinder 20.
  • the guide rail 14 is equipped with a linear displacement sensor, and the first rotary joint 11, the second rotary joint 19, the first motor 10, the second motor 13, the third motor 15, and the fourth motor 17 are all equipped with angle sensors.
  • the phenotype detection module 21 is composed of a multi-line laser radar, an IMU, and a camera. The phenotype detection module 21 can store the detected field plant and ground information in the second industrial computer on the robot.
  • the weeding robotic arm 4 of the weeding robot also includes a controller, which collects feedback data from the sensors on the arm, controls the movement of all motors and electric cylinders on the arm, and drives the cutter 18 at the end of the arm to move along the planned trajectory.
  • control flow is as follows:
  • Step 1 After the weeding robot goes to the field, adjust the angle of the first motor 10, rotate the boom 12, and detect whether the second rotating joint 19 is between two rows of crops;
  • Step 2 adjust the second motor 13, and lower the height of the cutter 18;
  • step 3 The weeding robot starts to walk, and the fourth motor 17 starts to rotate;
  • Step 4 The robotic arm controller accepts the planned trajectory;
  • Step 5 Coordinate transformation, converting the path of weeds into the swing target trajectory of the second rotary joint 19;
  • Step 6 Calculate the actual angle of the second rotary joint 19 and the positional deviation of the target track point;
  • Step 7 Deviation is used as the input of the proportional integral differential (Proportion Integral Differential, PID) control algorithm to calculate the control command value of the third motor 15;
  • step 8 The controller outputs a command signal, and the third motor 15 drives the small arm 16 of the weeding mechanical arm 4 to swing between two rows of crops;
  • Step 9 The swing trajectory tracks the planned path;
  • Step 10 The angle sensor at the second rotary joint 19 feeds back the swing angle of the small arm 16 in real time;
  • Step 11 When the phenotype detection module 21 detects undulating changes in ground elevation, the controller sends a command to the electric cylinder 20 to adjust its displacement, driving the cutter 18 to float up and down with the linkage, so as to avoid damaging the cutter 18 and its drive motor.
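  • Steps 5 to 9 describe a PID loop on the angle of the second rotary joint 19; a minimal sketch is given below (the gains and the motor command interface are assumptions):

      class JointPID:
          """PID controller for the swing angle of the second rotary joint 19."""
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral = 0.0
              self.prev_error = 0.0

          def command(self, target_angle, actual_angle, dt):
              error = target_angle - actual_angle              # step 6: position deviation
              self.integral += error * dt
              derivative = (error - self.prev_error) / dt
              self.prev_error = error
              # step 7: command value sent to the third motor 15
              return self.kp * error + self.ki * self.integral + self.kd * derivative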
  • when the second camera is a binocular camera and the 3D scene graph is constructed as in the second embodiment, a Global Navigation Satellite System (GNSS) satellite positioning receiver is installed on the weeding robot, so the robot's position in the earth coordinate system is known. Since the mounting position of the second camera on the robot is also known, the 3D scene modelling uses the satellite positioning information of the GNSS receiver, and the 3D spatial models of crops and weeds are based on the earth coordinate system. According to the structure of the weeding robotic arm 4, the three-dimensional structures of crops and weeds can be converted into the coordinate system of the second rotary joint 19 of the arm 4; the second rotary joint 19 is equipped with an angle-feedback sensor, so the position of the cutter 18 in that coordinate system is known, and the positions of the cutter 18, the weeds and the crops are thus unified in the same coordinate system.
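  • A sketch of unifying coordinates by chaining homogeneous transforms, where the 4x4 matrices built from the GNSS pose, the camera mounting position and the arm geometry are assumed calibration inputs:

      import numpy as np

      def to_joint_frame(point_earth, T_base_from_earth, T_joint_from_base):
          """Express a 3-D point given in the earth (GNSS) frame in the frame of the
          second rotary joint 19 by chaining 4x4 homogeneous transforms built from the
          robot's GNSS pose, the camera mounting position and the arm geometry."""
          p = np.append(np.asarray(point_earth, dtype=float), 1.0)
          return (T_joint_from_base @ T_base_from_earth @ p)[:3]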
  • the planned path in step 5 is converted into the swing target trajectory of the small arm 16 as follows: the travel speed v of the weeding robot is measured in real time by the GNSS receiver; the position of weed C11 in the coordinate system of the second rotary joint 19 is (x11, y11); the rotation angle of the small arm 16 when removing weed C11 is β = arcsin(x11/R), where R is the length of the small arm 16; the projection distance from weed C11 to the motion arc of the small arm 16 is L = y11 - R·cosβ; the arrival time interval is then t1 = L/v. If the current moment is t, the small arm of the weeding robotic arm 4 rotates to position β at moment t + t1; by analogy, the swing target trajectory (swing angle versus time history) of the small arm 16 is calculated. The position deviation between the actual angle of the second rotary joint 19 and the target trajectory point is then calculated; the deviation is used as the input of the PID control algorithm to compute the motor command value; the controller outputs the command signal and the third motor 15 drives the small arm 16 of the weeding robotic arm 4 to swing between the two crop rows. In Fig. 5, C12 is the projection of weed C11 onto the motion arc of the small arm 16 and the Y-axis indicates the robot's forward direction. After the image of the N-th camera is matched with the features of the (N-1)-th camera, the small arm 16 of the weeding robotic arm 4 acts according to the coordinate-transformed swing target trajectory.
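  • The conversion above follows directly from β = arcsin(x11/R), L = y11 - R·cosβ and t1 = L/v; the function below is a sketch under that reading:

      import math

      def swing_target(x11, y11, R, v, t_now):
          """Return (beta, t_arrival) for a weed at (x11, y11) in the coordinate frame of
          the second rotary joint 19. R is the length of the small arm 16 and v is the
          robot speed measured by the GNSS receiver."""
          beta = math.asin(x11 / R)          # swing angle needed to reach the weed
          L = y11 - R * math.cos(beta)       # distance from the weed to the arm's motion arc
          t1 = L / v                         # time until the weed reaches the arc
          return beta, t_now + t1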
  • by fusing the data of the three-dimensional perception sensor (binocular camera, depth camera or multi-line lidar), the IMU and the GNSS receiver, the reconstructed three-dimensional scene greatly improves the robot system's weed-perception accuracy.
  • the deep neural network model is used to segment the image captured by the first camera, and the accurate weed segmentation result in the perspective of the front of the robot is obtained.
  • next, using the association of position information between the cameras (the position coordinates and transformation matrices between the camera positions are known) and the 3D spatial position of the weeds obtained by a visual-odometry method based on multiple sensors (IMU, binocular camera and GNSS receiver), the 3D scene structure of crops/weeds/soil is accurately constructed; the 2D image segmentation results are then matched with the 3D scene graph, the detection results of the first camera are extended to the multi-camera detection method, and the image template containing the crop-stalk positions is determined and continuously updated.
  • based on the stalk coordinates of the weeds in the image template, the weeding trajectory is planned with the goal of minimizing the energy consumption of the robotic arm, which improves weeding accuracy.
  • the 3D scene graph may be a 3D scene point cloud graph
  • the tracking target may be one weed or multiple weeds, which is not limited in this application.
  • the control algorithm is as follows: the first camera and the controllable light source are started; field images are collected and processed; crop and weed images are segmented online in real time; crops and weeds are reconstructed in 3D; intra-camera and inter-camera tracking are used; the control algorithm predicts the trigger time and cutter-head position, adjusts the azimuth angle of the weeding robotic arm and the tilt angle of the mowing motor, estimates the distance between the weeding blade and the weeds, and triggers the cutter to cut the weeds.
  • the weeding execution unit is the weeding robot arm 4
  • the mowing motor is the fourth motor 17 .
  • the planning method of the weeding path of the weeding robot further includes:
  • based on neural network model training, a lightweight target detector is obtained; after each acquisition of weed target features and the weed labels of the acquired weed target features, the lightweight target detector reprocesses the image collected by each camera to verify whether the acquired weed target features and weed labels are correct.
  • the lightweight target detector runs faster than the image segmentation model and processes images more quickly (this lightweight target detector is different from the one mentioned earlier). Thus, after the first camera collects an image and the image segmentation model segments and identifies it, the result can also be verified by the lightweight target detector. If the result of the image segmentation model is consistent with the detection result of the lightweight target detector, the segmentation result is correct and can be sent to the next camera as the classification standard, after which the weeding robotic arm is controlled to weed. If the two results are inconsistent, the segmentation result is inaccurate and the machine must be stopped for maintenance, to prevent crops from being damaged by misclassification.
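  • The cross-check can be expressed as a simple agreement test between the segmentation output and the detector output; the per-object label comparison used here is an illustrative assumption:

      def verify(segmentation_labels, detector_labels):
          """Compare the per-object labels from the image segmentation model with those
          from the lightweight target detector for the same image; if they disagree,
          the machine should be stopped for inspection."""
          return sorted(segmentation_labels) == sorted(detector_labels)

      # Example: both pipelines found two weeds and one crop in the frame
      assert verify(["weed", "weed", "crop"], ["crop", "weed", "weed"])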
  • a camera is set at the tail of the weeding robot. After correcting the weeding path of the weeding robotic arm according to the N-th weed target feature and its weed label, the method further includes: controlling the weeding robotic arm of the weeding robot to weed along the weeding path; acquiring the post-weeding image through the camera set at the tail of the weeding robot and comparing it with the first image to establish an evaluation function S, where the total number of weeds in the first image is n1, the total number of crops is m1, and, from the first image and the post-weeding image, the number of successfully removed weeds is n2 and the number of accidentally injured crops is m2; the control parameters of the yaw drive motor of the weeding robotic arm are adjusted according to the value of the evaluation function S.
  • the camera is installed at the tail of the weeding robot.
  • according to the value of the evaluation function S, the quality of weeding can be judged: the larger the value of S, the better the weeding quality; the smaller the value of S, the less satisfactory the weeding effect, in which case the travel speed of the robot needs to be reduced and the control parameters of the motors in the weeding robotic arm adjusted to improve the response speed.
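  • The exact expression of S appears only as a formula image in the source, so the sketch below uses one plausible form, S = n2/n1 - m2/m1, purely as an assumption, to show how the counts could be combined and how the robot speed might then be adjusted:

      def evaluation_score(n1, m1, n2, m2):
          """Assumed form of S: reward the fraction of weeds removed (n2/n1) and
          penalise the fraction of crops accidentally injured (m2/m1)."""
          return n2 / n1 - m2 / m1

      def adjust_speed(current_speed, s, threshold=0.8, factor=0.8):
          """If S is small (weeding quality poor), reduce the robot's travel speed."""
          return current_speed * factor if s < threshold else current_speed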
  • the weed target features include one or a combination of the weed's outline, color, shape, texture and position; the crop target features include one or a combination of the crop's outline, color, shape, texture and position; and the soil target features include one or a combination of the soil's outline, color, shape, texture and position.
  • when images pass through the image segmentation model, the soil target features and crop target features can also be identified; the height of the weeding robotic arm can be adjusted according to the soil target features, and the evaluation function S is ultimately established according to the crop target features.
  • Fig. 7 is a schematic block diagram of a planning device for a weeding path of a weeding robot proposed in an embodiment of the present application. This device is applied to the planning method of the weeding path of the weeding robot, as shown in Figure 7, the device includes:
  • the model acquisition module 101 is set to train based on the neural network model to obtain an image segmentation model, and the image segmentation model is used to identify and segment weed target features, soil target features and crop target features;
  • the path planning module 102 is configured to, based on the weed target features, acquire the weed target features by intra-camera tracking and by multi-camera tracking, and plan the weeding path of the weeding robotic arm of the weeding robot so that the arm weeds along the weeding path.
  • the first camera, ..., the i-th camera, ..., the N-th camera are sequentially arranged at intervals along the direction opposite to the advancing direction of the weeding robot, and the first camera to the N-th camera follow the weeding robot to move;
  • the path planning module 102 includes: a first weed target feature extraction module, configured to obtain the first image collected by the first camera, call the image segmentation model to identify and segment the first image, and generate the first image The weed label of the weed target feature and the first weed target feature;
  • the second weed target feature extraction module is configured to, when the i-th camera is the second camera, obtain the i-th image collected by the i-th camera, extract the target features in the i-th image according to the density-based spatial clustering method, match them with the (i-1)-th weed target feature, and obtain the i-th weed target feature corresponding to the (i-1)-th weed target feature together with the weed label of the i-th weed target feature;
  • the first path planning module is configured to plan the weeding path of the weeding robotic arm of the weeding robot according to the i-th weed target feature and its weed label;
  • the third weed target feature extraction module is configured to, when the i-th camera is in turn the third camera to the N-th camera, extract the target features in the i-th image according to the density-based spatial clustering method, match them with the (i-1)-th weed target feature, and obtain the i-th weed target feature corresponding to the (i-1)-th weed target feature together with its weed label;
  • the second path planning module is configured to correct the weeding path of the weeding robotic arm of the weeding robot according to the i-th weed target feature and its weed label; where N ≥ 2, 1 < i ≤ N, i and N are positive integers, and the sum of the time for the N-th camera to move with the robot to the original position of the first camera and the image-processing time of the first to N-th cameras is greater than or equal to the time for the weeding robotic arm to swing to the position of the weed corresponding to the first weed target feature.
  • the first camera, ..., the jth camera, ..., the Nth camera are sequentially arranged at intervals along the direction opposite to the advancing direction of the weeding robot, and the first camera to the Nth camera follow the weeding robot to move;
  • the path planning module 102 includes: a first weed target feature extraction module, configured to obtain the first image collected by the first camera, call the image segmentation model to identify and segment the first image, and generate the first image The weed label of the weed target feature and the first weed target feature;
  • the 3D scene graph acquisition module is configured to, when the j-th camera is the second camera, obtain the j-th image collected by the j-th camera and reconstruct a 3D scene graph of the weeds from the j-th image with the j-th camera, where the j-th camera is one of a binocular camera, a depth camera or a multi-line lidar sensor;
  • the image template forming module is configured to match the (j-1)-th weed target feature and its weed label with the weeds in the 3D scene graph, separate the 3D map of the weeds from the matching result, extract the position coordinates of the crop's main stalk, and form an image template from the 3D map and the position coordinates;
  • the (j+1)-th weed target feature extraction module is configured to obtain the (j+1)-th image collected by the (j+1)-th camera, extract the target features in the (j+1)-th image according to the density-based spatial clustering method, match them with the image template, and obtain the (j+1)-th weed target feature and its weed label;
  • the (j+1)-th path planning module is configured to plan the weeding path of the weeding robotic arm of the weeding robot according to the (j+1)-th weed target feature and its weed label;
  • the 3D scene graph acquisition module is further configured to, when the j-th camera is in turn the third camera to the (N-1)-th camera, obtain the j-th image collected by the j-th camera and reconstruct the 3D scene graph of the weeds from it, where the j-th camera is one of a binocular camera, a depth camera or a multi-line lidar sensor;
  • the image template forming module is further configured to match the (j-1)-th weed target feature and its weed label with the weeds in the 3D scene graph, separate the 3D map of the weeds from the matching result, extract the position coordinates of the crop's main stalk, and form an image template from the 3D map and the position coordinates;
  • the (j+1)-th weed target feature extraction module is further configured to obtain the (j+1)-th image collected by the (j+1)-th camera, extract its target features according to the density-based spatial clustering method, match them with the image template, and obtain the (j+1)-th weed target feature and its weed label; the (j+1)-th path planning module is further configured to correct the weeding path of the weeding robotic arm according to the (j+1)-th weed target feature and its weed label;
  • the N-th weed target feature extraction module is configured to, when the j-th camera is the N-th camera, obtain the j-th image, extract its target features according to the density-based spatial clustering method, match them with the (j-1)-th weed target feature, and obtain the j-th weed target feature corresponding to the (j-1)-th weed target feature together with its weed label; the N-th path planning module is configured to correct the weeding path of the weeding robotic arm of the weeding robot according to the j-th weed target feature and its weed label; where N ≥ 2, 1 < j ≤ N, j and N are positive integers, and the sum of the time for the N-th camera to move with the robot to the original position of the first camera and the image-processing time of the first to N-th cameras is greater than or equal to the time for the weeding robotic arm to swing to the position of the weed corresponding to the first weed target feature.
  • the planning device for the weeding path of the weeding robot further includes: a target detector acquisition module configured to obtain a lightweight target detector based on neural network model training; and a verification module configured to, after each acquisition of weed target features and their weed labels, reprocess the image collected by each camera with the lightweight target detector to verify whether the acquired weed target features and weed labels are correct.
  • a camera is arranged at the tail of the weeding robot; the device also includes: a control module configured to control the weeding robotic arm of the weeding robot to weed along the weeding path; and an evaluation module configured to acquire the post-weeding image through the camera set at the tail of the weeding robot, compare it with the first image and establish the evaluation function S, where the total number of weeds in the first image is n1, the total number of crops is m1, and, from the first image and the post-weeding image, the number of successfully removed weeds is n2 and the number of accidentally injured crops is m2; the control parameters of the yaw drive motor of the weeding robotic arm are adjusted according to the value of the evaluation function S.
  • the model acquisition module 101 includes: an acquisition module configured to collect multiple original images through the camera under different parameters, wherein the different parameters include the crop growth stage, light intensity, temperature, humidity, soil type and acquisition time; a label image generation module configured to classify the original images according to the crop growth stages and label the crops, weeds and soil in each original image to form label images; a classification module configured to randomly split the original images of each growth stage and the corresponding label images into a training set, a verification set and a test set; and a training module configured to define a deep neural network architecture, set the training parameters based on the training, verification and test sets of each growth stage, call the optimization algorithm to train the architecture to obtain trained model weight files, and obtain the image segmentation model for each growth stage of the crop from the model weight files.
  • the above-mentioned products can execute the method provided by any embodiment of the present application, and have corresponding functional modules and effects for executing the method.
  • Fig. 8 is a structural block diagram of a weeding robot proposed in the embodiment of the present application.
  • the weeding robot 400 comprises the planning device for the weeding path of the weeding robot described above, and also includes: one or more processors 401; and a data storage device 402 configured to store one or more programs and to store the images collected by each camera, the image templates and the corresponding coordinate information in the three-dimensional scene graph. When the one or more programs are executed by the one or more processors 401, the one or more processors 401 implement the aforementioned planning method for the weeding robot and its weeding path.
  • this weeding robot 400 comprises a processor 401, a data storage device 402, an input device 403 and an output device 404;
  • the processor 401, the data storage device 402, the input device 403 and the output device 404 in the device can be connected through a bus or in other ways. In FIG. 8, the connection through a bus is taken as an example.
  • the data storage device 402 can be configured to store software programs, computer-executable programs and modules, such as program instructions corresponding to the weeding robot and its weeding path planning method in the embodiment of the present application.
  • the processor 401 runs the software programs, instructions and modules stored in the data storage device 402 to execute various functional applications and data processing of the device, that is, to realize the planning method of the above-mentioned weeding robot and its weeding path.
  • the data storage device 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required by a function; the data storage area may store data created according to the use of the terminal, and the like.
  • the data storage device 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • the data storage device 402 may include memory located remotely from the processor 401, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 403 can be configured to receive input instruction requests, and generate key signal input related to device setting and function control.
  • the output device 404 may include a display device such as a display screen.
  • the weeding robot further includes: at least one controller, configured to collect feedback data from sensors on the weeding mechanical arm, control the movement of all motors and electric cylinders on the weeding mechanical arm, and drive the The cutter at the end of the weeding mechanical arm moves according to the planned trajectory; at least one set of weeding actuators, the weeding actuators include a link mechanism, a motor with angle feedback and an electric cylinder with position feedback.
  • the embodiment of the present application also proposes a computer-readable storage medium, on which a computer program is stored.
  • the program is executed by the processor 401, the above-mentioned weeding robot and its weeding path planning method are implemented.
  • the storage medium provided by the embodiment of the present application contains computer-executable instructions, and the computer-executable instructions can execute the relevant operations in the planning method for the weeding robot and its weeding path provided in any embodiment of the present application.
  • the present application can be realized by software and necessary general-purpose hardware, and can also be realized by hardware.
  • the technical solution of the present application can be embodied in the form of software products in essence, and the computer software products can be stored in computer-readable storage media, such as computer floppy disks, read-only memory (Read-Only Memory, ROM), random access Memory (Random Access Memory, RAM), flash memory (FLASH), hard disk or optical disc, etc., including a plurality of instructions to make a computer device (which can be a personal computer, server, or network device, etc.) execute the described embodiment of the present application. Methods.
  • an image segmentation model is obtained based on neural network model training, and the image segmentation model is used to identify and segment weed target features, soil target features and crop target features; based on the image segmentation model, the weeding path of the weeding robotic arm of the weeding robot is planned using intra-camera tracking and multi-camera tracking, so that, based on the tracking within and between cameras, the positions of weeds are reliably recorded and traced, the harmful effects of image-processing delay are alleviated, and the accuracy of weeding is improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Environmental Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

A method for planning a weeding path of a weeding robot, including the following steps: training based on a neural network model to obtain an image segmentation model, wherein the image segmentation model is used to identify and segment weed target features, soil target features and crop target features; and, based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking, and planning a weeding path of a weeding robotic arm (4) of the weeding robot so that the weeding robotic arm (4) weeds along the weeding path. The method can alleviate the harmful effects of image-processing delay and improves the accuracy of weeding. Also provided are a planning device for a weeding path of a weeding robot, a weeding robot and a computer-readable storage medium.

Description

Weeding robot, and planning method, device and medium for weeding path thereof
This application claims priority to Chinese patent application No. 202111147412.4, filed with the Chinese Patent Office on September 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of weeding robots, and for example to a weeding robot and a planning method, device and medium for its weeding path.
Background Art
Under the premise of advocating environmental protection, greatly increasing crop yield will be the main task of green, sustainable agriculture. Most organic farms do not allow chemically synthesized pesticides or herbicides, and manual weeding is still used in field management, but its cost is very high. Field weeding faces two major problems: first, the number of people engaged in agricultural labour keeps falling while the labour cost of weeding keeps rising; second, awareness of environmental protection keeps growing and many regions increasingly restrict the amount of herbicide that may be used. The emergence of artificial-intelligence and robotics technology provides an effective way to solve these problems. Physical weeding robot systems can successfully remove weeds between plant rows; however, when weeds grow at the stems of crops, removing the weeds near the plant stems is a very challenging task.
To locate weeds accurately, the weeds must first be distinguished from the valuable crops. Vision-based classification methods have proven effective in this field. However, such visual classification is usually implemented with a complex convolutional neural network (CNN), which introduces a long and indeterminate time delay, greatly reducing weeding accuracy; it cannot be applied to real agricultural scenarios, and related research remains at the experimental stage. On the other hand, traditional single-camera weed tracking systems, because of their limited field of view, often cannot accommodate long weed-detection delays. Changes in camera viewing angle and viewpoint while the robot walks can cause significant changes in object appearance; the image changes are especially large for tall weeds, so traditional methods easily lose the target and accidentally injure crops.
Summary
The present application provides a weeding robot and a planning method, device and medium for its weeding path which, based on intra-camera and inter-camera tracking, reliably record and trace the positions of weeds, so as to alleviate the impact of image-processing delay and improve the accuracy of weeding.
The present application proposes a method for planning a weeding path of a weeding robot, including:
training based on a neural network model to obtain an image segmentation model, wherein the image segmentation model is used to identify and segment weed target features, soil target features and crop target features;
based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking, and planning a weeding path of a weeding robotic arm of the weeding robot so that the weeding robotic arm weeds along the weeding path.
The present application proposes a planning device for a weeding path of a weeding robot, applied to the planning method for a weeding path of a weeding robot described above, the device including:
a model acquisition module configured to train based on a neural network model to obtain an image segmentation model, wherein the image segmentation model is used to identify and segment weed target features, soil target features and crop target features;
a path planning module configured to, based on the weed target features, acquire the weed target features by intra-camera tracking and by multi-camera tracking, and plan a weeding path of a weeding robotic arm of the weeding robot so that the weeding robotic arm weeds along the weeding path.
The present application also proposes a weeding robot, including the planning device for a weeding path of a weeding robot described above, and further including:
one or more processors;
a data storage device configured to store one or more programs, and to store the images collected by each camera, the image templates and the corresponding coordinate information in the three-dimensional scene graph;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the planning method for a weeding path of a weeding robot as described above.
The present application also proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the path planning method for automatic weeding by a weeding robot as described above is implemented.
Brief Description of the Drawings
Fig. 1 is a flow chart of a method for planning a weeding path of a weeding robot proposed in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a weeding robot proposed in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a weeding robotic arm of a weeding robot proposed in an embodiment of the present application;
Fig. 4 is a side view of a weeding robotic arm of a weeding robot proposed in an embodiment of the present application;
Fig. 5 is a planning diagram of a weeding path of a weeding robot proposed in an embodiment of the present application;
Fig. 6 is a flow chart of another method for planning a weeding path of a weeding robot proposed in an embodiment of the present application;
Fig. 7 is a schematic block diagram of a planning device for a weeding path of a weeding robot proposed in an embodiment of the present application;
Fig. 8 is a structural block diagram of a weeding robot proposed in an embodiment of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. The specific embodiments described here are only used to explain the present application. For ease of description, only the parts relevant to the present application are shown in the drawings.
Feeding a growing population and protecting the environment are the two major challenges of future agriculture. Precision agriculture, also called smart agriculture, which many countries have proposed and promoted, will help address both challenges. Weeds are plants that compete with the desired cash crops and reduce productivity; they compete with crops for water, nutrients, space and sunlight, lowering crop yield and quality. In fact, crop-growing practice relies heavily on uniform herbicide application to control weeds. In this context, developing agricultural robot systems is essential for realizing precision agriculture. In the related art, a visual classification method is generally used to track weeds; it is usually implemented with a complex convolutional neural network that introduces a long and indeterminate time delay, so that by the time the weeding camera has captured and processed an image, the robot may already have passed the position of the weeding target while the weeding robotic arm has not yet reached the designated position, greatly reducing weeding accuracy.
In addition, target recognition algorithms in the related art only recognize weeds and do not track changes in weed position. Traditional single-camera target tracking usually uses a motion-based contour tracker; because of their limited field of view, single-camera weed tracking systems often cannot accommodate long weed-detection (image segmentation) delays and robotic-arm actuation delays. Meanwhile, ground undulation makes the chassis bump as the robot walks, changing the camera's viewing angle and viewpoint; traditional methods may then suffer significant changes in object appearance (distortion, blur), and the image changes are especially large for tall weeds, so it is easy to lose the target and accidentally injure crops.
To solve the above technical problems, an embodiment of the present application proposes a method for planning a weeding path of a weeding robot: training based on a neural network model to obtain an image segmentation model used to identify and segment weed target features, soil target features and crop target features; and, based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking, and planning the weeding path of the weeding robotic arm so that the arm weeds along that path. The weeding robotic arm can thus move to the designated weeding position in advance, which alleviates the impact of image-processing delay and improves weeding accuracy.
The method for planning a weeding path of a weeding robot proposed in the embodiment of the present application is introduced below. Fig. 1 is a flow chart of the method. As shown in Fig. 1, the method includes:
S101: training based on a neural network model to obtain an image segmentation model, the image segmentation model being used to identify and segment weed target features, soil target features and crop target features.
According to an embodiment of the present application, obtaining the image segmentation model in S101 includes:
collecting multiple original images with the camera under different parameters, the parameters including crop growth stage, light intensity, temperature, humidity, soil type and acquisition time; classifying the original images by crop growth stage and labelling the crops, weeds and soil in each original image to form label images; splitting the original images and label images of each growth stage into a training set, a verification set and a test set; defining a deep neural network architecture, setting training parameters based on the training, verification and test sets of each growth stage, and calling an optimization algorithm to train the architecture to obtain trained model weight files; and obtaining an image segmentation model for each growth stage of the crop from the model weight files.
The crop growth stages include the seedling stage, the growth stage and the maturity stage; that is, under the corresponding light intensity, temperature, humidity, soil type and acquisition time, multiple original images of crops, weeds and soil can be obtained at the seedling stage, the growth stage and the maturity stage respectively.
Taking the seedling-stage images as an example, the original images are classified and labelled to form label images, for example soil = 0, crop = 1 and weed = 2; the labelling can be done manually with labelling software, segmenting the crops, weeds and soil in each picture to produce an annotated image, i.e. a single-channel grayscale map whose gray values represent the class labels. The seedling-stage label images and original images are then split into a training set, a verification set and a test set, a neural network architecture is defined and trained with deep learning, and the image segmentation model for seedling-stage crops is obtained. In this embodiment three deep learning architectures, namely fully convolutional networks (FCN), U-NET and Deeplab v3+, are used in combination to train the image segmentation model for seedling-stage crops; their configuration parameters include the learning rate, learning-rate decay, batch size and the number of graphics processing units (GPUs) to be used. During training, multi-GPU training is started, accuracy and the Jaccard index are reported periodically, the weight file of the model that performs best on the verification set is stored, and the architecture with the highest image-segmentation accuracy for seedling-stage crops is selected. At run time, an edge computer with a GPU can be installed on the weeding robot; the edge computer runs Ubuntu and the Robot Operating System (ROS) development environment, and a new ROS node for real-time semantic segmentation of images is created, so that the model deployed on the edge computer runs online and segments crops and weeds in real time.
The image segmentation models for growth-stage and maturity-stage crops are obtained in the same way as for seedling-stage crops, and the process is not repeated here.
S102: based on the weed target features, acquiring the weed target features by intra-camera tracking and by multi-camera tracking, and planning the weeding path of the weeding robotic arm so that the arm weeds along the weeding path.
Intra-camera tracking means that a camera itself identifies the weeds, soil and crops in the images it captures. Multi-camera tracking means that the initial camera identifies the weeds, soil and crops in its image using the image segmentation model, and each subsequent camera tracks the recognition results of the previous camera to identify the weeds, soil and crops in its own image; finally the weeding path of the robotic arm is planned from the identified weed features. Because each camera tracks the previous camera's results, the weeding path can be predicted in advance, the arm swings precisely to the weeding position, the impact of image-processing delay is alleviated and weeding accuracy is improved. In this embodiment the weed target features mainly include the size and position information of the weeds.
S102 is explained below with specific embodiments.
According to an embodiment of the present application, a first camera, ..., an i-th camera, ..., an N-th camera are arranged at intervals along the direction opposite to the robot's forward direction, and all of them move with the robot. Acquiring the weed target features by intra-camera tracking and by multi-camera tracking and planning the weeding path of the robotic arm then includes: acquiring the first image collected by the first camera, calling the image segmentation model to identify and segment it, and generating the first weed target feature and its weed label; when the i-th camera is the second camera, acquiring the i-th image, extracting the target features in it by the density-based spatial clustering method, matching them with the (i-1)-th weed target feature to obtain the i-th weed target feature and its weed label, and planning the weeding path of the robotic arm accordingly; when the i-th camera is in turn the third to the N-th camera, extracting the target features in the i-th image by the density-based spatial clustering method, matching them with the (i-1)-th weed target feature to obtain the i-th weed target feature and its weed label, and correcting the weeding path accordingly; where N ≥ 2, 1 < i ≤ N, and i and N are positive integers. The sum of the time for the N-th camera to move with the robot to the original position of the first camera and the image-processing time of the first to N-th cameras is greater than or equal to the time for the robotic arm to swing to the position of the weed corresponding to the first weed target feature.
Taking N = 3, as shown in Figs. 2 to 5 (the leftward arrow in Fig. 2 indicates the robot's forward direction): the first image collected by the first camera 1 is acquired and the image segmentation model is called to identify and segment it (a lightweight target detector can be used to identify the crop type and growth stage so that the appropriate segmentation model is called), generating the first weed target feature and its weed label, and the weeding robotic arm 4 is started; then the second image from the second camera 2 is acquired, its target features are extracted by the density-based spatial clustering method and matched with the first weed target feature to obtain the second weed target feature and its weed label, and the weeding path of the arm is planned accordingly, by which time the arm 4 has essentially reached the preset position; then the third image from the third camera 3 is acquired, its target features are extracted and matched with the second weed target feature to obtain the third weed target feature and its weed label, and the weeding path is corrected accordingly. Thus, when the third camera 3 has moved to the position of the first camera 1, the arm 4 has already been prepared and only fine adjustment is needed before weeding, avoiding the time-delay problem of a single camera with model-based recognition, in which the robot has passed the weed before the arm reaches the preset position and crops are injured.
The weed target feature can be one or a combination of the weed's outline, color, size and coordinates. The sum of the time for the N-th camera to move to the first camera's original position (which depends on the robot's travel speed) and the image-processing time of the first to N-th cameras is greater than or equal to the time for the arm to swing to the position of the weed corresponding to the first weed target feature, so the arm can swing precisely to the preset position for more accurate weeding.
The first camera 1 to the third camera 3 may also be cameras mounted on a drone rather than on the weeding robot, with image information transmitted between the drone and the robot by wireless communication.
In the foregoing embodiment, the first camera can be a red-green-blue near-infrared (Red-Green-Blue Near Infrared, RGB+NIR) camera, and the second and third cameras can be ordinary RGB cameras.
According to another embodiment of the present application, a first camera, ..., a j-th camera, ..., an N-th camera are arranged at intervals along the direction opposite to the robot's forward direction and move with the robot. Acquiring the weed target features by intra-camera tracking and by multi-camera tracking and planning the weeding path of the robotic arm then includes: acquiring the first image collected by the first camera, calling the image segmentation model to identify and segment it, and generating the first weed target feature and its weed label; when the j-th camera is the second camera, acquiring the j-th image and reconstructing a three-dimensional scene graph of the weeds from it with the j-th camera using a 3D reconstruction method based on simultaneous localization and mapping (SLAM), the j-th camera being a sensor with 3D scene-perception capability, for example a binocular camera, a depth camera or a multi-line lidar sensor (when the j-th camera is a binocular camera the 3D reconstruction can be a visual-odometry method, and when it is a lidar sensor it can be a lidar 3D scene reconstruction method); matching the (j-1)-th weed target feature and its weed label with the weeds in the 3D scene graph, separating the 3D map of the weeds from the matching result, extracting the position coordinates of the crop's main stalk, and forming an image template from the 3D map and the position coordinates; acquiring the (j+1)-th image from the (j+1)-th camera, extracting its target features by the density-based spatial clustering method, matching them with the image template to obtain the (j+1)-th weed target feature and its weed label, and planning the weeding path of the robotic arm accordingly; when the j-th camera is in turn the third to the (N-1)-th camera, repeating the same reconstruction, template-forming and matching steps and correcting the weeding path accordingly; when the j-th camera is the N-th camera, acquiring the j-th image, extracting its target features by the density-based spatial clustering method, matching them with the (j-1)-th weed target feature to obtain the j-th weed target feature and its weed label, and correcting the weeding path accordingly; where N ≥ 2, 1 < j ≤ N, and j and N are positive integers. The sum of the time for the N-th camera to move with the robot to the original position of the first camera and the image-processing time of the first to N-th cameras is greater than or equal to the time for the robotic arm to swing to the position of the weed corresponding to the first weed target feature.
In a further embodiment, referring to Fig. 2 and again taking N = 3, the three-dimensional scene graph of soil, weeds and crops can be reconstructed from the images collected by the second camera 2, after which the following steps are performed:
(1) After the 3D scene graph is built, the crops/weeds in the segmentation result of the first camera's image are matched with the crops/weeds in the second camera's 3D scene graph, the weed centers and contour boundaries are transformed into the current frame, an image template for target tracking is generated, the crop/weed positions are updated in real time and their labels are recorded; this step realizes crop/weed target tracking within the second camera's field of view.
(2) When crops or weeds move out of the second camera's line of sight, the edge computer runs a lightweight target detection model (for example YOLO Nano) and verifies the classification result again against the labels accumulated for the image, to prevent crops from being damaged by misclassification.
(3) A third camera is arranged above the weeding device, mounted vertically facing the ground. Once the third camera finds a new weed/crop object moving into its field of view, target features of the weed/crop are extracted and matched with the weed/crop features in the previous image template to determine its label. The control computer switches on the fill light through a digital switch and runs the weed tracking algorithm to track the position changes of the weeds within this camera's field of view.
(4) After repeated intra-camera tracking and inter-camera tracking, when a weed finally approaches the end effector, the relative position of the end effector and the weed and the travel speed of the robot chassis are tracked, and the control algorithm predicts the trigger time and adjusts the position of the weeding device for precise weeding.
(5) A phenotype detection module composed of a lidar, an inertial measurement unit (IMU) and a camera is mounted above the cutter motor; a target detection algorithm runs on a second computer (computer II) on the weeding robot to detect crop stalks in real time, and the multi-line lidar estimates in real time the distance between the weeding blade and the crop's main stalk to avoid damaging it.
In the foregoing embodiment, the first camera can be an RGB+NIR camera, the second camera a binocular camera with an IMU, and the third camera an ordinary RGB camera. The cameras are 80 cm above the ground with a ground resolution of about 3 pixels/cm; before field work all cameras are calibrated to compensate for lens distortion, and a fill light can be mounted beside each camera to shield it from changes in sunlight intensity.
On the basis of the two foregoing embodiments, the weeding robotic arm acts according to the planned weeding path. As shown in Figs. 3 and 4, the weeding robotic arm 4 includes a first motor 10, a second motor 13, a third motor 15, a fourth motor 17, a first rotary joint 11, a second rotary joint 19, a large arm 12, a small arm 16, a lead screw and guide rail 14, a cutter 18, an electric cylinder 20 and a phenotype detection module 21.
The small arm 16 includes a first connecting rod 16-1, a second connecting rod 16-2 and the electric cylinder 20; the first and second connecting rods form a parallel linkage, and the electric cylinder 20 adjusts the height of the cutter 18 so that it follows the undulation of the soil surface. One end of the electric cylinder 20 is hinged to the first connecting rod 16-1 and the other end to the second connecting rod 16-2, so that the cutter 18 floats up and down with the four-bar linkage under the drive of the electric cylinder 20.
The guide rail 14 carries a linear displacement sensor, and the first rotary joint 11, the second rotary joint 19 and the first motor 10, second motor 13, third motor 15 and fourth motor 17 all carry angle sensors. The phenotype detection module 21 consists of a multi-line lidar, an IMU and a camera, and can store the detected field-plant and ground information in the second industrial computer on the robot.
The weeding robotic arm 4 also includes a controller that collects feedback data from the sensors on the arm, controls the movement of all motors and electric cylinders on the arm, and drives the cutter 18 at the end of the arm to move along the planned trajectory.
The control flow is as follows:
Step 1: after the robot enters the field, adjust the angle of the first motor 10, rotate the large arm 12 and check whether the second rotary joint 19 is between two crop rows; Step 2: adjust the second motor 13 to lower the cutter 18; Step 3: the robot starts to travel and the fourth motor 17 starts to rotate; Step 4: the arm controller receives the planned trajectory; Step 5: coordinate transformation, converting the weed path into the swing target trajectory of the second rotary joint 19; Step 6: calculate the deviation between the actual angle of the second rotary joint 19 and the target trajectory point; Step 7: the deviation is used as the input of a proportional-integral-derivative (PID) control algorithm to calculate the command value for the third motor 15; Step 8: the controller outputs the command signal and the third motor 15 drives the small arm 16 of the arm 4 to swing between the two crop rows; Step 9: the swing trajectory tracks the planned path; Step 10: the angle sensor at the second rotary joint 19 feeds back the swing angle of the small arm 16 in real time; Step 11: when the phenotype detection module 21 detects a change in ground elevation, the controller commands the electric cylinder 20 to adjust its displacement so that the cutter 18 floats up and down with the linkage, avoiding damage to the cutter 18 and its drive motor.
When the second camera is a binocular camera and the 3D scene graph is built as in the second embodiment, a Global Navigation Satellite System (GNSS) satellite positioning receiver is installed on the weeding robot, so the robot's position in the earth coordinate system is known. Since the mounting position of the second camera on the robot is known, the 3D scene modelling uses the GNSS receiver's satellite positioning information, and the 3D spatial models of crops and weeds are based on the earth coordinate system. According to the structure of the weeding robotic arm 4, the 3D structures of crops and weeds can be converted into the coordinate system of the second rotary joint 19 of the arm; the second rotary joint 19 carries an angle-feedback sensor, so the position of the cutter 18 in that coordinate system is known, and the positions of the cutter 18, the weeds and the crops are all unified in the same coordinate system.
As shown in Fig. 5, the planned path in Step 5 is converted into the swing target trajectory of the small arm 16 as follows:
The robot's travel speed v is measured in real time by the GNSS receiver; the position of weed C11 in the coordinate system of the second rotary joint 19 is (x11, y11); the rotation angle of the small arm 16 when removing weed C11 is β = arcsin(x11/R), where R is the length of the small arm 16; the projection distance from weed C11 to the motion arc of the small arm 16 is L = y11 - R·cosβ; the arrival time interval is then t1 = L/v; if the current moment is t, the small arm of the arm 4 rotates to position β at moment t + t1. By analogy, the swing target trajectory (swing angle versus time history) of the small arm 16 is calculated; the deviation between the actual angle of the second rotary joint 19 and the target trajectory point is then calculated, the deviation is used as the input of the PID control algorithm to compute the motor command value, the controller outputs the command signal, and the third motor 15 drives the small arm 16 to swing between the two crop rows. In Fig. 5, C12 is the projection of weed C11 onto the motion arc of the small arm 16, and the Y-axis direction indicates the robot's forward direction.
After the image of the N-th camera is matched with the features of the (N-1)-th camera, the small arm 16 of the weeding robotic arm 4 acts according to the coordinate-transformed swing target trajectory.
On this basis, by fusing the data of the 3D-perception sensor (binocular camera, depth camera or multi-line lidar), the IMU and the GNSS receiver, the reconstructed 3D scene greatly improves the robot system's weed-perception accuracy.
In the multi-camera weed tracking system of the embodiments of the present application, the deep neural network model first segments the image captured by the first camera to obtain accurate weed segmentation results in the robot's forward view; then, using the association of position information between the cameras (the position coordinates and transformation matrices between camera positions are known) and the 3D spatial position of the weeds obtained by a visual-odometry method based on multiple sensors (IMU, binocular camera and GNSS receiver), the 3D scene structure of crops/weeds/soil is accurately constructed; the 2D segmentation results are then matched with the 3D scene graph, the detection results of the first camera are extended to the multi-camera detection method, and the image template containing the crop-stalk positions is determined and continuously updated. Based on the stalk coordinates of the weeds in the image template, the weeding trajectory is planned with the goal of minimizing the energy consumption of the robotic arm, which improves weeding accuracy. The 3D scene graph may be a 3D scene point-cloud map.
In the multi-camera tracking system, the tracked target may be one weed or multiple weeds; this application places no limit on this.
As shown in Fig. 6, in this embodiment the control algorithm is as follows: the first camera and the controllable light source are started; field images are collected and processed; crop and weed images are segmented online in real time; crops and weeds are reconstructed in 3D; intra-camera and inter-camera tracking are used; the control algorithm predicts the trigger time and cutter-head position, adjusts the azimuth angle of the weeding robotic arm and the tilt angle of the mowing motor, estimates the distance between the weeding blade and the weeds, and triggers the cutter to cut the weeds. Here the weeding execution unit is the weeding robotic arm 4 and the mowing motor is the fourth motor 17.
根据本申请的一个实施例,除草机器人除草路径的规划方法还包括:
基于神经网络模型训练,获取一轻量级目标检测器;在每次获取杂草目标特征以及获取的杂草目标特征的杂草标签之后,通过轻量级目标检测器重新处理每个相机采集的图像,以验证获取的杂草目标特征以及杂草标签是否正确。
Compared with the image segmentation model, the lightweight object detector runs fast and processes images quickly (the lightweight object detector here is not the same as the lightweight object detector mentioned earlier). Thus, after the first camera has captured an image and the image segmentation model has recognised and segmented it, the result can additionally be verified by the lightweight object detector. If the result of the image segmentation model is consistent with the detection result of the lightweight object detector, the segmentation result is considered correct, can be passed to the next camera as the classification reference, and the weeding manipulator is then controlled to weed. If the result of the image segmentation model is inconsistent with the detection result of the lightweight object detector, the segmentation result is considered unreliable and the machine must be stopped for inspection, preventing crop damage caused by a classification error.
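A minimal way to express this consistency check is to compare the class summaries produced by the two models and halt when they disagree. The sketch below is an illustration only: the `segmentation_model` and `light_detector` callables, their assumed return format (per-class counts) and the agreement criterion are assumptions rather than the application's actual interfaces.

```python
def verify_labels(image, segmentation_model, light_detector):
    """Cross-check segmentation labels with a fast detector; return True if consistent."""
    seg_summary = segmentation_model(image)   # assumed format, e.g. {"weed": 5, "crop": 2}
    det_summary = light_detector(image)       # same assumed counting format
    return seg_summary == det_summary

def process_frame(image, segmentation_model, light_detector, stop_machine):
    """Pass verified results onward; stop the machine on a mismatch to protect the crop."""
    if not verify_labels(image, segmentation_model, light_detector):
        stop_machine("segmentation/detector mismatch - inspect before weeding")
        return None
    return segmentation_model(image)
```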
According to an embodiment of the present application, a camera is arranged at the rear of the weeding robot. After the weeding path of the weeding manipulator of the weeding robot has been corrected according to the N-th weed target features and their weed labels, the method further includes: controlling the weeding manipulator of the weeding robot to weed according to the weeding path; acquiring a post-weeding image through the camera arranged at the rear of the weeding robot, comparing and analysing it with the first image, and establishing an evaluation function:
(evaluation function S; equation image PCTCN2022088072-appb-000001, not reproduced here)
where n1 is the total number of weeds and m1 the total number of crop plants in the first image, and n2 is the number of weeds successfully removed and m2 the number of crop plants accidentally damaged, both obtained from the first image and the post-weeding image; the control parameters of the yaw drive motor of the weeding manipulator are adjusted according to the value of the evaluation function S.
The camera is mounted at the rear of the weeding robot. The value of the evaluation function S indicates the quality of the weeding: the larger the value of S, the better the weeding quality; the smaller the value of S, the less satisfactory the weeding result, in which case the travel speed of the robot should be reduced and, at the same time, the control parameters of the motors in the weeding manipulator adjusted to improve the response speed.
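The exact form of S appears only as an equation image in the application; a plausible reading, used purely as an assumption in the sketch below, is a weed-removal rate penalised by the crop-damage rate, S = n2/n1 − m2/m1. The threshold and the speed/gain adjustment rule are likewise illustrative.

```python
def evaluation_score(n1, m1, n2, m2):
    """Assumed form of S: weed-removal rate minus crop-damage rate (not the patented formula)."""
    return n2 / n1 - m2 / m1

def adjust_after_pass(n1, m1, n2, m2, speed, yaw_gain, s_min=0.8):
    """If weeding quality is poor, slow the robot and stiffen the yaw-motor response."""
    s = evaluation_score(n1, m1, n2, m2)
    if s < s_min:
        speed *= 0.8        # reduce travel speed
        yaw_gain *= 1.2     # raise the controller gain for a faster response
    return s, speed, yaw_gain
```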
According to an embodiment of the present application, the weed target features include one or a combination of several of weed contour, colour, shape, texture and position; the crop target features include one or a combination of several of crop contour, colour, shape, texture and position; and the soil target features include one or a combination of several of soil contour, colour, shape, texture and position.
When the image passes through the image segmentation model, soil target features and crop target features can also be recognised: the height of the weeding manipulator can be adjusted according to the soil target features, and the evaluation function S is ultimately established with the help of the crop target features.
Fig. 7 is a block diagram of the apparatus for planning the weeding path of a weeding robot proposed in an embodiment of the present application. The apparatus is applied to the foregoing method for planning the weeding path of a weeding robot. As shown in Fig. 7, the apparatus includes:
a model acquisition module 101, configured to obtain an image segmentation model based on neural network model training, the image segmentation model being used to recognise and segment weed target features, soil target features and crop target features; and a path planning module 102, configured to, based on the weed target features, acquire the weed target features by intra-camera tracking and by inter-camera tracking, and plan the weeding path of the weeding manipulator of the weeding robot so that the manipulator weeds according to the weeding path.
According to an embodiment of the present application, a first camera, ..., an i-th camera, ..., an N-th camera are arranged at intervals in sequence along the direction opposite to the direction of travel of the weeding robot, and the first to N-th cameras move with the weeding robot. The path planning module 102 includes: a first weed target feature extraction module, configured to acquire the first image captured by the first camera, invoke the image segmentation model to recognise and segment the first image, and generate first weed target features and weed labels of the first weed target features; a second weed target feature extraction module, configured to, when the i-th camera is the second camera, acquire the i-th image captured by the i-th camera, extract target features in the i-th image by a density-based spatial clustering method, match the target features in the i-th image against the (i-1)-th weed target features, and obtain the i-th weed target features corresponding to the (i-1)-th weed target features together with their weed labels; a first path planning module, configured to plan the weeding path of the weeding manipulator of the weeding robot according to the i-th weed target features and their weed labels; a third weed target feature extraction module, configured to, when the i-th camera is, in turn, the third camera to the N-th camera, extract target features in the i-th image by the density-based spatial clustering method, match them against the (i-1)-th weed target features, and obtain the i-th weed target features corresponding to the (i-1)-th weed target features together with their weed labels; and a second path planning module, configured to correct the weeding path of the weeding manipulator of the weeding robot according to the i-th weed target features and their weed labels; where N≥2, 1<i≤N, and i and N are both positive integers; and the sum of the time taken for the N-th camera to move with the weeding robot to the original position of the first camera and the time taken by the first to N-th cameras to process images is greater than or equal to the time taken for the weeding manipulator of the weeding robot to swing to the position of the weed corresponding to the first weed target features.
According to an embodiment of the present application, a first camera, ..., a j-th camera, ..., an N-th camera are arranged at intervals in sequence along the direction opposite to the direction of travel of the weeding robot, and the first to N-th cameras move with the weeding robot. The path planning module 102 includes: a first weed target feature extraction module, configured to acquire the first image captured by the first camera, invoke the image segmentation model to recognise and segment the first image, and generate first weed target features and weed labels of the first weed target features; a three-dimensional scene map acquisition module, configured to, when the j-th camera is the second camera, acquire the j-th image captured by the j-th camera and reconstruct a three-dimensional scene map of the weeds from the j-th image via the j-th camera, the j-th camera being one of a binocular camera, a depth camera or a multi-line lidar sensor; an image template forming module, configured to match the (j-1)-th weed target features and their weed labels against the weeds in the three-dimensional scene map, separate a three-dimensional map of the weeds from the matching result, extract the position coordinates of the main crop stems, and form an image template from the three-dimensional map and the position coordinates; a (j+1)-th weed target feature extraction module, configured to acquire the (j+1)-th image captured by the (j+1)-th camera, extract target features in the (j+1)-th image by a density-based spatial clustering method, match them against the image template, and obtain the (j+1)-th weed target features and their weed labels; and a (j+1)-th path planning module, configured to plan the weeding path of the weeding manipulator of the weeding robot according to the (j+1)-th weed target features and their weed labels. The three-dimensional scene map acquisition module is further configured to, when the j-th camera is, in turn, the third camera to the (N-1)-th camera, acquire the j-th image captured by the j-th camera and reconstruct a three-dimensional scene map of the weeds from the j-th image via the j-th camera, the j-th camera being one of a binocular camera, a depth camera or a multi-line lidar sensor; the image template forming module is further configured to match the (j-1)-th weed target features and their weed labels against the weeds in the three-dimensional scene map, separate the three-dimensional map of the weeds from the matching result, extract the position coordinates of the main crop stems, and form an image template from the three-dimensional map and the position coordinates; the (j+1)-th weed target feature extraction module is further configured to acquire the (j+1)-th image captured by the (j+1)-th camera, extract target features in the (j+1)-th image by the density-based spatial clustering method, match them against the image template, and obtain the (j+1)-th weed target features and their weed labels; and the (j+1)-th path planning module is further configured to correct the weeding path of the weeding manipulator of the weeding robot according to the (j+1)-th weed target features and their weed labels. The path planning module 102 further includes: an N-th weed target feature extraction module, configured to, when the j-th camera is the N-th camera, acquire the j-th image captured by the j-th camera, extract target features in the j-th image by the density-based spatial clustering method, match them against the (j-1)-th weed target features, and obtain the j-th weed target features corresponding to the (j-1)-th weed target features together with their weed labels; and an N-th path planning module, configured to correct the weeding path of the weeding manipulator of the weeding robot according to the j-th weed target features and their weed labels; where N≥2, 1<j≤N, and j and N are both positive integers; and the sum of the time taken for the N-th camera to move with the weeding robot to the original position of the first camera and the time taken by the first to N-th cameras to process images is greater than or equal to the time taken for the weeding manipulator of the weeding robot to swing to the position of the weed corresponding to the first weed target features.
According to an embodiment of the present application, the apparatus for planning the weeding path of the weeding robot further includes: an object detector acquisition module, configured to obtain a lightweight object detector based on neural network model training; and a verification module, configured to, each time weed target features and their weed labels have been obtained, re-process the image captured by each camera with the lightweight object detector to verify whether the obtained weed target features and their weed labels are correct.
According to an embodiment of the present application, a camera is arranged at the rear of the weeding robot, and the apparatus further includes: a control module, configured to control the weeding manipulator of the weeding robot to weed according to the weeding path; and an evaluation module, configured to acquire a post-weeding image through the camera arranged at the rear of the weeding robot, compare and analyse it with the first image, and establish an evaluation function:
(evaluation function S; equation images PCTCN2022088072-appb-000002 and PCTCN2022088072-appb-000003, not reproduced here)
where n1 is the total number of weeds and m1 the total number of crop plants in the first image, and n2 is the number of weeds successfully removed and m2 the number of crop plants accidentally damaged, both obtained from the first image and the post-weeding image; the control parameters of the yaw drive motor of the weeding manipulator are adjusted according to the value of the evaluation function S.
According to an embodiment of the present application, the model acquisition module 101 includes: an acquisition module, configured to capture multiple original images with a camera under different parameters, the different parameters including crop growth stage, light intensity, temperature, humidity, soil type and acquisition time; a label image generation module, configured to classify the multiple original images by crop growth stage and to label the crops, weeds and soil in each original image, forming label images; a classification module, configured to randomly split the original images and corresponding label images of each crop growth stage into a training set, a validation set and a test set; and a training module, configured to define a deep neural network architecture, set training parameters based on the training set, validation set and test set of each crop growth stage, and invoke an optimisation algorithm to train the deep neural network architecture, obtaining trained model weight files; an image segmentation model for each crop growth stage is obtained from the model weight files.
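For illustration, the per-growth-stage split and training loop described above might be organised with PyTorch-style components as in the sketch below; the dataset object, the segmentation network, the split ratios and the hyper-parameters are placeholders, not those of the application, and the validation loop is omitted for brevity.

```python
import torch
from torch.utils.data import DataLoader, random_split

def train_stage_model(dataset, model, epochs=50, lr=1e-3, batch_size=8, device="cuda"):
    """Split one growth stage's labelled images and train a segmentation network on them."""
    n_train = int(0.7 * len(dataset))
    n_val = int(0.2 * len(dataset))
    n_test = len(dataset) - n_train - n_val
    train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()           # pixel classes: crop / weed / soil

    model.to(device)
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "stage_weights.pt")  # per-stage model weight file
    return model
```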
The above product can execute the method provided by any embodiment of the present application and has the functional modules and effects corresponding to the execution of that method.
Fig. 8 is a structural block diagram of a weeding robot proposed in an embodiment of the present application. As shown in Fig. 8, the weeding robot 400 includes the foregoing apparatus for planning the weeding path of a weeding robot, and further includes: one or more processors 401; and a data storage device 402, configured to store one or more programs and to store the image captured by each camera, the image templates and the corresponding coordinate information in the three-dimensional scene map. When the one or more programs are executed by the one or more processors 401, the one or more processors 401 implement the weeding robot and weeding-path planning method described above.
As shown in Fig. 8, the weeding robot 400 includes a processor 401, a data storage device 402, an input device 403 and an output device 404. There may be one or more processors 401 in the device; one processor 401 is taken as an example in Fig. 8. The processor 401, data storage device 402, input device 403 and output device 404 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 8.
As a computer-readable storage medium, the data storage device 402 can be configured to store software programs, computer-executable programs and modules, such as the program instructions corresponding to the weeding robot and weeding-path planning method in the embodiments of the present application. By running the software programs, instructions and modules stored in the data storage device 402, the processor 401 executes the various functional applications and data processing of the device, that is, implements the above weeding robot and weeding-path planning method.
The data storage device 402 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the application required by at least one function, and the data storage area can store data created according to the use of the terminal, etc. In addition, the data storage device 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the data storage device 402 may include memory arranged remotely from the processor 401, and such remote memory may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 403 can be configured to receive input command requests and to generate key signal input related to the settings and function control of the device. The output device 404 may include a display device such as a display screen.
According to an embodiment of the present application, the weeding robot further includes: at least one controller, configured to collect the feedback data of the sensors on the weeding manipulator, control the motion of all motors and electric cylinders on the weeding manipulator, and drive the cutter at the end of the weeding manipulator to move along the planned trajectory; and at least one set of weeding execution devices, each weeding execution device including a linkage mechanism, a motor with angle feedback and an electric cylinder with position feedback.
An embodiment of the present application further proposes a computer-readable storage medium on which a computer program is stored; when executed by the processor 401, the program implements the weeding robot and weeding-path planning method described above.
That is to say, the storage medium containing computer-executable instructions provided by the embodiments of the present application can, through its computer-executable instructions, perform the relevant operations of the weeding robot and weeding-path planning method provided by any embodiment of the present application.
From the above description of the embodiments, a person skilled in the art can clearly understand that the present application can be implemented by means of software together with the necessary general-purpose hardware, or by hardware. The technical solution of the present application can essentially be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disc, and includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present application.
In summary, according to the weeding robot and the method, apparatus and medium for planning its weeding path proposed in the embodiments of the present application, an image segmentation model is obtained based on neural network model training, the image segmentation model being used to recognise and segment weed target features, soil target features and crop target features; based on the image segmentation model, the weeding path of the weeding manipulator of the weeding robot is planned by intra-camera tracking and inter-camera tracking, so that, relying on tracking within and between cameras, the positions of weeds are reliably recorded and traced, the harmful effect of image-processing delay is mitigated and the precision of weeding is improved.

Claims (11)

  1. A method for planning a weeding path of a weeding robot, comprising:
    obtaining an image segmentation model based on neural network model training, wherein the image segmentation model is used to recognise and segment weed target features, soil target features and crop target features;
    based on the weed target features, acquiring the weed target features by intra-camera tracking and acquiring the weed target features by inter-camera (multi-camera) tracking, and planning a weeding path of a weeding manipulator of the weeding robot, so that the weeding manipulator weeds according to the weeding path.
  2. The method for planning a weeding path of a weeding robot according to claim 1, wherein a first camera, ..., an i-th camera, ..., an N-th camera are arranged at intervals in sequence along the direction opposite to the direction of travel of the weeding robot, and the first camera to the N-th camera move with the weeding robot;
    the acquiring the weed target features by intra-camera tracking and acquiring the weed target features by inter-camera tracking based on the weed target features, and planning the weeding path of the weeding manipulator of the weeding robot, comprises:
    acquiring a first image captured by the first camera, invoking the image segmentation model to recognise and segment the first image, and generating first weed target features and weed labels of the first weed target features;
    when the i-th camera is the second camera, acquiring an i-th image captured by the i-th camera, extracting target features in the i-th image by a density-based spatial clustering method, matching the target features in the i-th image against the (i-1)-th weed target features, and obtaining i-th weed target features corresponding to the (i-1)-th weed target features and weed labels of the i-th weed target features; and planning the weeding path of the weeding manipulator of the weeding robot according to the i-th weed target features and the weed labels of the i-th weed target features;
    when the i-th camera is, in turn, the third camera to the N-th camera, extracting target features in the i-th image by the density-based spatial clustering method, matching the target features in the i-th image against the (i-1)-th weed target features, and obtaining i-th weed target features corresponding to the (i-1)-th weed target features and weed labels of the i-th weed target features; and correcting the weeding path of the weeding manipulator of the weeding robot according to the i-th weed target features and the weed labels of the i-th weed target features;
    wherein N≥2, 1<i≤N, and i and N are both positive integers; and the sum of the time taken for the N-th camera to move with the weeding robot to the original position of the first camera and the time taken by the first camera to the N-th camera to process images is greater than or equal to the time taken for the weeding manipulator of the weeding robot to swing to the position of the weed corresponding to the first weed target features.
  3. The method for planning a weeding path of a weeding robot according to claim 1, wherein a first camera, ..., a j-th camera, ..., an N-th camera are arranged at intervals in sequence along the direction opposite to the direction of travel of the weeding robot, and the first camera to the N-th camera move with the weeding robot;
    the acquiring the weed target features by intra-camera tracking and acquiring the weed target features by inter-camera tracking based on the weed target features, and planning the weeding path of the weeding manipulator of the weeding robot, comprises:
    acquiring a first image captured by the first camera, invoking the image segmentation model to recognise and segment the first image, and generating first weed target features and weed labels of the first weed target features;
    when the j-th camera is the second camera, acquiring a j-th image captured by the j-th camera and reconstructing a three-dimensional scene map of weeds from the j-th image via the j-th camera, wherein the j-th camera is one of a binocular camera, a depth camera or a multi-line lidar sensor; matching (j-1)-th weed target features and weed labels of the (j-1)-th weed target features against weeds in the three-dimensional scene map, separating a three-dimensional map of the weeds from the matching result, extracting position coordinates of main crop stems, and forming an image template from the three-dimensional map and the position coordinates; acquiring a (j+1)-th image captured by a (j+1)-th camera, extracting target features in the (j+1)-th image by a density-based spatial clustering method, matching the target features in the (j+1)-th image against the image template, and obtaining (j+1)-th weed target features and weed labels of the (j+1)-th weed target features; and planning the weeding path of the weeding manipulator of the weeding robot according to the (j+1)-th weed target features and the weed labels of the (j+1)-th weed target features;
    when the j-th camera is, in turn, the third camera to the (N-1)-th camera, acquiring the j-th image captured by the j-th camera and reconstructing a three-dimensional scene map of weeds from the j-th image via the j-th camera, wherein the j-th camera is one of a binocular camera, a depth camera or a multi-line lidar sensor; matching (j-1)-th weed target features and weed labels of the (j-1)-th weed target features against weeds in the three-dimensional scene map, separating a three-dimensional map of the weeds from the matching result, extracting position coordinates of main crop stems, and forming an image template from the three-dimensional map and the position coordinates; acquiring a (j+1)-th image captured by the (j+1)-th camera, extracting target features in the (j+1)-th image by the density-based spatial clustering method, matching the target features in the (j+1)-th image against the image template, and obtaining (j+1)-th weed target features and weed labels of the (j+1)-th weed target features; and correcting the weeding path of the weeding manipulator of the weeding robot according to the (j+1)-th weed target features and the weed labels of the (j+1)-th weed target features;
    when the j-th camera is the N-th camera, acquiring the j-th image captured by the j-th camera, extracting target features in the j-th image by the density-based spatial clustering method, matching the target features in the j-th image against (j-1)-th weed target features, and obtaining j-th weed target features corresponding to the (j-1)-th weed target features and weed labels of the j-th weed target features; and correcting the weeding path of the weeding manipulator of the weeding robot according to the j-th weed target features and the weed labels of the j-th weed target features;
    wherein N≥2, 1<j≤N, and j and N are both positive integers; and the sum of the time taken for the N-th camera to move with the weeding robot to the original position of the first camera and the time taken by the first camera to the N-th camera to process images is greater than or equal to the time taken for the weeding manipulator of the weeding robot to swing to the position of the weed corresponding to the first weed target features.
  4. The method for planning a weeding path of a weeding robot according to claim 2 or 3, further comprising:
    obtaining a lightweight object detector based on a supervised neural network training method;
    after each acquisition of weed target features and of the weed labels of the acquired weed target features, re-processing the image captured by each camera with the lightweight object detector to verify whether the acquired weed target features and the weed labels of the acquired weed target features are correct.
  5. The method for planning a weeding path of a weeding robot according to claim 2 or 3, wherein a camera is arranged at the rear of the weeding robot;
    after correcting the weeding path of the weeding manipulator of the weeding robot according to the N-th weed target features and the weed labels of the N-th weed target features, the method further comprises:
    controlling the weeding manipulator of the weeding robot to weed according to the weeding path;
    acquiring a post-weeding image through the camera arranged at the rear of the weeding robot, comparing and analysing it with the first image, and establishing an evaluation function:
    (evaluation function S; equation images PCTCN2022088072-appb-100001 and PCTCN2022088072-appb-100002, not reproduced here)
    wherein the total number of weeds in the first image is n1 and the total number of crop plants is m1, and the number of weeds successfully removed, obtained from the first image and the post-weeding image, is n2 and the number of crop plants accidentally damaged is m2;
    adjusting control parameters of a drive motor of the weeding manipulator according to the value of the evaluation function S.
  6. The method for planning a weeding path of a weeding robot according to claim 1, wherein obtaining the image segmentation model based on neural network model training comprises:
    capturing a plurality of original images with a camera under different parameters, wherein the different parameters include crop growth stage, light intensity, temperature, humidity, soil type and acquisition time;
    classifying the plurality of original images by crop growth stage, and labelling the crops, weeds and soil in each original image to form label images;
    splitting the original images and label images of each crop growth stage into a training set, a validation set and a test set;
    defining a deep neural network architecture, setting training parameters based on the training set, validation set and test set of each crop growth stage, and invoking an optimisation algorithm to train the deep neural network architecture to obtain trained model weight files;
    obtaining an image segmentation model for each crop growth stage based on the model weight files.
  7. The method for planning a weeding path of a weeding robot according to claim 1, wherein
    the weed target features comprise at least one of, or a combination of, weed contour, colour, shape, texture and position;
    the crop target features comprise at least one of, or a combination of, crop contour, colour, shape, texture and position;
    the soil target features comprise at least one of, or a combination of, soil contour, colour, shape, texture and position.
  8. An apparatus for planning a weeding path of a weeding robot, applied to the method for planning a weeding path of a weeding robot according to any one of claims 1-7, the apparatus comprising:
    a model acquisition module, configured to obtain an image segmentation model based on neural network model training, wherein the image segmentation model is used to recognise and segment weed target features, soil target features and crop target features;
    a path planning module, configured to, based on the weed target features, acquire the weed target features by intra-camera tracking and acquire the weed target features by inter-camera tracking, and plan a weeding path of a weeding manipulator of the weeding robot, so that the weeding manipulator weeds according to the weeding path.
  9. A weeding robot, comprising the apparatus for planning a weeding path of a weeding robot according to claim 8;
    and further comprising:
    at least one processor;
    a data storage device, configured to store at least one program, and to store the image captured by each camera, the image templates and the corresponding coordinate information in the three-dimensional scene map;
    wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the method for planning a weeding path of a weeding robot according to any one of claims 1-7.
  10. The weeding robot according to claim 9, further comprising:
    at least one controller, configured to collect feedback data of sensors on the weeding manipulator, control the motion of all motors and electric cylinders on the weeding manipulator, and drive the cutter at the end of the weeding manipulator to move along the planned trajectory;
    at least one set of weeding execution devices, each weeding execution device comprising a linkage mechanism, a motor with angle feedback and an electric cylinder with position feedback.
  11. A computer-readable storage medium storing a computer program, wherein, when executed by a processor, the program implements the method for planning a weeding path of a weeding robot according to any one of claims 1-7.
PCT/CN2022/088072 2021-09-29 2022-04-21 Weeding robot, and weeding path planning method, apparatus and medium therefor WO2023050783A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2022256171A AU2022256171B2 (en) 2021-09-29 2022-04-21 Weeding robot and method, apparatus for planning weeding path for the same and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111147412.4 2021-09-29
CN202111147412.4A CN113597874B (zh) 2021-09-29 2021-09-29 Weeding robot, and weeding path planning method, apparatus and medium therefor

Publications (1)

Publication Number Publication Date
WO2023050783A1 true WO2023050783A1 (zh) 2023-04-06

Family

ID=78343220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088072 WO2023050783A1 (zh) 2021-09-29 2022-04-21 除草机器人及其除草路径的规划方法、装置和介质

Country Status (3)

Country Link
CN (1) CN113597874B (zh)
AU (1) AU2022256171B2 (zh)
WO (1) WO2023050783A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292248A (zh) * 2023-10-30 2023-12-26 吉林农业大学 Deep-learning-based farmland pesticide spraying system and weed detection method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113597874B (zh) * 2021-09-29 2021-12-24 农业农村部南京农业机械化研究所 一种除草机器人及其除草路径的规划方法、装置和介质
CN114747360A (zh) * 2022-04-24 2022-07-15 佛山科学技术学院 一种除杂草机
CN117635719B (zh) * 2024-01-26 2024-04-16 浙江托普云农科技股份有限公司 基于多传感器融合的除草机器人定位方法、系统及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109614973A (zh) * 2018-11-22 2019-04-12 华南农业大学 Semantic segmentation method, system, device and medium for images of rice seedlings and seedling-stage weeds
CN209643363U (zh) * 2019-01-08 2019-11-19 安徽农业大学 Intelligent weeding robot and system based on depth vision
US20200011019A1 (en) * 2017-02-06 2020-01-09 Bilberry Sas Weeding systems and methods, railway weeding vehicles
CN111340141A (zh) * 2020-04-20 2020-06-26 天津职业技术师范大学(中国职业培训指导教师进修中心) Crop seedling and weed detection method and system based on deep learning
CN111837593A (zh) * 2020-07-24 2020-10-30 安徽农业大学 Novel Peucedanum weeding machine based on machine vision and a convolutional neural network algorithm
EP3811748A1 (en) * 2019-10-24 2021-04-28 Ekobot Ab A weeding machine and a method for carrying out weeding using the weeding machine
CN113597874A (zh) * 2021-09-29 2021-11-05 农业农村部南京农业机械化研究所 Weeding robot, and weeding path planning method, apparatus and medium therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2327039A1 (en) * 2008-06-20 2011-06-01 Université De Liège Gembloux Agro-Bio Tech Weed detection and/or destruction
CN106951836B (zh) * 2017-03-05 2019-12-13 北京工业大学 Crop coverage extraction method based on a convolutional neural network optimised with prior thresholds
CN112380926B (zh) * 2020-10-28 2024-02-20 安徽农业大学 Weeding path planning system for a field weeding robot

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200011019A1 (en) * 2017-02-06 2020-01-09 Bilberry Sas Weeding systems and methods, railway weeding vehicles
CN109614973A (zh) * 2018-11-22 2019-04-12 华南农业大学 Semantic segmentation method, system, device and medium for images of rice seedlings and seedling-stage weeds
CN209643363U (zh) * 2019-01-08 2019-11-19 安徽农业大学 Intelligent weeding robot and system based on depth vision
EP3811748A1 (en) * 2019-10-24 2021-04-28 Ekobot Ab A weeding machine and a method for carrying out weeding using the weeding machine
CN111340141A (zh) * 2020-04-20 2020-06-26 天津职业技术师范大学(中国职业培训指导教师进修中心) Crop seedling and weed detection method and system based on deep learning
CN111837593A (zh) * 2020-07-24 2020-10-30 安徽农业大学 Novel Peucedanum weeding machine based on machine vision and a convolutional neural network algorithm
CN113597874A (zh) * 2021-09-29 2021-11-05 农业农村部南京农业机械化研究所 Weeding robot, and weeding path planning method, apparatus and medium therefor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292248A (zh) * 2023-10-30 2023-12-26 吉林农业大学 Deep-learning-based farmland pesticide spraying system and weed detection method
CN117292248B (zh) * 2023-10-30 2024-04-26 吉林农业大学 Deep-learning-based farmland pesticide spraying system and weed detection method

Also Published As

Publication number Publication date
AU2022256171B2 (en) 2024-02-29
AU2022256171A1 (en) 2023-04-13
CN113597874A (zh) 2021-11-05
CN113597874B (zh) 2021-12-24

Similar Documents

Publication Publication Date Title
WO2023050783A1 (zh) Weeding robot, and weeding path planning method, apparatus and medium therefor
Li et al. Detection of fruit-bearing branches and localization of litchi clusters for vision-based harvesting robots
US11771077B2 (en) Identifying and avoiding obstructions using depth information in a single image
Bai et al. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review
US20210360850A1 (en) Automatic driving system for grain processing, automatic driving method, and path planning method
Tang et al. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: A review
Miao et al. Efficient tomato harvesting robot based on image processing and deep learning
US20220101554A1 (en) Extracting Feature Values from Point Clouds to Generate Plant Treatments
Silwal et al. Bumblebee: A Path Towards Fully Autonomous Robotic Vine Pruning.
US11553636B1 (en) Spacing-aware plant detection model for agricultural task control
CA3125700C (en) Automatic driving system for grain processing, automatic driving method and automatic identification method
Parhar et al. A deep learning-based stalk grasping pipeline
Jin et al. Detection method for table grape ears and stems based on a far-close-range combined vision system and hand-eye-coordinated picking test
Jin et al. Far-near combined positioning of picking-point based on depth data features for horizontal-trellis cultivated grape
Liu et al. The Vision-Based Target Recognition, Localization, and Control for Harvesting Robots: A Review
US20220100996A1 (en) Ground Plane Compensation in Identifying and Treating Plants
US20230252789A1 (en) Detection and instant confirmation of treatment action
Sun et al. Object localization methodology in occluded agricultural environments through deep learning and active sensing
Reddy et al. dscout: Unmanned ground vehicle for automatic disease detection and pesticide atomizer
Jogi et al. CNN based Synchronal recognition of Weeds in Farm Crops
Peebles et al. Robotic Harvesting of Asparagus using Machine Learning and Time-of-Flight Imaging–Overview of Development and Field Trials
Fang et al. Classification system study of soybean leaf disease based on deep learning
Qu et al. Deep Learning-Based Weed–Crop Recognition for Smart Agricultural Equipment: A Review
Shamshiri et al. An overview of visual servoing for robotic manipulators in digital agriculture
Vijay et al. A review on application of robots in agriculture using deep learning

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022256171

Country of ref document: AU

Date of ref document: 20220421

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874182

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE