CN115755890A - Weeding task path planning method and device based on data processing - Google Patents


Info

Publication number: CN115755890A
Application number: CN202211352669.8A
Authority: CN (China)
Prior art keywords: preset, planned route, weeding, obstacle, position data
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN115755890B
Inventors: Wang Wei (王伟), Pan Jiadi (潘佳笛), Wang Bin (王斌)
Assignee (current and original): Nanjing AYDrone Intelligent Technology Co., Ltd.
Application filed by Nanjing AYDrone Intelligent Technology Co., Ltd.
Priority to CN202211352669.8A; application granted and published as CN115755890B
Current legal status: Active

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a weeding task path planning method and device based on data processing. In the method, an unmanned aerial vehicle acquires an initial overall image of a target area, various target objects in the overall image are identified according to a preset target identification model, and a pre-planned route is determined. When the weeding robot moves along the pre-planned route to an early-warning position, the unmanned aerial vehicle acquires a corrected overall image of the target area again, and the pre-planned route is adjusted according to the temporary obstacles in the corrected overall image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route, better meeting weeding requirements in complex scenes.

Description

Weeding task path planning method and device based on data processing
Technical Field
The present application relates to data processing technologies, and in particular, to a weeding task path planning method and device based on data processing.
Background
With China's ongoing urbanization, weeding in public areas faces two major problems: first, the number of people engaged in agricultural labor keeps shrinking, so the labor cost of weeding keeps rising; second, growing environmental awareness has led local governments to impose ever-tighter limits on herbicide use.
The emergence of artificial intelligence and robotics offers an effective way to address these problems. Existing mechanical weeding robots can weed large areas effectively. Typically, an existing weeding robot moves along a fixed route and relies on its onboard machine vision for obstacle avoidance and local adjustment during movement.
However, moving along a fixed route and adjusting only by machine vision realizes no more than an obstacle-avoidance function; it cannot plan or adjust the route as a whole, making it difficult to meet weeding requirements in complex scenes such as parks and residential communities.
Disclosure of Invention
The present application provides a weeding task path planning method and device based on data processing, which solve the technical problem that moving along a fixed route and adjusting only by machine vision cannot effectively plan and adjust the route as a whole.
In a first aspect, the present application provides a data processing-based weeding task path planning method, including:
acquiring an initial overall image of a target area by an unmanned aerial vehicle, wherein the target area corresponds to a range to be weeded in the weeding task, and identifying various target objects in the overall image according to a preset target identification model, wherein the target objects are at least one of fixed obstacles, temporary obstacles and the area to be weeded;
determining a pre-planned route according to first position data corresponding to the fixed obstacle, second position data corresponding to the area to be weeded, and a preset route planning model, wherein the initial section of the pre-planned route does not include the temporary obstacle, and the initial section is the section covering the first preset distance along the pre-planned route;
when the weeding robot moves along the pre-planned route to an early-warning position, acquiring, by the unmanned aerial vehicle, a corrected overall image of the target area again, wherein, when the weeding robot is at the early-warning position, the distance between the weeding robot and the temporary obstacle in the initial overall image meets a preset condition; and adjusting the pre-planned route according to the temporary obstacle in the corrected overall image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route.
Optionally, if there is a moving obstacle in the range to be weeded, the identified temporary obstacle includes the moving obstacle; correspondingly, the adjusting the pre-planned route according to the temporary obstacle in the corrected overall image to generate a corrected planned route includes:
predicting a target position of the moving obstacle to determine a position set of the moving obstacle after a preset time period, wherein the position set includes a plurality of position data, each of which corresponds to a position determined by a different target position prediction algorithm, and the preset time period is the time required for the weeding robot to move to a dangerous area; the dangerous area is a sector centered at the current position of the moving obstacle, whose radius is the predicted straight-line distance the moving obstacle will travel within the preset time period, and the included angle between the sector's opening direction and the moving obstacle's direction of motion lies within a preset included-angle range;
and adjusting the pre-planned route according to the position set to generate the revised planned route, wherein the revised planned route does not interfere with the position set.
Optionally, the central angle of the sector is positively correlated with the movement speed of the moving obstacle, and the preset included-angle range is determined according to the movement direction of the moving obstacle and the contour line of the path along which the moving obstacle is currently moving.
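The sector-shaped danger zone described above can be sketched as follows. The function and parameter names, and the specific linear relation used to make the central angle grow with speed, are illustrative assumptions rather than the patent's exact formulation:

```python
import math

def danger_sector(obstacle_xy, speed, heading_rad, duration_s,
                  base_angle_rad=math.pi / 6, angle_per_speed=0.1):
    """Build the sector-shaped danger zone.

    Center: the obstacle's current position; radius: the straight-line
    distance the obstacle is predicted to cover within duration_s;
    central angle: grows with speed (positive correlation). The linear
    angle/speed relation and default constants are assumptions.
    """
    radius = speed * duration_s
    half_angle = (base_angle_rad + angle_per_speed * speed) / 2.0
    return obstacle_xy, radius, heading_rad, half_angle

def in_danger_sector(point_xy, sector):
    """Check whether a waypoint falls inside the sector."""
    (cx, cy), radius, heading, half_angle = sector
    dx, dy = point_xy[0] - cx, point_xy[1] - cy
    if math.hypot(dx, dy) > radius:
        return False
    # Angular offset between the point's bearing and the sector's opening direction,
    # wrapped into (-pi, pi] so the comparison is symmetric about the heading.
    bearing = math.atan2(dy, dx)
    offset = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return abs(offset) <= half_angle
```

A route planner would test each candidate waypoint with `in_danger_sector` and discard those inside the zone.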
Optionally, the adjusting the pre-planned route according to the location set includes:
obtaining historical position data of the moving obstacle within a preset time period, wherein the historical position data includes position data at each moment in the preset time period; determining first position data $P_1$ according to the historical position data and a first target position prediction algorithm; determining second position data $P_2$ according to the historical position data and a second target position prediction algorithm; and determining third position data $P_3$ according to the historical position data and a third target position prediction algorithm;
determining the initial avoidance region according to Equation 1:

Equation 1: $S_0 = S_1 \cup S_2 \cup S_3$, where $S_i = \{\,x : \lVert x - P_i \rVert \le r_i\,\}$ and $r_i = f_i(a) \cdot r_0$ for $i = 1, 2, 3$,

wherein $f_1(\cdot)$ is a preset correlation function between the type of the moving obstacle and the first target position prediction algorithm, $f_2(\cdot)$ is a preset correlation function between the type of the moving obstacle and the second target position prediction algorithm, $f_3(\cdot)$ is a preset correlation function between the type of the moving obstacle and the third target position prediction algorithm, $a$ is the type of the moving obstacle, $r_0$ is the predetermined radius value, the first, second and third regions are $S_1$, $S_2$ and $S_3$ respectively, $r_1$, $r_2$ and $r_3$ are the radii of the first region, the second region and the third region respectively, and the initial avoidance region is the union of the first region, the second region and the third region;
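A membership test for the initial avoidance region (the union of the three disks around the predicted positions) can be sketched as below. The multiplicative combination of the type-dependent scale and the preset base radius is an assumption, and the names are illustrative:

```python
import math

def predicted_circles(predictions, radius_scales, base_radius):
    """Regions S1, S2, S3: disks centered at the predicted positions P1, P2, P3.

    predictions: the three predicted (x, y) positions.
    radius_scales: the type-dependent correlation functions f_i(a), already
    evaluated for the obstacle type (assumption: they scale the base radius).
    base_radius: the preset radius value r0.
    """
    return [(p, s * base_radius) for p, s in zip(predictions, radius_scales)]

def in_initial_avoidance_region(point, circles):
    """The initial avoidance region is the union of the three disks."""
    return any(math.hypot(point[0] - cx, point[1] - cy) <= r
               for (cx, cy), r in circles)
```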
and dividing the initial avoidance region into grids, wherein the boundary size of the minimum grid is determined according to Equation 2:

Equation 2: $d = \bigl(w_1\, v(t) + w_2\, \theta(t)\bigr) \cdot \Delta t$,

wherein $d$ is the boundary size of the minimum grid, $v(t)$ is the velocity function of the moving obstacle within the preset time period, $\theta(t)$ is the movement-direction change function of the moving obstacle within the preset time period, $w_1$ is the first weight, $w_2$ is the second weight, and $\Delta t$ is the preset time interval;
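The minimum grid size can be sketched as a weighted blend of how fast the obstacle moves and how sharply it turns over the observation window, scaled by the preset time interval. The exact combination is an assumption (the patent gives the formula only as an image); the intent shown is that faster, more erratic obstacles yield a larger cell size:

```python
def min_grid_size(speed_samples, heading_samples, w1, w2, dt):
    """Boundary size d of the minimum grid.

    speed_samples: sampled speeds v(t) over the preset time period.
    heading_samples: sampled headings (radians) over the same period.
    w1, w2: the first and second weights; dt: the preset time interval.
    """
    avg_speed = sum(speed_samples) / len(speed_samples)
    # Mean absolute change of heading between consecutive samples,
    # standing in for the movement-direction change function theta(t).
    turns = [abs(b - a) for a, b in zip(heading_samples, heading_samples[1:])]
    avg_turn = sum(turns) / len(turns) if turns else 0.0
    return (w1 * avg_speed + w2 * avg_turn) * dt
```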
determining the avoidance weight of each divided grid according to Equation 3:

Equation 3: $w = c_1\, g(p, S_1) + c_2\, g(p, S_2) + c_3\, g(p, S_3) + c_4$,

wherein $g(\cdot)$ is a function of the positional relationship between the grid boundary points $p$ and the regions $S_1$, $S_2$ and $S_3$, and $c_1$, $c_2$, $c_3$ and $c_4$ are constants;
and adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area to generate a corrected planned route.
Optionally, the adjusting the pre-planned route according to the avoidance weight of each mesh in the initial avoidance area includes:
if the avoidance weight of the first grid is greater than or equal to a first preset threshold, the shortest distance between the revised planned route and the first grid is greater than a preset distance threshold;
if the avoidance weight of the second grid is greater than or equal to a second preset threshold and smaller than the first preset threshold, the revised planned route avoids the area corresponding to the second grid;
if the avoidance weight of the third grid is smaller than the second preset threshold, judging whether the third grid falls into the sector; if so, the revised planned route avoids the area corresponding to the third grid; if not, the area corresponding to the third grid is planned into the revised planned route when the preset planning condition is met.
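The three threshold cases above reduce to a per-cell decision function, sketched below with illustrative names and return labels:

```python
def grid_action(weight, in_sector, hi_threshold, lo_threshold):
    """Route-adjustment decision for one grid cell.

    Returns one of:
    'keep_clear' - weight >= first threshold: stay farther from the cell
                   than the preset distance threshold;
    'avoid'      - weight between the two thresholds, or a low-weight cell
                   that falls inside the danger sector: route around it;
    'allowed'    - low-weight cell outside the sector: may be planned into
                   the revised route when the preset planning condition holds.
    """
    if weight >= hi_threshold:
        return 'keep_clear'
    if weight >= lo_threshold:
        return 'avoid'
    # Below both thresholds: only cells outside the danger sector are usable.
    return 'avoid' if in_sector else 'allowed'
```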
Optionally, after obtaining the historical position data of the moving obstacle within the preset time period, the method further includes:
determining a motion vector sequence according to each pair of adjacent position coordinate data in the historical position data;
determining a boundary line of a current path of the moving obstacle according to the corrected overall image;
if the deviation between the angle change rate of the motion vector sequence and the corresponding angle change rate of the boundary line is smaller than a preset deviation, and the distance between the moving obstacle and a change point of the current path is greater than a preset safety distance, the revised planned route avoids the current road section of the current path. The current road section is the section between the current position of the moving obstacle and a predicted position; the predicted position is determined according to the current position and the preset safety distance; the preset safety distance is determined according to the movement speed of the moving obstacle and the preset time period; and the change points include the inflection points and intersection points of the current path.
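Computing the motion vector sequence and its angle change rate from the position history can be sketched as follows; the averaging used for the change rate is an illustrative assumption:

```python
import math

def motion_vectors(positions):
    """Vectors between each pair of adjacent position samples."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]

def mean_turn_rate(positions):
    """Average absolute change of direction along the track, comparable
    against the boundary line's angle change rate."""
    angles = [math.atan2(dy, dx) for dx, dy in motion_vectors(positions)]
    if len(angles) < 2:
        return 0.0
    # Wrap each heading difference into (-pi, pi] before averaging.
    diffs = [abs(math.atan2(math.sin(b - a), math.cos(b - a)))
             for a, b in zip(angles, angles[1:])]
    return sum(diffs) / len(diffs)
```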
Optionally, the determining the first position data $P_1$ according to the historical position data and the first target position prediction algorithm includes:
inputting the historical position data into a neural network model corresponding to the first target position prediction algorithm, wherein the historical position data includes video data within the preset time period, and the video data includes N frames of RGB 3-channel images;
combining the N frames of RGB 3-channel images and the preset time period into a 4-channel volume of size N×H×W, wherein H is the image height and W is the image width;
inputting the 4-channel N×H×W volume into a 3D deep convolutional residual network to extract features, outputting feature maps at different scales, and fusing the different-scale features with a preset feature pyramid to obtain a 2D multi-channel feature map;
and predicting the position in the layer to be predicted according to the 2D multi-channel feature map, and converting the position on the image into coordinates to obtain the first position data $P_1$.
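Assembling the 4-channel N×H×W network input can be sketched with nested lists (standing in for a tensor library). The patent says the frames and the preset duration are combined into the volume; broadcasting the duration as a constant fourth channel is an assumption about how that combination works:

```python
def build_volume(frames, duration):
    """Assemble the network input: N RGB frames plus a time channel.

    frames: list of N images, each H x W x 3 (nested lists).
    Returns a 4 x N x H x W nested-list volume: three channels hold the
    R, G, B planes of every frame; the fourth holds the preset duration
    broadcast over every voxel (an illustrative assumption).
    """
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    volume = []
    for c in range(3):  # split out the R, G, B planes
        volume.append([[[frames[t][i][j][c] for j in range(w)]
                        for i in range(h)] for t in range(n)])
    # 4th channel: the preset duration as a constant plane per frame.
    volume.append([[[float(duration)] * w for _ in range(h)] for _ in range(n)])
    return volume  # shape: 4 x N x H x W
```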
In a second aspect, the present application provides a data processing-based weeding task path planning device, including:
an acquisition module, configured to acquire an initial overall image of a target area, wherein the target area corresponds to the range to be weeded in a weeding task, and to identify various target objects in the overall image according to a preset target identification model, wherein the target objects are at least one of a fixed obstacle, a temporary obstacle and an area to be weeded;
a processing module, configured to determine a pre-planned route according to first position data corresponding to the fixed obstacle, second position data corresponding to the area to be weeded, and a preset route planning model, wherein the initial road section of the pre-planned route does not include the temporary obstacle, and the initial road section is the road section covering the first preset distance along the pre-planned route;
the acquisition module is further configured to acquire a corrected overall image of the target area again when the weeding robot moves to an early warning position according to the pre-planned route, wherein when the weeding robot is located at the early warning position, a distance between the weeding robot and the temporary obstacle in the initial overall image meets a preset condition;
and a planning module, configured to adjust the pre-planned route according to the temporary obstacle in the corrected overall image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route.
Optionally, if there is a moving obstacle in the range to be weeded, the identified temporary obstacle includes the moving obstacle; the planning module is specifically configured to:
predict a target position of the moving obstacle to determine a position set of the moving obstacle after a preset time period, wherein the position set includes a plurality of position data, each of which corresponds to a position determined by a different target position prediction algorithm, and the preset time period is the time required for the weeding robot to move to a dangerous area; the dangerous area is a sector centered at the current position of the moving obstacle, whose radius is the predicted straight-line distance the moving obstacle will travel within the preset time period, and the included angle between the sector's opening direction and the moving obstacle's direction of motion lies within a preset included-angle range;
and adjusting the pre-planned route according to the position set to generate a revised planned route, wherein the revised planned route does not interfere with the position set.
Optionally, the central angle of the sector is positively correlated with the movement speed of the moving obstacle, and the preset included-angle range is determined according to the movement direction of the moving obstacle and the contour line of the path along which the moving obstacle is currently moving.
Optionally, the planning module is specifically configured to:
obtain historical position data of the moving obstacle within a preset time period, wherein the historical position data includes the position data of the moving obstacle at each moment in the preset time period; determine first position data $P_1$ according to the historical position data and a first target position prediction algorithm; determine second position data $P_2$ according to the historical position data and a second target position prediction algorithm; and determine third position data $P_3$ according to the historical position data and a third target position prediction algorithm;
determine the initial avoidance region according to Equation 1:

Equation 1: $S_0 = S_1 \cup S_2 \cup S_3$, where $S_i = \{\,x : \lVert x - P_i \rVert \le r_i\,\}$ and $r_i = f_i(a) \cdot r_0$ for $i = 1, 2, 3$,

wherein $f_1(\cdot)$ is a preset correlation function between the type of the moving obstacle and the first target position prediction algorithm, $f_2(\cdot)$ is a preset correlation function between the type of the moving obstacle and the second target position prediction algorithm, $f_3(\cdot)$ is a preset correlation function between the type of the moving obstacle and the third target position prediction algorithm, $a$ is the type of the moving obstacle, $r_0$ is the predetermined radius value, the first, second and third regions are $S_1$, $S_2$ and $S_3$ respectively, $r_1$, $r_2$ and $r_3$ are the radii of the first region, the second region and the third region respectively, and the initial avoidance region is the union of the first region, the second region and the third region;
and divide the initial avoidance region into grids, wherein the boundary size of the minimum grid is determined according to Equation 2:

Equation 2: $d = \bigl(w_1\, v(t) + w_2\, \theta(t)\bigr) \cdot \Delta t$,

wherein $d$ is the boundary size of the minimum grid, $v(t)$ is the velocity function of the moving obstacle within the preset time period, $\theta(t)$ is the movement-direction change function of the moving obstacle within the preset time period, $w_1$ is the first weight, $w_2$ is the second weight, and $\Delta t$ is the preset time interval;
determine the avoidance weight of each divided grid according to Equation 3:

Equation 3: $w = c_1\, g(p, S_1) + c_2\, g(p, S_2) + c_3\, g(p, S_3) + c_4$,

wherein $g(\cdot)$ is a function of the positional relationship between the grid boundary points $p$ and the regions $S_1$, $S_2$ and $S_3$, and $c_1$, $c_2$, $c_3$ and $c_4$ are constants;
and adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area to generate a revised planned route.
Optionally, the planning module is specifically configured to:
if the avoidance weight of the first grid is greater than or equal to a first preset threshold, plan the revised planned route so that the shortest distance between it and the first grid is greater than a preset distance threshold;
if the avoidance weight of the second grid is greater than or equal to a second preset threshold and smaller than the first preset threshold, plan the revised planned route to avoid the area corresponding to the second grid;
if the avoidance weight of the third grid is smaller than the second preset threshold, judge whether the third grid falls into the sector; if so, plan the revised planned route to avoid the area corresponding to the third grid; if not, plan the area corresponding to the third grid into the revised planned route when the preset planning condition is met.
Optionally, the planning module is further specifically configured to:
determining a motion vector sequence according to each pair of adjacent position coordinate data in the historical position data;
determining a boundary line of a current path of the moving obstacle according to the corrected overall image;
if the deviation between the angle change rate of the motion vector sequence and the corresponding angle change rate of the boundary line is smaller than a preset deviation, and the distance between the moving obstacle and a change point of the current path is greater than a preset safety distance, the revised planned route avoids the current road section of the current path. The current road section is the section between the current position of the moving obstacle and a predicted position; the predicted position is determined according to the current position and the preset safety distance; the preset safety distance is determined according to the movement speed of the moving obstacle and the preset time period; and the change points include the inflection points and intersection points of the current path.
Optionally, the planning module is further specifically configured to:
input the historical position data into a neural network model corresponding to the first target position prediction algorithm, wherein the historical position data includes video data within the preset time period, and the video data includes N frames of RGB 3-channel images;
combine the N frames of RGB 3-channel images and the preset time period into a 4-channel volume of size N×H×W, wherein H is the image height and W is the image width;
input the 4-channel N×H×W volume into a 3D deep convolutional residual network to extract features, output feature maps at different scales, and fuse the different-scale features with a preset feature pyramid to obtain a 2D multi-channel feature map;
and predict the position in the layer to be predicted according to the 2D multi-channel feature map, and convert the position on the image into coordinates to obtain the first position data $P_1$.
In a third aspect, the present application provides an electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the possible data processing-based weeding mission path planning methods of the first aspect via execution of the executable instructions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for implementing any one of the possible data processing-based weeding mission path planning methods described in the first aspect when executed by a processor.
According to the weeding task path planning method and device based on data processing provided by the present application, an initial overall image of a target area is acquired by an unmanned aerial vehicle, and various target objects in the overall image are identified according to a preset target identification model. A pre-planned route is determined according to first position data corresponding to a fixed obstacle, second position data corresponding to an area to be weeded, and a preset route planning model. When the weeding robot moves along the pre-planned route to an early-warning position, the unmanned aerial vehicle acquires a corrected overall image of the target area again, and the pre-planned route is adjusted according to the temporary obstacle in the corrected overall image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route. The method of this embodiment can therefore determine the pre-planned route from the initial overall image, meeting the overall route planning requirement when obstacles change in a complex scene, and can also generate the corrected planned route from the corrected overall image, meeting the local route adjustment requirement after temporary obstacles change dynamically. This effectively improves the rationality and adaptability of the weeding robot's route planning, so that weeding requirements in complex scenes can be better met.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow diagram illustrating a data processing-based weeding task path planning method according to an example embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of S105 shown herein according to an example embodiment;
FIG. 3 is a schematic structural diagram of a data processing-based weeding task path planning apparatus according to an example embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
Specific embodiments of the present application have been shown by way of example in the drawings and will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as recited in the appended claims.
With China's ongoing urbanization, weeding in public areas faces two major problems: first, the number of people engaged in agricultural labor keeps shrinking, so the labor cost of weeding keeps rising; second, growing environmental awareness has led local governments to impose ever-tighter limits on herbicide use. The emergence of artificial intelligence and robotics offers an effective way to address these problems. Existing mechanical weeding robots can weed large areas effectively. Typically, an existing weeding robot moves along a fixed route and relies on its onboard machine vision for obstacle avoidance and local adjustment during movement.
However, moving along a fixed route and adjusting only by machine vision realizes no more than an obstacle-avoidance function; it cannot plan or adjust the route as a whole, making it difficult to meet weeding requirements in complex scenes such as parks and residential communities. In particular, obstacles in a complex scene change dynamically: besides fixed obstacles, there are temporary obstacles such as temporarily parked vehicles and piled goods. When the weeding robot moves only along an existing fixed route, the current obstacle distribution may differ greatly from when the fixed route was set. In that case, although the robot can still avoid obstacles with machine vision while following the fixed path, frequent obstacle avoidance wastes a large amount of its working time and seriously reduces its working efficiency.
To solve the above problems, in the embodiment provided by the present application, an unmanned aerial vehicle acquires an initial overall image of a target area, and various target objects in the overall image are identified according to a preset target identification model. A pre-planned route is determined according to first position data corresponding to a fixed obstacle, second position data corresponding to an area to be weeded, and a preset route planning model. When the weeding robot moves along the pre-planned route to an early-warning position, the unmanned aerial vehicle acquires a corrected overall image of the target area again, and the pre-planned route is adjusted according to the temporary obstacle in the corrected overall image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route. The method provided by this embodiment can therefore determine the pre-planned route from the initial overall image, meeting the overall route planning requirement when obstacles change in a complex scene, and can also generate the corrected planned route from the corrected overall image, meeting the local route adjustment requirement after temporary obstacles change dynamically. This effectively improves the rationality and adaptability of the weeding robot's route planning, so that weeding requirements in complex scenes can be better met.
Fig. 1 is a schematic flow chart diagram illustrating a data processing-based weeding task path planning method according to an example embodiment of the present application. As shown in fig. 1, the weeding task path planning method based on data processing provided in this embodiment includes:
s101, acquiring an initial overall image of a target area through an unmanned aerial vehicle.
When a weeding task is triggered, an initial overall image of a target area is acquired by the unmanned aerial vehicle, where the target area corresponds to the range to be weeded in the weeding task, such as a residential community or a park. Optionally, if the target area is large, partial images may be acquired by multiple unmanned aerial vehicles and then stitched together to generate the initial overall image of the target area.
In addition, for occluded areas within the target area, for example where a fixed backdrop on a park lawn blocks part of the lawn from view, since such occlusion ranges are fixed, the corresponding range in the initial overall image can be automatically added to the range to be weeded.
S102, identifying various target objects in the whole image according to a preset target identification model.
After the initial overall image of the target area is acquired, various target objects in the overall image can be identified according to a preset target recognition model, where a target object is at least one of a fixed obstacle, a temporary obstacle, and the area to be weeded. The target recognition model can be trained on obstacles commonly found in various scenes. For example, an identified fixed obstacle may be a fixed pillar, a temporary obstacle may be a temporarily parked vehicle, and the area to be weeded is the lawn on which the weeding operation is needed.
S103, determining a pre-planned route according to the first position data corresponding to the fixed barrier, the second position data corresponding to the area to be weeded and a preset route planning model.
In this step, a pre-planned route may be determined according to the first position data corresponding to the fixed obstacles, the second position data corresponding to the area to be weeded, and the preset route planning model, where the initial section of the pre-planned route does not include any temporary obstacle, and the initial section is the section corresponding to the first preset distance of the pre-planned route. It should be noted that the preset route planning model may perform route planning based on the PREC algorithm or other full-area-coverage route algorithms, which is not limited in this embodiment.
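As a minimal illustration of full-area-coverage route planning (the document does not define the PREC algorithm, so a generic boustrophedon serpentine ordering over a grid map is used here as a stand-in; the function name and grid representation are assumptions):

```python
def coverage_route(rows, cols, fixed_obstacles):
    """Boustrophedon (serpentine) full-coverage route over a grid map.

    rows, cols: grid dimensions of the area to be weeded.
    fixed_obstacles: set of (row, col) cells occupied by fixed obstacles.
    Returns the ordered list of free cells for the weeding robot to visit.
    """
    route = []
    for r in range(rows):
        # alternate sweep direction on each row to minimize turning
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            if (r, c) not in fixed_obstacles:
                route.append((r, c))
    return route

# Every cell not occupied by a fixed obstacle is covered exactly once.
route = coverage_route(3, 4, fixed_obstacles={(1, 1), (1, 2)})
```

A real planner would additionally reconnect the segments split by obstacles; this sketch only shows the coverage ordering.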
S104, when the weeding robot moves to the early warning position according to the pre-planned route, the unmanned aerial vehicle acquires the corrected whole image of the target area again.
When the weeding robot moves to the early warning position along the pre-planned route, the unmanned aerial vehicle acquires a corrected overall image of the target area. The robot is at the early warning position when its distance to a temporary obstacle in the initial overall image meets a preset condition, that is, when that distance is less than or equal to a preset distance.
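The preset condition above reduces to a distance threshold test against the temporary obstacles recognized in the initial overall image; a trivial sketch (names are illustrative):

```python
import math

def at_warning_position(robot_xy, temp_obstacles_xy, preset_distance):
    """True when the weeding robot is within the preset distance of any
    temporary obstacle recognized in the initial overall image, i.e. the
    robot has reached an early warning position."""
    return any(math.dist(robot_xy, o) <= preset_distance
               for o in temp_obstacles_xy)
```

When this returns True, the drone would be triggered to re-acquire the corrected overall image.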
S105, adjusting the pre-planned route according to the temporary obstacles in the corrected overall image to generate a corrected planned route.
S106, the weeding robot continues to complete the weeding task according to the corrected planned route.
In S105-S106, the pre-planned route is adjusted according to the temporary obstacles in the corrected whole image, and a corrected planned route is generated, so that the weeding robot continues to complete the weeding task according to the corrected planned route.
In this embodiment, an initial overall image of a target area is acquired by an unmanned aerial vehicle, and various target objects in the overall image are identified according to a preset target recognition model. A pre-planned route is determined according to first position data corresponding to the fixed obstacles, second position data corresponding to the area to be weeded, and a preset route planning model. When the weeding robot moves to an early warning position along the pre-planned route, the unmanned aerial vehicle acquires a corrected overall image of the target area, the pre-planned route is adjusted according to the temporary obstacles in the corrected overall image, and a corrected planned route is generated, so that the weeding robot continues the weeding task along the corrected planned route. The method can therefore determine the pre-planned route from the initial overall image, meeting the overall route planning requirement when obstacles in a complex scene have changed, and can also generate the corrected planned route from the corrected overall image, meeting the local route adjustment requirement after temporary obstacles change dynamically. This effectively improves the rationality and adaptability of the weeding robot's route planning, better meeting weeding requirements in complex scenes.
Fig. 2 is a schematic flow chart of S105 according to an example embodiment of the present application. As shown in fig. 2, in the weeding task path planning method based on data processing provided by this embodiment, S105 includes:
S1051, predicting the target position of the moving obstacle to determine the position set of the moving obstacle after a preset duration.
If a moving obstacle exists in the range to be weeded, the identified temporary obstacles include the moving obstacle. The moving obstacle may be, for example, a moving vehicle, another weeding robot, or a forage-reclaiming robot.
Specifically, the target position of the moving obstacle may be predicted to determine a position set of the moving obstacle after a preset duration, where the position set includes multiple items of position data, each corresponding to a position determined by a different target position prediction algorithm, and the preset duration is the time for the weeding robot to move to a dangerous area. The dangerous area is a sector centered at the current position of the moving obstacle whose radius is the obstacle's predicted straight-line movement distance within the preset duration; the included angle between the sector's expansion direction and the obstacle's movement direction lies within a preset included-angle range, where the expansion direction is the vector pointing from the center of the sector to the midpoint of its arc.
Optionally, the angle of the sector may also be positively correlated with the movement speed of the moving obstacle, and the preset included-angle range is determined according to the movement direction of the moving obstacle and the contour line of the route along which it is currently moving. It can be understood that the faster the moving obstacle, the higher the degree of risk, and therefore the larger the sector angle. In addition, when the moving obstacle moves along a route, the included angle between its movement direction and the contour line of that route can represent its intention to deviate from the current route; keeping the angle between the sector's expansion direction and the obstacle's movement direction within the preset range effectively ensures safety both when the obstacle continues along the route and when it deviates from it.
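The sector-shaped dangerous area described above can be sketched as follows. The linear speed-to-angle gain `k_angle` and the cap `max_half_angle` are illustrative assumptions standing in for the unspecified positive correlation:

```python
import math

def danger_sector(pos, heading_rad, speed, duration,
                  k_angle=0.1, max_half_angle=math.pi / 2):
    """Dangerous area: a sector centered at the obstacle's current position.

    radius     = predicted straight-line travel within `duration`
    half_angle = grows with speed (positive correlation), capped.
    """
    radius = speed * duration
    half_angle = min(k_angle * speed, max_half_angle)
    return pos, heading_rad, radius, half_angle

def in_sector(point, sector):
    """True if `point` falls inside the sector, i.e. within its radius and
    within half_angle of the expansion direction (here taken as the
    obstacle's heading)."""
    (cx, cy), heading, radius, half_angle = sector
    dx, dy = point[0] - cx, point[1] - cy
    if math.hypot(dx, dy) > radius:
        return False
    diff = math.atan2(dy, dx) - heading
    diff = (diff + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(diff) <= half_angle

sector = danger_sector((0.0, 0.0), 0.0, speed=2.0, duration=3.0)
```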
And S1052, adjusting the pre-planned route according to the position set to generate a corrected planned route.
Specifically, in this step, historical position data of the moving obstacle within a preset duration may be acquired, where the historical position data includes the position data at each moment within the preset duration. First position data P1 = (x1, y1) is determined according to the historical position data and a first target position prediction algorithm, second position data P2 = (x2, y2) according to the historical position data and a second target position prediction algorithm, and third position data P3 = (x3, y3) according to the historical position data and a third target position prediction algorithm.
It should be noted that the first, second, and third target position prediction algorithms may be completely different algorithms, for example a tracking algorithm based on a linear Gaussian system, or a target motion trend prediction and tracking algorithm, which is not specifically limited in this embodiment.
In one possible implementation, the target position prediction algorithm may be based on a neural network model. Specifically, a neural network model corresponding to the first target position prediction algorithm processes the historical position data, where the historical position data includes video data within the preset duration consisting of N frames of 3-channel RGB images. The N RGB frames and the preset duration are combined into a 4-channel volume of size N×H×W, where H is the image length and W is the image width. The 4-channel N×H×W volume is input into a 3D deep convolutional residual network to extract features and output feature maps of different scales, and the different-scale features are fused with a preset feature pyramid to obtain a 2D multi-channel feature map. The position is then predicted in the prediction layer from the 2D multi-channel feature map, and the position on the image is converted into coordinates to obtain the first position data P1.
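The frame-stacking step above can be sketched with NumPy. How the "preset duration" joins the volume is not specified, so encoding each frame's timestamp as the fourth channel is an assumption of this sketch:

```python
import numpy as np

def build_video_volume(frames, preset_duration):
    """Stack N RGB frames into a 4-channel N x H x W volume.

    frames: array of shape (N, H, W, 3), RGB video within the preset
    duration. The 4th channel holds each frame's timestamp (an assumed
    encoding of the preset duration). Returns shape (4, N, H, W), ready
    for a 3D convolutional backbone.
    """
    n, h, w, _ = frames.shape
    t = np.linspace(0.0, preset_duration, n).reshape(n, 1, 1)
    time_channel = np.broadcast_to(t, (n, h, w))
    volume = np.concatenate([frames, time_channel[..., None]], axis=-1)
    return volume.transpose(3, 0, 1, 2)  # channels first: (4, N, H, W)

vol = build_video_volume(np.zeros((5, 8, 6, 3)), preset_duration=2.0)
```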
After the first position data P1 = (x1, y1), the second position data P2 = (x2, y2), and the third position data P3 = (x3, y3) are determined, the initial avoidance area is determined according to Equation 1:

Equation 1: S = S1 ∪ S2 ∪ S3, where Si = {(x, y) | (x − xi)² + (y − yi)² ≤ ri²} and ri = fi(a)·r0, i = 1, 2, 3

where f1(a) is a preset correlation function between the type of the moving obstacle and the first target position prediction algorithm, f2(a) is a preset correlation function between the type of the moving obstacle and the second target position prediction algorithm, f3(a) is a preset correlation function between the type of the moving obstacle and the third target position prediction algorithm, a is the type of the moving obstacle, r0 is the preset radius value, S1, S2, and S3 are the first, second, and third regions, r1, r2, and r3 are the radii of the first, second, and third regions respectively, the initial avoidance area is the union of the first, second, and third regions, and (x, y) is the coordinate of any point in the initial avoidance area;
and dividing the initial avoidance area into grids, where the boundary size of the minimum grid is determined according to Equation 2:

Equation 2: d = (w1·v(t) + w2·θ(t))·Δt

where d is the boundary size of the minimum grid, v(t) is the velocity function of the moving obstacle within the preset duration, θ(t) is the movement-direction change function of the moving obstacle within the preset duration, w1 is the first weight, w2 is the second weight, and Δt is a preset time interval;
determining the avoidance weight of each divided grid according to Equation 3:

Equation 3: W = c1·g(p, S1) + c2·g(p, S2) + c3·g(p, S3) + c4

where g(·) is a function of the positional relationship between the grid boundary points p and S1, S2, and S3, and c1, c2, c3, and c4 are constants;
and adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area to generate a corrected planned route.
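Per the definitions around Equation 1, the initial avoidance area is the union of three regions around the predicted positions, with radii derived from type-dependent correlation functions. A minimal sketch; the circular region shape and the correlation-function values in `F` are illustrative assumptions:

```python
import math

# Illustrative stand-ins for the preset correlation functions f1, f2, f3
# between the obstacle type a and each prediction algorithm (values assumed).
F = {
    "vehicle": (1.5, 1.2, 1.0),
    "robot": (1.0, 0.8, 0.6),
}

def in_initial_avoidance_area(point, predictions, obstacle_type, r0):
    """A point lies in the initial avoidance area if it falls inside any of
    the three circles centered at the predicted positions P1, P2, P3 with
    radii r_i = f_i(a) * r0 (the union of the first, second, and third
    regions)."""
    radii = [f * r0 for f in F[obstacle_type]]
    return any(math.dist(point, p) <= r
               for p, r in zip(predictions, radii))

preds = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]  # P1, P2, P3 (example)
```

The corrected planned route would then be steered around the grids of this area according to their avoidance weights.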
In one possible design, if the avoidance weight of a first grid is greater than or equal to a first preset threshold, the shortest distance between the corrected planned route and the first grid is greater than a preset distance threshold;
if the avoidance weight of a second grid is greater than or equal to a second preset threshold and smaller than the first preset threshold, the corrected planned route avoids the area corresponding to the second grid;
if the avoidance weight of a third grid is smaller than the second preset threshold, it is judged whether the third grid falls within the sector; if so, the corrected planned route avoids the area corresponding to the third grid; if not, the area corresponding to the third grid is planned into the corrected planned route when the preset planning condition is met.
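The three-tier decision rules above can be sketched as a single dispatch function. The string labels are illustrative, and the fallback when the preset planning condition fails is not specified in the text, so defaulting to avoidance is an assumption:

```python
def route_action(weight, grid_in_sector, t1, t2, planning_ok):
    """Decision for one grid cell given its avoidance weight.

    weight >= t1           -> keep the route farther than the distance threshold
    t2 <= weight < t1      -> route avoids the cell
    weight < t2, in sector -> route avoids the cell
    weight < t2, outside   -> cell may be planned into the corrected route
                              when the preset planning condition holds
    t1, t2: first/second preset thresholds; planning_ok: preset condition.
    """
    if weight >= t1:
        return "keep_min_distance"
    if weight >= t2:
        return "avoid"
    if grid_in_sector:
        return "avoid"
    # Fallback when the planning condition fails is unspecified; avoid by default.
    return "plan_through" if planning_ok else "avoid"
```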
In addition, in another possible implementation, after the historical position data of the moving obstacle within the preset duration is acquired, a motion vector sequence may be determined from each pair of adjacent position coordinates in the historical position data, and the boundary line of the moving obstacle's current path may be determined from the corrected overall image. If the deviation between the angle change rate of the motion vector sequence and the corresponding angle change rate of the boundary line is smaller than a preset deviation, and the distance between the moving obstacle and the next change point of the current path is greater than a preset safety distance, the corrected planned route avoids the current section of the current path, where the current section is the section between the moving obstacle's current position and a predicted position, the predicted position is determined from the current position and the preset safety distance, the preset safety distance is determined from the moving obstacle's movement speed and the preset duration, and the change points include the inflection points and intersection points of the current path. In this implementation, when both the deviation condition and the distance condition hold, the moving obstacle has stayed on the current path and will still be on it after the predicted duration; exploiting this characteristic increases the reliability of position prediction for such moving obstacles and further improves the rationality of the corrected planned route.
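A rough sketch of this check, assuming uniformly spaced position samples; all names and the mean-rate formulation are illustrative:

```python
import math

def heading_change_rate(positions, dt):
    """Mean rate of heading change of the motion-vector sequence built from
    each pair of adjacent position samples (spaced dt seconds apart)."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    diffs = [(b - a + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
             for a, b in zip(headings, headings[1:])]
    return sum(abs(d) for d in diffs) / (dt * len(diffs)) if diffs else 0.0

def stays_on_path(obstacle_rate, boundary_rate, max_dev,
                  dist_to_change_pt, safe_dist):
    """The obstacle is treated as staying on its current path when its
    heading change rate tracks the path boundary's, and the next inflection
    or intersection point is farther than the preset safety distance."""
    return (abs(obstacle_rate - boundary_rate) < max_dev
            and dist_to_change_pt > safe_dist)
```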
Fig. 3 is a schematic flow chart of a data processing-based weeding task path planning device according to an example embodiment of the present application. As shown in fig. 3, the weeding task path planning apparatus 300 based on data processing according to this embodiment includes:
an obtaining module 301, configured to obtain an initial overall image of a target area, where the target area corresponds to a to-be-weeded range in the weeding task, and identify various target objects in the overall image according to a preset target identification model, where the target objects are at least one of a fixed obstacle, a temporary obstacle, and a to-be-weeded area;
a processing module 302, configured to determine a pre-planned route according to first position data corresponding to the fixed obstacle, second position data corresponding to the to-be-weeded area, and a preset route planning model, where an initial road segment of the pre-planned route does not include the temporary obstacle, and the initial road segment is a road segment corresponding to a preset distance ahead of the pre-planned route;
the obtaining module 301 is further configured to obtain a corrected overall image of the target area again when the weeding robot moves to an early warning position according to the pre-planned route, where a distance between the weeding robot and the temporary obstacle in the initial overall image meets a preset condition when the weeding robot is located at the early warning position;
the planning module 303 is configured to adjust the pre-planned route according to the temporary obstacle in the corrected whole image, and generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route.
Optionally, if there is a movement obstacle in the range to be weeded, the identified temporary obstacle includes the movement obstacle; the planning module 303 is specifically configured to:
predicting a target position of the moving obstacle to determine a position set of the moving obstacle after a preset time length, wherein the position set comprises a plurality of position data, each position data in the plurality of position data corresponds to a position determined by different target position prediction algorithms, and the preset time length is the time length for the weeding robot to move to a dangerous area; the dangerous area is a sector determined by taking the current position of the moving obstacle as the center and the predicted linear movement distance of the moving obstacle in the preset time as the radius, and an included angle between the unfolding direction of the sector and the movement direction of the moving obstacle is within a preset included angle range;
and adjusting the pre-planned route according to the position set to generate a revised planned route, wherein the revised planned route does not interfere with the position set.
Optionally, the fan-shaped angle is in positive correlation with the movement speed of the moving obstacle, and the preset included angle range is determined according to the movement direction of the moving obstacle and the contour line of the route in which the moving obstacle moves currently.
Optionally, the planning module 303 is specifically configured to:
obtaining historical position data of the moving obstacle within a preset duration, where the historical position data includes the position data of the moving obstacle at each moment within the preset duration; determining first position data P1 = (x1, y1) according to the historical position data and a first target position prediction algorithm, second position data P2 = (x2, y2) according to the historical position data and a second target position prediction algorithm, and third position data P3 = (x3, y3) according to the historical position data and a third target position prediction algorithm;
determining the initial avoidance area according to Equation 1:

Equation 1: S = S1 ∪ S2 ∪ S3, where Si = {(x, y) | (x − xi)² + (y − yi)² ≤ ri²} and ri = fi(a)·r0, i = 1, 2, 3

where f1(a) is a preset correlation function between the type of the moving obstacle and the first target position prediction algorithm, f2(a) is a preset correlation function between the type of the moving obstacle and the second target position prediction algorithm, f3(a) is a preset correlation function between the type of the moving obstacle and the third target position prediction algorithm, a is the type of the moving obstacle, r0 is the preset radius value, S1, S2, and S3 are the first, second, and third regions, r1, r2, and r3 are the radii of the first, second, and third regions respectively, and the initial avoidance area is the union of the first region, the second region, and the third region;
and dividing the initial avoidance area into grids, where the boundary size of the minimum grid is determined according to Equation 2:

Equation 2: d = (w1·v(t) + w2·θ(t))·Δt

where d is the boundary size of the minimum grid, v(t) is the velocity function of the moving obstacle within the preset duration, θ(t) is the movement-direction change function of the moving obstacle within the preset duration, w1 is the first weight, w2 is the second weight, and Δt is a preset time interval;
determining the avoidance weight of each divided grid according to Equation 3:

Equation 3: W = c1·g(p, S1) + c2·g(p, S2) + c3·g(p, S3) + c4

where g(·) is a function of the positional relationship between the grid boundary points p and S1, S2, and S3, and c1, c2, c3, and c4 are constants;
and adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area to generate a corrected planned route.
Optionally, the planning module 303 is specifically configured to:
if the avoidance weight of the first grid is greater than or equal to a first preset threshold, planning that the shortest distance between the corrected planned route and the first grid is greater than a preset distance threshold;
if the avoidance weight of the second grid is greater than or equal to a second preset threshold and smaller than the first preset threshold, planning the corrected planned route to avoid an area corresponding to the second grid;
if the avoidance weight of the third grid is smaller than the second preset threshold, judging whether the third grid falls into the sector; if so, planning the corrected planning route to avoid an area corresponding to the third grid; if the judgment result is negative, when the preset planning condition is met, planning to plan the area corresponding to the third grid into the corrected planning route.
Optionally, the planning module 303 is further specifically configured to:
determining a motion vector sequence according to each pair of adjacent position coordinate data in the historical position data;
determining a boundary line of a current path of the moving obstacle according to the corrected overall image;
if the deviation between the angle change rate of the motion vector sequence and the corresponding angle change rate of the boundary line is smaller than a preset deviation, and the distance between the motion obstacle and the change point of the current path is greater than a preset safety distance, the revised planned route avoids the current road section of the current path, the current road section is the road section between the current position of the motion obstacle and a predicted position, the predicted position is determined according to the current position and the preset safety distance, the preset safety distance is determined according to the motion speed of the motion obstacle and the preset duration, and the change point comprises: an inflection point and an intersection point of the current path.
Optionally, the planning module 303 is further specifically configured to:
processing the historical position data with a neural network model corresponding to the first target position prediction algorithm, where the historical position data includes video data within the preset duration, and the video data includes N frames of 3-channel RGB images;
combining the N RGB frames and the preset duration into a 4-channel volume of size N×H×W, where H is the image length and W is the image width;
inputting the 4-channel N×H×W volume into a 3D deep convolutional residual network, extracting features, outputting feature maps of different scales, and fusing the different-scale features with a preset feature pyramid to obtain a 2D multi-channel feature map;
predicting the position in the prediction layer according to the 2D multi-channel feature map, and converting the position on the image into coordinates to obtain the first position data P1.
In this embodiment, an initial overall image of a target area is acquired by an unmanned aerial vehicle, and various target objects in the overall image are identified according to a preset target recognition model. A pre-planned route is determined according to first position data corresponding to the fixed obstacles, second position data corresponding to the area to be weeded, and a preset route planning model. When the weeding robot moves to an early warning position along the pre-planned route, the unmanned aerial vehicle acquires a corrected overall image of the target area, the pre-planned route is adjusted according to the temporary obstacles in the corrected overall image, and a corrected planned route is generated, so that the weeding robot continues the weeding task along the corrected planned route. The device can therefore determine the pre-planned route from the initial overall image, meeting the overall route planning requirement when obstacles in a complex scene have changed, and can also generate the corrected planned route from the corrected overall image, meeting the local route adjustment requirement after temporary obstacles change dynamically. This effectively improves the rationality and adaptability of the weeding robot's route planning, better meeting weeding requirements in complex scenes. It should be noted that the weeding task path planning device based on data processing provided in the embodiment shown in fig. 3 can be used to execute the weeding task path planning method based on data processing provided in the embodiments shown in fig. 1 to fig. 2; the specific implementations and technical effects are similar and are not described again here.
Fig. 4 is a schematic structural diagram of an electronic device shown in accordance with an example embodiment of the present invention. As shown in fig. 4, the present embodiment provides an electronic device 400 including: a processor 401 and a memory 402; wherein:
a memory 402 for storing a computer program (the memory may also be a flash memory); and
a processor 401 for executing the instructions stored in the memory to implement the steps of the above method; reference may be made to the description of the preceding method embodiments.
Alternatively, the memory 402 may be separate or integrated with the processor 401.
When the memory 402 is a device independent of the processor 401, the electronic device 400 may further include:
a bus 403 for connecting the memory 402 and the processor 401.
The present embodiment also provides a readable storage medium, in which a computer program is stored, and when at least one processor of the electronic device executes the computer program, the electronic device executes the methods provided by the above various embodiments.
The present embodiments also provide a program product comprising a computer program, the computer program being stored in a readable storage medium. The computer program can be read from a readable storage medium by at least one processor of the electronic device, and the computer program can be executed by the at least one processor to enable the electronic device to implement the methods provided by the various embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A weeding task path planning method based on data processing is characterized by comprising the following steps:
acquiring an initial overall image of a target area by an unmanned aerial vehicle, wherein the target area corresponds to a range to be weeded in the weeding task, and identifying various target objects in the overall image according to a preset target identification model, wherein the target objects are at least one of fixed obstacles, temporary obstacles and the area to be weeded;
determining a pre-planned route according to first position data corresponding to the fixed barrier, second position data corresponding to the area to be weeded and a preset route planning model, wherein an initial section of the pre-planned route does not include the temporary barrier, and the initial section is a section corresponding to a front preset distance in the pre-planned route;
when the weeding robot moves to an early warning position according to the pre-planned route, the unmanned aerial vehicle acquires the corrected whole image of the target area again, wherein when the weeding robot is located at the early warning position, the distance between the weeding robot and the temporary obstacle in the initial whole image meets a preset condition; and adjusting the pre-planned route according to the temporary obstacles in the corrected whole image to generate a corrected planned route so that the weeding robot can continuously complete the weeding task according to the corrected planned route.
2. A data processing-based weeding mission path planning method according to claim 1, wherein if there is a moving obstacle within the range to be weeded, the identified temporary obstacle includes the moving obstacle; correspondingly, the adjusting the pre-planned route according to the temporary obstacle in the corrected overall image to generate a corrected planned route includes:
predicting a target position of the moving obstacle to determine a position set of the moving obstacle after a preset time length, wherein the position set comprises a plurality of position data, each of the plurality of position data corresponds to a position determined by a different target position prediction algorithm, and the preset time length is the time length for the weeding robot to move to a dangerous area; the dangerous area is a sector determined by taking the current position of the moving obstacle as the center and the predicted straight-line movement distance of the moving obstacle within the preset time length as the radius, and the included angle between the opening direction of the sector and the movement direction of the moving obstacle is within a preset included angle range;
and adjusting the pre-planned route according to the position set to generate a corrected planned route, wherein the corrected planned route does not pass through any position in the position set.
3. The data processing-based weeding task path planning method according to claim 2, wherein the angle of the sector is positively correlated with the movement speed of the moving obstacle, and the preset included angle range is determined according to the movement direction of the moving obstacle and the boundary line of the path on which the moving obstacle is currently moving.
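The sector-shaped dangerous area of claims 2 and 3 can be read as a point-membership test: within the predicted straight-line travel distance of the obstacle, and within a half-angle of its movement direction. A hedged sketch; the linear speed-to-angle mapping in `sector_half_angle` is an assumed placeholder for the positive correlation the claim states:

```python
import math

def in_danger_sector(point, center, heading, speed, dt, half_angle):
    """Membership test for the claim-2 sector: centered at the obstacle's
    current position, radius = predicted straight-line travel distance in
    dt seconds, opening around the obstacle's movement direction."""
    radius = speed * dt  # predicted straight-line travel distance
    dx, dy = point[0] - center[0], point[1] - center[1]
    if math.hypot(dx, dy) > radius:
        return False
    # angle between the sector's opening direction and the point
    diff = math.atan2(dy, dx) - heading
    diff = (diff + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(diff) <= half_angle

def sector_half_angle(speed, base=math.radians(20), gain=math.radians(5)):
    """Claim 3: the sector angle grows with the obstacle's speed; this
    linear form is chosen here only for illustration."""
    return base + gain * speed
```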
4. The data processing-based weeding task path planning method according to claim 3, wherein the adjusting the pre-planned route according to the position set comprises:
obtaining historical position data of the moving obstacle within a preset time length, wherein the historical position data comprises position data of the moving obstacle at each moment within the preset time length; determining first position data P1 according to the historical position data and a first target position prediction algorithm; determining second position data P2 according to the historical position data and a second target position prediction algorithm; and determining third position data P3 according to the historical position data and a third target position prediction algorithm;
determining an initial avoidance area according to Equation 1:

Equation 1: S = S1 ∪ S2 ∪ S3, with S1 = C(P1, f1(a)·r), S2 = C(P2, f2(a)·r), S3 = C(P3, f3(a)·r),

wherein C(P, ρ) denotes the circular area centered at the position P with radius ρ; f1(a) is a preset correlation function between the type of the moving obstacle and the first target position prediction algorithm; f2(a) is a preset correlation function between the type of the moving obstacle and the second target position prediction algorithm; f3(a) is a preset correlation function between the type of the moving obstacle and the third target position prediction algorithm; a is the type of the moving obstacle; r is a preset radius value; the first, second and third areas are S1, S2 and S3, respectively; and the initial avoidance area is the union of the first area, the second area and the third area;
and dividing the initial avoidance area into grids, wherein the boundary size of the minimum grid is determined according to Equation 2:

Equation 2: d = w1·fv(Δt) + w2·fd(Δt)

wherein d is the boundary size of the minimum grid, fv(·) is a function of the speed of the moving obstacle within the preset time length, fd(·) is a function of the change of the movement direction of the moving obstacle within the preset time length, w1 is a first weight, w2 is a second weight, and Δt is a preset time interval;
determining the avoidance weight of each divided grid according to Equation 3:

Equation 3: W = k1·g(p, S1) + k2·g(p, S2) + k3·g(p, S3) + c

wherein g(p, Si) is a function of the positional relationship between the grid boundary point p and the areas S1, S2 and S3; k1, k2 and k3 are preset weighting coefficients; and c is a constant;

and adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area to generate the corrected planned route.
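The equation images in claim 4 do not survive translation, but the labelled quantities suggest this shape: an avoidance area that is the union of circles around the three predicted positions (Equation 1), a grid of boundary size d laid over it (Equation 2), and a per-grid weight combining one positional-relationship term per area plus a constant (Equation 3). A sketch under those assumptions; every function form and coefficient here is illustrative:

```python
import math

def in_avoidance_area(p, predictions, type_factors, r):
    """Equation 1 (reconstructed): union of circles around the predicted
    positions P1..P3; circle i has radius f_i(a) * r, with type_factors[i]
    playing the role of f_i(a)."""
    return any(math.dist(p, pi) <= fi * r
               for pi, fi in zip(predictions, type_factors))

def grid_centers(xmin, ymin, xmax, ymax, d):
    """Divide the bounding box of the avoidance area into square grids of
    boundary size d (the value Equation 2 would supply)."""
    nx = max(1, math.ceil((xmax - xmin) / d))
    ny = max(1, math.ceil((ymax - ymin) / d))
    return [(xmin + (i + 0.5) * d, ymin + (j + 0.5) * d)
            for i in range(nx) for j in range(ny)]

def avoidance_weight(p, predictions, type_factors, r, ks, c):
    """Equation 3 (reconstructed): a weighted sum of one positional-
    relationship term per area S1..S3 plus a constant; the term used here
    (1 inside the circle, decaying with distance outside) is illustrative."""
    total = c
    for pi, fi, k in zip(predictions, type_factors, ks):
        dist = math.dist(p, pi)
        radius = fi * r
        total += k * (1.0 if dist <= radius else radius / dist)
    return total
```

Grids near the predicted positions receive the largest weights, so a planner can rank them for the tiered treatment of claim 5.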
5. The data processing-based weeding task path planning method according to claim 4, wherein the adjusting the pre-planned route according to the avoidance weight of each grid in the initial avoidance area comprises:
if the avoidance weight of a first grid is greater than or equal to a first preset threshold, the shortest distance between the corrected planned route and the first grid is greater than a preset distance threshold;
if the avoidance weight of a second grid is greater than or equal to a second preset threshold and smaller than the first preset threshold, the corrected planned route avoids the area corresponding to the second grid;
if the avoidance weight of a third grid is smaller than the second preset threshold, judging whether the third grid falls into the sector; if so, the corrected planned route avoids the area corresponding to the third grid; if not, when a preset planning condition is satisfied, the area corresponding to the third grid is planned into the corrected planned route.
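Claim 5's three tiers map onto a small dispatch function; the threshold values and the string labels below are illustrative placeholders:

```python
def grid_action(weight, in_sector, high_thresh=0.8, low_thresh=0.4):
    """Claim 5's tiers: high-weight grids force a hard clearance margin,
    mid-weight grids are simply avoided, and low-weight grids are avoided
    only when they fall inside the danger sector."""
    if weight >= high_thresh:
        return "keep_min_distance"  # route must stay beyond a distance threshold
    if weight >= low_thresh:
        return "avoid"              # route avoids the grid's area
    return "avoid" if in_sector else "may_traverse"
```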
6. The data processing-based weeding task path planning method according to any one of claims 3-5, further comprising, after acquiring the historical position data of the moving obstacle within the preset time length:
determining a motion vector sequence according to each pair of adjacent position coordinate data in the historical position data;
determining a boundary line of a current path of the moving obstacle according to the corrected overall image;
if the deviation between the angle change rate of the motion vector sequence and the corresponding angle change rate of the boundary line is smaller than a preset deviation, and the distance between the moving obstacle and a change point of the current path is greater than a preset safety distance, the corrected planned route avoids the current road section of the current path; the current road section is the road section between the current position of the moving obstacle and a predicted position, the predicted position is determined according to the current position and the preset safety distance, the preset safety distance is determined according to the movement speed of the moving obstacle and the preset time length, and the change point comprises an inflection point and an intersection point of the current path.
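Claim 6's test, whether the obstacle's heading changes track the curvature of the road boundary, can be sketched from the motion-vector sequence; the mean-absolute-turn metric below is one illustrative choice of angle change rate:

```python
import math

def motion_vectors(positions):
    """Claim 6: one motion vector per adjacent pair of position samples."""
    return [
        (x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]

def angle_change_rate(vectors):
    """Mean absolute change of heading between consecutive motion vectors."""
    angles = [math.atan2(dy, dx) for dx, dy in vectors]
    diffs = [
        abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        for a1, a2 in zip(angles, angles[1:])
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0

def follows_path(obstacle_rate, boundary_rate, max_deviation):
    """The obstacle is assumed to keep following its current road section
    when its heading changes roughly as the boundary line does."""
    return abs(obstacle_rate - boundary_rate) < max_deviation
```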
7. The data processing-based weeding task path planning method according to any one of claims 1-5, wherein the determining the first position data P1 according to the historical position data and the first target position prediction algorithm comprises:
inputting the historical position data into a neural network model corresponding to the first target position prediction algorithm, wherein the historical position data comprises video data within the preset time length, and the video data comprises N frames of 3-channel RGB images;
combining the N frames of 3-channel RGB images and the preset time length into a 4-channel volume of size N×H×W, wherein H is the image height and W is the image width;
inputting the 4-channel N×H×W volume into a 3D deep convolutional residual network to extract features, outputting feature maps at different scales, and fusing the features at the different scales with a preset feature pyramid to obtain a 2D multi-channel feature map;
and predicting the position in the layer to be predicted according to the 2D multi-channel feature map, and converting the position on the image into coordinates to obtain the first position data P1.
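Claim 7's input assembly, stacking N RGB frames plus a time channel into a 4-channel volume, can be sketched in plain Python; the (4, N, H, W) layout and the per-frame timestamp encoding of the preset duration are assumptions, since the claim only says the frames and the duration are combined:

```python
def build_volume(frames, dt):
    """Stack N RGB frames (each H x W lists of (r, g, b) tuples) with a
    time channel into a 4-channel volume shaped (4, N, H, W). The time
    channel repeats each frame's normalized timestamp; this encoding is an
    assumption, not stated in the patent."""
    n = len(frames)
    h = len(frames[0])
    w = len(frames[0][0])
    volume = [[[[0.0] * w for _ in range(h)] for _ in range(n)]
              for _ in range(4)]
    for t, frame in enumerate(frames):
        for y in range(h):
            for x in range(w):
                r, g, b = frame[y][x]
                volume[0][t][y][x] = r
                volume[1][t][y][x] = g
                volume[2][t][y][x] = b
                volume[3][t][y][x] = t * dt / max(n - 1, 1)
    return volume
```

A real pipeline would hand this volume to the 3D residual network as a tensor; the nested-list form here only makes the channel layout explicit.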
8. A weeding task path planning device based on data processing is characterized by comprising:
an acquisition module, used for acquiring an initial whole image of a target area, wherein the target area corresponds to a range to be weeded in a weeding task, and identifying various target objects in the whole image according to a preset target identification model, each target object being at least one of a fixed obstacle, a temporary obstacle and an area to be weeded;
a processing module, used for determining a pre-planned route according to first position data corresponding to the fixed obstacle, second position data corresponding to the area to be weeded and a preset route planning model, wherein an initial road section of the pre-planned route does not include the temporary obstacle, and the initial road section is the road section corresponding to a front preset distance in the pre-planned route;
the acquisition module is further used for reacquiring a corrected whole image of the target area when the weeding robot moves to an early warning position according to the pre-planned route, wherein, when the weeding robot is located at the early warning position, the distance between the weeding robot and the temporary obstacle in the initial whole image satisfies a preset condition;
and a planning module, used for adjusting the pre-planned route according to the temporary obstacle in the corrected whole image to generate a corrected planned route, so that the weeding robot continues to complete the weeding task according to the corrected planned route.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the data processing-based weeding task path planning method of any one of claims 1-7 via execution of the executable instructions.
10. A computer-readable storage medium having stored thereon computer-executable instructions for implementing the data processing-based weeding task path planning method according to any one of claims 1 to 7 when executed by a processor.
CN202211352669.8A 2022-11-01 2022-11-01 Weeding task path planning method and device based on data processing Active CN115755890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211352669.8A CN115755890B (en) 2022-11-01 2022-11-01 Weeding task path planning method and device based on data processing

Publications (2)

Publication Number Publication Date
CN115755890A true CN115755890A (en) 2023-03-07
CN115755890B CN115755890B (en) 2023-04-07

Family

ID=85354923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211352669.8A Active CN115755890B (en) 2022-11-01 2022-11-01 Weeding task path planning method and device based on data processing

Country Status (1)

Country Link
CN (1) CN115755890B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015064780A1 (en) * 2013-10-30 2015-05-07 주식회사 드림씨엔지 Intelligent unmanned robotic weeder
CN108762264A (en) * 2018-05-22 2018-11-06 重庆邮电大学 The dynamic obstacle avoidance method of robot based on Artificial Potential Field and rolling window
CN109901594A (en) * 2019-04-11 2019-06-18 清华大学深圳研究生院 A kind of localization method and system of weed-eradicating robot
CN112363494A (en) * 2020-09-24 2021-02-12 深圳优地科技有限公司 Method and device for planning advancing path of robot and storage medium
CN115185285A (en) * 2022-09-06 2022-10-14 深圳市信诚创新技术有限公司 Automatic obstacle avoidance method, device and equipment for dust collection robot and storage medium

Also Published As

Publication number Publication date
CN115755890B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Broggi et al. The ARGO autonomous vehicle’s vision and control systems
CN101619984B (en) Mobile robot visual navigation method based on colorful road signs
Kenue Lanelok: Detection of lane boundaries and vehicle tracking using image-processing techniques-part i: Hough-transform, region-tracing and correlation algorithms
Liang et al. Video stabilization for a camcorder mounted on a moving vehicle
EP3594902B1 (en) Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle
CN102248947A (en) Object and vehicle detecting and tracking using a 3-D laser rangefinder
CN114365195A (en) Structural annotation
CN110262487B (en) Obstacle detection method, terminal and computer readable storage medium
WO2021072709A1 (en) Method for detecting and tracking target, system, device, and storage medium
CN113112491B (en) Cliff detection method, cliff detection device, robot and storage medium
EP3555854B1 (en) A method of tracking objects in a scene
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN113282088A (en) Unmanned driving method, device and equipment of engineering vehicle, storage medium and engineering vehicle
CN115755890B (en) Weeding task path planning method and device based on data processing
CN114170499A (en) Target detection method, tracking method, device, visual sensor and medium
CN115629600B (en) Multi-machine collaborative trapping method based on buffer Wino diagram in complex dynamic security environment
CN116310380A (en) Road edge detection method, device, vehicle and storage medium
CN113607161B (en) Robot navigation path width acquisition system, method, robot and storage medium
CN111208785B (en) Motion control method and device
US20220053124A1 (en) System and method for processing information from a rotatable camera
CN115123291A (en) Behavior prediction method and device based on obstacle recognition
CN114565669A (en) Method for fusion positioning of field-end multi-camera
Martinez et al. Map-based lane identification and prediction for autonomous vehicles
Yu et al. Moving object detection using an in-vehicle fish-eye camera
Oh et al. A modified sequential Monte Carlo Bayesian occupancy filter using linear opinion pool for grid mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant