CN116576859A - Path navigation method, operation control method and related device


Info

Publication number
CN116576859A
CN116576859A (application CN202310443224.9A)
Authority
CN
China
Prior art keywords
path
planting
target
machine body
small
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310443224.9A
Other languages
Chinese (zh)
Inventor
李龙喜
黄志鑫
李扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202310443224.9A
Publication of CN116576859A
Legal status: Pending (current)

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/20 — Instruments for performing navigational calculations
    • G01C 21/1656 — Navigation by dead reckoning, i.e. by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/02 — Control of position or course in two dimensions
    • G05D 1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 — Control specially adapted to land vehicles using optical position detecting means
    • G05D 1/027 — Control specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)

Abstract

The invention provides a path navigation method, an operation control method and a related device, and relates to the technical field of navigation. While the machine body is being navigated along the current path, when it is determined that the machine body enters the area to be turned, a turning path can be generated based on the current position of the machine body and the path intersection point of the current path and the target path, and the turning path can be used to guide the machine body to turn from the current path onto the target path. Because the turning path is generated in time, as soon as the machine body is determined to have entered the area to be turned, and is built from the current position of the machine body and the path intersection point of the current path and the target path, the machine body can turn onto the target path accurately and without deviation, so that road switching between the current path and the target path is realized accurately and efficiently.

Description

Path navigation method, operation control method and related device
Technical Field
The present invention relates to the field of navigation technologies, and in particular, to a path navigation method, an operation control method, and a related device.
Background
In the field of agricultural automation, unmanned equipment generally adopts fixed-point navigation technology to realize automatic operation. Actual operation scenes are complex and varied: in a planting greenhouse, for example, there are not only planting lanes between the planting rows but also a main road surrounding the planting rows, and it must be considered how the unmanned equipment can switch roads between the main road and the planting lanes accurately and efficiently while performing its operation.
Therefore, in such scenes with complex roads, how to realize road switching accurately and efficiently while the unmanned device navigates is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a path navigation method, an operation control method and a related device, which can accurately and efficiently realize road switching between a current path and a target path.
Embodiments of the invention may be implemented as follows:
in a first aspect, the present invention provides a path navigation method, including:
navigating the machine body along the current path;
when it is determined that the machine body enters an area to be turned, generating a turning path based on the current position of the machine body and a path intersection point of the current path and the target path;
the turning path is used for guiding the machine body to turn from the current path to a target path.
Optionally, after the step of navigating the body along the current path, the method further includes:
when a set reference object is identified, or when the accumulated travel length of the machine body on the current path reaches a set length, or when the machine body travels to the end point of the current path, determining the distance between the current position and the path intersection point;
and if the distance between the current position and the path intersection point is smaller than or equal to a set distance, determining that the machine body has entered the area to be turned.
Optionally, the current path is a main road path of a target scene, and the target path is a planting small road path of the target scene;
the step of generating a turning path based on a path intersection of the current position of the body and the current path and the target path includes:
generating a turning path from the arterial road path to the planting small road path by taking a first set point of a turning inner side area of the machine body as a circle center and taking a distance between the current position and the path intersection point as a radius;
wherein, the connection line between the first set point and the current position is perpendicular to the main road path.
In an optional embodiment, the current path is a planting lane path of a target scene, and the target path is a main lane path of the target scene;
the step of generating a turning path based on a path intersection of the current position of the body and the current path and the target path includes:
generating a turning path from the planting small road path to the main road path by taking a second set point of a turning inner side area of the machine body as a circle center and taking a distance between the current position and the path intersection point as a radius;
Wherein the line connecting the second set point and the current position is perpendicular to the planting lane path.
Optionally, the method further comprises:
when the machine body enters the arterial road path from the planting small road path, repositioning is carried out based on the three-dimensional map of the target scene, so that the pose information obtained through the visual inertial odometer of the machine body is in the same coordinate system as the three-dimensional map.
Optionally, the current path is a planting small road path of a target scene, and the target path is a main road path of the target scene;
the step of navigating the body along the current path comprises the following steps:
acquiring a small road scene image acquired by the visual inertial odometer of the machine body, the small road scene image being acquired when the machine body enters a planting small road of the target scene;
generating a target straight line based on the small road scene image;
extracting a plurality of small road navigation path points from the target straight line to obtain the planting small road path;
and navigating the machine body according to the planting small road path.
Optionally, the step of generating the target straight line based on the small road scene image includes:
performing three-dimensional reconstruction based on the small road scene image to obtain a three-dimensional reconstruction map;
extracting three-dimensional point information on the two sides of the planting small road from the three-dimensional reconstruction map;
and fitting the target straight line based on the three-dimensional point information on the two sides of the planting small road.
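For illustration only, the following sketch (not part of the patent; all names are hypothetical) shows one possible way to perform this fitting step, assuming the reconstructed boundary points have been projected onto the ground plane and the lane is roughly aligned with the x axis:

```python
import numpy as np

def fit_boundary_line(points_xy):
    """Least-squares fit y = k*x + b to the ground-plane projection of the
    three-dimensional points on one side of the planting small road."""
    pts = np.asarray(points_xy, dtype=float)
    k, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return k, b

def target_line_from_boundaries(left_points_xy, right_points_xy):
    """Fit both boundaries and average them, giving a target straight line
    running along the middle of the planting small road."""
    k_l, b_l = fit_boundary_line(left_points_xy)
    k_r, b_r = fit_boundary_line(right_points_xy)
    return (k_l + k_r) / 2.0, (b_l + b_r) / 2.0   # slope and intercept of the center line
```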
Optionally, the step of generating the target straight line based on the small road scene image includes:
identifying planting bracket information and planting row information from the small road scene image;
determining a planting bracket central line based on the planting bracket information, and determining a crop edge fitting central line based on the planting row information;
and calculating a target fitting central line between the central line of the planting bracket and the fitting central line of the crop edge, and converting the target fitting central line into the target straight line.
Optionally, the step of generating the target straight line based on the small road scene image includes:
obtaining a local image of the small road scene image based on the equipment central axis of the machine body;
converting the local image to obtain a target aerial view;
performing crop identification on the target aerial view, and determining a first planting row area and a second planting row area on the two sides of the planting small road;
and obtaining the target straight line according to the center lines of the first planting row area and the second planting row area and a preset scale factor, wherein the preset scale factor is used for representing the relation between the distance between pixels in the target aerial view and the actual distance.
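A hedged sketch of how the two planting row areas, their center lines and the preset scale factor might be combined; it assumes the planting-row masks have already been segmented from the target aerial view, and all function and variable names are illustrative rather than taken from the patent:

```python
import numpy as np

def target_line_from_birdseye(row1_mask, row2_mask, meters_per_pixel):
    """row1_mask / row2_mask: boolean images marking the first and second planting
    row areas in the target aerial view.  Returns the target straight line in
    metric coordinates as (slope, intercept), using the preset scale factor."""
    def row_centers(mask):
        ys, xs = np.nonzero(mask)
        rows = np.unique(ys)
        cols = np.array([xs[ys == r].mean() for r in rows])  # mean column per image row
        return rows, cols
    r1, c1 = row_centers(row1_mask)
    r2, c2 = row_centers(row2_mask)
    common = np.intersect1d(r1, r2)
    c1_map = dict(zip(r1, c1))
    c2_map = dict(zip(r2, c2))
    # lane center in pixels: midway between the two planting-row center lines
    mid = np.array([(c1_map[r] + c2_map[r]) / 2.0 for r in common])
    # preset scale factor: pixel distance -> actual distance
    x_m = common * meters_per_pixel
    y_m = mid * meters_per_pixel
    k, b = np.polyfit(x_m, y_m, deg=1)
    return k, b                                   # target straight line in metric coordinates
```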
Optionally, the step of extracting a plurality of small road navigation path points from the target straight line to obtain the planting small road path includes:
taking the entry point of the planting small road as a starting point, extracting a point from the target straight line every preset length and converting it into the equipment body coordinate system, until the exit point of the planting small road is extracted from the target straight line, so as to obtain the plurality of small road navigation path points and generate the planting small road path.
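The waypoint extraction described above can be sketched as follows; `to_body_frame` is a hypothetical transform standing in for the conversion into the equipment body coordinate system, and the exact sampling policy is an assumption:

```python
import numpy as np

def extract_lane_waypoints(entry_point, exit_point, step, to_body_frame):
    """Starting from the lane entry point, take one point every `step` meters along
    the target straight line until the exit point is reached, converting each point
    into the equipment body coordinate system with the supplied transform."""
    p0 = np.asarray(entry_point, dtype=float)
    p1 = np.asarray(exit_point, dtype=float)
    length = np.linalg.norm(p1 - p0)
    direction = (p1 - p0) / length
    waypoints = [to_body_frame(p0 + d * direction) for d in np.arange(0.0, length, step)]
    waypoints.append(to_body_frame(p1))           # make sure the exit point itself is included
    return waypoints
```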
Optionally, the current path is a main road path of a target scene, and the target path is a planting small road path of the target scene;
before the step of navigating the body along the current path, the method further comprises:
acquiring a three-dimensional map of the target scene and a main road path in the three-dimensional map; the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of the machine body;
Repositioning is performed based on the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map.
Optionally, the step of navigating the body along the current path includes:
acquiring running information of the machine body relative to the initialization completion time or the last positioning fusion time based on the images acquired by the visual inertial odometer in real time;
when the running information meets preset conditions, determining a matching map area from the three-dimensional map according to the current pose information obtained by the visual inertial odometer;
positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information;
and navigating the machine body according to the fusion positioning information, the three-dimensional map and the arterial road path.
In a second aspect, the present invention provides a job control method, including:
receiving a job instruction;
determining a region to be operated which needs to be operated currently from a target scene based on an operation instruction;
according to the path navigation method of any one of the foregoing embodiments, path navigation is performed on the unmanned device, so as to assist the unmanned device in performing operations on the area to be worked.
Optionally, the job instruction includes a designated job start point and a designated job end point; the target scene comprises a plurality of planting rows;
the step of determining the area to be worked for the current required work from the target scene based on the work instruction comprises the following steps:
finding out at least one row of target planting rows from the designated job starting point to the designated job ending point in the target scene;
and taking the area where all the target planting rows are located as the area to be operated.
Optionally, the job instruction includes a specified job type; the target scene comprises a plurality of planting rows;
the step of determining the area to be worked for the current required work from the target scene based on the work instruction comprises the following steps:
obtaining crop growth information of various planting rows in the target scene;
searching the target scene for at least one row of target planting rows of which the crop growth information matches the designated operation type;
and taking the area where all the target planting rows are located as the area to be operated.
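As a rough illustration of the second variant (all names are hypothetical; the matching rule between operation types and growth records is assumed to be configured elsewhere):

```python
def area_to_be_worked(planting_rows, crop_growth_info, specified_job_type, matches):
    """planting_rows: iterable of row identifiers; crop_growth_info: mapping from row
    identifier to its crop growth record; matches(job_type, record) -> bool decides
    whether a row needs the specified job.  The area covered by the returned rows
    constitutes the area to be worked."""
    return [row for row in planting_rows
            if matches(specified_job_type, crop_growth_info[row])]
```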
In a third aspect, the present invention provides a path navigation apparatus comprising:
the path navigation module is used for navigating the machine body along the current path;
The turning control module is used for generating a turning path based on the current position of the machine body and the path intersection point of the current path and the target path when it is determined that the machine body enters the area to be turned; the turning path is used for guiding the machine body to turn from the current path to the target path.
In a fourth aspect, the present invention provides a job control apparatus including:
the instruction receiving module is used for receiving the operation instruction;
the operation control module is used for determining a region to be operated which needs to be operated currently from a target scene based on an operation instruction;
the operation control module is further configured to perform path navigation on the unmanned device according to the path navigation method in any one of the foregoing embodiments, so as to assist the unmanned device in performing an operation on the area to be operated.
In a fifth aspect, the present invention provides an unmanned device comprising a machine body, a memory and a processor, wherein the memory is arranged on the machine body and is used for storing a program, and the processor, when executing the program, navigates the machine body according to the path navigation method of the first aspect and/or controls the operation of the machine body according to the operation control method of the second aspect.
In a sixth aspect, the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements: the path navigation method in the first aspect and/or the job control method in the second aspect.
Compared with the prior art, the embodiment of the invention provides a path navigation method, an operation control method and a related device. While the machine body is being navigated along the current path, when it is determined that the machine body enters the area to be turned, a turning path can be generated based on the current position of the machine body and the path intersection point of the current path and the target path, and the turning path can be used to guide the machine body to turn from the current path onto the target path. Because the turning path is generated in time, as soon as the machine body is determined to have entered the area to be turned, and is built from the current position of the machine body and the path intersection point of the current path and the target path, the machine body can turn onto the target path accurately and without deviation, so that road switching between the current path and the target path is realized accurately and efficiently.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a planting greenhouse according to an embodiment of the present disclosure.
Fig. 2 is a first flowchart of a path navigation method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a path in a planting greenhouse according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an area to be turned in a planting greenhouse according to an embodiment of the present invention.
Fig. 5 is a second flowchart of a path navigation method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a reference object set in a planting greenhouse according to an embodiment of the present disclosure.
Fig. 7 is a second schematic view of a planting greenhouse according to an embodiment of the present disclosure.
Fig. 8 is a third flowchart of a path navigation method according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a turning path in a planting greenhouse according to an embodiment of the present invention.
Fig. 10 is a fourth flowchart of a path navigation method according to an embodiment of the present invention.
Fig. 11 is a schematic view of a scenario for extracting a path of a planting lane in a planting greenhouse according to an embodiment of the present disclosure.
Fig. 12 is a schematic view of a scenario for extracting a target straight line from a planting greenhouse according to an embodiment of the present disclosure.
Fig. 13 is a flow chart of a job control method according to an embodiment of the present invention.
Fig. 14 is a schematic structural diagram of a path navigation device according to an embodiment of the present invention.
Fig. 15 is a schematic structural diagram of a job control device according to an embodiment of the present invention.
Fig. 16 is a schematic structural diagram of an unmanned device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In the field of agricultural automation, unmanned equipment generally adopts fixed-point navigation technology to realize automatic operation. Actual operation scenes are complex and varied. For example, referring to fig. 1, the planting greenhouse shown in fig. 1 comprises a plurality of planting rows (planting rows 1-N); planting lanes for the unmanned equipment to pass through are arranged between adjacent planting rows, and a main road for the unmanned equipment to pass through surrounds the planting rows 1-N.
When the unmanned equipment works, it usually needs to turn from the main road into a planting lane to spray pesticide on or pick fruit from the planting rows, and to turn out of the planting lane and return to the main road after the work tasks for the planting rows on both sides of that lane are completed. For example, assuming that the unmanned equipment is an unmanned vehicle, the unmanned vehicle in fig. 1 needs to enter the planting lane between planting row 1 and planting row 2 because of its work task, and returns to the main road after the operation on planting rows 1 and 2 on both sides of the lane is completed. In such a scene, it is necessary to consider how to make the unmanned vehicle travel along the center of the road without touching the crops planted on both sides, and how to realize road switching accurately and efficiently while the unmanned vehicle navigates.
In order to solve the above technical problems, the embodiment of the invention provides a path navigation method which generates a turning path in time when it is determined that the machine body enters the area to be turned; because the turning path is generated based on the current position of the machine body and the path intersection point of the current path and the target path, the machine body can turn onto the target path accurately and without deviation, so that road switching between the current path and the target path is realized accurately and efficiently. The method is described in detail below with reference to the accompanying drawings.
The path navigation method provided by the embodiment of the invention can be applied to unmanned equipment. The unmanned equipment may be an unmanned vehicle, an unmanned ship or the like, an unmanned aerial vehicle (such as an agricultural unmanned aerial vehicle, a forestry unmanned aerial vehicle or an aerial survey unmanned aerial vehicle), or a robot (such as an agricultural robot or a sweeping robot), and comprises a machine body, such as an unmanned vehicle body, an unmanned aerial vehicle body or a robot body. The device type of the unmanned equipment can be determined according to the actual application scene, and is not limited by the embodiment of the invention.
Taking unmanned equipment as an example of an unmanned vehicle, the path navigation method provided by the embodiment of the invention is described below.
Referring to fig. 2, fig. 2 is a flow chart of a path navigation method according to an embodiment of the present invention, where an execution subject of the method may be the unmanned device, and the method includes the following steps:
s101, navigating the machine body along the current path.
In this embodiment, the target scene to be navigated may be an outdoor or indoor scene, for example, an outdoor area (e.g., an orchard) where an unmanned vehicle is required to perform management work or transportation work, or an indoor area (e.g., in a planting greenhouse) where an unmanned vehicle is required to perform management work.
Taking a target scene as an example of a planting greenhouse, with reference to fig. 3, path navigation according to an embodiment of the present invention can be divided into the following two scenes:
scene one: when the unmanned vehicle runs on the main road, for example, the unmanned vehicle runs to a first position in fig. 3, the current path is the main road path of the main road in the target scene, and the target path is the planting small road path of the planting small road in the target scene;
scene II: when the unmanned vehicle runs on the planting lane, for example, the unmanned vehicle runs to a second position in fig. 3, the current path is a planting lane path of the planting lane in the target scene, and the target path is a main road path of the main road in the target scene.
And S104, when it is determined that the machine body enters the area to be turned, generating a turning path based on the current position of the machine body and the path intersection point of the current path and the target path.
In this embodiment, the turning path is generated when it is determined that the body enters the area to be turned, so that the body can accurately cut into the target path from the current path without deviation. The unmanned vehicle may store in advance the position coordinates of the two path intersections of the planting lane path and the arterial road path of each planting lane.
In an optional implementation manner, in order to ensure the navigation effect, the main road path and the planting small road paths can be obtained by mapping the target scene in advance; after the mapping is completed, the two path intersection points between the main road path and the extension line of the planting small road path of each planting small road can be calculated and stored.
In another alternative implementation, to reduce the storage pressure of the unmanned vehicle, only the main road path may be obtained by mapping the target scene in advance, while the planting small road path is generated in real time when the unmanned vehicle turns from the main road into the planting small road. After the main road path has been obtained from the map, the unmanned vehicle can calculate the center line of the planting small road based on the collected images when driving towards the planting small road, then calculate the two intersection points between the extension line of this center line and the main road path, and store them as the two path intersection points of the main road path and the subsequent planting small road path.
In practical application, the arterial road path and the planting path can be obtained by adopting any one of the two implementation modes, and the method is not limited herein.
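For illustration, a minimal sketch (hypothetical helpers, not taken from the patent) of computing the path intersection points between the extended center line of a planting small road and the stored main road path, treated here as a 2-D polyline:

```python
import numpy as np

def line_segment_intersection(p, d, a, b, eps=1e-9):
    """Intersect the infinite line x = p + t*d (the extended planting small road
    center line) with one straight main road segment from a to b; returns the
    intersection point, or None if the segment is not crossed."""
    p, d, a, b = (np.asarray(v, dtype=float) for v in (p, d, a, b))
    e = b - a
    denom = d[0] * e[1] - d[1] * e[0]             # 2-D cross product cross(d, e)
    if abs(denom) < eps:
        return None                                # parallel: no intersection
    s = (d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / denom
    return a + s * e if 0.0 <= s <= 1.0 else None

def lane_road_intersections(lane_point, lane_dir, road_polyline):
    """Collect intersections of the extended center line with every straight piece of
    the main road polyline; for a lane crossing the road loop, two points (the two
    path intersection points, e.g. K1 and K2) are expected."""
    hits = []
    for a, b in zip(road_polyline[:-1], road_polyline[1:]):
        q = line_segment_intersection(lane_point, lane_dir, a, b)
        if q is not None:
            hits.append(q)
    return hits
```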
In this embodiment, the path intersection point of the current path and the target path is a relative concept, and needs to be determined by combining the position of the unmanned vehicle in the target scene. For example, referring to fig. 3, when the unmanned vehicle is located at the first position and needs to travel to the planting lane between planting row 1 and planting row 2, at this time, the intersection point of the current path and the target path is K1; when the unmanned vehicle is located at the second position and needs to drive back to the main road from the planting small road between the planting row 1 and the planting row 2, at the moment, the intersection point of the current path and the target path is K2.
According to the path navigation method provided by the embodiment of the invention, while the machine body is navigated along the current path, a turning path is generated in time when it is determined that the machine body enters the area to be turned. Because the turning path is generated based on the current position of the machine body and the path intersection point of the current path and the target path, the machine body can turn onto the target path accurately and without deviation, so that road switching between the current path and the target path is realized accurately and efficiently; moreover, no satellite positioning device is required, and accurate navigation of the machine body can be ensured even without satellite positioning data.
In an alternative embodiment, the area to be turned may depend on the distance d between the entrance center point or exit center point of the planting lane and the path intersection point of the planting lane path and the arterial road path: it may be a circular area centered on that path intersection point with d as radius, or a square area centered on that path intersection point with 2d as side length.
Assuming that the area to be turned is a circular area, when the unmanned vehicle is located at the first position (when running towards K1), and needs to enter a planting small road between the planting row 1 and the planting row 2, the center point of the entrance of the planting small road is M1, the intersection point of the current path and the target path is K1, the distance between M1 and K1 is d, and the area to be turned is a circular area taking K1 as the center and d as the radius.
Referring to fig. 4, when the unmanned vehicle travels toward M2, it is required to return to the main road from the small planting road between the planting row 1 and the planting row 2, the exit center point of the small planting road is M2, the intersection point of the current path and the target path is K2, the distance between M2 and K2 is d (assuming equal distance with M1 and K1), and the area to be turned is a circular area with K2 as the center and d as the radius.
It should be noted that the foregoing examples are merely examples, and the shape of the area to be turned is not limited herein.
Optionally, a VIO (Visual-Inertial Odometry, visual-inertial odometer) may be mounted on the machine body. VIO is an algorithm that fuses data from a camera and an IMU (Inertial Measurement Unit) to compute the position of the device in space; the camera may be a binocular camera, a depth camera, etc., without any limitation. VIO technology integrates visual positioning and inertial navigation positioning, has the advantages of a high update rate, small drift and high robustness, and is widely applied in positioning and navigation systems.
When the VIO is installed on the machine body, auxiliary installation and angle adjustment can be performed through the display, so that the camera can be ensured to acquire clear pictures in front. The display may be a display mounted on the unmanned apparatus itself, or may be a display of another apparatus communicatively connected to the unmanned apparatus, for example, a display of a mobile terminal (personal computer, smart phone, tablet, apparatus controller, etc.) of the user, and is not limited thereto.
In the running process of the unmanned vehicle, the visual inertial odometer can acquire images of the current scene at intervals of set time or intervals of preset running length, or acquire the images in real time. In practical application, different image acquisition modes can be adopted in different scenes in combination with practical requirements. For example, the image may be acquired once every 2s of the visual odometer, or once every time the travel length of the unmanned vehicle reaches 0.5m of the visual odometer, or the like, which is not limited herein.
In an alternative embodiment, in order to ensure that the body can turn in time and directly and accurately reach the target path, a turning path may be generated after determining that the body enters the area to be turned, so as to perform turning. Correspondingly, referring to fig. 5 on the basis of fig. 2, after step S101, the path navigation method provided in the embodiment of the present invention may further include steps S102 to S103:
s102, when the set reference object is identified, or the accumulated running length of the machine body on the current path reaches the set length, or the machine body runs to the end point of the current path, determining the distance between the current position of the machine body and the path intersection point of the current path and the target path.
And S103, if the distance between the current position and the intersection point of the path is smaller than or equal to the set distance, determining that the machine body enters the area to be turned.
In this embodiment, in scene one (the unmanned vehicle running on the main road), it is necessary to determine when to turn into the planting lane; in scene two (the unmanned vehicle running on the planting lane), it is necessary to determine when to turn back to the arterial road. As the unmanned vehicle travels, when it approaches a planting lane that it needs to enter or approaches the end of a planting lane, it can start detecting in real time the distance between the current position of the machine body and the path intersection point of the current path and the target path; once this distance is smaller than or equal to the set distance, it can be determined that the machine body has entered the area to be turned.
Alternatively, the set distance may depend on the distance between the entrance center point or the exit center point of the planting lane and the path intersection of the planting lane and the arterial road path. Referring to fig. 4, the set distance may be set to d, which may be 1m or 1.5m, etc., and this example is only an example, and the size of the specific set distance depends on the actual distance between the planting row and the arterial road path in the actual target scene, which is not limited herein.
In a first possible implementation, a set reference (e.g., a planting rack) may be provided at both end positions of the planting row. In the running process of the unmanned vehicle, the collected images (collected at intervals of set time length, collected at intervals of preset running length or collected in real time) can be subjected to image recognition, so that whether the unmanned vehicle approaches to the small planting way or approaches to the tail end of the small planting way is determined according to whether a set reference object is recognized. The image recognition may employ a pre-trained object detection algorithm or feature recognition algorithm, etc.
Referring to fig. 6, in the case where set references (set references 1 to 4) are provided at both ends of planting row 1 and planting row 2, when the unmanned vehicle travels toward the point K1 outside planting row 1, if set reference 3 and/or set reference 4 is recognized from the acquired image, the unmanned vehicle is considered to be approaching the planting lane; detection of the distance s between the current position of the body and the path intersection point of the current path and the target path (K1 in fig. 6) can then be started, and when s is less than or equal to the set distance, it can be determined that the body has entered the area to be turned.
Still referring to fig. 6, when the unmanned vehicle travels toward the point M2 in the planting lane between planting rows 1 and 2 and set reference 1 and set reference 2 are recognized from the acquired images, the unmanned vehicle is considered to be close to the end of the planting lane; detection of the distance s between the current position of the body and the path intersection point of the current path and the target path (K2 in fig. 6) can then be started, and when s is less than or equal to the set distance, it can be determined that the body has entered the area to be turned.
In a second possible implementation manner, the unmanned vehicle can determine whether the machine body approaches a planting small channel needing to enter or approaches the tail end of the planting small channel by judging whether the accumulated running length of the machine body on the current path reaches a set length or not when the unmanned vehicle runs.
When the unmanned vehicle runs on the main road, the set length can represent the pre-stored distance from the entry point corresponding to the replenishment point of the target scene to the area to be turned of the first planting small road. Referring to fig. 7, assuming that the unmanned vehicle enters the main road at the entry point S1 corresponding to the replenishment point and runs counterclockwise, the set length is the length of the arterial road path between the entry point S1 and the point S2 of the area to be turned. It should be noted that, during actual driving, the unmanned vehicle may need to turn into another planting small road immediately after returning to the main road; therefore this method can be used to judge whether a planting small road is being approached right after the vehicle enters the main road, while the method of the first implementation can be used when other planting small roads need to be entered.
When the unmanned vehicle runs on the planting small road, the set length can be the length of a pre-stored planting row, and the length of the planting row is the distance between M1 and M2 in combination with FIG. 7. When the accumulated running length of the unmanned vehicle in the planting small road reaches the set length, the distance between the current position of the machine body and the K2 point is required to be detected so as to judge whether the unmanned vehicle enters the area to be turned.
In a third possible implementation, when the unmanned vehicle is driving on a planting lane, the end of the planting lane path may be considered to be reached when the traveled length equals the length of the planting lane, or when the lane center line can no longer be extracted from the captured lane scene image. The distance between the current position of the machine body and the intersection point of the planting small road path and the main road path then needs to be detected so as to judge whether the machine body has entered the area to be turned.
It should be noted that, in the application process, one of the three implementation manners for determining that the body approaches the planting lane or the end of the planting lane may be used alone, or may be used in combination with each other, and the specific application is based on the actual situation and is not limited herein.
It can be understood that when the distance d1 between the current position of the body and the path intersection point of the current path and the target path is detected in step S103, three situations may occur: d1 > d, d1 = d and d1 < d. When d1 > d, the unmanned vehicle needs to continue traveling until d1 = d is satisfied, at which moment the body just enters the area to be turned and a turning path can be generated; when d1 = d or d1 < d, the body has just entered or has already entered the area to be turned, and the turning path can be generated directly.
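A minimal sketch of this check (the set distance d and the stored intersection coordinates are assumed to be available; names are illustrative):

```python
import math

def entered_turn_area(current_position, path_intersection, set_distance_d):
    """Return (entered, d1): entered becomes True as soon as the distance d1 from the
    body's current position to the path intersection point is <= the set distance d."""
    d1 = math.dist(current_position, path_intersection)
    return d1 <= set_distance_d, d1

# While approaching the lane entrance: keep driving while d1 > d, and generate the
# turning path as soon as d1 <= d (covering both the d1 == d and d1 < d cases).
```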
In one possible implementation, when the unmanned vehicle is driving on a main road and needs to turn into a planting small road, the current path is the main road path, and the target path is the planting small road path. Correspondingly, referring to fig. 8 on the basis of fig. 2, step S101 may include sub-step S101A, and step S104 may include sub-step S104A:
and S101A, navigating the machine body along the arterial road path based on the three-dimensional map.
And S104A, when the machine body is determined to enter the area to be turned, generating a turning path from the arterial road path to the planting small road path by taking the first set point of the turning inner side area of the machine body as a circle center and taking the distance between the current position and the path intersection point as a radius.
In this embodiment, the line connecting the first set point and the current position is perpendicular to the arterial road path. That is, the distance between the current position and the path intersection is d1, and the first set point is a point in the turning inner region at a vertical distance d1 from the current position of the body.
For example, assuming that the area to be turned is a circular area, with reference to diagram (a) in fig. 9, the unmanned vehicle travels toward the point K1 on the arterial road, and when it is determined that the body enters the area to be turned, there are two cases:
In the first case, when the unmanned vehicle determines that the body enters the area to be turned, the current position of the body is S2 (just touching the area to be turned), then the turning path generated at this time is just a quarter arc with S2 as a starting point, M1 as an ending point and d as a radius, and the first set point is O1. Then the unmanned vehicle can directly enter the planting small road from the point M1 through the turning path from the point S2, so that the unmanned vehicle can accurately enter the center point of the entrance of the planting small road according to the turning path;
in the second case, when the unmanned vehicle determines that the body has entered the area to be turned and the current position of the body is S3 (the body has actually already entered the area to be turned), the distance between S3 and K1 is d2, the turning path generated at this moment is a quarter arc with S3 as the starting point, S4 as the end point and d2 as the radius, and the first set point is O2. The unmanned vehicle can then travel from S3 to S4 along the turning path and enter the planting small road after continuing to travel to M1. In this way the unmanned vehicle accurately reaches the center line of the planting small road according to the turning path, and can also adjust its attitude in time while driving from S4 to M1 so that its heading is consistent with the center line of the planting small road.
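A sketch of the quarter-arc construction for the main-road-to-lane turn, under the simplifying assumption of a local frame in which the body drives along +x on the main road and the planting small road leaves the road along +y (turning left) or -y (turning right); in the patent the frame would be aligned with the stored main road path, and all names here are illustrative:

```python
import numpy as np

def quarter_arc_turn(current_pos, intersection, turn_left=True, n_points=20):
    """Build a quarter-circle turning path whose radius is the distance d1 between
    the current position and the path intersection point; the circle center (the
    first set point) lies on the turning inner side, perpendicular to the main road
    at the current position."""
    p = np.asarray(current_pos, dtype=float)
    k = np.asarray(intersection, dtype=float)
    d1 = np.linalg.norm(k - p)
    side = 1.0 if turn_left else -1.0
    center = p + side * d1 * np.array([0.0, 1.0])         # first set point O
    start_angle = np.arctan2(p[1] - center[1], p[0] - center[0])
    angles = start_angle + side * np.linspace(0.0, np.pi / 2.0, n_points)
    return center + d1 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
```

With d1 = d the arc ends exactly at the lane entrance center point M1; with d1 < d it ends on the lane center line (point S4 in fig. 9), after which the body drives straight on to M1.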
In another possible implementation, when the unmanned vehicle is traveling on a small planting road and needs to turn back to the arterial road, the current path is the small planting road path and the target path is the arterial road path. Correspondingly, referring to fig. 10 on the basis of fig. 2, step S101 may include sub-step S101a, and step S104 may include sub-step S104a:
s101a, navigating the machine body along the path of the planting small path.
And S104a, when the machine body is determined to enter the area to be turned, generating a turning path from the planting small road path to the arterial road path by taking a second set point of the turning inner side area of the machine body as a circle center and taking the distance between the current position and the path intersection point as a radius.
In this embodiment, the line connecting the second setpoint with the current position is perpendicular to the path of the planting lane. That is, the distance between the current position and the path intersection is d2, and the second set point is a point in the turning inner region at a vertical distance d2 from the current position of the body.
For example, referring to fig. 9, after the unmanned vehicle travels from the position in diagram (a) to the position in diagram (b) of fig. 9, when it is determined that the body enters the area to be turned and the current position of the body is M2 (the body has just entered the area to be turned), the turning path generated at this moment is a quarter arc with M2 as the starting point, S5 as the end point and d as the radius, and the second set point is O3. The unmanned vehicle can then travel from M2 to S5 along the turning path, which ensures that it accurately returns to the arterial road path of the arterial road according to the turning path.
The above describes the two scenes in which the unmanned vehicle turns from the main road into a planting small road and turns from a planting small road back to the main road. The following describes how the unmanned vehicle is navigated on the arterial road and on the planting roadway, respectively.
In an alternative embodiment, the navigation of the body along the arterial road path by the drone may rely on a three-dimensional map. Correspondingly, referring to fig. 8, when the unmanned vehicle runs on the arterial road, the path navigation method provided by the embodiment of the invention may further include steps S001 and S002 before step S101A.
S001, acquiring a three-dimensional map of the target scene and a main road path in the three-dimensional map.
In this embodiment, the three-dimensional map is obtained by mapping the arterial road of the target scene by the visual inertial odometer of the machine body. The three-dimensional map may include a plurality of three-dimensional points and key frame information and pose information associated with each three-dimensional point. In one case, the arterial road path may be a map-building path of a three-dimensional map, including all three-dimensional points; in another case, to improve efficiency of the fixed-point navigation, the arterial road path may include a plurality of arterial road navigation path points extracted from the map-build path.
It should be noted that the process of generating the three-dimensional map and the corresponding track information with the visual inertial odometer framework and extracting the main road navigation path points belongs to existing algorithm frameworks and is not described here again.
And S002, repositioning based on the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map.
In this embodiment, when the unmanned vehicle needs to perform fixed-point navigation on the arterial road, the machine body first needs to be repositioned according to the three-dimensional map; after the repositioning succeeds, the pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map, and the machine body is then navigated along the arterial road path starting from the successfully repositioned position.
It can be understood that, for the scenario that the unmanned vehicle is driving on the planting roadway and needs to turn back to the main road, after executing the step S104a, the machine body can enter the main road path from the planting roadway path according to the turning path, and after completing the turning, the unmanned vehicle needs to reposition based on the three-dimensional map of the target scenario, so that the pose information obtained by the visual inertial odometer of the machine body and the three-dimensional map are in the same coordinate system, and then the machine body is continuously navigated along the main road path. The process of repositioning the machine body is specifically the prior art, and will not be described in detail herein.
In an alternative embodiment, after the repositioning of the machine body is successful, the pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map, and then a navigation mode is entered, in which the pose information of the machine body is obtained in real time through the VIO, so that the machine body can be navigated according to the pose information of the machine body, the three-dimensional map and the arterial road path.
In the navigation mode, real-time pose information of the unmanned vehicle, including a real-time positioning point, is acquired through the VIO; then, based on the three-dimensional map, an adjustment planned path between the real-time positioning point and the arterial road path point is generated, and the unmanned vehicle is controlled to travel according to this adjustment planned path. Repeating this cycle makes the travel track of the unmanned vehicle get ever closer to the arterial road path points, until it finally travels along the arterial road path points.
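A rough sketch of that loop; `vio_pose` and `drive_towards` stand in for the odometry reading and the motion control along the adjustment planned path, which the patent leaves to existing components:

```python
import math

def follow_road_path(road_waypoints, vio_pose, drive_towards, reach_tol=0.2):
    """Repeatedly steer the body from its real-time VIO positioning point toward the
    next arterial road waypoint, so the travel track converges onto the road path."""
    for target in road_waypoints:
        while True:
            position = vio_pose()                          # real-time positioning point
            if math.dist(position, target) <= reach_tol:   # waypoint reached
                break
            drive_towards(position, target)                # follow a short adjustment planned path
```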
Fixed-point navigation techniques can be broadly divided into outdoor fixed-point navigation and indoor fixed-point navigation. The outdoor fixed-point navigation mainly adopts a satellite positioning navigation technology, namely, the fixed-point navigation is realized by means of satellite signals, and the fixed-point navigation difficulty is extremely high in a region without satellite signals.
In the prior art, indoor fixed-point navigation mainly adopts SLAM (Simultaneous Localization and Mapping) technology. SLAM technology includes laser SLAM, which uses a laser radar, and visual SLAM, which uses a visual odometer (for example, a camera). Laser SLAM is unfavorable for large-scale popularization and use because of its high equipment requirements, while the scene requirements of visual SLAM are too demanding, with large positioning deviations in scenes with weak texture, complex illumination and the like.
Therefore, in another alternative embodiment, navigation on the arterial road can also be performed by repositioning with the three-dimensional map and carrying out positioning fusion of the VIO positioning information and the repositioning information, so that the positioning accuracy is further improved. Correspondingly, the step S101A may include substeps S101A-1 to S101A-4:
S101A-1, acquiring running information of the machine body relative to the initialization completion time or the last positioning fusion time based on the images acquired by the visual inertial odometer in real time.
In this embodiment, the initialization completion time may refer to the moment at which the VIO completes initialization for the current fixed-point navigation. The last positioning fusion time may refer to the moment at which the VIO positioning information and the repositioning information were last fused.
Optionally, the running information may include at least one of a running time, a running distance and a turning curvature, to which the embodiment of the present invention is not limited in any way.
Optionally, the running time can be calculated from the moment at which the VIO currently acquires a main road scene image and the initialization completion time or the last positioning fusion time; the running distance can be calculated from the main road scene image currently acquired by the VIO and the image acquired at the initialization completion time or at the last positioning fusion time; and the turning curvature can be calculated from the same feature point in the main road scene image currently acquired by the VIO and in the previous few frames of that image.
S101A-2, when the running information meets the preset condition, determining a matching map area from the three-dimensional map according to the current pose information obtained by the visual inertial odometer.
In this embodiment, the preset condition may include at least one of the following: the running time is greater than a set time threshold, the running distance is greater than a set distance threshold, and the turning curvature is greater than a set curvature threshold.
Optionally, the set time threshold may be 5s, the set distance threshold may be 3m, the set curvature threshold may be 0.01, and the specific value may be flexibly set by the user according to the actual requirement, which is not limited in the embodiment of the present invention.
That is, if any one of the running time length being greater than the set time threshold (e.g., 5 s), the running distance being greater than the set distance threshold (e.g., 3 m), and the turning curvature being greater than the set curvature threshold (e.g., 0.01) is established, it is determined that the running information satisfies the preset condition.
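A sketch of this trigger check, using the example thresholds given above (5 s, 3 m, 0.01); the function name is illustrative:

```python
def fusion_needed(running_time_s, running_distance_m, turning_curvature,
                  time_thr=5.0, dist_thr=3.0, curv_thr=0.01):
    """Return True when any item of the running information exceeds its threshold,
    i.e. when a matching map area should be determined and positioning fusion run."""
    return (running_time_s > time_thr
            or running_distance_m > dist_thr
            or turning_curvature > curv_thr)
```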
In this embodiment, when the running information satisfies the preset condition, according to the current pose information obtained by the VIO, a three-dimensional point corresponding to the current pose information may be determined from the three-dimensional map, and then a map area within a set distance range (for example, within 1m around) of the three-dimensional point is acquired and used as a matching map area.
S101A-3, carrying out positioning fusion processing on the current pose information and repositioning pose information obtained by repositioning the matched map area, and obtaining fusion positioning information.
In this embodiment, after the matching map area is determined from the three-dimensional map, repositioning pose information can be obtained through repositioning the matching map area, that is, the current position point of the unmanned vehicle is obtained through the VIO, and pose information associated with the current position point is obtained from the matching map area according to the current position point, that is, repositioning pose information.
Alternatively, the substeps of S101A-3 may include S101A-31 to S101A-33:
S101A-31, determining a distance L between current pose information and repositioning pose information;
S101A-32, if the distance L is smaller than a preset distance threshold, taking the current pose information as fusion positioning information;
S101A-33, if the distance L is greater than or equal to a preset distance threshold, a matching result of the main road scene image acquired by the visual inertial odometer and the three-dimensional map is obtained, and fusion positioning information is determined according to the matching result.
Alternatively, the preset distance threshold may be flexibly set by the user according to the actual requirement, for example, the preset distance threshold may be 0.3m, which is given as an example only, and is not limited herein.
That is, if the distance L between the current pose information and the repositioning pose information is smaller than a preset distance threshold (e.g., 0.3 m), it is indicated that the VIO positioning accuracy is higher, and the current pose information may be directly used as the fusion positioning information. If the distance L between the current pose information and the repositioning pose information is greater than or equal to a preset distance threshold (e.g., 0.3 m), it is indicated that the VIO positioning accuracy is low, and the fusion positioning information needs to be determined by combining with the three-dimensional map.
In one possible implementation, the process of obtaining the matching result between the arterial road scene image acquired by the visual inertial odometer and the three-dimensional map in step S101A-33 may include sub-steps S101A-33A to S101A-33C.
S101A-33A, determining target pose information matched with the current pose information from the three-dimensional map.
S101A-33B, acquiring target key frame information associated with target pose information.
S101A-33C, determining the number of matching feature points and the average re-projection error between the main road scene image currently acquired by the visual inertial odometer and the target key frame information, so as to obtain a matching result; that is, the matching result includes the number of matching feature points and the average re-projection error.
It should be noted that the manner of calculating the number of matching feature points and the average re-projection error between the main road scene image currently acquired by the VIO and the target key frame information is known in the prior art and will not be described herein.
In another possible implementation, the process of determining the fused positioning information according to the matching result in step S101A-33 may include sub-steps S101A-33 a-S101A-33 b.
S101A-33a, if the number of the matched characteristic points is larger than a set number threshold value and the average re-projection error is smaller than the initial average re-projection error, the repositioning pose information is used as fusion positioning information.
S101A-33b, if the number of the matched feature points is not larger than a set number threshold, or the average re-projection error is not smaller than the initial average re-projection error, taking the current pose information as fusion positioning information.
The initial average re-projection error is obtained by matching the acquired main road scene image with the three-dimensional map when the visual inertial odometer is initialized. Initialization here refers to the initialization of the VIO performed when the next fixed-point navigation is carried out. Alternatively, the set number threshold may be 50; the specific value may be flexibly set by the user according to the actual requirement and is not limited herein.
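For illustration only, the decision logic of sub-steps S101A-31 to S101A-33b could be sketched as follows. Here `match_against_map` stands in for the prior-art feature matching step; all names and the example thresholds (0.3 m, 50) are assumptions made for this sketch.

```python
# Illustrative sketch of the fusion decision (S101A-31 to S101A-33b).
import numpy as np

def fuse_localization(current_pose_t, reloc_pose_t, match_against_map,
                      initial_avg_reproj_error,
                      dist_thresh=0.3, min_matches=50):
    """Return the fused position (translation part only, for brevity)."""
    L = np.linalg.norm(current_pose_t - reloc_pose_t)
    if L < dist_thresh:
        # VIO accuracy is considered sufficient.
        return current_pose_t
    # Otherwise compare the current arterial-road image with the 3-D map.
    num_matches, avg_reproj_error = match_against_map()
    if num_matches > min_matches and avg_reproj_error < initial_avg_reproj_error:
        return reloc_pose_t          # trust the relocalized pose
    return current_pose_t            # fall back to the VIO pose
```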
And S101A-4, navigating the machine body according to the fusion positioning information, the three-dimensional map and the arterial road path.
In this embodiment, the fused positioning information obtained according to steps S101A-1 to S101A-3 is obtained by performing positioning fusion processing on pose information obtained by VIO and repositioning pose information obtained by partial map information, so that the defect of low precision of VIO positioning in certain scenes (for example, complex illumination) can be overcome, and the positioning precision is further improved.
Optionally, after the VIO positioning information and the repositioning information are fused to obtain the fusion positioning information, the unmanned vehicle can navigate the machine body according to the fusion positioning information, the three-dimensional map and the main road path. Specifically, an adjustment planning path between the fusion positioning information and the main road path is generated based on the three-dimensional map, and the unmanned vehicle is controlled to travel according to the adjustment planning path, so that its travel track gradually approaches the main road path and the vehicle finally travels along the main road path.
In an alternative embodiment, when the machine body turns from the arterial road path into the planting lane according to the turning path, in order to reduce the mapping burden, the unmanned vehicle can generate the planting lane path in real time based on the collected lane scene image, so as to navigate according to the planting lane path. Correspondingly, step S101a may include sub-steps S101a-1 to S101a-4:
S101a-1, acquiring a lane scene image acquired by the visual inertial odometer of the machine body.
In this embodiment, the lane scene image may be acquired when the body turns from the arterial road path into the planting lane according to the turning path.
S101a-2, generating a target straight line based on the lane scene image.
In this embodiment, the target straight line is a straight line in actual scene coordinates. Alternatively, the target straight line may be obtained by the following three implementations:
First, when a binocular camera is used, the unmanned vehicle can process the lane scene image based on a three-dimensional reconstruction technique when entering the planting lane, so as to generate the target straight line.
Second, when the unmanned vehicle enters the planting lane, the lane scene image can be processed based on an image recognition technique to obtain the target straight line.
Third, in the above two modes the whole lane scene image needs to be processed; however, there may be more than one planting row on each side of the lane in the image, while only the single planting row on each side of the planting lane is needed to extract the target straight line. That is, in the lane scene image, the image areas other than the planting lane and the two adjacent planting rows amount to redundant computation. Therefore, in order to avoid interference from other planting rows on navigation and to reduce the data processing amount, a region of the lane scene image can be intercepted first and then further processed to obtain the target straight line.
S101a-3, extracting a plurality of lane navigation path points from the target straight line to obtain the planting lane path.
Optionally, the process of obtaining the planting lane path based on the target straight line may be: taking the entry point of the planting lane as a starting point, extracting a point from the target straight line every preset length and converting it into the equipment body coordinate system, until the exit point of the planting lane is reached on the target straight line, so as to obtain a plurality of lane navigation path points and generate the planting lane path.
Referring to fig. 11, the entry point of the planting lane on the target straight line is N1 and the exit point is N2. Starting from N1, a point can be extracted from the target straight line every preset length s and converted into the equipment body coordinate system as a lane navigation path point, until the exit point N2 of the planting lane is reached, so as to obtain a plurality of lane navigation path points. The planting lane path may include all lane navigation path points from N1 to N2.
It should be noted that the setting of the preset length may be determined according to the actual application requirement, for example, if the navigation control is required to be more accurate, the preset length may be set to 0.5m. This example is merely an example and is not intended to be limiting herein.
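As an illustrative sketch of S101a-3, the sampling of lane navigation path points along the target straight line could look as follows; `world_to_body` is a hypothetical transform into the equipment body coordinate system and is assumed here for the sketch.

```python
# Illustrative sketch: sampling lane navigation path points every `s` metres
# along the target straight line from entry point N1 to exit point N2.
import numpy as np

def sample_lane_path(n1, n2, s, world_to_body):
    """n1, n2: (2,) or (3,) arrays in actual scene coordinates; s: step in metres."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    length = np.linalg.norm(n2 - n1)
    direction = (n2 - n1) / length
    # Include the exit point so the path ends exactly at N2.
    steps = np.append(np.arange(0.0, length, s), length)
    return [world_to_body(n1 + d * direction) for d in steps]
```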
S101a-4, navigating the machine body according to the planting lane path.
In this embodiment, after the unmanned vehicle completes turning and enters the planting lane, a planting lane path is generated first, and then the machine body can be navigated according to the planting lane path to drive in the planting lane.
Three implementations of obtaining the target straight line are described next, respectively.
In a first alternative implementation, when the camera is a binocular camera, the unmanned vehicle processes the lane scene image based on a three-dimensional reconstruction technique when entering the planting lane, so as to generate the target straight line. Correspondingly, step S101a-2 may include sub-steps S101a-21 to S101a-23:
S101a-21, performing three-dimensional reconstruction based on the lane scene image to obtain a three-dimensional reconstruction image.
In this embodiment, the lane scene image may be a binocular image; the specific process of performing three-dimensional reconstruction based on the lane scene image to obtain the three-dimensional reconstruction image is known in the prior art and is not described herein.
S101a-22, extracting three-dimensional point information on the two sides of the planting lane from the three-dimensional reconstruction image.
In this embodiment, the three-dimensional point information may include position coordinates of a plurality of three-dimensional points on the left side of the planting lane and position coordinates of a plurality of three-dimensional points on the right side of the planting lane.
S101a-23, fitting to obtain the target straight line based on the three-dimensional point information on the two sides of the planting lane.
In the embodiment, straight line fitting is performed based on position coordinates of a plurality of three-dimensional points on the left side of the planting lane, so that a first straight line can be obtained; and performing straight line fitting based on the position coordinates of a plurality of three-dimensional points on the right side of the planting lane to obtain a second straight line, then generating a fitting central line between the first straight line and the second straight line, and converting the fitting central line into actual scene coordinates to obtain a target straight line.
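For illustration, the line fitting of S101a-21 to S101a-23 could be sketched as follows, assuming the reconstructed 3-D points are projected onto the ground plane and a simple least-squares fit is sufficient; the function names are placeholders introduced only for this sketch.

```python
# Illustrative sketch: fit a line to each side's 3-D points and take their center line.
import numpy as np

def fit_ground_line(points_xyz):
    """Fit y = a*x + b to points projected onto the ground plane (x, y)."""
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    a, b = np.polyfit(x, y, deg=1)
    return a, b

def lane_center_line(left_points, right_points):
    a1, b1 = fit_ground_line(left_points)    # first straight line (left side)
    a2, b2 = fit_ground_line(right_points)   # second straight line (right side)
    # Fitted center line between the two side lines.
    return (a1 + a2) / 2.0, (b1 + b2) / 2.0
```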
In a second alternative implementation, when the unmanned vehicle enters the planting lane, the lane scene image may be processed based on an image recognition technique to generate the target straight line. Correspondingly, step S101a-2 may include sub-steps S101a-2A to S101a-2C:
S101a-2A, identifying planting support information and planting row information from the lane scene image.
In this embodiment, the planting support information may include planting support areas in planting rows on both sides of the planting lane, and the planting row information may include crop areas in planting rows on both sides of the planting lane. Alternatively, a visual recognition algorithm (e.g., a pre-trained neural network model or a target recognition algorithm) may be employed to identify the small-track scene image to obtain the planting stent information and the planting row information.
S101a-2B, determining a planting bracket central line based on planting bracket information, and determining a crop edge fitting central line based on planting row information.
S101a-2C, calculating a target fitting center line between the center line of the planting support and the fitting center line of the edge of the crop, and converting the target fitting center line into a target straight line.
For example, as shown in fig. 12, a planting rack center line l1 may be determined based on planting rack information, and a crop edge fitting center line l2 may be determined based on planting row information. A target fitted center line l3 between the planting rack center line l1 and the crop edge fitted center line l2 can then be calculated. And converting the target fitting central line l3 into an actual scene coordinate to obtain a target straight line. It should be noted that fig. 12 is merely an example, and is not limited thereto.
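As a minimal sketch of S101a-2B and S101a-2C, the target fitting center line l3 can be obtained by averaging corresponding endpoints of l1 and l2 before converting to scene coordinates; `pixel_to_scene` is an assumed calibration function and the endpoint representation is an assumption for this sketch.

```python
# Illustrative sketch: average l1 and l2 into l3, then map l3 to scene coordinates.
import numpy as np

def target_line_from_centerlines(l1_pts, l2_pts, pixel_to_scene):
    """l1_pts, l2_pts: (2, 2) arrays of endpoint pixel coordinates of l1 and l2."""
    l3_pts = (np.asarray(l1_pts, float) + np.asarray(l2_pts, float)) / 2.0
    # Convert both endpoints of the target fitting center line l3 into scene coordinates.
    return [pixel_to_scene(p) for p in l3_pts]
```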
In a third alternative implementation, in order to avoid interference from other crop rows on navigation and to reduce the data processing amount, a region of the lane scene image may be intercepted first and then further processed to obtain the target straight line. Correspondingly, step S101a-2 may include sub-steps S101a-2a to S101a-2d:
S101a-2a, obtaining a local image from the lane scene image based on the central axis of the machine body.
It can be understood that the central axis direction of the machine body is the direction of the perpendicular bisector of the axle between the two rear wheels of the unmanned vehicle, that is, the direction in which the unmanned vehicle travels when going straight ahead without turning.
In this embodiment, the local image may be obtained by cropping the lane scene image based on field calibration information obtained in advance. The field calibration information may include four field-of-view image coordinates calibrated in advance; the rectangular area (i.e., the field-of-view area) enclosed by these four coordinates in the image covers two planting rows and one planting lane (the planting lane where the unmanned vehicle is located).
The local image is the portion of the lane scene image in which the pre-calibrated field of view is located and is the image actually required for path planning; therefore, interference from other planting rows in the image on solving the planting lane path can be avoided while the data processing amount is reduced.
S101a-2b, converting the local image to obtain a target aerial view.
In this embodiment, the local image may be converted into a bird's-eye view by using a preset conversion matrix or another field-of-view conversion algorithm, so as to obtain the target aerial view.
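For illustration, the cropping and bird's-eye conversion of S101a-2a and S101a-2b could be realized with a perspective warp as sketched below; the output size is an assumption chosen for this sketch, and the four source corners are the pre-calibrated field-of-view coordinates.

```python
# Illustrative sketch: warp the calibrated field-of-view quadrilateral to a top-down view.
import cv2
import numpy as np

def to_birdseye(image, fov_corners_px, out_w=400, out_h=600):
    """fov_corners_px: four calibrated image coordinates, ordered
    top-left, top-right, bottom-right, bottom-left."""
    src = np.float32(fov_corners_px)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))
```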
S101a-2c, crop identification is carried out on the target aerial view, and a first planting row area and a second planting row area on two sides of the planting small road are determined.
In this embodiment, the first planting row area is an area where at least one row of planting rows on the left side of the planting lane is located, and the second planting row area is an area where at least one row of planting rows on the right side of the planting lane is located.
In an alternative example, the target aerial view may be subjected to adaptive thresholding to determine a first planting row region and a second planting row region included in the target aerial view. In another alternative example, crop identification may be performed on the target bird's eye view using a pre-trained identification model, thereby obtaining the first and second planting row areas. The specific manner in which the first planting row area and the second planting row area are identified is based on actual application, and is not limited herein.
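As an illustrative sketch of the adaptive-thresholding option in S101a-2c, the planting row regions could be separated from the lane as follows; the block size, the offset, and the assumption that crop pixels are darker than their local neighbourhood are choices made only for this sketch.

```python
# Illustrative sketch: adaptive thresholding of the bird's-eye view.
import cv2

def planting_row_mask(birdseye_bgr):
    gray = cv2.cvtColor(birdseye_bgr, cv2.COLOR_BGR2GRAY)
    # Assumed: pixels darker than their local neighbourhood (crops) become foreground (255).
    return cv2.adaptiveThreshold(gray, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 5)
```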
S101a-2d, obtaining a target straight line according to the central lines of the first planting row area and the second planting row area and a preset scale factor.
In this embodiment, the preset scale factor is used to represent the relationship between the distance between pixels in the target aerial view and the actual distance. The scale factors of the field-of-view area (i.e., the preset scale factors) can be obtained by combining the numbers of horizontal and vertical pixels in the target aerial view with the actual size of the field-of-view area. The center lines of the first planting row area and the second planting row area are then converted into actual scene coordinates using the preset scale factors, so as to obtain the target straight line.
Optionally, the preset scale factors may include a lateral scale factor and a longitudinal scale factor. Taking the direction perpendicular to the central axis of the machine body as the X direction and the direction along the central axis of the machine body as the Y direction, the lateral scale factor is s_x = L_x / I_x and the longitudinal scale factor is s_y = L_y / I_y, where I_x and I_y are respectively the horizontal pixel distance and the vertical pixel distance of the target aerial view, and L_x and L_y are respectively the actual lateral length and the actual longitudinal length of the field-of-view area.
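For illustration, the conversion of S101a-2d from bird's-eye pixel coordinates to actual scene coordinates with the preset scale factors could be sketched as follows; the representation of the center line as an array of pixel points is an assumption made for this sketch.

```python
# Illustrative sketch: convert the fitted center line from pixels to scene coordinates.
import numpy as np

def centerline_to_scene(centerline_px, I_x, I_y, L_x, L_y):
    """centerline_px: (N, 2) pixel points (u, v) of the fitted center line.

    I_x, I_y: horizontal / vertical pixel distances of the target aerial view.
    L_x, L_y: actual lateral / longitudinal lengths of the field-of-view area.
    """
    s_x, s_y = L_x / I_x, L_y / I_y       # lateral and longitudinal scale factors
    pts = np.asarray(centerline_px, float)
    # X is perpendicular to the body central axis, Y is along it.
    return np.column_stack((pts[:, 0] * s_x, pts[:, 1] * s_y))
```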
It should be noted that any of the above three modes can generate the planting lane path, and the specific mode may be selected according to the actual application requirements.
The route navigation method is described above by taking the unmanned vehicle as an example, and the operation control method of the unmanned device is described below.
Based on the path navigation method, the invention also provides a job control method which can realize auxiliary control of unmanned equipment to operate the appointed planting row area in the target scene. Referring to fig. 13, the job control method may include S201 to S203.
S201, receiving a job instruction.
In this embodiment, the job instruction may be generated by the unmanned device itself based on a user operation, or may be transmitted from a control device connected to the unmanned device through communication.
In an alternative implementation, the user can input at least one or a combination of a designated job start point, a designated job end point and a designated job type through an input device (a touch display screen, a keyboard and mouse, and the like) carried by the unmanned device, so that the unmanned device can generate the job instruction. Alternatively, when a voice analysis module is installed on the unmanned device, the user can issue an operation voice to the unmanned device (the operation voice may include at least one or a combination of a designated job start point, a designated job end point and a designated job type), and the unmanned device can generate the job instruction after parsing the operation voice with the voice analysis module.
In another optional implementation, the user may interact with the unmanned device through a mobile terminal. The mobile terminal may be in communication connection with the unmanned device through a network, and an APP (Application) for controlling the unmanned device may be installed on the mobile terminal. The job control interface of the APP may display a map of the target scene, where the user can select at least one or a combination of a designated job start point, a designated job end point and a designated job type, so that the APP generates the job instruction and sends it to the unmanned device.
S202, determining a region to be operated, which is required to be operated currently, from a target scene based on an operation instruction.
It will be appreciated that the unmanned device may work directly on all of the rows in the target scene, and in some special cases work is required on designated row areas in the target scene, for example, a user designates a pesticide need to be sprayed to a pest area or a user directly designates a picking operation on a mature row area of the crop.
S203, performing path navigation on the unmanned equipment according to the path navigation method so as to assist the unmanned equipment in operating the area to be operated.
After the area to be operated is determined, the unmanned equipment can be subjected to path navigation according to the path navigation method provided by the invention so as to assist in controlling the unmanned equipment to operate according to the appointed operation type.
In one possible implementation, when the unmanned device is required to perform a job on a user-specified planting row area in the target scene, the job instruction may include a specified job start point, a specified job end point, and a specified job type. At this time, step S202 may include substeps S202-A to S202-B:
S202-A, at least one row of target planting rows from the designated job start point to the designated job end point is found out from the target scene.
S202-B, taking the area where all target planting rows are located as an area to be operated.
In this embodiment, the area to be worked that needs to be worked according to the specified work type may be determined directly from the target scene based on the specified work start point and the specified work end point.
For example, in connection with fig. 1, assuming that N is 10 and the crop in the planting rows is tomatoes, the user may observe that the tomatoes in planting rows 1 to 3 have all turned red and ripened, while most of the tomatoes in planting rows 4 to 10 are still green and immature. The user can then instruct the unmanned device to pick the ripe tomatoes in planting rows 1 to 3; in this case, planting rows 1 to 3 lie between the designated job start point and the designated job end point, and the designated job type is picking ripe tomatoes. This is merely an example and is not intended to be limiting herein.
In another possible implementation, when the unmanned device is required to perform the operation on the specific planting row area where the plant diseases and insect pests exist in the target scene, the operation instruction may only include the designated operation type. At this time, step S202 may include substeps S202-a to S202-c:
S202-a, obtaining crop growth information of the various planting rows in the target scene.
In this embodiment, the crop growth information may include the growth condition of the crop in a planting row, the growth condition including at least one of the following: pest condition (e.g., whether pests are present and/or the pest severity), pest type, and growth status (e.g., which nutrient is deficient and/or which is excessive).
Optionally, in the process of mapping the arterial roads of the target scene, the crop growth information of the various planting rows can be obtained by analysis and stored at the same time. Alternatively, in order to reduce the mapping burden, the crop growth information of the planting rows can be manually entered into the unmanned device by the user in advance for storage, or manually entered into the APP by the user in advance and obtained by the unmanned device from the APP.
S202-b, searching at least one row of target planting rows, of which the crop growth information is matched with the designated job type, from the target scene.
Optionally, the specified operation type may be one of operation types such as pest control, fertilization and irrigation. The unmanned device may prestore a mapping relationship between operation types and the pest conditions and/or growth conditions of the crop; for example, the presence of pests requires pest removal (spraying a certain pesticide), yellowing of the leaves requires fertilization, and withering of the leaves requires irrigation. This is merely an example; the mapping relationship between a specific operation type and the growth condition of the crop depends on the type of crop in the actual target scene, and is not limited herein.
Then, based on the mapping relation between the operation type and the growth condition of crops, at least one row of target planting rows with the crop growth information matched with the designated operation type can be found out from the target scene.
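For illustration only, the matching of crop growth information to the designated job type in S202-b could be sketched as follows; the mapping shown is entirely hypothetical, and the real mapping depends on the crops in the actual target scene.

```python
# Illustrative sketch: select target planting rows whose growth info matches the job type.
JOB_TYPE_TO_CONDITION = {
    "pest_control": lambda info: info.get("has_pest", False),
    "fertilization": lambda info: info.get("leaves_yellowing", False),
    "irrigation": lambda info: info.get("leaves_withering", False),
}

def find_target_rows(rows_growth_info, job_type):
    """rows_growth_info: dict mapping row id -> growth-condition dict."""
    matches = JOB_TYPE_TO_CONDITION.get(job_type, lambda info: False)
    return [row_id for row_id, info in rows_growth_info.items() if matches(info)]
```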
S202-c, taking the area where all target planting rows are located as an area to be operated.
In this embodiment, all the target planting rows are required to perform the operation according to the designated operation type.
Optionally, the unmanned device may control the machine body to travel to the area to be worked and start working through the following steps:
a. firstly, repositioning an unmanned vehicle when the unmanned vehicle moves into a main road path, and navigating the machine body along the main road path after the repositioning is successful;
b. when the vehicle approaches the area to be worked (for example, a position 3 m away from the entrance of the first planting lane of the area to be worked), turning on a lane identification mode (namely, detecting the distance between the current position and the path intersection point);
c. when it is determined that the machine body enters the area to be turned, converting the identified target pixels into the calibrated top view, calculating the positional relationship of the planting lane entrance relative to the equipment body coordinate system, generating a turning path, entering the first planting lane of the area to be worked according to the turning path, and working on the planting rows on both sides.
It should be noted that, in the above method embodiment, the execution sequence of each step is not limited by the drawing, and the execution sequence of each step is based on the actual application situation.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
Firstly, when the machine body navigates and travels in a planting greenhouse with complex roads, whether the unmanned device enters the area to be turned can be judged in a timely manner, and a turning path is generated in time for road switching once it is determined that the device enters the area to be turned. Since the turning path is generated based on the current position of the machine body and the intersection point of the current path and the target path, the machine body can turn onto the target path accurately according to the turning path without deviation, so that road switching between the current path and the target path is realized accurately and efficiently.
Secondly, when the unmanned device navigates the machine body along the main road path, the pose information obtained by the VIO and the repositioning pose information obtained from partial map information can be subjected to positioning fusion processing, which overcomes the defect of low VIO positioning accuracy in certain scenes (for example, complex illumination) and further improves the positioning accuracy.
Thirdly, in the process of generating the planting lane path based on three-dimensional reconstruction, interference on navigation from the crop rows on the two sides of the planting lane where the unmanned device is located can be eliminated from the lane scene image, and only the edges on the two sides of the planting lane are focused on, which improves the navigation accuracy on the planting lane.
Fourth, in the process of generating the planting lane path by processing the lane scene image based on the image recognition technique, the planting bracket center line and the crop edge fitting center line can be fused to obtain a target fitting center line, and the planting lane path is generated based on the target straight line converted from the target fitting center line; the factors considered are comprehensive, so an accurate planting lane path can be obtained.
Fifth, in the process of generating the planting lane path by performing regional interception on the lane scene image, a region of the lane scene image is intercepted first and the planting lane path is then further obtained, so that interference from other crop rows on navigation is avoided while the data processing amount is reduced.
In order to perform the corresponding steps in the above-described method embodiments and in each possible implementation, an implementation of the route guidance device and the job control device is given below.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a path navigation device according to an embodiment of the present invention. The route guidance device 200 includes: a path navigation module 210 and a turn control module 220.
The path navigation module 210 is configured to navigate the machine body along a current path;
The turning control module 220 is configured to generate a turning path based on a path intersection point of a current position of the body and a current path and a target path when it is determined that the body enters the area to be turned; the turning path is used for guiding the machine body to turn from the current path to the target path.
Optionally, the path navigation module 210, after being used to navigate the body along the current path, may be further configured to: when the set reference object is identified, or the accumulated running length of the machine body on the current path reaches the set length, or the machine body runs to the end point of the current path, determining the distance between the current position and the intersection point of the path; if the distance between the current position and the intersection point of the path is smaller than or equal to the set distance, the machine body is determined to enter the area to be turned.
Optionally, the current path is a main road path of the target scene, and the target path is a planting small road path of the target scene. The path navigation module 210 may specifically be configured to: and navigating the machine body along the main road path based on the three-dimensional map. The turning control module 220 is configured to generate a turning path based on a path intersection point of a current position of the body and a current path and a target path, and may specifically be configured to: generating a turning path from a main road path to a planting small road path by taking a first set point of a turning inner side area of the machine body as a circle center and taking the distance between the current position and a path intersection point as a radius; wherein the line connecting the first set point and the current position is perpendicular to the arterial road path.
Optionally, the current path is a planting track path of the target scene, and the target path is a main track path of the target scene. The path navigation module 210 may specifically be configured to: and navigating the machine body along the path of the planting small road. The turning control module 220 is configured to generate a turning path based on a path intersection point of a current position of the body and a current path and a target path, and may specifically be configured to: generating a turning path from the planting small road path to the arterial road path by taking a second set point of the turning inner side area of the machine body as a circle center and taking the distance between the current position and the path intersection point as a radius; wherein the line connecting the second set point and the current position is perpendicular to the path of the planting lane.
Optionally, the path navigation module 210 may be further configured to: when the machine body enters the main road path from the planting small road path, repositioning is carried out based on the three-dimensional map of the target scene, so that pose information obtained through the visual inertial odometer of the machine body is in the same coordinate system with the three-dimensional map.
Optionally, the path navigation module 210 is configured to, when navigating the body along the planting lane path, specifically: acquire a lane scene image acquired by the visual inertial odometer of the machine body, wherein the lane scene image is acquired when the machine body enters a planting lane of the target scene; generate a target straight line based on the lane scene image; extract a plurality of lane navigation path points from the target straight line to obtain the planting lane path; and navigate the machine body according to the planting lane path.
Optionally, the path navigation module 210 is configured to generate a target straight line based on the lane scene image, and may specifically be configured to: perform three-dimensional reconstruction based on the lane scene image to obtain a three-dimensional reconstruction image; extract three-dimensional point information on the two sides of the planting lane from the three-dimensional reconstruction image; and fit to obtain the target straight line based on the three-dimensional point information on the two sides of the planting lane.
Optionally, the path navigation module 210 is configured to generate a target straight line based on the lane scene image, and may specifically be configured to: identify planting bracket information and planting row information from the lane scene image; determine a planting bracket center line based on the planting bracket information, and determine a crop edge fitting center line based on the planting row information; and calculate a target fitting center line between the planting bracket center line and the crop edge fitting center line, and convert the target fitting center line into the target straight line.
Optionally, the path navigation module 210 is configured to generate a target straight line based on the lane scene image, and may specifically be configured to: obtain a local image from the lane scene image based on the central axis of the machine body; convert the local image to obtain a target aerial view; perform crop identification on the target aerial view, and determine a first planting row area and a second planting row area on the two sides of the planting lane; and obtain the target straight line according to the center lines of the first planting row area and the second planting row area and a preset scale factor, wherein the preset scale factor is used to represent the relationship between the distance between pixels in the target aerial view and the actual distance.
Optionally, the path navigation module 210 is configured to extract a plurality of lane navigation path points from the target straight line to obtain the planting lane path, and may specifically be configured to: take the entry point of the planting lane as a starting point, extract a point from the target straight line every preset length and convert it into the equipment body coordinate system, until the exit point of the planting lane is reached on the target straight line, so as to obtain a plurality of lane navigation path points and generate the planting lane path.
Optionally, the current path is a main road path of the target scene, and the target path is a planting small road path of the target scene. The path navigation module 210 may also be configured to, prior to navigating the body along the arterial road path based on the three-dimensional map: acquire a three-dimensional map of the target scene and a main road path in the three-dimensional map, the three-dimensional map being obtained by mapping the target scene through the visual inertial odometer of the machine body; and perform repositioning based on the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map.
The path navigation module 210 is configured to, based on the three-dimensional map, specifically, when navigating the body along the arterial road path: based on the image acquired by the visual inertial odometer in real time, acquiring the running information of the machine body relative to the initialization completion time or the last positioning fusion time; when the running information meets preset conditions, determining a matching map area from the three-dimensional map according to the current pose information obtained by the visual inertial odometer; positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matched map area to obtain fused positioning information; and navigating the machine body according to the fusion positioning information, the three-dimensional map and the arterial road path.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a job control device according to an embodiment of the present invention. The job control device 300 includes: an instruction receiving module 310 and a job control module 320.
The instruction receiving module 310 is configured to receive a job instruction.
The job control module 320 is configured to determine, based on the job instruction, a to-be-job area that needs to be currently worked from the target scene; according to the path navigation method provided by the embodiment, the unmanned equipment is subjected to path navigation so as to assist the unmanned equipment in operating the area to be operated.
Alternatively, the job instruction may include a specified job start point and a specified job end point, and the target scene includes a number of planting rows. The job control module 320 is configured to, based on the job instruction, determine, from the target scenario, a currently required job area, and may specifically be configured to: finding out at least one row of target planting rows from the designated job starting point to the designated job ending point from the target scene; and taking the area where all the target planting rows are located as an area to be operated.
Optionally, the job instruction includes a specified job type, and the target scene includes a plurality of planting rows. The job control module 320 is configured to, based on the job instruction, determine, from the target scenario, a currently required job area, and may specifically be configured to: obtaining crop growth information of various planting rows in a target scene; searching at least one row of target planting rows with crop growth information matched with the designated operation type from the target scene; and taking the area where all the target planting rows are located as an area to be operated.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the path navigation device 200 and the job control device 300 described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated herein.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an unmanned device according to an embodiment of the present invention. The drone 100 may include: the processor 110 is connected with the memory 120 through the bus 130.
The memory 120 may be used to store programs, such as the route guidance device 200 provided in the embodiment of the present invention and/or the job control device 300 provided in the embodiment of the present invention. The processor 110 executes various functional applications and data processing by running programs stored in the memory 120. When the processor 110 runs the program in the memory 120, the navigation of the machine body is realized according to the path navigation method described above, and/or the operation of the machine body is controlled according to the operation control method described above.
The Memory 120 may be, but is not limited to, random access Memory (Random Access Memory, RAM), read Only Memory (ROM), flash Memory (Flash), programmable Read Only Memory (Programmable Read-Only Memory, PROM), erasable Read Only Memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable Read Only Memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 110 may be an integrated circuit chip with signal processing capabilities. The processor 110 may be a general-purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
It is to be understood that the configuration shown in fig. 16 is illustrative only and that the drone 100 may also include more or fewer components than shown in fig. 16, or have a different configuration than shown in fig. 16. The components shown in fig. 16 may be implemented in hardware, software, or a combination thereof.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program is realized when being executed by a processor: the path navigation method disclosed in the above embodiment and/or the job control method disclosed in the above embodiment. The computer readable storage medium may be, but is not limited to: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, RAM, PROM, EPROM, EEPROM, FLASH magnetic disk or an optical disk.
In summary, the embodiments of the present invention provide a path navigation method, an operation control method, and a related device, where when it is determined that an engine body enters an area to be turned, under the condition that the engine body is navigated along a current path, a turning path can be generated based on a path intersection point of the current position of the engine body and the current path and a target path, and the turning path can be used to guide the engine body to turn from the current path to the target path. Therefore, when the machine body is determined to enter the area to be turned, a turning path is timely generated, and the turning path is generated based on the current position of the machine body and the path intersection point of the current path and the target path, so that the machine body can accurately turn to the target path according to the turning path without deviation, and road switching between the current path and the target path is accurately and efficiently realized.
The present invention is not limited to the above embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (19)

1. A method of path navigation, comprising:
navigating the machine body along the current path;
when the machine body is determined to enter a region to be turned, generating a turning path based on a path intersection point of the current position of the machine body and the current path and the target path;
the turning path is used for guiding the machine body to turn from the current path to a target path.
2. The method of claim 1, wherein after the step of navigating the body along the current path, the method further comprises:
when a set reference object is identified, or the accumulated running length of the machine body on the current path reaches the set length, or the machine body runs to the end point of the current path, determining the distance between the current position and the intersection point of the path;
and if the distance between the current position and the path intersection point is smaller than or equal to the set distance, determining that the machine body enters the area to be turned.
3. The method of claim 1, wherein the current path is an arterial road path of a target scene, the target path being a planting small road path of the target scene;
The step of generating a turning path based on a path intersection of the current position of the body and the current path and the target path includes:
generating a turning path from the arterial road path to the planting small road path by taking a first set point of a turning inner side area of the machine body as a circle center and taking a distance between the current position and the path intersection point as a radius;
wherein, the connection line between the first set point and the current position is perpendicular to the main road path.
4. The method of claim 1, wherein the current path is a planting small road path of a target scene, the target path being a main road path of the target scene;
the step of generating a turning path based on a path intersection of the current position of the body and the current path and the target path includes:
generating a turning path from the planting small road path to the main road path by taking a second set point of a turning inner side area of the machine body as a circle center and taking a distance between the current position and the path intersection point as a radius;
wherein the line connecting the second set point and the current position is perpendicular to the planting small road path.
5. The method according to claim 4, wherein the method further comprises:
when the machine body enters the arterial road path from the planting small road path, repositioning is carried out based on the three-dimensional map of the target scene, so that pose information obtained through the visual inertial odometer of the machine body is in the same coordinate system with the three-dimensional map.
6. The method of claim 1, wherein the current path is a planting small road path of a target scene, the target path being an arterial road path of the target scene;
the step of navigating the body along the current path comprises the following steps:
acquiring a lane scene image acquired by a visual inertial odometer of the machine body; the lane scene image is acquired when the machine body enters a planting small road of the target scene;
generating a target straight line based on the lane scene image;
extracting a plurality of lane navigation path points from the target straight line to obtain the planting small road path;
and navigating the machine body according to the planting small road path.
7. The method of claim 6, wherein the step of generating a target straight line based on the lane scene image comprises:
performing three-dimensional reconstruction based on the lane scene image to obtain a three-dimensional reconstruction image;
extracting three-dimensional point information on the two sides of the planting small road from the three-dimensional reconstruction image;
and fitting to obtain the target straight line based on the three-dimensional point information on the two sides of the planting small road.
8. The method of claim 6, wherein the step of generating a target straight line based on the lane scene image comprises:
identifying planting bracket information and planting row information from the lane scene image;
determining a planting bracket central line based on the planting bracket information, and determining a crop edge fitting central line based on the planting row information;
and calculating a target fitting central line between the central line of the planting bracket and the fitting central line of the crop edge, and converting the target fitting central line into the target straight line.
9. The method of claim 6, wherein the step of generating a target straight line based on the lane scene image comprises:
based on the central axis of the machine body, obtaining a local image from the lane scene image;
converting the local image to obtain a target aerial view;
Crop identification is carried out on the target aerial view, and a first planting row area and a second planting row area on two sides of a planting small road are determined;
and obtaining the target straight line according to the central lines of the first planting row area and the second planting row area and a preset scale factor, wherein the preset scale factor is used for representing the relation between the distance between pixels in the target aerial view and the actual distance.
10. The method according to any one of claims 6-9, wherein the step of extracting a plurality of lane navigation path points from the target straight line to obtain the planting small road path comprises:
taking the entry point of the planting small road as a starting point, extracting a point from the target straight line every preset length, and converting the point into the equipment body coordinate system, until the exit point of the planting small road is reached on the target straight line, so as to obtain a plurality of lane navigation path points and generate the planting small road path.
11. The method of claim 1, wherein the current path is an arterial road path of a target scene, the target path being a planting small road path of the target scene;
Before the step of navigating the body along the current path, the method further comprises:
acquiring a three-dimensional map of the target scene and a main road path in the three-dimensional map; the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of the machine body;
repositioning is performed based on the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map.
12. The method of claim 11, wherein the step of navigating the body along the current path comprises:
acquiring operation information of the machine body relative to the initialization completion time or the last positioning fusion time based on the image acquired by the visual inertial odometer in real time;
when the running information meets preset conditions, determining a matching map area from the three-dimensional map according to the current pose information obtained by the visual inertial odometer;
positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information;
And navigating the machine body according to the fusion positioning information, the three-dimensional map and the arterial road path.
13. A job control method, comprising:
receiving a job instruction;
determining a region to be operated which needs to be operated currently from a target scene based on an operation instruction;
performing path navigation on the unmanned device according to the path navigation method of any one of claims 1 to 12, so as to assist the unmanned device in performing the operation on the area to be operated.
14. The method of claim 13, wherein the job instruction includes a specified job start point and a specified job end point; the target scene comprises a plurality of planting rows;
the step of determining the area to be worked for the current required work from the target scene based on the work instruction comprises the following steps:
finding out at least one row of target planting rows from the designated job starting point to the designated job ending point in the target scene;
and taking the area where all the target planting rows are located as the area to be operated.
15. The method of claim 13, wherein the job instruction includes a specified job type; the target scene comprises a plurality of planting rows;
The step of determining the area to be worked for the current required work from the target scene based on the work instruction comprises the following steps:
obtaining crop growth information of various planting rows in the target scene;
searching at least one column of target planting rows, of which the crop growth information is matched with the designated operation type, from the target scene;
and taking the area where all the target planting rows are located as the area to be operated.
16. A path navigation device, comprising:
the path navigation module is used for navigating the machine body along the current path;
the turning control module is used for generating a turning path based on the path intersection point of the current position of the machine body and the current path and the target path when the machine body is determined to enter the area to be turned; the turning path is used for guiding the machine body to turn from the current path to a target path.
17. A job control device, comprising:
the instruction receiving module is used for receiving the operation instruction;
the operation control module is used for determining a region to be operated which needs to be operated currently from a target scene based on an operation instruction;
the operation control module is further configured to perform path navigation on the unmanned device according to the path navigation method of any one of claims 1 to 12, so as to assist the unmanned device in performing operations on the area to be operated.
18. An unmanned device, comprising: a machine body, a memory provided to the machine body, and a processor for storing a program, the processor realizing navigation of the machine body according to the path navigation method according to any one of claims 1 to 12 and/or controlling the operation of the machine body according to the operation control method according to any one of claims 13 to 15 when the program is executed.
19. A computer readable storage medium, wherein the computer readable storage medium stores a computer program which when executed by a processor implements: the path guidance method according to any one of claims 1 to 12, and/or the job control method according to any one of claims 13 to 15.
CN202310443224.9A 2023-04-21 2023-04-21 Path navigation method, operation control method and related device Pending CN116576859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310443224.9A CN116576859A (en) 2023-04-21 2023-04-21 Path navigation method, operation control method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310443224.9A CN116576859A (en) 2023-04-21 2023-04-21 Path navigation method, operation control method and related device

Publications (1)

Publication Number Publication Date
CN116576859A true CN116576859A (en) 2023-08-11

Family

ID=87544493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310443224.9A Pending CN116576859A (en) 2023-04-21 2023-04-21 Path navigation method, operation control method and related device

Country Status (1)

Country Link
CN (1) CN116576859A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116892944A (en) * 2023-09-11 2023-10-17 黑龙江惠达科技股份有限公司 Agricultural machinery navigation line generation method and device, and navigation method and device
CN116892944B (en) * 2023-09-11 2023-12-08 黑龙江惠达科技股份有限公司 Agricultural machinery navigation line generation method and device, and navigation method and device

Similar Documents

Publication Publication Date Title
Wang et al. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution
CN110969655B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
US10278333B2 (en) Pruning robot system
CN112183133B (en) Aruco code guidance-based mobile robot autonomous charging method
CN109755995A (en) Robot automatic charging interconnection method based on ROS robot operating system
US20210255638A1 (en) Area Division and Path Forming Method and Apparatus for Self-Moving Device and Automatic Working System
CN113296495A (en) Path forming method and device for self-moving equipment and automatic working system
CN109753075B (en) Agriculture and forestry park robot navigation method based on vision
CN112171675B (en) Obstacle avoidance method and device for mobile robot, robot and storage medium
CN114568108A (en) Unmanned mower track tracking control method and computer readable storage medium
CN116576859A (en) Path navigation method, operation control method and related device
CN116839570B (en) Crop interline operation navigation method based on sensor fusion target detection
CN110705385A (en) Method, device, equipment and medium for detecting angle of obstacle
Menon et al. NBV-SC: Next best view planning based on shape completion for fruit mapping and reconstruction
CN110806585A (en) Robot positioning method and system based on trunk clustering tracking
US20220361392A1 (en) Method and system for driving view-based agricultural machinery and device for agricultural machinery applying method
CN114740867A (en) Intelligent obstacle avoidance method and device based on binocular vision, robot and medium
CN114489050A (en) Obstacle avoidance route control method, device, equipment and storage medium for straight line driving
US20230320263A1 (en) Method for determining information, remote terminal, and mower
US20230210050A1 (en) Autonomous mobile device and method for controlling same
EP4242775A1 (en) Charging station, method for returning to a charging station for a lawnmower robot
Fu et al. Vision-based trajectory generation and tracking algorithm for maneuvering of a paddy field robot
CN112445205A (en) Control method, device, equipment and storage medium for ground plant protection equipment
CN115589845A (en) Intelligent cotton picking robot and cotton picking operation path planning method thereof
CN114047755B (en) Pesticide spraying robot navigation planning method and computer device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination