CN116839593A - Navigation method and device of unmanned equipment, unmanned equipment and storage medium - Google Patents


Info

Publication number
CN116839593A
Authority
CN
China
Prior art keywords: line, navigation, unmanned, area, edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310944029.4A
Other languages
Chinese (zh)
Inventor
李龙喜
张艺菲
蔡浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202310944029.4A priority Critical patent/CN116839593A/en
Publication of CN116839593A publication Critical patent/CN116839593A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a navigation method and device of unmanned equipment, the unmanned equipment and a storage medium, and relates to the technical field of unmanned equipment. The method comprises the following steps: acquiring a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of the unmanned equipment and the content of the first image comprises a first area in front of the unmanned equipment; constructing an edge point cloud of the first area based on the first image, and determining a first navigation line of the unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of the crops on both sides of the first area; and controlling the unmanned equipment to move according to the first navigation line. By these technical means, the problems of low visual navigation precision and limited use scenes in the prior art are solved, the applicability of the unmanned equipment is improved, and the popularization and use of the unmanned equipment are facilitated.

Description

Navigation method and device of unmanned equipment, unmanned equipment and storage medium
Technical Field
The present application relates to the technical field of unmanned devices, and in particular to a navigation method and apparatus of an unmanned device, an unmanned device, and a storage medium.
Background
Automatic navigation is a key supporting technology for expanding the application fields of unmanned equipment. Classified by the environment-perception sensor used, navigation technologies can be roughly divided into visual navigation, satellite system navigation, laser radar navigation and multi-sensor fusion navigation. Satellite system navigation is easily interfered with by occluding objects and therefore has low positioning accuracy; laser radar navigation is easily blocked by foreign objects during scanning, so that feature identification is incomplete and positioning accuracy is poor; and multi-sensor fusion navigation is costly. For these reasons, unmanned equipment generally adopts visual navigation technology.
In the prior art, an image in front of the unmanned device is captured by a monocular vision sensor, the movable area and the non-movable area in the image are detected, and a navigation line of the unmanned device within the movable area is determined from the movable area, so that the unmanned device moves based on the navigation line. However, the movable area and the non-movable area can be accurately recognized only when the texture difference between them is large. Therefore, the existing visual navigation technology can only guarantee high navigation precision in specific scenes, which is unfavorable for the popularization and use of unmanned equipment.
Disclosure of Invention
The application provides a navigation method and device of unmanned equipment, the unmanned equipment and a storage medium, which solve the problems of low visual navigation precision and limited use scenes in the prior art, improve the applicability of the unmanned equipment, and facilitate the popularization and use of the unmanned equipment.
In a first aspect, the present application provides a navigation method for an unmanned device, including:
acquiring a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment;
Constructing an edge point cloud of the first area based on the first image, and determining a first navigation line of the unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area;
and controlling the unmanned equipment to move according to the first navigation line.
In a second aspect, the present application provides a navigation device of an unmanned apparatus, comprising:
a first image acquisition module configured to acquire a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of the unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment;
the first navigation line generation module is configured to construct an edge point cloud of the first area based on the first image, and determine a first navigation line of the unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area;
and the movement control module is configured to control the unmanned equipment to move according to the first navigation line.
In a third aspect, the present application provides an unmanned device comprising:
one or more processors; a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of navigation of the unmanned device of the first aspect.
In a fourth aspect, the present application provides a storage medium containing computer executable instructions which, when executed by a computer processor, are used to perform the method of navigation of the unmanned device according to the first aspect.
According to the application, the binocular vision sensors arranged on two sides of the unmanned equipment are used for shooting a first area in front of the unmanned equipment to generate a first image, an edge point cloud of the first area is constructed according to the first image, a first navigation line of the unmanned equipment is determined according to the edge point cloud, and the unmanned equipment is controlled to move according to the first navigation line. Through the technical means, the edge point cloud of the first area constructed by the first image acquired by the binocular vision sensor can represent the edge information of crops on two sides in the first area, so that a first navigation line capable of effectively avoiding the crops on two sides can be planned according to the edge point cloud of the first area, unmanned equipment is navigated according to the first navigation line to avoid touching the crops during movement, and the safety of the unmanned equipment and the crops is protected. The accuracy of the edge point cloud constructed based on the first image is not affected by region textures, the accuracy of the first navigation line generated based on the edge point cloud is high and is not limited by scenes, the problems of low visual navigation accuracy and limited use scenes in the prior art are solved, the applicability of unmanned equipment is improved, and popularization and use of the unmanned equipment are facilitated.
Drawings
Fig. 1 is a flowchart of a navigation method of an unmanned device according to an embodiment of the present application;
fig. 2 is a first schematic diagram of a first area and an unmanned device according to an embodiment of the present application;
fig. 3 is a second schematic diagram of the first area and the unmanned device according to the embodiment of the present application;
fig. 4 is a flowchart of planning a navigation line based on an edge point cloud according to an embodiment of the present application;
FIG. 5 is a first schematic view of an edge line of a first area according to an embodiment of the present application;
FIG. 6 is a second schematic view of an edge line of the first region according to an embodiment of the present application;
FIG. 7 is a flow chart of another method of navigating an unmanned device provided by an embodiment of the present application;
FIG. 8 is a flow chart of determining a second navigation line provided by an embodiment of the present application;
FIG. 9 is a schematic view of a crop area in a bird's eye view provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a navigation device of an unmanned apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an unmanned device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments of the present application is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The navigation method of the unmanned device provided in the embodiment may be executed by the unmanned device, where the unmanned device may be implemented by software and/or hardware, and the unmanned device may be formed by two or more physical entities or may be formed by one physical entity. The unmanned equipment refers to flight equipment or a ground platform for working according to a remote control instruction or a preset instruction, such as an unmanned plane, an unmanned vehicle and the like.
The unmanned equipment is provided with at least one type of operating system, and at least one application program can be installed based on the operating system; the application program may be an application program that comes with the operating system, or an application program downloaded from a third-party device or a server. In this embodiment, the unmanned device has at least an application program capable of executing the navigation method of the unmanned device.
For ease of understanding, this embodiment is described with the unmanned device itself as the execution body of the navigation method, and with an unmanned vehicle that performs operations on crops as the example of the unmanned device.
In one embodiment, the automatic navigation of unmanned devices is a key supporting technology for developing smart agriculture and other industries such as logistics and mapping. Here, agriculture includes plantation, forestry, animal husbandry, and fishery. Classified by the environment-perception sensor used, navigation technologies can be roughly divided into visual navigation, satellite system navigation, laser radar navigation and multi-sensor fusion navigation. This embodiment is described taking the application of unmanned equipment in smart agriculture as an example. In smart agriculture, satellite system navigation is easily interfered with by occluding objects, so it is generally used in open operation scenes and is not suitable for special operation environments such as among trees or under a canopy. Laser radar navigation is easily blocked by plant branches and leaves during scanning, so that feature identification is incomplete and positioning accuracy is poor, and it is therefore generally unsuitable. Multi-sensor fusion navigation uses satellite system navigation for coarse adjustment of the heading of the unmanned equipment and visual navigation for fine adjustment of the heading; this fusion is mainly loosely coupled, so the satellite system plays only an auxiliary role and contributes little to improving navigation precision. Therefore, in smart agriculture, visual navigation technology is generally adopted to realize the automatic navigation operation of unmanned equipment.
The existing visual navigation technology captures an image in front of the unmanned equipment with a monocular vision sensor, detects the crop area in the image, and determines a navigation line of the unmanned equipment according to the crop area, so that the unmanned equipment performs moving operations based on the navigation line. The crop area in the operation-area image is identified by a pre-trained deep learning model, and the accuracy of the crop area identified by the deep learning model can be guaranteed only when the texture difference between the crop area and other areas is large. Therefore, the existing visual navigation technology can only guarantee high navigation precision in specific scenes, which is unfavorable for the popularization and use of unmanned equipment.
In order to solve the problems of low visual navigation precision and limited use scene in the prior art, the embodiment provides a navigation method of unmanned equipment.
Fig. 1 shows a flowchart of a navigation method of an unmanned device according to an embodiment of the present application. Referring to fig. 1, the navigation method of the unmanned device specifically includes:
s110, acquiring a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of the unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment.
The binocular vision sensor comprises a first camera and a second camera, wherein the first camera is arranged on the left side of the unmanned equipment, the second camera is arranged on the right side of the unmanned equipment, and the first camera and the second camera synchronously shoot a working area in front of the unmanned equipment to obtain two first images with different shooting angles. The first camera and the second camera can shoot the first image at a certain interval or a certain distance, and can record video and extract the first image from the video, and a specific first image acquisition mode can be set according to actual requirements. In the moving process, the unmanned equipment acquires a first image through the first camera and the second camera in real time, and plans a real-time navigation line according to the first image so as to move according to the navigation line.
In this embodiment, the first area may be understood as an operation area in front of the unmanned device falling into the photographing area of the binocular vision sensor, one side or both sides of the operation area may have crops, which may be grain crops or cash crops, and the middle of the operation area may be a flat ground that the unmanned device may travel. The first camera and the second camera shoot the operation area in front of the unmanned equipment to obtain two first images, and the content of each first image can comprise crops on one side or two sides in the operation area and the land in the middle of the operation area. The embodiment aims at planning a navigation line of the unmanned equipment in the operation area according to the characteristic information in the two first images, so that the unmanned equipment can avoid crops on one side or two sides in the operation area when moving based on the navigation line, and the crops and the unmanned equipment are protected.
S120, constructing an edge point cloud of a first area based on the first image, and determining a first navigation line of the unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area.
The first navigation line is generated by a binocular vision navigation method. Fig. 2 and fig. 3 are schematic diagrams of the first area and the unmanned device according to an embodiment of the present application. As shown in fig. 2 and 3, crops 15 are located on both sides of the first area 17, and the middle of the first area 17 is a flat ground on which the unmanned device 11 travels. The first camera 12 is mounted on the left side of the unmanned device 11 and the second camera 13 on the right side. Limited by the photographing angles of the first camera 12 and the second camera 13, the photographing region (the region shown by a broken line in fig. 2) includes only part of the working region, and the first images generated by photographing the working region in front of the unmanned device show the side of the crops 15 adjacent to the flat ground on which the unmanned device 11 travels.
Further, pixel matching is performed on the two first images captured synchronously by the first camera and the second camera, and a disparity matrix is generated from the matched feature points together with the intrinsic parameters of the first camera, the intrinsic parameters of the second camera and the extrinsic parameters between the two cameras. The depth of each pixel in the first image captured by the first camera is computed from the disparity matrix, and the three-dimensional coordinates corresponding to each pixel are determined from the pixel depth, the intrinsic parameters of the first camera and the extrinsic parameters between the first camera and the world coordinate system. Combining the three-dimensional coordinates corresponding to each pixel in the first image captured by the first camera yields the edge point cloud of the first area. Referring to fig. 2 and 3, when there is no obstacle on the travelled flat ground, the edge point cloud 16 of the first area 17 consists of the edge point clouds of the crops 15 on both sides of the first area, on the side of the flat ground near the unmanned device 11. The edge point cloud 16 of the first area 17 can therefore characterize the edge information of the crops 15 on both sides of the flat ground, and a navigation line that effectively avoids the crops 15 on both sides can be planned based on it, so that the unmanned device is controlled to move on the flat ground along this navigation line without touching the crops 15 on either side.
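A minimal sketch of this stereo pipeline using OpenCV and NumPy follows; the block-matching parameters, the assumption of a rectified image pair, and the Q reprojection matrix are illustrative assumptions, not values taken from this application.

```python
import cv2
import numpy as np

def build_edge_point_cloud(left_img, right_img, Q, max_depth=10.0):
    """Build a 3D point cloud of the first area from a rectified stereo pair.

    Q is the 4x4 disparity-to-depth matrix obtained from stereo calibration
    (intrinsics of both cameras plus the extrinsics between them).
    """
    left_gray = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)

    # Semi-global block matching; the parameters below are illustrative only.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                    blockSize=7, P1=8 * 7 * 7, P2=32 * 7 * 7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Reproject every matched pixel to a 3D coordinate.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)

    # Keep pixels with a valid disparity and a plausible depth.
    mask = (disparity > 0) & (points_3d[..., 2] < max_depth)
    return points_3d[mask]  # (N, 3) edge point cloud of the first area
```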
In an embodiment, the edge point clouds of the first area may be clustered by a density clustering algorithm to divide the edge point clouds of the first area into edge point clouds of the crops on the left and right sides. And fitting the edge point cloud of the left crop to obtain a first curved surface, and fitting the edge point cloud of the right crop to obtain a second curved surface. A center line between the first curved surface and the second curved surface is determined in a space formed between the first curved surface and the second curved surface, and the center line is determined as a first navigation line. In this embodiment, in addition to dividing the edge point cloud of the first area into the edge point clouds of the crops on the left and right sides by using a clustering algorithm, the edge point cloud of the first area may be divided into the edge point clouds of the crops on the left and right sides based on the center line of sight of the binocular vision sensor. The center line of sight of the binocular vision sensor is the center line of the first camera and the second camera.
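As a concrete illustration, DBSCAN can stand in for the density clustering algorithm, and a binned midpoint can stand in for the full curved-surface fit; the library, parameters and coordinate convention (x lateral, y forward) are assumptions, not requirements of the application.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_left_right(points):
    """Split the edge point cloud into left-side and right-side crop clusters."""
    labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(points)
    clusters = [points[labels == k] for k in set(labels) if k != -1]
    clusters = sorted(clusters, key=len, reverse=True)[:2]        # two largest clusters
    left, right = sorted(clusters, key=lambda c: c[:, 0].mean())  # order by lateral x
    return left, right

def rough_center_line(left, right, n_bins=20):
    """Midpoint of the two clusters per forward (y) bin, a cheap surrogate
    for the center line between the two fitted curved surfaces."""
    y_lo = min(left[:, 1].min(), right[:, 1].min())
    y_hi = max(left[:, 1].max(), right[:, 1].max())
    edges = np.linspace(y_lo, y_hi, n_bins + 1)
    waypoints = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        l_bin = left[(left[:, 1] >= lo) & (left[:, 1] < hi)]
        r_bin = right[(right[:, 1] >= lo) & (right[:, 1] < hi)]
        if len(l_bin) and len(r_bin):
            waypoints.append([(l_bin[:, 0].mean() + r_bin[:, 0].mean()) / 2,
                              (lo + hi) / 2])
    return np.array(waypoints)   # candidate first navigation line waypoints
```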
In another embodiment, since the operation of fitting the curved surface by the plurality of point clouds is complex, the planning efficiency of the navigation line is low, and therefore, the three-dimensional edge point clouds can be converted into two-dimensional edge points, so that the navigation line can be accurately and rapidly planned according to the two-dimensional edge points. Fig. 4 is a flowchart of planning a navigation line based on an edge point cloud according to an embodiment of the present application. As shown in fig. 4, the steps of planning a navigation line based on the edge point cloud specifically include S1201-S1203:
And S1201, projecting the edge point cloud to a first plane to obtain two-dimensional edge points, wherein the first plane is a plane where the unmanned equipment is located.
For example, assuming that the first plane is the XY plane of the world coordinate system, deleting the Z-axis coordinate from the three-dimensional coordinates of each point in the edge point cloud gives the corresponding two-dimensional edge point; a point with three-dimensional coordinates (x1, y1, z1) maps to the two-dimensional edge point (x1, y1).
S1202, fitting according to two-dimensional edge points to obtain edge lines of the first area.
The edge line of the first region is understood to be the edge line of the crop in the first region on the side closer to the ground. Fig. 5 is a first schematic view of an edge line of a first area according to an embodiment of the present application. As shown in fig. 5, the two-dimensional edge points 18 may be divided into two-dimensional edge points 18 of the left and right crops by a density clustering algorithm or a central line of sight of a binocular vision sensor, the two-dimensional edge points 18 of the left crop are fitted to a first edge line 19, and the two-dimensional edge points 18 of the right crop are fitted to a second edge line 20.
Because the two-dimensional edge points obtained by projecting the edge point cloud are numerous and unevenly distributed, an edge line fitted directly to the two-dimensional edge points on the left and right sides has low precision. To improve the precision of the edge lines, two-dimensional edge points that are evenly distributed and representative of the crop edge information can be screened out, and high-precision edge lines can be fitted to the screened points. The specific implementation process is as follows: the first area is divided into a second area and a third area according to the central line of sight of the binocular vision sensor; the second area and the third area are each divided into a plurality of grids at a preset interval, and the two-dimensional edge points falling into each grid are determined; a first edge line is fitted to the two-dimensional edge point closest to the central line of sight in each grid of the second area, and a second edge line is fitted to the two-dimensional edge point closest to the central line of sight in each grid of the third area. Fig. 6 is a second schematic view of the edge lines of the first area according to an embodiment of the present application. As shown in fig. 6, the first area 17 can be divided into the second area 21 and the third area 22 by the central line of sight of the binocular vision sensor, and the first area 17 can be divided evenly into a plurality of rows at the preset interval, so that the second area 21 and the third area 22 are each divided into a plurality of grids 23. Assuming the preset interval is 20 cm, the width of each grid is 20 cm and its length equals the length of the second area or the third area. The position range of each grid is determined from the position range of the first area and the preset interval; comparing the position range of each grid with the two-dimensional coordinates of each two-dimensional edge point determines the grid into which each point falls. From the two-dimensional edge points in each grid of the second area, the point closest to the central line of sight is screened out according to the position coordinates of the central line of sight; if a grid contains no two-dimensional edge point, nothing is selected from it. It will be appreciated that if the second area is divided into N grids, at most N two-dimensional edge points are screened out. The first edge line 19 is fitted to the two-dimensional edge points screened from the second area. Likewise, the point closest to the central line of sight is screened out from the two-dimensional edge points in each grid of the third area, and the second edge line 20 is fitted to the screened points, as sketched below.
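A sketch of the grid-based point selection and edge-line fit for one side (the second or the third area); the 0.2 m grid interval, the coordinate convention and the x = a*y + b line model are assumptions made for illustration.

```python
import numpy as np

def fit_edge_line(points_2d, center_x, y_min, y_max, cell=0.2):
    """points_2d: (N, 2) two-dimensional edge points of one side, x lateral,
    y forward; the central line of sight is modelled as the line x = center_x."""
    selected = []
    for lo in np.arange(y_min, y_max, cell):
        in_cell = points_2d[(points_2d[:, 1] >= lo) & (points_2d[:, 1] < lo + cell)]
        if len(in_cell) == 0:
            continue                      # empty grid: no point is selected
        # keep the point closest to the central line of sight
        selected.append(in_cell[np.argmin(np.abs(in_cell[:, 0] - center_x))])
    selected = np.array(selected)
    # fit x = a*y + b so near-vertical edge lines stay well conditioned
    a, b = np.polyfit(selected[:, 1], selected[:, 0], deg=1)
    return (a, b), len(selected)          # line coefficients, grids that hit
```

Calling it once per side yields the first and second edge lines; the grid-hit count can be reused later when a confidence is attached to the binocular result.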
In this embodiment, the second area and the third area are uniformly divided into a plurality of grids, two-dimensional edge points are obtained from the grids, so that the two-dimensional edge points for fitting the edge lines can be guaranteed to be uniformly distributed, and edge points closest to the central sight line in the grids can represent the edge information of crops, so that the precision of the edge lines can be greatly improved by adopting the edge line generation mode provided by the embodiment, and the precision of the navigation line is further improved.
S1203, determining a first navigation line of the unmanned device according to the edge line.
For example, the center line between the first edge line and the second edge line may be determined as the first navigation line of the unmanned device. Since fitting the first edge line and the second edge line yields their linear expressions, the center line equidistant from the two edge lines can be determined from those linear expressions and taken as the first navigation line of the unmanned device.
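With the edge lines expressed as x = a*y + b (the model assumed in the sketch above), the center line is simply the coefficient-wise average; a hypothetical helper:

```python
def first_navigation_line(left_line, right_line):
    """Center line between the first edge line and the second edge line,
    each given as (a, b) coefficients of x = a*y + b."""
    a = (left_line[0] + right_line[0]) / 2.0
    b = (left_line[1] + right_line[1]) / 2.0
    return a, b   # the first navigation line, also in the form x = a*y + b
```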
In another embodiment, because the height of the unmanned device is limited, branches and leaves of crops that are higher than the unmanned device cannot collide with it during movement. Therefore, according to the height information of the unmanned device and the height information of the edge point cloud, the points of the edge point cloud whose height exceeds that of the unmanned device can be filtered out, and the first navigation line of the unmanned device is determined from the remaining edge point cloud. Determining the first navigation line from the remaining edge point cloud may follow the embodiments of steps S1201-S1203.
When an obstacle exists on the running flat ground, the edge point cloud of the first area comprises the edge point cloud of the crops and the edge point cloud of the obstacle, the edge point cloud of the crops can be screened out from the edge point cloud of the first area, and the first navigation line is planned based on the edge point cloud of the crops. When the edge point clouds of crops are screened, the edge point clouds of the first area can be clustered through a density clustering algorithm, so that the edge point clouds of the first area are divided into edge point clouds of left-side crops, right-side crops and middle obstacles. And removing the edge point cloud of the intermediate obstacle from the edge point cloud of the first area to obtain the edge point cloud of the crop. Or determining the distance between the edge point cloud and the central sight line, determining the edge point cloud with the distance smaller than a preset distance threshold value as the edge point cloud of the obstacle, and removing the edge point cloud of the obstacle from the edge point cloud of the first area to obtain the edge point cloud of the crop. The preset distance threshold can be understood as the minimum distance between the crop in the first area and the central line of sight.
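One way to realise the distance-threshold variant is sketched below; the 0.5 m threshold stands in for the preset minimum distance between the crops and the central line of sight and is purely illustrative.

```python
import numpy as np

def remove_obstacle_points(points_2d, center_x, min_crop_dist=0.5):
    """Treat points closer to the central line of sight than min_crop_dist
    as obstacle points and keep only the presumed crop edge points."""
    dist_to_center = np.abs(points_2d[:, 0] - center_x)
    return points_2d[dist_to_center >= min_crop_dist]
```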
And S130, controlling the unmanned equipment to move according to the first navigation line.
Illustratively, the unmanned device is controlled to move along the first navigation line on a running level between crops on both sides in the first area according to the position coordinates of each track point in the first navigation line. In this embodiment, the unmanned device may be positioned by using RTK (Real Time Kinematic) carrier phase differential positioning technology to collect RTK position information of the unmanned device, and according to the RTK position information of the unmanned device and the first navigation line, the unmanned device is controlled to move, so that the unmanned device avoids crops on two sides and performs plant protection operations such as spraying and fertilizing on the crops on two sides in the moving process.
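A toy line-following step under the assumptions used above (line model x = a*y + b, planar RTK position, heading measured from the forward y axis); the control law and gains are illustrative and are not the application's controller.

```python
import numpy as np

def steering_command(rtk_xy, heading, nav_line, k_lat=1.0, k_head=0.5):
    """Return a steering angle (rad) that pulls the vehicle back onto the
    first navigation line while aligning its heading with the line."""
    a, b = nav_line
    x, y = rtk_xy
    cross_track = (x - (a * y + b)) / np.hypot(1.0, a)  # signed lateral error
    heading_error = np.arctan(a) - heading              # line heading minus vehicle heading
    return -k_lat * cross_track + k_head * heading_error
```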
In an embodiment, the unmanned device may plan a global working path of the unmanned device in advance based on a distribution map of crops and a running flat ground in the working area, and the unmanned device may control the unmanned device to perform moving work in the working area based on the RTK position information and the global working path. It should be noted that, because the accuracy of the distribution map is lower, the accuracy of the global operation path is also lower, if the unmanned device is controlled to move on the running flat ground only by means of the global operation path, the unmanned device will collide with the branches and leaves of the crops on two sides with a high probability, so that the navigation method provided by the embodiment can plan a high-accuracy navigation line, so that the unmanned device avoids the branches and leaves of the crops on two sides when moving based on the navigation line.
If the edge point cloud of an obstacle was removed from the edge point cloud of the first area when planning the first navigation line, it can be confirmed that an obstacle exists on the flat ground in the first area. To ensure the safety of the unmanned equipment, whether an obstacle exists in front of the unmanned equipment can be determined from the edge point cloud and the first navigation line. For example, the moving area swept by the unmanned device when it moves along the first navigation line is determined from the first navigation line and the width of the unmanned device; whether the obstacle lies within this moving area is then determined from the edge point cloud of the obstacle. If it does, it is determined that an obstacle exists in front of the unmanned device; otherwise, it is determined that no obstacle exists in front of it. Further, when an obstacle exists in front of the unmanned equipment, the unmanned equipment is controlled to move around the obstacle or to stop moving. For example, the gap between the obstacle and the crops can be determined from the edge point cloud of the obstacle and the edge point clouds of the crops on both sides. If the gap is larger than the width of the unmanned device, an obstacle-avoidance path passing through the gap between the obstacle's edge point cloud and the crops' edge point cloud can be planned, and the unmanned device is controlled to move around the obstacle along this path. If the gap is smaller than or equal to the width of the unmanned device, the device stops moving, and a prompt can be sent to the staff to remove the obstacle.
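A minimal corridor check matching the example in the paragraph above; the obstacle points and line coefficients follow the earlier sketches, and the vehicle width is a hypothetical parameter.

```python
import numpy as np

def obstacle_in_moving_area(obstacle_points, nav_line, vehicle_width):
    """True if any obstacle point lies inside the strip the unmanned device
    sweeps while following the first navigation line x = a*y + b."""
    a, b = nav_line
    # perpendicular distance of each obstacle point from the navigation line
    offset = np.abs(obstacle_points[:, 0] - (a * obstacle_points[:, 1] + b))
    offset /= np.hypot(1.0, a)
    return bool(np.any(offset <= vehicle_width / 2.0))
```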
On the basis of the embodiment, the embodiment also provides another navigation method of the unmanned equipment, and the navigation method aims at fusing the binocular vision navigation method provided by the embodiment with the monocular vision navigation method, so that the navigation precision is further improved. Fig. 7 is a flowchart of another navigation method of an unmanned device according to an embodiment of the present application. As shown in fig. 7, the navigation method of the unmanned device specifically includes:
s210, acquiring a first image acquired by a binocular vision sensor.
S220, constructing an edge point cloud of the first area based on the first image, and determining a first navigation line of the unmanned equipment according to the edge point cloud.
Steps S210 to S220 may refer to steps S110 to S120.
S230, acquiring a second image acquired by a monocular vision sensor, wherein the monocular vision sensor is arranged in the middle of the unmanned equipment, and the content of the second image comprises a fourth area in front of the unmanned equipment.
Illustratively, the monocular vision sensor is a third camera. Referring to fig. 2, the third camera 14 is mounted in the middle of the unmanned device and photographs the area in front of the unmanned device to obtain a second image. The third camera may take the second image at a certain time interval or travel distance, or may record a video and extract the second image from it.
In the present embodiment, the fourth area can be understood as a work area in front of the unmanned device that falls within the photographing area of the monocular vision sensor. The monocular vision sensor and the binocular vision sensor can synchronously shoot the second image and the first image so as to ensure that the content of the first image and the second image contains approximately the same operation area, and further ensure the precision of generating the navigation line by later fusion.
S240, determining a second navigation line of the unmanned equipment according to the second image.
The second navigation line is generated by adopting a monocular visual navigation method. The second image is illustratively segmented into a crop area and a non-crop area by an adaptive threshold segmentation algorithm, and a second navigation line is determined from a centerline of the non-crop area.
Since the second image is generated by the third camera obliquely photographing the operation area, the crop area and the non-crop area in the second image are distorted by the photographing angle, which affects the accuracy of the generated second navigation line. To address this, the second image may be converted into a bird's eye view of the fourth area, and the second navigation line determined from that bird's eye view. Fig. 8 is a flowchart of determining a second navigation line according to an embodiment of the present application. As shown in fig. 8, the step of determining the second navigation line specifically includes S2401-S2402:
S2401, converting the second image to obtain a bird's eye view of the fourth area.
Illustratively, the second image is converted from the oblique photographing view angle to a top-down view angle through a pre-calibrated conversion matrix or a view-angle conversion algorithm, so as to obtain a bird's eye view of the fourth area. The conversion matrix can be understood as the relative transformation matrix between the world coordinate system when the third camera captures the second image and the world coordinate system when the second image would be captured from directly above (a nadir view).
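A sketch of the view-angle conversion with OpenCV; the four ground-point correspondences would come from an offline calibration of the third camera and are assumptions here.

```python
import cv2
import numpy as np

def to_birds_eye(second_image, src_pts, dst_pts, out_size):
    """Warp the obliquely captured second image into a bird's eye view.

    src_pts: four pixel corners of a ground rectangle in the second image;
    dst_pts: where those corners should land in the top-down view;
    out_size: (width, height) of the output bird's eye view in pixels.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(second_image, H, out_size)
```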
S2402, determining a fifth area in the aerial view, and determining a second navigation line of the unmanned equipment according to the fifth area.
In an embodiment, the bird's eye view is segmented by a preset semantic segmentation model to extract the fifth area in the bird's eye view, where the fifth area is the crop area in the bird's eye view. The data set used for training the semantic segmentation model comprises a plurality of sample images and the pixel coordinates of the crop areas and non-crop areas in the sample images. In another embodiment, the bird's eye view may instead be segmented by an adaptive threshold segmentation algorithm to extract the crop areas. Fig. 9 is a schematic view of a crop area in a bird's eye view provided by an embodiment of the present application. As shown in fig. 9, after the crop areas 25 are extracted from the bird's eye view 24, they are divided into left crop areas 25 and right crop areas 25 based on the center line of the bird's eye view, and the pixel coordinates of the center point 26 of each crop area are determined from the pixel coordinates of that crop area 25. A first center line 27 is fitted from the pixel coordinates of the center points 26 of the left crop areas 25, and a second center line 28 is fitted from the pixel coordinates of the center points 26 of the right crop areas 25. The center line between the first center line 27 and the second center line 28 gives the pixel coordinates of the second navigation line, and the position information of the second navigation line in the world coordinate system is determined from its pixel coordinates and the scale factor of the monocular vision sensor, where the scale factor represents the relationship between pixel distance and actual distance in the bird's eye view.
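A rough monocular sketch in which adaptive thresholding stands in for the semantic segmentation model; the threshold parameters, the assumption that crops appear brighter than the ground, and the per-row centroid shortcut (instead of fitting the two center lines explicitly) are all illustrative.

```python
import cv2
import numpy as np

def second_navigation_line(birds_eye, scale):
    """Return waypoints (in metres) of the second navigation line, taken as
    the midline between the left and right crop masks in the bird's eye view.
    'scale' is the factor mapping pixel distance to actual distance."""
    gray = cv2.cvtColor(birds_eye, cv2.COLOR_BGR2GRAY)
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)
    h, w = mask.shape
    mid = w // 2
    waypoints = []
    for row in range(h):
        left_cols = np.flatnonzero(mask[row, :mid])
        right_cols = np.flatnonzero(mask[row, mid:]) + mid
        if len(left_cols) and len(right_cols):
            center_col = (left_cols.mean() + right_cols.mean()) / 2.0
            waypoints.append([center_col, row])
    return np.asarray(waypoints, dtype=np.float64) * scale
```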
It should be noted that, in this embodiment, the scale factor of the binocular vision sensor and the scale factor of the monocular vision sensor are the same, that is, the first navigation line generated by the binocular vision navigation method and the second navigation line generated by the monocular vision navigation method are in the same scale, and may be fused. If the scale factors of the binocular vision sensor and the monocular vision sensor are different, the scales of the binocular vision sensor and the monocular vision sensor need to be unified before fusion so as to ensure the fusion precision.
S250, fusing the first navigation line and the second navigation line to obtain a third navigation line.
The third navigation line is generated by combining the binocular vision navigation method and the monocular vision navigation method. For example, the center line between the first navigation line and the second navigation line may be taken as the third navigation line. However, the first navigation line has higher precision than the second navigation line, and if the center line between them were simply taken as the third navigation line, the precision of the third navigation line might be lower than that of the first navigation line and no improvement would be obtained. Instead, the third navigation line may be determined from the first navigation line with a corresponding first confidence and the second navigation line with a corresponding second confidence, where the first confidence characterizes the accuracy of the first navigation line and the second confidence characterizes the accuracy of the second navigation line. Let the first navigation line be l_s(x_s, y_s), the second navigation line be l_a(x_a, y_a), and the third navigation line be l_r(x_r, y_r); then the third navigation line may be expressed as l_r(x_r, y_r) = c_s * l_s(x_s, y_s) + c_a * l_a(x_a, y_a), where c_s is the first confidence and c_a is the second confidence.
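The fusion reduces to a confidence-weighted sum; a sketch, assuming both lines have been resampled to matching waypoints in the same scale:

```python
import numpy as np

def fuse_navigation_lines(l_s, l_a, c_s, c_a):
    """l_r = c_s * l_s + c_a * l_a, applied point-wise to matched waypoints
    of the first (binocular) and second (monocular) navigation lines."""
    assert np.isclose(c_s + c_a, 1.0), "confidences are expected to sum to 1"
    return c_s * np.asarray(l_s) + c_a * np.asarray(l_a)
```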
In this embodiment, the first confidence and the second confidence may be set in advance according to texture features of the crop area and the non-crop area in the work area. It can be appreciated that when the difference between the texture features of the crop area and the non-crop area is large, it indicates that the accuracy of the second navigation line generated by the monocular visual navigation method is high, and then the first confidence level and the second confidence level may be set to 0.5 at this time. When the difference between the crop area and the non-crop area is small, it indicates that the accuracy of the second navigation line generated by the monocular visual navigation method is low, and the first confidence and the second confidence may be set to 0.7 and 0.3 or 0.8 and 0.2, respectively, at this time.
In addition, the first confidence may be determined, while planning the first navigation line by the binocular vision navigation method, from the data used to plan it. The specific implementation process is as follows: the two-dimensional edge points, generated by projecting the edge point cloud onto the first plane, are divided into the grids of the second area and the third area according to the central line of sight of the binocular vision sensor (this step may refer to steps S1201-S1203); the first confidence is then determined from the total number of grids in the second area and the third area and the total number of two-dimensional edge points closest to the central line of sight across those grids. Specifically, as long as a grid contains two-dimensional edge points, one point is taken from it for fitting an edge line, so the total number of such points is obtained by adding the number of grids containing two-dimensional edge points in the second area to the number of such grids in the third area; dividing this total by twice the total number of grids gives the first confidence. The expression of the first confidence is c_s = (n_l + n_r) / (2 * n_s), where n_l and n_r are the numbers of grids containing two-dimensional edge points in the second area and in the third area respectively, and n_s is the total number of grids in the second and third areas. It can be appreciated that the larger n_l and n_r are, the more two-dimensional edge points are used to fit the first and second edge lines, the higher the accuracy of the first navigation line planned from those edge lines, and accordingly the higher the first confidence.
Likewise, the second confidence may be determined, during planning of the second navigation line by the monocular vision navigation method, from the data used to plan it. The second confidence is determined based on the reliability output when the fifth area is extracted by the preset semantic segmentation model; for example, the second confidence may be obtained by multiplying that reliability by a coefficient that can be set according to the actual situation. When the semantic segmentation model extracts the crop areas in the bird's eye view, it outputs a reliability for each crop area, which characterizes the accuracy of the model's prediction. Accordingly, the higher the reliability, the higher the accuracy of the crop area, the higher the accuracy of the second navigation line planned from it, and the higher the second confidence. When reliabilities are output for a plurality of crop areas, their average is taken and the second confidence is determined from that average.
And S260, controlling the unmanned equipment to move according to the third navigation line.
Exemplary, the unmanned equipment is positioned by RTK (Real Time Kinematic) carrier phase differential positioning technology to acquire RTK position information of the unmanned equipment, and the unmanned equipment is controlled to move according to the RTK position information of the unmanned equipment and the third navigation line, so that the unmanned equipment avoids crops on two sides in the moving process and performs plant protection operations such as spraying, fertilizing and the like on the crops on two sides.
In summary, according to the navigation method of the unmanned equipment provided by the embodiment of the application, the binocular vision sensors arranged on two sides of the unmanned equipment are used for shooting the first area in front of the unmanned equipment to generate the first image, the edge point cloud of the first area is constructed according to the first image, the first navigation line of the unmanned equipment is determined according to the edge point cloud, and the unmanned equipment is controlled to move according to the first navigation line. Through the technical means, the edge point cloud of the first area constructed by the first image acquired by the binocular vision sensor can represent the edge information of crops on two sides in the first area, so that a first navigation line capable of effectively avoiding the crops on two sides can be planned according to the edge point cloud of the first area, unmanned equipment is navigated according to the first navigation line to avoid touching the crops during movement, and the safety of the unmanned equipment and the crops is protected. The accuracy of the edge point cloud constructed based on the first image is not affected by region textures, the accuracy of the first navigation line generated based on the edge point cloud is high and is not limited by scenes, the problems of low visual navigation accuracy and limited use scenes in the prior art are solved, the applicability of unmanned equipment is improved, and popularization and use of the unmanned equipment are facilitated.
On the basis of the above embodiments, fig. 10 is a schematic structural diagram of a navigation device of an unmanned apparatus according to an embodiment of the present application. Referring to fig. 10, the navigation device of the unmanned device provided in this embodiment specifically includes: a first image acquisition module 31, a first navigation line generation module 32, and a movement control module 33.
The first image acquisition module is configured to acquire a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of the unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment;
the first navigation line generation module is configured to construct an edge point cloud of the first area based on the first image, determine a first navigation line of the unmanned equipment according to the edge point cloud, and the edge point cloud is used for representing edge information of crops at two sides in the first area;
and the movement control module is configured to control the unmanned equipment to move according to the first navigation line.
On the basis of the above embodiment, the first navigation line generation module 32 includes: an edge point generation sub-module configured to project the edge point cloud onto a first plane to obtain two-dimensional edge points, wherein the first plane is the plane where the unmanned equipment is located; an edge line generation sub-module configured to fit the edge lines of the first area from the two-dimensional edge points; and a first navigation line generation sub-module configured to determine a first navigation line of the unmanned device according to the edge lines.
On the basis of the above embodiment, the edge line generation submodule includes: a region dividing unit configured to divide the first region into a second region and a third region according to a central line of sight of the binocular vision sensor; a grid dividing unit configured to divide the second region and the third region into a plurality of grids, respectively, based on a preset interval, and determine two-dimensional edge points falling into the grids; and the edge line generating unit is configured to obtain a first edge line based on fitting of two-dimensional edge points closest to the central sight line in each grid of the second area, and obtain a second edge line based on fitting of two-dimensional edge points closest to the central sight line in each grid of the third area.
On the basis of the above embodiment, the first navigation line generation submodule includes: a first navigation line generation unit configured to determine the center line between the first edge line and the second edge line as the first navigation line of the unmanned device.
On the basis of the above embodiment, the navigation device further includes: the second image acquisition module is configured to acquire a second image acquired by a monocular vision sensor, the monocular vision sensor is arranged in the middle of the unmanned equipment, and the content of the second image comprises a fourth area in front of the unmanned equipment; a second navigation line generation module configured to determine a second navigation line of the unmanned device from the second image; and the third navigation line generation module is configured to fuse the first navigation line and the second navigation line to obtain a third navigation line, and the third navigation line is used for guiding the unmanned equipment to move.
On the basis of the above embodiment, the second navigation line generation module includes: an image conversion sub-module configured to convert the second image to obtain a bird's eye view of the fourth area; and a second navigation line generation sub-module configured to determine a fifth area in the bird's eye view, and determine a second navigation line of the unmanned device according to the fifth area.
On the basis of the above embodiment, the second navigation line generation submodule includes: a fifth area extraction unit configured to segment the bird's eye view by a preset semantic segmentation model to extract the fifth area in the bird's eye view.
On the basis of the above embodiment, the third navigation line generation module includes: a third navigation line generation sub-module configured to determine a third navigation line according to the first navigation line and the corresponding first confidence and the second navigation line and the corresponding second confidence.
On the basis of the above embodiment, the edge line generation submodule includes: an edge point dividing unit configured to divide two-dimensional edge points into respective grids of the second area and the third area according to a central line of sight of the binocular vision sensor, the two-dimensional edge points being generated based on projection of an edge point cloud to the first plane; correspondingly, the third navigation line generation submodule includes: and a first confidence determining unit configured to determine a first confidence based on the total number of grids in the second region and the third region and the total number of two-dimensional edge points closest to the center line of sight in each grid.
On the basis of the above embodiment, the third navigation line generation submodule includes: a second confidence determining unit configured to determine the second confidence based on the reliability output when the fifth area is extracted by the preset semantic segmentation model.
On the basis of the above embodiment, the movement control module 33 includes: an obstacle detection sub-module configured to determine whether an obstacle exists in front of the unmanned device according to the edge point cloud and the first navigation line; and a first movement control sub-module configured to control the unmanned equipment to move around the obstacle or stop moving when an obstacle exists in front of the unmanned equipment.
On the basis of the above embodiment, the movement control module includes: and the second movement control sub-module is configured to control the unmanned equipment to move according to the RTK position information of the unmanned equipment and the first navigation line.
In the above-mentioned navigation device for the unmanned aerial vehicle provided by the embodiment of the application, the binocular vision sensors installed on two sides of the unmanned aerial vehicle are used for shooting the first area in front of the unmanned aerial vehicle to generate the first image, the edge point cloud of the first area is constructed according to the first image, the first navigation line of the unmanned aerial vehicle is determined according to the edge point cloud, and the unmanned aerial vehicle is controlled to move according to the first navigation line. Through the technical means, the edge point cloud of the first area constructed by the first image acquired by the binocular vision sensor can represent the edge information of crops on two sides in the first area, so that a first navigation line capable of effectively avoiding the crops on two sides can be planned according to the edge point cloud of the first area, unmanned equipment is navigated according to the first navigation line to avoid touching the crops during movement, and the safety of the unmanned equipment and the crops is protected. The accuracy of the edge point cloud constructed based on the first image is not affected by region textures, the accuracy of the first navigation line generated based on the edge point cloud is high and is not limited by scenes, the problems of low visual navigation accuracy and limited use scenes in the prior art are solved, the applicability of unmanned equipment is improved, and popularization and use of the unmanned equipment are facilitated.
The navigation device of the unmanned equipment provided by the embodiment of the application can be used for executing the navigation method of the unmanned equipment provided by the embodiment of the application, and has corresponding functions and beneficial effects.
Fig. 11 is a schematic structural diagram of an unmanned device according to an embodiment of the present application. Referring to Fig. 11, the unmanned device includes: a processor 41, a memory 42, a communication device 43, an input device 44 and an output device 45. The number of processors 41 in the unmanned device may be one or more, and the number of memories 42 in the unmanned device may be one or more. The processor 41, the memory 42, the communication device 43, the input device 44 and the output device 45 of the unmanned device may be connected by a bus or in another manner.
The memory 42 is a computer-readable storage medium and may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the navigation method of the unmanned device according to any embodiment of the present application (for example, the first image acquisition module 31, the first navigation line generation module 32 and the movement control module 33 in the navigation apparatus of the unmanned device). The memory 42 may mainly include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for functions, and the storage data area may store data created according to the use of the device, etc. In addition, the memory 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory may further include memory remotely located with respect to the processor, and the remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication device 43 is used for data transmission.
By running the software programs, instructions and modules stored in the memory 42, the processor 41 executes various functional applications and data processing of the device, that is, implements the navigation method of the unmanned device described above.
The input device 44 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output device 45 may include a display device such as a display screen.
The unmanned equipment provided by the embodiment can be used for executing the navigation method of the unmanned equipment, and has corresponding functions and beneficial effects.
The embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing a method of navigating an unmanned device, the method of navigating an unmanned device comprising: acquiring a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment; constructing an edge point cloud of a first area based on the first image, and determining a first navigation line of unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area; and controlling the unmanned equipment to move according to the first navigation line.
A storage medium may be any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROMs, floppy disks or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., hard disks) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the computer-executable instructions contained in the storage medium provided by the embodiment of the application are not limited to the navigation method operations described above, and may also be used to perform related operations in the navigation method of the unmanned device provided by any embodiment of the application.
The navigation device, the storage medium and the unmanned equipment provided in the above embodiments may execute the navigation method of the unmanned equipment provided in any embodiment of the present application; for technical details not described in detail in the above embodiments, reference may be made to the navigation method of the unmanned equipment provided in any embodiment of the present application.
The foregoing description is only of the preferred embodiments of the application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous modifications, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, while the application has been described in connection with the above embodiments, the application is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit of the application, the scope of which is set forth in the following claims.

Claims (15)

1. A method of navigating an unmanned device, comprising:
acquiring a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of unmanned equipment, and the content of the first image comprises a first area in front of the unmanned equipment;
constructing an edge point cloud of the first area based on the first image, and determining a first navigation line of the unmanned equipment according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area;
and controlling the unmanned equipment to move according to the first navigation line.
2. The method for navigating the unmanned device according to claim 1, wherein the determining the first navigation line of the unmanned device according to the edge point cloud comprises:
projecting the edge point cloud to a first plane to obtain a two-dimensional edge point, wherein the first plane is a plane in which the unmanned equipment is located;
obtaining an edge line of the first area according to the fitting of the two-dimensional edge points;
and determining a first navigation line of the unmanned equipment according to the edge line.
3. The method for navigating the unmanned device according to claim 2, wherein the obtaining the edge line of the first region according to the two-dimensional edge point fitting comprises:
dividing the first area into a second area and a third area according to the central sight line of the binocular vision sensor;
dividing the second area and the third area into a plurality of grids respectively based on preset intervals, and determining two-dimensional edge points falling into the grids;
and fitting a first edge line based on two-dimensional edge points closest to the central sight line in each grid of the second area, and fitting a second edge line based on two-dimensional edge points closest to the central sight line in each grid of the third area.
4. A method of navigating an unmanned device according to claim 3, wherein the determining a first navigation line of the unmanned device from the edge line comprises:
a centerline between the first edge line and the second edge line is determined as a first navigation line of the unmanned device.
5. A method of navigating an unmanned device according to any one of claims 1 to 4, wherein the method further comprises:
acquiring a second image acquired by a monocular vision sensor, wherein the monocular vision sensor is arranged in the middle of the unmanned equipment, and the content of the second image comprises a fourth area in front of the unmanned equipment;
determining a second navigation line of the unmanned device according to the second image;
and fusing the first navigation line and the second navigation line to obtain a third navigation line, wherein the third navigation line is used for guiding the unmanned equipment to move.
6. The method of claim 5, wherein determining a second navigation line for the unmanned device from the second image comprises:
converting the second image to obtain a bird's eye view of the first area;
and determining a fifth region in the bird's eye view, and determining a second navigation line of the unmanned equipment according to the fifth region.
7. The method of navigating the unmanned device of claim 6, wherein the determining a fifth region in the bird's eye view comprises:
and segmenting the bird's eye view through a preset semantic segmentation model to extract the fifth region in the bird's eye view.
8. The method for navigating the unmanned device according to claim 5, wherein the fusing the first navigation line and the second navigation line to obtain a third navigation line comprises:
and determining a third navigation line according to the first navigation line and the corresponding first confidence level and the second navigation line and the corresponding second confidence level.
9. The method of navigation of an unmanned device of claim 8, further comprising, prior to the determining a third navigation line according to the first navigation line and the corresponding first confidence level and the second navigation line and the corresponding second confidence level:
dividing two-dimensional edge points into grids of the second area and the third area according to the central sight line of the binocular vision sensor, wherein the two-dimensional edge points are generated based on the projection of the edge point cloud to a first plane;
the first confidence is determined based on a total number of grids in the second region and the third region, and a total number of two-dimensional edge points in each of the grids closest to the center line of sight.
10. The method of navigation of an unmanned device of claim 8, further comprising, prior to the determining a third navigation line according to the first navigation line and the corresponding first confidence level and the second navigation line and the corresponding second confidence level:
and determining the second confidence level based on the confidence output when the fourth region is extracted by the preset semantic segmentation model.
11. The method of claim 1, wherein controlling the movement of the unmanned device according to the first navigation line comprises:
determining whether an obstacle exists in front of the unmanned device according to the edge point cloud and the first navigation line;
and controlling the unmanned equipment to move around the obstacle or stop moving under the condition that the obstacle exists in front of the unmanned equipment.
12. The method of claim 1, wherein controlling the movement of the unmanned device according to the first navigation line comprises:
and controlling the unmanned equipment to move according to the RTK position information of the unmanned equipment and the first navigation line.
13. A navigation device for an unmanned device, comprising:
a first image acquisition module configured to acquire a first image acquired by a binocular vision sensor, wherein the binocular vision sensor is arranged on two sides of the unmanned device, and the content of the first image comprises a first area in front of the unmanned device;
a first navigation line generation module configured to construct an edge point cloud of the first area based on the first image, and determine a first navigation line of the unmanned device according to the edge point cloud, wherein the edge point cloud is used for representing edge information of crops at two sides in the first area;
and a movement control module configured to control the unmanned device to move according to the first navigation line.
14. An unmanned device, comprising:
one or more processors;
memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of navigating an unmanned device of any of claims 1-12.
15. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the method of navigation of the unmanned device of any of claims 1-12.
CN202310944029.4A 2023-07-28 2023-07-28 Navigation method and device of unmanned equipment, unmanned equipment and storage medium Pending CN116839593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310944029.4A CN116839593A (en) 2023-07-28 2023-07-28 Navigation method and device of unmanned equipment, unmanned equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116839593A true CN116839593A (en) 2023-10-03

Family

ID=88168984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310944029.4A Pending CN116839593A (en) 2023-07-28 2023-07-28 Navigation method and device of unmanned equipment, unmanned equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116839593A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination