WO2021078003A1 - Obstacle avoidance method and obstacle avoidance device for unmanned vehicle, and unmanned vehicle - Google Patents

Obstacle avoidance method and obstacle avoidance device for unmanned vehicle, and unmanned vehicle

Info

Publication number
WO2021078003A1
WO2021078003A1 (PCT/CN2020/118850)
Authority
WO
WIPO (PCT)
Prior art keywords
detection range
obstacle
depth image
detection
obstacle avoidance
Prior art date
Application number
PCT/CN2020/118850
Other languages
English (en)
French (fr)
Inventor
郑欣
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Publication of WO2021078003A1 publication Critical patent/WO2021078003A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots

Definitions

  • The present invention relates to the technical field of unmanned vehicles, and in particular to an obstacle avoidance method and obstacle avoidance device for an unmanned vehicle, and to an unmanned vehicle.
  • An unmanned vehicle is a vehicle that operates without personnel on board. Unmanned vehicles can generally be divided into the following types: the Unmanned Ground Vehicle (UGV), which operates on the ground; the Unmanned Aerial Vehicle (UAV), commonly called a drone; the Unmanned Surface Vehicle (USV), which operates on the water surface; and the Unmanned Underwater Vehicle (UUV), which operates underwater. Unmanned vehicles are usually controlled by remote control, guidance, or autonomous driving, do not need to carry occupants, and can be used for scientific research, military, and leisure and entertainment purposes.
  • UGV: Unmanned Ground Vehicle
  • UAV: Unmanned Aerial Vehicle
  • USV: Unmanned Surface Vehicle
  • UUV: Unmanned Underwater Vehicle
  • Owing to their portability and strong spatial mobility, unmanned vehicles have become increasingly widely used in surveying and mapping, search and rescue, real estate, and agriculture, and are also a consumer favorite for aerial photography and entertainment.
  • The popularization of unmanned vehicles requires that they work safely and reliably in various environments. Increasing their safety and improving their environmental adaptability, so that unmanned vehicles can recognize surrounding environmental information and effectively avoid obstacles, is therefore the focus of unmanned vehicle research and a technical difficulty that urgently needs to be overcome.
  • The related technology has at least the following problems:
  • unmanned vehicles use depth sensors for obstacle detection and path planning;
  • because the braking performance is constant, an upper speed limit must be set to ensure timely avoidance or braking when an obstacle is detected;
  • the planned route cannot be computed well when obstacle detection is incomplete, resulting in inaccurate path planning results.
  • the embodiment of the present invention provides an obstacle avoidance method for an unmanned vehicle, an obstacle avoidance device and an unmanned vehicle, which can solve the technical problems of incomplete obstacle detection and low accuracy of path planning results.
  • an obstacle avoidance method for unmanned vehicles includes:
  • the first detection distance is determined according to the maximum detection range of the unmanned vehicle
  • an obstacle avoidance path is planned according to the second detection range and the maximum detection range.
  • the planning an obstacle avoidance path according to the second detection range and the maximum detection range includes:
  • the second detection range is extended to a preset detection range, the preset detection range corresponds to a preset detection distance, the preset detection distance is greater than the second detection distance, and the preset detection distance is less than the first detection distance.
  • an obstacle avoidance path is planned.
  • the planning an obstacle avoidance path according to the depth image includes:
  • the depth image includes a three-dimensional point cloud of the obstacle.
  • the method further includes:
  • Filtering and/or smoothing is performed on the depth image.
  • Before planning the obstacle avoidance path, the method further includes: decelerating the unmanned vehicle to a preset safe speed.
  • the decelerating the unmanned vehicle to a preset safe speed includes:
  • the preset acceleration is calculated by the following formula
  • V is the current speed
  • V max is the preset safe speed
  • d is the second detection distance
  • a is the preset acceleration
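  • The formula itself is not reproduced in this text. Assuming the standard uniform-deceleration kinematic relation V_max^2 = V^2 - 2*a*d (an assumption consistent with the variable definitions above, not the patent's verbatim formula), the preset acceleration can be sketched as:

```python
def preset_deceleration(v, v_max, d):
    """Deceleration needed to slow from the current speed v to the preset
    safe speed v_max within the second detection distance d.

    Assumes the uniform-deceleration relation v_max**2 = v**2 - 2*a*d,
    i.e. a = (v**2 - v_max**2) / (2*d); the exact formula in the patent
    is not reproduced here.
    """
    if d <= 0:
        raise ValueError("second detection distance must be positive")
    return (v**2 - v_max**2) / (2 * d)

# e.g. slowing from 10 m/s to a 4 m/s safe speed over 7 m
a = preset_deceleration(10.0, 4.0, 7.0)  # 6.0 m/s^2
```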
  • An obstacle avoidance device applied to an unmanned vehicle includes: a first detection distance determining module, configured to determine a first detection distance according to the maximum detection range of the unmanned vehicle;
  • a second detection distance determining module configured to determine a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance
  • a second detection range determining module configured to determine a second detection range according to the second detection distance
  • the obstacle avoidance path planning module is configured to plan an obstacle avoidance path according to the second detection range and the maximum detection range when an obstacle is detected in the second detection range.
  • the obstacle avoidance path planning module includes a detection range expansion unit, a depth image acquisition unit, and an obstacle avoidance path planning unit;
  • the detection range expansion unit is used to expand the second detection range to a maximum detection range
  • the depth image acquisition unit is used to acquire a depth image of the obstacle within the maximum detection range
  • the obstacle avoidance path planning unit is used for planning an obstacle avoidance path according to the depth image.
  • an unmanned vehicle includes:
  • Unmanned vehicle main body
  • a depth sensor which is arranged on the main body of the unmanned vehicle and is used to collect depth images of obstacles and plan obstacle avoidance paths;
  • a processor, arranged in the main body of the unmanned vehicle and communicatively connected to the depth sensor; and,
  • a memory connected in communication with the processor; wherein,
  • the memory stores instructions executable by the processor, and the instructions are executed by the processor, so that the processor can execute the above-mentioned method.
  • The provided obstacle avoidance method for an unmanned vehicle determines the first detection distance according to the maximum detection range of the unmanned vehicle, and then determines, according to the first detection distance, the second detection distance and the second detection range corresponding to the second detection distance. Since the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle is obtained from the second detection range and the maximum detection range, and a more accurate obstacle avoidance path is determined based on that relatively complete image information.
  • FIG. 1 is a schematic diagram of an application environment of an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of an obstacle avoidance method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an obstacle avoidance method provided by one of the embodiments of the present invention.
  • FIG. 4 is a schematic structural diagram of an obstacle avoidance method provided by another embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of one embodiment of S40 in FIG. 2;
  • FIG. 6 is a schematic structural diagram of an obstacle avoidance method provided by another embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another embodiment of S40 in FIG. 2;
  • FIG. 8 is a schematic structural diagram of an obstacle avoidance method provided by still another embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of one embodiment of S46 in FIG. 7;
  • FIG. 10 is a schematic flowchart of another embodiment of S46 in FIG. 7;
  • FIG. 11 is a structural block diagram of an obstacle avoidance device for unmanned vehicles provided by an embodiment of the present invention.
  • Fig. 12 is a structural block diagram of an unmanned vehicle provided by an embodiment of the present invention.
  • the embodiment of the present invention provides an obstacle avoidance method and device for an unmanned vehicle.
  • The method and device determine the first detection distance according to the maximum detection range of the unmanned vehicle, and then determine, according to the first detection distance, the second detection distance and the second detection range corresponding to the second detection distance.
  • Since the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle is obtained from the second detection range and the maximum detection range, and a more accurate obstacle avoidance path is determined based on that relatively complete image information.
  • the following examples illustrate the application environment of the unmanned vehicle obstacle avoidance method and device.
  • FIG. 1 is a schematic diagram of an application environment of an unmanned vehicle obstacle avoidance system provided by an embodiment of the present invention; as shown in FIG. 1, the application scenario includes an unmanned vehicle 10, a wireless network 20, an intelligent terminal 30 and a user 40.
  • the user 40 can operate the smart terminal 30 to control the unmanned vehicle 10 through the wireless network 20.
  • The unmanned vehicle 10 may be driven by any type of power and includes, but is not limited to: unmanned ground vehicles, such as unmanned cars and unmanned robots; unmanned aerial vehicles, such as rotary-wing, fixed-wing, umbrella-wing, and flapping-wing unmanned vehicles and helicopter models; unmanned surface vehicles operating on the water; and unmanned underwater vehicles operating under water.
  • In the following embodiments, a drone is taken as an example for presentation.
  • the unmanned vehicle 10 may have corresponding volume or power according to actual needs, so as to provide load capacity, flight speed, and flight range that can meet the needs of use.
  • One or more functional modules may be added to the unmanned vehicle 10 to enable the unmanned vehicle 10 to realize corresponding functions.
  • the unmanned vehicle 10 is provided with at least one of a depth sensor, a distance sensor, a magnetometer, a GPS navigator, and a vision sensor.
  • the unmanned vehicle 10 is provided with an information receiving device, which receives and processes the information collected by the above-mentioned at least one sensor.
  • the unmanned vehicle 10 includes at least one main control chip, which serves as the control core of the movement and data transmission of the unmanned vehicle, and integrates one or more modules to execute the corresponding logic control program.
  • the main control chip may include an unmanned vehicle obstacle avoidance device 50 for selecting and processing obstacle avoidance paths.
  • the smart terminal 30 may be any type of smart device used to establish a communication connection with the unmanned vehicle 10, such as a mobile phone, a tablet computer, or a smart remote control.
  • the smart terminal 30 may be equipped with one or more different user 40 interaction devices to collect instructions from the user 40 or display and feedback information to the user 40.
  • These interaction devices include but are not limited to: buttons, display screens, touch screens, speakers, and remote control joysticks.
  • For example, the smart terminal 30 may be equipped with a touch screen, through which remote control instructions from the user 40 for the unmanned vehicle 10 are received and the image information obtained by the unmanned vehicle is displayed to the user 40. The user 40 can also switch the image information currently displayed on the screen through the touch screen.
  • the unmanned vehicle 10 and the intelligent terminal 30 can also be integrated with the existing image visual processing technology to further provide more intelligent services.
  • the unmanned vehicle 10 may collect images through a dual-lens camera, and the smart terminal 30 may analyze the images, so as to realize the gesture control of the user 40 on the unmanned vehicle 10.
  • the wireless network 20 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network, or a combination thereof located in different signal frequency bands.
  • Fig. 2 is an embodiment of an obstacle avoidance method for an unmanned vehicle provided by an embodiment of the present invention. As shown in Figure 2, the method for avoiding obstacles for unmanned vehicles includes the following steps:
  • S10 Determine the first detection distance according to the maximum detection range of the unmanned vehicle.
  • the maximum detection range may be acquired by a depth sensor onboard the unmanned vehicle.
  • the maximum detection range may be the physical detection limit range of the depth sensor, and may also be the maximum stable working range in the actual application process.
  • the depth sensor is a general term for sensors that can obtain the depth of the environment, and different sensors are selected for different environments and required accuracy.
  • the depth sensor is used to detect obstacle information on the path of the unmanned vehicle.
  • the specific implementation method takes the depth sensor Kinect as an example.
  • The system architecture of Kinect includes a physical layer, a driver and interface layer, and an application layer.
  • Through the SDK and APIs provided by Kinect, multiple types of functions can be achieved, chiefly the acquisition of depth data: Kinect projects an infrared speckle pattern into the environment through its infrared transmitter, and the infrared receiver captures the pattern reflected by obstacles in the environment.
  • The received data is processed to compute the depth of each pixel in the image, thereby generating a three-dimensional depth image.
  • The depth sensor may also be a binocular camera, an RGB-D camera, a structured-light camera, a time-of-flight (TOF) camera, a lidar, and so on.
  • the first detection distance can be determined according to the maximum detection range of the depth sensor.
  • the first detection distance may also be obtained in other ways, for example, according to the maximum detection angle corresponding to the maximum detection range. As long as the maximum detection range of the depth sensor is determined, the acquired first detection distance is a unique value.
  • S20 Determine a second detection distance according to the first detection distance, where the second detection distance is smaller than the first detection distance.
  • The first detection distance is shortened by a preset distance to obtain the second detection distance; the preset distance may be determined based on a pre-stored obstacle size, for example by making the preset distance equal to or greater than the size of the pre-stored obstacle.
  • The unmanned vehicle stores the sizes of obstacles detected along its historical driving paths in a designated memory, and the pre-stored obstacle size can be obtained from the obstacle sizes stored in that memory.
  • For example, the average of the obstacle sizes stored in the designated memory can be calculated, and the pre-stored obstacle size set equal to that average; that is, the preset distance is made equal to or greater than the average obstacle size.
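  • The step above can be sketched as follows; the helper name and the optional extra margin are illustrative choices, since the patent does not specify an implementation:

```python
def second_detection_distance(d1, stored_obstacle_sizes, margin=0.0):
    """Shorten the first detection distance D1 by a preset distance to get D2.

    The preset distance is taken as the average of previously detected
    obstacle sizes (plus an optional extra margin), so that it is equal
    to or greater than the average obstacle size. Hypothetical sketch.
    """
    avg = sum(stored_obstacle_sizes) / len(stored_obstacle_sizes)
    preset = avg + margin
    return d1 - preset

# average stored size is 2.0 m, so D2 = 10.0 - 2.0 = 8.0 m
d2 = second_detection_distance(10.0, [1.0, 2.0, 3.0])
```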
  • The first detection distance D1 is shortened by a preset distance ΔD to obtain the second detection distance D2.
  • the direction in which the first detection distance D1 is shortened by the preset distance is opposite to the extension direction of the first vertical line.
  • the second detection range can be uniquely determined according to the second detection distance.
  • The first detection distance D1 is shortened by the preset distance ΔD to obtain the second detection distance D2.
  • A line is drawn through the end of the second detection distance D2, perpendicular to it; this line intersects the edge of the maximum detection range at two points, and connecting the two intersection points yields the second detection threshold L2 corresponding to the second detection distance D2.
  • The second detection threshold L2 and the edge of the maximum detection range enclose the second detection range 114 corresponding to the second detection distance D2.
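  • The geometric construction can be illustrated numerically if the maximum detection range is assumed to be a triangular field of view with a known half-angle (an assumption for illustration; the patent's figures are not reproduced here). The width of the second detection threshold L2 is then:

```python
import math

def detection_threshold_width(d2, half_fov_deg):
    """Width of the second detection threshold L2.

    Sketch assuming the maximum detection range is a triangular field of
    view with the given half-angle: the perpendicular at distance D2
    meets the two FOV edges 2 * D2 * tan(half_fov) apart.
    """
    return 2 * d2 * math.tan(math.radians(half_fov_deg))

# with D2 = 8 m and a 90-degree total FOV (45-degree half-angle)
w = detection_threshold_width(8.0, 45.0)  # 16.0 m
```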
  • The second detection range may also be obtained in other ways, as long as the second detection distance corresponds to a unique second detection range.
  • The UAV can detect whether there is an obstacle in the second detection range through a vision sensor, an ultrasonic sensor, an infrared sensor, a millimeter-wave radar, and the like.
  • A vision sensor is used in place of the human eye to capture objective information; the contour, depth, and position information of an object is acquired through the relevant visual image processing algorithms to detect whether there is an obstacle in the second detection range.
  • Vision sensors differ from sensors such as ultrasonic sensors and lidar in that they passively receive light-source information (passive perception sensors) and acquire rich information.
  • Ultrasonic sensors and lidar actively emit sound waves or light waves (active perception sensors) while receiving the reflected signal, and obtain a single type of information.
  • the vision sensor includes a lens, an image sensor, an analog-to-digital converter, an image processor, a memory, and the like.
  • When the vision sensor images a scene, the optical information of an obstacle in three-dimensional space passes through the lens and, after a geometric transformation, is projected onto a two-dimensional plane.
  • The image sensor collects the two-dimensional optical signal to obtain an analog image signal, and the analog signal is encoded into a digital image by an analog-to-digital converter.
  • Finally, the image processor re-encodes the digital image and saves it in the memory.
  • a high-end camera with a CCD image sensor is used as the visual sensor.
  • the CCD has the advantages of high image quality and strong anti-noise ability.
  • an obstacle avoidance path is planned according to the second detection range 114 and the maximum detection range 112.
  • the first detection distance is determined according to the maximum detection range of the unmanned vehicle, and then the second detection distance and the second detection range corresponding to the second detection distance are determined according to the first detection distance.
  • The second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, so relatively complete image information of the obstacle can be obtained from the second detection range and the maximum detection range; further, based on that relatively complete image information, a more accurate obstacle avoidance path is determined.
  • the following steps are further included before planning the obstacle avoidance path:
  • the unmanned vehicle is decelerated to a preset safe speed.
  • the preset acceleration is calculated by the following formula
  • V is the current speed
  • V max is the preset safe speed
  • d is the second detection distance
  • a is the preset acceleration
  • S40 includes the following steps:
  • The second detection range 114 corresponding to the second detection distance D2 is extended to the maximum detection range 112.
  • The second detection range 114 is expanded to the maximum detection range 112 so that relatively complete obstacle information can be obtained, and a more accurate obstacle avoidance path is then determined according to that more complete obstacle information.
  • a depth sensor can generally be used to obtain a three-dimensional point cloud of an obstacle, and the depth image of the obstacle can be obtained by performing feature extraction and matching processing on the three-dimensional point cloud.
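  • A generic sketch of turning a three-dimensional obstacle point cloud into a depth image with a pinhole projection model; the camera intrinsics (fx, fy, cx, cy) and the nearest-depth-per-pixel rule are illustrative assumptions, not taken from the patent:

```python
def point_cloud_to_depth_image(points, fx, fy, cx, cy, width, height):
    """Project 3-D points (x, y, z in the camera frame, z forward) onto
    the image plane, keeping the nearest depth per pixel. Pixels with no
    projected point remain at infinity."""
    inf = float("inf")
    depth = [[inf] * width for _ in range(height)]
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        u = int(fx * x / z + cx)  # column index
        v = int(fy * y / z + cy)  # row index
        if 0 <= u < width and 0 <= v < height:
            depth[v][u] = min(depth[v][u], z)
    return depth

# two points on the optical axis; the nearer one (2.0 m) wins the pixel
points = [(0.0, 0.0, 2.0), (0.0, 0.0, 3.0)]
depth = point_cloud_to_depth_image(points, fx=100.0, fy=100.0,
                                   cx=2.0, cy=2.0, width=5, height=5)
```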
  • the depth image of the obstacle may also be obtained based on an inter-frame difference algorithm, a background difference algorithm, an optical flow method, and an image segmentation algorithm.
  • The inter-frame difference algorithm obtains the edge contour of the obstacle from the difference between temporally consecutive frames: when the position of the obstacle changes in the image, the difference between the preceding and following frames reveals its outline.
  • The advantage of the inter-frame difference algorithm lies in its simple operation, fast detection, strong scalability, and low implementation complexity.
  • The background difference algorithm detects the obstacle based on the difference between the image in which the obstacle appears and a fixed background. This method therefore needs to store the fixed scene first; it offers simple operation, fast detection, and low implementation complexity when detecting obstacle position information.
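  • The inter-frame difference idea above can be sketched as follows; representing grayscale frames as lists of lists and the threshold value of 10 are illustrative choices:

```python
def frame_difference(prev, curr, threshold=10):
    """Inter-frame difference: mark pixels whose intensity changed by
    more than `threshold` between two consecutive grayscale frames.
    Pixels belonging to a moving obstacle show up as 1s in the mask."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

# an "obstacle" column appears between the two frames
prev = [[0, 0, 0],
        [0, 0, 0]]
curr = [[0, 200, 0],
        [0, 200, 0]]
mask = frame_difference(prev, curr)  # [[0, 1, 0], [0, 1, 0]]
```

The same helper doubles as a background difference sketch if `prev` is a stored fixed-background frame instead of the previous frame.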
  • the method further includes:
  • Filtering and/or smoothing is performed on the depth image.
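  • A common choice for this filtering step is a median filter, which suppresses speckle noise and isolated invalid pixels in depth images; the 3x3 window below is an illustrative sketch, not a filter specified by the patent:

```python
import statistics

def median_filter_depth(img):
    """3x3 median filter over a depth image given as a list of lists.
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

noisy = [[5, 5, 5],
         [5, 99, 5],   # single-pixel speckle outlier
         [5, 5, 5]]
clean = median_filter_depth(noisy)  # centre pixel restored to 5
```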
  • S45 Plan an obstacle avoidance path according to the depth image.
  • a path planning algorithm may be used to plan the obstacle avoidance path according to the acquired depth image.
  • The path planning algorithm includes but is not limited to VFH, VFH+, a trajectory library, and the like.
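  • A much-simplified sketch of the VFH idea mentioned above: bin obstacle points into angular sectors weighted by closeness and steer toward the emptiest sector. The real VFH and VFH+ algorithms add histogram smoothing, thresholds, and masking of infeasible headings; this is only an illustration:

```python
import math

def free_heading(obstacles, sectors=8, max_range=10.0):
    """Toy vector-field-histogram-style heading selection.

    `obstacles` are (x, y) points relative to the vehicle; each point
    adds weight to its angular sector, nearer points weighing more.
    Returns the centre angle (radians, in [-pi, pi)) of the sector with
    the least obstacle weight (first such sector on ties).
    """
    hist = [0.0] * sectors
    for x, y in obstacles:
        dist = math.hypot(x, y)
        if dist >= max_range:
            continue
        sector = int(((math.atan2(y, x) + math.pi) / (2 * math.pi)) * sectors) % sectors
        hist[sector] += max_range - dist  # nearer obstacles weigh more
    best = min(range(sectors), key=lambda s: hist[s])
    return (best + 0.5) * (2 * math.pi / sectors) - math.pi  # sector centre
```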
  • In another embodiment, S40 replaces the foregoing S41, S43, and S45 with the following steps:
  • S42 Extend the second detection range to a preset detection range, the preset detection range corresponds to a preset detection distance, the preset detection distance is greater than the second detection distance, and the preset detection distance is less than all The first detection distance.
  • The second detection distance D2 corresponding to the second detection range 114 is increased to the preset detection distance D3, which corresponds to the preset detection range 116; the second detection range 114 is thereby extended to the preset detection range 116. The preset detection distance D3 is greater than the second detection distance D2 and less than the first detection distance D1.
  • Once an obstacle 12 is detected in the second detection range 114, the second detection range 114 is extended to the preset detection range 116, so that relatively complete obstacle information can be obtained; a more accurate obstacle avoidance path is then determined according to that more complete obstacle information.
  • A depth image of the obstacle within the preset detection range 116 is acquired.
  • a depth sensor can generally be used to obtain a three-dimensional point cloud of an obstacle, and the depth image of the obstacle can be obtained by performing feature extraction and matching processing on the three-dimensional point cloud.
  • a path planning algorithm may be used to plan the obstacle avoidance path according to the acquired depth image.
  • The path planning algorithm includes but is not limited to VFH, VFH+, a trajectory library, and the like.
  • S46 includes the following steps:
  • After the second detection range 114 corresponding to the second detection distance D2 is extended to the preset detection range 116 corresponding to the preset detection distance D3, the depth image of the obstacle acquired within the preset detection range 116 may still be smaller than the actual size of the obstacle; that is, the acquired depth image may not be a complete depth image of the obstacle, which would make the obstacle avoidance path determined from it inaccurate. It is therefore necessary to further determine whether the acquired depth image of the obstacle within the preset detection range 116 is a complete depth image of the obstacle.
  • If the acquired depth image of the obstacle within the preset detection range matches the actual size of the obstacle, that is, the acquired depth image is a complete depth image of the obstacle, the obstacle avoidance path can be planned directly according to the depth image.
  • In this way, the working process of the depth sensor can be optimized and the speed of depth image acquisition improved, so that the obstacle avoidance path can be planned from the depth image of the obstacle in a timely manner.
  • a path planning algorithm may be used to plan the obstacle avoidance path according to the acquired depth image.
  • The path planning algorithm includes but is not limited to VFH, VFH+, a trajectory library, and the like.
  • S465 If not, extend the preset detection range to the maximum detection range, acquire a depth image of the obstacle within the maximum detection range, and plan an obstacle avoidance path according to the depth image of the obstacle.
  • If the acquired depth image of the obstacle within the preset detection range is smaller than the actual size of the obstacle, that is, the acquired depth image is not a complete depth image of the obstacle,
  • the preset detection range is extended to the maximum detection range and a depth image of the obstacle within the maximum detection range is acquired; the acquired depth image contains relatively complete obstacle information, and a more accurate obstacle avoidance path is then determined according to that relatively complete information.
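  • The decision flow described above (acquire within the preset range, check completeness, otherwise expand to the maximum range and re-acquire) can be sketched as follows; the callables and the coverage-based completeness test are hypothetical stand-ins for the depth sensor and planner:

```python
def plan_with_expansion(acquire_depth_image, is_complete, plan_path,
                        preset_range, max_range):
    """Sketch of the S46 flow: acquire a depth image within the preset
    detection range; if it covers the whole obstacle, plan directly,
    otherwise expand to the maximum detection range and re-acquire
    before planning."""
    img = acquire_depth_image(preset_range)
    if is_complete(img):
        return plan_path(img)
    img = acquire_depth_image(max_range)
    return plan_path(img)

# Toy stand-ins: an "image" is just the fraction of the obstacle covered.
coverage = {6.0: 0.7, 10.0: 1.0}   # hypothetical range -> coverage
path = plan_with_expansion(
    acquire_depth_image=lambda r: coverage[r],
    is_complete=lambda img: img >= 1.0,
    plan_path=lambda img: f"path-from-coverage-{img}",
    preset_range=6.0,
    max_range=10.0,
)  # falls back to the 10.0 m range
```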
  • S46 includes the following steps:
  • The preset distance may be determined according to the accuracy of the depth sensor, or may be preset according to the external environment in which the unmanned vehicle performs its task. For example, when the external environment is relatively complex, with many obstacles of relatively large size, the preset distance is adjusted to a larger value of 5-10 m; when the external environment is relatively simple, with fewer obstacles of smaller size, the preset distance is adjusted to a smaller value of 2-3 m. This optimizes the working process of the depth sensor and improves the speed of depth image acquisition, so that the obstacle avoidance path can be planned from the depth image of the obstacle in time.
  • After the preset detection distance is increased by the preset distance, the corresponding preset detection range is enlarged, and a depth image of the obstacle can be acquired within the enlarged preset detection range; the acquired depth image contains relatively complete obstacle information, from which a more accurate obstacle avoidance path is determined.
  • the embodiments of the present application provide an obstacle avoidance device 50 for unmanned vehicles.
  • the unmanned vehicle obstacle avoidance device 50 includes a first detection distance determination module 51, a second detection distance determination module 52, a second detection range determination module 53 and an obstacle avoidance path planning module 54.
  • the first detection distance determining module 51 is configured to determine the first detection distance according to the maximum detection range of the unmanned vehicle.
  • the second detection distance determining module 52 is configured to determine the second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance.
  • the second detection range determining module 53 is configured to determine the second detection range according to the second detection distance.
  • the obstacle avoidance path planning module 54 is configured to plan an obstacle avoidance path according to the second detection range and the maximum detection range when an obstacle is detected in the second detection range.
  • the first detection distance is determined according to the maximum detection range of the unmanned vehicle, and then the second detection distance and the second detection range corresponding to the second detection distance are determined according to the first detection distance.
  • The second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, so relatively complete image information of the obstacle can be obtained from the second detection range and the maximum detection range; further, based on that relatively complete image information, a more accurate obstacle avoidance path is determined.
  • the unmanned vehicle obstacle avoidance device 50 further includes a deceleration module 55 for decelerating the unmanned vehicle to a preset safe speed.
  • the preset acceleration is calculated by the following formula: a = (V^2 - V_max^2) / (2d), where V is the current speed, V_max is the preset safe speed, d is the second detection distance, and a is the preset acceleration.
  • the unmanned vehicle obstacle avoidance device 50 further includes an image post-processing module 56 for filtering and/or smoothing the depth image.
  • the obstacle avoidance path planning module 54 includes a detection range expansion unit, a depth image acquisition unit, and an obstacle avoidance path planning unit.
  • the detection range expansion unit is used to expand the second detection range to a maximum detection range.
  • the detection range expansion unit is further configured to expand the second detection range to a preset detection range, the preset detection range corresponding to a preset detection distance, wherein the preset detection distance is greater than the second detection distance and less than the first detection distance.
  • the detection range expansion unit is further configured to increase the preset detection distance corresponding to the preset detection range by a preset distance.
  • the depth image acquisition unit is used to acquire a depth image of the obstacle within the maximum detection range.
  • the depth image acquisition unit is further configured to acquire a depth image of the obstacle within a preset detection range.
  • the depth image acquiring unit is further configured to acquire a depth image of the obstacle within the expanded preset detection range.
  • the obstacle avoidance path planning unit is used for planning an obstacle avoidance path according to the depth image.
  • the obstacle avoidance path planning unit includes an image completeness determination subunit, a path planning subunit, and a detection range expansion subunit; the image completeness determination subunit is used to determine whether the depth image is a complete depth image of the obstacle.
  • the path planning subunit is used to plan an obstacle avoidance path according to the depth image.
  • the obstacle avoidance device for the unmanned vehicle described above can execute the obstacle avoidance method for the unmanned vehicle provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
  • for technical details not described in detail in the device embodiment, refer to the obstacle avoidance method for unmanned vehicles provided by the embodiments of the present invention.
  • FIG. 12 is a structural block diagram of an unmanned vehicle 10 provided by an embodiment of the present invention.
  • the unmanned vehicle 10 may be used to implement all or part of the functions of the main control chip.
  • the unmanned vehicle 10 may include: an unmanned vehicle body, a depth sensor, a distance sensor, a processor 110, a memory 120 and a communication module 130.
  • the depth sensor is arranged on the main body of the unmanned vehicle and is used to collect the depth image of obstacles and plan obstacle avoidance paths;
  • the distance sensor is arranged on the main body of the unmanned vehicle and is used to detect information about the distance between the unmanned vehicle and the obstacle;
  • the processor is arranged in the main body of the unmanned vehicle, and is respectively communicatively connected with the depth sensor and the distance sensor.
  • the processor 110, the memory 120, and the communication module 130 establish a communication connection between any two through a bus.
  • the processor 110 may be of any type, and has one or more processing cores. It can perform single-threaded or multi-threaded operations, and is used to parse instructions to perform operations such as obtaining data, performing logical operation functions, and issuing operation processing results.
  • the memory 120, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the unmanned vehicle obstacle avoidance method in the embodiments of the present invention (for example, the first detection distance determination module 51, the second detection distance determination module 52, the second detection range determination module 53, and the obstacle avoidance path planning module 54 shown in FIG. 11).
  • the processor 110 executes the various functional applications and data processing of the unmanned vehicle obstacle avoidance device 50 by running the non-transitory software programs, instructions, and modules stored in the memory 120, that is, it implements the obstacle avoidance method for unmanned vehicles in any of the above-mentioned method embodiments.
  • the memory 120 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the unmanned vehicle obstacle avoidance device 50, and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 120 may optionally include a memory remotely provided with respect to the processor 110, and these remote memories may be connected to the unmanned vehicle 10 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the memory 120 stores instructions that can be executed by the at least one processor 110; the at least one processor 110 is used to execute the instructions to implement the unmanned vehicle obstacle avoidance method in any of the foregoing method embodiments, for example, executing the method steps 10, 20, 30, 40, etc. described above to realize the functions of the modules 51-54 in FIG. 11.
  • the embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors 110.
  • for example, when executed by one processor 110 in FIG. 12, the instructions cause the one or more processors 110 to execute the obstacle avoidance method for unmanned vehicles in any of the foregoing method embodiments, for example, executing the method steps 10, 20, 30, 40, etc. described above to realize the functions of modules 51-54 in FIG. 11.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each implementation manner can be realized by means of software plus a general-purpose hardware platform, and of course can also be realized by hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be completed by a computer program in a computer program product instructing the relevant hardware.
  • the computer program can be stored in a non-transitory computer-readable storage medium.
  • the computer program includes program instructions, and when the program instructions are executed by a related device, the related device can execute the processes of the foregoing method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • the above-mentioned products can execute the unmanned vehicle obstacle avoidance method provided by the embodiment of the present invention, and have the corresponding functional modules and beneficial effects for executing the unmanned vehicle obstacle avoidance method.


Abstract

An obstacle avoidance method and an obstacle avoidance apparatus for an unmanned vehicle, and an unmanned vehicle. The method comprises: determining a first detection distance according to the maximum detection range of the unmanned vehicle; determining a second detection distance according to the first detection distance, and determining a second detection range according to the second detection distance; and when an obstacle is detected within the second detection range, planning an obstacle avoidance path according to the second detection range and the maximum detection range. The first detection distance is determined from the maximum detection range of the unmanned vehicle, and the second detection distance and the second detection range corresponding to the second detection distance are then determined from the first detection distance. Because the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle can be obtained according to the second detection range and the maximum detection range, and a more accurate obstacle avoidance path can further be determined according to the relatively complete image information of the obstacle.

Description

Obstacle avoidance method and apparatus for an unmanned vehicle, and unmanned vehicle
This application claims priority to Chinese patent application No. 201911006620.5, entitled "Obstacle avoidance method and apparatus for an unmanned vehicle, and unmanned vehicle", filed with the China National Intellectual Property Administration on October 22, 2019, the entire contents of which are incorporated herein by reference.
[Technical Field]
The present invention relates to the technical field of unmanned vehicles, and in particular, to an obstacle avoidance method and an obstacle avoidance apparatus for an unmanned vehicle, and to an unmanned vehicle.
[Background]
An unmanned vehicle is a vehicle that carries no personnel on board. Unmanned vehicles can generally be divided into the following types: unmanned ground vehicles (UGV) that operate on the ground, unmanned aerial vehicles (UAV), often called drones, unmanned surface vehicles (USV) that operate on the water surface, and unmanned underwater vehicles (UUV) that operate underwater. Unmanned vehicles are usually controlled by remote control, guidance, or autonomous driving, require no crew, and can be used for scientific research, military, and recreational purposes.
In recent years, thanks to their flexibility, portability, and high spatial mobility, unmanned vehicles have been increasingly widely applied in fields such as surveying and mapping, search and rescue, real estate, and agriculture, and are particularly popular with consumers for aerial photography and entertainment. The popularization of unmanned vehicles, however, is premised on their being able to work safely and reliably in a variety of environments. Improving their safety and environmental adaptability, so that unmanned vehicles can recognize information about the surrounding environment and effectively avoid obstacles, is therefore a focus of unmanned vehicle research and a technical difficulty that urgently needs to be overcome.
In the course of implementing the present invention, the inventor found that the related art has at least the following problems: when an unmanned vehicle currently uses a depth sensor for obstacle detection and path planning, the detection range of the depth sensor is limited, so for a given braking capability an upper speed limit must be set to ensure timely avoidance or braking when an obstacle is detected. During obstacle avoidance, however, if path planning is performed using only the partial region of the obstacle that has just entered the detection range, the planned route cannot be computed well because the obstacle is incompletely detected, resulting in inaccurate path planning.
[Summary]
Embodiments of the present invention provide an obstacle avoidance method, an obstacle avoidance apparatus, and an unmanned vehicle, which can solve the technical problem that obstacle detection is incomplete and path planning results are not accurate.
To solve the above technical problem, an embodiment of the present invention provides the following technical solution: an obstacle avoidance method for an unmanned vehicle. The method comprises:
determining a first detection distance according to the maximum detection range of the unmanned vehicle;
determining a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance;
determining a second detection range according to the second detection distance; and
when an obstacle is detected within the second detection range, planning an obstacle avoidance path according to the second detection range and the maximum detection range.
Optionally, planning an obstacle avoidance path according to the second detection range and the maximum detection range comprises:
expanding the second detection range to the maximum detection range;
acquiring a depth image of the obstacle within the maximum detection range; and
planning an obstacle avoidance path according to the depth image.
Optionally, planning an obstacle avoidance path according to the second detection range and the maximum detection range comprises:
expanding the second detection range to a preset detection range, the preset detection range corresponding to a preset detection distance, the preset detection distance being greater than the second detection distance and smaller than the first detection distance;
acquiring a depth image of the obstacle within the preset detection range; and
planning an obstacle avoidance path according to the depth image.
Optionally, planning an obstacle avoidance path according to the depth image comprises:
determining whether the depth image is a complete depth image of the obstacle;
if so, planning an obstacle avoidance path according to the depth image; and
if not, expanding the preset detection range to the maximum detection range, acquiring a depth image of the obstacle within the maximum detection range, and planning an obstacle avoidance path according to the depth image of the obstacle.
Optionally, planning an obstacle avoidance path according to the depth image comprises:
determining whether the depth image is a complete depth image of the obstacle;
if so, planning an obstacle avoidance path according to the depth image; and
if not, increasing the preset detection distance corresponding to the preset detection range by a preset distance, and continuing to acquire a depth image of the obstacle within the expanded preset detection range.
Optionally, the depth image comprises a three-dimensional point cloud of the obstacle.
Optionally, after the depth image of the obstacle is acquired, the method further comprises:
filtering and/or smoothing the depth image.
Optionally, before planning the obstacle avoidance path, the method further comprises: decelerating the unmanned vehicle to a preset safe speed.
Optionally, decelerating the unmanned vehicle to a preset safe speed comprises:
decelerating the unmanned vehicle to the preset safe speed at a preset acceleration;
the preset acceleration being calculated by the following formula:
a = (V^2 - V_max^2) / (2d)
where V is the current speed, V_max is the preset safe speed, d is the second detection distance, and a is the preset acceleration.
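As a quick check of the relation above, the following Python sketch evaluates the preset acceleration from the stated variables (the function name and the worked numbers are ours, not from the patent):

```python
def preset_acceleration(v: float, v_max: float, d: float) -> float:
    """Deceleration magnitude needed to slow from the current speed v to the
    preset safe speed v_max within the second detection distance d, from the
    kinematic relation v_max**2 = v**2 - 2*a*d."""
    if d <= 0:
        raise ValueError("second detection distance must be positive")
    return (v ** 2 - v_max ** 2) / (2 * d)
```

For example, slowing from 10 m/s to 4 m/s within 6 m requires a = (100 - 16) / 12 = 7 m/s^2.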
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution: an obstacle avoidance apparatus applied to an unmanned vehicle. The obstacle avoidance apparatus comprises: a first detection distance determination module, configured to determine a first detection distance according to the maximum detection range of the unmanned vehicle;
a second detection distance determination module, configured to determine a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance;
a second detection range determination module, configured to determine a second detection range according to the second detection distance; and
an obstacle avoidance path planning module, configured to plan an obstacle avoidance path according to the second detection range and the maximum detection range when an obstacle is detected within the second detection range.
Optionally, the obstacle avoidance path planning module comprises a detection range expansion unit, a depth image acquisition unit, and an obstacle avoidance path planning unit;
the detection range expansion unit is configured to expand the second detection range to the maximum detection range;
the depth image acquisition unit is configured to acquire a depth image of the obstacle within the maximum detection range; and
the obstacle avoidance path planning unit is configured to plan an obstacle avoidance path according to the depth image.
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution: an unmanned vehicle. The unmanned vehicle comprises:
an unmanned vehicle body;
a depth sensor arranged on the unmanned vehicle body and configured to collect depth images of obstacles and to plan obstacle avoidance paths;
a processor arranged in the unmanned vehicle body and communicatively connected with the depth sensor; and
a memory communicatively connected with the processor; wherein
the memory stores instructions executable by the processor, and the instructions are executed by the processor to enable the processor to perform the method described above.
Compared with the prior art, the obstacle avoidance method for an unmanned vehicle provided by the embodiments of the present invention can determine the first detection distance according to the maximum detection range of the unmanned vehicle, and then determine the second detection distance and the second detection range corresponding to the second detection distance according to the first detection distance. Because the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle can be obtained according to the second detection range and the maximum detection range, and a more accurate obstacle avoidance path can further be determined according to the relatively complete image information of the obstacle.
[Brief Description of the Drawings]
One or more embodiments are illustrated by way of example in the accompanying drawings; these illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an application environment of an embodiment of the present invention;
FIG. 2 is a schematic flowchart of an obstacle avoidance method provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an obstacle avoidance method provided by one embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an obstacle avoidance method provided by another embodiment of the present invention;
FIG. 5 is a schematic flowchart of one embodiment of S40 in FIG. 2;
FIG. 6 is a schematic structural diagram of an obstacle avoidance method provided by yet another embodiment of the present invention;
FIG. 7 is a schematic flowchart of another embodiment of S40 in FIG. 2;
FIG. 8 is a schematic structural diagram of an obstacle avoidance method provided by a further embodiment of the present invention;
FIG. 9 is a schematic flowchart of one embodiment of S46 in FIG. 7;
FIG. 10 is a schematic flowchart of another embodiment of S46 in FIG. 7;
FIG. 11 is a structural block diagram of an obstacle avoidance apparatus for an unmanned vehicle provided by an embodiment of the present invention;
FIG. 12 is a structural block diagram of an unmanned vehicle provided by an embodiment of the present invention.
[Detailed Description]
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
It should be noted that, provided they do not conflict, the features in the embodiments of the present application may be combined with each other, all within the scope of protection of the present application. In addition, although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a module division different from that in the device, or in an order different from that in the flowchart. Furthermore, the words "first", "second", "third", and the like used in the present application do not limit the data or the execution order, and are only used to distinguish identical or similar items with substantially the same function and effect.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Embodiments of the present invention provide an obstacle avoidance method and apparatus for an unmanned vehicle. The method and apparatus determine a first detection distance according to the maximum detection range of the unmanned vehicle, and then determine a second detection distance and a second detection range corresponding to the second detection distance according to the first detection distance. Because the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle can be obtained according to the second detection range and the maximum detection range, and a more accurate obstacle avoidance path can further be determined according to the relatively complete image information of the obstacle.
The following illustrates an application environment of the unmanned vehicle obstacle avoidance method and apparatus.
FIG. 1 is a schematic diagram of the application environment of an unmanned vehicle obstacle avoidance system provided by an embodiment of the present invention. As shown in FIG. 1, the application scenario includes an unmanned vehicle 10, a wireless network 20, a smart terminal 30, and a user 40. The user 40 can operate the smart terminal 30 to control the unmanned vehicle 10 through the wireless network 20.
The unmanned vehicle 10 may be an unmanned vehicle driven by any type of power, including but not limited to unmanned ground vehicles such as driverless cars and unmanned robots; unmanned aerial vehicles such as rotary-wing, fixed-wing, parawing, and flapping-wing unmanned aerial vehicles and helicopter models; unmanned surface vehicles operating on the water surface; and unmanned underwater vehicles operating underwater. In this embodiment, a UAV is taken as an example.
The unmanned vehicle 10 may have a corresponding size or power according to the needs of the actual situation, so as to provide load capacity, flight speed, flight endurance, and the like that meet the needs of use. One or more functional modules may also be added to the unmanned vehicle 10 so that it can realize corresponding functions.
For example, in this embodiment, the unmanned vehicle 10 is provided with at least one of a depth sensor, a distance sensor, a magnetometer, a GPS navigator, and a vision sensor. Correspondingly, the unmanned vehicle 10 is provided with an information receiving device that receives and processes the information collected by the at least one sensor.
The unmanned vehicle 10 contains at least one main control chip, which serves as the control core for motion, data transmission, and the like, and integrates one or more modules to execute corresponding logical control programs.
For example, in some embodiments, the main control chip may include an unmanned vehicle obstacle avoidance device 50 for selecting and processing obstacle avoidance paths.
The smart terminal 30 may be any type of smart device used to establish a communication connection with the unmanned vehicle 10, such as a mobile phone, a tablet computer, or a smart remote controller. The smart terminal 30 may be equipped with one or more different interaction devices for the user 40, used to collect instructions from the user 40 or to display and feed back information to the user 40.
These interaction devices include but are not limited to buttons, display screens, touch screens, speakers, and remote control joysticks. For example, the smart terminal 30 may be equipped with a touch display screen, through which remote control instructions from the user 40 for the unmanned vehicle 10 are received and image information obtained by the unmanned vehicle is displayed to the user 40; the user 40 can also switch the image information currently displayed on the screen through the remote control touch screen.
In some embodiments, existing image vision processing technology may also be integrated between the unmanned vehicle 10 and the smart terminal 30 to further provide more intelligent services. For example, the unmanned vehicle 10 may collect images with a dual-light camera, and the smart terminal 30 may analyse the images, thereby enabling gesture control of the unmanned vehicle 10 by the user 40.
The wireless network 20 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, or a wireless cellular network located in different signal frequency bands, or a combination thereof.
FIG. 2 shows an embodiment of the obstacle avoidance method for an unmanned vehicle provided by an embodiment of the present invention. As shown in FIG. 2, the obstacle avoidance method includes the following steps:
S10: determining a first detection distance according to the maximum detection range of the unmanned vehicle.
Specifically, the maximum detection range may be obtained by the depth sensor carried on the unmanned vehicle. The maximum detection range may be the physical detection limit of the depth sensor, or the maximum stable working range in practical applications.
"Depth sensor" is a general term for sensors that can acquire the depth of the environment; different sensors are selected for different environments and required accuracies.
In this embodiment, the depth sensor is used to detect obstacle information on the path of the unmanned vehicle. Taking the Kinect depth sensor as a specific example, the Kinect system architecture includes a physical layer, a driver and interface layer, and an application layer. Through the SDK and API provided by Kinect, many kinds of functions can be realized, chiefly depth data acquisition: the Kinect emits a scattered infrared dot pattern into the environment from an infrared emitter, performs computations on the reflections from obstacles in the environment received by the infrared receiver to obtain the depth information of each pixel in the received image, and thereby generates a three-dimensional depth image.
The depth sensor may also be a binocular camera, an RGBD camera, structured light, a TOF sensor, a lidar, or the like.
Specifically, different sensors are selected for different environments and required accuracies; after the type of the depth sensor of the unmanned vehicle is determined, the first detection distance can be determined according to the maximum detection range of the depth sensor.
Specifically, by way of example, as shown in FIG. 3, the maximum detection range 112 of the depth sensor 11 is first determined as the corresponding maximum detection threshold L1; then a first perpendicular line is drawn from the depth sensor 11 perpendicular to the maximum detection threshold L1, intersecting it at one point; the distance between this intersection point and the depth sensor 11 is the first detection distance D1.
In some embodiments, the first detection distance may also be obtained in other ways, for example according to the maximum detection angle corresponding to the maximum detection range, as long as the first detection distance obtained after the maximum detection range of the depth sensor is determined is a unique value.
S20: determining a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance.
Specifically, after the first detection distance is obtained from the maximum detection range of the depth sensor, the second detection distance is obtained by shortening the first detection distance by a preset distance. The preset distance may be determined according to the sizes of pre-stored obstacles, for example by making the preset distance equal to or greater than the pre-stored obstacle size.
The unmanned vehicle stores the sizes of obstacles detected along its past travel paths in a designated memory, and the pre-stored obstacle size can be obtained from the obstacle sizes stored in the designated memory. For example, the average of the obstacle sizes stored in the designated memory may be computed and the pre-stored obstacle size set equal to this average, i.e. the preset distance is made equal to or greater than the average obstacle size.
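The sizing policy just described can be sketched in a few lines (a hedged illustration; the function names and the optional margin parameter are ours, not from the patent):

```python
def preset_shortening_distance(stored_sizes, margin=0.0):
    """Preset distance derived from obstacle sizes recorded on past paths:
    at least the average stored size (plus an optional extra margin)."""
    if not stored_sizes:
        raise ValueError("no stored obstacle sizes")
    return sum(stored_sizes) / len(stored_sizes) + margin

def second_detection_distance(d1, stored_sizes):
    """Second detection distance D2 = D1 minus the preset distance."""
    d2 = d1 - preset_shortening_distance(stored_sizes)
    if d2 <= 0:
        raise ValueError("preset distance must stay below the first detection distance")
    return d2
```

With a first detection distance of 10 m and stored sizes of 2 m and 4 m, the average is 3 m and D2 = 7 m.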
Specifically, by way of example, as shown in FIG. 4, after the first detection distance D1 is obtained from the maximum detection range 112 of the depth sensor 11, the second detection distance D2 is obtained by shortening the first detection distance D1 by the preset distance ΔD. The direction in which the first detection distance D1 is shortened by the preset distance is opposite to the extension direction of the first perpendicular line.
S30: determining a second detection range according to the second detection distance.
Specifically, after the second detection distance is obtained by shortening the first detection distance by the preset distance, the second detection range can be uniquely determined from the second detection distance.
Specifically, by way of example, as shown in FIG. 4, after the second detection distance D2 is obtained by shortening the first detection distance D1 by the preset distance ΔD, a perpendicular line is drawn that intersects the end of the second detection distance D2 at a right angle; this perpendicular line intersects the edge of the maximum detection range at two points, and connecting the two intersection points yields the second detection threshold L2 corresponding to the second detection distance D2. The second detection threshold L2 and the edge of the maximum detection range form the second detection range 114 corresponding to the second detection distance D2.
In some embodiments, the second detection range may also be obtained in other ways, as long as the determined second detection distance corresponds to a unique second detection range.
S40: when an obstacle is detected within the second detection range, planning an obstacle avoidance path according to the second detection range and the maximum detection range.
Specifically, the UAV can detect whether an obstacle exists within the second detection range by means of a vision sensor, ultrasound, infrared, millimetre-wave radar, or the like.
Specifically, in this embodiment, a vision sensor is used in place of the human eye to capture information about objective things, and related vision image processing algorithms are used to obtain contour information, depth information, position information, and so on, so as to detect whether an obstacle exists within the second detection range. Unlike ultrasonic and lidar sensors, a vision sensor passively receives light-source information (a passive sensing sensor) and obtains rich information, whereas ultrasonic and lidar sensors actively emit sound or light waves (active sensing sensors) and then receive the reflected information, obtaining only a single kind of information.
The vision sensor includes a lens, an image sensor, an analogue-to-digital converter, an image processor, a memory, and so on. During imaging, the light information of the obstacle in three-dimensional space passes through the lens and, after geometric image transformation, is projected onto a two-dimensional plane; the image sensor collects the two-dimensional image light signal to obtain an analogue image signal, which the analogue-to-digital converter encodes into a digital image, and finally the image processor re-encodes the digital image and saves it in the memory. In this embodiment, a high-end camera with a CCD image sensor is used as the vision sensor; CCDs have the advantages of high image quality and strong noise resistance.
Specifically, by way of example, as shown in FIG. 4, once an obstacle 12 is detected within the second detection range 114, an obstacle avoidance path is planned according to the second detection range 114 and the maximum detection range 112.
In this embodiment, the first detection distance is determined according to the maximum detection range of the unmanned vehicle, and then the second detection distance and the second detection range corresponding to the second detection distance are determined according to the first detection distance. Because the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle can be obtained according to the second detection range and the maximum detection range, and a more accurate obstacle avoidance path can further be determined according to the relatively complete image information of the obstacle.
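Steps S10-S40 can be condensed into a small control sketch (callback-based and entirely illustrative; none of these names come from the patent):

```python
def plan_with_two_ranges(d1, delta, obstacle_within, plan):
    """S10-S40 in miniature: derive the second detection distance D2 = D1 - delta;
    when an obstacle lies within the second range, plan using both ranges.
    obstacle_within(d2) and plan(d2, d1) are caller-supplied callbacks."""
    d2 = d1 - delta                     # S20: second detection distance
    if d2 <= 0:
        raise ValueError("delta must be smaller than the first detection distance")
    if obstacle_within(d2):             # S40 trigger condition
        return plan(d2, d1)             # plan from both ranges
    return None                         # no obstacle: keep the current path
```

The callbacks stand in for the sensor query and the path planner, which the later steps (S41-S46) refine.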
In some embodiments, to ensure that the unmanned vehicle can detour or brake more safely and smoothly to avoid the obstacle, the method further includes the following step before planning the obstacle avoidance path:
decelerating the unmanned vehicle to a preset safe speed.
Specifically, the unmanned vehicle is decelerated to the preset safe speed at a preset acceleration;
the preset acceleration is calculated by the following formula:
a = (V^2 - V_max^2) / (2d)
where V is the current speed, V_max is the preset safe speed, d is the second detection distance, and a is the preset acceleration.
To provide more complete obstacle information and thereby ensure the accuracy of obstacle avoidance path planning, in some embodiments, as shown in FIG. 5, S40 includes the following steps:
S41: expanding the second detection range to the maximum detection range.
Specifically, as shown in FIG. 6, once an obstacle 12 is detected within the second detection range 114, the second detection range 114 corresponding to the second detection distance D2 is expanded to the maximum detection range 112 corresponding to the first detection distance D1. Expanding the second detection range 114 to the maximum detection range 112 as soon as an obstacle 12 is detected within it makes it possible to obtain relatively complete obstacle information, from which a more accurate obstacle avoidance path is determined.
S43: acquiring a depth image of the obstacle within the maximum detection range.
Specifically, as shown in FIG. 6, when the second detection range 114 corresponding to the second detection distance D2 is expanded to the maximum detection range 112 corresponding to the first detection distance D1, a depth image of the obstacle within the maximum detection range 112 is acquired.
Specifically, a depth sensor can generally acquire a three-dimensional point cloud of the obstacle, and the depth image of the obstacle is obtained by performing feature extraction and matching on the three-dimensional point cloud.
In some embodiments, the depth image of the obstacle may also be obtained based on an inter-frame difference algorithm, a background difference algorithm, optical flow, or an image segmentation algorithm. For example, the inter-frame difference algorithm obtains the edge contour of an obstacle from the difference between temporally consecutive frames, exploiting the change in the obstacle's position in the image as it moves; its advantages are simple computation, fast detection, strong extensibility, and low implementation complexity. As another example, the background difference algorithm detects the obstacle from the difference between the image in which the obstacle appears and a fixed background; this method requires the fixed scene to be stored first, and when detecting obstacle position information it likewise features simple computation, fast detection, and low implementation complexity.
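The inter-frame difference idea can be sketched with NumPy (a minimal illustration; the threshold and names are ours):

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh):
    """Binary motion mask by inter-frame differencing: pixels whose absolute
    grey-level change between consecutive frames exceeds thresh are marked,
    outlining the moving obstacle's edge."""
    # Widen the dtype before subtracting so unsigned 8-bit values cannot wrap.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Only pixels that changed between the two frames survive the threshold, which is what makes the method cheap and fast.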
To obtain a clearer and more accurate depth image and thus plan the obstacle avoidance path more accurately, in some embodiments, after the depth image of the obstacle is acquired, the method further includes:
filtering and/or smoothing the depth image.
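One possible smoothing step is a plain mean (box) filter over the depth image (a sketch only; the patent does not specify which filter is used):

```python
import numpy as np

def box_smooth(depth, k=3):
    """k x k mean (box) filter for a depth image; pixels near the border use a
    window shrunk to the valid region, so the image size is preserved."""
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    r = k // 2
    for i in range(h):
        for j in range(w):
            window = depth[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            out[i, j] = window.mean()
    return out
```

A median filter would be the usual alternative when the depth map contains salt-and-pepper dropouts rather than Gaussian noise.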
S45: planning an obstacle avoidance path according to the depth image.
Specifically, after the depth image of the obstacle within the maximum detection range is acquired, a path planning algorithm can be used to plan the obstacle avoidance path according to the acquired depth image. Path planning algorithms include, but are not limited to, VFH, VFH+, trajectory libraries, and the like.
To provide more complete obstacle information and thereby ensure the accuracy of obstacle avoidance path planning, in some embodiments, as shown in FIG. 7, S40 replaces the above S41, S43, and S45 with the following steps:
S42: expanding the second detection range to a preset detection range, the preset detection range corresponding to a preset detection distance, the preset detection distance being greater than the second detection distance and smaller than the first detection distance.
Specifically, as shown in FIG. 8, the second detection distance D2 corresponding to the second detection range 114 is increased to a preset detection distance D3, which corresponds to a preset detection range 116, thereby expanding the second detection range 114 to the preset detection range 116; the preset detection distance D3 is greater than the second detection distance D2 and smaller than the first detection distance D1.
Once an obstacle 12 is detected within the second detection range 114, expanding the second detection range 114 to the preset detection range 116 makes it possible to obtain relatively complete obstacle information, from which a more accurate obstacle avoidance path is determined.
S44: acquiring a depth image of the obstacle within the preset detection range.
Specifically, as shown in FIG. 8, when the second detection range 114 corresponding to the second detection distance D2 is expanded to the preset detection range 116 corresponding to the preset detection distance D3, a depth image of the obstacle within the preset detection range 116 is acquired.
Specifically, a depth sensor can generally acquire a three-dimensional point cloud of the obstacle, and the depth image of the obstacle is obtained by performing feature extraction and matching on the three-dimensional point cloud.
S46: planning an obstacle avoidance path according to the depth image.
Specifically, after the depth image of the obstacle within the maximum detection range is acquired, a path planning algorithm can be used to plan the obstacle avoidance path according to the acquired depth image. Path planning algorithms include, but are not limited to, VFH, VFH+, trajectory libraries, and the like.
To obtain a more complete depth image of the obstacle and thus plan the obstacle avoidance path more accurately according to the depth image, in some embodiments, as shown in FIG. 9, S46 includes the following steps:
S461: determining whether the depth image is a complete depth image of the obstacle.
Specifically, when the second detection range 114 corresponding to the second detection distance D2 is expanded to the preset detection range 116 corresponding to the preset detection distance D3, if the obstacle depth image acquired within the preset detection range 116 is smaller than the actual size of the obstacle, the acquired depth image is not a complete depth image of the obstacle, which would make the obstacle avoidance path determined from it inaccurate; it is therefore necessary to further determine whether the obstacle depth image acquired within the preset detection range 116 is a complete depth image of the obstacle.
S463: if so, planning an obstacle avoidance path according to the depth image.
Specifically, if the obstacle depth image acquired within the preset detection range equals the actual size of the obstacle, i.e. the acquired depth image is a complete depth image of the obstacle, the obstacle avoidance path can be planned directly according to the depth image. Acquiring the obstacle depth image within the preset detection range, rather than directly within the maximum detection range, optimizes the workflow of the depth sensor and increases the speed of depth image acquisition, so that the obstacle avoidance path can be planned in time according to the depth image of the obstacle.
Specifically, a path planning algorithm can be used to plan the obstacle avoidance path according to the acquired depth image. Path planning algorithms include, but are not limited to, VFH, VFH+, trajectory libraries, and the like.
S465: if not, expanding the preset detection range to the maximum detection range, acquiring a depth image of the obstacle within the maximum detection range, and planning an obstacle avoidance path according to the depth image of the obstacle.
Specifically, if the obstacle depth image acquired within the preset detection range is smaller than the actual size of the obstacle, i.e. the acquired depth image is not a complete depth image of the obstacle, the preset detection range is expanded to the maximum detection range and a depth image of the obstacle within the maximum detection range is acquired; the depth image thus acquired constitutes relatively complete obstacle information, from which a more accurate obstacle avoidance path is determined.
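The completeness check and fallback of S461/S465 reduce to a short acquisition routine (a callback-based sketch; the helper names are ours):

```python
def depth_image_for_planning(acquire, is_complete, preset_range, max_range):
    """S461/S465 in miniature: take the depth image at the preset detection
    range; if it does not cover the whole obstacle, expand to the maximum
    detection range and re-acquire. acquire(range_) and is_complete(image)
    are caller-supplied callbacks."""
    image = acquire(preset_range)
    if is_complete(image):
        return image                 # S463: plan from the preset-range image
    return acquire(max_range)        # S465: fall back to the full sensor range
```

The point of the two-stage acquisition is that the cheaper preset-range read usually suffices, and the full-range read is paid for only when the obstacle is cut off at the edge.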
To obtain a more complete depth image of the obstacle and thus plan the obstacle avoidance path more accurately according to the depth image, in some embodiments, as shown in FIG. 10, S46 includes the following steps:
S462: determining whether the depth image is a complete depth image of the obstacle.
S464: if so, planning an obstacle avoidance path according to the depth image.
S466: if not, increasing the preset detection distance corresponding to the preset detection range by a preset distance, and continuing to acquire a depth image of the obstacle within the expanded preset detection range.
Specifically, the preset distance may be determined according to the accuracy of the depth sensor, or preset according to the external environment in which the unmanned vehicle performs its task. For example, when the external environment of the task is relatively complex, with many obstacles of relatively large size, the preset distance is adjusted to a larger value of 5-10 m; when the external environment is relatively simple, with few obstacles of relatively small size, the preset distance is adjusted to a smaller value of 2-3 m. This optimizes the workflow of the depth sensor and increases the speed of depth image acquisition, so that the obstacle avoidance path can be planned in time according to the depth image of the obstacle.
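The environment-dependent step size can be sketched as a lookup (the concrete midpoint values are our choice from the 5-10 m and 2-3 m bands quoted above, not values fixed by the patent):

```python
def expansion_step(environment: str) -> float:
    """Preset expansion increment in metres, chosen by scene complexity:
    a larger step for cluttered scenes (the 5-10 m band), a smaller one for
    simple scenes (the 2-3 m band); midpoints are used here as concrete values."""
    steps = {"complex": 7.5, "simple": 2.5}
    if environment not in steps:
        raise ValueError("environment must be 'complex' or 'simple'")
    return steps[environment]
```

In practice the increment could also be scaled continuously with obstacle density rather than picked from two bands.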
Further, after the preset detection distance corresponding to the preset detection range is increased by the preset distance, the preset detection range corresponding to the preset detection distance grows with it, and a depth image of the obstacle within the enlarged preset detection range can then be acquired; the depth image thus acquired constitutes relatively complete obstacle information, from which a more accurate obstacle avoidance path is determined.
It should be noted that in the above embodiments there is not necessarily a fixed order between the above steps; those of ordinary skill in the art will understand from the description of the embodiments of the present application that in different embodiments the above steps may be executed in different orders, i.e. in parallel, interleaved, and so on.
As another aspect of the embodiments of the present application, an embodiment of the present application provides an unmanned vehicle obstacle avoidance device 50. Referring to FIG. 11, the unmanned vehicle obstacle avoidance device 50 includes: a first detection distance determination module 51, a second detection distance determination module 52, a second detection range determination module 53, and an obstacle avoidance path planning module 54.
The first detection distance determination module 51 is configured to determine the first detection distance according to the maximum detection range of the unmanned vehicle.
The second detection distance determination module 52 is configured to determine the second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance.
The second detection range determination module 53 is configured to determine the second detection range according to the second detection distance.
The obstacle avoidance path planning module 54 is configured to plan an obstacle avoidance path according to the second detection range and the maximum detection range when an obstacle is detected within the second detection range.
In this embodiment, the first detection distance is determined according to the maximum detection range of the unmanned vehicle, and then the second detection distance and the second detection range corresponding to the second detection distance are determined according to the first detection distance. Because the second detection range corresponding to the second detection distance is smaller than the maximum detection range corresponding to the first detection distance, relatively complete image information of the obstacle can be obtained according to the second detection range and the maximum detection range, and a more accurate obstacle avoidance path can further be determined according to the relatively complete image information of the obstacle.
In some embodiments, the unmanned vehicle obstacle avoidance device 50 further includes a deceleration module 55, configured to decelerate the unmanned vehicle to a preset safe speed.
Specifically, the unmanned vehicle is decelerated to the preset safe speed at a preset acceleration;
the preset acceleration is calculated by the following formula:
a = (V^2 - V_max^2) / (2d)
where V is the current speed, V_max is the preset safe speed, d is the second detection distance, and a is the preset acceleration.
In some embodiments, the unmanned vehicle obstacle avoidance device 50 further includes an image post-processing module 56, configured to filter and/or smooth the depth image.
In some embodiments, the obstacle avoidance path planning module 54 includes a detection range expansion unit, a depth image acquisition unit, and an obstacle avoidance path planning unit.
The detection range expansion unit is configured to expand the second detection range to the maximum detection range.
In some embodiments, the detection range expansion unit is further configured to expand the second detection range to a preset detection range, the preset detection range corresponding to a preset detection distance, the preset detection distance being greater than the second detection distance and smaller than the first detection distance.
In some embodiments, the detection range expansion unit is further configured to increase the preset detection distance corresponding to the preset detection range by a preset distance.
The depth image acquisition unit is configured to acquire a depth image of the obstacle within the maximum detection range.
In some embodiments, the depth image acquisition unit is further configured to acquire a depth image of the obstacle within the preset detection range.
In some embodiments, the depth image acquisition unit is further configured to acquire a depth image of the obstacle within the expanded preset detection range.
The obstacle avoidance path planning unit is configured to plan an obstacle avoidance path according to the depth image.
In some embodiments, the obstacle avoidance path planning unit includes an image completeness determination sub-unit, a path planning sub-unit, and a detection range expansion sub-unit. The image completeness determination sub-unit is configured to determine whether the depth image is a complete depth image of the obstacle, and the path planning sub-unit is configured to plan an obstacle avoidance path according to the depth image.
It should be noted that the above obstacle avoidance device for an unmanned vehicle can execute the obstacle avoidance method for an unmanned vehicle provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in the device embodiment, refer to the obstacle avoidance method for an unmanned vehicle provided by the embodiments of the present invention.
FIG. 12 is a structural block diagram of the unmanned vehicle 10 provided by an embodiment of the present invention. The unmanned vehicle 10 can be used to implement the functions of all or part of the functional modules in the main control chip. As shown in FIG. 12, the unmanned vehicle 10 may include: an unmanned vehicle body, a depth sensor, a distance sensor, a processor 110, a memory 120, and a communication module 130.
The depth sensor is arranged on the unmanned vehicle body and is used to collect depth images of obstacles and to plan obstacle avoidance paths; the distance sensor is arranged on the unmanned vehicle body and is used to detect information about the distance between the unmanned vehicle and the obstacle; the processor is arranged in the unmanned vehicle body and is communicatively connected with the depth sensor and the distance sensor respectively.
A communication connection is established between any two of the processor 110, the memory 120, and the communication module 130 by means of a bus.
The processor 110 may be of any type, having one or more processing cores. It can perform single-threaded or multi-threaded operations and is used to parse instructions to perform operations such as fetching data, executing logical operation functions, and issuing operation processing results.
As a non-transitory computer-readable storage medium, the memory 120 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the unmanned vehicle obstacle avoidance method in the embodiments of the present invention (for example, the first detection distance determination module 51, the second detection distance determination module 52, the second detection range determination module 53, and the obstacle avoidance path planning module 54 shown in FIG. 11). By running the non-transitory software programs, instructions, and modules stored in the memory 120, the processor 110 executes the various functional applications and data processing of the unmanned vehicle obstacle avoidance device 50, i.e. implements the unmanned vehicle obstacle avoidance method in any of the above method embodiments.
The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the unmanned vehicle obstacle avoidance device 50, and the like. In addition, the memory 120 may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 120 may optionally include memories remotely arranged relative to the processor 110, and these remote memories may be connected to the unmanned vehicle 10 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 120 stores instructions executable by the at least one processor 110; the at least one processor 110 is used to execute the instructions to implement the unmanned vehicle obstacle avoidance method in any of the above method embodiments, for example, executing the method steps 10, 20, 30, 40, etc. described above to realize the functions of the modules 51-54 in FIG. 11.
The communication module 130 is a functional module for establishing communication connections and providing physical channels. The communication module 130 may be any type of wireless or wired communication module, including but not limited to a WiFi module or a Bluetooth module.
Further, an embodiment of the present invention also provides a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors 110, for example by one processor 110 in FIG. 12, so that the one or more processors 110 can execute the unmanned vehicle obstacle avoidance method in any of the above method embodiments, for example, executing the method steps 10, 20, 30, 40, etc. described above to realize the functions of the modules 51-54 in FIG. 11.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
From the description of the above implementations, those of ordinary skill in the art can clearly understand that each implementation can be realized by means of software plus a general-purpose hardware platform, and of course also by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program in a computer program product instructing the relevant hardware. The computer program can be stored in a non-transitory computer-readable storage medium and includes program instructions which, when executed by a related device, cause the related device to execute the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above products can execute the unmanned vehicle obstacle avoidance method provided by the embodiments of the present invention, and have the functional modules and beneficial effects corresponding to executing the method. For technical details not described in detail in this embodiment, refer to the unmanned vehicle obstacle avoidance method provided by the embodiments of the present invention.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them; under the idea of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist, which, for brevity, are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

  1. An obstacle avoidance method, applied to an unmanned vehicle, comprising:
    determining a first detection distance according to the maximum detection range of the unmanned vehicle;
    determining a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance;
    determining a second detection range according to the second detection distance; and
    when an obstacle is detected within the second detection range, planning an obstacle avoidance path according to the second detection range and the maximum detection range.
  2. The method according to claim 1, wherein planning an obstacle avoidance path according to the second detection range and the maximum detection range comprises:
    expanding the second detection range to the maximum detection range;
    acquiring a depth image of the obstacle within the maximum detection range; and
    planning an obstacle avoidance path according to the depth image.
  3. The method according to claim 1, wherein planning an obstacle avoidance path according to the second detection range and the maximum detection range comprises:
    expanding the second detection range to a preset detection range, the preset detection range corresponding to a preset detection distance, the preset detection distance being greater than the second detection distance and smaller than the first detection distance;
    acquiring a depth image of the obstacle within the preset detection range; and
    planning an obstacle avoidance path according to the depth image.
  4. The method according to claim 3, wherein planning an obstacle avoidance path according to the depth image comprises:
    determining whether the depth image is a complete depth image of the obstacle;
    if so, planning an obstacle avoidance path according to the depth image; and
    if not, expanding the preset detection range to the maximum detection range, acquiring a depth image of the obstacle within the maximum detection range, and planning an obstacle avoidance path according to the depth image of the obstacle.
  5. The method according to claim 3, wherein planning an obstacle avoidance path according to the depth image comprises:
    determining whether the depth image is a complete depth image of the obstacle;
    if so, planning an obstacle avoidance path according to the depth image; and
    if not, increasing the preset detection distance corresponding to the preset detection range by a preset distance, and continuing to acquire a depth image of the obstacle within the expanded preset detection range.
  6. The method according to any one of claims 2-5, wherein
    the depth image comprises a three-dimensional point cloud of the obstacle.
  7. The method according to any one of claims 2-6, wherein
    after the depth image of the obstacle is acquired, the method further comprises:
    filtering and/or smoothing the depth image.
  8. The method according to any one of claims 2-7, wherein
    before planning the obstacle avoidance path, the method further comprises:
    decelerating the unmanned vehicle to a preset safe speed.
  9. The method according to claim 8, wherein
    decelerating the unmanned vehicle to a preset safe speed comprises:
    decelerating the unmanned vehicle to the preset safe speed at a preset acceleration;
    the preset acceleration being calculated by the following formula:
    a = (V^2 - V_max^2) / (2d)
    where V is the current speed, V_max is the preset safe speed, d is the second detection distance, and a is the preset acceleration.
  10. An obstacle avoidance apparatus, applied to an unmanned vehicle, comprising:
    a first detection distance determination module, configured to determine a first detection distance according to the maximum detection range of the unmanned vehicle;
    a second detection distance determination module, configured to determine a second detection distance according to the first detection distance, wherein the second detection distance is smaller than the first detection distance;
    a second detection range determination module, configured to determine a second detection range according to the second detection distance; and
    an obstacle avoidance path planning module, configured to plan an obstacle avoidance path according to the second detection range and the maximum detection range when an obstacle is detected within the second detection range.
  11. The obstacle avoidance apparatus according to claim 10, wherein
    the obstacle avoidance path planning module comprises a detection range expansion unit, a depth image acquisition unit, and an obstacle avoidance path planning unit;
    the detection range expansion unit is configured to expand the second detection range to the maximum detection range;
    the depth image acquisition unit is configured to acquire a depth image of the obstacle within the maximum detection range; and
    the obstacle avoidance path planning unit is configured to plan an obstacle avoidance path according to the depth image.
  12. An unmanned vehicle, comprising:
    an unmanned vehicle body;
    a depth sensor arranged on the unmanned vehicle body and configured to collect depth images of obstacles and to plan obstacle avoidance paths;
    a processor arranged in the unmanned vehicle body and communicatively connected with the depth sensor; and
    a memory communicatively connected with the processor; wherein
    the memory stores instructions executable by the processor, and the instructions are executed by the processor to enable the processor to perform the method according to any one of claims 1-9.
PCT/CN2020/118850 2019-10-22 2020-09-29 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle WO2021078003A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911006620.5A CN110751336B (zh) 2019-10-22 2019-10-22 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle
CN201911006620.5 2019-10-22

Publications (1)

Publication Number Publication Date
WO2021078003A1 true WO2021078003A1 (zh) 2021-04-29

Family

ID=69279373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118850 WO2021078003A1 (zh) 2019-10-22 2020-09-29 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle

Country Status (2)

Country Link
CN (1) CN110751336B (zh)
WO (1) WO2021078003A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114701526A (zh) * 2022-04-02 2022-07-05 Automatic control method and unmanned rail transport equipment for power transmission lines
CN114994604A (zh) * 2022-04-21 2022-09-02 Human-computer interaction position determination method and apparatus, robot, and storage medium
CN116047909A (zh) * 2023-01-13 2023-05-02 Robust adaptive cooperative UAV-ship control method for maritime parallel search

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751336B (zh) 2019-10-22 2023-04-14 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060103512A1 (en) * 2004-11-16 2006-05-18 Honda Access Corporation Obstacle detection apparatus
CN107480638A (zh) * 2017-08-16 2017-12-15 Vehicle obstacle avoidance method, controller, apparatus, and vehicle
CN107650908A (zh) * 2017-10-18 2018-02-02 Environment perception system for unmanned vehicles
CN109917420A (zh) * 2019-02-27 2019-06-21 Automatic walking device and robot
CN110231832A (zh) * 2018-03-05 2019-09-13 Obstacle avoidance method and apparatus for unmanned aerial vehicle
CN110751336A (zh) * 2019-10-22 2020-02-04 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106802668B (zh) * 2017-02-16 2020-11-17 Three-dimensional UAV collision avoidance method and system based on binocular vision and ultrasonic fusion
CN107831777B (zh) * 2017-09-26 2020-04-10 Autonomous obstacle avoidance system and method for aircraft, and aircraft
CN108268036A (zh) * 2018-01-19 2018-07-10 Novel intelligent obstacle avoidance system for robots
CN109857112A (zh) * 2019-02-21 2019-06-07 Robot obstacle avoidance method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060103512A1 (en) * 2004-11-16 2006-05-18 Honda Access Corporation Obstacle detection apparatus
CN107480638A (zh) * 2017-08-16 2017-12-15 Vehicle obstacle avoidance method, controller, apparatus, and vehicle
CN107650908A (zh) * 2017-10-18 2018-02-02 Environment perception system for unmanned vehicles
CN110231832A (zh) * 2018-03-05 2019-09-13 Obstacle avoidance method and apparatus for unmanned aerial vehicle
CN109917420A (zh) * 2019-02-27 2019-06-21 Automatic walking device and robot
CN110751336A (zh) * 2019-10-22 2020-02-04 Obstacle avoidance method and apparatus for unmanned vehicle, and unmanned vehicle

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114701526A (zh) * 2022-04-02 2022-07-05 Automatic control method and unmanned rail transport equipment for power transmission lines
CN114994604A (zh) * 2022-04-21 2022-09-02 Human-computer interaction position determination method and apparatus, robot, and storage medium
CN116047909A (zh) * 2023-01-13 2023-05-02 Robust adaptive cooperative UAV-ship control method for maritime parallel search
CN116047909B (zh) * 2023-01-13 2023-09-05 Robust adaptive cooperative UAV-ship control method for maritime parallel search

Also Published As

Publication number Publication date
CN110751336B (zh) 2023-04-14
CN110751336A (zh) 2020-02-04

Similar Documents

Publication Publication Date Title
US11797009B2 (en) Unmanned aerial image capture platform
WO2021078003A1 (zh) 无人载具的避障方法、避障装置及无人载具
US20220234733A1 (en) Aerial Vehicle Smart Landing
US11649052B2 (en) System and method for providing autonomous photography and videography
CN111448476B (zh) 在无人飞行器与地面载具之间共享绘图数据的技术
US10969784B2 (en) System and method for providing easy-to-use release and auto-positioning for drone applications
US10802509B2 (en) Selective processing of sensor data
US11423792B2 (en) System and method for obstacle avoidance in aerial systems
US10266263B2 (en) System and method for omni-directional obstacle avoidance in aerial systems
CN111344644B (zh) 用于基于运动的自动图像捕获的技术
WO2018086130A1 (zh) 飞行轨迹的生成方法、控制装置及无人飞行器
WO2022001748A1 (zh) 目标跟踪方法、装置、电子设备及移动载具
CN111670339B (zh) 用于无人飞行器和地面载运工具之间的协作地图构建的技术
US20180275659A1 (en) Route generation apparatus, route control system and route generation method
JP2019011971A (ja) 推定システムおよび自動車
WO2021027886A1 (zh) 无人机飞行控制方法及无人机
JP7501535B2 (ja) 情報処理装置、情報処理方法、情報処理プログラム
US20230259132A1 (en) Systems and methods for determining the position of an object using an unmanned aerial vehicle
JP2021103410A (ja) 移動体及び撮像システム
JP7317684B2 (ja) 移動体、情報処理装置、及び撮像システム
JP2022040134A (ja) 推定システムおよび自動車

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20878551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20878551

Country of ref document: EP

Kind code of ref document: A1