CN114746822A - Path planning method, path planning device, path planning system, and medium


Info

Publication number
CN114746822A
Authority
CN
China
Prior art keywords
movable platform
image area
information
path
target image
Prior art date
Legal status
Pending
Application number
CN202080074264.5A
Other languages
Chinese (zh)
Inventor
邹亭
赵力尧
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN114746822A


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions

Abstract

A path planning method, a path planning apparatus, a path planning system, and a medium for planning a movement path of a movable platform, the method comprising: obtaining a semantic map of a movable platform operating environment, wherein semantic information of each image area in the semantic map has a corresponding relation with an obstacle avoidance strategy of the movable platform; determining semantic information of a target image area in the semantic map according to the semantic map; and planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.

Description

Path planning method, path planning device, path planning system, and medium
Technical Field
The present application relates to the field of robotics, and in particular, to a path planning method, a path planning apparatus, a path planning system, and a medium.
Background
In the field of robotics, many typical application scenarios require path planning, such as path planning for drones and transport robots within closed or open environments.
However, in the related art, a single, uniform obstacle avoidance strategy is adopted during path planning, and no dedicated obstacle avoidance strategies are designed for different types of obstacles, so it is difficult to achieve reasonable and efficient path planning in complex scenes.
Disclosure of Invention
In view of this, embodiments of the present application provide a path planning method, a path planning apparatus, a path planning system, and a medium, so as to implement more reasonable and efficient path planning.
In a first aspect, an embodiment of the present application provides a path planning method for planning a moving path of a movable platform. The method includes: first, obtaining a semantic map of the operating environment of the movable platform, where the semantic information of each image area in the semantic map corresponds to an obstacle avoidance strategy of the movable platform; then, determining the semantic information of a target image area in the semantic map according to the semantic map; and finally, planning a moving path along which the movable platform avoids the target object corresponding to the target image area, according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
In a second aspect, an embodiment of the present application provides a path planning apparatus for planning a moving path of a movable platform. The apparatus includes one or more processors and a computer-readable storage medium storing one or more computer programs which, when executed by a processor, implement: obtaining a semantic map of the operating environment of the movable platform, where the semantic information of each image area in the semantic map corresponds to an obstacle avoidance strategy of the movable platform; determining the semantic information of a target image area in the semantic map according to the semantic map; and planning a moving path along which the movable platform avoids the target object corresponding to the target image area, according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
In a third aspect, an embodiment of the present application provides a path planning system, configured to plan a moving path, where the system includes: the control terminal and the movable platform are in communication connection with each other, wherein the control terminal and/or the movable platform comprise the path planning device.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the method as above.
In these embodiments, because the semantic information of each image area in the semantic map corresponds to an obstacle avoidance strategy of the movable platform, the movable platform can plan a path based on the obstacle avoidance strategy corresponding to each image area, so that more reasonable and efficient path planning can be achieved in complex scenes.
It should be understood that different aspects of the present application may be understood individually, collectively, or in combination with each other. The various aspects of the present application described herein may be applicable to any of the specific applications set forth below or to any other type of movable platform. Any description herein of an aircraft, such as an unmanned aerial vehicle, may be applicable and used with any movable platform, such as any vehicle. Additionally, the systems, devices, and methods disclosed herein in the context of airborne motion (e.g., flying) may also be applicable in the context of other types of motion, such as movement on the ground or on water, underwater motion, or motion in space.
Advantages of additional aspects of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and other objects, features and advantages of embodiments of the present application will be more readily understood from the following detailed description taken in conjunction with the accompanying drawings. Embodiments of the present application will be described by way of example and not limitation in the accompanying drawings, in which:
Fig. 1 is an application scenario of a path planning method, a path planning apparatus, a path planning system, and a medium according to an embodiment of the present application;
Fig. 2 is an application scenario of a path planning method, a path planning apparatus, a path planning system, and a medium according to another embodiment of the present application;
Fig. 3 is a schematic flowchart of a path planning method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a semantic map according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a return trip start point and a return trip end point according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a user interaction interface according to an embodiment of the present application;
Fig. 7 is a schematic diagram of updating a semantic map according to an embodiment of the present application;
Fig. 8 is a schematic diagram of updating a semantic map according to another embodiment of the present application;
Fig. 9 is a schematic diagram of updating a semantic map according to another embodiment of the present application;
Fig. 10A is a schematic diagram of updating a semantic map according to another embodiment of the present application;
Fig. 10B is a schematic diagram of a movement path according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a safety distance according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a semantic map according to another embodiment of the present application;
Fig. 13 is a schematic diagram of obstacle information detected by a sensor during movement according to an embodiment of the present application;
Fig. 14 is a schematic diagram of updating a semantic map based on obstacle information detected by the movable platform according to an embodiment of the present application;
Fig. 15 is a schematic diagram illustrating that the smoothness of a movement path meets the maneuvering requirements of the movable platform, according to an embodiment of the present application;
Fig. 16 is a schematic flowchart of path planning according to another embodiment of the present application;
Fig. 17 is a schematic diagram of a stick input that is not responded to, according to an embodiment of the present application;
Fig. 18 is a schematic structural diagram of a path planning apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In order to facilitate understanding of the technical solutions of the present application, a detailed description is given below with reference to Figs. 1 to 18.
Fig. 1 is an application scenario of a path planning method, a path planning device, a path planning system, and a medium according to an embodiment of the present application.
As shown in fig. 1, the embodiments of the present application provide a semantic-map-based path planning approach, which fills the technical gap of robot path planning in stable environments and provides an inexpensive, reusable way of planning paths. To facilitate understanding of the embodiments of the present application, the semantic map is illustrated first.
As shown in fig. 1, the semantic map may be a pixel image (for example in tif/tfw format), where each pixel corresponds to a real-world coordinate location and stores a piece of semantic information indicating the type of object at that location. To facilitate the use of the semantic map, a plurality of adjacent pixels with the same semantics may be grouped into an image area, and each image area has corresponding semantic information. The image areas may be represented by different colors, fill patterns, and the like. For example, green pixels may correspond to the semantics "farmland", meaning that the actual positions corresponding to those pixels belong to a real farmland. As another example, purple pixels may correspond to the object "tree". As a further example, as shown in fig. 1, the semantic information of the image area with the dot fill pattern 1111 in the semantic map 1 is a wheat field, the semantic information of the image area with the pure white fill pattern 1112 is a river, the semantic information of the image area with the horizontal-line fill pattern 1113 is a no-fly zone, the semantic information of the image area with the grid fill pattern 1114 is a building, the semantic information of the image area with the diagonal-line fill pattern 1115 is a corn field, and the semantic information of the image area with the vertical-line fill pattern 1116 is a high-voltage line tower.
It should be noted that the granularity of the semantic division is adjustable. For example, the granularity can be coarsened as required, such as classifying the wheat field and the corn field into the same type, "crop", or merging high-voltage line towers into the "building" category. Of course, the granularity can also be refined as required, for example dividing buildings into high-rise buildings, low-rise buildings, and the like.
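As an illustrative aid (not part of the original disclosure), the pixel-level semantic map and its grouping into image areas described above can be sketched roughly as follows; the class names, semantic codes and 4-connectivity grouping are assumptions made for the example only.

```python
# Minimal sketch of a raster semantic map grouped into image areas.
# All names and codes here are illustrative assumptions, not values from the patent.
from dataclasses import dataclass, field

# Per-pixel semantic codes; the granularity is adjustable (e.g. merge crops into one code).
WHEAT_FIELD, RIVER, NO_FLY_ZONE, BUILDING, CORN_FIELD, POWER_TOWER = range(6)

@dataclass
class ImageRegion:
    semantic: int                              # semantic code shared by all pixels in the area
    pixels: set = field(default_factory=set)   # (row, col) raster coordinates

def group_regions(raster):
    """Group adjacent pixels with the same semantic code into image areas (4-connectivity)."""
    rows, cols = len(raster), len(raster[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen:
                continue
            code = raster[r][c]
            region = ImageRegion(semantic=code)
            stack = [(r, c)]
            while stack:                        # flood fill over same-code neighbours
                y, x = stack.pop()
                if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols) or raster[y][x] != code:
                    continue
                seen.add((y, x))
                region.pixels.add((y, x))
                stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
            regions.append(region)
    return regions
```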
The semantic map can be acquired in various ways, and its source is not limited. For example, the semantic map may come from aerial-image recognition, manual planning by the user, third-party downloads, and the like.
An existing movable platform is not necessarily equipped with complex sensors; obstacle avoidance could be achieved by building a map in real time based on a laser radar, an ultrasonic radar, an image sensor, and the like, but complex sensors entail increased hardware cost. Furthermore, for small robots (such as drones), complex sensors add weight and occupy internal space that could otherwise be used for energy storage media, work payloads, and the like. Taking a drone plant-protection scenario as an example, in the related art a path is planned by detecting obstacles in real time based on complex hardware sensors, and large-scale application of this approach presupposes that every robot carries a complex suite of environment detection sensors. On the one hand, complex sensors inevitably increase hardware cost. On the other hand, for scenes where the environment information is relatively fixed and the sensors carried by the robot are limited, the existing path planning schemes are difficult to adapt.
Therefore, for a relatively stable operating scene, compared with a map obtained by real-time detection, using a pre-loaded semantic map reduces the hardware cost of the movable platform as well as the computing resources consumed by real-time map construction. As a way of describing the environment, the semantic map is a configuration file that can be used by multiple robot platforms at the same time. Because the robots load the same map, different robots are guaranteed to produce the same planning result under the same conditions.
It should be noted that the above drone plant-protection scenario is only an exemplary illustration and should not be construed as limiting the present application. The unmanned aerial vehicle may be a consumer drone, an agricultural drone, an industrial-application drone, and the like. The robot may be a road robot, an indoor robot, a water-surface robot, an underwater robot, a seabed robot, an aerial robot, and the like, which is not limited here.
As shown in fig. 1, the semantic map may be used as an input to path planning: an obstacle avoidance strategy corresponding to each image area is determined based on the semantic information of that image area, and path planning is then performed based on the obstacle avoidance strategies. In the semantic map of fig. 1, the obstacle avoidance strategy for the image areas with the dot fill pattern 1111 and the pure white fill pattern 1112 is to pass, the obstacle avoidance strategy for the image areas with the horizontal-line fill pattern 1113 and the grid fill pattern 1114 is to detour, and the obstacle avoidance strategy for the image area with the diagonal-line fill pattern 1115 is to pass over. In this way, the moving path of a robot in a stable environment can be conveniently planned without relying on sensors carried by the robot, and the reliability of the planned path is improved because of the high reliability of the environment information given by the semantic map. The drone 2 can move from the position corresponding to the path starting point 3 to the position corresponding to the path ending point 4 along the planned movement path.
It should be noted that the above obstacle avoidance strategies are only examples; there may be more or fewer strategies. For example, the obstacle avoidance strategies may include only passing and detouring. As another example, the obstacle avoidance strategies may include passing, detouring, passing over, passing under, passing at high speed, passing at low speed, and the like, which is not limited here.
According to the path planning method, apparatus, system and medium provided by the embodiments of the present application, because the semantic information of each image area in the semantic map corresponds to an obstacle avoidance strategy of the movable platform, the movable platform can perform path planning based on the corresponding obstacle avoidance strategies, so that more reasonable and efficient path planning can be achieved in complex scenes.
According to the path planning method, apparatus, system and medium provided by the embodiments of the present application, the environment information contained in the semantic map can be edited, so that the user can edit it according to the actual scene. Unlike the related art, in which the robot monitors the environment in real time, the semantic map can be modified and configured in advance according to the actual situation, which improves the flexibility of applying the semantic map.
Fig. 2 is an application scenario of a path planning method, a path planning apparatus, a path planning system, and a medium according to another embodiment of the present application. As shown in fig. 2, a movable platform 10 carrying a working device 14 is described as an example.
In fig. 2, the movable platform 10 includes a body 11, a carrier 13, and a working device 14 (such as a plant protection device, a surveying device, or an image capturing device). Although the movable platform 10 is depicted as an aircraft, this description is not limiting, and any of the types of movable platforms described above are suitable. In some embodiments, the working device may be located directly on the movable platform 10 without the carrier 13. The movable platform 10 may include a power mechanism 15 and a sensing system 12. In addition, the movable platform 10 may also include a communication system.
Through the communication system, the movable platform 10 can exchange wireless signals 30 with a control terminal 20 that also has a communication system, via an antenna 22 arranged on a body 21. The communication system may include any number of transmitters, receivers, and/or transceivers for wireless communication.
In some embodiments, the control terminal 20 may provide control instructions to one or more of the movable platform 10, the carrier 13, and the working device 14, and receive information from one or more of them (e.g., position and/or motion information of an obstacle, the movable platform 10, the carrier 13, or the working device 14, and load sensing data such as liquid-level information, flow information, and temperature information). In some embodiments, the control data of the control terminal 20 may include instructions regarding position, motion, and braking for controlling the movable platform 10, the carrier 13, and/or the working device 14. For example, the control data may cause a change in the position and/or orientation of the movable platform 10 (e.g., by controlling the power mechanism 15), or cause the carrier 13 to move relative to the movable platform 10 (e.g., by controlling the carrier 13). The control data of the control terminal 20 may also control the load, such as controlling the operation of a spraying device (start spraying, stop spraying, control the flow, the spraying angle, the spray liquid ratio, etc.). In some embodiments, the communication with the movable platform 10, the carrier 13, and/or the working device 14 may include information from one or more sensors (e.g., the distance sensor 12, a water-level sensor, an angle sensor, etc.). The communication may include sensed information transmitted from one or more different types of sensors, such as a GPS sensor, a motion sensor, an inertial sensor, a proximity sensor, or an image sensor. The control data provided by the control terminal 20 may be used to control the state of one or more of the movable platform 10, the carrier 13, or the working device 14. Optionally, one or more of the carrier 13 and the working device 14 may include a communication module for communicating with the control terminal 20, so that the control terminal 20 can communicate with and control the movable platform 10, the carrier 13, and the working device 14 individually. The control terminal 20 may be a remote controller of the movable platform 10, or an intelligent electronic device such as a mobile phone, an iPad, or a wearable electronic device that can be used to control the movable platform 10.
It should be noted that the control terminal 20 may be remote from the movable platform 10 to implement remote control of the movable platform 10, or may be fixedly or detachably disposed on the movable platform 10, as required.
In some embodiments, the movable platform 10 may communicate with remote devices other than the control terminal 20, and the control terminal 20 may also communicate with another remote device in addition to the movable platform 10. For example, the movable platform 10 and/or the control terminal 20 may communicate with another movable platform, or with the carrier or load of another movable platform. When desired, the additional remote device may be a second terminal or another computing device (e.g., a computer, desktop, tablet, smartphone, or other mobile device). The remote device may transmit data to the movable platform 10 (e.g., transmit a semantic map), receive data from the movable platform 10 (e.g., obstacle information), transmit data to the control terminal 20, and/or receive data from the control terminal 20. Alternatively, the remote device may be connected to the Internet or another telecommunications network so that data received from the movable platform 10 and/or the control terminal 20 can be uploaded to a website or server.
It should be noted that the movable platform 10 may also be a land robot, an unmanned vehicle, an underwater robot, etc., and is not limited herein.
Fig. 3 is a schematic flow chart of a path planning method according to an embodiment of the present application. As shown in fig. 3, the path planning method for planning the moving path of the movable platform may include operations S301 to S305.
In operation S301, a semantic map of the operation environment of the movable platform is obtained, where semantic information of each image area in the semantic map has a corresponding relationship with an obstacle avoidance policy of the movable platform.
In this embodiment, the semantic map may be input by the user into the movable platform or a control terminal of the movable platform. The semantic map may also be downloaded automatically by the movable platform or its control terminal, for example retrieved from a semantic map collection based on the current location information of the movable platform. The semantic map may also be stored locally on the movable platform or its control terminal and read automatically from the storage space after the movable platform is powered on. The semantic map may be drawn by another electronic device, or may be drawn in advance.
The semantic map may include a plurality of image areas, and the obstacle avoidance strategies corresponding to the respective pixels in each image area are the same. For example, the semantic information corresponding to each pixel in a wheat field area is "wheat field", and the obstacle avoidance strategy corresponding to the wheat field area is to pass. The corresponding semantic information may be set for each pixel, or only for each image area, which is not limited here.
The correspondence between the semantic information of an image area in the semantic map and the obstacle avoidance strategy of the movable platform may be set by the user, by the manufacturer, or by the person who draws the semantic map, which is not limited here. The correspondence may be modified by the user, for example by modifying an image area in the semantic map, modifying the semantic information of an image area, or modifying the obstacle avoidance strategy corresponding to certain semantic information. For example, the obstacle avoidance strategies include at least one of a side bypass strategy, a pass-over strategy, or a pass-under strategy.
Fig. 4 is a schematic diagram of a semantic map provided in an embodiment of the present application.
As shown in fig. 4, the semantic map includes six image areas, whose semantic information is respectively: wheat field, water surface, building, no-fly zone, corn field, and high-voltage line tower. The obstacle avoidance strategy corresponding to each image area may respectively be: pass, pass at high speed, detour, no flight, pass over, and the like.
It should be noted that, in the embodiments of the present application, an operation state corresponding to an obstacle avoidance strategy may also be set. For example, the method may further include setting the operation state of the movable platform to a prohibited operation when the obstacle avoidance strategy includes passing over. For example, the prohibited operation includes at least one of: spraying prohibited, photographing prohibited, surveying and mapping prohibited, and the like. This effectively improves the convenience of setting corresponding operation states for the areas of the semantic map.
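By way of illustration only, the correspondence between semantic information, obstacle avoidance strategy and prohibited operation state described above might be held in a simple lookup table; the strategy names and table entries below are assumptions for the sketch, not values taken from the disclosure.

```python
# Hedged sketch of the semantics -> (strategy, prohibited operations) correspondence.
from enum import Enum

class Strategy(Enum):
    PASS = "pass"              # move straight through the area
    DETOUR = "detour"          # bypass from the side
    PASS_OVER = "pass over"    # climb and pass above the target object
    PASS_UNDER = "pass under"
    NO_FLY = "no fly"

# semantic label -> (obstacle avoidance strategy, prohibited operations); contents are assumed.
AVOIDANCE_TABLE = {
    "wheat field": (Strategy.PASS,      set()),
    "corn field":  (Strategy.PASS_OVER, {"spraying"}),
    "building":    (Strategy.DETOUR,    {"spraying", "photographing"}),
    "no-fly zone": (Strategy.NO_FLY,    {"spraying", "photographing", "surveying"}),
}

def strategy_for(semantic_label: str) -> Strategy:
    # Unknown semantics fall back to the conservative detour strategy in this sketch.
    strategy, _prohibited = AVOIDANCE_TABLE.get(semantic_label, (Strategy.DETOUR, set()))
    return strategy
```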
In operation S303, semantic information of a target image area in a semantic map is determined according to the semantic map.
In this embodiment, the target image area may be an image area that the movement path may need to pass through. Referring to fig. 1, the image area corresponding to the current position of the unmanned aerial vehicle in the semantic map 1 is the area denoted by reference numeral 1111, so that image area is a target image area. As the drone moves in its current direction of movement, it passes through the image areas denoted by 1113, 1114, 1112, and 1115, so those image areas are also target image areas. The drone does not pass through the image area denoted by 1116 as it moves in the current direction, so that image area is not a target image area.
After the target image area is determined, an obstacle avoidance strategy corresponding to the target image area can be determined from the semantic map based on the correspondence, so that path planning can be performed based on the obstacle avoidance strategy.
In one embodiment, determining the semantic information of a target image area in the semantic map according to the semantic map may include the following operations. First, the target image area is determined according to the semantic map and the position information, in the semantic map, corresponding to the initial path point and the target path point of the movable platform. Then, the semantic information of the target image area is determined according to the target image area. The initial path point and the target path point may be set by the user; for example, if the user specifies that the movable platform moves from position A to position B, position A is the initial path point and position B is the target path point.
The initial path point and the target path point may also be determined automatically by the movable platform. For example, when the user sends a return instruction to the movable platform through the control terminal, when the operation is finished, when the movable platform detects a risk of being hijacked, when it receives a return instruction from air traffic control, or when it completes a preset operation task, the current position of the movable platform is taken as the initial path point, and the point from which the movable platform originally started moving is taken as the target path point. Taking an unmanned aerial vehicle as an example of the movable platform, the initial path point of the movable platform is the return trip start point, and the target path point is the return trip end point.
Fig. 5 is a schematic diagram of a return trip start point and a return trip end point provided in the embodiment of the present application.
As shown in fig. 5, marks corresponding to the return trip start point and the return trip end point may be placed on the semantic map, for example by loading a file. The marks include, but are not limited to: triangles, dots, circles, crosses, crosshair marks, and the like. The return trip start point and the return trip end point may be set by a user operation, for example by the user clicking two locations on the semantic map or inputting one or more coordinates. They may also be set automatically by the movable platform, for example taking the coordinates of the point where the return trip starts as the return trip start point and the coordinates of the point where the movable platform started working as the return trip end point. The initial path point and the target path point may be displayed on the semantic map.
Specifically, by connecting the initial path point and the target path point, it is possible to determine which image areas the line between the two points passes through and to take those image areas as the target image areas. It should be noted that, when the obstacle avoidance strategy corresponding to a target image area is detour, at least one image area adjacent to that target image area needs to be added to the set of target image areas, so as to form a continuous moving path.
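The following sketch (an illustration with assumed helper names, not the patented algorithm itself) shows one simple way to find which image areas the connecting line passes through, by sampling the raster along the straight line between the two path points.

```python
# Sample the raster along the straight line between the initial and target path points
# and record each image area (semantic code) the line enters, in order.
def regions_crossed_by_line(raster, start, goal, samples=1000):
    (r0, c0), (r1, c1) = start, goal          # (row, col) positions in the raster
    crossed = []
    for i in range(samples + 1):
        t = i / samples
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        code = raster[r][c]
        if not crossed or crossed[-1] != code:
            crossed.append(code)              # first entry into a new area
    return crossed
```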
In one embodiment, determining semantic information for a target image region in a semantic map based on the semantic map may include the following operations. Firstly, a target image area is determined according to the corresponding position information of the movable platform in the semantic map and the semantic map. Then, semantic information of the target image area is determined according to the target image area.
For example, the user controls the unmanned aerial vehicle to fly, or controls a ground robot to move, by means of the control sticks. In this scenario there is no initial path point or target path point, and the movable platform moves in response to the control instructions corresponding to the stick inputs (such as forward, backward, turn left, or turn right). In this case, the image area where the movable platform is currently located, together with a specified number of image areas intersected by extending along the current moving direction, may be used as the target image areas. For example, the current position of the movable platform corresponds to a current image area in the semantic map, and the one or more image areas closest to the current image area along the current moving direction are taken as target image areas.
In operation S305, a moving path of the movable platform to avoid the target object corresponding to the target image area is planned according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
In this embodiment, the movement path may be planned based on the semantic information of the target image area and the outline of the target image area. For example, for a scene in which an initial path point and a target path point exist, the connecting line between the initial path point and the target path point may be determined; if the connecting line intersects a target image area whose semantic information corresponds to a detour strategy, an alternative movement path (or alternative waypoints) may be generated based on the outline of the target image area to replace the segment of the connecting line (or the waypoints) that intersects the target image area. The alternative movement path may or may not conform to at least part of the contour of the target image area, and a safety distance needs to be kept between the alternative movement path and the contour of the target image area.
As another example, for a scene in which there is no initial path point and no target path point, if the semantic information of the target image area toward which the movable platform is moving corresponds to a detour strategy, an alternative movement path (or alternative waypoints) may likewise be generated based on the outline of the target image area to replace the portion of the path that intersects the target image area.
In one embodiment, the method further includes extracting the contour of each area of the semantic map, so as to generate the movement path based on at least the contour of the target image area. For example, for an image area whose obstacle avoidance strategy is the detour strategy, the contour of the image area may be extracted, and then a conformal path, or a movement path smoother than the conformal path, may be generated based on the contour. The extraction of the contour may be completed before path planning or during path planning, which is not limited here.
In addition, after the movement path is preliminarily determined, it may be further optimized so that, while the movable platform moves along the movement path, the following conditions are satisfied as far as possible: the triggering of operations such as emergency stop and braking is reduced, the path length is shortened as much as possible, energy consumption is reduced as much as possible, and the movement safety of the movable platform is improved as much as possible.
For example, the movement path satisfies at least one of the following conditions: the distance between a path point on the movement path and the target object is greater than the safety distance; the resources consumed in moving the movable platform from the initial path point to the target path point of the movement path are optimal, where the resources include at least one of path length, energy, or time; and the smoothness of the movement path meets the maneuvering requirements of the movable platform. The safety distance may be related to the size of the movable platform, the working radius of the movable platform, and the like, so as to ensure the flight safety and the working effect of the movable platform.
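A rough check of these conditions could look like the following sketch; the distance and turn-angle thresholds, and the use of a maximum heading change as a proxy for smoothness, are assumptions for illustration only.

```python
# Hedged sketch: checking safety distance and smoothness of a candidate movement path.
import math

def path_length(waypoints):
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

def max_turn_angle(waypoints):
    """Largest heading change between consecutive segments, in radians."""
    worst = 0.0
    for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        worst = max(worst, abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))))
    return worst

def path_is_acceptable(waypoints, obstacle_points, safety_distance, max_turn):
    clear = all(math.dist(w, o) > safety_distance
                for w in waypoints for o in obstacle_points)
    smooth = max_turn_angle(waypoints) <= max_turn   # proxy for the platform's manoeuvring limit
    return clear and smooth
```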
According to the path planning method provided by the embodiments of the present application, because the semantic information of each image area in the semantic map corresponds to an obstacle avoidance strategy of the movable platform, the semantic information contained in the semantic map can supplement the environment information available to the movable platform, and the movement path is planned based on the strategy corresponding to that semantic information. This addresses the problems in the related art that the movable platform usually has to obtain its operating environment information through complex environment detection sensors in order to plan a path, which is costly and energy-intensive. In particular, it fills the technical gap of path planning for a movable platform in a relatively stable environment and at least partially meets users' needs for low-cost, reusable path planning.
The following is an exemplary description of updating a semantic map.
In order to improve the applicability of semantic-map-based path planning, the user can update the semantic map based on his or her own requirements, so that the updated semantic map better matches the user's needs or the current environment of the movable platform.
In one embodiment, before obtaining the semantic map of the operating environment of the movable platform, the method may further include the following operations.
First, an initial semantic map of the operating environment of the movable platform is obtained. Then, semantic map update information generated based on a user operation is acquired. Next, the initial semantic map is updated according to the semantic map update information to obtain the semantic map of the operating environment of the movable platform.
The initial semantic map may be a semantic map read from a storage space, obtained from a network, or input by the user. The initial semantic map may be displayed in an interactive interface for editing. The user operation may be input through the control terminal. For example, the control terminal is provided with keys, a joystick, and other components, and the user can input semantic map update information by operating these components. As another example, the control terminal may include a display screen, and the user may input semantic map update information through interactive components displayed on the display screen (such as virtual keys or a virtual joystick).
Further, the user operation may be determined and input based on gesture recognition, motion sensing, or voice recognition. For example, the user may tilt the control terminal to control the position, direction of movement, or other aspects of a cursor on the interactive interface. The tilt of the control terminal may be detected by one or more inertial sensors, and corresponding commands (e.g., movement commands) may be generated. As another example, the user may use a touch screen to adjust an operating parameter of the load (e.g., a spraying parameter or a mapping parameter), the attitude of the load (via the carrier), or any other aspect of an object on the movable platform.
For example, the object operated by the user may be a control terminal communicatively connected to the movable platform. The user inputs at least one of the following on the control terminal: selection information, input point coordinates, a specified operation (such as edit, delete, or add), the object of the specified operation, and parameter values of the specified operation (such as coordinate values, safety distances, semantic information, and obstacle avoidance strategies). The control terminal may be an integrated device, such as a remote controller provided with a processor, a memory, a display screen, and the like. The control terminal may also be a split device; for example, the remote controller may form the control terminal together with other electronic equipment, such as a remote controller connected to a smartphone. An application (APP) may be installed on the smartphone, through which operating instructions can be input and operating parameters can be set.
In one embodiment, the method may further include the following operations: providing a user interaction interface, and displaying the initial semantic map on the user interaction interface.
Accordingly, obtaining semantic map update information generated based on a user operation may include the following operations: semantic map update information generated based on user operations on a user interaction interface is obtained.
This makes it easy for the user to enter semantic map update information and edit the semantic map.
Fig. 6 is a schematic diagram of a user interaction interface provided in an embodiment of the present application.
As shown in fig. 6, the user interactive interface may include an editing area and an effect presentation area. Semantic map update information may be entered by a user in the editing area.
In one embodiment, the semantic map update information includes the location, shape, and semantic information of an updated image area in the semantic map. For example, if the user adds an image area representing a building to the semantic map, the obstacle avoidance strategy does not need to be set separately for that image area; it is automatically set to detour based on the existing correspondence. In addition, for an existing image area, at least part of its location, shape, and semantic information can be edited by the user, which effectively improves convenience for the user and suits more scenarios.
Referring to fig. 6, the user may update each piece of information in the editing area, such as adding a correspondence, editing a correspondence, or deleting a correspondence. When editing a correspondence, the user may modify the pattern (such as the color or fill pattern), modify the semantic information (such as changing "corn field" to "orchard"), and modify the obstacle avoidance strategy (such as changing "pass" to "detour" or "pass over").
In one embodiment, the semantic map update information is stored in a configuration file and can be retrieved when needed. In this way, the original semantic map is not directly modified, which makes it convenient to reuse the semantic map.
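As a sketch only, the update information might be kept in a small configuration file and overlaid on a copy of the base map when it is loaded; the JSON layout below is an assumption, not the format used by the patent.

```python
# Apply user-edited regions from a configuration file without touching the base map.
import json

def apply_update(raster, config_path):
    with open(config_path) as f:
        updates = json.load(f)            # e.g. [{"pixels": [[r, c], ...], "semantic": 4}, ...]
    patched = [row[:] for row in raster]  # work on a copy so the base map stays reusable
    for region in updates:
        for r, c in region["pixels"]:
            patched[r][c] = region["semantic"]
    return patched
```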
Fig. 7 is a schematic diagram of updating a semantic map according to an embodiment of the present application.
As shown in fig. 7, the image area representing the corn field is enlarged relative to the initial semantic map. The enlarged image area may be determined based on semantic map update information input by the user; for example, the user manually enlarges the image area representing the corn field on the semantic map (or redraws a new image area representing the corn field to cover the original one, or deletes the original image area and then redraws a new one), and establishes the correspondence between the image area and the semantic information "corn field". The image area representing the corn field may also be determined by calling a configuration file. For example, after the user or the unmanned aerial vehicle performed a plant-protection operation last time, the image area representing the corn field as shown in fig. 7 was determined and a configuration file was generated; the next time a plant-protection operation is performed, the previous configuration file can be called directly to update the initial semantic map.
In one embodiment, the semantic map updating information further includes an obstacle avoidance strategy corresponding to the semantic information of the image area updated in the semantic map.
Specifically, semantic map update information may be input by the user to set the obstacle avoidance strategy corresponding to the semantic information of the updated image area. For example, when new semantic information such as "reef" is added to the map, it may not originally have a corresponding obstacle avoidance strategy, in which case the user is prompted to set one; alternatively, it may already have a corresponding obstacle avoidance strategy which is nevertheless opened for the user to modify.
Of course, the embodiment of the application may also modify the obstacle avoidance strategy corresponding to the semantic information of the existing image area in the semantic map. For example, the semantic map update information includes an obstacle avoidance policy corresponding to the semantic information of the image area in the initial semantic map.
Fig. 8 is a schematic diagram of updating a semantic map according to another embodiment of the present application.
As shown in fig. 8, the initial semantic map includes image areas corresponding to the water surface, the no-fly zone, the high-voltage line tower, the wheat field, the building, and the corn field, respectively. The initial map further includes the obstacle avoidance strategies corresponding to these image areas: pass, detour, pass over, and the like. If the current operation task does not involve spraying the corn field, or if, as the corn keeps growing taller, the corn field becomes unsuitable for the pass-over strategy, the user can change the obstacle avoidance strategy corresponding to the corn field from pass over to detour. This effectively improves the applicability of the planned movement path.
Fig. 9 is a schematic diagram of updating a semantic map according to another embodiment of the present application.
As shown in fig. 9, the initial semantic map includes image areas corresponding to the water surface, the no-fly zone, the high-voltage line tower, the wheat field, the building, and the corn field, respectively. The user may replace the corn crop in the corn field with fruit trees in order to increase economic returns, so that the image area originally representing the corn field now corresponds to an orchard. In this case, the user can modify the semantic information "corn field" to "orchard" through the user interaction interface of the APP, without rebuilding the map.
Fig. 10A is a schematic diagram of updating a semantic map according to another embodiment of the present application.
As shown in fig. 10A, the user may directly set an obstacle area in the semantic map as needed, where the obstacle area represents a virtual obstacle, so that the planned movement path bypasses it. This effectively makes it easier to adjust the work area. For example, a region may already have been sprayed manually and should not be sprayed again; as another example, construction or cabling may recently have taken place in an area. In such cases, the user can set an obstacle area at the corresponding position in the semantic map through the user interaction interface so that the planned path bypasses it, which effectively improves the flexibility of operation.
The following is an exemplary illustration of a side bypass strategy.
The side bypass strategy may be an obstacle avoidance strategy adopted when it is not suitable for the movable platform to pass through the geographic location corresponding to an image area. It reduces the probability of the movable platform being obstructed, reduces the probability of unexpected operations such as changes of moving direction, sudden stops, and braking while the movable platform moves along the movement path, improves movement efficiency, and reduces energy consumption.
In one embodiment, the side bypass strategy includes: when the size of the target object meets a preset condition, performing the side detour using a first side bypass strategy; and when the size of the target object does not meet the preset condition, performing the side detour using a second side bypass strategy. For example, when the target object in the target image area is too large and the movement path is of a zigzag type, a strategy of bypassing the obstacle sideways and then continuing the current work path would spend too much of the movement path on bypassing the obstacle and could reduce work efficiency. Conversely, when the target object in the target image area is small and the obstacle can be bypassed quickly, a strategy of bypassing the obstacle sideways can be adopted.
For example, the movement path is a zigzag path including work paths and traverse paths, and the size of the target object is its size in the direction perpendicular to the work paths. The zigzag path may be the projection of the movement path onto a two-dimensional horizontal plane, and each path point may further include height information in three-dimensional space.
Fig. 10B is a schematic diagram of a moving path according to an embodiment of the present application.
As shown in fig. 10B, the obstacle avoidance strategy of the graphic area representing the building on the left side of the map is different from that of the graphic area representing the building on the right side, and the graphic area representing the building on the left (such as a factory) is significantly larger than the image area representing the building on the right (such as a utility pole). In order to improve work efficiency, the obstacle avoidance strategy of switching to the adjacent work path is adopted for the graphic area representing the building on the left side of the map, while the graphic area representing the building on the right side is bypassed from the side so as to continue the current work path.
Specifically, whether the size of the target object meets the preset condition may be determined by comparing the size of the target object with the work path spacing of the movable platform. The size of the target object may refer to its maximum size, such as its maximum width, maximum length, or maximum height.
For example, the first side bypass strategy includes moving to the adjacent work path, and the second side bypass strategy includes bypassing from the side and continuing the current work path.
When the ratio between the size of the target object and the work path spacing is smaller than a designated multiple, the second side bypass strategy is adopted; the designated multiple can be set or modified by the user.
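The size test above can be sketched as follows; the default multiple and the return strings are placeholders for illustration.

```python
# Hedged sketch of choosing between the two side bypass strategies.
def choose_side_bypass(obstacle_width, work_path_spacing, multiple=2.0):
    """obstacle_width: extent of the target object perpendicular to the work path."""
    if obstacle_width / work_path_spacing < multiple:
        # Second strategy: small obstacle, bypass sideways and continue the current work path.
        return "bypass from the side and continue the current work path"
    # First strategy: large obstacle, switch to the adjacent work path instead.
    return "move to the adjacent work path"
```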
It should be noted that the obstacle avoidance strategy of each path point in the planned movement path has a lower priority than the obstacle avoidance strategy corresponding to an actual obstacle detected by the movable platform while it moves along the movement path. For example, the obstacle avoidance strategy corresponding to a certain path point in the movement path may be to pass, but if, when the movable platform moves near that path point, it detects an obstacle there, an obstacle avoidance strategy that does not cause the movable platform to collide with the obstacle is preferably adopted.
To facilitate understanding of the correspondence between semantic information and obstacle avoidance strategies, Table 1 lists, by way of example, the correspondence between some semantic values and obstacle avoidance strategies for a scene in which the movable platform is an unmanned aerial vehicle.
TABLE 1
[Table 1 appears as an image in the original publication (PCTCN2020127623-APPB-000001 and -000002); it lists example semantic values together with the corresponding obstacle avoidance strategies, safety distances, and height information.]
The concept of safe distance is also shown in table 1, which is exemplified below.
In one embodiment, the obstacle avoidance strategy further comprises safe distance information indicating a minimum distance of the movable platform relative to the target object corresponding to the target image area.
As shown in Table 1, image areas with different semantic information may have different safety distances, so as to reduce the length of the movement path while ensuring safe movement. For example, compared with trees, there is a higher probability around a building of encountering situations, caused for instance by pedestrians or artificially placed objects, that require the movable platform to slow down or brake hard. Therefore, a larger safety distance can be set for the image areas representing buildings in the semantic map, for example by increasing the safety distance value corresponding to that semantic value. As another example, compared with an ordinary building, the electromagnetic radiation generated by power lines may interfere more strongly with the communication between the control terminal and the movable platform, so a larger safety distance can be set for the image areas representing power lines in the semantic map.
Specifically, the safety distance is related to at least one of the size of the movable platform and the working radius of the movable platform. Because the obstacle avoidance strategy contains the safety distance information and is itself related to the semantic information, the correspondence between the safety distance information and the semantic information can be conveniently determined. In addition, different movable platforms have different minimum obstacle avoidance distances; for example, the braking distance of a road robot travelling at high speed is greater than that of one travelling at low speed, and at the same travelling speed the braking distance of an aerial robot is greater than that of a road robot. Therefore, the safety distances can be set based on the minimum obstacle avoidance distance of the movable platform, so as to improve the safety of the movable platform.
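A possible way to combine these factors is sketched below; the base values per semantic class and the way platform size, working radius and braking distance are combined are illustrative assumptions only.

```python
# Hedged sketch of deriving a safety distance per semantic class and per platform.
BASE_SAFETY_DISTANCE = {"tree": 2.0, "building": 5.0, "power line": 10.0}  # metres, assumed

def safety_distance(semantic_label, platform_radius, working_radius, min_braking_distance):
    base = BASE_SAFETY_DISTANCE.get(semantic_label, 5.0)
    # Never fall below what the platform needs to stop and to keep its working
    # footprint (e.g. spray swath) clear of the obstacle.
    return max(base, platform_radius + working_radius + min_braking_distance)
```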
Fig. 11 is a schematic diagram of a safety distance provided by an embodiment of the present application.
As shown in fig. 11, the safety distance may apply in two scenarios, as indicated by the double-headed arrow segments in the figure. The safety distance shown in the left diagram of fig. 11 applies to a graphic area in the semantic map whose obstacle avoidance strategy is detour: when the movable platform bypasses the area corresponding to that graphic area, its distance from the area needs to be greater than the safety distance. The safety distance shown in the right diagram of fig. 11 applies to an obstacle detected while the movable platform moves along the planned movement path: the distance between the movable platform and the detected obstacle needs to be controlled to be greater than the safety distance.
Table 1 also involves the concept of height information, which is illustrated below. Referring to fig. 1, the semantic information corresponding to the graphic area denoted by reference numeral 1115 is a corn field, which is taller than a wheat field; the unmanned aerial vehicle can nevertheless pass above it without bypassing from the side, which helps reduce the length of the movement path.
In one embodiment, the method may further include acquiring elevation information corresponding to the target image area.
Correspondingly, planning the movement path along which the movable platform avoids the target object corresponding to the target image area, according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, includes: planning the movement path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
For example, elevation information may be read from a semantic map with elevation information. The semantic map with elevation information may be obtained by fusing an elevation map and the semantic map. For another example, the semantic map with elevation information may be generated by the user marking the elevation information. As another example, the semantic map with elevation information may be generated directly by a mapping device having an image sensor and a ranging sensor. The elevation information may be a specific height value, or may be a height range, etc.
In one embodiment, planning the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area may include the following operation: when the obstacle avoidance strategy is to pass above the target object, the moving path is planned according to both the obstacle avoidance strategy corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
Fig. 12 is a schematic diagram of a semantic map according to another embodiment of the present application.
As shown in fig. 12, the semantic map includes a plurality of image areas with different semantic information, together with elevation information corresponding to each kind of semantic information. For example, the elevation of an image area whose semantic information is water surface (or road) is 0 meters, the elevation of a no-fly zone is infinite (∞), the elevation of a high-voltage tower is less than 20 meters, the elevation of a wheat field is 1 meter, the elevation of a building is greater than 10 meters, and the elevation of a corn field is 2-3 meters. Based on this information, more appropriate paths can be planned for different movable platforms. For example, a drone that can fly higher than 4 meters may adopt the strategy of passing over the corn field; a ground robot may travel on the ground through the high-voltage tower area, while a drone needs to detour around it.
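As a simple illustration of how elevation information can drive the choice of strategy, the sketch below compares the elevation of an image area with the height the platform can reach; the clearance margin and all names are assumptions, not values from the application.

```python
# Hedged sketch: pass over an area only if the platform can clear its elevation,
# otherwise fall back to a detour (a no-fly zone has infinite elevation).
def choose_strategy(area_elevation_m: float,
                    platform_max_height_m: float,
                    clearance_m: float = 1.0) -> str:
    if area_elevation_m == float("inf"):                    # e.g. no-fly zone
        return "detour"
    if platform_max_height_m >= area_elevation_m + clearance_m:
        return "pass_over"
    return "detour"

print(choose_strategy(3.0, 4.5))             # corn field (2-3 m) -> pass_over
print(choose_strategy(float("inf"), 120.0))  # no-fly zone -> detour
```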
The height value of the elevation information may be relative to the ground or relative to the horizontal plane, which is not limited herein. For a scenario in which the environmental information is fixed, the height value of the elevation information may be relative to a predetermined plane, such as the ground plane.
In one embodiment, the movable platform may move along the planned moving path. During the movement, obstacle avoidance still needs to be performed based on obstacle information monitored in real time. However, compared with a scenario in which path planning is performed without a semantic map, the complexity of the sensors adopted in this embodiment can be significantly reduced. For example, to ensure moving safety during automatic return, the related art may require an omnidirectional radar together with an image sensor, whereas the technical solution of the present application may only require a bidirectional radar. This reduces hardware cost and, at the same time, reduces the weight of the body, the consumption of computing resources, the consumption of energy, and the like.
Specifically, after planning the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy corresponding to the semantic information of the target image area, the method may further include the following operation: controlling the movable platform to move based on the moving path and the obstacle information detected by the movable platform through its sensors. The obstacle information may be detected by any obstacle detection method in the related art, such as detection based on images, radar (e.g., laser radar or ultrasonic radar), or a ranging sensor, which is not limited herein.
In one embodiment, referring to table 1, when an image area of one kind of semantic information has a plurality of obstacle avoidance strategies, the most suitable strategy may be selected based on, for example, the type of the movable platform. For example, when the movable platform is a drone and is controlled to move based on the moving path and the obstacle information detected through its sensors, for an obstacle detected by the drone, the first priority of passing above the obstacle is higher than the second priority of passing below it. The reason is that a drone can adjust its flying height, and obstacles are more likely to exist at low altitude than at high altitude, so passing above is more likely to be safe than passing below.
Fig. 13 is a schematic diagram of moving according to obstacle information detected by a sensor, provided by an embodiment of the present application.
As shown in fig. 13, the movable platform is taken as an unmanned aerial vehicle for illustration. While the unmanned aerial vehicle returns along the planned moving path and bypasses the no-fly zone, obstacle information is detected. To complete the return smoothly, the unmanned aerial vehicle may deviate from the planned moving path to go around the obstacle.
In one embodiment, controlling the movable platform to move based on the moving path and the obstacle information detected by the movable platform through the sensor includes: when the confidence of the obstacle detected by the sensor is greater than a preset threshold, controlling the movable platform to move based on the moving path and the obstacle information detected by the sensor, wherein the confidence of the obstacle is related to the number of times the obstacle is repeatedly detected and to the environment information at the time of detection.
If the confidence of the obstacle is less than a certain threshold, the obstacle information is regarded as unreliable and can be ignored. For example, on a rainy day, leaves and raindrops may be detected as obstacles, but their confidence is low; if the detection results differ greatly across several adjacent detection periods, the obstacle information may likewise be ignored. Conversely, if the drone detects an obstacle at the same position multiple times, or continuously detects an obstacle within a small area, the confidence of the obstacle information is higher.
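A minimal sketch of such a confidence check follows; it scores an obstacle by the fraction of recent detection periods in which it was seen and discounts the score in rainy conditions. The weighting and the threshold are illustrative assumptions.

```python
# Hedged sketch: ignore detections whose confidence stays below a threshold.
def obstacle_confidence(hit_count: int, window: int, raining: bool) -> float:
    base = hit_count / max(window, 1)        # fraction of recent periods with a hit
    return base * (0.5 if raining else 1.0)  # rain/leaves lower the confidence

CONF_THRESHOLD = 0.6

def should_avoid(hit_count: int, window: int, raining: bool) -> bool:
    return obstacle_confidence(hit_count, window, raining) > CONF_THRESHOLD

print(should_avoid(9, 10, raining=False))  # stable, repeated detection -> True
print(should_avoid(3, 10, raining=True))   # sporadic raindrop echoes   -> False
```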
In one embodiment, the method further includes: updating the semantic map based on obstacles detected by the movable platform. For example, if the obstacle information existing in a certain image area of the semantic map satisfies a certain condition, the obstacle may be regarded as relatively stable, and the semantic map may be updated with that obstacle information.
Fig. 14 is a schematic diagram of updating a semantic map based on obstacle information detected by a movable platform according to an embodiment of the present application.
Referring to fig. 13, the movable platform is again taken as an unmanned aerial vehicle for illustration. If the number (or proportion) of times the unmanned aerial vehicle detects an obstacle within the same position range in the vicinity of the no-fly zone across multiple operations satisfies a preset condition, the obstacle can be considered to be stably located in that area. Thus, as shown in fig. 14, an obstacle region may be added at the corresponding position of the semantic map.
In this embodiment, the semantic map may also be updated based on obstacle information detected while the movable platform moves along the planned moving path. The initial semantic map may be modified directly, or an obstacle region for the semantic map may be added in the form of a configuration file. In one embodiment, whether to write the obstacle region in the configuration file back into the initial semantic map may be decided according to the stability of the obstacle. For example, if a plant protection drone continuously detects obstacle information at the same position multiple times during operation or return flights over a designated area, or detects obstacle information at the same position for a duration exceeding a preset time threshold (e.g., 1 week, 1 month, or 1 year), the stably existing obstacle region in the configuration file can be solidified into the initial semantic map. In this way, automatic updating of the initial semantic map is achieved.
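The rule below sketches one way such a "solidification" decision could be made, judging stability either by the fraction of missions in which the obstacle reappears at the same spot or by how long it has persisted; the thresholds and names are hypothetical.

```python
# Hedged sketch: write a configured obstacle back into the initial semantic map
# only once it has proven stable.
from datetime import timedelta

def should_solidify(detections_at_same_spot: int,
                    total_missions: int,
                    observed_span: timedelta,
                    min_ratio: float = 0.8,
                    min_span: timedelta = timedelta(weeks=1)) -> bool:
    stable_by_ratio = detections_at_same_spot / max(total_missions, 1) >= min_ratio
    stable_by_time = observed_span >= min_span
    return stable_by_ratio or stable_by_time

print(should_solidify(9, 10, timedelta(days=2)))   # True: seen in 90% of missions
print(should_solidify(1, 10, timedelta(days=40)))  # True: has persisted past the threshold
```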
It should be noted that the obstacle region may be updated automatically by the movable platform based on a preset rule, or may be set by the user. For example, before performing a job, a user may mark an obstacle region on the semantic map for areas that do not need to be worked or that need to be avoided (such as image areas where obstacles may exist), so as to meet diverse user needs.
In one embodiment, planning a moving path of the movable platform to avoid a target object corresponding to the target image area according to an obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area includes: firstly, an objective function is established according to the three-dimensional position information of the target image area and an obstacle avoidance strategy. The objective function is then optimized to determine a movement path for the movable platform to avoid the target object.
Optimizing the objective function to determine the moving path of the movable platform for avoiding the target object may include: minimizing the objective function to determine the position parameters of the movable platform at a plurality of target trajectory points, i.e., the position parameters that minimize the value of the objective function. The objective function may include at least one of a collision cost function and a cost function representing kinematic and dynamic constraints; the safe distance in the obstacle avoidance strategy affects the collision cost function. The objective function may also include a path length cost function for constraining the route (the positions of the waypoints passed), and a kinetic energy loss cost function for constraining, for example, the flight speed. It should be noted that the cost functions listed above are only exemplary and should not be construed as limiting the present application. Any parameter that may affect path planning can be given a corresponding cost function, for example a resistance cost function that constrains the moving direction according to the resistance encountered during movement (moving downwind consumes less energy than moving upwind, and moving on an asphalt road consumes less energy than moving on sandy soil).
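As a rough illustration of such an objective, the sketch below sums a collision term (penalising trajectory points closer to an obstacle than the safe distance), a path-length term, and a smoothness term over a list of 3-D points. The particular cost terms, discretisation, and weights are assumptions; the application does not spell out a concrete formulation.

```python
# Hedged sketch of an objective over a discretised trajectory.
import math

def collision_cost(points, obstacles, safe_dist):
    cost = 0.0
    for p in points:
        for o in obstacles:
            d = math.dist(p, o)
            if d < safe_dist:                 # only violations are penalised
                cost += (safe_dist - d) ** 2
    return cost

def length_cost(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def smoothness_cost(points):
    # second differences approximate acceleration, standing in for dynamics limits
    return sum(sum((c[i] - 2 * b[i] + a[i]) ** 2 for i in range(3))
               for a, b, c in zip(points, points[1:], points[2:]))

def objective(points, obstacles, safe_dist, weights=(10.0, 1.0, 1.0)):
    return (weights[0] * collision_cost(points, obstacles, safe_dist)
            + weights[1] * length_cost(points)
            + weights[2] * smoothness_cost(points))

waypoints = [(0.0, 0.0, 2.0), (5.0, 1.0, 2.0), (10.0, 0.0, 2.0)]
print(objective(waypoints, obstacles=[(5.0, 0.0, 2.0)], safe_dist=2.0))
```

A minimiser (gradient-based or sampling-based, for instance) would then adjust the intermediate waypoints so that this value is as small as possible, which corresponds to the minimization step described above.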
Fig. 15 is a schematic diagram showing that the smoothness of the moving path provided by an embodiment of the present application meets the maneuvering requirements of the movable platform. As shown in fig. 15, compared with straight segments connecting consecutive path points, the moving path is smoother and meets the maneuvering requirements of the movable platform.
To reduce the amount of computation required to minimize the objective function, a plurality of predicted trajectory points may be sampled from a predicted trajectory, and the position parameters of the drone at these sampled points may be used as initial values for the minimization.
Fig. 16 is a schematic flow chart of path planning according to another embodiment of the present application.
As shown in fig. 16, taking the movable platform as an unmanned aerial vehicle as an example, the path planning process mainly includes three parts: input condition preparation, algorithm planning, and execution. The details of each stage are described in turn below.
First: input condition preparation
The input conditions referred to in this part are parameters set in advance by the operator for the return mission. At least some of these parameters do not need to be set anew each time a task is executed, such as the safe distance parameter, the initial path point, and the target path point.
Specifically, the user may set the initial path point and the target path point of the moving path. Taking the return mission as an example, the task is to return safely from the robot's current position to a target point, so the return start point and the return end point can be preset by the user. By default the return start point is the robot's current position; the return end point may be the default home point, may be set by the user by tapping in the APP, or may be supplied in a configuration file, which is not limited herein.
Semantic map modification
The semantic map can be modified in two ways: the user frames a polygonal obstacle on the semantic map, or the user modifies the semantic values of individual pixels.
Further, the information on which obstacles need to be detoured may also be set by the user.
Although the semantic values of different objects are labeled in the semantic map, it is still necessary to determine which image areas, according to their semantic values, need to be bypassed during planning. For example, image areas corresponding to semantic values such as fruit trees, buildings, or utility poles may be designated as areas that need to be bypassed, whereas image areas corresponding to farmland, road surfaces, or water surfaces do not need to be bypassed.
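A hypothetical lookup of this kind is sketched below; the semantic labels and the conservative default for unknown values are assumptions for illustration.

```python
# Hedged sketch: map each semantic value to "bypass" or "cross".
DETOUR_RULES = {
    "fruit_tree": "bypass",
    "building": "bypass",
    "utility_pole": "bypass",
    "farmland": "cross",
    "road": "cross",
    "water_surface": "cross",
}

def needs_detour(semantic_value: str) -> bool:
    # unknown labels are treated conservatively as obstacles to bypass
    return DETOUR_RULES.get(semantic_value, "bypass") == "bypass"

print(needs_detour("building"))  # True
print(needs_detour("farmland"))  # False
```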
Setting the planning parameters
The planning parameters include, but are not limited to: the minimum safe distance between the robot and an obstacle, the motion limit parameters of the robot, the flying height of the robot, and the like.
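The snippet below shows what such a reusable parameter set might look like; every key and value is illustrative rather than taken from the application.

```python
# Hedged sketch of planning parameters that can be kept between missions.
PLANNING_PARAMS = {
    "safe_distance_m": 2.5,       # minimum clearance to any obstacle
    "max_speed_mps": 8.0,         # motion limit of the robot
    "flight_height_m": 5.0,       # cruise height for pass-over segments
    "return_start": None,         # None -> use the current position
    "return_end": "home_point",   # or explicit coordinates set in the APP / a config file
}
```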
Second: algorithm planning
First, obstacle contour extraction may be performed on the semantic map. Alternatively, the moving path can also be planned directly on the individual pixels of the semantic map.
The obstacle contours of the semantic map are extracted for the following reason: the obstacle information in the semantic map is stored per pixel, and a large number of obstacle pixels need to be aggregated into polygonal contours to speed up subsequent computation. It should be noted that this step is not mandatory; planning can also be performed directly on the pixels, but doing so is slow, because the semantic value of each pixel has to be queried many times. For an example, refer to the contours of the image areas in fig. 14.
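One possible realisation of this aggregation step is sketched below with OpenCV's contour functions (assuming OpenCV 4 is available); the application does not mandate any particular library or tolerance, so these choices are assumptions.

```python
# Hedged sketch: aggregate the pixels of one obstacle class into simplified polygons.
import cv2
import numpy as np

def obstacle_polygons(semantic_map: np.ndarray, obstacle_value: int):
    mask = (semantic_map == obstacle_value).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for contour in contours:
        eps = 0.01 * cv2.arcLength(contour, True)   # simplification tolerance
        poly = cv2.approxPolyDP(contour, eps, True)
        polygons.append(poly.reshape(-1, 2))        # (N, 2) vertex array
    return polygons
```

Planning against such polygons avoids querying the semantic value of individual pixels over and over, which is the slowness noted above.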
Fusing obstacle information from multiple sources
As described above, in addition to the pixels of the semantic map, the user can also directly frame polygonal obstacles on the semantic map. The framed polygon information may not be stored in the semantic map itself and may instead be read, for example, from a configuration file. The polygonal obstacle areas produced in this step are fused with the obstacle contours extracted in the previous step, so that all obstacles are represented by polygonal areas on the semantic map.
Planning map generation and return route planning
After the image areas corresponding to the obstacles, the initial path point, and the target path point are determined, the path planning algorithm is run to compute a safe moving path. The moving path can have the following characteristics: first, every point on the path satisfies the minimum distance (safe distance) between the movable platform and the obstacles; second, the "cost" of moving from the initial path point to the target path point is minimal, where the cost can be evaluated by different criteria such as shortest distance or minimal energy consumption; third, the moving path is smooth and meets the maneuvering requirements of the movable platform (such as an unmanned aerial vehicle).
Outputting the moving path
After the planning algorithm finishes executing, the path needs to be processed into an appropriate format so that it meets the requirements of later execution, and it is then output.
This completes the semantic-map-based planning process. Further, the following operations may also be included.
Third: executing the path via flight control
In this embodiment, the movable platform may execute the return moving path generated in the previous step.
In one embodiment, an effect similar to a geo-fence can be achieved based on the semantic map, which makes it convenient to set operating rules for the operator of the movable platform and reduces the risk of the movable platform moving into a forbidden area.
For example, after planning a moving path of the movable platform to avoid the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, the following operations are further included.
A joystick amount generated based on a user operation is received, for example when the user controls the movement of the movable platform with the joystick of a remote controller. If it is determined that the joystick amount would cause the movable platform to enter the target image area (for example, an image area whose obstacle avoidance strategy is to bypass), the joystick amount is not responded to. For example, in a drone racing event, a closed no-fly zone can be defined through the semantic map, or a no-fly zone can be set for a specific part of the venue, so as to prevent improper operation of a drone from injuring spectators or contestants from competing illegally.
Fig. 17 is a schematic diagram of not responding to a joystick amount according to an embodiment of the present application.
As shown in fig. 17, the dotted line is the moving track of the movable platform, the meshed area on the periphery carries the semantic information of a building, and a no-fly zone is set inside it. If the user's joystick amount would cause the movable platform to enter the image area corresponding to the building or the image area corresponding to the no-fly zone, the movable platform determines from the semantic map that the joystick amount should not be responded to.
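A simplified sketch of this check follows: the position the joystick amount would lead to is predicted with a crude constant-velocity model (an assumption) and the input is ignored whenever that position falls inside a forbidden polygon of the semantic map.

```python
# Hedged sketch: reject joystick inputs that would push the platform into a forbidden area.
def point_in_polygon(x: float, y: float, poly) -> bool:
    inside = False
    n = len(poly)
    for i in range(n):                       # even-odd (ray casting) rule
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def respond_to_stick(pos, stick, forbidden_polys, dt=0.5):
    predicted = (pos[0] + stick[0] * dt, pos[1] + stick[1] * dt)
    if any(point_in_polygon(*predicted, poly) for poly in forbidden_polys):
        return pos            # do not respond: hold the current position
    return predicted          # otherwise apply the joystick command
```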
The execution subject of each operation is described below by way of example, taking an unmanned aerial vehicle and its control terminal as an example. At least part of the information involved below can be transmitted between the unmanned aerial vehicle and its control terminal.
The semantic map of the operating environment of the movable platform can be acquired by the unmanned aerial vehicle and/or the control terminal thereof.
Semantic information of the target image area in the semantic map may be determined by the drone and/or its control terminal.
The unmanned aerial vehicle and/or the control terminal thereof can plan a moving path of the movable platform for avoiding the target object corresponding to the target image area.
The semantic map update information may be received by the control terminal.
The user interaction interface and various information related to the path planning may be displayed by the control terminal.
Obstacle information during flight can be detected by the drone through its sensors, and the drone moves accordingly.
The semantic map may be updated by the drone and/or its control terminal.
A user operation may be received by the control terminal to generate the joystick amount.
It should be noted that the execution subjects of the above operations are only exemplary and are not to be construed as limiting the application; each operation may be completed independently by one of the movable platform, the control terminal, the pan/tilt head, or the load, or by several of them in cooperation. For example, when the movable platform is a ground robot, a human-computer interaction module (such as a display for presenting a human-computer interaction interface) may be disposed on the robot, and a user operation can be obtained directly on the interface displayed on the movable platform, for example to obtain map update information. Completing an operation independently includes actively or passively, directly or indirectly obtaining the required data from other devices in order to perform the operation.
According to the path planning method provided by the embodiments of the present application, the environmental information provided by the semantic map serves as at least part of the basis for path planning, which enriches the sources of environmental information and reduces the related art's dependence on sensing the environment with complex sensors. For a relatively stable environment in particular, the semantic map can be reused, so there is no need to sense the environment with complex sensors and build a map in real time every time a path is planned. In addition, because the environmental information contained in the semantic map can be edited, a user can adapt it to the actual scene, which improves the flexibility of applying the semantic map. Furthermore, because the semantic map is produced in advance, the amount of environment-detection computation during the operation of the movable platform is reduced, which effectively reduces resource consumption.
Fig. 18 is a schematic structural diagram of a path planning apparatus according to an embodiment of the present application.
As shown in fig. 18, the path planning apparatus 1800 may include one or more processors 1810, which may be integrated into one processing unit or respectively disposed in a plurality of processing units, and a computer readable storage medium 1820 for storing one or more computer programs 1821 which, when executed by a processor, implement the path planning method described above, for example: obtaining a semantic map of the operating environment of the movable platform; determining semantic information of a target image area in the semantic map according to the semantic map; and planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
The path planning apparatus 1800 may be disposed in a single execution body or distributed over a plurality of execution bodies. For example, in a scenario such as a ground robot capable of local control, the path planning apparatus 1800 may be disposed in the ground robot; if a pan/tilt head is mounted on the robot, a camera may be disposed on the pan/tilt head, and a display screen may be disposed on the body of the robot for interaction with the user. For another example, in a scenario where a non-local control terminal controls the movable platform, at least part of the path planning apparatus 1800 (such as the functions that accept user operations) may be disposed in the control terminal, at least part (such as at least one of an information transfer function, an environment sensing function, and a coordinated control function) may be disposed in the movable platform, and at least part may be disposed in a load or the like.
For example, the processing unit may include a Field-Programmable Gate Array (FPGA) or one or more ARM processors. The processing unit may be connected with a non-volatile computer readable storage medium 1820. The non-volatile computer readable storage medium 1820 may store logic, code, and/or computer instructions that are executed by the processing unit to perform one or more steps, and may include one or more storage units (removable media or external memory such as an SD card or RAM). In certain embodiments, data sensed by the sensors may be transferred directly to and stored in a storage unit of the non-volatile computer readable storage medium 1820. The storage unit of the non-volatile computer readable storage medium 1820 may store logic, code, and/or computer instructions that are executed by the processing unit to perform the various embodiments of the methods described herein; for example, the processing unit may be configured to execute instructions to cause one or more of its processors to perform the path planning functions described above. The storage unit may store sensing data from the sensing module, to be processed by the processing unit. In certain embodiments, a storage unit of the non-volatile computer readable storage medium 1820 may store processing results generated by the processing unit.
In some embodiments, the processing unit may be coupled to a control module for controlling the state of the movable platform. For example, the control module may be used to control the power mechanism of the movable platform to adjust the spatial orientation, velocity, and/or acceleration of the movable platform with respect to the six degrees of freedom. Alternatively or in combination, the control module may control one or more of the carrier, load or sensing module.
The processing unit may also be coupled to a communication module for transmitting and/or receiving data with one or more peripheral devices, such as a terminal, a display device, or other remote control device. Any suitable communication method may be utilized, such as wired or wireless communication. For example, the communications module may utilize one or more local area networks, wide area networks, infrared, radio, Wi-Fi, peer-to-peer (P2P) networks, telecommunications networks, cloud networks, and the like. Alternatively, a relay station such as a signal tower, a satellite, or a mobile base station may be used.
The above components may be arranged in any suitable combination. For example, one or more components may be located on the movable platform, carrier, load, terminal, sensing system, or an additional external device in communication with the foregoing. In some embodiments, one or more of the processing units and/or non-volatile computer-readable media can be located in different places, such as on the movable platform, a carrier, a load, a terminal, a sensing system, an additional external device in communication with the foregoing, or various combinations of the foregoing.
In addition, the control terminal adapted to the movable platform may include an input module, a processing unit, a memory, a display module, and a communication module, all of which are connected via a bus or similar network.
The input module includes one or more input mechanisms for obtaining input generated by a user through manipulation of the input module. The input mechanisms include one or more joysticks, switches, knobs, slide switches, buttons, dials, touch screens, keypads, keyboards, mice, voice controls, gesture controls, inertial modules, and the like. The input module may be used to obtain user input for controlling any aspect of the movable platform, carrier, load, or a component thereof, including attitude, position, orientation, flight, tracking, and the like. For example, the input mechanism may be manually set by the user to one or more positions, each position corresponding to a preset input for controlling the movable platform.
In some embodiments, the input mechanism may be operable by a user to input control commands to control the movement of the movable platform. For example, a user may input a movement pattern of the movable platform, such as auto-flight, auto-pilot, or movement according to a preset movement path, using knobs, switches, or similar input mechanisms. As another example, a user may control the position, attitude, orientation, or other aspect of the movable platform by tilting the control terminal in some way. The tilt of the control terminal may be detected by one or more inertial sensors and corresponding movement commands generated. As another example, a user may adjust an operating parameter of the load (e.g., zoom), a pose of the load (via the carrier), or other aspect of any object on the movable platform using the input mechanisms described above.
In some embodiments, the input mechanism may be operated by the user to input the aforementioned description information of the object. For example, the user may select an appropriate tracking mode, such as a manual tracking mode or an automatic tracking mode, using a knob, switch, or similar input mechanism. The user may also use the input mechanism to select a particular object to track, a type of object to track, or other similar information. In various embodiments, the input module may be implemented by more than one device. For example, the input module may be implemented by a standard remote controller with a joystick, connected to a mobile device (e.g., a smartphone) running a suitable application program ("APP") that generates control instructions for the movable platform. The APP may be used to obtain user input.
The processing unit may be connected to the memory. The memory includes volatile or non-volatile storage media for storing data and/or logic, code, and/or program instructions executable by the processing unit for performing one or more rules or functions. The memory may include one or more storage units (removable media or external memory such as an SD card or RAM). In some embodiments, the data of the input module may be directly transferred and stored in a storage unit of the memory. The memory elements of the memory may store logic, code, and/or computer instructions that are executed by the processing unit to perform the various embodiments of the various methods described herein. For example, the processing unit may be configured to execute instructions that cause one or more processors of the processing unit to process and display sensed data (e.g., images) captured from the movable platform, generate control commands based on user input, including motion commands and object information, and cause the communication module to transmit and/or receive data, etc. The storage unit may store sensed data or other data received from an external device, such as a movable platform. In some embodiments, a storage unit of the memory may store processing results generated by the processing unit.
In some embodiments, the display module may be used to display information about the position, translational velocity, translational acceleration, direction, angular velocity, angular acceleration, or a combination thereof, of, for example, the movable platform 10, the carrier 13, and/or the work device 14 shown in fig. 2. The display module may also be used to display information transmitted by the movable platform and/or the load, such as sensed data (images recorded by a camera or other image capture device), the tracking data described above, control feedback data, and the like. In some embodiments, the display module may be implemented by the same device as the input module; in other embodiments, the display module and the input module may be implemented by different devices.
The communication module may be used to transmit and/or receive data from one or more remote devices (e.g., a movable platform, a bearer, a base station, etc.). For example, the communication module may transmit control signals (e.g., motion signals, object information, tracking control commands) to peripheral systems or devices, such as the movable stage 10, the carrier 13, and/or the mapping apparatus of fig. 2. The communication module may include a transmitter and a receiver for receiving data from and transmitting data to the remote device, respectively. In some embodiments, the communication module may include a transceiver that combines the functionality of a transmitter and a receiver. In some embodiments, the transmitter and receiver and the processing unit may communicate with each other. The communication may be by any suitable communication means, such as wire or wireless communication.
Images captured by the movable platform during motion may be transmitted from the movable platform or imaging device back to the control terminal or another suitable device for display, playback, storage, editing, or other purposes. Such transmission may occur in real time or near real time as the imaging device captures the images; optionally, there may be a delay between capture and transmission. In some embodiments, the images may be stored in the memory of the movable platform without being transferred anywhere else. The user may view these images in real time and, if desired, adjust the object information or other aspects of the movable platform or its components. The adjusted object information may be provided to the movable platform, and this iterative process may continue until a desired image is obtained. In some embodiments, the images may be transmitted from the movable platform, the imaging device, and/or the control terminal to a remote server. For example, the images may be shared on social networking platforms such as a WeChat friend circle or a microblog.
In one embodiment, determining semantic information for a target image region in a semantic map based on the semantic map may include the following operations.
Firstly, determining a target image area according to the corresponding position information of the initial path point and the target path point of the movable platform in the semantic map and the semantic map.
Then, semantic information of the target image area is determined according to the target image area.
For details, reference is made to the same parts of the foregoing embodiments, and further description is omitted here.
In one embodiment, the initial waypoint of the movable platform is a return origin and the target waypoint of the movable platform is a return destination.
In one embodiment, determining semantic information for a target image region in a semantic map from the semantic map comprises: determining a target image area according to the corresponding position information of the movable platform in the semantic map and the semantic map; and determining semantic information of the target image area according to the target image area.
In one embodiment, before obtaining the semantic map of the movable platform operating environment, the method may further include: acquiring an initial semantic map of the movable platform operating environment; obtaining semantic map update information generated based on a user operation; and updating the initial semantic map according to the semantic map update information to obtain the semantic map of the movable platform operating environment.
In one embodiment, the method further comprises: and providing a user interactive interface, and displaying the initial semantic map on the user interactive interface. Accordingly, the semantic map updating information generated based on the user operation is acquired, and the semantic map updating information comprises the following steps: semantic map update information generated based on user operations on a user interaction interface is acquired.
In one embodiment, the semantic map update information includes location, shape, and semantic information of an updated image region in the semantic map.
In one embodiment, the semantic map update information further includes an obstacle avoidance policy corresponding to semantic information of the image area updated in the semantic map.
In one embodiment, the semantic map update information includes an obstacle avoidance strategy corresponding to the semantic information of the image region in the initial semantic map.
In one embodiment, the obstacle avoidance strategy includes: at least one of a side bypass strategy, an over pass strategy, or an under pass strategy.
In one embodiment, the side bypass strategy includes: when the size of the target object satisfies a preset condition, performing a side detour using a first side bypass strategy; and when the size of the target object does not satisfy the preset condition, performing a side detour using a second side bypass strategy.
In one embodiment, the movement path is a bow-type path including a work path and a traverse path, and the size of the target object is a size of the target object in a direction perpendicular to the work path.
In one embodiment, it is determined whether the size of the target object satisfies a preset condition by comparing the size of the target object with the working path pitch of the movable platform.
In one embodiment, the first side bypass strategy includes: moving to the adjacent work path. The second side bypass strategy includes: bypassing from the side and then continuing on the current work path.
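For illustration, the sketch below selects between the two side bypass strategies by comparing the obstacle's extent perpendicular to the work path with the work-path spacing; which side of the comparison maps to which strategy is an assumption, since the text only states that the two quantities are compared.

```python
# Hedged sketch of choosing a side bypass strategy for a bow-shaped route.
def side_bypass_strategy(obstacle_width_m: float, path_spacing_m: float) -> str:
    if obstacle_width_m >= path_spacing_m:
        return "first: move to the adjacent work path"
    return "second: detour around the side and resume the current work path"

print(side_bypass_strategy(6.0, 4.0))  # wide obstacle   -> first strategy
print(side_bypass_strategy(1.5, 4.0))  # narrow obstacle -> second strategy
```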
In one embodiment, when the obstacle avoidance strategy includes passing above, the operation state of the movable platform is set to operation disabled.
In one embodiment, the obstacle avoidance strategy further comprises safe distance information indicating a minimum distance of the movable platform relative to the target object corresponding to the target image area.
In one embodiment, the safe distance is related to at least one of: the size of the movable platform, the working radius of the movable platform.
In one embodiment, elevation information corresponding to the target image area is acquired. Correspondingly, planning the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area includes: planning the moving path of the movable platform for avoiding the target object according to both the obstacle avoidance strategy corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
In one embodiment, planning the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area includes: when the obstacle avoidance strategy is to pass above, planning the moving path of the movable platform for avoiding the target object according to both the obstacle avoidance strategy corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
In one embodiment, after planning a moving path of the movable platform to avoid the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, the method further includes: and controlling the movable platform to move based on the moving path and the obstacle information detected by the sensor through the movable platform.
In one embodiment, the movable platform is a drone, and in controlling the movable platform to move based on the path of movement and obstacle information detected by the movable platform via the sensors, a first priority for the drone to pass over an obstacle is higher than a second priority for the drone to pass under the obstacle for the obstacle detected by the drone.
In one embodiment, the semantic map is updated based on obstacles detected by the movable platform.
In one embodiment, controlling the movable platform to move based on the movement path and obstacle information detected by the movable platform through the sensor includes: and when the confidence degree of the obstacles detected by the sensor is greater than a preset threshold value, controlling the movable platform to move based on the moving path and the obstacle information detected by the sensor, wherein the confidence degree of the obstacles is related to the times of repeatedly detecting the obstacles and the environment information when the obstacles are detected.
In one embodiment, the contours of regions of the semantic map are extracted to generate a movement path based at least on the contours of the target image region.
In one embodiment, the movement path satisfies at least one of the following conditions: the distance between the path point on the moving path and the target object is greater than the safety distance; the resources for moving the movable platform from the initial path point to the target path point of the moving path are optimally consumed, and the resources comprise at least one of the following: path length, energy or time; and the smoothness of the movement path meets the maneuvering requirements of the movable platform.
In one embodiment, planning a moving path of the movable platform to avoid the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area includes: establishing a target function according to the three-dimensional position information of the target image area and an obstacle avoidance strategy; the objective function is optimized to determine a movement path of the movable platform to avoid the target object.
In one embodiment, after planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, the method further includes: receiving a joystick amount generated based on a user operation; if it is determined that the joystick amount indicates that the movable platform enters the target image area, the joystick amount is not responded to.
For details, reference is made to the same parts of the foregoing embodiments, and further description is omitted here.
Another aspect of the present application further provides a path planning system for planning a moving path, the system including a control terminal and a movable platform communicatively connected to each other, wherein the control terminal and/or the movable platform includes the above path planning apparatus.
For example, the movable platform may specifically be an agricultural unmanned aerial vehicle, an agricultural unmanned ground vehicle, or the like.
The above are preferred embodiments of the present application. It should be noted that these preferred embodiments are provided only to aid understanding of the present application and are not intended to limit its scope. Furthermore, unless otherwise specified, the features of the preferred embodiments apply to both the method embodiments and the apparatus embodiments, and technical features appearing in the same or different embodiments may be used in combination as long as they do not conflict with each other.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (56)

  1. A path planning method for planning a path of movement of a movable platform, the method comprising:
    obtaining a semantic map of the operation environment of the movable platform, wherein semantic information of each image area in the semantic map has a corresponding relation with an obstacle avoidance strategy of the movable platform;
    determining semantic information of a target image area in the semantic map according to the semantic map;
    and planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
  2. The method of claim 1, wherein determining semantic information for a target image region in the semantic map from the semantic map comprises:
    determining the target image area according to the corresponding position information of the initial path point and the target path point of the movable platform in the semantic map and the semantic map;
    and determining semantic information of the target image area according to the target image area.
  3. The method of claim 2, wherein the initial waypoint of the movable platform is a return origin and the target waypoint of the movable platform is a return destination.
  4. The method of claim 1, wherein determining semantic information for a target image region in the semantic map from the semantic map comprises:
    determining the target image area according to the corresponding position information of the movable platform in the semantic map and the semantic map;
    and determining semantic information of the target image area according to the target image area.
  5. The method of claim 1, further comprising, prior to said obtaining a semantic map of the mobile platform operating environment:
    acquiring an initial semantic map of the movable platform operating environment;
    obtaining semantic map updating information generated based on user operation;
    and updating the initial semantic map according to the semantic map updating information to obtain the semantic map of the mobile platform operating environment.
  6. The method of claim 5, further comprising:
    providing a user interaction interface, and displaying the initial semantic map on the user interaction interface;
    the obtaining of semantic map update information generated based on user operation includes:
    and acquiring semantic map updating information generated based on the operation of the user on the user interaction interface.
  7. The method of claim 5, wherein the semantic map update information comprises location, shape, and semantic information of an updated image region in the semantic map.
  8. The method of claim 5, wherein the semantic map update information further comprises an obstacle avoidance policy corresponding to semantic information of an updated image area in the semantic map.
  9. The method of claim 5, wherein the semantic map update information comprises an obstacle avoidance strategy corresponding to semantic information of an image region in the initial semantic map.
  10. The method of claim 1, wherein the obstacle avoidance strategy comprises: at least one of a side bypass strategy, an over pass strategy, or an under pass strategy.
  11. The method of claim 10, wherein the side bypass strategy comprises:
    when the size of the target object meets a preset condition, performing a side detour by adopting a first side detour strategy; and when the size of the target object does not meet the preset condition, performing a side detour by adopting a second side detour strategy.
  12. The method of claim 11, wherein the movement path is a bow-type path, the bow-type path including a work path and a traverse path, and the size of the target object is a size of the target object in a direction perpendicular to the work path.
  13. The method of claim 12, further comprising:
    determining whether the size of the target object meets the preset condition by comparing the size of the target object with the distance between the working paths.
  14. The method of claim 12, wherein:
    the first side bypass strategy comprises: moving to an adjacent work path;
    the second side bypass strategy comprises: bypassing from the side and continuing on the current operation path.
  15. The method of claim 1, further comprising:
    when the obstacle avoidance strategy comprises passing above the obstacle, setting the operation state of the movable platform to operation disabled.
  16. The method of claim 1, wherein the obstacle avoidance strategy further comprises safe distance information indicating a minimum distance of the movable platform relative to a target object corresponding to the target image area.
  17. The method of claim 16, wherein the safe distance is related to at least one of: the size of the movable platform, the working radius of the movable platform.
  18. The method of claim 1, further comprising:
    acquiring elevation information corresponding to the target image area;
    the planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area includes:
    planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
  19. The method of claim 18, wherein planning a moving path of the movable platform to avoid the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area comprises:
    when the obstacle avoidance strategy is passing above, planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
  20. The method according to claim 1, wherein after the planning of the moving path of the movable platform to avoid the target object corresponding to the target image area according to the obstacle avoidance policy of the movable platform corresponding to the semantic information of the target image area, the method further includes:
    controlling the movable platform to move based on the moving path and obstacle information detected by the movable platform through a sensor.
  21. The method of claim 20, wherein the movable platform is a drone, and wherein, during the controlling of the movable platform to move based on the moving path and the obstacle information detected by the movable platform through the sensor, for an obstacle detected by the drone, a first priority for the drone to pass over the obstacle is higher than a second priority for the drone to pass under the obstacle.
  22. The method of claim 20, further comprising: updating the semantic map based on obstacles detected by the movable platform.
  23. The method of claim 20, wherein said controlling the movable platform to move based on the path of movement and obstacle information detected by the movable platform through sensors comprises:
    when the confidence of the obstacle detected by the sensor is greater than a preset threshold, controlling the movable platform to move based on the moving path and the obstacle information detected by the sensor, wherein the confidence of the obstacle is related to the number of times the obstacle is repeatedly detected and to the environment information when the obstacle is detected.
  24. The method of claim 1, further comprising: extracting contours of regions of the semantic map to generate the movement path based on at least the contour of the target image region.
  25. The method of claim 1, wherein the moving path satisfies at least one of the following conditions:
    the distance between a path point on the moving path and the target object is greater than a safe distance;
    the resource consumption of the movable platform moving from the initial path point to the target path point of the moving path is optimal, and the resource comprises at least one of the following resources: path length, energy or time; and
    the smoothness of the moving path meets the maneuvering requirements of the movable platform.
  26. The method according to claim 1, wherein planning, according to an obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, a moving path of the movable platform avoiding a target object corresponding to the target image area comprises:
    establishing a target function according to the three-dimensional position information of the target image area and the obstacle avoidance strategy;
    optimizing the objective function to determine a movement path of the movable platform to avoid the target object.
  27. The method according to any one of claims 1 to 26, wherein after the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, the method further comprises:
    receiving a joystick amount generated based on a user operation;
    not responding to the joystick amount if it is determined that the joystick amount indicates that the movable platform enters the target image area.
  28. A path planning apparatus for planning a path of movement of a movable platform, the apparatus comprising:
    one or more processors; and
    a computer readable storage medium storing one or more computer programs which, when executed by the processor, implement:
    obtaining a semantic map of the operating environment of the movable platform, wherein semantic information of each image area in the semantic map has a corresponding relation with an obstacle avoidance strategy of the movable platform;
    determining semantic information of a target image area in the semantic map according to the semantic map; and
    and planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area.
  29. The apparatus of claim 28, wherein the determining semantic information of a target image area in the semantic map according to the semantic map comprises:
    determining the target image area according to the corresponding position information of the initial path point and the target path point of the movable platform in the semantic map and the semantic map;
    and determining semantic information of the target image area according to the target image area.
  30. The apparatus of claim 29, wherein the initial waypoint of the movable platform is a return origin and the target waypoint of the movable platform is a return destination.
  31. The apparatus of claim 28, wherein the determining semantic information of a target image area in the semantic map according to the semantic map comprises:
    determining the target image area according to the corresponding position information of the movable platform in the semantic map and the semantic map;
    and determining semantic information of the target image area according to the target image area.
  32. The apparatus of claim 28, further comprising, prior to the obtaining of the semantic map of the movable platform operating environment:
    acquiring an initial semantic map of the movable platform operating environment;
    obtaining semantic map update information generated based on a user operation; and
    updating the initial semantic map according to the semantic map update information to obtain the semantic map of the movable platform operating environment.
  33. The apparatus of claim 32, further comprising:
    providing a user interaction interface, and displaying the initial semantic map on the user interaction interface;
    the obtaining of semantic map update information generated based on user operation includes:
    acquiring the semantic map update information generated based on an operation of the user on the user interaction interface.
  34. The apparatus of claim 32, wherein the semantic map update information comprises location, shape, and semantic information of an updated image area in the semantic map.
  35. The apparatus of claim 32, wherein the semantic map update information further comprises an obstacle avoidance strategy corresponding to the semantic information of an updated image area in the semantic map.
  36. The apparatus of claim 32, wherein the semantic map update information comprises an obstacle avoidance strategy corresponding to the semantic information of an image area in the initial semantic map.
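
As a sketch of the update flow in claims 32 to 36, the example below applies user-generated update information (position, shape, semantic label and, optionally, the attached obstacle avoidance strategy) to a grid semantic map; the labels, codes and rectangular shape are assumptions made for the example.

# Illustrative sketch of claims 32-36: applying user-generated update
# information to a grid semantic map. Labels, codes and the rectangle shape
# are assumptions.
import numpy as np

LABELS = {"free": 0, "tree": 1, "pylon": 2, "water": 3}
# Per-label obstacle avoidance strategies; may be overridden by the update.
STRATEGY = {"tree": "pass_over", "pylon": "side_bypass", "water": "pass_over"}

semantic_map = np.zeros((100, 100), dtype=np.uint8)   # initial semantic map

def apply_update(smap, update):
    # 'update' mimics the claimed update information: position and shape of
    # the updated image area, its semantic information, and (optionally) the
    # obstacle avoidance strategy attached to that semantic information.
    r0, c0 = update["position"]          # top-left cell of the updated area
    h, w = update["shape"]               # rectangular shape, in cells
    smap[r0:r0 + h, c0:c0 + w] = LABELS[update["semantic"]]
    if "strategy" in update:
        STRATEGY[update["semantic"]] = update["strategy"]
    return smap

# E.g. the user marks a 10x20-cell water area and forbids flying over it.
update_info = {"position": (40, 30), "shape": (10, 20),
               "semantic": "water", "strategy": "side_bypass"}
semantic_map = apply_update(semantic_map, update_info)
print((semantic_map == LABELS["water"]).sum())   # 200 updated cells
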
  37. The apparatus of claim 28, wherein the obstacle avoidance strategy comprises: at least one of a side bypass strategy, a pass-over strategy, or a pass-under strategy.
  38. The apparatus of claim 37, wherein the side bypass strategy comprises:
    when the size of the target object meets a preset condition, performing a side bypass by adopting a first side bypass strategy; and
    when the size of the target object does not meet the preset condition, performing a side bypass by adopting a second side bypass strategy.
  39. The apparatus of claim 38, wherein the moving path is a bow-shaped (boustrophedon) path, the bow-shaped path comprising work paths and traverse paths, and the size of the target object is the size of the target object in a direction perpendicular to the work path.
  40. The apparatus of claim 39, further comprising:
    determining whether the size of the target object meets the preset condition by comparing the size of the target object with the spacing between the work paths.
  41. The apparatus of claim 39, wherein:
    the first side bypass strategy comprises: moving to an adjacent work path; and
    the second side bypass strategy comprises: bypassing the target object from the side and continuing along the current work path.
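
The choice between the two side bypass strategies of claims 38 to 41 might look like the sketch below, which compares the obstacle's extent perpendicular to the work path with the work-path spacing; the exact comparison rule shown here is an assumption for the example.

# Illustrative sketch of claims 38-41: on a bow-shaped (boustrophedon) route,
# compare the obstacle's extent perpendicular to the work path with the
# spacing between work paths and pick the first or second side bypass
# strategy.
def choose_side_bypass(obstacle_width_m, work_path_spacing_m):
    # First strategy: the obstacle spans at least one work-path spacing, so
    # detouring around it on the current line is not worthwhile; move to the
    # adjacent work path instead.
    if obstacle_width_m >= work_path_spacing_m:
        return "first: switch to adjacent work path"
    # Second strategy: the obstacle is narrower than the spacing; go around
    # its side and continue along the current work path.
    return "second: bypass sideways, continue current work path"

print(choose_side_bypass(obstacle_width_m=6.0, work_path_spacing_m=4.0))
print(choose_side_bypass(obstacle_width_m=1.5, work_path_spacing_m=4.0))
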
  42. The apparatus of claim 28, further comprising:
    when the obstacle avoidance strategy comprises passing over the target object, setting the operation state of the movable platform to operation prohibited.
  43. The apparatus of claim 28, wherein the obstacle avoidance strategy further comprises safe distance information indicating a minimum distance of the movable platform relative to a target object corresponding to the target image area.
  44. The apparatus of claim 43, wherein the safe distance is related to at least one of: the size of the movable platform or the working radius of the movable platform.
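
A trivial sketch of claims 43 and 44, assuming the safe distance is the sum of the platform's half-size, its working radius and a fixed margin; the additive form and the margin value are assumptions, not the claimed rule.

# Illustrative sketch of claims 43-44: deriving a safe distance from the
# platform's own size and its working radius (e.g. spray radius).
def safe_distance(platform_half_size_m, working_radius_m, margin_m=1.0):
    # Keep the target object outside both the airframe envelope and the area
    # the platform is actively working on, plus a fixed margin.
    return platform_half_size_m + working_radius_m + margin_m

print(safe_distance(platform_half_size_m=0.8, working_radius_m=3.0))  # 4.8 m
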
  45. The apparatus of claim 28, further comprising:
    acquiring elevation information corresponding to the target image area;
    wherein the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area comprises:
    planning a moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
  46. The apparatus of claim 45, wherein the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area comprises:
    when the obstacle avoidance strategy is passing over the target object, planning the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area and the elevation information corresponding to the target image area.
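
A sketch combining claims 42, 45 and 46: when the strategy for the target image area is pass-over, the elevation of the area plus a vertical clearance sets the fly-over altitude, and operation is prohibited while above the area; the clearance value and the data layout are assumptions made for the example.

# Illustrative sketch of claims 42, 45 and 46: use elevation information plus
# a clearance to set the pass-over altitude, and disable operation (e.g.
# spraying) while above the target image area.
def plan_pass_over(waypoints_xy, area_elevation_m, cruise_alt_m, clearance_m=5.0):
    climb_alt = max(cruise_alt_m, area_elevation_m + clearance_m)
    path = []
    for x, y, over_area in waypoints_xy:
        alt = climb_alt if over_area else cruise_alt_m
        operation_enabled = not over_area        # claim 42: operation prohibited
        path.append({"x": x, "y": y, "alt": alt, "operation": operation_enabled})
    return path

wps = [(0, 0, False), (10, 0, True), (20, 0, True), (30, 0, False)]
for p in plan_pass_over(wps, area_elevation_m=12.0, cruise_alt_m=3.0):
    print(p)
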
  47. The apparatus of claim 28, wherein after the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area, the apparatus further comprises:
    controlling the movable platform to move based on the moving path and obstacle information detected by the movable platform through a sensor.
  48. The apparatus of claim 47, wherein the movable platform is a drone, and wherein, during control of the movable platform to move based on the moving path and the obstacle information detected by the movable platform through the sensor, a first priority for the drone to pass over the obstacle is higher than a second priority for the drone to pass under the obstacle.
  49. The apparatus of claim 47, further comprising: updating the semantic map based on obstacles detected by the movable platform.
  50. The apparatus of claim 47, wherein the controlling the movable platform to move based on the moving path and the obstacle information detected by the movable platform through the sensor comprises:
    when the confidence of the obstacle detected by the sensor is greater than a preset threshold, controlling the movable platform to move based on the moving path and the obstacle information detected by the sensor, wherein the confidence of the obstacle is related to the number of times the obstacle has been repeatedly detected and the environment information at the time the obstacle is detected.
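
Claims 48 and 50 could be sketched as below: the confidence of a sensed obstacle grows with repeated detections, weighted by the sensing environment, and only a confirmed obstacle triggers a replan, in which a drone prefers passing over to passing under; the weights and threshold are assumptions made for the example.

# Illustrative sketch of claims 48 and 50: confidence accumulation for a
# sensed obstacle and pass-over-first replanning for a drone.
ENV_WEIGHT = {"clear": 1.0, "dusk": 0.6, "fog": 0.3}   # hypothetical weights

class SensedObstacle:
    def __init__(self):
        self.confidence = 0.0

    def add_detection(self, environment):
        # Each repeated detection raises confidence, scaled by how reliable
        # the sensing conditions were when the obstacle was seen.
        self.confidence += ENV_WEIGHT.get(environment, 0.5)

    def confirmed(self, threshold=2.0):
        return self.confidence > threshold

def replan(obstacle, can_pass_over, can_pass_under):
    if not obstacle.confirmed():
        return "keep planned moving path"
    # Claim 48: for a drone, passing over has higher priority than under.
    if can_pass_over:
        return "replan: pass over obstacle"
    if can_pass_under:
        return "replan: pass under obstacle"
    return "replan: side bypass"

obs = SensedObstacle()
for env in ("fog", "clear", "clear"):
    obs.add_detection(env)
print(obs.confidence, replan(obs, can_pass_over=True, can_pass_under=True))
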
  51. The apparatus of claim 28, further comprising: extracting contours of image areas in the semantic map, and generating the moving path based at least on the contour of the target image area.
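
A sketch of claim 51, assuming the semantic map is rasterized and OpenCV (version 4 or later) is available for contour extraction; the centroid-based inflation used to keep the path a safe distance outside the contour is an assumption, not the claimed method.

# Illustrative sketch of claim 51: extract the contour of the target image
# area from a rasterized semantic map and inflate it so that a moving path
# can be generated along or outside the inflated contour.
import numpy as np
import cv2   # OpenCV >= 4 assumed (findContours returns two values)

mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 30:70] = 255                      # target image area in the map

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0].reshape(-1, 2).astype(float)

# Push each contour point away from the centroid by a safe distance (in
# cells), giving the boundary the moving path must stay outside of.
centroid = contour.mean(axis=0)
safe_cells = 5.0
direction = contour - centroid
norm = np.linalg.norm(direction, axis=1, keepdims=True)
inflated = contour + direction / np.maximum(norm, 1e-6) * safe_cells

print(inflated.round(1))
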
  52. The apparatus of claim 28, wherein the moving path satisfies at least one of the following conditions:
    the distance between a path point on the moving path and the target object is greater than a safe distance;
    the resource consumption of the movable platform moving from the initial path point to the target path point of the moving path is optimal, the resource comprising at least one of: path length, energy, or time; and
    the smoothness of the moving path meets the maneuvering requirements of the movable platform.
  53. The apparatus of claim 28, wherein the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area comprises:
    establishing an objective function according to the three-dimensional position information of the target image area and the obstacle avoidance strategy; and
    optimizing the objective function to determine the moving path of the movable platform for avoiding the target object.
  54. The apparatus according to any one of claims 28 to 53, further comprising, after the planning of the moving path of the movable platform for avoiding the target object corresponding to the target image area according to the obstacle avoidance strategy of the movable platform corresponding to the semantic information of the target image area:
    receiving a joystick input generated based on a user operation; and
    not responding to the joystick input if it is determined that the joystick input would cause the movable platform to enter the target image area.
  55. A path planning system for planning a moving path of a movable platform, the system comprising: a control terminal and a movable platform in communication connection with each other, wherein:
    the control terminal and/or the movable platform comprises the path planning apparatus according to any one of claims 28 to 54.
  56. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method according to any one of claims 1 to 27.
CN202080074264.5A 2020-11-09 2020-11-09 Path planning method, path planning device, path planning system, and medium Pending CN114746822A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/127623 WO2022095060A1 (en) 2020-11-09 2020-11-09 Path planning method, path planning apparatus, path planning system, and medium

Publications (1)

Publication Number Publication Date
CN114746822A true CN114746822A (en) 2022-07-12

Family

ID=81458587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080074264.5A Pending CN114746822A (en) 2020-11-09 2020-11-09 Path planning method, path planning device, path planning system, and medium

Country Status (2)

Country Link
CN (1) CN114746822A (en)
WO (1) WO2022095060A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114770559B (en) * 2022-05-27 2022-12-13 中迪机器人(盐城)有限公司 Fetching control system and method of robot
CN115577511B (en) * 2022-09-26 2023-11-17 南京航空航天大学 Short-term track prediction method, device and system based on unmanned aerial vehicle motion state
CN115963857B (en) * 2023-01-04 2023-08-08 广东博幻生态科技有限公司 Pesticide spraying method based on unmanned aerial vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106840141A (en) * 2017-02-02 2017-06-13 王恒升 A kind of semantic map of mobile robot indoor navigation
CN110174888B (en) * 2018-08-09 2022-08-12 深圳瑞科时尚电子有限公司 Self-moving robot control method, device, equipment and storage medium
CN111679661A (en) * 2019-02-25 2020-09-18 北京奇虎科技有限公司 Semantic map construction method based on depth camera and sweeping robot
CN110986945B (en) * 2019-11-14 2023-06-27 上海交通大学 Local navigation method and system based on semantic altitude map

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116952250A (en) * 2023-09-18 2023-10-27 之江实验室 Robot path guiding method and device based on semantic map
CN116952250B (en) * 2023-09-18 2024-01-05 之江实验室 Robot path guiding method and device based on semantic map

Also Published As

Publication number Publication date
WO2022095060A1 (en) 2022-05-12

Similar Documents

Publication Publication Date Title
US11789459B2 (en) Vehicle controllers for agricultural and industrial applications
US11932392B2 (en) Systems and methods for adjusting UAV trajectory
WO2022095060A1 (en) Path planning method, path planning apparatus, path planning system, and medium
US10814976B2 (en) Using unmanned aerial vehicles (UAVs or drones) in forestry machine-connectivity applications
CN110325939B (en) System and method for operating an unmanned aerial vehicle
WO2022095067A1 (en) Path planning method, path planning device, path planning system, and medium thereof
CN108521787B (en) Navigation processing method and device and control equipment
CN111051198A (en) Unmanned aerial vehicle control system, unmanned aerial vehicle control method, and program
WO2021081960A1 (en) Route planning method, device and system, and storage medium
CN113574487A (en) Unmanned aerial vehicle control method and device and unmanned aerial vehicle
WO2022226720A1 (en) Path planning method, path planning device, and medium
EP4024155B1 (en) Method, system and computer program product of control of unmanned aerial vehicles
WO2023082295A1 (en) Unmanned aerial vehicle control method and apparatus, device, system, and storage medium
JP2024031660A (en) Information processing device, method, and program
CN112585555A (en) Flight control method, device and equipment based on passable airspace judgment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination