CN115540896B - Path planning method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN115540896B
CN115540896B (granted publication of application CN202211553459.5A)
Authority
CN
China
Prior art keywords
obstacle
information
target vehicle
road boundary
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211553459.5A
Other languages
Chinese (zh)
Other versions
CN115540896A
Inventor
张�雄
李敏
蒋建辉
蔡仲辉
申苗
刘智睿
艾永军
黄家琪
Current Assignee
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd filed Critical GAC Aion New Energy Automobile Co Ltd
Priority to CN202211553459.5A
Publication of CN115540896A
Application granted
Publication of CN115540896B

Classifications

    • G01C21/3461 — Route searching with special cost functions; preferred or disfavoured areas, e.g. dangerous zones, intersections, manoeuvre types
    • G01S17/931 — Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects
    • G06V20/584 — Recognition of vehicle lights or traffic lights
    • G06V20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose a path planning method and apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: determining global road path information according to a driving start point and a driving end point; detecting the environment around a target vehicle in real time; and performing the following path planning steps: acquiring a point cloud data set; performing ground segmentation on the point cloud data set to obtain road boundary information; performing obstacle detection on the point cloud data set to obtain obstacle information; converting the road boundary information and the obstacle information into world coordinates; determining local obstacle-avoidance road path information; and updating the global road path information and controlling the target vehicle to travel to the driving end point. This embodiment combines the physical limits of the vehicle itself with the constraints of its surroundings, so that the vehicle travels smoothly along the lane to the driving end point, improving driving safety.

Description

Path planning method, path planning device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a path planning method and apparatus, an electronic device, and a computer-readable medium.
Background
A globally planned path is a collision-free path from a driving start point to a driving end point planned in the global environment. The vehicle travels along the globally planned route while the sensors mounted on it perceive the local environment around the vehicle, avoiding detected obstacles until the vehicle reaches the driving end point. A common approach to path planning is the traditional artificial potential field (APF) algorithm: the environment around the vehicle is abstracted into an attractive field function and a repulsive field function, and a collision-free path from the driving start point to the driving end point is planned by following the resultant force field.
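The attractive/repulsive scheme described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation; the gains, influence radius, and step size below are assumed values:

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0, step=0.1):
    """One step of a basic artificial potential field planner.

    pos, goal: (x, y); obstacles: list of (x, y) point obstacles.
    The goal exerts an attractive force; every obstacle closer than
    rho0 exerts a repulsive force. All gains are illustrative.
    """
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 1e-9 < rho < rho0:
            # Repulsion vanishes at rho0 and grows steeply near the obstacle.
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 2
            fx += mag * dx / rho
            fy += mag * dy / rho
    norm = math.hypot(fx, fy) or 1.0  # follow the resultant force field
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# Walk a point toward the goal while an off-line obstacle deflects it.
pos, goal = (0.0, 0.0), (10.0, 0.0)
for _ in range(300):
    pos = apf_step(pos, goal, obstacles=[(5.0, 0.5)])
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.2:
        break
```

Note that when an obstacle lies exactly on the start-goal line, the attractive and repulsive forces cancel along a symmetric axis, which is precisely the zero-potential-field / local-minimum failure the disclosure criticises.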
However, the inventors found that planning a driving path in this way often suffers from the following technical problems:
First, the traditional artificial potential field algorithm suffers from zero-potential-field positions and local minima in complex environments, so the vehicle can become trapped in a locally optimal solution, fail to reach the driving end point, and oscillate near obstacles.
Second, because roads are curved to varying degrees, under-segmentation and over-segmentation readily occur on uneven ground, so ground segmentation is inefficient and inaccurate.
Third, occlusion by obstacles in the road lowers the extraction accuracy of road boundary information, so the extracted road boundary deviates from the actual one, reducing driving safety.
The information disclosed in this Background section is provided only to enhance understanding of the background of the inventive concept, and it may therefore contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a path planning method, apparatus, electronic device and computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a path planning method, including: determining global road path information corresponding to a target map according to an acquired driving start point and driving end point set by a target user; in response to determining that the target vehicle is located at the driving start point, detecting the environment around the target vehicle in real time; and, in response to determining that the target vehicle has detected an obstacle, performing the following path planning steps: controlling a lidar mounted on the target vehicle to acquire a point cloud data set of a target road section; performing ground segmentation on the point cloud data set to obtain road boundary information; performing obstacle detection on the point cloud data set to obtain obstacle information, the obstacle information including static obstacle information and dynamic obstacle information; performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates, as first road boundary information and first obstacle information; determining local obstacle-avoidance road path information according to the first road boundary information and the first obstacle information; and updating the global road path information according to the local obstacle-avoidance road path information, and controlling the target vehicle to travel to the driving end point.
In a second aspect, some embodiments of the present disclosure provide a path planning apparatus, including: a determination unit configured to determine global road path information corresponding to a target map according to an acquired driving start point and driving end point set by a target user; a real-time detection unit configured to detect the environment around a target vehicle in real time in response to determining that the target vehicle is located at the driving start point; and an execution unit configured to, in response to determining that the target vehicle has detected an obstacle, perform the following path planning steps: controlling a lidar mounted on the target vehicle to acquire a point cloud data set of a target road section; performing ground segmentation on the point cloud data set to obtain road boundary information; performing obstacle detection on the point cloud data set to obtain obstacle information, the obstacle information including static obstacle information and dynamic obstacle information; performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates, as first road boundary information and first obstacle information; determining local obstacle-avoidance road path information according to the first road boundary information and the first obstacle information; and updating the global road path information according to the local obstacle-avoidance road path information, and controlling the target vehicle to travel to the driving end point.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements a method as described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the path planning method of some embodiments can combine the physical limits of the vehicle itself with the constraints of its surroundings, so that the vehicle travels smoothly along the lane to the driving end point, improving driving safety. Specifically, the reason a vehicle fails to reach the driving end point and oscillates near obstacles is that the traditional artificial potential field algorithm suffers from zero-potential-field positions and local minima in complex environments, trapping the vehicle in a locally optimal solution. Based on this, the path planning method of some embodiments first determines global road path information corresponding to the target map according to the acquired driving start point and driving end point set by the target user. The resulting global road path information serves as a path reference for the target vehicle and avoids collisions with static obstacles in the target map. Then, in response to determining that the target vehicle is located at the driving start point, the environment around the target vehicle is detected in real time. This mainly detects dynamic obstacles in the surroundings, reducing traffic accidents and further improving vehicle safety. Finally, in response to determining that the target vehicle has detected an obstacle, the following path planning steps are performed. First, a lidar mounted on the target vehicle is controlled to acquire a point cloud data set of the target road section.
The resulting point cloud data set is used for the subsequent extraction of road boundary information and obstacle information. Second, ground segmentation is performed on the point cloud data set to obtain road boundary information. Third, obstacle detection is performed on the point cloud data set to obtain obstacle information, which includes static obstacle information and dynamic obstacle information; the road boundary information and obstacle information are then used to determine the local obstacle-avoidance road path information. Fourth, coordinate conversion is performed on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates, as first road boundary information and first obstacle information. Fifth, local obstacle-avoidance road path information is determined according to the first road boundary information and the first obstacle information. Planning a local obstacle-avoidance path under the target vehicle's physical and environmental constraints effectively avoids the unreachable-goal and local-optimum problems of the traditional artificial potential field algorithm. Sixth, the global road path information is updated according to the local obstacle-avoidance road path information, and the target vehicle is controlled to travel to the driving end point. Because the target vehicle travels to the driving end point according to the local obstacle-avoidance road path information, oscillation near obstacles is reduced and driving safety is improved.
Thus, the path planning method can combine the vehicle's own limits with those of its environment so that the vehicle travels smoothly along the lane to the driving end point, improving driving safety.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow diagram of some embodiments of a path planning method according to the present disclosure;
fig. 2 is a schematic structural diagram of some embodiments of a path planning apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of some embodiments of a path planning method according to the present disclosure is shown. The path planning method comprises the following steps:
step 101, determining global road path information corresponding to the target map according to the obtained driving starting point and driving ending point set by the target user.
In some embodiments, the executing body of the path planning method may determine global road path information corresponding to the target map according to the acquired driving start point and driving end point set by the target user. The target user may be a user who sets a driving start point and a driving end point on the target map. The target map may be a high-precision map showing the driving start point and the driving end point. The global road path information may be a collision-free road path from the driving start point to the driving end point planned by the vehicle's intelligent assistance system. For example, if the driving start point is point 1 and the driving end point is point 2, the corresponding global road path information may be a road path selected by the user between point 1 and point 2 on the high-precision map.
As an example, the executing body may determine the global road path information corresponding to the target map from the acquired driving start point and driving end point set by the target user using a rapidly-exploring random tree (RRT) algorithm.
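The patent only names the algorithm, so the following is a minimal hedged sketch of RRT; the sampling region, step size, goal bias, and node-only collision check are assumptions of this sketch, not details from the disclosure:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, goal_tol=0.5, max_iter=5000, seed=0):
    """Minimal 2D RRT: grow a tree from `start` toward random samples,
    rejecting nodes that fall inside a circular obstacle (ox, oy, r),
    until a node lands within `goal_tol` of `goal`. Returns the path as
    a list of (x, y) points, or None if no path is found."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # Bias 10% of samples toward the goal to speed up convergence.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0.0, 12.0),
                                                  rng.uniform(-6.0, 6.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample) or 1.0
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        # Node-only collision check (a full planner would also check the edge).
        if any(math.dist(new, (ox, oy)) < r for ox, oy, r in obstacles):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# Plan around a circular obstacle of radius 1.5 centred between start and goal.
path = rrt((0.0, 0.0), (10.0, 0.0), obstacles=[(5.0, 0.0, 1.5)])
```

Unlike the artificial potential field, the random tree cannot get trapped in a local minimum, which is presumably why a sampling-based planner is suggested for the global stage.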
And 102, responding to the fact that the target vehicle is located at the driving starting point, and detecting the surrounding environment of the target vehicle in real time.
In some embodiments, the executing body may detect the environment around the target vehicle in real time in response to determining that the target vehicle is located at the driving start point. The target vehicle may be a vehicle that receives the global road path information and travels according to it; for example, the target vehicle may be an unmanned vehicle. The surrounding environment may be the objects within a 5 m radius of the target vehicle. For example, these objects may include at least one of: moving vehicles, pedestrians, road conditions, and stationary objects. A stationary object may include at least one of: vehicles parked at the roadside and road traffic signs.
As an example, the execution subject may detect the surroundings of the target vehicle in real time using various sensors mounted on the target vehicle.
Step 103, in response to determining that the target vehicle detects an obstacle, performing the following path planning steps:
and step 1031, controlling the laser radar loaded by the target vehicle to obtain a point cloud data set of the target road section.
In some embodiments, the executing body may control a lidar mounted on the target vehicle to acquire a point cloud data set of the target road segment. The target road segment may be the road segment for which the point cloud data set is to be acquired.
And 1032, performing ground segmentation on the point cloud data set to obtain road boundary information.
In some embodiments, the executing body may perform ground segmentation on the point cloud data set to obtain road boundary information. The road boundary information may be information on the road edges that separate the vehicle's drivable area from the non-drivable area.
As an example, the executing body may first establish a planar grid of a preset size using the planar grid method; then project the point cloud data set onto the corresponding grid cells; then extract features of the point cloud data in each cell to obtain a ground point cloud data set; and finally perform curve fitting on the ground point cloud data set using the least squares method to obtain the road boundary information.
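The grid projection and least-squares fit above can be sketched as follows. This is an illustrative sketch under stated assumptions: the cell size and height tolerance are invented values, and a straight-line boundary model stands in for whatever curve family the method actually fits:

```python
from collections import defaultdict

def grid_ground_points(points, cell=1.0, height_tol=0.2):
    """Plane-grid sketch: bucket (x, y, z) points into cells of size `cell`;
    in each cell keep points within `height_tol` of the lowest z as ground."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append((x, y, z))
    ground = []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)
        ground.extend(p for p in pts if p[2] - zmin <= height_tol)
    return ground

def fit_boundary_least_squares(points):
    """Fit y = a*x + b to boundary points with the closed-form normal
    equations of ordinary least squares. `points`: list of (x, y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

A curved road would call for a higher-order polynomial, but the normal-equation structure is the same.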
In some optional implementation manners of some embodiments, the performing ground segmentation on the point cloud data set to obtain road boundary information may include the following steps:
in the first step, the executing body may correct the motion deformity of the point cloud data set to generate a point cloud data set at the same time under the coordinate system of the target vehicle as a first point cloud data set.
As an example, the execution subject may first acquire that the target vehicle is present
Figure 180432DEST_PATH_IMAGE001
At the time, the first position coordinate and the travel speed of the target vehicle, and the second position coordinate and the speed of the dynamic obstacle. Then, the target vehicle is obtained
Figure 520408DEST_PATH_IMAGE002
At the moment, the third position coordinate of the target vehicle, the fourth position coordinate of the dynamic obstacle, and the angle of rotation of the lidar
Figure 34566DEST_PATH_IMAGE003
. Finally, according to
Figure 960934DEST_PATH_IMAGE001
At the time of day, the user may, first position coordinates and running speed of the target vehicle, second position coordinates and speed of the dynamic obstacle, at
Figure 181831DEST_PATH_IMAGE002
At the moment, the third position coordinate of the target vehicle, the fourth position coordinate of the dynamic obstacle and the rotating angle of the laser radar are
Figure 801031DEST_PATH_IMAGE003
Will be
Figure 802485DEST_PATH_IMAGE002
The coordinate system of the point cloud data set obtained at the moment is converted into
Figure 657177DEST_PATH_IMAGE001
And generating a point cloud data set at the same moment under the coordinate system of the target vehicle as a first point cloud data set.
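The coordinate conversion at the heart of this correction can be illustrated with a single rigid transform between the vehicle poses at t1 and t2. This is a sketch of the geometric core only; per-point timestamps, the lidar rotation angle, and the dynamic obstacle's own motion are omitted:

```python
import math

def correct_to_t1(point_t2, pose_t1, pose_t2):
    """Re-express a point measured in the vehicle frame at time t2 in the
    vehicle frame at time t1, given both vehicle poses (x, y, yaw) in world
    coordinates. This is the rigid-transform core of motion-distortion
    correction; per-point timestamps and obstacle motion are omitted."""
    x1, y1, th1 = pose_t1
    x2, y2, th2 = pose_t2
    px, py = point_t2
    # Vehicle frame at t2 -> world frame.
    wx = x2 + px * math.cos(th2) - py * math.sin(th2)
    wy = y2 + px * math.sin(th2) + py * math.cos(th2)
    # World frame -> vehicle frame at t1.
    dx, dy = wx - x1, wy - y1
    return (dx * math.cos(th1) + dy * math.sin(th1),
            -dx * math.sin(th1) + dy * math.cos(th1))

# A point at the lidar origin at t2 appears 1 m ahead in the t1 frame
# after the vehicle has moved 1 m forward between the two scans.
corrected = correct_to_t1((0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

In a full implementation each point would use a pose interpolated from its own capture timestamp rather than a single pair of poses.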
And in the second step, the execution subject can perform filtering processing on the first point cloud data set to generate a filtered first point cloud data set within a preset range as a second point cloud data set. The preset range may be a range of 50 meters in front and rear, and 20 meters in left and right, with the target vehicle as a center.
As an example, the executing body may first remove remote point cloud data from the first point cloud data set using a pass-through filtering algorithm, generating the first point cloud data set with remote points removed as a third point cloud data set. Remote point cloud data may be point cloud data located outside the preset range, that is, beyond 50 meters ahead and behind or 20 meters to the left and right of the target vehicle. It may then use a statistical filtering algorithm to remove point cloud data whose density within a unit range is less than or equal to a preset density threshold (for example, 20), generating a density-filtered point cloud data set as a fourth point cloud data set. Finally, it may rasterize the fourth point cloud data set using a point cloud down-sampling algorithm to generate the filtered first point cloud data set within the preset range as the second point cloud data set.
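The three-stage filter might be sketched as below; the thresholds echo the example values in the text, while the count-per-cell density filter is a deliberately simplified stand-in for the statistical filtering algorithm mentioned above:

```python
from collections import defaultdict

def passthrough(points, x_lim=50.0, y_lim=20.0):
    """Keep points within +/-x_lim ahead/behind and +/-y_lim left/right of
    the vehicle (the 50 m / 20 m window from the text)."""
    return [p for p in points if abs(p[0]) <= x_lim and abs(p[1]) <= y_lim]

def density_filter(points, cell=1.0, min_pts=20):
    """Drop every cell whose point count is <= min_pts (threshold 20 as in
    the text); a count-based stand-in for statistical outlier removal."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return [p for pts in cells.values() if len(pts) > min_pts for p in pts]

def voxel_downsample(points, voxel=0.5):
    """Replace the points of each occupied voxel with their centroid."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // voxel), int(y // voxel), int(z // voxel))].append((x, y, z))
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in cells.values()]
```

The stages compose naturally: `voxel_downsample(density_filter(passthrough(cloud)))`.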
And thirdly, the executing body can perform ground segmentation on the second point cloud data set to obtain a ground point cloud data set.
As an example, the executing entity may perform ground segmentation on the second point cloud data set by using a plane grid algorithm to obtain a ground point cloud data set.
And fourthly, the executing body can screen the ground point cloud data set to obtain a road boundary point cloud data set.
As an example, the executing body may screen the ground point cloud data according to the angle between the radial and tangential directions, determining point cloud data whose angle is greater than or equal to 75 degrees as road boundary point cloud data. The angle between the radial and tangential directions may be the angle between the line connecting a point on a scan ring to the target vehicle's origin and the tangent direction of the point cloud along that ring.
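This angle test could be implemented roughly as below. The sketch makes one geometric assumption not stated in the text: the local tangent of the ring is approximated by the chord between a point's two ring neighbours.

```python
import math

def screen_boundary(ring_points, origin=(0.0, 0.0), min_deg=75.0):
    """Keep points of one lidar scan ring whose radial direction
    (origin -> point) makes an angle of at least `min_deg` degrees with
    the local tangent; the tangent is approximated by the chord between
    each point's two ring neighbours."""
    kept = []
    for i in range(1, len(ring_points) - 1):
        px, py = ring_points[i]
        rx, ry = px - origin[0], py - origin[1]
        tx = ring_points[i + 1][0] - ring_points[i - 1][0]
        ty = ring_points[i + 1][1] - ring_points[i - 1][1]
        denom = math.hypot(rx, ry) * math.hypot(tx, ty)
        if denom == 0.0:
            continue
        # Fold the angle into [0, 90] degrees: orientation along the ring
        # does not matter, only how radial the local tangent is.
        cos_a = abs(rx * tx + ry * ty) / denom
        angle = math.degrees(math.acos(min(1.0, cos_a)))
        if angle >= min_deg:
            kept.append(ring_points[i])
    return kept
```

On an undisturbed circular ring the tangent is perpendicular to the radial direction (90 degrees), so the 75-degree threshold tolerates moderate ring distortion before rejecting points.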
And fifthly, the executing body may generate road boundary information according to the road boundary point cloud data set in response to determining that the road boundary point cloud data set is continuous.
As an example, the executing subject may fit the road boundary point cloud data set by using a least square method to generate road boundary information.
In some optional implementation manners of some embodiments, the above ground segmentation on the second point cloud data set to obtain a ground point cloud data set may include the following steps:
in the first step, the executing subject may perform region of interest division on the second point cloud data set to obtain a region of interest point cloud data set.
And secondly, the executing main body can perform voxelization on the point cloud data set of the region of interest to generate the point cloud data set of the region of interest after the interference points are filtered out, so as to obtain a first point cloud data set of the region of interest.
And thirdly, the execution subject can perform primary ground segmentation on the first point cloud data set of the region of interest to obtain a first ground point cloud data set.
And fourthly, the executing main body can perform quadrant division on the first ground point cloud data set to generate a first ground point cloud data subset for dividing four quadrants so as to obtain four second ground point cloud data sets.
And fifthly, the executing body may perform regional division on each of the four second ground point cloud data sets to generate a second ground point cloud data subset of multiple regions, so as to obtain multiple third ground point cloud data sets.
And sixthly, performing secondary ground segmentation on each third ground point cloud data set in the plurality of third ground point cloud data sets by the executing body to obtain a ground point cloud data set.
The technical scheme and the related content serve as an invention point of the embodiment of the disclosure, and the technical problem that the ground segmentation efficiency and accuracy are low due to the fact that insufficient segmentation and excessive segmentation are easy to occur on the non-flat ground in the prior art is solved. ". Factors that lead to less efficient and accurate segmentation of non-flat ground tend to be as follows: under-segmentation and over-segmentation are easy to occur on non-flat ground, so that the ground segmentation efficiency and accuracy are low. If the above factors are solved, the segmentation efficiency and accuracy on the non-flat ground can be improved. To achieve this effect, the above ground segmentation of the second point cloud data set to obtain a ground point cloud data set may include the following steps: firstly, dividing the region of interest of the second point cloud data set to obtain a region of interest point cloud data set. Here, the calculation amount of the point cloud data processing can be reduced and the real-time performance can be improved by dividing the region of interest. And then, performing voxelization on the point cloud data set of the region of interest to generate the point cloud data set of the region of interest after the interference points are filtered out, so as to obtain a first point cloud data set of the region of interest. Here, through the voxelization of the point cloud information in the region of interest, voxels can be used to replace the point cloud for calculation, which not only facilitates the subsequent retrieval of the point cloud data set of the first region of interest, but also reduces the calculation amount. And then, performing primary ground segmentation on the first region-of-interest point cloud data set to obtain a first ground point cloud data set. Most of abnormal values used for plane estimation are removed, and ground segmentation is facilitated subsequently. 
Then, quadrant division is performed on the first ground point cloud data set to generate first ground point cloud data subsets for the four quadrants, obtaining four second ground point cloud data sets. Next, area division is performed on each of the four second ground point cloud data sets to generate second ground point cloud data subsets for multiple areas, obtaining multiple third ground point cloud data sets. Finally, secondary ground segmentation is performed on each of the multiple third ground point cloud data sets to obtain the ground point cloud data set. Here, by dividing the point cloud into multiple regions, the estimated planes can represent complex terrain more accurately, accounting for possible sloping ground. Thus, segmentation of the ground point cloud on non-flat ground is completed, and the segmentation efficiency and accuracy on non-flat ground are improved.
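As a hedged illustration (not the patent's exact procedure), the region-wise segmentation described above can be sketched with a least-squares plane fit per quadrant; the function names, the quadrant-only division (the patent further subdivides each quadrant into areas), and the 0.15 m ground threshold are illustrative assumptions:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c for an (N, 3) array."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef  # (a, b, c)

def segment_ground(points, threshold=0.15):
    """Split points into quadrants, fit a plane per quadrant, and keep
    points within `threshold` metres of their quadrant's plane."""
    ground = []
    for sx in (-1, 1):
        for sy in (-1, 1):
            quad = points[(np.sign(points[:, 0]) == sx)
                          & (np.sign(points[:, 1]) == sy)]
            if len(quad) < 3:
                continue
            a, b, c = fit_plane(quad)
            dist = np.abs(quad[:, 0] * a + quad[:, 1] * b + c - quad[:, 2])
            ground.append(quad[dist < threshold])
    return np.vstack(ground) if ground else np.empty((0, 3))
```

Fitting one plane per region lets a sloping or uneven road still be approximated piecewise, which is the motivation for the quadrant and area division in the text.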
In some optional implementations of some embodiments, after the generating the road boundary information according to the road boundary point cloud data set in response to determining that the road boundary point cloud data set is continuous, the method may further include:
in a first step, the executing agent may determine the target sub-segment corresponding to the discontinuous road boundary point cloud data set in response to determining that the road boundary point cloud data set is discontinuous. Here, the discontinuity may be a gap in the road boundary point cloud data caused by occlusion from an obstacle. The target sub-segment may be the road segment corresponding to the discontinuous portion of the point cloud data set.
In the second step, the executing agent may, in response to determining that both sides of the target sub-segment include road boundary point cloud data sets, determine the midpoint position coordinate set of the road boundary point cloud data set corresponding to the sub-segment on the lower side of the target sub-segment as the first midpoint position coordinate set.
And thirdly, the executive body can determine the midpoint position coordinate set of the road boundary point cloud data set corresponding to the sub-segment on the upper side of the target sub-segment as the second midpoint position coordinate set.
And fourthly, the executing main body can perform curve fitting on the first midpoint position coordinate set to obtain a first fitted curve corresponding to the first midpoint position coordinate set.
For example, the execution subject may perform curve fitting on the first midpoint position coordinate set by using a least square method to obtain a first fitted curve corresponding to the first midpoint position coordinate set.
And fifthly, the executing body may perform curve fitting on the second midpoint position coordinate set to obtain a second fitted curve corresponding to the second midpoint position coordinate set.
As an example, the execution subject may perform curve fitting on the second midpoint position coordinate set by using a least square method to obtain a second fitted curve corresponding to the second midpoint position coordinate set.
In the sixth step, the execution subject may determine the midpoint position coordinate set of the target sub-segment and the midpoint position coordinate set of the lower-side sub-segment as a third midpoint position coordinate set and a fourth midpoint position coordinate set, respectively, according to the curvature of the first fitted curve.
As an example, the execution agent may calculate the midpoint position coordinate set of the target sub-segment as the third midpoint position coordinate set using the curvature of the first fitted curve, the first midpoint position of the upper-side sub-segment, and the last midpoint position of the lower-side sub-segment. The executing agent may calculate the midpoint position coordinate set of the lower-side sub-segment as the fourth midpoint position coordinate set using the curvature of the first fitted curve and the last midpoint position of the lower-side sub-segment.
Seventhly, the executing body may perform curve fitting on the fourth midpoint position coordinate set to obtain a fourth fitted curve.
And step eight, the executing body can fill the road boundary point cloud set corresponding to the third midpoint position coordinate set in response to the fact that the curvature of the first fitting curve is consistent with the curvature of the fourth fitting curve, so as to obtain a continuous road boundary point cloud data set.
And ninthly, the executing body can generate road boundary information according to the continuous road boundary point cloud data set.
As an example, the executing subject may fit the road boundary point cloud data set by using a least square method to obtain the road boundary information.
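The least-squares fitting and curvature-consistency check in the steps above can be sketched as follows. This is a simplified illustration rather than the patent's exact method: it assumes a quadratic fit, samples the curvature only at the gap centre, and uses a hypothetical tolerance in place of the patent's "curvature is consistent" criterion.

```python
import numpy as np

def fit_boundary(xs, ys, degree=2):
    """Least-squares polynomial fit of boundary midpoints."""
    return np.polynomial.polynomial.Polynomial.fit(xs, ys, degree)

def curvature(poly, x):
    """Curvature kappa = |y''| / (1 + y'^2)^1.5 of the fitted curve at x."""
    d1 = poly.deriv(1)(x)
    d2 = poly.deriv(2)(x)
    return abs(d2) / (1.0 + d1 * d1) ** 1.5

def fill_gap(lower_xs, lower_ys, gap_xs, degree=2, tol=0.05):
    """Fit the lower sub-segment, synthesise midpoints inside the occluded
    gap, then accept the fill only if the curvature at the gap centre stays
    consistent between the original and refitted curves (a proxy for the
    document's first-curve vs fourth-curve comparison)."""
    poly = fit_boundary(lower_xs, lower_ys, degree)
    gap_ys = poly(np.asarray(gap_xs))
    refit = fit_boundary(np.r_[lower_xs, gap_xs], np.r_[lower_ys, gap_ys], degree)
    xc = float(np.mean(gap_xs))
    if abs(curvature(poly, xc) - curvature(refit, xc)) < tol:
        return list(zip(gap_xs, gap_ys))
    return None  # fill rejected: curvatures inconsistent
```

A rejected fill (returning `None`) would correspond to the case where the fitted curves disagree and the filled boundary cannot be trusted.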
The above technical scheme and related content serve as an invention point of the embodiments of the present disclosure, and solve the technical problem of road boundary information extraction mentioned in the background art: due to the occlusion of obstacles in the road, the accuracy of road boundary information extraction is low, so that the extracted road boundary information deviates from the actual road boundary, which in turn reduces the driving safety of the vehicle. If this factor is addressed, the driving safety of the vehicle can be improved. To achieve this effect, the generating of road boundary information in response to determining that the road boundary point cloud data set is discontinuous may include the following steps: First, the target sub-segment corresponding to the discontinuous road boundary point cloud data set is determined. Here, locating the discontinuity in the road boundary point cloud is beneficial for filling in the missing road boundary subsequently. Second, in response to determining that both sides of the target sub-segment include road boundary point cloud data sets, the midpoint position coordinate set of the road boundary point cloud data set corresponding to the sub-segment on the lower side of the target sub-segment is determined as the first midpoint position coordinate set. Third, the midpoint position coordinate set of the road boundary point cloud data set corresponding to the sub-segment on the upper side of the target sub-segment is determined as the second midpoint position coordinate set.
Then, curve fitting is performed on the first midpoint position coordinate set to obtain a first fitted curve, and on the second midpoint position coordinate set to obtain a second fitted curve. The two fitted curves are used to fill in the discontinuous road boundary point cloud data set subsequently. Next, according to the curvature of the first fitted curve, the midpoint position coordinate set of the target sub-segment and the midpoint position coordinate set of the lower-side sub-segment are determined as a third midpoint position coordinate set and a fourth midpoint position coordinate set, respectively. Then, curve fitting is performed on the fourth midpoint position coordinate set to obtain a fourth fitted curve. Finally, in response to determining that the curvature of the first fitted curve is consistent with the curvature of the fourth fitted curve, the road boundary point cloud set corresponding to the third midpoint position coordinate set is filled in, obtaining a continuous road boundary point cloud data set. Here, comparing the curvature of the first fitted curve with that of the fourth fitted curve determines whether the fitted curve is accurate, which improves the accuracy of road boundary information extraction. Finally, road boundary information is generated from the continuous road boundary point cloud data set. Thus, road boundary information is extracted by filling in the discontinuous road boundary point cloud data set, which improves the extraction accuracy of the road boundary information and further improves the driving safety of the vehicle.
Step 1033, performing obstacle detection on the point cloud data set to obtain obstacle information.
In some embodiments, the executing subject may perform obstacle detection on the point cloud data set to obtain obstacle information. Wherein the obstacle information includes: static obstacle information and dynamic obstacle information.
In some optional implementation manners of some embodiments, the above-mentioned performing obstacle detection on the point cloud data set to obtain obstacle information may include the following steps:
in the first step, the executing body may perform preprocessing on the point cloud data set to obtain a sparse tensor of the point cloud data set. The preprocessing may be to perform voxelization processing on the point cloud data set.
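A minimal sketch of the voxelization preprocessing follows: each point is mapped to an integer voxel coordinate and one centroid is kept per occupied voxel, as a stand-in for constructing a sparse tensor. The function name and voxel size are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Return (indices, centroids): the unique (M, 3) integer voxel
    coordinates and the (M, 3) mean of the points in each voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse, minlength=len(uniq)).astype(float)
    centroids = np.zeros((len(uniq), 3))
    for d in range(3):  # per-axis weighted sums give the voxel centroids
        centroids[:, d] = np.bincount(
            inverse, weights=points[:, d], minlength=len(uniq)) / counts
    return uniq, centroids
```

The (coordinate, feature) pairs returned here correspond to the sparse representation fed to the detection model: only occupied voxels are stored.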
And secondly, the executive body can input the sparse tensor into an obstacle detection model to obtain obstacle information. The obstacle detection model may be a model for detecting obstacles. For example, the obstacle detection model may be a YOLO (You Only Look Once) v5 model, where v5 denotes the fifth version of the YOLO series. The obstacle detection model is obtained through the following steps:
in substep 1, the executing entity may obtain a sample set, wherein the sample set includes: an obstacle image set, a sample label set corresponding to the obstacle image set. The sample labelsets may be labelsets corresponding to the obstacle image sets.
And substep 2, the executing agent may execute the following training steps for each sample in the sample set:
the first substep is to input the obstacle image of the sample to an initial obstacle detection model to obtain an obstacle recognition result corresponding to the sample.
And a second substep of comparing the obstacle recognition result with a sample label corresponding to the corresponding obstacle image.
And a third substep of determining whether the initial obstacle detection model reaches a preset optimization target according to the comparison result. Here, the optimization target may be the recognition accuracy of the initial obstacle detection model. For example, the preset optimization target of the initial obstacle detection model may be 99%. The comparison result may include: information representing that the obstacle recognition result is the same as the sample label corresponding to the corresponding obstacle image, and information representing that the obstacle recognition result is different from the sample label corresponding to the corresponding obstacle image.
As an example, when the difference between the obstacle recognition result corresponding to a sample and the sample label corresponding to the corresponding obstacle image is less than or equal to a preset error threshold, the obstacle recognition result corresponding to the sample is considered accurate. The preset error threshold may be, for example, 0.05.
And a fourth substep of taking the initial obstacle detection model as a trained obstacle detection model in response to determining that the initial obstacle detection model achieves the optimization objective.
And substep 3, in response to determining that the initial obstacle detection model does not reach the optimization target, adjusting relevant parameters of the initial obstacle detection model, reselecting a sample from the sample set, taking the adjusted initial obstacle detection model as the initial obstacle detection model, and executing the training step again.
As an example, the network parameters of the initial neural network may be adjusted by using a Back propagation Algorithm (BP Algorithm) and a gradient descent method (e.g., a small batch gradient descent Algorithm).
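The training loop described above — predict, compare with the labels, stop once the preset accuracy target is reached, otherwise adjust parameters by gradient descent and train again — can be illustrated with a toy logistic-regression stand-in. This is emphatically not YOLOv5; the function name, learning rate, and epoch limit are all assumptions for illustration.

```python
import numpy as np

def train_detector(samples, labels, target_acc=0.99, lr=0.5, max_epochs=500):
    """Toy stand-in for the document's training loop: predict, compare
    with labels, stop at the optimization target, else adjust parameters
    (plain batch gradient descent on the cross-entropy loss)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        probs = 1.0 / (1.0 + np.exp(-(samples @ w + b)))
        acc = np.mean((probs > 0.5) == labels)
        if acc >= target_acc:              # preset optimization target reached
            return w, b, acc
        grad = probs - labels              # dL/dlogits for cross-entropy
        w -= lr * samples.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b, acc
```

The early return corresponds to the fourth substep (taking the model as trained once the target is met); the parameter update corresponds to substep 3's adjustment and re-execution of the training step.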
Step 1034, performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information under world coordinates as first road boundary information and first obstacle information.
In some embodiments, the execution body may coordinate-convert the road boundary information and the obstacle information to generate the road boundary information and the obstacle information in world coordinates as the first road boundary information and the first obstacle information.
As an example, the execution body may coordinate-convert the road boundary information and the obstacle information using interpolation to generate the road boundary information and the obstacle information in world coordinates as the first road boundary information and the first obstacle information.
In some optional implementations of some embodiments, a coordinate system corresponding to the road boundary information and the obstacle information is a coordinate system established based on the target vehicle; and the coordinate conversion of the road boundary information and the obstacle information to generate the road boundary information and the obstacle information in world coordinates may include:
in a first step, the executing entity may determine a common coordinate set in the target vehicle coordinate system and the world coordinate system. Wherein the common coordinate set may be position coordinates of at least one object existing in both the target vehicle coordinate system and the world coordinate system.
In a second step, the executing entity may determine a first position coordinate set of the common coordinate set in the target vehicle coordinate system.
As an example, the execution subject may acquire a first position coordinate set of the common coordinate set in the target vehicle coordinate system through a sensor carried by the target vehicle.
Third, the executing entity may determine a second position coordinate set of the common coordinate set in the world coordinate system.
As an example, the executing entity may acquire the second position coordinate set of the common coordinate set in the world coordinate system through a global positioning system.
And fourthly, the executing body can determine the conversion relation between the target vehicle coordinate system and the world coordinate system according to the first position coordinate set and the second position coordinate set.
As an example, the execution agent may first construct a transformation relation containing unknown parameters. And then, substituting the first position coordinate set and the second position coordinate set into a conversion relation to obtain unknown parameters, namely obtaining the conversion relation between the target vehicle coordinate system and the world coordinate system.
And fifthly, the executing body may perform coordinate transformation on the road boundary information and the obstacle information according to the transformation relationship to generate road boundary information and obstacle information in world coordinates.
As an example, the executing entity may substitute the position coordinate set of the point cloud data set corresponding to the road boundary information and the position coordinate set of the point cloud data set corresponding to the obstacle information into the conversion relationship, respectively, generate a position coordinate set of the point cloud data set corresponding to the road boundary information and a position coordinate set of the point cloud data set corresponding to the obstacle information in the world coordinate, and obtain the road boundary information and the obstacle information in the world coordinate.
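The conversion relation with unknown parameters, solved from common points observed in both coordinate systems, can be sketched as a 2-D least-squares rigid transform (a Kabsch-style solution). The function names are illustrative, and the patent does not specify this particular solver:

```python
import numpy as np

def fit_transform(vehicle_pts, world_pts):
    """Estimate R, t with world = R @ vehicle + t from matched common
    points, by SVD of the centred cross-covariance (least squares)."""
    a = np.asarray(vehicle_pts, float)
    b = np.asarray(world_pts, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def to_world(points, R, t):
    """Apply the fitted conversion relation to boundary/obstacle points."""
    return np.asarray(points, float) @ R.T + t
```

Substituting the first and second position coordinate sets into `fit_transform` recovers the unknown parameters (R, t); `to_world` then plays the role of substituting the boundary and obstacle point sets into the conversion relation.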
Step 1035, determining local obstacle avoidance road path information according to the first road boundary information and the first obstacle information.
In some embodiments, the execution subject may determine the local obstacle avoidance road path information according to the first road boundary information and the first obstacle information. The local obstacle avoidance road path information may be a piece of path information obtained by detecting an environment around the target vehicle.
As an example, the execution subject may determine the local obstacle avoidance road path information according to the first road boundary information and the first obstacle information by using an a-star algorithm.
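A minimal grid-based A* sketch matching this example follows; the occupancy-grid representation, 4-connected moves, and Manhattan heuristic are illustrative choices, not specified by the patent:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """A* over a 2-D occupancy grid (0 = free, 1 = obstacle); returns the
    list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = count()                 # tiebreaker keeps heap entries comparable
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue              # already expanded with a better cost
        came_from[node] = parent
        if node == goal:          # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(
                        open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None
```

In this sketch the first road boundary information and first obstacle information would be rasterised into the occupancy grid before planning.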
In some optional implementation manners of some embodiments, the determining local obstacle avoidance road path information according to the first road boundary information and the first obstacle information may include the following steps:
in a first step, the execution subject may acquire vehicle information of the target vehicle.
As an example, the execution subject described above may acquire the vehicle information of the target vehicle through a sensor mounted on the target vehicle. The vehicle information may be real-time position information and course angle information of the target vehicle.
And secondly, the execution main body can construct a road boundary constraint model according to the first road boundary information.
As an example, the execution subject may construct a road boundary constraint model according to the first road boundary information. The left road boundary constraint model may be expressed as:

F_bl = η_b / (‖X_c − X_l‖ − W/2)²

where η_b denotes the road boundary repulsion gain factor, X_c denotes the center of the target vehicle, X_l denotes the coordinate vector of the corresponding point on the left boundary, and W denotes the target vehicle width.
The right-side road boundary constraint model may be expressed as:

F_br = η_b / (‖X_c − X_r‖ − W/2)²

where η_b denotes the road boundary repulsion gain factor, X_c denotes the center of the target vehicle, X_r denotes the coordinate vector of the corresponding point on the right boundary of the road, and W denotes the target vehicle width.
And thirdly, constructing an obstacle constraint model according to the first obstacle information. The obstacle constraint model may be:

F_rep = η_o (1/ρ − 1/ρ₀)(1/ρ²) e_ov, if ρ ≤ ρ₀;  F_rep = 0, if ρ > ρ₀

where η_o denotes the repulsive field gain, ρ denotes the distance between the target vehicle and the first obstacle, ρ₀ denotes the boundary distance at which the first obstacle generates a repulsive force against the target vehicle, e_ov denotes the unit vector from the first obstacle to the target vehicle, and F_rep denotes the repulsive force field applied to the target vehicle.
And fourthly, the executing body can fuse the road boundary constraint model, the obstacle constraint model and the path planning model to obtain a local path planning model. The path planning model may be expressed as:

F_att = η_a · ρ_g · e_g

where η_a denotes the gravitational field gain, e_g denotes the unit vector between the target vehicle and the travel end point, and ρ_g is the distance between the target vehicle and the travel end point, given by:

ρ_g = √((x − x_g)² + (y − y_g)²)

where (x, y) denotes the position coordinates of the target vehicle and (x_g, y_g) denotes the position coordinates of the travel end point.
The above local path planning model may be expressed as:

F = F_att + F_bl + F_br + Σᵢ₌₁ⁿ F_rep,i

where F_att denotes the path planning model, F_bl denotes the left road boundary constraint model, F_br denotes the right-side road boundary constraint model, Σᵢ₌₁ⁿ F_rep,i denotes the collection of the multiple obstacle constraint models, and n denotes the number of obstacles in the surroundings of the target vehicle.
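As a hedged illustration, the attractive, boundary, and obstacle terms described above can be sketched in the standard artificial potential field form. The gain values, influence distance, and function names below are illustrative assumptions, not the patent's actual parameters:

```python
import numpy as np

def attractive_force(pos, goal, eta_a=1.0):
    """Attraction toward the travel end point: eta_a * rho_g * e_g."""
    d = np.asarray(goal, float) - np.asarray(pos, float)
    return eta_a * d   # eta_a * |d| * unit(d) simplifies to eta_a * d

def obstacle_force(pos, obstacle, eta_o=1.0, rho0=5.0):
    """Repulsion active only inside the influence distance rho0."""
    d = np.asarray(pos, float) - np.asarray(obstacle, float)
    rho = np.linalg.norm(d)
    if rho >= rho0 or rho == 0.0:
        return np.zeros(2)
    return eta_o * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)

def boundary_force(pos, boundary_pt, width, eta_b=1.0):
    """Repulsion from a road-boundary point, reduced by half the vehicle
    width so the constraint acts on the vehicle body, not its centre."""
    d = np.asarray(pos, float) - np.asarray(boundary_pt, float)
    gap = np.linalg.norm(d) - width / 2.0
    if gap <= 0.0:
        gap = 1e-6                 # vehicle body inside the boundary margin
    return eta_b / gap**2 * (d / np.linalg.norm(d))

def resultant_force(pos, goal, obstacles, left_pt, right_pt, width):
    """Local path planning model: attraction + boundary + obstacle terms."""
    f = attractive_force(pos, goal)
    f += boundary_force(pos, left_pt, width)
    f += boundary_force(pos, right_pt, width)
    for ob in obstacles:
        f += obstacle_force(pos, ob)
    return f
```

Stepping the vehicle a small distance along the resultant force direction at each iteration would trace out the local obstacle avoidance path used in the fifth step.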
And fifthly, the execution main body can determine local obstacle avoidance road path information according to the local path planning model.
As an example, the executing entity may plan a piece of local obstacle avoidance road path information through a local path planning model.
Step 1036, updating the global road path information according to the local obstacle avoidance road path information, and controlling the target vehicle to travel to the travel termination point.
In some embodiments, the execution subject may perform path update on the global road path information according to the local obstacle avoidance road path information, and control the target vehicle to travel to the travel termination point.
As an example, the execution body may update path information corresponding to the local obstacle avoidance path information in the original global road path information, update the path information into the local obstacle avoidance path information, and control the target vehicle to travel to the travel termination point, through the received local obstacle avoidance road path information.
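The path update — replacing the span of the global path covered by the local obstacle avoidance path — can be sketched as follows, using nearest-point matching as an illustrative splicing rule (the patent does not specify how the corresponding span is identified):

```python
def update_global_path(global_path, local_path):
    """Splice the local obstacle avoidance path into the global path,
    replacing the global points between the nearest matches to the local
    path's first and last poses."""
    def nearest(pt):
        return min(range(len(global_path)),
                   key=lambda i: (global_path[i][0] - pt[0]) ** 2
                               + (global_path[i][1] - pt[1]) ** 2)
    i, j = nearest(local_path[0]), nearest(local_path[-1])
    if i > j:
        i, j = j, i
    return global_path[:i] + list(local_path) + global_path[j + 1:]
```

The vehicle would then follow the spliced path to the travel termination point, reverting to the untouched remainder of the global path past the avoidance region.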
Optionally, the method may further include the steps of:
and controlling the target vehicle to travel to the travel termination point according to the global road path information in response to determining that the target vehicle does not detect the obstacle.
As an example, the execution body may control the target vehicle to travel to the travel termination point by receiving global road path information.
The above embodiments of the present disclosure have the following beneficial effects: the path planning method of some embodiments of the present disclosure can combine the physical limitations of the vehicle itself with the limitations of the vehicle's surrounding environment, so that the vehicle travels smoothly along the lane to the travel end point, thereby improving driving safety. Specifically, the reasons why a vehicle may fail to reach the travel end point and may oscillate near obstacles are: the traditional artificial potential field algorithm suffers from zero-potential-field points and local minima in complex environments, so that the vehicle becomes trapped in a locally optimal solution. Based on this, the route planning method of some embodiments of the present disclosure may first determine the global road path information corresponding to the target map according to the acquired travel start point and travel end point set by the target user. Here, the obtained global road path information provides a path reference for the target vehicle and avoids collisions with static obstacles in the target map. Then, in response to determining that the target vehicle is located at the travel start point, the environment around the target vehicle is detected in real time. This mainly detects dynamic obstacles in the surrounding environment of the target vehicle, reducing traffic accidents caused by unexpected events and further improving vehicle safety. Finally, in response to determining that the target vehicle detects an obstacle, the following path planning steps are performed: In the first step, the laser radar mounted on the target vehicle is controlled to obtain a point cloud data set of the target road section. Here, the obtained point cloud data set is used for the subsequent extraction of road boundary information and obstacle information.
In the second step, ground segmentation is performed on the point cloud data set to obtain road boundary information. In the third step, obstacle detection is performed on the point cloud data set to obtain obstacle information, where the obstacle information includes static obstacle information and dynamic obstacle information. The obtained road boundary information and obstacle information are used to determine the local obstacle avoidance road path information subsequently. In the fourth step, coordinate conversion is performed on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates, which serve as the first road boundary information and the first obstacle information. In the fifth step, the local obstacle avoidance road path information is determined according to the first road boundary information and the first obstacle information. In this way, a piece of local obstacle avoidance road path information is planned according to the physical limitations of the target vehicle and the environmental limitations, effectively solving the unreachable-goal and local-optimum problems of the traditional artificial potential field algorithm. In the sixth step, the global road path information is updated according to the local obstacle avoidance road path information, and the target vehicle is controlled to travel to the travel end point. The target vehicle travels to the travel end point according to the obtained local obstacle avoidance road path information, which solves the problem of the target vehicle oscillating near obstacles and improves its driving safety. Therefore, the path planning method can combine the limitations of the vehicle and of the environment so that the vehicle travels smoothly along the lane to the travel end point, thereby improving driving safety.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a path planning apparatus, which correspond to those shown in fig. 1, and which may be applied in various electronic devices.
As shown in fig. 2, a path planning apparatus 200 includes: a determination unit 201, a real-time detection unit 202 and an execution unit 203. Wherein the determining unit 201 is configured to: and determining global road path information corresponding to the target map according to the acquired driving starting point and driving ending point set by the target user. The real-time detection unit 202 is configured to: and responding to the determined target vehicle to be positioned at the driving starting point, and detecting the surrounding environment of the target vehicle in real time. The execution unit 203 is configured to: in response to determining that the target vehicle detects an obstacle, performing the following path planning steps: controlling a laser radar loaded on the target vehicle to acquire a point cloud data set of a target road section; performing ground segmentation on the point cloud data set to obtain road boundary information; and detecting the obstacles in the point cloud data set to obtain obstacle information, wherein the obstacle information comprises: static obstacle information and dynamic obstacle information; performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information under world coordinates, wherein the road boundary information and the obstacle information are used as first road boundary information and first obstacle information; determining local obstacle avoidance road path information according to the first road boundary information and the first obstacle information; and updating the global road path information according to the local obstacle avoidance road path information, and controlling the target vehicle to travel to the travel termination point.
It will be appreciated that the units described in the path planning apparatus 200 correspond to the respective steps in the method described with reference to fig. 1. Thus, the operations, features and resulting advantages described above for the method are also applicable to the path planning apparatus 200 and the units included therein, and are not described herein again.
Referring now to fig. 3, a block diagram of an electronic device (e.g., electronic device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate with other devices, wireless or wired, to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine global road path information corresponding to a target map according to the acquired driving start point and driving end point set by a target user; in response to determining that the target vehicle is located at the driving start point, detect the environment around the target vehicle in real time; and in response to determining that the target vehicle detects an obstacle, perform the following path planning steps: controlling a laser radar loaded on the target vehicle to acquire a point cloud data set of a target road section; performing ground segmentation on the point cloud data set to obtain road boundary information; performing obstacle detection on the point cloud data set to obtain obstacle information, wherein the obstacle information comprises static obstacle information and dynamic obstacle information; performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates, as first road boundary information and first obstacle information; determining local obstacle avoidance road path information according to the first road boundary information and the first obstacle information; and updating the global road path information according to the local obstacle avoidance road path information, and controlling the target vehicle to travel to the travel termination point.
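The sequence of operations carried by these programs (global planning, real-time detection, local replanning wherever an obstacle appears, path update) can be sketched as a control-flow skeleton. The function name and the injected callables below are illustrative stand-ins, not the disclosed implementation:

```python
def plan_and_drive(start, goal, global_plan, detect, local_plan):
    """Control-flow sketch of the claimed pipeline: compute a global path,
    then replace any waypoint at which an obstacle is detected with a local
    obstacle-avoidance sub-path. All callables are injected stand-ins."""
    path = global_plan(start, goal)      # global road path information
    updated = []
    for waypoint in path:
        obstacle = detect(waypoint)      # real-time environment detection
        if obstacle is None:
            updated.append(waypoint)     # keep the global segment
        else:
            # splice in the local obstacle-avoidance road path
            updated.extend(local_plan(waypoint, obstacle))
    return updated                       # updated path driven to the goal
```

With a three-waypoint global path and an obstacle on the middle waypoint, the middle segment is replaced by the local detour while the start and goal segments are kept.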
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a determination unit, a real-time detection unit, and an execution unit. Where the names of these units do not constitute a limitation on the unit itself in some cases, for example, the determination unit may also be described as a "unit that determines the global road path information corresponding to the target map from the acquired travel start point and travel end point set by the target user".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and also covers other technical solutions formed by arbitrarily combining the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A path planning method, comprising:
determining global road path information corresponding to the target map according to the obtained driving starting point and driving ending point set by the target user;
detecting the surrounding environment of the target vehicle in real time in response to determining that the target vehicle is located at the driving starting point;
in response to determining that the target vehicle detects an obstacle, performing the following path planning steps:
controlling a laser radar loaded on the target vehicle to obtain a point cloud data set of a target road section;
performing ground segmentation on the point cloud data set to obtain road boundary information;
and carrying out obstacle detection on the point cloud data set to obtain obstacle information, wherein the obstacle information comprises: static obstacle information and dynamic obstacle information;
performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information under world coordinates, wherein the road boundary information and the obstacle information are used as first road boundary information and first obstacle information;
acquiring vehicle information of the target vehicle;
constructing a road boundary constraint model according to the first road boundary information, wherein the left road boundary constraint model is expressed as:
Figure QLYQS_1
where the terms of the formula denote the road boundary repulsive force gain factor, the center of the target vehicle, the coordinate vector of the corresponding point on the left road boundary, and the target vehicle width;
the right-side road boundary constraint model is expressed as:
Figure QLYQS_6
where the terms of the formula denote the road boundary repulsive force gain factor, the center of the target vehicle, the coordinate vector of the corresponding point on the right road boundary, and the target vehicle width;
constructing an obstacle constraint model according to the first obstacle information, wherein the obstacle constraint model is expressed as:
Figure QLYQS_11
where the terms of the formula denote the repulsive force field gain, the distance between the target vehicle and the first obstacle, the boundary distance within which the first obstacle exerts a repulsive force on the target vehicle, the unit vector from the first obstacle to the target vehicle, and the number of repulsive potential fields acting on the target vehicle;
fusing the road boundary constraint model, the obstacle constraint model and a path planning model to obtain a local path planning model, wherein the path planning model is expressed as:
Figure QLYQS_17
where the terms of the formula denote the gravitational field gain, the unit vector between the target vehicle and the travel end point, and the distance between the target vehicle and the travel end point, the distance being computed (Figure QLYQS_21) from the position coordinates of the target vehicle and the position coordinates of the travel end point;
the local path planning model is expressed as:
Figure QLYQS_24
where the terms of the formula denote the path planning model, the left road boundary constraint model, the right road boundary constraint model, the set of obstacle constraint models, and the number of obstacles in the surroundings of the target vehicle;
determining local obstacle avoidance road path information according to the local path planning model;
and according to the local obstacle avoidance road path information, performing path updating on the global road path information, and controlling the target vehicle to travel to the travel termination point.
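The constraint models in claim 1 follow the classical artificial potential field scheme: an attractive field pulling the vehicle toward the travel end point and repulsive fields pushing it away from obstacles and boundaries. The sketch below uses the standard textbook repulsive form; the gains, the cutoff radius, and all function names are illustrative assumptions, not the patent's exact formulas (which appear only as images in the original):

```python
import math

def attractive_force(pos, goal, k_att=1.0):
    """Attractive force toward the travel end point: gain times the
    goal-direction vector, growing with distance to the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    if math.hypot(dx, dy) == 0:
        return (0.0, 0.0)
    return (k_att * dx, k_att * dy)

def repulsive_force(pos, obstacle, d0=5.0, k_rep=10.0):
    """Repulsive force from one obstacle, active only inside the boundary
    distance d0 within which the obstacle exerts repulsion."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= d0 or d == 0:
        return (0.0, 0.0)
    mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
    return (mag * dx / d, mag * dy / d)   # along the obstacle-to-vehicle unit vector

def plan_step(pos, goal, obstacles, step=0.1):
    """One step of the fused local model: sum the attractive force and all
    repulsive forces, then move a fixed step along the resultant."""
    fx, fy = attractive_force(pos, goal)
    for ob in obstacles:
        rx, ry = repulsive_force(pos, ob)
        fx, fy = fx + rx, fy + ry
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

With no obstacles the vehicle moves straight toward the goal; an obstacle placed off the straight line deflects the step away from it, which is exactly the local obstacle-avoidance behavior the claim fuses into the global path.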
2. The method of claim 1, wherein the method further comprises:
in response to determining that the target vehicle does not detect an obstacle, controlling the target vehicle to travel to the travel termination point according to the global road path information.
3. The method of claim 1, wherein the ground segmentation of the point cloud dataset to obtain road boundary information comprises:
performing motion distortion correction on the point cloud data set to generate a point cloud data set located at the same moment in the coordinate system of the target vehicle, as a first point cloud data set;
filtering the first point cloud data set to generate a filtered first point cloud data set in a preset range as a second point cloud data set;
performing ground segmentation on the second point cloud data set to obtain a ground point cloud data set;
screening the ground point cloud data set to obtain a road boundary point cloud data set;
in response to determining that the road boundary point cloud dataset is not continuous, generating road boundary information from the road boundary point cloud dataset.
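A minimal numeric sketch of the filtering, ground-segmentation, and boundary-screening steps of claim 3. The height-threshold ground test and the per-slice extremum pick are simplifying assumptions standing in for the undisclosed segmentation algorithm (which might, for instance, fit a ground plane):

```python
import numpy as np

def segment_ground(points, ground_z=-1.5, tol=0.2, x_range=30.0, y_range=10.0):
    """Crop the cloud to a preset range (the filtering step), then split
    ground from non-ground points by a height threshold around the
    assumed sensor-relative ground height ground_z."""
    pts = np.asarray(points, dtype=float)
    roi = pts[(np.abs(pts[:, 0]) <= x_range) & (np.abs(pts[:, 1]) <= y_range)]
    is_ground = np.abs(roi[:, 2] - ground_z) <= tol
    return roi[is_ground], roi[~is_ground]

def boundary_points(ground):
    """Screen the ground points: per longitudinal slice, keep the leftmost
    and rightmost points as candidate road-boundary points."""
    left, right = [], []
    for x in np.unique(np.round(ground[:, 0])):
        sl = ground[np.round(ground[:, 0]) == x]
        left.append(sl[np.argmax(sl[:, 1])])    # largest lateral offset
        right.append(sl[np.argmin(sl[:, 1])])   # smallest lateral offset
    return np.array(left), np.array(right)
```

On a toy cloud with four ground points and one raised obstacle point, the obstacle point is excluded from the ground set and the lateral extremes of each slice become the boundary candidates.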
4. The method of claim 1, wherein the performing obstacle detection on the point cloud data set to obtain obstacle information comprises:
preprocessing the point cloud data set to obtain a sparse tensor of the point cloud data set;
inputting the sparse tensor to an obstacle detection model to obtain obstacle information, wherein the obstacle detection model is obtained through the following steps:
obtaining a sample set, wherein the sample set comprises: the system comprises an obstacle image set and a sample label set corresponding to the obstacle image set;
for each sample in the set of samples, performing the following training steps:
inputting the obstacle image of the sample into an initial obstacle detection model to obtain an obstacle identification result corresponding to the sample;
comparing the obstacle identification result with a corresponding sample label of the corresponding obstacle image;
determining whether the initial obstacle detection model reaches a preset optimization target or not according to a comparison result;
in response to determining that the initial obstacle detection model reaches the optimization goal, taking the initial obstacle detection model as a trained obstacle detection model;
and in response to determining that the initial obstacle detection model does not reach the optimization target, adjusting relevant parameters of the initial obstacle detection model, reselecting samples from the sample set, taking the adjusted initial obstacle detection model as the initial obstacle detection model, and executing the training step again.
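The train / compare / check-target / adjust / re-select loop of claim 4 can be illustrated with a toy one-parameter model. The linear "detector" and squared-error comparison below are placeholders for the actual obstacle detection network, which the claim does not specify:

```python
import random

def train_obstacle_detector(samples, lr=0.1, target_loss=1e-4, max_rounds=5000):
    """Toy stand-in for the claimed loop: predict, compare the result with
    the sample label, check the optimization target, otherwise adjust the
    relevant parameter, re-select a sample, and train again."""
    w = 0.0                        # single "relevant parameter" of the model
    rng = random.Random(0)         # fixed seed so the sketch is reproducible
    for _ in range(max_rounds):
        x, label = rng.choice(samples)        # (re-)select a sample
        pred = w * x                          # recognition result
        err = (pred - label) ** 2             # comparison result
        if err <= target_loss:                # optimization goal reached?
            return w                          # use as trained model
        w -= lr * 2 * (pred - label) * x      # adjust relevant parameters
    return w
```

Trained on samples drawn from the rule label = 2x, the loop converges to a parameter near 2 within a few dozen iterations.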
5. The method according to claim 1, wherein the coordinate system corresponding to the road boundary information and the obstacle information is a coordinate system established based on the target vehicle; and
wherein performing coordinate conversion on the road boundary information and the obstacle information to generate road boundary information and obstacle information in world coordinates comprises:
determining a common coordinate set in the target vehicle coordinate system and the world coordinate system;
determining a first set of location coordinates of the common set of coordinates in the target vehicle coordinate system;
determining a second set of location coordinates of the common set of coordinates in the world coordinate system;
determining a conversion relation between the target vehicle coordinate system and a world coordinate system according to the first position coordinate set and the second position coordinate set;
and performing coordinate conversion on the road boundary information and the obstacle information according to the conversion relation to generate road boundary information and obstacle information under world coordinates.
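Claim 5 determines the vehicle-to-world conversion relation from a common point set observed in both frames. One standard way to compute such a relation is the Kabsch/Procrustes rigid alignment, sketched here as an assumed concrete method; the claim itself does not name an algorithm:

```python
import numpy as np

def estimate_transform(vehicle_pts, world_pts):
    """Estimate the rigid transform (R, t) mapping vehicle-frame points onto
    their world-frame counterparts (Kabsch: SVD of the cross-covariance)."""
    A = np.asarray(vehicle_pts, float)    # first position coordinate set
    B = np.asarray(world_pts, float)      # second position coordinate set
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)             # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def to_world(points, R, t):
    """Apply the conversion relation to boundary / obstacle coordinates."""
    return (np.asarray(points, float) @ R.T) + t
```

Given three common points related by a 90-degree rotation plus a translation, the estimated (R, t) maps any further vehicle-frame point, such as a detected obstacle, to its world coordinates exactly.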
6. A path planning apparatus, comprising:
a determination unit configured to determine global road path information corresponding to the target map according to the acquired driving start point and driving end point set by the target user;
a real-time detection unit configured to detect an environment around a target vehicle in real time in response to a determination that the target vehicle is located at the driving start point;
an execution unit configured to, in response to determining that the target vehicle detects an obstacle, perform the following path planning steps: controlling a laser radar loaded on the target vehicle to obtain a point cloud data set of a target road section; performing ground segmentation on the point cloud data set to obtain road boundary information; and carrying out obstacle detection on the point cloud data set to obtain obstacle information, wherein the obstacle information comprises: static obstacle information and dynamic obstacle information; coordinate conversion is carried out on the road boundary information and the obstacle information to generate road boundary information and obstacle information under world coordinates, and the road boundary information and the obstacle information are used as first road boundary information and first obstacle information; acquiring vehicle information of the target vehicle; according to the first road boundary information, a road boundary constraint model is constructed, wherein the left road boundary constraint model is expressed as:
Figure QLYQS_30
where the terms of the formula denote the road boundary repulsive force gain factor, the center of the target vehicle, the coordinate vector of the corresponding point on the left road boundary, and the target vehicle width; the right-side road boundary constraint model is expressed as:
Figure QLYQS_35
where the terms of the formula denote the road boundary repulsive force gain factor, the center of the target vehicle, the coordinate vector of the corresponding point on the right road boundary, and the target vehicle width; construct an obstacle constraint model according to the first obstacle information, wherein the obstacle constraint model is expressed as:
Figure QLYQS_40
where the terms of the formula denote the repulsive force field gain, the distance between the target vehicle and the first obstacle, the boundary distance within which the first obstacle exerts a repulsive force on the target vehicle, the unit vector from the first obstacle to the target vehicle, and the number of repulsive potential fields acting on the target vehicle; fuse the road boundary constraint model, the obstacle constraint model and a path planning model to obtain a local path planning model, wherein the path planning model is expressed as:
Figure QLYQS_46
where the terms of the formula denote the gravitational field gain, the unit vector between the target vehicle and the travel end point, and the distance between the target vehicle and the travel end point, the distance being computed (Figure QLYQS_50) from the position coordinates of the target vehicle and the position coordinates of the travel end point; the local path planning model is expressed as:
Figure QLYQS_53
where the terms of the formula denote the path planning model, the left road boundary constraint model, the right road boundary constraint model, the set of obstacle constraint models, and the number of obstacles in the surroundings of the target vehicle; determine local obstacle avoidance road path information according to the local path planning model; and update the global road path information according to the local obstacle avoidance road path information, and control the target vehicle to travel to the travel termination point.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202211553459.5A 2022-12-06 2022-12-06 Path planning method and device, electronic equipment and computer readable medium Active CN115540896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553459.5A CN115540896B (en) 2022-12-06 2022-12-06 Path planning method and device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN115540896A CN115540896A (en) 2022-12-30
CN115540896B true CN115540896B (en) 2023-03-07

Family

ID=84721808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553459.5A Active CN115540896B (en) 2022-12-06 2022-12-06 Path planning method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115540896B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052122B (en) * 2023-01-28 2023-06-27 广汽埃安新能源汽车股份有限公司 Method and device for detecting drivable space, electronic equipment and storage medium
CN115827938B (en) * 2023-02-20 2023-04-21 四川省煤田测绘工程院 Homeland space planning data acquisition method, electronic equipment and computer readable medium
CN116166033B (en) * 2023-04-21 2024-05-21 深圳市速腾聚创科技有限公司 Vehicle obstacle avoidance method, device, medium and electronic equipment
CN116400362B (en) * 2023-06-08 2023-08-08 广汽埃安新能源汽车股份有限公司 Driving boundary detection method, device, storage medium and equipment
CN116588087B (en) * 2023-07-14 2023-09-29 上海伯镭智能科技有限公司 Unmanned mine car loading and unloading point parking method based on multi-mode data
CN117148837B (en) * 2023-08-31 2024-08-23 上海木蚁机器人科技有限公司 Dynamic obstacle determination method, device, equipment and medium
CN117848350B (en) * 2024-03-05 2024-05-07 湖北华中电力科技开发有限责任公司 Unmanned aerial vehicle route planning method for power transmission line construction engineering
CN118243105A (en) * 2024-03-28 2024-06-25 北京小米机器人技术有限公司 Path planning method, path planning device, electronic equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108445503A (en) * 2018-03-12 2018-08-24 吉林大学 The unmanned path planning algorithm merged with high-precision map based on laser radar
CN108828621A (en) * 2018-04-20 2018-11-16 武汉理工大学 Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN110488805A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 A kind of unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision
CN110717981A (en) * 2019-09-05 2020-01-21 中国人民解放军陆军工程大学 Method and device for acquiring indoor passable area of small robot
CN111445472A (en) * 2020-03-26 2020-07-24 达闼科技成都有限公司 Laser point cloud ground segmentation method and device, computing equipment and storage medium
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 Road environment element sensing method based on laser radar
CN113341970A (en) * 2021-06-01 2021-09-03 苏州天准科技股份有限公司 Intelligent inspection navigation obstacle avoidance system, method, storage medium and inspection vehicle
CN113379776A (en) * 2021-05-14 2021-09-10 北京踏歌智行科技有限公司 Road boundary detection method
CN113759391A (en) * 2021-08-27 2021-12-07 武汉大学 Passable area detection method based on laser radar
CN114442630A (en) * 2022-01-25 2022-05-06 浙江大学 Intelligent vehicle planning control method based on reinforcement learning and model prediction
CN114743181A (en) * 2022-04-29 2022-07-12 重庆长安汽车股份有限公司 Road vehicle target detection method and system, electronic device and storage medium
CN114820662A (en) * 2022-05-23 2022-07-29 燕山大学 Road side visual angle ground segmentation method, system and medium based on point cloud two-dimensional density
CN114924559A (en) * 2022-04-08 2022-08-19 南京航空航天大学 Intelligent vehicle path planning method for controlling virtual force direction


Also Published As

Publication number Publication date
CN115540896A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN115540896B (en) Path planning method and device, electronic equipment and computer readable medium
EP3707469B1 (en) A point clouds registration system for autonomous vehicles
US11465642B2 (en) Real-time map generation system for autonomous vehicles
US11468690B2 (en) Map partition system for autonomous vehicles
US11315317B2 (en) Point clouds ghosting effects detection system for autonomous driving vehicles
CN111462275B (en) Map production method and device based on laser point cloud
US20210302585A1 (en) Smart navigation method and system based on topological map
KR102539942B1 (en) Method and apparatus for training trajectory planning model, electronic device, storage medium and program
WO2017020466A1 (en) Urban road recognition method, apparatus, storage medium and device based on laser point cloud
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN115761702B (en) Vehicle track generation method, device, electronic equipment and computer readable medium
WO2023155580A1 (en) Object recognition method and apparatus
CN115185271A (en) Navigation path generation method and device, electronic equipment and computer readable medium
CN115339453B (en) Vehicle lane change decision information generation method, device, equipment and computer medium
CN114612616A (en) Mapping method and device, electronic equipment and storage medium
CN112622923B (en) Method and device for controlling a vehicle
CN114267027A (en) Image processing method and device
CN116279596A (en) Vehicle control method, apparatus, electronic device, and computer-readable medium
CN112558036B (en) Method and device for outputting information
CN115468578B (en) Path planning method and device, electronic equipment and computer readable medium
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN110363847B (en) Map model construction method and device based on point cloud data
CN115565374A (en) Logistics vehicle driving optimization method and device, electronic equipment and readable storage medium
CN112668371B (en) Method and device for outputting information
CN111383337A (en) Method and device for identifying objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant