CN115774444A - Route planning optimization method based on sparse navigation map - Google Patents



Publication number
CN115774444A
Authority
CN
China
Prior art keywords
road
coordinate system
intelligent vehicle
information
camera
Prior art date
Legal status
Granted
Application number
CN202111057852.0A
Other languages
Chinese (zh)
Other versions
CN115774444B (en)
Inventor
安成刚
张旗
吴程飞
李巍
李会祥
王增志
李志永
Current Assignee
Langfang Heyi Life Network Technology Co ltd
Original Assignee
Langfang Heyi Life Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Langfang Heyi Life Network Technology Co ltd
Priority to CN202111057852.0A
Publication of CN115774444A
Application granted
Publication of CN115774444B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

An embodiment of the disclosure relates to a path planning optimization method based on a sparse navigation map, comprising the following steps: the intelligent vehicle vision system acquires obstacle information sensed by the sensing equipment on the intelligent vehicle, together with driving image information of a designated area in front of the vehicle; the obstacle information is converted into the RTK geodetic coordinate system, and a passable area is determined in the driving image information and likewise converted into the RTK geodetic coordinate system; sparse navigation data are acquired from a known sparse navigation map and the position information of the intelligent vehicle; collision detection is then performed twice on the sparse navigation data, taking the passable-area boundary information and the vehicle safety distance as constraint conditions and combining the obstacle information in a two-dimensional longitude-latitude grid map, so as to obtain optimized navigation data, from which the optimal path planning scheme of the intelligent vehicle is computed with the A* algorithm. The method can detect and track the vehicle-road attitude of the current road environment and realize intelligent vehicle road detection and path optimization without a prior map.

Description

Path planning optimization method based on sparse navigation map
Technical Field
The application belongs to the technical field of intelligent driving, and particularly relates to a path planning optimization method based on a sparse navigation map.
Background
As an application of artificial intelligence to the automotive and transportation fields, the unmanned vehicle has in recent years drawn close attention worldwide from industry and even at the national level, and its safe driving depends on accurate perception of the road environment. For unmanned vehicles, road environments can be divided into structured and unstructured roads. A structured road has clear boundaries and an even, smooth surface with consistent optical properties, so an unmanned vehicle can easily adjust its heading in time based on detection and localization of road markings. In real environments, however, there are many unstructured road regions whose road characteristics are not obvious: surface markings are missing, boundaries are fuzzy, different roads differ greatly from one another, and unknown factors frequently disturb the recognition result. In addition, offline high-precision maps often lack manual annotation of unstructured roads. Perceiving unstructured road environments is therefore a core difficulty of unmanned driving. In fact, a human driver does not need a high-precision map; the driver only needs to sense the relative position of the vehicle on the road (left, center, right).
In view of this, studying the experience of human driving, in particular a method of perceiving the road in a sparse navigation map mode such as Gaode (AMap) or Baidu Maps, that is, recognizing the road and the road environment together with the position, speed and direction of motion of road participants in the way a human driver does, so that the intelligent system of the intelligent vehicle can make efficient decisions and control, has become a technical problem that urgently needs to be solved.
Disclosure of Invention
Technical problem to be solved
In view of the above shortcomings of the prior art, the present application provides a route planning optimization method based on a sparse navigation map.
(II) technical scheme
In order to achieve the purpose, the technical scheme is as follows:
in a first aspect, the present application provides a method for optimizing a path plan based on a sparse navigation map, the method including:
b10, the intelligent vehicle vision system acquires obstacle information sensed by the sensing equipment on the intelligent vehicle and driving image information of a designated area in front of the intelligent vehicle;
b20, the intelligent vehicle vision system converts the obstacle information into an RTK (real-time kinematic) geodetic coordinate system, determines a passable area in the driving image information and converts the determined passable area into the RTK geodetic coordinate system at the same time, and digital road information of the passable area in the RTK geodetic coordinate system is obtained;
b30, acquiring sparse navigation data according to the known sparse navigation map and the position information of the intelligent vehicle;
b40, performing collision detection on the sparse navigation data, taking the digital road information of the passable area in the RTK geodetic coordinate system and the safety distance of the intelligent vehicle as constraint conditions and combining the obstacle information in the two-dimensional longitude-latitude grid map, so as to obtain optimized navigation data;
and B50, acquiring an optimal path planning scheme of the intelligent vehicle by adopting an A-star algorithm based on the optimized navigation data.
Optionally, B10 comprises:
b11, the intelligent vehicle vision system receives driving image information of a designated area in front of the intelligent vehicle;
and B12, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle.
Optionally, B20 comprises:
b21, segmenting and labeling the driving image information, and determining a passable area and a non-passable area;
b22, converting the passable area based on a perspective projection transformation model, and acquiring digital road information of the passable area under RTK geodetic coordinates;
and B23, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle and converting the obstacle information into an RTK geodetic coordinate system.
Optionally, B22 comprises:
b22-1, converting a camera view of the driving scene of the intelligent vehicle into a virtual aerial view from top to bottom according to the perspective projection transformation model, and performing binarization processing on the aerial view;
b22-2, extracting the road boundary of the passable area by using a Canny edge detection operator based on the aerial view after binarization processing, and performing polynomial curve fitting to obtain a curve equation of the road boundary;
b22-3, acquiring the absolute position M(x0, y0, z0) on the road projection plane P of each pixel (u, v) in the image coordinate system, based on the coordinate conversion relation among the road world coordinate system, the camera coordinate system and the image coordinate system, and thereby determining the absolute position of the road boundary of the passable area on the road projection surface;
the coordinate conversion relation among the road world coordinate system, the camera coordinate system and the image coordinate system is calculated by using the relation between the height information of the known camera from the road plane and the object image;
b22-4, obtaining longitude and latitude coordinates of a camera coordinate system through coordinate transformation by utilizing the vehicle-mounted RTK longitude and latitude coordinates, realizing alignment transformation from boundary metric coordinates of a passable area under a road surface coordinate system to the RTK longitude and latitude coordinates, and obtaining road boundary longitude and latitude coordinates of the passable area and an azimuth angle theta of a road boundary fitting curve starting point under an RTK geodetic coordinate system;
and the longitude-latitude coordinates of the road boundary of the passable area in the geodetic coordinate system, together with the azimuth angle theta of the starting point of the road-boundary fitting curve, form the digital road information.
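The final alignment from metric boundary coordinates to RTK longitude-latitude can be illustrated with a minimal Python sketch; the flat local-tangent-plane approximation and the function name enu_to_latlon are assumptions for illustration, not part of the claimed method:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def enu_to_latlon(lat0_deg, lon0_deg, east_m, north_m):
    """Convert local east/north metric offsets (relative to the
    vehicle-mounted RTK position) to geodetic latitude/longitude,
    using a local-tangent-plane approximation valid at short range."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat0_deg))))
    return lat0_deg + dlat, lon0_deg + dlon
```

For example, a boundary point 10 m east and 10 m north of the RTK antenna maps to a position a few hundred microdegrees away from the antenna's latitude/longitude.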
Alternatively, B22-3 comprises:
suppose the focal length f and the optical center (u0, v0) in the camera's intrinsic matrix are known, the roll angle and pitch angle of the intelligent vehicle relative to the road surface are both 0, and the camera height is z_c = h_cam; then, according to the camera model, the following is obtained:
(Equation (1) appears here as an image in the original publication: the camera-model relation that maps each image pixel (u, v) to its metric position on the road plane.)
the camera coordinate system and the road surface coordinate system are parallel to each other: Y_C is the normal of the lens surface (the camera depth direction), P is the tangent plane of the road surface, Z_C and z are both normals of the road surface, and x and y are the projections of the camera axes X_C and Y_C on the road surface. O_road is the projection of the camera-coordinate origin O_camera on the road-surface tangent plane, so the coordinate of point M on the road projection plane in the road surface coordinate system is (x_c, y_c, 0). Equation (1) realizes the pixel-to-meter scale conversion between the image coordinate system and the road surface coordinate system and determines the absolute position of the road boundary of the passable area on the road projection surface; the camera is a monocular camera installed on the intelligent vehicle.
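Under these same assumptions (zero roll and pitch, known f, (u0, v0) and h_cam), the pixel-to-meter back-projection can be sketched in Python; the function name and exact formulation below are illustrative assumptions, not the literal equation (1) of the patent:

```python
def pixel_to_road_plane(u, v, f, u0, v0, h_cam):
    """Flat-ground back-projection for a forward-looking camera whose
    optical axis is parallel to the road surface (roll = pitch = 0).
    Returns (x_lateral_m, y_forward_m) in the road-surface frame;
    only pixels below the horizon row v0 intersect the ground."""
    if v <= v0:
        raise ValueError("pixel at or above the horizon; no ground intersection")
    y_forward = f * h_cam / (v - v0)         # similar triangles give depth
    x_lateral = (u - u0) * h_cam / (v - v0)  # lateral offset scales identically
    return x_lateral, y_forward
```

A pixel on the image centerline, 120 rows below the optical center, with f = 700 px and a 1.5 m camera height, back-projects to a point 8.75 m ahead of the vehicle.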
Alternatively, B22-4 comprises:
wherein theta < 10 degrees or theta > 350 degrees indicates east, and 10 < theta < 80 degrees indicates northeast;
80 < theta < 100 degrees indicates north, and 100 < theta < 170 degrees indicates northwest;
170 < theta < 190 degrees indicates west, and 190 < theta < 260 degrees indicates southwest;
260 < theta < 280 degrees indicates south, and 280 < theta < 350 degrees indicates southeast.
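The azimuth sectors above can be captured in a small illustrative function (the behavior at exactly 10, 80, 100, ... degrees is an assumption, since the description leaves the boundaries unspecified):

```python
def heading_label(theta_deg):
    """Map the azimuth theta (degrees) of the road-boundary fitting
    curve's starting point to the eight coarse road directions of
    B22-4; theta is first normalized to [0, 360)."""
    t = theta_deg % 360.0
    if t < 10 or t > 350:
        return "east"
    if t < 80:
        return "northeast"
    if t < 100:
        return "north"
    if t < 170:
        return "northwest"
    if t < 190:
        return "west"
    if t < 260:
        return "southwest"
    if t < 280:
        return "south"
    return "southeast"
```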
Optionally, B40 comprises:
b41, obtaining the road direction of each navigation point of the sparse navigation data through visual navigation, and vertically projecting the navigation data to form a new navigation path;
b42, performing collision detection on the new navigation path by taking the safety distance of the intelligent vehicle and the digital road information of the passable area under the RTK geodetic coordinate system as constraint conditions, and determining the path after the first optimization;
b43, establishing a two-dimensional longitude and latitude grid map according to the obstacle information under the RTK geodetic coordinate system;
b44, projecting the path after the first optimization in a two-dimensional longitude and latitude grid map to obtain path data;
and B45, performing collision detection on the projected path data and the obstacle information to obtain finally optimized navigation data.
Optionally, B50 comprises:
taking the position of the current intelligent vehicle on the grid map as the starting point M and the first point of the optimized navigation data as the end point N, and finding the shortest path from M to N with the A* algorithm under the condition that road boundary and obstacle position information are known, thereby realizing real-time road planning of the intelligent vehicle in the sparse navigation map mode;
the starting point and the end point are two adjacent points on the optimized navigation path, and the shortest path of the navigation is formed by connecting discrete path points.
In a second aspect, an embodiment of the present invention further provides an embedded processing system, including a memory and a processor, wherein the memory stores a computer program and the processor executes the computer program stored in the memory so as to carry out the steps of the route planning optimization method based on a sparse navigation map according to any one of the first aspect.
In a third aspect, an embodiment of the present invention further provides an intelligent vehicle, which includes a monocular camera installed in front of the intelligent vehicle, a lidar, a plurality of sensors, and the embedded processing system of the second aspect, where the monocular camera, the lidar, and the plurality of sensors are all in communication with the embedded processing system.
(III) advantageous effects
The technical scheme provided by the application can comprise the following beneficial effects:
the method firstly combines a lightweight monocular image segmentation network model to obtain a passable road area, and realizes the alignment transformation from the passable area boundary metric system coordinate to the RTK longitude and latitude coordinate under a road surface coordinate system through perspective projection transformation, thereby completing the detection and tracking of the vehicle road posture of the current road environment.
Further, unlike current autonomous vehicles that plan and navigate with a high-precision map and high-precision (centimeter-level) positioning, the method, after acquiring the vehicle-road attitude information, performs a two-pass difference optimization analysis between a low-cost sparse navigation map and the current actual physical-world road; a passable planned vehicle trajectory is finally generated in the intelligent vehicle's local sensing system, breaking the dependence of autonomous driving technology on high-precision navigation maps and thus realizing road detection and path optimization of the intelligent vehicle without a prior map.
Drawings
Fig. 1A is a flowchart of a method for sensing a vehicle road posture based on monocular vision according to an embodiment of the present invention;
fig. 1B is a flowchart of a sparse navigation map-based path planning optimization method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a passable region segmented by a Mask R-CNN deep learning model;
FIG. 3 is a schematic diagram of the aerial view converted and binarized view of the passable area segmented in FIG. 2;
FIG. 4 is a schematic diagram of a road fitting effect of a curve of a traffic area obtained under a view angle of an aerial view;
FIG. 5 is a schematic diagram of a conversion relationship between an image plane and a road projection plane;
FIG. 6 is a schematic diagram of a road boundary grid map of the passable area in the RTK geodetic coordinate system;
FIG. 7 is a schematic diagram of vehicle road attitude sensing for an unstructured road environment;
FIG. 8 is a schematic diagram of the wrong navigation data for the square in front of a station given by the Gaode sparse navigation map;
FIG. 9 is a schematic view of a navigation path optimized based on the Gaode navigation data;
FIG. 10 is a schematic diagram of navigation data optimized by an intelligent vehicle on the basis of a sparse navigation map;
fig. 11 is a schematic diagram of the path tracking trajectory of the smart car on the optimized path;
fig. 12 is a schematic diagram of collision detection.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the present embodiments of the invention, which are illustrated in the accompanying drawings. It is to be understood that the following specific examples are illustrative of the invention only and are not to be construed as limiting the invention. In addition, it should be noted that, in the case of no conflict, the embodiments and features in the embodiments in the present application may be combined with each other; for convenience of description, only portions related to the invention are shown in the drawings.
At present, studying the experience of human driving, in particular perceiving the road in a sparse navigation map mode such as Gaode (AMap) or Baidu navigation, that is, recognizing the road and the road environment together with the position, speed and direction of motion of road participants and converting them into digital road information, enables the intelligent system of the intelligent vehicle to make decisions and exert control conveniently and efficiently, and is of great significance for the real commercial deployment of future autonomous driving.
At present, research on digital road perception methods at home and abroad can be mainly divided into three types: methods based on vision sensors, methods based on ranging sensors such as radar, and methods based on multi-sensor data fusion.
Vision-based methods perform a series of processing on image data obtained from a vehicle-mounted camera and can be roughly divided into two kinds of algorithms: road-boundary-based and region-segmentation-based. Road-boundary-based methods mainly use features such as edges and colors to extract road boundary lines or vanishing points and thereby obtain the road area; region-segmentation-based algorithms mainly segment the road area by means of region segmentation and clustering on the road image, fusing multi-feature information such as road color and texture. Because vision sensors are low-cost and their data resemble what the human eye sees, they are intuitive and convenient to work with and are widely used in current road recognition. Their drawback is that images are easily affected by factors such as illumination, shadow, and low contrast between the road and its surroundings.
Radar-based methods splice the 3D point cloud data obtained by a vehicle-mounted multi-line lidar into a high-precision point cloud map, cluster the points using the longitudinal spatial features of the cloud, identify obstacles in the surrounding environment, and finally provide a grid map for autonomous driving of the intelligent vehicle. The advantages of lidar are all-weather perception and recognition of roads and road participants without interference from lighting, along with accurate ranging and speed measurement. One shortcoming is low resolution: people, two-wheelers or animals slightly farther away cannot be reliably detected, and small unstructured targets such as marking lines, cracks, potholes, road beds, guardrails and vertical poles cannot be identified. Another is that targets close together front-to-back cannot be accurately distinguished, and the point cloud splicing result depends on the actual three-dimensional structure of the site. On a structured road, the spliced point cloud basically matches the expected composition of a central road with curbs, trees and guardrails on both sides; on an unstructured road, however, prior knowledge of the scene structure is rarely available, and scenes where the point cloud fails often occur, for example squares, commercial streets, single-side steps and surface parking lots, at which time other sensors are needed for compensation and repair.
Considering that images provide better transverse texture features while point clouds provide reliable longitudinal spatial features, researchers have in recent years proposed road recognition and positioning schemes based on multi-vehicle, multi-sensor data fusion, aiming to combine the advantages of the two kinds of data and improve road recognition and positioning accuracy by increasing information redundancy. Each vehicle senses the road with its own radar, video and laser sensors; vehicle-to-vehicle and vehicle-to-cloud communication then shares the sensing data of multiple vehicles, finally achieving accurate global perception of the road. However, this scheme presupposes that all vehicles on the road are equipped with such sensors, which may be achievable in very limited local environments such as smart mines, smart docks and smart parks, but is clearly unrealistic on wide-area roads; it is therefore difficult to meet the application requirements of current intelligent vehicles in cities, towns and villages.
In view of the above, the invention provides a path planning optimization method based on a sparse navigation map, which simulates the experience of driving a vehicle by a human without a high-precision prior map mode, and effectively reduces the complexity and the calculated amount of a road perception method on the premise of ensuring the safe driving of an intelligent vehicle.
Example one
The method mainly imitates the "strong perception, weak localization" characteristic of human driving: only the driving experience of sensing the vehicle's relative position on the road (left, middle, right) is needed. It introduces the concept of acquiring the vehicle-road attitude, namely uniformly converting the obstacle information obtained by lidar and ultrasonic sensors and the road direction and boundary obtained from images into the vehicle-mounted GPS-RTK geodetic coordinate system to obtain the relative position of the intelligent vehicle within the current road environment (extreme left, left, middle, right, extreme right), so that the steering and decision speed of the intelligent vehicle (e.g., its vision system) can be adjusted in real time as the current road environment changes.
The embodiment of the invention provides a vehicle road posture sensing method based on monocular vision, which comprises the following steps:
a10, the intelligent vehicle vision system receives driving image information of a designated area in front of the intelligent vehicle.
For example, the intelligent vehicle of this embodiment is provided with a plurality of monocular cameras, e.g., mounted at the front of the vehicle, which collect driving image information ahead of the intelligent vehicle in real time and transmit the collected driving image information to the intelligent vehicle vision system.
The minimum coverage of the driving image information of the designated area in this embodiment is 50 m in width and 30 m in length, with an 80° field of view.
This embodiment is not limited to monocular cameras; other cameras may also be used to acquire driving image information over the minimum range.
And A20, segmenting and labeling the driving image information by a deep learning method, and determining a passable area and a non-passable area.
For example, a Mask R-CNN deep learning network model can be used to detect, segment and label the passable and non-passable areas of the road in the driving image information. The Mask R-CNN model is used here only for illustration and is not limiting; other image segmentation methods can likewise realize road segmentation.
And A30, converting the passable area based on a perspective projection transformation model, and acquiring digital road information of the passable area under RTK (Real Time Kinematic) geodetic coordinates.
In this embodiment, the digitized road information may include road boundary locations and road directions in geodetic latitude and longitude coordinates.
And A40, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle and converting the obstacle information into an RTK geodetic coordinate system.
And A50, acquiring the vehicle road posture information of the intelligent vehicle relative to the current passable road based on the digital road information and the obstacle information of the passable area under the RTK geodetic coordinates.
The vehicle-road attitude information in this embodiment may include position information of the intelligent vehicle and information of the road, for example the relative orientation between the intelligent vehicle and the current passable road in the RTK geodetic coordinate system and the passable-area boundaries on the left and right sides of the current road. According to this relative orientation and the road boundary, the vehicle's position is graded into five levels: extreme left, left, middle, right and extreme right, so as to determine the current road state of the intelligent vehicle (e.g., forward or backward). This mainly answers where the intelligent vehicle is, the direction and edges of the road it faces, and its position on that road.
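As an illustration of such five-level grading, the following sketch classifies the vehicle from its distances to the left and right passable-area boundaries; the thresholds and function name are invented for illustration and are not specified in this application:

```python
def road_attitude(left_dist_m, right_dist_m):
    """Coarse vehicle-road attitude from the vehicle's distances (m)
    to the left and right passable-area boundaries.  The ratio of the
    left distance to the total width places the vehicle in one of the
    five levels described above; thresholds are assumptions."""
    width = left_dist_m + right_dist_m
    if width <= 0:
        raise ValueError("vehicle outside the passable area")
    ratio = left_dist_m / width  # 0 = hugging the left edge, 1 = the right
    if ratio < 0.1:
        return "extreme left"
    if ratio < 0.35:
        return "left"
    if ratio <= 0.65:
        return "middle"
    if ratio <= 0.9:
        return "right"
    return "extreme right"
```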
Compared with existing unmanned driving technology that relies on a high-precision navigation map, this embodiment uses lightweight monocular images to recognize and detect the road environment ahead (e.g., a passable area bounded by complex obstacles such as curb steps, vehicles, pedestrians and soil piles), and realizes the alignment transformation from the passable-area boundary metric coordinates in the road surface coordinate system to RTK longitude-latitude coordinates via the vehicle-mounted RTK position, so as to obtain the passable-area road boundary and road direction in the geodetic coordinate system.
Example two
The embodiment of the invention provides a path planning optimization method based on a sparse navigation map, which comprises the following steps:
and B10, the intelligent vehicle vision system acquires the obstacle information sensed by the intelligent vehicle sensing equipment and the driving image information of the designated area in front of the intelligent vehicle.
This step may include, for example, the following substeps:
b11, the intelligent vehicle vision system receives driving image information of a designated area in front of the intelligent vehicle;
and B12, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle.
B20, the intelligent vehicle vision system converts the obstacle information into an RTK (real-time kinematic) geodetic coordinate system, determines a passable area in the driving image information, converts the determined passable area into the RTK geodetic coordinate system, and acquires digital road information of the passable area in the RTK geodetic coordinate system.
B30, acquiring sparse navigation data according to the known sparse navigation map and the position information of the intelligent vehicle;
and B40, performing multiple collision detections on the sparse navigation data, taking the passable-area information in the RTK coordinate system and the safety distance of the intelligent vehicle as constraint conditions and combining the obstacle information of the two-dimensional longitude-latitude grid map, so as to obtain optimized navigation data.
This step may include, for example, the following sub-steps not shown in the figures:
and B41, obtaining the road direction of each navigation point of the navigation data through visual navigation, and vertically projecting the navigation data to form a new navigation path.
And B42, performing collision detection on the new navigation path, taking the safety distance of the intelligent vehicle (e.g., no obstacle within 5 m ahead and behind and 1 m to the left and right) and the boundary information of the passable area in the RTK geodetic coordinate system as constraint conditions, and determining the first-pass optimized path.
The constraint conditions at the position can comprise obstacle constraint conditions sensed by a laser radar, left and right boundary constraint of a channel sensed by a monocular camera and the like.
And B43, establishing a two-dimensional longitude and latitude grid map according to the obstacle information in the RTK geodetic coordinate system (see a method for establishing the grid map in CN 111928862A).
B44, projecting the path after the first optimization in a grid map to obtain path data;
and B45, performing collision detection on the projected path data and the obstacle information to obtain optimized navigation data.
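The collision-detection passes of B42/B45 can be sketched as a simple clearance check; the axis-aligned safety box, the coordinate convention (x lateral, y longitudinal, in meters) and the function names are illustrative assumptions:

```python
def passes_clearance(path_pts, obstacles, safe_long=5.0, safe_lat=1.0):
    """Keep only the waypoints with no obstacle inside the vehicle's
    safety box (safe_lat meters laterally, safe_long longitudinally),
    mirroring the 5 m front/back, 1 m left/right constraint above.
    path_pts and obstacles are (x, y) tuples in a common metric frame."""
    kept = []
    for px, py in path_pts:
        hit = any(abs(ox - px) <= safe_lat and abs(oy - py) <= safe_long
                  for ox, oy in obstacles)
        if not hit:
            kept.append((px, py))
    return kept
```

A waypoint 1 m beside and 1 m short of an obstacle would be rejected, while a waypoint far from every obstacle survives into the optimized navigation data.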
And B50, acquiring an optimal path planning scheme of the intelligent vehicle by adopting an A-star algorithm based on the optimized navigation data.
For example, the position of the current intelligent vehicle on the grid map is taken as the starting point M and the first point of the optimized navigation data as the end point N; with road boundary and obstacle position information known, the A* algorithm finds the shortest path from M to N, realizing real-time road planning for the intelligent vehicle in the sparse navigation map mode;
the starting point and the end point are two adjacent points on the optimized navigation path, and the shortest path of the navigation is composed of discrete path points. For example, each adjacent path point is about 4 meters, and path finding is performed by using a-x (shortest path algorithm) within the range of each two adjacent path points, the purpose of path finding is to autonomously walk around the boundary of an obstacle and a corridor, and the shortest distance is ensured.
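As a hedged sketch (not the patent's exact implementation), the windowed A* search described above can be written as a standard grid A* with a Manhattan heuristic; the grid layout and unit step costs are illustrative assumptions:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = itertools.count()                      # tie-breaker for heap entries
    open_heap = [(heuristic(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                          # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g_cost[cur] + 1
                if new_g < g_cost.get(nb, float("inf")):
                    g_cost[nb] = new_g
                    came_from[nb] = cur
                    heapq.heappush(open_heap, (new_g + heuristic(nb), next(tie), nb))
    return None
```

In the scheme above, each call would cover only the short window (about 4 meters) between two adjacent optimized waypoints.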
This embodiment considers two facts: current large-scale high-precision map acquisition schemes are costly and inefficient, which limits the development of unmanned-driving technology; and low-cost sparse navigation maps (such as AMap/Gaode or Baidu maps) often fail at positioning and navigation on non-arterial urban roads. It therefore performs a two-stage difference-and-optimization analysis between the low-cost sparse navigation map and the actual physical road environment acquired by the on-board sensors. Through repeated collision detection, a passable optimized path is finally generated in the intelligent vehicle's locally perceived grid map. This effectively corrects erroneous navigation routes from the sparse AMap data, breaks the dependence of automatic driving on high-precision navigation maps, and provides safe, reliable navigation and path planning for normal driving of the intelligent vehicle without a high-precision prior map.
EXAMPLE III
This embodiment addresses the technical requirement of autonomous driving in a no-prior-map mode in an unfamiliar environment, and discloses a vehicle-road posture sensing method and a path planning optimization method based on a sparse navigation map. In both methods, a deep-learning MASK-RCNN network model performs segmentation detection of the current drivable road area (other network models may be chosen in other embodiments; this embodiment is not limited thereto); lidar point-cloud obstacle detection and the MASK-RCNN image segmentation are then combined through perspective projection transformation to obtain road boundary information, which is unified into the RTK world coordinate system, completing detection and tracking of the vehicle posture in the current road environment. In the path planning optimization method, since the sparse navigation map provides only rough navigation information, the difference between the actual physical environment of the current road and the sparse navigation map is obtained by combining vision and lidar data, path optimization is performed twice, and a passable planned vehicle trajectory is finally generated, realizing road detection and path planning in a no-prior-map mode in an unfamiliar environment.
With reference to fig. 1B to fig. 11, a vehicle road posture sensing method and a path planning optimization method based on a sparse navigation map are described in detail, where the vehicle road posture sensing method may include the following steps 1 and 2; the path planning optimization method may include steps 1 to 5 described below. The concrete description is as follows:
step 1: drivable-area segmentation and 3D road detection based on the MASK-RCNN monocular image.
The monocular camera at the front of the intelligent vehicle collects driving image information in real time and transmits it to the intelligent vehicle vision system; that is, forward driving image information is obtained through the vehicle-mounted monocular camera.
In this step the monocular camera provides the camera view; converting the camera view into a bird's-eye view mainly facilitates accurate fitting of the road-boundary curve equation.
The Mask R-CNN deep learning network model is used for detecting the passable area and the non-passable area of the road in the front driving image, and segmentation and labeling are carried out, as shown in FIG. 2.
This step separates the foreground (passable areas such as roads) from the background (non-passable areas) in the image.
The camera view of the intelligent vehicle's driving scene is converted into a virtual top-down bird's-eye view according to the perspective projection transformation model, and the bird's-eye view is binarized, as shown in FIG. 3.
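The camera-view-to-bird's-eye conversion is a perspective (homography) warp. As a minimal numpy sketch, points can be mapped through an assumed 3x3 homography H; in practice H would come from the perspective projection transformation model or calibration, and the matrix used below is purely illustrative:

```python
import numpy as np

def warp_points(H, pts):
    """Map an (N, 2) array of camera-view pixels through homography H."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale

# A pure scaling homography doubles both coordinates (illustrative only)
H = np.diag([2.0, 2.0, 1.0])
bird_eye = warp_points(H, [[3.0, 4.0]])
```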
The road boundary of the passable area is extracted with the Canny edge detection operator and fitted with a polynomial curve; the fitted curve boundary of the passable road area is shown in the bird's-eye view of FIG. 4.
In this step, polynomial curve fitting is used to determine the road boundaries and road direction. The fitting is performed on the image to reduce the amount of curve-fitting computation: the fitted road-edge curve is two-dimensional image information and must then be converted into absolute position information through a coordinate-system transformation.
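A minimal sketch of this fitting step, assuming the boundary pixels have already been extracted by Canny edge detection; the sample points and the polynomial order are illustrative:

```python
import numpy as np

# Road-boundary pixels (x, y) in the binarized bird's-eye view; y runs down
# the image, x across it. These samples lie on x = 100 + 0.0016 * y**2.
boundary = np.array([[100.0, 0.0], [104.0, 50.0], [116.0, 100.0], [136.0, 150.0]])

# Fit x as a 2nd-order polynomial in y, then evaluate the fitted curve.
coeffs = np.polyfit(boundary[:, 1], boundary[:, 0], deg=2)
fitted_x = np.polyval(coeffs, boundary[:, 1])
```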
The coordinate conversion relations among the road world coordinate system, the camera coordinate system and the image coordinate system are solved using the known height of the camera above the road plane and the object-image relation, so as to determine the absolute position M(x_0, y_0, z_0) on the road projection plane P of each pixel (u, v) in the bird's-eye image, as shown in FIG. 5. The binarized bird's-eye view is used for the fitting in this step in order to reduce the amount of computation.
The concrete description is as follows:
Suppose that in the intrinsic matrix of the camera (i.e., the aforementioned monocular camera at the front of the intelligent vehicle) the focal length f and optical center (u_0, v_0) are known, that the roll and pitch angles of the vehicle relative to the road surface are both 0 (roll and pitch refer to the tilt of the road and camera coordinate systems about the Y-axis and X-axis in FIG. 5), and that the camera height (the height of the camera above the ground) is z_c = h_cam. Then, according to the camera model:
x_c = (u - u_0) * h_cam / (v - v_0),    y_c = f * h_cam / (v - v_0)    (1)
As shown in FIG. 5, the camera coordinate system and the road-surface coordinate system are parallel to each other: Y_C is the normal of the lens surface (the camera depth direction), P is the tangent plane of the road surface, Z_C is parallel to the road-surface normal, x and y are the projections of the camera axes X_C, Y_C onto the road surface, and O_Road is the projection of the camera origin O_Camera onto the road-surface tangent plane. The coordinates of point M on the road projection plane in the road-surface coordinate system are therefore (x_c, y_c, 0). Formula (1) performs the pixel-to-meter scale conversion between the image coordinate system and the road-surface coordinate system, so the absolute position of the passable-area boundary line on the road projection plane can be determined and matched to the road boundary of the digital road information.
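The pixel-to-meter conversion of this step can be sketched as a flat-ground inverse perspective mapping. Because the original formula appears only as an image in the patent, the relation below is an assumed standard reconstruction under the stated conditions (known f and optical center (u_0, v_0), camera height h_cam, zero roll and pitch):

```python
def pixel_to_road(u, v, f, u0, v0, h_cam):
    """Map image pixel (u, v) to road-plane metric coordinates (x_c, y_c).

    Assumed flat-ground inverse perspective mapping: the camera looks along
    the road at height h_cam, so rows below the horizon row v0 correspond to
    ground points in front of the vehicle.
    """
    if v <= v0:
        raise ValueError("pixel must lie below the horizon row v0")
    y_c = f * h_cam / (v - v0)           # forward distance in meters
    x_c = (u - u0) * h_cam / (v - v0)    # lateral offset in meters
    return x_c, y_c
```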
Using the vehicle-mounted RTK longitude-latitude coordinates, the longitude-latitude coordinates of the camera coordinate system are obtained through coordinate transformation, realizing the alignment transformation from the passable-area boundary metric coordinates in the road-surface coordinate system to RTK longitude-latitude coordinates; the longitude-latitude coordinates of the passable-area road boundary in the geodetic coordinate system and the azimuth angle θ of the starting point of the road-boundary fitting curve are then obtained, as shown in FIG. 6. In this embodiment, these boundary coordinates and the azimuth θ together form the digital road information, with the azimuth θ of the road-boundary starting point corresponding to the road direction of the digital road information.
Where θ < 10° or θ > 350° corresponds to "east", and 10° < θ < 80° to "northeast";
80° < θ < 100° to "north", and 100° < θ < 170° to "northwest";
170° < θ < 190° to "west", and 190° < θ < 260° to "southwest";
260° < θ < 280° to "south", and 280° < θ < 350° to "southeast".
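This azimuth-to-direction mapping can be written down directly. The text leaves the exact boundary values (10°, 80°, ...) unassigned, so giving them to the lower band here is an assumption:

```python
def azimuth_to_direction(theta):
    """Map the azimuth theta (degrees) of the road-boundary curve start point
    to a coarse road direction, following the bands in the text."""
    theta %= 360.0
    bands = [(10, "east"), (80, "northeast"), (100, "north"), (170, "northwest"),
             (190, "west"), (260, "southwest"), (280, "south"), (350, "southeast")]
    for upper, name in bands:
        if theta <= upper:
            return name
    return "east"  # theta > 350 wraps around to east
```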
Step 2: vehicle-road posture sensing based on the combination of image, radar and RTK multi-sensor data.
Acquiring obstacle information based on the lidar and ultrasonic sensors on the intelligent vehicle;
unifying obstacle information obtained by a laser radar and ultrasonic waves to a vehicle-mounted RTK geodetic coordinate system;
and, combining the digitized road information from step 1 (comprising road boundary and azimuth information), realizing perception of the vehicle-road posture.
Vehicle-road posture sensing imitates the strong-perception, weak-positioning characteristics of a human driver: only the vehicle's relative position on the road (left, middle, right) needs to be perceived, as in ordinary driving experience. It mainly comprises the following three parts:
1) Acquiring road boundary position information of passable areas on the left side and the right side of a current road in an RTK geodetic coordinate system in real time;
2) Acquiring relative position information of the intelligent vehicle and the current passable road boundary under an RTK geodetic coordinate system in real time;
3) According to the relative position of the intelligent vehicle and the road boundary in the RTK geodetic coordinate system, dividing the vehicle's lateral road position into five levels (far left, left, middle, right and far right), and further judging the current road state of the intelligent vehicle (such as forward driving or backward driving).
For example, vehicle-road posture sensing means adjusting the driving direction and speed of the intelligent vehicle at any time according to road-environment changes obtained from the image sensor, lidar and ultrasonic detection. A vehicle-road posture perception diagram for an unstructured road environment is shown in FIG. 7. In FIG. 7 there is a green belt on the left, parking spaces on the right, and no obvious road markings. The road boundary and direction of the passable area (asterisks) are obtained by image segmentation from the vision sensor, and obstacle information near the intelligent vehicle (wavy lines) is obtained by ultrasonic ranging. If ultrasonic detection gives a distance d1 to the obstacle on the right of the vehicle and the vehicle-mounted camera detects a distance d2, the vehicle's position on the road can be determined from the ratio d1/d2; for example, when 10% < d1/d2 < 30%, the intelligent vehicle can be considered to be on the right side of the road.
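A hedged sketch of the five-level classification described above. The text fixes only one example band (10% < d1/d2 < 30% meaning "right"), so the remaining break points, the function name and the argument names below are all illustrative assumptions:

```python
def road_position(d_right, d_total):
    """Classify the vehicle's lateral position from the ultrasonic distance to
    the right-side obstacle (d_right) and the camera-detected reference
    distance (d_total). Only the 10%-30% "right" band comes from the text;
    the other thresholds are assumed for illustration."""
    r = d_right / d_total
    if r < 0.10:
        return "far right"
    if r < 0.30:
        return "right"       # the band given in the text
    if r < 0.60:
        return "middle"
    if r < 0.80:
        return "left"
    return "far left"
```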
Step 3: first path optimization combining the sparse navigation map with visual road detection.
Navigation and positioning from a low-cost sparse navigation map on non-arterial urban roads are often wrong. As shown in FIG. 8, for the square in front of the Bazhou railway station the actual road is a rectangular right-angle turn, but the AMap navigation result shows a westward curved navigation line; driving according to this erroneous navigation data would put the intelligent vehicle on the square's green belt. A difference analysis between the actual road direction obtained by visual navigation in step 1 and the current AMap sparse navigation data is therefore required.
Given the initial longitude-latitude position of the intelligent vehicle, the AMap sparse navigation data are acquired (shown by the arrow on the right of FIG. 8).
The AMap navigation points are vertically projected onto the road direction obtained through visual navigation (left-turn arrow) to form a new navigation path, and collision detection is performed under the constraint conditions of the road boundary and the intelligent vehicle's safety distance (the new navigation path is required to move smoothly toward the road edge, rebounding automatically when it hits the road boundary and otherwise continuing to translate toward it).
Note that this embodiment uses AMap navigation only as an example; in practice the navigation data need not come from an AMap map, as long as they consist of discrete longitude-latitude navigation points.
Through repeated collision-rebound detection, the finally determined trajectory is the first-optimized path.
As shown in fig. 9, the dotted square line is the AMap sparse navigation data, the slashed line is the visually detected road boundary, and the dotted diamond line is the optimized path finally determined by repeated collision detection.
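The vertical-projection step (dropping each sparse navigation point perpendicularly onto the visually detected road direction) can be sketched as follows, assuming the points have already been converted from longitude/latitude into a local metric frame; the function and argument names are illustrative:

```python
import math

def project_onto_road(point, road_start, road_heading_deg):
    """Perpendicularly project a navigation point onto the road-direction line
    through road_start with azimuth road_heading_deg (local metric frame)."""
    th = math.radians(road_heading_deg)
    dx, dy = math.cos(th), math.sin(th)              # unit vector along the road
    px, py = point[0] - road_start[0], point[1] - road_start[1]
    t = px * dx + py * dy                            # scalar projection
    return (road_start[0] + t * dx, road_start[1] + t * dy)
```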
Step 4: second path optimization combining the lidar grid map.
Obstacle information is obtained by vehicle-mounted lidar ranging to establish a two-dimensional longitude-latitude grid map (the process of establishing the grid map is recorded in detail in CN111928862A, which is incorporated into this application by reference);
The first-optimized path obtained in step 3 is projected onto the grid map built from the lidar data, and collision detection is performed between the projected path data and the obstacle information detected by the lidar in the grid map. When a collision is determined, the path rebounds by a set fixed safety distance and collision detection is performed again, thereby generating a passable optimized vehicle path in the intelligent vehicle's locally perceived map.
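A hedged sketch of this detect-and-rebound loop on the grid map. The occupied-cell set, the sideways (+x) rebound direction and the iteration cap are illustrative assumptions; the patent specifies only that a colliding point rebounds by a fixed safety distance and is re-checked:

```python
def rebound_path(path, occupied, safe_dist):
    """Move each (x, y) path point whose integer grid cell is occupied sideways
    by the fixed safety distance, re-checking until the cell is free (bounded)."""
    optimized = []
    for x, y in path:
        for _ in range(10):                  # bounded re-detection loop
            if (int(x), int(y)) not in occupied:
                break
            x += safe_dist                   # rebound, then detect again
        optimized.append((x, y))
    return optimized
```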
FIG. 10 is a schematic diagram of the secondarily optimized path: the long dotted line is the erroneous navigation data given by the AMap sparse navigation map, the gray solid line is the passable-area corridor edge fitted in step 1, the dotted line intersecting the gray solid line is an obstacle detected by the lidar, and the square dotted line is the final navigation data after the second optimization.
The first path optimization and the second path optimization are explained as follows:
The first path optimization can be understood as macroscopic sparse-map optimization, equivalent to planning on an offline map; at this stage the map contains only basic static road information, such as unchanging roads, buildings and green belts.
The second path planning can be understood as microscopic, accurate map planning. The grid map usually has a refresh frequency; after the first-optimized path is projected onto it, static and dynamic obstacles in the road can be distinguished in real time, such as a vehicle suddenly approaching head-on, or a sprinkler truck working or temporarily parked on one side of the road. None of this is visible in a sparse navigation map and must be detected on site by sensors such as vision or lidar. The second path planning lets the intelligent vehicle perceive road-condition information that the sparse AMap navigation map cannot show, realizing true automatic driving rather than manually assisted automatic driving.
Collision detection is mentioned in steps 3 and 4 and is described below with reference to FIG. 12. Taking rectangle-to-rectangle collision detection as an example, the principle is to detect whether two rectangles overlap. As shown in FIG. 12, suppose rectangle 1 has top-left corner (x1, y1), width w1 and height h1, and rectangle 2 has top-left corner (x2, y2), width w2 and height h2.
Detecting whether the two rectangles have an overlapping area can then be converted mathematically into a relation between the center-point distances and the widths and heights in the X and Y directions: the rectangles overlap when the absolute value of the distance Δx between the two center points in the X direction is at most half the sum of the widths, (w1 + w2)/2, and the absolute value Δy of the distance in the Y direction is at most half the sum of the heights, (h1 + h2)/2.
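The center-distance overlap test described above translates directly into code (rectangle layout (x, y, w, h) with a top-left origin, as in the text):

```python
def rects_collide(rect1, rect2):
    """Axis-aligned rectangle overlap test: the rectangles collide when the
    center distance along each axis is at most half the summed extents."""
    x1, y1, w1, h1 = rect1
    x2, y2, w2, h2 = rect2
    dx = abs((x1 + w1 / 2) - (x2 + w2 / 2))   # center distance in X
    dy = abs((y1 + h1 / 2) - (y2 + h2 / 2))   # center distance in Y
    return dx <= (w1 + w2) / 2 and dy <= (h1 + h2) / 2
```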
Step 5: path finding on the optimized path with the A* algorithm.
The current position of the intelligent vehicle on the grid map is taken as starting point M and the first point of the secondarily optimized path data as end point N; with the road boundary and obstacle positions known, the A* path-finding algorithm finds the shortest path from M to N, realizing real-time road detection and tracking of the intelligent vehicle in sparse-navigation-map mode.
It should be noted that the AMap navigation path and the optimized path are discrete points; adjacent path points are 4 to 6 meters apart, and in this embodiment the A* algorithm is used to avoid obstacles within this range.
Fig. 11 shows the A* route-finding trajectory of the intelligent vehicle on the optimized path. The white square area is an obstacle detected by the lidar, the white circular dotted points are the secondarily optimized path, the first white dotted point is the current position (starting point) of the intelligent vehicle on the occupancy grid map, the third white dotted point is the first point (end point) of the secondarily optimized path data, and the red solid line is the intelligent vehicle's shortest safe-driving path obtained by the A* path-finding algorithm.
Example four
A third aspect of the present application, as this embodiment, provides an embedded processing system, including: a memory, a processor, and a computer program stored on the memory and executable on the processor (the computer program may be implemented to run on the robot operating system ROS); when executed by the processor, the computer program implements the steps of the sparse-navigation-map-based path planning optimization method described in any of the above embodiments.
The embedded processing system may include: at least one processor, at least one memory, at least one network interface, and other user interfaces. The various components are coupled together by a bus system. It will be appreciated that the bus system enables communication among these components; in addition to a data bus, it includes a power bus, a control bus, and a status-signal bus. The user interface may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, or touch pad, among others).
It will be appreciated that the memory in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In an embodiment of the present invention, the processor is configured to execute the method steps provided in the first aspect by calling a program or an instruction stored in the memory, specifically, a program or an instruction stored in an application program.
The method disclosed by the embodiment of the invention can be applied to a processor or realized by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software.
In addition, in combination with the sparse navigation map based path planning optimization method in the foregoing embodiments, an embodiment of the present invention may provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the sparse navigation map based path planning optimization method in any one of the above embodiments is implemented.
The embedded processing system in this embodiment is located in an intelligent vehicle, the intelligent vehicle may include an intelligent driving vehicle or an unmanned vehicle, etc., and this embodiment is not limited thereto, and various sensors or cameras, such as monocular cameras, radars, etc., may be provided in the intelligent vehicle of this embodiment, and these structures may all be in communication connection or physical connection with the embedded processing system, which implements a path planning optimization method for the intelligent vehicle.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related descriptions of the above-described apparatus may refer to the corresponding process in the foregoing method embodiments, and are not described herein again.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (10)

1. A route planning optimization method based on a sparse navigation map is characterized by comprising the following steps:
b10, the intelligent vehicle vision system acquires obstacle information sensed by the sensing equipment on the intelligent vehicle and driving image information of a designated area in front of the intelligent vehicle;
b20, the intelligent vehicle vision system converts the obstacle information into an RTK (real-time kinematic) geodetic coordinate system, determines a passable area in the driving image information and converts the determined passable area into the RTK geodetic coordinate system at the same time, and digital road information of the passable area in the RTK geodetic coordinate system is obtained;
b30, acquiring sparse navigation data according to a known sparse navigation map and the position information of the intelligent vehicle;
b40, performing collision detection on the sparse navigation data according to the constraint condition of the digital road information of the passable area under the RTK coordinate system and the safety distance of the intelligent vehicle and the information of the obstacles in the two-dimensional longitude and latitude grid map so as to obtain optimized navigation data;
and B50, acquiring an optimal path planning scheme of the intelligent vehicle by adopting an A-star algorithm based on the optimized navigation data.
2. The method of claim 1, wherein B10 comprises:
b11, the intelligent vehicle vision system receives driving image information of a designated area in front of the intelligent vehicle;
and B12, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle.
3. The method of claim 2, wherein B20 comprises:
b21, segmenting and labeling the driving image information by a deep learning method, and determining a passable area and a non-passable area;
b22, converting the passable area based on a perspective projection transformation model, and acquiring digital road information of the passable area under RTK geodetic coordinates;
and B23, acquiring obstacle information based on the laser radar and the ultrasonic waves on the intelligent vehicle and converting the obstacle information into an RTK geodetic coordinate system.
4. The method of claim 3, wherein B22 comprises:
b22-1, converting a camera view of the driving scene of the intelligent vehicle into a virtual aerial view from top to bottom according to the perspective projection transformation model, and performing binarization processing on the aerial view;
b22-2, extracting the road boundary of the passable area by using a Canny edge detection operator based on the aerial view after binarization processing, and performing polynomial curve fitting to obtain a curve equation of the road boundary;
b22-3, acquiring the absolute position M(x_0, y_0, z_0) on the road projection plane P of each pixel (u, v) in the image coordinate system, based on the coordinate conversion relations among the road world coordinate system, the camera coordinate system and the image coordinate system, and thereby determining the absolute position of the passable-area road boundary on the road projection plane;
wherein the coordinate conversion relations among the road world coordinate system, the camera coordinate system and the image coordinate system are calculated using the known height of the camera above the road plane and the object-image relation;
b22-4, obtaining longitude and latitude coordinates of a camera coordinate system through coordinate transformation by utilizing the vehicle-mounted RTK longitude and latitude coordinates, realizing alignment transformation from boundary metric coordinates of a passable area under a road surface coordinate system to the RTK longitude and latitude coordinates, and obtaining road boundary longitude and latitude coordinates of the passable area and an azimuth angle theta of a road boundary fitting curve starting point under an RTK geodetic coordinate system;
and the longitude and latitude coordinates of the road boundary of the passable area under the geodetic coordinate system and the azimuth angle theta of the starting point of the fitting curve of the road boundary of the passable area form digital road information.
5. The method of claim 4, wherein B22-3 comprises:
suppose that in the intrinsic matrix of the camera the focal length f and optical center (u_0, v_0) are known, that the roll and pitch angles of the intelligent vehicle relative to the road surface are both 0, and that the camera height is z_c = h_cam; then, according to the camera model:
x_c = (u - u_0) * h_cam / (v - v_0),    y_c = f * h_cam / (v - v_0)    (1)
wherein the camera coordinate system and the road-surface coordinate system are parallel to each other, Y_C is the normal of the lens surface (the camera depth direction), P is the tangent plane of the road surface, Z_C is parallel to the road-surface normal, x and y are the projections of the camera axes X_C, Y_C onto the road surface, and O_Road is the projection of the camera origin O_Camera onto the road-surface tangent plane, so that the coordinates of point M on the road projection plane in the road-surface coordinate system are (x_c, y_c, 0); the pixel-to-meter scale conversion between the image coordinate system and the road-surface coordinate system is realized through formula (1), and the absolute position of the passable-area road boundary on the road projection plane is determined; the camera is a monocular camera mounted on the intelligent vehicle.
6. The method of claim 4, wherein B22-4 comprises:
wherein θ < 10° or θ > 350° corresponds to "east", and 10° < θ < 80° to "northeast";
80° < θ < 100° to "north", and 100° < θ < 170° to "northwest";
170° < θ < 190° to "west", and 190° < θ < 260° to "southwest";
260° < θ < 280° to "south", and 280° < θ < 350° to "southeast".
7. The method of claim 1, wherein B40 comprises:
b41, obtaining the road direction of each navigation point of the sparse navigation data through visual navigation, and vertically projecting the navigation data to form a new navigation path;
b42, carrying out collision detection on the new navigation path by taking the intelligent vehicle safety distance and the digital road information of the passable area under the RTK geodetic coordinate system as constraint conditions, and determining the path after the first optimization;
b43, establishing a two-dimensional longitude and latitude grid map according to the obstacle information under the RTK geodetic coordinate system;
b44, projecting the path after the first optimization in a two-dimensional longitude and latitude grid map to obtain path data;
and B45, performing collision detection on the projected path data and the obstacle information to obtain finally optimized navigation data.
8. The method of claim 1, wherein B50 comprises:
taking the position of the current intelligent vehicle on the grid map as a starting point M and the first point of the optimized navigation data as an end point N, and, with the road boundary and obstacle position information known, finding the shortest path from point M to point N using the A* routing algorithm, thereby realizing real-time road planning of the intelligent vehicle in the sparse navigation map mode;
wherein the starting point and the end point are two adjacent points on the optimized navigation path, and the shortest navigation path is formed by connecting discrete path points.
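The A* routing step of this claim can be sketched on an occupancy grid as follows. The 4-connected neighborhood, the Manhattan-distance heuristic, and the function name are assumptions not fixed by the claim.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid
    (0 = free cell, 1 = obstacle) with a Manhattan-distance
    heuristic. start and goal are (row, col) cells. Returns the
    list of cells from start to goal, or None if unreachable."""
    heur = lambda a: abs(a[0] - goal[0]) + abs(a[1] - goal[1])
    tie = count()  # tie-breaker so the heap never compares cells directly
    frontier = [(heur(start), next(tie), 0, start, None)]
    came_from = {}           # cell -> predecessor on the best path found
    best_g = {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue         # already expanded at equal or lower cost
        came_from[cur] = parent
        if cur == goal:      # reconstruct path back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + heur((nr, nc)), next(tie), ng, (nr, nc), cur))
    return None
```

On a 3x3 grid with a wall across most of the middle row, the planner detours around the wall, producing a 7-cell path.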
9. An embedded processing system, comprising: a memory having a computer program stored thereon, and a processor executing the computer program stored in the memory to perform the steps of the path planning optimization method based on a sparse navigation map of any one of claims 1 to 8.
10. An intelligent vehicle, comprising a monocular camera mounted at the front of the intelligent vehicle, a lidar, and a plurality of sensors, and further comprising the embedded processing system of claim 9, wherein the monocular camera, the lidar, and the plurality of sensors all communicate with the embedded processing system.
CN202111057852.0A 2021-09-09 2021-09-09 Path planning optimization method based on sparse navigation map Active CN115774444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111057852.0A CN115774444B (en) 2021-09-09 2021-09-09 Path planning optimization method based on sparse navigation map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111057852.0A CN115774444B (en) 2021-09-09 2021-09-09 Path planning optimization method based on sparse navigation map

Publications (2)

Publication Number Publication Date
CN115774444A true CN115774444A (en) 2023-03-10
CN115774444B CN115774444B (en) 2023-07-25

Family

ID=85388279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111057852.0A Active CN115774444B (en) 2021-09-09 2021-09-09 Path planning optimization method based on sparse navigation map

Country Status (1)

Country Link
CN (1) CN115774444B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470142A (en) * 2018-01-30 2018-08-31 西安电子科技大学 Lane location method based on inverse perspective projection and track distance restraint
CN110398979A (en) * 2019-06-25 2019-11-01 天津大学 A kind of unmanned engineer operation equipment tracking method and device that view-based access control model is merged with posture
CN111208839A (en) * 2020-04-24 2020-05-29 清华大学 Fusion method and system of real-time perception information and automatic driving map
CN111332285A (en) * 2018-12-19 2020-06-26 长沙智能驾驶研究院有限公司 Method and device for vehicle to avoid obstacle, electronic equipment and storage medium
CN111928862A (en) * 2020-08-10 2020-11-13 廊坊和易生活网络科技股份有限公司 Method for constructing semantic map on line by fusing laser radar and visual sensor
CN113255520A (en) * 2021-05-25 2021-08-13 华中科技大学 Vehicle obstacle avoidance method based on binocular vision and deep learning and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311095A (en) * 2023-03-16 2023-06-23 广州市衡正工程质量检测有限公司 Pavement detection method based on region division, computer equipment and storage medium
CN116311095B (en) * 2023-03-16 2024-01-02 广州市衡正工程质量检测有限公司 Pavement detection method based on region division, computer equipment and storage medium
CN115984279A (en) * 2023-03-20 2023-04-18 银河航天(北京)网络技术有限公司 Path determining method, system, electronic equipment and storage medium
CN116189114A (en) * 2023-04-21 2023-05-30 西华大学 Method and device for identifying collision trace of vehicle

Also Published As

Publication number Publication date
CN115774444B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
US11288521B2 (en) Automated road edge boundary detection
US11852729B2 (en) Ground intensity LIDAR localizer
US20210311486A1 (en) Navigation by augmented path prediction
CN115774444B (en) Path planning optimization method based on sparse navigation map
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
CN111492403A (en) Lidar to camera calibration for generating high definition maps
GB2613692A (en) Systems and methods for vehicle navigation
US20220363263A1 (en) Automated bump and/or depression detection in a roadway
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
WO2022041706A1 (en) Positioning method, positioning system, and vehicle
Hervieu et al. Road side detection and reconstruction using LIDAR sensor
US20230195122A1 (en) Systems and methods for map-based real-world modeling
Rieken et al. Toward perception-driven urban environment modeling for automated road vehicles
Li et al. Hybrid filtering framework based robust localization for industrial vehicles
US20230206608A1 (en) Systems and methods for analyzing and resolving image blockages
CN115797900B (en) Vehicle-road gesture sensing method based on monocular vision
Burger et al. Unstructured road slam using map predictive road tracking
KR102316818B1 (en) Method and apparatus of updating road network
Tian et al. Vision-based mapping of lane semantics and topology for intelligent vehicles
Hongbo et al. Relay navigation strategy study on intelligent drive on urban roads
Yuan et al. 3D traffic scenes construction and simulation based on scene stages
Zhao Recognizing features in mobile laser scanning point clouds towards 3D high-definition road maps for autonomous vehicles
KR20220151572A (en) Method and System for change detection and automatic updating of road marking in HD map through IPM image and HD map fitting
Ali et al. Path navigation of mobile robot in a road roundabout setting
Andersen et al. Vision assisted laser scanner navigation for autonomous robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Path Planning Optimization Method Based on Sparse Navigation Maps

Effective date of registration: 20231116

Granted publication date: 20230725

Pledgee: Bazhou Financing Guarantee Co.,Ltd.

Pledgor: Langfang Heyi Life Network Technology Co.,Ltd.

Registration number: Y2023980066232