CN116499477B - Map fusion method, device, medium and vehicle


Info

Publication number
CN116499477B
Authority
CN
China
Prior art keywords
map
information
vehicle
lane
real
Prior art date
Legal status
Active
Application number
CN202310746047.1A
Other languages
Chinese (zh)
Other versions
CN116499477A (en)
Inventor
袁鹏飞
李志伟
豆家敏
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202310746047.1A
Publication of CN116499477A
Application granted
Publication of CN116499477B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G01C 21/3833 Creation or updating of map data characterised by the source of data
    • G01C 21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/56 Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/418 Document matching, e.g. of document images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V 30/422 Technical drawings; Geographical maps
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The disclosure relates to a map fusion method, device, medium and vehicle, belonging to the technical field of automatic driving. It addresses the problem that, where no lane lines exist inside an intersection, a vehicle may fail to pass through when turning left or right or when the intersection is a large irregular one. The method comprises the following steps: acquiring a real-time perception map, which is a vector map established by the vehicle according to a first perception result of a first target area; in the case that the first perception result includes intersection information, matching the real-time perception map with a gray map to obtain an initial fusion map, the gray map being a vector map established by the vehicle during a historical journey according to a second perception result of a second target area; and fusing the navigation map of the vehicle with the initial fusion map to obtain a target fusion map. Based on the target fusion map, the vehicle can make decisions such as positioning, lane line supplementation and trajectory planning within the intersection, thereby realizing automatic driving in multiple scenarios.

Description

Map fusion method, device, medium and vehicle
Technical Field
The disclosure relates to the technical field of automatic driving, in particular to a map fusion method, device, medium and vehicle.
Background
In automatic driving, a vehicle can realize adaptive cruise by combining millimeter-wave radar and camera detection, which covers most highway and urban expressway scenarios.
In the related art, when there is no obstacle ahead of the vehicle, the vehicle's camera recognizes the lane lines in a local area and a trajectory line is planned from them for the vehicle to follow. However, trajectory planning depends on lane line recognition, the camera's detection distance for lane lines is limited, and where no lane lines exist inside an intersection the vehicle may fail to pass through when turning left or right or when the intersection is a large irregular one.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a map fusion method, device, medium and vehicle.
According to a first aspect of the embodiments of the present disclosure, there is provided a map fusion method, including:
acquiring a real-time perception map, wherein the real-time perception map is a vector map established by a vehicle according to a first perception result of a first target area;
in the case that the first perception result comprises intersection information, matching the real-time perception map with a gray map to obtain an initial fusion map; the gray map is a vector map established by the vehicle in a historical journey according to a second perception result of a second target area, and the second target area comprises the first target area;
and fusing the navigation map of the vehicle with the initial fusion map to obtain a target fusion map.
Optionally, matching the real-time perception map with the gray map to obtain an initial fusion map includes:
performing preliminary matching on the real-time perception map and the gray map according to the positioning information corresponding to the first perception result to obtain an intermediate matching result;
establishing a first constraint condition according to first lane line information of the real-time perception map and second lane line information of the gray map, wherein the first constraint condition is used for aligning the lane lines in the real-time perception map with the lane lines in the gray map;
establishing a second constraint condition according to first stop line information of the real-time perception map and second stop line information of the gray map, wherein the second constraint condition is used for aligning the stop line of the real-time perception map with the stop line of the gray map;
and correcting the intermediate matching result according to the first constraint condition and the second constraint condition to obtain the initial fusion map.
Optionally, establishing the first constraint condition according to the first lane line information of the real-time perception map and the second lane line information of the gray map includes:
determining first distance information between a target edge of the road where the vehicle is located and a first lane line in the first lane line information, and determining second distance information between the target edge and a second lane line in the second lane line information, wherein the first lane line is any one of the lane lines corresponding to the first lane line information, and the second lane line is the lane line corresponding to the first lane line among the lane lines corresponding to the second lane line information;
and determining a gap between the first distance information and the second distance information, and establishing the first constraint condition according to a relation between the gap and a preset gap threshold.
Optionally, establishing the second constraint condition according to the first stop line information of the real-time perception map and the second stop line information of the gray map includes:
determining third distance information between a stop line corresponding to the first stop line information and a stop line corresponding to the second stop line information, and establishing the second constraint condition according to a relation between the third distance information and a preset distance threshold.
Optionally, performing preliminary matching on the real-time perception map and the gray map to obtain an intermediate matching result includes:
performing preliminary matching on the gray map and the real-time perception map to obtain an initial matching result;
and deleting lane line information of the reverse driving roads of the vehicle from the initial matching result to obtain the intermediate matching result, wherein a reverse driving road is a road whose specified driving direction is opposite to that of the road where the vehicle is located and/or opposite to that of a drivable road corresponding to the road where the vehicle is located.
Optionally, the method further comprises:
determining, in the initial fusion map, a first target lane of the road where the vehicle is located and a second target lane of a drivable road corresponding to the road where the vehicle is located;
connecting the first target lane with the second target lane to generate a drivable path;
and in the case that there are a plurality of drivable paths, assigning a recommendation value to each drivable path according to preset measurement conditions to obtain a recommended driving lane, so that the vehicle plans its driving path according to the recommended driving lane and the dynamic obstacle information contained in the first perception result.
Optionally, before matching the real-time perception map with the gray map to obtain the initial fusion map in the case that the first perception result includes intersection information, the method further includes:
determining that the first perception result includes intersection information in the case that at least one of a stop line, a zebra crossing and a traffic light is recognized in the first perception result; and/or,
determining that the first perception result includes intersection information in the case that, in the real-time perception map established by the vehicle according to the first perception result, the position of the extended end point of any lane line remains unchanged.
Optionally, fusing the navigation map of the vehicle with the initial fusion map to obtain a target fusion map includes:
determining a first lane number corresponding to third lane information in the navigation map and a second lane number corresponding to fourth lane information in the initial fusion map;
and in the case that the first lane number is the same as the second lane number, fusing the navigation map with the initial fusion map to obtain the target fusion map.
Optionally, the method further comprises:
in the case that the first lane number is different from the second lane number, correcting the attribute information of the lanes in the fourth lane information according to the attribute information of the lanes in the third lane information to obtain a corrected navigation map, and fusing the corrected navigation map with the initial fusion map to obtain the target fusion map.
According to a second aspect of the embodiments of the present disclosure, there is provided a fusion apparatus of a map, including:
the acquisition module is configured to acquire a real-time perception map, wherein the real-time perception map is a vector map established by a vehicle according to a first perception result of a first target area;
the matching module is configured to match the real-time perception map with the gray map to obtain an initial fusion map in the case that the first perception result includes intersection information; the gray map is a vector map established by the vehicle during a historical journey according to a second perception result of a second target area, and the second target area includes the first target area;
and the fusion module is configured to fuse the navigation map of the vehicle with the initial fusion map to obtain a target fusion map.
According to a third aspect of the disclosed embodiments, there is provided a computer readable storage medium having stored thereon computer program instructions which when executed by a processor implement the steps of the method of the first aspect of the disclosed embodiments.
According to a fourth aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a storage device having a computer program stored thereon;
control means for executing said computer program in said storage means to carry out the steps of the method according to the first aspect of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
in the embodiments of the present disclosure, a real-time perception map established by the vehicle in real time for a first target area can be matched with a gray map established by the vehicle during a historical journey for a second target area to obtain an initial fusion map. Because the second target area includes the first target area, i.e., the second target area is larger than the first target area, the vehicle's real-time perception map can be supplemented and refined based on the gray map. The initial fusion map is then supplemented with information from the navigation map to obtain the target fusion map. The target fusion map thus fuses the real-time perception map built from the vehicle's real-time perception, the gray map built from perception during the vehicle's historical journeys, and the navigation map, so that the map data within the intersection is more complete. Even when the vehicle turns left or right or the intersection is a large irregular one, the vehicle can make decisions such as positioning, lane line supplementation and trajectory planning within the intersection based on the target fusion map, thereby realizing automatic driving in multiple scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a method of fusing maps according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a real-time perception map and a gray map, shown according to an exemplary embodiment.
Fig. 3 is a schematic diagram showing a travelable road on which a vehicle is located according to an exemplary embodiment.
Fig. 4 is a schematic diagram illustrating a drivable path of a vehicle according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a map fusing apparatus according to an exemplary embodiment.
FIG. 6 is a functional block diagram of a vehicle, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
To reduce driver fatigue and improve driving comfort, a vehicle can provide assisted driving through adaptive cruise. Unlike a common cruise system, an adaptive cruise system can automatically lock onto the speed of the vehicle ahead, accelerating as it accelerates and decelerating as it decelerates, so the function is relatively dependent on the vehicle ahead.
Adaptive cruise assisted driving can be realized by millimeter-wave radar detection, by camera detection, or by a detection mode fusing millimeter-wave radar with a camera. The millimeter-wave radar mode is limited by whether there is a vehicle ahead; when there is none, its use is restricted. The camera mode is easily affected by the environment, and its distance detection accuracy is lower than that of millimeter-wave radar.
Current driving assistance systems realize adaptive cruise using millimeter-wave radar together with a forward-looking camera, which covers most highway and urban expressway scenarios.
In non-intersection scenarios, when no vehicle is ahead, lane lines are detected by the camera and a path line is then predicted and planned for the vehicle to follow; this planning depends on the recognition of the lane lines.
In intersection scenarios, there are no lane lines inside the intersection because it lies beyond the stop line. The camera's detection distance for lane lines is limited, so the vehicle may fail to pass through the intersection when turning left or right or when the intersection is a large irregular one.
In one approach, a lane line planning module virtually extends the lane lines by a certain distance, and a trajectory line through the intersection is then planned from the virtual lane lines. But this works only when the virtually extended lane lines can connect to the lane lines of the target road; once they cannot connect, the vehicle cannot pass through the intersection, and the approach supports only going straight, not left or right turns. In another approach, a front vehicle is assumed to pass through the intersection first and is detected and tracked by the camera and millimeter-wave radar so that the ego vehicle can follow it through; but with many vehicles in the intersection, the front vehicle is easily lost (for example, occluded), causing target tracking to fail and the passage through the intersection to fail.
In addition, due to regulatory restrictions, high-precision maps carrying latitude and longitude information are currently unavailable for some areas.
To solve the problems in the related art, embodiments of the present disclosure provide a map fusion method, device, medium and vehicle; the technical solutions of the embodiments are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a map fusion method according to an exemplary embodiment. As shown in fig. 1, the map fusion method is used in a terminal and includes the following steps.
In step S101, a real-time perception map is acquired, the real-time perception map being a vector map established by the vehicle according to a first perception result of the first target area.
The real-time perception map is a vector map established in real time by the vehicle from the first perception result of the first target area. A vector map, also called an object-oriented or draw-type image, is composed of a series of points connected by lines. Each graphical object in a vector map is a self-contained entity drawn from geometric characteristics, with properties such as color, shape, outline, size and screen position. A vector element may be a point or a line generated by software; its file occupies little storage space, because such a file contains separate, independent images that can be freely recombined without restriction. A characteristic of vector maps is that enlarging the image causes no distortion and is independent of resolution.
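To make the following steps concrete, a minimal sketch of how such a perceived vector map might be represented is given below; the class and field names are illustrative assumptions rather than the actual data model of the present disclosure.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Polyline:
    """A map element stored as a series of 2D points connected by lines."""
    points: List[Tuple[float, float]]  # (x, y) in metres, vehicle frame


@dataclass
class VectorMap:
    """Vector map built from one perception result of a target area."""
    lane_lines: List[Polyline] = field(default_factory=list)
    stop_lines: List[Polyline] = field(default_factory=list)
    road_edges: List[Polyline] = field(default_factory=list)

    def scaled(self, factor: float) -> "VectorMap":
        """Rescale without distortion, the property the description
        attributes to vector maps (no relation to resolution)."""
        def s(pl: Polyline) -> Polyline:
            return Polyline([(x * factor, y * factor) for x, y in pl.points])
        return VectorMap([s(p) for p in self.lane_lines],
                         [s(p) for p in self.stop_lines],
                         [s(p) for p in self.road_edges])
```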
In step S102, in the case that the first perception result includes intersection information, the real-time perception map is matched with the gray map to obtain an initial fusion map, where the gray map is a vector map established by the vehicle according to a second perception result of a second target area during a historical journey, and the second target area includes the first target area.
For example, in one embodiment, before the vehicle drives the corresponding route automatically, the lane lines and road edges of the intersections the vehicle needs to pass can be modeled simply by terminals other than the vehicle, for example from data stored in the cloud by other vehicles or by computer modeling from an existing high-precision map, and converted to absolute geographic positions through GPS (Global Positioning System) to generate a gray map; the absolute position precision of the gray map reaches the meter level, and its relative precision at a given intersection is consistent with the vehicle-side real-time mapping. In another embodiment, the vector maps generated from real-time perception results during the vehicle's historical journeys can be stored offline at the vehicle side, or stored online at the vehicle side or in the cloud, to generate a gray map, so that the real-time perception map can be supplemented during the vehicle's next journey.
It can be understood that the lane line, lane and navigation information in the gray map is more complete than in the real-time perception map. For example, the real-time perception map may contain only the lane line information of the road where the vehicle is located, which can be interpreted as the road between the previous intersection and the next intersection along the vehicle's journey, while the gray map contains the lane line information both of that road and of the other roads.
An image of the gray map may be as shown in fig. 2, where the dashed box labeled A is the first target area, the dashed box labeled B is the second target area, and the circle is the position of the vehicle.
In step S103, a navigation map of the vehicle is fused with the initial fusion map to obtain a target fusion map.
The navigation map of the vehicle is a standard map composed mainly of relatively coarse-grained road topology information; its accuracy requirement is low, generally about 15 meters.
For example, the navigation information and lane information of the navigation map may be added directly to the initial fusion map, or processed first and then added, to obtain the target fusion map. The vehicle can then plan routes and make decisions according to the target fusion map, for example decelerating in advance according to the distance from the current location to the next traffic light, changing lanes in advance for the next maneuver, and selecting a lane according to the lane information.
In the embodiments of the present disclosure, a real-time perception map established by the vehicle in real time for a first target area can be matched with a gray map established by the vehicle during a historical journey for a second target area to obtain an initial fusion map. Because the second target area includes the first target area, i.e., the second target area is larger, the vehicle's real-time perception map can be supplemented and refined based on the gray map. The initial fusion map is then supplemented with information from the navigation map to obtain the target fusion map. The target fusion map thus fuses the real-time perception map, the gray map built from perception during historical journeys, and the navigation map, so that the map data within the intersection is more complete; even when the vehicle turns left or right or the intersection is a large irregular one, the vehicle can make decisions such as positioning, lane line supplementation and trajectory planning within the intersection based on the target fusion map, thereby realizing automatic driving in multiple scenarios.
For example, in the case that the first perception result does not include intersection information, the vehicle models in real time according to the first perception result, extends the lane lines in the established real-time perception map, and fuses in the navigation map to complete the map fusion.
In some embodiments, matching the real-time perception map with the gray map to obtain an initial fusion map includes:
performing preliminary matching on the real-time perception map and the gray map according to the positioning information corresponding to the first perception result to obtain an intermediate matching result;
establishing a first constraint condition according to the first lane line information of the real-time perception map and the second lane line information of the gray map, wherein the first constraint condition is used for aligning the lane lines in the real-time perception map with the lane lines in the gray map;
establishing a second constraint condition according to the first stop line information of the real-time perception map and the second stop line information of the gray map, wherein the second constraint condition is used for aligning the stop line of the real-time perception map with the stop line of the gray map;
and correcting the intermediate matching result according to the first constraint condition and the second constraint condition to obtain an initial fusion map.
In an example, during driving, the first target area of the first perception result can be initially aligned with its counterpart within the second target area of the gray map according to the positioning information corresponding to the first perception result. However, because the precision of both the gray map and the real-time perception map is at the meter or ten-meter level, the matching position precision of the two maps is also at the meter or ten-meter level, i.e., the matching accuracy is low. The intermediate matching result after initial alignment can therefore be corrected according to the established first constraint condition and second constraint condition; after correction the real-time perception map and the gray map match better, that is, the initial fusion map obtained by supplementing and refining the real-time perception map from the gray map is more accurate.
In some embodiments, establishing the first constraint condition according to the first lane line information of the real-time perception map and the second lane line information of the gray map includes:
determining first distance information between a target edge of the road where the vehicle is located and a first lane line in the first lane line information, and determining second distance information between the target edge and a second lane line in the second lane line information, wherein the first lane line is any one of the lane lines corresponding to the first lane line information, and the second lane line is the lane line corresponding to the first lane line among the lane lines corresponding to the second lane line information;
and determining a gap between the first distance information and the second distance information, and establishing the first constraint condition according to a relation between the gap and a preset gap threshold.
The target edge of the road where the vehicle is located may be any one of a lane edge line, a lane dividing line, a lane center line, or an edge of the road. Lane lines include lane edge lines, lane center lines and lane dividing lines, commonly drawn as solid lines, broken lines and double yellow lines; by recognizing the lane line type, the driving assistance system can identify the current lane position. There are two or more lane lines in each of the first lane line information and the second lane line information, and their numbers are the same. Taking the target edge as the reference, the first distance information between any first lane line in the first lane line information and the target edge is calculated, together with the second distance information between the corresponding second lane line in the second lane line information and the target edge.
For example, a gap may exist between the first distance information and the second distance information due to a deviation in the camera's shooting angle or a calculation error of the vehicle. The gap can therefore be required to stay within a preset gap threshold, yielding the first constraint condition, which puts all first lane lines in the first lane line information into one-to-one correspondence with the second lane lines in the second lane line information and achieves alignment along the road's extending direction. The preset gap threshold may be in the range of 0.1-0.5 m.
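As a hedged illustration of this first constraint, the sketch below compares, for each lane-line pair, the lateral offset from the target edge in the two maps and keeps gaps exceeding the preset threshold as alignment residuals; the road-aligned frame, the scalar representation of each lane line by its lateral position, and all names are assumptions, not the patent's implementation.

```python
from __future__ import annotations
from typing import List

GAP_THRESHOLD_M = 0.3  # preset gap threshold; the description suggests 0.1-0.5 m


def first_constraint_residuals(edge_y: float,
                               realtime_lines_y: List[float],
                               gray_lines_y: List[float]) -> List[float]:
    """Build the first constraint in a road-aligned frame where y is the
    lateral position. realtime_lines_y[i] and gray_lines_y[i] are assumed
    to be corresponding lane lines (first and second lane line information)."""
    residuals = []
    for rt_y, gm_y in zip(realtime_lines_y, gray_lines_y):
        first_distance = rt_y - edge_y    # first distance information
        second_distance = gm_y - edge_y   # second distance information
        gap = first_distance - second_distance
        if abs(gap) > GAP_THRESHOLD_M:    # relation to the preset gap threshold
            residuals.append(gap)         # misalignment to be corrected
    return residuals
```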
In other embodiments, first attribute information of the lane where the vehicle is located may be determined from the first lane line information, and second attribute information of each lane of the corresponding target road in the gray map may be determined from the second lane line information; the lane lines on either side of the vehicle are then identified and matched with the corresponding second lane lines according to the first and second attribute information to establish the first constraint condition, where the attribute information distinguishes left-turn, straight and right-turn lanes.
In some embodiments, establishing the second constraint condition according to the first stop line information of the real-time perception map and the second stop line information of the gray map includes:
determining third distance information between a stop line corresponding to the first stop line information and a stop line corresponding to the second stop line information, and establishing the second constraint condition according to a relation between the third distance information and a preset distance threshold.
For example, the first perception result including intersection information indicates that the vehicle has arrived at or is about to arrive at an intersection. At an intersection, the end points of all lane lines of a road are connected by a stop line, which is typically perpendicular to the lane lines. From the first stop line information recognized in the first perception result and the second stop line information from the vehicle's historical journey, the positions of the two stop lines can be determined, and with them the third distance information between the stop line corresponding to the first stop line information and the stop line corresponding to the second stop line information. Requiring the third distance information to stay within the preset distance threshold yields the second constraint condition, which puts the two stop lines into one-to-one correspondence and achieves transverse alignment perpendicular to the road's extending direction. It can be understood that when the vehicle arrives at or approaches an intersection, the limited detection range of the camera means only the stop line of the vehicle's own road can be acquired, so the first stop line information and the second stop line information each correspond to a single stop line. A combined correction using both constraints is sketched below.
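The following is a minimal sketch of correcting the intermediate matching result with both constraints, assuming the lane-line gaps fix the across-road offset and the stop-line gap fixes the along-road offset; the simple averaging in place of a full least-squares solve, and the threshold value, are illustrative assumptions.

```python
from __future__ import annotations
from typing import List, Tuple

DISTANCE_THRESHOLD_M = 2.0  # preset distance threshold for the stop line (assumed value)


def correct_match(lane_gaps_y: List[float],
                  stop_line_gap_x: float) -> Tuple[float, float]:
    """Return an (along-road, across-road) correction for the gray map.

    lane_gaps_y     -- lateral gaps from the first constraint
    stop_line_gap_x -- third distance information from the second constraint;
                       a value beyond the threshold is treated as a mismatch
                       and contributes no correction here.
    """
    dy = sum(lane_gaps_y) / len(lane_gaps_y) if lane_gaps_y else 0.0
    dx = stop_line_gap_x if abs(stop_line_gap_x) <= DISTANCE_THRESHOLD_M else 0.0
    return dx, dy
```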
In some embodiments, performing preliminary matching on the real-time perception map and the gray map to obtain an intermediate matching result includes:
performing preliminary matching on the gray map and the real-time perception map to obtain an initial matching result;
and deleting lane line information of the reverse driving roads of the vehicle from the initial matching result to obtain an intermediate matching result, wherein a reverse driving road is a road whose specified driving direction is opposite to that of the road where the vehicle is located and/or opposite to that of a drivable road corresponding to the road where the vehicle is located.
For example, as shown in fig. 3, where the circle is the position of the vehicle and the double solid line is the dividing line, the roads marked C are the drivable roads for the road where the vehicle is located; these can be interpreted as all roads onto which the vehicle may drive at the intersection, as defined by the rule that vehicles drive on the right side of the road. In fig. 3 they are the right-side portion of the road opposite the vehicle's road, the upper portion of the left-side road (the road reached by turning left) and the lower portion of the right-side road (the road reached by turning right). In general, a drivable road may also be defined as a road at the next intersection of the vehicle's road that does not include a stop line facing the vehicle.
In an example, during driving, the first target area of the first perception result can be initially aligned with its counterpart within the second target area of the gray map according to the positioning information corresponding to the first perception result, yielding the initial matching result; pruning of the reverse driving roads is sketched below.
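A sketch of the pruning step follows; representing each matched lane line by its specified driving heading, and the 90-degree same-direction test, are assumed implementation details that cover the vehicle's own road, with the drivable roads handled analogously using their own reference headings.

```python
from __future__ import annotations
import math
from typing import Dict, List


def prune_reverse_lanes(matched_lane_lines: List[Dict],
                        reference_heading_rad: float) -> List[Dict]:
    """Drop lane lines whose specified driving direction opposes the
    reference heading (the vehicle's road, or a drivable road's heading).
    Each lane line is assumed to carry a 'heading' key in radians."""
    kept = []
    for line in matched_lane_lines:
        # Wrap the heading difference into (-pi, pi].
        diff = (line["heading"] - reference_heading_rad + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) < math.pi / 2:   # same general direction: keep
            kept.append(line)
    return kept
```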
In some embodiments, the map fusion method further comprises:
determining, in the initial fusion map, a first target lane of the road where the vehicle is located and a second target lane of a drivable road corresponding to the road where the vehicle is located;
connecting the first target lane with the second target lane to generate a drivable path;
and in the case that there are a plurality of drivable paths, assigning a recommendation value to each drivable path according to preset measurement conditions to obtain a recommended driving lane, so that the vehicle plans its driving path according to the recommended driving lane and the dynamic obstacle information contained in the first perception result.
For example, when the vehicle has not yet reached the intersection, the first target lane of the road where the vehicle is located may be all lanes in the lane information at the next intersection; when the vehicle has reached the intersection, the first target lane may be the lane the vehicle occupies, but so that the vehicle can plan its travel trajectory in advance, the first target lane may still be determined as all lanes in the lane information at the next intersection. The second target lane belongs to a drivable road corresponding to the road where the vehicle is located. The drivable paths formed by connecting the first target lane with the second target lane are shown in fig. 4, where the circle is the position of the vehicle and each connecting solid line corresponds to one drivable path; fig. 4 illustrates only the drivable paths between all lanes of the vehicle's road and several drivable roads, and the connections between the remaining lanes and the drivable roads are not repeated here.
For example, the preset measurement conditions may be the smoothness of the path between the road where the vehicle is located and the target location, the distance between the road and the target location, the distance between the road and the drivable road, and so on; specific recommendation values may be set as required and are not limited here.
For example, after the recommended driving lane is obtained, further planning is performed according to the dynamic obstacle information of the first perception result in the real-time perception map, so that the vehicle makes avoid/follow decisions for dynamic obstacles such as vehicles ahead and pedestrians while driving, completing the planned driving route. The selection of the recommended path is sketched below.
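A hedged sketch of path generation and recommendation: every lane of the vehicle's road is connected to every lane of the drivable roads, each candidate path receives a recommendation value from the preset measurement conditions, and the highest-scoring path yields the recommended driving lane. The scoring callback and the toy metric are assumptions.

```python
from __future__ import annotations
from itertools import product
from typing import Callable, List, Tuple


def recommend_path(first_target_lanes: List[str],
                   second_target_lanes: List[str],
                   score: Callable[[str, str], float]) -> Tuple[str, str]:
    """score(entry_lane, exit_lane) returns the recommendation value, e.g. a
    weighted sum of path smoothness, distance to the destination and distance
    to the drivable road."""
    candidates = list(product(first_target_lanes, second_target_lanes))
    return max(candidates, key=lambda path: score(*path))


# Toy usage: prefer paths whose entry and exit lane indices match.
best = recommend_path(["in_0", "in_1"], ["out_0", "out_1"],
                      lambda a, b: -abs(int(a[-1]) - int(b[-1])))
print(best)  # ('in_0', 'out_0')
```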
In some embodiments, before matching the real-time perception map with the gray map to obtain the initial fusion map in the case that the first perception result includes intersection information, the map fusion method further includes:
determining that the first perception result includes intersection information in the case that at least one of a stop line, a zebra crossing and a traffic light is recognized in the first perception result; and/or,
determining that the first perception result includes intersection information in the case that, in the real-time perception map established by the vehicle according to the first perception result, the position of the extended end point of any lane line remains unchanged.
For example, in one instance, the camera and lidar may be used to recognize traffic lights, stop lines, zebra crossings and similar information to determine that the vehicle has arrived at or is about to arrive at an intersection. Traffic lights comprise red, green and yellow lights and also assist navigation. Stop line and zebra crossing information likewise indicates that the vehicle has arrived at or is about to arrive at an intersection. The vehicle can be controlled to drive automatically according to at least one of the stop line, the zebra crossing and the traffic light, and combining this information with lidar-based recognition of obstacles and the like allows the vehicle's passage to be controlled better. Both intersection tests are sketched below.
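The sketch assumes the perception result is a dictionary of detected elements and that the world-frame position of a lane line's extended end point is tracked over recent frames; all field names and the tolerance are assumptions.

```python
from __future__ import annotations
from typing import Dict, List


def includes_intersection(perception: Dict,
                          lane_end_x_history: List[float],
                          eps_m: float = 0.5) -> bool:
    """First test: a stop line, zebra crossing or traffic light was recognised.
    Second test: the extended end point of a lane line keeps the same
    world-frame position while the vehicle approaches, i.e. the lane line
    genuinely ends there rather than merely leaving the sensor range."""
    if any(perception.get(k) for k in ("stop_line", "zebra_crossing", "traffic_light")):
        return True
    if len(lane_end_x_history) >= 2:
        drift = max(lane_end_x_history) - min(lane_end_x_history)
        if drift < eps_m:
            return True
    return False
```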
In some embodiments, fusing the navigation map of the vehicle with the initial fusion map to obtain a target fusion map includes:
determining a first lane number corresponding to third lane information in the navigation map and a second lane number corresponding to fourth lane information in the initial fusion map;
and in the case that the first lane number is the same as the second lane number, fusing the navigation map with the initial fusion map to obtain the target fusion map.
Illustratively, the navigation map includes navigation information and the third lane information. The third lane information may include the lane information at the next intersection of the vehicle's road along the planned navigation route, for example whether each lane is a left-turn, straight or right-turn lane. The navigation information includes road condition information such as driving speed, lane changes and speed limits, for example the distance from the vehicle's position to the next restriction sign (navigation broadcasts such as "red light camera after 100 meters" or "violation camera after 500 meters") and the vehicle's next maneuver together with its distance (such as "turn right after 100 meters"). The fourth lane information in the initial fusion map contains the lane information of the vehicle's road when the vehicle is far from the next intersection, and the lane information at the next intersection when the vehicle is near it.
For example, when the first lane number corresponding to the third lane information in the navigation map is determined to be the same as the second lane number corresponding to the fourth lane information in the initial fusion map, the navigation map may be fused directly with the initial fusion map to obtain the target fusion map.
In some embodiments, the map fusion method further comprises:
in the case that the first lane number is different from the second lane number, correcting the attribute information of the lanes in the fourth lane information according to the attribute information of the lanes in the third lane information to obtain a corrected navigation map, and fusing the corrected navigation map with the initial fusion map to obtain the target fusion map.
For example, the lane information in the navigation map indicates the lanes at the next intersection of the vehicle's road, while the fourth lane information in the initial fusion map may include only the lanes of the vehicle's current road when the vehicle is far from the next intersection, so the lane count obtained from the initial fusion map may differ from that of the navigation map. In that case the information of each lane can be obtained by fusing a forward assignment with a reverse assignment. For example, if the vehicle's road has 3 lanes while the lane information lists 4 lanes at the next intersection, namely left-turn, straight, straight and right-turn from left to right, then forward assignment gives the 3 lanes the attributes left-turn, straight, straight, and reverse assignment gives straight, straight, right-turn; the fused attributes of the three lanes are left-turn-and-straight, straight, and straight-and-right-turn. For another example, if the vehicle's road has 4 lanes while the lane information lists 3 lanes, namely left-turn, straight and right-turn from left to right, then forward assignment gives left-turn, straight, right-turn, empty, and reverse assignment gives empty, left-turn, straight, right-turn; the fused attributes of the four lanes are left-turn, left-turn-and-straight, straight-and-right-turn, and right-turn. A sketch of this assignment follows.
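The forward and reverse assignments described above may be sketched as follows; merging the two assignments as a per-lane union is an assumption consistent with the worked examples.

```python
from __future__ import annotations
from typing import List, Set


def fuse_lane_attributes(nav_attrs: List[str], n_perceived: int) -> List[Set[str]]:
    """nav_attrs: lane attributes from the navigation map, left to right.
    n_perceived: lane count of the vehicle's road in the initial fusion map.
    Returns one merged attribute set per perceived lane."""
    fused: List[Set[str]] = [set() for _ in range(n_perceived)]
    k = min(n_perceived, len(nav_attrs))
    for i in range(k):                 # forward assignment: align left edges
        fused[i].add(nav_attrs[i])
    for i in range(1, k + 1):          # reverse assignment: align right edges
        fused[-i].add(nav_attrs[-i])
    return fused


# First example above: 3 perceived lanes, navigation lists left/straight/straight/right.
print(fuse_lane_attributes(["left", "straight", "straight", "right"], 3))
# [{'left', 'straight'}, {'straight'}, {'straight', 'right'}] (set display order may vary)
```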
Fig. 5 is a block diagram illustrating a map fusing apparatus according to an exemplary embodiment. Referring to fig. 5, the map fusion apparatus 500 includes an acquisition module 510, a matching module 520, and a fusion module 530.
The obtaining module 510 is configured to obtain a real-time perception map, which is a vector map established by the vehicle according to a first perception result of a first target area;
the matching module 520 is configured to match the real-time perception map with the gray map to obtain an initial fusion map in the case that the first perception result includes intersection information; the gray map is a vector map established by the vehicle during a historical journey according to a second perception result of a second target area, and the second target area includes the first target area;
the fusion module 530 is configured to fuse the navigation map of the vehicle with the initial fusion map to obtain a target fusion map.
In some embodiments, the matching module 520 includes:
the obtaining sub-module is configured to perform preliminary matching on the real-time perception map and the gray map according to the positioning information corresponding to the first perception result to obtain an intermediate matching result;
the first establishing sub-module is configured to establish a first constraint condition according to the first lane line information of the real-time perception map and the second lane line information of the gray map, wherein the first constraint condition is used for aligning the lane lines in the real-time perception map with the lane lines in the gray map;
the second establishing sub-module is configured to establish a second constraint condition according to the first stop line information of the real-time perception map and the second stop line information of the gray map, wherein the second constraint condition is used for aligning the stop line of the real-time perception map with the stop line of the gray map;
and the correction sub-module is configured to correct the intermediate matching result according to the first constraint condition and the second constraint condition to obtain an initial fusion map.
In some embodiments, the first setup submodule is specifically configured to:
determining first distance information between a target edge of a road where a vehicle is located and a first lane line in first lane line information, and determining second distance information between the target edge and a second lane line in second lane line information, wherein the first lane line is any lane line in a plurality of lane lines corresponding to the first lane line information, and the second lane line is a lane line corresponding to the first lane line in a plurality of lane lines corresponding to the second lane line information;
and determining a gap between the first distance information and the second distance information, and establishing a first constraint condition according to a relation between the gap and a preset gap threshold.
In some embodiments, the second setup submodule is specifically configured to:
third distance information between a stop line corresponding to the first stop line information and a stop line corresponding to the second stop line information is determined, and the second constraint condition is established according to the relation between the third distance information and a preset distance threshold.
In some embodiments, the obtaining submodule is specifically configured to:
preliminary matching is carried out on the gray map and the real-time perception map to obtain an initial matching result;
and lane line information of the reverse driving roads of the vehicle is deleted from the initial matching result to obtain an intermediate matching result, wherein a reverse driving road is a road whose specified driving direction is opposite to that of the road where the vehicle is located and/or opposite to that of a drivable road corresponding to the road where the vehicle is located.
In some embodiments, the fusion device 500 of the map further includes:
the first determining module is configured to determine a first target lane of a road where the vehicle is located and a second target lane of a travelable road corresponding to the road where the vehicle is located in the initial fusion map;
the generation module is configured to connect the first target lane with the second target lane and generate a drivable path;
the obtaining module is configured to assign a recommendation value to each drivable path according to preset measurement conditions in the case that there are a plurality of drivable paths, obtaining a recommended driving lane, so that the vehicle plans its driving path according to the recommended driving lane and the dynamic obstacle information contained in the first perception result.
In some embodiments, the fusion device 500 of the map further includes:
the identifying module is configured to determine that the first perception result includes intersection information in the case that at least one of a stop line, a zebra crossing and a traffic light is recognized in the first perception result; and/or,
the second determining module is configured to determine that the first perception result includes intersection information in the case that, in the real-time perception map established by the vehicle according to the first perception result, the position of the extended end point of any lane line remains unchanged.
In some embodiments, the fusion module 530 specifically includes:
the determining submodule is configured to determine the first number of lanes corresponding to third lane information in the navigation map and the second number of lanes corresponding to fourth lane information in the initial fusion map;
the obtaining sub-module is configured to fuse the navigation map with the initial fusion map to obtain a target fusion map in the case that the first lane number is the same as the second lane number.
In some embodiments, the fusion device 500 of the map further includes:
the obtaining module is configured to, in the case that the first lane number is different from the second lane number, correct the attribute information of the lanes in the fourth lane information according to the attribute information of the lanes in the third lane information to obtain a corrected navigation map, and fuse the corrected navigation map with the initial fusion map to obtain the target fusion map.
With respect to the map fusion apparatus 500 in the above embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the map fusion method, and will not be described in detail herein.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the fusion method of maps provided by the present disclosure.
The present disclosure also provides a vehicle including:
a storage device having a computer program stored thereon;
and the control device is used for executing the computer program in the storage device to realize the steps of the fusion method of the map.
Fig. 6 is a block diagram of a vehicle 600, according to an exemplary embodiment. For example, vehicle 600 may be a hybrid vehicle, but may also be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 600 may be an autonomous vehicle or a semi-autonomous vehicle.
Referring to fig. 6, a vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Wherein the vehicle 600 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 600 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 610 may include a communication system, an entertainment system, a navigation system, and the like.
The perception system 620 may include several sensors for sensing information about the environment surrounding the vehicle 600. For example, the perception system 620 may include a global positioning system (which may be a GPS system, a BeiDou system or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar and a camera device.
Decision control system 630 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 640 may include components that provide powered motion for the vehicle 600. In one embodiment, the drive system 640 may include an engine, an energy source, a transmission, and wheels. The engine may be an internal combustion engine, an electric motor, an air compression engine, or a combination thereof. The engine converts energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one processor 651 and memory 652, the processor 651 may execute instructions 653 stored in the memory 652.
The processor 651 may be any conventional processor, such as a commercially available CPU. The processor may also be, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 652 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
In addition to instructions 653, memory 652 may store data such as road maps, route information, vehicle location, direction, speed, and the like. The data stored by memory 652 may be used by computing platform 650.
In an embodiment of the present disclosure, the processor 651 may execute the instructions 653 to complete all or part of the steps of the map fusion method described above.
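As a reading aid, the steps above can be sequenced as follows; every helper name in this sketch is hypothetical, and the helpers are injected as parameters precisely because the disclosure does not prescribe their implementations:

```python
def fuse_maps(perception, localization, grayscale_map, navigation_map,
              build_vector_map, contains_intersection, lane_line_constraint,
              stop_line_constraint, match, refine, fuse_with_navigation):
    """End-to-end sketch of the map fusion pipeline (helpers injected)."""
    realtime_map = build_vector_map(perception)  # real-time perception map
    if not contains_intersection(perception):
        return None  # the constraints are only established at intersections
    c1 = lane_line_constraint(realtime_map, grayscale_map)  # align lane lines
    c2 = stop_line_constraint(realtime_map, grayscale_map)  # align stop lines
    matched = match(realtime_map, grayscale_map, localization)  # preliminary match
    initial_fusion = refine(matched, c1, c2)  # correct with both constraints
    return fuse_with_navigation(navigation_map, initial_fusion)  # target fusion map
```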
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described map fusion method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A map fusion method, comprising:
acquiring a real-time perception map, wherein the real-time perception map is a vector map established by a vehicle according to a first perception result of a first target area;
establishing, when the first perception result comprises intersection information, a first constraint condition according to first lane line information of the real-time perception map and second lane line information of a grayscale map, wherein the grayscale map is a vector map established by the vehicle in a historical journey according to a second perception result of a second target area, the second target area comprises the first target area, and the first constraint condition is used for aligning lane lines in the real-time perception map with lane lines in the grayscale map;
establishing a second constraint condition according to first stop line information of the real-time perception map and second stop line information of the grayscale map, wherein the second constraint condition is used for aligning a stop line of the real-time perception map with a stop line of the grayscale map;
matching the real-time perception map with the grayscale map according to positioning information corresponding to the first perception result, the first constraint condition, and the second constraint condition to obtain an initial fusion map;
and fusing a navigation map of the vehicle with the initial fusion map to obtain a target fusion map.
2. The map fusion method according to claim 1, wherein matching the real-time perception map with the grayscale map according to the positioning information corresponding to the first perception result, the first constraint condition, and the second constraint condition to obtain an initial fusion map comprises:
performing preliminary matching on the real-time perception map and the grayscale map according to the positioning information corresponding to the first perception result to obtain an intermediate matching result;
and correcting the intermediate matching result according to the first constraint condition and the second constraint condition to obtain the initial fusion map.
3. The map fusion method according to claim 1, wherein establishing a first constraint condition according to the first lane line information of the real-time perception map and the second lane line information of the grayscale map comprises:
determining first distance information between a target edge of a road where the vehicle is located and a first lane line in the first lane line information, and determining second distance information between the target edge and a second lane line in the second lane line information, wherein the first lane line is any one of a plurality of lane lines corresponding to the first lane line information, and the second lane line is the lane line corresponding to the first lane line among a plurality of lane lines corresponding to the second lane line information;
and determining a gap between the first distance information and the second distance information, and establishing the first constraint condition according to a relation between the gap and a preset gap threshold.
4. The map fusion method according to claim 1, wherein establishing a second constraint condition according to the first stop line information of the real-time perception map and the second stop line information of the grayscale map comprises:
determining third distance information between a stop line corresponding to the first stop line information and a stop line corresponding to the second stop line information, and establishing the second constraint condition according to a relation between the third distance information and a preset distance threshold.
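For concreteness, one plausible reading of claims 3 and 4 expresses the two constraint conditions as soft residuals; the claims fix only which quantities are compared against which thresholds, so the residual form below is an assumption:

```python
def lane_line_constraint_residual(d1: float, d2: float, gap_threshold: float) -> float:
    """First constraint (claim 3): d1 is the distance from the road's target
    edge to a lane line in the real-time perception map, d2 the distance from
    the same edge to the corresponding grayscale-map lane line; the residual
    penalizes any gap exceeding the preset gap threshold."""
    gap = abs(d1 - d2)
    return max(0.0, gap - gap_threshold)

def stop_line_constraint_residual(d3: float, distance_threshold: float) -> float:
    """Second constraint (claim 4): d3 is the distance between the stop line
    of the real-time perception map and that of the grayscale map."""
    return max(0.0, d3 - distance_threshold)
```

During matching, driving both residuals to zero aligns the lane lines and stop lines of the two maps.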
5. The map fusion method according to claim 2, wherein performing preliminary matching on the real-time perception map and the grayscale map to obtain an intermediate matching result comprises:
performing preliminary matching on the grayscale map and the real-time perception map to obtain an initial matching result;
and deleting, from the initial matching result, lane line information of a reverse driving road of the vehicle to obtain the intermediate matching result, wherein the reverse driving road represents a road whose driving direction is opposite to the driving direction specified for the road where the vehicle is located and/or a road whose driving direction is opposite to the driving direction specified for the drivable road corresponding to the road where the vehicle is located.
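A minimal sketch of the deletion step in claim 5, assuming each matched lane line record carries a unit heading vector (a representation the claim does not prescribe):

```python
def delete_reverse_lane_lines(matched_lanes, ego_heading):
    """Drop lane line records whose specified driving direction opposes
    the ego road's driving direction (and/or that of its drivable road)."""
    def opposes(heading):
        # A negative dot product means the two headings point into
        # opposite half-planes, i.e. the road runs the other way.
        return heading[0] * ego_heading[0] + heading[1] * ego_heading[1] < 0
    return [lane for lane in matched_lanes if not opposes(lane["heading"])]
```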
6. The map fusion method according to any one of claims 1 to 5, further comprising:
determining, in the initial fusion map, a first target lane of the road where the vehicle is located and a second target lane of the drivable road corresponding to the road where the vehicle is located;
connecting the first target lane with the second target lane to generate a drivable path;
and when there are a plurality of drivable paths, assigning a recommendation degree to each drivable path according to a preset measurement condition to obtain a recommended driving lane, so that the vehicle plans its driving path according to the recommended driving lane and dynamic obstacle information contained in the first perception result.
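A sketch of the path generation and recommendation in claim 6; connect and score are abstract placeholders because the claim leaves the connection rule and the preset measurement condition open:

```python
def recommend_driving_lane(first_lanes, second_lanes, connect, score):
    """Connect target lanes across the intersection into drivable paths,
    then pick the path with the highest recommendation degree."""
    paths = []
    for a in first_lanes:
        for b in second_lanes:
            path = connect(a, b)  # returns None if a and b cannot be joined
            if path is not None:
                paths.append(path)
    if not paths:
        return None
    # With several drivable paths, assign each a recommendation degree
    # via the preset measurement condition and keep the best one.
    return max(paths, key=score)
```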
7. The map fusion method according to any one of claims 1 to 5, wherein, when the first perception result comprises intersection information, before matching the real-time perception map with the grayscale map according to the positioning information corresponding to the first perception result, the first constraint condition, and the second constraint condition to obtain an initial fusion map, the method further comprises:
determining that the first perception result comprises intersection information when at least one of a stop line, a zebra crossing, and a traffic light is identified in the first perception result; and/or,
determining that the first perception result comprises intersection information when, in the real-time perception map established by the vehicle according to the first perception result, the endpoint position to which any lane line extends remains unchanged.
8. The map fusion method according to any one of claims 1 to 5, wherein fusing the navigation map of the vehicle with the initial fusion map to obtain a target fusion map comprises:
determining a first lane number corresponding to third lane information in the navigation map and a second lane number corresponding to fourth lane information in the initial fusion map;
and fusing the navigation map with the initial fusion map to obtain the target fusion map when the first lane number is the same as the second lane number.
9. The map fusion method according to claim 8, further comprising:
correcting, when the first lane number is different from the second lane number, the attribute information of the lanes in the fourth lane information according to the attribute information of the lanes in the third lane information to obtain a corrected navigation map, and fusing the corrected navigation map with the initial fusion map to obtain the target fusion map.
10. A map fusion apparatus, comprising:
an acquisition module configured to acquire a real-time perception map, wherein the real-time perception map is a vector map established by a vehicle according to a first perception result of a first target area;
a first establishing submodule configured to establish, when the first perception result comprises intersection information, a first constraint condition according to first lane line information of the real-time perception map and second lane line information of a grayscale map, wherein the grayscale map is a vector map established by the vehicle in a historical journey according to a second perception result of a second target area, the second target area comprises the first target area, and the first constraint condition is used for aligning lane lines in the real-time perception map with lane lines in the grayscale map;
a second establishing submodule configured to establish a second constraint condition according to first stop line information of the real-time perception map and second stop line information of the grayscale map, wherein the second constraint condition is used for aligning a stop line of the real-time perception map with a stop line of the grayscale map;
a matching module configured to match the real-time perception map with the grayscale map according to positioning information corresponding to the first perception result, the first constraint condition, and the second constraint condition to obtain an initial fusion map;
and a fusion module configured to fuse a navigation map of the vehicle with the initial fusion map to obtain a target fusion map.
11. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 9.
12. A vehicle, comprising:
a storage device having a computer program stored thereon;
and a control device configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1 to 9.
CN202310746047.1A 2023-06-21 2023-06-21 Map fusion method, device, medium and vehicle Active CN116499477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310746047.1A CN116499477B (en) 2023-06-21 2023-06-21 Map fusion method, device, medium and vehicle

Publications (2)

Publication Number Publication Date
CN116499477A CN116499477A (en) 2023-07-28
CN116499477B true CN116499477B (en) 2023-09-26

Family

ID=87323378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310746047.1A Active CN116499477B (en) 2023-06-21 2023-06-21 Map fusion method, device, medium and vehicle

Country Status (1)

Country Link
CN (1) CN116499477B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110160502B (en) * 2018-10-12 2022-04-01 腾讯科技(深圳)有限公司 Map element extraction method, device and server
US11738770B2 (en) * 2019-07-02 2023-08-29 Nvidia Corporation Determination of lane connectivity at traffic intersections for high definition maps
CN112541437A (en) * 2020-12-15 2021-03-23 北京百度网讯科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111295666A (en) * 2019-04-29 2020-06-16 深圳市大疆创新科技有限公司 Lane line detection method, device, control equipment and storage medium
CN112577479A (en) * 2019-09-27 2021-03-30 北京初速度科技有限公司 Multi-sensor fusion vehicle positioning method and device based on map element data
WO2023092451A1 (en) * 2021-11-26 2023-06-01 华为技术有限公司 Method and apparatus for predicting drivable lane
CN114969414A (en) * 2022-05-27 2022-08-30 重庆长安汽车股份有限公司 Map updating method and system, beyond-the-horizon road condition coordination method and system
CN115112146A (en) * 2022-07-07 2022-09-27 安徽蔚来智驾科技有限公司 Method, computer system, and medium for generating an autopilot map
CN115493602A (en) * 2022-09-28 2022-12-20 智道网联科技(北京)有限公司 Semantic map construction method and device, electronic equipment and storage medium
CN116202538A (en) * 2023-05-05 2023-06-02 广州小鹏自动驾驶科技有限公司 Map matching fusion method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Song Ying, "Fusion Representation of Real-Time Traffic Information and Mobile Navigation Electronic Maps," Geomatics and Information Science of Wuhan University, Vol. 35, No. 9, 2010, pp. 1108-1111. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant