WO2024078265A1 - 多图层高精地图生成方法和装置 (Multi-layer high-precision map generation method and apparatus) - Google Patents

多图层高精地图生成方法和装置 (Multi-layer high-precision map generation method and apparatus)

Info

Publication number
WO2024078265A1
WO2024078265A1 (PCT/CN2023/119314)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
ultrasonic
millimeter wave
vehicle
Prior art date
Application number
PCT/CN2023/119314
Other languages
English (en)
French (fr)
Inventor
赵翔
陈成
伍孟琪
王凡
Original Assignee
纵目科技(上海)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纵目科技(上海)股份有限公司 filed Critical 纵目科技(上海)股份有限公司
Publication of WO2024078265A1 publication Critical patent/WO2024078265A1/zh

Links

Classifications

    • G01C 21/3804: Creation or updating of map data
    • G01C 21/165: Dead reckoning by integrating acceleration or speed, combined with non-inertial navigation instruments
    • G01C 21/3815: Creation or updating of map data characterised by the type of data: road data
    • G01C 21/3837: Creation or updating of map data characterised by the source of data: data obtained from a single source
    • G01C 21/3878: Organisation of map data: hierarchical structures, e.g. layering
    • G01S 13/862: Combination of radar systems with sonar systems
    • G01S 19/47: Determining position by combining satellite positioning measurements with a supplementary inertial measurement

Definitions

  • the present disclosure generally relates to the field of intelligent driving, and more particularly to a method and apparatus for generating a multi-layer high-precision map.
  • sensors such as visual sensors, global navigation satellite systems (GNSS), and radars are usually used to collect data and build maps.
  • a vehicle may encounter various changes in the environment (for example, insufficient light, weak network signals, etc.), and existing map building solutions sometimes cannot meet the needs.
  • In view of the above problems in the prior art, the present application provides a method for generating a multi-layer high-precision map, including: receiving data from a plurality of sensors on a vehicle, the plurality of sensors including at least a millimeter wave radar and an ultrasonic radar; constructing a driving trajectory of the vehicle from the data collected by the plurality of sensors; fusing, according to the driving trajectory of the vehicle, the millimeter wave point cloud data collected by the millimeter wave radar and the ultrasonic point cloud data collected by the ultrasonic radar to generate a millimeter wave-ultrasonic information layer; and using the millimeter wave-ultrasonic information layer to generate a high-precision map.
  • Optionally, fusing the millimeter wave point cloud data and the ultrasonic point cloud data includes: for each trajectory point on the driving trajectory, determining measurement values of the millimeter wave point cloud data and the ultrasonic point cloud data collected at that trajectory point; and selecting one of the millimeter wave point cloud data and the ultrasonic point cloud data as the corresponding point cloud data at that trajectory point according to the measurement values.
  • Optionally, the plurality of sensors include a visual sensor, and the method further comprises: processing images captured by the visual sensor to detect a first target and determine the category of the first target; determining point cloud data matching the first target; and associating the matched point cloud data with the category of the first target.
  • Optionally, the method further comprises: receiving a network signal using a wireless receiver on the vehicle; determining the signal quality of the received network signal; generating a network layer using the signal quality of the network signal; and generating the high-precision map using the network layer.
  • the network signal comprises a cellular signal and/or a wifi signal.
  • Optionally, the plurality of sensors include a visual sensor, and the method further comprises: collecting a plurality of images using the visual sensor; generating a basic semantic information layer using the plurality of images; and generating the high-precision layer using the basic semantic information layer.
  • Optionally, determining the driving trajectory of the vehicle comprises determining the driving trajectory of the vehicle using at least one of the following or a combination thereof: a combination of inertial navigation and satellite navigation; visual SLAM using images collected by a visual sensor; and semantic SLAM using images collected by a visual sensor.
  • Another aspect of the present disclosure provides an apparatus for generating a multi-layer high-precision map, comprising: a module for receiving data from a plurality of sensors on a vehicle, the plurality of sensors including at least a millimeter wave radar and an ultrasonic radar; a module for constructing a driving trajectory of the vehicle from the data collected by the plurality of sensors; a module for fusing, according to the driving trajectory of the vehicle, the millimeter wave point cloud data collected by the millimeter wave radar and the ultrasonic point cloud data collected by the ultrasonic radar to generate a millimeter wave-ultrasonic information layer; and a module for generating a high-precision map using the millimeter wave-ultrasonic information layer.
  • Optionally, the module for fusing the millimeter wave point cloud data and the ultrasonic point cloud data includes: a module for determining, for each trajectory point on the driving trajectory, measurement values of the millimeter wave point cloud data and the ultrasonic point cloud data collected at that trajectory point; and a module for selecting one of the millimeter wave point cloud data and the ultrasonic point cloud data as the corresponding point cloud data at that trajectory point according to the measurement values.
  • Optionally, the plurality of sensors include a visual sensor, and the apparatus further comprises: a module for processing images captured by the visual sensor to detect a first target and determine the category of the first target; a module for determining point cloud data matching the first target; and a module for associating the matched point cloud data with the category of the first target.
  • Optionally, the apparatus further comprises: a module for receiving a network signal using a wireless receiver on the vehicle; a module for determining the signal quality of the received network signal; a module for generating a network layer using the signal quality of the network signal; and a module for generating the high-precision map using the network layer.
  • the network signal comprises a cellular signal and/or a wifi signal.
  • Optionally, the plurality of sensors include a visual sensor, and the apparatus further comprises: a module for collecting a plurality of images using the visual sensor; a module for generating a basic semantic information layer using the plurality of images; and a module for generating the high-precision layer using the basic semantic information layer.
  • Optionally, determining the driving trajectory of the vehicle comprises determining the driving trajectory of the vehicle using at least one of the following or a combination thereof: a combination of inertial navigation and satellite navigation; visual SLAM using images collected by a visual sensor; and semantic SLAM using images collected by a visual sensor.
  • One aspect of the present disclosure provides an electronic device, including a processor and a memory, wherein the memory stores program instructions; the processor executes the program instructions to implement the method for generating a multi-layer high-precision map as described above.
  • FIG. 1 shows a system for generating a multi-layer high-precision map according to various aspects of the present disclosure.
  • FIG. 2 is a diagram of a sensor module on a vehicle according to aspects of the present disclosure.
  • FIG. 3 is a diagram of a high-precision map generating apparatus according to various aspects of the present disclosure.
  • FIG. 4 is a diagram of a trajectory determination unit according to aspects of the present disclosure.
  • FIG. 5 is a flow chart for generating a multi-layer high-precision map according to various aspects of the present disclosure.
  • FIG. 6 is a diagram of an electronic device for object detection according to aspects of the present disclosure.
  • FIG. 1 shows a system for generating a multi-layer high-precision map according to various aspects of the present disclosure.
  • the system for generating a multi-layer high-precision map may include multiple vehicles 102 and a server 104.
  • the multiple vehicles 102 and the server 104 may communicate via a wireless network (eg, a cellular network, a wifi network, etc.).
  • Each vehicle 102 may be equipped with multiple sensors (e.g., visual sensors, millimeter wave radars, ultrasonic radars, network signal units, inertial measurement units, wheel speed meters, GNSS units, etc., as explained below in FIG. 2 ).
  • the multiple sensors may collect various types of data.
  • the vehicle 102 may send the collected data to the server 104 via a wireless network.
  • the server 104 may receive data from each vehicle 102 and process it to generate a multi-layer high-precision map, as described below.
  • the server 104 may receive relevant data (e.g., basic semantic data, point cloud data, network signal data, etc.) for each track point on the driving track of each vehicle 102.
  • the server 104 may use the data of multiple vehicles on their respective driving trajectories to form corresponding base layers, point cloud layers, network layers, and so on, which can be further combined into a multi-layer high-precision map.
  • the track points of multiple vehicles on their respective driving tracks can be formed into map points on the map.
  • While a vehicle 102 is driving, it can receive the multi-layer high-precision map from the server 104 through the wireless network to assist its intelligent driving. For example, when a vehicle 102 enters a parking lot, it can receive the multi-layer high-precision map of that parking lot from the server 104.
  • FIG. 2 is a diagram of a sensor device 200 on a vehicle according to aspects of the present disclosure.
  • the sensor device 200 may include a visual sensor 202 , a millimeter wave radar 204 , an ultrasonic radar 206 , a network signal unit 208 , an inertial measurement unit 210 , a wheel speed meter unit 212 , and a GNSS (Global Navigation Satellite System) unit 214 .
  • the visual sensor 202 may include a plurality of cameras and an image processor connected to the cameras.
  • For example, a camera (e.g., a monocular camera, a multi-lens camera, etc.) may be mounted at each of the front-left, front-right, rear-left, and rear-right of the vehicle body.
  • the plurality of cameras may acquire a plurality of images in real time.
  • the image processor receives a plurality of images taken by the plurality of cameras at each moment, and processes the plurality of images.
  • the processing of the images may include distortion correction, image stitching, etc.
  • the image processor may stitch the images taken by the plurality of cameras to obtain a bird's-eye view of the vehicle.
  • the bird's-eye view may include traffic signs on the road surface, such as arrows, road lines, speed bumps, zebra crossings, parking space lines, etc.
  • the millimeter wave radar 204 can detect in the millimeter wave band (30-300GHz frequency domain), transmit millimeter waves to the surrounding environment and receive millimeter wave point cloud data reflected by the target.
  • the millimeter wave point cloud data may include the distance and direction of the detected target relative to the vehicle.
  • the detection range of the millimeter wave radar 204 is generally in the range of tens to hundreds of meters.
  • the ultrasonic radar 206 can use ultrasonic waves for detection, and common operating frequencies include 40kHz, 48kHz and 58kHz.
  • the ultrasonic radar 206 can emit ultrasonic waves to the surrounding environment and receive ultrasonic point cloud data reflected back by the target.
  • the ultrasonic point cloud data may include the distance and direction of the detected target relative to the vehicle.
  • the detection range of the ultrasonic radar 206 is generally within ten meters.
  • the network signal unit 208 can receive wireless network signals, such as cellular network signals and wifi signals. Further, the network signal unit 208 can measure the quality of the wireless network signal, such as signal strength, signal delay, etc.
  • The inertial measurement unit (IMU) 210 may measure the linear acceleration and angular acceleration of the vehicle.
  • the wheel speed meter unit 212 collects data from a wheel speed meter, such as wheel rotation speed.
  • the GNSS unit 214 can receive satellite positioning signals.
  • GNSS can include the GPS of the United States, the Glonass of Russia, the Galileo of Europe, the Beidou satellite navigation system of China, and the like.
  • the satellite positioning signals received by the GNSS unit 214 are in a global geocentric coordinate system, which can be converted into a map coordinate system for combination with other positioning information.
  • the sensor device 200 may transmit the data collected by each sensor (as described above) to the server 104 for further processing.
  • FIG. 3 is a diagram of a high-precision map generation device 300 according to aspects of the present disclosure.
  • the high-precision map generation device 300 may be included in the server 104.
  • the high-precision map generating device 300 may include a trajectory determining unit 302 , a base layer unit 304 , a point cloud layer unit 306 , and a network layer unit 308 .
  • the trajectory determination unit 302 may determine the driving trajectory of the vehicle based on data from various sensors of the vehicle.
  • FIG. 4 is a diagram of a trajectory determination unit 302 according to aspects of the present disclosure.
  • the trajectory determination unit 302 may include a combined navigation module 402 , a visual SLAM module 404 , and a semantic SLAM module 406 .
  • the integrated navigation module 402 may process data (e.g., linear acceleration and angular acceleration data) from the vehicle's inertial measurement unit 210, wheel speed data from the wheel speed meter unit 212, and satellite positioning signals from the GNSS unit 214 to generate integrated navigation data.
  • the integrated navigation module 402 may perform a combination of inertial navigation and satellite navigation.
  • Inertial navigation can determine the position of the vehicle (the position in the coordinate system of the map) based on the data (linear acceleration and angular acceleration data) from the inertial measurement unit 210 and the wheel speed data from the wheel speed meter unit 212 .
  • Satellite navigation may utilize satellite positioning signals received by the GNSS unit 214 to perform positioning.
  • the positioning results of satellite navigation can be converted from the global geocentric coordinate system to the map coordinate system, and then combined with the positioning results of inertial navigation.
  • In some indoor scenarios (e.g., underground parking lots), the strength of the satellite positioning signal is unstable, and the vehicle's GNSS unit 214 receives satellite positioning signals only intermittently. In such scenarios, inertial navigation can be used for positioning.
  • During inertial navigation positioning, if the GNSS unit receives a satellite positioning signal at a certain moment, that satellite signal can be used to correct the inertial navigation result. For example, the position in the satellite positioning signal received at that moment can be compared with the inertial navigation position; if the distance between the two is greater than a threshold distance, the current positioning result is updated to the position given by the satellite positioning signal.
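  • The correction rule just described can be illustrated with a minimal sketch (not taken from the patent): dead-reckon the map-frame position from wheel-speed and heading data, and whenever an intermittent GNSS fix arrives that disagrees with the dead-reckoned position by more than a threshold distance, reset the position to the fix. The function names and the 2 m threshold are illustrative assumptions.
```python
import math
from typing import Optional, Tuple

Vec2 = Tuple[float, float]

def dead_reckon_step(pos: Vec2, heading_rad: float, speed_mps: float, dt_s: float) -> Vec2:
    """Propagate the map-frame position from wheel-speed and heading (IMU) data."""
    x, y = pos
    return (x + speed_mps * dt_s * math.cos(heading_rad),
            y + speed_mps * dt_s * math.sin(heading_rad))

def correct_with_gnss(pos: Vec2, gnss_fix: Optional[Vec2], threshold_m: float = 2.0) -> Vec2:
    """If a GNSS fix is available and far from the dead-reckoned position, adopt the fix."""
    if gnss_fix is None:
        return pos
    dist = math.hypot(pos[0] - gnss_fix[0], pos[1] - gnss_fix[1])
    return gnss_fix if dist > threshold_m else pos

# Example: intermittent fixes on a garage ramp; the fix at the third step disagrees with
# dead reckoning by more than 2 m, so the position is reset to the fix.
pos = (0.0, 0.0)
fixes = [None, None, (3.0, 2.5), None]
for fix in fixes:
    pos = dead_reckon_step(pos, heading_rad=0.0, speed_mps=2.0, dt_s=1.0)
    pos = correct_with_gnss(pos, fix)
print(pos)   # (5.0, 2.5)
```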
  • the visual SLAM (simultaneous localization and mapping) module 404 may use visual SLAM technology to determine the trajectory of the vehicle based on the images obtained by the visual sensor 202 .
  • Visual SLAM can include monocular SLAM and binocular SLAM.
  • Monocular SLAM can use a monocular camera to complete SLAM.
  • Monocular SLAM can use the camera on the vehicle to collect several images at adjacent times while moving to triangulate, measure the distance between reference pixels in different images, and thus obtain the vehicle's motion trajectory.
  • Binocular SLAM can calculate the distance of pixels by using the parallax between the left and right cameras, thereby realizing the positioning of the vehicle.
  • the binocular camera consists of two monocular cameras, but the distance between the two cameras (called the baseline) is known.
  • the baseline can be used to estimate the spatial position of each pixel, thereby obtaining the motion trajectory of the vehicle.
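  • For the binocular case, the known baseline gives depth directly: with focal length f (in pixels), baseline b, and disparity d between the left and right images, the depth of a matched pixel is Z = f * b / d. A minimal sketch with illustrative values, offered as an assumption rather than the patent's procedure:
```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a matched pixel pair from a rectified stereo rig: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 8.4 px disparity -> about 10 m away.
print(round(stereo_depth(700.0, 0.12, 8.4), 2))
```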
  • the vehicle positions determined at various times may be combined (e.g., concatenated) to obtain a driving trajectory of the vehicle.
  • Each trajectory point on the driving trajectory is associated with a time, e.g., the vehicle is determined to be at that trajectory point on the driving trajectory at that time.
  • the position (position in the coordinate system of the map) of each trajectory point on the driving trajectory and the corresponding time may be stored as an entry in a memory.
  • the visual SLAM module 404 may also associate and store the visual feature map (ie, one or more images collected by the visual sensor 202 ) with each point (and/or corresponding time) on the trajectory while determining the trajectory of the vehicle.
  • the semantic SLAM module 406 may use semantic SLAM technology to determine the trajectory of the vehicle based on the images obtained by the visual sensor 202 .
  • the semantic SLAM module 406 can obtain the one or more images (e.g., the bird's-eye view, as described above) captured by the visual sensor 202 at each moment and identify the reference targets in them based on the one or more images obtained at that moment.
  • For example, a neural network can be used to identify traffic signs (e.g., road signs, ground arrows, road lines, speed bumps, zebra crossings, parking space lines, etc.) in the images as reference targets.
  • the travel distance and direction of the vehicle can then be determined based on the position changes of the identified reference targets in a plurality of temporally adjacent images, thereby determining the trajectory of the vehicle.
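  • One hedged illustration (an assumption, not the patent's exact procedure) of turning the position changes of matched reference targets in two adjacent bird's-eye views into a motion estimate is a least-squares rigid alignment (Kabsch/SVD), which recovers the 2D rotation and translation between the frames:
```python
import numpy as np

def estimate_motion_2d(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping prev_pts onto curr_pts.

    prev_pts, curr_pts: (N, 2) matched reference-target positions in the
    vehicle/bird's-eye-view frame at two adjacent moments.
    """
    pc, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - pc).T @ (curr_pts - cc)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ pc
    return R, t

# Example: the targets appear shifted 1 m backwards because the vehicle moved 1 m forward.
prev = np.array([[2.0, 1.0], [4.0, -1.0], [6.0, 0.5]])
curr = prev + np.array([-1.0, 0.0])
R, t = estimate_motion_2d(prev, curr)
print(np.round(t, 3))   # [-1.  0.]: apparent target motion; the vehicle motion is roughly -t
```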
  • any of the integrated navigation module 402 , the visual SLAM module 404 , and the semantic SLAM module 406 may be used to determine the driving trajectory of the vehicle.
  • the satellite positioning signals received by the GNSS unit 214 may be used to adjust the trajectories generated by the visual SLAM module 404 and the semantic SLAM module 406 .
  • For example, the GNSS unit 214 may receive satellite positioning signals at times t1 and t2.
  • At time t1, the satellite positioning signal determines the vehicle position as G1; at time t2, it determines the vehicle position as G2.
  • The distance LG and direction DG between position G1 and position G2 may then be determined.
  • Taking the visual SLAM module 404 as an example, at time t1 the vehicle position determined by the visual SLAM module 404 is S1, and at time t2 it is S2. Further, the distance Ls and direction Ds between positions S1 and S2 can be determined.
  • the distance difference between distances LG and Ls , and the angular difference between directions DG and Ds can then be determined. If the distance difference is greater than a distance threshold and/or the angular difference is greater than an angle threshold, the data of the satellite positioning signal (e.g., distance LG and direction DG ) is used to adjust (correct) the trajectory of the vehicle.
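  • The comparison between the GNSS segment (G1 to G2) and the SLAM segment (S1 to S2) can be written as a simple consistency test; the sketch below is illustrative only, and the thresholds are placeholder values rather than values from the patent.
```python
import math

def segment_length_and_heading(p1, p2):
    """Length and heading of the displacement from p1 to p2 in the map frame."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def needs_gnss_correction(g1, g2, s1, s2,
                          dist_thresh_m: float = 1.0,
                          angle_thresh_rad: float = math.radians(5)) -> bool:
    """Compare the GNSS displacement (LG, DG) with the SLAM displacement (Ls, Ds)."""
    lg, dg = segment_length_and_heading(g1, g2)
    ls, ds = segment_length_and_heading(s1, s2)
    ang_diff = abs(math.atan2(math.sin(dg - ds), math.cos(dg - ds)))  # wrapped to [-pi, pi]
    return abs(lg - ls) > dist_thresh_m or ang_diff > angle_thresh_rad

print(needs_gnss_correction((0, 0), (10, 0), (0, 0), (8.5, 0.2)))  # True: SLAM has drifted
```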
  • the satellite positioning signal data may be used to provide an initial value of the trajectory for the SLAM module 404.
  • the vehicle position G1 determined by the satellite positioning signal at time t1 is used as the position corresponding to time t1 .
  • the relative pose changes between satellite positioning signal data can be used as constraints to optimize the factor graph in SLAM, and the poses of key frames in SLAM can be updated through local or global optimization to correct trajectory deviations.
  • As one example, the ICP (Iterative Closest Point) algorithm can be used to adjust the vehicle's trajectory using the satellite positioning signal data.
  • ICP finds corresponding point pairs between the source point cloud (the vehicle trajectory) and the target point cloud (the satellite positioning results), and constructs a rotation-translation matrix from those pairs.
  • The resulting matrix is used to transform the source point cloud into the coordinate system of the target point cloud, and the error function between the transformed source point cloud and the target point cloud is evaluated. If the error value is greater than the threshold, the above operations are iterated until the given error requirement is met.
  • the trajectory determined by the semantic SLAM module 406 can also be adjusted using the satellite positioning signal using the above method.
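  • A minimal 2D point-to-point ICP over the two point sets (trajectory points as the source, satellite positions as the target) might look like the sketch below; the brute-force nearest-neighbour search, iteration count, and convergence tolerance are illustrative assumptions.
```python
import numpy as np

def best_fit_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation/translation mapping src onto dst (both (N, 2))."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(source: np.ndarray, target: np.ndarray, max_iter: int = 20, tol: float = 1e-4):
    """Align the SLAM trajectory (source) to GNSS positions (target) by iterated
    nearest-neighbour matching and rigid re-fitting."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]       # nearest GNSS point for every trajectory point
        err = np.sqrt(d2.min(axis=1)).mean()      # mean matching distance before this iteration
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
    return src

noisy_track = np.array([[0.3, 0.1], [1.2, 0.2], [2.3, -0.1], [3.1, 0.2]])
gnss_points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(np.round(icp(noisy_track, gnss_points), 2))
```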
  • the trajectories generated by the visual SLAM module 404 and the semantic SLAM module 406 may be loop-closed.
  • Taking semantic SLAM as an example, a target (e.g., an arrow on the ground) is detected in the images collected at a certain time.
  • If the same target is detected again after a period of time, it can be determined that the vehicle has returned to its original position, that is, it has traveled a closed loop, and the vehicle's trajectory can therefore be adjusted by loop closure.
  • the loop candidate frame of the current frame can be identified in the historical frames. For example, by detecting the distances between multiple historical frames and the current frame, when the distance between a specific historical frame and the current frame is very small (for example, less than a threshold), the historical frame can be determined to be the loop candidate frame of the current frame. Alternatively, the difference in description information between the historical frame and the current frame can be determined. If the description information of the specific historical frame is similar to that of the current frame, the historical frame can be determined to be the loop candidate frame of the current frame.
  • multiple loop candidate frames may be determined for a current frame and screened in a subsequent process.
  • the relative pose relationship between the current frame and the loop closure candidate frame can then be determined, and the determined pose relationship is used as a constraint to adjust the factor graph in SLAM.
  • a loop candidate frame and its temporally adjacent key frames can be selected to generate a local map, and the relative pose relationship between the current frame and the loop candidate frame can be used as the initial value to project the semantic information of the current frame into the local map.
  • the overlap rate between the semantic information in the current frame and the semantic information in the local map is calculated, and the pose transformation relationship with the highest overlap rate is found by adjusting near the initial frame.
  • the pose of the current frame is then recalculated as the loop optimization result of the current frame, and the frames between the initial frame and the current frame are adjusted using the SLAM algorithm to achieve loop adjustment.
  • the loop adjustment of semantic SLAM is particularly suitable for parking lots, where there is a certain probability that the vehicle's driving trajectory is a closed loop. Through loop adjustment, the accumulated errors in the closed loop process of vehicle driving can be eliminated.
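  • The candidate search described above amounts to comparing the current frame against historical keyframes by position and by a frame descriptor; a small sketch under those assumptions (the descriptor here is simply the set of semantic labels seen in the frame, and all thresholds are placeholders):
```python
from dataclasses import dataclass
from typing import List
import math

@dataclass
class Keyframe:
    frame_id: int
    position: tuple      # (x, y) in the map frame
    descriptor: set      # e.g. semantic labels visible in the frame

def loop_candidates(current: Keyframe, history: List[Keyframe],
                    dist_thresh_m: float = 3.0,
                    min_overlap: float = 0.6,
                    min_gap: int = 50) -> List[Keyframe]:
    """Historical frames close in space (or similar in description) to the current frame."""
    out = []
    for kf in history:
        if current.frame_id - kf.frame_id < min_gap:   # skip temporally adjacent frames
            continue
        close = math.dist(current.position, kf.position) < dist_thresh_m
        inter = len(current.descriptor & kf.descriptor)
        union = len(current.descriptor | kf.descriptor) or 1
        similar = inter / union >= min_overlap
        if close or similar:
            out.append(kf)
    return out

hist = [Keyframe(10, (0.0, 0.0), {"arrow", "speed_bump"}),
        Keyframe(40, (25.0, 3.0), {"zebra"})]
cur = Keyframe(120, (0.5, -0.4), {"arrow", "speed_bump", "parking_line"})
print([k.frame_id for k in loop_candidates(cur, hist)])   # [10]
```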
  • the base layer unit 304 may associate each point in the vehicle trajectory generated by the trajectory determination unit 302 with its corresponding semantic information.
  • the semantic information may be a target (e.g., a road sign, a ground arrow, a road line, a speed bump, a zebra crossing, a parking space line, etc.) detected from a plurality of images collected by the visual sensor 202 at the point (e.g., using a neural network).
  • If the vehicle's driving trajectory is determined using the semantic SLAM module 406, the one or more targets detected by semantic SLAM at each trajectory point on the trajectory can be directly associated (mapped) with that trajectory point.
  • If the vehicle's driving trajectory is determined using the integrated navigation module 402 or the visual SLAM module 404, the images collected at each trajectory point (or its corresponding moment) on the driving trajectory can be processed (for example, using a neural network for target recognition) to identify one or more targets therein, and the one or more targets can then be associated with that trajectory point (or its corresponding moment).
  • the point cloud layer unit 306 receives the millimeter wave point cloud data from the millimeter wave radar 204 and the ultrasonic point cloud data from the ultrasonic radar 206, and processes (fuses) the millimeter wave point cloud data and the ultrasonic point cloud data to generate a point cloud layer.
  • the detection distance of the millimeter wave radar 204 is relatively long, generally within the range of tens to hundreds of meters.
  • the detection distance of the ultrasonic radar 206 is relatively short, generally within ten meters.
  • the millimeter wave/ultrasonic point cloud data may include the distance and orientation of the detected target relative to the vehicle, and the reflected wave intensity corresponding to the target.
  • the millimeter wave/ultrasonic point cloud data may be compared with a threshold value, and the millimeter wave/ultrasonic point cloud data with reflected wave intensity lower than the threshold value may be filtered out.
  • Further, dynamic point cloud data (e.g., point cloud data returned from moving pedestrians or vehicles) may be filtered out, leaving only static point cloud data.
  • the intensity of reflected waves in millimeter wave/ultrasonic point cloud data can be used to distinguish objects of different types (eg, materials), thereby filtering out point cloud data related to dynamic objects such as pedestrians and vehicles.
  • As another example, the velocity information of the data points in the millimeter wave point cloud data can be used to filter out dynamic point cloud data.
  • a dynamic object may be identified through a visual image, the visual image may be associated with millimeter wave/ultrasonic point cloud data, and the point cloud data corresponding to the dynamic object may be filtered out.
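  • The filtering steps above (dropping weak returns, and dropping returns whose measured radial speed marks them as dynamic) can be sketched as follows; the field names and thresholds are assumptions for illustration only.
```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RadarPoint:
    range_m: float          # distance from the vehicle
    bearing_deg: float      # direction relative to the vehicle axis
    intensity: float        # reflected wave intensity
    radial_speed: Optional[float] = None   # available for millimeter wave points

def keep_static_strong(points: List[RadarPoint],
                       min_intensity: float = 0.2,
                       max_abs_speed: float = 0.3) -> List[RadarPoint]:
    """Drop weak returns and (when speed is measured) returns from moving objects."""
    kept = []
    for p in points:
        if p.intensity < min_intensity:
            continue
        if p.radial_speed is not None and abs(p.radial_speed) > max_abs_speed:
            continue
        kept.append(p)
    return kept

cloud = [RadarPoint(12.0, 5.0, 0.8, 0.0),    # wall: kept
         RadarPoint(8.0, -3.0, 0.9, 4.2),    # moving car: dropped by speed
         RadarPoint(2.5, 40.0, 0.05)]        # weak return: dropped by intensity
print(len(keep_static_strong(cloud)))        # 1
```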
  • millimeter wave point cloud data and ultrasonic point cloud data may be fused.
  • the millimeter wave point cloud data and the ultrasonic point cloud data can be matched to find matching millimeter wave point cloud data and ultrasonic point cloud data (for example, corresponding to the same environmental point), that is, point cloud data with the same distance and orientation as the vehicle at a certain trajectory point (time).
  • the reflected wave intensity in the matched millimeter wave point cloud data and the ultrasonic point cloud data can then be compared, and the one with the stronger reflected wave intensity between the millimeter wave point cloud data and the ultrasonic point cloud data can be selected as the point cloud data corresponding to the vehicle at the trajectory point and the environmental point.
  • At certain environmental points, only one of the ultrasonic point cloud data and the millimeter wave point cloud data exists; in that case, that point cloud data can be used as the point cloud data of the environmental point.
  • the point cloud data generated for each track point on the driving track can then be associated with the track point, thereby generating a point cloud layer.
  • the point cloud data may include the distance to the vehicle, the orientation, the reflection intensity, and the like.
  • Through the fusion of millimeter wave and ultrasonic point cloud data, the point cloud data can cover both the range of a few meters and the range of tens to hundreds of meters. This fusion can reduce the amount of point cloud data and improve its accuracy. Furthermore, since millimeter wave detection produces occasional jump points while ultrasonic detection yields very stable contour points at close range, the stability of the point cloud layer is enhanced.
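  • As a rough illustration of the fusion rule described above (match millimeter wave and ultrasonic returns that describe the same environmental point, keep the one with the stronger reflected intensity, and keep unmatched returns as they are), the sketch below quantizes range and bearing to match returns; the resolutions and the tuple layout are assumptions, not part of the patent.
```python
from typing import Dict, List, Tuple

# A return is (range_m, bearing_deg, intensity); keys quantize range and bearing so that
# millimeter wave and ultrasonic returns describing the same environmental point collide.
Return = Tuple[float, float, float]

def _key(r: Return, range_res: float = 0.2, bearing_res: float = 2.0) -> Tuple[int, int]:
    return (round(r[0] / range_res), round(r[1] / bearing_res))

def fuse_at_trajectory_point(mmw: List[Return], ultra: List[Return]) -> List[Return]:
    """Per trajectory point: where both sensors see the same point, keep the return with
    the stronger reflected intensity; otherwise keep whichever sensor saw it."""
    fused: Dict[Tuple[int, int], Return] = {_key(r): r for r in mmw}
    for r in ultra:
        k = _key(r)
        if k not in fused or r[2] > fused[k][2]:
            fused[k] = r
    return list(fused.values())

mmw_returns = [(35.0, 10.0, 0.6), (3.0, -20.0, 0.2)]
ultra_returns = [(3.0, -20.0, 0.9), (1.2, 80.0, 0.7)]
print(fuse_at_trajectory_point(mmw_returns, ultra_returns))
```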
  • the image data collected by the visual sensor can be used to classify the point cloud data in the point cloud layer.
  • the image data collected by the visual sensor can be processed to identify one or more targets therein (for example, using a neural network for target recognition, as described above).
  • For each identified target, it is determined whether the point cloud data of that trajectory point (i.e., the point cloud data collected at the trajectory point, such as millimeter wave data or ultrasonic data) contains matching target point cloud data (e.g., the identified target and the point cloud data are at the same distance from the vehicle, at the same orientation relative to the vehicle's central axis, etc.). If there is point cloud data that matches the target in the image, the determined category (the category of the target in the image, as determined above) can be associated with the corresponding point cloud data.
  • By classifying the point cloud data in the point cloud layer using the image data, each map point in the resulting high-precision multi-layer map can include the point cloud data at that map point (e.g., a target detected by the radar) and its type (e.g., road sign, pillar, etc.).
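  • Associating an image-detected target's category with the matching radar return (same distance from the vehicle and same bearing to its central axis, within tolerances) can be illustrated as below; the tolerances and the dictionary layout are assumptions for illustration.
```python
from typing import List, Tuple

# (category, range_m, bearing_deg) of a target detected in the camera image
Detection = Tuple[str, float, float]

def label_point_cloud(detections: List[Detection], points: List[dict],
                      range_tol: float = 0.5, bearing_tol: float = 3.0) -> None:
    """Attach the image-derived category to every point cloud datum that matches a detection."""
    for category, rng, brg in detections:
        for p in points:
            if abs(p["range_m"] - rng) <= range_tol and abs(p["bearing_deg"] - brg) <= bearing_tol:
                p["category"] = category

pts = [{"range_m": 7.8, "bearing_deg": 14.5}, {"range_m": 2.1, "bearing_deg": -60.0}]
label_point_cloud([("pillar", 8.0, 15.0)], pts)
print(pts[0].get("category"))   # "pillar"
```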
  • the network layer unit 308 may receive quality data of wireless signals (eg, signals of cellular networks, wifi networks) from the network signal unit 208. Further, the network signal quality data of each point on the trajectory may be associated with the point, thereby forming a network layer.
  • vehicles can avoid areas with poor network signal quality during path planning, thereby obtaining a better autonomous driving experience.
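  • In effect, the network layer is a lookup from map location to measured signal quality, which a path planner can use to penalize poorly covered areas. The sketch below is a minimal illustration under assumed field names, grid size, and penalty values.
```python
from typing import Dict, List, Tuple

GridCell = Tuple[int, int]

def build_network_layer(samples: List[Tuple[float, float, float]],
                        cell_size_m: float = 5.0) -> Dict[GridCell, float]:
    """Average measured signal quality (e.g., RSSI normalized to [0, 1]) per map grid cell."""
    sums: Dict[GridCell, List[float]] = {}
    for x, y, quality in samples:
        cell = (int(x // cell_size_m), int(y // cell_size_m))
        sums.setdefault(cell, []).append(quality)
    return {cell: sum(v) / len(v) for cell, v in sums.items()}

def path_cost(path: List[Tuple[float, float]], layer: Dict[GridCell, float],
              cell_size_m: float = 5.0, weak_penalty: float = 10.0) -> float:
    """Length-plus-penalty cost: weakly covered cells are made expensive for the planner."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cost += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        cell = (int(x1 // cell_size_m), int(y1 // cell_size_m))
        if layer.get(cell, 0.0) < 0.3:
            cost += weak_penalty
    return cost

layer = build_network_layer([(2.0, 2.0, 0.9), (12.0, 2.0, 0.1)])
print(path_cost([(0.0, 0.0), (4.0, 0.0), (14.0, 0.0)], layer))   # 24.0
```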
  • This application first determines the driving trajectory of each vehicle, and then uses each trajectory point on the driving trajectory and its corresponding time as a reference to associate the data sensed by each sensor (visual sensor, millimeter wave/ultrasonic radar, network signal unit, etc.) with the trajectory point/time.
  • the trajectories of multiple vehicles and their corresponding data are then combined (for example, the trajectory points of multiple vehicles and their corresponding data are projected onto the coordinate system of the map), thereby forming a multi-layer high-precision map.
  • Each map point in the multi-layer high-precision map has corresponding data of multiple types (for example, image data, point cloud data, network signal data, etc.).
  • FIG. 5 is a flow chart for generating a multi-layer high-precision map according to various aspects of the present disclosure.
  • At step 502, data from a plurality of sensors on a vehicle may be received, the plurality of sensors including at least a millimeter wave radar and an ultrasonic radar.
  • At step 504, the driving trajectory of the vehicle may be constructed based on the data collected by the multiple sensors.
  • determining the driving trajectory of the vehicle includes using at least one of the following or a combination thereof to determine the driving trajectory of the vehicle: a combination of inertial navigation and satellite navigation; visual SLAM performed using images collected by a visual sensor; and semantic SLAM performed using images collected by a visual sensor.
  • At step 506, the millimeter wave point cloud data collected by the millimeter wave radar and the ultrasonic point cloud data collected by the ultrasonic radar may be fused according to the driving trajectory of the vehicle to generate a millimeter wave-ultrasonic information layer.
  • In one aspect, fusing the millimeter wave point cloud data and the ultrasonic point cloud data may include: for each trajectory point on the driving trajectory, determining measurement values of the millimeter wave point cloud data and the ultrasonic point cloud data collected at the trajectory point; and selecting one of the millimeter wave point cloud data and the ultrasonic point cloud data as the corresponding point cloud data at the trajectory point according to the measurement values.
  • The measurement value of the millimeter wave/ultrasonic point cloud data may be, for example, the reflected wave intensity, signal-to-noise ratio, or range and angle coverage of the point cloud data. For example, the measurement value may be a weighted sum of the reflected wave intensity, the signal-to-noise ratio, and the range and angle coverage.
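  • The per-trajectory-point selection in step 506 can then be driven by such a weighted score; the weights and field names in the sketch below are purely illustrative assumptions.
```python
from typing import Dict, Optional

def measurement_value(datum: Dict[str, float],
                      weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted sum of reflected intensity, SNR, and normalized range/angle coverage."""
    weights = weights or {"intensity": 0.5, "snr": 0.3, "coverage": 0.2}
    return sum(weights[k] * datum.get(k, 0.0) for k in weights)

def pick_point_cloud(mmw: Dict[str, float], ultra: Dict[str, float]) -> str:
    """Select which sensor's point cloud represents this trajectory point."""
    return "millimeter_wave" if measurement_value(mmw) >= measurement_value(ultra) else "ultrasonic"

print(pick_point_cloud({"intensity": 0.4, "snr": 0.8, "coverage": 0.9},
                       {"intensity": 0.9, "snr": 0.6, "coverage": 0.2}))
```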
  • In one aspect, the multiple sensors include a visual sensor, and the method further includes: processing images captured by the visual sensor to detect a first target and determine a category of the first target; determining point cloud data matching the first target; and associating the matched point cloud data with the category of the first target.
  • At step 508, the millimeter wave-ultrasonic information layer can be used to generate a high-precision map.
  • In one aspect, the method further comprises: receiving a network signal using a wireless receiver on the vehicle; determining the signal quality of the received network signal; generating a network layer using the signal quality of the network signal; and generating the high-precision map using the network layer.
  • the network signal comprises a cellular signal and/or a wifi signal.
  • In one aspect, the multiple sensors include a visual sensor, and the method further includes: using the visual sensor to collect multiple images; using the multiple images to generate a basic semantic information layer; and using the basic semantic information layer to generate the high-precision layer.
  • FIG. 6 is a diagram of an electronic device for object detection according to aspects of the present disclosure.
  • the electronic device 600 may include a memory 602 and a processor 604.
  • the memory 602 stores program instructions, and the processor 604 may be connected and communicated with the memory 602 via a bus 606.
  • the processor 604 may call the program instructions in the memory 602 to perform the following steps: receiving data from multiple sensors on the vehicle, the multiple sensors at least including millimeter wave radars and ultrasonic radars; constructing the driving trajectory of the vehicle according to the data collected by the multiple sensors; according to the driving trajectory of the vehicle, fusing the millimeter wave point cloud data collected by the millimeter wave radar and the ultrasonic point cloud data collected by the ultrasonic radar to generate a millimeter wave-ultrasonic information layer; and generating a high-precision map using the millimeter wave-ultrasonic information layer.
  • the processor 604 can also call the program instructions in the memory 602 to perform the following steps: for each trajectory point on the driving trajectory, determine the measurement values of the millimeter wave point cloud data and the ultrasonic point cloud data collected at the trajectory point; and select one of the millimeter wave point cloud data and the ultrasonic point cloud data as the corresponding point cloud data at the trajectory point according to the measurement values of the millimeter wave point cloud data and the ultrasonic point cloud data.
  • the processor 604 may also call program instructions in the memory 602 to perform the following steps: processing an image acquired by a visual sensor to detect a first target and determine a category of the first target; determining point cloud data matching the first target; and associating the matched point cloud data with the category of the first target.
  • the processor 604 can also call program instructions in the memory 602 to perform the following steps: using the wireless receiver on the vehicle to receive a network signal; determining the signal quality of the received network signal; using the signal quality of the network signal to generate a network layer; and using the network layer to generate the high-precision map.
  • the network signal comprises a cellular signal and/or a wifi signal.
  • the processor 604 may also call program instructions in the memory 602 to perform the following steps: using a visual sensor to capture multiple images; using the multiple images to generate a basic semantic information layer; and using the basic semantic information layer to generate the high-precision layer.
  • the processor 604 may also call program instructions in the memory 602 to determine the driving trajectory of the vehicle using at least one of the following or a combination thereof: a combination of inertial navigation and satellite navigation; visual SLAM using images collected by a visual sensor; and semantic SLAM using images collected by a visual sensor.
  • the various illustrative blocks and modules described in conjunction with the disclosure herein may be implemented or performed with a general purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • the processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • the processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
  • If implemented in software executed by a processor, each function may be stored on a computer-readable medium or transmitted therefrom as one or more instructions or code.
  • Other examples and implementations fall within the scope of the present disclosure and the appended claims.
  • the functions described above may be implemented using software executed by a processor, hardware, firmware, hard wiring, or any combination thereof.
  • the features that implement the functions may also be physically located in various locations, including being distributed so that the various parts of the functions are implemented at different physical locations.
  • the "or” used in the enumeration of items indicates an inclusive enumeration, so that, for example, the enumeration of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
  • the phrase “based on” should not be interpreted as referring to a closed set of conditions. For example, the exemplary steps described as “based on condition A” may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” should be interpreted in the same manner as the phrase “based at least in part on.”
  • Computer-readable media include both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another.
  • A non-transitory storage medium can be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • By way of example and not limitation, non-transitory computer-readable media can include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or a general-purpose or special-purpose processor.
  • Any connection is also properly referred to as computer readable medium.
  • If software is transmitted from a website, server, or other remote source using coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then that coaxial cable, optical fiber cable, twisted pair, DSL, or wireless technology such as infrared, radio, and microwave is included in the definition of medium.
  • Disk and disc as used herein include CDs, laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Navigation (AREA)

Abstract

一种多图层高精地图生成方法、装置及电子设备,其中该方法包括:接收来自车辆上的多个传感器的数据(502),该多个传感器至少包括毫米波雷达(204)和超声波雷达(206);根据该多个传感器采集的数据来构建车辆的行驶轨迹(504);根据该车辆的行驶轨迹,将毫米波雷达(204)采集的毫米波点云数据和超声波雷达(206)采集的超声波点云数据进行融合以生成毫米波-超声波信息图层(506);以及利用毫米波-超声波信息图层来生成高精度地图(508)。

Description

多图层高精地图生成方法和装置 技术领域
本公开一般涉及智能驾驶领域,尤其涉及用于生成多图层高精地图的方法和装置。
背景技术
近年来,随着自动驾驶技术的不断成熟,具备自动驾驶功能的车辆越来越多地出现在日常生活中。作为自动驾驶必备组件的高精地图,其创建和更新的技术是行业内的重点研究对象。
当前,通常使用视觉传感器、全球导航卫星系统(GNSS)、雷达等传感器来采集数据并建图。但在车辆行驶过程中,会遭遇环境的各种变化(例如,光线不足、网络信号弱等),现有的构建地图的方案有时不能满足需求。
发明内容
针对现有技术中存在的以上技术问题,本申请提供了一种用于生成多图层高精地图的方法,包括:
接收来自车辆上的多个传感器的数据,所述多个传感器至少包括毫米波雷达和超声波雷达;
根据所述多个传感器采集的数据来构建所述车辆的行驶轨迹;
根据所述车辆的行驶轨迹,将所述毫米波雷达采集的毫米波点云数据和所述超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层;以及
利用所述毫米波-超声波信息图层来生成高精度地图。
可任选地,将所述毫米波点云数据和所述超声波点云数据进行融合包括:
针对所述行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值;以及
根据所述毫米波点云数据和所述超声波点云数据的衡量值来选择所述毫米波 点云数据和所述超声波点云数据之一作为该轨迹点处的对应点云数据。
可任选地,所述多个传感器包括视觉传感器,所述方法进一步包括:
处理所述视觉传感器所采集的图像以检测第一目标并确定所述第一目标的类别;
确定与所述第一目标相匹配的点云数据;以及
将所匹配的点云数据与所述第一目标的类别进行关联。
可任选地,该方法进一步包括:
使用所述车辆上的无线接收机接收网络信号;
确定接收到的网络信号的信号质量;
使用所述网络信号的信号质量来生成网络图层;以及
利用所述网络图层来生成所述高精度地图。
可任选地,所述网络信号包括蜂窝信号和/或wifi信号。
可任选地,所述多个传感器包括视觉传感器,所述方法进一步包括:
使用所述视觉传感器采集多个图像;
使用所述多个图像来生成基础语义信息图层;以及
利用所述基础语义信息图层来生成所述高精度图层。
可任选地,确定所述车辆的行驶轨迹包括使用以下至少一者或其组合来确定所述车辆的行驶轨迹:
惯性导航和卫星导航的组合;
利用视觉传感器所采集的图像进行的视觉SLAM;以及
利用视觉传感器所采集的图像进行的语义SLAM。
本公开的另一方面提供了一种用于生成多图层高精地图的装置,包括:
用于接收来自车辆上的多个传感器的数据的模块,所述多个传感器至少包括毫米波雷达和超声波雷达;
用于根据所述多个传感器采集的数据来构建所述车辆的行驶轨迹的模块;
用于根据所述车辆的行驶轨迹,将所述毫米波雷达采集的毫米波点云数据和所述超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层的模块;以及
用于利用所述毫米波-超声波信息图层来生成高精度地图的模块。
可任选地,用于将所述毫米波点云数据和所述超声波点云数据进行融合的模块包括:
用于针对所述行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值的模块;以及
用于根据所述毫米波点云数据和所述超声波点云数据的衡量值来选择所述毫米波点云数据和所述超声波点云数据之一作为该轨迹点处的对应点云数据的模块。
可任选地,所述多个传感器包括视觉传感器,所述装置进一步包括:
用于处理所述视觉传感器所采集的图像以检测第一目标并确定所述第一目标的类别的模块;
用于确定与所述第一目标相匹配的点云数据的模块;以及
用于将所匹配的点云数据与所述第一目标的类别进行关联的模块。
可任选地,该装置进一步包括:
用于使用所述车辆上的无线接收机接收网络信号的模块;
用于确定接收到的网络信号的信号质量的模块;
用于使用所述网络信号的信号质量来生成网络图层的模块;以及
用于利用所述网络图层来生成所述高精度地图的模块。
可任选地,所述网络信号包括蜂窝信号和/或wifi信号。
可任选地,所述多个传感器包括视觉传感器,所述装置进一步包括:
用于使用所述视觉传感器采集多个图像的模块;
用于使用所述多个图像来生成基础语义信息图层的模块;以及
用于利用所述基础语义信息图层来生成所述高精度图层的模块。
可任选地,确定所述车辆的行驶轨迹包括使用以下至少一者或其组合来确定所述车辆的行驶轨迹:
惯性导航和卫星导航的组合;
利用视觉传感器所采集的图像进行的视觉SLAM;以及
利用视觉传感器所采集的图像进行的语义SLAM。
本公开的有一方面提供了一种电子设备,包括处理器和存储器,所述存储器存储有程序指令;所述处理器运行程序指令实现如上所述的用于生成多图层高精地图的方法。
附图说明
图1示出了根据本公开的各方面的用于生成多图层高精地图的系统。
图2是根据本公开的各方面的车辆上的传感器模块的示图。
图3是根据本公开的各方面的高精度地图生成装置的示图。
图4是根据本公开的各方面的轨迹确定单元的示图。
图5是根据本公开的各方面的用于生成多图层高精地图的流程图。
图6是根据本公开的各方面的用于目标检测的电子设备的示图。
具体实施方式
为让本发明的上述目的、特征和优点能更明显易懂,以下结合附图对本发明的具体实施方式作详细说明。
在下面的描述中阐述了很多具体细节以便于充分理解本发明,但是本发明还可以采用其它不同于在此描述的其它方式来实施,因此本发明不受下面公开的具体实施例的限制。
图1示出了根据本公开的各方面的用于生成多图层高精地图的系统。
如图1所示,用于生成多图层高精地图的系统可包括多个车辆102和服务器104。该多个车辆102和服务器104可以通过无线网络(例如,蜂窝网络、wifi网络等)进行通信。
每个车辆102上可安装有多个传感器(例如,视觉传感器、毫米波雷达、超声波雷达、网络信号单元、惯性测量单元、轮速计、GNSS单元等,如以下在图2中解说的)。该多个传感器可采集各类数据。车辆102可通过无线网络将所采集的数据发送给服务器104。
服务器104可以接收来自各个车辆102的数据,对其进行处理,由此生成多层高精度地图,如以下所描述的。
例如,服务器104可从每个车辆102接收在其行驶轨迹上的每个轨迹点的相关数据(例如,基础语义数据、点云数据、网络信号数据等)。服务器104可使用多个车辆在其各自的行驶轨迹上的数据来形成对应的基础图层、点云图层、网络图 层等,进一步形成多图层高精地图。例如,可以将多个车辆在各自的行驶轨迹上的轨迹点构成地图上的地图点。
在车辆102的行驶过程中,可以通过无线网络来接收来自服务器104的多图层高精地图以辅助其智能驾驶。例如,当有车辆102进入到停车场时,可以从服务器104接收当前停车场的多图层高精地图。
图2是根据本公开的各方面的车辆上的传感器装置200的示图。
如图2所示,传感器装置200可以包括视觉传感器202、毫米波雷达204、超声波雷达206、网络信号单元208、惯性测量单元210、轮速计单元212和GNSS(全球导航卫星系统)单元214。
视觉传感器202可以包括多个相机以及与相机连接的图像处理器。例如,可以在车身的左前方、右前方、左后方、右后方各设置一个相机(例如,单目相机、多目相机等)。该多个相机可以实时获取多个图像。图像处理器接收多个相机在每个时刻拍摄的多个图像,并且对该多个图像进行处理。对图像的处理可以包括畸变校正、图像拼接等。例如,图像处理器可将多个相机拍摄的图像进行拼接,获得车辆的鸟瞰图。鸟瞰图可以包括行驶路面上的交通标志,例如,箭头、道路线、减速带、斑马线、车位线等等。
毫米波雷达204可在毫米波波段(30-300GHz频域)进行探测,向周围环境发射毫米波并且接收目标反射回的毫米波点云数据。该毫米波点云数据可以包括所探测到的目标相对于车辆的距离和方位。毫米波雷达204的探测距离一般在几十到几百米的范围内。
超声波雷达206可利用超声波进行探测,常用的工作频率包括40kHz、48kHz和58kHz。超声波雷达206可以向周围环境发射超声波并且接收目标反射回的超声波点云数据。该超声波点云数据可以包括所探测到的目标相对于车辆的距离和方位。超声波雷达206的探测距离一般在十米内。
网络信号单元208可以接收无线网络信号,例如,蜂窝网络信号和wifi信号。进一步,网络信号单元208可以测量无线网络信号的质量,例如,信号强度、信号时延等。
惯性测量单元(IMU)210可以测量车辆的线加速度和角加速度。
轮速计单元212采集来自轮速计的数据,例如车轮旋转速度。
GNSS单元214可接收卫星定位信号。例如,GNSS可以包括美国的GPS、俄罗斯的Glonass、欧洲的Galileo、中国的北斗卫星导航系统等等。
GNSS单元214接收的卫星定位信号是在全球地心坐标系下的,可将其转换到地图坐标系中以便与其他定位信息进行结合。
传感器装置200可将各个传感器所采集的数据(如上所述)传送给服务器104以供进一步处理。
图3是根据本公开的各方面的高精度地图生成装置300的示图。高精度地图生成装置300可被包括在服务器104中。
如图3所示,高精度地图生成装置300可包括轨迹确定单元302、基础图层单元304、点云图层单元306和网络图层单元308。
轨迹确定单元302可以根据来自车辆的各个传感器的数据来确定该车辆的行驶轨迹。
图4是根据本公开的各方面的轨迹确定单元302的示图。
如图4所示,轨迹确定单元302可包括组合导航模块402、视觉SLAM模块404以及语义SLAM模块406。
组合导航模块402可以处理来自车辆的惯性测量单元210的数据(例如,线加速度和角加速度数据)、来自轮速计单元212的轮速数据和来自GNSS单元214的卫星定位信号以生成组合导航数据。
具体而言,组合导航模块402可以执行惯性导航和卫星导航的组合。
惯性导航可以根据来自惯性测量单元210的数据(线加速度和角加速度数据)和来自轮速计单元212的轮速数据来确定车辆的位置(在地图的坐标系中的位置)。
卫星导航可以利用GNSS单元214所接收的卫星定位信号以进行定位。
卫星导航的定位结果可以从全球地心坐标系转换到地图的坐标系,从而与惯性导航的定位结果进行结合。
在某些室内场景(例如,地下停车场)中,卫星定位信号的强度不稳定,车辆的GNSS单元214对卫星定位信号的接收断断续续。在这种场景中,可以 使用惯性导航进行定位。在惯性导航定位的过程中,如果GNSS模块在某一时刻接收到卫星定位信号,可以使用该卫星信号对惯性导航的定位结果进行纠正。例如,可将在该时刻接收到的卫星定位信号中的定位结果与惯性导航的定位结果进行比较,如果两者的距离大于阈值距离,则将当前定位结果更新为卫星定位信号中的定位结果。
视觉SLAM(即时定位与地图创建)模块404可以使用视觉SLAM技术,根据视觉传感器202获得的图像来确定车辆的轨迹。
视觉SLAM可以包括单目SLAM和双目SLAM。
单目SLAM可以使用一个单目相机来完成SLAM。单目SLAM可以利用车辆上的相机在移动中在相邻的时间采集的几个图像进行三角化,测量参考像素点在不同图像之间的距离,由此获得车辆的运动轨迹。
双目SLAM可以利用左右目的视差计算像素的距离,从而实现对车辆的定位。双目相机由两个单目相机组成,但这两个相机之间的距离(称为基线)是已知的。可以通过该基线来估计每个像素的空间位置,由此获得车辆的运动轨迹。
可将在各个时刻确定的车辆位置组合(例如,连接)起来,由此得到车辆的行驶轨迹。行驶轨迹上的每个轨迹点与一时刻相关联,例如,车辆在该时刻被确定为在行驶轨迹的该轨迹点处。在一方面,可将行驶轨迹上的每个轨迹点的位置(在该地图的坐标系中的位置)与对应的时刻作为一个条目存储在存储器中。
进一步,视觉SLAM模块404还可以在确定车辆的轨迹的同时,将视觉特征图(即,视觉传感器202所采集的一个或多个图像)与轨迹上的每个点(和/或对应时刻)进行关联和存储。
语义SLAM模块406可以使用语义SLAM技术,根据视觉传感器202获得的图像来确定车辆的轨迹。
语义SLAM模块406可以获取视觉传感器202在每个时刻所获得的一个或多个图像(例如,鸟瞰图,如上所述),并且根据该时刻所获得的一个或多个 图像来识别出其中的参考目标(例如,交通标志)。例如,可以使用神经网络来识别图像中的交通标志(例如,路牌、地面箭头、道路线、减速带、斑马线、车位线等)作为参考目标。
随后可以根据所识别出的参考目标在时间上相邻的多个图像中的位置变化来确定车辆的行驶距离和方向,由此确定车辆的轨迹。
可以使用组合导航模块402、视觉SLAM模块404以及语义SLAM模块406中的任一者来确定车辆的行驶轨迹。
在一方面,可以使用GNSS单元214所接收的卫星定位信号来对视觉SLAM模块404和语义SLAM模块406所生成的轨迹进行调整。
例如,GNSS单元214可在时间t1和t2接收到卫星定位信号。在时间t1,通过卫星定位信号确定车辆的位置为G1,在时间t2,通过卫星定位信号确定的车辆位置为G2。进一步,可以确定位置G1与位置G2之间的距离LG和方向DG
以视觉SLAM模块404为例,在时间t1,通过视觉SLAM模块404确定的车辆位置为S1;在时间t2,通过视觉SLAM模块404确定的车辆位置为S2。进一步,可以确定位置S1与位置S2之间的距离Ls和方向Ds
随后可以确定距离LG和Ls之间的距离差、以及方向DG和Ds之间的角度差。如果距离差大于距离阈值和/或角度差大于角度阈值,则使用卫星定位信号的数据(例如,距离LG和方向DG)来调整(纠正)车辆的轨迹。
在一方面,可以使用卫星定位信号数据来为SLAM模块404提供轨迹的初始值。例如,使用在时间t1通过卫星定位信号所确定的车辆位置G1作为与时间t1相对应的位置。
在另一方面,可以使用卫星定位信号数据之间的相对位姿变化作为约束量来对SLAM中的因子图进行优化,通过局部或全局的优化来更新SLAM中关键帧的位姿,纠正轨迹的偏差。
作为一个示例,可以使用ICP(Iterative Closest Point,最近点迭代算法)算法来利用卫星定位信号数据调整车辆的轨迹。ICP通过求取源点云(车辆轨迹)和目标点云(卫星定位信号)之间的对应点对,基于对应点对构造旋转平移矩阵, 并利用所求矩阵,将源点云变换到目标点云的坐标系下,估计变换后源点云与目标点云的误差函数,若误差函数值大于阀值,则迭代进行上述运算直到满足给定的误差要求。使用语义SLAM模块406确定的轨迹也可以使用如上方法利用卫星定位信号来调整。
在另一方面,可以对视觉SLAM模块404和语义SLAM模块406所生成的轨迹进行回环调整。
以语义SLAM为例,在某一时间从所采集的图像中检测到某个目标(例如,地面上的箭头)。过一段时间之后再次检测到该目标,可以确定车辆回到了原处,即,行驶了一个闭环。由此可以对车辆的轨迹进行回环调整。
具体而言,可以在历史帧中识别当前帧的回环候选帧。例如,可以通过检测多个历史帧与当前帧的距离,在特定历史帧与当前帧的距离很小(例如,小于阈值)时,可以确定该历史帧为当前帧的回环候选帧。替换地,可以确定历史帧与当前帧的描述信息的差异,如果特定历史帧与当前帧的描述信息相似,则可确定该历史帧为当前帧的回环候选帧。
在一方面,可以针对一当前帧确定多个回环候选帧,在后续过程中再进行筛选。
随后可以确定当前帧与回环候选帧的相对位姿关系。利用所确定的位姿关系作为约束来调整SLAM中的因子图。
例如,可以选取回环候选帧及其时间上相邻的关键帧生成局部地图,利用当前帧与回环候选帧的相对位姿关系作为初值,将当前帧的语义信息投影到该局部地图中;计算当前帧中的语义信息与局部地图中的语义信息的重合率,通过在初始帧附近进行调整,寻找重合率最高的位姿转换关系,随后重新计算当前帧的位姿,作为当前帧的回环优化结果,并且利用SLAM算法对初始帧与当前帧之间的各帧进行调整,从而达到回环调整。
对语义SLAM的回环调整尤其适用于停车场中,车辆有一定的概率行驶轨迹为闭环。通过回环调整,可以消除在车辆行驶闭环过程中累积的误差。
回到图3,基础图层单元304可将轨迹确定单元302所生成的车辆轨迹中 的每个点与其对应的语义信息进行关联。该语义信息可以是从视觉传感器202在该点所采集的多个图像中(例如,利用神经网络)检测出的目标(例如,路牌、地面箭头、道路线、减速带、斑马线、车位线等)。
如果车辆的行驶轨迹是使用语义SLAM模块406来确定的,则可以直接将利用语义SLAM在轨迹上的每个轨迹点检测到的一个或目标与该轨迹点进行关联(映射)。
如果车辆的行驶轨迹是使用组合导航模块402或视觉SLAM模块404确定的,则可以对在行驶轨迹上的每个轨迹点(或其对应的时刻)所采集的图像进行处理(例如,使用神经网络进行目标识别),识别出其中的一个或多个目标,进而将该一个或多个目标与该轨迹点(或其对应的时刻)进行关联。
点云图层单元306接收来自毫米波雷达202的毫米波点云数据和来自超声波雷达204的超声波点云数据,并且对毫米波点云数据和超声波点云数据进行处理(融合)以生成点云图层。
毫米波雷达204的探测距离较远,一般在几十到几百米的范围内。超声波雷达206的探测距离较近,一般在十米内。毫米波/超声波点云数据可包括所探测到的目标相对于车辆的距离和方位、与目标相对应的反射波强度。在一个示例中,可以将毫米波/超声波点云数据与一阈值进行比较,滤除掉反射波强度低于一阈值的毫米波/超声波点云数据。
进一步,可以滤除掉动态点云数据(例如,从移动的行人或车辆返回的点云数据),仅留下静态点云数据。
作为一个示例,可以使用毫米波/超声波点云数据中的反射波强度来区分出不同类型(例如,材质)的物体,由此可以滤除行人、车辆等动态物体相关的点云数据。
作为另一示例,可以利用毫米波点云数据中关于数据点的速度信息。根据速度信息,滤除掉动态点云数据。
作为又一示例,可以通过视觉图像识别出动态物体,将视觉图像与毫米波/超声波点云数据相关联,滤除与该动态物体相对应的点云数据。
在一方面,可以对毫米波点云数据和超声波点云数据进行融合。
在一个示例中,可以将毫米波点云数据与超声波点云数据进行匹配,找到相匹配(例如,与相同环境点相对应)的毫米波点云数据与超声波点云数据,即,与车辆在某一轨迹点(时刻)的距离和方位相同的点云数据。随后可将相匹配的毫米波点云数据与超声波点云数据中的反射波强度进行比较,选择毫米波点云数据与超声波点云数据中反射波强度较强的一者作为车辆在该轨迹点处与该环境点相对应的点云数据。
在某些环境点处,只存在超声波点云数据和毫米波点云数据中的一者,可将该点云数据作为该环境点的点云数据。
随后可将针对行驶轨迹上的每个轨迹点所生成的点云数据与该轨迹点进行关联,由此生成了点云图层。该点云数据可以包括与车辆的距离、方位和反射强度等等。
通过毫米波点云数据和超声波点云数据的融合,可以使得点云数据既可以覆盖几米范围,又可以覆盖几十到几百米的范围。该融合可以减少点云数据的数据量,提高点云数据的精度。进一步,由于毫米波探测存在一定的跳点,而超声波探测在近距离检测到的轮廓点非常稳定,由此增强了点云图层的稳定性。
在一方面,可以利用视觉传感器所采集的图像数据对点云图层中的点云数据进行分类。
例如,针对行驶轨迹上的特定轨迹点,可以对视觉传感器所采集的图像数据进行处理以识别出其中的一个或多个目标(例如,利用神经网络进行目标识别,如上所述)。
针对所识别出的每个目标,确定该轨迹点的点云数据(即,在该轨迹点处采集的点云数据,例如,毫米波数据或超声波数据)中是否存在相匹配的目标点云数据(例如,所识别出的目标和点云数据与车辆的距离相同、与车辆的中心轴的方位相同等等)。如果存在与图像中的目标相匹配的点云数据,则可将所确定的类别(图像中的目标的类别,如以上所确定的)与对应的点云数据相关联。
通过图像数据对点云图层中的点云数据进行分类,在所形成的高精度多图层地图中,各个地图点可以包括该地图点处的点云数据(例如,雷达检测到的 目标)及其类型(例如,路牌、柱子等等)。
网络图层单元308可以接收来自网络信号单元208的无线信号(例如,蜂窝网络、wifi网络的信号)的质量数据。进一步,可将轨迹上的每个点的网络信号质量数据与该点进行关联,由此形成网络图层。
通过网络图层,车辆可以在路径规划的过程中,避开网络信号质量较差的区域,由此获得更好的自动驾驶体验。
本申请首先针对每辆车确定该车辆的行驶轨迹,随后以行驶轨迹上的每个轨迹点及其对应的时间作为基准,将各个传感器(视觉传感器、毫米波/超声波雷达、网络信号单元等)所感测到的数据与该轨迹点/时间进行关联。随后将多辆车的轨迹及其相应数据进行组合(例如,将多辆车的轨迹点及其相应数据投影到地图的坐标系上),由此形成多图层高精度地图。在多图层高精度地图中的每一个地图点上,均具有相对应的多类数据(例如,图像数据、点云数据、网络信号数据等等)。
图5是根据本公开的各方面的用于生成多图层高精地图的流程图。
如图5所示,在步骤502,可以接收来自车辆上的多个传感器的数据,该多个传感器至少包括毫米波雷达和超声波雷达。
在步骤504,可以根据该多个传感器采集的数据来构建该车辆的行驶轨迹。
在一方面,确定该车辆的行驶轨迹包括使用以下至少一者或其组合来确定该车辆的行驶轨迹:惯性导航和卫星导航的组合;利用视觉传感器所采集的图像进行的视觉SLAM;以及利用视觉传感器所采集的图像进行的语义SLAM。
在步骤506,可以根据该车辆的行驶轨迹,将该毫米波雷达采集的毫米波点云数据和该超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层。
在一方面,将该毫米波点云数据和该超声波点云数据进行融合可以包括:针对该行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值;以及根据该毫米波点云数据和该超声波点云数据的衡量值来选择 该毫米波点云数据和该超声波点云数据之一作为该轨迹点处的对应点云数据。
毫米波/超声波点云数据的衡量值可以是点云数据的反射波强度、信噪比、距离与角度范围等等。例如,毫米波/超声波点云数据可以是反射波强度、信噪比、距离与角度范围的加权求和。
在一方面,该多个传感器包括视觉传感器,该方法进一步包括:处理该视觉传感器所采集的图像以检测第一目标并确定该第一目标的类别;确定与该第一目标相匹配的点云数据;以及将所匹配的点云数据与该第一目标的类别进行关联。
在步骤508,可以利用该毫米波-超声波信息图层来生成高精度地图。
在一方面,该方法进一步包括:使用该车辆上的无线接收机接收网络信号;
确定接收到的网络信号的信号质量;使用该网络信号的信号质量来生成网络图层;以及利用该网络图层来生成该高精度地图。
在一方面,该网络信号包括蜂窝信号和/或wifi信号。
在一方面,该多个传感器包括视觉传感器,该方法进一步包括:使用该视觉传感器采集多个图像;使用该多个图像来生成基础语义信息图层;以及利用该基础语义信息图层来生成该高精度图层。
图6是根据本公开的各方面的用于目标检测的电子设备的示图。
如图6所示,电子设备600可包括存储器602和处理器604。存储器602中存储有程序指令,处理器604可通过总线606与存储器602连接并通信,处理器604可调用存储器602中的程序指令以执行以下步骤:接收来自车辆上的多个传感器的数据,该多个传感器至少包括毫米波雷达和超声波雷达;根据该多个传感器采集的数据来构建该车辆的行驶轨迹;根据该车辆的行驶轨迹,将该毫米波雷达采集的毫米波点云数据和该超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层;以及利用该毫米波-超声波信息图层来生成高精度地图。
可任选地,处理器604还可以调用存储器602中的程序指令以执行以下步骤:针对该行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值;以及根据该毫米波点云数据和该超声波点云数据的衡量值来选择该毫米波点云数据和该超声波点云数据之一作为该轨迹点处的对应点云数据。
可任选地,处理器604还可以调用存储器602中的程序指令以执行以下步骤: 处理视觉传感器所采集的图像以检测第一目标并确定该第一目标的类别;确定与该第一目标相匹配的点云数据;以及将所匹配的点云数据与该第一目标的类别进行关联。
可任选地,处理器604还可以调用存储器602中的程序指令以执行以下步骤:使用该车辆上的无线接收机接收网络信号;确定接收到的网络信号的信号质量;使用该网络信号的信号质量来生成网络图层;以及利用该网络图层来生成该高精度地图。
可任选地,该网络信号包括蜂窝信号和/或wifi信号。
可任选地,处理器604还可以调用存储器602中的程序指令以执行以下步骤:使用视觉传感器采集多个图像;使用该多个图像来生成基础语义信息图层;以及利用该基础语义信息图层来生成该高精度图层。
可任选地,处理器604还可以调用存储器602中的程序指令以执行以下步骤:惯性导航和卫星导航的组合;利用视觉传感器所采集的图像进行的视觉SLAM;以及利用视觉传感器所采集的图像进行的语义SLAM。
本文结合附图阐述的说明描述了示例配置而不代表可被实现或者落在权利要求的范围内的所有示例。本文所使用的术语“示例性”意指“用作示例、实例或解说”,而并不意指“优于”或“胜过其他示例”。本详细描述包括具体细节以提供对所描述的技术的理解。然而,可以在没有这些具体细节的情况下实践这些技术。在一些实例中,众所周知的结构和设备以框图形式示出以避免模糊所描述的示例的概念。
在附图中,类似组件或特征可具有相同的附图标记。此外,相同类型的各个组件可通过在附图标记后跟随短划线以及在类似组件之间进行区分的第二标记来加以区分。如果在说明书中仅使用第一附图标记,则该描述可应用于具有相同的第一附图标记的类似组件中的任何一个组件而不论第二附图标记如何。
结合本文中的公开描述的各种解说性框以及模块可以用设计成执行本文中描述的功能的通用处理器、DSP、ASIC、FPGA或其他可编程逻辑器件、分立的门或晶体管逻辑、分立的硬件组件、或其任何组合来实现或执行。通用处 理器可以是微处理器,但在替换方案中,处理器可以是任何常规的处理器、控制器、微控制器、或状态机。处理器还可被实现为计算设备的组合(例如,DSP与微处理器的组合、多个微处理器、与DSP核心协同的一个或多个微处理器,或者任何其他此类配置)。
本文中所描述的功能可以在硬件、由处理器执行的软件、固件、或其任何组合中实现。如果在由处理器执行的软件中实现,则各功能可以作为一条或多条指令或代码存储在计算机可读介质上或藉其进行传送。其他示例和实现落在本公开及所附权利要求的范围内。例如,由于软件的本质,以上描述的功能可使用由处理器执行的软件、硬件、固件、硬连线或其任何组合来实现。实现功能的特征也可物理地位于各种位置,包括被分布以使得功能的各部分在不同的物理位置处实现。另外,如本文(包括权利要求中)所使用的,在项目列举(例如,以附有诸如“中的至少一个”或“中的一个或多个”之类的措辞的项目列举)中使用的“或”指示包含性列举,以使得例如A、B或C中的至少一个的列举意指A或B或C或AB或AC或BC或ABC(即,A和B和C)。同样,如本文所使用的,短语“基于”不应被解读为引述封闭条件集。例如,被描述为“基于条件A”的示例性步骤可基于条件A和条件B两者而不脱离本公开的范围。换言之,如本文所使用的,短语“基于”应当以与短语“至少部分地基于”相同的方式来解读。
计算机可读介质包括非瞬态计算机存储介质和通信介质两者,其包括促成计算机程序从一地向另一地转移的任何介质。非瞬态存储介质可以是能被通用或专用计算机访问的任何可用介质。作为示例而非限定,非瞬态计算机可读介质可包括RAM、ROM、电可擦除可编程只读存储器(EEPROM)、压缩盘(CD)ROM或其他光盘存储、磁盘存储或其他磁存储设备、或能被用来携带或存储指令或数据结构形式的期望程序代码手段且能被通用或专用计算机、或者通用或专用处理器访问的任何其他非瞬态介质。任何连接也被正当地称为计算机可读介质。例如,如果软件是使用同轴电缆、光纤电缆、双绞线、数字订户线(DSL)、或诸如红外、无线电、以及微波之类的无线技术从web网站、服务器、或其它远程源传送而来的,则该同轴电缆、光纤电缆、双绞线、数字订户线(DSL)、或诸如红外、无线电、以及微波之类的无线技术就被包括在介质的定义之中。 如本文所使用的盘(disk)和碟(disc)包括CD、激光碟、光碟、数字通用碟(DVD)、软盘和蓝光碟,其中盘常常磁性地再现数据而碟用激光来光学地再现数据。以上介质的组合也被包括在计算机可读介质的范围内。
提供本文的描述是为了使得本领域技术人员能够制作或使用本公开。对本公开的各种修改对于本领域技术人员将是显而易见的,并且本文中定义的普适原理可被应用于其他变形而不会脱离本公开的范围。由此,本公开并非被限定于本文所描述的示例和设计,而是应被授予与本文所公开的原理和新颖特征相一致的最广范围。

Claims (15)

  1. 一种用于生成多图层高精地图的方法,包括:
    接收来自车辆上的多个传感器的数据,所述多个传感器至少包括毫米波雷达和超声波雷达;
    根据所述多个传感器采集的数据来构建所述车辆的行驶轨迹;
    根据所述车辆的行驶轨迹,将所述毫米波雷达采集的毫米波点云数据和所述超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层;以及
    利用所述毫米波-超声波信息图层来生成高精度地图。
  2. 如权利要求1所述的方法,其中将所述毫米波点云数据和所述超声波点云数据进行融合包括:
    针对所述行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值;以及
    根据所述毫米波点云数据和所述超声波点云数据的衡量值来选择所述毫米波点云数据和所述超声波点云数据之一作为该轨迹点处的对应点云数据。
  3. 如权利要求2所述的方法,其中所述多个传感器包括视觉传感器,所述方法进一步包括:
    处理所述视觉传感器所采集的图像以检测第一目标并确定所述第一目标的类别;
    确定与所述第一目标相匹配的点云数据;以及
    将所匹配的点云数据与所述第一目标的类别进行关联。
  4. 如权利要求1所述的方法,进一步包括:
    使用所述车辆上的无线接收机接收网络信号;
    确定接收到的网络信号的信号质量;
    使用所述网络信号的信号质量来生成网络图层;以及
    利用所述网络图层来生成所述高精度地图。
  5. 如权利要求4所述的方法,其中所述网络信号包括蜂窝信号和/或wifi信号。
  6. 如权利要求1所述的方法,其中所述多个传感器包括视觉传感器,所述方法进一步包括:
    使用所述视觉传感器采集多个图像;
    使用所述多个图像来生成基础语义信息图层;以及
    利用所述基础语义信息图层来生成所述高精度图层。
  7. 如权利要求1所述的方法,其中确定所述车辆的行驶轨迹包括使用以下至少一者或其组合来确定所述车辆的行驶轨迹:
    惯性导航和卫星导航的组合;
    利用视觉传感器所采集的图像进行的视觉SLAM;以及
    利用视觉传感器所采集的图像进行的语义SLAM。
  8. 一种用于生成多图层高精地图的装置,包括:
    用于接收来自车辆上的多个传感器的数据的模块,所述多个传感器至少包括毫米波雷达和超声波雷达;
    用于根据所述多个传感器采集的数据来构建所述车辆的行驶轨迹的模块;
    用于根据所述车辆的行驶轨迹,将所述毫米波雷达采集的毫米波点云数据和所述超声波雷达采集的超声波点云数据进行融合以生成毫米波-超声波信息图层的模块;以及
    用于利用所述毫米波-超声波信息图层来生成高精度地图的模块。
  9. 如权利要求8所述的装置,其中用于将所述毫米波点云数据和所述超声波点云数据进行融合的模块包括:
    用于针对所述行驶轨迹上的每个轨迹点,确定在该轨迹点处采集的毫米波点云数据和超声波点云数据的衡量值的模块;以及
    用于根据所述毫米波点云数据和所述超声波点云数据的衡量值来选择所述毫 米波点云数据和所述超声波点云数据之一作为该轨迹点处的对应点云数据的模块。
  10. 如权利要求9所述的装置,其中所述多个传感器包括视觉传感器,所述装置进一步包括:
    用于处理所述视觉传感器所采集的图像以检测第一目标并确定所述第一目标的类别的模块;
    用于确定与所述第一目标相匹配的点云数据的模块;以及
    用于将所匹配的点云数据与所述第一目标的类别进行关联的模块。
  11. 如权利要求8所述的装置,进一步包括:
    用于使用所述车辆上的无线接收机接收网络信号的模块;
    用于确定接收到的网络信号的信号质量的模块;
    用于使用所述网络信号的信号质量来生成网络图层的模块;以及
    用于利用所述网络图层来生成所述高精度地图的模块。
  12. 如权利要求11所述的装置,其中所述网络信号包括蜂窝信号和/或wifi信号。
  13. 如权利要求8所述的装置,其中所述多个传感器包括视觉传感器,所述装置进一步包括:
    用于使用所述视觉传感器采集多个图像的模块;
    用于使用所述多个图像来生成基础语义信息图层的模块;以及
    用于利用所述基础语义信息图层来生成所述高精度图层的模块。
  14. 如权利要求8所述的装置,其中确定所述车辆的行驶轨迹包括使用以下至少一者或其组合来确定所述车辆的行驶轨迹:
    惯性导航和卫星导航的组合;
    利用视觉传感器所采集的图像进行的视觉SLAM;以及
    利用视觉传感器所采集的图像进行的语义SLAM。
  15. 一种电子设备,包括处理器和存储器,所述存储器存储有程序指令;所述处理器运行程序指令实现如权利要求1至权利要求7中任一项所述的用于生成多图层高精地图的方法。
PCT/CN2023/119314 2022-10-14 2023-09-18 多图层高精地图生成方法和装置 WO2024078265A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211262295.0 2022-10-14
CN202211262295.0A CN115597584A (zh) 2022-10-14 2022-10-14 多图层高精地图生成方法和装置

Publications (1)

Publication Number Publication Date
WO2024078265A1 true WO2024078265A1 (zh) 2024-04-18

Family

ID=84846687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/119314 WO2024078265A1 (zh) 2022-10-14 2023-09-18 多图层高精地图生成方法和装置

Country Status (2)

Country Link
CN (1) CN115597584A (zh)
WO (1) WO2024078265A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115597584A (zh) * 2022-10-14 2023-01-13 纵目科技(上海)股份有限公司(Cn) 多图层高精地图生成方法和装置

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110497901A (zh) * 2019-08-30 2019-11-26 的卢技术有限公司 一种基于机器人vslam技术的泊车位自动搜索方法和系统
CN111391823A (zh) * 2019-12-27 2020-07-10 湖北亿咖通科技有限公司 一种用于自动泊车场景的多层地图制作方法
US20210241026A1 (en) * 2020-02-04 2021-08-05 Nio Usa, Inc. Single frame 4d detection using deep fusion of camera image, imaging radar and lidar point cloud
CN113665500A (zh) * 2021-09-03 2021-11-19 南昌智能新能源汽车研究院 全天候作业的无人驾驶运输车环境感知系统及方法
CN113870379A (zh) * 2021-09-15 2021-12-31 北京易航远智科技有限公司 地图生成方法、装置、电子设备及计算机可读存储介质
CN113865580A (zh) * 2021-09-15 2021-12-31 北京易航远智科技有限公司 构建地图的方法、装置、电子设备及计算机可读存储介质
CN114142955A (zh) * 2020-09-04 2022-03-04 华为技术有限公司 一种广播信号的播放方法、地图生成方法及装置
CN114136305A (zh) * 2021-12-01 2022-03-04 纵目科技(上海)股份有限公司 多图层地图的创建方法、系统、设备及计算机可读存储介质
CN115597584A (zh) * 2022-10-14 2023-01-13 纵目科技(上海)股份有限公司(Cn) 多图层高精地图生成方法和装置

Also Published As

Publication number Publication date
CN115597584A (zh) 2023-01-13

Similar Documents

Publication Publication Date Title
US20210063162A1 (en) Systems and methods for vehicle navigation
CN107145578B (zh) 地图构建方法、装置、设备和系统
US11940804B2 (en) Automated object annotation using fused camera/LiDAR data points
Du et al. Comprehensive and practical vision system for self-driving vehicle lane-level localization
WO2018177026A1 (zh) 确定道路边沿的装置和方法
US20220270358A1 (en) Vehicular sensor system calibration
JP2019527832A (ja) 正確な位置特定およびマッピングのためのシステムおよび方法
WO2018047115A1 (en) Object recognition and classification using multiple sensor modalities
WO2020232648A1 (zh) 车道线的检测方法、电子设备与存储介质
CN111391823A (zh) 一种用于自动泊车场景的多层地图制作方法
WO2018047006A1 (en) Low-level sensor fusion
US20220194412A1 (en) Validating Vehicle Sensor Calibration
US11295521B2 (en) Ground map generation
WO2024078265A1 (zh) 多图层高精地图生成方法和装置
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
US11527085B1 (en) Multi-modal segmentation network for enhanced semantic labeling in mapping
US11961304B2 (en) Systems and methods for deriving an agent trajectory based on multiple image sources
US11675366B2 (en) Long-term object tracking supporting autonomous vehicle navigation
US11961241B2 (en) Systems and methods for deriving an agent trajectory based on tracking points within images
US20210229662A1 (en) Resolving range rate ambiguity in sensor returns
CN113743171A (zh) 目标检测方法及装置
CN111273304A (zh) 一种融合反光柱的自然定位方法及系统
US20230046410A1 (en) Semantic annotation of sensor data using unreliable map annotation inputs
US11677931B2 (en) Automated real-time calibration
WO2021057324A1 (zh) 数据处理方法、装置、芯片系统及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23876470

Country of ref document: EP

Kind code of ref document: A1