WO2023144023A1 - A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system - Google Patents
A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system
- Publication number
- WO2023144023A1 (PCT/EP2023/051322)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- computing device
- electronic computing
- assistance system
- pose
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the invention relates to the field of automobiles. More specifically, the invention relates to a method for correcting a pose of a motor vehicle in the surroundings of the motor vehicle by an assistance system of the motor vehicle, as well as to a computer program product and an assistance system.
- Partially autonomous motor vehicles or fully autonomous motor vehicles fundamentally depend on precise estimates of their pose, in particular of position and orientation of the motor vehicle, within a map. Therefore, there is a need in the art to provide a method by which a precise pose estimation is realized.
- One aspect of the invention relates to a method for correcting a pose of a motor vehicle in the surroundings of the motor vehicle by an assistance system of the motor vehicle.
- First data of semantic contour image measurements are received by an electronic computing device of the assistance system.
- Second data of an initial pose estimate are received by the electronic computing device.
- Third data of semantically labeled map elements are received by the electronic computing device.
- A score image and/or an error image and its image derivatives are generated based on the first data by the electronic computing device.
- Expected three-dimensional points of the map elements at the initial pose estimate based on the second data and third data are generated by the electronic computing device.
- The expected three-dimensional points of the map elements are compared with the score image and/or the error image and its image derivatives to perform model-to-image alignment by the electronic computing device. The expected three-dimensional points are projected at least in part into the score image and/or the error image, and the score image and/or the error image and its image derivatives are used to perform an iterative optimization by the electronic computing device.
- the alignment’s resulting pose correction and pose correction uncertainty are transmitted to the assistance system.
- the method uniquely utilizes “unfilled” semantic contours. This ensures that the problem is sufficiently constrained. Furthermore, the method utilizes a correspondence-free registration by calculating the spatial derivatives of a continuous error/score image. The correspondence-free approach avoids the issues associated with incorrect point correspondences.
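The continuous error/score image described above can be sketched in a few lines of numpy. The Gaussian falloff, the brute-force nearest-contour distance, and all sizes are illustrative assumptions; the patent does not specify the decay function. The key property is only that the image is highest (score) or lowest (error) on the contour pixels and varies smoothly away from them, so spatial derivatives are well defined.

```python
import numpy as np

def score_image(contour_pixels, shape, sigma=3.0):
    """Continuous score image: maximal on the measured semantic contour
    pixels, decaying smoothly with distance (Gaussian falloff is assumed)."""
    rows, cols = np.indices(shape)
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    pts = np.asarray(contour_pixels, dtype=float)      # (N, 2) as (row, col)
    # distance from every pixel to its nearest contour pixel (brute force)
    dist = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.exp(-0.5 * (dist / sigma) ** 2).reshape(shape)

img = score_image([(5, 5), (5, 6)], (11, 11))
# contour pixels score highest; the score decays continuously with distance
```

In practice a distance transform would replace the brute-force loop, but the lookup-table idea is the same.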
- the semantic contour image is captured by a monocular camera of the assistance system.
- the semantic image contour describes the boundaries of each detected object instance in the monocular camera image as well as the instance’s semantic classification.
- the method in particular depends on semantic image contours of an on-board monocular camera that are provided while driving. These contours serve as a critical measurement that provides information regarding the vehicle’s pose, in particular its position and orientation.
- semantic classifications may, for example, include, but are not limited to, roads, poles, pedestrians, sidewalks, trees, cars et cetera.
- the contours are a list of each object’s instance boundary’s image pixel locations.
- the semantically labeled map comprises at least one labeled landmark as map element, in particular the labeled landmark is dependent on raw measurements from a capturing device of the assistance system.
- lane markers may be represented as a series of polyline points instead of millions of individual intensity measurements.
- pole-like objects may be represented simply as a cylinder with a bottom point location, a top point location, and a cylinder radius.
- these landmarks are stored with their semantic classification, their three-dimensional pose, and the minimum set of attributes required to fully define the semantic class in a three-dimensional space.
- The three-dimensional information encoded in these map landmarks gives the method the freedom to utilize a monocular camera, since monocular cameras cannot directly capture depth. Consequently, this method depends on semantically labeled map landmarks from a pre-built map.
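A sparse-map landmark of the kind described above can be sketched as a small data structure. The field names and types are illustrative, not taken from the patent; each entry carries its semantic class and the minimal attributes that define it in 3D space.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in map coordinates

@dataclass
class PoleLandmark:
    """Pole-like object: semantic class plus the minimal cylinder attributes
    (bottom point, top point, radius)."""
    semantic_class: str
    bottom: Point3D
    top: Point3D
    radius: float

@dataclass
class LaneMarkerLandmark:
    """Lane marker: a series of polyline points instead of raw intensities."""
    semantic_class: str
    polyline: List[Point3D]

pole = PoleLandmark("pole", (2.0, 1.0, 0.0), (2.0, 1.0, 4.5), 0.1)
marker = LaneMarkerLandmark("lane_marker", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```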
- the pose correction is used for a localization of the motor vehicle by the assistance system.
- the localization may be used for an at least in part autonomous operation or a fully autonomous operation of the motor vehicle. Therefore, a safer operation of an at least in part autonomous motor vehicle may be realized with the method.
- the pose correction is performed by the electronic computing device in six degrees of freedom.
- this method calculates a correction for the provided initial pose estimate. This is critical for improved accuracy when integrated within a larger localization framework. This can serve as one of the many measurements that are integrated and fused within a larger framework.
- this method may calculate pose corrections in all six degrees of freedom, for example, three translational and three rotational degrees of freedom. Therefore, more precise locations of the motor vehicle may be realized.
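Applying such a six-degree-of-freedom correction to an initial pose estimate can be sketched as follows. The Euler-angle parameterization and the composition order are assumptions for illustration; the patent does not prescribe a particular rotation parameterization.

```python
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """Rotation matrix from three rotational degrees of freedom (Z-Y-X order assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_pose_correction(R, t, correction):
    """Apply a 6-DoF correction [dx, dy, dz, droll, dpitch, dyaw]
    (three translational + three rotational DoF) to an initial pose (R, t)."""
    dx, dy, dz, droll, dpitch, dyaw = correction
    dR = euler_to_R(droll, dpitch, dyaw)
    return dR @ R, t + np.array([dx, dy, dz])

R0, t0 = np.eye(3), np.zeros(3)
R1, t1 = apply_pose_correction(R0, t0, [0.1, 0.0, 0.0, 0.0, 0.0, np.pi / 2])
```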
- the semantically labeled map is a sparse map of the surroundings. In particular, the method depends on the pre-built sparse map.
- a “dense” map contains raw measurements, for example, from a lidar sensor, so-called point clouds or camera images for each map location
- a sparse map contains landmarks that were extracted from these raw measurements.
- lane markers may be represented as a series of polyline points instead of millions of individual intensity measurements.
- these landmarks are stored with their semantic classification, their three-dimensional pose, and the minimum set of attributes required to fully define the semantic class in a three-dimensional space.
- sparse maps have the advantage of vastly reduced data storage/transmission costs.
- the map representation also changes the types of algorithms that the motor vehicle has to execute online. For example, it can be advantageous to use algorithms that rely on high-level abstractions.
- the method is a correspondence-free method without correspondences between landmarks and measured features in the surroundings.
- Many localization methods, in particular those which depend on sparse maps, are correspondence-based. In other words, they establish correspondences or associations between map landmarks and measured features. These correspondences are typically established based on a distance metric. Specifically, given the initial pose estimate, map landmarks are transformed into “expected” features. Then, associations are made between the measured and expected features that are closest to each other. Ideally, the measured and expected features perfectly align, indicating a perfect pose estimate. In reality, localization frameworks attempt to minimize these distances, more formally called errors or residuals, to converge to an accurate pose estimate. For this reason, correct correspondences are critical for an accurate pose estimate.
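For contrast with the correspondence-free approach, the nearest-neighbour association step of a conventional correspondence-based pipeline looks roughly like this. This is a sketch of the approach the patent moves away from, not of the patented method, and the greedy argmin strategy is one common choice among several.

```python
import numpy as np

def associate(measured, expected):
    """Greedy nearest-neighbour association: each measured feature is paired
    with the closest expected (map-derived) feature by Euclidean distance.
    Incorrect pairings from this step are exactly what a correspondence-free
    method avoids."""
    measured = np.asarray(measured, dtype=float)
    expected = np.asarray(expected, dtype=float)
    d = np.linalg.norm(measured[:, None, :] - expected[None, :, :], axis=2)
    return d.argmin(axis=1)  # index of the expected feature for each measurement

idx = associate([(0.0, 0.1), (5.0, 4.9)], [(0.0, 0.0), (5.0, 5.0)])
```

With clustered landmarks the argmin easily picks the wrong neighbour, which is the failure mode described above.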
- the method is a computer-implemented method. Therefore, another aspect of the invention relates to a computer program product comprising program code means, which, when they are executed by an electronic computing device, cause the electronic computing device to perform a method according to the preceding aspect.
- Another aspect of the invention relates therefore to a computer-readable storage medium.
- the invention relates to an assistance system for correcting a pose of a motor vehicle, comprising at least one electronic computing device, wherein the assistance system is configured to perform a method according to the preceding aspect.
- the method is performed by the assistance system.
- the electronic computing device may comprise means, for example, processors or electronic circuits, for performing the method.
- a still further aspect of the invention relates to a motor vehicle comprising the assistance system, wherein the motor vehicle is at least in part autonomous.
- FIG. 1 a schematic top view of an embodiment of a motor vehicle comprising an embodiment of an assistance system
- FIG. 2 a schematic flow chart according to an embodiment of the method.
- FIG. 3 a schematic image generated according to an embodiment of the method.
- Fig. 1 shows a schematic top view according to an embodiment of a motor vehicle 10.
- the motor vehicle 10 may be at least in part autonomous or fully autonomous.
- the motor vehicle 10 comprises an assistance system 12.
- the assistance system 12 comprises at least an electronic computing device 14.
- the assistance system 12 may comprise a capturing device 16 for capturing the surroundings 18 of the motor vehicle 10.
- the capturing device 16 may be, for example, a monocular camera.
- Fig. 2 shows a schematic flow chart according to an embodiment of the method.
- Fig. 2 shows a method for pose correction 20 of the motor vehicle 10 in the surroundings 18 of the motor vehicle 10 by the assistance system 12.
- Receiving first data 22 of the semantic contour image measurements is performed by the electronic computing device 14.
- Second data 24 of an initial pose estimate are received by the electronic computing device 14.
- Third data 26 of semantically labeled map elements are received by the electronic computing device 14.
- Generating a score image 28 and/or an error image 30 and its image derivatives based on the first data 22 is performed by the electronic computing device 14.
- Expected three-dimensional (3D) points 32 of the map elements at the initial pose estimate are generated based on the second data 24 and third data 26 by the electronic computing device 14.
- Comparing the expected three-dimensional points 32 of the map elements with the score image 28 and/or the error image 30 and its image derivatives to perform model-to-image alignment 34 is performed by the electronic computing device 14. The expected three-dimensional points 32 are projected at least in part into the score image 28 and/or the error image 30, and the score image 28 and/or the error image 30 and its image derivatives are used to perform an iterative optimization 34 by the electronic computing device 14.
- the alignment’s resulting pose correction 20 and pose correction uncertainty 36 are transmitted to the assistance system 12, wherein the pose correction 20 is used for localization of the motor vehicle 10 by the assistance system 12.
- the semantic image contours describe the boundaries of each detected object instance in the monocular camera image as well as the instance’s semantic classification.
- Semantic classifications include, but are not limited to: roads, poles, pedestrians, sidewalks, trees, cars, etc.
- the contours are a list of each object instance boundary’s image pixel locations. This method depends on semantic image contours of an onboard monocular camera that are provided while driving. These contours serve as a critical measurement that provides information regarding the vehicle’s pose, which corresponds to position and orientation.
- Autonomous vehicle localization aims to estimate the vehicle’s pose. It fundamentally operates by converging to the pose that makes the sensor measurements consistent with what is expected based on a pre-built map. Thus, these “expected” measurements are generated based on an initial estimate of the vehicle’s pose. Consequently, each additional sensor measurement helps (partially or fully) correct the initial pose estimate. Continuous corrections result in a more accurate (and more certain) pose estimate. Similarly, this method corrects the pose estimate given an initial pose estimate.
- This method depends on a pre-built “sparse” map. While a “dense” map contains raw measurements (e.g. LIDAR point clouds, camera images, etc.) for each map location, a “sparse” map contains landmarks that were extracted from these raw measurements. For example, lane markers may be represented as a series of polyline points instead of millions of individual intensity measurements. Furthermore, pole-like objects may be represented simply as a cylinder with a bottom point location, a top point location, and a cylinder radius. In all, these landmarks are stored with their semantic classification, their 3D pose, and the minimum set of attributes required to fully define that semantic class in a 3D space. Thus, sparse maps have the advantage of vastly reduced data storage/transmission costs.
- the map representation also changes the types of algorithms that the vehicle must execute online. For example, it can be advantageous to use algorithms that rely on high-level abstractions.
- This method calculates and outputs a “pose correction” for the provided initial pose estimate. This is critical for improved accuracy when integrated within a larger localization framework. This can serve as one of the many measurements that are integrated and fused within the larger framework of the assistance system 12.
- this method can calculate pose corrections in all six degrees of freedom, in particular three translational and three rotational degrees of freedom.
- This method can either provide a “pose correction” or a “corrected pose” estimate. That distinction is a trivial implementation detail.
- this method also provides the uncertainties that correspond to its pose correction 20.
- this method is developed to address the aforementioned difficulties. Most notably, it is a “correspondence-free” method. It is more robust in scenarios in which incorrect correspondences are likely, for example scenarios with clustered landmarks. In other words, this method can align a cluster of observed poles with a cluster of expected (map) poles, but never attempts to establish correspondences with individual poles. The only requirement is that it matches the observed semantic classes with the semantic classes represented in the map.
- this method is mostly agnostic to the semantic classes utilized. Due to its unique methodology, it does not require specialized error metrics so that it is generalizable to include a wide variety of different semantic classes. For example, if only a section of pole is observed, this method’s error metric will inherently leave the vehicle height unconstrained. Furthermore, since this method’s error metric does not depend on high-level abstractions, it is relatively robust to the inaccuracies present in semantic contour measurements. Combined, this yields a method that is scalable to many complex semantic classes beyond lane markings and pole-like objects.
- This sub-process is critical for the “correspondence-free” nature of this method. It generates a continuous error/score image 28, 30 from the semantic contour measurements. In other words, this image stores error or score values as its pixel values.
- the semantic contours correspond to image pixels with a “high score” or a “zero error”. As one moves farther and farther away from the semantic contour pixels, the score decreases or the error increases.
- map elements will be projected onto these images and will be assigned the error/score of their projected pixels.
- this image efficiently serves as a “lookup table” for the error/score so it does not need to be calculated for every iteration.
- This method adjusts the pose so that the map elements project to pixels with a low error or a high score. By doing so, it will have aligned the measurement with the expected map features, correcting the pose estimate.
- This method then efficiently and directly calculates these images’ spatial derivatives.
- the spatial derivatives are critical for indicating the “direction” of the pose change in every iteration of the optimization process. Consequently, these images were specifically created so that they emulate a continuous “surface,” a necessary requirement for valid spatial derivatives. Thus, these spatial derivative images also serve as an efficient “lookup table” during the optimization process.
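Precomputing the spatial-derivative images once and then reading them as lookup tables during optimization can be sketched with numpy. The ramp image below is a toy stand-in for a real score image; only the lookup pattern is the point.

```python
import numpy as np

# A small synthetic score image: a horizontal ramp peaking at column 4
# (a stand-in for a continuous score surface around a contour).
score = np.tile(1.0 - np.abs(np.arange(9) - 4) / 4.0, (5, 1))

# Precompute the spatial-derivative "lookup tables" once.
# During optimization, a projected pixel (r, c) just reads
# grad_y[r, c] and grad_x[r, c] instead of recomputing anything.
grad_y, grad_x = np.gradient(score)

r, c = 2, 2                              # hypothetical projected pixel
g = np.array([grad_y[r, c], grad_x[r, c]])
# g points toward the high-score column (positive x direction here),
# i.e. it indicates the "direction" of the pose change for this iteration
```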
- the next sub-process is to generate expected 3D points 32 from the initial pose estimate and map landmarks.
- the 3D points 32 are generated from the initial pose estimate’s point-of-view. Thus a pose that is closer to a map landmark generates closer 3D points 32.
- the only requirement is that the 3D points 32 must correspond to a landmark’s visibility boundaries.
- this sub-process is agnostic to the algorithm that produced the 3D points 32. This is important as different map representations may require different algorithms for producing these 3D points 32.
- This optimization sub-process requires two internal inputs: The generated score/error images 28, 30 and their corresponding spatial derivative images. Furthermore, the generated expected 3D points 32 of map landmarks are used.
- the expected 3D points 32 are projected onto the generated score/error images 28, 30. Using the image as a “lookup table,” each point is then assigned its error/score metric. Furthermore, the 3D points 32 are projected onto the spatial derivative images. Again using these images as “lookup tables,” the 3D points 32 are also assigned spatial derivatives. Using the 3D information from the 3D points 32, these spatial derivatives are transformed into an error/score Jacobian.
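Turning an image-space spatial derivative into a Jacobian with respect to the 3D point passes through the pinhole projection Jacobian; a sketch under standard pinhole assumptions (the intrinsics K and the sample point are hypothetical, and the full method would further chain this to the six pose parameters):

```python
import numpy as np

def project(K, p_cam):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = p_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

def projection_jacobian(K, p_cam):
    """d(pixel)/d(3D point): the chain-rule factor that converts image-space
    spatial derivatives into derivatives w.r.t. the 3D point."""
    x, y, z = p_cam
    fx, fy = K[0, 0], K[1, 1]
    return np.array([[fx / z, 0.0, -fx * x / z**2],
                     [0.0, fy / z, -fy * y / z**2]])

K = np.array([[500.0, 0.0, 320.0],      # hypothetical intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(K, (0.2, -0.1, 5.0))
J = projection_jacobian(K, (0.2, -0.1, 5.0))
```

An image-space gradient g (2-vector) looked up at pixel uv combines with J as g @ J to give a 3-vector derivative for the point, the building block of the error/score Jacobian described above.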
- the error/scores and the error/score Jacobians are then used by the iterative optimizer to compute an initial pose correction 20.
- This pose correction 20 is then applied to the initial pose estimate.
- This new pose is then used to transform the 3D points 32 so that they project into different image pixels. This results in different error/scores and error/score Jacobians.
- this iterative cycle repeats until the pose corrections 20 stop changing; the final pose correction 20 has been found and is published for use by the assistance system 12.
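The iterative cycle above can be illustrated with a toy translation-only example: a point is repeatedly moved downhill on an error image, using the precomputed derivative images as lookup tables, until the correction stops changing. The quadratic error image, step size, and nearest-pixel lookup are all simplifying assumptions; the real method optimizes a full 6-DoF pose through the camera projection.

```python
import numpy as np

# Toy "error image": squared pixel distance to a target location
# (low error = aligned). Real error images come from semantic contours.
H, W = 40, 40
target = np.array([25.0, 30.0])
rr, cc = np.indices((H, W))
error = (rr - target[0]) ** 2 + (cc - target[1]) ** 2

grad_r, grad_c = np.gradient(error)   # spatial-derivative lookup tables

p = np.array([10.0, 8.0])             # projected point under the initial pose
for _ in range(200):
    r, c = int(round(p[0])), int(round(p[1]))   # nearest-pixel lookup
    g = np.array([grad_r[r, c], grad_c[r, c]])
    delta = -0.25 * g                 # move downhill in error (assumed step)
    p = p + delta
    if np.linalg.norm(delta) < 1e-3:  # corrections stopped changing
        break
# p has converged near the low-error target pixel
```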
- FIG. 3 shows a schematic image 38 according to an embodiment of the method.
- a visualization of the correctly estimated pose correction 20 from real data of an urban scene 40 is presented.
- the underlying image represents the continuous score image 28 generated from the measured semantic contours.
- the points represented by the reference sign 42 represent the expected three-dimensional points 32 projected into the image from the initial pose estimate.
- the points with reference sign 44 are the expected three-dimensional points 32 projected into the image from the corrected pose estimate.
- the points with reference sign 44 overlay the high-score region 46 of the continuous score image 28, indicating that the pose corrections 20 are accurate.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202380018602.7A CN118591821A (en) | 2022-01-25 | 2023-01-20 | Method, computer program product and assistance system for correcting the attitude of a motor vehicle |
DE112023000687.3T DE112023000687T5 (en) | 2022-01-25 | 2023-01-20 | METHOD FOR CORRECTING A POSITION OF A MOTOR VEHICLE, COMPUTER PROGRAM PRODUCT AND ASSISTANCE SYSTEM |
US18/730,952 US20250104273A1 (en) | 2022-01-25 | 2023-01-20 | A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB2200894.0 | 2022-01-25 | ||
GB2200894.0A GB2615073A (en) | 2022-01-25 | 2022-01-25 | A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023144023A1 true WO2023144023A1 (en) | 2023-08-03 |
Family
ID=80507273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2023/051322 WO2023144023A1 (en) | 2022-01-25 | 2023-01-20 | A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system |
Country Status (5)
Country | Link |
---|---|
US (1) | US20250104273A1 (en) |
CN (1) | CN118591821A (en) |
DE (1) | DE112023000687T5 (en) |
GB (1) | GB2615073A (en) |
WO (1) | WO2023144023A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10810445B1 (en) * | 2018-06-29 | 2020-10-20 | Zoox, Inc. | Pipeline with point cloud filtering |
CN109584302B (en) * | 2018-11-27 | 2023-12-01 | 北京旷视科技有限公司 | Camera pose optimization method, device, electronic equipment and computer-readable medium |
CN109544629B (en) * | 2018-11-29 | 2021-03-23 | 南京人工智能高等研究院有限公司 | Camera position and posture determining method and device and electronic equipment |
DE102019206036A1 (en) * | 2019-04-26 | 2020-10-29 | Volkswagen Aktiengesellschaft | Method and device for determining the geographical position and orientation of a vehicle |
US11003945B2 (en) * | 2019-05-22 | 2021-05-11 | Zoox, Inc. | Localization using semantically segmented images |
CN113920198B (en) * | 2021-12-14 | 2022-02-15 | 纽劢科技(上海)有限公司 | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment |
- 2022
- 2022-01-25 GB GB2200894.0A patent/GB2615073A/en active Pending
- 2023
- 2023-01-20 CN CN202380018602.7A patent/CN118591821A/en active Pending
- 2023-01-20 US US18/730,952 patent/US20250104273A1/en active Pending
- 2023-01-20 DE DE112023000687.3T patent/DE112023000687T5/en active Pending
- 2023-01-20 WO PCT/EP2023/051322 patent/WO2023144023A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
XIAO ZHONGYANG ET AL: "Monocular Vehicle Self-localization method based on Compact Semantic Map*", 2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), IEEE, 4 November 2018 (2018-11-04), pages 3083 - 3090, XP033469964, ISBN: 978-1-7281-0321-1, [retrieved on 20181207], DOI: 10.1109/ITSC.2018.8569274 * |
Also Published As
Publication number | Publication date |
---|---|
GB202200894D0 (en) | 2022-03-09 |
US20250104273A1 (en) | 2025-03-27 |
DE112023000687T5 (en) | 2024-11-07 |
CN118591821A (en) | 2024-09-03 |
GB2615073A (en) | 2023-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102221695B1 (en) | Apparatus and method for updating high definition map for autonomous driving | |
CN111263960B (en) | Apparatus and method for updating high definition maps | |
KR102022388B1 (en) | Calibration system and method using real-world object information | |
CN108692719B (en) | Object detection device | |
JP5588812B2 (en) | Image processing apparatus and imaging apparatus using the same | |
CN112805766B (en) | Apparatus and method for updating detailed map | |
US11887336B2 (en) | Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle | |
US10996337B2 (en) | Systems and methods for constructing a high-definition map based on landmarks | |
JP2006053756A (en) | Object detection device | |
JP2006053890A (en) | Obstacle detection apparatus and method therefor | |
KR101544021B1 (en) | Apparatus and method for generating 3d map | |
JP6278791B2 (en) | Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system | |
CN114419592B (en) | Road area identification method, automatic driving control method and device | |
US20230245469A1 (en) | Method and processor circuit for localizing a motor vehicle in an environment during a driving operation and accordingly equipped motor vehicle | |
JP2017078607A (en) | Vehicle position estimation device and program | |
JP2017181476A (en) | Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection | |
CN112424568A (en) | System and method for constructing high-definition map | |
JP2006053754A (en) | Planar detection apparatus and detection method | |
WO2020118619A1 (en) | Method for detecting and modeling of object on surface of road | |
US20210400190A1 (en) | Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method | |
Wong et al. | Single camera vehicle localization using SURF scale and dynamic time warping | |
US20250104273A1 (en) | A method for correcting a pose of a motor vehicle, a computer program product, as well as an assistance system | |
CN117922464A (en) | Method and system for adjusting an information system of a mobile machine | |
JP2006053755A (en) | Moving body moving amount calculation device | |
Li et al. | Automatic surround camera calibration method in road scene for self-driving car |
Legal Events
- 121: the EPO has been informed by WIPO that EP was designated in this application (ref document number 23701657, EP, kind code A1)
- WWE: WIPO information, entry into national phase (ref document number 202380018602.7, CN)
- WWE: WIPO information, entry into national phase (ref document number 112023000687, DE)
- 122: PCT application non-entry in European phase (ref document number 23701657, EP, kind code A1)
- WWP: WIPO information, published in national office (ref document number 18730952, US)