EP3607272B1 - Automated image labeling for a vehicle based on maps - Google Patents
Automated image labeling for a vehicle based on maps
- Publication number
- EP3607272B1 (application EP18705594.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- features
- map
- electronic processor
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3667—Display of a road map
- G01C21/3673—Labelling using text of road map data items, e.g. road names, POI names
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3647—Guidance involving output of stored or live camera images or video streams
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/54—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
Definitions
- High-definition maps that include information regarding objects located within the map are used in driving-assist systems. These maps bridge the gap between state-of-the-art, real-time lane detection devices and the reliability and range requirements of object detection for semi-autonomous driving, which often cannot be met by existing lane detection devices alone.
- A device and system for labeling sight images are known from US 6222583 B1 .
- US 2017/039436 A1 relates to the fusion of RGB images and lidar data for lane classification. Determining lane-level traffic levels based on traffic camera images is described in US 2016/004915 A1 .
- High-definition maps provide, among other things, information to assist in performing vehicle maneuvers.
- The high-definition maps provide information relating to the position and characteristics of objects such as roads, lane markings, and roadway infrastructure.
- High-definition maps also assist a driver by providing information about landmarks and areas of interest in relation to the vehicle.
- Semi-autonomous vehicles may perform some navigation and maneuvering based, at least in part, on the information about object locations within the high-definition maps. For example, the vehicle may use the lane markings to travel within a single lane of traffic, to determine the number of traffic lanes on the roadway, to perform lane changes, and more.
- Deep learning provides a highly accurate technique for training a vehicle system to detect lane markers.
- Deep learning, however, also requires vast amounts of labeled data to properly train the vehicle system.
- In the system described here, a neural network is trained to detect lane markers in camera images without manually labeling any images.
- High-definition maps for automated driving are projected into camera images, and the vehicle system corrects for misalignments due to inaccuracies in localization and coordinate frame transformations.
- The corrections may be performed by calculating the offset between objects or features within the high-definition map and detected objects in the camera images.
- By using object detections in the camera images to refine the projections, labels of objects within the camera images may be accurately determined based on pixel location.
- The projected lane markers are then used to train a fully convolutional network to segment lane markers in images.
- An optional visual quality check may be performed at a much higher rate than manually labeling individual images. For example, a single worker may quality-check 20,000 automatically generated labels within a single day.
- The convolutional network may be trained based only on automatically generated labels. Additionally, the detection of objects within the camera images may be based solely on grayscale mono-camera inputs without any additional information.
- The resulting trained neural network may detect lane markers at distances of approximately 150 meters on a 1-megapixel camera.
- Embodiments provide an automated system that generates labels for objects within camera images.
- The automated system generates labels that identify lane markers within an image based at least in part on map data.
- One embodiment provides a method of navigation for a vehicle using automatically labeled images.
- The method includes loading a high-definition map that includes a first plurality of features into an electronic processor of the vehicle and capturing an image that includes a second plurality of features with a camera of the vehicle.
- The method further includes projecting the map onto the image; detecting, with the electronic processor, the second plurality of features within the image; and aligning the map with the image by aligning the first plurality of features with the second plurality of features.
- The method further includes copying a label describing one of the first plurality of features onto a corresponding one of the second plurality of features to create a labeled image and using the labeled image to assist in navigation of the vehicle.
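- The load-project-align-label flow summarized above can be sketched in a few lines. In this sketch, features are plain 2D image points, the alignment is a simple mean offset, and a nearest-neighbour label copy stands in for the full correspondence step; all of these are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch of the labeling pipeline (load map, project,
# align, copy labels). Features are 2D points; the mean-offset
# alignment assumes map and image features share the same ordering.

def align_map_to_image(map_feats, image_feats):
    """Estimate a 2D offset that aligns projected map features
    with detected image features."""
    dx = sum(ix - mx for (mx, _), (ix, _) in zip(map_feats, image_feats)) / len(map_feats)
    dy = sum(iy - my for (_, my), (_, iy) in zip(map_feats, image_feats)) / len(map_feats)
    return dx, dy

def label_image(map_feats, map_labels, image_feats):
    """Project map features, align them with the detections, and
    copy each map label to the nearest detected feature."""
    dx, dy = align_map_to_image(map_feats, image_feats)
    labeled = {}
    for (mx, my), label in zip(map_feats, map_labels):
        px, py = mx + dx, my + dy          # aligned projection
        nearest = min(image_feats, key=lambda f: (f[0] - px) ** 2 + (f[1] - py) ** 2)
        labeled[nearest] = label
    return labeled

map_feats = [(10.0, 5.0), (20.0, 5.0)]
map_labels = ["lane_marker_left", "lane_marker_right"]
image_feats = [(12.0, 6.0), (22.0, 6.0)]   # detections, shifted by (2, 1)
print(label_image(map_feats, map_labels, image_feats))
```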
- Another embodiment provides a system for navigation of a vehicle using automatically labeled images. The system includes a camera configured to capture an image of a roadway and an electronic processor communicatively connected to the camera.
- The electronic processor is configured to load a high-definition map that includes a first plurality of features and to receive the image.
- The image includes a second plurality of features.
- The electronic processor is further configured to project the map onto the image; detect the second plurality of features within the image; and align the map with the image by aligning the first plurality of features with the second plurality of features.
- The electronic processor is yet further configured to copy a label describing one of the first plurality of features onto a corresponding one of the second plurality of features to create a labeled image and to use the labeled image to assist in navigation of the vehicle.
- A plurality of hardware- and software-based devices may be used to implement the various embodiments.
- Embodiments may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware.
- The electronic-based aspects of the invention may be implemented in software (for example, stored on a non-transitory computer-readable medium) executable by one or more processors.
- "Control units" and "controllers" described in the specification can include one or more electronic processors, one or more memory modules including non-transitory computer-readable media, one or more input/output interfaces, one or more application-specific integrated circuits (ASICs), and various connections (for example, a system bus) connecting the various components.
- Fig. 1 illustrates a vehicle 100 equipped with a system 105 for labeling objects within a camera image according to one embodiment.
- The vehicle 100, although illustrated as a four-wheeled vehicle, encompasses various types and designs.
- For example, the vehicle 100 may be an automobile, a motorcycle, a truck, a bus, a semi-tractor, or another type of vehicle.
- The system 105 includes an electronic control unit (ECU) 110, at least one sensor 115, a map database 120, a vehicle control system 125, and a global positioning system (GPS) 130.
- The electronic control unit 110 may be communicatively connected to the sensor 115, the map database 120, the vehicle control system 125, and the global positioning system 130 via different and various mechanisms or protocols.
- For example, the electronic control unit 110 and the sensor 115 may be directly wired, wired through a communication bus, or wirelessly connected (for example, via a wireless network).
- The electronic control unit 110 is configured to, among other things, receive information from the sensor 115 regarding the area surrounding the vehicle 100, receive high-definition maps with labeled objects from the map database 120, and generate requests and information related to navigation and maneuvering for the vehicle control system 125.
- The electronic control unit 110 may determine the location or position of the vehicle 100 based at least in part on the global positioning system 130.
- For example, the electronic control unit 110 may obtain an initial location via the global positioning system 130 and then optimize or refine the location using the sensor 115 and detected landmarks.
- The sensor 115 may include various types and styles of sensors.
- For example, the sensor 115 may include one or more sensors and sensor arrays that are configured to use radar, lidar, ultrasound, infrared, and other technologies.
- The sensor 115 may also include one or more optical cameras.
- The sensor 115 is positioned with a field of view that includes lane markings on either side of the vehicle 100.
- The sensor 115 is configured to capture images of objects around the vehicle 100.
- In particular, the sensor 115 is configured to capture images of lane markings around the vehicle 100.
- The map database 120 may be of various different types and use various different technologies.
- In one example, the map database 120 is located within the vehicle 100 and is updatable via external communications (for example, via a wide area network).
- In another example, the map database 120 is located externally from the vehicle 100 (for example, at a central server).
- In this case, the vehicle 100 downloads high-definition maps from the map database 120 for use by the electronic control unit 110.
- The vehicle 100 may also upload high-definition maps to the map database 120 that are captured by the sensor 115.
- The map database 120 includes a plurality of high-definition maps that may be generated by the electronic control unit 110, by similar systems of other vehicles, or by manual methods.
- The high-definition maps contained in the map database 120 provide characteristics of the objects within the map, including the positions of lane markings.
- The high-definition maps, along with images from the sensor 115, are used to train the electronic control unit 110 to detect and label objects within the images with high precision, as discussed below.
- The vehicle control system 125 is configured to receive instructions and information from the electronic control unit 110 to aid in navigation and control of the vehicle 100.
- The vehicle control system 125 is configured to perform autonomous driving and various automatic vehicle maneuvers based, at least in part, on signals received from the electronic control unit 110.
- In some embodiments, the vehicle control system 125 is communicatively connected to the sensor 115 and the global positioning system 130 independently of the electronic control unit 110.
- In other embodiments, the vehicle control system 125 and the electronic control unit 110 are incorporated into a single control unit.
- Fig. 2 is a block diagram of the electronic control unit 110 of the system 105 according to one embodiment.
- The electronic control unit 110 includes a plurality of electrical and electronic components that provide power, operational control, and protection to the components and modules within the electronic control unit 110.
- The electronic control unit 110 includes, among other things, an electronic processor 210 (such as a programmable electronic microprocessor, microcontroller, or similar device), a memory 215 (for example, non-transitory, machine-readable memory), and an input/output interface 220.
- In some embodiments, the electronic control unit 110 includes additional, fewer, or different components.
- For example, the electronic control unit 110 may be implemented in several independent electronic control units or modules, each configured to perform specific steps or functions of the electronic control unit 110.
- The electronic processor 210, in coordination with the memory 215, the input/output interface 220, and other components of the electronic control unit 110, is configured to perform the processes and methods discussed herein.
- For example, the electronic processor 210 is configured to retrieve from the memory 215 and execute, among other things, instructions related to receiving camera images from the sensor 115, receiving map data from the map database 120, and generating labeled camera images based on the received camera images and the map data.
- The input/output interface 220 may include one or more input and output modules for communicating with the other components of the system 105 as well as other components of the vehicle 100.
- For example, the input/output interface 220 is configured to communicate with the sensor 115, the map database 120, and the vehicle control system 125.
- Multiple functions, including creating high-definition maps and using high-definition maps to generate labeled camera images, are described as being performed by the electronic processor 210. However, these functions, as well as others described herein, may be performed individually and independently by multiple electronic processors and multiple vehicles.
- For example, one or more vehicles generate a plurality of high-definition maps and upload the high-definition maps to a centralized map database.
- Other vehicles, such as the vehicle 100, download the high-definition maps and generate high-accuracy labeled images using the high-definition maps.
- The labeled camera images may be generated by the electronic processor 210 by training the electronic processor 210 with the high-definition maps for detection and recognition of objects.
- The labeled camera images may then be used by the vehicle control system 125 to navigate and maneuver the vehicle 100.
- The electronic control unit 110 uses the high-definition maps to improve real-time detection of objects by using them to generate large labeled data sets of static objects including, for example, lane markings.
- The electronic control unit 110 can detect static objects at short ranges with high accuracy for mapping operations using, for example, high-accuracy object detection sensors (for example, light detection and ranging (LIDAR)) and can generate high-definition maps.
- The high-definition maps may include various features and positional information, including roadway infrastructure and lane markers. Due to the static nature of the mapped objects, the high-definition maps may be projected into sensor frames (for example, camera image frames) in poor environmental conditions to assist in detection and provide longer detection ranges for the sensor 115.
- Fig. 3 illustrates a method of using automatically labeled images according to one embodiment.
- The method includes using the automatically labeled images to train an object detector (for example, a deep-learning neural network) within the electronic processor 210.
- The method also includes using the automatically labeled images for navigation of the vehicle 100.
- The electronic processor 210 loads a high-definition map that includes a first plurality of features (block 305).
- The electronic processor 210 receives the image including a second plurality of features from the sensor 115 (block 310).
- For example, the electronic processor 210 receives a camera image from a vehicle camera as the vehicle is operating.
- The electronic processor 210 detects the second plurality of features with an object detector (block 315).
- In a first iteration, the object detector may be a simple preprogrammed detector.
- The electronic processor 210 then projects the high-definition map onto the image (block 320).
- The electronic processor 210 aligns the high-definition map with the image by aligning the first plurality of features with the second plurality of features (block 325).
- The electronic processor 210 then copies a label describing one of the first plurality of features onto a corresponding one of the second plurality of features to create a labeled image (block 330).
- The labeled image is then used to assist in navigation of the vehicle 100 (block 335).
- The method illustrated in Fig. 3 is repeatedly performed over many iterations.
- In subsequent iterations (for example, after the first iteration), the object detector is replaced with a neural network, or is simply an untrained neural network that is trained over the iterations.
- Detecting the second plurality of features within each subsequent image is then done with the trained detector (see block 315).
- A labeled image is generated based on the updated detections, which yields labeled images with increased accuracy.
- The neural network is thereby trained to improve object detection and alignment based on the labeled camera images. This process is explained in additional detail below.
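- The iterative bootstrapping described above might be sketched as follows. The brightness-threshold "detector" and the one-parameter "training" step are stand-ins chosen only to make the loop structure concrete; they are not the patented detector or network.

```python
def bootstrap(image, map_labels, iterations=2):
    """Sketch of the Fig. 3 loop: detect, align, copy map labels,
    retrain, repeat. Map-to-image alignment is assumed exact here,
    and 'training' is a one-parameter stand-in for a neural network."""
    threshold = 200                          # simple preprogrammed detector
    for _ in range(iterations):
        detections = {i for i, v in enumerate(image) if v > threshold}
        # detections confirm the projection; the labels themselves
        # are copied from the aligned map
        labeled = detections | set(map_labels)
        # stand-in training: adopt the dimmest labeled marker pixel
        threshold = min(image[i] for i in labeled) - 1
    return {i for i, v in enumerate(image) if v > threshold}

image = [0, 255, 180, 10, 210]               # pixel brightness along a row
map_labels = {1: "lane", 2: "lane", 4: "lane"}
print(sorted(bootstrap(image, map_labels)))  # the retrained detector finds pixel 2
```

Note how the map supplies a label (pixel 2) that the initial detector misses; after "retraining", the detector recovers it, mirroring how map-derived labels extend detection beyond the simple detector's reach.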
- Fig. 4 illustrates an example of a high-definition map projected into a camera image frame. Finding the sensor pose with respect to the high-definition map is performed using localization techniques. However, for the process of labeling the camera images (see block 330), the full time series of measurements may be used in an offline graph optimization.
- The labeling process consists of three generalized steps: 1) coarse pose graph alignment using only GPS and relative motion constraints; 2) lane alignment by adding lane marker constraints to the graph; and 3) pixel-accurate refinement in image space using a reprojection optimization per image, starting from the corresponding graph pose.
- Fig. 5 illustrates the initial coarse solution for a complete track by creating a graph and matching the graph to the high-definition map.
- First, a graph of GPS measurements 505 and six-degrees-of-freedom (6-DOF) relative motion edges 510 connecting pose vertices 515 is built (without the lane marker matches).
- A graph optimization finds the minimum-energy solution by moving the 6-DOF pose vertices around. After this step, the pose vertices may still be inaccurate by up to several meters.
- Next, matches of detected lane markers 520 to all map lane markers are added based on a matching range threshold. All potential matches within the matching range may be kept, as seen on the left side of Fig. 5 .
- Three-dimensional lane marker detections for alignment can be computed with simple techniques, such as a top-hat filter and a stereo camera setup with, for example, a symmetrical local threshold filter.
- In later iterations, this simple detector (for example, an untrained object detector) is replaced by the first neural-net detector for further robustness improvements.
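- As a rough illustration of the kind of top-hat filtering mentioned above, the following 1-D sketch flags a pixel as a lane-marker candidate when it is much brighter than the road surface on both sides. The width and threshold values are arbitrary assumptions, not parameters from the patent.

```python
# Illustrative 1-D top-hat filter along an image row: a pixel responds
# when it is brighter than the pixels a half marker-width away on both
# sides by more than a threshold.

def top_hat(row, width=3, threshold=50):
    half = width // 2 + 1
    hits = []
    for i in range(half, len(row) - half):
        left, right = row[i - half], row[i + half]
        response = row[i] - max(left, right)   # bright spot vs. road on both sides
        if response > threshold:
            hits.append(i)
    return hits

row = [20, 20, 20, 200, 210, 200, 20, 20, 20]  # dark road, bright marker
print(top_hat(row))
```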
- Line segments are extracted from these detections by running a Douglas-Peucker polygonization, and the resulting 3D line segments are added to the corresponding pose vertices 515 for matching.
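- A minimal Douglas-Peucker polygonization, as referenced above, can be written recursively: keep the point farthest from the chord between the endpoints whenever its distance exceeds a tolerance, and recurse on both halves. This sketch works on 2D points for brevity; the same idea extends to the 3D segments used here.

```python
import math

def douglas_peucker(points, eps):
    """Simplify a polyline: if the farthest point from the chord
    exceeds eps, keep it and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    best_d, best_i = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        # perpendicular distance from (x, y) to the chord
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
        if d > best_d:
            best_d, best_i = d, i
    if best_d <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: best_i + 1], eps)
    right = douglas_peucker(points[best_i:], eps)
    return left[:-1] + right               # avoid duplicating the split point

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 2.0), (4, 2.02), (5, 2.0)]
print(douglas_peucker(line, eps=0.1))      # noisy points collapse onto segments
```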
- Due to uncertainty in the GPS measurements 505 with respect to elevation, the matching criterion only takes into account the 2D displacement of lane markers 520 in the plane tangential to the earth ellipsoid.
- An initial matching range of 4 meters may be used to robustly handle significant deviations between the GPS measurements 505 and the map frame.
- The matching range is then iteratively reduced. Outlier matches and their bias are thereby removed, as shown on the right side of Fig. 5 . This approach allows the system to deal with large initial displacements robustly.
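- The iterative reduction of the matching range can be sketched in one dimension as follows. The halving schedule and the toy 1-D marker positions are assumptions for illustration; the 4-meter initial range mirrors the text.

```python
# Sketch of iterative matching-range reduction: start with a generous
# range and keep every candidate match inside it, then shrink the
# range so biased outlier matches fall away.

def iterative_match(detected, map_markers, start_range=4.0, rounds=3):
    rng = start_range
    matches = []
    for _ in range(rounds):
        matches = [(d, m) for d in detected for m in map_markers
                   if abs(d - m) <= rng]
        rng /= 2.0                       # tighten to drop outliers
    return matches

detected = [0.1, 10.2]                   # 1-D marker positions (meters)
map_markers = [0.0, 10.0, 13.5]          # 13.5 is a spurious neighbour
print(iterative_match(detected, map_markers))
```

In the first round the spurious marker at 13.5 m also matches; after the range shrinks below its 3.3 m displacement, only the two true correspondences survive.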
- Fig. 6 illustrates an example of an initial displacement of the map lane markers 520 relative to the detected lane markers.
- The initial displacement is exaggerated for clarity of illustration.
- The actual initial displacement may be significantly smaller (for example, appearing at approximately 50 meters from the vehicle 100).
- In Fig. 6 , the remaining displacement after graph alignment between the projected lane markers 605 and the detected lane markers 520 from the simple object detector is shown.
- The perpendicular average distance between line segments is used as a matching criterion for a non-linear optimization that solves for the pixel-accurate corrected 6-DOF camera pose.
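- One plausible reading of the perpendicular-average-distance criterion, sketched here for 2D segments, averages the perpendicular distances of one segment's endpoints to the line through the other. This is an assumption for illustration; the patent does not spell out the exact formula.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(bx - ax, by - ay)

def perpendicular_average_distance(seg1, seg2):
    """Average perpendicular distance of seg1's endpoints to seg2's line."""
    a, b = seg1
    return 0.5 * (point_line_distance(a, *seg2) + point_line_distance(b, *seg2))

seg_detected = ((0.0, 1.0), (10.0, 3.0))   # detected lane marker segment
seg_projected = ((0.0, 0.0), (10.0, 0.0))  # projected map segment
print(perpendicular_average_distance(seg_detected, seg_projected))
```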
- A reprojection optimization is performed with line segments in image space.
- The 3D lane marker map is projected into a camera image using the vertex pose from the previously optimized graph as initialization.
- Inaccurate 6-DOF motion constraints and small roll/pitch deviations will keep the initial pose from being sufficient for our purposes.
- Line segments in image space are repeatedly matched based on overlap and perpendicular average distance.
- The corrected 6-DOF camera pose is determined using a non-linear Levenberg-Marquardt optimization. After each iteration, the matching distance threshold is halved to successively remove bias from outlier matches. In one example, a 32-pixel matching range is selected to include all potential inliers, and a 4-pixel matching range is selected to remove the majority of outlier matches. Once the poses are refined, all map elements may be precisely projected to generate high-quality image labels.
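- A hedged sketch of this refinement loop follows: a 2D translation solved in closed form stands in for the full 6-DOF Levenberg-Marquardt solve, while the 32-to-4-pixel halving schedule follows the text. The point pairs and the outlier are illustrative.

```python
# Refinement sketch: match within a range, solve for the correcting
# offset (a 2D translation substitutes for the 6-DOF LM solve), then
# halve the matching range to shed outlier matches.

def refine_offset(projected, detected, start_range=32.0, end_range=4.0):
    ox = oy = 0.0
    rng = start_range
    while rng >= end_range:
        pairs = [(p, d) for p in projected for d in detected
                 if max(abs(p[0] + ox - d[0]), abs(p[1] + oy - d[1])) <= rng]
        if pairs:
            # closed-form least-squares translation over current matches
            ox = sum(d[0] - p[0] for p, d in pairs) / len(pairs)
            oy = sum(d[1] - p[1] for p, d in pairs) / len(pairs)
        rng /= 2.0                        # halve the matching range
    return ox, oy

projected = [(100.0, 50.0), (300.0, 52.0)]                 # map projections (pixels)
detected = [(106.0, 50.0), (306.0, 52.0), (500.0, 400.0)]  # last one is an outlier
print(refine_offset(projected, detected))
```

The outlier at (500, 400) never falls inside the matching range, so the recovered offset comes only from the two true correspondences.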
- The electronic processor 210 classifies every pixel in the image as either belonging to a lane marker or not. This approach does not necessarily require precise labels and comes with a few advantages. Using this approach, the electronic processor 210 is able to generate probability maps over the image without losing information such as lane marker width. The electronic processor 210 does not need to make assumptions about the number of traffic lanes or the type of lane markers (for example, solid or dashed). Based on the neural network's pixelwise output, it is still possible to model the output using popular approaches such as splines.
- Lane marker detection may be addressed as a semantic segmentation problem by employing fully convolutional neural networks. For this, a fairly small, yet highly accurate network may be used. The network may be run in real-time for every incoming camera image from the sensor 115.
- Fig. 7 illustrates detection of objects by the neural network within the electronic processor 210 after it has been trained, using the method illustrated in Fig. 3, on the automatically generated labeled images.
- As shown, the detected lane markers 605 closely match the actual lane markers in the camera image.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Radar, Positioning & Navigation (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Automation & Control Theory (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Library & Information Science (AREA)
- Biodiversity & Conservation Biology (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
- Image Analysis (AREA)
Claims (16)
- A navigation method for a vehicle (100) using automatically labeled images, the method comprising: loading a high-definition map that includes a first plurality of features into an electronic processor (210) of the vehicle (100); capturing an image that includes a second plurality of features with a camera of the vehicle (100); projecting the map onto the image; detecting, with the electronic processor (210), the second plurality of features in the image; aligning the map with the image by aligning the first plurality of features with the second plurality of features; copying a label describing one of the first plurality of features onto a corresponding feature of the second plurality of features to create a labeled image; and using the labeled image to assist in navigation of the vehicle (100).
- The method according to claim 1, further comprising training an object detector within the electronic processor (210) with the labeled image.
- The method according to claim 2, further comprising, after training the object detector, detecting the second plurality of features in a second image with the object detector to create an updated labeled image.
- The method according to claim 3, further comprising correcting an alignment of the map with the second image based on the updated labeled image.
- The method according to claim 4, further comprising training the object detector within the electronic processor (210) with the second image.
- The method according to claim 1, wherein the first plurality of features and the second plurality of features are lane markers.
- The method according to claim 1, wherein loading the map that includes the first plurality of features comprises receiving the map from a central database (120).
- The method according to claim 1, wherein loading the map into the electronic processor (210) comprises generating the map based on input from a sensor (115) of the vehicle (100).
- A navigation system for a vehicle (100) using automatically labeled images, the system comprising: a camera configured to capture an image of a roadway; and an electronic processor (210) communicatively connected to the camera, the electronic processor (210) being configured to: load a high-definition map that includes a first plurality of features, receive the image, the image including a second plurality of features, project the map onto the image, detect the second plurality of features in the image, align the map with the image by aligning the first plurality of features with the second plurality of features, copy a label describing one of the first plurality of features onto a corresponding feature of the second plurality of features to create a labeled image, and use the labeled image to assist in navigation of the vehicle (100).
- The system according to claim 9, wherein the electronic processor (210) is further configured to train an object detector within the electronic processor (210) with the labeled image.
- The system according to claim 10, wherein the electronic processor (210) is further configured to, after training the object detector, detect the second plurality of features in a second image with the object detector to create an updated labeled image.
- The system according to claim 11, wherein the electronic processor (210) is further configured to correct an alignment of the map with the second image based on the updated labeled image.
- The system according to claim 12, wherein the electronic processor (210) is further configured to train the object detector within the electronic processor (210) with the updated labeled image.
- The system according to claim 9, wherein the first plurality of features and the second plurality of features are lane markers.
- The system according to claim 9, wherein the electronic processor (210) is further configured to receive the map from a central database (120).
- The system according to claim 9, wherein the electronic processor (210) is further configured to generate the map based on input from a sensor (115) of the vehicle (100).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/477,993 US10209089B2 (en) | 2017-04-03 | 2017-04-03 | Automated image labeling for vehicles based on maps |
PCT/EP2018/053476 WO2018184757A1 (fr) | 2017-04-03 | 2018-02-13 | Étiquetage automatisé d'image pour véhicule sur la base de cartes |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3607272A1 EP3607272A1 (fr) | 2020-02-12 |
EP3607272B1 true EP3607272B1 (fr) | 2022-04-27 |
Family
ID=61231241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18705594.2A Active EP3607272B1 (fr) | 2017-04-03 | 2018-02-13 | Étiquetage automatisé d'image pour véhicule sur la base de cartes |
Country Status (5)
Country | Link |
---|---|
US (1) | US10209089B2 (fr) |
EP (1) | EP3607272B1 (fr) |
KR (1) | KR102583989B1 (fr) |
CN (1) | CN110462343B (fr) |
WO (1) | WO2018184757A1 (fr) |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170270406A1 (en) * | 2016-03-18 | 2017-09-21 | Qualcomm Incorporated | Cloud-based processing using local device provided sensor data and labels |
US10309778B2 (en) * | 2016-12-30 | 2019-06-04 | DeepMap Inc. | Visual odometry and pairwise alignment for determining a position of an autonomous vehicle |
MX2019014559A (es) * | 2017-06-07 | 2020-02-07 | Nissan Motor | Metodo y dispositivo de correccion de datos de mapa. |
US10816354B2 (en) | 2017-08-22 | 2020-10-27 | Tusimple, Inc. | Verification module system and method for motion-based lane detection with multiple sensors |
US10565457B2 (en) | 2017-08-23 | 2020-02-18 | Tusimple, Inc. | Feature matching and correspondence refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map |
US10762673B2 (en) | 2017-08-23 | 2020-09-01 | Tusimple, Inc. | 3D submap reconstruction system and method for centimeter precision localization using camera-based submap and LiDAR-based global map |
US10649458B2 (en) | 2017-09-07 | 2020-05-12 | Tusimple, Inc. | Data-driven prediction-based system and method for trajectory planning of autonomous vehicles |
US10953881B2 (en) | 2017-09-07 | 2021-03-23 | Tusimple, Inc. | System and method for automated lane change control for autonomous vehicles |
US10953880B2 (en) | 2017-09-07 | 2021-03-23 | Tusimple, Inc. | System and method for automated lane change control for autonomous vehicles |
CN112004729B (zh) | 2018-01-09 | 2023-12-01 | 图森有限公司 | 具有高冗余的车辆的实时远程控制 |
CN115834617A (zh) | 2018-01-11 | 2023-03-21 | 图森有限公司 | 用于自主车辆操作的监视系统 |
US11009365B2 (en) | 2018-02-14 | 2021-05-18 | Tusimple, Inc. | Lane marking localization |
US11009356B2 (en) | 2018-02-14 | 2021-05-18 | Tusimple, Inc. | Lane marking localization and fusion |
US10685244B2 (en) | 2018-02-27 | 2020-06-16 | Tusimple, Inc. | System and method for online real-time multi-object tracking |
CN110378185A (zh) | 2018-04-12 | 2019-10-25 | 北京图森未来科技有限公司 | 一种应用于自动驾驶车辆的图像处理方法、装置 |
CN116129376A (zh) | 2018-05-02 | 2023-05-16 | 北京图森未来科技有限公司 | 一种道路边缘检测方法和装置 |
EP3849868A4 (fr) | 2018-09-13 | 2022-10-12 | Tusimple, Inc. | Procédés et systèmes de conduite sans danger à distance |
KR20200046437A (ko) * | 2018-10-24 | 2020-05-07 | 삼성전자주식회사 | 영상 및 맵 데이터 기반 측위 방법 및 장치 |
US10685252B2 (en) * | 2018-10-30 | 2020-06-16 | Here Global B.V. | Method and apparatus for predicting feature space decay using variational auto-encoder networks |
US10942271B2 (en) | 2018-10-30 | 2021-03-09 | Tusimple, Inc. | Determining an angle between a tow vehicle and a trailer |
CN111198890A (zh) * | 2018-11-20 | 2020-05-26 | 北京图森智途科技有限公司 | 地图更新方法、路侧设备、车载装置、车辆和系统 |
US11501104B2 (en) | 2018-11-27 | 2022-11-15 | Here Global B.V. | Method, apparatus, and system for providing image labeling for cross view alignment |
CN111319629B (zh) | 2018-12-14 | 2021-07-16 | 北京图森智途科技有限公司 | 一种自动驾驶车队的组队方法、装置及系统 |
US10922845B2 (en) * | 2018-12-21 | 2021-02-16 | Here Global B.V. | Apparatus and method for efficiently training feature detectors |
IL270540A (en) | 2018-12-26 | 2020-06-30 | Yandex Taxi Llc | Method and system for training a machine learning algorithm to recognize objects from a distance |
US10423840B1 (en) * | 2019-01-31 | 2019-09-24 | StradVision, Inc. | Post-processing method and device for detecting lanes to plan the drive path of autonomous vehicle by using segmentation score map and clustering map |
US10540572B1 (en) * | 2019-01-31 | 2020-01-21 | StradVision, Inc. | Method for auto-labeling training images for use in deep learning network to analyze images with high precision, and auto-labeling device using the same |
US10373004B1 (en) * | 2019-01-31 | 2019-08-06 | StradVision, Inc. | Method and device for detecting lane elements to plan the drive path of autonomous vehicle by using a horizontal filter mask, wherein the lane elements are unit regions including pixels of lanes in an input image |
CN110954112B (zh) * | 2019-03-29 | 2021-09-21 | 北京初速度科技有限公司 | 一种导航地图与感知图像匹配关系的更新方法和装置 |
US11823460B2 (en) | 2019-06-14 | 2023-11-21 | Tusimple, Inc. | Image fusion for autonomous vehicle operation |
US11527012B2 (en) | 2019-07-03 | 2022-12-13 | Ford Global Technologies, Llc | Vehicle pose determination |
KR20210034253A (ko) | 2019-09-20 | 2021-03-30 | 삼성전자주식회사 | 위치 추정 장치 및 방법 |
CN112667837A (zh) * | 2019-10-16 | 2021-04-16 | 上海商汤临港智能科技有限公司 | 图像数据自动标注方法及装置 |
KR20220088710A (ko) * | 2019-11-06 | 2022-06-28 | 엘지전자 주식회사 | 차량용 디스플레이 장치 및 그 제어 방법 |
US10867190B1 (en) | 2019-11-27 | 2020-12-15 | Aimotive Kft. | Method and system for lane detection |
CN111178161B (zh) * | 2019-12-12 | 2022-08-23 | 重庆邮电大学 | 一种基于fcos的车辆追踪方法及系统 |
KR102270827B1 (ko) * | 2020-02-21 | 2021-06-29 | 한양대학교 산학협력단 | 360도 주변 물체 검출 및 인식 작업을 위한 다중 센서 데이터 기반의 융합 정보 생성 방법 및 장치 |
EP3893150A1 (fr) | 2020-04-09 | 2021-10-13 | Tusimple, Inc. | Techniques d'estimation de pose de caméra |
CN113688259A (zh) * | 2020-05-19 | 2021-11-23 | 阿波罗智联(北京)科技有限公司 | 导航目标的标注方法及装置、电子设备、计算机可读介质 |
CN111724441A (zh) * | 2020-05-28 | 2020-09-29 | 上海商汤智能科技有限公司 | 图像标注方法及装置、电子设备及存储介质 |
AU2021203567A1 (en) | 2020-06-18 | 2022-01-20 | Tusimple, Inc. | Angle and orientation measurements for vehicles with multiple drivable sections |
CN111735473B (zh) * | 2020-07-06 | 2022-04-19 | 无锡广盈集团有限公司 | 一种能上传导航信息的北斗导航系统 |
CN112597328B (zh) * | 2020-12-28 | 2022-02-22 | 推想医疗科技股份有限公司 | 标注方法、装置、设备及介质 |
DE102021205827A1 (de) | 2021-06-09 | 2022-12-15 | Robert Bosch Gesellschaft mit beschränkter Haftung | Automatisierung in einem Fahrzeug anhand informationsminimierter Sensordaten |
CN114128673B (zh) * | 2021-12-14 | 2022-09-23 | 仲恺农业工程学院 | 基于混合深度神经网络的肉鸽精准饲喂方法 |
KR20240048297A (ko) | 2022-10-06 | 2024-04-15 | 주식회사 엘지유플러스 | 도로 객체 검출을 위한 학습 방법, 장치 및 시스템 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030110157A1 (en) * | 2001-10-02 | 2003-06-12 | Nobuhiro Maki | Exclusive access control apparatus and method |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3053172B2 (ja) * | 1997-07-11 | 2000-06-19 | 日本電信電話株式会社 | 距離参照型景観ラベリング装置およびシステム |
US6222583B1 (en) * | 1997-03-27 | 2001-04-24 | Nippon Telegraph And Telephone Corporation | Device and system for labeling sight images |
CN101641610A (zh) * | 2007-02-21 | 2010-02-03 | 电子地图北美公司 | 用于包含绝对及相对坐标的车辆导航及领航的系统及方法 |
US8605947B2 (en) * | 2008-04-24 | 2013-12-10 | GM Global Technology Operations LLC | Method for detecting a clear path of travel for a vehicle enhanced by object detection |
US9052207B2 (en) | 2009-10-22 | 2015-06-09 | Tomtom Polska Sp. Z O.O. | System and method for vehicle navigation using lateral offsets |
US9165196B2 (en) * | 2012-11-16 | 2015-10-20 | Intel Corporation | Augmenting ADAS features of a vehicle with image processing support in on-board vehicle platform |
DE102013101639A1 (de) | 2013-02-19 | 2014-09-04 | Continental Teves Ag & Co. Ohg | Verfahren und Vorrichtung zur Bestimmung eines Fahrbahnzustands |
US9129161B2 (en) | 2013-05-31 | 2015-09-08 | Toyota Jidosha Kabushiki Kaisha | Computationally efficient scene classification |
US9435653B2 (en) | 2013-09-17 | 2016-09-06 | GM Global Technology Operations LLC | Sensor-aided vehicle positioning system |
US9734399B2 (en) | 2014-04-08 | 2017-08-15 | The Boeing Company | Context-aware object detection in aerial photographs/videos using travel path metadata |
US9747505B2 (en) | 2014-07-07 | 2017-08-29 | Here Global B.V. | Lane level traffic |
US9569693B2 (en) | 2014-12-31 | 2017-02-14 | Here Global B.V. | Method and apparatus for object identification and location correlation based on received images |
US9710714B2 (en) | 2015-08-03 | 2017-07-18 | Nokia Technologies Oy | Fusion of RGB images and LiDAR data for lane classification |
- 2017
- 2017-04-03 US US15/477,993 patent/US10209089B2/en active Active
- 2018
- 2018-02-13 CN CN201880022359.5A patent/CN110462343B/zh active Active
- 2018-02-13 EP EP18705594.2A patent/EP3607272B1/fr active Active
- 2018-02-13 WO PCT/EP2018/053476 patent/WO2018184757A1/fr unknown
- 2018-02-13 KR KR1020197028921A patent/KR102583989B1/ko active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
WO2018184757A1 (fr) | 2018-10-11 |
KR20190137087A (ko) | 2019-12-10 |
US10209089B2 (en) | 2019-02-19 |
KR102583989B1 (ko) | 2023-10-05 |
CN110462343B (zh) | 2023-08-18 |
EP3607272A1 (fr) | 2020-02-12 |
CN110462343A (zh) | 2019-11-15 |
US20180283892A1 (en) | 2018-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3607272B1 (fr) | Étiquetage automatisé d'image pour véhicule sur la base de cartes | |
KR102483649B1 (ko) | 차량 위치 결정 방법 및 차량 위치 결정 장치 | |
EP3732657B1 (fr) | Localisation de véhicule | |
CN109945858B (zh) | 用于低速泊车驾驶场景的多传感融合定位方法 | |
US10650253B2 (en) | Method for estimating traffic lanes | |
CN111856491B (zh) | 用于确定车辆的地理位置和朝向的方法和设备 | |
CN110530372B (zh) | 定位方法、路径确定方法、装置、机器人及存储介质 | |
Alonso et al. | Accurate global localization using visual odometry and digital maps on urban environments | |
US10552982B2 (en) | Method for automatically establishing extrinsic parameters of a camera of a vehicle | |
CN107167826B (zh) | 一种自动驾驶中基于可变网格的图像特征检测的车辆纵向定位系统及方法 | |
JP6454726B2 (ja) | 自車位置推定装置 | |
US20200353914A1 (en) | In-vehicle processing device and movement support system | |
US10223598B2 (en) | Method of generating segmented vehicle image data, corresponding system, and vehicle | |
CN109426800B (zh) | 一种车道线检测方法和装置 | |
US10963708B2 (en) | Method, device and computer-readable storage medium with instructions for determining the lateral position of a vehicle relative to the lanes of a road | |
KR20190097326A (ko) | 신호기 인식 장치 및 신호기 인식 방법 | |
Shunsuke et al. | GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon | |
WO2019208101A1 (fr) | Dispositif d'estimation de position | |
JP6932058B2 (ja) | 移動体の位置推定装置及び位置推定方法 | |
JP2018048949A (ja) | 物体識別装置 | |
US20220355818A1 (en) | Method for a scene interpretation of an environment of a vehicle | |
CN113405555B (zh) | 一种自动驾驶的定位传感方法、系统及装置 | |
US20200132471A1 (en) | Position Estimating Device | |
KR102195040B1 (ko) | 이동식 도면화 시스템 및 모노카메라를 이용한 도로 표지 정보 수집 방법 | |
CN113390422B (zh) | 汽车的定位方法、装置及计算机存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20191104 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ROBERT BOSCH GMBH |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06N 3/04 20060101ALI20211118BHEP Ipc: G06K 9/46 20060101ALI20211118BHEP Ipc: G06F 16/58 20190101ALI20211118BHEP Ipc: G06F 16/54 20190101ALI20211118BHEP Ipc: G06F 16/29 20190101ALI20211118BHEP Ipc: G06N 3/08 20060101ALI20211118BHEP Ipc: G06K 9/00 20060101ALI20211118BHEP Ipc: G06K 9/62 20060101ALI20211118BHEP Ipc: G01C 21/36 20060101AFI20211118BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06V 10/44 20220101ALI20220112BHEP Ipc: G06V 20/56 20220101ALI20220112BHEP Ipc: G06K 9/62 20060101ALI20220112BHEP Ipc: G06K 9/00 20060101ALI20220112BHEP Ipc: G06N 3/08 20060101ALI20220112BHEP Ipc: G06F 16/29 20190101ALI20220112BHEP Ipc: G06F 16/54 20190101ALI20220112BHEP Ipc: G06F 16/58 20190101ALI20220112BHEP Ipc: G06N 3/04 20060101ALI20220112BHEP Ipc: G01C 21/36 20060101AFI20220112BHEP |
|
INTG | Intention to grant announced |
Effective date: 20220128 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018034483 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1487252 Country of ref document: AT Kind code of ref document: T Effective date: 20220515 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20220427 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1487252 Country of ref document: AT Kind code of ref document: T Effective date: 20220427 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220829 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220727 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220728 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220727 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220827 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018034483 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
26N | No opposition filed |
Effective date: 20230130 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230217 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20230426 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20230228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230213 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230228 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230228 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220427 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230213 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230228 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240222 Year of fee payment: 7 |