GB2617866A - Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method for detecting an intersection, a training processing unit, and an intersection detection computing unit


Info

Publication number
GB2617866A
Authority
GB
United Kingdom
Prior art keywords
intersection
attribute
image
decision tree
classes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2206225.1A
Other versions
GB2617866A9 (en)
GB202206225D0 (en)
Inventor
Balanescu Adrian-Gabriel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Continental Autonomous Mobility Germany GmbH
Original Assignee
Continental Autonomous Mobility Germany GmbH
Continental Automotive Romania SRL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Autonomous Mobility Germany GmbH and Continental Automotive Romania SRL
Publication of GB202206225D0
Publication of GB2617866A
Publication of GB2617866A9
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/435 Computation of moments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection

Abstract

The invention refers to a method of training a decision tree model for detecting an intersection from one or more images. The training data includes a training dataset comprising Hu moments characteristics of a set of top-view projected, binary segmented images, each comprising attribute-classified pixels of road classes, the road classes comprising four classes of intersection, together with intersection ground truth labels associated to each of the Hu moments characteristics; the decision tree model classifies the attribute-classified pixels of the road classes in one of the four classes of intersection. The invention further refers to a method for detecting an intersection from one or more raw images using the trained decision tree model by: associating a set of pixel luminous intensity values to each raw image; generating and projecting a binary segmented image comprising attribute-classified pixels of road classes; computing raw image Hu moments characteristics as a weighted average of the raw intensity value associated to each attribute-classified pixel of the projected binary segmented image, generating a raw dataset of seven raw image Hu moments characteristics; classifying the attribute-classified pixels of road classes of the projected binary segmented image in one of the four classes of intersection, thereby detecting an intersection; and temporarily storing an integer value encoding the class of intersection corresponding to the detected intersection, said integer value to be used by an advanced driving assistance system ADAS processing chain.

Description

Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method for detecting an intersection, a training processing unit, and an intersection detection computing unit
The present invention relates to advanced driver assistance systems ADAS and to the prediction of road intersections. In particular, the invention relates to a computer implemented method for training a decision tree model for detecting an intersection, a computer implemented method for detecting an intersection from one or more images acquired by an image acquisition software component of a camera arrangement of an ego vehicle, a training processing unit, and an intersection detection computing unit.
Throughout this invention, the term "ego vehicle" stands for a road or land or agricultural vehicle equipped with advanced driver assistance systems ADAS technology. The ego vehicle may be autonomous or manned.
Advanced driver assistance systems ADAS cameras, hereafter alternatively called cameras, are increasingly used in the automotive industry to provide the advanced driver assistance systems ADAS processing chain with quick and accurate detection and recognition of objects and persons in the exterior of the ego vehicle. The information captured by the cameras is then analysed by the ADAS processing chain and used to trigger a response by the vehicle, serving a wide range of functions which are outside the scope of the invention.
Vehicle crashes have been considerably reduced since the introduction and progress of advanced driver assistance systems ADAS, because these systems contribute substantially to the avoidance of a significant part of collisions by predicting them and taking action.
A part of these crashes occurs at intersections, that is, when an ego vehicle collides with a target vehicle.
Therefore, there has been a constant preoccupation with detecting intersections based on the images provided by the cameras and, in some cases, with using elements of artificial intelligence to improve the detection of the intersections.
Most methods for detecting intersections are based either on images from a camera or on point clouds from a lidar. Usually, a machine learning classifier (e.g. an SVM) based on certain features of the image or point cloud representation is used to classify whether an intersection is present in the currently analysed data, as taught by Christopher Rasmussen [1].
Other methods for detecting intersections start from recognition techniques of visual patterns and characters based on a theory of two-dimensional moment invariants for planar geometric figures, such as the intersection, as taught by Ming-Kuei Hu [2].
The known methods for detecting intersections using artificial intelligence use trained models that are very sensitive to the features used for the classification of the intersections, thus the reliability of the classification is low.
Also, the models of known solutions, and in particular of the solutions implying classifiers based on convolutional neural networks, are computationally intensive, usually requiring a dedicated hardware accelerator in order to run in real-time.
The technical problem to be solved is to find a robust training model to be used for classifying the types of intersection as they appear in the images acquired from the camera, and to use said robust training model to detect the intersections.
In order to overcome the disadvantages of the prior art, in a first aspect of the invention a computer implemented method is presented for training a decision tree model for detecting an intersection from one or more images acquired by an image acquisition software component of a camera arrangement of an ego vehicle, said camera arrangement acquiring forward-facing images of a road, the road including an intersection.
The method comprises the following steps carried out by a training processing unit. First, acquiring training data, wherein the training data includes: a training dataset comprising Hu moments characteristics of a set of top-view projected, binary segmented images, the set of top-view projected, binary segmented images acquired and processed by an image pre-processing component of the camera arrangement, each top-view projected, binary segmented image comprising attribute-classified pixels of road classes, the road classes comprising four classes of intersection, defined as follows: (i) no intersection, (ii) intersection on the right side of the ego vehicle, (iii) intersection on the left side of the ego vehicle, (iv) intersection on the right side and left side of the ego vehicle, and associated intersection ground truth labels to each of the Hu moments characteristics. The associated intersection ground truth labels comprise the four classes of intersection for each of the attribute-classified pixels of road classes. Second, training the decision tree model for defining nested conditional statements data of said decision tree model to classify the attribute-classified pixels of the road classes in one of the four classes of intersection, wherein the nested conditional statements data comprises a respective threshold for each of the Hu moments characteristics, each respective threshold being associated to each of the four classes of intersection. Further on, generating labelled top-view projected, binary segmented images. The labelled top-view projected, binary segmented images comprise classified pixels corresponding to the four classes of intersection: (i) no intersection attribute-classified pixels for the no intersection class, (ii) intersection from right attribute-classified pixels for the intersection on the right side of the ego vehicle class, (iii) intersection from left attribute-classified pixels for the intersection on the left side of the ego vehicle class, (iv) intersection from both sides attribute-classified pixels for the intersection on the right side and left side of the ego vehicle class. Last, generating a trained decision tree model trained to classify the attribute-classified pixels of the road classes in one of the four classes of intersection, and sending the trained decision tree model to a decision tree processing unit of the ego vehicle.
In a second aspect of the invention, a computer implemented method is presented for detecting an intersection from one or more raw images provided by a camera arrangement of an ego vehicle using the decision tree model trained by the computer implemented method of the first aspect of the invention, said camera arrangement acquiring forward-facing images.
Said method comprises the following steps, carried out for each raw image. First, acquiring the raw image by an image acquisition software component of the camera arrangement, processing said raw image by means of an image pre-processing component to determine a raw pixel luminous intensity value associated to each pixel of the raw image, and associating a set of pixel luminous intensity values to each raw image. Second, semantic segmentation of said raw image by means of the image pre-processing component by: assigning to each pixel of said raw image an intensity-associated classifying attribute, the intensity-associated classifying attribute determined based on the raw intensity value associated to each pixel; and generating a binary segmented image, the binary segmented image comprising attribute-classified pixels from the attribute-classified pixels of road classes. Third, projecting the binary segmented image in a top-down view by means of the image pre-processing component, generating a projected binary segmented image in an XoY plane view comprising the attribute-classified pixels of road classes. Fourth, processing of the projected binary segmented image by the image pre-processing component, computing raw image Hu moments characteristics as a weighted average of the raw intensity value associated to each attribute-classified pixel of the projected binary segmented image and generating a raw dataset of seven raw image Hu moments characteristics of the projected binary segmented image comprising the attribute-classified pixels of road classes. Fifth, classifying, by a decision tree processing unit using the trained decision tree model, the attribute-classified pixels of road classes of the projected binary segmented image in one of the four classes of intersection. Further on, detecting an intersection by generating labelled top-view projected, binary segmented images, the labelled top-view projected, binary segmented images comprising classified pixels corresponding to the four classes of intersection: (i) no intersection attribute-classified pixels for the no intersection class, (ii) intersection from right attribute-classified pixels for the intersection on the right side of the ego vehicle class, (iii) intersection from left attribute-classified pixels for the intersection on the left side of the ego vehicle class, (iv) intersection from both sides attribute-classified pixels for the intersection on the right side and left side of the ego vehicle class. Last, temporarily storing, by the decision tree processing unit, an integer value encoding the class of intersection corresponding to the detected intersection, said integer value to be used by an advanced driving assistance system ADAS processing chain.
In a third aspect of the invention, a training processing unit is presented, comprising a communication interface arranged to receive a training dataset, at least one processor and at least one non-volatile memory, the training processing unit being configured for training a decision tree model for detecting an intersection from a raw image acquired by an image acquisition software component of a camera arrangement of an ego vehicle by carrying out the steps of the computer implemented method for training a decision tree model of any preferred embodiment.
In a fourth aspect of the invention, an intersection detection computing unit of an ego vehicle is presented, the intersection detection computing unit comprising a camera arrangement and a decision tree processing unit, the camera arrangement comprising an image acquisition software component configured to acquire one or more raw images, and an image pre-processing component, the intersection detection computing unit being configured to detect an intersection from one or more raw images by carrying out the steps of the computer implemented method for detecting an intersection.
In a fifth aspect of the invention, a first non-transitory computer-readable storage medium is presented, encoded with a first computer program, the first computer program comprising instructions executable by one or more processors of the training processing unit which, upon such execution by the training processing unit, cause the one or more processors to perform operations of the computer-implemented method for training a decision tree model of any preferred embodiment.
In a sixth aspect of the invention, a second non-transitory computer-readable storage medium is presented, encoded with a second computer program, the second computer program comprising instructions executable by one or more processors of the intersection detection computing unit which, upon such execution by the intersection detection computing unit, cause the one or more processors to perform operations of the computer-implemented method for detecting an intersection.
In a seventh aspect of the invention, a trained decision tree model for detecting an intersection is presented, trained according to the computer-implemented method for training a decision tree model for detecting an intersection.
Further advantageous embodiments are the subject matter of the dependent claims.
The main advantages of using the invention are as follows:
- the method for training the decision tree model, as well as the method for detecting an intersection, is invariant with respect to translation, scale and rotation of the images; therefore the detection of the intersection using the trained decision tree model of the invention is more robust and reliable (a short illustrative check follows this list);
- the method for training the decision tree model, as well as the method for detecting an intersection, is not computation-intensive, making the trained decision tree model of the invention suitable for resource-constrained devices.
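The invariance claim can be verified with a short, self-contained experiment. The sketch below is illustrative only and not material from the patent: it builds a synthetic binary shape with OpenCV, applies a rotation and a rescaling, and shows that the leading Hu moments stay nearly unchanged.

```python
# Hu moments stay (nearly) unchanged under rotation, scaling and translation.
import cv2
import numpy as np

img = np.zeros((200, 200), np.uint8)
cv2.rectangle(img, (60, 80), (140, 120), 255, -1)  # synthetic binary blob

M = cv2.getRotationMatrix2D((100, 100), 30, 0.5)   # rotate 30 deg, scale 0.5x
warped = cv2.warpAffine(img, M, (200, 200))

hu_a = cv2.HuMoments(cv2.moments(img, binaryImage=True)).flatten()
hu_b = cv2.HuMoments(cv2.moments(warped, binaryImage=True)).flatten()
print(hu_a[:2])  # leading invariants of the original shape ...
print(hu_b[:2])  # ... closely match those of the transformed shape
```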
Figures
Further special features and advantages of the present invention can be taken from the following description of an advantageous embodiment by way of the accompanying drawings:
Fig. 1 illustrates the computer implemented method for detecting an intersection from one or more raw images provided by a camera arrangement of an ego vehicle using the decision tree model trained by the computer implemented method.
Fig. 2 illustrates a projected binary segmented image in an XoY plane view comprising the attribute-classified pixels of road classes.
Detailed description
In a first aspect of the invention, a computer implemented method is presented for training a decision tree model for detecting an intersection from one or more images. Said images are acquired by an image acquisition software component of a camera arrangement of an ego vehicle, said camera arrangement acquiring forward-facing images of a road, the road including an intersection.
The method comprises two steps carried out by a training processing unit: a first step of acquiring training data and a second step of training the decision tree model.
In the first step, training data is acquired. The training data includes a training dataset. The dataset comprises Hu moments characteristics TSHU of a set of top-view projected, binary segmented images PrBinIMGn and associated intersection ground truth labels to each of the Hu moments characteristics TSHU.
The set of top-view projected, binary segmented images PrBinIMGn is acquired and processed by an image pre-processing component of the camera arrangement, each top-view projected, binary segmented image PrBinIMGn comprising attribute-classified pixels of road classes.
The road classes comprise four classes of intersection, defined as follows: (i) no intersection NoI, (ii) intersection on the right side of the ego vehicle RI, (iii) intersection on the left side of the ego vehicle IL, (iv) intersection on the right side and left side of the ego vehicle IB.
The associated intersection ground truth labels comprise four classes of intersection for each of the attribute-classified pixels of road classes defined above.
In the second step, the decision tree model is trained for defining nested conditional statements data of said decision tree model to classify the attribute-classified pixels of the road classes in one of the four classes of intersection defined above. The nested conditional statements data comprises a respective threshold for each of the Hu moments characteristics TSHU, each respective threshold being associated to each of the four classes of intersection. Then, labelled top-view projected, binary segmented images Lab-PrBinIMGn are generated, said labelled top-view projected, binary segmented images Lab-PrBinIMGn comprising classified pixels corresponding to the four classes of intersection: (i) no intersection attribute-classified pixels ANoI for the no intersection class of intersection NoI, (ii) intersection from right attribute-classified pixels ARI for the intersection on the right side of the ego vehicle class RI, (iii) intersection from left attribute-classified pixels AIL for the intersection on the left side of the ego vehicle IL, (iv) intersection from both sides attribute-classified pixels AIB for the intersection on the right side and left side of the ego vehicle IB.
At the end of the method of the first aspect, a trained decision tree model is generated, trained to classify the attribute-classified pixels of the road classes in one of the four classes of intersection. The trained decision tree model is sent to a decision tree processing unit of the ego vehicle.
In a preferred embodiment, the associated intersection ground truth labels are obtained by overlaying the trajectory of the ego vehicle on a map and assigning to the set of top-view projected, binary segmented images PrBinIMGn the corresponding label representing the four classes of intersection.
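As an illustration of this training step, the following minimal sketch (an assumption, not the patented implementation) fits a scikit-learn decision tree on the seven Hu moments characteristics; the file names tshu_hu_moments.npy, intersection_labels.npy and intersection_tree.joblib are hypothetical placeholders for the training dataset and the resulting model.

```python
# Minimal training sketch (illustrative assumption, not the patented code):
# fit a decision tree on seven Hu moments per image with the four labels.
import joblib
import numpy as np
from sklearn.tree import DecisionTreeClassifier

TSHU = np.load("tshu_hu_moments.npy")        # hypothetical, shape (n_images, 7)
labels = np.load("intersection_labels.npy")  # hypothetical, 0=NoI 1=RI 2=IL 3=IB

# The fitted tree encodes the nested conditional statements data: each split
# tests one Hu moments characteristic against a learned threshold.
model = DecisionTreeClassifier(max_depth=8, random_state=0)
model.fit(TSHU, labels)

joblib.dump(model, "intersection_tree.joblib")  # to the decision tree processing unit
```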
In a second aspect of the invention, with reference to Fig. 1, a computer implemented method is presented for detecting an intersection from one or more raw images IMGn provided by a camera arrangement of an ego vehicle using the decision tree model trained by the computer implemented method according to any preferred embodiment, said camera arrangement acquiring forward-facing images.
The method of the second aspect of the invention comprises the following six steps, carried out for each raw image IMGn.
It is important to note that, in order to improve the accuracy of the method of the second aspect of the invention, the same type of camera arrangement has to be used in the method of the first aspect of the invention and in the method of the second aspect of the invention.
In the first step, the raw image IMGn is acquired by an image acquisition software component of the camera arrangement. Said acquired raw image IMGn is processed by means of an image pre-processing component in order to determine a raw pixel luminous intensity value associated to each pixel Vn of the raw image IMGn, defining a set of pixel luminous intensity values SVn associated to each raw image IMGn.
In the second step, a semantic segmentation of said raw image IMGn is carried out by means of the image pre-processing component by: assigning to each pixel Vn of said raw image IMGn an intensity-associated classifying attribute AVn, the intensity-associated classifying attribute AVn determined based on the raw intensity value associated to each pixel Vn, and generating a binary segmented image BinIMGn, the binary segmented image BinIMGn comprising attribute-classified pixels from the attribute-classified pixels of road classes.
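A minimal sketch of this step follows. The plain intensity threshold is a stand-in assumption for the semantic segmentation actually performed by the image pre-processing component, and the file names imgn.png and binimgn.png are hypothetical.

```python
# Binary segmentation sketch (a threshold stands in for semantic segmentation).
import cv2

ROAD_THRESHOLD = 128  # hypothetical intensity threshold for the road attribute

raw = cv2.imread("imgn.png", cv2.IMREAD_GRAYSCALE)  # IMGn; SVn is this array
# AVn: pixels above the threshold receive the road attribute (1), the rest 0.
_, bin_imgn = cv2.threshold(raw, ROAD_THRESHOLD, 1, cv2.THRESH_BINARY)
cv2.imwrite("binimgn.png", bin_imgn * 255)          # BinIMGn for the next step
```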
In the third step, as represented in Fig. 2, the binary segmented image BinIMGn is projected in a top-down view by means of the image pre-processing component, generating a projected binary segmented image PrBinIMGn in an XoY plane view comprising the attribute-classified pixels of road classes.
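A minimal sketch of the projection, assuming a fixed homography; the four source points outlining the road trapezoid are hypothetical and would in practice come from the calibration of the camera arrangement.

```python
# Top-down (XoY) projection sketch using a perspective warp.
import cv2
import numpy as np

bin_imgn = cv2.imread("binimgn.png", cv2.IMREAD_GRAYSCALE)  # BinIMGn

src = np.float32([[420, 480], [860, 480], [1279, 719], [0, 719]])  # hypothetical road trapezoid
dst = np.float32([[0, 0], [399, 0], [399, 399], [0, 399]])         # 400x400 top view
H = cv2.getPerspectiveTransform(src, dst)
pr_bin_imgn = cv2.warpPerspective(bin_imgn, H, (400, 400),
                                  flags=cv2.INTER_NEAREST)  # keep values binary
cv2.imwrite("prbinimgn.png", pr_bin_imgn)                   # PrBinIMGn
```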
Then, in the fourth step, the image pre-processing component processes the projected binary segmented image PrBinIMGn by computing raw image Hu moments characteristics IMGn-HU as a weighted average of the raw intensity value associated to each attribute-classified pixel of the projected binary segmented image PrBinIMGn, and generates a raw dataset of seven raw image Hu moments characteristics IMGn-HU of the projected binary segmented image PrBinIMGn comprising the attribute-classified pixels of road classes.
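A minimal sketch of the Hu moments computation with OpenCV follows: the raw image moments are sums of pixel intensities weighted by powers of the pixel coordinates, and cv2.HuMoments derives the seven invariants from them. The log scaling at the end is a common convention assumed here, not a step stated in the patent.

```python
# Hu moments sketch: seven invariants of the projected binary segmented image.
import cv2
import numpy as np

pr_bin_imgn = cv2.imread("prbinimgn.png", cv2.IMREAD_GRAYSCALE)  # PrBinIMGn

m = cv2.moments(pr_bin_imgn)            # raw, central and normalized moments
imgn_hu = cv2.HuMoments(m).flatten()    # IMGn-HU: seven Hu moments characteristics
# Optional log scaling to compress the dynamic range (assumed convention):
imgn_hu_log = -np.sign(imgn_hu) * np.log10(np.abs(imgn_hu) + 1e-30)
np.save("imgn_hu.npy", imgn_hu)
```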
In the fifth step, a decision tree processing unit classifies, by using the trained decision tree model, the attribute-classified pixels of road classes of the projected binary segmented image PrBinIMGn in one of the four classes of intersection, and detects an intersection by generating labelled top-view projected, binary segmented images Lab-PrBinIMGn, the labelled top-view projected, binary segmented images Lab-PrBinIMGn comprising classified pixels corresponding to the four classes of intersection: (i) no intersection attribute-classified pixels ANoI for the no intersection class of intersection NoI, (ii) intersection from right attribute-classified pixels ARI for the intersection on the right side of the ego vehicle class RI, (iii) intersection from left attribute-classified pixels AIL for the intersection on the left side of the ego vehicle IL, (iv) intersection from both sides attribute-classified pixels AIB for the intersection on the right side and left side of the ego vehicle IB.
Finally, in the sixth step, the decision tree processing unit temporarily stores an integer value encoding the class of intersection corresponding to the detected intersection, said integer value to be used by an advanced driving assistance system ADAS processing chain.
The integer value can take the values 0, 1, 2 or 3, as follows: (i) 0 for no intersection NoI, (ii) 1 for intersection on the right side of the ego vehicle RI, (iii) 2 for intersection on the left side of the ego vehicle IL, (iv) 3 for intersection on the right side and left side of the ego vehicle IB.
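A minimal sketch of the classification and encoding step, reusing the hypothetical artifacts from the earlier sketches (intersection_tree.joblib, imgn_hu.npy):

```python
# Classification sketch: predict one of the four classes and store its code.
import joblib
import numpy as np

model = joblib.load("intersection_tree.joblib")  # trained decision tree model
imgn_hu = np.load("imgn_hu.npy")                 # seven Hu moments of PrBinIMGn

CLASS_NAMES = {0: "NoI", 1: "RI", 2: "IL", 3: "IB"}
intersection_code = int(model.predict(imgn_hu.reshape(1, -1))[0])
print(intersection_code, CLASS_NAMES[intersection_code])  # for the ADAS chain
```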
In a preferred embodiment of the computer implemented method for detecting an intersection from one or more raw images IMGn, additional heuristics are applied to the raw output of the classifier, such as suppressing the output if the input provided by the semantic segmentation is not considered of good quality, e.g. a noisy shape due to occlusion from other objects on the road.
The quality of the segmented image input is assessed by computing the convex hull of the shape from the binary segmented image and subtracting the original shape from it. If the area of the resulting difference is larger than a certain threshold, in particular 20% of the area of the convex hull, the input is considered of low quality and the output is suppressed.
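A minimal sketch of this quality heuristic, assuming (as an illustration) that the shape is the largest contour of the projected binary segmented image and that the OpenCV 4 findContours signature is available:

```python
# Quality heuristic sketch: suppress the output when the shape deviates from
# its convex hull by more than 20% of the hull area.
import cv2

pr_bin_imgn = cv2.imread("prbinimgn.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(pr_bin_imgn, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
shape = max(contours, key=cv2.contourArea)      # assumed: largest road shape
hull = cv2.convexHull(shape)

hull_area = cv2.contourArea(hull)
diff_area = hull_area - cv2.contourArea(shape)  # hull minus original shape
if hull_area > 0 and diff_area > 0.20 * hull_area:
    intersection_code = None                    # low quality: suppress output
```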
In a third aspect of the invention, a training processing unit is presented, comprising a communication interface arranged to receive a training dataset, at least one processor and at least one non-volatile memory, the training processing unit being configured for training a decision tree model for detecting an intersection from a raw image IMGn acquired by an image acquisition software component of a camera arrangement of an ego vehicle by carrying out the steps of the computer implemented method for training a decision tree model of any preferred embodiment.
In a fourth aspect of the invention, an intersection detection computing unit of an ego vehicle is presented, the intersection detection computing unit comprising a camera arrangement and a decision tree processing unit.
The camera arrangement comprises an image acquisition software component, the image acquisition software component configured to acquire one or more raw images IMGn, and an image pre-processing component.
The intersection detection computing unit is configured to detect an intersection from one or more raw images IMGn by carrying out the steps of the computer implemented method for detecting an intersection.
In a preferred embodiment, the decision tree processing unit is included in the camera arrangement, whereas, in an alternative preferred embodiment, the decision tree processing unit is not included in the camera arrangement, being included in another processing unit of the ego vehicle or provided as a stand-alone processing unit.
In a fifth aspect of the invention, a first non-transitory computer-readable storage medium is presented, encoded with a first computer program, the first computer program comprising instructions executable by one or more processors of the training processing unit which, upon such execution by the training processing unit, cause the one or more processors to perform operations of the computer-implemented method for training a decision tree model of any preferred embodiment.
In a sixth aspect of the invention, a second non-transitory computer-readable storage medium is presented, encoded with a second computer program, the second computer program comprising instructions executable by one or more processors of the intersection detection computing unit which, upon such execution by the intersection detection computing unit, cause the one or more processors to perform operations of the computer-implemented method for detecting an intersection.
In a seventh aspect of the invention, a trained decision tree model for detecting an intersection is presented, trained according to the computer-implemented method for training a decision tree model for detecting an intersection.
While certain embodiments of the present invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.
Bibliographical references
[1] Christopher Rasmussen: "Road Shape Classification for Detecting and Negotiating Intersections", Department of Computer & Information Sciences, University of Delaware, Newark, DE 19716, United States of America, published 28 July 2003.
[2] Ming-Kuei Hu: "Visual pattern recognition by moment invariants", IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179-187, published February 1962.

Claims (10)

  1. Computer implemented method for training a decision tree model for detecting an intersection from one or more images acquired by an image acquisition software component of a camera arrangement of an ego vehicle, said camera arrangement acquiring forward-facing images of a road, the road including an intersection, the method comprising the following steps carried out by a training processing unit: (S1.1) Acquiring training data, wherein the training data includes: a training dataset comprising Hu moments characteristics (TSHU) of a set of top-view projected, binary segmented images (PrBinIMGn), the set of top-view projected, binary segmented images (PrBinIMGn) acquired and processed by an image pre-processing component of the camera arrangement, each top-view projected, binary segmented image (PrBinIMGn) comprising attribute-classified pixels of road classes, the road classes comprising four classes of intersection, defined as follows: no intersection (NoI), intersection on the right side of the ego vehicle (RI), intersection on the left side of the ego vehicle (IL), intersection on the right side and left side of the ego vehicle (IB), and associated intersection ground truth labels to each of the Hu moments characteristics (TSHU), the associated intersection ground truth labels comprising four classes of intersection for each of the attribute-classified pixels of road classes, (S1.2) Training the decision tree model for defining nested conditional statements data of said decision tree model to classify the attribute-classified pixels of the road classes in one of the four classes of intersection, wherein the nested conditional statements data comprises a respective threshold for each of the Hu moments characteristics (TSHU), each respective threshold being associated to each of the four classes of intersection, (S1.2.1) Generating labelled top-view projected, binary segmented images (Lab-PrBinIMGn), the labelled top-view projected, binary segmented images (Lab-PrBinIMGn) comprising classified pixels corresponding to the four classes of intersection: no intersection attribute-classified pixels (ANoI) for the no intersection class of intersection (NoI), intersection from right attribute-classified pixels (ARI) for the intersection on the right side of the ego vehicle class (RI), intersection from left attribute-classified pixels (AIL) for the intersection on the left side of the ego vehicle (IL), intersection from both sides attribute-classified pixels (AIB) for the intersection on the right side and left side of the ego vehicle (IB), (S1.2.2) Generating a trained decision tree model trained to classify the attribute-classified pixels of the road classes in one of the four classes of intersection, and sending the trained decision tree model to a decision tree processing unit of the ego vehicle.
  2. The computer implemented method of claim 1, wherein the associated intersection ground truth labels are obtained by overlaying the trajectory of the ego vehicle on a map and assigning to the set of top-view projected, binary segmented images (PrBinIMGn) the corresponding label representing the four classes of intersection.
  3. Computer implemented method for detecting an intersection from one or more raw images (IMGn) provided by a camera arrangement of an ego vehicle using the decision tree model trained by the computer implemented method according to claim 1 or 2, said camera arrangement acquiring forward-facing images, comprising the following steps, carried out for each raw image (IMGn): (S3.1) Acquiring the raw image (IMGn) by an image acquisition software component of the camera arrangement, and processing said raw image (IMGn) by means of an image pre-processing component to determine a raw pixel luminous intensity value associated to each pixel (Vn) of the raw image (IMGn), and associating a set of pixel luminous intensity values (SVn) to each raw image (IMGn), (S3.2) Semantic segmentation of said raw image (IMGn) by means of the image pre-processing component by: assigning to each pixel (Vn) of said raw image (IMGn) an intensity-associated classifying attribute (AVn), the intensity-associated classifying attribute (AVn) determined based on the raw intensity value associated to each pixel (Vn), and generating a binary segmented image (BinIMGn), the binary segmented image (BinIMGn) comprising attribute-classified pixels from the attribute-classified pixels of road classes, (S3.3) Projecting the binary segmented image (BinIMGn) in a top-down view by means of the image pre-processing component, generating a projected binary segmented image (PrBinIMGn) in an XoY plane view comprising the attribute-classified pixels of road classes, (S3.4) Processing by the image pre-processing component of the projected binary segmented image (PrBinIMGn) by computing raw image Hu moments characteristics (IMGn-HU) as a weighted average of the raw intensity value associated to each attribute-classified pixel of the projected binary segmented image (PrBinIMGn), generating a raw dataset of seven raw image Hu moments characteristics (IMGn-HU) of the projected binary segmented image (PrBinIMGn) comprising the attribute-classified pixels of road classes, (S3.5) Classifying, by a decision tree processing unit using the trained decision tree model, the attribute-classified pixels of road classes of the projected binary segmented image (PrBinIMGn) in one of the four classes of intersection, and detecting an intersection by generating labelled top-view projected, binary segmented images (Lab-PrBinIMGn), the labelled top-view projected, binary segmented images (Lab-PrBinIMGn) comprising classified pixels corresponding to the four classes of intersection: no intersection attribute-classified pixels (ANoI) for the no intersection class of intersection (NoI), intersection from right attribute-classified pixels (ARI) for the intersection on the right side of the ego vehicle class (RI), intersection from left attribute-classified pixels (AIL) for the intersection on the left side of the ego vehicle (IL), intersection from both sides attribute-classified pixels (AIB) for the intersection on the right side and left side of the ego vehicle (IB), (S3.6) Temporarily storing, by the decision tree processing unit, an integer value encoding the class of intersection corresponding to the detected intersection, said integer value to be used by an advanced driving assistance system ADAS processing chain.
  4. A training processing unit comprising a communication interface arranged to receive a training dataset, at least one processor and at least one non-volatile memory, the training processing unit being configured for training a decision tree model for detecting an intersection from a raw image (IMGn) acquired by an image acquisition software component of a camera arrangement of an ego vehicle by means of carrying out the steps of the computer implemented method for training a decision tree model according to claim 1 or 2.
  5. An intersection detection computing unit of an ego vehicle, the intersection detection computing unit comprising a camera arrangement and a decision tree processing unit, the camera arrangement comprising an image acquisition software component, the image acquisition software component configured to acquire one or more raw images (IMGn), and an image pre-processing component, the intersection detection computing unit being configured to detect an intersection from one or more raw images (IMGn) by means of carrying out the steps of the method according to claim 3.
  6. The intersection detection computing unit of claim 5, wherein the decision tree processing unit is included in the camera arrangement.
  7. The intersection detection computing unit of claim 5, wherein the decision tree processing unit is included in another processing unit of the ego vehicle or is provided as a stand-alone processing unit.
  8. A first non-transitory computer-readable storage medium encoded with a first computer program, the first computer program comprising instructions executable by one or more processors of the training processing unit of claim 4 which, upon such execution by the training processing unit, cause the one or more processors to perform operations of the computer-implemented method for training a decision tree model for detecting an intersection according to claim 1 or 2.
  9. A second non-transitory computer-readable storage medium encoded with a second computer program, the second computer program comprising instructions executable by one or more processors of the intersection detection computing unit of claim 5 which, upon such execution by the intersection detection computing unit, cause the one or more processors to perform operations of the computer-implemented method for detecting an intersection according to claim 3.
  10. A trained decision tree model for detecting an intersection, trained according to the computer-implemented method for training a decision tree model for detecting an intersection according to claim 1 or 2.
GB2206225.1A 2022-04-21 2022-04-28 Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method detecting an intersection, Pending GB2617866A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22465529 2022-04-21

Publications (3)

Publication Number Publication Date
GB202206225D0 (en) 2022-06-15
GB2617866A (en) 2023-10-25
GB2617866A9 (en) 2023-11-08

Family

ID=81850372

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2206225.1A Pending GB2617866A (en) 2022-04-21 2022-04-28 Computer implemented method for training a decision tree model for detecting an intersection, computer implemented method detecting an intersection,

Country Status (1)

Country Link
GB (1) GB2617866A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180164812A1 (en) * 2016-12-14 2018-06-14 Samsung Electronics Co., Ltd. Apparatus and method for generating training data to train neural network determining information associated with road included in image
US20200324795A1 (en) * 2019-04-12 2020-10-15 Nvidia Corporation Neural network training using ground truth data augmented with map information for autonomous machine applications
US20200341466A1 (en) * 2019-04-26 2020-10-29 Nvidia Corporation Intersection pose detection in autonomous machine applications
CN110077416B (en) * 2019-05-07 2020-12-11 济南大学 Decision tree-based driver intention analysis method and system
US20200410254A1 (en) * 2019-06-25 2020-12-31 Nvidia Corporation Intersection region detection and classification for autonomous machine applications
CN112784639A (en) * 2019-11-07 2021-05-11 北京市商汤科技开发有限公司 Intersection detection, neural network training and intelligent driving method, device and equipment
US20210201145A1 (en) * 2019-12-31 2021-07-01 Nvidia Corporation Three-dimensional intersection structure prediction for autonomous driving applications
CN113743466A (en) * 2021-08-02 2021-12-03 南斗六星系统集成有限公司 Road type identification method and system based on decision tree
CN114140903A (en) * 2021-08-02 2022-03-04 南斗六星系统集成有限公司 Road type recognition vehicle-mounted device based on decision tree generation rule

Also Published As

Publication number Publication date
GB2617866A9 (en) 2023-11-08
GB202206225D0 (en) 2022-06-15


Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: CONTINENTAL AUTONOMOUS MOBILITY GERMANY GMBH

Free format text: FORMER OWNER: CONTINENTAL AUTOMOTIVE GMBH