WO2023284932A1 - Device and method for cut-in maneuver auto-labeling - Google Patents
- Publication number
- WO2023284932A1 (PCT/EP2021/025254)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Abstract
A device and a method of cut-in maneuver auto-labeling in a first vehicle, the method comprising recording (202) a sequence of movement of a second vehicle in a surrounding of the first vehicle, detecting (204) at a first moment in time an end of a cut-in maneuver, detecting (206) in the sequence a start of the cut-in maneuver at a second moment in time, labelling (208) in the sequence a start of an interval that starts at the second moment in time with a first label, in particular as beginning of a cut-in event, and an end of the interval that ends at the first moment in time with a second label, in particular as end of the cut-in event.
Description
Device and method for cut-in maneuver auto-labeling
The invention relates to a device and method for cut-in maneuver auto-labeling.
DE102017103113A1 discloses a vehicular labeling system that uses a camera to capture data of highway trips and analyses the data to identify and label cut-in events using previous event and/or historical data.
- US9443153B discloses a vehicular labeling system, which analyses data of highway trips to identify and label cut-in events using an AI or machine learning algorithm.
A device and a method for cut-in maneuver auto-labeling according to the independent claims replace manual work and help to gather more data faster for training machine learning algorithms.
The method of cut-in maneuver auto-labeling in a first vehicle comprises recording a sequence of movement of a second vehicle in a surrounding of the first vehicle, detecting at a first moment in time an end of a cut-in maneuver, detecting in the sequence a start of the cut-in maneuver at a second moment in time, labelling in the sequence a start of an interval that starts at the second moment in time with a first label, in particular as beginning of a cut-in event, and an end of the interval that ends at the first moment in time with a second label, in particular as end of the cut-in event.
Detecting the end of the cut-in maneuver advantageously comprises detecting a movement of the second vehicle from a position left of the first vehicle or from a position right of the first vehicle to a position in front of the first vehicle. A target sequence of left or right vehicle movements is particularly useful for detecting cut-in events in the first vehicle.
Detecting the end of the cut-in maneuver advantageously comprises detecting whether the first vehicle was staying in the same lane or not. This allows distinguishing the cut-in maneuver from a lane change by the first vehicle.
Further improvements are achieved by detecting lane lines, in particular from digital images of a camera of the first vehicle, wherein detecting the end of the cut-in maneuver comprises detecting that the second vehicle moved from a lane left of the lane that the first vehicle is in to the lane that the first vehicle is in, and/or wherein detecting the end of the cut-in maneuver comprises detecting that the second vehicle moved from a lane right of the lane that the first vehicle is in to the lane that the first vehicle is in, and/or wherein detecting whether the first vehicle was staying in the same lane or not comprises detecting whether the first vehicle crossed a lane line or not.
In one example, a timestamp for an end of the cut-in event comprising the first moment in time is stored and/or a timestamp for a start of the cut-in event comprising the second moment in time is stored.
An auto-labeled dataset is created by adding the interval labelled with the first label and the second label to a dataset.
Advantageously, recording the sequence of movement of the second vehicle comprises storing digital images captured by a camera of the first vehicle that comprise the sequence of movement of the second vehicle.
Advantageously, detecting the end of the cut-in maneuver comprises analyzing in particular chronological position information of the second vehicle from position signals in particular from digital images captured by a camera of the first vehicle.
Preferably, detecting the start of the cut-in maneuver comprises detecting a unique identification of the second vehicle in front of the first vehicle at the first moment in time and detecting the same unique identification in the recorded sequence of movement at the second moment in time, in particular from digital images. The unique identification may result from object recognition.
The device for cut-in maneuver auto-labeling in a first vehicle comprises a recording device that is configured for recording a sequence of movement of a second vehicle in a surrounding of the first vehicle, a detecting device that is configured for detecting at a first moment in time an end of a cut-in maneuver, detecting in the sequence a start of the cut-in maneuver at a second moment in time, and a labeling device that is configured for labelling in the sequence a start of an interval that starts at the second moment in time with a first label, in particular as beginning of a cut-in event, and an end of the interval that ends at the first moment in time with a second label, in particular as end of the cut-in event.
Advantageously, the device comprises a camera that is configured for capturing digital images, in particular in a chronological order, wherein the digital images in the chronological order represent the sequence of movement of the second vehicle, wherein the recording device is configured for storing the digital images, in particular in the chronological order, and/or that the detecting device is configured for detecting the end of the cut-in maneuver by detecting in particular chronological position information of the second vehicle, in particular from digital images captured by the camera in a chronological order.
Advantageously, the device comprises storage configured to store a labelled dataset, and wherein the recording device is configured for adding the interval labelled with the first label and the second label to the dataset.
Further advantageous embodiments are derivable from the following description and the drawing. In the drawing:
Fig. 1 schematically depicts a part of a device for cut-in maneuver auto-labeling,
Fig. 2 depicts steps in a method of cut-in maneuver auto-labeling,
Fig. 3 depicts steps for cut-in maneuver detection.
Automatic labeling, i.e. auto-labeling, of cut-in maneuvers is used to automatically detect cut-in events and to annotate a dataset automatically.
Auto-labeling in one example refers to software that is configured for an analysis of recorded data, e.g. of a highway trip, to identify and label cut-in events. Auto-labeling replaces manual work and helps to gather more data faster for training machine learning algorithms.
Auto-labeling in one example comprises an implementation of an algorithm, which analyses a sequence of movements of a car and surrounding vehicles to identify a specific pattern that describes a cut-in event.
Auto-labeling in one example comprises a procedure of reading historical driving recordings from trips in the past. Auto-labeling in one example comprises utilizing, at a moment in time, knowledge of future moments in time from recorded data.
An input for auto-labeling in one example is prerecorded car driving data on which the analysis according to the algorithm runs. An output of auto-labeling in one example comprises intervals of the sequence that represent cut-in events that were found with auto-labeling.
In one example, each cut-in event is assigned a timestamp of its cut-in event beginning and cut-in event end.
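Such a timestamped cut-in event can be sketched, for example, as a small record type; the class and field names below are illustrative and not taken from the document:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CutInEvent:
    """A labeled cut-in interval in a recorded sequence.

    The document only requires a start timestamp (second moment in
    time, first label) and an end timestamp (first moment in time,
    second label); the vehicle_id field is an illustrative addition.
    """
    vehicle_id: int   # unique identification of the second vehicle
    start_ts: float   # timestamp of the cut-in event beginning
    end_ts: float     # timestamp of the cut-in event end

    def duration(self) -> float:
        # Length of the labeled interval in the sequence.
        return self.end_ts - self.start_ts
```

A labeled dataset would then be, for example, a list of such records alongside the recorded frames they index into.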
Figure 1 schematically depicts a part of a device 100 for cut-in maneuver auto-labeling.
The device 100 comprises a camera 102 that is configured for capturing digital images, in particular in a chronological order. Timestamps may be assigned in chronological order to captured digital images.
The digital images in the chronological order represent the sequence of movement in a surrounding of the camera 102.
The device 100 comprises a recording device 104 that is configured for recording a sequence of movement in a surrounding of the camera 102. The recording device 104 is configured for storing the digital images, in particular in the chronological order. The respective timestamps may be stored along with the digital images.
The device 100 comprises a detecting device 106 that is configured for detecting, at a first moment in time, an end of a cut-in maneuver.
The detecting device 106 is configured for detecting in the sequence a start of the cut-in maneuver at a second moment in time.
The device 100 comprises a labeling device 108 that is configured for labelling in the sequence a start of an interval that starts at the second moment in time with a first label. In the example, the first label labels the start of the interval as beginning of a cut-in event. The labeling device 108 is configured for labelling in the sequence an end of the interval that ends at the first moment in time with a second label. In the example, the second label labels the end of the interval as end of the cut-in event.
The recording device 104 is configured for adding the interval labelled with the first label and the second label to a labelled dataset.
The device 100 in the example comprises storage 110 configured to store the labelled dataset.
The camera 102, the recording device 104, the detecting device 106, the labelling device 108 and the storage 110 are connected in the example by a data link 112.
Figure 2 depicts steps in a method of cut-in maneuver auto-labeling. The method may start when the first vehicle starts to operate. The method is executed, at least in part, in a first vehicle. The device 100 is configured to execute steps in the method. The device 100, the recording device 104, the detecting device 106 and/or the labelling device 108 may comprise at least one processor that is configured to execute steps in the method. These may comprise dedicated hardware for digital image processing and object detection, in particular for assigning unique identifications in different digital images to vehicles detected therein.
In a step 202, recording of a sequence of movement in a surrounding of the first vehicle is started. In the example, a sequence of digital images comprising information about movement in surroundings of the first vehicle is stored.
Recording the sequence of movement of the second vehicle in the example comprises storing digital images captured by at least one camera 102 of the first vehicle. For example, a sequence of digital images that comprises the sequence of movement of vehicles in the surroundings of the first vehicle is stored. Cameras may be mounted to the first vehicle to provide a surround view or a view of a left side, a right side and a front of the first vehicle.
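The recording of timestamped digital images in chronological order can be sketched, for example, as follows; the frame contents are left opaque and the class name is illustrative:

```python
class SequenceRecorder:
    """Stores captured frames with their timestamps, oldest first.

    A minimal sketch: `frame` stands in for a digital image plus any
    per-frame detections. A real recording device would also bound
    memory, e.g. with a ring buffer over the trip.
    """

    def __init__(self):
        self.frames = []  # list of (timestamp, frame) in chronological order

    def record(self, timestamp, frame):
        # Timestamps are assumed to arrive in chronological order.
        if self.frames and timestamp < self.frames[-1][0]:
            raise ValueError("frames must be recorded in chronological order")
        self.frames.append((timestamp, frame))

    def reset(self):
        # Step 210 may reset the recording of the sequence of movement.
        self.frames.clear()
```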
Afterwards, a step 204 is executed.
In the step 204, an end of a cut-in maneuver is detected at a first moment in time.
Afterwards, a step 206 is executed.
In the step 206, a start of the cut-in maneuver is detected in the sequence at a second moment in time.
Detecting the start of the cut-in maneuver may comprise detecting the unique identification of the second vehicle in front of the first vehicle at the first moment in time and detecting the same unique identification in the recorded sequence of movement at the second moment in time in particular from digital images.
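Assuming each recorded frame carries per-vehicle detections keyed by their unique identifications, this backward lookup for the second moment in time can be sketched as follows; the frame format and function name are assumptions for illustration:

```python
def find_cut_in_start(frames, vehicle_id):
    """Scan the recorded sequence backward for the start of the maneuver.

    `frames` is a chronological list of (timestamp, detections), where
    `detections` maps a vehicle's unique identification to its zone
    relative to the first vehicle ('left', 'right' or 'front'). Returns
    the timestamp of the last frame in which the identified vehicle was
    still left or right of the first vehicle, or None if it never was.
    """
    for timestamp, detections in reversed(frames):
        if detections.get(vehicle_id) in ("left", "right"):
            return timestamp
    return None
```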
Afterwards, a step 208 is executed.
In the step 208, a start of an interval in the sequence that starts at the second moment in time is labelled with a first label, in particular as beginning of a cut-in event. An end of the interval that ends at the first moment in time is labelled with a second label, in particular as end of the cut-in event.
Afterwards, a step 210 is executed.
In the step 210, a timestamp for an end of the cut-in event comprising the first moment in time and/or a timestamp for a start of the cut-in event comprising the second moment in time is stored.
The method may comprise adding the interval labelled with the first label and the second label to a dataset, in particular the labelled dataset.
Step 210 may comprise resetting the recording of the sequence of movement.
Afterwards, step 202 is executed, e.g. while the first vehicle operates.
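The loop of steps 202 to 210 over a prerecorded trip can be sketched, for example, as follows; `detect_end` and `find_start` are hypothetical callbacks standing in for the detecting device:

```python
def auto_label_trip(frames, detect_end, find_start):
    """Run steps 202-210 over a prerecorded trip.

    Whenever the end of a cut-in maneuver is detected at a first moment
    in time, the start is looked up in the already recorded sequence at
    a second moment in time and the interval is labeled. `detect_end`
    returns the unique identification of the cutting-in vehicle, or
    None; `find_start` returns the start timestamp, or None. Returns
    (start_ts, end_ts) pairs for all cut-in events that were found.
    """
    events = []
    recorded = []
    for timestamp, frame in frames:
        recorded.append((timestamp, frame))           # step 202: record
        vehicle_id = detect_end(recorded)             # step 204: end detected?
        if vehicle_id is not None:
            start_ts = find_start(recorded, vehicle_id)   # step 206
            if start_ts is not None:
                events.append((start_ts, timestamp))      # steps 208/210
            recorded.clear()                          # step 210 may reset
    return events
```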
Figure 3 depicts exemplary steps for cut-in maneuver detection.
A step 302 comprises detecting, in particular in a digital image, a change of a vehicle that is in front of the first vehicle. The digital image may be processed to detect an object in the digital image that is a vehicle in front of the first vehicle.
A step 304 comprises detecting, in particular in a recorded digital image of the sequence, a change of a vehicle that is either on the left or on the right of the first vehicle. The digital image may be processed to detect an object in the digital image that is a vehicle on the left side or on the right side of the first vehicle.
A step 306 comprises checking unique identifications of any of the vehicles. For example, it is checked if the current object in front and the object right or left of the first vehicle in the past have the same unique identification.
A step 308 comprises checking if there was a lane change of the first vehicle.
This means that detecting the end of the cut-in maneuver comprises detecting that the first vehicle was staying in the same lane.
In a step 310, a cut-in maneuver is detected if the first vehicle did not change the lane and a movement of a vehicle that has the same unique identification from a position left of the first vehicle or from a position right of the first vehicle to a position in front of the first vehicle is detected.
In the example, a cut-in maneuver by the second vehicle is recognized.
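The decision in step 310 can be condensed, for example, into a single predicate; the argument names are illustrative:

```python
def is_cut_in(front_id, past_side_id, ego_changed_lane):
    """Step 310 condensed: a cut-in maneuver is detected when the first
    vehicle stayed in its lane and the vehicle now in front carries the
    same unique identification as a vehicle previously seen to its left
    or right.
    """
    return (
        not ego_changed_lane           # step 308: first vehicle kept its lane
        and front_id is not None       # step 302: a vehicle is in front
        and front_id == past_side_id   # step 306: same unique identification
    )
```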
The steps 304, 306 and 308 may comprise detecting lane lines, in particular from digital images of the camera 102 of the first vehicle.
Detecting the end of the cut-in maneuver comprises, for example, detecting that the second vehicle moved from a lane left of the lane that the first vehicle is in to the lane that the first vehicle is in.
Detecting the end of the cut-in maneuver comprises, for example, detecting that the second vehicle moved from a lane right of the lane that the first vehicle is in to the lane that the first vehicle is in.
Detecting the end of the cut-in maneuver comprises, for example, detecting that the first vehicle was staying in the same lane. This may be determined, for example, by detecting whether the first vehicle crossed a lane line or not. The first vehicle stayed in the lane when it crossed no lane line, in particular since the start of the cut-in event.
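The lane-keeping check can be sketched, for example, as a test that no lane-line crossing falls into the candidate interval; representing detected crossings as a list of timestamps is an assumption for illustration:

```python
def stayed_in_lane(crossing_timestamps, start_ts, end_ts):
    """The first vehicle stayed in its lane if it crossed no lane line
    during the candidate interval. `crossing_timestamps` lists the
    moments at which a lane-line crossing was detected, e.g. from
    digital images of the camera.
    """
    return not any(start_ts <= t <= end_ts for t in crossing_timestamps)
```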
In the example, detecting the end of the cut-in maneuver comprises analyzing chronological position information of the second vehicle from position signals in particular from digital images captured by the camera 102 of the first vehicle.
Claims
1. A method of cut-in maneuver auto-labeling in a first vehicle, characterized by recording (202) a sequence of movement of a second vehicle in a surrounding of the first vehicle, detecting (204) at a first moment in time an end of a cut-in maneuver, detecting (206) in the sequence a start of the cut-in maneuver at a second moment in time, labelling (208) in the sequence a start of an interval that starts at the second moment in time with a first label, in particular as beginning of a cut-in event, and an end of the interval that ends at the first moment in time with a second label, in particular as end of the cut-in event.
2. The method according to claim 1, characterized in that detecting (204) the end of the cut-in maneuver comprises detecting (304, 306) a movement of the second vehicle from a position left of the first vehicle or from a position right of the first vehicle to a position in front of the first vehicle.
3. The method according to claim 1 or 2, characterized in that detecting (204) the end of the cut-in maneuver comprises detecting (308) that the first vehicle was staying in the same lane.
4. The method according to claim 3, characterized by detecting lane lines, in particular from digital images of a camera of the first vehicle, wherein detecting (204) the end of the cut-in maneuver comprises detecting (304, 306) that the second vehicle moved from a lane left of the lane that the first vehicle is in to the lane that the first vehicle is in, and/or wherein detecting the end of the cut-in maneuver (204) comprises detecting (304, 306) that the second vehicle moved from a lane right of the lane that the first vehicle is in to the lane that the first vehicle is in, and/or wherein detecting (308) that the first vehicle was staying in the same lane comprises detecting whether the first vehicle crossed a lane line or not.
5. The method according to one of the previous claims, characterized by storing (210) a timestamp for an end of the cut-in event comprising the first moment in time and/or storing (210) a timestamp for a start of the cut-in event comprising the second moment in time.
6. The method according to one of the previous claims, characterized by adding (210) the interval labelled with the first label and the second label to a dataset.
7. The method according to one of the previous claims, characterized in that recording (202) the sequence of movement of the second vehicle comprises storing digital images captured by a camera (102) of the first vehicle that comprise the sequence of movement of the second vehicle.
8. The method according to one of the previous claims, characterized in that detecting (204) the end of the cut-in maneuver comprises analyzing chronological position information of the second vehicle from position signals, in particular from digital images captured by a camera (102) of the first vehicle.
9. The method according to one of the previous claims, characterized in that detecting (206) the start of the cut-in maneuver comprises detecting (306) a unique identification of the second vehicle in front of the first vehicle at the first moment in time and detecting the same unique identification in the recorded sequence of movement at the second moment in time, in particular from digital images.
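Claim 9 rests on re-identifying the same vehicle across frames: the vehicle in front of the ego car at the end of the maneuver carries a unique track identification, and the start is found by following that identification backwards through the recording. In the sketch below, `frames` maps a timestamp to a dict of `{track_id: lane_index}` with lane 0 as the ego lane; this data layout is an assumed convention for illustration only.

```python
def find_start_by_id(frames, target_id, t_end):
    """Follow the unique identification `target_id` backwards from t_end
    and return the last timestamp at which the vehicle was still in a
    neighbouring lane (the second moment in time of claim 1), or None if
    the track is lost before the start is found."""
    for t in sorted(frames, reverse=True):
        if t > t_end:
            continue
        lane = frames[t].get(target_id)
        if lane is None:
            return None  # track lost; the backwards search cannot continue
        if lane != 0:
            # First frame, going backwards, where the vehicle was still in
            # a neighbouring lane: this marks the start of the cut-in.
            return t
    return None
```

The same unique identification thus links the trigger event (vehicle detected in front at the first moment in time) to the earlier frames of the recorded sequence.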
10. A device (100) for cut-in maneuver auto-labeling in a first vehicle, characterized by a recording device (104) that is configured for recording a sequence of movement of a second vehicle in a surrounding of the first vehicle, a detecting device (106) that is configured for detecting at a first moment in time an end of a cut-in maneuver, detecting in the sequence a start of the cut-in maneuver at a second moment in time, and a labeling device (108) that is configured for labelling in the sequence a start of an interval that starts at the second moment in time with a first label, in particular as beginning of a cut-in event, and an end of the interval that ends at the first moment in time with a second label, in particular as end of the cut-in event.
11. The device (100) according to claim 10, characterized in that the device comprises a camera (102) that is configured for capturing digital images, in particular in a chronological order, wherein the digital images in the chronological order represent the sequence of movement of the second vehicle, wherein the recording device (104) is configured for storing the digital images, in particular in the chronological order, and/or that the detecting device (106) is configured for detecting the end of the cut-in maneuver by detecting, in particular chronological, position information of the second vehicle, in particular from digital images captured by the camera (102), in particular in a chronological order.
12. The device according to claim 10 or 11, characterized in that the device comprises storage (110) configured to store a labelled dataset, and wherein the recording device (104) is configured for adding the interval labelled with the first label and the second label to the dataset.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2021/025254 WO2023284932A1 (en) | 2021-07-12 | 2021-07-12 | Device and method for cut-in maneuver auto-labeling |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023284932A1 true WO2023284932A1 (en) | 2023-01-19 |
Family
ID=76958904
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023284932A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9443153B1 (en) | 2015-06-12 | 2016-09-13 | Volkswagen Ag | Automatic labeling and learning of driver yield intention |
EP3104284A1 (en) * | 2015-06-12 | 2016-12-14 | Volkswagen Aktiengesellschaft | Automatic labeling and learning of driver yield intention |
DE102017103113A1 (en) | 2016-03-01 | 2017-09-07 | Ford Global Technologies, Llc | VEHICLE TRACK LEARNING |
WO2018115963A2 (en) * | 2016-12-23 | 2018-06-28 | Mobileye Vision Technologies Ltd. | Navigational system with imposed liability constraints |
Non-Patent Citations (1)
Title |
---|
ERWIN DE GELDER ET AL: "Real-World Scenario Mining for the Assessment of Automated Vehicles", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 31 May 2020 (2020-05-31), XP081677337 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21742719; Country of ref document: EP; Kind code of ref document: A1 |