WO2022033652A1 - Device for and method of image signal processing in a vehicle - Google Patents

Device for and method of image signal processing in a vehicle Download PDF

Info

Publication number
WO2022033652A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
input data
output
dimension
data
Prior art date
Application number
PCT/EP2020/025370
Other languages
French (fr)
Inventor
Adrian-Sergiu FATOL
George-Florin GROSU
Arsen SAGOIAN
Giani Ionut STATIE
Philipp WUSTMANN
Original Assignee
Dr. Ing. H.C. F. Porsche Aktiengesellschaft
Priority date
Filing date
Publication date
Application filed by Dr. Ing. H.C. F. Porsche Aktiengesellschaft
Priority to PCT/EP2020/025370 (WO2022033652A1)
Priority to DE112020007498.6T (DE112020007498T5)
Publication of WO2022033652A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • G01S7/4052Means for monitoring or calibrating by simulation of echoes
    • G01S7/406Means for monitoring or calibrating by simulation of echoes using internally generated reference signals, e.g. via delay line, via RF or IF signal injection or via integrated reference reflector or transponder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9323Alternative operation using light waves
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0112Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0129Traffic data processing for creating historical data or processing based on historical data
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/165Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Device and method of signal processing, comprising receiving first input data and second input data, wherein the first input data defines a first dimension (X), in particular a direction of travel of the vehicle (101), and the second input data defines a second dimension (Y) of a signal determined from image data captured by at least one sensor of a vehicle (101), wherein the first input data comprises information about a movement or a position of an object (102), in particular relative to the vehicle (101) in the first dimension (X), wherein the second input data comprises information about the movement or the position of the object (102) in particular relative to the vehicle (101), wherein the second input is assigned in the second dimension (Y) to a first side of the vehicle (101), determining first output data depending on the first input data, determining second output data depending on the second input data, wherein the second output is assigned to a second side of the vehicle (101).

Description

DEVICE FOR AND METHOD OF IMAGE SIGNAL PROCESSING IN A VEHICLE
The invention concerns a device for and method of signal processing in particular for vehicle data.
At present, in the automotive industry in particular, there is a constant need and demand for data. Safety and reliability are directly linked to the amount of data that an engineer has utilized in tuning and evaluating the final product. Nonetheless, gathering and storing such amounts of data proves expensive and hard to maintain over time.
The method and device of signal processing according to the independent claims provide a new approach to facilitating data collection, through which an engineer is able to mirror scenarios in order to obtain more data.
The method of signal processing comprises receiving first input data and second input data, wherein the first input data defines a first dimension, in particular a direction of travel of the vehicle, and the second input data defines a second dimension of a signal determined from signal data captured by at least one sensor of a vehicle, wherein the first input data comprises information about a movement or a position of an object, in particular relative, or not relative, i.e. absolute, to the vehicle in the first dimension, wherein the second input data comprises information about the movement or the position of the object in particular relative to the vehicle, wherein the second input is assigned in the second dimension to a first side of the vehicle, determining first output data depending on the first input data, and determining second output data depending on the second input data, wherein the second output is assigned to a second side of the vehicle. This data-augmentation procedure alters signals that describe a neighboring vehicle such that a copy of that vehicle is placed on the opposing side of the vehicle. By means of this procedure, one is able to generate two separate measurements, one with the original signals and one with processed signals, at the expense of a single data-gathering campaign. Preferably, the signal defines a position, a speed or an acceleration of the object, in particular relative to the vehicle. These provide valuable information about the neighboring vehicles with respect to the vehicle. The signals may be available from a FlexRay data bus in the vehicle as a decomposition in a two-dimensional coordinate system, such that the distance relative to a neighboring vehicle is described by a forward distance, e.g. position X, and a lateral distance, e.g. position Y. Signals that are derived from position, e.g. speed and acceleration, are also decomposed.
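The mirroring procedure described above can be sketched as follows. The `ObjectSignal` record is a hypothetical stand-in for the decomposed bus signals; the patent itself prescribes no data format:

```python
from dataclasses import dataclass, replace

@dataclass
class ObjectSignal:
    """Decomposed signals for one neighboring object, as they might be
    read from the vehicle bus: X along the direction of travel, Y lateral."""
    pos_x: float
    pos_y: float
    speed_x: float
    speed_y: float
    accel_x: float
    accel_y: float

def mirror(sig: ObjectSignal) -> ObjectSignal:
    # First output: the X components are taken over unchanged.
    # Second output: the Y components change sign, which moves the
    # object to the opposite side of the vehicle.
    return replace(sig, pos_y=-sig.pos_y, speed_y=-sig.speed_y, accel_y=-sig.accel_y)

# One recorded measurement yields two scenarios: the original and its mirror.
original = ObjectSignal(pos_x=12.0, pos_y=3.5, speed_x=1.2, speed_y=-0.4,
                        accel_x=0.0, accel_y=0.1)
mirrored = mirror(original)
```

Both measurements can then be stored, so a single data-gathering campaign populates the database with two scenarios.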
Preferably, the second input data defines a position, a speed or an acceleration of the object in the second dimension on the first side of the vehicle, and the second output data defines a position, a speed or an acceleration of an object on the second side of the vehicle. Signals which describe the relationship between the vehicle and the neighboring vehicle can be flipped such that a neighboring vehicle on the right will be perceived as driving on the left, and vice versa. As a result, one obtains two different real-life scenarios and is implicitly able to double an existing database.
Preferably, the first input data is defined by a decomposition of the signal in the first dimension and the second input data is defined by a decomposition of the signal in the second dimension.
Preferably, the sensor is one of the group of a radar sensor, camera and LiDAR sensor.
Preferably, the object is another vehicle, a subject, in particular a person, or part of a road infrastructure, in particular a traffic sign.
Preferably, the signal is defined by image data that captures an environment of the vehicle on the first side of the vehicle, wherein the first output and the second output are composed to form an output signal that defines an image that displays an environment of the vehicle on the second side of the vehicle. Preferably, training data for training a model or an artificial neural network for recognizing a behavior of the object on the second side of the vehicle is determined depending on the first output and the second output. The training data determined by the method described above, i.e. events that have been mirrored left to right, may be used for further training of the model or network, or for testing its consistency and performance.
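For the image-composition aspect, a horizontal flip is the essential operation. A minimal illustration on pure-Python pixel rows (a real pipeline would operate on camera frames):

```python
def mirror_image(rows):
    """Horizontally flip an image given as rows of pixel values, so that a
    scene captured on the vehicle's left side appears on its right side."""
    return [list(reversed(row)) for row in rows]

left_view = [[1, 2, 3],
             [4, 5, 6]]
right_view = mirror_image(left_view)  # [[3, 2, 1], [6, 5, 4]]
```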
Preferably, the model or artificial neural network is trained with the training data. The model or network may have been modeled for recognizing the behavior of the vehicle's neighboring vehicles, specifically only on the right side. The model or network may be trained, or a machine learning approach may be implemented, using this training data. This results in a more maintainable, robust model or network that requires less data and is implicitly less expensive to develop.
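The doubling of an existing database can be sketched as follows, with each recording reduced to a single lateral value for brevity (real records would carry full signal sets):

```python
def augment(lateral_values):
    # Append the mirrored copy of every recorded measurement:
    # one data-gathering campaign yields twice the training samples.
    return list(lateral_values) + [-v for v in lateral_values]

recordings = [3.5, 2.0, -1.2]       # e.g. objects with positive Y on the left
training_set = augment(recordings)  # 6 samples from 3 recordings
```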
The device for signal processing is adapted to process input data from at least one sensor of the group of radar sensor, camera and LiDAR sensor and to execute the steps of the method according to one of the previous claims.
Further advantageous embodiments are derivable from the following description and the drawing. In the drawing:
Fig. 1 schematically depicts a road,
Fig. 2 depicts steps in a method for signal processing.
Figure 1 depicts a road 100, a vehicle 101 and an object 102. The object 102 is a vehicle. The object may be a subject, in particular a person, or part of a road infrastructure, in particular a traffic sign.
The vehicle 101 comprises a device for signal processing. The vehicle 101 comprises at least one sensor of the group of radar sensor, camera and LiDAR sensor. The device comprises a processor, in particular a signal processor, that is adapted to process input data from the at least one sensor and to execute the steps of the method described below. In Figure 1, a synthetic vehicle 103 is depicted as an example of an output image generated in one aspect of the method.
The vehicle 101 is oriented on the road 100 in a direction of travel 104. The direction of travel 104 defines a first dimension X. A direction perpendicular to the direction of travel 104 defines a second dimension Y.
The at least one sensor in the example is adapted to capture an environment on a first side of the vehicle 101 in the second dimension Y. The at least one sensor or a controller thereof is adapted to output a signal or to output first input data and second input data for further processing by the device. In the example, the at least one sensor captures an image of the object 102, i.e. the vehicle left of the vehicle 101. The signal is decomposable into first input data that defines the first dimension X and second input data that defines the second dimension Y. A two-dimensional Cartesian coordinate system with an origin in the center of the vehicle 101 may be used for this purpose. The signal is in one aspect defined by the image data that captures the environment of the vehicle 101 on the first side of the vehicle 101.
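The decomposition into the two dimensions X and Y can be sketched as follows, assuming a hypothetical range/bearing detection as input (the description only assumes that the decomposed components are available):

```python
import math

def decompose(distance, bearing_rad):
    """Split a detection into the vehicle-centered Cartesian components:
    X along the direction of travel, Y perpendicular to it, with the
    origin in the center of the vehicle."""
    x = distance * math.cos(bearing_rad)  # first input data (dimension X)
    y = distance * math.sin(bearing_rad)  # second input data (dimension Y)
    return x, y

# An object 10 m directly to the left of the vehicle (bearing +90 degrees).
x, y = decompose(10.0, math.pi / 2)
```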
A method of signal processing for this signal will be described with reference to figure 2.
The method comprises a step 202 of receiving first input data and second input data.
The first input data defines the first dimension X and the second input data defines the second dimension Y of the signal.
The signal is determined from image data that is captured by the at least one sensor of the vehicle 101. The signal in the example defines the position, the speed or the acceleration of the object 102, in particular relative to the vehicle 101.
The first input data comprises information about a movement or a position of the object 102, in particular relative to the vehicle 101 in the first dimension X. The second input data comprises information about the movement or the position of the object 102, in particular relative to the vehicle 101 in the second dimension Y. The second input is assigned in the second dimension Y to the first side of the vehicle 101, i.e. the side where the at least one sensor monitors the environment of the vehicle 101. In the example, this is left of the vehicle 101 when viewing in the direction of travel. The second input data defines in the example the position, the speed or the acceleration of the object 102 in the second dimension Y on the first side of the vehicle 101.
The first input data may be defined by a decomposition of the signal in the first dimension X and the second input data is defined by a decomposition of the signal in the second dimension Y.
The method comprises a step 204 of determining first output data depending on the first input data. In the example, the first output data is identical to the first input data.
The method comprises a step 206 of determining second output data depending on the second input data, wherein the second output is assigned to a second side of the vehicle 101. In the example, the second output is assigned to the right side of the vehicle 101. If the second input data defines the position of the object 102 on the first side, the second output data is determined to define a position of an object, in the example the synthesized vehicle 103, on the second side of the vehicle. In the example, the position is mirrored at an axis defined by the first dimension X. The second input data may comprise numerical values defining a distance as information about the position. In this aspect, the numerical value may be multiplied by -1 or the sign may be toggled in order to determine the new position for the second output data. The second input data may comprise an identifier assigning the second input data to the first side of the vehicle 101. In this aspect, the identifier may be changed to assign the second output data to the second side of the vehicle 101, in particular without changing the numerical value. When the second input data is a speed for the first side, the second output is a speed for the second side. When the second input data is an acceleration for the first side, the second output is an acceleration for the second side. When the signal is defined by the image data that captures the environment of the vehicle 101 on the first side of the vehicle 101, the first output and the second output may be composed to form an output signal that defines an image that displays an environment of the vehicle 101 on the second side of the vehicle 101. The first output data and the second output data may be stored on a database.
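The two aspects of step 206 — toggling the sign of a numerical value versus changing only a side identifier — might look like this (helper names are hypothetical):

```python
def toggle_sign(value):
    # Aspect one: multiply the numerical lateral value by -1,
    # mirroring the position at the axis defined by dimension X.
    return -value

SIDE_FLIP = {"left": "right", "right": "left"}

def flip_side(identifier):
    # Aspect two: change only the side identifier; the numerical
    # value attached to the record stays untouched.
    return SIDE_FLIP[identifier]
```

Either variant assigns the second output data to the opposite side of the vehicle; which one applies depends on how the bus encodes the lateral information.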
In one embodiment, training data for training a model or an artificial neural network for recognizing a behavior of the object 102 on the second side of the vehicle 101 is determined depending on the first output and the second output.
In one embodiment, the artificial neural network is trained with the training data.

Claims

CLAIMS
1. Method of signal processing, characterized by receiving (202) first input data and second input data, wherein the first input data defines a first dimension (X), in particular a direction of travel of the vehicle (101), and the second input data defines a second dimension (Y) of a signal determined from image data captured by at least one sensor of a vehicle (101), wherein the first input data comprises information about a movement or a position of an object (102), in particular relative to the vehicle (101) in the first dimension (X), wherein the second input data comprises information about the movement or the position of the object (102) in particular relative to the vehicle (101), wherein the second input is assigned in the second dimension (Y) to a first side of the vehicle (101), determining (204) first output data depending on the first input data, determining (206) second output data depending on the second input data, wherein the second output is assigned to a second side of the vehicle (101).
2. The method according to claim 1, characterized in that the signal defines a position, a speed or an acceleration of the object (102) in particular relative to the vehicle (101).
3. The method according to one of the previous claims, characterized in that the second input data defines a position, a speed or an acceleration of the object (102) in the second dimension (Y) on the first side of the vehicle (101), and the second output data defines a position, a speed or an acceleration of an object on the second side of the vehicle (101).
4. The method according to one of the previous claims, characterized in that the first input data is defined by a decomposition of the signal in the first dimension (X) and the second input data is defined by a decomposition of the signal in the second dimension (Y).
5. The method according to one of the previous claims, characterized in that the sensor is one of the group of a radar sensor, camera and LiDAR sensor.
6. The method according to one of the previous claims, characterized in that the object (102) is another vehicle, a subject, in particular a person, or part of a road infrastructure, in particular a traffic sign.
7. The method according to one of the previous claims, characterized in that the signal is defined by image data that captures an environment of the vehicle (101 ) on the first side of the vehicle (101 ), wherein the first output and the second output are composed to form an output signal that defines an image that displays an environment of the vehicle (101 ) on the second side of the vehicle (101 ).
8. The method according to one of the previous claims, characterized in that training data for training a model or an artificial neural network for recognizing a behavior of the object (102) on the second side of the vehicle (101) is determined depending on the first output and the second output.
9. The method according to claim 8, characterized in that the artificial neural network is trained with the training data.
10. Device for signal processing, characterized in that the device is adapted to process input data from at least one sensor of the group of radar sensor, camera and LiDAR sensor and to execute the steps of the method according to one of the previous claims.
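Outside the claim language itself, the decomposition and side-mirroring described in claims 1, 4 and 7 can be sketched in code. This is a hypothetical illustration, not the patented implementation; the function names (`decompose_signal`, `mirror_to_second_side`) and the sign-flip mirroring of the lateral component are assumptions:

```python
import numpy as np

def decompose_signal(track):
    """Split a 2-D motion signal into its X component (direction of
    travel, first input data) and Y component (lateral, second input
    data, assigned to the first side of the vehicle), as in claim 4."""
    track = np.asarray(track, dtype=float)  # shape (N, 2): columns X, Y
    first_input = track[:, 0]   # first dimension (X)
    second_input = track[:, 1]  # second dimension (Y)
    return first_input, second_input

def mirror_to_second_side(first_input, second_input):
    """Determine first and second output data; the second output is the
    lateral component mirrored to the opposite (second) side of the
    vehicle by a sign flip, while the longitudinal component is reused."""
    first_output = np.array(first_input)    # unchanged along X
    second_output = -np.array(second_input) # mirrored across the X axis
    return first_output, second_output
```

For an object tracked alongside the vehicle on its first side, this yields a synthetic track on the second side without requiring a second sensor observation there.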
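Claims 7 to 9 further compose the outputs into an image of the opposite-side environment and use the result as training data. A minimal sketch of that idea, assuming the mirroring is a simple horizontal image flip and using a hypothetical `y_offset` label field; actual network training is beyond the scope of this illustration:

```python
import numpy as np

def synthesize_second_side_image(first_side_image):
    """Compose an image displaying the environment on the second side
    of the vehicle by mirroring the first-side image about the
    vehicle's longitudinal (X) axis, as in claim 7."""
    return np.flip(first_side_image, axis=1)  # flip along Y dimension

def make_training_sample(first_side_image, label):
    """Pair the mirrored image with a correspondingly mirrored label
    (here: the object's lateral offset) to obtain training data for a
    model recognizing behavior on the second side, as in claim 8."""
    mirrored = synthesize_second_side_image(first_side_image)
    mirrored_label = {**label, "y_offset": -label["y_offset"]}
    return mirrored, mirrored_label
```

A design note: generating second-side samples by mirroring first-side observations doubles the usable training data from a single-sided sensor setup, which is the apparent motivation behind claims 8 and 9.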

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/EP2020/025370 WO2022033652A1 (en) 2020-08-12 2020-08-12 Device for and method of image signal processing in a vehicle
DE112020007498.6T DE112020007498T5 (en) 2020-08-12 2020-08-12 Device and method for signal processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/025370 WO2022033652A1 (en) 2020-08-12 2020-08-12 Device for and method of image signal processing in a vehicle

Publications (1)

Publication Number Publication Date
WO2022033652A1 2022-02-17

Family

ID=72322418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/025370 WO2022033652A1 (en) 2020-08-12 2020-08-12 Device for and method of image signal processing in a vehicle

Country Status (2)

Country Link
DE (1) DE112020007498T5 (en)
WO (1) WO2022033652A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114705121A (en) * 2022-03-29 2022-07-05 智道网联科技(北京)有限公司 Vehicle pose measuring method and device, electronic equipment and storage medium
CN114705121B (en) * 2022-03-29 2024-05-14 智道网联科技(北京)有限公司 Vehicle pose measurement method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190179317A1 (en) * 2017-12-13 2019-06-13 Luminar Technologies, Inc. Controlling vehicle sensors using an attention model
US20190347819A1 (en) * 2018-05-09 2019-11-14 Neusoft Corporation Method and apparatus for vehicle position detection


Also Published As

Publication number Publication date
DE112020007498T5 (en) 2023-06-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20764945

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20764945

Country of ref document: EP

Kind code of ref document: A1