EP4211490A1 - Method, radar system and vehicle for processing radar signals

Method, radar system and vehicle for processing radar signals

Info

Publication number
EP4211490A1
Authority
EP
European Patent Office
Prior art keywords
radar
units
view
radar system
measurement data
Prior art date
Legal status
Pending
Application number
EP21772783.3A
Other languages
German (de)
English (en)
Inventor
Marcel Hoffmann
Michael GOTTINGER
Martin Vossiek
Mark Christmann
Peter Gulden
Current Assignee
Symeo GmbH
Original Assignee
Symeo GmbH
Priority date
Filing date
Publication date
Application filed by Symeo GmbH filed Critical Symeo GmbH
Publication of EP4211490A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50 Systems of measurement based on relative movement of target
    • G01S13/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/589 Velocity or trajectory determination systems; Sense-of-movement determination systems measuring the velocity vector
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/87 Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/878 Combination of several spaced transmitters or receivers of known location for determining the position of a transponder or a reflector
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/35 Details of non-pulse systems
    • G01S7/352 Receivers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/35 Details of non-pulse systems
    • G01S7/352 Receivers
    • G01S7/356 Receivers involving particularities of FFT processing

Definitions

  • the invention relates to a method for signal processing of radar signals according to claim 1, a radar system according to claim 21 and a vehicle according to claim 25.
  • Radar systems are increasingly being used in particular in detecting the surroundings of vehicles, for example in the automobile sector, in addition to optical sensors such as mono- or stereoscopic camera systems or light detection and ranging (lidar) sensors, since with radar systems precise radial distance and speed measurements are made possible.
  • Reliable detection of the surroundings of vehicles can be seen as a prerequisite for the further automation of, sometimes safety-critical, driving functions of the vehicles, such as driver assistance systems, highly automated driving systems and fully autonomous driving systems.
  • radar units are used that emit frequency-modulated continuous wave (FMCW) signals, so-called chirp-sequence radar units.
  • in chirp-sequence radar units, a periodic sequence of I FMCW signals, each having a linear frequency ramp, is emitted by the radar unit as a transmission signal.
  • Such radar units known from the prior art have at least one transmitting antenna (or several transmitting antenna elements of a transmitting antenna array) and several receiving antenna elements that can be operated as a receiving antenna array, it being possible to design the receiving antenna array as either one- or two-dimensional.
  • Such a radar unit can be used, for example, to measure (determine) the radial distance and/or the radial speed of one (or more) objects located in the field of view (Field-of-View, FoV) of the radar unit relative to the radar unit.
  • the radial distance and/or the radial speed relative to the radar unit can typically be determined using a chirp-sequence radar unit, with a transmission signal being transmitted or radiated by the at least one transmitting antenna of the radar unit via a reciprocal transmission channel, which signal is reflected by at least one object and received by the receiving antenna elements of the receiving antenna array of the radar unit.
  • the signals received from the receiving antenna array of the radar unit can be processed in such a way that a three-dimensional result space is created that contains a (radial) distance d j , a radial velocity v r,j and an azimuth angle α j of the object relative to the radar unit.
  • the angular resolution that can be achieved is limited in particular by the full width at half maximum or the 3 dB opening angle of the receiving antenna array used.
  • the angular resolution results approximately in Δα ≈ λ/L, where λ is the wavelength and L indicates the dimension(s) (aperture size) of the receiving antenna array in the azimuth and/or elevation direction, depending on whether it is a one- or two-dimensional receiving antenna array.
  • the maximum possible dimensions of the receiving antenna array, i.e. the aperture size L, are strictly limited due to a small installation space and for design and cost reasons.
  • the angular resolutions that can be achieved are often too low for safety applications in road traffic, as discussed, for example, in F. Roos, J. Bechter, C. Knill, B. Schweizer, and C. Waldschmidt, "Radar Sensors for Autonomous Driving," in IEEE Microwave Magazine, September 2019, pp. 58-72.
  • window functions are typically used to improve the sidelobe attenuation (which is 13.3 dB with an equidistant aperture occupancy). As a result, however, the angular resolution specified above is further degraded (by a factor of approximately 2) in practical applications.
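As a rough numerical illustration of this limitation (a sketch only; the 77 GHz carrier frequency and the 4 cm aperture are assumptions, not values from the description), the achievable real-aperture angular resolution can be estimated as follows:

```python
# Rough estimate of real-aperture angular resolution (illustrative values only).
import math

c = 3e8            # speed of light in m/s
f_c = 77e9         # assumed carrier frequency (automotive radar band)
wavelength = c / f_c

L = 0.04           # assumed aperture size of the receiving array in metres (4 cm)

delta_alpha = wavelength / L              # angular resolution in radians (approx.)
delta_alpha_windowed = 2 * delta_alpha    # roughly doubled when a window function is applied

print(f"lambda = {wavelength * 1e3:.2f} mm")
print(f"angular resolution ~ {math.degrees(delta_alpha):.1f} deg")
print(f"with windowing     ~ {math.degrees(delta_alpha_windowed):.1f} deg")
```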
  • the object of the invention is therefore to overcome the disadvantages of the solutions known from the prior art and to provide a method for processing radar signals of a radar system and a corresponding radar system with which an improved angular resolution can preferably be achieved, in particular without increasing the physical dimensions of the aperture of the receiving antenna array.
  • the object of the invention is achieved by a method for signal processing of radar signals according to claim 1, a radar system according to claim 21 and a vehicle according to claim 25.
  • the object of the invention is achieved by a method for signal processing of radar signals of a radar system, in particular a vehicle radar system, preferably an automobile radar system, with at least two radar units arranged at a distance from one another, comprising the following steps:
  • generating a discrete total coordinate system of the field of view (in particular one in which measurement data of the at least two radar units of the radar system, generated by detecting the field of view, are co-registered); preferably determining a preferably multi-dimensional vector speed for at least one resolution cell (pixel or voxel) of the discrete total coordinate system and/or a preferably multi-dimensional vector speed for the radar system.
  • a discrete overall coordinate system can be generated from measurement data from at least two radar units of a radar system that cover a common field of view (in which the fields of view of the individual radar units overlap at least partially, possibly only partially).
  • the measurement data from the at least two radar units are co-registered in the discrete overall coordinate system, which makes it possible to detect any objects located in the common field of view from at least two (different) perspectives.
  • Determining the vector speed of the at least one resolution cell (which can belong to an object or radar target, for example) from the discrete overall coordinate system opens up the possibility of improving the angular resolution using the determined vector speed.
  • a partial radar image with increased angular resolution (azimuth resolution) can be calculated (reconstructed).
  • the reconstruction takes place in a partial spatial area in which the vector speed of at least one resolution cell (which can belong to a detected object, for example) is known.
  • the vector speed of the radar system, i.e. the intrinsic motion (ego-motion estimation) or the trajectory of the radar system, can be determined from the discrete total coordinate system.
  • an environment radar image can additionally or alternatively be generated, for example with a synthetic aperture radar (SAR) method, in which an improved angular resolution is obtained.
  • the surround radar image generated in this way is not limited to the common field of view of the at least two radar units.
  • the measurement data for visibility areas that can only be detected by one radar unit can also be used to generate an environment radar image, for example with a synthetic aperture radar method, that has an improved angular resolution.
  • the method can be used for vehicle applications, such as automobile applications, in which at least two, at least essentially temporally synchronized, radar units of a radar system detect a common field of view.
  • the resulting angular resolution is no longer dependent on the size of the real aperture, but can be determined depending on the object from the portion of the object trajectory that is lateral to the radar system.
  • the method can be used for any object distribution (radar target distribution) and object movements without assuming knowledge of the radar system's own movement.
  • the radar units of the radar system can also be distributed vertically, with the horizontal offset in particular being retained.
  • at least two radar units of the radar system can be arranged horizontally and/or at least two radar units of the radar system can be arranged vertically and/or at least two radar units can be arranged at an angle that is oblique to the horizontal and vertical.
  • the radar system can include at least two or at least three or at least four or more radar units (with preferably at least partially a common field of view).
  • the radar units of the radar system can be arranged in at least two directions (eg horizontally and vertically).
  • the inverse SAR reconstruction can be carried out, for example, in a three-dimensional, discretized space, which additionally enables an interferometric evaluation of the height of objects.
  • the vector speed of a voxel (or a three-dimensional resolution cell) can be determined from the co-registered measurement data of the radar units of the radar system.
  • a discrete total coordinate system can be understood as a coordinate system in which measurement data for a field of view that were recorded from several (different) perspectives, for example by several (different) radar units, are co-registered (inserted), and which is divided by a discretization method into discrete resolution cells, such as pixels (in two dimensions) or voxels (in three dimensions), of the continuous overall coordinate system.
  • Co-registration can be understood as a transformation process in which several measurement data (sets) from different radar units are transferred to a common coordinate system.
  • reconstructing (the spatial sub-area of the field of view) can include the following step:
  • Calculating at least one partial radar image for a spatial sub-area of the field of view in which the at least one resolution cell is located, based on the measurement data from at least one of the radar units and based on the vector speed of the at least one resolution cell determined (with at least two radar units), preferably using an inverse synthetic aperture radar method.
  • a partial radar image is understood to mean, in particular, a partial section in the discrete overall coordinate system that has a higher angular resolution.
  • the reconstruction (of the spatial sub-area of the viewing area) can in particular also include the following step:
  • Calculating at least one environment radar image for a spatial environment area detected by at least one of the radar units, based on the measurement data of the at least one radar unit that detected the environment area, and based on the vector speed of the radar system, preferably using a synthetic aperture radar method.
  • by determining the vector speed of the radar system, it can be made possible, in particular with a SAR method, to calculate an environment radar image with a higher angular resolution from the measurement data of at least one radar unit.
  • the higher-resolution environment radar image is not limited to the common field of view of the two radar units, but can contain areas that are covered by only one radar unit.
  • the vector speed of the radar system is determined by additional sensors, such as odometrically and/or by using a global navigation satellite system and/or by inertial sensors.
  • the method further comprises the following step:
  • detecting at least one object located in the field of view in the discrete total coordinate system, preferably by determining amplitude maxima using dynamic and/or constant power threshold values; a preferably multi-dimensional vector speed of at least one resolution cell being determined for the at least one detected object.
  • a (multidimensional) vector speed can be determined for the at least one object.
  • At least one partial radar image is preferably generated based on the measurement data from at least one of the radar units and based on the vector speed assigned for the at least one detected object using an inverse synthetic aperture radar method.
  • a partial radar image with a higher angular resolution can be generated, based on the (multidimensional) vector speed of the object, for a partial spatial area in which the detected object is located. The contours of the detected object are more pronounced in the partial radar image.
  • a large number of objects located in the field of view are preferably detected in the discrete overall coordinate system, with a preferably multi-dimensional vector speed of at least one resolution cell being determined for each detected object, and with a (partial) radar image being generated for each detected object using the measurement data from at least one of the radar units and based on the vector velocities of the respective detected objects, preferably using the inverse synthetic aperture radar method.
  • a partial radar image can thus be generated for each object that has a higher angular resolution than the radar image in which the measurement data of both radar units were merely co-registered.
  • the measurement data of the respective radar units generated by detecting the field of view contain distance data, radial speed data and angle data, the angle data preferably including angle data in the azimuth direction and in the elevation direction.
  • a preferably equidistant and/or Cartesian discretization of the visual range represented in the overall coordinate system is carried out.
  • the distance data and angle data of the radar units are preferably superimposed (in terms of magnitude), as a result of which the measurement data of the radar units can be co-registered in the discrete total coordinate system.
  • an amplitude and (per radar unit used) a radial velocity can be assigned to each resolution cell of the discrete overall coordinate system in the common field of view.
  • the inverse synthetic aperture radar method is applied to the spatial sub-area in the discrete overall coordinate system in which the detected object(s) is/are located, preferably with the sub-area having a rectangular, elliptical or circular shape.
  • the inverse synthetic aperture radar method preferably includes one of the following methods: range-Doppler, omega-K, phase-shift migration, holography or extended chirp scaling.
  • in this way, the inverse SAR method can be implemented (directly).
  • At least one vector speed error of the determined vector speed of the at least one resolution cell or of the at least one detected object is determined and corrected from the partial radar images generated by the two radar units.
  • Errors can occur when determining the vector speed, which can lead to systematic phase errors in the reconstructed/generated radar images, as a result of which the positions of the detected objects are reconstructed with errors. By determining and correcting any vector velocity errors, these systematic phase errors can be eliminated.
  • Determining and correcting the vector velocity errors preferably includes the following steps:
  • Comparing the partial radar images and changing the vector speed if the comparison shows that there is a difference between the two partial radar images; the comparing and changing is preferably repeated iteratively until the difference between the two partial radar images is below a predetermined threshold. As a result, the vector speed errors can be successively reduced. In this case, the change takes place in a targeted manner (not randomly), in that the image deviations can be used to calculate which vector speed describes the actual vector speed better than the faulty vector speed, or in which direction the vector speed must be changed.
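A minimal sketch of such an iterative correction loop, assuming a stand-in image-difference measure (in a real system the partial radar images reconstructed by the two radar units would be compared; the step sizes and threshold are illustrative):

```python
# Sketch of the iterative vector-speed correction: the velocity estimate is nudged
# in a targeted way until the partial radar images of the two radar units agree to
# within a threshold. The image comparison is replaced here by a stand-in cost.
import numpy as np

v_true = np.array([1.8, -0.7])          # unknown "true" vector speed (illustrative)

def image_difference(v_est):
    # Placeholder for comparing the two partial radar images reconstructed with v_est;
    # here the mismatch simply grows with the velocity error.
    return np.linalg.norm(v_est - v_true)

def refine_velocity(v0, step=0.5, threshold=0.05, max_iter=100):
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        if image_difference(v) < threshold:
            break
        # Try small targeted changes in +/- x and +/- y and keep the best one.
        candidates = [v + step * d for d in
                      (np.array([1, 0]), np.array([-1, 0]),
                       np.array([0, 1]), np.array([0, -1]))]
        best = min(candidates, key=image_difference)
        if image_difference(best) < image_difference(v):
            v = best
        else:
            step *= 0.5                  # no improvement: reduce the step size
    return v

print(refine_velocity([0.0, 0.0]))       # converges towards v_true
```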
  • An overall radar image of the common field of view is preferably generated from the individual partial radar images of the respective radar units, as a result of which an image can be generated in which the previously generated partial radar images of the detected objects are inserted and which has an improved angular resolution and more precise object contours.
  • the discrete overall coordinate system in which the measurement data of the radar units were co-registered, can be used as a basis into which the partial radar images can be inserted by a suitable arithmetic operation, such as addition, multiplication or substitution, of the corresponding partial areas.
  • a further object detection or classification can preferably be carried out in the overall radar image, for example using cluster algorithms such as density-based spatial clustering of applications with noise or connected component labeling or other methods of image and pattern recognition, such as machine learning methods, such as deep learning approaches.
  • it is also possible to generate a separate overall radar image for each of the radar units used and/or to generate an overall radar image of the radar system from a combination of all radar units used.
  • a, preferably multi-dimensional, vector velocity is determined for at least part of, preferably for all, resolution cells of the discrete total coordinate system of the field of view, with at least part of, preferably the entire, field of view being reconstructed in the discrete total coordinate system, preferably using an inverse synthetic aperture radar method.
  • At least one object located in the field of view is detected in a distance-radial-velocity diagram and/or a distance-angle diagram that is/are generated from the measurement data of one of the radar units, preferably by determining amplitude maxima using dynamic and/or constant power thresholds.
  • several, preferably multi-dimensional, vector velocities are determined for at least one resolution cell of the discrete total coordinate system of the field of vision (in particular if several objects are detected in the distance-radial-velocity diagram and/or in the distance-angle diagram for the resolution cell ).
  • the detection of the at least one field of view of the radar system is periodically repeated, with the measurement data from the radar units being combined to form an overall measurement data set, with the overall measurement data set being processed using the above method.
  • Measurement data which also contain the measurement data of cross paths of the radar signals between the radar units of the radar system are preferably processed with the above method, provided that the radar units of the radar system are operated coherently.
  • the cross path between the at least two radar units can also be assumed to be reciprocal, which means that the two cross path spectra can be combined (fused) in a separate step, so that the signal-to-noise ratio for the cross path can be improved (increased).
  • the merged cross path can thus also be taken into account in the reconstruction, which on the one hand increases the perspective gain and on the other hand improves the signal-to-noise ratio, so that an improved evaluation of the overall radar image is made possible.
  • if the radar system has a large number of radar units (e.g. at least three or at least four or at least eight) arranged at a distance from one another, a plurality of spatial fields of view are each detected by at least two radar units of the radar system, and the measurement data from the radar units are processed for each of the fields of view using the above method, e.g. in pairs.
  • a radar system, in particular a vehicle radar system, preferably an automobile radar system, which has at least two radar units, the radar units preferably being arranged spaced apart from one another, preferably at a predetermined distance, the radar system being designed to carry out the above method.
  • a radar system in particular a vehicle radar system, preferably an automobile radar system, which has a large number of radar units, with a plurality of spatial viewing ranges of at least two radar units of the radar system overlapping at least partially, with the radar units being separated from one another, preferably at a predetermined distance, the radar system being designed to carry out the above method.
  • at least one radar unit of the at least two or the plurality of radar units has a computing module that is designed to carry out the above method, which means that an additional computing unit is not necessary.
  • the radar system also has a, preferably central, computing unit (master computing unit) which is designed to receive the measurement data from the computing units and to carry out the above method.
  • the object is also achieved by a vehicle, in particular an automobile, which has the above radar system.
  • Mobile devices are also conceivable, such as manned or unmanned aircraft or preferably cars and/or trucks, which have the radar system according to the invention.
  • the radar system according to the invention can also be attached to static devices.
  • smaller radar units can be set up at the side of the road for traffic monitoring and the movement of vehicles driving laterally past them can be used to achieve good azimuth resolution.
  • FIG. 1 shows a schematic arrangement of a radar system according to the invention
  • FIG. 3 is a schematic representation of a receive antenna array containing multiple receive antenna elements
  • FIG. 4 shows a schematic representation of how a vector speed for at least one resolution cell can be calculated from the measurement data of at least two radar units 10, 20 of a radar system 100;
  • FIG. 5 shows a schematic arrangement of a radar system according to the invention
  • FIG. 6 shows a schematic plan view of a vehicle in which a radar system with a number of radar units for detecting the surroundings is arranged;
  • FIG. 7 shows a flow chart of a first exemplary embodiment of the method according to the invention.
  • FIG. 8 shows a flow chart of a second exemplary embodiment of the method according to the invention.
  • FIG. 9 shows a flow chart of a third exemplary embodiment of the method according to the invention.
  • FIG. 12 shows a flow chart of a sixth exemplary embodiment of the method according to the invention.
  • FIG. 13 shows an exemplary evaluation by means of a delay-and-sum beamformer for a radar unit
  • FIG. 16 shows an exemplary view of reconstruction results which are present after method step VS7;
  • FIG. 17 shows an example of measurement data from two radar units, which are co-registered in a common overall coordinate system and are present after method step VS4, with two objects lying close to one another.
  • the radar system 100 has two radar units 10, 20 which are arranged spatially separate from one another.
  • Each radar unit 10, 20 has a spatial field of view (field of view) FoV10, FoV20, which extends from the respective radar unit 10, 20 at a specific opening angle.
  • the spatial viewing ranges FoV10, FoV20 of the radar units 10, 20 overlap at least partially in a spatial detection range or viewing range FoV of the entire radar system 100.
  • the two radar units 10, 20 are communicatively connected to a (central) processing unit 90.
  • the communicative connection between the radar units 10 and 20 can be wired or wireless.
  • the radar system 100 shown in FIG. 1 can be arranged, for example, in a vehicle, preferably an automobile.
  • FIG. 1 shows an object O that is located in a detection range FoV of radar system 100 .
  • the object O can move freely relative to the radar system 100 at an initially unknown speed.
  • the radar unit 10 is shown enlarged in FIG. 1 .
  • the radar unit 20 is constructed in the same way as the radar unit 10 .
  • the radar unit 10 has its own local oscillator LO, a modulation generator MG, at least one high-frequency mixer M, a transmitting antenna TX and a receiving antenna array RX, which contains four receiving antenna elements.
  • the local oscillator LO is connected to the modulation generator MG, in which a transmission signal can be generated.
  • the modulation generator MG is in turn connected to the transmission antenna TX so that it can transmit or emit the transmission signal.
  • the modulation generator MG is also connected to the high-frequency mixer M, in which the transmission signal is mixed with a reception signal from the reception antenna array.
  • the radar units 10, 20 are arranged (or installed in a vehicle, for example) in such a way that the positions of the radar units relative to one another are at least essentially known (at least in the range of centimeters).
  • the radar units 10, 20 can be arranged, for example, on the side or the front of a vehicle in such a way that the fields of view FoV10, FoV20 of the radar units 10, 20 overlap in at least one spatial field of view FoV.
  • radar units 10, 20 can be connected, for example, via a trigger line and/or can be controlled via a reference clock.
  • the dimensioning of the radar units 10, 20 and the modulation of the transmission signals are designed so that each radar unit 10, 20 can determine, at least approximately simultaneously, the object parameters (distance, radial speed and angle) of an object located in the common (overlapping) field of view FoV.
  • a multiplex method can advantageously be used in the transmission of the various transmission signals (radar signals) in order to reduce (avoid) interference between the radar units 10, 20.
  • Object parameters are determined using a modulated transmission signal, which is transmitted or radiated with the transmission antenna TX via a reciprocal transmission channel and is reflected by at least one object, with the transmission signal reflected on the object being received as a received signal by the reception antenna elements of the reception antenna array RX.
  • the received signal can then be mixed into the baseband with a high-frequency mixer M.
  • the baseband signals can then be sampled by the analog-to-digital converter ADC and processed digitally using a computing unit 90 .
  • Information regarding the distance d j of an object O can be calculated by evaluating the signal propagation time over the transmission channel.
  • the radial velocity v r,j of an object O is proportional to the frequency shift of the received signal based on the Doppler effect.
  • the digital baseband signals can be processed in such a way that a three-dimensional result space results as a data set (distance d j , radial speed v r,j , azimuth angle ⁇ j ).
  • with a suitably dimensioned antenna array which has at least two antennas in the vertical plane of the real or virtual array, it is also possible to estimate an elevation angle for evaluating the three-dimensional position of an object.
  • Objects or radar targets and their object parameters are detected and/or determined.
  • the data sets of the radar units 10, 20 can be transmitted to a computing unit 90 of the radar system 100, which is implemented as a separate computer, for example, or to a computing module of a (master) radar unit.
  • the different data sets can be transformed into a common coordinate system using the information about the installation positions of the radar units 10, 20 by co-registration.
  • a vector velocity v can be determined from the radial velocities of the radar units 10, 20.
  • the radar units 10, 20 used in FIG. 1 are, for example, chirp sequence radar units 10, 20. However, other forms of modulation are also conceivable.
  • Each chirp-sequence radar unit 10, 20 emits a transmission signal which contains a periodic sequence of I frequency-modulated continuous wave (FMCW) signals, each with a linear frequency ramp (a so-called chirp), which are generated by the local oscillator LO.
  • An exemplary sequence of several chirps is shown in FIG. 2. Starting from the transmitting antenna TX, this signal is sent via a transmission channel which delays the signal by the signal propagation time τ and attenuates it by a value A, which is proportional to the backscatter cross-section of the observed object.
  • the propagation time τ results from the distances between the transmitting antenna TX and the object on which the transmission signal is reflected, and between the object and the receiving antenna array RX, according to τ = (d TX + d RX )/c ≈ 2d/c, where c is the propagation speed of the electromagnetic wave.
  • the propagation time changes over time as a function of the component v r of the object's actual vector velocity that is radial to the respective radar unit 10, 20: τ(t) ≈ 2(d + v r · t)/c.
  • the resulting received signal that is received at the receiving antenna array RX is mixed into the baseband with the transmission signal in a mixer according to s B (t) = s TX (t) · s RX *(t) and subjected to low-pass filtering, where the operation (·)* stands for complex conjugation. After inserting the delayed and attenuated transmission signal, the so-called beat signal s B (t) follows approximately.
  • from the beat frequency f B alone, the transit time of the electromagnetic wave in the transmission channel or the object distance d cannot be determined unambiguously, since the beat frequency is frequency-shifted by an unknown Doppler component f D = 2 v r /λ due to the radial object speed v r.
  • for the beat signal, additional phase terms follow which model the change in propagation time caused by the target movement during a chirp. This causes a widening of the maximum in the frequency spectrum both in the distance (range) direction and in the Doppler direction and makes an approximation necessary in the determination of f D and f B.
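The separation of the distance-dependent beat frequency and the Doppler component over the chirp sequence can be illustrated with the following sketch, which simulates an idealized beat signal for a single point target and evaluates it with a two-dimensional Fourier transformation (all parameter values are assumptions for illustration):

```python
# Sketch of chirp-sequence processing (assumed parameters, simulated beat signal):
# the 2D FFT performs a fast-time FFT per chirp (beat frequency -> range) and a
# slow-time FFT across the I chirps (Doppler shift -> radial velocity).
import numpy as np

c, f_c = 3e8, 77e9                 # propagation speed, assumed carrier frequency
B, T = 300e6, 50e-6                # assumed chirp bandwidth and chirp duration
I, N = 128, 256                    # number of chirps, samples per chirp
fs = N / T                         # sampling rate of the beat signal
lam = c / f_c

d, v_r = 30.0, 5.0                 # illustrative target: 30 m, 5 m/s radial speed
f_beat = 2 * d / c * B / T         # distance-dependent beat frequency
f_dopp = 2 * v_r / lam             # Doppler shift

t = np.arange(N) / fs
chirp_idx = np.arange(I)[:, None]
# Simulated beat signal: fast-time tone at f_beat, slow-time phase rotation by f_dopp.
s_b = np.exp(1j * 2 * np.pi * (f_beat * t[None, :] + f_dopp * chirp_idx * T))

rd_map = np.fft.fftshift(np.fft.fft2(s_b), axes=0)   # range-Doppler map (I x N)
i_dopp, i_range = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)

d_est = i_range * fs / N * c * T / (2 * B)
v_est = (i_dopp - I // 2) / (I * T) * lam / 2
print(f"estimated distance ~ {d_est:.1f} m, radial speed ~ {v_est:.1f} m/s")
```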
  • the azimuth angle α, i.e. the angle of incidence at which the received signal reflected on the object O was detected, is required in addition to the distance d, with the azimuth angle α being measured from the perpendicular of the linear receiving antenna array of the radar unit 10, 20.
  • the azimuth angle α is typically evaluated in automobile radar units, for example, using an array of K horizontally distributed receiving antennas, which are arranged at distances on the order of the wavelength.
  • a so-called uniform linear array (ULA) with K receiving antennas distributed equidistantly at a spacing of λ/2 is shown schematically in FIG. 3.
  • the angle-dependent phase along the K receiving antenna elements can be described with a trigonometric evaluation via φ k = (2π/λ) · b k · sin(α), where b k indicates the distance of the k-th antenna to the first antenna.
  • each radar unit 10, 20 must be calibrated initially. With the help of the calibration matrix used for this purpose, production, layout and coupling-related influences on the amplitudes and phases on the different reception channels can then be compensated for.
  • Direction-of-Arrival (DoA) methods can be used to estimate the angle of incidence α based on the vector φ[k].
  • a so-called steering vector is used for this, with which the signals of the K receiving antenna elements of the receiving antenna array are weighted element by element.
  • a group of methods for angle estimation is what is known as beamforming (rarely also called “beam shaping”).
  • This group includes, for example, the delay-and-sum or Bartlett beamformer and the minimum-variance-distortionless-response (MVDR) or Capon beamformer. Both approaches use digital beam steering for the angle-dependent estimation of the spectral power density.
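A minimal sketch of a delay-and-sum (Bartlett) beamformer for a uniform linear array with λ/2 spacing, using a simulated single snapshot (element count and angle of incidence are illustrative assumptions):

```python
# Sketch of a delay-and-sum (Bartlett) beamformer for a uniform linear array with
# K elements at lambda/2 spacing (simulated single snapshot; all values assumed).
import numpy as np

K = 8                                   # number of receiving antenna elements
d_el = 0.5                              # element spacing in wavelengths (lambda/2)
alpha_true = np.deg2rad(12.0)           # illustrative angle of incidence

k_idx = np.arange(K)
# Received phases along the array: phi_k = 2*pi * d_el * k * sin(alpha)
x = np.exp(1j * 2 * np.pi * d_el * k_idx * np.sin(alpha_true))

scan = np.deg2rad(np.linspace(-90, 90, 721))
# Steering vectors for all scan angles, applied element-wise (delay-and-sum).
A = np.exp(1j * 2 * np.pi * d_el * k_idx[None, :] * np.sin(scan)[:, None])
spectrum = np.abs(A.conj() @ x) ** 2 / K**2

alpha_est = scan[np.argmax(spectrum)]
print(f"estimated azimuth ~ {np.degrees(alpha_est):.1f} deg")
```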
  • In FIG. 13, the measured intensities are plotted in the distance (range) direction over the (azimuth) angle direction; such a representation is also referred to as a range-angle diagram.
  • FIG. 13 shows a region I that is finely hatched to the top left, in which low intensities are measured which are approximately in the order of magnitude of the general noise level.
  • an area h is shown in FIG. 13, in which contour lines of an intensity maximum are shown. Intensities can be measured in area h which are, for example, of the order of more than 3 dB, more than 5 dB, more than 10 dB, more than 15 dB or more than 30 dB above the general noise level.
  • In addition to beamforming, there are so-called subspace methods, which are also attributed to imaging techniques with so-called super-resolution. Subspace methods are based on the assumption that there is a noise subspace orthogonal to the signal subspace. Multiple Signal Classification (MUSIC) can be mentioned as an example of a subspace method.
  • the methods described above can be used in an equivalent manner to determine the elevation angle using a receiving antenna array RX with additional vertically arranged antenna elements.
  • the result of an evaluation of a chirp-sequence radar unit is a discretized, three-dimensional result space with dimensions that are typically designated as "range" (distance d), "Doppler" (radial velocity v r ) and "angle" (angle of incidence α).
  • the information contained in the result space about the objects in the detected area must be separated from the superimposed interference components, such as the phase noise of the local oscillator, thermal noise, clutter or interference from other transmitters or radar units.
  • Adaptive algorithms such as Constant False Alarm Rate (CFAR) methods can be used for this.
  • CFAR methods can be performed in one dimension (e.g. in the range direction), two dimensions (in the range and Doppler direction) or in three dimensions (range, Doppler and angle directions).
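A minimal sketch of a one-dimensional cell-averaging CFAR detector of the kind referred to above (training/guard window sizes and the false-alarm probability are illustrative assumptions):

```python
# Sketch of a one-dimensional cell-averaging CFAR detector (range direction).
import numpy as np

def ca_cfar_1d(power, n_train=8, n_guard=2, pfa=1e-3):
    """Return indices of cells whose power exceeds the adaptive CFAR threshold."""
    n = len(power)
    alpha = n_train * 2 * (pfa ** (-1.0 / (n_train * 2)) - 1)   # scaling factor
    detections = []
    for i in range(n_train + n_guard, n - n_train - n_guard):
        leading = power[i - n_guard - n_train : i - n_guard]
        trailing = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise_est = np.mean(np.concatenate([leading, trailing]))
        if power[i] > alpha * noise_est:
            detections.append(i)
    return detections

rng = np.random.default_rng(0)
power = rng.exponential(1.0, 200)      # noise floor
power[60] += 40.0                      # illustrative targets
power[140] += 25.0
print(ca_cfar_1d(power))               # expected to flag cells near 60 and 140
```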
  • FIG. 4 shows schematically how the vector speed for at least one resolution cell (pixel), which was detected, for example, as the center point (intensity maximum) of an object O, can be calculated from the measurement data of at least two radar units 10, 20 of a radar system 100.
  • the vector speed of a resolution cell is composed of a radial speed component v r and an angular velocity component ω.
  • the angular velocity ω can be converted into a tangential velocity component v t via the resolution cell distance (pixel distance) d 0 : v t = ω · d 0 .
  • Both speed components can be assumed to be constant during a measurement or during the duration of a chirp sequence.
  • the vector speed of any resolution cell in the common field of view FoV can be determined, for example, by determining a vector between the resolution cell under consideration and a point of intersection S of straight lines G1, G2 perpendicular to the end points of the radial speed vectors.
  • the position (0, 0) should be assumed for the resolution cell under consideration.
  • the radial velocities of the resolution cell of all radar units 10, 20, which are distinguished by the count variable j, must be broken down into their x and y components.
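A minimal sketch of this velocity determination, formulated as a least-squares problem v r,j = u j · v over the line-of-sight unit vectors u j of the radar units (geometry and speeds are illustrative assumptions; with exactly two radar units it reduces to the intersection construction described above):

```python
# Sketch: determining the vector speed (vx, vy) of a resolution cell from the radial
# velocities measured by at least two spatially separated radar units.
import numpy as np

radar_pos = np.array([[0.0, 0.0],       # assumed position of radar unit 10
                      [2.0, 0.0]])      # assumed position of radar unit 20 (base length 2 m)
cell = np.array([5.0, 8.0])             # coordinates of the resolution cell under consideration
v_true = np.array([3.0, -1.5])          # "true" vector speed used to simulate measurements

# Unit line-of-sight vectors from each radar unit to the cell and the resulting
# radial velocity components v_r,j = u_j . v.
u = cell - radar_pos
u /= np.linalg.norm(u, axis=1, keepdims=True)
v_r = u @ v_true                         # simulated radial-velocity measurements

# Least-squares solution of u @ v = v_r for the vector speed v = (vx, vy).
v_est, *_ = np.linalg.lstsq(u, v_r, rcond=None)
print(v_est)                             # recovers approximately (3.0, -1.5)
```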
  • FIG. 14 shows an example of measurement data from two radar units 10, 20 of radar system 100 of the exemplary embodiment from FIG. 5, which were co-registered in a common overall coordinate system.
  • an area (the common visual area) FoV is shown hatched to the upper right with wide shading.
  • the radar units 10, 20 each face one another by 45°.
  • a comparatively low intensity which is, for example, in the order of magnitude of the general noise level, is measured.
  • in the common field of view FoV, higher intensities than in the individual detection areas FoV10, FoV20 are measured due to the increased noise power; these are, for example, on the order of twice the general noise level (depending on how many radar units detect the common field of view FoV).
  • regions h are shown with contour lines in FIG. 14 .
  • in these regions, even higher intensities are measured compared to the intensities measured in the rest of the field of view FoV, which are, for example, of the order of more than 10 dB, more than 15 dB, more than 20 dB or more than 30 dB above the noise level of the common field of view FoV.
  • the relative reference of the global coordinate system to each individual radar unit can be easily established, with the coordinate origin being able to be chosen arbitrarily. It is particularly expedient to define the origin, for example, in the coordinate origin of a radar unit or on an axis in the middle between the radar units used.
  • the area around the vehicle that can be viewed simultaneously by all radar units involved is primarily suitable for the co-registered image to be generated.
  • a Cartesian discretization of this area that is equidistant per dimension is particularly expedient. Taking into account the distance-dependent size of the range-angle resolution cells, a more complex discretization can also be implemented, with which overall computing effort can be saved by reducing the number of cells.
  • the range-angle cells of all radar units involved can be assigned to a cell of the global coordinate system, for example by a two-dimensional interpolation.
  • a linear interpolation is particularly expedient here due to the low computational effort.
  • the range-angle data of the individual radar units can, for example, be superimposed in terms of magnitude in the global coordinate system. If the radar units are operated coherently, a coherent superimposition in terms of magnitude and phase is possible, but strong interference patterns arise with small radar units and due to the large base line.
  • each resolution cell of the global coordinate system in the common field of view can be assigned an amplitude and a radial velocity curve for each radar unit used.
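A minimal sketch of this co-registration step, assuming synthetic range-angle data and an arbitrarily chosen geometry: each Cartesian grid cell is mapped back to range and azimuth relative to each radar unit and the corresponding magnitude is looked up (nearest neighbour here instead of the linear interpolation suggested above, purely for brevity):

```python
# Sketch of the co-registration step: magnitudes of several range-angle maps are
# superimposed on a common Cartesian grid (geometry and data are illustrative).
import numpy as np

def coregister(range_angle_maps, radar_pos, radar_yaw, r_axis, a_axis, x_axis, y_axis):
    """Superimpose the magnitudes of several range-angle maps on a Cartesian grid."""
    xx, yy = np.meshgrid(x_axis, y_axis)
    total = np.zeros_like(xx)
    for m, pos, yaw in zip(range_angle_maps, radar_pos, radar_yaw):
        dx, dy = xx - pos[0], yy - pos[1]
        rng = np.hypot(dx, dy)                          # range to each grid cell
        ang = np.arctan2(dy, dx) - yaw                  # azimuth relative to boresight
        ri = np.clip(np.searchsorted(r_axis, rng), 0, len(r_axis) - 1)
        ai = np.clip(np.searchsorted(a_axis, ang), 0, len(a_axis) - 1)
        total += np.abs(m[ri, ai])                      # magnitude superposition
    return total

# Illustrative use with random "range-angle" data for two radar units.
r_axis = np.linspace(0, 50, 256)
a_axis = np.linspace(-np.pi / 3, np.pi / 3, 128)
maps = [np.random.rand(256, 128) for _ in range(2)]
grid = coregister(maps, [(0, 0), (2, 0)], [np.pi / 2, np.pi / 2],
                  r_axis, a_axis, np.linspace(-10, 10, 200), np.linspace(0, 30, 200))
print(grid.shape)
```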
  • for step VS4, it is necessary for the three-dimensional range-Doppler-angle data records to be transmitted to a central processing unit. If there is sufficient computing capacity, this can be integrated in a "master" radar unit, for example.
  • in Fig. 5, an embodiment is shown in which two radar units 10, 20 of a radar system 100 are mounted on a vehicle.
  • the radar units 10, 20 are arranged at a distance from one another by the base length b.
  • the radar units 10, 20 each cover a visual range FoV10, FoV20.
  • the common field of view FoV is covered by both radar units 10, 20.
  • the measurement data from the radar units 10, 20 are co-registered and discretized in an overall coordinate system.
  • the radar system 100 which is mounted on a vehicle in FIG. 5, is also moving with a vector speed (also initially unknown) which can be determined using the method described above for static scenes.
  • Fig. 6 shows a schematic plan view of a vehicle in which a radar system 100 is arranged that comprises seven radar units 10, 20, 30, 40, 50, 60, 70, with the individual fields of view of the radar units 10, 20, 30, 40, 50, 60, 70 overlapping in the viewing areas FoV1, FoV2, FoV3, FoV4, FoV5 and FoV6.
  • the radar units 10, 20, 30, 40, 50, 70 shown in FIG. 6 are located at positions typical for the automotive sector. In the front area of the vehicle there is, for example, a long or full range radar unit 70 with a larger aperture than the other radar units 10, 20, 30, 40, 50 and 60 and a narrower field of view FoV70. Alternatively or additionally, the radar unit 70 can also be attached in the rear area.
  • FIG. 6 also shows six short-range radar units 10, 20, 30, 40, 50 and 60, each with a comparatively wide field of view FoV10, FoV20, FoV30, FoV40, FoV50, FoV60.
  • the fields of view FoV10, FoV20, FoV30, FoV40, FoV50, FoV60 of the radar units 10, 20, 30, 40, 50 and 60 have, for example, an opening angle of 120° in the horizontal plane.
  • the radar units 10, 20, 30, 40, 50 and 60 are arranged around the vehicle in such a way that the radar units 10, 20, 30, 40 are located in the corners of the vehicle and the radar units 50, 60 are arranged at least essentially centrally on the sides of the vehicle.
  • the radar units 10, 20, 30, 40, 50, 60 and 70 are aligned and arranged in such a way that the area around the vehicle can be recorded as completely as possible.
  • the overlapping fields of view FoV1, FoV2, FoV3, FoV4, FoV5, FoV6 are hatched in FIG. 6.
  • the angular resolution can be improved due to the direction of movement of the vehicle while driving.
  • FIG. 7 shows a schematic sequence of a first exemplary embodiment of the method according to the invention with method steps VS1 to VS10.
  • Method step VS1 is first carried out in the radar units 10, 20 of the radar system 100, in which the common field of view FoV is detected by the radar units 10, 20.
  • the radar units 10, 20 can measure the distance, the radial speed and the azimuth angle of reflecting objects that are in the common field of view.
  • the radar units 10, 20 are preferably spaced apart by a relatively large base length b. For example, base lengths of about 1-5 m are possible for automobiles.
  • the radar units 10, 20 are arranged in specific installation positions that are known, for example, to within a range of a few centimeters.
  • the orientation of the radar units 10, 20 is preferably also known, since the orientation of the radar units 10, 20 can also have a greater effect on the image quality as the object distance increases.
  • Parameters of the transmission signals of the radar units 10, 20 can, for example, be dimensioned in such a way that the common field of view FoV can be detected with a maximum measurement distance that can be reached by the individual radar units 10, 20.
  • the parameters of the transmission signals are set, for example, in such a way that the speeds typical of vehicles in the automobile sector can be determined unambiguously and with sufficient accuracy.
  • this can be achieved, for example, with a sufficiently short chirp repetition interval T and a sufficiently long total measurement time I·T.
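A rough plausibility check (assumed values only) of how the chirp repetition interval T and the total measurement time I·T determine the unambiguous velocity range and the velocity resolution:

```python
# Rough check (assumed values) of unambiguous speed range and speed resolution
# for a chirp-sequence modulation with repetition interval T and I chirps.
c, f_c = 3e8, 77e9
lam = c / f_c

T = 50e-6            # assumed chirp repetition interval
I = 256              # assumed number of chirps per sequence

v_unamb = lam / (4 * T)          # maximum unambiguous radial speed (+/-)
v_res = lam / (2 * I * T)        # radial velocity resolution

print(f"unambiguous speed ~ +/-{v_unamb:.1f} m/s, resolution ~ {v_res:.2f} m/s")
```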
  • time, frequency or code multiplex methods can be used, for example, in order to avoid disturbing interference between the transmission signals of the individual radar units 10, 20.
  • a frequency-division multiplex method for example, can be particularly expedient here, since the transmission signals can be transmitted offset from one another by only a few megahertz. In particular, this can mean that disruptive interference in the baseband can be avoided.
  • the radar units 10, 20 can be used to generate beat signals in baseband for each receiving channel, as shown in equation (6).
  • the beat signals can then be further processed as digital measurement data (as a digital data stream) after analog-to-digital conversion.
  • the measurement data can be further processed in a computing module of the respective radar unit 10, 20, for example.
  • the measurement data of the individual radar units 10, 20 are already transmitted to a (central) computing unit in this method step VS1, in order to be processed there collectively.
  • beat spectra are generated from the digital beat signals.
  • the digital beat signals are transformed into the frequency representation by a Fourier transformation ("Range FFT"), so that beat spectra with beat frequencies, such as for example according to equation (7), result.
  • the processing can either directly in a computing module of the respective radar unit 10, 20 or in a (central) computing unit.
  • range-Doppler-angle data sets, i.e. distance, radial-speed and angle data sets, are generated for each radar unit 10, 20.
  • the entire sequence from equation (8) can first be converted into the so-called range Doppler representation using a two-dimensional Fourier transformation.
  • Any method of angle estimation based on the phases along the receiving antennas can then be used with Equation (16). To do this, it must be ensured that the measurement data from the radar units are processed taking into account the initially created calibration matrix.
  • the delay-and-sum beamformer is advantageous, for example, which can be evaluated directly by a Fourier transformation for a uniform linear array (for example, as shown in FIG. 3).
  • Method step VS3 can also take place either in the respective radar unit 10, 20 or in a (central) computing unit 90.
  • in a method step VS4, the individual range-Doppler-angle data records generated in VS3 are co-registered in a common coordinate system (overall coordinate system).
  • the data sets of the individual radar units 10, 20 are co-registered in a common, "global" coordinate system, so that for at least one object in the common field of view at least two radial speeds recorded from different perspectives are available.
  • the relative reference of the overall coordinate system to each individual radar unit can be established, with the coordinate origin being able to be chosen arbitrarily.
  • the measurement data of the radar units 10, 20 co-registered in the overall coordinate system can be discretized, it being possible in particular to use an equidistant, Cartesian discretization for each dimension of the measurement data.
  • More complex discretizations, for example with distance-dependent variation of the dimensions of the resolution cells, are also conceivable.
  • since the range resolution of the radar units is at least approximately independent of the range, the resolution cells can have a constant size in the distance direction.
  • since the angular resolution is also at least approximately constant, this means in Cartesian space that the resolution cells in the azimuth direction can have larger dimensions at greater distances, since here the resolution decreases with increasing distance.
  • the resolution cells in the range-angle area of all radar units 10, 20 involved can be assigned to a resolution cell of the overall coordinate system, for example by a two-dimensional interpolation.
  • a linear interpolation can be particularly advantageous due to the low computational effort.
  • the range-angle data of the individual radar units can, for example, be superimposed in terms of magnitude in the overall coordinate system. If the radar units are operated coherently, a coherent superimposition in terms of magnitude and phase is possible. Through the superposition, each resolution cell of the overall coordinate system in the common field of view can be assigned an amplitude and a radial velocity curve for each radar unit used.
  • for this, it may be necessary for the three-dimensional range-Doppler-angle data sets to be transmitted to a (central) processing unit 90.
  • method step VS4 it is also conceivable for method step VS4 to be carried out in a computing module of a radar unit 10, which acts as a master radar unit, for example.
  • a vector speed of at least one resolution cell can be determined in method step VS5.
  • in FIG. 15, a detailed view of the speed components vx, vy of the determined vector speeds of a sub-area is shown as an example.
  • Vector velocities are based on the maximum radial velocity of the section.
  • in FIG. 15, there is an example of a speed range v1 with a low speed (between approximately 0 and -5 m/s) and a further speed range v2 with a higher speed (between approximately 0 and +5 m/s).
  • Method step VS6 includes the detection of objects (target detection).
  • object detection at least one object is detected in the common field of view.
  • the object coordinates of the detected object in the overall coordinate system can be stored in a data structure, for example a target list, together with the vector speed determined at these object coordinates.
  • the number of objects entered in the data structure (target list) is given as N.
  • the search for local amplitude maxima in the common field of view is particularly advantageous for object detection. Suitable CFAR adaptations or constant power threshold values can be used for this. Since this object detection is two-dimensional in the common coordinate system, it is applied indirectly to the range-angle data from the radar units.
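A minimal sketch of such an object detection with a constant power threshold and a local-maximum test on the co-registered grid (threshold, grid and velocity maps are illustrative; a CFAR adaptation could be used instead of the constant threshold):

```python
# Sketch of object detection in the co-registered image: local amplitude maxima above
# a constant threshold are entered into a target list together with the vector speed
# determined at those coordinates (all data illustrative).
import numpy as np

def detect_targets(image, vx_map, vy_map, threshold):
    targets = []
    for ix in range(1, image.shape[0] - 1):
        for iy in range(1, image.shape[1] - 1):
            patch = image[ix - 1:ix + 2, iy - 1:iy + 2]
            if image[ix, iy] >= threshold and image[ix, iy] == patch.max():
                targets.append({"x": ix, "y": iy,
                                "v": (vx_map[ix, iy], vy_map[ix, iy])})
    return targets

img = np.random.rand(64, 64)
img[20, 30] = 5.0                          # illustrative strong reflector
vx = np.full_like(img, 3.0)
vy = np.full_like(img, -1.5)
print(detect_targets(img, vx, vy, threshold=3.0))
```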
  • the stationary targets can be detected using a RANdom Sample Consensus Algorithm (RANSAC).
  • RANSAC RANdom Sample Consensus Algorithm
  • the inverse SAR reconstruction can be carried out (separately) for each detected object (target) entered in the target list, i.e. N times.
  • the reconstruction is carried out for a spatial area (reconstruction area) located around the object, it being assumed that the area under consideration lies completely in the common field of view of all radar units used.
  • the area can be implemented as rectangular, elliptical or circular, for example.
  • the coordinates x and y then each run over an area around the object position, where x n and y n are the coordinates of the nth object in the overall coordinate system.
  • each resolution cell moves with the target velocity v n relative to the zero point of the overall coordinate system.
  • the inverse SAR reconstruction can be implemented using one of the following algorithms, for example: range-Doppler, omega-K, phase-shift migration, holography or extended chirp scaling.
  • a reconstruction by means of a matched filter is particularly advantageous for the discretized common field of view.
  • the hypothetical propagation time used here is composed, for radar unit j and the i-th chirp, of the propagation time from the respective transmitting antenna m to the resolution cell under consideration and of the return path to the respective receiving antenna k at the corresponding point in time (cf. (3)). This takes into account the fact that a target moves on with the vectorial speed in the course of the slow time.
  • the resolution-cell-dependent, hypothetical beat frequency f B,hyp,j,i,m,k (x,y) associated with this signal hypothesis follows according to (7), where v r,j (x,y) indicates the magnitude of the radial velocity at the resolution cell related to radar unit j.
  • the reconstructed image of the nth target is then created as a probability density function, which is determined by the summation of the signal components of all measurement paths (i.e. for all antenna combinations) for all coordinates of the section under consideration, which can be rectangular, elliptical or circular, for example.
  • the hypothesis test is interpreted here as a complex conjugate multiplication of the hypothetical signal phase with the actually measured phase at the corresponding hypothetical beat frequency.
  • finally, the magnitude is formed; S B denotes the Fourier transform of the beat signal.
  • a chirp-dependent window function is given by w(i).
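The matched-filter reconstruction described above can be illustrated with the following highly simplified sketch (one radar unit, one antenna, no window function, intra-chirp Doppler neglected, all parameters assumed): for every resolution cell of the sub-area, the hypothetical beat-frequency bin and phase are formed under the assumption that the cell moves with the previously determined vector speed, and the measured beat spectra of all chirps are summed coherently before the magnitude is taken.

```python
# Highly simplified matched-filter (backprojection) sketch of the inverse SAR
# reconstruction; parameters, geometry and target speed are illustrative assumptions.
import numpy as np

c, f_c = 3e8, 77e9                       # propagation speed, assumed carrier frequency
B, T = 300e6, 100e-6                     # assumed chirp bandwidth and chirp duration
I, N = 256, 512                          # chirps per sequence, samples per chirp
fs = N / T
lam = c / f_c
t = np.arange(N) / fs

radar = np.array([0.0, 0.0])             # radar unit position
target0 = np.array([0.0, 20.0])          # initial target position
v = np.array([10.0, 0.0])                # previously determined vector speed of the target

def beat_signal(pos):
    """Idealized beat signal of a point target at position pos (single chirp)."""
    d = np.linalg.norm(pos - radar)
    f_b = 2 * d / c * B / T              # distance-dependent beat frequency
    phase = 4 * np.pi * d / lam          # two-way carrier phase
    return np.exp(1j * (2 * np.pi * f_b * t + phase))

# "Measured" data: one beat spectrum per chirp; the target moves on between chirps.
S = np.array([np.fft.fft(beat_signal(target0 + v * i * T)) for i in range(I)])

# Reconstruction grid (sub-area) around the detected object.
xs = np.linspace(-1.0, 1.0, 21)
ys = np.linspace(18.0, 22.0, 9)
img = np.zeros((len(xs), len(ys)))
for ix, x in enumerate(xs):
    for iy, y in enumerate(ys):
        acc = 0.0 + 0.0j
        for i in range(I):
            pos = np.array([x, y]) + v * i * T                 # hypothetical cell position at chirp i
            d = np.linalg.norm(pos - radar)
            k = int(round(2 * d / c * B / T / (fs / N))) % N   # hypothetical beat-frequency bin
            acc += S[i, k] * np.exp(-1j * 4 * np.pi * d / lam) # counter-rotate hypothetical phase
        img[ix, iy] = abs(acc)

ix, iy = np.unravel_index(np.argmax(img), img.shape)
print(f"image peak near (x, y) = ({xs[ix]:.1f}, {ys[iy]:.1f}) m")   # close to (0.0, 20.0)
```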
  • the data from the individual radar units must be processed separately due to the lack of coherence with each other. If the radar units are operated coherently, an overall evaluation of all radar units can be carried out, but in the case of small radar units and large base lines, this can result in strong interference patterns.
  • FIG. 16 shows the reconstruction results of the scenery shown in FIG. 14 based on the vector speed determined at the amplitude maximum.
  • Fig. 16 shows a partial radar image for the two radar units 10, 20.
  • two partial radar images b 1 (x,y) and b 2 (x,y) can be created from different perspectives. If the radar units are operated coherently, a third partial radar image b 1,2 (x,y) can also be created on the basis of the reciprocal cross path.
  • in Fig. 16 there is an area I in which a low intensity (which is, for example, of the order of magnitude of (twice) the general noise level) is measured, and an area h with contour lines in which a higher intensity (which is, for example, approximately of the order of more than 30 dB, more than 35 dB or more than 40 dB above the noise level) is imaged.
  • a check of the target list objects calculated with increased angular resolution and sharper contours can optionally be carried out.
  • the errors can be determined in the event that the vector speed used in method step VS7 was determined with errors.
  • Errors in determining the vector velocity can result in systematic phase errors in the inverse SAR reconstruction in Equation (28). This can result in all affected objects being reconstructed with a position error. However, the error can be different due to the different radial speeds for the reconstruction of each radar unit 10, 20.
  • the reconstructed images of the different radar units 10, 20 can be compared. For example, this is possible via renewed CFAR-based target detection. In this case, it is examined whether the deviation of the target positions in the different images is greater than a previously specified threshold value.
  • the differences between the different images can be evaluated. If the difference amplitudes are above a previously defined threshold value, this indicates an incorrect position of the targets.
  • the position errors can be reduced by an iterative correction in method step VS9 until correct or sufficiently accurate vector velocities have been found and the target positions have been determined (sufficiently) accurately.
  • the vector velocity can be determined incorrectly in particular when two or more objects, which do not necessarily have different vector velocities, cannot be resolved in the range-angle dimension in the common field of view. In this case a common amplitude maximum for all objects involved can arise at the centroid of the objects. This maximum is then registered as a detected object in method step VS6, and the velocity at these supposed target coordinates is used in the ISAR reconstruction.
  • the velocity recorded in the target list then has only a small error.
  • this error arises from the fact that the determination of the vector velocity in method step VS5 is carried out by the geometric evaluation at a centroid that has been shifted by the error and is used as the target coordinates.
  • FIG. 17 shows a schematic representation of two objects (point targets) O1, O2 which are only a short distance apart.
  • the object coordinates are, for example, (x1, y1) = (…, 6.8) m and (x2, y2) = (0.5, …) m.
  • the respective actual vector velocities of the objects O1, O2 are, for example, …
  • the error-prone vector velocity is determined at the amplitude maximum.
  • as shown schematically in FIG. 18, the erroneously determined vector velocity leads to a position error after the inverse SAR reconstruction, which depends on which radar unit 10, 20 is evaluated.
  • Method step VS9: if it is determined in method step VS8 that, for the n-th reconstruction pass under consideration (method step VS7), one or more objects were reconstructed with an incorrect position, the vector velocity on which the reconstruction is based can be changed in a separate correction step (method step VS9) and the reconstruction can be repeated in method step VS7 for the corresponding n-th target-list entry with the changed (adapted) vector velocity (a schematic sketch of this correction loop is given after this list).
  • Correction step VS9 can be repeated until a previously defined error threshold value for method step VS8 is no longer exceeded.
  • the vector velocity in the n-th target-list entry can then be replaced by the changed (adapted) vector velocity used in the last reconstruction.
  • the correction velocity required for correction step VS9 can, for example, be determined analytically from the radar-unit-specific position deviation relative to the original target-list position.
  • a SAR autofocus algorithm, such as a phase-gradient autofocus method, can also be used as correction step VS9.
  • an overall radar image can be generated from all partial radar images.
  • a total of N partial radar images are generated in the common field of view of the radar units 10, 20 used.
  • methods for cluster analysis, such as density-based spatial clustering of applications with noise (DBSCAN) or connected-component labeling (CCL), can be applied.
  • an overall radar image of the common field of view can be generated, for example, in such a way that the image generated by the co-registration in method step VS4 is selected as the lowest layer.
  • the N reconstruction images can then be inserted in the corresponding partial areas by means of suitable arithmetic operations, such as addition, multiplication or substitution (a minimal fusion sketch is given after this list).
  • the reconstructed images function as a kind of template, whereby a stronger contour sharpening can be achieved.
  • FIG. 8 shows a flow chart of a second exemplary embodiment of the method according to the invention.
  • the inverse SAR reconstruction is carried out in method step VS7 with the vector velocities for at least some of the resolution cells, in particular for all resolution cells.
  • Method steps VS1 to VS5 are carried out as previously described with reference to FIG. In the second exemplary embodiment, however, method step VS6 (object detection) is not carried out.
  • each resolution cell of the common field of view is processed in one pass with the vector velocity determined for the respective resolution cell in method step VS5.
  • the error correction carried out in method steps VS8 and VS9 can be applied to the entire common field of view; in this case, for example, areas are determined in which the vector velocity used for the reconstruction was not correctly determined.
  • since the overall radar image generated in method step VS10 is now no longer composed of N individual image sections (the partial radar images), the computational complexity can be reduced at this point.
  • the final fusion of the individual overall images of the various radar units takes place as described above.
  • since the target detection (implemented with a CFAR method, for example) and an N-fold run through method steps VS7 to VS9 are thus avoided, the computing complexity of the overall method can be reduced.
  • however, the quality of the reconstruction results can decrease as a result of the resolution-cell-wise (pixel-by-pixel) processing, in contrast to target-specific processing with constant vector velocities.
  • FIG. 9 shows a flow chart of a third exemplary embodiment of the method according to the invention.
  • an inverse SAR reconstruction is carried out in method step VS7 using only the measurement data from a radar unit 10 and without error correction (method step VS8 and VS9).
  • method steps VS1 to VS6 are carried out unchanged.
  • only the measurement data from a radar unit 10 are used for the inverse SAR reconstruction in method step VS7.
  • the correction steps VS8 and VS9 are omitted in the third exemplary embodiment. Since in method step VS7 only one reconstruction result is calculated for each detected object O (in the first and second exemplary embodiments, at least two reconstruction results are calculated in method step VS7 for each detected object O), errors cannot be determined (no error analysis is possible) and a correction of erroneously determined vector velocities therefore cannot be carried out either.
  • the use of an autofocus algorithm such as phase-gradient autofocus is nevertheless possible, since it does not require at least two reconstruction results.
  • the N reconstruction results of the detected objects O can be entered in an overall radar image in a method step VS10, with the fusion of the reconstructions (partial radar images) of other (additional) radar units 20 for the respective N detected objects being omitted.
  • the computing effort is reduced, particularly in method step VS7, because on the one hand only one data set from a radar unit 10 is processed in method step VS7 and on the other hand no iterative correction (method steps VS8 and VS9) is carried out.
  • however, the resulting overall radar image cannot benefit from the scenery being captured from the different perspectives of the different radar units 10, 20, and reconstructed objects cannot be confirmed, refuted or corrected by means of other images.
  • FIG. 10 shows the schematic sequence of a fourth exemplary embodiment of the method according to the invention.
  • objects are already detected in method step VS2, in the range Doppler diagram.
  • the method steps VS1 to VS4 are carried out as described for the first to third exemplary embodiments.
  • a two-dimensional or three-dimensional CFAR detection can be applied to the three-dimensional range-Doppler-angle data sets from method step VS3 (a minimal 2-D CFAR sketch is given after this list).
  • the fourth exemplary embodiment makes it possible to determine several vector velocities for one range-angle resolution cell.
  • the number of vector velocities that can be evaluated is determined by the position and the number of radar units used.
  • the created target list consequently consists of N+ ≥ N detections, which can then be further processed in method steps VS7 to VS10, applied unchanged as in the first to third exemplary embodiments.
  • the fourth exemplary embodiment is particularly advantageous when a larger number of radar units 10, 20 is used, since this makes it easier to assign the different radial velocity curves.
  • although the complexity and the computing effort increase slightly, the velocity resolution of the individual radar units can be used at least partially, which improves the image quality overall.
  • the inverse SAR reconstruction is carried out in method step VS7 with an extended measurement time and, for example, with an at least partially separate velocity measurement.
  • the inverse SAR reconstruction in method step VS7 and the velocity measurement can be carried out on the basis of measurement data, with the measurement data being composed of at least partially different radar measurements (for example capturing the common field of view with a predetermined number of chirps).
  • the (at least two) radar units 10, 20 involved can each transmit a sequence of 1024 chirps that are equidistant in time.
  • the use of a smaller number of consecutive chirps, for example 256, would be possible for determining the vector velocities and detecting a target in method steps VS3 to VS6.
  • all 1024 chirps, and thus 1024 Fourier-transformed digital beat signals, can be used for the inverse SAR reconstruction in method step VS7 and possibly in VS8 to VS10.
  • the recorded scenery can thus be observed over a longer measurement time.
  • provided that the spatial sampling criterion is met, only every second or third chirp can be used for the inverse SAR reconstruction, for example.
  • limiting the chirps used in method steps VS3 to VS6 and thinning out the measurement data for method steps VS7 to VS10 does not increase the computational effort particularly significantly, but significantly longer measurement times (in the previously described example by a factor of 4 or by a larger factor) can be realized (a schematic sketch of this chirp splitting is given after this list).
  • the lengths of the trajectories traced by the target movement during the measurement increase as a result, so that larger apertures can be synthesized and an improvement in the resolution (in the example described above, by up to a factor of 4) can be achieved.
  • In Fig. 12, the sequence of a sixth exemplary embodiment is shown.
  • the radar units 10, 20 are operated coherently, so that coherent processing, in particular also the use of the cross paths of the radar units 10, 20, is made possible in the inverse SAR reconstruction in method step VS7-K.
  • the measurement data of the cross paths of the radar units are also co-registered in the overall coordinate system (method step VS4-K).
  • the two cross-path spectra can be fused in a method step VS-KF, which increases the signal-to-noise ratio for the cross path (a minimal fusion sketch is given after this list).
  • the processing can now be expanded such that the reconstruction of the merged cross path can also be taken into account in the inverse SAR reconstruction in method step VS7-K and in the optional error analysis in method step VS8 and VS9.
  • this results in a gain in perspective.
  • in addition, the signal-to-noise ratio increases, as a result of which improved evaluation of the overall radar image generated in method step VS10 is made possible.
  • ADC analog-to-digital converter
  • b baseline length
  • FoV joint field of view of at least two radar units
  • FoV10, FoV20 individual fields of view of the radar units
  • FoV1, FoV2, FoV3 joint fields of view of several radar units
  • TX transmit antenna; transmit antenna array
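
The following is a minimal, single-unit, single-target Python sketch of the hypothesis-test (backprojection-style) inverse SAR reconstruction outlined above: for every pixel of a small grid, the hypothetical round-trip delay and beat frequency of a scatterer moving with the known vector velocity are computed per chirp, the measured range spectrum is sampled at that beat frequency, multiplied by the conjugate of the hypothetical carrier phase, summed over the chirps and converted to a magnitude image. All numerical parameters (carrier frequency, bandwidth, chirp count, target position and velocity) are assumptions chosen for illustration and are not taken from the patent; the patent's formulation additionally sums over all antenna combinations and radar units.

```python
import numpy as np

# --- assumed FMCW parameters (illustrative only, not from the patent) ---
c     = 3e8
f0    = 77e9            # carrier frequency
B     = 150e6           # sweep bandwidth
T_c   = 50e-6           # chirp duration
N_s   = 256             # fast-time samples per chirp
N_i   = 64              # number of chirps (slow time)
slope = B / T_c
fs    = N_s / T_c       # fast-time sampling rate
t_fast = np.arange(N_s) / fs
t_slow = np.arange(N_i) * T_c

# single point target: position at t = 0 and vector velocity (assumed known here)
p0 = np.array([0.5, 6.8])           # m
v  = np.array([8.0, 3.0])           # m/s

# one monostatic radar unit at the origin (one TX/RX pair for brevity)
pos_radar = np.array([0.0, 0.0])

# --- simulate the beat signal of each chirp (stop-and-go approximation) ---
beat = np.zeros((N_i, N_s), dtype=complex)
for i in range(N_i):
    p_i = p0 + v * t_slow[i]                       # target position at chirp i
    tau = 2 * np.linalg.norm(p_i - pos_radar) / c  # round-trip delay
    f_b = slope * tau                              # beat frequency of this chirp
    beat[i] = np.exp(1j * 2 * np.pi * (f_b * t_fast + f0 * tau))

# fast-time window and FFT -> one range spectrum per chirp
# (the chirp-dependent slow-time window w(i) of the text is omitted for brevity)
w = np.hanning(N_s)
spec = np.fft.fft(beat * w, axis=1)

# --- hypothesis-test reconstruction on a small grid around the target ---
xs = np.linspace(0.0, 1.0, 41)
ys = np.linspace(6.3, 7.3, 41)
img = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        acc = 0.0 + 0.0j
        for i in range(N_i):
            # hypothetical position of a scatterer that starts at (x, y)
            # and moves with the (known) vector velocity v
            p_hyp   = np.array([x, y]) + v * t_slow[i]
            tau_hyp = 2 * np.linalg.norm(p_hyp - pos_radar) / c
            f_hyp   = slope * tau_hyp                 # hypothetical beat frequency
            k       = int(round(f_hyp / (fs / N_s)))  # nearest FFT bin
            # measured spectrum value times conjugate of the hypothetical phase
            acc += spec[i, k] * np.exp(-1j * 2 * np.pi * f0 * tau_hyp)
        img[iy, ix] = np.abs(acc)                     # magnitude forms the image

peak = np.unravel_index(np.argmax(img), img.shape)
print("image peak at x=%.2f m, y=%.2f m" % (xs[peak[1]], ys[peak[0]]))
```

At the true pixel the hypothetical and measured carrier phases cancel for every chirp, so the contributions add coherently; elsewhere they decohere, which is the essence of the hypothesis test described above.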
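
For the plausibility check of method step VS8, the sketch below compares the reconstructions of two radar units on a common grid, both via the deviation of their amplitude maxima and via a normalised difference image. The threshold values and function names are assumptions of this sketch; the patent only specifies that a previously defined threshold value is compared against.

```python
import numpy as np

def position_deviation(img1, img2, xs, ys, max_dev_m=0.5):
    """Compare the reconstructions of two radar units on a common (xs, ys) grid.

    Returns the distance between the amplitude maxima of both images and a flag
    indicating whether that deviation exceeds the threshold (here in metres).
    """
    def peak_xy(img):
        iy, ix = np.unravel_index(np.argmax(img), img.shape)
        return np.array([xs[ix], ys[iy]])

    p1, p2 = peak_xy(img1), peak_xy(img2)
    deviation = np.linalg.norm(p1 - p2)
    return deviation, deviation > max_dev_m

def difference_check(img1, img2, threshold_db=6.0):
    """Alternative check: evaluate the normalised difference image.

    If the strongest difference amplitude lies less than `threshold_db` below
    the image maximum, the target positions are considered inconsistent.
    """
    n1 = img1 / np.max(img1)
    n2 = img2 / np.max(img2)
    diff_db = 20 * np.log10(np.max(np.abs(n1 - n2)) + 1e-12)
    return diff_db, diff_db > -threshold_db
```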
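
The iterative correction of method step VS9 can be sketched as a generic loop that adapts the vector velocity until the radar-unit-specific position deviation falls below an error threshold. The callables, step size and thresholds below are placeholders (assumptions); the patent leaves open how the velocity update is obtained, naming an analytic derivation from the position deviation or a SAR autofocus as options.

```python
import numpy as np

def iterative_velocity_correction(reconstruct, measure_deviation, v_init,
                                  max_dev_m=0.1, max_iter=10, step=0.5):
    """Generic correction loop in the spirit of method steps VS8/VS9.

    reconstruct(v)          -> per-unit target positions obtained with the
                               candidate vector velocity v (method step VS7)
    measure_deviation(pos)  -> (deviation in m, velocity update in m/s), e.g.
                               derived analytically from the radar-unit-specific
                               position offsets (method step VS8)
    """
    v = np.asarray(v_init, dtype=float)
    deviation = np.inf
    for _ in range(max_iter):
        positions = reconstruct(v)                     # VS7 with current velocity
        deviation, dv = measure_deviation(positions)   # VS8 error analysis
        if deviation <= max_dev_m:                     # error threshold reached
            break
        v = v + step * np.asarray(dv)                  # VS9: adapt vector velocity
    return v, deviation
```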
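
For method step VS10, the following sketch inserts N partial reconstruction images into the co-registered base image by addition, multiplication or substitution and groups above-threshold pixels with connected-component labeling (scipy.ndimage.label); DBSCAN would be an alternative clustering choice. The patch bookkeeping and the threshold are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def fuse_partial_images(base_image, patches, mode="substitute"):
    """Insert N partial reconstruction images into the co-registered base image.

    `patches` is a list of (slice_y, slice_x, image) tuples describing where each
    reconstruction belongs in the overall grid.  The fusion operations follow the
    options named in the text (addition, multiplication, substitution).
    """
    overall = base_image.astype(float).copy()
    for sy, sx, patch in patches:
        if mode == "add":
            overall[sy, sx] += patch
        elif mode == "multiply":
            overall[sy, sx] *= patch
        else:                        # substitution: the patch acts as a template
            overall[sy, sx] = patch
    return overall

def label_detections(image, threshold):
    """Group above-threshold pixels into objects via connected-component labeling."""
    mask = image > threshold
    labels, n_objects = ndimage.label(mask)   # CCL; DBSCAN would be an alternative
    return labels, n_objects
```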
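
A two-dimensional cell-averaging CFAR, as it could be applied to a range-Doppler map (or, extended to three dimensions, to the range-Doppler-angle data of the fourth exemplary embodiment), can be sketched as follows. Guard and training band sizes and the threshold factor are illustrative assumptions; the patent only states that a CFAR method can be used.

```python
import numpy as np

def ca_cfar_2d(power_map, guard=2, train=8, scale=6.0):
    """Minimal 2-D cell-averaging CFAR over a range-Doppler power map."""
    power_map = np.asarray(power_map, dtype=float)
    rows, cols = power_map.shape
    detections = np.zeros_like(power_map, dtype=bool)
    half = guard + train
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = power_map[r - half:r + half + 1, c - half:c + half + 1].copy()
            # exclude guard cells and the cell under test from the noise estimate
            window[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
            noise = np.nanmean(window)
            detections[r, c] = power_map[r, c] > scale * noise
    return detections
```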
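
The chirp bookkeeping of the extended-measurement-time variant (for example 256 consecutive chirps for velocity estimation and detection, and a thinned-out subset of all 1024 chirps for the inverse SAR reconstruction) reduces to simple array slicing. The decimation factor and array shapes below are assumptions for illustration and must respect the spatial sampling criterion mentioned above.

```python
import numpy as np

N_TOTAL  = 1024          # chirps transmitted per radar unit
N_VEL    = 256           # consecutive chirps used for VS3-VS6
DECIMATE = 2             # keep every 2nd chirp for VS7, if spatial sampling allows

beat_signals = np.random.randn(N_TOTAL, 512)    # placeholder measurement data

chirps_velocity = beat_signals[:N_VEL]          # VS3-VS6: vector velocity + detection
chirps_isar     = beat_signals[::DECIMATE]      # VS7: longer synthetic aperture

print(chirps_velocity.shape, chirps_isar.shape) # (256, 512) (512, 512)
```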
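
For the coherent sixth exemplary embodiment, the fusion of the two reciprocal cross-path spectra in step VS-KF can be sketched as a coherent average, which raises the signal-to-noise ratio of the cross path. The simple averaging and the synthetic test data are assumptions of this sketch.

```python
import numpy as np

def fuse_cross_paths(spec_12, spec_21):
    """Fuse the two reciprocal cross-path spectra (unit 1 -> unit 2 and
    unit 2 -> unit 1) of coherently operated radar units.

    With ideal reciprocity both spectra carry the same target information, so a
    coherent average raises the signal-to-noise ratio by up to 3 dB.
    """
    return 0.5 * (np.asarray(spec_12) + np.asarray(spec_21))

# tiny usage example with synthetic data: identical signal, independent noise
rng = np.random.default_rng(0)
signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))
s12 = signal + 0.5 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
s21 = signal + 0.5 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
fused = fuse_cross_paths(s12, s21)
print("noise power single: %.3f, fused: %.3f"
      % (np.mean(np.abs(s12 - signal) ** 2), np.mean(np.abs(fused - signal) ** 2)))
```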

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a method for processing radar signals according to claim 1. The invention relates to a method for processing radar signals of a radar system, in particular a vehicle radar system, preferably an automotive radar system, comprising at least two radar units spaced apart from one another, the method comprising the following steps: - capturing at least one three-dimensional field of view of the radar system with radar signals of the at least two radar units; - generating a discrete overall coordinate system of the field of view, wherein the measurement data of the at least two radar units of the radar system, generated by capturing the field of view, are co-registered; - determining a preferably multidimensional vector velocity for at least one pixel of the discrete overall coordinate system and/or a preferably multidimensional vector velocity for the radar system; - reconstructing at least one three-dimensional partial region of the field of view using the determined vector velocity and/or the vector velocity for the radar system as well as the measurement data of at least one of the radar units. The invention further relates to a radar system according to claim 21 and to a vehicle according to claim 25. The invention makes it possible to obtain an improved angular resolution for the radar system without increasing the physical dimensions of the aperture of the receiving antenna array, preferably without an intrinsic motion or a motion of the target having to be known in advance or determined by an external sensor system.
EP21772783.3A 2020-09-07 2021-09-03 Procédé, système radar et véhicule pour le traitement de signaux radars Pending EP4211490A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020123293.4A DE102020123293A1 (de) 2020-09-07 2020-09-07 Verfahren, Radarsystem und Fahrzeug zur Signalverarbeitung von Radarsignalen
PCT/EP2021/074356 WO2022049241A1 (fr) 2020-09-07 2021-09-03 Procédé, système radar et véhicule pour le traitement de signaux radars

Publications (1)

Publication Number Publication Date
EP4211490A1 true EP4211490A1 (fr) 2023-07-19

Family

ID=77801727

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21772783.3A Pending EP4211490A1 (fr) 2020-09-07 2021-09-03 Procédé, système radar et véhicule pour le traitement de signaux radars

Country Status (5)

Country Link
US (1) US20230314588A1 (fr)
EP (1) EP4211490A1 (fr)
CN (1) CN116507940A (fr)
DE (1) DE102020123293A1 (fr)
WO (1) WO2022049241A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230314559A1 (en) * 2022-04-05 2023-10-05 Gm Cruise Holdings Llc Multi-sensor radar microdoppler holography
CN116184402A (zh) * 2022-10-19 2023-05-30 四川航天燎原科技有限公司 一种机载实时三维成像雷达和飞机
CN116879857B (zh) * 2023-09-07 2023-11-17 成都远望科技有限责任公司 一种远场目标和雷达中心波束对准方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6628227B1 (en) * 2002-07-23 2003-09-30 Ford Global Technologies, Llc Method and apparatus for determining a target vehicle position from a source vehicle using a radar
DE102007058242A1 (de) * 2007-12-04 2009-06-10 Robert Bosch Gmbh Verfahren zur Messung von Querbewegungen in einem Fahrerassistenzsystem
EP3367121B1 (fr) * 2017-02-23 2020-04-08 Veoneer Sweden AB Radar à ouverture synthétique inversée pour un système de radar de véhicule
DE102017110063A1 (de) 2017-03-02 2018-09-06 Friedrich-Alexander-Universität Erlangen-Nürnberg Verfahren und Vorrichtung zur Umfelderfassung
DE102018100632A1 (de) 2017-10-11 2019-04-11 Symeo Gmbh Radar-Verfahren und -System zur Bestimmung der Winkellage, des Ortes und/oder der, insbesondere vektoriellen, Geschwindigkeit eines Zieles
US20190187267A1 (en) * 2017-12-20 2019-06-20 Nxp B.V. True velocity vector estimation

Also Published As

Publication number Publication date
US20230314588A1 (en) 2023-10-05
WO2022049241A1 (fr) 2022-03-10
CN116507940A (zh) 2023-07-28
DE102020123293A1 (de) 2022-03-10


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230302

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)